Normalising Flows (NFs) are a class of likelihood-based generative models that have recently gained popularity. They are based on the idea of transforming a simple density into that of the data. We seek to better understand this class of models and how they compare to previously proposed techniques for generative modelling and unsupervised representation learning. For this purpose we reinterpret NFs in the framework of Variational Autoencoders (VAEs), and present a new form of VAE that generalises normalising flows. The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder (VDAE). Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders in a set of preliminary experiments on the MNIST handwritten digits. The experiments shed light on the modelling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.

Unsupervised learning offers the promise of leveraging unlabelled data to learn representations useful for downstream tasks when labelled data is scarce BID47, or even to generate novel data in domains where it is costly to obtain BID15. Generative models are particularly appealing for this, as they provide a statistical model of the data x, usually in the form of a joint probability density p(x). The model's density function, its samples and its representations can then be leveraged in applications ranging from semi-supervised learning and speech and (conditional) image synthesis (BID44; BID30; BID14; BID26) to gene expression analysis BID13 and molecule design BID10. In practice, data x is often high-dimensional and the optimisation associated with learning p(x) can be challenging due to an abundance of local minima BID39 and the difficulty of sampling from rich high-dimensional distributions BID34. Despite this, generative modelling has undergone a surge of advancements with recent developments in likelihood-based models (BID25; BID8; BID44) and Generative Adversarial Networks (GANs; BID11). The former class is particularly attractive, as it offers (approximate) likelihood evaluation, the ability to train models by likelihood maximisation, and interpretable latent representations.

Autoencoders have a rich history in the unsupervised learning literature owing to their intuitive and simple construction for learning complex latent representations of data. By fitting a parameterised mapping from the data, through a lower-dimensional or otherwise constrained layer, back to the same data, the model learns to summarise the data in a compact latent representation. Many variants of autoencoders have been proposed to encourage the model to better encode the underlying structure of the data by regularising or otherwise constraining the model (e.g., BID38; BID1). Denoising Autoencoders (DAEs) are a variant of the autoencoder in which noise is added to the input data and the model must output the noise-free data, i.e. x = f_θ(x + ε), where ε is sampled from a, possibly structured (BID48; BID49), noise distribution ε ∼ q. They are inspired by the idea that a good representation z should be robust to noise corrupting the data x, and that adding noise discourages the model from simply learning the identity mapping. Although DAEs have been cast as generative models, sampling and computing likelihoods under the model remain challenging.
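As a minimal illustration of the denoising objective just described, the sketch below runs one DAE training step in PyTorch; the names model, x, optimizer and the Gaussian noise level sigma are illustrative assumptions, not components of any particular model discussed here.

```python
import torch
import torch.nn.functional as F

def dae_step(model, x, optimizer, sigma=0.1):
    eps = sigma * torch.randn_like(x)   # corruption noise, eps ~ q = N(0, sigma^2 I)
    x_tilde = x + eps                   # corrupted input
    x_hat = model(x_tilde)              # reconstruction f_theta(x + eps)
    loss = F.mse_loss(x_hat, x)         # reconstruct the *clean* data, not x_tilde
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```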
Variational Autoencoders (VAEs) instead assume a probabilistic latent variable model, in which n-dimensional data x correspond to m-dimensional latent representations z following some tractable prior distribution, i.e. x ∼ p_φ(x|z) with z ∼ p(z) BID25. The task is then to learn the parameters φ, which requires maximising the log marginal likelihood

log p_φ(x) = log ∫ p_φ(x|z) p(z) dz.

In the majority of practical cases (e.g. p_φ(x|z) taken to be a flexible neural-network conditional distribution) the above integral is intractable. A variational lower bound on the marginal likelihood is constructed using a variational approximation q_θ(z|x) to the unknown posterior p(z|x):

log p_φ(x) ≥ E_{q_θ(z|x)}[log p_φ(x|z)] − KL(q_θ(z|x) ‖ p(z)).

The right-hand side of this inequality, denoted L(θ, φ), is known as the evidence lower bound (ELBO). It can be jointly optimised with stochastic optimisation w.r.t. the parameters θ and φ in place of the intractable marginal likelihood. The conditionals q_θ(z|x) and p_φ(x|z) can be viewed respectively as probabilistically encoding data x in the latent space, and reconstructing it from samples of this encoding. The first term of the ELBO encourages good reconstructions, whereas the second term encourages the model's latent variables to be distributed according to the prior p(z). Generating new data with this model is accomplished by reconstructing samples from the prior.

Normalising Flows (NFs) suppose that the sought distribution p(x) can be obtained by warping a simple base density p(z), e.g. a normal distribution BID36. They make use of the change of variables formula to obtain p(x) through a learned invertible transformation z = f_θ(x) as

log p(x) = log p(z) + log |det ∂f_θ(x)/∂x|.

Typically, f_θ: R^n → R^n is obtained by stacking several simpler mappings, i.e. f_θ = f_L ∘ f_{L−1} ∘ … ∘ f_1, and the log-determinant is obtained as the sum of the log-determinants of these mappings. This formulation allows for exact maximum likelihood learning, but requires f_θ to be invertible and to have a tractable inverse and Jacobian determinant. This restricts the flexibility of the known transformations that can be used in NFs (BID8; BID3) and leads to large and computationally intensive models in practice BID26. NFs can also be thought of as VAEs whose encoder and decoder are modelled as Dirac deltas, q_θ(z|x) = δ(z − f_θ(x)) and p_θ(x|z) = δ(x − f_θ^{-1}(z)), constructed using a restricted set of transformations. Furthermore, because NFs model a continuous density, discrete data must be dequantised by adding random noise to prevent trivial solutions with infinite point densities (BID42; BID40).

The contribution of this work is two-fold. First, we shed new light on the relationship between DAEs, VAEs and NFs, and discuss the pros and cons of these model classes. Second, we introduce several extensions of these models, which we collectively refer to as Variational Denoising Autoencoders (VDAEs). In their most general form, VDAEs generalise NFs and DAEs to discrete data and learned noise distributions. Moreover, when the amount of injected noise is small, VDAE attains a form that allows for non-invertible transformations (e.g. f_θ: R^n → R^m with m < n). We demonstrate these theoretical advantages through preliminary experimental results on the binary and continuous versions of the MNIST dataset.
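To make the change-of-variables objective above concrete, here is a hedged sketch of maximum-likelihood training for a toy elementwise affine flow; the parameters s (log-scales) and b (shifts) are invented for illustration and are not part of any model described in this paper.

```python
import torch

def flow_log_prob(x, s, b):
    z = (x - b) * torch.exp(-s)                  # z = f_theta(x), an elementwise affine map
    log_det = (-s).sum(dim=-1)                   # log |det dz/dx| for a diagonal Jacobian
    base = torch.distributions.Normal(0., 1.)
    log_pz = base.log_prob(z).sum(dim=-1)        # log p(z) under the base density
    return log_pz + log_det                      # log p(x) by change of variables

x = torch.randn(4, 2)
s = torch.zeros(2, requires_grad=True)           # log-scales
b = torch.zeros(2, requires_grad=True)           # shifts
loss = -flow_log_prob(x, s, b).mean()            # maximum likelihood = minimise NLL
loss.backward()
```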
We model data x, an n-dimensional vector that can have either continuous or discrete support. As is customary for VAEs, our model for x is hierarchical and assumes a set of latent variables z with a tractable prior distribution p(z), and a flexible neural-network conditional distribution p(x|z). On top of this standard VAE setup, we specify the dimension of z to equal the dimension of the data x. In order to form the variational lower bound used to train this model, we need an approximate inference model, or encoder, q_θ(z|x). Here, we will use an encoder that samples the latents z as

x̃ = x + ε, ε ∼ q(ε), z = f_θ(x̃),

where q is a tractable noise distribution and f_θ is a one-to-one transformation with a tractable Jacobian determinant. In order to use the encoder q_θ(z|x) implied by this procedure, we not only need to sample from it, but we must also evaluate its entropy for the KL term of the ELBO. To do this we make use of the fact that z is a one-to-one transformation of the noise, given the training data x. Using the standard change-of-variables formula, we thus get the following expression for the entropy of q_θ(z|x):

H[q_θ(z|x)] = H[q(x̃|x)] + E_{q(x̃|x)}[log |det ∂f_θ(x̃)/∂x̃|],

where q(x̃|x) is the distribution whose sampling process is described above. Our variational lower bound on the data log marginal likelihood then becomes

L(θ, φ) = E_{q_θ(z|x)}[log p(x|z) + log p(z)] + H[q_θ(z|x)],

where again x̃ = x + ε and z = f_θ(x̃). This is similar to a denoising autoencoder in that we try to reconstruct the original data x from the corrupted data x̃ through the conditional model p(x|z). The difference from classical denoising autoencoders is that our objective has additional terms that regularise the latent representations z to be distributed according to a prior distribution p(z). In addition, the proposed setup allows us to learn the noise distribution q, whereas it is treated as a fixed hyperparameter in the literature on denoising autoencoders.

This model is also a generalisation of normalising flows. Specifically, for a particular choice of noise distribution and decoder, the lower bound above reduces to the standard normalising flow log-likelihood; we provide a detailed derivation in Appendix A. The advantage of our generalised model over standard normalising flows is that it allows for a non-zero noise level σ². Interestingly, successful applications of normalising flows in the literature often already add a significant amount of noise in order to dequantise the data, and empirical results suggest that higher amounts of noise lead to models that produce better-looking samples (e.g. BID26 model only the top 5 bits of the data).

In addition, our model does not require tying the parameters of the encoder and decoder. Although we are still using a flow-based encoder q_θ(z|x), our decoder is not restricted to a specific functional form. The conditional distribution p(x|z) can, for example, have discrete support if the data x is discrete, making our model naturally applicable to data such as text or other highly structured data, without requiring an explicit dequantisation step. When adding a significant amount of noise, a decoupled decoder will generally be able to achieve a higher variational lower bound than a tied-parameter decoder.
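A minimal sketch of the VDAE bound for a single data point, under the assumption of Gaussian corruption noise: flow (returning z and the log-Jacobian-determinant), decoder (returning a torch distribution), prior and sigma are illustrative stand-ins rather than the authors' implementation.

```python
import math
import torch

def vdae_elbo(x, flow, decoder, prior, sigma=0.1):
    eps = sigma * torch.randn_like(x)                  # eps ~ q = N(0, sigma^2 I)
    x_tilde = x + eps                                  # corrupted data
    z, log_det = flow(x_tilde)                         # z = f_theta(x_tilde), log|det J|
    n = x.shape[-1]
    # H[q(z|x)] = H[q(x_tilde|x)] + E[log|det J|]; Gaussian noise entropy in closed form
    h_noise = 0.5 * n * math.log(2 * math.pi * math.e * sigma ** 2)
    entropy = h_noise + log_det
    log_px_z = decoder(z).log_prob(x).sum(-1)          # reconstruct the *clean* x
    log_pz = prior.log_prob(z).sum(-1)                 # prior term
    return log_px_z + log_pz + entropy                 # ELBO, to be maximised
```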
The VAE we proposed in Section 2 is more general than NFs, but it still requires an invertible one-to-one encoder with a tractable Jacobian determinant. This restricts our modelling choices, since all transformations used in the encoder must be chosen from a small set of transformations for which we know how to compute inverses and Jacobian determinants. Additionally, the representation given by our encoder will have the same dimension as our data x, which may not be optimal for all applications (e.g. model-based reinforcement learning (BID15; BID17) or compression BID2). To relax these restrictions further, we generalise our model to allow non-invertible encoders as well.

We proceed by taking our model from Section 2, with x̃ = x + ε and z = f_θ(x̃), and performing a Taylor expansion of the resulting latent variables z(x, ε) around ε = 0 (see Appendix B). This gives

z(x, ε) = f_θ(x) + Jε + O(‖ε‖²),

where J = ∂f_θ(x)/∂x is the Jacobian of f_θ. For small noise levels, as used in Section 2, the O(‖ε‖²) term becomes negligible. If the noise distribution is Gaussian, i.e. q = N(0, σ²I_n), this means that for small σ we get

q_θ(z|x) ≈ N(f_θ(x), σ²JJ^T).

Using this form of encoder q_θ(z|x), together with a general prior p(z) and conditional distribution p(x|z), we get a VAE that still generalises NFs but now also allows us to choose non-invertible, non-one-to-one transformations f_θ. We refer to this even broader class of VAE as L-VDAE, for Linearised VDAE.

Evaluating the entropy H[q_θ(z|x)] in this case requires computing the log-determinant of the covariance factor C = JJ^T for the data x:

H[q_θ(z|x)] = (m/2) log(2πeσ²) + (1/2) log det C,

where m is the dimensionality of z. When using transformations f_θ without a tractable Jacobian (e.g. general Residual Network (ResNet; BID20) blocks), we explicitly evaluate C and compute log det C = Σ_{i=1}^m log λ_i, where the eigenvalues λ_i are obtained from the eigenvalue decomposition C = QΛQ^T with Λ = diag(λ_1, …, λ_m). The decomposition is further re-used in the backward pass when evaluating the derivative of the log-determinant using Jacobi's formula:

∂/∂θ log det C = tr(C^{-1} ∂C/∂θ).

Evaluation of the Jacobian J can be done by performing reverse-mode automatic differentiation with respect to each element of z, and thus incurs a factor-of-m additional computational cost. The covariance factor C is obtained using a single matrix multiplication taking O(m²n) operations, with the eigenvalue decomposition taking another O(m³) operations. Taken together, evaluation of the entropy takes O(m²n) operations, which is comparable to the O(d³) cost of Glow's 1×1 invertible convolutions in later layers (i.e. after repeated use of the multi-scale architecture from BID9 that trades spatial dimensions for channels), where d refers to the number of channels used in the convolution. This computational cost is permissive for small latent-space dimensionalities m. However, scaling up L-VDAE to larger latent spaces would require stochastic approximations of the log-determinant BID18. These approximations can be implemented efficiently through Jacobian-vector and vector-Jacobian products, without evaluating C or J explicitly, and can be optimised directly by backpropagating through the approximation. With this approach the computational complexity is linear in n, subject to some regularity conditions.

Sampling from the Gaussian variational posterior q_θ is necessary for training and inference in L-VDAE. It can be accomplished using the standard reparameterisation trick BID25, where random normal noise ω ∼ N(0, I_n) is transformed into a posterior sample as z = f_θ(x) + σJω. We implement this as a Jacobian-vector product, which enables efficient sampling in cases where the Jacobian log-determinant of f_θ is cheaper to evaluate than the Jacobian itself (e.g. when f_θ is a flow).
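A hedged sketch of the linearised (L-VDAE) posterior for a single input vector: the Jacobian of a non-invertible encoder is obtained with reverse-mode autodiff, the entropy follows from an eigendecomposition of C = JJ^T, and a posterior sample is drawn with the reparameterisation trick. The names encoder, x and sigma are assumptions made for illustration.

```python
import math
import torch
from torch.autograd.functional import jacobian

def lvdae_posterior_stats(encoder, x, sigma=0.1):
    """x is a single input vector of shape (n,); encoder maps R^n -> R^m."""
    mu = encoder(x)                                    # f_theta(x), shape (m,)
    J = jacobian(encoder, x)                           # (m, n); m reverse-mode passes
    C = J @ J.T                                        # covariance factor C = J J^T
    lam = torch.linalg.eigvalsh(C)                     # eigenvalues of the symmetric C
    m = mu.shape[0]
    # entropy of N(mu, sigma^2 C): (m/2) log(2 pi e sigma^2) + (1/2) log det C
    entropy = 0.5 * m * math.log(2 * math.pi * math.e * sigma ** 2) \
              + 0.5 * torch.log(lam).sum()
    # reparameterised sample z = f_theta(x) + sigma * J omega, omega ~ N(0, I_n)
    omega = torch.randn(x.shape[0])
    z = mu + sigma * (J @ omega)
    # note: training through the entropy would require jacobian(..., create_graph=True)
    return z, entropy
```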
VDAE blends ideas from the VAE and NF literature and is closely related to both model families. It is most similar to methods that combine variational inference and NFs by using the latter as part of the approximate variational posterior (BID36; BID29; BID43). These methods use a strategy in which samples from the (Gaussian) posterior are further transformed with a NF, whereas in VDAE the posterior distribution is implicitly defined through a sampling procedure inspired by DAEs, in which posterior samples are obtained by transforming data with added noise using a NF. VDAE is a natural formulation of DAEs as probabilistic models. It is conceptually similar to Denoising VAEs BID23, which propose an alternative probabilistic formulation of DAEs as VAEs. The method of Im et al., however, does not generalise NFs and, in contrast to VDAE, it requires explicitly choosing the type and amount of corruption noise. The idea of challenging the default choice of uniform noise for dequantisation in NFs was also explored in Flow++ BID21, where the authors learned a flexible conditional noise model q(ε|x), itself a NF. Our sampling procedure is similar to dequantisation in Flow++, as it can be viewed as the result of applying a NF to a dequantisation given by an implicitly conditioned noise model. The main differences, however, are that in VDAE the decoder reconstructs the original (quantised) data, which is also what makes our model applicable to highly structured data; and that, in contrast to Flow++, VDAE can inject substantially more noise than a single dequantisation bin.

In relation to VAEs, the linearised form of VDAE can be viewed as an extension of the vanilla VAE BID25 that replaces the diagonal Gaussian posterior with a Jacobian-based full-covariance posterior. It is thus similar to methods that extend the VAE with more flexible prior (e.g. autoregressive BID7 or mixture) or variational posterior (e.g. full-covariance Gaussian or mixture; BID35 BID32) distributions. Notably, unlike some of these methods, L-VDAE does not increase the number of parameters of the inference or generative networks.

As a method that increases the flexibility of transformations in NFs, L-VDAE with non-invertible encoders can be compared to Invertible Residual Networks (i-ResNets; BID3) and FFJORD BID12. These methods also depart from the requirement of restricting the form of the Jacobian of the resulting transformation. Both i-ResNets and FFJORD additionally drop the requirement of having an analytical inverse, which is similar to how VDAE seeks to learn an approximate inverse using its decoder network. However, unlike VDAE, these methods guarantee invertibility and provide ways of computing the exact inverse. Notably, the methods differ considerably in how they achieve the above generalisations. In i-ResNets, Behrmann et al. make use of the ResNet architecture BID20 and identify conditions on the eigenvalues of the residual blocks under which they parameterise invertible mappings. They then make use of spectral normalisation BID33 to guarantee that the condition is satisfied throughout training, and employ fixed-point iteration to invert the residual blocks for generation. i-ResNets further lift the restriction on the form of the Jacobian in a computationally tractable way by using a Taylor series expansion in conjunction with stochastic trace estimation BID22.

FFJORD BID12 is inspired by the re-interpretation of ResNets and NFs as discrete approximations of solutions to the initial value problem of an underlying ODE that continuously transforms the data x from data space to the latent z-space.
The authors of BID12 parameterise this ODE as a neural network f_θ(z(t), t) to obtain Continuous-time Normalising Flows (CNFs), in which the change in log-density at time t is given by the instantaneous change of variables formula

∂ log p(z(t)) / ∂t = −tr(∂f_θ / ∂z(t)).

Its right-hand side is given by the trace of the Jacobian of the transformation f_θ, instead of the log-determinant as in NFs. Combined with the use of stochastic trace estimation BID22, this difference alleviates the need to restrict the transformations f_θ to those with a tractable Jacobian log-determinant. However, the use of ODEs also necessitates employing an ODE solver for every evaluation of, and backpropagation through, log p_θ(z(t)). The number of function evaluations required for this increases over the course of training and may become prohibitively large BID12.

Finally, VDAE is loosely related to autoregressive generative models, as both fall into the class of likelihood-based generative models. Autoregressive models factorise the likelihood of high-dimensional data as a product of simple per-dimension conditional distributions, i.e. p(x) = ∏_i p(x_i | x_0, …, x_{i−1}) (van den Oord et al.; BID44). The factorised structure of these models necessitates sequential sampling and a good choice of the ordering of the dimensions of x. Overcoming these challenges in practice often requires highly engineered solutions, for example as in BID46 or BID31. Furthermore, data representations formed by the hidden layers of autoregressive models appear to be more challenging to manipulate than those of VAEs or NFs BID26.

We performed empirical studies of the performance of VDAE on the image generation task on the MNIST dataset BID30, comparing it to a VAE implementation with a fully factorised Gaussian posterior and to the NICE BID8 normalising flow as baselines. For the VDAE encoder we used additive couplings to construct the f_θ of the implicit variational posterior and, unless otherwise specified, fully connected ResNet blocks followed by a sigmoid transformation to obtain the decoder parameters µ_φ and p_φ. A Gaussian distribution N(µ_φ(z), λI_n) with a learned parameter λ was used for the continuous MNIST decoder, and Bernoulli(p_φ(z)) for binary MNIST. Similarly, unless otherwise specified, ResNet blocks with linear projection layers to change dimensionality were used for the L-VDAE encoder and decoder. Details of the chosen architectures can be found in Appendix D. To model discrete 8-bit pixel values with continuous density models, we followed the procedure of BID42 to dequantise the data, adding noise u ∼ U[0, 1) to the pixel values prior to normalising them to [0, 1]. Note that for VDAE this was done prior, and in addition, to adding noise from the posterior sampling procedure to the inputs.

Decoupled encoder and decoder. We start by confirming, for a range of noise levels and architectures, that decoupling the encoder and decoder networks in VDAE allows for achieving higher ELBOs. FIG0 compares the ELBO attained by VDAE with Gaussian noise q = N(0, σ²I) for a range of fixed σ. The results show that any decoupling of the weights improves over the coupled network, in which the NICE flow is used in the encoder and its inverse in the decoder. Specifically, we observe that for architectures with a sigmoid activation in the last layer of the decoder, the ELBO rapidly improves with decreasing noise levels. Based on these results, in the following experiments we only consider the more general ResNet architecture in the VDAE decoder.
We report average test set performance over 10 training runs; when sufficiently large, standard deviations are also given. Qualitative samples are drawn from the models with the best test ELBO among the training runs. NLL was estimated via 5000 importance samples, as in prior work.

Quantitative results. We now consider the cases in which i) the noise variance σ² (in the case of VDAE) or ii) the covariance scale σ of the linearised posterior (in the case of L-VDAE) is optimised together with the model. The results of these experiments are shown in TAB0. For ease of presentation, we also include the evaluation of existing flow models (reproduced from BID3). We first note that for the cases in which the latent dimensionality is smaller than that of the input space (i.e. m < n), L-VDAE consistently outperforms the VAE baseline in terms of the achieved ELBO, albeit by a small margin. This is consistent with L-VDAE having a more powerful variational posterior. Moreover, for L-VDAE, increasing the dimensionality of the latent space consistently improves the variational lower bound. Surprisingly, L-VDAE with m = n and VDAE break this trend and do not improve on the ELBOs obtained for m = 128. We also note that neither of our proposed extensions manages to achieve likelihoods comparable to NFs, including the NICE baseline. Both shortcomings could be explained by the difference in architectures between the methods. In contrast to L-VDAE with m = n, which employs a NICE flow in the encoder, L-VDAE with m < n makes use of the more expressive ResNet blocks. Similarly, the flexibility of the NICE flow used in VDAE for the implicit posterior may be insufficient for a denoising VAE. We also observe that when using a NICE flow in the decoder, VDAE outperforms L-VDAE in terms of likelihood, signalling that the VDAE approach can further improve on the linearised models if combined with a more powerful flow.

Qualitative results. We found that, without additional regularisation, such as fixing the decoder variance λ² or the noise variance σ² to values larger than what would have been learned by the model, or assigning a higher weight to the KL term in the optimisation objective, our models would not produce high-quality samples for the continuous MNIST dataset. We thus omit continuous MNIST model samples from the main text, but explore the effect of fixing the noise variance on sample quality in Appendix E. To explore the applicability of VDAE to structured data, we applied it to the binarised version of the MNIST dataset. As is customary for dynamic MNIST, digits were binarised by sampling from a Bernoulli distribution given by the pixel intensities. The results in TAB1 mirror those we observed on continuous MNIST: L-VDAE consistently achieves a higher ELBO than the VAE baseline, which tends to improve as the latent dimensionality grows, while L-VDAE and VDAE, which make use of NICE in the decoder, attain significantly worse likelihoods despite the increased dimensionality. Finally, VDAE also improves on L-VDAE with a NICE encoder. However, as shown in FIG1, and in contrast to the continuous MNIST results, all our models produce plausible handwritten digit samples.

We introduced Variational Denoising Autoencoders (VDAEs), a family of models that bridges the gap between VAEs, NFs and DAEs. Our model extends NFs to discrete data and to non-invertible encoders that use lower-dimensional latent representations.
Preliminary experiments on the MNIST handwritten digits demonstrate that our model can be successfully applied to data with discrete support, attaining competitive likelihoods and generating plausible digit samples. We also identified a failure mode of our models, in which their performance does not scale well to cases where the latent and input dimensionalities are the same (i.e. when a flow-based encoder is used).

Future work should address the limitations of the method identified in our experiments. In particular, replacing additive coupling blocks with the more powerful invertible convolutions, affine coupling blocks and invertible residual blocks (BID9; BID26; BID3) could significantly improve the variational posterior in high dimensions. It could also be interesting to explicitly condition the transformation f_θ used for defining the posterior sampling procedure on the data x, for example by defining f_θ(x, ε) ≡ f_{x,θ}(ε) using a hyper-network BID16.

For convenience we start by repeating the variational lower bound for our VDAE model, as presented in Section 2:

L(θ, φ) = E_{q_θ(z|x)}[log p(x|z) + log p(z)] + H[q_θ(z|x)].

For residual blocks ending in a linear layer with weights W_L ∈ R^{n×n}, we zero out all elements of W_L at initialisation. The same scheme was employed for initialising additive coupling blocks, which can be viewed as residual blocks of a restricted form. Projection layers reduce the dimensionality of their inputs using a linear map y = xW with x ∈ R^{1×n}, y ∈ R^{1×m} and W ∈ R^{n×m}. This generally leads to a loss of information and makes model training harder. To mitigate this effect we initialise the rows of W using a set of m random orthogonal vectors. The decoder projection layers, mapping data to higher dimensions, are then initialised to W^T.

All models were trained for 1000 epochs using the ADAM optimiser BID24 with a batch size of 1000 samples. To improve the stability of training, the learning rate was warmed up from 10^−5 to the chosen learning rate (see below) over the first 10 epochs. Further, the KL term was warmed up by linearly annealing its weight β from 0 to 1 over the first 100 epochs BID5. For each experiment, the learning rate schedule S ∈ {linear, none}, the learning rate α ∈ {10^−5, …}, and the ADAM optimiser parameters β₂ ∈ {0.9, 0.99, 0.999, 0.9999} and ε ∈ {10^−4, 10^−5, …} were determined by Bayesian optimisation of the ELBO on the validation set.

NICE. When implementing the model (standalone, or as part of VDAE), we closely followed the architecture and hyper-parameters described in BID8. Namely, the network consisted of 4 additive coupling blocks, each with 5 fully-connected hidden layers of size 1024 with ReLU activations, followed by a linear layer (see Appendix C). Dimension partitioning was alternated between even and odd dimensions after every block. When used as a standalone model, L2 regularisation with weight λ = 0.01 was used to improve sample quality.

L-VDAE and vanilla VAE. When not used in conjunction with a NICE model in the encoder, the L-VDAE and VAE models employed a fully-connected ResNet architecture with B consecutive residual blocks followed by a linear projection layer to higher or lower dimensions. In the encoder, the last projection layer parameterised the means of the Gaussian variational posterior (and, in the case of the VAE, a parallel projection layer parameterised the log-variances). A sequence of 4 residual-projection "blocks" was used, with the last block (i = 4) projecting to m dimensions (the dimensionality of the latents) and the blocks before it projecting, respectively, to min(2^i · m, 28 × 28) dimensions.
Each residual block consisted of 2 hidden layers with ReLU activations followed by a linear layer (see Appendix C). The residual block hidden size H ∈ {32, 64, 128, 256, 1024} and the block multiplicity B ∈ {1, 2, 3} were chosen through Bayesian optimisation as described above. Unless otherwise specified, when used together with a NICE model, the VDAE and L-VDAE models employed a ResNet architecture in the decoder. In this case, the ResNet architecture was chosen to closely resemble that of the NICE model. Specifically, the hyper-parameter values B = 1 and H = 1024 were used, and no projection layers were employed.

Priors. We employed a logistic prior with s = 1 and µ = 0 (as in BID8) for models that made use of the NICE flow (even if it was only used in the encoder network), and a factorised normal prior otherwise. Sample quality deteriorates at the extremes of the noise level spectrum: at high noise levels the model appears to be unable to learn the distribution, whereas at low noise levels the model appears to focus too much on the reconstruction error instead of organising the latent space. Just as in DAEs, the noise variance σ² in VDAEs can be used as a regulariser and can be tuned for sample quality.
We explore the relationship between Normalising Flows and Variational- and Denoising Autoencoders, and propose a novel model that generalises them.
We investigate methods to efficiently learn diverse strategies in reinforcement learning for a generative structured prediction problem: query reformulation. In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from the sub-agents to produce a final answer. Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set. Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such as an ensemble of agents trained on the full data. We evaluate on the tasks of document retrieval and question answering. The improved performance seems to be due to the increased diversity of reformulation strategies. This suggests that multi-agent, hierarchical approaches might play an important role in structured prediction tasks of this kind. However, we also find that it is not obvious how to characterize diversity in this context, and a first attempt based on clustering did not produce good results. Furthermore, reinforcement learning for the reformulation task is hard in high-performance regimes. At best, it only marginally improves over the state of the art, which highlights the complexity of training models in this framework for end-to-end language understanding problems.

Reinforcement learning (RL) has proven effective in several language tasks, such as machine translation BID1, question answering BID12, and text summarization. In RL, efficient exploration is key to achieving good performance. The ability to explore a diverse set of strategies in parallel often speeds up training and leads to a better policy.

In this work, we propose a simple method to achieve efficient parallelized exploration of diverse policies, inspired by hierarchical reinforcement learning BID7. We structure the agent into multiple sub-agents, which are trained on disjoint subsets of the training data. Sub-agents are coordinated by a meta-agent, called the aggregator, that groups and scores answers from the sub-agents for each given input. Unlike the sub-agents, the aggregator is a generalist, since it learns a policy for the entire training set. We argue that it is easier to train multiple sub-agents than a single generalist one, since each sub-agent only needs to learn a policy that performs well for a subset of examples. Moreover, specializing agents on different partitions of the data encourages them to learn distinct policies, thus giving the aggregator the possibility to see answers from a population of diverse agents. Learning a single policy that results in an equally diverse strategy is more challenging. Since each sub-agent is trained on a fraction of the data, and there is no communication between them, training can be done faster than training a single agent on the full data. Additionally, it is easier to parallelize than applying existing distributed algorithms such as asynchronous SGD or A3C, as the sub-agents do not need to exchange weights or gradients. After training the sub-agents, only their actions need to be sent to the aggregator. We build upon Buck et al. (2018b) and related prior work on query reformulation; hence, we evaluate our method on the same tasks: query reformulation for document retrieval and question answering. We show that it outperforms a strong baseline of an ensemble of agents trained on the full dataset. We also found that performance and reformulation diversity are correlated (Sec. 5.5).
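To make the data-splitting step concrete, the sketch below shows one way to randomly partition a training set into K disjoint, roughly equal-sized index sets, one per sub-agent; num_examples, k and seed are illustrative parameters, not the paper's actual pipeline.

```python
import random

def random_partitions(num_examples, k, seed=0):
    """Split example indices into k disjoint, roughly equal-sized partitions."""
    indices = list(range(num_examples))
    random.Random(seed).shuffle(indices)
    return [indices[i::k] for i in range(k)]

# each sub-agent i is then trained only on partitions[i]; the aggregator sees all data
partitions = random_partitions(num_examples=1000, k=10)
```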
Our main contributions are the following:
• A simple method to achieve more diverse strategies and better generalization performance than a model-average ensemble.
• Training can be easily parallelized in the proposed method.
• An interesting finding that contradicts our, perhaps naive, intuition: specializing agents on semantically similar data does not work as well as random partitioning. An explanation is given in Appendix F.
• New state-of-the-art results on several datasets using BERT. However, results improve only marginally when using reinforcement learning, and on the question answering task we see no improvements.

The proposed approach is inspired by the mixture of experts, which was introduced more than two decades ago and has been a topic of intense study since then. The idea consists of training a set of agents, each specializing in some task or data. One or more gating mechanisms then select subsets of the agents that will handle a new input. Recently, BID6 revisited the idea and showed strong performance in the supervised learning tasks of language modeling and machine translation. Their method requires that the output vectors of the experts are exchanged between machines. Since these vectors can be large, the network bandwidth becomes a bottleneck. They used a variety of techniques to mitigate this problem. BID0 later proposed a method to further reduce the communication overhead by only exchanging the probability distributions of the different agents. Our method, instead, requires only scalars (rewards) and short strings (original query, reformulations, and answers) to be exchanged. Therefore, the communication overhead is small.

Previous works used specialized agents to improve exploration in RL (BID7). For instance, BID8 and others use a population of agents to achieve a high diversity of strategies, which leads to better generalization performance and faster convergence. Other works use experts to learn subtasks and later merge them into a single agent using distillation. The experiments are often carried out in simulated environments, such as robot control and videogames. In these environments, rewards are frequently available, the states have low diversity (e.g., the same image), and responses are usually fast (60 frames per second). We, instead, evaluate our approach on tasks whose inputs (queries) and states (documents and answers) are diverse because they are in natural language, and the environment responses are slow (0.5-5 seconds per query). Somewhat similarly motivated is the work of BID5, who train many heterogeneous response models and further train an RL agent to pick one response per utterance.

We describe our setup using a generic end-to-end search task. The problem consists in learning to reformulate a query so that the underlying retrieval system can return a better result. We frame the problem as an RL task, in which the query reformulation system is an RL agent that interacts with an environment that provides answers and rewards. The goal of the agent is to generate reformulations such that the expected returned reward (i.e., correct answers) is maximized.

Figure 1: a) A vanilla search system. The query q_0 is given to the system, which outputs a result a_0. b) The search system with a reformulator. The reformulator queries the system with q_0 and its reformulations {q_1, ..., q_N} and receives back the results {a_0, ..., a_N}. A selector then decides the best a_i for q_0. c) The proposed system. The original query is reformulated multiple times by different reformulators.
Reformulations are used to obtain results from the search system, which are then sent to the aggregator; the aggregator picks the best result for the original query based on a learned weighted majority voting scheme. Reformulators are independently trained on disjoint partitions of the dataset, thus increasing the variability of reformulations.

The environment is treated as a black box, i.e., the agent does not have direct access to any of its internal mechanisms. Figure 1-(b) illustrates this framework, and Figure 1-(c) illustrates the agent. An input query q_0 is given to the N sub-agents. A sub-agent is any system that accepts a query as input and returns a corresponding reformulation. Thus, sub-agents can be heterogeneous. Here we train each sub-agent on a partition of the training set. The i-th agent queries the underlying search system with the reformulation q_i and receives a result a_i. The set {(q_i, a_i) | 0 ≤ i ≤ N} is given to the aggregator, which then decides which result will be final.

The first step in training the agent is to partition the training set. We randomly split it into equal-sized subsets. For an analysis of how other partitioning methods affect performance, see Appendix F. In our implementation, a sub-agent is a sequence-to-sequence model BID9 trained on a partition of the dataset. It receives as input the original query q_0 and outputs a list of reformulated queries (q_i) using beam search. Each reformulation q_i is given to the same environment, which returns a list of results. We then use REINFORCE BID14 to train the sub-agent. At training time, instead of using beam search, we sample reformulations. We also add the identity agent (i.e., the reformulation is the original query) to the pool of sub-agents.

The accumulated rank score is computed as s^A_j = Σ_i 1/rank_{i,j}, where rank_{i,j} is the rank of the j-th result when retrieved using q_i. The relevance score s^R_j is the predicted probability that the result a_j is relevant to the query q_0. It is computed by a small feed-forward network whose input is built from the encodings f_CNN(q_0) and f_BOW(a_j) using concatenation, denoted [x; y], and element-wise multiplication, denoted ⊙: a ReLU hidden layer with weight matrix W_1 and bias b_1 ∈ R^D is followed by a sigmoid output layer with weight matrix W_2 ∈ R^{D×1} and bias b_2 ∈ R^1, where σ denotes the sigmoid function and ReLU the Rectified Linear Unit. The function f_CNN is implemented as a CNN encoder followed by average pooling over the sequence. The function f_BOW is the average of the word embeddings of the result. At test time, the top-K answers with respect to s_j = s^A_j · s^R_j are returned. We train the aggregator with stochastic gradient descent (SGD) to minimize a cross-entropy loss over the relevance predictions, in which the positive labels are given by J*, the set of indexes of the ground-truth results. The architecture details and hyperparameters can be found in Appendix B.
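The following is a hedged sketch of the aggregator's voting scheme described above: each result accumulates 1/rank over the reformulations that retrieved it, and the final score multiplies in a learned relevance prediction; ranked_lists and relevance are assumed inputs introduced for illustration.

```python
from collections import defaultdict

def accumulated_rank_scores(ranked_lists):
    """ranked_lists[i] is the ordered list of result ids returned for reformulation q_i."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / rank               # s^A_j = sum_i 1 / rank_{i,j}
    return scores

def final_scores(ranked_lists, relevance):
    """relevance[doc_id] is the learned relevance prediction s^R_j in [0, 1]."""
    s_a = accumulated_rank_scores(ranked_lists)
    return {doc: s_a[doc] * relevance.get(doc, 0.0)    # s_j = s^A_j * s^R_j
            for doc in s_a}
```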
This task involves rewriting a query to improve the relevance of a search engine's results. The environment receives a query and returns a list of documents (the observation) and a reward computed using a list of ground-truth documents. We use Lucene in its default configuration as our search engine, with BM25 ranking. The input is a query and the output is a ranked list of documents.

TREC-CAR: In this dataset the input query is the concatenation of a Wikipedia article title with the title of one of its sections. The ground-truth documents are the paragraphs within that section. The corpus consists of all of the English Wikipedia paragraphs, except the abstracts. The released dataset has five predefined folds; we use the first four as a training set (approx. 3M queries) and the remaining one as a validation set (approx. 700k queries). The test set is the same used to evaluate the submissions to TREC-CAR 2017 (approx. 1,800 queries).

Jeopardy: This dataset was introduced in prior work. The input is a Jeopardy! question. The ground-truth document is a Wikipedia article whose title is the answer to the question. The corpus consists of all English Wikipedia articles.

A third dataset consists of academic papers crawled from the Microsoft Academic API. A query is the title of a paper, and the ground-truth answer consists of the papers cited within. Each document in the corpus consists of its title and abstract.

The goal of query reformulation is to increase the proportion of relevant documents returned. We use recall as the reward: R@K = |D_K ∩ D*| / |D*|, where D_K are the top-K retrieved documents and D* are the relevant documents. We also experimented with other metrics, such as NDCG, MAP, MRR, and R-Precision, but these resulted in similar or slightly worse performance than Recall@40. Although the agents optimize for recall, we report the main results in MAP, as this is a more commonly used metric in information retrieval. For results in other metrics, see Appendix A. In preliminary experiments, we found CNNs to work better than LSTMs.

Table 1: MAP scores on the test sets of the document retrieval datasets. The weights of the agents are initialized from a single model pretrained for ten days on the full training set.

Lucene: We give the original query to Lucene with BM25 as the ranking function and use the retrieved documents as results.

PRF: This is the pseudo relevance feedback method. We expand the original query with terms from the documents retrieved by the Lucene search engine using the original query. The top-N TF-IDF terms from each of the top-K retrieved documents are added to the original query, where N and K are selected by a grid search on the validation data.

Relevance Model (RM3): A re-implementation of the RM3 query expansion model. The probability of adding a term t to the original query is given by P(t|q_0) ∝ Σ_{d∈D_0} P(d) P(t|d) P(q_0|d), where P(d) is the probability of retrieving the document d, assumed uniform over the set, and P(t|d) and P(q_0|d) are the probabilities assigned to t and q_0 by the language model obtained from d. This expansion distribution is interpolated with the original query's term distribution tf(t, q_0)/|q|, where tf(t, ·) denotes term frequency; we set the interpolation parameter λ to 0.65, which was the best value found by a grid search on the development set. We use a Dirichlet-smoothed language model to compute the language model of a document d ∈ D_0: P(t|d) = (tf(t, d) + u P(t|C)) / (|d| + u), where |d| is the document length, u is a scalar constant (u = 1500 in our experiments), and P(t|C) is the probability of t occurring in the entire corpus C. We use the N terms with the highest P(t|q_0) in the expanded query, where N = 100 was the best value found by a grid search on the development set.

RL-RNN: This is the sequence-to-sequence model trained with reinforcement learning from prior work. The reformulated query is formed by appending new terms to the original query. The terms are selected from the documents retrieved using the original query. The agent is trained from scratch.

RL-N-Ensemble: We train N RL-RNN agents with different initial weights on the full training set. At test time, we average the probability distributions of all N agents at each time step and select the token with the highest probability, as done by BID9.
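As a small illustration of the reward defined above, the sketch below computes Recall@K between the retrieved and ground-truth document sets; the document identifiers in the usage example are invented.

```python
def recall_at_k(retrieved, relevant, k=40):
    """Recall@K = |D_K ∩ D*| / |D*| for one query."""
    top_k = set(retrieved[:k])                         # D_K: top-K retrieved documents
    gold = set(relevant)                               # D*: ground-truth documents
    return len(top_k & gold) / max(len(gold), 1)

# one of the two relevant documents is retrieved -> reward 0.5
reward = recall_at_k(["d3", "d1", "d7"], ["d1", "d9"], k=40)
```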
We evaluate the following variants of the proposed method:

RL-N-Full: We train N RL-RNN agents with different initial weights on the full training set. The answers are obtained using the best (greedy) reformulations of all the agents and are given to the aggregator.

RL-N-Bagging: This is the same as RL-N-Full, but we construct the training set of each RL-RNN agent by sampling with replacement D times from the full training set, which has a size of D. This is known as the bootstrap sample and leads to approximately 63% unique samples, the rest being duplicates.

RL-N-Sub: This is the proposed agent. It is similar to RL-N-Full, but the multiple sub-agents are trained on random partitions of the dataset (see Figure 1-(c)).

BERT Aggregator: We experimented with replacing our aggregator with BERT, which holds state-of-the-art results in a wide range of textual tasks. Using the same notation as in their paper, we feed the query as sentence A and the document text as sentence B. We truncate the document text such that the concatenation of query, document, and separator tokens has a maximum length of 512 tokens. We use a pretrained BERT LARGE model as a binary classification model, that is, we feed the [CLS] vector to a single-layer neural network and obtain the probability of the document being correct. We obtain the final list of documents by ranking them with respect to these probabilities. We train with the same objective used to train our aggregator (the cross-entropy loss above). To compare how well our proposed reformulation agents perform against the best non-neural reformulation method, we implemented two variants of the system: in one, the initial list of candidate documents a_j is given by RM3 (RM3 + BERT Aggregator); in the other, by RL-10-Sub (RL-10-Sub + BERT Aggregator).

A summary of the document retrieval results is shown in Table 1. We estimate the number of floating-point operations used to train a model by multiplying the training time, the number of GPUs used, and 2.7 TFLOPS as an estimate of the single-precision floating-point throughput of a K80 GPU. Since the sub-agents are frozen during the training of the aggregator, we pre-compute all (q_0, q_i, a_i, r_i) tuples from the training set, thus avoiding sub-agent or environment calls. This reduces the aggregator's training time to less than 6 hours (0.06 × 10^18 FLOPs). Since this cost is negligible when compared to the sub-agents', we do not include it in the table. The proposed methods (RL-10-{Sub, Bagging, Full}) have a 20-60% relative performance improvement over the standard ensemble (RL-10-Ensemble) while training ten times faster. More interestingly, RL-10-Sub has better performance than the single-agent version (RL-RNN), uses the same computational budget, and trains in a fraction of the time. Lastly, we found that RL-10-Sub (Pretrained) has the best balance between performance and training cost across all datasets. Compared to the top-performing system in the TREC-CAR 2017 track, RL-10-Full with an ensemble of 10 aggregators yields a relative performance improvement of approximately 20%. By replacing our aggregator with BERT, we improve performance by 50-100% on all three datasets (RL-10-Sub + BERT Aggregator). This is a remarkable improvement given that we used BERT without any modification from its original implementation. Without our reformulation agents, the performance drops by 3-10% (RM3 + BERT Aggregator).
For an analysis of the aggregator's contribution to the overall performance, see Appendix C. We compare the performance of the full system (reformulators + aggregator) for different numbers of agents in FIG2. The performance is stable across all datasets once more than ten sub-agents are used, indicating robustness. For more related experiments, see Appendix D.

Table 2: Main results on the question-answering task (SearchQA dataset).

On the question-answering task, we compare against the active question answering agent proposed by Buck et al. (2018b). The environment receives a question and returns an answer and a reward computed against a ground-truth answer. We use either BiDAF or BERT as the question-answering system. We use as a reward the token-level F1 score on the answer (see Section 5.3). We follow Buck et al. (2018b) to train BiDAF and BERT. We emphasize that their parameters are frozen when we train and evaluate the reformulation system. Training and evaluation are performed on the SearchQA dataset. The data contains Jeopardy! clues as questions. Each clue has a correct answer and a list of 50 snippets from Google's top search results. The training, validation and test sets contain 99,820, 13,393 and 27,248 examples, respectively.

BiDAF/BERT: The original question is given to the question-answering system without any modification (see Figure 1-(a)).

AQA: The best model from Buck et al. (2018b). It consists of a reformulator and a selector. The reformulator is a subword-based seq2seq model that produces twenty reformulations of a question with beam search. Answers for the original question and its reformulations are obtained from BiDAF. These are given to the selector, which then chooses one of the answers as final (see Figure 1-(b)). The reformulator is pretrained on zero-shot translation.

AQA-N-{Full, Sub}: Similar to the RL-N-{Full, Sub} models, we use AQA reformulators as the sub-agents, followed by an aggregator, to create the AQA-N-Full and AQA-N-Sub models, whose sub-agents are trained on the full dataset and on random partitions of it, respectively.

Table 3: Diversity scores of reformulations from different methods. For pBLEU and pCos, lower values mean higher diversity. Higher diversity scores are associated with higher F1/oracle scores.

We use the macro-averaged F1 score as the main metric. It measures the average bag-of-tokens overlap between the prediction and the ground-truth answer. We take the F1 over the ground-truth answer for a given question and then average over all of the questions. Oracle: Additionally, we present oracle performances, which are results from a perfect aggregator that predicts s^R_j = 1 for relevant answers and s^R_j = 0 otherwise.

Results are presented in Table 2. When using BiDAF as the Q&A system, our methods (AQA-10-{Full, Sub}) have both better F1 and oracle performance than single-agent AQA methods, while training in one-tenth of the time. Even when the ensemble method is given ten times more training time (AQA-10-Full, extra budget), our method performs better. We achieve state-of-the-art results on SearchQA by a wide margin with BERT. Our reformulation strategy (BERT + AQA-10-Sub), however, could not improve upon this underlying Q&A system. We conjecture that, although there is room for improvement, as the oracle performance is 5-7% higher than BERT alone, the reformulations and answers do not contain enough information for the aggregator to discriminate good from bad answers.
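A hedged sketch of the token-level F1 described above (bag-of-tokens overlap between prediction and ground truth); whitespace tokenisation and lower-casing are simplifying assumptions, not necessarily the exact preprocessing used for SearchQA.

```python
from collections import Counter

def token_f1(prediction, ground_truth):
    pred = prediction.lower().split()
    gold = ground_truth.lower().split()
    if not pred or not gold:
        return 0.0
    overlap = sum((Counter(pred) & Counter(gold)).values())   # bag-of-tokens overlap
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# macro-average over questions (here a single invented example)
pairs = [("4 july 1776", "july 4 1776")]
macro_f1 = sum(token_f1(p, g) for p, g in pairs) / len(pairs)
```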
One possible way to fix this is to give the context of the answer to the aggregator, although in our experiments we could not find any successful way to use this extra information. We observe a drop in F1 of approximately 1% when the original query is removed from the pool of reformulations, which shows that the gains come mostly from the multiple reformulations and not from the aggregator falling back on selecting the original query.

In accordance with the 'mixture of experts' idea, we expected specialisation to be advantageous for the agents and tried several meaningful clustering approaches (cf. Appendix F). However, we surprisingly found that random clusterings were superior, with query diversity being an important reason. We evaluate query diversity vs. performance using four metrics (see Appendix E): pCos, pBLEU, PINC, and Length Std. Table 3 shows that multiple agents trained on partitions of the dataset (AQA-10-Sub) produce more diverse queries than a single agent with beam search (AQA) and than multiple agents trained on the full training set (AQA-10-Full). This suggests that the higher performance can be partly attributed to the higher diversity of the learned policies.

We proposed a method to build a better query reformulation system by training multiple sub-agents on partitions of the data using reinforcement learning, together with an aggregator that learns to combine the answers of the multiple agents given a new query. We showed the effectiveness and efficiency of the proposed approach on the tasks of document retrieval and question answering. We also found that a first attempt based on semantic clustering did not produce good results, and that diversity was an important but hard-to-characterize reason for the improved performance. One interesting orthogonal extension would be to introduce diversity in the beam search decoder (BID11), thus shedding light on the question of whether the gains come from the increased capacity of the system due to the use of multiple agents, from the diversity of reformulations, or from both. Furthermore, we found that reinforcement learning for the reformulation task is hard when the underlying system already performs extremely well on the task. This might be due to the tasks being too constrained (which makes it possible for machines to almost reach human performance), and requires further exploration.

AGGREGATOR: The encoder f_{q_0} is a word-level, two-layer CNN with filter sizes of 9 and 3, respectively, and 128 and 256 kernels, respectively. D = 512. No dropout is used. ADAM is the optimizer, with a learning rate of 10^−4 and mini-batches of size 64. It is trained for 100 epochs. We use mini-batches of size 64, SGD as the optimizer, and a learning rate of 10^−3.

AGGREGATOR: The encoder f_{q_0} is a token-level, three-layer CNN with filter sizes of 3 and with 128, 256, and 256 kernels, respectively. We train it for 100 epochs with mini-batches of size 64, SGD, and a learning rate of 10^−3.

To isolate the contribution of the Aggregator from the gains brought by the multiple reformulators, we use the aggregator to re-rank the list of documents obtained with the rewrite from a single reformulator (RL-RNN Greedy + Aggregator). We also use beam search or sampling to produce K rewrites from a single reformulator (RL-RNN K Sampled/Beam + Aggregator). The K lists of ranked documents returned by the environment are then merged into a single list and re-ranked by the Aggregator. The results are shown in Table 7.
The higher performance obtained with ten rewrites produced by different reformulators (RL-10-Sub), when compared to 20 sampled rewrites from a single agent (RL-RNN 20 Sampled + Aggregator), indicates that the gains of the proposed method come mostly from the pool of diverse reformulators, and not from the simple use of a re-ranking function (the Aggregator). To validate the effectiveness of the proposed aggregation function, we conducted a comparison study on the TREC-CAR dataset. We present the results in Table 8. We notice that removing or changing the accumulated rank or relevance score functions results in a performance drop of between 0.4 and 1.4% MAP. The largest drop occurs when we remove the accumulated rank (i.e. s_j = s^R_j), suggesting that the rank of a document obtained from the reformulation phase is a helpful signal for the re-ranking phase. Although not reported in the table, we also experimented with concatenating a vector representing each sub-agent to the aggregator's input. These vectors were learned during training and allowed the aggregator to distinguish sub-agents. However, we did not notice any performance improvement.

Table 7: Multiple reformulators vs. aggregator contribution. Numbers are MAP scores on the dev set. Using a single reformulator with the aggregator (RL-RNN Greedy/Sampled/Beam + Aggregator) improves performance by a small margin over the single reformulator without the aggregator (RL-RNN). Using ten reformulators with the aggregator (RL-10-Sub) leads to better performance, thus indicating that the pool of diverse reformulators is responsible for most of the gains of the proposed method.

Table 8: Aggregator function variants with their MAP scores and differences from the full aggregator.

Table 9: Partitioning strategies and the corresponding evaluation metrics. We notice that the random strategy generally results in the best-quality sub-agents, leading to the best scores on both of the tasks.

Reinforcement learning algorithms that use non-linear function approximators, such as neural networks, are known to be unstable BID10. Ensemble methods are known to reduce this variance (BID3; BID10). Since the proposed method can be viewed as an ensemble, we compare AQA-10-Sub's F1 variance against a single agent (AQA) over ten runs. Our method has a much smaller variance: 0.20 vs. 1.07. We emphasize that it also has higher performance than AQA-10-Ensemble. We argue that the higher stability is due to the use of multiple agents: answers from agents that diverged during training can be discarded by the aggregator, whereas in the single-agent case answers come from only one, possibly bad, policy.

Here we define the metrics used in the query diversity analysis (Sec. 5.5). pCos: the mean pairwise cosine similarity cos(#q, #q') over pairs of reformulations q, q' ∈ Q_n, where Q_n is the set of reformulated queries for the n-th original query in the development set and #q is the token-count vector of q. pBLEU: the mean pairwise sentence-level BLEU between the reformulations in Q_n. PINC: the mean pairwise PINC score (paraphrase in k-gram changes), where K is the maximum number of k-grams considered (we use K = 4). Length Std: the standard deviation of the reformulation lengths.

APPENDIX F: ON DATA PARTITIONING

Throughout this paper, we used sub-agents trained on random partitions of the dataset. We now investigate how different data partitioning strategies affect the final performance of the system.
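As a concrete reading of the pCos metric defined above, the sketch below averages the pairwise cosine similarity of token-count vectors over the reformulations of one query; the example queries are invented, and the exact normalisation over pairs is an assumption.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(c1, c2):
    dot = sum(c1[t] * c2[t] for t in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def p_cos(reformulations):
    """Mean pairwise cosine similarity of token-count vectors; lower = more diverse."""
    counts = [Counter(q.split()) for q in reformulations]
    pairs = list(combinations(counts, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

score = p_cos(["who wrote hamlet", "hamlet author name", "author of the play hamlet"])
```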
Specifically, we compare the random split against a mini-batch K-means clustering algorithm .Balanced K-means Clustering For K-means, we experimented with three types of features: average question embedding (Q), average answer embedding (A), and the concatenation of these two (Q+A). The word embeddings were obtained from.The clusters returned by the K-means can be highly unbalanced. This is undesirable since some subagents might end up being trained with too few examples and thus may have a worse generalization performance than the others. To address this problem, we use a greedy cluster balancing algorithm as a post-processing step (see Algorithm 1 for the pseudocode). item ← randomly select an item from c 8:move item to the closest cluster in C remaining 9:sort C remaining by descending order of sizes 10:end while 11: end for 12: return C Evaluation Metric In order to gain insight into the effect of a partitioning strategy, we first define three evaluation metrics. Let π i be the i-th sub-agent trained on the i-th partition out of K partitions obtained from clustering. We further use s ij to denote the score, either F-1 in the case of question answering or R@40 for document retrieval, obtained by the i-th sub-agent π i on the j-th partition. Out-of-partition score computes the generalization capability of the sub-agents outside the partitions on which they were trained: DISPLAYFORM4 This score reflects the general quality of the sub-agents. Out-of-partition variance computes how much each sub-agent's performance on the partitions, on which it was not trained, varies: DISPLAYFORM5 It indicates the general stability of the sub-agents. If it is high, it means that the sub-agent must be carefully combined in order for the overall performance to be high. Out-of-partition error computes the generalization gap between the partition on which the sub-agent was trained and the other partitions: DISPLAYFORM6 This error must be low, and otherwise, would indicate that each sub-agent has overfit the particular partition, implying the worse generalization. Result We present the in Table 9. Although we could obtain a good with the clustering-based strategy, we notice that this strategy is highly sensitive to the choice of features. Q+A is optimal for SearchQA, while A is for TREC-CAR. On the other hand, the random strategy performs stably across both of the tasks, making it a preferred strategy. Based on comparing Q and Q+A for SearchQA, we conjecture that it is important to have sub-agents that are not specialized too much to their own partitions for the proposed approach to work well. Furthermore, we see that the absolute performance of the sub-agents alone is not the best proxy for the final performance, based on TREC-CAR. TAB11 shows four reformulation examples by various methods. The proposed method (AQA-10-Sub) performs better in the first and second examples than the other methods. Note that, despite the large diversity of reformulations, BiDAF still returns the correct answer. In the third example, the proposed method fails to produce the right answer whereas the other methods perform well. In the fourth example, despite the correct answer is in the set of returned answers, the aggregator fails to set a high score for it.
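To accompany the partition-evaluation metrics defined in Appendix F above, here is a minimal numpy sketch. It assumes a K × K score matrix S in which S[i, j] is the score (F1 or R@40) of sub-agent i evaluated on partition j; whether the per-agent quantities are averaged before or after excluding j = i is our assumption.

```python
import numpy as np

def out_of_partition_metrics(S):
    K = S.shape[0]
    off_diag = ~np.eye(K, dtype=bool)                                          # mask for j != i
    oop_score = S[off_diag].mean()                                             # generalisation quality
    oop_var = np.mean([S[i, off_diag[i]].var() for i in range(K)])             # stability across partitions
    oop_err = np.mean([S[i, i] - S[i, off_diag[i]].mean() for i in range(K)])  # overfitting gap
    return oop_score, oop_var, oop_err

S = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
print(out_of_partition_metrics(S))
```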
BJeypMU5wE
We use reinforcement learning for query reformulation on two tasks and surprisingly find that, when training multiple agents, diversity of the reformulations is more important than specialisation.
We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that keep the agent in desirable situations, both during training and at convergence. We formulate these problems as {\em constrained} Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them. Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints. Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data. Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction. The field of reinforcement learning (RL) has witnessed tremendous success in many high-dimensional control problems, including video games , board games , robot locomotion , manipulation , navigation , and obstacle avoidance . In RL, the ultimate goal is to optimize the expected sum of rewards/costs, and the agent is free to explore any behavior as long as it leads to performance improvement. Although this freedom might be acceptable in many problems, including those involving simulators, and could expedite learning a good policy, it might be harmful in many other problems and could cause damage to the agent (robot) or to the environment (objects or people nearby). In such domains, it is absolutely crucial that while the agent optimizes long-term performance, it only executes safe policies both during training and at convergence. A natural way to incorporate safety is via constraints. A standard model for RL with constraints is constrained Markov decision process (CMDP) , where in addition to its standard objective, the agent must satisfy constraints on expectations of auxiliary costs. Although optimal policies for finite CMDPs with known models can be obtained by linear programming , there are not many for solving CMDPs when the model is unknown or the state and/or action spaces are large or infinite. A common approach to solve CMDPs is to use the Lagrangian method , which augments the original objective function with a penalty on constraint violation and computes the saddle-point of the constrained policy optimization via primal-dual methods . Although safety is ensured when the policy converges asymptotically, a major drawback of this approach is that it makes no guarantee with regards to the safety of the policies generated during training. A few algorithms have been recently proposed to solve CMDPs at scale while remaining safe during training. One such algorithm is constrained policy optimization (CPO) . 
CPO extends the trust-region policy optimization (TRPO) algorithm (a) to handle the constraints in a principled way and has shown promising empirical in terms scalability, performance, and constraint satisfaction, both during training and at convergence. Another class of these algorithms is by Chow et al. . These algorithms use the notion of Lyapunov functions that have a long history in control theory to analyze the stability of dynamical systems . Lyapunov functions have been used in RL to guarantee closed-loop stability . They also have been used to guarantee that a model-based RL agent can be brought back to a "region of attraction" during exploration . Chow et al. use the theoretical properties of the Lyapunov functions and propose safe approximate policy and value iteration algorithms. They prove theories for their algorithms when the CMDP is finite with known dynamics, and empirically evaluate them in more general settings. However, their algorithms are value-function-based, and thus are restricted to discrete-action domains. In this paper, we build on the problem formulation and theoretical findings of the Lyapunov-based approach to solve CMDPs, and extend it to tackle continuous action problems that play an important role in control theory and robotics. We propose Lyapunov-based safe RL algorithms that can handle problems with large or infinite action spaces, and return safe policies both during training and at convergence. To do so, there are two major difficulties that need to be addressed: 1) the policy update becomes an optimization problem over the large or continuous action space (similar to standard MDPs with large actions), and 2) the policy update is a constrained optimization problem in which the (Lyapunov) constraints involve integration over the action space, and thus, it is often impossible to have them in closed-form. Since the number of Lyapunov constraints is equal to the number of states, the situation is even more challenging when the problem has a large state space. To address the first difficulty, we switch from value-function-based to policy gradient (PG) algorithms. To address the second difficulty, we propose two approaches to solve our constrained policy optimization problem (a problem with infinite constraints, each involving an integral over the continuous action space) that can work with any standard on-policy (e.g., proximal policy optimization (PPO) ) and off-policy (e.g., deep deterministic policy gradient (DDPG) ) PG algorithm. Our first approach, which we call policy parameter projection or θ-projection, is a constrained optimization method that combines PG with a projection of the policy parameters onto the set of feasible solutions induced by the Lyapunov constraints. Our second approach, which we call action projection or a-projection, uses the concept of a safety layer introduced by to handle simple single-step constraints, extends this concept to general trajectorybased constraints, solves the constrained policy optimization problem in closed-form using Lyapunov functions, and integrates this closed-form into the policy network via safety-layer augmentation. Since both approaches guarantee safety at every policy update, they manage to maintain safety throughout training (ignoring errors ing from function approximation), ensuring that all intermediate policies are safe to be deployed. 
To prevent constraint violations due to function approximation errors, similar to CPO, we offer a safeguard policy update rule that decreases constraint cost and ensures near-constraint satisfaction. Our proposed algorithms have two main advantages over CPO. First, since CPO is closely connected to TRPO, it can only be trivially combined with PG algorithms that are regularized with relative entropy, such as PPO. This restricts CPO to on-policy PG algorithms. On the contrary, our algorithms can work with any on-policy (e.g., PPO) and off-policy (e.g., DDPG) PG algorithm. Having an off-policy implementation is beneficial, since off-policy algorithms are potentially more data-efficient, as they can use the data from the replay buffer. Second, while CPO is not a back-propagatable algorithm, due to the backtracking line-search procedure and the conjugate gradient iterations for computing natural gradient in TRPO, our algorithms can be trained end-to-end, which is crucial for scalable and efficient implementation . In fact, we show in Section 3.1 that CPO (minus the line search) can be viewed as a special case of the on-policy version (PPO version) of our θ-projection algorithm, corresponding to a specific approximation of the constraints. We evaluate our algorithms and compare them with CPO and the Lagrangian method on several continuous control (MuJoCo) tasks and a real-world robot navigation problem, in which the robot must satisfy certain constraints, while minimizing its expected cumulative cost. Results show that our algorithms outperform the baselines in terms of balancing the performance and constraint satisfaction (during training), and generalize better to new and more complex environments. We consider the RL problem in which the agent's interaction with the environment is modeled as a Markov decision process (MDP). A MDP is a tuple (X, A, γ, c, P, x0), where X and A are the state and action spaces; γ ∈ is a discounting factor; c(x, a) ∈ [0, Cmax] is the immediate cost function; P (·|x, a) is the transition probability distribution; and x0 ∈ X is the initial state. Although we consider deterministic initial state and cost function, our can be easily generalized to random initial states and costs. We model the RL problems in which there are constraints on the cumulative cost using CMDPs. The CMDP model extends MDP by introducing additional costs and the associated constraints, and is defined by (X, A, γ, c, P, x0, d, d0), where the first six components are the same as in the unconstrained MDP; d(x) ∈ [0, Dmax] is the (state-dependent) immediate constraint cost; and d0 ∈ R ≥0 is an upper-bound on the expected cumulative constraint cost. To formalize the optimization problem associated with CMDPs, let ∆ be the set of Markovian stationary policies, i.e., ∆ = {π : X × A →, a π(a|x) = 1}. At each state x ∈ X, we define the generic Bellman operator w.r.t. a policy π ∈ ∆ and a cost function h as. Given a policy π ∈ ∆, we define the expected cumulative cost and the safety constraint function (expected cumulative constraint cost) as Cπ(x0) ], respectively. The safety constraint is then defined as Dπ(x0) ≤ d0. The goal in CMDPs is to solve the constrained optimization problem It has been shown that if the feasibility set is non-empty, then there exists an optimal policy in the class of stationary Markovian policies ∆ (, Theorem 3.1). 
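To make the CMDP objective and safety constraint above concrete, the following sketch estimates the expected discounted cumulative cost C_π(x0) and constraint cost D_π(x0) by Monte Carlo rollouts. The env.reset()/env.step() interface returning (next_state, c, d, done) is a hypothetical stand-in for whatever simulator is used; a policy is considered feasible when the estimate of D_π(x0) does not exceed d0.

```python
import numpy as np

def estimate_costs(env, policy, gamma=0.99, n_rollouts=100, horizon=200):
    C, D = [], []
    for _ in range(n_rollouts):
        x = env.reset()
        c_sum, d_sum, discount = 0.0, 0.0, 1.0
        for _ in range(horizon):
            a = policy(x)
            x, c, d, done = env.step(a)   # hypothetical interface
            c_sum += discount * c         # task cost c(x, a)
            d_sum += discount * d         # constraint cost d(x)
            discount *= gamma
            if done:
                break
        C.append(c_sum)
        D.append(d_sum)
    return np.mean(C), np.mean(D)

# C_hat, D_hat = estimate_costs(env, policy); the policy is feasible when D_hat <= d0.
```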
2.1 POLICY GRADIENT ALGORITHMS Policy gradient (PG) algorithms optimize a policy by computing a sample estimate of the gradient of the expected cumulative cost induced by the policy, and then updating the policy parameter in the gradient direction. In general, stochastic policies that give a probability distribution over actions are parameterized by a κ-dimensional vector θ, so the space of policies can be written as {π θ, θ ∈ Θ ⊂ R κ}. Since in this setting a policy π is uniquely defined by its parameter θ, policy-dependent functions can be written as a function of θ or π interchangeably. DDPG and PPO are two PG algorithms that have recently gained popularity in solving continuous control problems. DDPG is an off-policy Q-learning style algorithm that jointly trains a deterministic policy π θ (x) and a Q-value approximator Q(x, a; φ). The Q-value approximator is trained to fit the true Q-value function and the deterministic policy is trained to optimize Q(x, π θ (x); φ) via chain-rule. The PPO algorithm we use in this paper is a penalty form of TRPO (a) with an adaptive rule to tune the DKL penalty weight β k. PPO trains a policy π θ (x) by optimizing a loss function that consists of the standard policy gradient objective and a penalty on the KL-divergence between the current θ and previous θ policies, i.e., DKL(θ, θ) Lagrangian method is a straightforward way to address the constraint Dπ θ (x0) ≤ d0 in CMDPs. Lagrangian method adds the constraint costs d(x) to the task costs c(x, a) and transform the constrained optimization problem to a penalty form, i.e., min θ∈Θ max λ≥0 E[The method then jointly optimizes θ and λ to find a saddle-point of the penalized objective. The optimization of θ may be performed by any PG algorithm on the augmented cost c(x, a) + λd(x), while λ is optimized by stochastic gradient descent. As described in Sec. 1, although the Lagrangian approach is easy to implement (see Appendix A for the details), in practice, it often violates the constraints during training. While at each step during training, the objective encourages finding a safe solution, the current value of λ may lead to an unsafe policy. This is why the Lagrangian method may not be suitable for solving problems in which safety is crucial during training. Since in this paper, we extend the Lyapunov-based approach to CMDPs of to PG algorithms, we end this section by introducing some terms and notations from that are important in developing our safe PG algorithms. We refer readers to Appendix B for details. We define a set of Lyapunov functions w.r.t. initial state x0 ∈ X and constraint threshold d0 as, where πB is a feasible policy of, i.e., Dπ B (x0) ≤ d0. We refer to the constraints in this feasibility set as the Lyapunov constraints. For an arbitrary Lyapunov function L ∈ Lπ B (x0, d0), we denote by, ∀x ∈ X, the set of L-induced Markov stationary policies. The contraction property of T π,d, together with L(x0) ≤ d0, imply that any L-induced policy in FL is a feasible policy of. However, FL(x) does not always contain an optimal solution of, and thus, it is necessary to design a Lyapunov function that provides this guarantee. 
In other words, the main goal of the Lyapunov approach is to construct a Lyapunov function L ∈ Lπ B (x0, d0), such that FL contains an optimal policy π show in their Theorem 1 that without loss of optimality, the Lyapunov function that satisfies the above criterion can be expressed as Lπ B, (x):= E ∞ t=0 γ t d(xt) + (xt) | πB, x, in which (x) ≥ 0 is a specific immediate auxiliary constraint cost that keeps track of the maximum constraint budget available for policy improvement (from πB to π *). They propose ways to construct such, as well as an auxiliary constraint cost surrogate, which is a tight upper-bound on and can be computed more efficiently. They use this construction to propose the safe (approximate) policy and value iteration algorithms, whose objective is to solve the following LP (, Eq. 6) during policy improvement: where x ) are the value and state-action value functions (w.r.t. the cost function c), and is the Lyapunov function. In any iterative policy optimization method, such as those studied in this paper, the feasible policy πB at each iteration can be set to the policy computed at the previous iteration (which is feasible). In LP, there are as many constraints as the number of states and each constraint involves an integral over the entire action space. When the state space is large, even if the integral in the constraint has a closed-form (e.g., for finite actions), solving becomes numerically intractable. Chow et al. assumed that the number of actions is finite and focused on value-function-based RL algorithms, and addressed the large state issue by policy distillation. Since in this paper, we are interested in problems with large action spaces, solving will be even more challenging. To address this issue, in the next section, we first switch from value-function-based algorithms to PG algorithms, then propose an optimization problem with Lyapunov constraints, analogous to, that is suitable for PG, and finally present two methods to solve our proposed optimization problem efficiently. We now present our approach to solve CMDPs in a way that guarantees safety both at convergence and during training. Similar to , our Lyapunov-based safe PG algorithms solve a constrained optimization problem analogous to. In particular, our algorithms consist of two components, a baseline PG algorithm, such as DDPG or PPO, and an effective method to solve the general Lyapunov-based policy optimization problem, the analogous to, i.e, In the next two sections, we present two approaches to solve efficiently. We call these approaches 1) θ-projection, a constrained optimization method that combines PG with projecting the policy parameter θ onto the set of feasible solutions induced by the Lyapunov constraints, and 2) a-projection, in which we embed the Lyapunov constraints into the policy network via a safety layer. 3.1 THE θ-PROJECTION APPROACH The θ-projection approach is based on the minorization-maximization technique in conservative PG and Taylor series expansion, and can be applied to both onpolicy and off-policy algorithms. Following Theorem 4.1 in , we first have the following bound for the cumulative cost:, where µ θ B,x 0 is the γ-visiting distribution of π θ B starting at the initial state x0, and β is the weight for the entropy-based regularization. 1 Using this , we denote by the surrogate cumulative cost. It has been shown in Eq. 10 of (a) that replacing the objective function Cπ θ (x0) with its surrogate C π θ (x0; π θ B) in solving will still lead to policy improvement. 
In order to effectively compute the improved policy parameter θ+, one further approximates the function C π θ (x0; π θ B) with its Taylor series expansion around θB. In particular, the term is approximated up to its first order, and the term DKL(θ, θB) is approximated up to its second order. These altogether allow us to replace the objective function in Similarly, regarding the constraints in, we can use the Taylor series expansion (around θB) to approximate the LHS of the Lyapunov constraints as Using the above approximations, at each iteration, our safe PG algorithm updates the policy by solving the following constrained optimization problem with semi-infinite dimensional Lyapunov constraints: the above max-operator is non-differentiable, this may still lead to numerical instability in gradient descent algorithms. Similar to the surrogate constraint in TRPO (to transform the max D KL constraint to an average DKL constraint, see Eq. 12 in (a) ), a more numerically stable way is to approximate the Lyapunov constraint using the average constraint surrogate where M is the number of on-policy sample trajectories of π θ B. In order to effectively compute the gradient of the Lyapunov value function, consider the special case when the auxiliary constraint surrogate is chosen as = (1 − γ)(d0 − Dπ θ B (x0)) (see Appendix B for justification). Using the fact that is θ-independent, the gradient term in can be written as are the constraint value functions, respectively. Since the integral is equal to E a∼π θ [Q W θ B (x i, a)], the average constraint surrogate can be approximated (approximation is because of the choice of) by the inequality Dπ θ B (x0) +, which is equivalent to the constraint used in CPO (see Section 6.1 in ). This shows that CPO (minus the line search) belongs to the class of our Lyapunov-based PG algorithms with θ-projection. We refer to the DDPG and PPO versions of our θ-projection safe PG algorithms as SDDPG and SPPO. Derivation details and the pseudo-code (Algorithm 4) of these algorithms are given in Appendix C. The main characteristic of the Lyapunov approach is to break down a trajectory-based constraint into a sequence of single-step state dependent constraints. However, when the state space is infinite, the feasibility set is characterized by infinite dimensional constraints, and thus, it is counter-intuitive to directly enforce these Lyapunov constraints (as opposed to the original trajectory-based constraint) into the policy update optimization. To address this, we leverage the idea of a safety layer from , that was applied to simple single-step constraints, and propose a novel approach to embed the set of Lyapunov constraints into the policy network. This way, we reformulate the CMDP problem as an unconstrained optimization problem and optimize its policy parameter θ (of the augmented network) using any standard unconstrained PG algorithm. At every given state, the unconstrained action is first computed and then passed through the safety layer, where a feasible action mapping is constructed by projecting unconstrained actions onto the feasibility set w.r.t. Lyapunov constraints. This constraint projection approach can guarantee safety during training. We now describe how the action mapping (to the set of Lyapunov constraints) works 2. Recall from the policy improvement problem in that the Lyapunov constraint is imposed at every state x ∈ X. 
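Stepping back to the θ-projection update of Section 3.1, the sketch below illustrates the shape of that update in the simplest setting: a linearised objective g^T(θ − θ_B) plus a quadratic proximity term, subject to a single linearised Lyapunov constraint b^T(θ − θ_B) + c ≤ 0 with c = D_πθB(x0) − d0. For readability the proximity term uses the plain Euclidean metric rather than the KL/Fisher regulariser used above, so this is an illustration of the projection step only, not the actual SDDPG/SPPO update.

```python
import numpy as np

def theta_projection_step(theta_B, g, b, c, beta=1.0):
    """g: objective gradient, b: constraint gradient, c: current constraint slack."""
    delta = -g / beta                          # unconstrained (gradient) step
    if b @ delta + c <= 0:                     # Lyapunov constraint already satisfied
        return theta_B + delta
    nu = (beta * c - b @ g) / (b @ b + 1e-8)   # active-constraint multiplier (non-negative here)
    delta = -(g + nu * b) / beta               # project the step onto the constraint plane
    return theta_B + delta

theta_B = np.zeros(3)
g = np.array([1.0, -2.0, 0.5])                 # gradient of the surrogate cost
b = np.array([0.5, 0.5, 0.0])                  # gradient of the constraint surrogate
print(theta_projection_step(theta_B, g, b, c=0.3))
```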
Given a baseline feasible policy πB = π θ B, for any arbitrary policy parameter θ ∈ Θ, we denote by Ξ(πB, θ) = {θ ∈ Θ : QL π B (x, π θ (x)) − QL π B (x, πB(x)) ≤ (x), ∀x ∈ X }, the projection of θ onto the feasibility set induced by the Lyapunov constraints. One way to construct a feasible policy π Ξ(π B,θ) from a parameter θ is to solve the following 2 -projection problem: We refer to this operation as the Lyapunov safety layer. Intuitively, this projection perturbs the unconstrained action as little as possible in the Euclidean norm in order to satisfy the Lyapunov constraints. Since this projection guarantees safety, if we have access to a closed form of the projection, we may insert it into the policy parameterization and simply solve an unconstrained policy optimization problem, i.e., θ+ ∈ arg min θ∈Θ Cπ Ξ(π B,θ) (x0), using any standard PG algorithm. To simplify the projection, we can approximate the LHS of the Lyapunov constraint with its first-order Taylor series (w.r.t. action a = πB(x)). Thus, at any given state x ∈ X, the safety layer solves the following projection problem: where η(x) ∈ is the mixing parameter that controls the trade-off between projecting on unconstrained policy (for return maximization) and on baseline policy (for safety), and is the action-gradient of the state-action Lyapunov function. Similar to the analysis of Section 3.1, if the auxiliary cost is state-independent, one can readily find gL π B (x) by computing the gradient of the constraint action-value function ∇aQW θ B (x, a) | a=π B (x). Note that the objective function in is positive-definite and quadratic, and the constraint approximation is linear. Therefore, the solution of this (convex) projection problem can be effectively computed by an in-graph QP-solver, such as OPT-Net . Combined with the above projection procedure, this further implies that the CMDP problem can be effectively solved using an end-to-end PG training pipeline (such as DDPG or PPO). When the CMDP has a single constraint (and thus a single Lyapunov constraint), the policy π Ξ(π B,θ) (x) has the following analytical solution. Proposition 1. At any given state x ∈ X, the solution to the optimization problem has the form The closed-form solution is essentially a linear projection of the unconstrained action π θ (x) onto the Lyapunov-safe hyper-plane with slope gL π B (x) and intercept (x) = (1 − γ)(d0 − Dπ B (x0)). It is possible to extend this closed-form solution to handle multiple constraints, if there is at most one constraint active at a time (see Proposition 1 in ).We refer to the DDPG and PPO versions of our a-projection safe Lyapunov-based PG algorithms as SDDPG a-projection and SPPO a-projection. Derivation and pseudo-code (Algorithm 5) of these algorithms are in Appendix C. We empirically evaluate 3 our Lyapunov-based safe PG algorithms to assess their: (i) performance in terms of cost and safety during training, and (ii) robustness w.r.t. constraint violation. We use three simulated robot locomotion continuous control tasks in the MuJoCo simulator . The notion of safety in these tasks is motivated by physical constraints: (i) HalfCheetah-Safe: this is a modification of the MuJoCo HalfCheetah problem in which we impose constraints on the speed of Cheetah in order to force it to run smoothly. 
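Returning to the closed form of Proposition 1 above, the single-constraint safety layer amounts to projecting the unconstrained action onto the half-space defined by the linearised Lyapunov constraint (a − π_B(x))^T g ≤ ε(x). The sketch below implements exactly that Euclidean projection; the mixing parameter η is omitted for simplicity, so this is an illustration of the projection alone rather than the full safety-layer parameterisation.

```python
import numpy as np

def lyapunov_safety_layer(a_unc, a_B, g, eps):
    """a_unc: unconstrained action, a_B: baseline action, g: action-gradient of
    the Lyapunov Q-function at a_B, eps: available constraint budget (>= 0)."""
    violation = g @ (a_unc - a_B) - eps
    if violation <= 0:
        return a_unc                                  # already safe, leave the action untouched
    return a_unc - (violation / (g @ g + 1e-8)) * g   # closest point on the safe half-space

a_B = np.array([0.0, 0.0])
g = np.array([1.0, 0.0])      # constraint only depends on the first action dimension
print(lyapunov_safety_layer(np.array([0.8, 0.3]), a_B, g, eps=0.5))   # -> [0.5, 0.3]
```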
The video shows that the policy learned by our algorithm in slower but much smoother movement of Cheetah compared to the policies learned by PPO and Lagrangian 4; (ii) Point-Circle: the agent is rewarded for running in a wide circle, but is constrained to stay within a safe region defined by |x| ≤ x lim; (iii) Point-Gather & Ant-Gather: the agent is rewarded for collecting target objects in a terrain map, while being constrained to avoid bombs. The last two tasks were first introduced in by adding constraints to the original MuJoCo tasks: Point and Ant. Details of these tasks are given in Appendix D. We compare our algorithms with two state-of-the-art unconstrained algorithms, DDPG and PPO, and two constrained methods, Lagrangian with optimized Lagrange multiplier (Appendix A) and on-policy CPO. We use the CPO algorithm that is based on PPO (unlike the original CPO that is based on TRPO) and coincides with our SPPO algorithm derived in Section 4.1. SPPO preserves the essence of CPO by adding the first-order constraint and relative entropy regularization to the policy optimization problem. The main difference between CPO and SPPO is that the latter does not perform backtracking line-search in learning rate. We compare with SPPO instead of CPO to 1) avoid the additional computational complexity of line-search in TRPO, while maintaining the performance of PG using PPO, 2) have a back-propagatable version of CPO, and 3) have a fair comparison with other back-propagatable safe PG algorithms, such as our DDPG and a-projection based algorithms. Figures 1a, 1b, 2a, 2b, 8a, 8b, 9a, 9b show that our Lyapunov-based PG algorithms are stable in learning and all converge to feasible policies with reasonable performance. Figures 1c, 1d, 2c, 2d, 8c, 8d, 9c, 9b show the algorithms in terms of constraint violation during training. These figures indicate that our algorithms quickly stabilize the constraint cost below the threshold, while the unconstrained DDPG and PPO violate the constraints, and Lagrangian tends to jiggle around the threshold. Moreover, it is worth-noting that the Lagrangian method can be sensitive to the initialization of the Lagrange multiplier λ 0. If λ 0 is too large, it would make policy updates overly conservative, and if it is too small, then we will have more constraint violation. Without further knowledge about the environment, we treat λ 0 as a hyper-parameter and optimize it via grid-search. See Appendix D for more details and for the experimental of Ant-Gather and Point-Circle. a-projection vs. θ-projection: The figures indicate that in many cases DDPG and PPO with aprojection converge faster and have lower constraint violation than their θ-projection counterparts (i.e., SDDPG and SPPO). This corroborates with the hypothesis that a-projection is less conservative during policy updates than θ-projection (which is what CPO is based on) and generates smoother gradient updates during end-to-end training. In most experiments (HalfCheetah, PointGather, and AntGather) the DDPG algorithms tend to have faster learning than their PPO counterparts, while the PPO algorithms perform better in terms of constraint satisfaction. The faster learning behavior is due to the improved dataefficiency when using off-policy samples in PG, however, the covariate-shift 5 in off-policy data makes tight constraint control more challenging. We now evaluate safe policy optimization algorithms on a real robot task -a map-less navigation task -where a noisy differential drive robot with limited sensors (Fig. 
3a) is required to navigate to a goal outside of its field of view in unseen environments while avoiding collision. The main goal is to learn a policy that drives the robot to goal as efficiently as possible, while limiting the impact energy of collisions, since the collision can damage the robot and environment. Here the CMDP is non-discounting and has a fixed horizon. The agent's observations consist of the relative goal position, agent's velocity, and Lidar measurements (Fig. 3a). The actions are the linear and angular velocity at the robot's center of the mass. 6 The transition probability captures the noisy robot's dynamics, whose exact formulation is unknown to the robot. The robot must navigate to arbitrary goal positions collision-free in a previously unseen environment, and without access to the indoor map and any work-space topology. We reward the agent for reaching the goal, which translates to an immediate cost that measures the relative distance to the goal. To measure the total impact energy of obstacle collisions, we impose an immediate constraint cost to account for the speed during collision, with a constraint threshold d 0 that characterizes the agent's maximum tolerable collision impact energy to any object. Different from the standard approach, where a constraint on collision speed is explicitly imposed to the learning problem at each time step, we emphasize that a CMDP constraint is required here because it allows the robot to lightly brush off the obstacle (such as walls) but prevent it from ramming into any objects. Other use cases of CMDP constraints in robot navigation include collision avoidance or limiting total battery usage of the task. Experimental Results: We evaluate the learning algorithms on success rate and constraint control averaged over 100 episodes with random initialization. The task is successful if the robot reaches the goal before the constraint threshold (total energy of collision) is exhausted. While all methods converge to policies with reasonable performance, Figure 4a and 4b show that the Lyapunov-based PG algorithms have higher success rates, due to their robust abilities of controlling the total constraint, as well minimizing the distance to goal. Although the unconstrained method often yields a lower distance to goal, it violates the constraint more frequently leading to a lower success rate. Lagrangian approach is less robust to initialization of parameters, and therefore it generally has lower success rate and higher variability than the Lyapunov-based methods. Unfortunately due to function approximation error and stochasticity of the problem, all the algorithms converged pre-maturely with constraints above the threshold, possibly due to the overly conservative constraint threshold (d 0 = 100). Inspection of trajectories shows that the Lagrangian method tends to zigzag and has more collisions, while the SDDPG chooses a safer path to reach the goal (Figures 5a and 5b). Next, we evaluate how well the methods generalize to (i) longer trajectories, and (ii) new environments. The tasks are trained in a 22 by 18 meters environment (Fig. 7) with goals placed within 5 to 10 meters from the robot initial state. In a much larger evaluation environment (60 by 47 meters) with goals placed up to 15 meters away from the goal, the success rate of all methods degrades as the goals are further away (Fig. 6a). 
The safety methods (a-projection -SL-DDPG, and θ-projection -SG-DDPG) outperform unconstrained and Lagrangian (DDPG and LA-DDPG), while retaining the lower constraints even when the task becomes more difficult (Fig. 6b). Finally, we deployed the SL-DDPG policy onto the real Fetch robot in an everyday office environment. 7 Fetch robot weights 150 kilograms, and reaches maximum speed of 7 km/h making the collision force a safety paramount. Figure 5c shows the top down view of the robot log. Robot travelled, through narrow corridors and around people walking through the office, for a total of 500 meters to complete five repetitions of 12 tasks, each averaging about 10 meters to the goal. The robot robustly avoids both static and dynamic (humans) obstacles coming into its path. We observed additional "wobbling" effects, that was not present in simulation. This is likely due to the wheel slippage at the floor that the policy was not trained for. In several occasions when the robot could not find a clear path, the policy instructed the robot to stay put instead of narrowly passing by the obstacle. This is precisely the safety behavior we want to achieve with the Lyapunov-based algorithms. We used the notion of Lyapunov functions and developed a class of safe RL algorithms for continuous action problems. Each algorithm in this class is a combination of one of our two proposed projections: θ-projection and a-projection, with any on-policy (e.g., PPO) or off-policy (e.g., DDPG) PG algorithm. We evaluated our algorithms on four high-dimensional simulated robot locomotion MuJoCo tasks and compared them with several baselines. To demonstrate the effectiveness of our algorithms in solving real-world problems, we also applied them to an indoor robot navigation problem, to ensure that the robot's path is optimal and collision-free. Our indicate that our algorithms 1) achieve safe learning, 2) have better data-efficiency, 3) can be more naturally integrated within the standard end-to-end differentiable PG training pipeline, and 4) are scalable to tackle real-world problems. Our work is a step forward in deploying RL to real-world problems in which safety guarantees are of paramount importance. Future work includes 1) extending a-projection to stochastic policies and 2) extensions of the Lyapunov approach to model-based RL and use it for safe exploration. We first state a number of mild technical and notational assumptions that we make throughout this section. Assumption 1 (Differentiability). For any state-action pair (x, a), π θ (a|x) is continuously differentiable in θ and ∇ θ π θ (a|x) is a Lipschitz function in θ for every x ∈ X and a ∈ A. Assumption 2 (Strict Feasibility). There exists a transient policy π θ (·|x) such that D π θ (x 0) < d 0 in the constrained problem. Assumption 3 (Step Sizes). The step size schedules {α 3,k}, {α 2,k}, and {α 1,k} satisfy Assumption 1 imposes smoothness on the optimal policy. Assumption 2 guarantees the existence of a local saddle point in the Lagrangian analysis. Assumption 3 refers to step sizes corresponding to policy updates and indicates that the update corresponding to {α 3,k} is on the fastest time-scale, the updates corresponding to {α 2,k} is on the intermediate time-scale, and the update corresponding to {α 1,k} is on the slowest time-scale. As this assumption refers to user-defined parameters, they can always be chosen to be satisfied. 
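As an illustration of Assumption 3, one concrete choice of step-size schedules is sketched below: each schedule is square-summable but not summable, and the decreasing exponents enforce the required time-scale separation between the Lagrange-multiplier, policy, and critic updates. The particular exponents are an illustrative assumption, not values taken from the paper.

```python
def step_sizes(k, c1=1.0, c2=1.0, c3=1.0):
    alpha1 = c1 / (k + 1) ** 1.0   # Lagrange multiplier (slowest time-scale)
    alpha2 = c2 / (k + 1) ** 0.8   # policy parameter (intermediate time-scale)
    alpha3 = c3 / (k + 1) ** 0.6   # critic (fastest time-scale)
    return alpha1, alpha2, alpha3

print(step_sizes(100))
```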
To solve the CMDP, we employ the Lagrangian relaxation procedure to convert it to the following unconstrained problem: where λ is the Lagrange multiplier. Notice that L(θ, λ) is a linear function in λ. Then, there exists a local saddle point (θ *, λ *) for the minimax optimization problem max λ≥0 min θ L(θ, λ), such that for some r > 0, ∀θ ∈ R κ ∩ B θ * (r), and ∀λ ∈ [0, λ max], we have where B θ * (r) is a hyper-dimensional ball centered at θ * with radius r > 0. In the following, we present a policy gradient (PG) algorithm and an actor-critic (AC) algorithm. While the PG algorithm updates its parameters after observing several trajectories, the AC algorithm is incremental and updates its parameters at each time-step. We now present a policy gradient algorithm to solve the optimization problem. The idea of the algorithm is to descend in θ and ascend in λ using the gradients of L(θ, λ) w.r.t. θ and λ, i.e., The unit of observation in this algorithm is a system trajectory generated by following the current policy π θ k. At each iteration, the algorithm generates N trajectories by following the current policy π θ k, uses them to estimate the gradients in, and then uses these estimates to update the parameters θ, λ. Let ξ = {x 0, a 0, c 0, x 1, a 1, c 1, . . ., x T −1, a T −1, c T −1, x T} be a trajectory generated by following the policy θ, where x T = x Tar is the target state of the system and T is the (random) stopping time. The cost, constraint cost, and probability of ξ are defined as C(ξ) = respectively. Based on the definition of P θ (ξ), one obtains ∇ θ log P θ (ξ) = T −1 k=0 ∇ θ log π θ (a k |x k). Algorithm 1 contains the pseudo-code of our proposed PG algorithm. What appears inside the parentheses on the right-hand-side of the update equations are the estimates of the gradients of L(θ, λ) w.r.t. θ, λ (estimates of the expressions in). Gradient estimates of the Lagrangian function are given by Input: parameterized policy π(·|·; θ) Initialization: policy parameter θ = θ 0, and the Lagrangian parameter λ = λ 0 for i = 0, 1, 2,... do for j = 1, 2,... do Generate N trajectories {ξ j,i} N j=1 by starting at x 0 and following the policy θ i. end for end for where the likelihood gradient is 2, which ensures the convergence of the algorithm. Recall from Assumption 3 that the step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the policy parameter θ update is on the fast time-scale {α 2,i}, and the Lagrange multiplier λ update is on the slow time-scale {α 1,i}. This in a two time-scale stochastic approximation algorithm that has been shown to converge to a (local) saddle point of the objective function L(θ, λ). This convergence proof makes use of standard in stochastic approximation theory, because in the limit when the step-size is sufficiently small, analyzing the convergence of PG is equivalent to analyzing the stability of an ordinary differential equation (ODE) w.r.t. its equilibrium point. In PG, the unit of observation is a system trajectory. This may in high variance for the gradient estimates, especially when the length of the trajectories is long. To address this issue, we propose two actor-critic algorithms that use value function approximation in the gradient estimates and update the parameters incrementally (after each state-action transition). We present two actor-critic algorithms for optimizing. These algorithms are still based on the above gradient estimates. Algorithm 2 contains the pseudo-code of these algorithms. 
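A small sketch of the updates performed in each iteration of the Lagrangian PG algorithm (Algorithm 1) above: θ descends on the penalised objective via the score-function (likelihood-ratio) gradient, while λ ascends on the constraint violation and is projected back onto λ ≥ 0. Inputs are assumed to be precomputed per-trajectory cost returns C_j, constraint returns D_j, and score-function gradients Σ_t ∇_θ log π_θ(a_t|x_t); no baseline subtraction or other variance reduction is included.

```python
import numpy as np

def lagrangian_pg_step(theta, lmbda, C, D, grad_logp, d0, alpha_theta, alpha_lmbda):
    """C, D: shape (N,); grad_logp: shape (N, dim(theta))."""
    penalised = C + lmbda * D                               # per-trajectory penalised cost
    grad_theta = np.mean(penalised[:, None] * grad_logp, axis=0)
    grad_lmbda = np.mean(D) - d0                            # constraint violation
    theta = theta - alpha_theta * grad_theta                # descend in theta
    lmbda = max(0.0, lmbda + alpha_lmbda * grad_lmbda)      # ascend in lambda, project to >= 0
    return theta, lmbda

theta, lmbda = np.zeros(4), 0.0
C = np.array([3.0, 2.5]); D = np.array([1.2, 0.8]); glp = np.ones((2, 4))
print(lagrangian_pg_step(theta, lmbda, C, D, glp, d0=1.0, alpha_theta=0.01, alpha_lmbda=0.1))
```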
The projection operator Γ Λ is necessary to ensure the convergence of the algorithms. Recall from Assumption 3 that the step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the critic update is on the fastest time-scale α 3,k, the policy update α 2,k is on the intermediate timescale, and finally the Lagrange multiplier update is on the slowest time-scale α 1,k. This in three time-scale stochastic approximation algorithms. Using the PG theorem from , one can show that where µ θ is the discounted visiting distribution and Q θ is the action-value function of policy θ. We can show that, where is the temporal-difference (TD) error, andV θ is an estimator of the value function V θ. Traditionally, for convergence guarantees in actor-critic algorithms, the critic uses linear approximation for the value function, where the feature vector ψ(·) belongs to a low-dimensional space R κ2. The linear approximationV θ,v belongs to a low-dimensional subspace Input: Parameterized policy π(·|·; θ) and value function feature vector φ(·) Initialization: policy parameters θ = θ0; Lagrangian parameter λ = λ0; value function weight v = v0 // NAC Algorithm: Critic Update:, where Ψ is a short-hand notation for the set of features, i.e., Ψ(x) = ψ (x). Recently with the advances in deep neural networks, it has become increasingly popular to model the critic with a deep neural network, based on the objective function of minimizing the MSE of Bellman residual w.r.t. V θ or Q θ . In this section, we revisit the Lyapunov approach to solving CMDPs that was proposed by and report the mathematical that are important in developing our safe policy optimization algorithms. To start, without loss of generality, we assume that we have access to a baseline feasible policy of, π B; i.e., π B satisfies D π B (x 0) ≤ d 0. We define a set of Lyapunov functions w.r.t. initial state x 0 ∈ X and constraint threshold d 0 as and call the constraints in this feasibility set the Lyapunov constraints. For any arbitrary Lyapunov function L ∈ L π B (x 0, d 0), we denote by the set of L-induced Markov stationary policies. Since T π,d is a contraction mapping , any L-induced policy π has the property, ∀x ∈ X. Together with the property that L(x 0) ≤ d 0, they imply that any L-induced policy is a feasible policy of. However, in general, the set F L (x) does not necessarily contain an optimal policy of, and thus, it is necessary to design a Lyapunov function (w.r.t. a baseline policy π B) that provides this guarantee. In other words, the main goal is to construct a Lyapunov function Chow et al. show in their Theorem 1 that 1) without loss of optimality, the Lyapunov function can be expressed as where (x) ≥ 0 is some auxiliary constraint cost uniformly upper-bounded by and 2) if the baseline policy π B satisfies the condition where D = max x∈X max π D π (x) is the maximum constraint cost, then the Lyapunov function candidate L * also satisfies the properties of, and thus, its induced feasible policy set F L * contains an optimal policy. Furthermore, suppose that the distance between the baseline and optimal policies can be estimated efficiently. Using the set of L * -induced feasible policies and noting that the safe Bellman operator, ∀x ∈ X, has a unique fixed point V *, such that V * (x 0) is a solution of and an optimal policy can be constructed via greedification, i.e., π. This shows that under the above assumption, can be solved using standard dynamic programming (DP) algorithms. 
While this connects CMDP with Bellman's principle of optimality, verifying whether π B satisfies this assumption is challenging when a good estimate of D T V (π * ||π B) is not available. To address this issue, Chow et al. propose to approximate * with an auxiliary constraint cost, which is the largest auxiliary cost satisfying the Lyapunov condition, ∀x ∈ X, and the safety condition L (x 0) ≤ d 0. The intuition here is that the larger, the larger the set of policies F L. Thus, by choosing the largest such auxiliary cost, we hope to have a better chance of including the optimal policy π * in the set of feasible policies. Specifically, is computed by solving the following linear program (LP): where 1(x 0) represents a one-hot vector in which the non-zero element is located at x = x 0. When π B is a feasible policy, this problem has a non-empty solution. Furthermore, according to the derivations in , the maximizer of has the following form: Input: Initial feasible policy π0; for k = 0, 1, 2,... do Step 0: With π b = π k, evaluate the Lyapunov function L k, where k is a solution of Step 1: Evaluate the cost value function Vπ k (x) = Cπ k (x); Then update the policy by solving the following problem: They also show that by further restricting (x) to be a constant function, the maximizer is given by Using the construction of the Lyapunov function L, propose the safe policy iteration (SPI) algorithm (see Algorithm 3) in which the Lyapunov function is updated via bootstrapping, i.e., at each iteration L is recomputed using w.r.t. the current baseline policy. At each iteration k, this algorithm has the following properties: 1) Consistent Feasibility, i.e., if the current policy π k is feasible, then π k+1 is also feasible; 2) Monotonic Policy Improvement, i.e., C π k+1 (x) ≤ C π k (x) for any x ∈ X; and 3) Asymptotic Convergence. Despite all these nice properties, SPI is still a value-function-based algorithm, and thus, it is not straightforward to use it in continuous action problems. The main reason is that the greedification step becomes an optimization problem over the continuous set of actions that is not necessarily easy to solve. In Section 3, we show how we use SPI and its nice properties to develop safe policy optimization algorithms that can handle continuous action problems. Our algorithms can be thought as combinations of DDPG or PPO (or any other on-policy or off-policy policy optimization algorithm) with a SPI-inspired critic that evaluates the policy and computes its corresponding Lyapunov function. The computed Lyapunov function is then used to guarantee safe policy update, i.e., the new policy is selected from a restricted set of safe policies defined by the Lyapunov function of the current policy. In this section, we first provide the details of the derivation of the θ-projection and a-projection procedures described in Section 3, and then provide the pseudo-codes of our safe PG algorithms. To derive our θ-projection algorithms, we first consider the original Lyapunov constraint in that is given by where the baseline policy is parameterized as π B = π θ B. Using the first-order Taylor series expansion w.r.t. θ = θ B, at any arbitrary x ∈ X, the term E a∼π θ Q L θ B (x, a) = a∈A π θ (a|x) Q Lπ B (x, a) da on left-hand-side of the above inequality can be written as which implies that Note that the objective function of the constrained minimization problem in contains a regularization term: that controls the distance θ − θ B to be small. 
For most practical purposes, here one can assume the higher-order term O(θ − θ B 2) to be much smaller than the first-order term Therefore, one can approximate the original Lyapunov constraint in with the following constraint: Furthermore, following the same line of arguments used in TRPO (to transform the max D KL constraint to an average D KL constraint, see Eq. 12 in (a) ), a more numerically stable way is to approximate the Lyapunov constraint using the average constraint surrogate, i.e., Now consider the special case when auxiliary constraint surrogate is chosen as a constant, i.e.,. The justification of such choice comes from analyzing the solution of optimization problem. Then, one can write the Lyapunov action-value function Q L θ B (x, a) as Since the second term is independent of θ, for any state x ∈ X, the gradient term where are the constraint value function and constraint state-action value function, respectively. The second equality is based on the standard log-likelihood gradient property in PG algorithms . Collectively, one can then re-write the Lyapunov average constraint surrogate as where is the auxiliary constraint cost defined specifically by the Lyapunov-based approach, to guarantee constraint satisfaction. By expanding the auxiliary constraint cost on the right-hand-side, the above constraint is equivalent to the constraint used in CPO, i.e., For any arbitrary state x ∈ X, consider the following constraint in the safety-layer projection problem given in: Using first-order Taylor series expansion of the Lyapunov state-action value function Q Lπ B (x, a) w.r.t. action a = π B (x), the Lyapunov value function Q Lπ B (x, a) can be re-written as Note that the objective function of the action-projection problem in contains a regularization term 2 that controls the distance a − π B (x) to be small. For most practical purposes, here one can assume the higher-order term O(a − π B (x) 2 ) to be much smaller than the first-order term (a − π B (x)) g Lπ B (x). Therefore, one can approximate the original action-based Lyapunov constraint in with the constraint a − π B (x) g Lπ B (x) ≤ (x) that is the constraint in. Similar to the analysis of the θ-projection approach, if the auxiliary cost is state-independent, the action-gradient term g Lπ B (x) is equal to the gradient of the constraint action-value function, where Q W θ B is the state-action constraint value function w.r.t. the baseline policy. The rest of the proof follows the from Proposition 1 in . This completes the derivations of the a-projection approach. Algorithms 4 and 5 contain the pseudo-code of our safe Lyapunov-based policy gradient (PG) algorithms with θ-projection and a-projection, respectively. Due to function approximation errors, even with the Lyapunov constraints, in practice a safe PG algorithm may take a bad step and produce an infeasible policy update and cannot automatically recover from such a bad step. To address this issue, similar to , we propose the following safeguard policy update rule to decrease the constraint cost: where α sg,k is the learning rate for the safeguard update. If α sg,k >> α k (learning rate of PG), then with the safeguard update, θ will quickly recover from the bad step, however, it might be overly conservative. This approach is principled because as soon as π θ k is unsafe/infeasible w.r.t. the CMDP constraints, the algorithm uses a limiting search direction. 
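A minimal sketch of the single-constraint safeguard rule just described (its multiple-constraint extension is discussed next): when the current policy is infeasible, the update ignores the task objective and takes a descent step on the constraint cost alone with a larger learning rate α_sg, so that the policy quickly returns to the feasible set. The gradients are assumed to be precomputed.

```python
import numpy as np

def safe_or_safeguard_step(theta, grad_cost, grad_constraint, d_hat, d0,
                           alpha=1e-3, alpha_sg=1e-2):
    if d_hat > d0:                                # constraint violated: recovery step
        return theta - alpha_sg * grad_constraint
    return theta - alpha * grad_cost              # normal policy-gradient step

theta = np.zeros(3)
print(safe_or_safeguard_step(theta, np.array([1., 0., 0.]), np.array([0., 1., 0.]),
                             d_hat=12.0, d0=10.0))
```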
One can directly extend this safeguard update to the multiple-constraint scenario by doing gradient descent over the constraint that has the worst violation. Another remedy to reduce the chance of constraint violation is to do constraint tightening on the constraint cost threshold. Specifically, instead of d 0, one may pose the constraint based on d 0 · (1 − δ), where δ ∈ is the factor of safety for providing additional buffer to constraint violation. Additional techniques in cost-shaping have been proposed in to smooth out the sparse constraint costs. While these techniques can further ensure safety, construction of the cost-shaping term requires knowledge of the environment, which makes the safe PG algorithms more complicated. Input: Initial feasible policy π0; for k = 0, 1, 2,... do Step 0: of T steps by starting at x0 and following the policy θ k Step 1: Using the trajectories {ξ j,k} N j=1, estimate the critic Q θ (x, a) and the constraint critic Q D,θ (x, a); • For DDPG, these functions are trained by minimizing the MSE of Bellman residual, and one can also use off-policy samples from replay buffer ; • For PPO these functions can be estimated by the generalized advantage function technique from Schulman et al. (2015b) Step 2: Based on the closed form solution of a QP problem with an LP constraint in Section 10.2 of , calculate λ * k with the following formula: β k is the adaptive penalty weight of the DKL(π||π θ k) regularizer, and Step 3: Update the policy parameter by following the objective gradient; • For DDPG • For PPO, Step 4: At any given state x ∈ X, compute the feasible action probability a * (x) via action projection in the safety layer, that takes inputs ∇aQL(x, a) = ∇aQ D,θ k (x, a) and (x) = (1 − γ)(d0 − Q D,θ k (x0, π k (x0))), for any a ∈ A. end for Return Final policy π θ k *, Our experiments are performed on safety-augmented versions of standard MuJoCo domains . HalfCheetah-Safe. The agent is a the standard HalfCheetah (a 2-legged simulated robot rewarded for running at high speed) augmented with safety constraints. We choose the safety constraints to be defined on the speed limit. We constrain the speed to be less than 1, i.e., constraint cost is thus 1[|v| > 1]. Episodes are of length 200. The constraint threshold is 50. Point Circle. This environment is taken from . The agent is a point mass (controlled via a pivot). The agent is initialized at and rewarded for moving counter-clockwise along a circle of radius 15 according to the reward, for position x, y and velocity dx, dy. The safety constraint is defined as the agent staying in a position satisfying |x| ≤ 2.5. The constraint cost is thus 1[|x| > 2.5]. Episodes are of length 65. The constraint threshold is 7. Input: Initial feasible policy π0; for k = 0, 1, 2,... do Step 0: of T steps by starting at x0 and following the policy θ k Step 1: Using the trajectories {ξ j,k} N j=1, estimate the critic Q θ (x, a) and the constraint critic Q D,θ (x, a); • For DDPG, these functions are trained by minimizing the MSE of Bellman residual, and one can also use off-policy samples from replay buffer ; • For PPO these functions can be estimated by the generalized advantage function technique from Schulman et al. 
(2015b) Step 2: Update the policy parameter by following the objective gradient; • For DDPG • For PPO, where β k is the adaptive penalty weight of the DKL(π||π θ k) regularizer, and Step 3: At any given state x ∈ X, compute the feasible action probability a * (x) via action projection in the safety layer, that takes inputs ∇aQL(x, a) = ∇aQ D,θ k (x, a) and Point Gather. This environment is taken from . The agent is a point mass (controlled via a pivot) and the environment includes randomly positioned apples (2 apples) and bombs (8 bombs). The agent given a reward of 10 for each apple collected and a penalty of −10 for each bomb. The safety constraint is defined as the number of bombs collected during the episode. Episodes are of length 15. The constraint threshold is 4 for DDPG and 2 for PPO. Ant Gather. This environment is the same as Point Circle, only with an Ant agent (quadrapedal simulated robot). Each episode is initialized with 8 apples and 8 bombs. The agent receives a reward of 10 for each apple collected, a penalty of −20 for each bomb collected, and a penalty of −20 if the episode terminates prematurely (because the Ant falls). Episodes are of length at most 500. The constraint threshold is 10 and 5 for DDPG and PPO, respectively. In these experiments, there are three different agents: a point-mass (X ⊆ R 9, A ⊆ R 2); an ant quadruped robot (X ⊆ R 32, A ⊆ R 8); and a half-cheetah (X ⊆ R 18, A ⊆ R 6). For all experiments, we use two neural networks with two hidden layers of size and ReLU activation to model the mean and log-variance of the Gaussian actor policy, and two neural networks with two hidden layers of size and tanh activation to model the critic and constraint critic. To build a low variance sample gradient estimate, we use GAE-λ (b) On top of GAE-λ, in all experiments and for each algorithm (SDDPG, SPPO, SDDPG a-projection, SPPO a-projection, CPO, Lagrangian, and the unconstrained PG counterparts), we systematically explored different parameter settings by doing grid-search over the following factors: (i) learning rates in the actor-critic algorithm, (ii) batch size, (iii) regularization parameters of the policy relative entropy term, (iv) with-or-without natural policy gradient updates, (v) with-or-without the emergency safeguard PG updates (see Appendix C.4 for more details). Although each algorithm might have a different parameter setting that leads to the optimal performance in training, the reported here are the best ones for each algorithm, chosen by the same criteria (which is based on the value of return plus certain degree of constraint satisfaction). To account for the variability during training, in each learning curve, a 1-SD confidence interval is also computed over 10 separate random runs (under the same parameter setting). In all numerical experiments and for each algorithm (SPPO θ-projection, SDDPG θ-projection, SPPO a-projection, SDDPG a-projection, CPO, Lagrangian, and the unconstrained PG counterparts), we systematically explored various hyper-parameter settings by doing grid-search over the following factors: (i) learning rates in the actor-critic algorithm, (ii) batch size, (iii) regularization parameters of the policy relative entropy term, (iv) with-or-without natural policy gradient updates, (v) with-orwithout the emergency safeguard PG updates (see Appendix C.4 for more details). 
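For completeness, the GAE-λ estimator mentioned above can be computed with the standard backward recursion over temporal-difference residuals shown below, written for a single trajectory of rewards (or negated costs). This is the textbook recursion, not code from the paper.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """values has length len(rewards) + 1 (bootstrap value for the final state)."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

r = np.array([1.0, 0.5, 0.0])
v = np.array([0.8, 0.6, 0.3, 0.0])
print(gae_advantages(r, v))
```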
Although each algorithm might have a different parameter setting that leads to the best training performance, the results reported in the paper are the best ones for each algorithm, chosen by the same criteria (based on the value of the return plus a certain degree of constraint satisfaction). In our experiments, we compare the two classes of safe RL algorithms, one derived from the θ-projection (constrained policy optimization) and one from the a-projection (safety layer), with the unconstrained and Lagrangian baselines on four problems: PointGather, AntGather, PointCircle, and HalfCheetahSafe. We perform these experiments with both off-policy (DDPG) and on-policy (PPO) versions of the algorithms. In PointCircle DDPG, although the Lagrangian algorithm significantly outperforms the safe RL algorithms in terms of return, it violates the constraint more often. The only experiment in which the Lagrangian approach performs similarly to the safe algorithms in terms of both return and constraint violation is PointCircle PPO. In all other experiments, which are performed in the HalfCheetahSafe, PointGather, and AntGather domains, either (i) the policy learned by the Lagrangian method has significantly lower performance than that learned by one of the safe algorithms (see HalfCheetahSafe DDPG, PointGather DDPG, AntGather DDPG), or (ii) the Lagrangian method violates the constraint during training while the safe algorithms do not (see HalfCheetahSafe PPO, PointGather PPO, AntGather PPO). This clearly illustrates the effectiveness of our Lyapunov-based safe RL algorithms compared to the Lagrangian method. The mapless navigation task is a continuous control task whose goal is to navigate a robot to an arbitrary goal position, collision-free and without memory of the workspace topology. The goal is usually within 5-10 meters of the robot, but it is not visible to the agent before the task starts, due to both the limited sensor range and the presence of obstacles that block a clear line of sight. The agent's observation, x = (g, ġ, l) ∈ R^68, consists of the relative goal position, the relative goal velocity, and the Lidar measurements. The relative goal position g gives the polar coordinates of the goal with respect to the current robot pose, and ġ is the time derivative of g, which indicates the speed at which the robot is approaching the goal. This information is available from the robot's localization sensors. The vector l is the noisy Lidar input (Fig. 3a), which measures the distance to the nearest obstacle in each direction within a 220° field of view split into 64 bins, up to 5 meters in depth. The action a ∈ R^2 is the vector of linear and angular velocities at the robot's center of mass. The transition probability P : X × A → X captures the noisy differential-drive robot dynamics. Without knowing the full nonlinear system dynamics, we assume knowledge of a simplified black-box kinematics simulator operating at 5 Hz, in which Gaussian noise N(0, 0.1) is added to both the observations and the actions in order to model the noise in sensing, dynamics, and actuation in the real world. The objective of the P2P task is to navigate the robot to within 30 centimeters of any real-time goal. While the dynamics of this system are simpler than those of HalfCheetah, unlike the MuJoCo tasks, where the underlying dynamics are deterministic, in this robot experiment the sensor, localization, and dynamics noise, paired with partial world observations and unexpected obstacles, make safe RL much more challenging.
Further description of the indoor robot navigation problem and its implementation details can be found in Sections 3 and 4 of . The Fetch robot weighs 150 kilograms and reaches a maximum speed of 7 km/h, making collision force a paramount safety concern. Here, the CMDP is non-discounted and has a finite horizon of T = 100. The agent is rewarded for reaching the goal, which translates into an immediate cost of c(x, a) = ‖g‖₂, which measures the relative distance to the goal. To measure the impact energy of obstacle collisions, we impose an immediate constraint cost of d(x, a) = ġ · 1{l ≤ r_impact}/T, where r_impact is the impact radius w.r.t. the Lidar depth signal, to account for the speed during collision, with a constraint threshold d_0 that characterizes the agent's maximum tolerable collision impact energy with any objects. (Here the total impact energy is proportional to the robot's speed during any collisions.) Under this CMDP framework (Fig. 3b), the main goal is to train a policy π* that drives the robot along the shortest path to the goal while limiting the average impact energy of obstacle collisions. Furthermore, due to limited data, any intermediate point-to-point policy is deployed on the robot to collect more samples for further training; therefore, guaranteeing safety during training is critical in this application.
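To make the safety-layer step concrete, the following is a minimal sketch (not the paper's implementation) of a single-constraint action projection: given the nominal policy action a, the constraint-critic gradient g ≈ ∇_a Q_D(x, a), and the slack ε(x) described above, it solves the small QP min_{a'} ½‖a' − a‖² s.t. g⊤(a' − a) ≤ ε(x) in closed form. The function name, sign conventions, and the single-constraint assumption are ours; the paper's exact formulation may differ.

```python
import numpy as np

def project_action(a, g, eps):
    """Project action a onto the half-space {a': g.(a' - a) <= eps}.

    Minimal sketch of a single-constraint safety-layer projection:
    solve  min_a' 0.5 * ||a' - a||^2  s.t.  g^T (a' - a) <= eps,
    whose closed-form solution shifts a along -g only when eps < 0
    (i.e., when the linearized constraint budget is already exhausted).
    Here `g` plays the role of the constraint-critic gradient and `eps`
    the slack (1 - gamma) * (d0 - Q_D(x0, pi(x0))) described in the text;
    both names are assumptions made for this illustration.
    """
    a = np.asarray(a, dtype=float)
    g = np.asarray(g, dtype=float)
    violation = max(0.0, -float(eps))      # how far we are past the budget
    denom = float(g @ g) + 1e-8            # guard against a zero gradient
    return a - (violation / denom) * g

# Usage: the nominal action is corrected only when the budget is negative.
a_safe = project_action(a=np.array([0.4, -0.2]), g=np.array([1.0, 0.5]), eps=-0.3)
```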
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkxeThNFPH
A general framework for incorporating long-term safety constraints in policy-based reinforcement learning
Generative networks are known to be difficult to assess. Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images. But the validation of their quality is highly dependent on the method used. A good generator should generate data which contain meaningful and varied information and that fit the distribution of a dataset. This paper presents a new method to assess a generator. Our approach is based on training a classifier with a mixture of real and generated samples. We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data. This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset. We compare this with the score of the same classifier trained on the real training data mixed with noise. By computing the classifier's accuracy with different ratios of samples from both distributions (real and generated) we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset. Our experiments compare the of different generators from the VAE and GAN framework on MNIST and fashion MNIST dataset. Generative network approaches have been widely used to generate samples in recent years. Methods such as GAN BID2, WGAN BID0, CGAN BID6, CVAE BID15 and VAE BID3 have produced nice samples on various image datasets such as MNIST, bedrooms BID10 or imageNet BID8.One commonly accepted tool to evaluate a generative model trained on images is visual assessment to validate the realistic character of samples. One case of this method is called'visual Turing tests', in which samples are visualized by humans who try to guess if the images are generated or not. It has been used to assess generative models of images from ImageNet BID1 and also on digit images BID4. BID13 proposes to automate this method with the inception score, which replaces the human judgment by the use of a pretrained classifier to assess the variability of the samples with an entropy measure of the predicted classes and the confidence in the prediction. Unfortunately, those two methods do not indicate if the generator collapses to a particular mode of the data distribution. Log-likelihood based evaluation metrics were widely used to evaluate generative models but as shown in Lucas BID5, those evaluations can be misleading in high dimensional cases. The solution we propose to estimate both sample quality and global fit of the data distribution is to incorporate generated data into the training phase of a classifier before evaluating it. Using generated samples for training has several advantages over using only the original dataset. First, it can make training more efficient when the amount of data is low. As shown in BID7, where the conditional distribution P (Y |X)(X represents the samples and Y the classes) learned by a generative model is compared to the same conditional distribution learned by a discriminative model, the generative model performs better in learning this conditional distribution by regularizing the model when the amount of data is low. Secondly, once the generative model is trained, it can sample as much images as needed and can produce interpolations between two images which will induce less risk of overfitting on the data. 
Other works use generative models for data augmentation BID11 or to produce labeled data BID14 in order to improve the training of discriminative models, but their intention is not to use it to evaluate or compare generative neural networks. Our method evaluates the quality of a generative model by assessing its capacity to fit the real distribution of the data. For this purpose, we use the samples generated by a given trained generative model. Our work aims to show how this data augmentation can benefit the training of a classifier and how we can use this benefit as an evaluation tool in order to assess a generative model. This method evaluates whether the information of the original distribution is still present in the generated data and whether the generator is able to produce new samples that are eventually close to unseen data. We compare classifiers trained over mixtures of generated and real data with varying ratios and with varying total amounts of data. This allows us to compare generative models in various data settings (i.e., when there is few or many data points).The next section will present the related work on generative models, the exploitation of the generated samples and their evaluation. We then present our generative model evaluation framework before presenting experimental on several generative models with different datasets. The variational auto-encoder (VAE) framework BID3, BID12 ) is a particular kind of auto-encoder which has control over its latent space, in which each variable is a sample from a prior distribution, often chosen as an univariate normal distribution N(0, I) (where I is the identity matrix). The VAE learns to map this low dimensional latent space to the observation space. This characteristic makes the VAE an interesting option for generating new data after training. The particularity of the latent space comes from the minimization of the KL divergence between the distribution of the latent space and the prior N(0, I). For the sake of simplicity, in this paper we will speak about the decoder of the VAE as a generator. Generative adversarial networks BID2 are a framework of models that learn by a game between two networks: a generator that learns to produce images from a distribution P and a discriminator which learns to discriminate between generated and true images. The generator wants to fool the discriminator and the discriminator wants to beat the generator. This class of generative models can produce visually realistic samples from diverse datasets but they suffer from instabilities in their training. Some recent approaches such as Wasserstein GAN (WGAN) BID0 try to address those issues by enforcing a Lipschitz constraint on the discriminator. Conditional neural networks BID15 and in particular Conditional Variational Autoencoders (CVAE) or Conditional Generative adversarial networks (CGAN) BID6 are a class of generative models that have control over the sample's class. By imposing a label during training, a conditional generative network can generate from any class and thus produces labeled data. The conditional approach has been used to improve the quality of generative networks and make them more discriminative BID9. They are particularly adapted for our setup because we need to generate labeled data to train our classifiers. In BID11, a generator is used to perform data augmentation. 
Instead of designing a composition of fine tuned transformations for this objective, the authors use adversarial training to learn a sequence of incremental operations (for example rotating or swapping words in a sentence). Their approach uses a GAN to be able to generalize in terms of better data-augmentation and to increase their performance on different datasets such as Cifar10 and the ACE relation extraction task. BID14 also learns a sequence of transformations with generative networks from the GAN family, but they use a 3D model as input and create an augmented view of it. Our approach is similar by using generative networks for data-augmentation but we do not attempt to learn transformations. Instead, we use the generated data to assess if the generative model has been able to generalize over the distribution of the data. The evaluation of generative networks is discussed in. The authors show that different metrics (as Parzen windows, Nearest Neighbor or Log likelihood) applied to generative models can lead to different . Good in one application of a generative model can not be used as evidence of good performance in another application. Their is that evaluation based on sample visual quality is a bad indicator for the entropy of samples. Conversely, the log-likelihood can be used to produce samples with high entropy but does not assure good visual quality. The method we propose can both estimate the quality and the entropy of samples as we will show in Section 3.The quality of the internal representation of a generator can also be estimated with a discriminator. In BID10 they use the discriminator of a ACGAN as feature extractor for evaluating the quality of unsupervised representation learning algorithms. They apply the feature extractor on supervised datasets and evaluate the performance of linear models fitted on top of these features. They experiment a good accuracy on Cifar10 thanks to this method. This approach gives insight on how the discriminator estimates if an image is true or false. If the discriminator has good enough feature extractors for classification, it means that the generator samples are hard to be discriminated from samples from the true distribution. It assess indirectly the quality of the generator. This method is however applicable only if a deep convolutional neural networks is used as discriminator and can not be applied, e.g., on variational auto-encoders. The principal difference between a discriminator and our classifier is that it is not involved in the training process. In our approach, the generator is completely independent from the classifier and therefore there is no bias from the classifier in the generator. Parzen windows estimate is a method to estimate the unknown probability density function f of a probability distribution P. This method uses a mixture of simpler probability density functions, called kernels, as approximates for f. In general, a popular kernel used is an isotropic Gaussian centered on a given data point with a small variance (the variance is an hyper parameter here). The idea, like other methods based on Kernel Density Estimation, is to have a small window on each data point such that we apply some smoothing over the function we try to approximate. However, even if the number of samples is high, Parzen windows estimator can be still very far from the true likelihood as shown in , and thus cannot be a good approach to evaluate if the data distribution learned by a model is close to the original one. 
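As a concrete illustration of the Parzen-window estimate discussed above, the following minimal sketch (our own, not taken from any of the cited works) places an isotropic Gaussian kernel with bandwidth sigma on each generated sample and computes the average log-likelihood assigned to a set of test points; the array shapes and names are assumptions.

```python
import numpy as np

def parzen_log_likelihood(samples, test_points, sigma):
    """Parzen-window (KDE) log-likelihood with isotropic Gaussian kernels.

    For each test point x we average Gaussian kernels centered on the
    generated `samples`; sigma is the bandwidth hyper-parameter discussed
    in the text. Uses log-sum-exp for numerical stability.
    """
    samples = np.asarray(samples, dtype=float)          # (n, d) generated samples
    test_points = np.asarray(test_points, dtype=float)  # (m, d) held-out points
    n, d = samples.shape
    diffs = test_points[:, None, :] - samples[None, :, :]              # (m, n, d)
    log_kernel = -0.5 * np.sum(diffs ** 2, axis=-1) / sigma ** 2       # (m, n)
    log_norm = -0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
    mx = log_kernel.max(axis=1, keepdims=True)
    log_px = log_norm + mx[:, 0] + np.log(np.exp(log_kernel - mx).sum(axis=1)) - np.log(n)
    return float(log_px.mean())
```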
Multi-scale structural similarity (MS-SIM, BID16) is a measurement that gives a way to incorporate image details at different resolutions in order to compare two images. This similarity is generally used in the context of image compression to compare image before and after compression. In BID9 the authors use this similarity to estimate the variability inside a class. They randomly sample two images of a certain class and measure the MS-SIM. If the is high, then images are considered different. By operating this process several times, the similarity should give an insight on the entropy of P (X|Y) (X a data point and Y its class): if the MS-SIM gives high , the entropy is high; otherwise, the entropy is low. However, it can not estimate if the sample comes from one or several modes of the distribution P (X|Y). For example, if we want to generate images of cats, the MS-SIM similarity can not differentiate a generator that produces different kinds of black cats from a network that produces different cats of different colors. In our method, if the generator is able to generate in only one mode of P (X|Y), the score will be low in the testing phase. Another approach that aims to evaluate a generative model by using a conditional distribution learned by a classifier is the inception score BID13 BID9. The authors use a pretrained inception classifier model to get the conditional label distribution P (Y |X) over the generated samples. They proposed the following score in order to evaluate a generative model: DISPLAYFORM0 When the score is high, the generator produces samples on varied classes (Cross entropy of P (Y |X), P (Y) is high) and the samples look like real images from the original dataset (entropy of P (Y |X) is low ). Inception score can be seen as a measurement of the variability of the generated data while penalizing the uncertainty of P (Y |X). Unfortunately, it does not estimate if the samples are varied inside a certain class (the entropy of p(X|Y)). Our approach imposes a high entropy of P (Y) and gives an unbiased indicator about the entropy of both P (Y |X) and P (X|Y). We evaluate generators in a supervised training setup. We have a dataset D composed of pair of examples (x, y) where x is a data point and y the label associated to this data point. By iterating this method on diverse values of τ and n we can evaluate the quality of a generator given a dataset. Often, generative models are presented on popular datasets like MNIST. Fashion-MNIST BID18 can also serve as a direct drop-in replacement for the original MNIST dataset. This dataset is however more complex than MNIST as images have a higher variability. Thus, we use these datasets in order to evaluate different generative models. We used two different methods in order to get conditional generative models. The first uses traditional generative neural network which can not produce labeled data. In order to associate each generated sample to a label, we train one generator for each specific class y on D train. This makes us able to label any generated sample. Once the training of those generators is done, we mix the samples obtained by each generator in order to produce D gen. For the experiments, we compare two generative models is this setting: a standard VAE and a WGAN. The second method uses conditional generative models which can generate samples in all classes while controlling the class of a particular sample. Conditional models can thus generate various labeled samples and produce a whole dataset D gen directly. 
In this last case, we ran our experiments on CVAE and CGAN. Once the dataset D gen is generated, we mixed it with the real dataset D train. As we can generate as much data as we want, we experimented different ratios between real datasets and generated datasets. We call τ the probability of sampling from D gen. We made experiments with different values for τ = [0. 000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000]. τ = 0 implies that we use only data from D train. In this specific setting, we compare the effectiveness of the data augmentation with generated samples versus classic data augmentation as isotropic Gaussian noise with an optimized variance or a random pixel dropout with a probability α of putting a pixel to 0. We also train a classifier without any data augmentation as baseline. We use a standard CNN with a softmax output as classifier to predict the labels on this mixture of samples. On each epoch we evaluate this classifier over a validation set D valid. Then, we choose the classifier that performs best on this validation set. We use early stopping to stop the training if the accuracy does not improve anymore on the validation set after 50 epochs. The classifier is then tested on D test. We train a classifier for each value of τ. We assess the quality of the generative model by comparing the test score of this classifier when τ = 0 versus the best test score of the classifier with τ > 0. The gives an indication on how the learned distribution from D train fits and generalizes the distribution from D train. In order to be able to compare from different generators on a given dataset, we always use the same classifier architecture. To be able to estimate the impact of learning from generative settings versus discriminative ones, we have made variable the amount of data in D train used to train both generator and classifier. Thus, we repeat all our experiments for the following amount of data samples:. This allows us to measure the regularization capacity of the generated samples over the classifier's training. We interpret this regularization capacity of those samples as a capacity of generalization. In FIG0, we present the test accuracy when τ increase. When τ = 0 there is no generated data, this is the of the baseline without data augmentation. Our interpretation of the figure is that if the accuracy is better than baseline with a low τ (< 0.5) it means that the generator is able to generalize by learning meaningful informations about the dataset. When τ > 0.5 if the accuracy is maintained it means the generated data can replace the dataset in most parts of the distribution. When τ = 1 there is no more original data, the classifier is thus trained only on generated samples. If the accuracy is still better than the baseline, it means that the generator has fit the training distribution (and eventually has learned to generalize if this score is high over the test set).(a) Relative accuracy wrt. baseline on mnist for different models (b) Relative accuracy wrt. baseline on fashionmnist for different models Figure 2: Representation of the data augmentation capacity of each generative models. For each number of training example we show the maximum accuracy a generative model can achieve by tuning τ. We also show when tuning hyper-parameter for data augmentation method. Following this interpretation, FIG0 allows us to compare different generative neural networks on fashion-MNIST. 
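To make the mixing protocol concrete, the following minimal sketch (not the original implementation; the array names and shapes are assumptions) builds a training set in which each example is drawn from D_gen with probability τ and from D_train otherwise; a classifier is then trained on the result for each value of τ. The per-model comparison is discussed next.

```python
import numpy as np

def mix_datasets(real_x, real_y, gen_x, gen_y, tau, n_total, seed=0):
    """Draw a training set where each example comes from the generated data
    with probability `tau` and from the real data otherwise.

    tau = 0 recovers the purely real baseline, tau = 1 trains on generated
    data only; real_x and gen_x are assumed to share the same sample shape.
    """
    rng = np.random.default_rng(seed)
    from_gen = rng.random(n_total) < tau
    real_idx = rng.integers(0, len(real_x), size=n_total)
    gen_idx = rng.integers(0, len(gen_x), size=n_total)
    x = np.empty((n_total,) + real_x.shape[1:], dtype=real_x.dtype)
    y = np.empty(n_total, dtype=real_y.dtype)
    x[from_gen], y[from_gen] = gen_x[gen_idx[from_gen]], gen_y[gen_idx[from_gen]]
    x[~from_gen], y[~from_gen] = real_x[real_idx[~from_gen]], real_y[real_idx[~from_gen]]
    return x, y

# Sweep the ratios used in the paper and train one classifier per tau.
taus = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]
```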
For example, we can see that VAE (FIG0) and CVAE (FIG0) are able to maintain the baseline accuracy when we perform data augmentation with generated samples for different amounts of data n, but the accuracy drops when only generated data are used, which means that, in those settings, these generative models did not fit the whole distribution. CGAN shows the same kind of behavior, but the degradation is worse when τ = 1. However, WGAN is able to improve the accuracy for τ < 0.5 and to maintain it even when τ = 1. FIG0 also allows us to estimate the entropy of P(X). Since a deep neural network cannot be trained efficiently without varied data samples, if the accuracy on the test set is good with only generated data (τ = 1), then necessarily the entropy of P(X) is high, as is the entropy of P(X|Y). This condition is sufficient but not necessary to assess the quality of a generator on a given task. Our results show that training one WGAN per class is the best solution to fit the complete distribution, whatever the number of training examples used. The poor results of CGAN can be explained by the difficulty of training this model and by its instability. Figure 2 shows the best accuracy that each model can achieve when τ is tuned. It shows that, aside from CGAN on MNIST, all generators can be used to increase accuracy, whatever the number of training examples. In this context, τ can be seen as a tunable hyper-parameter for data augmentation. Figure 2 also shows that the generalization capacity is particularly effective when the number of examples is low. This can be explained by the fact that, as the amount of data increases, the need for data augmentation decreases.
Ψ_G = (1/|N|) · Σ_{n∈N} [ max_τ acc_G(n, τ) − acc_baseline(n) ]
The results shown in TAB1 summarize Figure 2 for each generator G by a numerical value Ψ_G (Eq. 2). We call Ψ_G the data augmentation capacity of a generator. It is computed, for a given generator G, as the mean of the differences between the accuracy on the mixture with τ tuned and the baseline accuracy, over each number n of training examples. The numbers of training examples are chosen arbitrarily; in our experiments we compute Ψ_G with [0.2%, 1%, 2%, 10%, 20%, 100%] of the training set of the dataset. The important point is to operate at different scales in order to estimate how well the generative model is able to generalize. The results in TAB1 indicate whether a generative model is globally able to perform data augmentation for a given dataset. A positive result indicates that the generative model is able to generalize well on the dataset at different sizes. When the amount of data is low, it is better to refer to Figure 2 to choose the best model for that specific case. We presented a method to estimate how well a generative model has learned to generalize in a conditional setting. Using generated samples as data augmentation in order to improve a discriminative model is a well-known technique. However, assessing the quality of a generative model with a discriminative model seems to have been less explored. As we have shown, this evaluation is meaningful for measuring how well the generative model can sample data points that are probable under the true distribution we want to approximate. In this paper, we applied this method to image samples, which means that we can correlate our measure with a visual assessment of sample quality, since the generator outputs lie in pixel space. Our assessment method can also be used in other spaces, as long as labeled data are available. The relative benefits of discriminative and generative models have been studied in BID7.
They found that for a small number of training examples n, a generative model will be less prone to overfitting. A discriminative model on a small number of examples risks to learn some spurious nodes that will penalize the generalization capacity. Our are coherent with their as the data augmentation induced by generated data is particularly effective when n is low. Our evaluation was performed on several datasets, generative models, different ratios and amounts of training data. With the current , WGAN seems to be the most efficient solution. However, this should be confirmed by experiments on other datasets, generative models and with different types of discriminative models to get a more general comparison. This will be explored in further experiments. As presented in BID17, the sampling method of a generator can be adapted, which can have an impact on the data produced. A way to boost the performance of a generator can be to focus on improving the sampling phase instead of the model design or the training. An extension of this work could be to look into the impact of several sampling techniques on the performance of a generative model. This paper introduces a new method to assess and compare the performances of generative models on various labeled datasets. By training a classifier on several mixture of generated and real data we can estimate the ability of a generative model to generalize. When addition of generated data into the training set achieved better data augmentation than traditional data augmentation as Gaussian noise or random dropout, it demonstrates the ability of generative models to create meaningful samples. By varying the number of training data, we compute a data augmentation capacity Ψ G for each model on MNIST and fashion-MNIST datasets. Ψ G is a global estimation of the generalization capacity of a generative model on a given dataset. The presented here are produced on image datasets but this method can be used on all kinds of datasets or generative models as long as labeled data is available. A ADDITIONAL It represents the overfitting capacity of a VAE. All four samples set look good, but for example, the top left trained with only 50 different data often produce similar images (as the samples on top right trained with 100 images). When the number of training images increases the variability seems good afterwards but as we can see in FIG2 when τ = 1 the generator generalizes better the distribution when n is < 1000 than when > 1000 is high. Relative accuracy improvement between the baseline trained on original data and the accuracy with generated or noise data augmentation in training. τ is the ratio between the number of generated data and the total number of data used for training.
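To make the quantity Ψ_G from Eq. 2 concrete, the following is a minimal sketch (our own, not from the original implementation) of how it can be computed from a grid of test accuracies; the dictionary layout is an assumption made for this illustration.

```python
def data_augmentation_capacity(acc, baseline_acc):
    """Compute Psi_G (Eq. 2): for every training-set size n, take the best
    test accuracy over the tau grid minus the tau = 0 baseline accuracy,
    then average the differences over all sizes.

    acc:          dict mapping n -> {tau: test accuracy} for generator G
    baseline_acc: dict mapping n -> test accuracy of the classifier at tau = 0
    """
    diffs = [max(acc[n].values()) - baseline_acc[n] for n in sorted(acc)]
    return sum(diffs) / len(diffs)

# Example with two training-set sizes and a small tau grid.
acc = {100: {0.25: 0.71, 0.5: 0.74}, 1000: {0.25: 0.90, 0.5: 0.89}}
baseline = {100: 0.69, 1000: 0.91}
psi_g = data_augmentation_capacity(acc, baseline)  # mean of (0.74-0.69) and (0.90-0.91)
```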
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJ1HFlZAb
Evaluating generative networks through their data augmentation capacity on discriminative models.
We propose Automating Science Journalism (ASJ), the process of producing a press release from a scientific paper, as a novel task that can serve as a new benchmark for neural abstractive summarization. ASJ is a challenging task as it requires long source texts to be summarized to long target texts, while also paraphrasing complex scientific concepts to be understood by the general audience. For this purpose, we introduce a specialized dataset for ASJ that contains scientific papers and their press releases from Science Daily. While state-of-the-art sequence-to-sequence (seq2seq) models could easily generate convincing press releases for ASJ, these are generally nonfactual and deviate from the source. To address this issue, we improve seq2seq generation via transfer learning by co-training with new targets: (i) scientific abstracts of sources and (ii) partitioned press releases. We further design a measure for factuality that scores how pertinent to the scientific papers the press releases under our seq2seq models are. Our quantitative and qualitative evaluation shows sizable improvements over a strong baseline, suggesting that the proposed framework could improve seq2seq summarization beyond ASJ. Neural text summarization has undergone an exciting evolution recently: from extractive through abstractive to hybrid models; from maximum likelihood to reinforcement learning objectives ; from small to large datasets that are also abstractive ; from short to orders of magnitude longer sources and targets ; from models trained from scratch to using pre-trained representations . Such evolution was largely supported by the emergence of seq2seq models . These advances are yet to be challenged with a seq2seq summarization task that summarizes a long source to a long target with extreme paraphrasing. Below we argue that ASJ is a natural testbed for such a challenge. Science journalism is one of the few direct connections between scientific research and the general public, lead by media outlets such as Science Daily, Scientific American, and Popular Science. Their journalists face an incredibly difficult task: not only must they carefully read the scientific papers and write factual summaries, but they also need to paraphrase complex scientific concepts using a language that is accessible to the general public. To emulate what a journalist would do, we present a dataset of about 50,000 scientific papers paired with their corresponding Science Daily press releases, and we seek to train a seq2seq model to transform the former into the latter, i.e., an input scientific paper into an output popular summary. Ideally, our model would both identify and extract the relevant points in a scientific paper and it would present them in a format that a layman can understand, just as science journalists do. We now ask: would such a model be successful without a factual and accurate representation of scientific knowledge? Recent work suggests that even simple training of word embeddings could capture certain scientific knowledge from 3.3 million scientific abstracts . Therefore, here we propose to transfer knowledge from domains from which a seq2seq model would be able to extract factual knowledge using transfer learning . We frame our approach as multitask learning (MTL). 
We perform co-training using both scientific abstracts and parts of the target press releases, and we view these additional domains as potential training sources for representation of scientific facts within the seq2seq model, which ideally would be helpful to ASJ. We demonstrate that MTL improves factuality in seq2seq summarization, and we measure this automatically using a novel evaluation measure that extracts random fragments of the source and evaluates the likelihood of the target given these fragments. We believe that the insights from our experiments can guide future work on a variety of seq2seq tasks. The contributions of our work can be summarized as follows: 1. We present a novel application task for seq2seq modelling: automating science journalism (ASJ). 2. We present a novel, highly abstractive dataset for summarization for the ASJ task with long source and target texts, where complex scientific notions in the source are paraphrased and explained in simple terms in the target text. 3. We propose a transfer learning approach that significantly improves the factuality of the generated summaries. 4. We propose an automatic evaluation measure that targets factuality. The rest of this paper is organized as follows: Section 2 discusses previous work. Section 3 describes the new data that we propose for ASJ. Section 4 presents our models. Section 5 introduces our transfer learning experiments for summarization. Section 6 describes our evaluation setup. Section 7 discusses our experiments and the . Section 8 concludes and points to promising directions for future work. Our work rethinks the task of neural text summarization, the utilization of scientific datasets for generation and the applications of multitask learning. Neural Text Summarization. We define automating science journalism as a text summarization task where the source is the scientific paper and the target is a press release summary about the paper, i.e., a shorter version of a full press release. Existing neural models for this task can be abstractive, i.e., paraphrase the source, , extractive , i.e., extract words, phrases, or entire sentences from the source, or hybrid . Unlike previous work, our task is abstractive not only to shorten the source, but also to change the technical style of the scientific papers. At the same time, we need to ensure factuality by accurately paraphrasing scientific concepts from the source. Scientific Datasets for Generation. The task of automating science journalism has not received much attention so far, partly due to the lack of a benchmark datasets for training neural models. proposed that the abundance of scientific articles online and their press coverage provide an opportunity to develop neural models, and presented pioneering on the Science Daily dataset using an RNN seq2seq model. Other work preserved the style of the source (; ;) or generated very short targets taking the form of blog titles . However, none of the above work faced our challenging task of not only presenting relevant information, but also integrating it into articles that use popular language rather than high-level scientific style. Multitask Learning. The nature of our task and the corresponding datasets make it possible to use recent advances in transfer learning for NLP . Namely, we combine datasets sharing a source domain, i.e., scientific articles, with different target domains, i.e., abstracts and press releases. Thus, we propose a novel mutitask learning (MTL) setup for summarization. 
For this component, we take inspiration from recent work on automatically generating news articles using the GROVER model . An important characteristic of GROVER is that it is trained on multiple variations of the same dataset. For example, in some instances, the headline might be used to generate the body, whereas in others, the body might be used to generate the headline. Similarly, via a special tag, we can signal to the decoder to generate either an abstract or a press release, or to generate the target in several steps by conditioning on the intermediate outputs. Finally, other constructions for signaling to the decoder were proposed in the context of neural machine translation and summarization with user preferences that contain tags, similarly applied as in recent advances for pre-training contextual word embeddings . Unlike these techniques, our task here is automating science journalism. Our SD Dataset. Our dataset in its original form consists of 50,305 pairs of: a full-text scientific paper as a source and a corresponding Science Daily press release as a target (SD). We download the scientific papers as PDF files and then convert them to raw text. We do not perform explicit pre-processing, and thus the papers do not follow any standard format (e.g., some start with the title, the abstract, or just the main body) and do not exclude extraneous symbols, characters, or spaces. ArXiv . For domain transfer, we use the arXiv dataset that links full-text scientific papers as sources from the arXiv database to their abstracts as targets. This dataset does not include papers that are excessively long (e.g., thesis), too short, or have no abstract structure. The dataset consists of 215K paper-abstract pairs, each paper averaging 4,938 words and each abstract averaging 220 words. The figures and the tables were removed using regular expressions to only preserve the textual content of the articles. Furthermore, math formulas and citation markers were normalized to special tokens. When compared to Science Daily, not only are the target texts in this dataset more extractive, but the source texts are also much more mathematical. Comparison and Discussion. In contrast to most summarization datasets, our Science Daily dataset is unique in two distinct ways: (i) The target summaries are more abstractive, i.e., they have lower coverage and density compared to the arXiv dataset (see Figure 2). The explicit formulas for these statistics are COVERAGE(A, S) = (1/|S|) f ∈F (A,S) |f | and DENSITY(A, S) = (1/|S|) f ∈F (A,S) |f | 2, where F(A, S) is the set of extractive fragments, a sequence of tokens that is shared between the source and the target, for a set of articles {A} and a corresponding set of summaries {S}, |f | is the number of words in the fragment |f | and |S| is the number of words in the summary S . In plain words, coverage represents the fraction of words that are in an extractive fragment, while density represents the average length of these fragments. (ii) Both our source and target sequences are relatively large, with each article averaging around 7,000 words and each press release averaging around 550 words. For comparison, the standard dataset CNN/ Daily Mail is much shorter, with sources of 800 words and targets of 50 words, and even the arXiv dataset has much shorter targets, with sources of 6,000 words and targets of 200 words. 
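The coverage and density statistics above can be computed directly from their definitions. The following minimal sketch (our own simplified version; reference implementations of the fragment extraction differ in details) greedily extracts the shared fragments F(A, S) and then applies the two formulas.

```python
def extractive_fragments(article_tokens, summary_tokens):
    """Greedy extractive-fragment computation: scan the summary left to right
    and, at each position, take the longest token span that also appears
    contiguously in the article. Simplified sketch for illustration only.
    """
    fragments, i = [], 0
    A, S = article_tokens, summary_tokens
    while i < len(S):
        best = 0
        for j in range(len(A)):
            if A[j] == S[i]:
                k = 0
                while i + k < len(S) and j + k < len(A) and A[j + k] == S[i + k]:
                    k += 1
                best = max(best, k)
        if best > 0:
            fragments.append(S[i:i + best])
            i += best
        else:
            i += 1
    return fragments

def coverage_and_density(article_tokens, summary_tokens):
    """COVERAGE = (1/|S|) * sum |f|  and  DENSITY = (1/|S|) * sum |f|^2."""
    frags = extractive_fragments(article_tokens, summary_tokens)
    n = len(summary_tokens)
    coverage = sum(len(f) for f in frags) / n
    density = sum(len(f) ** 2 for f in frags) / n
    return coverage, density
```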
Figure 2 in the Appendix offers additional comparison of the coverage and the density for some popular datasets for summarization, while Figure 3 presents more information about the length of the sources in our Science Daily dataset. For the basis of our models we used the FAIRSEQ library , and we focused on convolutional seq2seq implementations . FCONV. Our first model is a small vanilla convolutional seq2seq model, corresponding to FAIRSEQ's ISWLT de-en, using four encoder layers with embedding dimension of 256, 256 channels, and a kernel size of 3, for short; three decoder layers with input and output embedding dimensions of 256. We trained the model until convergence on the dev set with a learning rate of 0.25, Nesterov accelerated gradient (NAG) descent, dropout of 0.2, and a gradient threshold of 0.1. Our second model is a state-of-the-art model for neural story generation (; 2019). It introduces attention between the output from the encoders and the decoder layers, as well as multi-head self-attention on the decoder layers that is gated and equipped with a multi-scale mechanism for down-sampling . Since our sources are three orders of magnitude larger than the writing prompt sources for which STORY has been used, we additionally equip the encoders with gated multi-scale multi-head self-attention. In sum, following FAIRSEQs implementation, our model uses two followed by a single encoder layers with 256 embedding dimensions; four followed by two followed by a single decoder layers with 256 input and 256 output embedding dimensions; four gated self-attention heads both on the encoders and on thhe decoders with projected inputs and down-sampling. We trained the model until convergence on the dev set with a learning rate of 0.25, NAG, dropout of 0.2, and a gradient threshold of 1.0. Finally, training for both FCONV and STORY took usually around 20-30 epochs, depending on the batch size, which is around 30-40. We design transfer learning experiments for the FCONV and the STORY models as follows below by constructing datasets for seq2seq summarization. BASE. The original Science Daily dataset, introduced in Section 3 with a train/dev/test split of 40,247/5,029/5,029. The experiments with this dataset are baselines for our transfer learning experiments. AB. Here, we augmented the Science Daily dataset with the arXiv dataset with specially designed tags, as follows: 1. The source is pre-pended with the tag <begin-paper> and appended with the tags <end-paper> <begin-press> for examples in Science Daily, and similarly we only replace press with abstract for examples in the arXiv dataset. 2. The target is appended with end-press or end-abstract, respectively. Tags are used to indicate the source domain (arXiv or Science Daily) and the target domain (abstract or press release). In order to ensure equal balance between the two datasets, we took 40,000 points from their training sets, 5,000 from their test, and 5,000 from their dev, for a final train/dev/test split of 80,000/10,000/10,000. We hypothesize that the encoder layers and the decoder attention mechanism will focus on these tags while processing the source and while generating the output, respectively. PART. Augmented Science Daily with partitioned targets as follows: 1. For each source-target pair in Science Daily, we preserve the source body body and we divide the target into three equal parts part-1, part-2 and part-3. 2. 
We construct the sources-target pairs as follows: for all bodies body, for indices i equal to 2 or 3, the source is and for i equal to 1, the source is <begin-body> body <end-body> <begin-part-i> where the corresponding target to the source is part-i <end-part-i>. 3. During inference, we generate the parts part-i autoregressively from part-1 to part-3. In this way, instead of training the model to generate the full press release, we train it to generate only specific sections. Namely, we make the assumption that press releases are divided roughly into three equal parts: highlights, body, and , which allows us to co-train with different domains of summarization and thus to transfer signals from one domain to another. Furthermore, in this way we also increase the BASE split threefold, which yields a 120,741/15,087/15,087 train/dev/test split. Ultimately, we convert all textual data to byte pair encoding (BPE) with 32,000 BPE tokens both on the source and on the target side using the guidelines by FAIRSEQ. ROUGE. We begin by evaluating using , following FAIRSEQs convention. During inference we either use beam search with beam size five or top-k random sampling from the top 10 candidates . We generate 400-600 tokens for all datasets except for PART, where we generate 150-300 tokens for each part. Results. Table 1 presents for FCONV and for top-k sampling. AB outperforms BASE and PART by 2.0/0.7/1.7 and 8.4/2.4/7.4 ROUGE points absolute, respectively. We can see in Table 2 similar for FCONV and beam search. AB outperforms BASE and PART by 2.6/0.8/1.6 and 10.7/2.6/9.0 ROUGE points absolute, respectively. Thus, we can conclude that co-training with abstracts improves ROUGE scores noticeably for FCONV. Our RA Evaluation Measure. Standard procedures for evaluating abstractive summarization systems often require extensive human resources, with most papers relying on crowd-sourcing services to obtain adequate amount of data (see Section D in the Appendix for more detail). Because of these limitations, we face the challenge of meaningfully and scientifically evaluating our models without the need for human annotations. We define a conditional model by a probability distributionp(·|·; θ) (i.e., parameterized by θ) that generates autoregressively and from which the perxplexity of a target given a source could be computed. For a source s = (s 1, . . ., s m), we use this model for generation by selecting an output Given that it is unfeasible to traverse the entire solution space {h}, we use a heuristic search procedure B (for example, beam search or top-k random sampling) designed so that Furthermore, we assume that there are ground-truth distribution p s (·) and a joint distribution p h,s (·, ·), where h is a random variable representing the target. Then, we can measure how closep(h|s) is to p h,s (h, s)/p s (s). The assumption of training is that for a source-target pair (s, t) in our dataset the following holds p s,t (s, t) = p(s). Therefore, for our model we want p(t|s) = p(t, s)/p(s) = 1. Hence, while we do not know the true value of p h|s (h|s), we can make the assumption that for t and s, such that s = s, p t|s (t|s) < p t|s (t|s). We seek to test this condition forp in our evaluation. In addition to this relationship, we further evaluate how well the model processes meaning within the source. During encoding and decoding, positional embeddings are added to the token embeddings to give the sequence representations a sense of order. 
Because of this, the model performs computations on a window of words based on both its meaning and location. This is necessary, but we conjecture that a good automatic summarizer should not rely too much on the structure of the source rather than on its meaning when extracting information. A model that pays too much attention to word order will not generalize well to different structured inputs and it will likely generate a poor summary. Note that Science Daily and other real-world datasets or applications do not have sources with a welldefined structure, and thus summarization for these domains should not rely on absolute position. To test the above-mentioned two properties, instead of calculatingp(t|s) for the entire source sequence s, we calculatep(t|r), where r = (r 1, . . ., r 100) is a random 100-word sub-sequence of s. This will ensure empirically that any differences in probability are due to the model's processing of meaning rather than to the sequence structure of the input. Moreover, given that r has 100 words, it is likely that the ground truth relationship p t|r (t|r) > p t|r (t|r) will still hold, where r is a 100-word sub-sequence of s. With this objective, we design our evaluation experiment as follows: 1. Take 1,000 data points from the test set. 2. For each source-target pair (s, t), generate ten points with target t as follows: one with a 100-word fragment (sub-sequence) r of s, and nine with fragments of sources from random sources in the test set (in the case of AB, items of the same source domain). Add the ing pairs to our evaluation dataset. For PART take only fragments coming from body. 3. For each group of ten consecutive points in our evaluation set, input the sources into the trained modelp and calculate the probability of the common target sequence for each. 4. Report the percentage of groups where the true source yields the highest probability. We call this evaluation measure random access (RA). Note that RA is conceptually similar to the prompt ranking procedure in in terms of calculating scores, but importantly it is different in terms of the random access property that we require. We conjecture that random access is important to test summarization systems because the sources are orders of magnitude larger than the writing prompts used for neural story generation. Below, we show quantitatively and qualitatively that RA measures the contributions of the experiments AB and PART. Results. Table 5 presents for FCONV. Both AB and PART outperform BASE significantly by 39.8 and 39.1 RA points absolute, respectively. Table 6 presents for STORY. Both AB and PART outperform BASE significantly by 42.6 and 51.1 RA points absolute, respectively. In general, RA is in agreement with the ROUGE scores, but it is more sensitive to AB and PART. We conjecture that RA could be a good measure for the generalizability of transfer learning experiments in summarization. In this section, we focus qualitatively on the advantages and the limitations of our experiments of using transfer learning for summarization. Apart from clearly improving ROUGE and RA, AB and PART provide the following: topical and factual generation; memorization and utilization of scientific concepts other than the current source; semantic and syntactic structure (largely due to the self-attention mechanism) that could serve as a convincing press release. We discuss this in the details below. 
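The four-step RA procedure above can be summarized in a short sketch (our own; model_logprob is a hypothetical callable standing in for the trained seq2seq model's conditional log-likelihood, and the tokenized test pairs are assumed to be available in memory). The qualitative analysis promised above follows right after this sketch.

```python
import random

def random_access_score(model_logprob, test_pairs, n_groups=1000,
                        frag_len=100, group_size=10, seed=0):
    """Sketch of the RA evaluation measure.

    model_logprob(fragment_tokens, target_tokens) -> log p(target | fragment)
    under the trained model; test_pairs is a list of (source_tokens,
    target_tokens). For each sampled pair, the true target is scored against
    a random 100-word fragment of its own source and against fragments of
    (group_size - 1) other sources; we report the fraction of groups in
    which the true source's fragment receives the highest score.
    """
    rng = random.Random(seed)

    def fragment(tokens):
        start = rng.randrange(max(1, len(tokens) - frag_len))
        return tokens[start:start + frag_len]

    wins = 0
    groups = rng.sample(test_pairs, min(n_groups, len(test_pairs)))
    for src, tgt in groups:
        others = [s for s, _ in rng.sample(test_pairs, group_size) if s is not src]
        candidates = [fragment(src)] + [fragment(s) for s in others[:group_size - 1]]
        scores = [model_logprob(c, tgt) for c in candidates]
        wins += int(scores.index(max(scores)) == 0)
    return wins / len(groups)
```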
We find that training using self-attention models on BASE yields irrelevant summaries with logical structure, whereas FCONV and STORY on AB do exactly the opposite. Additionally, generation from FCONV on AB. The generations exhibit high extractive ability, with the model being able to correctly pick authors and keywords from the source (see tables 7 and 8 in the Appendix for more details). Upon speculation, the samples from PART generated by STORY are able to extract relevant information, albeit sometime they fail to present it accurately (see tables 10 and 11 in the Appendix for more details). We additionally find that when training STORY on AB, our generations are able to memorize and use scientific concepts. The generations write with conceptual and logical accuracy, while focusing on specific information relevant to the source paper. Beam search generation is particularly good. It demonstrates structured and concise writing with sections that are both relevant and conceptually accurate. For example, generations mention that x-ray crystallography was used to determine the three-dimensional structure of the proteins. The target article mentions this was done by the study's authors in a previous work, but this technique is not mentioned in the source, which is all the model sees (see table 7 for more details). This demonstrates a very important and promising phenomenon; similar to where unsupervised word embeddings captured information about materials, the model learns representations of key concepts such as what x-ray crystallography is, and is applying this knowledge accurately at generation time. Limitations of Transfer Learning (AB and PART). In many cases, the output of AB and PART is repetitive, not being able to match named entities, diverging from the topic, and is limited in the sense that it only has a direct access to a single scientific paper. We discuss this in more detail below. There are not too many differences between the summaries generated by FCONV on AB and BASE, with both sharing a common problem: although the main ideas are covered in the generated output, both struggle with logic and factual accuracy. This is especially noticeable in the top-k random sampling generation, which is much less concise and coherent compared to the beam search generation. 1 We find that STORY often over-fits to a set of concepts, and then creates a story around those concepts rather than based on the input sequence. For example, a source paper about the structural similarities of DNA in archaea and eukaryotes might not be accurately summarized by STORY models: the models may elaborate on topics separate from the target topic, even though still focusing on DNA. Despite lacking topicality, generations do exhibit some conceptual understanding and logical coherence (see table 7 for more details). Sometimes, we observe that generations that are topical, but fail to capture external information, such as the fact that there are concerns about the conducted research in the source. Information of such kind, involving external sources, cannot be captured by a seq2seq model which only performs inference from a scientific paper as a source, which is a limitation of our Science Daily dataset and the seq2seq models (for example, table 9 in the Appendix). Although the above-mentioned trends are not perfect and are far from proposing a convincing solution to ASJ, when qualitatively and quantitatively compared to models trained solely on BASE, transfer learning improves the factuality of the models. 
We hypothesize that due to the high correlation between the language in the source and what is in the target for the abstract dataset (as in AB) or the target itself (as in PART), co-training helps the model focus on presenting correct and relevant information from the source. Such language-correlation is low for press releases, hence SD benefits from an MTL setting. In this work, we proposed a novel application for seq2seq modelling (ASJ), presented a highly abstractive dataset for summarization with long sources and targets (SD), proposed MTL as a means of improving factuality (AB and PART), and proposed a novel factuality-based evaluation (RA). Our transfer learning approach and our random access evaluation measure are in principle domainagnostic and hence are applicable and could be potentially useful for a variety of summarizationrelated seq2seq tasks. Our experimental have demonstrated that MTL via special tagging for seq2seq models is a helpful step for summarization. In future work, we plan to address the limitations of the current state of AB and PART by equipping our models with pre-trained representations on large corpora, e.g., from the Web, and to use these pre-trained models as knowledge bases , thus expanding our transfer learning objectives for better factual seq2seq summarization. Ilya Sutskever, Oriol Vinayals, and Quoc V Le. Sequence to sequence learning with neural networks. In Generations with FCONV and STORY on BASE and AB Target (truncated): the colorado state university researcher studies how these hardy microbes -which constitute one of three surviving domains of life -express their genes, produce their energy, and thrive in hot, lightless environments. it turns out, we're not so different -biochemically, anyway -from archaea after all. santangelo, associate professor in the department of biochemistry and molecular biology, was on a team that found striking parallels between how archaeal cells and more complex cells, including humans' and animals', package and store their genetic material. the breakthrough study, published in the study was led by karolin luger, now a structural biologist at the university of colorado boulder. most of the reported in science were completed while luger was a csu faculty member, from 1999 to 2015. a little high school biology review: eukaryotes are cells with a nucleus and membrane-bound organelles, and they include fungal, plant and animal -including human -cells. they're set apart from their less complex counterparts, prokaryotes, by the absence of a nucleus. while archaea and bacteria are both prokaryotes, they are only distantly related. archaea are the likely progenitors of eukaryotes and share many of the same proteins that control gene expression. one of life's most fundamental processes -the mechanics by which dna bends, folds and crams itself into a cell nucleus -is common across all eukaryotes, from microscopic protists to plants to humans. packed inside the nucleus of every eukaryotic cell is several feet of genetic material that is compacted in a very specific way. small sections of dna are wrapped, like thread around a spool, roughly two times around eight small proteins called histones. this entire dna-histone complex is called a nucleosome, and a string of compacted nucleosomes is called chromatin. in 1997, luger and colleagues first reported the exact structure of eukaryotic nucleosomes via x-ray crystallography. 
science paper collaborator john reeve had discovered in the 1990s that histone proteins were not limited to eukaryotes, but were also found in nucleus-free archaea cells. reeves and luger began a collaboration to crystallize histone-based archaeal chromatin and compare that structure with eukaryotic chromatin. after years of stops and starts and trouble growing reliable archaeal histone crystals -luger called it a " gnarly crystallographic problem " -the scientists succeeded in resolving the structure of archaeal chromatin, revealing its structural similarity to eukaryotes. in the data, the archaeal dna seemed to form long, curvy, repeating superhelices. the researchers were n't sure if the structure was real, or an artifact of the experiment. that's where santagelo's team at csu provided key expertise. " my group took up the challenge of determining whether the structure resolved in the crystals represented a biologically meaningful structure, " he said. santangelo's team made variants of the archaeal histones and tested how the cells fared, as they disrupted the dna superhelix.... STORY, AB, top-k sampling: the structure, which is understood as a helix structure, represents a key step in the process of creating dna structures with different sequence sequences, and is essential for all biological processes that can be controlled by their genetic makeup. the structure is called nucleosome assembly. the assembly is composed of two domains: four double helix and three double helix. the dna is the nucleus where dna is wrapped and the dna strand is broken. the chromosome is copied into four distinct strands. dna is called histones, which are the building blocks of dna and are then divided into two identical strands. one strand of the dna was broken into six separate histone proteins, and the other of the two double helix in the form of dna. the histones are arranged in the middle of the nucleosome. histones are the most important structures in our cells, and in most cancers a chromosome is formed. the histone is responsible for the formation of proteins. the research team, led by professor dr thomas edron, from the university of colorado, has now discovered how this is achieved and how the histone can be modified. the team used x-ray crystallography to obtain a 3d structure of the histone that had been stripped from dna, and that is found to be essential in the development of diseases such as cancer. this work is published in the journal professor edrin said: " our work shows how the structure can be used as a tool to design new drugs to fight cancer. it is also exciting to note that the structure of a dna assembly is not fully understood and must also be formed by many different proteins. this can be accomplished by making a protein in a specific pattern. the is the dna that is not formed by the dna. it was previously thought that the dna would be formed through a double helix by a long single-stranded dna strand. " the scientists discovered that the extra dna is broken down in a short sequence of four double helices, which in turn form a " double helix ". professor ekaterina luger, who led the study, said:' the structure was so interesting that it was so simple. it could be used to control a cancer cell's development to cancer cells', prof edrat said. the next step for the research will be to find out how the different histone structures affect the stability and function of the cell's architecture.' 
<end-press> Notes: Model focuses on the technical subpoint of the nucleosome rather than the motivation of studying archea. Article follows chronological structure, and attempts to outline a process through which dna is created. Brings up cancer several times which is not mentioned once in the source nor the target. Writes that "extra dna is broken down in a short sequence of four double helices, which in turn form a " double helix. " This is not entirely accurate, however the source mentions "nucleosome consisting of four distinct histones". The number four is mentioned several other times in the source. STORY, AB, beam search: now, a team of researchers from the university of colorado, ohio state university and colorado state university have been able to show that the structure of the nucleosome, which is called nucleosome remodeling, plays a key role in the development of multicellular organisms. the study, published in the journal " we have shown that the nucleosome is formed in the nucleus of a living organism, and that this is the first step in the assembly of the dna complex, " said pamela marino, an associate professor in ohio state's department of chemistry and biochemistry and a senior author of the study. " we now have a better understanding of how the nucleosome works and how it works. " histone proteins are the building blocks of proteins, and they are involved in a variety of biological functions, marino said. histones are the amino acids that make up the cell's dna. when dna is copied, the proteins are copied to form proteins, which are called histones. histones have been studied for more than a decade, but until now it has been difficult to determine how these histone proteins assemble and how histones are assembled in the cell. marino and marino's work was supported by the howard hughes medical institute, the national institutes of health, and the u.s. department of energy. the research was funded by the national science foundation. the researchers used a technique called x-ray crystallography, which allows scientists to determine the atomic structure of proteins. the technique was able to take a closer look at the structure and function of histones, but the technique could also be used to study other proteins in the same way that dna is wrapped around the dna ladder. " this is a great example of the power of research in the field, " marino added. " it's not just a matter of time before, but it's a big step forward. " the work was done in collaboration with a team led by marino who is now a postdoctoral researcher at ohio state. other co-authors of the paper are postdoctoral researcher zachary sandman, a former ohio state graduate student in biochemistry and molecular biology and a member of the marino lab. <end-press> Notes: Writing is more fluent and concise than the story generation. Standard topic sequence; less technical. More quotes from author throughout entire generation. Portion highlighted in green seems more or less factually accurate. Follows coherent structure: starst with findings, then , ttechniques, and finally a . FCONV, AB, top-k sampling: " we know that the dna's structure is made by archaea. " there are a lot of evidence that we can see how it can cause cell death. " the new study, published in the journal the new dna was discovered by the lab of the cu institute of technology and technology. 
" we've shown that these cells have a " molecular sequence " that is called " the dna that we've seen in a new study in this study, " says study co-author thomas santangelo, a professor of molecular biosciences. " this research is a major advance in understanding how a long strand of chromatin can then attach to dna in a " string. " " this is the first time that dna is a type of protein called a " junk " dna. the scientists also found that the proteins that are supposed to be present in all other archaea. the study was published online april 14 in the current issue of the journal " it is a very small, simple, non-functional. " he said that these findings suggest that archaea are very similar to those in eukaryotic cells. " but you can now have been studying the molecular structure of the nucleosomes. " the archaeal genome, we have been able to sequence the structure, but we did n't know what we had seen in this process. " if you do n't want to do so, " luger says. " you have a single cell, " she said. " now, it's been very difficult to study how archaea are present in the nucleosome. " it's a simple example of how the cells are packed with two, but not all of the dna strands that are found in the nucleus. the new research shows that archaeal cells have been the first to see how much of the cells had not been in the genome. but the archaeal histone-based chromatin that had been found to be very similar, but it was possible to find that the two types of archaeal molecules have been known to be important because they are more closely related to their dna, " said luger, who is also an investigator of the university of colorado boulder, the university and the university, which was funded by the howard hughes medical institute, and the howard l., mary e. reeve and karolin luger. the first study was made from the lab and now have a method to find new ways to treat the disease, " he says. the " we are now looking for a new mechanism for this process, " lu says. a second finding is that the first thing we know about the proteins -the proteins they have to make, " dunman said. luger said the study also found a way to determine the dna structure, the researchers found that archaeal histone-based dna was a major feature of the genome, " dr. santangeland said. <end-press> Notes: Topic sentence is not concise or coherenet, but mentions key details like archea, the CU institute of technologu, dna, and author thomas santangelo. No strong structural underpinning; ideas and quotes are presented in more or less chaotic order and are difficult to follow. FCONV, AB, beam search: the study, published in the journal " it's the first time we have been able to understand the structure of the dna, " said study co-author thomas santangelo, phd, professor of chemistry and biochemistry at the university of colorado at boulder. " this is the first example of a new type of cell, " says luger, who is also a howard hughes medical institute investigator and senior author of the study. " we've shown that it is possible to determine the structure and function of the cell's dna. the researchers found that the archaeal dna in the nucleus of the nucleus. " if you do n't know what they are doing, " luger says. " it is very important to understand how the dna is wrapped around the nucleus, " he said. " what we've found is that it has been a mystery. 
" the study was published online in the june issue of the journal the new study, which was funded by the national institutes of health -lrb-hhmi -rrb-, a postdoctoral fellow in luger's lab at colorado state university, who led the research. " now we know that some of these proteins have been found in many archaea, but it is not the case, " she said. he said the study's first author and a postdoctoral researcher in the luger lab's lab. " but it's not the first thing that we have found that it was n't possible to find a way to make them. " " we have shown that archaeal cells in the cell nucleus, which is a key component of the nucleosome. " there is a lot of evidence that the cells are organized in a cell's nucleus, the researchers say. they found that they were able to sequence the dna of a cell. " in fact, it is n't clear that it could be possible to control the formation of the cells. " i think this is a very important step toward understanding how many of the proteins in the genome. " the first step is to find out how the proteins are wrapped around a string of proteins. " these proteins are very similar to those found in archaea. " they are not able to do so. " he and his colleagues are now working to understand what happens to the cell. " in addition to luger and colleagues at the cu institute for genome architecture and function at the molecular level. " <end-press> Notes: There are many non-sentences (eg. "we have shown that archaeal cells in the cell nucleus, which is a key component of the nucleosome"). Identifies key words and phrases, but does not present them logically or in any coherent structure. STORY, BASE, beam search: now, researchers at the university of california, san diego school of medicine have discovered a new type of dna repair system that can break down the dna double helix. the study, published in the journal " we've known for a long time that dna damage is repaired, but we've never been able to figure out how to fix dna damage in a way that is not possible before, " said study senior author and uc san diego biochemistry professor richard a. wilson. " but what we've found is that it's not enough to repair dna damage. " dna repair is the process by which the body's own dna repair machinery is damaged and repaired. one of the most important processes in the genome is dna repair, which is a process that occurs in every cell in the body. the repair process is called repair, and the repair is repaired. the dna repair process occurs when the dna is damaged or damaged by a chemical reaction called dna repair. dna damage occurs when a dna repair enzyme is damaged by the chemical reaction, or damage. in the new study, wilson and his colleagues used a technique called x-ray crystallography to determine the three-dimensional structure of the damaged dna. " we were able to show that the damage caused by dna damage can be repaired, " wilson said. " this is the first time we've seen a complete picture of the repair mechanism. " the study was funded by the national institutes of health, the national science foundation and the national institute of general medical sciences. the research was supported by the department of energy's office of science, and by the california institute for regenerative medicine -lrb-cirm -rrb-, the american cancer society and the howard hughes medical institute. the work was partially supported by a grant from the national health and medical research council of australia. 
other authors of the paper include jinghui wang, a former graduate student in wilson's lab who is now a postdoctoral researcher at harvard medical school and now a post-doctoral researcher at the harvard john a. paulson school of engineering and applied sciences at seas Notes: Incorrectly focuses on dna damage and author richard wilson, neither of which are mentioned in the source or the target. Highly repetitive (keeps describing what dna repair is). Roughly the last third of the article is dedicated towards funding and other contributors. FCONV, BASE, beam search: the study, published in the journal the researchers found that archaea, which are thought to be a common ancestor with the eukaryotic cells, " says senior study author and a professor of chemistry and biochemistry at the university of colorado boulder. the research was supported by the howard hughes medical institute -lrb-hhmi -rrb-investigator. " the discovery of a new type of histone protein, which is known as chromatin. the researchers used a dna sequence to which the cell divides, " said luger, who is also a professor in the department of biochemistry and molecular biology at the cu school of medicine. " this is the first time anyone has been able to find a way to understand how genes are regulated, " she said. " it's a surprise to me, " he said. the study was published online in the scientific journal " the question of how the archaeal chromatin structure is very similar to the assembly of the dna strands. the archaea, the researchers were able to determine the structure of the cell's dna. " we found that the archaeal histone protein is a very similar pattern. " the research team, led by the national institutes of health, the national institute for genome architecture and function. " what we found is that it is possible to understand the origins of the nucleosome. " in the new study, the scientists used a technique called " dna, " the researchers say. " our research shows that archaea are the most common ancestor of the human genome. the team's findings are published in this week's issue of the journal " this discovery is a significant step forward in understanding the molecular mechanisms of gene regulation. the new research, published online ahead of print in the octamer, which was the first author of the study. " in the past, we have a lot of questions about how the cell is able to bind to the dna of the archaeal histones, which can be used to treat the disease. " this research was funded by a grant from the national science foundation and the national cancer institute. the work was supported in part by the hhmi medical institute. " there is no evidence for the evolution of eukaryotic chromatin structures. the scientists found that archaeal histone molecules are the same, but it's not the case for cancer. " " the idea is that the histone molecules that make up the cell, " lugy says. " if we can figure out what's going on.' Notes: Topic sentence is not logically sound, but correctly identifies the subject archea and refers to the correct university. It also extracts relevant concepts like histone proteins, chromatin structure and nucleosome. Many filler sentences and phrases that are common to press releases but not factually accurate. Technique is not properly identified (simply referred to as "dna"). Generations with FCONV on AB Target (truncated): the researchers at the national autonomous university of mexico looked at levels of fat, called triglycerides, in the rats' blood. 
they found that after feeding the rats fat at the beginning of their rest period, their blood fat levels spiked more drastically than when fed during the beginning of their active phase. when they removed the part of the rat's brain that controls the 24-hour cycle, there was no longer a change in fat levels. high blood fat levels are associated with heart disease and diabetes. these diseases are associated with a lifestyle where humans ignore the signals of the biological clock, and eat in the evening and night. this study demonstrates why such a lifestyle out of sync with our 24-hour cycle may in high blood fat levels and thus in a higher risk for heart problems. commenting on the study, author ruud buijs said:' the fact that we can ignore our biological clock is important for survival; we can decide to sleep during the day when we are extremely tired or we run away from danger at night. however, doing this frequently -with shift work, jet lag, or staying up late at night -will harm our health in the long-term especially when we eat at times when we should sleep.' <end-press> top-k sampling the team found that the scn has an increase in glucose levels and in the normal levels of fat, " said ruudbuijs. " the circadian clock is a major regulator of the function of the clock, which causes the body to be able to produce the normal level of triglycerides that cause a liver in the body, and that they are not only able to eat the cells that do n't do the job, " says professor ruudr buijs, a professor in the department of cell biology and physiology at the school of medicine in the school for biomedical research at universidad nacional autonoma de la cruz in the united states. " in this study, we are now working towards developing new therapies for the future of the human body. the findings also showed that when the activity of the other, " explains ruudbugs. the research team has studied how the mice that make it more difficult to get the right place, " buij said. " if you have the same clock, " explained joslr, phd, a researcher at the institute for biomedical investigation, the study shows that the clock's day night variations in levels of the normal cells, but in the past, we have a new understanding of the role of this protein that is not a cause, " added buiain. " they are important for the study of other factors and other factors in the brain, " dr buieves said. the study, published in the " this is the first study to investigate whether there are a lot of different types of animals, including humans, in which the cells are able to function properly by the body clock, it is not only that the researchers have found that if the day night varies in the future. " the study was funded by nasa's <end-press> the study suggests that when we do not have this function, " he said. in the second experiment, the researchers were able to investigate the role in the clock. " <end-press> that has not yet been established. the suggest that a very low uptake of fat is the key in a day, and the other is not sufficient for their function in the liver and in other tissues. " " what we found is that we can also see how the brain is able to be responsible for the increased uptake of blood and then a " normal " state that is the major cause of death worldwide. the researchers found that when a person is awake to the normal state of the body. " to further investigate the mechanism of the master clock. 
these animals were also able to measure the activity in the mice, and in mice, the cells also also play a role in their ability to keep the body heat, " adds buiks. in these experiments, the team used a mouse model to study how they interact with the body -and thus, the animals were able, and their work is now a major step towards this goal. <end-press> beam search: the study, published in the journal " this is the first study to investigate the mechanisms behind day night variations in post-meal glucose levels, " said ruud buijs, phd, professor of cell biology and physiology at the universidad nacional autonoma de mexico, mexico city, mexico. the study was published online in the " our study shows that the suprachiasmatic nucleus -lrb-scn -rrb-, which is a key factor in the development of heart disease, " says ruudbuijman, an associate professor of anatomy at the institute for biomedical research at universidad nacional autnoma de mxico, a professor in the department of anatomy, faculty of medicine at the university of california, san diego. " we found that the day night changes in the body. " the researchers found that when the mice were awake, the researchers were able to show that the scn has a major role in day night variation in blood glucose levels in the brain. the researchers also found that in the normal mice, they found that they were not able to do so, " buij said. " it is important to note that this is a major cause of death worldwide. " our findings suggest that the circadian clock may be an important risk factor contributing to the risk of cardiovascular disease, and postprandial triglycerides -lrb-tgs -rrb-. " the of this study are the first to investigate whether the scn is responsible for the time of the day and night. in the future, the research team has shown that the activity of these genes is important for food and circadian regulation of the body's ability to function properly. " " we have shown that it is not a good indicator for the risk for cardiovascular disease. " this study is a first step towards understanding the mechanisms underlying the mechanism of the scn, " explains buirish. " these findings highlight the importance of the clock, " he says. " if we do n't know how much it is. " this study was supported by a grant from the american heart pan x & hussain, who is also a member of the national institutes of health and the national institute of general medical sciences. <end-press> Table 8: Model gets one of the authors ruud buijs correct, whereas models trained solely on the SD dataset would generate completely irrelevant authors, often from the university of exeter. Both generations mention key words such as scn, day night variations, triglycerides, and circadian, suggesting that the model has accurately identified the subject matter of the text. In highlighted portions, the beam search generation seems to most accurately identify the topic, however incorrectly writes glucose levels instead of fat levels. Top-k sampling is not as concise or accurate, but includes more key words and ideas (eg. the circadian clock, triglycerides, normal levels of fat). STORY on PART. part-1 generation Target (truncated): a project funded by a research agency of the us department of defense is now giving rise to concerns about being possibly misused for the purpose of biological warfare. the programme called' insect allies' intends for insects to be used for dispersing genetically modified viruses to agricultural plants in fields. 
these viruses would be engineered so they can alter the chromosomes of plants through' genome editing'. this would allow for genetic modifications to be implemented quickly and at a large scale on crops that are already growing in fields, such as corn. in the journal it is argued by the programs funders, that genome editing using synthetic viruses will open up unprecedented possibilities for changing the properties of crop plants already growing in fields. plants could, for example, be genetically altered to nearly instantly become less susceptible to pests or droughts. until now, genetic engineering of commercial seeds always occurred in laboratories. with farmers planting seeds, needing to anticipate what environmental conditions will likely arise during a growing season. this means that, in the case of an unexpected drought, only farmers who had already planted drought-resistant seeds would gain a benefit. however, the originators of this project claim that genetic engineering in fields would offer farmers the possibility to alter the genetic properties of their crops at any time. use of this technology would represent a radical break with many existing farming practices, potentially jeopardizing their coexistence. at the end of 2016, darpa -lrb-defense advanced research projects agency -rrb-put out a call for tenders for a 4-year research work plan. this program has distributed a total of 27 million us dollars, aiming to develop genetically modified viruses that can genetically edit crops in fields. the first of three consortia, drawn from 14 american research facilities, announced their participation in mid-2017. <end-part-1> beam search: in a new study published in the journal " this is the first time we've been able to show that it is possible to control the spread of the virus in the environment, " says the study's lead author, dr. jrg schmiedmayer from the max planck institute for evolutionary biology in germany. " in this way, we can control the transmission of a virus in a way that makes it more difficult for a pathogen to spread. " the researchers developed a method that allows them to identify the genes responsible for the dispersal of the mosquitoes. the method is based on the fact that the insects are genetically modified so that they can be passed on to the next generation. in the case of the insect, the researchers used the method to introduce genetic mutations into the vector. the researchers then used the new method to produce a set of genetically modified mosquitoes that were genetically modified to be able to infect the host plant. the ing transgenic transgenic plants were then introduced into the environment in which they were introduced to the environment. <end-part-1> top-k sampling: " we are trying to understand the biological basis of the insect's genome and how it is able to control its spread over time, " said professor jon d'aquila of bayer's institute of evolutionary biology and natural resources. " in a sense, we need to find ways to control the spread of pests to crops and to improve the sustainability of these technologies. " a number of studies have shown that the insect can be used to breed crops with transgenic crops. these plants are used to control crop pests. the researchers were able to develop the new biomethods using a genetic approach for pest development, a method that can be applied to a number and quality of crop crop crops. <end-part-1>, the researchers used the new method to study the genetic effects of genetic resistance. 
" it is a simple approach to study wild weed pests, but the new system is very important, " explained the researcher. " by studying a transgenic crop, we found that the pest is transmitted to the soil in a way that makes it difficult to control pest spread. " the researchers also found that in their current model, the transgenic plants can not only produce transgenic plants, but also produce their own genetic material. " the transgenic flies are capable of producing seeds that are not available, " d'azastra-carval-cornell, which is a staple crop for the agriculture industry, is a good model for understanding the spread and spread of diseases such as smallpox, aphids, and rice crops. " our model is a useful tool for investigating the spread process in crops, and that could help us to understand why these diseases are caused in a particular way, " noted d'aglia. <end-part-1> openels-casanova, the first author of the study, added: " the wild pest is a huge crop that is a great pest worldwide. <end-part-1> herreragon, a member of the institute for evolutionary biology at the max planck institute for evolution in mainz, is familiar with the idea that the plants can survive in Table 9: Both generations seem to stay on topic. However, the source is insufficient to convery that the study is "giving rise to concerns about being possibly misused for the purpose of biological warfare. ", because this is not information that could be found in the scientific paper. STORY on PART. part-2 generation Target: that is, until it is possible to improve the modelling of their dynamics. the challenge of such models is that the volcanic eruption dynamics are very complex, involving simultaneous unrelated processes and offering a variety of possible scenarios. the authors built on a previous study demonstrating the influence of noise in triggering eruptions. namely, they assumed that, under complex friction forces, the volcano plug and conduit exhibit a previously identified mechanism, called stick-slip behaviour, which causes the volcanic plug to rise and fall in an attenuated manner. they then studied the influence of random disturbances on these dynamics. <end-part-2> beam search: in this study, the researchers used a mathematical model to simulate the dynamics of the friction between the two liquids. they found that the movement of the viscous fluid causes the motion of the fluid to change. in the case of the motion, the friction of the liquid causes the friction to decrease. this leads to the formation of the frictional motion, which can be explained by the interaction between the friction and the friction. the researchers show that the motion is not only dependent on the motion but also on the behavior of the granular fluid. the mechanism of friction is not yet fully understood, but it is also important to understand how the motion can be affected by the motion. the research was carried out in collaboration with the department of mathematical physics at the university of bayreuth in germany, and the technical university of dresden in italy. the of the study have been published in the scientific journal the study was funded by the natural sciences and engineering research council of canada, the fonds de recherche du qubec and the cole normale suprieure in france. <end-part-2> top-k sampling: in this case, the team of scientists studied the movement of the tendril and the viscous fluid. this phenomenon is a characteristic of the granular phase in which the friction and pressure interact with the friction. 
this is known as " granular phase. " in the simulation model, the movement between the rock and the friction force is measured by a combination of a high field field and the experimental measurements. " the movement in the second step is completely random. we are trying to explain the behaviour of the motion of the rock on the ground, " said dr. peter schll, the first author of the publication. the model was used to determine the speed and intensity of the movement. the simulation showed that the granular behavior is always random, and can only be measured with a precision of about a thousandth of a millimetre. <end-part-2> with the new model, this model can be used to predict the behaviour. <end-part-2> opener and his colleague dr. thomas kll from the university of freiburg have now developed a model that describes the granular state in a very simple way. <end-part-2> sional avalanches of the viscous fluids are the most common example of the model. <end-part-2> hoc and his colleagues also show that when a solid is in the direction it is pulled away from its original position, the sliding force is completely proportional to the velocity of the drunken tide. <end-part-2> no. 1 is the third phase of the process. <end-part-2> dressels, a scientist at the international research council, is not convinced that this mechanism is likely to be valid in other fields. <end-part-2> kamatz and his fellow researchers were able to use mathematical equations to describe the behavior of the torquato, an element in which a solid has an external force. <end-part-2> scaling the experimental model to measure the motion in a specific way. the scientists observed a large amount of friction between the friction coefficient and the sliding velocity of a tendrip, as a . <end-part-2> ke-all describes the motion observed in the experiment. <end-part-2> Table 10: Both generations talk about friction but at many points do not follow a logical thread. STORY on PART. part-3 generation Target: infecting mouse fibroblasts with t. gondii, the team used genetic techniques to remove, and restore, calpain activity. they found that in the absence of calpain, parasites could not escape the infected cell, just as they had observed for malaria parasites. over the past 40 years, malaria has become increasingly resistant to drugs that once controlled this devastating disease, leading to an alarming increase in deaths. targeting host proteins rather than the parasite itself might give the parasite less scope to develop resistance, since the parasite does n't have genetic control over host proteins. greenbaum plans to continue to explore the viability of calpain as a drug target for antiparasitic drugs. this work was funded by the ellison medical foundation, national institute for allergy and infectious diseases, the ritter foundation, and the penn genome frontiers institute, and the penn institute for translational medicine and therapeutics. <end-part-3> beam search: " this is the first time we've been able to show that the parasite's ability to invade host cell membranes has been shown to be critical for the host cell's survival and invasion, " said tsri research associate and first author of the study, who was the first to demonstrate that this parasite has a role in the host's immune response. " we've shown that it's possible to block this parasite's defense against the parasite. " tsri scientists are now working to find out more about how the parasite works and how it works. 
tsri researchers are currently working on developing a vaccine that could be used to treat other types of malaria, such as malaria. the research was supported by the national institutes of health, the burroughs wellcome fund, and the national institute of general medical sciences. additional support came from the wistar institute, the university of pennsylvania cancer research fund, the national center for research resources, the cancer prevention and research institute of texas, the american cancer society, the howard hughes medical institute -lrb-hhmi -rrband penn's perelman school of medicine. <end-part3> top-k sampling: this study was supported by an national institutes of health grant to the university of pennsylvania. <end-part3>, an aggressive malaria parasite that can invade the human body, is thought to have a fatal effect in other parasitic parasites. <end-part3> sellin is a co-author of this study. co-authors on the study are: jingyana zabdeld¡unk¿, md, ph.d., of the perelman school of medicine at the university, and matthew s. kornanwi, and jingjie ma, m.d., of university of california, riverside. <end-part3> chapou is a professor at the department of medicine and molecular medicine and the department at the icahn school of dentistry at mount sinai and the university's division of infectious diseases. <end-part3> tez is an emeritus member of the department and the institute of medicine, school of pharmaceutical sciences, university of michigan. <end-part3> king has been a major leader in research and treatment of malaria, where it is a common problem. the work of the university is a unique example of collaboration between a team led by the university and the national center for biotechnology resources at the national institutes for health in bethesda, maryland, and was funded by the national institute of biomedical imaging and bioengineering in part at the u.s. department of defense -lrb-nih -rrb-, and the bill and melinda gates foundation. <end-part-3> thers and his collaborators are currently conducting research on the parasite-host interaction of the parasite and the parasite. guerra is an expert in parasitology and the the molecular pathology institute at the school of pharmacy and molecular therapeutics. the research was supported in part by the nih director's research program, the department through the national cancer institute, the human frontier science program of the national science foundation, the national basic research program of china and the u.s. government's cooperative research focus program at the nih. <end-part-3> chitiza was also supported by the european research council. <end-part-3> keen is the recipient of an american association of the american college of veterinary medicine, which seeks to harness and conserve basic research and scientific partnerships with The ROUGE metrics are a class of automatic evaluation methods that compare an automatically produced summary to a reference human-produced summary, and generate scores based on the degree of word and phrase overlap. The two ROUGE metrics we use in this paper are ROUGE-N and ROUGE-L. We will describe each in detail here: ROUGE-N. This variant measures the overlap of N -grams between the system and reference summaries. This is computed as follows: where n is the length of gram n and Count match (gram n) is the number of exact occurrences of gram n in the generated summary. It is important to note that ROUGE-N is a recall-based measure. 
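In symbols, assuming the standard recall-oriented ROUGE-N definition that the description above follows, the score can be written as:

$$\text{ROUGE-N} = \frac{\sum_{S \in \{\text{Reference summaries}\}} \; \sum_{gram_n \in S} \text{Count}_{match}(gram_n)}{\sum_{S \in \{\text{Reference summaries}\}} \; \sum_{gram_n \in S} \text{Count}(gram_n)}$$

The denominator counts all n-grams in the reference summaries, which is why the discussion below treats ROUGE-N as a recall-based measure.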
That is, since the denominator contains the total number of a given n-gram, this metric measures how well the model can generate all of the n-grams in the target summary. This contrasts with accuracy-based metrics, which seek to evaluate not whether the model can generate all the information in the target, but rather whether the generated information is correct. A complete evaluation metric should consider both accuracy and recall. ROUGE-L: Longest Common Subsequence. Another ROUGE metric is based on measuring the longest common subsequence (LCS) between the generated and reference summaries. Note that this is different from having a common n-gram, as a common subsequence only requires the words to be in the correct order, not consecutive. For a reference summary of u sentences containing a total of m words and a generated summary of v sentences containing a total of n words, the LCS-based F-measure can be computed as R_lcs = (Σ_{i=1..u} LCS_∪(r_i, C)) / m, P_lcs = (Σ_{i=1..u} LCS_∪(r_i, C)) / n, and F_lcs = ((1 + β²) R_lcs P_lcs) / (R_lcs + β² P_lcs), where β controls the relative weight of recall and precision, and LCS_∪(r_i, C) is the union of the common subsequences between r_i and every sentence in the generated summary C. For example, if r_i = {w_1 w_2 w_3 w_4 w_5} and C = ({w_3 w_5 w_7}, {w_2 w_4 w_6 w_8}), then LCS_∪(r_i, C) = {w_3 w_5} ∪ {w_2 w_4} = {w_2 w_3 w_4 w_5}.
C FURTHER DETAILS ABOUT THE DATA
Figure 3: Distribution of the length of the articles that we have collected for selected publishers. We ignore the tail of articles below 1,000 words.
For summarization evaluation, methods fall into two categories: manual and automatic. One of the most popular manual methods for evaluating summaries is the Pyramid method. In this procedure, for each source within the evaluation set, reference summaries are written and subject content units (SCUs) are manually extracted and merged, with each one assigned a weight. The system's score is the average of the pyramid scores of its generated summaries, where the pyramid score of a summary is calculated as the normalized sum of the weights of the SCUs it contains. Another technique is called the responsiveness method, where human annotators read both the source text and the system summary and assign a subjective score on a Likert scale. This method might provide meaningful results if implemented via crowdsourcing, but given the lengths of our sequences, it is likely to be infeasible for our task. Positional Embeddings. For an input sequence (x_1, x_2, ..., x_m), each word is embedded in a vector space as a vector w_i ∈ R^512. These embeddings are not fixed and are trainable parameters in our model. Furthermore, since convolutional neural networks (CNNs), unlike RNNs, have no inherent sense of order, we add to each word embedding what is called a positional embedding (p_1, ..., p_m) to obtain a final input representation of (w_1 + p_1, ..., w_m + p_m). Positional embeddings are also added to the output tokens generated by the decoder. As with the word embeddings, the p_i vectors are learned during training. Convolutional Structure. The encoder and the decoder of the convolutional seq2seq model are composed of a series of convolutional layers. The l-th layer (or block) outputs z^l = (z^l_1, ..., z^l_m) for the encoder and h^l = (h^l_1, ..., h^l_n) for the decoder, where n is the number of output tokens. Note that the layers do not change the length of the embedding sequence, and thus one can think of a convolutional layer as adjusting rather than recreating input representations.
At its core, kernels sweep over the input embeddings and combine the representations in a local region of length k through a weighted linear combination called a convolution. The input to the convolutional kernel is a matrix X ∈ R^{k×d}, a concatenation of k input elements embedded in d dimensions. The kernel itself is parameterized as a weight matrix W ∈ R^{2d×kd} and a bias term b ∈ R^{2d}. Applying the kernel to an input region X (flattened to a vector in R^{kd}) results in an output element y ∈ R^{2d}. Performing the convolutions over the entire input sequence, including the regions padded with 0 to preserve the embedding size, results in an intermediate output Y = (y_1, ..., y_m). The reason why the convolutions double the dimensionality of the embeddings is to implement a gating mechanism. Namely, each output element y is divided into two equally sized parts A and B such that y = [A; B]: A is designed to contain the information itself, and B assigns a relevance to each element through a gated linear unit, v([A; B]) = A ⊗ σ(B), where σ is the sigmoid function and ⊗ denotes element-wise multiplication. This ensures that each convolutional layer adds new relationships to the embeddings rather than removing them. The encoder network in the complete seq2seq model consists of a series of these convolutional blocks, and it outputs the final embedding of the input document. The decoder is similar, but instead of being fed the entire sequence of source tokens, it is given only the i previously generated tokens from the model. In order to ensure that the convolutions are masked (i.e., do not refer to future tokens in the sequence), there is padding only at the beginning of the sequence. As in the encoder, this input is then passed through a series of layers. Each layer contains a set of convolutions, a gating mechanism, and a subsequent attention mechanism that selectively uses the encoder output to modify the embedding. This step is the link between the input sequence and the output of the model, and it is designed to allow the model to focus on different areas of the text during generation. Also, as in the encoder layers, there is a residual connection to the input of the layer. The top decoder output h^L_i, i.e., the hidden state corresponding to the i-th token, is then fed through two linear layers and a softmax layer that produces a distribution for the next word to be decoded. When training the model, subsequent tokens of the target are fed into the decoder, and the KL-divergence between the output distribution and a one-hot encoding of the next token is accumulated in a training loss, which is optimized via back-propagation. During generation, this distribution is used to pick the next token in the sequence, and the resulting sequence (y_1, ..., y_{i+1}) is then fed back into the decoder until the end-of-sequence tag </s> is reached. It is not obvious, however, how to pick from this distribution. One method is to choose the token with the highest probability, but this greedy approach might not yield the best overall output. In our experiments, we use two main search techniques: (i) beam search, which expands all possible next steps and keeps the k most likely ones, where the number of parallel searches k is user-specified, and (ii) top-k sampling, where the model chooses the k highest-probability tokens and then chooses from them uniformly at random.
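To make the block structure concrete, the following is a minimal PyTorch-style sketch of a single gated convolutional block (a 1-D convolution that doubles the channel dimension, a GLU gate, and a residual connection). It is not the authors' implementation; the class name, the default embedding size of 512, and the kernel width are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GatedConvBlock(nn.Module):
    """One encoder-style convolutional block: a 1-D convolution that doubles the
    channel dimension, a GLU gate A * sigmoid(B), and a residual connection."""

    def __init__(self, d_model: int = 512, kernel_width: int = 3):
        super().__init__()
        # Doubling the output channels lets the result be split into the
        # "content" half A and the "gate" half B, as described in the text.
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size=kernel_width,
                              padding=kernel_width // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len)
        y = self.conv(x.transpose(1, 2))      # (batch, 2 * d_model, seq_len)
        a, b = y.chunk(2, dim=1)              # split into A and B
        gated = a * torch.sigmoid(b)          # gated linear unit: A * sigmoid(B)
        return x + gated.transpose(1, 2)      # residual connection


# Usage sketch: a batch of 4 "documents" of 100 token embeddings each.
block = GatedConvBlock()
out = block(torch.randn(4, 100, 512))         # shape: (4, 100, 512)
```

A decoder block, as described above, would differ mainly in that padding is applied only at the beginning of the sequence so the convolution never sees future tokens.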
Self-attention. Self-attention is a popular feature of seq2seq models that makes it easier to model relationships between the tokens in a sequence. For analyzing documents such as scientific papers, when combined with the convolutional architecture, the self-attention mechanism might be helpful for modeling long-term dependencies. Prior work proposed to combine self-attention with a convolutional sequence-to-sequence model for story generation. The mechanism is appended to the convolutional decoder layers, which pass the output embedding through three separate paths to calculate queries, keys, and values for each item in the decoder sequence. For an item h^L_i, the attention scores for items j ∈ (1, ..., t) are calculated as dot products between the query q(h^L_i) and the keys k(h^L_j). The softmax operation is applied to these scores, creating a set of weights σ_j that are used to combine the values v(h^L_j) into an attention summary, which updates h^L_i. This mechanism allows the decoder to directly model relationships between tokens that are not within the bounded context of a convolution. In this way, during generation, the decoder can condition on all of its previous outputs, thus enabling it to use a long-term context. For our task, we investigate adding a self-attention mechanism to the decoder and the encoder layers, which might help the model relate different sections of the source paper and add long-range structure to the generated press release. It seems that FCONV extracts relevant information from the source but is not able to present it in a structured and accurate way. This is likely because FCONV does not implement self-attention, making it difficult for the encoder and the decoder to model relationships within the text.
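The following is a minimal sketch of the decoder-side self-attention update described above, again in PyTorch-style code rather than the authors' implementation; the projection dimensions and the way the attention summary is added back to h^L_i are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class DecoderSelfAttention(nn.Module):
    """Dot-product self-attention over the decoder states generated so far:
    queries, keys and values come from three separate linear projections, a
    softmax over dot-product scores gives the weights sigma_j, and the weighted
    sum of values is added back to the current hidden states."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, t, d_model) -- decoder hidden states h^L_1 ... h^L_t
        q, k, v = self.query(h), self.key(h), self.value(h)
        scores = torch.matmul(q, k.transpose(1, 2))              # (batch, t, t)
        # Causal mask: position i may only attend to positions j <= i.
        t = h.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(causal, float("-inf"))
        weights = torch.softmax(scores, dim=-1)                  # the sigma_j weights
        summary = torch.matmul(weights, v)                       # sum_j sigma_j * v(h_j)
        return h + summary                                       # update each h^L_i


# Usage sketch: 4 partially decoded sequences of 20 tokens.
attn = DecoderSelfAttention()
out = attn(torch.randn(4, 20, 512))                              # shape: (4, 20, 512)
```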
The interpretability of an AI agent's behavior is of utmost importance for effective human-AI interaction. To this end, there has been increasing interest in characterizing and generating interpretable behavior of the agent. An alternative approach to guarantee that the agent generates interpretable behavior would be to design the agent's environment such that uninterpretable behaviors are either prohibitively expensive or unavailable to the agent. To date, there has been work under the umbrella of goal or plan recognition design exploring this notion of environment redesign for some specific instances of interpretable behavior. In this position paper, we scope the landscape of interpretable behavior and environment redesign in all its different flavors. Specifically, we focus on three types of interpretable behavior -- explicability, legibility, and predictability -- and present a general framework for the problem of environment design that can be instantiated to achieve each of the three interpretable behaviors. We also discuss how specific instantiations of this framework correspond to prior works on environment design and identify exciting opportunities for future work.
The design of human-aware AI agents must ensure that their decisions are interpretable to the human in the loop. Uninterpretable behavior can lead to an increased cognitive load on the human -- from reduced trust and productivity to an increased risk of danger around the agent (BID7). BID5 emphasises in the Roadmap for U.S. Robotics that "humans must be able to read and recognize agent activities in order to interpret the agent's understanding". The agent's behavior may be uninterpretable if the human has an incorrect notion of the agent's beliefs and capabilities (BID15; BID1), is unaware of the agent's goals and rewards (BID6; BID12), or cannot predict the agent's plan or policy (BID8; BID12). Thus, in order to be interpretable, the agent must take into account the human's expectations of its behavior -- i.e. the human mental model (BID0). There are many ways in which considerations of the human mental model can affect agent behavior. There has been significant interest recently in characterizing different notions of interpretable behavior of a human-aware AI agent. Three important properties of interpretable behavior emerge, namely -- explicability: when the agent behavior conforms to the expectations of the human; legibility: when the agent behavior reveals its objectives or intentions to the observer; and predictability: when the (remaining) agent behavior can be precisely predicted by the human. In existing works, the generation of these behaviors was explored from the point of view of the agent -- i.e. the agent altered its planning process by using its estimation of the mental model of the human in the loop in order to exhibit the desired behavior. We refer the reader to prior work for a detailed treatise of these concepts. A parallel thread of work, under the umbrella of goal and plan recognition design, has looked at the notion of changing the environment of an agent to increase the interpretability of the behaviors available to the agent. The design of the environment can be optimized in order to maximize (or minimize) some objective for the actor (for example, optimal cost to a goal, or a desired behavioral property) (BID16; BID9).
An environment design problem takes the initial environment configuration as input, along with the set of modifications allowed in the environment, and outputs a sequence of modifications that can be applied to the initial environment to derive a new environment in which the desired objective is optimized. The problem of environment design is most suited to structured settings where an actor performs repetitive tasks (for example, on factory floors). It is also suited to settings involving multiple actors, where the expectations of the observers are the same but there are multiple actors in the environment, making environment design an effective choice (for example, in restaurants with waiter robots where the human customers have the same expectations).
FIG0 (caption excerpt): (b) the domain can be updated for more explicable behavior by disabling the robot coffee holder; (c) to induce legible behavior, we can add dividing walls to constrain the agent and help the observer reduce uncertainty in their mental model; and (d) to induce predictable behavior, we can reduce uncertainty about the order of pickup by including a tray that allows the agent to pick up the objects in any order.
Goal (and Plan) Recognition Design. In existing work, the concept of environment design for planning agents has been studied in the context of goal (or plan) recognition design (BID10; BID13), in order to make the goals (or plans) of an actor easier for the observer to recognize. Immediately, this should remind the reader of legibility (and predictability) introduced above. The goal of this paper is to bridge the gap between these two parallel threads of work and explore the full spectrum of interpretable behavior in the context of environment design. We will discuss how existing work in environment design fits into this narrative and highlight gaps in existing work as exciting avenues for future research. Adapting the reasoning of the agent to deal with the human mental model, as done in classical human-aware decision making, and designing the environment are indeed complementary approaches towards achieving the same purpose: behavior of the agent that is more interpretable to the observer. While one could conceive of a most general framework that accounts for both, it is useful to recognize that each has its own unique set of features. Perhaps the biggest advantage of environment design is that the process of generating interpretable behavior is offloaded from the actor onto the design process. In other words, the computational overhead of generating interpretable behavior -- in having to deal with the human mental model and reason in the space of models -- is now part of the design process only, which can be done offline. We will see later, in our formulation, how the actor in "Design for Interpretability" is modeled as a cost-optimal agent that is able to produce interpretable behavior while still planning in the traditional sense. In addition, design also means that the observer does not have to count on the agent to be cooperative and interpretable, and can instead deal with adversarial agents as well. At the end of this paper, we will see that this advantage does come with a caveat. In general, the notion of interpretable behavior is complemented by communication: e.g., some authors balance considerations of explanations and explicability, while others balance intention projection actions for the sake of legibility and predictability.
Communication operates in model space by being able to change the observer's beliefs. Design, also operating in model space, can be seen as an alternative to communication that complements the notion of interpretable behavior. Consider an office setting where an office assistant robot, responsible for delivering items such as coffee or mail to the employees, is about to be deployed (FIG0). The robot (actor) will be supervised by office security guards (observer) who have worked with previous-generation office assistant robots and have some expectations regarding their functions. In particular, they expect the robot to carry one item at a time (i.e. either mail or coffee), and each robot generally has a strong preference on the order in which it picks up these items (though the order changes from robot to robot). Unknown to the guards, the new model adds more flexibility to the robot by removing the need for a fixed preference on the order in which items are picked up and by installing a coffee cup holder that allows the robot to carry both mail and coffee at the same time. Now, if we allow the new robot to simply act optimally in the original setting, it would unnecessarily confuse the observers. If the robot were built to generate interpretable behavior, it would change its behavior (and possibly settle for suboptimal decisions in its own model) in order to conform to expectations, or it would provide explanations that address these model differences. However, the same effect can be achieved if the designers who are deploying the robot also design the environment to ensure that the decisions of the new robot remain interpretable to the occupants of the office. If the designers wish to prioritize explicability, then the change they would need to make would be to disable the coffee holder; this will cause the robot to choose one of the items first, deliver it, and then move on to the second one. For explicability, it does not matter which one the robot chooses, as the user would simply assume that the order chosen by the robot is the one enforced by the robot's model. As for legibility, the aim is to help the user differentiate between the models as early as possible; one way to do this would be to disable the coffee holder and then introduce obstacles, as shown in FIG0. Finally, for predictability, the focus is on allowing the user to predict the entire plan as early as possible. One possible design for this scenario is to disable the coffee holder and provide the robot with a tray that allows it to carry both items at the same time. The observer can see the tray and realize that the robot can place both items in the tray, so the order of picking up no longer matters. For predictability, we may also need to add additional obstacles to further restrict the space of possible plans available to the robot (FIG0). An interpretable decision making problem involves two entities: an actor (A) and an observer (O). The actor operates in an environment while being observed by the observer.
Definition 1. An interpretable decision making problem is a tuple P_Int = ⟨P_A, P^O, Int⟩, where:
- P_A is the decision making problem of the actor A;
- P^O = {P^O_1, ..., P^O_n} is the observer's mental model of the actor, represented by a set of possible decision making problems that the observer believes the actor may be solving;
- Int: Π → R is the interpretability score that is used to evaluate agent plans (where Π is the space of plans).
Interestingly, we do not require that P_A ∈ P^O -- i.e.
the problems in P^O can be different from P^A in all possible aspects (e.g. state space, action space, initial state and goals). The solution to P_Int is a plan or policy that not only solves P^A but also satisfies some desired properties of interpretable behavior (measured through the interpretability score). The score could reflect properties like explicability, legibility or predictability of the plan. Explicability The actor's behavior is considered explicable if it aligns with at least one of the observer's expected plans (as per their mental model). The set of plans expected by the observer consists of all the cost-optimal solutions for problems in P^O. The target of explicability is thus to generate behavior that belongs to this set of expected plans. Legibility With legibility, the objective of the actor is to inform the observer about its model - i.e. reduce the size of P^O. An actor's behavior is said to be perfectly legible if it can be derived from only one model in P^O. The longer it takes for a plan prefix to achieve this, the worse is the plan's legibility. This notion of interpretability thus helps the observer narrow down their belief over the possible actor models as quickly as possible. Predictability The objective of the actor with predictability is to generate the most disambiguating behavior - i.e. given the actor's plan prefix, the observer should be able to predict its completion. These predictions would be in terms of cost-optimal completions of a given prefix in the possible problems in the mental model. This means that if there exists the same unique completion in all of the models, then the plan is predictable even though not legible. The shorter the length of the disambiguating plan prefix, the better the predictability of the plan. An empty prefix would thus correspond to the most predictable plan. In this section, we present a general formulation for the design problem for interpretable behaviors. Given an environment design, we assume that the actor is a rational agent and is therefore incentivized to generate cost-optimal plans. Let the set of cost-optimal plans of the actor be Π*_{P^A}. A cost-optimal plan that solves P^A can exist anywhere on the spectrum of interpretability from high to low. Therefore, we need a measure to quantify the interpretability score for the actor's set of cost-optimal plans. To that end, we introduce the worst-case interpretability score wci as follows: Definition 2. The worst-case interpretability score wci(·) for P_Int is defined as wci(P_Int) = min_{π ∈ Π*_{P^A}} Int(π). Int(·) is instantiated for each type of interpretable behavior separately and is discussed in detail at the end of this section. The higher the interpretability score, the better the interpretability of the behavior (in terms of either of the three properties). Therefore, the worst-case interpretability score is the minimum interpretability score of a cost-optimal plan of the actor. We can now define the design problem for interpretability. When a modification is applied to the environment, both the actor's decision making problem and the observer's mental model are modified, thereby changing the worst-case interpretability score of the actor's cost-optimal plans for the given decision making problem. Let P denote the set of valid configurations in the real environment. Although P^A ∈ P, problems in P^O might not necessarily be in P if the observer has incorrect or infeasible notions about the actor's model.
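To make the worst-case score concrete, the following is a minimal Python sketch of Definition 2, under the assumption that the actor's cost-optimal plans can be enumerated by a planner; `cost_optimal_plans` and `int_score` are hypothetical stand-ins for that planner output and for one of the Exp/Leg/Pred scores defined below.

```python
def worst_case_interpretability(cost_optimal_plans, int_score):
    """wci(P_Int): the minimum interpretability score attained by any of the
    actor's cost-optimal plans (Definition 2). `cost_optimal_plans` plays the
    role of Pi*_{P^A}; `int_score` maps a plan to a real number."""
    return min(int_score(plan) for plan in cost_optimal_plans)
```

The design problem introduced next then tries to raise this worst case by modifying the environment rather than the actor's reasoning.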
Since the observer may have incorrect or infeasible notions about the actor's model, we represent the set of configurations that the observer thinks are possible as a separate set P̂, with P^O ⊆ P̂. Definition 3. The design problem for interpretability, DP-Int, is a tuple ⟨P^A_0, P^O_0, ∆, Λ^A, Λ^O⟩, where: • P^A_0 ∈ P and P^O_0 ⊆ P̂ are the initial models. • ∆ is the set of modifications that can be applied to the environment; ξ is a sequence of modifications. • Λ^A: ∆ × P → P and Λ^O: ∆ × P̂ → P̂ are the model transition functions that specify the resulting model after applying a modification to the existing models. The set of possible modifications includes modifying the set of states, action preconditions, action effects, action costs, initial state and goal. Each modification ξ_i ∈ ∆ is associated with a cost, and the cost of a modification sequence ξ is C(ξ) = Σ_{ξ_i ∈ ξ} C(ξ_i). After applying ξ to both P^A_0 and P^O_0, the resulting actor decision making problem and observer mental model are represented as P^A_{|ξ|} and P^O_{|ξ|} respectively. Let P^{|ξ|}_Int be the modified interpretable decision making problem after applying the modification sequence ξ to P_Int. Our objective here is to solve DP-Int such that the worst-case interpretability score of P_Int is maximized. Apart from that, the design cost of ξ has to be minimized, as well as the cost of a plan π_A that solves P^A_{|ξ|}. Definition 4. A solution to DP-Int is a sequence of modifications ξ with DISPLAYFORM2. This completes the general framework of design for interpretability. In the following, we will look at specific instances of design for the different notions of interpretability. In order to be explicable, the actor's plan has to be consistent with the observer's expectations of it. The observer has an implicit assumption that the actor is a rational agent. Therefore the set of plans expected by the observer includes the cost-optimal plans for all the planning models in the observer's mental model. Let Π*_{P^O} be the set of expected plans for the observer. Given the set of expected plans, the explicability of the actor's plan depends on how different it is from the expected plans. In order to quantify the explicability of a plan, we introduce the following scoring function: Definition 5. The explicability score Exp(·) of an actor's plan π_A that solves P^A is defined as follows: DISPLAYFORM0, where δ_{P^O}(·) computes the distance between two plans with respect to the observer's mental model. For example, the distance function could compute a cost-based difference between the two plans in the observer's mental model. Plugging this scoring function into Equation 2 allows us to instantiate the design problem for explicability. In order to be legible, the actor's plan has to reveal its problem to the observer as early as possible. Therefore, the legibility of a plan is inversely proportional to the length of its shortest prefix that has a unique cost-optimal completion for more than one problem in the observer's mental model. Definition 6. The legibility score Leg(·) of an actor's plan π_A that solves P^A is defined as follows: DISPLAYFORM0, with a unique cost-optimal completion of the prefix π̂_A in each model, where Π̂_{P^A} is the set of all prefixes of π_A. Plugging this scoring function into Equation 2 allows us to instantiate the design problem for legibility. Goal Recognition Design The work on goal recognition design (GRD) BID10 is a special case of the design problem for legibility. The GRD problem involves an actor and an observer where the observer's mental model consists of planning models that have the exact same state space, actions and initial state as the actor's planning model.
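Setting GRD aside for a moment (the discussion resumes below), the overall loop implied by Definitions 3 and 4 can be sketched as a search over modification sequences. The scalarized objective in this sketch is an illustrative stand-in for the multi-objective criterion of Definition 4, which is not reproduced above, and all helper functions (`lambda_a`, `lambda_o`, `mod_cost`, `wci`) are hypothetical; here `wci` is assumed to evaluate the worst-case score of the modified problem directly, e.g. by wrapping the earlier sketch with a planner.

```python
from itertools import permutations

def design_for_interpretability(p_a0, p_o0, modifications, lambda_a, lambda_o,
                                mod_cost, wci, max_length=2):
    """Brute-force sketch of DP-Int: enumerate short modification sequences,
    apply them to the actor's problem (Lambda^A) and to the observer's mental
    model (Lambda^O), and keep the sequence with the best worst-case
    interpretability score net of design cost."""
    best_seq, best_value = (), wci(p_a0, p_o0)
    for length in range(1, max_length + 1):
        for seq in permutations(modifications, length):
            p_a, p_o = p_a0, p_o0
            for m in seq:
                p_a, p_o = lambda_a(m, p_a), lambda_o(m, p_o)
            value = wci(p_a, p_o) - sum(mod_cost(m) for m in seq)
            if value > best_value:
                best_seq, best_value = seq, value
    return best_seq
```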
Returning to GRD: while these models share the state space, actions, and initial state, each planning model in the observer's mental model has a different goal. The actor's true goal is one of them, and the objective of the GRD problem is to redesign the environment such that the true goal of the actor is revealed to the observer as early as possible. The interpretability problem defined here is a general one, where the observer's mental model can differ in all possible ways from the actor's actual planning model. In order to be predictable, the plan has to be the most disambiguating plan among the set of plans the observer is considering - i.e. the observer should be able to predict the rest of the plan after seeing the prefix. Therefore, the predictability of a plan is inversely proportional to the length of its shortest prefix which ensures only one optimal completion solving only a single problem in the observer's mental model. We can quantify the predictability score as follows: Definition 7. The predictability score Pred(·) of an actor's plan π_A that solves P^A is defined as follows: DISPLAYFORM0, where π is an optimal completion of the prefix π̂_A, and Π̂_{P^A} is the set of all prefixes of the plan π_A. Plugging this scoring function into Equation 2 allows us to instantiate the design problem for predictability. The predictability problem corresponds to the plan recognition design (PRD) problem BID13. However, our proposed framework, in terms of possible observer models, subsumes the plan-library based approaches in being able to support a generative model of observer expectations. We will now highlight limitations of the proposed framework and discuss how they may be addressed in the future. Multiple decision making problems. The problem of environment design, as studied in this paper, is suitable for settings where the actor performs a single repetitive task. However, our formulation can be easily extended to handle an array of tasks that the agent performs in its environment by considering a set of decision making problems for the actor, where the worst-case score is decided by taking either the minimum (or the average) of wci(·) over the set of problems. Interpretability Score. The three properties of interpretable agent behavior are not mutually exclusive. A plan can be explicable, legible and predictable at the same time. In general, a plan can have any combination of the three properties. In Equation 2, Int(·) uses one of these properties at a time. In order to handle more than one property at a time, one could formulate Int(·) as a linear combination of the three properties. In general, the design objective would be to maximize the worst-case interpretability score such that the scores for each property are maximized in the modified environment, or at least allow the designer pathways to trade off among potentially competing metrics. Cost of the agent. In Section 1.3 we mentioned an advantage of the design process in the context of interpretability: the ability to offload the computational load on the actor, in having to reason about the observer model, to the offline design stage. However, there is no free lunch. The effect of environment design is more permanent than operating on the human mental model. That is to say, interpretable behavior, while targeted at a particular human in the loop or a particular interaction, does not (usually) affect the actor going forward. In the case of environment design, however, the actor has to live with the design decisions for the rest of its life.
That means, for example, that if the environment has been designed to promote explicable behavior, the actor will incur additional cost for its behavior (compared to what it would have incurred in the original environment). This affects not only the particular decision making problem at hand, but also everything else that the actor does in the environment, and all the agents it interacts with. As such, there is a "loss of autonomy" in some sense due to environment design, the cost of which can and should be incorporated into the design process.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkxg4a3m9N
We present an approach to redesign the environment such that uninterpretable agent behaviors are minimized or eliminated.
Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs). In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients. Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings. We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets. Generative models present the possibility of learning structure from data in unsupervised or semi-supervised settings, thereby facilitating more flexible systems to learn and perform tasks in computer vision, robotics, and other application domains with limited human involvement. Latent variable models, a class of generative models, are particularly well-suited to learning hidden structure. They frame the process of data generation as a mapping from a set of latent variables underlying the data. When this mapping is parameterized by a deep neural network, the model can learn complex, non-linear relationships, such as object identities and dynamics. However, performing exact posterior inference in these models is computationally intractable, necessitating the use of approximate inference methods. Variational inference is a scalable approximate inference method, transforming inference into a non-convex optimization problem. Using a family of approximate posterior distributions, e.g. Gaussians, variational inference attempts to find the distribution that most closely matches the true posterior. This matching is accomplished by maximizing a lower bound on the marginal log-likelihood, or model evidence, which can also be used to learn the model parameters. The ensuing expectation-maximization procedure alternates between optimizing the approximate posteriors and model parameters. Amortized inference avoids exactly computing optimized approximate posterior distributions for each data example, instead learning a separate inference model to perform this task. Taking the data example as input, this model outputs an estimate of the corresponding approximate posterior. When the generative and inference models are parameterized with neural networks, the resulting set-up is referred to as a variational auto-encoder (VAE). We introduce a new class of inference models, referred to as iterative inference models, inspired by recent work in learning to learn. Rather than directly mapping the data to the approximate posterior, these models learn how to iteratively estimate the approximate posterior by repeatedly encoding the corresponding gradients, i.e. learning to infer. With inference computation distributed over multiple iterations, we conjecture that this model set-up should provide improved inference estimates over standard inference models given sufficient model capacity.
Our work is presented as follows: Section 2 covers latent variable models, variational inference, and inference models; Section 3 motivates and introduces iterative inference models; Section 4 presents this approach for latent Gaussian models, showing that a particular form of iterative inference models reduces to standard inference models under mild assumptions; Section 5 contains empirical results; and Section 6 concludes our work. Latent variable models are generative probabilistic models that use local (per data example) latent variables, z, to model observations, x, using global (across data examples) parameters, θ. A model is defined by the joint distribution p_θ(x, z) = p_θ(x|z)p_θ(z), which is composed of the conditional likelihood and the prior. Learning the model parameters and inferring the posterior p(z|x) are intractable for all but the simplest models, as they require evaluating the marginal likelihood, p_θ(x) = ∫ p_θ(x, z) dz, which involves integrating the model over z. For this reason, we often turn to approximate inference methods. Variational inference reformulates this intractable integration as an optimization problem by introducing an approximate posterior q(z|x), typically chosen from some tractable family of distributions, and minimizing the KL-divergence from the true posterior, D_KL(q(z|x)||p(z|x)). This quantity cannot be minimized directly, as it contains the true posterior. Instead, the KL-divergence can be decomposed as D_KL(q(z|x)||p(z|x)) = log p_θ(x) − L (eq. 1), where L is the evidence lower bound (ELBO), which is defined as L ≡ E_{z∼q(z|x)}[log p_θ(x, z) − log q(z|x)] = E_{z∼q(z|x)}[log p_θ(x|z)] − D_KL(q(z|x)||p_θ(z)) (eqs. 2-3). Briefly, the first term in eq. 3 can be considered as a reconstruction term, as it expresses how well the output fits the data example. The second term can be considered as a regularization term, as it quantifies the dissimilarity between the latent representation and the prior. Because log p_θ(x) is not a function of q(z|x), in eq. 1 we can minimize D_KL(q(z|x)||p(z|x)), thereby performing approximate inference, by maximizing L w.r.t. q(z|x). Likewise, because D_KL(q(z|x)||p(z|x)) is non-negative, L is a lower bound on log p_θ(x), meaning that if we have inferred an optimal q(z|x), learning corresponds to maximizing L w.r.t. θ. The optimization procedures involved in inference and learning, when implemented using conventional gradient ascent techniques, are respectively the expectation and maximization steps of the variational EM algorithm, which alternate until convergence. When q(z|x) takes a parametric form, the expectation step for data example x^(i) involves finding a set of distribution parameters, λ^(i), that are optimal. With a factorized Gaussian density over continuous variables, i.e. q(z|x^(i)) = N(z; µ_q^(i), diag σ²_q^(i)), this entails repeatedly estimating the stochastic gradients ∇_{µ_q}L and ∇_{σ²_q}L. This direct optimization procedure, which is repeated for each example, is not only computationally costly for expressive generative models and large data sets, but also sensitive to step sizes and initial conditions. Amortized inference replaces the optimization of each set of local approximate posterior parameters, λ^(i), with the optimization of a set of global parameters, φ, contained within an inference model. Taking x^(i) as input, this model directly outputs estimates of λ^(i). Sharing the inference model across data examples allows for an efficient algorithm, in which φ and θ can be updated jointly.
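Before turning to the canonical amortized instantiation below, it may help to see what this per-example expectation step looks like when carried out directly. The following is a minimal sketch assuming PyTorch, with `log_likelihood(x, z)` and `log_prior(z)` as hypothetical callables standing in for the generative model terms:

```python
import torch

def expectation_step(x, log_likelihood, log_prior, z_dim=2, n_steps=100, lr=0.1):
    """Stochastic gradient ascent on the ELBO w.r.t. the parameters of a
    factorized Gaussian q(z|x) for a single data example (a variational EM
    expectation step), using one reparameterized sample per step."""
    mu_q = torch.zeros(z_dim, requires_grad=True)
    log_var_q = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.SGD([mu_q, log_var_q], lr=lr)
    for _ in range(n_steps):
        eps = torch.randn(z_dim)
        z = mu_q + torch.exp(0.5 * log_var_q) * eps         # reparameterization trick
        log_q = torch.distributions.Normal(
            mu_q, torch.exp(0.5 * log_var_q)).log_prob(z).sum()
        elbo = log_likelihood(x, z) + log_prior(z) - log_q  # single-sample ELBO estimate
        (-elbo).backward()                                  # ascend the ELBO
        opt.step()
        opt.zero_grad()
    return mu_q.detach(), log_var_q.detach()
```

An inference model amortizes exactly this per-example loop into a single learned mapping.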
The canonical example, the variational auto-encoder (VAE), employs the reparameterization trick to propagate stochastic gradients from the generative model to the inference model, both of which are parameterized by neural networks. The formulation has an intuitive interpretation: the inference model encodes x into q(z|x), and the generative model decodes samples from q(z|x) into p(x|z). Throughout the rest of this paper, we refer to inference models of this form as standard inference models. (Caption of FIG0: Optimization surface of L (in nats) for a 2-D latent Gaussian model and a particular MNIST data example. Shown on the plot are the MAP (optimal estimate), the output of a standard inference model (VAE), and an expectation step trajectory of variational EM using stochastic gradient ascent; the plot on the right shows the estimates of each inference scheme near the optimum, where the expectation step arrives at a better final inference estimate than the standard inference model.) In Section 3.2, we introduce our contribution, iterative inference models. We first motivate our approach in Section 3.1 by interpreting standard inference models in VAEs as optimization models, i.e. models that learn to perform optimization. Using insights from other optimization models, this interpretation extends and improves upon standard inference models. As described in Section 2.1, variational inference transforms inference into the maximization of L w.r.t. the parameters of q(z|x), constituting the expectation step of the variational EM algorithm. In general, this is a non-convex optimization problem, making it somewhat surprising that an inference model can learn to output reasonable estimates of q(z|x) across data examples. Of course, directly comparing inference schemes is complicated by the fact that generative models adapt to accommodate their approximate posteriors. Nevertheless, inference models attempt to replace traditional optimization techniques with a learned mapping from x to q(z|x). We demonstrate this point in FIG0 by visualizing the optimization surface of L defined by a trained 2-D latent Gaussian model and a particular data example, in this case a binarized MNIST digit. To visualize the surface, we use a 2-D point estimate as the approximate posterior, q(z|x) = δ(z = µ_q), where µ_q = (µ_1, µ_2) ∈ R² and δ is the Dirac delta function. See Appendix C.1 for further details. Shown on the plot are the MAP (i.e. optimal) estimate, the estimate from a trained inference model, and an expectation step trajectory using stochastic gradient ascent on µ_q. The expectation step arrives at a better final estimate, but it requires many iterations and is dependent on the step size and initial estimate. The inference model outputs a near-optimal estimate in one forward pass without hand tuning (other than the architecture), but it is restricted to this single estimate. Note that the inference model does not attain the optimal estimate, resulting in an "amortization gap". This example illustrates how inference models differ from conventional optimization techniques. Despite having no convergence guarantees on inference optimization, inference models have been shown to work well empirically. However, by learning a direct mapping from x to q(z|x), standard inference models are restricted to single-step estimation procedures, which may yield worse inference estimates. The resulting large amortization gap then limits the quality of the accompanying generative model.
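For later contrast with the iterative models, here is a minimal PyTorch sketch of a standard inference model: a single feed-forward mapping from a data example to the Gaussian approximate posterior parameters. Layer sizes are illustrative and loosely follow the fully-connected architectures in Appendix C.

```python
import torch.nn as nn

class StandardInferenceModel(nn.Module):
    """One-shot inference model (VAE encoder): lambda = f(x; phi)."""
    def __init__(self, x_dim=784, h_dim=512, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ELU(),
                                 nn.Linear(h_dim, h_dim), nn.ELU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.log_var = nn.Linear(h_dim, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_var(h)   # approximate posterior parameters
```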
To improve upon this paradigm, we take inspiration from the area of learning to learn, where it was shown that an optimizer model, instantiated as a recurrent neural network, can learn to optimize the parameters of an optimizee model, another neural network, for various tasks. (Caption of Figure 2: θ refers to the generative model (decoder) parameters; ∇_λ L denotes the gradients of the ELBO w.r.t. the distribution parameters, λ, of the approximate posterior, q(z|x); iterative inference models learn to perform approximate inference optimization by using these gradients and a set of inference model (encoder) parameters, φ. See FIG6 for a similar set of diagrams with unrolled computational graphs.) The optimizer model receives the optimizee's parameter gradients and outputs updates to these parameters to improve the optimizee's loss. Because the computational graph is differentiable, the optimizer itself can also be learned. Optimization models can learn to adaptively adjust update step sizes, potentially speeding up and improving optimization. While that work focuses primarily on parameter optimization (i.e. learning), we apply an analogous approach to inference optimization in latent variable models. We refer to this class of optimization models as iterative inference models, as they are inference models that iteratively update their approximate posterior estimates. Our work differs from that line of work in three distinct ways: variational inference is a qualitatively different optimization problem, involving amortization across data examples rather than learning tasks; we utilize non-recurrent optimization models, providing a more computationally efficient model that breaks the assumption that previous gradient information is essential for learned optimization; and we provide a novel model formulation that approximates gradient steps using locally computed errors on latent and observed variables (see Section 4.1). We formalize our approach in the following section. We present iterative inference models starting from the context of standard inference models. For a standard inference model f with parameters φ, the estimate of the approximate posterior distribution parameters λ^(i) for data example x^(i) is of the form λ^(i) = f(x^(i); φ) (eq. 4). We propose to instead use an iterative inference model, also denoted as f with parameters φ. With L_t^(i) ≡ L(x^(i), λ_t^(i); θ) as the ELBO for data example x^(i) at inference iteration t, the model uses the gradients ∇_λ L_t^(i) to output updated estimates of λ^(i): λ_{t+1}^(i) = f_t(∇_λ L_t^(i), λ_t^(i); φ) (eq. 5), where λ_t^(i) is the estimate of λ^(i) at inference iteration t. We use f_t to highlight that the form of f at iteration t may depend on hidden states within the iterative inference model, such as those found within recurrent neural networks. See Figures 2 and 8 for schematic comparisons of iterative inference models with variational EM and standard inference models. As with standard inference models, the parameters of an iterative inference model can be updated using stochastic estimates of ∇_φ L, obtained through the reparameterization trick or other methods. Model parameter updating is typically performed using standard optimization techniques. Note that eq. 5 is in a general form and contains, as a special case, the residual updating scheme used in learning to learn. We now describe an example of iterative inference models for latent Gaussian generative models, deriving the gradients to understand the source of the approximate posterior updates.
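As a concrete illustration of eq. 5 before specializing to latent Gaussian models, the following PyTorch sketch repeatedly encodes the approximate posterior gradients together with the current parameter estimates and outputs additive updates. The `elbo` callable, layer sizes, and the residual update are illustrative assumptions, and the gradient preprocessing and gated updates described in Appendix B are omitted for brevity.

```python
import torch
import torch.nn as nn

class IterativeInferenceModel(nn.Module):
    """Sketch of eq. 5: repeatedly encode ELBO gradients w.r.t. the
    approximate posterior parameters and output updated estimates."""
    def __init__(self, z_dim=64, h_dim=512):
        super().__init__()
        self.z_dim = z_dim
        # input: [grad_mu, grad_log_var, mu, log_var] -> update for [mu, log_var]
        self.net = nn.Sequential(nn.Linear(4 * z_dim, h_dim), nn.ELU(),
                                 nn.Linear(h_dim, 2 * z_dim))

    def infer(self, x, elbo, n_iterations=5):
        mu = torch.zeros(x.size(0), self.z_dim, requires_grad=True)
        log_var = torch.zeros(x.size(0), self.z_dim, requires_grad=True)
        for _ in range(n_iterations):
            L = elbo(x, mu, log_var)                      # single-sample ELBO estimate
            grad_mu, grad_lv = torch.autograd.grad(
                L.sum(), [mu, log_var], create_graph=True)
            update = self.net(torch.cat([grad_mu, grad_lv, mu, log_var], dim=-1))
            mu = mu + update[:, :self.z_dim]              # residual-style update
            log_var = log_var + update[:, self.z_dim:]
        return mu, log_var
```

At training time, ∇_φ L can be accumulated from the ELBO at every iteration, mirroring the per-step weighting discussed in Appendix B.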
Latent Gaussian models are latent variable models with Gaussian prior distributions over latent variables: p_θ(z) = N(z; µ_p, diag σ²_p). This class of models is often used in VAEs and is a common choice for representing continuous-valued latent variables. While the approximate posterior can be any probability density, it is typically also chosen as Gaussian: q(z|x) = N(z; µ_q, diag σ²_q). With this choice, λ^(i) corresponds to {µ_q^(i), σ²_q^(i)} for example x^(i). Dropping the superscript (i) to simplify notation, we can express eq. 5 for this model as: DISPLAYFORM2 DISPLAYFORM3 where f_t^{µ_q} and f_t^{σ²_q} are the iterative inference models for updating µ_q and σ²_q respectively. For continuous observations, we can use a Gaussian output density: p_θ(x|z) = N(x; µ_x, diag σ²_x), where µ_x = µ_x(z, θ) is a non-linear function of z, and σ²_x is a global parameter, a common assumption in these models. The approximate posterior parameter gradients for this model are (see Appendix A): DISPLAYFORM5 where ε ∼ N(0, I) is the auxiliary noise variable from the reparameterization trick, ⊙ denotes element-wise multiplication, and all division is performed element-wise. In Appendix A, we also derive the corresponding gradients for a Bernoulli output distribution, which take a similar form. Although we only derive gradients for these two output distributions, note that iterative inference models can be used with any distribution form. We now briefly discuss the terms in eqs. 8 and 9. Re-expressing the reparameterized latent variable as z = µ_q + σ_q ⊙ ε, the gradients have two shared terms, (x − µ_x)/σ²_x and (z − µ_p)/σ²_p, the precision-weighted errors at the observed ("bottom-up") and latent ("top-down") levels respectively. The terms ∂µ_x/∂µ_q and ∂µ_x/∂σ²_q are the Jacobian matrices of µ_x w.r.t. the approximate posterior parameters, which effectively invert the output model. With an understanding of the significance of each term, in the following section we provide an alternative formulation of iterative inference models for latent Gaussian generative models. The approximate posterior gradients are inherently stochastic, arising from the fact that evaluating L involves approximating expectations (eq. 2) using Monte Carlo samples of z ∼ q(z|x). As these estimates always contain some degree of noise, a close approximation to these gradients should also suffice for updating the approximate posterior parameters. The motivations for this are two-fold: approximate gradients may be easier to compute, especially in an online setting, and by encoding more general terms, the inference model may be able to approximate higher-order approximate posterior derivatives, allowing for faster convergence. We now provide an alternative formulation of iterative inference models for latent Gaussian models that approximates gradient information. With the exception of ∂µ_x/∂µ_q and ∂µ_x/∂σ²_q, all terms in eqs. 8 and 9 can be easily computed using x and the distribution parameters of p(x|z), p(z), and q(z|x). Likewise, higher-order approximate posterior derivatives consist of these common terms as well as higher-order derivatives of the output model. As the output model derivatives are themselves functions, by encoding only the common terms, we can offload these (approximate) derivative calculations onto the iterative inference model. Again dropping the superscript (i), one possible set-up is formulated as follows: DISPLAYFORM0 DISPLAYFORM1 where, in the case of a Gaussian output density, the stochastic error terms are defined as ε_{x,t} = (x − µ_{x,t})/σ²_x and ε_{z,t} = (z_t − µ_p)/σ²_p.
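These precision-weighted errors are cheap to compute from quantities already available in the forward pass; a small sketch (names illustrative):

```python
import torch

def precision_weighted_errors(x, mu_x, var_x, z, mu_p, var_p):
    """The two shared terms in the approximate posterior gradients for a
    Gaussian output density: a bottom-up reconstruction error and a top-down
    prior error, each divided element-wise by the corresponding variance."""
    eps_x = (x - mu_x) / var_x   # bottom-up error at the observed level
    eps_z = (z - mu_p) / var_p   # top-down error at the latent level
    return eps_x, eps_z
```

In the error-encoding formulation, these two terms, together with the current estimates of µ_q and log σ²_q, replace the exact gradients as inputs to the inference model.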
This encoding scheme resembles the approach taken in DRAW (Gregor et al.), where reconstruction errors, x − µ_{x,t}, are iteratively encoded. However, DRAW and later variants (Gregor et al.) do not explicitly account for latent errors, ε_{z,t}, or approximate posterior estimates. If possible, these terms must instead be implicitly handled by the inference model's hidden states. In Section 5.2, we demonstrate that iterative inference models of this form do indeed learn to infer. Unlike gradient-encoding iterative inference models, these error-encoding models do not require gradients at test time, and they empirically perform well even with few inference iterations. Under a certain set of assumptions, single-iteration iterative inference models of the derivative-approximating form proposed in Section 4.1 are equivalent to standard inference models, as used in conventional VAEs. Specifically, assuming: 1. the initial approximate posterior estimate is a global constant: DISPLAYFORM0 we are in the limit of infinite samples of the initial auxiliary variable ε_0, then the initial approximate posterior estimate (µ_{q,0}, σ²_{q,0}) and initial latent error (ε_{z,0}) are constants, and the initial observation error (ε_{x,0}) is a constant affine transformation of the observation (x). When the inference model is a neural network, then encoding x or an affine transformation of x is equivalent (assuming the inputs are properly normalized). Therefore, eqs. 10 and 11 simplify to the form of a standard inference model, eq. 4. From this perspective, standard inference models can be interpreted as single-step optimization models that learn to approximate derivatives at a single latent point. In the following section, we consider the case in which the second assumption is violated; iterative inference models naturally handle this case, whereas standard inference models do not. Hierarchical latent variable models contain higher-level latent variables that provide empirical priors on lower-level variables; p_θ(z) is thus observation-dependent (see Figure 7 in Appendix A.6). The approximate posterior gradients for an intermediate level in a hierarchical latent Gaussian model (see Appendix A.6) take a similar form to eqs. 8 and 9, comprising bottom-up errors from lower variables and top-down errors from higher variables. Iterative inference models encode both of these errors, either directly or through the gradient. However, standard inference models, which map x and lower latent variables to each level of latent variables, can only approximate bottom-up information. Lacking top-down prior information, these models must either use a less expressive prior or output poor approximate posterior estimates. Sønderby et al. identified this phenomenon, proposing a "top-down inference" technique. Iterative inference models formalize and extend this technique. We performed experiments using latent Gaussian models trained on MNIST, Omniglot (Lake et al.), Street View House Numbers, and CIFAR-10. MNIST and Omniglot were dynamically binarized and modeled with Bernoulli output distributions, and Street View House Numbers and CIFAR-10 were modeled with Gaussian output distributions, using the procedure from previous work. All experiments presented here use fully-connected neural networks. Reported values of L were estimated using 1 sample (Figures 3, 5, 6), and reported values of − log p(x) were estimated using 5,000 importance weighted samples (TAB0). Additional experiment details, including model architectures and optimizers, can be found in Appendix C.
We present additional experiments on text data in Appendix D. Source code will be released online. To confirm the ability of iterative inference models to optimize the approximate posterior, we tested these models in the simplified setting of a 2D latent Gaussian model, trained on MNIST, with a point estimate approximate posterior. The generative model architecture and approximate posterior form are identical to those used in Section 3.1 (see Appendix C.1). Here we show a result from encoding x and ∇_{µ_q}L through a feedforward neural network. In Figure 3, we visualize an optimization trajectory taken by this model for a particular test example. Despite lacking convergence guarantees, the model learns to adaptively adjust inference update step sizes to navigate the optimization surface, arriving at, and remaining at, a near-optimal approximate posterior estimate for this example. Approximate inference optimization can also be visualized through data reconstructions. In eq. 3, the reconstruction term encourages q(z|x) to represent outputs that closely match the data examples. As this is typically the dominant term in L, during inference optimization the output reconstructions should improve in terms of visual quality, more closely matching x. We demonstrate this phenomenon with iterative inference models for several data sets in Figure 4 (see Appendix C.2 for additional reconstructions). Reconstruction quality noticeably improves during inference. We highlight two unique aspects of iterative inference models: direct improvement with additional samples and with additional inference iterations. These aspects provide two advantageous qualitative differences over standard inference models. Additional approximate posterior samples provide more precise gradient estimates, potentially allowing an iterative inference model to output more precise updates. To verify this, we trained standard and iterative inference models on MNIST using 1, 5, 10, and 20 approximate posterior samples. Iterative inference models were trained by encoding the data (x) and approximate posterior gradients (∇_λ L) for 5 iterations. The results are shown in FIG4, where we observe that the iterative inference model improves by more than 1 nat with additional samples, while the standard inference model improves by roughly 0.5 nats. We investigated the effect of training with additional inference iterations while encoding approximate posterior gradients (∇_λ L) or errors (ε_x, ε_z), with or without the data (x). Section 4 and Appendix A define these terms. Note that the encoded terms affect the number of input parameters to the inference model. Here, the iterative inference model that only encodes ∇_λ L has fewer input parameters than a standard inference model, whereas the models that encode errors or data have strictly more input parameters. Experiments were performed on MNIST, with results for 2, 5, 10, and 16 inference iterations shown in FIG4. All encoding schemes outperformed standard inference models with the same architecture, which we found to be consistent over a range of architectures. Encoding the data was beneficial, allowing the inference model to trade off between learning a direct and an iterative mapping. Encoding errors allows the iterative inference model to approximate higher-order derivatives (Section 4.1), which we observe helps when training with fewer inference iterations. However, it appears that these approximations are less helpful with additional iterations, where derivative approximation errors likely limit performance.
TAB0 contains the estimated marginal log-likelihood on MNIST and CIFAR-10 for standard and iterative inference models, including hierarchical inference models. Iterative inference models were trained by encoding the data and errors for 5 inference iterations. With the same architecture, iterative inference models outperform their standard counterparts. See Appendix C.5 for details and discussion. We also compared the inference optimization performance of iterative inference models with variational EM expectation steps using various optimizers. In Figure 6, we observe that the iterative inference model empirically converges substantially faster to better estimates, even with only local gradient information. See Appendix C.6 for details and discussion. To summarize, iterative inference models outperform standard inference models in terms of inference capabilities, yet are far more computationally efficient than variational EM. Consider a latent variable model, p_θ(x, z) = p_θ(x|z)p_θ(z), where the prior on z is a factorized Gaussian density, p_θ(z) = N(z; µ_p, diag σ²_p), and the conditional likelihood, p_θ(x|z), is Bernoulli for binary observations or Gaussian for continuous observations. We introduce an approximate posterior distribution, q(z|x), which can be any parametric probability density defined over real values. Here, we assume that q also takes the form of a factorized Gaussian density, q(z|x) = N(z; µ_q, diag σ²_q). The objective during variational inference is to maximize L w.r.t. the parameters of q(z|x), i.e. µ_q and σ²_q: DISPLAYFORM0 To solve this optimization problem, we will inspect the gradients ∇_{µ_q}L and ∇_{σ²_q}L, which we now derive. The objective can be written as: DISPLAYFORM1 Plugging in p_θ(z) and q(z|x): DISPLAYFORM2 Since expectation and differentiation are linear operators, we can take the expectation and derivative of each term individually. We can write the log-prior as: DISPLAYFORM0 where n_z is the dimensionality of z. We want to evaluate the following terms: DISPLAYFORM1 and DISPLAYFORM2 To take these derivatives, we will use the reparameterization trick to re-express z = µ_q + σ_q ⊙ ε, where ε ∼ N(0, I) is an auxiliary standard Gaussian variable, and ⊙ denotes the element-wise product. We can now perform the expectations over ε, allowing us to bring the gradient operators inside the expectation brackets. The first term in eqs. 17 and 18 does not depend on µ_q or σ²_q, so we can write: DISPLAYFORM3 and DISPLAYFORM4 To simplify notation, we define the following term: DISPLAYFORM5 allowing us to rewrite eqs. 19 and 20 as: DISPLAYFORM6 and DISPLAYFORM7 We must now find ∂ξ/∂µ_q and ∂ξ/∂σ²_q: DISPLAYFORM8 and DISPLAYFORM9 where division is performed element-wise. Plugging eqs. 24 and 25 back into eqs. 22 and 23, we get: DISPLAYFORM10 and DISPLAYFORM11 Putting everything together, we can express the gradients as: DISPLAYFORM12 and DISPLAYFORM13 A.3 GRADIENT OF THE LOG-APPROXIMATE POSTERIOR We can write the log-approximate posterior as: DISPLAYFORM14 where n_z is the dimensionality of z. Again, we will use the reparameterization trick to re-express the gradients. However, notice what happens when plugging the reparameterized z = µ_q + σ_q ⊙ ε into the second term of eq. 30: DISPLAYFORM15 This term does not depend on µ_q or σ²_q. Also notice that the first term in eq. 30 depends only on σ²_q. Therefore, the gradient of the entire term w.r.t. µ_q is zero: DISPLAYFORM16 The gradient w.r.t.
σ²_q is DISPLAYFORM17 Note that the expectation has been dropped, as the term does not depend on the value of the sampled z. Thus, the gradient of the entire term w.r.t. σ²_q is: DISPLAYFORM18 The form of the conditional likelihood will depend on the data, e.g. binary, discrete, continuous, etc. Here, we derive the gradient for Bernoulli (binary) and Gaussian (continuous) conditional likelihoods. Bernoulli Output Distribution The log of a Bernoulli output distribution takes the form: DISPLAYFORM0 where µ_x = µ_x(z, θ) is the mean of the output distribution. We drop the explicit dependence on z and θ to simplify notation. We want to compute the gradients DISPLAYFORM1 and DISPLAYFORM2 Again, we use the reparameterization trick to re-express the expectations, allowing us to bring the gradient operators inside the brackets. Using z = µ_q + σ_q ⊙ ε, eqs. 36 and 37 become: DISPLAYFORM3 and DISPLAYFORM4 where µ_x is re-expressed as a function of µ_q, σ²_q, ε, and θ. Distributing the gradient operators yields: DISPLAYFORM5 and DISPLAYFORM6 Taking the partial derivatives and combining terms gives: DISPLAYFORM7 and DISPLAYFORM8 Gaussian Output Density The log of a Gaussian output density takes the form: DISPLAYFORM9 where µ_x = µ_x(z, θ) is the mean of the output distribution and σ²_x = σ²_x(θ) is the variance. We assume σ²_x is not a function of z to simplify the derivation; however, using σ²_x = σ²_x(z, θ) is possible and would simply result in additional gradient terms in ∇_{µ_q}L and ∇_{σ²_q}L. We want to compute the gradients DISPLAYFORM10 and DISPLAYFORM11 The first term in eqs. 45 and 46 is zero, since σ²_x does not depend on µ_q or σ²_q. To take the gradients, we will again use the reparameterization trick to re-express z = µ_q + σ_q ⊙ ε. We now implicitly express µ_x as µ_x(µ_q, σ²_q, θ). We can then write: DISPLAYFORM12 and DISPLAYFORM13 To simplify notation, we define the following term: DISPLAYFORM14 allowing us to rewrite eqs. 47 and 48 as DISPLAYFORM15 and DISPLAYFORM16 We must now find ∂ξ/∂µ_q and ∂ξ/∂σ²_q: DISPLAYFORM17 and DISPLAYFORM18 Plugging these expressions back into eqs. 50 and 51 gives DISPLAYFORM19 and DISPLAYFORM20 Despite having different distribution forms, Bernoulli and Gaussian output distributions result in approximate posterior gradients of a similar form: the Jacobian of the output model multiplied by a weighted error term. A.5 SUMMARY Putting the gradient terms from log p_θ(x|z), log p_θ(z), and log q(z|x) together, we arrive at Bernoulli Output Distribution: DISPLAYFORM21 Gaussian Output Distribution: DISPLAYFORM22 (Caption of Figure 7: Plate notation for a hierarchical latent variable model consisting of L levels of latent variables. Variables at higher levels provide empirical priors on variables at lower levels. With data-dependent priors, the model has more flexibility in representing the intricacies of each data example.) Hierarchical latent variable models factorize the latent variables over multiple levels, z = {z_1, z_2, ..., z_L}. Latent variables at higher levels provide empirical priors on latent variables at lower levels. For an intermediate latent level, we use the notation DISPLAYFORM0 and the corresponding approximate posterior gradients are DISPLAYFORM1. Notice that these gradients take a similar form to those of a one-level latent variable model. The first terms inside each expectation can be interpreted as a "bottom-up" gradient coming from reconstruction errors at the level below. The second terms inside the expectations can be interpreted as "top-down" errors coming from priors generated by the level above.
The last term in the variance gradient expresses a form of regularization. Standard hierarchical inference models only contain bottom-up information, and therefore have no way of estimating the second term in each of these gradients. Equation 5 provides a general form for an iterative inference model. Here, we provide specific implementation details for these models. Code for reproducing the experiments will be released online. As mentioned in the learning-to-learn literature, gradients can be on vastly different scales, which is undesirable for training neural networks. To handle this issue, we adopt the technique proposed there: replacing ∇_λ L with the concatenation of [α log(|∇_λ L| + ε), sign(∇_λ L)], where α is a scaling constant and ε is a small constant for numerical stability. This is performed for both parameters in λ = {µ_q, log σ²_q}. When encoding the errors, we instead input the concatenation of [ε_x, ε_z] (see Section 4.1 for definitions of these terms). As we use global variances on the output and prior densities, we drop σ²_x and σ²_p from these expressions because they are constant across all examples. We also found it beneficial to encode the current estimates of µ_q and log σ²_q. We end by again noting that encoding gradients or errors over successive iterations can be difficult, as the distributions of these inputs change quickly during both learning and inference. Work remains to be done in developing iterative encoding architectures that handle this aspect more thoroughly, perhaps through some form of input normalization or saturation. For the output form of these models, we use a gated updating scheme, sometimes referred to as a "highway" connection. Specifically, approximate posterior parameters are updated according to DISPLAYFORM0 where ⊙ represents element-wise multiplication and DISPLAYFORM1 is the gating function for λ at time t, which we combine with the iterative inference model f_t. We found that this yielded improved performance and stability over the residual updating scheme used in learning to learn. In our experiments with latent Gaussian models, we found that means tend to receive updates over many iterations, whereas variances (or log variances) tend to receive far fewer updates, often just a single large update. Further work could perhaps be done in developing schemes that update these two sets of parameters differently. We parameterize iterative inference models as neural networks. Although prior work on learned optimizers exclusively uses recurrent neural networks, we note that optimization models can also be instantiated with feed-forward networks. Note that even with a feed-forward network, because the entire model is run over multiple iterations, the model is technically a recurrent network, though quite different from the standard RNN formulation. RNN iterative inference models, through hidden or memory states, are able to account for non-local curvature information, analogous to momentum or other moment terms in conventional optimization techniques. Feed-forward networks are unable to capture and utilize this information, but purely local curvature information is still sufficient to update the output estimate, e.g. as in vanilla stochastic gradient descent. The learning-to-learn approach propagates optimizer parameter gradients (∇_φ L) from the optimizee's loss at each optimization step, giving each step equal weight. We take the same approach; we found it aids in training recurrent iterative inference models and is essential for training feed-forward iterative inference models.
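A minimal sketch of these two implementation details, assuming PyTorch; the highway-style combination shown is one standard instantiation of the gated update (the exact equation is not reproduced above), and the gate itself is assumed to be produced by the inference network, e.g. through a sigmoid output.

```python
import torch

def encode_gradient(grad, alpha=1.0, eps=1e-8):
    """Preprocess a gradient for the inference model: concatenate a scaled
    log-magnitude term and a sign term so that inputs on vastly different
    scales become comparable."""
    return torch.cat([alpha * torch.log(grad.abs() + eps), torch.sign(grad)], dim=-1)

def gated_update(lam, proposed, gate):
    """'Highway' update for an approximate posterior parameter:
    lambda_{t+1} = gate * lambda_t + (1 - gate) * proposed, with gate in (0, 1)."""
    return gate * lam + (1.0 - gate) * proposed
```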
Regarding training: with a recurrent model, ∇_φ L is calculated using stochastic backpropagation through time. With a feed-forward model, we accumulate ∇_φ L at each step using stochastic backpropagation, then average over the total number of steps. The advantage of using a feed-forward iterative inference model is that it maintains a constant memory footprint, as we do not need to keep track of gradients across iterations. However, as mentioned above, this limits the iterative inference model to only local optimization information. Overall, we found iterative inference models were not difficult to train. Almost immediately, these models started learning to improve their estimates. As noted in prior work, some care must be taken to ensure that the input gradients stay within a reasonable range. We found the log transformation trick to work well in accomplishing this. We also observed that the level of stochasticity in the gradients played a larger role in inference performance for iterative inference models. For instance, in the Gaussian case, we noticed a sizable difference in performance between approximating the KL-divergence and evaluating it analytically. This difference was much less noticeable for standard inference models. In all experiments, inference model and generative model parameters were learned jointly using the AdaM optimizer (Kingma & Ba). The learning rate was set to 0.0002 for both sets of parameters, and all other optimizer parameters were set to their default values. Learning rates were decayed exponentially by a factor of 0.999 at every epoch. All models utilized exponential linear unit (ELU) activation functions, although we found other non-linearities to work as well. Unless otherwise stated, all inference models were symmetric to their corresponding generative models, with the addition of "highway" connections between hidden layers. Though not essential, we found that these connections improved stability and performance. Iterative inference models for all experiments were implemented as feed-forward networks to make comparison with standard inference models easier. See Appendix B for further details. To visualize the optimization surface and trajectories of latent Gaussian models, we trained models with 2 latent dimensions and a point estimate approximate posterior. That is, q(z|x) = δ(z = µ_q) is a Dirac delta function at the point µ_q = (µ_1, µ_2). We used a 2D point estimate approximate posterior instead of a 1D Gaussian density because it results in more variety in the optimization surface, making it easier to visualize the optimization. We trained these models on binarized MNIST due to the data set's relatively low complexity, meaning that 2 latent dimensions can reasonably capture the relevant information specific to a data example. The generative models consisted of a neural network with 2 hidden layers, each with 512 units. The output of the generative model was the mean of a Bernoulli distribution, and log p_θ(x|z) was evaluated using binary cross-entropy. KL-divergences were estimated using 1 sample of z ∼ q(z|x). The optimization surface of each model was evaluated on a grid with range [-5, 5] in increments of 0.05 for each latent variable. To approximate the MAP estimate, we up-sampled the optimization surface using a cubic interpolation scheme. FIG0 visualizes the ELBO optimization surface after training for 80 epochs. Figure 3 visualizes the ELBO optimization surface after training (by encoding x, ε_x, and ε_z) for 50 epochs.
For the qualitative results shown in Figure 4, we trained iterative inference models on MNIST, Omniglot, and Street View House Numbers by encoding approximate posterior gradients (∇_λ L) for 16 inference iterations. For CIFAR-10, we had difficulty in obtaining sharp reconstructions in a reasonable number of inference iterations, so we trained an iterative inference model by encoding errors for 10 inference iterations. For binarized MNIST and Omniglot, we used a generative model architecture with 2 hidden layers, each with 512 units, a latent space of size 64, and a symmetric iterative inference model, with the addition of highway connections at each layer. For Street View House Numbers and CIFAR-10, we used 3 hidden layers in the iterative inference model and 1 in the generative model, with 2048 units at each hidden layer and a latent space of size 1024. We used the same architecture of 2 hidden layers, each with 512 units, for the output model and inference models. The latent variables consisted of 64 dimensions. Each model was trained by drawing the corresponding number of samples from the approximate posterior distribution using the reparameterization trick, yielding lower variance ELBO estimates and gradients. Iterative inference models were trained by encoding the data (x) and the approximate posterior gradients (∇_λ L) for 5 inference iterations. All models were trained for 1,500 epochs. The model architecture for all encoding schemes was identical to that used in the previous section. All models were trained by evaluating the ELBO with a single approximate posterior sample. We trained all models for 1,500 epochs. We were unable to run multiple trials for each experimental set-up, but on a subset of runs for standard and iterative inference models, we observed that final performance had a standard deviation less than 0.1 nats, below the difference in performance between models trained with different numbers of inference iterations. Directly comparing inference optimization performance between inference techniques is difficult; inference estimates affect learning, resulting in models that are better suited to the inference scheme. Instead, to quantitatively compare the performance between standard and iterative inference models, we trained models with the same architecture using each inference model form. We trained both one-level and hierarchical models on MNIST and one-level models on CIFAR-10. In each case, iterative inference models were trained by encoding the data and errors for 5 inference iterations. We estimated marginal log-likelihoods for each model using 5,000 importance weighted samples per data example. C.5.1 MNIST For MNIST, one-level models consisted of a latent variable of size 64, and the inference and generative networks both consisted of 2 hidden layers, each with 512 units. Hierarchical models consisted of 2 levels with latent variables of size 64 and 32 in hierarchically ascending order. At each level, the inference and generative networks consisted of 2 hidden layers, with 512 units at the first level and 256 units at the second level. At the first level of latent variables, we also used a set of deterministic units, also of size 64, in both the inference and generative networks. Hierarchical models included batch normalization layers at each hidden layer of the inference and generative networks; we found this beneficial for training both standard and iterative inference models.
Both encoder and decoder networks in the hierarchical model utilized highway skip connections at each layer at both levels. For CIFAR-10, models consisted of a latent variable of size 1024, an encoder network with 3 hidden layers of 2048 units with highway connections, and a decoder network with 1 hidden layer with 2048 units. The variance of the output Gaussian distribution was a global variable for this model. We note that the results reported in Table 1 are significantly worse than those typically reported in the literature; however, these are for relatively small fully-connected networks rather than larger convolutional networks. We also experimented with hierarchical iterative inference models on CIFAR-10, but found these models more difficult to train without running into numerical instabilities. C.6 COMPARISON WITH VARIATIONAL EM Variational EM is not typically used in practice, as it does not scale well with large models or large data sets. However, because iterative inference models iteratively optimize the approximate posterior parameters, we felt it would be beneficial to provide a comparison of inference optimization performance between iterative inference models and expectation steps from variational EM. We used one-level latent Gaussian models trained with iterative inference models on MNIST for 16 iterations. We compared against vanilla SGD, SGD with momentum, RMSProp, and AdaM, trying learning rates in {0.5, 0.4, 0.3, 0.2, 0.1, 0.01, 0.001}. In all comparisons, we found that iterative inference models outperformed conventional optimization techniques by large margins. Figure 6 shows the optimization performance on the test set for all optimizers and an iterative inference model trained by encoding the approximate posterior gradients. The iterative inference model quickly arrives at a stable approximate posterior estimate, outperforming all optimizers. It is important to note that the iterative inference model here actually has less derivative information than the adaptive optimizers; it only has access to the local gradient. Also, despite only being trained using 16 iterations, the iterative inference model remains stable for hundreds of iterations. We also compared the optimization techniques on the basis of wall clock time: FIG0 reproduces the results from Figure 6. We observe that, despite requiring more time per inference iteration, the iterative inference model still outperforms the conventional optimization techniques. Concurrent with our work, another approach proposes closing the amortization gap by performing inference optimization steps after initially encoding the data with a standard inference model, reporting substantial gains on sparse, high-dimensional data, such as text and ratings. We observe similar findings and present a confirmatory experimental result on the RCV1 data set, which consists of 10,000 dimensions containing word counts. We follow the same processing procedure, encoding data using normalized TF-IDF features and modeling the data using a multinomial distribution. For the encoder and decoder, we use 2-layer networks, each with 512 units and ELU non-linearities. We use a latent variable of size 512 as well. The iterative inference model was trained by encoding gradients for 16 steps. We evaluate the models by reporting (an upper bound on) perplexity on the test set (TAB1). Perplexity, P, is defined as DISPLAYFORM0 where N is the number of examples and N_i is the total number of word counts in example i.
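The perplexity equation itself is not reproduced above; assuming the standard form used for this kind of document modeling, it can be computed from per-example marginal log-likelihood estimates as follows:

```python
import math

def perplexity(log_probs, word_counts):
    """Perplexity from per-example log-likelihood estimates (e.g. importance
    weighted bounds) and per-example word counts N_i, assuming the standard
    form exp(-(1/N) * sum_i log p(x_i) / N_i)."""
    n = len(log_probs)
    return math.exp(-sum(lp / nw for lp, nw in zip(log_probs, word_counts)) / n)
```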
We evaluate perplexity by estimating each log p(x_i) with 5,000 importance weighted samples. We observe that iterative inference models outperform standard inference models on this data set by a similar margin to that reported in the concurrent work. Note, however, that iterative inference models here have substantially fewer input parameters than standard inference models (2,048 vs. 10,000). We also run a single optimization procedure for an order of magnitude fewer steps than that approach. In FIG0, we further illustrate the optimization capabilities of the iterative inference model used here. Plotting the average gradient magnitude of the approximate posterior over inference iterations in FIG0, we see that over successive iterations the magnitude decreases. This implies that the model is capable of arriving at near-optimal estimates, where the gradient is close to zero. In FIG0, we plot the average relative improvement in the ELBO over inference iterations. We see that the model is quickly able to improve its inference estimates, eventually reaching a relative improvement of roughly 25%.
B1Z3W-b0W
We propose a new class of inference models that iteratively encode gradients to estimate approximate posterior distributions.
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem. Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function. Given that people and animals can also show clear behavioral improvements on specific tasks, then however the brain determines its synaptic updates, on average these changes must also correlate with the gradients of some loss function related to the task. As such, the brain may have some way of calculating at least an estimator of gradients. To date, the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up, feedforward and top-down, feedback connections. The feedback connections are used for propagating activity that can be used to estimate a gradient. In all such models, the gradient estimator is less biased the more the feedback connections mirror the feedforward weights. For example, in the REINFORCE algorithm, and related algorithms like AGREL, learning is optimal when the feedforward and feedback connections are perfectly symmetric, such that for any two neurons i and j the synaptic weight from i to j equals the weight from j to i, e.g. W_ji = W_ij (Figure 1). Some algorithms simply assume weight symmetry, such as Equilibrium Propagation. The requirement for synaptic weight symmetry is sometimes referred to as the "weight transport problem", since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights, which is not biologically realistic. Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms, and is thus an important topic of study. Several solutions to the weight transport problem have been proposed for biological models, including hard-wired sign symmetry, random fixed feedback weights, and learning to make the feedback weights symmetric. Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training, the feedforward weights learn to partially align themselves to the feedback weights, an algorithm known as feedback alignment.
Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry and it leads to less bias in the gradient estimator (and thereby, better training ) than using fixed random feedback weights . However, of the current proposals for learning weight symmetry some do not actually work well in practice and others still rely on some biologically unrealistic assumptions, including scalar value activation functions (as opposed to all-or-none spikes) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass;. Interestingly, learning weight symmetry is implicitly a causal inference problem-the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners. As such, we may look to the causal infererence literature to develop better, more biologically realistic algorithms for learning weight symmetry. In econometrics, which focuses on quasi-experiments, researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question;. Among such quasi-experimental methods, regression discontinuity design (RDD) is particularly promising. It uses the discontinuity introduced by a threshold to estimate causal effects. For example, RDD can be used to estimate the causal impact of getting into a particular school (which is a discontinuous, all-or-none variable) on later earning power. RDD is also potentially promising for estimating causal impact in biological neural networks, because real neurons communicate with discontinuous, all-or-none spikes. Indeed, it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons. Given that learning weight symmetry is fundamentally a causal estimation problem, we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic, spiking neural networks. Here, we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks . Our algorithm takes advantage of a neuron's spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons. Since this causal effect is proportional to the feedforward synaptic weight between the two neurons, by estimating it, feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights, thereby overcoming the weight transport problem. We demonstrate that this leads to the reduction of a cost function which measures the weight symmetry (or the lack thereof), that it can lead to better weight symmetry in spiking neural networks than other algorithms for weight alignment and it leads to better learning in deep neural networks in comparison to the use of fixed feedback weights . Altogether, these demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking, and which could be used in future models of biologically plausible gradient estimation. Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training, the feedforward weights learn to partially align themselves to the feedback weights, an algorithm known as feedback alignment . 
While feedback alignment is successful at matching the learning performance of true gradient descent in relatively shallow networks, it does not scale well to deeper networks and performs poorly on difficult computer vision tasks. The gap in learning performance between feedback alignment and gradient descent can be overcome if feedback weights are continually updated to match the sign of the reciprocal feedforward weights. Furthermore, learning the feedback weights in order to make them more symmetric to the feedforward weights has been shown to improve learning over feedback alignment. To understand the underlying dynamics of learning weight symmetry, we define the symmetric alignment cost function, $R_{SA} = \|W\|_F^2 + \|Y\|_F^2 - 2\,\mathrm{tr}(WY)$, as one possible cost function that, when minimized, leads to weight symmetry, where W are feedforward weights and Y are feedback weights. The first two terms are simply weight regularization terms that can be minimized using techniques like weight decay. But the third term is the critical one for ensuring weight alignment. In this paper we present a biologically plausible method of minimizing the third term. This method is based on prior work which demonstrated that neurons can estimate their causal effect on a global reward signal using the discontinuity introduced by spiking. This is accomplished using RDD, wherein a piecewise linear model is fit around a discontinuity, and the differences in the regression intercepts indicate the causal impact of the discontinuous variable. In that work, neurons learn a piece-wise linear model of a reward signal as a function of their input drive, and estimate the causal effect of spiking by looking at the discontinuity at the spike threshold. Here, we modify this technique to perform causal inference on the effect of spiking on downstream neurons, rather than a reward signal. We leverage this to develop a learning rule for feedback weights that induces weight symmetry and improves training. The primary contributions of this paper are as follows: • We demonstrate that spiking neurons can accurately estimate the causal effect of their spiking on downstream neurons by using a piece-wise linear model of the feedback as a function of the input drive to the neuron. • We present a learning rule for feedback weights that uses the causal effect estimator to encourage weight symmetry. We show that when feedback weights are updated using this algorithm, the symmetric alignment cost function R_SA is minimized. • We demonstrate that this learning weight symmetry rule improves training and test accuracy over feedback alignment, approaching gradient-descent-level performance on Fashion-MNIST, SVHN, CIFAR-10 and VOC in deeper networks. In this work, we utilize a spiking neural network model for aligning feedforward and feedback weights. However, due to the intense computational demands of spiking neural networks, we only use spikes for the RDD algorithm. We then use the feedback weights learned by the RDD algorithm for training a non-spiking convolutional neural network. We do this because the goal of our work here is to develop an algorithm for aligning feedback weights in spiking networks, not for training feedforward weights in spiking networks on other tasks. Hence, in the interest of computational expediency, we only used spiking neurons when learning to align the weights. Additional details on this procedure are given below.
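For reference, the sketch below evaluates the symmetric alignment cost as reconstructed above, splitting it into its regularization and alignment terms. The exact algebraic form is inferred from the surrounding description (two regularizers plus a critical trace term), so treat it as an assumption rather than the paper's verbatim equation.

```python
import numpy as np

def symmetric_alignment_cost(W, Y):
    """Symmetric alignment cost R_SA = ||W||_F^2 + ||Y||_F^2 - 2 tr(W Y).

    W: feedforward weights, shape (n_out, n_in)
    Y: feedback weights,   shape (n_in, n_out)
    R_SA equals ||W^T - Y||_F^2, so it is minimized exactly when Y = W^T.
    """
    r_decay = np.sum(W ** 2) + np.sum(Y ** 2)   # the two regularization terms
    r_self = -2.0 * np.trace(W @ Y)             # the alignment-critical trace term
    return r_decay + r_self

# Example: a perfectly symmetric pair gives ||W^T - Y||_F^2 = 0.
W = np.random.randn(4, 3)
assert np.isclose(symmetric_alignment_cost(W, W.T), 0.0)
```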
At the start of every training epoch of a convolutional neural network, we use an RDD feedback weight training phase, during which all fully-connected sets of feedback weights in the network are updated. To perform these updates, we simulate a separate network of leaky integrate-and-fire (LIF) neurons. LIF neurons incorporate key elements of real neurons such as voltages, spiking thresholds and refractory periods. Each epoch, we begin by training the feedback weights in the LIF network. These weights are then transferred to the convolutional network, which is used for training the feedforward weights. The new feedforward weights are then transferred to the LIF net, and another feedback training phase with the LIF net starts the next epoch (Figure 2A). During the feedback training phase, the LIF network undergoes a training phase lasting 90 s of simulated time (30 s per set of feedback weights) (Figure 2B). We find that the spiking network used for RDD feedback training and the convolutional neural network are very closely matched in the activity of the units (Figure S1), which gives us confidence that this approach of using a separate non-spiking network for training the feedforward weights is legitimate. During the feedback training phase, a small subset of neurons in the first layer receive driving input that causes them to spike, while other neurons in this layer receive no input (see Appendix A.2). The subset of neurons that receive driving input is randomly selected every 100 ms of simulated time. This continues for 30 s in simulated time, after which the same process occurs for the subsequent hidden layers in the network. This protocol enforces sparse, de-correlated firing patterns that improve the causal inference procedure of RDD. During the RDD feedback training phase, each unit in the network is simulated as a leaky integrateand-fire neuron. Spiking inputs from the previous layer arrive at feedforward synapses, where they are convolved with a temporal exponential kernel to simulate post-synaptic spike responses p = [p 1, p 2, ..., p m] (see Appendix A.1). The neurons can also receive driving inputp i, instead of synaptic inputs. The total feedforward input to neuron i is thus defined as: where W ij is the feedforward weight to neuron i from neuron j in the previous layer, and ω is a hyperparameter. The voltage of the neuron, v i, evolves as: where g L and g D are leak and dendritic conductance constants, respectively. The input drive to the neuron, u i, is similarly modeled: If the voltage v i passes a spiking threshold θ, the neuron spikes and the voltage is reset to a value v reset = −1 (Figure 2C). Note that the input drive does not reset. This helps us to perform regressions both above and below the spike threshold. In addition to feedforward inputs, spiking inputs from the downstream layer arrive at feedback synapses, where they create post-synaptic spike responses q = [q 1, q 2, ..., q n]. These responses are used in the causal effect estimation (see below). Whenever the voltage approaches the threshold θ (ie. |v i − θ| < α where α is a constant), an RDD window is initiated, lasting T = 30 ms in simulated time (Figure 2C). 
At the end of this time window, at each feedback synapse we record the maximum input drive during the RDD window, u_i^max, and the average change in the feedback response, Δq_k^avg. u_i^max provides a measure of how strongly neuron i was driven by its inputs (and whether or not it passed the spiking threshold θ), while Δq_k^avg is a measure of how the input received as feedback from neuron k changed after neuron i was driven close to its spiking threshold. These two values are then used to fit a piece-wise linear model of Δq_k^avg as a function of u_i^max (Figure 2D). This piece-wise linear model consists of a separate linear fit on either side of the spiking threshold. The parameters c_ik of the fit are updated to perform linear regression using gradient descent. An estimate of the causal effect of neuron i spiking on the activity of neuron k, β_ik, is then defined as the difference between the two sides of the piece-wise linear function at the spiking threshold. Finally, the weight at the feedback synapse, Y_ik, is updated to be a scaled version of β_ik, where γ is a hyperparameter and σ_β is the standard deviation of β values for all feedback synapses in the layer. This ensures that the scale of the full set of feedback weights between two layers in the network remains stable during training. To measure how well the causal effect estimate at each feedback synapse, β_ik, and thus the feedback weight Y_ik, reflects the reciprocal feedforward weight W_ki, we can measure the percentage of feedback weights that have the same sign as the reciprocal feedforward weights (Figure 3A). When training on CIFAR-10 with no RDD feedback training phase (i.e., feedback weights remain fixed throughout training), the feedback alignment effect somewhat increases the sign alignment during training, but it is ineffective at aligning the signs of weights in earlier layers in the network. Compared to feedback alignment, the addition of an RDD feedback training phase greatly increases the sign alignment between feedback and feedforward weights for all layers in the network, especially at earlier layers. In addition, the RDD algorithm increases sign alignment throughout the hierarchy more than the current state-of-the-art algorithm for weight alignment introduced recently by Akrout et al. (Figure 3A). Furthermore, RDD feedback training changes feedback weights to not only match the sign but also the magnitude of the reciprocal feedforward weights (Figure 3B), which makes it better for weight alignment than hard-wired sign symmetry. Figure 3: A. Evolution of sign alignment (the percent of feedforward and feedback weights that have the same sign) for each fully-connected layer in the network when trained on CIFAR-10 using RDD feedback training (blue), using the Akrout algorithm (purple), and using feedback alignment (red). B. Feedforward vs. feedback weights for each fully-connected layer at the end of training, with RDD feedback training (blue), the Akrout algorithm (purple), and feedback alignment (red). The symmetric alignment cost function (Equation 1) can be broken down as $R_{SA} = R_{decay} + R_{self}$, where we define $R_{decay} = \|W\|_F^2 + \|Y\|_F^2$ and $R_{self} = -2\,\mathrm{tr}(WY)$. R_decay is simply a weight regularization term that can be minimized using techniques like weight decay. R_self, in contrast, measures how well aligned in direction the two matrices are. Our learning rule for feedback weights minimizes the R_self term for weights throughout the network (Figure 4). By comparison, feedback alignment decreases R_self to a smaller extent, and its ability to do so diminishes at earlier layers in the network. This helps to explain why our algorithm induces weight alignment, and can improve training performance (see below).
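To make the per-synapse RDD estimator concrete, here is a minimal sketch of the computation described above: a separate linear fit on each side of the spiking threshold, the causal-effect estimate β_ik as the jump at the threshold, and the feedback weight as a scaled, spread-normalised version of β_ik. The parameterisation, variable names, and learning rate are assumptions filled in around the description, since the exact equations are not reproduced in the text.

```python
import numpy as np

def piecewise_prediction(c, u_max, theta):
    """c = [a_below, b_below, a_above, b_above]: separate linear fit on each side of theta."""
    if u_max < theta:
        return c[0] + c[1] * u_max
    return c[2] + c[3] * u_max

def rdd_regression_step(c, u_max, dq_avg, theta, lr=0.01):
    """One gradient-descent step on the squared regression error for a single RDD sample."""
    err = piecewise_prediction(c, u_max, theta) - dq_avg
    grad = np.zeros(4)
    if u_max < theta:
        grad[0], grad[1] = err, err * u_max
    else:
        grad[2], grad[3] = err, err * u_max
    return np.asarray(c, dtype=float) - lr * grad

def causal_effect(c, theta):
    """beta_ik: the jump in the fitted piece-wise linear function at the spiking threshold."""
    return (c[2] + c[3] * theta) - (c[0] + c[1] * theta)

def feedback_weight(beta_ik, beta_layer, gamma=1.0):
    """Y_ik as a scaled version of beta_ik, normalised by the spread of beta across the layer."""
    return gamma * beta_ik / np.std(beta_layer)
```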
Figure 4: Evolution of Rself for each fully-connected layer in the network when trained on CIFAR-10 using RDD feedback training (solid lines) and using feedback alignment (dashed lines). RDD feedback training dramatically decreases this loss compared to feedback alignment, especially in earlier layers. We trained the same network architecture (see Appendix A.3) on the Fashion-MNIST, SVHN, CIFAR-10 and VOC datasets using standard autograd techniques (backprop), feedback alignment and our RDD feedback training phase. RDD feedback training substantially improved the network's performance over feedback alignment, and led to backprop-level accuracy on the train and test sets (Figure 5). In order to understand how the brain learns complex tasks that require coordinated plasticity across many layers of synaptic connections, it is important to consider the weight transport problem. Here, we presented an algorithm for updating feedback weights in a network of spiking neurons that takes advantage of the spiking discontinuity to estimate the causal effect between two neurons (Figure 2). We showed that this algorithm enforces weight alignment (Figure 3), and identified a loss function, R self, that is minimized by our algorithm (Figure 4). Finally, we demonstrated that our algorithm allows deep neural networks to achieve better learning performance than feedback alignment on Fashion-MNIST and CIFAR-10 (Figure 5). These demonstrate the potential power of RDD as a means for solving the weight transport problem in biologically plausible deep learning models. One aspect of our algorithm that is still biologically implausible is that it does not adhere to Dale's principle, which states that a neuron performs the same action on all of its target cells (Strata & Harvey). This means that a neuron's outgoing connections cannot include both positive and negative weights. However, even under this constraint, a neuron can have an excitatory effect on one downstream target and an inhibitory effect on another, by activating intermediary inhibitory interneurons. Because our algorithm provides a causal estimate of one neuron's impact on another, theoretically, it could capture such polysynaptic effects. Therefore, this algorithm is in theory compatible with Dale's principle. Future work should test the effects of this algorithm when implemented in a network of neurons that are explicitly excitatory or inhibitory. A APPENDIX Post-synaptic spike responses at feedforward synapses, p, were calculated from pre-synaptic binary spikes using an exponential kernel function κ: wheret jk is the k th spike time of input neuron j and κ is given by: where τ s = 0.003 s and τ L = 0.01 s represent short and long time constants, and Θ is the Heaviside step function. Post-synaptic spike responses at feedback synapses, q, were computed in the same way. A.2.1 WEIGHT SCALING Weights were shared between the convolutional network and the network of LIF neurons, but feedforward weights in the LIF network were scaled versions of the convolutional network weights: where W Conv is a feedforward weight matrix in the convolutional network, W LIF is the corresponding weight matrix in the LIF network, m is the number of units in the upstream layer (ie. the number of columns in W Conv), σ 2 W Conv is the standard deviation of W Conv and ψ is a hyperparameter. 
This rescaling ensures that spike rates in the LIF network stay within an optimal range for the RDD algorithm to converge quickly, even if the scale of the feedforward weights in the convolutional network changes during training. This avoids situations where the scale of feedforward weights is so small that little or no spiking occurs in the LIF neurons. The RDD feedback training paradigm is implemented as follows. We start by providing driving input to the first layer in the network of LIF neurons. To create this driving input, we choose a subset of 20% of the neurons in that layer, and create a unique input spike train for each of these neurons using a Poisson process with a rate of 200 Hz. All other neurons in the layer receive no driving input. Every 100 ms, a new set of neurons to receive driving input is randomly chosen. After 30 s, this layer stops receiving driving input, and the process repeats for the next layer in the network. The network architectures used to train on Fashion-MNIST and CIFAR-10 are described in Table 1. Inputs were randomly cropped and flipped during training, and batch normalization was used at each layer. Networks were trained using a minibatch size of 32. A.4 AKROUT ALGORITHM IMPLEMENTATION In experiments that compared sign alignment using our RDD algorithm with the algorithm, we kept the same RDD feedback training paradigm (ie. layers were sequentially driven, and a small subset of neurons in each layer was active at once). However, rather than updating feedback weights using RDD, we recorded the mean firing rates of the active neurons in the upstream layer, r l, and the mean firing rates in the downstream layer, r l+1. We then used the following feedback weight update rule: where Y are the feedback weights between layers l + 1 and l, and η and λ WD are learning rate and weight decay hyperparameters, respectively. Figure S1: Comparison of average spike rates in the fully-connected layers of the LIF network vs. activities of the same layers in the convolutional network, when both sets of layers were fed the same input. Spike rates in the LIF network are largely correlated with activities of units in the convolutional network.
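For completeness, a sketch of the feedback update used in the Akrout-algorithm comparison of Appendix A.4 is given below. The description above mentions mean firing rates of the driven upstream layer and the downstream layer together with learning-rate and weight-decay hyperparameters, but does not reproduce the equation, so the Hebbian outer-product form here is an assumption.

```python
import numpy as np

def akrout_style_update(Y, r_upstream, r_downstream, eta=0.01, weight_decay=1e-4):
    """Sketch of the feedback weight update used for the Akrout-algorithm comparison.

    Y:            feedback weights from layer l+1 to layer l, shape (n_l, n_{l+1})
    r_upstream:   mean firing rates of the driven layer l, shape (n_l,)
    r_downstream: mean firing rates of layer l+1, shape (n_{l+1},)

    Assumed form: a Hebbian outer product of the two rate vectors plus weight decay.
    """
    return Y + eta * np.outer(r_upstream, r_downstream) - weight_decay * Y
```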
rJxWxxSYvB
We present a learning rule for feedback weights in a spiking neural network that addresses the weight transport problem.
Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair $q(z|x)$/$p(x|z)$ can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically \textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose \textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the \textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the \textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space in a random walk over the reconstructed manifold. Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models. Recent developments in generative models such as variational auto-encoders and generative adversarial networks have made it possible to sample remarkably realistic points from complex high dimensional distributions at low computational cost. While their methods are very different -one is derived from variational inference and the other from game theory -their ends both involve learning smooth mappings from a user-defined prior distribution to the modeled distribution. These maps are closely tied to manifold learning when the prior is supported over a Euclidean space (e.g. Gaussian or uniform priors) and the data lie on a manifold (also known as the Manifold Hypothesis, see ;). This is because manifolds themselves are defined by sets that have homeomorphisms to such spaces. Learning such maps is beneficial to any machine learning task, and may shed light on the success of VAEs and GANs in modeling complex distributions. Furthermore, the connection to manifold learning may explain why these generative models fail when they do. Known as posterior collapse in VAEs (; ; ;) and mode collapse in GANs , both describe cases where the forward/reverse mapping to/from Euclidean space collapses large parts of the input to a single output. This violates the bijective requirement of the homeomorphic mapping. It also in degenerate latent spaces and poor generative performance. A major cause of such failings is when Figure 1: A diagram depicting one step of the diffusion process modeled by the variational diffusion autoencoder (VDAE). The diffusion and inverse diffusion maps ψ, ψ −1, as well as the covariance C of the random walk on M Z, are all approximated by neural networks. 
the geometries of the prior and target data do not agree. We explore this issue of prior mismatch and previous treatments of it in Section 3. Given their connection to manifold learning, it is natural to look to classical approaches in the field for ways to improve VAEs. One of the most principled methods is spectral learning (Schölkopf et al., 1998), which involves describing data from a manifold X ⊂ M_X by the eigenfunctions of a kernel on M_X. We focus specifically on DMs, where prior work shows that normalizations of the kernel approximate a very specific diffusion process, the heat kernel over M_X. A crucial property of the heat kernel is that, like its physical analogue, it defines a diffusion process that has a uniform stationary distribution -- in other words, drawing from this stationary distribution draws uniformly from the data manifold. Prior work also established another crucial property of DMs, namely that distances in local neighborhoods in the eigenfunction space are nearly isometric to corresponding geodesic distances on the manifold. However, despite these strong theoretical guarantees, DMs are poorly equipped for large-scale generative modeling, as they are not easily scalable and do not provide an inverse mapping from the intrinsic feature space. In this paper we address issues in variational inference and manifold learning by combining ideas from both. Theory in manifold learning allows us to better recognize prior mismatch, whereas variational inference provides a method to learn the difficult-to-approximate inverse diffusion map. Our contributions: 1) We introduce the locally bi-Lipschitz property, a sufficient condition for a homeomorphism, for measuring the stability of a mapping between latent and data distributions. 2) We introduce VDAEs, a class of variational autoencoders whose encoder-decoder feedforward pass approximates the diffusion process on the data manifold with respect to a user-defined kernel k. 3) We show that deep neural networks are capable of learning such diffusion processes, and 4) that networks approximating this process produce random walks that have certain desirable properties, including well defined transition and stationary distributions. 5) Finally, we demonstrate the utility of the VDAE framework on a set of real and synthetic datasets, and show that it has superior performance and satisfies the locally bi-Lipschitz property where GANs and VAEs do not. Variational inference is a machine learning method that combines Bayesian statistics and latent variable models to approximate some probability density p(x). VI assumes and exploits a latent variable structure in the assumed data generation process, that the observations x ∼ p(x) are conditionally distributed given unobserved latent variables z. By modeling the conditional distribution, then marginalizing over z, as in $p_\theta(x) = \int p_\theta(x|z)\, p(z)\, dz$, we obtain the model evidence, or likelihood that x could have instead been drawn from p_θ(x). Maximizing Eq. 1 leads to an algorithm for finding likely approximations of p(x). As the cost of computing this integral scales exponentially with the dimension of z, we instead maximize the evidence lower bound (ELBO): $\log p_\theta(x) \ge \mathbb{E}_{q(z|x)}\left[\log p_\theta(x|z)\right] - D_{KL}\left(q(z|x)\,\|\,p(z)\right)$, where q(z|x) is usually an approximation of p_θ(z|x). Optimizing the ELBO is sped up by taking stochastic gradients, and further accelerated by learning a global function approximator q_φ in an autoencoding structure.
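As a quick reference for the quantities above, the sketch below computes a single-sample Monte Carlo estimate of the ELBO for a Gaussian-prior VAE using the reparameterization trick. It is a minimal sketch under the standard Gaussian-prior assumption; the encoder and decoder callables are placeholders standing in for trained networks, not components defined in this paper.

```python
import numpy as np

def elbo_estimate(x, encoder, decoder, rng=np.random.default_rng(0)):
    """Single-sample ELBO estimate for a VAE with prior p(z) = N(0, I).

    encoder(x) -> (mu, log_var): parameters of q(z|x) = N(mu, diag(exp(log_var)))
    decoder(z, x) -> float:      log p(x|z) under the generative model
    """
    mu, log_var = encoder(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps                        # reparameterization trick
    log_px_given_z = decoder(z, x)                              # reconstruction term
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # KL(q(z|x) || N(0, I)), closed form
    return log_px_given_z - kl
```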
Diffusion maps, on the other hand, are a class of kernel methods that perform non-linear dimensionality reduction on a set of observations X ⊆ M_X, where M_X is the data manifold. Given a symmetric and positive kernel k, DM considers the induced random walk on the graph of X, where given x, y ∈ X, the transition probabilities p(y|x) = p(x, y) are row-normalized versions of k(x, y). Moreover, the diffusion map ψ embeds the data X ⊂ R^m into the Euclidean space R^D so that the diffusion distance is approximated by Euclidean distance. This is a powerful property, as it allows the arbitrarily complex random walk induced by k on M_X to become an isotropic Gaussian random walk on ψ(M_X). SpectralNet is an algorithm introduced in Shaham et al. (2018b) to speed up the diffusion map. Until recently, the map ψ could only be computed via the eigendecomposition of the kernel matrix K. As a result, DMs were only tractable for small datasets, or on larger datasets by combining landmark-based estimates and Nyström approximation techniques. However, Shaham et al. (2018b) propose approximations of the function ψ itself in the case that the kernel k is symmetric. In particular, we will leverage SpectralNet to enforce our diffusion embedding prior. Locally bi-Lipschitz coordinates by kernel eigenfunctions. Prior work analyzed the construction of local coordinates of Riemannian manifolds by Laplacian eigenfunctions and diffusion map coordinates. They establish, for all x ∈ X, the existence of some neighborhood U(x) and d spectral coordinates given U(x) that define a bi-Lipschitz mapping from U(x) to R^d. With a smooth compact Riemannian manifold, U(x) can be chosen to be a geodesic ball with radius a constant multiple of the inradius (the radius of the largest possible ball around x without intersecting with the manifold boundary), where the constant is uniform for all x, but the indices of the d spectral coordinates as well as the local bi-Lipschitz constants may depend on x. Specifically, the Lipschitz constants involve the inverse of the inradius at x multiplied again by some global constants. For completeness we give a simplified statement of the result in the supplementary material. Using the compactness of the manifold, one can always cover the manifold with m many neighborhoods (geodesic balls) on which the bi-Lipschitz property holds. As a result, there are a total of D spectral coordinates, D ≤ md (in practice D is much smaller than md, since the selected spectral coordinates in the proof tend to be low-frequency ones, and thus the selection on different neighborhoods tends to overlap), such that on each of the m neighborhoods, there exists a subset of d spectral coordinates out of the D ones which are bi-Lipschitz on the neighborhood, and the Lipschitz constants can be bounded uniformly from below and above. Our proposed measure and model are motivated by degenerate latent spaces and poor generative performance in a variational inference framework arising from prior mismatch: when the topologies of the data and prior distributions do not agree. In real-world data, this is usually due to two factors: first, when the dimensionalities of the distributions do not match, and second, when the geometries do not match. It is easy to see that homeomorphisms between the distributions will not exist in either case: pointwise correspondences cannot be established, thus the bijective condition cannot be met.
As a , the model has poor generative performance -for each point not captured in the pointwise correspondence, the latent or generated distribution loses expressivity. Though the default choice of Gaussian distribution for p(z) is mathematically elegant and computationally expedient, there are many datasets, real and synthetic, for which this distribution is ill-suited. It is well known that spherical distributions are superior for modeling directional data , which can be found in fields as diverse as bioinformatics , geology , material science , natural image processing , and simply preprocessed datasets 1. Additionally observe that no homeomorphism exists between R k and S 1 for any k. For data distributed on more complex manifolds, the literature is sparse due to the difficult nature of such study. However, the manifold hypothesis is well-known and studied . Previous research on alleviating prior mismatch exists.; consider VAEs with the von-Mises Fisher prior, a geometrically hyperspherical prior. further model arbitrarily complex manifolds as priors, but require explicit knowledge of the manifold (i.e. its projection map, scalar curvature, and volume). consider mixtures of any pre-existing priors. But while these methods increase the expressivity of the priors available, they do not prescribe a method for choosing the prior itself. That responsibility still lies with the user. Convserly, our method chooses the best prior automatically. To our knowledge, ours is the first to take a data-driven approach to prior selection. By using some data to inform the prior, we not only guarantee the existence of a homeomorphism between data and prior distributions, we explicitly define it by the learned diffusion mapψ. In this section we propose VDAEs, a variational inference method that, given the data manifold M X, observations X ⊂ M X, and a kernel k, models the geometry of X by approximating a random walk over the latent diffusion manifold M Z:= ψ(M X). The model is trained by maximizing the local evidence: the evidence (i.e. log-likelihood) of each point given its random walk neighborhood. Points are generated from the trained model by sampling from π, the stationary distribution of the ing random walk. Starting from some point x ∈ X, we can roughly describe one step of the walk as the composition of three functions: 1) the approximate diffusion mapψ Θ: M X → M Z, 2) a sampling procedure from the learned diffusion process z ∼ q φ (z |x) = N (ψ Θ (x),C φ ) on M Z, and 3) the learned inverse diffusion mapψ where the constant c is user-defined and fixed. We rely crucially on three advantages of our latent spaceψ Θ (X): a) that it is well-defined (given the first D eigenvalues of k are distinct), b) well-approximated (given SpectralNet) and c) that Euclidean distances in M Z approximate single-step random walk distances on M X (see Section 2 and). Thus the transition probabilities induced by k can be approximated by Gaussian kernels 2 in M Z. Therefore, to model a diffusion random walk over M Z, we must learn the functionsψ Θ,ψ −1 θ,C φ that approximate the diffusion map, its inverse, and the covariance of the random walk on M Z, at all points z ∈ M Z. SpectralNet gives usψ Θ. To learnψ −1 θ andC φ, we use variational inference. Formally, let us define U x:= B d (x, δ) ∩ M X, where B d (x, δ) as the δ-ball around x with respect to d(·, ·), the diffusion distance on M Z. For each x ∈ X, we define the local evidence of x as where p(x |x)| Ux is the restriction of p(x |x) to U x. 
The ing local evidence lower bound is: Note that the neighborhood reconstruction error should be differentiated from the self reconstruction error that is in VAEs. Eq. 4 produces the empirical loss function: where is the deterministic, differentiable function, depending onψ Θ andC φ, that generates q φ by the reparameterization trick 3 . Algorithm 1 VDAE training Θ, φ, θ ← Initialize parameters Obtain parameters Θ for the approximate diffusion mapψ Θ by Shaham et al. (2018b) while not converged do Take one step of the diffusion random walk Compute gradients of the loss, i.e. Eq. equation 4 Update φ, θ using g Here we discuss the algorithm for generating data points from p(x). Composing q φ (z |x)(≈ p θ (z |x)) with p θ (x |z) gives us an approximation of p θ (x |x). Then the simple, parallelizable, and fast random walk based sampling procedure naturally arises: initialize with an arbitrary point on the manifold x 0 ∈ M X, then pick suitably large N and for n = 1,..., N draw x n ∼ p(x|x n−1). Eventually, our diffusion random walk converges on its stationary distribution π. , this is guaranteed to be the uniform distribution on the data manifold. See Section 6.2 for examples of points drawn from this procedure. We now introduce a practical implementation VDAEs, considering the case whereψ Θ (x), q φ (z |x) and p θ (x |z) are neural network functions, as they are in VAEs and SpectralNet, respectively. The neighborhood reconstruction error. Since q φ (z |x) models the neighborhood ofψ Θ (x), we may sample q φ to obtain z (the neighbor of x in the latent space). This gives p θ (x |x) ≈ ψ −1 θ (q φ (z |x)), where ψ −1 exists due to the bi-Lipschitz property. We can efficiently approximate x ∈ M X by considering the closest embedded data pointψ Θ (x) ∈ M Z to z =ψ Θ (x). This is because Euclidean distance on M Z approximates the diffusion distance on M X. In other words, x ∼ p θ (x |x) ≈ψ −1 θ (q φ (z |x)) which we approximate empirically by where A ⊆ X is the training batch. On the other hand, the divergence of random walk distributions −D KL (q φ (z |x)||p θ (z |x)) can be modeled simply as the divergence of two Gaussian kernels defined on M Z. Though p θ (z |x) is intractable, the diffusion map ψ gives us the diffusion embedding Z, which is an approximation of the true distribution of p θ (z |x) in a neighborhood around z = ψ(x). We estimate the first and second moments of this distribution in R D by computing the local Mahalanobis distance of points in the neighborhood. Then, by minimizing the KL divergence between q φ (z |x) and the one implied by this Mahalanobis distance, we obtain the loss: where is the covariance of the points in a neighborhood of z = ψ(x) ∈ Z, and α is a scaling parameter. Note that C φ (x) does not have to be diagonal, and in fact is most likely not. Combining Eqs. 6 and 7 we obtain Algorithm 1. Now we consider the sampling procedure. Since we use neural networks to approximate q φ (z |x) and p θ (x |z), the generation procedure is highly parallelizable. We empirically observe the random walk enjoys rapid mixing properties -it does not take many iterations of the random walk to sample from all of M Z 4. This leads to Algorithm 2. 
Algorithm 2 VDAE sampling Take one step of the diffusion random walk Map back into input space t ← t + 1 We theoretically prove that the desired inverse map ψ −1 from spectral coordinate codes back to the manifold can be approximated by a decoder network, where the network complexity is bounded by quantities related to the intrinsic geometry of the manifold. This section relies heavily on the known bi-Lipschitz property of , which we are approximating with the VDAE latent space without the need for regularization. The theory for the capacity of the encoder to map M to the diffusion map space ψ(M) has already been considered in Shaham et al. (2018a) and. We instead focus on the decoder, which requires a different treatment. The following theorem is proved in Appendix A.3, based upon the in. to have a subset of coordinates that are locally bi-Lipschitz. Let X = [X 1, ..., X m] be the set of all m extrinsic coordinates of the manifold. Then there exists a sparsely-connected ReLU network f N, with 4DC M X nodes in the first layer, 8dmN nodes in the second layer, and 2mN nodes in the third layer, and m nodes in the output layer, such that where the norm is interpreted as Here C ψ depends on how sparsely X(ψ(x)) Ui can be represented in terms of the ReLU wavelet frame on each neighborhood U i, and C M X on the curvature and dimension of the manifold M X. Theorem 1 is complementary to the theorem in Shaham et al. (2018a), which provides guarantees for the encoder, as Theorem 1 demonstrates a similar approximation theoretic argument for the decoder. The proof is built on two properties of ReLU neural networks: 1) their ability to split curved domains into small, almost Euclidean patches, 2) their ability to build differences of bump functions VDAE SVAE VAE GAN Figure 2: Reconstructed images from the rotating bulldog example plotted in the latent space of VDAE (left), Spherical VAE (SVAE, left-middle) and VAE (right-middle), and GAN (right) on each patch, which allows one to borrow approximation from the theory of wavelets on spaces of homogeneous type. The proof also crucially uses the bi-Lipschitz property of the diffusion embedding. The key insight of Theorem 1 is that, because of the bi-Lipschitz property, the coordinates of the manifold in the ambient space R m can be thought of as functions of the diffusion coordinates. We show that because each of coordinates function X i is a Lipschitz function, the ReLU wavelet coefficients of X i are necessarily 1. This allows us to use the existing guarantees of Shaham et al. (2018a) to complete the desired bound. We also discuss the connections between the distribution at each point in diffusion map space, q φ (z|x), and the of this distribution after being decoded through the decoder network f N (z) for z ∼ q φ (z|X). Similar to , we characterize the covariance matrix The following theorem is proved in Appendix A.3. Theorem 2. Let f N be a neural network approximation to X as in Theorem 1, such that it approximates the extrinsic manifold coordinates. Let C ∈ R m×m be the covariance matrix, Σ) with small enough Σ that there exists a patch U z0 ⊂ M around z 0 satisfying the bi-Lipschitz property of , and such that P r(z ∼ q φ (z|x) ∈ ψ(U z0)) <. Then the number of eigenvalues of C greater than is at most d, and C = J z0 ΣJ 6 EXPERIMENTAL We consider the problem of generating new frames from a video of rigid movement. We take 200 frames of a color video (each frame is 100 × 80 × 3) of a spinning bulldog. 
Due to the spinning of figure and the fixed , this creates a low-dimensional approximately circular manifold. We compare our method to VAE, the (with a bi-lipchitz constraint on the critic), and the hyperspherical. For the VAE, we use a two dimensional Gaussian prior p θ (z), such that z ∼ N (0, I 2). The noise injected to the GAN is drawn from a two dimensional uniform distribution p θ (z), such that z i ∼ U, i = 1, 2. For the spherical VAE, we use a latent dimension of D = 2, which highlights the dimension mismatch issue that occurs with a spherical prior. This is a benefit of VDAE, even if we choose D > d the latent embedding will still only be locally d dimensional. We use the same architecture for all networks which consists of one hidden layer with 512 neurons, activation function for all networks are tanh. In Fig. 2, we present 300 generated samples, by displaying them on a scatter plot with coordinates corresponding to their latent dimensions z 1 and z 2. In this series of experiments, we visualize the of the sampling procedure in Algorithm 2 on three synthetic manifolds. As discussed in 4.2, we randomly select an initial seed point, then recursively sample from p θ (x |x) many times to simulate a random walk on the manifold. In the top row of Fig. 3, we highlight the location of the initial seed point, take 20 steps of the random walk, and display the ing generated points on three learned manifolds. Clearly after a large number of resampling iterations, the algorithm continues to generate points on the manifold, and the distribution of sampled points converges to a uniform stationary distribution on the manifold. Moreover, this stationary distribution is reached very quickly. In the bottom row of the same Fig. 3, we show p θ (x |x) by sampling a large number of points sampled from the single seed point. As can be seen, a single step of p θ (x |x) covers a large part of the latent space. The architecture also uses one hidden layer of 512 neurons and tanh activations. In this section, we deal with the problem of generating samples from data with multiple clusters in an unsupervised fashion (i.e. no a priori knowledge of the cluster structure). Clustered data creates a problem for many generative models, as the topology of the latent space (i.e. normal distribution) differs from the topology of the data space with multiple clusters. In our first experiment, we show that our method is capable of generating new points from a particular cluster given an input point from that cluster. This generation is done in an unsupervised fashion, which is a different setting from the approach of conditional that require training labels. We demonstrate this property on MNIST in Figure 4, and show that the newly generated points after short diffusion time remain in the equivalent class to the seeded image. Here the architecture is a standard fully convolutional architecture. Details can be found in Appendix A.4. Figure 4: An example of cluster conditional sampling with our method, given a seed point (top left of each image grid). The DVAE is able to produce examples via the random walk that stay approximately within the cluster of the seed point, without any supervised knowledge of the cluster. The problem of addressing difference in topologies between the latent space of a generative model and the output data has been acknowledged in recent works about rejection sampling . 
Rejection sampling of neural networks consists of generating a large collection of samples using a standard GAN, and then designing a probabilistic algorithm to decide in a post-hoc fashion whether the points were truly in the support of the data distribution p(x). In the following experiment, we compare to the standard example in the generative model literature. The data consists of nine bounded spherical densities with significant minimal separation, lying on a 5 × 5 grid. A standard GAN or VAE struggles to avoid generating points in the gaps between Figure 5: Comparison between GAN, DRS-GAN, and our samples on a 5 × 5 Gaussian grid. GAN and DRS-GAN samples taken from. Shown from left-right are Original, GAN, DRS-GAN, and our method. these densities, and thus requires the post-sampling rejection analysis. On the other hand, our model creates a latent space that separates each of these clusters into their own features and only generates points that exist in the neighborhood of training data. Figure 5 clearly shows that this in significantly fewer points generated in the gaps between clusters, as well as eliminating the need to generate additional points that are not in final generated set. Our VDAE architecture here uses one hidden layer of 512 neurons and tanh activations. GAN and DRS-GAN architectures are as described in. Here we describe a practical method for computing the local bi-Lipschitz property, then use it to evaluate several methods on the MNIST dataset. Let Z and X be metric spaces and f: Z → X. We define, for each z ∈ Z and k ∈ N, the function bilip k (z): where Z:= f −1 (X) is the latent embedding of our dataset X 5, d X and d Z are metrics on X and Z, and U z,k is the k-nearest neighborhood of z. Intuitively, increasing values of K can be thought of as an increasing tendency of the learned map to stretch or compress regions of space. By analyzing various statistics of the local bi-Lipschitz measure evaluated at all points of a latent space Z, we can gain insight into how well-behaved a homeomorphism f is. In Table 1 we report the mean and standard deviation, over 10 runs, of the local bi-Lipschitz property for several methods trained on the MNIST dataset. The comparison is between the Wassertein GAN (WGAN), the VAE, the hyperspherical VAE (SVAE), and our method. We use standard architectures prescribed by their respective papers to train the methods. For our method we use a single 500 unit hidden layer network architecture with ReLU nonlinearities for both the encoder and decoder. By constraining our latent space to be the diffusion embedding of the data, our method finds a mapping that automatically enjoys the homeomorphic properties of an ideal mapping, and this is reflected in the low values of the local bi-Lipschitz constant. Conversely, other methods do not consider the topology of the data in the prior distribution. This is especially appparent in the VAE and SVAE, which must generate from the entirety of the input distribution X since they minimize a reconstruction loss. Interestingly, the mode collapse tendency of GANs alleviate the pathology of the bi-Lipschitz constant by allowing the GAN to focus on a subset of the distribution -but this comes at the cost, of course, of collapsing to a few modes of the dataset. Our method is able to reconstruct the entirety of X while simultaneously maintaining a low local bi-Lipschitz constant. We begin with taking the log of the random walk transition likelihood, where q(z) is an arbitrary distribution. 
We let q(z) to be the conditional distribution q(z |x). Furthermore, if we make the simplifying assumption that p θ (x |z, z) = p θ (x |z), then we obtain Eq. 4 To state the in , we need the following set-up: (C1) M is a d-dimensional smooth compact manifold, possibly having boundary, equipped with a smooth (at least C 2) Riemannian metric g; We denote the geodesic distance by d M, and the geodesic ball centering at x with radius r by B M (x, r). Under (C1), for each point x ∈ M, there exists r M (x) which is the inradius, that is, r is the largest number s.t. B M (x, r) is contained M. Let M be the Laplacian-Beltrami operator on M with Neumann boundary condition, which is self-adjoint on L 2 (M, µ), µ being the Riemannian volume given by g. Suppose that M is re-scaled to have volume 1. The next condition we need concerns the spectrum of the manifold Laplacian (C2) M has discrete spectrum, and the eigenvalues λ 0 ≤ λ 1 ≤ · · · satisfy the Weyl's estimate, i.e. exists constant C which only depends on M s.t. Let ψ j be the eigenfunction associated with λ j, {ψ j} j form an orthonormal bases of L 2 (M, µ). The last condition is (C3) The heat kernel (defined by the heat equation on M) has the spectral representation as That is, Ψ is bi-Lipschitz on the neighborhood B(x, c 1 r M (x)) with the Lipschitz constants indicated as above. The subscript x in Ψ x emphasizes that the indices j 1, · · ·, j d may depend on x. Proof of Theorem 1. The proof of Theorem 1 is actually a simple extension of the following theorem, Theorem 4, which needs to be proved for each individual extrinsic coordinate X k, hence the additional factor of m coming from the L2 norm of m functions. Theorem 4. Let M ⊂ R m be a smooth d-dimensional manifold, ψ(M) ⊂ R D be the diffusion map for D ≥ d large enough to have a subset of coordinates that are locally bi-Lipschitz. Let one of the m extrinsic coordinates of the manifold be denoted X(ψ(x)) for x ∈ M. Then there exists a sparsely-connected ReLU network f N, with 4DC M nodes in the first layer, 8dN nodes in the second layer, and 2N nodes in the third layer, such that where C ψ depends on how sparsely X(ψ(x)) Ui can be represented in terms of the ReLU wavelet frame on each neighborhood U i, and C M on the curvature and dimension of the manifold M. Proof of Theorem 4. The proof borrows from the main theorem of Shaham et al. (2018a). We adopt this notation and summarize the changes in the proof here. For a full description of the theory and guarantees for neural networks on manifolds, see Shaham et al. (2018a). Let C M be the number of neighborhoods U i = B(x i, δ) ∩ M needed to cover M such that ∀x, y ∈ U i, (1 First, we note that as in Shaham et al. (2018a), the first layer of a neural network is capable of using 4D units to select the subset of d coordinates ψ(x) from ψ(x) for x ∈ U i and zeroing out the other D−d coordinates with ReLU bump functions. Then we can define X(ψ(x)) = X(ψ(x)) on x ∈ U i. Now to apply the theorem from Shaham et al. (2018a), we must establish that X Ui: ψ(U i) → R can be written efficiently in terms of ReLU functions. Because of the manifold and diffusion metrics being bi-Lipschitz, we know at a minimum that ψ is invertible on ψ(U i). Because of this invertibility, we will slightly abuse notation and refer to X(ψ(x)) = X(x), where this is understood to be the extrinsic coordinate of the manifold at the point x that cooresponds to ψ(x). 
we also know that ∀x, y ∈ U i, |X(ψ(x)) − X(ψ(y))| = |X(x) − X(y)| ≤ max where ∇X(z) is understood to be the gradient of X(z) at the point z ∈ M. This means X(ψ(x)) is a Lipschitz function w.r.t. ψ(x). Because X(ψ(x)) Lipschitz continuous, it can be approximated by step functions on a ball of radius 2 − to an error that is at most with the fact that ψ(U i) is compact, gives the fact that on ψ(U i), set of ReLU wavelet coefficients is in 1. And from Shaham et al. (2018a), if on a local patch the function is expressible in terms of ReLU wavelet coefficients in 1, then there is an approximation rate of 1 √ N for N ReLU wavelet terms.
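To complement the theory above, here is one plausible implementation of the local bi-Lipschitz evaluation described in the experiments: for each latent point we look at its k nearest latent neighbours and record the smallest constant K satisfying the two-sided bound (1/K) d_Z ≤ d_X ≤ K d_Z. Since the exact definition of bilip_k is not reproduced in the text, the max-over-ratios formula below is an assumption, and the use of scipy's KD-tree is purely for convenience.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_bilipschitz(Z, X, k=10):
    """Per-point local bi-Lipschitz constants for a map taking latent points Z to data points X.

    Z: (n, d) latent embeddings, X: (n, m) corresponding decoded/data points.
    For each z, over its k nearest latent neighbours z', we take the smallest K with
    (1/K) * d_Z(z, z') <= d_X(f(z), f(z')) <= K * d_Z(z, z').
    Lower values indicate a better-behaved (closer to isometric) mapping.
    """
    tree = cKDTree(Z)
    consts = np.zeros(len(Z))
    for i, z in enumerate(Z):
        _, idx = tree.query(z, k + 1)                      # includes the point itself
        idx = [j for j in idx if j != i]
        dz = np.linalg.norm(Z[idx] - z, axis=1)
        dx = np.linalg.norm(X[idx] - X[i], axis=1)
        ratio = dx / np.maximum(dz, 1e-12)
        consts[i] = np.max(np.maximum(ratio, 1.0 / np.maximum(ratio, 1e-12)))
    return consts
```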
rkg8FJBYDS
We combine variational inference and manifold learning (specifically VAEs and diffusion maps) to build a generative model based on a diffusion random walk on a data manifold; we generate samples by drawing from the walk's stationary distribution.
While deep learning and deep reinforcement learning systems have demonstrated impressive in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single task learning are not fully understood. Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as “gradient surgery”. We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way. While deep learning and deep reinforcement learning (RL) have shown considerable promise in enabling systems to perform complex tasks, the data requirements of current methods make it difficult to learn a breadth of capabilities particularly when all tasks are learned individually from scratch. A natural approach to such multi-task learning problems is to train a single network on all tasks jointly, with the aim of discovering shared structure across the tasks in a way that achieves greater efficiency and performance than solving the tasks individually. However, learning multiple tasks all at once in a difficult optimization problem, sometimes leading to worse overall performance and data efficiency compared to learning tasks individually (; a). These optimization challenges are so prevalent that multiple multi-task RL algorithms have considered using independent training as a subroutine of the algorithm before distilling the independent models into a multi-tasking model; a;; ), producing a multi-task model but losing out on the efficiency gains over independent training. If we could tackle the optimization challenges of multi-task learning effectively, we may be able to actually realize the hypothesized benefits of multi-task learning without the cost in final performance. While there has been a significant amount of research in multi-task learning , the optimization challenges are not well understood. Prior work has described varying learning speeds of different tasks and plateaus in the optimization landscape as potential causes, while a range of other works have focused on the model architecture (b;). In this work, we instead hypothesize that the central optimization issue in multi-task learning arises from gradients from different tasks conflicting with one another. In particular, we define two gradients to be conflicting if they point away from one another (i.e., have a negative cosine similarity). As a concrete example, consider the 2D optimization landscapes of two task objectives shown in Figure 1. 
The optimization landscape of each task consists of a deep valley, a property that has been characterized for neural network optimization landscapes in the past. When considering the combined optimization landscape for multiple tasks, SGD produces gradients that struggle to efficiently find the optimum. This occurs due to a gradient thrashing phenomenon, where the gradient of one task destabilizes optimization in the valley. We can observe this in Figure 1 (d) when the optimization reaches the deep valley of task 1, but is prevented from traversing the valley to an optimum. In Section 6.2, we find experimentally that this thrashing phenomenon also occurs in a neural network multi-task learning problem. The core contribution of this work is a method for mitigating gradient interference by altering the gradients directly, i.e. by performing "gradient surgery". If two gradients are conflicting, we alter the gradients by projecting each onto the normal plane of the other, preventing the interfering components of the gradient from being applied to the network. We refer to this particular form of gradient surgery as projecting conflicting gradients (PCGrad). PCGrad is model-agnostic, requiring only a single modification to the application of gradients. Hence, it is easy to apply to a range of problem settings, including multi-task supervised learning and multi-task reinforcement learning, and can also be readily combined with other multi-task learning approaches, such as those that modify the architecture. We evaluate PCGrad on multi-task CIFAR classification, multi-objective scene understanding, a challenging multi-task RL domain, and goal-conditioned RL. Across the board, we find PCGrad leads to significant improvements in terms of data efficiency, optimization speed, and final performance compared to prior approaches. Further, on multi-task supervised learning tasks, PCGrad can be successfully combined with prior state-of-the-art methods for multi-task learning for even greater performance. The goal of multi-task learning is to find parameters θ of a model f θ that achieve high average performance across all the training tasks drawn from a distribution of tasks p(T). More formally, we aim to solve the problem: min θ E T i ∼p(T) [L i (θ)], where L i is a loss function for the i-th task T i that we want to minimize. To obtain a model that solves a specific task from the task distribution p(T), we define a task-conditioned model f θ (y|x, z i), with input x, output y, and encoding z i for task T i, which could be provided as a one-hot vector or in any other form.

3 MULTI-TASK LEARNING VIA GRADIENT SURGERY

While the multi-task problem can in principle be solved by simply applying a standard single-task algorithm with a suitable task identifier provided to the model or a simple multi-head or multi-output model, a number of prior works have found this learning problem to be difficult, especially in the reinforcement learning setting. We hypothesize that one of the main challenges of multi-task learning can be characterized by conflicting and thrashing gradients, and find that this can significantly impede learning progress, especially when combined with iterative data collection. We identify possible causes for this problem and propose a simple and general approach to mitigate it. We hypothesize that a key optimization issue in multi-task learning arises when gradients from multiple tasks are in conflict with one another, i.e. when gradients point away from one another.
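As a concrete check of the conflicting-gradient definition above, the sketch below computes flattened per-task gradients of a shared model and flags pairs with negative cosine similarity; the model and loss objects are illustrative placeholders, not the paper's released code.

    import torch

    def task_gradients(model, task_losses):
        # One flattened gradient vector per task, w.r.t. the shared parameters.
        grads = []
        for loss in task_losses:
            g = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)
            grads.append(torch.cat([p.reshape(-1) for p in g]))
        return grads

    def conflicting_pairs(grads):
        # Two tasks conflict when their gradients have negative cosine similarity.
        pairs = []
        for i in range(len(grads)):
            for j in range(i + 1, len(grads)):
                cos = torch.nn.functional.cosine_similarity(grads[i], grads[j], dim=0)
                if cos < 0:
                    pairs.append((i, j, cos.item()))
        return pairs

Tracking the output of conflicting_pairs over training iterations is one simple way to reproduce the kind of cosine-similarity plot discussed next.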
More specifically, we hypothesize that such conflict may lead to gradient thrashing. Concretely, gradient thrashing refers to the phenomenon where a large gradient for one task changes the parameter vectors in a way that substantially decreases performance on another task. Since worse performance typically leads to larger gradients, this results in alternating gradient directions, where, at the next iteration, the second task will have large gradients that dominate and reduce performance on the former task. This issue can be particularly pronounced for neural network optimization, since neural network loss landscapes are known to resemble long narrow valleys, where the gradient perpendicular to the direction of the valley will be small. We aim to study this hypothesis through two toy examples. First, consider the two-dimensional optimization landscape illustrated in Fig. 1a, where the landscape for each task objective corresponds to a deep and curved valley (Fig. 1b and 1c). The optima of this multi-task objective correspond to where the two valleys meet. More details on the optimization landscape are in Appendix B. We observe that the gradient thrashing hypothesis is consistent with what we observe when running Adam on this landscape in Fig. 1d, where we observe that Adam does not traverse one valley towards the other, preventing it from reaching an optimum. We also aim to detect if a similar phenomenon occurs in multi-task learning with a neural network with thousands of parameters on a toy regression problem. To measure the extent of gradient thrashing, we plot the cosine similarity between the gradients of two tasks throughout the beginning of learning in Fig. 4 (left). We indeed observe a significant level of gradient thrashing at every iteration, where the cosine similarity varies between −0.75 and 0.75 at a very high frequency. Motivated by these observations, we develop an algorithm that aims to alleviate the optimization challenges caused by gradient thrashing by preventing such gradient conflict between tasks. We aim to prevent gradient thrashing by directly altering the gradients themselves, i.e. through "gradient surgery." To be maximally effective and maximally applicable, we must perform surgery in a way that still allows for positive interactions between the task gradients and does not introduce any assumptions on the form of the model. We start by first detecting whether two gradients are in conflict, by measuring whether they point away from one another. More concretely, we characterize two tasks as conflicting for the current parameter setting if they yield a negative cosine similarity between their respective gradients. The goal of PCGrad is to modify the gradients for each task so as to minimize negative conflict with other task gradients, which will in turn mitigate gradient thrashing.

Figure 2: Visual depiction of conflicting gradients and PCGrad. In (a), we see that tasks A and B have conflicting gradient directions, which can lead to destructive interference and unstable learning. In (b), we illustrate the PCGrad algorithm in cases where gradients are conflicting. PCGrad projects the gradient of task A onto the normal vector of task B's gradient. In (c), we show that tasks with non-conflicting gradients are not altered under PCGrad, thereby preserving constructive interference between tasks.

To deconflict gradients during optimization, PCGrad adopts a simple procedure: if the gradients between two tasks are in conflict, i.e.
their cosine similarity is negative, we project the gradient from one task onto the normal plane of the gradient of the other task. This amounts to removing the conflicting component of the gradient for the task, thereby reducing the amount of destructive gradient interference between tasks. A pictorial description of this idea is shown in Fig. 2. Suppose the gradient for task T i is g i, and the gradient for task T j is g j. PCGrad proceeds as follows: First, it determines whether g i conflicts with g j by computing the cosine similarity between vectors g i and g j, where negative values indicate conflicting gradients. If the cosine similarity is negative, we replace g i by its projection onto the normal plane of g j: g i = g i − (g i · g j / ‖g j‖²) g j. If the gradients are not in conflict, i.e. the cosine similarity is non-negative, the original gradient g i remains unaltered. PCGrad repeats this process across all of the other tasks sampled in random order from the current batch, T j ∀ j ≠ i, resulting in the gradient g proj i that is applied for task T i. We perform the same procedure for all tasks in the batch to obtain their respective gradients. The full update procedure is described in Algorithm 1 and a discussion on using a random task order is included in Appendix D.

Algorithm 1: PCGrad update rule
  Compute g i ← ∇ θ L i (θ) for each task T i in the batch
  for each task T i in the batch do
    for each other task T j ≠ T i, in random order, do
      Compute the cosine similarity between g i and g j as cos(φ ij) = g i · g j / (‖g i‖ ‖g j‖)
      if cos(φ ij) < 0 then
        // Subtract the projection of g i onto g j
        g i ← g i − (g i · g j / ‖g j‖²) g j
      end if
    end for
    Store g proj i = g i
  end for
  return update ∆θ = Σ i g proj i

This procedure, while simple to implement, ensures that the gradients that we apply for each task per batch interfere minimally with the other tasks in the batch, mitigating the thrashing gradient problem, producing a variant on standard first-order gradient descent in the multi-objective setting. In practice, the PCGrad gradient surgery method can be combined with any gradient-based optimizer, including commonly used methods such as SGD with momentum and Adam, by simply passing the computed update to the respective optimizer instead of the original gradient. Our experimental results verify the hypothesis that this procedure reduces the problem of thrashing gradients, and we find that, as a result, learning progress is substantially improved. Finally, we analyze the convergence of this procedure in Theorem 1 in the two-task setting, to ensure that the procedure is sensible under the standard assumptions in optimization. Theorem 1. Consider two convex, differentiable task loss functions L 1 and L 2, and let L = L 1 + L 2 be the multi-task objective. Let φ be the angle between ∇L 1 (θ) and ∇L 2 (θ). Suppose L is differentiable and that its gradient is Lipschitz continuous with constant L > 0, i.e. we have ||∇L(θ 1) − ∇L(θ 2)|| 2 ≤ L||θ 1 − θ 2 || 2 for any θ 1, θ 2. Then, the PCGrad update rule with step size t ≤ 1/L will converge to either a location in the optimization landscape where cos(φ) = −1 or the optimal value L(θ *). Proof. See Appendix A. Theorem 1 states that application of the PCGrad update in the two-task setting with a convex and Lipschitz multi-task loss function L leads to convergence to either the minimizer of L or a potentially sub-optimal objective value. A sub-optimal solution occurs when the cosine similarity between the gradients of the two tasks is −1, i.e. the gradients directly conflict, leading to a zero gradient after applying PCGrad. However, in practice, since we are using SGD, which is a noisy estimate of the true batch gradients, the cosine similarity between the gradients of two tasks in a minibatch is unlikely to be −1, thus avoiding this scenario.
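A minimal sketch of the projection step in Algorithm 1, written on flattened per-task gradient tensors; the function and variable names are our own, not the paper's released code.

    import random
    import torch

    def pcgrad(task_grads):
        # task_grads: list of flattened per-task gradient tensors of equal length.
        projected = []
        for i, g in enumerate(task_grads):
            g_i = g.clone()
            others = [j for j in range(len(task_grads)) if j != i]
            random.shuffle(others)                        # random task order, as in Algorithm 1
            for j in others:
                g_j = task_grads[j]
                dot = torch.dot(g_i, g_j)
                if dot < 0:                               # conflict: negative cosine similarity
                    g_i = g_i - dot / g_j.pow(2).sum() * g_j   # remove the conflicting component
                # non-conflicting gradients are left unaltered
            projected.append(g_i)
        return torch.stack(projected).sum(dim=0)          # summed update ∆θ for the optimizer

In practice the returned update would be handed to SGD with momentum or Adam in place of the ordinary gradient, as noted above.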
We apply PCGrad to both supervised learning and reinforcement learning problem settings with multiple tasks or goals. In this section, we discuss the practical instantiations of PCGrad in those settings. Further implementation details are included in Section 6. In multi-task supervised learning, each task. The objective for each task in this supervised setting is then defined as, where z i is a one-hot encoding of task T i. At each training step, we randomly sample a batch of data points B from the whole dataset i D i and then group the sampled data with the same task encoding into small batches denoted as B i for each T i represented in B. We denote the set of tasks appearing in B as B T. After sampling, we precompute the gradient of each task in B T as Given the set of precomputed gradients ∇ θ L i (f θ), we also precompute the cosine similarity between all pairs of the gradients in the set. Using the pre-computed gradients and their similarities, we can obtain the PCGrad update by following Algorithm 1, without re-computing task gradients nor backpropagating into the network. Since the PCGrad procedure is only modifying the gradients of shared parameters in the optimization step, it is model-agnostic and can be readily applied to any architecture designed for supervised multi-task learning. In Section 6, we combine PCGrad with two state-of-the-art architectures for multi-task learning, which leads to noticeable improvement over their original performance. For multi-task reinforcement learning, PCGrad can be readily applied to policy gradient methods by directly updating the computed policy gradient of each task, following Algorithm 1, analogous to the supervised learning setting. For actor-critic algorithms, it is also straightforward to apply PCGrad: we simply replace the task gradients for both the actor and the critic by their gradients computed via PCGrad. Hence, PCGrad can be readily incorporated into a variety of model-free RL algorithms. When applying PCGrad to goal-conditioned RL, we represent p(T) as a distribution of goals and let z i be the encoding of a goal. Similar to the multi-task supervised learning setting discussed above, PCGrad may be combined with various architectures designed for multi-task and goal-conditioned RL , where PCGrad operates on the gradients of shared parameters, leaving task-specific parameters untouched. In our experiments, we apply PCGrad to the soft actor-critic (SAC) algorithm , a recently proposed off-policy actor-critic algorithm that has shown significant gains in sample efficiency and asymptotic performance across many different domains. In SAC, we employ a Qlearning style gradient to compute the gradient of the Q-function network, Q φ (s, a, z i), often known as the critic, and a reparameterization-style gradient to compute the gradient of the policy network π θ (a|s, z i), often known as the actor. For sampling, we instantiate a set of replay buffers {D i} Ti∼p(T). Training and data collection are alternated throughout training. During a data collection step, we run the policy π θ on all the tasks T i ∼ p(T) to collect an equal number of paths for each task and store the paths of each task T i into the corresponding replay buffer D i. At each training step, we sample an equal amount of data from each replay buffer D i to form a stratified batch. 
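To make the supervised instantiation described above concrete, here is a hedged sketch of one training step: the sampled batch is grouped by task encoding, per-task gradients are precomputed, combined with the pcgrad sketch given earlier, and the summed update is handed to the optimizer. The model, the cross-entropy loss, and the batch grouping are placeholders, not the paper's code.

    import torch

    def multitask_train_step(model, optimizer, batches_by_task):
        # batches_by_task: {task_id: (x, y, z_onehot)} built by grouping the sampled batch B by task.
        params = [p for p in model.parameters() if p.requires_grad]
        task_grads = []
        for x, y, z in batches_by_task.values():
            loss = torch.nn.functional.cross_entropy(model(x, z), y)      # L_i(f_theta), assumed classification
            grads = torch.autograd.grad(loss, params)
            task_grads.append(torch.cat([g.reshape(-1) for g in grads]))  # flattened per-task gradient
        update = pcgrad(task_grads)                                       # projected and summed (see sketch above)
        offset = 0
        for p in params:                                                  # write the combined gradient back
            n = p.numel()
            p.grad = update[offset:offset + n].view_as(p).clone()
            offset += n
        optimizer.step()
        optimizer.zero_grad()

Because only the gradients of shared parameters are modified, the same wrapper applies unchanged to policy-gradient or actor-critic objectives in the RL settings above.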
For each task T i ∼ p(T), the parameters of the critic θ are optimized to minimize the soft Bellman residual: where γ is the discount factor,φ are the delayed parameters, and α is a learnable temperature that automatically adjusts the weight of the entropy term. For each task T i ∼ p(T), the parameters of the policy π θ are trained to minimize the following objective We compute and apply PCGrad to both following Algorithm 1. In the context of SAC specifically, we further study how the temperature α should be adjusted. If we use a single learnable temperature for adjusting entropy of the multi-task policy π θ (a|s, z i), SAC may stop exploring once all easier tasks are solved, leading to poor performance on tasks that are harder or require more exploration. To address this issue, we propose to learn the temperature on a per-task basis, i.e. using a parametrized model to represent α ψ (z i) (which we abbreviate as PA for per-task alpha). This allows the method to control the entropy of π θ (a|s, z i) per-task. We optimize the parameters of α ψ (z i) using the same constrained optimization framework as in. Algorithms for multi-task learning typically consider how to train a single model that can solve a variety of different tasks (; ;). The multi-task formulation has been applied to many different settings, including supervised learning (; ; ; ;) and reinforcement-learning , as well as many different domains, such as vision (; a; ; ;), language (; ; ;) and robotics; ). While multi-task learning has the promise of accelerating acquisition of large task repertoires, in practice it presents a challenging optimization problem, which has been tackled in several ways in prior work. A number of architectural solutions have been proposed to the multi-task learning problem based on multiple modules or paths (; ; b; b; ; ;), or using attention-based architectures . Our work is agnostic to the model architecture and can be combined with prior architectural approaches in a complementary fashion. A different set of multi-task learning approaches aim to decompose the problem into multiple local problems, often corresponding to each task, that are significantly easier to learn, akin to divide and conquer algorithms a;;;;. Eventually, the local models are combined into a single, multi-task policy using different distillation techniques (outlined in . In contrast to these methods, we propose a simple and cogent scheme for multi-task learning that allows us to learn the tasks simultaneously using a single, shared model without the need for network distillation. Similarly to our work, a number of prior approaches have observed the difficulty of optimization in the multi-task learning setting (; ; b;). Our work, in contrast to many of these optimization schemes, suggests that the challenge in multi-task learning may be attributed to the problem of gradient thrashing, which we address directly by introducing a simple and practical algorithm that de-conflicts gradients from different tasks. Prior work alternatively proposes a gradient-based multi-objective optimization problem for multi-task learning to address the problem of optimizing possibly conflicting objectives. As noted in Alg 2 in , it learns a constant scaling factor for per-task gradient to avoid conflicting, while our method corrects both the scaling factor and the direction of per-task gradient, which can more effectively deconflict gradients. 
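One way to realize the per-task temperature α ψ (z i) described above, assuming the task encoding is a one-hot vector, is to keep a single learnable log-temperature per task; this is a sketch of our own, not the paper's implementation.

    import torch

    class PerTaskAlpha(torch.nn.Module):
        # One learnable log-temperature per task: alpha(z_i) = exp(log_alpha[i]).
        def __init__(self, num_tasks):
            super().__init__()
            self.log_alpha = torch.nn.Parameter(torch.zeros(num_tasks))   # alpha starts at 1.0

        def forward(self, z_onehot):
            # z_onehot: (batch, num_tasks) one-hot task encodings -> per-sample temperature.
            return torch.exp(z_onehot @ self.log_alpha)

Each per-task α is then trained with the usual SAC temperature objective, so entropy is regulated separately for easy and hard tasks, which is the motivation given above.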
Prior work has also used the cosine similarity between gradients to define when an auxiliary task might be useful for single-task learning . We similarly use cosine similarity between gradients to determine if the gradients between a pair of tasks are in conflict. , we use this measure of gradient conflict as a part of gradient surgery in the context of multi-task learning applications. A number of works in continual learning have studied how to make gradient updates that do not adversely affect other tasks by projecting the gradients into a space that do not conflict with previous tasks . Those methods focus on the continual learning setting, and either need to solve for the gradient projections using quadratic programming , or only projecting the gradient onto the normal plane of the average of the gradients of past tasks . In contrast, our work focuses on multi-task learning, does not require solving any QP, and iteratively projects the gradients of each task onto the normal plane of the gradients of each of the other tasks instead of averaging. Finally, our method is distinct from and solves a different problem than the projected gradient method (Calamai & Moré, 1987), which is an approach for constrained optimization that projects gradients onto the constraint manifold. The goal of our experiments is to study the following questions: Are conflicting gradients a major factor in making optimization for multi-task learning challenging? Does PCGrad make the optimization problems easier for various multi-task learning problems including supervised, reinforcement, and goal-conditioned reinforcement learning settings across different task families? Can PCGrad be combined with other multi-task learning approaches to further improve performance? 6.1 EXPERIMENTAL SETUP To evaluate our method experimentally, we consider both a multi-task supervised learning and a multi-task reinforcement learning problem setup. For supervised learning, we first consider the MultiMNIST dataset , which contains two tasks: classifying the digit on the top left and on the bottom right in an overlaid image. Beyond digit classification, we also use the CIFAR-100 dataset where each of the 20 label superclasses are treated as distinct tasks, following. We also conduct experiments on the NYUv2 dataset , which consists of RGB-D indoor scene images. , we evaluate our method on 3 tasks: 13-class semantic segmentation, depth estimation, and surface normal prediction. In the case of multi-task reinforcement learning, we evaluate our algorithm on the recently proposed Meta-World benchmark . This benchmark includes a variety of simulated robotic manipulation tasks contained in a shared, table-top environment with a simulated Sawyer arm (visualized as the "Push" environment in Fig. 3). In particular, we use the multi-task benchmarks MT10 and MT50, which consists of the 10 tasks and 50 tasks respectively depicted in Fig. 3 that require diverse strategies to solve them, which makes them difficult to optimize jointly with a single policy. Note that MT10 is a subset of MT50. To evaluate goal-conditioned RL scenarios, we consider goal-conditioned robotic pushing with a Sawyer robot. This domain is representative of challenges in learning goal-conditioned policies over a wide distribution of goals. For details on the experimental set-up and model architectures see Appendix E. To answer question, we consider a simple regression problem, where each task is regressing the input to the output of a sine function. 
The amplitude and the phase of each task are varied. We construct 10 tasks, with the amplitude of each task sampled uniformly at random from a fixed range; the input to the network is concatenated with the one-hot task encoding. For training, we use a 3-layer fully-connected neural network with 100 hidden units. We compare the performance of the network trained with Adam and the network trained with Adam with PCGrad-modified gradients, while plotting the cosine similarity between a pair of tasks during training as shown in Figure 4. The plot on the left in Figure 4 demonstrates that the cosine similarity of Adam gradients between a pair of tasks has high variance, which leads to the gradient thrashing problem, while the cosine similarity of the gradients projected by PCGrad yields positive values, diminishing the conflicting-gradients problem. As shown in the plot on the right in Figure 4, Adam with PCGrad leads to faster learning than Adam, which implies that gradient thrashing is indeed a problem in multi-task optimization and that reducing it can result in a considerable performance boost. To study whether PCGrad improves multi-task supervised learning, we perform experiments on three standard multi-task supervised learning datasets: MultiMNIST, multi-task CIFAR-100 and NYUv2. We include the results on MultiMNIST in Appendix C. For CIFAR-100, we follow prior work and treat the 20 coarse labels in the dataset as distinct tasks, creating a dataset with 20 tasks and 2500 training instances as well as 500 test instances per task. We combine PCGrad with a powerful multi-task learning architecture, routing networks, by simply projecting gradients of the shared parameters in routing networks. As shown in Table 1, applying PCGrad to a single network achieves 71% classification accuracy, which outperforms most of the prior methods such as independent training and cross-stitch. Though routing networks achieve better performance than PCGrad on its own, PCGrad is complementary to routing networks, and combining PCGrad with routing networks leads to a 2.8% absolute improvement in test accuracy averaged over 3 runs.

Table 1: Multi-task CIFAR-100 test accuracy (performance of other methods as reported in prior work).
  task specific-1-fc: 42%
  task specific-all-fc: 49%
  cross stitch-all-fc: 53%
  routing-all-fc + WPL: 74.7%
  independent: 67.7%
  PCGrad (ours): 71%
  routing-all-fc + WPL + PCGrad (ours): 77.5%

We also combine PCGrad with another state-of-the-art multi-task learning algorithm, MTAN, and evaluate the performance on a more challenging indoor scene dataset, NYUv2, which contains 3 tasks as described in Section 6.1. We compare MTAN with PCGrad to the list of methods mentioned in Section 6.1, where each method is trained with three different weighting schemes: equal weighting, weight uncertainty, and DWA. We only run MTAN with PCGrad with weight uncertainty, as we find weight uncertainty to be the most effective scheme for training MTAN. The results comparing Cross-Stitch, MTAN and MTAN + PCGrad are presented in Table 2, while the full comparison can be found in Table 4 in Appendix E.3. MTAN with PCGrad is able to achieve the best scores in 8 out of the 9 categories, where there are 3 categories per task. Our multi-task supervised learning results demonstrate that PCGrad can be seamlessly combined with state-of-the-art multi-task learning architectures and further improve their results on established supervised multi-task learning benchmarks. To evaluate PCGrad on multi-task reinforcement learning, we test all methods on the 10 and 50 manipulation tasks respectively shown in Figure 3. At each data collection step, we collect 600 samples for each task, and at each training step,
we sample 128 datapoints per task from corresponding replay buffers. The are shown in the two plots on the left in Figure 5. We measure success according to the metrics used in the Meta-World benchmark where the reported the success rates are averaged across tasks. For all methods, we apply PA as discussed in Section 4 to learn a separate alpha term per task as the task encoding in MT10 and MT50 is just a one-hot encoding. PCGrad combined with SAC learns all tasks with the best data efficiency and successfully solves all of the 10 tasks in MT10 and about 70% of the 50 tasks in MT50. Training a single SAC policy and a multi-head policy turns out to be unable to acquire half of the skills in both MT10 and MT50, suggesting that eliminating gradient interference across tasks can significantly boost performance of multi-task RL. Training independent SAC agents is able to eventually solve all tasks in MT10 and 70% of the tasks in MT50, but requires about 2 millions and 15 millions more samples than PCGrad with SAC in MT10 and MT50 respectively, implying that applying PCGrad can in leveraging shared structure among tasks that expedites multi-task learning. As noted by , these tasks involve fairly distinct behavior motions, which makes learning all of them with a single policy challenging as demonstrated by poor baseline performance. The ability to learn these tasks together opens the door for a number of interesting extensions to meta-learning, goal conditioned RL and generalization to novel task families. We present the of PCGrad on goal-conditioned RL in the following subsection. We also provide an ablation study on the importance of correcting the gradient direction and scaling the gradient magnitudes in PCGrad. We construct two variants of PCGrad: only applying the gradient direction corrected with PCGrad while keeping the gradient magnitude unchanged and only applying the gradient magnitude computed by PCGrad while keeping the gradient direction unchanged. As shown in the plot on the left in Figure 6, both variants perform worse than PCGrad and the variant where we only vary the gradient magnitudes is much worse than PCGrad. We also compare PCGrad to a prior method GradNorm , which scales the magnitude of gradients of all the tasks. As shown in the plot on the right in Figure 6, PCGrad significantly outperforms GradNorm. We also notice that the variant of PCGrad where only the gradient magnitudes change gets comparable to GradNorm, which suggests that its important to modify both the gradient directions and magnitudes to eliminate interference and achieve good multi-task learning . Figure 6: Ablation study on only using the magnitude and the direction of the gradients modified by PCGrad (left) and comparison between PCGrad and GradNorm (right). PCGrad outperforms both ablations and GradNorm with a large margin, indicating the importance of modifying both the gradient directions and magnitudes in multi-task learning. For our goal-conditioned RL evaluation, we use the robot-pushing environment described in Sec. 6.1 where the goals are represented as the concatenations of the initial positions of the puck to be pushed and the its goal location, both of which are uniformly sampled (details in Appendix E.2). We also apply PA as discussed in Section 4 to predict the temperature for entropy term given the goal. We summarize the in the plot on the right in Figure 5. 
PCGrad with SAC and PA achieves the best performance in terms of average distance to the goal position, while PCGrad with SAC improves over the baseline and a vanilla SAC agent is struggling to successfully accomplish the task. This suggests that PCGrad is able to ease the RL optimization problem also when the task distribution is continuous. In this work, we identified one of the major challenges in multi-task optimization: conflicting gradients across tasks. We proposed a simple algorithm (PCGrad) to mitigate the challenge of conflicting gradients via "gradient surgery". PCGrad provides a simple way to project gradients to be orthogonal in a multi-task setting, which substantially improves optimization performance, since the task gradients are prevented from negating each other. We provide some simple didactic examples and analysis of how this procedure works in simple settings, and subsequently show significant improvement in optimization for a variety of multi-task supervised learning and reinforcement learning problems. We show that, once some of the optimization challenges of multi-task learning are alleviated by PCGrad, we can obtain the hypothesized benefits in efficiency and asymptotic performance that are believed to be possible in multi-task settings. While we studied multi-task supervised learning and multi-task reinforcement learning in this work, we suspect the problem of conflicting gradients to be prevalent in a range of other settings and applications, such as meta-learning, continual learning, multi-goal imitation learning , and multi-task problems in natural language processing applications . Due to its simplicity and model-agnostic nature, we expect that applying PCGrad in these domains to be a promising avenue for future investigation. Further, the general idea of gradient surgery may be an important ingredient for alleviating a broader class of optimization challenges in deep learning, such as the challenges in the stability challenges in two-player games and multi-agent optimizations . We believe this work to be a step towards simple yet general techniques for addressing some of these challenges. Proof. We will use the shorthand || · || to denote the L 2 -norm and ∇L = ∇ θ L, where θ is the parameter vector. Let g 1 = ∇L 1, g 2 = ∇L 2, and φ be the angle between g 1 and g 2. At each PCGrad update, we have two cases: cos(φ) ≥ 0 or cos(φ < 0). If cos(φ) ≥ 0, then we apply the standard gradient descent update using t ≤ 1 L, which leads to a strict decrease in the objective function value L(φ) unless ∇L(φ) = 0, which occurs only when θ = θ * . In the case that cos(φ) < 0, we proceed as follows: Our assumption that ∇L is Lipschitz continuous with constant L implies that ∇ 2 L(θ) − LI is a negative semidefinite matrix. Using this fact, we can perform a quadratic expansion of L around L(θ) and obtain the following inequality: Now, we can plug in the PCGrad update by letting θ We then get: (Expanding, using the identity (Expanding further and re-arranging terms) (Note that cos(φ) < 0 so the final term is non-negative) Plugging this into the last expression above, we can conclude the following: 2 will always be positive unless ∇L(θ) = 0. This inequality implies that the objective function value strictly decreases with each iteration where cos(φ) > −1. Hence repeatedly applying PCGrad process can either reach the optimal value L(θ) = L(θ *) or cos(φ) = −1, in which case Note that this only holds when we choose t to be small enough, i.e. t ≤ 1 L. 
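For completeness, the quadratic expansion invoked in the proof of Theorem 1 above is the standard descent-lemma bound; under the L-Lipschitz-gradient assumption it reads

\[ \mathcal{L}(\theta^{+}) \;\le\; \mathcal{L}(\theta) + \nabla\mathcal{L}(\theta)^{\top}(\theta^{+}-\theta) + \tfrac{L}{2}\,\lVert \theta^{+}-\theta \rVert_2^{2}, \]

and in the conflicting case the PCGrad step substitutes θ + = θ − t ( g 1 + g 2 − (g 1 · g 2 / ‖g 2‖²) g 2 − (g 1 · g 2 / ‖g 1‖²) g 1 ), after which expanding and using t ≤ 1/L yields the strict decrease claimed whenever cos(φ) > −1. The intermediate algebra elided above follows this substitution.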
To produce the 2D optimization visualizations in Figure 1, we used a parameter vector θ = [θ 1, θ 2] ∈ R 2 and the following task loss functions:. We initialized θ = [0.5, −3] and performed 500,000 gradient updates to minimize L using the Adam optimizer with learning rate 0.001. We compared using Adam for each update to using Adam in conjunction with the PCGrad method presented in Section 3.2. Following the same set-up in , for each image, we sample a different one uniformly at random. Then we put one of the image on the top left and the other one on the bottom right. The two tasks in the multi-task learning problem are to classify the digits on the top left (task-L) and bottom right (task-R) respectively. We construct such 60K examples. We combine PCGrad with the same backbone architecture used in and compare its performance to by running the open-sourced code provided in . As shown in Table 3, our method 0.13% and 0.55% improvement over As stated on line 4 in Algorithm 1, we sample the tasks from the batch and randomly shuffle the order of the tasks before performing the update steps in PCGrad. With random shuffling, we make PCGrad symmetric w.r.t. the task order in expectation. In Figure 7, we observe that PCGrad with a random task order achieves better performance between PCGrad with a fixed task order in the setting of MT50 where the number of tasks is large and the conflicting gradient phenomenon is much more likely to happen. Figure 7: Ablation study on using a fixed task order during PCGrad. PCGrad with a random task order does significantly better PCGrad with a fixed task order in MT50 benchmark. For our CIFAR-100 multi-task experiment, we adopt the architecture used in , which is a convolutional neural network that consists of 3 convolutional layers with 160 3 × 3 filters each layer and 2 fully connected layers with 320 hidden units. As for experiments on the NYUv2 dataset, we follow to use SegNet as the backbone architecture. Our reinforcement learning experiments all use the SAC algorithm as the base algorithm, where the actor and the critic are represented as 6-layer fully-connected feedforward neural networks for all methods. The numbers of hidden units of each layer of the neural networks are 160, 300 and 200 for MT10, MT50 and goal-conditioned RL respectively. We use five algorithms as baselines in the CIFAR-100 multi-task experiment: task specific-1-fc : a convolutional neural network shared across tasks except that each task has a separate last fully-connected layer, task specific-1-fc : all the convolutional layers shared across tasks with separate fully-connected layers for each task, cross stitch-all-fc (b): one convolutional neural network per task along with cross-stitch units to share features across tasks, routing-all-fc + WPL : a network that employs a trainable router trained with multi-agent RL algorithm (WPL) to select trainable functions for each task, independent: training separate neural networks for each task. 
For comparisons on the NYUv2 dataset, we consider 5 baselines: Single Task, One Task: the vanilla SegNet used for single-task training, Single Task, STAN : the single-task version of MTAN as mentioned below, Multi-Task, Split, Wide / Deep : the standard SegNet shared for all three tasks except that each task has a separate last layer for final task-specific prediction with two variants Wide and Deep specified in , Multi-Task Dense: a shared network followed by separate task-specific networks, Multi-Task Cross-Stitch (b): similar to the baseline used in CIFAR-100 experiment but with SegNet as the backbone, MTAN : a shared network with a soft-attention module for each task. On the multi-task and goal-conditioned RL domain, we apply PCGrad to the vanilla SAC algorithm with task encoding as part of the input to the actor and the critic as described in Section 4 and compare our method to the vanilla SAC without PCGrad and training actors and critics for each task individually (Independent). We use the pushing environment from the Meta-World benchmark as shown in Figure 3. In this environment, the Table 4: We present the full on three tasks on the NYUv2 dataset: 13-class semantic segmentation, depth estimation, and surface normal prediction . #P shows the total number of network parameters. We highlight the best performing combination of multi-task architecture and weighting in bold. The top validation scores for each task are annotated with boxes. The symbols indicate prior methods: *: (a), †: , ‡: (b). Performance of other methods taken from .
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJewiCVFPB
We develop a simple and general approach for avoiding interference between gradients from different tasks, which improves the performance of multi-task learning in both the supervised and reinforcement learning domains.
In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -- given either as a set of samples or in the form of unnormalized density. This point of view unifies the goals of such approaches as Markov Chain Monte Carlo (MCMC), Generative Adversarial Networks (GANs), variational inference. To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers. The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes (i.e. over data points). We empirically validate our approach on Bayesian inference for neural networks and generative models for images. Bayesian framework and deep learning have become more and more interrelated during recent years. Recently Bayesian deep neural networks were used for estimating uncertainty BID6, ensembling BID6 and model compression BID20. On the other hand, deep neural networks may be used to improve approximate inference in Bayesian models BID13.Learning modern Bayesian neural networks requires inference in the spaces with dimension up to several million by conditioning the weights of DNN on hundreds of thousands of objects. For such applications, one has to perform the approximate inference -predominantly by either sampling from the posterior with Markov Chain Monte Carlo (MCMC) methods or approximating the posterior with variational inference (VI) methods. MCMC methods provide the unbiased (in the limit) estimate but require careful hyperparameter tuning especially for big datasets and high dimensional problems. The large dataset problem has been addressed for different MCMC algorithms: stochastic gradient Langevin dynamics BID28, stochastic gradient Hamiltonian Monte Carlo, minibatch MetropolisHastings algorithms BID15 BID1. One way to address the problem of high dimension is the design of a proposal distribution. For example, for the Metropolis-Hastings (MH) algorithm there exists a theoretical guideline for scaling the variance of a Gaussian proposal BID24 BID25. More complex proposal designs include adaptive updates of the proposal distribution during iterations of the MH algorithm BID12 BID7. Another way to adapt the MH algorithm for high dimensions is combination of adaptive direction sampling and the multiple-try Metropolis algorithm as proposed in BID17. Thorough overview of different extensions of the MH algorithm is presented in BID18.Variational inference is extremely scalable but provides a biased estimate of the target distribution. Using the doubly stochastic procedure BID27 BID11 VI can be applied to extremely large datasets and high dimensional spaces, such as a space of neural network weights BID14 BID5. The bias introduced by variational approximation can be mitigated by using flexible approximations BID22 and resampling BID9.Generative Adversarial Networks BID8 ) (GANs) is a different approach to learn samplers. Under the framework of adversarial training different optimization problems could be solved efficiently BID0 BID21. The shared goal of "learning to sample" inspired the connection of GANs with VI BID19 and MCMC BID26.In this paper, we propose a novel perspective on learning to sample from a target distribution by optimizing parameters of either explicit or implicit probabilistic model. 
Our objective is inspired by the view on the acceptance rate of the Metropolis-Hastings algorithm as a quality measure of the sampler. We derive a lower bound on the acceptance rate and maximize it with respect to parameters of the sampler, treating the sampler as a proposal distribution in the Metropolis-Hastings scheme. We consider two possible forms of the target distribution: unnormalized density (density-based setting) and a set of samples (sample-based setting). Each of these settings reveals a unifying property of the proposed perspective and the derived lower bound. In the density-based setting, the lower bound is the sum of forward and reverse KL-divergences between the true posterior and its approximation, connecting our approach to VI. In the sample-based setting, the lower bound admits the form of an adversarial game between the sampler and a discriminator, connecting our approach to GANs. The closest work to ours is that of BID26. In contrast to their paper, our approach is free from hyperparameters, is able to optimize the acceptance rate directly, and avoids the minimax problem in the density-based setting. Our main contributions are as follows: 1. We introduce a novel perspective on learning to sample from the target distribution by treating the acceptance rate in the Metropolis-Hastings algorithm as a measure of sampler quality. 2. We derive the lower bound on the acceptance rate allowing for doubly stochastic optimization of the proposal distribution in case when the target distribution factorizes (i.e. over data points). 3. For sample-based and density-based forms of the target distribution we show the connection of the proposed algorithm to variational inference and GANs. The rest of the paper is organized as follows. In Section 2 we introduce the lower bound on the AR. Special forms of the target distribution are addressed in Section 3. We validate our approach on the problems of approximate Bayesian inference in the space of high-dimensional neural network weights and generative modeling in the space of images in Section 4. We discuss results and directions for future work in Section 5. In the MH algorithm we need to sample from the target distribution p(x) while we are only able to sample from the proposal distribution q(x' | x). One step of the MH algorithm can be described as follows: given the current point x, propose x' ∼ q(x' | x) and accept it with probability min{1, p(x') q(x | x') / (p(x) q(x' | x))}; otherwise the chain stays at x. If the proposal distribution q(x' | x) does not depend on x, i.e. q(x' | x) = q(x'), the algorithm is called the independent MH algorithm. The quality of the proposal distribution is measured by the acceptance rate and the mixing time. Mixing time defines the speed of convergence of the Markov chain to the stationary distribution. The acceptance rate of the MH algorithm is defined as

AR = E x∼p(x) E x'∼q(x' | x) min{1, p(x') q(x | x') / (p(x) q(x' | x))}.   (1)

In the case of an independent proposal distribution we show that the acceptance rate defines a semimetric in distribution space between p and q (see Appendix A.2). Although we can maximize the acceptance rate of the MH algorithm (Eq. 1) directly w.r.t. parameters φ of the proposal distribution q φ (x' | x), we propose to maximize a lower bound on the acceptance rate. As our experiments show (see Section 4), the optimization of the lower bound compares favorably to the direct optimization of the acceptance rate. To introduce this lower bound, we first express the acceptance rate in terms of total variation distance (Theorem 1):

AR = 1 − TV( p(x) q(x' | x), p(x') q(x | x') ),

where TV is the total variation distance. The proof of Theorem 1 can be found in Appendix A.1.
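Before moving to the lower bound, a minimal sketch of the independent MH sampler referenced above; log_p and log_q are assumed callables returning the (unnormalized) target and proposal log-densities, and sample_q draws from the proposal.

    import numpy as np

    def independent_mh(log_p, log_q, sample_q, n_steps, x0, rng=None):
        # Independent Metropolis-Hastings: the proposal q(x') ignores the current state.
        rng = rng or np.random.default_rng()
        x, samples = x0, []
        for _ in range(n_steps):
            x_new = sample_q()
            # log acceptance ratio: log p(x') + log q(x) - log p(x) - log q(x')
            log_alpha = log_p(x_new) + log_q(x) - log_p(x) - log_q(x_new)
            if np.log(rng.uniform()) < min(0.0, log_alpha):
                x = x_new              # accept
            samples.append(x)          # on reject, repeat the previous state
        return samples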
This reinterpretation in terms of total variation allows us to lower bound the acceptance rate through the Pinsker's inequality DISPLAYFORM1 The maximization of this lower bound can be equivalently formulated as the following optimization problem DISPLAYFORM2 In the following sections, we show the benefits of this optimization problem in two different settings -when the target distribution is given in a form of unnormalized density and as a set of samples. In Appendix C.5 and C.1 we provide the empirical evidence that maximization of the proposed lower bound in the maximization of the acceptance rate. From now on we consider only optimization problem Eq. 5 but the proposed algorithms can be also used for the direct optimization of the acceptance rate (Eq. 1).To estimate the loss function (Eq. 5) we need to evaluate the density ratio. In the density-based setting unnormalized density of the target distribution is given, so we suggest to use explicit proposal distribution to compute the density ratio explicitly. In the sample-based setting, however, we cannot compute the density ratio, so we propose to approximate it via adversarial training BID8. The brief summary of constraints for both settings is shown in TAB0. DISPLAYFORM0 The following subsections describe the algorithms in detail. In the density-based setting, we assume the proposal to be an explicit probabilistic model, i.e. the model that we can sample from and evaluate its density at any point up to the normalization constant. We also assume that the proposal is reparameterisable BID13 BID23 BID4.During the optimization of the acceptance rate we might face a situation when proposal collapses to the delta-function. This problem usually arises when we use Markov chain proposal, for example, q φ (x | x) = N (x | x, σ). For such proposal we can obtain arbitrary high acceptance rate, making the σ small enough. However, this is not the case for the independent proposal distribution q φ (x | x) = q φ (x). In Appendix B.1 we provide more details and intuition on this property of acceptance rate maximization. We also provide empirical evidence in Section 4 that collapsing to the delta-function does not happen for the independent proposal. In this paper, we consider two types of explicit proposals: simple parametric family (Section 4.2) and normalizing flows BID22 BID3 ) (Section 4.1). Rich family of normalizing flows allows to learn expressive proposal and evaluate its density in any point of target distribution space. Moreover, an invertible model (such as normalizing flow) is a natural choice for the independent proposal due to its ergodicity. Indeed, choosing the arbitrary point in the target distribution space, we can obtain the corresponding point in the latent space using the inverse function. Since every point in the latent space has positive density, then every point in the target space also has positive density. Considering q φ (x) as the proposal, the objective of optimization problem 5 takes the form DISPLAYFORM0 Explicit form of the proposal q φ (x) and the target p(x) distributions allows us to obtain density ratios q φ (x)/q φ (x) and p(x)/p(x) for any points x, x. But to estimate the loss in Eq. 6 we also need to obtain samples from the target distribution x ∼ p(x) during training. For this purpose, we use the current proposal q φ and run the independent MH algorithm. After obtaining samples from the target distribution it is possible to perform optimization step by taking stochastic gradients w.r.t. φ. 
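A minimal sketch of the update just described (the same loop is given as Algorithm 1 below), assuming an explicit, reparameterizable proposal such as a RealNVP flow exposing sample() and log_prob(), and an unnormalized target log-density log_p; x_target denotes samples from p(x) obtained, as described above, by running the independent MH algorithm with the current proposal. The loss follows the forward-plus-reverse-KL reading of the lower bound, keeping only the terms that depend on φ.

    import torch

    def proposal_update(flow, log_p, optimizer, x_target, n_samples=128):
        x_prop = flow.sample(n_samples)                    # reparameterized samples x' ~ q_phi
        # phi-dependent part of KL(q_phi || p) + KL(p || q_phi):
        loss = (flow.log_prob(x_prop) - log_p(x_prop)).mean() - flow.log_prob(x_target).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()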
Pseudo-code for the obtained procedure is shown in Algorithm 1.Algorithm 1 Optimization of proposal distribution in density-based case DISPLAYFORM1 approximate loss with finite number of samples DISPLAYFORM2 perform gradient descent step end while return optimal parameters φ Algorithm 1 could also be employed for the direct optimization of the acceptance rate (Eq. 1). Now we apply this algorithm for Bayesian inference problem and show that during optimization of the lower bound we can use minibatches of data, while it is not the case for direct optimization of the acceptance rate. We consider Bayesian inference problem for discriminative model on dataset DISPLAYFORM3, where x i is the feature vector of ith object and y i is its label. For the discriminative model we know likelihood p(y i | x i, θ) and prior distribution p(θ). In order to obtain predictions for some object x i, we need to evaluate the predictive distribution DISPLAYFORM4 To obtain samples from posterior distribution p(θ | D) we suggest to learn proposal distribution q φ (θ) and perform independent MH algorithm. Thus the objective 6 can be rewritten as DISPLAYFORM5 Note that due to the usage of independent proposal, the minimized KL-divergence splits up into the sum of two KL-divergences. DISPLAYFORM6 Minimization of the first KL-divergence corresponds to the variational inference procedure. DISPLAYFORM7 The second KL-divergence has the only term that depends on φ. Thus we obtain the following optimization problem DISPLAYFORM8 The first summand here contains the sum over all objects in dataset D. We follow doubly stochastic variational inference and suggest to perform unbiased estimation of the gradient in problem 11 using only minibatches of data. Moreover, we can use recently proposed techniques BID15 BID1 that perform the independent MH algorithm using only minibatches of data. Combination of these two techniques allows us to use only minibatches of data during iterations of algorithm 1. In the case of the direct optimization of the acceptance rate, straightforward usage of minibatches in biased gradients. Indeed, for the direct optimization of the acceptance rate (Eq. 1) we have the product over the all training data inside min function. In the sample-based setting, we assume the proposal to be an implicit probabilistic model, i.e. the model that we can only sample from. As in the density-based setting, we assume that we are able to perform the reparameterization trick for the proposal. In this subsection we consider only Markov chain proposal q φ (x | x), but everything can be applied to independent proposal q φ (x) by simple substitution q φ (x | x) with q φ (x). From now we will assume our proposal distribution to be a neural network that takes x as its input and outputs x. Considering proposal distribution parameterized by a neural network allows us to easily exclude delta-function from the space of solutions. We avoid learning the identity mapping by using neural networks with the bottleneck and noisy layers. For the detailed description of the architectures see Appendix C.8.The set of samples from the true distribution X ∼ p(x) allows for the Monte Carlo estimation of the loss DISPLAYFORM0 To compute the density ratio DISPLAYFORM1 we suggest to use well-known technique of density ratio estimation via training discriminator network. Denoting discriminator output as D(x, x), we suggest the following optimization problem for the discriminator. 
DISPLAYFORM2 Speaking informally, such discriminator takes two images as input and tries to figure out which image is sampled from true distribution and which one is generated by the one step of proposal distribution. It is easy to show that optimal discriminator in problem 13 will be DISPLAYFORM3 Note that for optimal discriminator we have D(x, x) = 1 − D(x, x). In practice, we have no optimal discriminator and these values can differ significantly. Thus, we have four ways for density ratio estimation that may differ significantly. DISPLAYFORM4 To avoid the ambiguity we suggest to use the discriminator of a special structure. Let D(x, x) be a convolutional neural network with scalar output. Then the output of discriminator D(x, x) is defined as follows. DISPLAYFORM5 In other words, such discriminator can be described as the following procedure. For single neural network D(·, ·) we evaluate two outputs D(x, x) and D(x, x). Then we take softmax operation for these values. Summing up all the steps, we obtain algorithm 2.Algorithm 2 Optimization of proposal distribution in sample-based case DISPLAYFORM6 approximate loss with finite number of samples DISPLAYFORM7 perform gradient descent step end for return parameters φ Algorithm 2 could also be employed for direct optimization of the acceptance rate (Eq. 1). But, in Appendix B.2 we provide an intuition for this setting that the direct optimization of the acceptance rate may struggle from vanishing gradients. In this section, we provide experiments for both density-based and sample-based settings, showing the proposed procedure is applicable to high dimensional target distributions. Code for reproducing all of the experiments will be published with the camera-ready version of the paper. To demonstrate performance of our approach we reproduce the experiment from BID26. For target distributions we use synthetic 2d distributions (see Appendix C.3 for densities): ring (a ring-shaped density), mog2 (a mixture of 2 Gaussians), mog6 (a mixture of 6 Gaussians), ring5 (a mixture of 5 distinct rings). We measure performance of learned samplers using Effective Sample Size (see Appendix C.4 for formulation). Since the unnormalized densities of target distributions are given, we can learn proposals as suggested in the density-based setting (Section 3.1).To learn the independent proposal we use RealNVP model BID3 ) (see details in Appendix C.2) and compare the performance of proposals after optimization of different objectives: the acceptance rate (AR), our lower bound on the acceptance rate (ARLB), evidence lower bound that corresponds to the variational inference (VI). We also compare the performance of obtained independent proposals with the performance of Markov chain proposals: A-NICE-MC BID26, Hamiltonian Monte Carlo (HMC).In Tables 2, 3 we see that our approach has comparable performance with A-NICE-MC BID26. However, comparison between A-NICE-MC and learning independent proposal is not the main subject of interest, since A-NICE-MC learns Markov chain proposal. On the one hand, Markov chain proposal uses more information while generating a new sample, hence can learn more expressive stationary distribution, on the other hand, usage of previous sample increase autocorrelation between samples and reduces ESS. Thus, the main point of interest is the comparison of two independent proposals: one is learned by maximization of the acceptance rate (or its lower bound), and the second is learned by variational inference procedure, i.e. 
maximization of the evidence lower bound. In TAB1 we see that both maximization of the acceptance rate and of its lower bound outperform variational inference for all target distributions. Moreover, in FIG0 we show that variational inference fails to cover all the modes of mog6, in contrast to proposals learned via maximization of the acceptance rate or its lower bound. Densities of learned proposals and histograms for all distributions are presented in Appendix C.6.

In the density-based setting, we consider the Bayesian inference problem for the weights of a neural network. In our experiments we consider approximation of the predictive distribution (Eq. 7) as our main goal. To estimate the goodness of the approximation we measure negative log-likelihood and accuracy on the test set. In subsection 3.1 we show that the lower bound on the acceptance rate can be optimized more efficiently than the acceptance rate itself due to the usage of minibatches. But other questions arise.
1. Does the proposed objective in Eq. 11 allow for a better estimation of the predictive distribution compared to variational inference?
2. Does the application of the MH correction to the learned proposal distribution allow for a better estimation of the predictive distribution (Eq. 7) than estimation via raw samples from the proposal?
To answer these questions we consider a reduced LeNet-5 architecture (see Appendix C.7) for a classification task on 20k images from the MNIST dataset (for test data we use all of the MNIST test set). Even after architecture reduction we still face the challenging task of learning a complex distribution in 8550-dimensional space. For the proposal distribution we use a fully-factorized Gaussian DISPLAYFORM0. For variational inference, we train the model using different initializations and pick the model according to the best ELBO. For our procedure, we do the same and choose the model by the maximum value of the acceptance rate lower bound. In Algorithm 1 we propose to sample from the posterior distribution using the independent MH algorithm and the current proposal. It turns out in practice that it is better to use the currently learned proposal q φ (θ) = N (θ | µ, σ) as the initial state for a random-walk MH algorithm. That is, we start with the mean µ as an initial point, and then use the random-walk proposal q(θ | θ) = N (θ | θ, σ) with the variances σ of the current independent proposal. This should be considered as a heuristic that improves the approximation of the loss function. The optimization of the acceptance rate lower bound results in a better estimation of the predictive distribution than variational inference (see FIG1). Optimization of the acceptance rate for the same number of epochs results in nearly 30% accuracy on the test set. That is why we do not report results for this procedure in FIG1. In both procedures we apply the independent MH algorithm to estimate the predictive distribution. To answer the second question we estimate the predictive distribution in two ways. The first way is to perform 100 accept/reject steps of the independent MH algorithm with the learned proposal q φ (θ) after each epoch, i.e. perform MH correction of the samples from the proposal. The second way is to take the same number of samples from q φ (θ) without MH correction. For both estimations of the predictive distribution, we evaluate negative log-likelihood on the test set and compare them.
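To make the training procedure just described concrete, the following is a minimal sketch of one optimization step of the lower-bound objective (Eq. 11) for the fully-factorized Gaussian proposal. The helper names (log_prior, log_lik, mh_sample_posterior) and the exact minibatch rescaling are illustrative assumptions rather than the authors' implementation; the posterior sampling step is meant to follow the random-walk MH heuristic described above.

```python
import torch

def arlb_step(mu, log_sigma, log_prior, log_lik, mh_sample_posterior,
              x_batch, y_batch, dataset_size, optimizer, n_samples=1):
    """One sketched optimization step of the acceptance-rate lower bound (Eq. 11).

    mu, log_sigma       : parameters of the fully-factorized Gaussian proposal q(theta).
    log_prior(theta)    : log p(theta); log_lik(theta, x, y): log p(y | x, theta) per object.
    mh_sample_posterior : returns approximate posterior samples, e.g. random-walk MH
                          started at mu with step size sigma (the heuristic from the text).
    """
    sigma = log_sigma.exp()

    # --- first KL term: standard (doubly stochastic) variational inference ---
    eps = torch.randn(n_samples, mu.numel(), device=mu.device)
    theta = mu + sigma * eps                                      # reparameterized q samples
    log_q = (-0.5 * eps.pow(2) - log_sigma - 0.9189385).sum(-1)   # log N(theta | mu, sigma)
    # minibatch estimate of the expected log-likelihood, rescaled to the full dataset
    exp_ll = dataset_size / x_batch.shape[0] * log_lik(theta, x_batch, y_batch).mean()
    elbo = exp_ll + log_prior(theta).mean() - log_q.mean()

    # --- second KL term: only -E_{p(theta|D)} log q(theta) depends on phi ---
    with torch.no_grad():
        theta_post = mh_sample_posterior(mu.detach(), sigma.detach())  # no gradient through MH
    d = theta_post - mu
    cross_entropy = (0.5 * (d / sigma).pow(2) + log_sigma + 0.9189385).sum(-1).mean()

    loss = -elbo + cross_entropy   # minimize the sum of the two KL divergences
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The first term is the usual doubly stochastic ELBO estimate, so only a minibatch of (x, y) pairs is needed per step; the second term is estimated on approximate posterior samples and does not touch the data at all, which is why the lower bound, unlike the acceptance rate itself, admits unbiased minibatch gradients.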
The MH correction of the learned proposal improves the estimation of the predictive distribution for variational inference (right plot of FIG2) but does not do so for the optimization of the acceptance rate lower bound (left plot of FIG2). This fact may be considered as implicit evidence that our procedure learns a proposal distribution with a higher acceptance rate.

In the sample-based setting, we estimate the density ratio using a discriminator. Hence we do not use the minibatching property (see subsection 3.1) of the obtained lower bound, and the optimization problems for the acceptance rate and for the lower bound have the same efficiency in terms of using data. That is why our main goal in this setting is to compare the optimization of the acceptance rate and the optimization of the lower bound. Also, in this setting, we have a Markov chain proposal that is interesting to compare with the independent proposal. Summing up, we formulate the following questions:
1. Does the optimization of the lower bound have any benefits compared to the direct optimization of the acceptance rate?
2. Do we have a mixing issue while learning the Markov chain proposal in practice?
3. Could we improve the visual quality of samples by applying the MH correction to the learned proposal?
We use the DCGAN architecture for the proposal and discriminator (see Appendix C.8) and apply our algorithm to the MNIST dataset. We consider two optimization problems: direct optimization of the acceptance rate and of its lower bound. We also consider two ways to obtain samples from the approximation of the target distribution - use raw samples from the learned proposal, or perform the MH algorithm, where we use the learned discriminator for density ratio estimation. In the case of the independent proposal, we show that the MH correction at the evaluation step allows us to improve the visual quality of samples - figures 4(a) and 4(b) for the direct optimization of the acceptance rate, figures 4(c) and 4(d) for the optimization of its lower bound. Note that in Algorithm 2 we do not apply the independent MH algorithm during training. Potentially, one can use the MH algorithm considering any generative model as a proposal distribution and learning a discriminator for density ratio estimation. Also, for this proposal, we demonstrate the negligible difference in visual quality of samples obtained by the direct optimization of the acceptance rate (see Fig. 4(a)) and by the optimization of the lower bound (see Fig. 4(c)). Figure 4: Samples from the learned independent proposal obtained via optimization of the acceptance rate (4(a), 4(b)) and of its lower bound (4(c), 4(d)). In Fig. 4(b), 4(d) we show raw samples from the learned proposal. In Fig. 4(a), 4(c) we show the samples after applying the independent MH correction, using the learned discriminator for density ratio estimation. In the case of the Markov chain proposal, we show that the direct optimization of the acceptance rate results in slow mixing (see Fig. 5(a)) - most of the time the proposal generates samples from one of the modes (digits) and rarely switches to another mode. When we perform the optimization of the lower bound, the proposal switches between modes frequently (see Fig. 5(b)). Note that we obtain different distributions of the samples because of the conditioning of our proposal. To show that the learned proposal distribution has the Markov property rather than being totally independent, we show samples from the proposal conditioned on two different points in the dataset (see Fig. 6).
The difference in samples from two these distributions (Fig. 6(a), 6(a)) reflects the dependence on the conditioning. Additionally, in Appendix C.9 we present samples from the chain after 10000 accepted images and also samples from the chain that was initialized with noise. This paper proposes to use the acceptance rate of the MH algorithm as the universal objective for learning to sample from some target distribution. We also propose the lower bound on the acceptance rate that should be preferred over the direct maximization of the acceptance rate in many cases. The proposed approach provides many ways of improvement by the combination with techniques from the recent developments in the field of MCMC, GANs, variational inference. For example• The proposed loss function can be combined with the loss function from BID16, thus allowing to learn the Markov chain proposal in the density-based setting.• We can use stochastic Hamiltonian Monte Carlo for the loss estimation in Algorithm 1. • In sample-based setting one can use more advanced techniques of density ratio estimation. Application of the MH algorithm to improve the quality of generative models also requires exhaustive further exploration and rigorous treatment. Remind that we have random variables ξ = DISPLAYFORM0, and want to prove the following equalities. DISPLAYFORM1 Equality E ξ min{1, ξ} = P{ξ > u} is obvious. DISPLAYFORM2 Equality P{ξ > u} = 1 − 1 2 E ξ |ξ − 1| can be proofed as follows. DISPLAYFORM3 where F ξ (u) is CDF of random variable ξ. Note that F ξ = 0 since ξ ∈ (0, +∞]. Eq. 21 can be rewritten in two ways. DISPLAYFORM4 To rewrite Eq. 21 in the second way we note that Eξ = 1. DISPLAYFORM5 Summing equations 22 and 23 in the following formula DISPLAYFORM6 Using the form of ξ we can rewrite the acceptance rate as DISPLAYFORM7 In independent case we have ξ = DISPLAYFORM0 and we want to prove that E ξ |ξ − 1| is semimetric (or pseudo-metric) in space of distributions. For this appendix, we denote D(p, q) = E ξ |ξ − 1|. The first two axioms for metric obviously holds DISPLAYFORM1 There is an example when triangle inequality does not hold. DISPLAYFORM2 But weaker inequality can be proved. DISPLAYFORM3 Summing up equations 28, 30 and 32 we obtain DISPLAYFORM4 B OPTIMIZATION OF PROPOSAL DISTRIBUTION Firstly, let's consider the case of gaussian random-walk proposal q(x | x) = N (x | x, σ). The optimization problem for the acceptance rate takes the form DISPLAYFORM0 It is easy to see that we can obtain acceptance rate arbitrarly close to 1, taking σ small enough. In the case of the independent proposal, we don't have the collapsing to the delta-function problem. In our work, it is important to show non-collapsing during optimization of the lower bound, but the same hold for the direct optimization of the acceptance rate. To provide such intuition we consider one-dimensional case where we have some target distribution p(x) and independent proposal q(x) = N (x | µ, σ). Choosing σ small enough, we approximate sampling with the independent MH as sampling on some finite support x ∈ [µ − a, µ + a]. For this support, we approximate the target distribution with the uniform distribution (see FIG5).For such approximation, optimization of lower bound takes the form Here N (x | 0, σ, −a, a) is truncated normal distribution. The first KL-divergence can be written as follows. DISPLAYFORM1 DISPLAYFORM2 Here Z is normalization constant of truncated log normal distribution and DISPLAYFORM3 Summing up two KL-divergencies and taking derivative w.r.t. 
σ we obtain ∂ ∂σ DISPLAYFORM4 To show that the derivative of the lower bound w.r.t. σ is negative, we need to prove that the following inequality holds for positive x. DISPLAYFORM5 2 /2 dt and noting that 2φ(x) = √ 2π(Φ(x) − Φ(−x)) we can rewrite inequality 47 as DISPLAYFORM6 By the fundamental theorem of calculus, we have DISPLAYFORM7 Hence, DISPLAYFORM8 Or equivalently, DISPLAYFORM9 Using this inequality twice, we obtain DISPLAYFORM10 and DISPLAYFORM11 Thus, the target inequality can be verified by the verification of DISPLAYFORM12 Thus, we show that partial derivative of our lower bound w.r.t. σ is negative. Using that knowledge we can improve our loss by taking a bigger value of σ. Hence, such proposal does not collapse to delta-function. In this section, we provide an intuition for sample-based setting that the loss function for lower bound has better gradients than the loss function for acceptance rate. Firstly, we remind that in the sample-based setting we use a discriminator for density ratio estimation. DISPLAYFORM0 For this purpose we use the discriminator of special structure DISPLAYFORM1 We denote d(x, x) = D(x, x) − D(x, x) and consider the case when the discriminator can easily distinguish fake pairs from valid pairs. So D(x, x) is close to 1 and d(x, x) 0 for x ∼ p(x) and x ∼ q(x | x). To evaluate gradients we consider Monte Carlo estimations of each loss and take gradients w.r.t. x in order to obtain gradients for parameters of proposal distribution. We do not introduce the reparameterization trick to simplify the notation but assume it to be performed. For the optimization of the acceptance rate we have DISPLAYFORM2 DISPLAYFORM3 While for the optimization of the lower bound we have DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 Now we compare Eq. 59 and Eq. 62. We see that in case of strong discriminator we have vanishing gradients in Eq. 59 due to exp(−d(x, x)), while it is not the case for Eq. 62. This experiment shows that it is possible to optimize the acceptance rate, optimizing its lower bound. For the target distribution we consider bimodal Gaussian p(x) = 0.5 · N (x | − 2, 0.5) + 0.5 · N (x | 2, 0.7), for the independent proposal we consider unimodal gaussian q(x) = N (x | µ, σ). We perform stochastic gradient optimization using Algorithm 1 from the same initialization for both objectives FIG6 and obtain approximately the same local maximums. For the proposal distribution we use similar architecture to the NICE proposal. The RealNVP model BID3 use the same strategy for evaluating the Jacobian as the NICE model does. Each coupling layer define the following function. Given a D dimensional input x and d < D, the output y is evaluated by the formula DISPLAYFORM0 where the functions s, t can be arbitrary complex, since the structure of the functions doesn't influence the computation of the Jacobian. For our proposal we use 4 coupling layers with s and t consist of two fully-connected layers with hidden dimension of 256. For synthetic distributions we consider the same distributions as in BID26.The analytic form of p(x) for ring is: DISPLAYFORM0 The analytic form of p(x) for mog2 is: DISPLAYFORM1 where DISPLAYFORM2 The analytic form of p(x) for mog6 is: DISPLAYFORM3 where DISPLAYFORM4 DISPLAYFORM5 where DISPLAYFORM6 For the effective sample size formulation we follow BID26.Assume a target distribution p(x), and a Markov chain Monte Carlo (MCMC) sampler that produces a set of N correlated samples DISPLAYFORM0. 
Suppose we are estimating the mean of p(x) through sampling; we assume that increasing the number of samples will reduce the variance of that estimate. DISPLAYFORM1 DISPLAYFORM2 where ρ s denotes the autocorrelation under q of x at lag s. We compute the following empirical estimate ρ̂ s for ρ s: DISPLAYFORM3 where μ̂ and σ̂ are the empirical mean and variance obtained by an independent sampler. Due to the noise at large lags s, we adopt the approach of Hoffman & Gelman FORMULA1, where we truncate the sum over the autocorrelations when the autocorrelation goes below 0.05.

In this section we provide empirical evidence that maximization of the proposed lower bound on the acceptance rate (ARLB) results in maximization of the acceptance rate (AR). For that purpose we evaluate the ARLB and the AR at each iteration during the optimization of the ARLB. After training we evaluate the correlation coefficient between the ARLB and the logarithm of the AR. The curves are shown in FIG9: plots for the acceptance rate and the acceptance rate lower bound evaluated at every iteration during the optimization of the acceptance rate lower bound. The correlation coefficient is evaluated between the logarithm of the acceptance rate and the acceptance rate lower bound.

In this section we provide level plots of the learned proposal densities (see FIG0). We also provide 2d histograms of samples from the MH algorithm using the corresponding proposals (see FIG0). In this section, we show additional figures for Markov chain proposals. In FIG0 we show samples from the chain that was initialized with noise. In FIG0 we show samples from the chain after 10000 accepted samples. FIG0: Samples from the chain initialized with noise. To obtain samples we use the MH algorithm with the learned proposal and the learned discriminator for density ratio estimation. In Fig. 5(a) we use the proposal and discriminator that are learned during optimization of the acceptance rate. In Fig. 5(b) we use the proposal and discriminator that are learned during the optimization of the acceptance rate lower bound. Samples in the chain are obtained one by one from left to right and from top to bottom, starting with noise (the first image in the figure). FIG0: Samples from the chain after 10000 accepted samples. To obtain samples we use the MH algorithm with the learned proposal and the learned discriminator for density ratio estimation. In Fig. 5(a) we use the proposal and discriminator that are learned during optimization of the acceptance rate. In Fig. 5(b) we use the proposal and discriminator that are learned during the optimization of the acceptance rate lower bound. Samples in the chain are obtained one by one from left to right and from top to bottom.
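As a concrete reference for the ESS metric described in Appendix C.4 above, here is a minimal sketch; the truncation rule (stop summing once the autocorrelation drops below 0.05) follows the text, while the per-dimension aggregation in the final comment is an assumption, since the text does not specify it.

```python
import numpy as np

def effective_sample_size(chain, mu, sigma, cutoff=0.05):
    """ESS of a 1-d chain: N / (1 + 2 * sum_s rho_s), with the sum truncated
    once the empirical autocorrelation rho_s falls below `cutoff`.

    chain     : (N,) array of correlated samples produced by the sampler.
    mu, sigma : empirical mean / std obtained from an independent sampler,
                as described in Appendix C.4.
    """
    x = (np.asarray(chain) - mu) / sigma
    n = len(x)
    rho_sum = 0.0
    for s in range(1, n):
        rho = np.mean(x[s:] * x[:-s])      # empirical autocorrelation at lag s
        if rho < cutoff:                   # truncate the noisy large-lag terms
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

# For multi-dimensional targets one common convention (an assumption here) is to
# report the ESS of each coordinate and take the minimum:
# ess = min(effective_sample_size(chain[:, d], mu[d], sigma[d]) for d in range(D))
```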
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hkg313AcFX
Learning to sample via lower bounding the acceptance rate of the Metropolis-Hastings algorithm
This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning and segmentation) compared to existing methods. Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE). We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and show that such cross-modal training (when possible) helps even more. Recently there has been a lot of progress in self-supervised representation learning for textual sequences, followed by supervised fine-tuning (using small labeled datasets) of shallow (often linear) decoders on various downstream NLP tasks, such as sentiment classification. In this paper, we build on this work and propose a new method for self-supervised representation learning for videos, optionally accompanied by speech transcripts generated by automatic speech recognition (ASR). We show that fine-tuning linear decoders together with our self-supervised video representations can achieve state-of-the-art results on various supervised tasks, including video classification, segmentation and captioning. Our approach builds on the popular BERT (Bidirectional Encoder Representations from Transformers) model for text. This uses the Transformer architecture to encode long sentences, and trains the model using the "masked language modeling" (MLM) training objective, in which the model must predict the missing words given their bidirectional context. The MLM loss requires that each token in the sequence be discrete. The VideoBERT model of (a) therefore applied vector quantization (VQ) to video frames before passing them (along with optional ASR tokens) to the BERT model. Unfortunately, VQ loses fine-grained information that is often critical for downstream tasks. More recently, several papers (e.g., VilBERT and LXMERT ) proposed to address this limitation by directly measuring the visual similarity between frames using pre-trained visual encoders. In this paper, we propose a way to train bidirectional transformer models on sequences of real-valued vectors (e.g., video frames), x 1:T, using noise contrastive estimation (NCE), without needing pre-trained visual encoders. We call our method "Contrastive Bidirectional Transformer" or CBT. We also develop a method that combines x 1:T with an optional sequence of discrete tokens, y 1:T (e.g., derived from ASR). In contrast to the VideoBERT paper (a), we provide a "lightweight" way of combining these signals after training each modality separately. In particular, we propose a cross-modal transformer to maximize the mutual information between x 1:T and y 1:T at the sequence level (rather than at the frame level). This method is robust to small misalignments between the sequences (e.g., if the words at time t do not exactly correspond to what is visually present in frame t). We demonstrate the effectiveness of the proposed approach for learning short-term visual representations, as well as longer-term temporal representations. For visual representations, we encode each window of K frames using a 3D convolutional neural network S3D , and then pass this sequence of features to the CBT model for self-supervised pretraining with the NCE loss on the unlabeled Kinetics dataset .

Figure 1: Summary of our method for training and evaluation. The blocks above the line are pretrained in a self-supervised way. The solid blocks represent the BERT language model, which is pre-trained on web text and frozen (see section 3.1). The black CBT visual block is trained using the NCE loss on unlabeled HowTo or Kinetics videos (see section 3.2). The red cross-modal transformer is trained using the cross-modal loss on HowTo with ASR (see section 3.3). The components below the line are trained in a supervised way on various tasks. The purple block is trained for next action prediction on ActivityNet, Breakfast, and 50Salads (see section 4.2). The blue block is trained for video classification on UCF101 and HMDB51 (see section 4.1). The green blocks are trained on captioning and video segmentation tasks, which are described in the supplementary material (section 6.1 and section 6.2). Lseq refers to the cross-entropy sequence loss.

We then fine-tune a linear classifier for video classification on UCF101 and HMDB51. We show that our method outperforms previous state-of-the-art self-supervised methods by large margins (UCF101 from 75.7% to 79.5% and HMDB51 from 35.7% to 44.6%). For temporal representations, we encode each window of K frames using an S3D network that is pretrained on Kinetics, and then "frozen". We then pass this sequence of features to the CBT model for self-supervised pretraining with the NCE loss on the unlabeled HowTo100M dataset (b). We also evaluate the effects of running ASR on HowTo100M, and passing this to our cross-modal transformer as an additional signal. We then fine-tune various shallow decoders for a variety of tasks, including video classification, segmentation and captioning. We show large gains compared to previous methods, especially when we use cross-modal pretraining. See fig. 1 for a summary of our training method and evaluation protocol.

Video representations. Most existing work on learning video representations, such as (; ; ; ;), only captures a few seconds of video. Long-term context can be encoded by recurrent neural networks (; b), graph convolutional networks, or long-term feature banks, but these are all supervised methods. Some recent work has been done on learning self-supervised video representations;;;;; by defining pretext tasks such as ordering , rotation , temporal cycle consistency (b;) or colorization , but similar to their supervised counterparts they capture only a few seconds. Self-supervised context modeling. Recently, there has been a lot of work on self-supervised context modeling for language representations (; ;). In particular, the BERT model, which stands for Bidirectional Encoder Representations from Transformers , pre-trains deep bidirectional representations by jointly conditioning on both left and right context in all layers. The pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of NLP tasks, such as question answering and linguistic entailment. Our representation builds on this approach and adapts it to continuous video data by using a contrastive loss. Mutual information estimation and maximization. For representation learning, a signal encoder can be trained to maximize the mutual information (MI) between the input signal and its encoded outputs, or the encoded outputs of the signal and its context (see e.g., (; ; ;). In particular, contrastive predictive coding (CPC) uses noise contrastive estimation (Gutmann & Hyvärinen, 2010) to maximize the MI lower bound.
Unlike CPC, which relies on auto regressive models to encode context, we use BERT to encode bidirectional context within each sequence, and across different modalities. Cross-modal learning. The multi-modality of video is a rich source of information for selfsupervised learning of video representations. Since videos contain both visual and audio signals that are roughly synchronized, the two modalities can supervised each other, as explored in prior work such as (; b; a;). Another common form of weak supervision is based on video and language, where language is either obtained by automatic speech recognition (ASR) or from additional textual description. Language can be leveraged by finding a joint embedding space for both visual and textual modalities or by learning an alignment between the two modalities (; ; a). Recently, several concurrent approaches (a; a; ; ; b) generalize the BERT architecture and MLM objective to learn visual-linguistic representations. They assume the visual representations to be fixed and given by supervised pre-trained visual encoders, and define the visual MLM objective in terms of visual similarities (e.g. via vector quantization or measuring L2 distance) between the original and predicted visual representations. To the best of our knowledge, our proposed CBT is the first to demonstrate the effectiveness of BERT-style pre-training in a fully self-supervised way for video. We first give an overview of the BERT model for learning from sequences of words, y 1:T. We then discuss an extension to the case of sequences of video frames, x 1:T. Finally, we discuss how to learn from both kinds of data, even when not perfectly aligned. The BERT model takes in a sequence of discrete tokens, y 1:T, where y t ∈ {1, . . ., K}, embeds each one into a dense vector, e y t ∈ D, and then emits a sequence of dense output vectors, h y t ∈ D y, which are computed using a transformer . The output sequence captures local and global semantic information about the input sequence. The main training objective for BERT is to minimize the pseudo negative log likelihood, defined by where y −t is the sequence of all words except the t'th, and Here f enc (k) is an embedding lookup table for token k, e t = f enc (y t) is the embedding for the token at t, is the embedding sequence for the context, and is a multi-layer multi-headed transformer network that takes a T × D feature matrix as input (masked at location t) and returns a matrix of the same size. The BERT model requires a fixed discrete vocabulary. However, for images and videos, the inputs are real-valued vectors. We propose to use the softmax version of the noise contrastive estimation (NCE) loss , which has the form where where e t = f enc (x t) is the output of a 3D CNN applied to a small window around frame t (we use the S3D model from ),ê t = g context (e −t) is the output of a visual transformer, and neg(t) is a set of (indices of) "negative examples" (in practice we use all the other frames from the same minibatch as frame t). Intuitively the NCE loss encourages the model to learn to identify the correct frame (given the context) compared to a set of negative distractors. More formally, it can be shown that the NCE loss maximizes (a lower bound on) the mutual information (MI) between x t and x −t (see e.g., ). This loss has been used in other work on self-supervised visual representation learning, e.g., in the deep infomax (DIM) and contrastive predictive coding (CPC) papers. 
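Before contrasting this with how DIM and CPC build their context predictors, a minimal sketch of the frame-level NCE loss defined above may be useful. The zero-masking of the selected frames and the use of all other frames in the minibatch as negatives follow the description in the text; the exact masking embedding and the feature shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def cbt_visual_nce_loss(e, g_context, mask_idx):
    """Frame-level softmax-NCE loss for real-valued inputs (the continuous analogue of MLM).

    e         : (B, T, D) frame features e_t = f_enc(x_t), e.g. from an S3D network.
    g_context : bidirectional transformer; given the masked sequence it returns
                context predictions e_hat of shape (B, T, D).
    mask_idx  : (B, M) indices of the masked positions in each sequence.
    """
    B, T, D = e.shape
    idx = mask_idx.unsqueeze(-1).expand(-1, -1, D)
    e_masked = e.clone()
    e_masked.scatter_(1, idx, 0.0)            # zero out masked frames (a simplification)
    e_hat = g_context(e_masked)               # (B, T, D) bidirectional context predictions

    pred = torch.gather(e_hat, 1, idx).reshape(-1, D)    # predictions at masked positions
    candidates = e.reshape(-1, D)                        # all frames in the minibatch
    logits = pred @ candidates.t()                       # (B*M, B*T) dot-product scores

    # index of the true frame for each masked position
    offsets = (torch.arange(B, device=mask_idx.device).unsqueeze(1) * T)
    labels = (mask_idx + offsets).reshape(-1)
    return F.cross_entropy(logits, labels)               # -log NCE(x_t | x_{-t})
```

The cross-modal loss of Section 3.3 has the same softmax-NCE form, except that the score for a (video, ASR) pair is the scalar output of the shallow cross-modal transformer and the negatives are ASR sequences taken from other videos.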
In DIM, the context predictor uses a CNN applied to neighboring patches in the same image, and in CPC, the context predictor uses a causal autoregressive model applied to "earlier" patches in the same image. In our CBT method, the context predictor is a bidirectional transformer applied to video frames. In this section we show how to learn useful representations from sequences of continuous visual features (from video) and sequences of discrete words (from ASR). More precisely, assume we have two sequences, x = x 1:T representing video, and y = y 1:T, representing ASR. Note that the sequences may not be perfectly aligned, since a person may speak about things at time t that are not visible in the frame at time t. Therefore it does not make sense to try to maximize the MI between x t and y t at the frame level. Instead, we try to maximize MI between x and y at the sequence level. To do this, we first encode each sequence using CBT and BERT to get h x 1:T = CBT(x 1:T) and h y 1:T = BERT(y 1:T), as shown in fig. 1. We then concatenate these sequences and pass them to a shallow cross-modal transformer to produce h xy 1:T +T. Finally, we pass this to a shallow MLP to compute an MI-like score MI(x, y) = f (h xy 1:T +T). (Here f extracts the features from h xy 0, but it could also use average pooling.) This model is trained using L cross = −E (x,y)∼D log NCE(y|x), where where Neg(y) is a set of ASR sequences not associated with video x. Note that our cross-modal training assumes there is something in common between the video and text streams. In practice this means we have to focus on certain kinds of videos, such as instructional videos, in which the spoken words and the visual content are "about" the same thing. By contrast, arbitrary videos may contain speech content that is unrelated to the visual frames (e.g., imagine a conversation between two characters in a drama or soap opera). Our overall model has three components: one transformer (BERT) that takes discrete ASR tokens, one transformer that takes continuous video features, and a third transformer to estimate mutual information between the two modalities. We jointly train the model by optimizing: We fix w bert = 0, since we use a pre-trained (frozen) BERT model for ASR. We set w visual = 1, and either set w cross = 1 or w cross = 0, depending on whether we use cross-modal training or not. In this section we conduct experiments to study the usefulness of the representations learned by our CBT model for various downstream tasks, including action anticipation, video captioning and action segmentation. We also consider ablations to our model, such as turning cross-modal training on or off, varying the size of the visual transformer, and varying the amount of unlabeled pre-training data. In this section we evaluate self-supervised visual representation learning on the downstream task of action recognition. Existing methods use various proxy tasks to pre-train feature extractors in a self-supervised way, and then use supervised learning to train linear classifiers on top of these frozen representations, or fine-tune the classifier plus feature extractor end-to-end. We follow the same protocol. Experimental setup. We follow the standard practice from recent works (; ; by pre-training our model (S3D feature extractor followed by CBT) on unlabeled RGB-only Kinetics videos. Kinetics is the largest action recognition dataset containing 500k short clip videos (about 10 seconds long) for 600 human actions classes. 
We take 1-minute sliding windows from the original YouTube videos that they are selected from. We then use the (average pooled) S3D features as input to a linear classifier, and train the classifier on various datasets. For evaluation, we use UCF101 , which contains 13,320 videos from 101 human action classes, and HMDB51 , which contains 7,000 videos from 51 classes. For both datasets we report the action recognition test accuracy averaged over the 3 standard train/test splits. To pre-train our CBT model, we use a curriculum learning strategy, by first pre-training the S3D feature extractor on unsupervised clips using the loss proposed in the 3DRotNet paper on 16 consecutive frames. We then jointly fine-tune the last blocks of S3D (Mixed5b and Mixed5c) with the visual transformer using the CBT visual loss. We observed that this strategy gave us better results on downstream tasks compared to pre-training from scratch using CBT; it also saves memory and computation, which allows us to use longer sequences. During pre-training, we set the number of visual transformer layers to be 2, the number of attention heads to be 4, and the hidden unit size to be 768. We randomly take 60-second sliding windows from the Kinetics videos, and break them into sequences of 1.5-second clips. We randomly mask out 6 out of the 40 possible locations. We resize the video frames to 112 by 112 before encoding them with S3D to save memory. The model is trained for 500K iterations with a batch size of 128 and a learning rate of 1e-5. Comparison of pre-training strategies. In Table 1 (Left) we compare our way of pre-training the S3D model (i.e., using the CBT visual loss) to existing approaches. In particular, we consider the Shuffle&Learn and 3DRotNet proxy tasks. We reimplement the two methods using the S3D CNN, and pre-train them on the same Kinetics data. We also consider random initialization. We report classification results on UCF101 and HMDB51 using frozen features and fine-tuned features passed to a linear classifier. We see that our method outperforms existing training methods by a large margin. Comparison to existing methods. Table 1 (Right) compares the results of our method to various state-of-the-art self-supervised methods. (We only report the results of fine-tuning, which are better for all methods than using frozen features.) Note that the methods differ both in architecture and training objective. First we compare against the 2DCNN approaches Shuffle&Learn and OPN . Our method outperforms both by a very large margin. This can be explained by the fact that our backbone is a 3DCNN architecture, which is much more powerful than 2D CNNs for video action recognition. Next we compare against approaches using 3DCNN architectures similar to our S3D. We also outperform all of these methods by a very large margin, and even beat the most recent approach, DPC , by 3.8 points on UCF101 and 8.9 points on HMDB51. We believe this is due to the better contextual features that we are able to learn by using the transformer model and NCE loss. In this section, we consider self-supervised training of representations from long videos, followed by supervised fine-tuning on various downstream tasks. To avoid running out of memory, we pre-train the S3D model on the task of classifying (short) Kinetics videos. We then freeze this feature extractor, and focus on learning longer-term temporal representations using the self-supervised CBT model.
That is, we precompute short term representations e x t = f enc (x t) for all videos using S3D, and focus on learning global representations h For the self-supervised pre-training, we use unlabeled videos from the HowTo100M dataset (b). This contains ∼ 1M instructional videos (details below), so the speech is informative about the vision. Therefore, we also run ASR on this dataset and use cross-modal training to compute h Details on self-supervised pre-training. We pre-train our model on HowTo100M (b). This is a new large-scale dataset of 1.22M narrated instructional videos available on YouTube and covers among 23k different visual tasks. The average duration of a video is 6.5 minutes and there are on average 110 clips per video. To extract visual features, we resize the videos to be 224 by 224, and compute visual features over sliding windows of 1.5 seconds (30 frames at 20 FPS) using an S3D network pre-trained on the Kinetics dataset . We take the feature embeddings from the final Mixed5c block of the S3D network before the classification layer, and average pool the features spatio-temporally to get vectors of size 1024. We follow the same strategy for extracting visual features on the downstream tasks. The visual features are not fine-tuned during pre-training or when applied to downstream tasks. To extract text features, we convert the audio track into sentences by calling the YouTube ASR API, followed by an off-the-shelf LSTM-based language model to add punctuation, thus converting the stream of words into a stream of sentences. We then follow the standard preprocessing steps from BERT , and use WordPieces tokenization with the same vocabulary of 30,000 tokens. To encode ASR tokens, we take the pre-trained BERT-base architecture, which has 12 layers of Transformers, each of which has 768 hidden units and 12 attention heads. To construct paired inputs to pre-train CBT with the cross-modal objective, we iterate over all the sentence segments in the HowTo100M dataset, and concatenate short segments until they reach the maximal length of 48 tokens. We then retrieve up to 48 visual features (72 seconds) starting at the same locations in videos. We mask out 6 out of 48 features randomly. For both the video and cross-modal transformers, we set the total hidden units per layer to 768. We fix the number of layers to 1 for the cross-modal transformer and explore the optimal number of layers and attention heads for the video transformer. Their weights are randomly initialized. For pre-training the CBT model on HowTo100M, we use 32 Cloud TPUs and a total batch size of 128. We use the Adam optimizer with an initial learning rate of 1e-5 and a linear decay learning rate schedule. The model is trained for 2 million iterations, which takes around 2 days. Details on supervised evaluation. We evaluate the pre-trained temporal representations by transfer learning to downstream tasks. We first focus on action anticipation, whose goal is to predict the future actions by observing video sequences preceding them. In the supplementary, we also present on video captioning and action segmentation. For the action anticipation task, we follow the standard setup described from recent work (; a). 
We consider three datasets: the Breakfast dataset is composed of 1712 cooking videos and contains 48 fine-grained action classes; the 50Salads dataset contains 50 cooking videos with 17 fine-grained action classes; and the ActivityNet 200 dataset contains 19994 YouTube videos with 200 human action classes (beyond the cooking and instructional domains). The inputs are video segments up to T c seconds before the actual action starts, and the outputs are categorical labels. For comparison with previous approaches, we set T c = 1 for Breakfast and 50Salads, and T c = 5 for ActivityNet. For all experiments except for ablation on sequence lengths, we fix the input sequence length to be 72 seconds (corresponding to 48 sliding windows), features for videos shorter than 72 seconds are zero-padded. The outputs of CBT are transformed features with the same length, we take the output feature at the last non-padded position to represent the whole sequence, and put a linear classifier on top to predict the future actions. We jointly fine-tune the weights of visual transformers with the linear classifier. The text transformers and cross-modal transformers are not used during fine-tuning since only visual inputs are available. We train our model for 5 epochs using a batch size of 32 with the Adam optimizer and an initial learning rate of 1e-3. We report the top-1 accuracy on the test sets for Breakfast and 50Salads, and on the validation set for ActivityNet. Comparison to existing methods. Table 2 (Left) compares to existing methods. First we compare to two existing self-supervised approaches, namely and Sun et al. (2019a). Our approach outperforms both by a very large margin. The difference with VideoBERT (Sun et al. (2019a) ), which also relies on a BERT model, can be explained by the fact that it quantizes the visual features into tokens and, hence loses discriminative power. Next we compare to some recent methods that train deep classifiers end-to-end, namely and (a). We outperform both by a large margin. Saldads, ActivityNet. Self-super = Y means the model was pre-trained in a self-supervised way, and then fine-tuned using a linear classifier. Self-super = N means the model is trained end-to-end on the specific task. (Right) Comparison with the average pooling and LSTM baselines on 50Salads Breakfast, 50Salads and ActivityNet. We vary the observation window lengths (sec.) Effect of video length. In Table 2 (Right) we show the impact of the length of the training videos on the performance. We compare with two baselines, average pooling (AvgPool) and LSTM . The AvgPool baseline simply computes the average of all input visual features over time. The LSTM baseline takes the same sequence of S3D features but recurrently updates its hidden states over time. The final hidden state is used for classification. We adjust the hidden unit size of LSTM to make its number of parameters comparable to CBT. We can see that CBT significantly outperforms the two baselines on all three datasets. Moreover, we can see that as the observed video length increases, the performance of CBT monotonically increases, while LSTM and AvgPool either plateaus or decreases. These indicate that CBT is better at modeling long-term temporal context. Table 3: Ablation study on the action anticipation task. We show accuracy on Breakfast, 50Salads and ActivityNet. (Left) Impact of the percentage of HowTo100M videos used, and the cross-modal objective during pre-training. 0% corresponds to no pretraining, ie. using random weights. 
(Middle, Right) Impact of the number of layers (L) and attention heads (A) for the visual transformers. Effect of dataset size and cross-modal training. In Table 3 (Left), we study the impact of the pre-training dataset size. As expected, pre-training with more examples leads to higher performance on all three benchmarks. We also study the impact of cross-modal training. We see this helps significantly, especially on the smaller datasets (Breakfast and 50Salads). Effect of model size. In Table 3 (Middle) and (Right), we study the impact of the number of layers (L) and the number of attention heads (A) for the visual transformer. Not surprisingly, model performance initially increases, but surprisingly, it then starts to decrease, in contrast to the case of NLP-BERT. We conjecture that this is because our unlabeled pre-training set is much smaller than the one used by the NLP-BERT model. Fortunately, our technique is quite scalable, since we can train the video representations on top of S3D features using relatively shallow transformers - our visual transformer only has 15M parameters, whereas the BERT NLP transformer has 110M parameters. Applications to other tasks. In Table 4 we show the results of using our learned temporal representation for video captioning and action segmentation. See section 6.1 and section 6.2 in the supplementary for details. Table 4: (Left) Video captioning on the YouCook2 dataset (b). We compare with previous state-of-the-art methods by Zhou et al. (2018c) and Sun et al. (2019a); the caption decoders of all methods share the same architecture, and the main difference comes from the visual encoder. (Right) Action segmentation on the COIN dataset . A linear classifier is applied on the sequence of CBT output features for dense frame labeling. We compare with previous state-of-the-art methods using the standard frame accuracy metric. We have shown how to extend the BERT model to learn representations from video in a self-supervised way, without needing vector quantization or pre-trained visual features. We have also shown how to extend this to the cross-modal setting, when ASR is available. Finally, we demonstrated that our method learns features that are far more useful than existing self-supervised methods for a variety of downstream video tasks, such as classification, captioning and segmentation. We believe that the simplicity and modularity of our method will let us scale to much larger unlabeled video datasets, which we hope will let us finally surpass supervised video pretraining (e.g., on Kinetics), just as other methods (e.g., CPC++ (Hénaff et al., 2019) ) have recently surpassed supervised image pretraining (on ImageNet). In this section, we apply our model to video captioning. Dataset. We pretrain our model on HowTo100M, and then use its features as input to a captioning model (details below) which is trained on the YouCook2 dataset (b). This contains 2000 YouTube videos of an average length of 5.26 minutes, for a total of 176 hours. The annotations consist of segmentation boundaries and captions, with on average 7.7 segments per video and 8.8 words per caption. We made sure that there is no overlap between the videos from our pre-training datasets and YouCook2. Model. We follow the experimental setup from (c), where the ground truth video segmentations from YouCook2 are used to train a supervised model mapping video segments to captions. Our captioning model is a transformer with 2 layers and a hidden layer of size 128. During training we set the dropout probability to 0.4.
We train our model for 10K iterations using a batch size of 128 with the Adam optimizer and an initial learning rate of 1e-4. We report BLEU, METEOR and ROUGE metrics on the validation set. Comparison to other methods. Table 4 shows our results. We outperform a simple baseline computed using average-pooled S3D features. We also outperform the approach of Zhou et al. (2018c) and VideoBERT Sun et al. (2019a) on all reported metrics. The comparison to VideoBERT is particularly interesting. The gains suggest that removing the quantization of video features is important for obtaining a fine-grained video representation. We also observe that the difference between CBT and VideoBERT is smaller for YouCook2 than for the Breakfast and 50Salads action anticipation tasks, possibly because the YouCook2 dataset is more similar to the cooking videos used for pre-training by VideoBERT. In this section, we apply our model to the task of temporal action segmentation. Dataset. We pretrain our model on HowTo100M and then use its features as input to a linear classifier (details below) which is trained on the COIN dataset. This contains 11827 instructional YouTube videos of an average length of 2.36 minutes. The annotations consist of segment boundaries and class labels. On average there are 3.91 segments per video, each of which lasts 14.9 seconds. There are in total 779 classes. Model. We extract video features using S3D and feed the sequence to the visual transformer. We use a fixed size of 72 seconds and use zero-padding for shorter sequences. The overall clip is represented by its associated output embedding of size 768. This preprocessing step is frozen. We feed the features to a linear classifier, which we train for 100K iterations using a batch size of 32 with the Adam optimizer and an initial learning rate of 1e-3. At test time we operate on a long video with a sliding window of 72 seconds. Comparison to existing approaches. In Table 4 we compare CBT against various state-of-the-art approaches using the frame accuracy as the metric, including , and . We outperform them by a large margin (+19.6 points).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgRMkrtDr
Generalized BERT for continuous and cross-modal inputs; state-of-the-art self-supervised video representations.
We present a generic dynamic architecture that employs a problem specific differentiable forking mechanism to leverage discrete logical information about the problem data structure. We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters. Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching. While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher level logical operations -- our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach though DDRstack -- an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters. Deep learning is inherently data driven -visual question answering, scene recognition, language modeling, speech recognition, translation, and other supervised tasks can be expressed as: given input x, predict output y. The field has attempted to model different underlying data structures with neural architectures, but core convolutional and recurrent building blocks were designed with only general notions of spatial and temporal locality. In some cases, additional information about the problem can be expressed simply as an additional loss, but when hard logical assumptions are present, it is nonobvious how to do so in a manner compatible with backpropagation. Discrete logic is a fundamental component of human visual reasoning, but there is no dominant approach to incorporating such structural information into deep learning models. For particular data structures and settings, there has been some success. However, in prior work additional structure must either be learned implicitly and without additional annotations or is available at both train and test time. For example, StackRNN BID7 ) allows recurrent architectures to push and pop from a stack. While this approach works well without explicit stack trace supervision, implicit learning only goes so far: the hardest task it was tested on is binary addition. Approaches such as recursive NN BID13 ) and TreeRNN BID15 ) allow inclusion of explicit tree structures available during both training and testing, but neither can be used when additional supervision is available only at training time. We consider this the most general problem because it is not feasible to obtain good without any additional supervision if the problem is sufficiently difficult. Our objective is to develop a general framework for differentiable, discrete reasoning over data structures, including as stacks and trees. Our approach is flexible to differing degrees of supervision and demonstrates improved when structural assumptions are available at test time. 
We are less concerned with the no-supervision case because of limitations in scalability, as demonstrated by the scope of StackRNN.We present our framework in the context of two broader architectures: Neural Module Networks (NMN, BID0) and Neural Programmer-Interpreters (NPI, BID11).The original NMN allows per-example dynamic architectures assembled from a set of smaller models; it was concurrently adapted in N2NMN BID4 ) and IEP as the basis of the first visual question answering (VQA) architectures successful on CLEVR BID5 ). The NPI work allows networks to execute programs by directly maximizing the probability of a successful execution trace. In the present work, we present two applications of our framework, which is a superset of both approaches. The first is our CLEVR architecture, which introduces two novel behaviors. It interleaves program prediction and program execution by using the output of each module to predict the next module; this is an important addition because it improves the differentiability of the model. For IEP/N2NMN, the discrete program in the middle of the model breaks the gradient flow. For our model, although the selection of modules is still a discrete non-differentiable choice, it is influenced by the loss gradient: the visual state gives a gradient pathway learnable through the question answer loss. The second contribution of this architecture is a novel differentiable forking mechanism that enables our network to process logical tree structures through interaction with a stack of saved states. This allows our model to perform a broad range of logical operations; DDRstack is the first architecture to obtain consistently strong performance across all CLEVR subtasks. We briefly discuss our rationale for evaluation on CLEVR as well as prior work on the task. Though CLEVR is too easy with or without program supervision, it is the best-available proxy task for high-level reasoning. Its scale, diverse logical subtask categories, and program annotations make the dataset the best current option for designing discrete visual reasoning systems. By effectively leveraging the additional program annotations, we improve over the previous state-of-the-art with a much smaller model -on the important Count and Compare Integer subtasks, we improve from 94.5 to 96.5 percent and 93.8 to 98.4 percent, respectively. However, our objective is neither the last couple percentage points of accuracy on this task nor to decrease supervision, but to motivate more complex tasks over knowledge graphs. We expect that it is possible to improve accuracy on CLEVR with a static architecture using less supervision. This is largely unrelated to the objective of our work -we view CLEVR as a good first step towards increased supervision for the learning of complex logic. Human-level general visual reasoning from scratch is less reasonable than from expressively annotated data: we consider improving and generalizing the ability of architectures to better leverage additional supervision to be the most likely means to this end. Prior work on CLEVR is largely categorized by dynamic and static approaches. IEP BID6 ) and N2NMN both generalized the original neural module networks architecture and used the functional annotations in CLEVR to predict a static program which is then assembled into a tree of discrete modules and executed. IEP further demonstrated success when program annotations are available for only a few percent of questions. 
These are most similar to our approach; we focus largely upon comparison to IEP, which performs significantly better. RN BID12 ) and FiLM BID10 ), the latter being the direct successor of CBN BID9 ), are both static architectures which incorporate some form of implicit reasoning module in order to achieve high performance without program annotations. In contrast, our architecture uses program annotations to explicitly model the underlying question structure and jointly executes the corresponding functional representation. As a result, our architecture performs comparably on questions requiring only a sequence of filtering operations, but it performs significantly better on questions requiring higher level operations such as counting and numerical comparison. We present DDRstack as a second application of our framework and introduce a reverse Polish notation (RPN) expression evaluation task. The task is solvable by leveraging the stack structure of expression evaluation, but extremely difficult without additional supervision: a much larger LSTM baseline fails to attain any generalization on the task. We therefore use RPN as additional motivation for our framework, which introduces a simple mechanism for differentiably incorporating the relevant stack structure. Despite major quantitative differences from CLEVR VQA, the RPN task is structurally similar. In the former, questions seen at training time contain direct programmatic representations well modeled by a set of discrete logical operations and a stack requiring at most one recursive call. The latter is an extreme case with deep recursion requiring a full stack representation, but this stack structure is also available at test time. In summary: the DDR framework combines the discrete modular behavior of NMN and IEP with an NPI-inspired forking behavior to leverage structural information about the input data. Our approach resolves common differentiability issues and is easily adapted to the specifics of each problem: we achieve a moderate improvement over previous state-of-the-art on CLEVR and succeed on RPN where a much larger baseline LSTM fails to attain generalization. 2.1 CLEVR CLEVR is a synthetic VQA dataset that encourages approaches capable of discrete reasoning through its inclusion of functional program annotations that model the logic of each question. The dataset consists of 100k images and 1 million question/answer pairs. Over 850k of these questions are unique. Images are high-quality 3D Blender BID1 ) renders of scenes containing geometric objects of various shapes, sizes, colors, textures, and materials. Thus the dataset is quite realistic despite being synthetic. Furthermore, the authors ran comprehensive tests to avoid exploitable biases in the data. As the objects are geometric figures, no external knowledge of natural images is required, as in earlier VQA datasets. Most importantly, CLEVR provides an expressive program representation of each question. For example, "How many red spheres are there?" is represented as [filter red, filter sphere, count]. Some questions require nonlinear program structure, such as "How many objects are red or spheres", which is represented by a tree with two branches [filter red] and [filter sphere] followed by a binary [union] operation and a final [count]. We include additional examples in the appendix. We are unaware of any dataset with comparable properties of this size and complexity. By raw accuracy alone, CLEVR is effectively solved.
Neural architectures have already far surpassed human accuracy on the task, both with and without using program annotations at train time. One perspective is that this should motivate a return to the natural image setting without programs or with transfer learning from CLEVR. In contrast, we believe the rapid recent progress on CLEVR motivates more complex synthetic tasks - perhaps involving harder logical inference over general knowledge graphs. It is not currently obvious what form a visual Turing test should take, nor is it clear what should comprise the train set for such a task: this will likely require much iteration and experimentation. On this front, the synthetic setting is unmatched: making even slight changes to a natural image dataset often involves a lengthy additional data collection task, compared to a programmatic change in the synthetic case. We introduce the reverse Polish notation (RPN) expression evaluation dataset as a motivation for additional supervision in higher level learning tasks. The specific problem form we consider is [NUM]*(n+1)-[OP]*n, that is, n + 1 numbers followed by n operations. For example, "2 3 4 + *" evaluates to 14. This simplifies the problem by eliminating consideration for order of operations. Thus the task is: given a sequence of tokens corresponding to a valid expression in reverse Polish notation, evaluate the expression and produce a single real valued answer. This may seem like a simple task; it is not. For large n, expressions behave somewhat like a hash function. Small changes in the input can cause wild variations in the output - we found the problem intractable in general. Our objective is to make stronger structural assumptions about the problem and create an architecture to leverage them. For this reason, our framework is incomparable to StackRNN, which attempts to learn a stack structure implicitly but is unable to incorporate additional supervision when the problem is likely too difficult to solve otherwise. We therefore modify the problem as such: instead of producing only the final expression evaluation, produce the sequence of answers to all n intermediate expressions in the answer labels. For the example "2 3 4 + *", the expected output would be [7, 14], because 3+4=7 and 2*7=14. We further assume the stack structure of the problem is available to the architecture should it be capable of taking advantage of such information. The problem is still sufficiently complex - note that to the model, {2, 3, 4, +, *} would all be meaningless tokens: it must learn both the NUM and the OP tokens. The dataset consists of 100k train, 5k validation, and 20k test expressions with n = 10 - that is, 11 numbers followed by 10 operations. We also provide a 20k expression generalization set with n = 30. The label for each question contains the n solutions to the intermediate operations. During data generation, we sample NUM and OP tokens uniformly, reject expressions including division by zero, and also omit expressions that evaluate to over 100 in magnitude. The NUM tokens correspond to 0, 0.1,..., 0.9 and the OP tokens correspond to +, -, *, /; however, architectures are not privy to this information.
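The data generation procedure just described is straightforward to reproduce. The following is a minimal sketch under the stated assumptions (uniform NUM/OP sampling, rejection of division by zero and of answers above 100 in magnitude); the function names and the exact rejection rule applied to intermediate values are illustrative, not taken from the released dataset.

import random

NUMS = [i / 10 for i in range(10)]   # NUM tokens correspond to 0, 0.1, ..., 0.9
OPS = ['+', '-', '*', '/']           # OP tokens

def evaluate_rpn(nums, ops):
    # Evaluate an expression of the form [NUM]*(n+1)-[OP]*n and return all n
    # intermediate results; the last entry is the final answer.
    stack = list(nums)
    intermediates = []
    for op in ops:
        b, a = stack.pop(), stack.pop()
        if op == '+':
            res = a + b
        elif op == '-':
            res = a - b
        elif op == '*':
            res = a * b
        else:
            if b == 0:
                raise ZeroDivisionError
            res = a / b
        stack.append(res)
        intermediates.append(res)
    return intermediates

def sample_expression(n=10, max_magnitude=100):
    # Rejection-sample expressions: discard division by zero and large answers.
    while True:
        nums = [random.choice(NUMS) for _ in range(n + 1)]
        ops = [random.choice(OPS) for _ in range(n)]
        try:
            labels = evaluate_rpn(nums, ops)
        except ZeroDivisionError:
            continue
        if all(abs(v) <= max_magnitude for v in labels):
            return nums, ops, labels

nums, ops, labels = sample_expression(n=10)            # 11 numbers, 10 operations
assert evaluate_rpn([2, 3, 4], ['+', '*']) == [7, 14]  # the worked example above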
Our framework addresses the difficulty of combining discrete logic with clean, differentiable training and is capable of interfacing with a broad range of data structures. Our framework is a clean fusion of two broad design patterns. Like IEP, we maintain a set of problem-specific neural modules to allow our model to learn relevant program primitives. Like NPI, we interleave program prediction with program execution, differentiably learning modules when the module arrangement is not known at test time. This is much more general compared to either IEP/NMN or NPI independently, and the particular mechanism for combining them is a non-trivial differentiable forking operation. IEP alone lacks the ability to examine the output of intermediate operations. The relevance of this is particularly clear in the CLEVR setting. The NPI architecture can learn sequences of functions, but lacks the ability to learn the functions themselves. Our approach responds flexibly to the problem supervision: in VQA, the module arrangement is known only at train time. At each timestep, the controller therefore produces an index corresponding to a neural module, which is then executed. On the RPN task, the problem structure is also known at test time; the controller is therefore deterministic and directly executes the correct module. We refer to our VQA and RPN architecture adaptations as DDRprog and DDRstack, respectively; details are provided below. DDRprog is a direct adaptation of our framework to CLEVR, requiring only some standard encoders to handle the mixed language and visual data. We provide pseudocode in Algorithm 1, a visual representation in FIG0, and subnetwork details in Table 2 (see Appendix). The input data x for each sample is an (image, question, program) triple; the label y is an (answer, program) pair. The program is only available at train time; thus our model must learn to predict it. The network first applies standard LSTM and ResNet encoders to the question/image, producing language and visual states. The ResNet encoder is unchanged from FiLM/IEP. Both the language and visual states are passed to the controller. We use a recurrent highway network (RHN) BID17) as recommended by BID14 instead of an LSTM BID3) - both accept flat inputs. As the visual state contains convolutional maps, we flatten it with a standard classifier. At each time step, the controller outputs a standard softmax classification prediction, which is interpreted as an index over the set of learnable neural modules. These are smaller, slightly modified variants of the modules used in IEP. The selected module is executed on the visual state, and the visual state is then set to the module's output. [Algorithm 1: DDRprog. Inputs are the image and question; the CNN produces a flattened output, and the Controller also performs a projection and argmax over program scores to produce the program prediction.] The module prediction at the final timestep is followed by a small classifier network, which uses the IEP classifier. This architecture introduces a significant advantage over IEP: as modules are predicted and executed one at a time instead of being compiled into a static program, our model can observe the results of intermediate function operations - these have meaning as filtering and counting operations on CLEVR. We now motivate our differentiable forking mechanism.
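The interleaved predict-and-execute loop of Algorithm 1 can be summarized in a few lines. The sketch below is a simplified stand-in rather than the released implementation: a GRU cell replaces the recurrent highway network, module shapes and sizes are hypothetical, and the encoders are assumed to have already produced the language and visual states.

import torch
import torch.nn as nn

class DDRProgSketch(nn.Module):
    def __init__(self, n_modules=8, d_lang=64, d_vis=16, n_answers=28):
        super().__init__()
        # Learnable neural modules (stand-ins for the IEP-style convolutional modules).
        self.module_bank = nn.ModuleList(
            [nn.Conv2d(d_vis, d_vis, 3, padding=1) for _ in range(n_modules)])
        self.flatten = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.controller = nn.GRUCell(d_lang + d_vis, d_lang)  # stand-in for the RHN
        self.program_head = nn.Linear(d_lang, n_modules)
        self.classifier = nn.Linear(d_vis, n_answers)

    def forward(self, lang_state, vis_state, n_steps=5):
        h = lang_state
        program_logits = []
        for _ in range(n_steps):
            # Predict the next module from the language state and the flattened visual state.
            ctrl_in = torch.cat([lang_state, self.flatten(vis_state)], dim=-1)
            h = self.controller(ctrl_in, h)
            logits = self.program_head(h)
            program_logits.append(logits)
            # Execute the selected module and overwrite the visual state, so the
            # next prediction can observe the result of the intermediate operation.
            idx = logits.argmax(dim=-1)[0].item()  # batch size 1 for simplicity
            vis_state = self.module_bank[idx](vis_state)
        answer_logits = self.classifier(self.flatten(vis_state))
        return torch.stack(program_logits, dim=1), answer_logits

model = DDRProgSketch()
lang = torch.randn(1, 64)         # e.g. from an LSTM question encoder
vis = torch.randn(1, 16, 14, 14)  # e.g. from a ResNet image encoder
program_scores, answer_scores = model(lang, vis)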
As presented thus far, our approach is sufficient on the subset of CLEVR programs that do not contain comparison operations and are effectively linear - indeed, we observe a large performance increase over IEP on this subset of CLEVR. However, some CLEVR questions contain a logical branching operation (e.g. are there more of ... than ... ?) and cannot be answered by structurally linear programs. In general, programs can take the form of expressive trees, but CLEVR programs contain at most two branches. Adding a differentiable forking mechanism handles the general case without modification from the CLEVR architecture. Upon encountering a program branch, our architecture pushes the current language and visual states to a stack and forks a subprocess. This subprocess is effectively a copy of the main network that maintains its own states. It takes as input the language state and the initial and current visual states. Effectively, a different copy of the network with its own state processes each branch of the program. Upon processing the last operation in the branch, a binary cell is applied to the subprocess state outputs and the main process states (popped from the stack), merging them as shown in FIG0. Our architecture is likely to generalize even past the setting of tree processing, as we could replace the stack with a priority queue or any other data structure pertinent to the problem. Finally, a technical note for reproducibility: the fork module must differ from a standard unary module, as it is necessary to pass the original ResNet features (i.e. the initial visual state) to the subprocess in addition to the current visual state. Consider the question: "Is the red thing larger than the blue thing?" In this case, the main network filters by red; it is impossible to recover the blue objects in the subprocess given a red-filtered image. We found that it is insufficient to pass only the original images to the subprocess, as the controller is small and has difficulty tracking the current branch. We therefore use a variant of the binary module architecture that merges the original ResNet features with the current visual state (see Algorithm 1). As the fork module is shared across all branch patterns, it is larger than the other binary modules and also one layer deeper - refer to the appendix for full architecture details on each layer. The DDRstack architecture applies our general framework to the increased supervision setting of the RPN task - the module arrangement is a fixed expression parse tree. One natural view of the task is: given a parse tree structure, simultaneously socket and refine the learnable NUM and OP nodes. Our model consists of an LSTM controller and a set of 4 learnable binary modules - one per OP - as well as an explicit stack. DDRstack processes one token at a time; similar to unary/binary modules in IEP, NUM and OP tokens are processed differently. NUM: our model embeds the token and passes it to the LSTM; it then pushes the result to the stack. OP: our model pops twice, calls the OP-specific binary cell, and then passes the result to the LSTM; it then pushes the result to the stack. The binary cell is a simple concatenation of the arguments followed by a single fully connected layer with no nonlinearity. DDRstack can be viewed as a neural analog to the standard analytical RPN expression evaluation algorithm where the values of the NUM and OP tokens are unknown. We provide high level pseudocode for the model in Algorithm 2 and a visual representation in FIG1.
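The NUM/OP processing just described maps almost directly onto code. The following sketch is illustrative only: the hidden size, the readout head, and the use of an LSTMCell are assumptions standing in for the details given in the appendix of the original work.

import torch
import torch.nn as nn

class DDRStackSketch(nn.Module):
    def __init__(self, n_nums=10, n_ops=4, d=32):
        super().__init__()
        self.num_embed = nn.Embedding(n_nums, d)
        # One binary cell per OP: concatenate the two arguments, then a single
        # fully connected layer with no nonlinearity.
        self.binary_cells = nn.ModuleList([nn.Linear(2 * d, d) for _ in range(n_ops)])
        self.lstm = nn.LSTMCell(d, d)
        self.readout = nn.Linear(d, 1)  # predicts each intermediate answer

    def forward(self, num_tokens, op_tokens):
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        stack, predictions = [], []
        for tok in num_tokens:               # NUM: embed, run the LSTM, push
            x = self.num_embed(tok.view(1))
            h, c = self.lstm(x, (h, c))
            stack.append(h)
        for op in op_tokens:                 # OP: pop twice, binary cell, run the LSTM, push
            b, a = stack.pop(), stack.pop()
            x = self.binary_cells[op](torch.cat([a, b], dim=-1))
            h, c = self.lstm(x, (h, c))
            stack.append(h)
            predictions.append(self.readout(h))
        return torch.cat(predictions, dim=-1)  # one prediction per OP token

model = DDRStackSketch()
nums = torch.randint(0, 10, (11,))           # 11 NUM tokens (n = 10)
ops = [0, 3, 1, 2, 0, 1, 3, 2, 0, 1]         # 10 OP token indices
intermediate_preds = model(nums, ops)        # shape (1, 10)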
We train a baseline vanilla LSTM supervised with the intermediate solutions in the last n timesteps. DDRstack uses the same baseline LSTM as its core controller, but includes the aforementioned stack behavior. For both models, predictions are made in the last n timesteps (Algorithm 2 shows only the final return). Table 1: Accuracy on all CLEVR question types for baselines and competitive models. The Human baseline is from the original CLEVR work. * denotes additional program supervision. SA refers to stacked spatial attention BID16. The network overall has 9M parameters. We exclude the ResNet feature extractor from all calculations because it is also present in the best FiLM model. Their work further demonstrated that it is fairly straightforward to replace it with a from-scratch feature extractor with minimal loss in accuracy. We pass the ground truth program labels to the model during training. Critically, the program labels are only used on the validation set for the purpose of model selection, and our final accuracy is obtained by rerunning the validation set without the ground truth programs. We train on a single GTX 1080 TI and, after 35 epochs, our model matches the previous state-of-the-art accuracy of 97.7 percent. We continue training until the 52nd epoch, dropping the learning rate to 1e-5 for the last few epochs to ensure convergence, and obtain 98.3 percent accuracy. The model predicts program cells with 99.98 percent accuracy. Several models have far exceeded human accuracy on CLEVR - the task remains important for two reasons. First, though CLEVR is large and yields consistent performance across runs, different models exhibit significantly different performance across question types. Where every competitive previous work exhibits curiously poor performance on at least one important subtask, our architecture dramatically increases consistency across all tasks. Second, CLEVR remains the best proxy task for high-level visual reasoning because of its discrete program annotations - this is far more relevant than raw accuracy to our work, which is largely concerned with the creation of a general reasoning framework. However, we do achieve a modest improvement in raw accuracy over the previous state-of-the-art with a >5X smaller architecture. We presently consider RN, FiLM, IEP, and our architecture as competitive models. From Table 1, no architecture has particular difficulty with Exist, Query, or Compare questions; the main differentiating factors are Count and Compare Integer. Though Compare Integer is the smallest class of questions and is therefore assigned less importance by the cross entropy loss, the IEP results suggest that this does not cause models to ignore this question type. We therefore consider Count and Compare Integer to be the hardest unary and binary tasks, respectively, and we assign the most importance to these question subsets in our analysis. We achieve strong performance on both subtasks and a significant improvement over the previous state-of-the-art on the Count subtask. We first compare to IEP. Our model is 4x smaller than IEP (see Table 1) and resolves IEP's poor performance on the challenging Count subtask. Overall, DDRprog performs at least 2x better across all unary tasks (+1.7 percent on Exist, +3.8 percent on Count, +1.0 percent on Query); it closely matches binary performance (+0.2 percent on Compare, -0.3 percent on Compare Integer).
We believe that our model's lack of similar gains on binary task performance can be attributed to the use of a single fork module, which is responsible for cross-communication during prediction of both branches of a binary program tree and is shared across all binary modules. We have observed that this module is essential to obtaining competitive performance on binary tasks; it is likely suboptimal to use a large shared fork module as opposed to a separate smaller cell for each binary cell. Our model surpasses RN in all categories of reasoning, achieving a 2.6x reduction in overall error. RN achieves impressive results for its size and lack of program labels. However, it is questionable whether the all-to-all comparison model will generalize to more logically complex questions. In particular, Count operations do not have a natural formulation as a comparison between pairs of objects, in which case our model achieves a significant 6.4 percent improvement. RN also struggles on the challenging Compare Integer subtask, where we achieve a 4.8 percent improvement. Furthermore, it is unclear how essential high epoch counts are to the model's performance. As detailed in Table 1, RN was trained in a distributed setting for 1000 epochs. Both our results and FiLM's were obtained on single graphics cards and were only limited in number of epochs for practicality - FiLM had not fully converged, and our model was unregularized. Both IEP and our model achieve a roughly 4x improvement over FiLM on Compare Integer questions (4.9 and 4.6 percent, respectively), the difference being that our model eliminates the Count deficiency and is also 4X smaller than IEP. The contrast between FiLM's Compare Integer and Exist/Query/Compare performance suggests a logical deficiency in the model - we believe it is difficult to model the more complex binary question structures using only implicit branching through batch normalization parameters. FiLM does achieve strong Compare Attribute performance, but many such questions can be more easily resolved through a sequence of purely visual manipulations. FiLM achieves a 1.5x relative improvement over our architecture on Exist questions, but this is offset by our 1.5x relative improvement on Count questions. Given the proximity in overall performance, FiLM could be seen as the main competitor to our model. However, they achieve entirely different aims: DDRprog is an application of a general framework, >5X smaller, and achieves stable performance over all subtasks. FiLM is larger and suffers from a significant deficiency on the Compare Integer subtask, but it uses less supervision. As mentioned in the introduction, our model is part of a general framework that expands the ability of neural architectures to leverage discrete logical and structural information about the given problem. In contrast, FiLM is a single architecture that is likely more directly applicable to low-supervision natural image tasks. For the DDRstack architecture on the RPN task, we use hidden dimension 32 throughout the model, resulting in only 17k parameters overall. We train with Adam using learning rate 1e-3 and obtain a test L1 error of 0.17 after 63 epochs. Using the same hidden dimension in the pure LSTM baseline (9k parameters) results in test error 0.28. We overcompensate for the difference in model size by increasing the hidden dimension of the LSTM to 128 (255k parameters), resulting in an only slightly lower test error of 0.24 after nearly 3000 epochs. FIG2 shows training curves for the LSTM baseline and DDRstack.
After training both models on problems of length n = 10, we test both models on sequences of length n = 10 and n = 30. For a sequence of length n, both models predict values not only for the entire expression but also for all n subproblems, where index n corresponds to evaluating the entire sequence. For sequences of both lengths, we evaluate accuracy for the predicted answers on all subproblems. Results are shown in FIG2. We argue that the LSTM fails on the RPN task. This is not immediately obvious: from FIG2, both the small and large LSTM baselines approximately match our model's performance on the first 5 subproblems of the n = 10 dataset. From the 6th to the 10th subproblem, the performance gap grows between the models - the small LSTM is unable to learn deep stack behavior, and performance decays sharply. The n = 30 dataset reveals the failure. The LSTM's performance is far worse on the first few subproblems of this dataset than on the test set of the original task. This is not an error: recall the question formatting [NUM]*(n + 1)-[OP]*n. The leading subproblems do not correspond to the leading tokens of the question, but rather to a central crop. For example, the first two subproblems of "12345+-*/" are given by "345+-", not "12345" - the latter is not a valid expression. The rapid increase in error on the LSTM implies that it did not learn this property, let alone the stack structure. Instead, it memorized all possible short subexpressions (of length n ∈ {1, 2, 3}) preceding the first few OP tokens. Performance quickly decays to L1 error greater than 2.0, which corresponds to mostly noise (the standard deviation of the answers, excluding the first few subproblems, is approximately 6.0). In contrast, our model's explicit incorporation of the stack assumption results in a smooth generalization curve with a gradual decay in performance as problem length increases. We briefly address a few likely concerns with our reasoning. First, one might argue that DDRstack cannot be compared to an LSTM, as the latter does not incorporate explicit knowledge of the problem structure. While this evaluation is correct, it is antithetical to the purpose of our architecture. The LSTM baseline does not incorporate this additional information because there is no obvious way to include it - the prevailing approach would be to ignore it and then argue the model's superiority on the basis that it performs well with less supervision. This logic might suggest implicit reasoning approaches such as StackRNN, which attempt to model the underlying data structure without direct supervision. However, we do not expect such approaches to scale to RPN: the hardest task on which StackRNN was evaluated is binary addition. While StackRNN exhibited significantly better generalization compared to the LSTM baseline, the latter did not completely fail the task. In contrast, RPN is a more complex task that completely breaks the baseline LSTM. While we did not evaluate StackRNN on RPN (the original implementation is not compatible with modern frameworks), we consider it highly improbable that StackRNN would generalize to RPN, which was intentionally designed to be difficult without additional supervision. In contrast, our dynamic approach achieves a dramatic increase in performance and generalization precisely by efficiently incorporating additional supervision. StackRNN is to DDRstack as FiLM is to DDRprog: one motive is to maximize performance with minimal supervision, whereas our motive is to leverage structural data to solve harder tasks.
The DDR framework facilitates high level reasoning in neural architectures by enabling networks to leverage additional structural information. Our approach resolves differentiability issues common when interfacing with discrete logical data and is easily adapted to the specifics of each problem. Our work represents a clean synthesis of the modeling capabilities of IEP/NMN and NPI through a differentiable forking mechanism. We have demonstrated efficacy through two applications of our framework. DDRprog achieves a moderate improvement over the previous state-of-the-art on CLEVR with greatly increased consistency and reduced model size. DDRstack succeeds on RPN where a much larger baseline LSTM fails to attain generalization. It is our intent to continue refining the versatility of our architecture, including more accurate modeling of the fork module, as mentioned in our CLEVR VQA discussion. Our architecture and its design principles enable modeling of complex data structure assumptions across a wide class of problems where standard monolithic approaches would ignore such useful properties. We hope that this increase in interoperability between discrete data structures and deep learning architectures aids in motivating higher level tasks for the continued progression and development of neural reasoning. Table 2: Architectural details of subnetworks in DDRprog as referenced in FIG0 and Algorithm 1. Fine-grained layer details are provided in Tables 4-8. Source will be released pending publication. [Appendix layer tables: ResNetFeaturizer and module layer specifications (residual blocks with Add, ReLU, InstanceNorm, and concatenation over h × 14 × 14 feature maps); details omitted here.] Appendix example programs: • Program (label): 1 filter size large, 1 filter color yellow, 1 filter material rubber, 1 filter shape cube, 1 unique, 1 relate left, 0 fork, 1 filter size large, 1 filter color blue, 1 filter material metal, 1 filter shape cube, 1 unique, 1 relate behind, 2 intersect, 1 filter shape cube, 1 unique, 1 query color • Answer (predicted, label): green, blue • Image Index: 8543 • Question: are there fewer small purple rubber things that are behind the green metallic cylinder than small things that are in front of the tiny matte block? • Program (label): 1 filter size small, 1 filter material rubber, 1 filter shape cube, 1 unique, 1 relate front, 1 filter size small, 1 count, 0 fork, 1 filter color green, 1 filter material metal, 1 filter shape cylinder, 1 unique, 1 relate behind, 1 filter size small, 1 filter color purple, 1 filter material rubber, 1 count, 2 less than • Answer (predicted, label): no, yes
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HypkN9yRW
A generic dynamic architecture that employs a problem specific differentiable forking mechanism to encode hard data structure assumptions. Applied to CLEVR VQA and expression evaluation.
We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL, including the implicit reward bias and potential training instability. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks. The class of Adversarial Imitation Learning (AIL) algorithms learns robust policies that imitate an expert's actions from a small number of expert trajectories, without further access to the expert or environment signals. AIL iterates between refining a reward via adversarial training, and reinforcement learning (RL) with the learned adversarial reward. For instance, Generative Adversarial Imitation Learning (GAIL) shows the equivalence between some settings of inverse reinforcement learning and Generative Adversarial Networks (GANs), and recasts imitation learning as distribution matching between the expert and the RL agent. Similarly, Adversarial Inverse Reinforcement Learning (AIRL) modifies the GAIL discriminator to learn a reward function robust to changes in dynamics or environment properties. AIL mitigates the issue of distributional drift from behavioral cloning, a classical imitation learning algorithm, and demonstrates good performance with only a small number of expert demonstrations. However, AIL has several important challenges, including implicit reward bias, potential training instability, and potential sample inefficiency with respect to environment interaction. In this paper, we propose a principled approach towards addressing these issues. Recent work demonstrated that imitation learning is also feasible by constructing a fixed reward function via estimating the support of the expert policy. Since support estimation only requires expert demonstrations, the method sidesteps the training instability associated with adversarial training. However, we show in Section 4.2 that the reward learned via support estimation deteriorates when expert data is sparse, and leads to poor policy performance. Support estimation and adversarial reward represent two different yet complementary RL signals for imitation learning, both learnable from expert demonstrations. We unify both signals into Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework. SAIL leverages the adversarial reward to guide policy exploration and constrains the policy search to the estimated support of the expert policy. It is compatible with existing AIL algorithms, such as GAIL and AIRL. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that SAIL mitigates the implicit reward bias and achieves better performance and training stability than baseline methods over a series of benchmark control tasks. We briefly review the Markov Decision Process (MDP), the context of our imitation learning task, followed by related work on imitation learning.
We consider an infinite-horizon discounted MDP (S, A, P, r, p_0, γ), where S is the set of states, A the set of actions, P: S × A × S → [0, 1] the transition probability, r: S × A → R the reward function, p_0: S → [0, 1] the distribution over initial states, and γ ∈ [0, 1) the discount factor. Let π be a stochastic policy π: S × A → [0, 1], with s_0 ∼ p_0, a_t ∼ π(·|s_t), and s_{t+1} ∼ P(·|s_t, a_t) for t ≥ 0. We denote π_E the expert policy. Behavioral Cloning (BC) learns a policy π: S → A directly from expert trajectories via supervised learning. BC is simple to implement, and effective when expert data is abundant. However, BC is prone to distributional drift: the state distribution of expert demonstrations deviates from that of the agent policy, due to accumulation of small mistakes during policy execution. Distributional drift may lead to catastrophic errors. While several methods address the issue, they often assume further access to the expert during training. Inverse Reinforcement Learning (IRL) first estimates a reward from expert demonstrations, followed by RL using the estimated reward. Building upon a maximum entropy formulation of IRL, prior works explore adversarial IRL and its connection to Generative Adversarial Imitation Learning. Imitation Learning via Distribution Matching: Generative Adversarial Imitation Learning (GAIL) frames imitation learning as distribution matching between the expert and the RL agent. The authors show the connection between IRL and GANs. Specifically, GAIL imitates the expert by formulating a minimax game: min_π max_D E_π[log D(s, a)] + E_{π_E}[log(1 − D(s, a))], where the expectations E_π and E_{π_E} denote the joint distributions over state-actions of the RL agent and the expert, respectively. GAIL is able to achieve expert performance with a small number of expert trajectories on various benchmark tasks. However, GAIL is relatively sample inefficient with respect to environment interaction, and inherits issues associated with adversarial learning, such as vanishing gradients, training instability and overfitting to expert demonstrations. Recent works have improved the sample efficiency and stability of GAIL. For instance, Generative Moment Matching Imitation Learning replaces the adversarial reward with a non-parametric maximum mean discrepancy estimator to sidestep adversarial learning. Other works improve sample efficiency with model-based RL algorithms, or demonstrate significant gains in sample efficiency with off-policy RL algorithms. In addition, Generative Predecessor Models for Imitation Learning imitates the expert policy using generative models to reason about alternative histories of demonstrated states. Our proposed method is closely related to the broad family of AIL algorithms including GAIL and adversarial IRL. It is also complementary to many techniques for improving the algorithmic efficiency and stability, as discussed above. In particular, we focus on improving the quality of the learned reward by constraining the adversarial reward to the estimated support of the expert policy. Imitation Learning via Support Estimation: Alternatively, recent work demonstrates the feasibility of using a fixed RL reward obtained via estimating the support of the expert policy from expert demonstrations. Connecting kernel-based support estimation to Random Network Distillation, the authors propose Random Expert Distillation (RED) to learn a reward function based on support estimation. Specifically, RED learns the reward parameter θ̂ by minimizing E_{(s,a)∼π_E} ‖f_θ̂(s, a) − f_θ(s, a)‖², where f_θ: S × A → R^K projects (s, a) from expert demonstrations to some embedding of size K, with randomly initialized θ.
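The RED objective above is simple to implement: a predictor network is trained to match a fixed, randomly initialized target network on expert state-action pairs, and the prediction error later serves as a support estimate. The sketch below is a minimal illustration with made-up network sizes and synthetic expert data, not the configuration used in the experiments.

import torch
import torch.nn as nn

def make_net(d_in, d_out=64):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

d_state, d_action = 8, 2
f_target = make_net(d_state + d_action)           # f_theta: randomly initialized, frozen
for p in f_target.parameters():
    p.requires_grad_(False)
f_pred = make_net(d_state + d_action)             # f_theta_hat: trained on expert data

expert_sa = torch.randn(256, d_state + d_action)  # placeholder expert (s, a) pairs
opt = torch.optim.Adam(f_pred.parameters(), lr=1e-3)
for _ in range(200):
    # Minimize ||f_theta_hat(s, a) - f_theta(s, a)||^2 over expert state-actions.
    loss = ((f_pred(expert_sa) - f_target(expert_sa)) ** 2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()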
The reward is then defined as r_red(s, a) = exp(−σ ‖f_θ̂(s, a) − f_θ(s, a)‖²), where σ is a hyperparameter. As optimizing the above objective only requires expert data, RED sidesteps adversarial learning, and casts imitation learning as a standard RL task using the learned reward. While RED works well given sufficient expert data, we show in the experiments that its performance suffers in the more challenging setting of sparse expert data. Formally, we consider the task of learning a reward function r̂(s, a) from a finite set of trajectories, sampled from the expert policy π_E within an MDP. Each trajectory is a sequence of state-action tuples in the form of τ_i = {s_1, a_1, s_2, a_2, ..., s_T, a_T}. Assuming that the expert trajectories are consistent with some latent reward function r*(s, a), we aim to learn a policy that achieves good performance with respect to r*(s, a) by applying RL on the learned reward function r̂(s, a). In this section, we first discuss the advantages and shortcomings of AIL to motivate our method. We then introduce Support-guided Adversarial Imitation Learning (SAIL), and present a theoretical analysis that compares SAIL with the existing methods, specifically GAIL. A clear advantage of AIL resides in its low sample complexity with respect to expert data. For instance, GAIL requires as little as 200 state-action tuples from the expert to achieve imitation. The reason is that the adversarial reward may be interpreted as an effective exploration mechanism for the RL agent. To see this, consider the learned reward function under the optimality assumption. With the optimal discriminator D*(s, a) = p_π(s, a) / (p_π(s, a) + p_{π_E}(s, a)) for the minimax objective above, the learned reward r_gail(s, a) = −log D*(s, a) = log(1 + φ(s, a)) depends only on the ratio φ(s, a) = p_{π_E}(s, a) / p_π(s, a). Intuitively, r_gail incentivizes the RL agent towards under-visited state-actions, where φ(s, a) > 1, and away from over-visited state-actions, where φ(s, a) < 1. When π_E and π match exactly, r_gail converges to an indicator function for the support of π_E, since φ(s, a) = 1 ∀ (s, a) ∈ supp(π_E). In practice, the adversarial reward is unlikely to converge, as p_{π_E} is estimated from a finite set of expert demonstrations. Instead, the adversarial reward continuously drives the agent to explore by evolving the reward landscape. However, AIL also presents several challenges. Prior work demonstrated that the reward −log D(s, a) suffers from an implicit survival bias, as the non-negative reward may lead to suboptimal behaviors in goal-oriented tasks where the agent learns to move around the goal to accumulate rewards, instead of completing the tasks. While the authors resolve the issue by introducing absorbing states, the solution assumes extra RL signals from the environment, including access to the time limit of an environment to detect early termination of training episodes. In Section 4.1, we empirically demonstrate the survival bias on Lunar Lander, a common RL benchmark, by showing that agents trained with GAIL often hover over the goal location. We also show that our proposed method is able to robustly imitate the expert. Another challenge with AIL is potential training instability. It has also been demonstrated empirically that the adversarial reward can be unreliable in regions where the expert data is sparse, causing the agent to diverge from the intended behavior. When the agent policy is substantially different from the expert policy, the discriminator can differentiate them with high confidence, resulting in very low rewards and a significant slowdown in training, similar to the vanishing gradient problem in GAN training.
We propose a novel reward function by combining the standard adversarial reward r_gail with the corresponding support guidance r_red, taking r_sail(s, a) = r_red(s, a) · r_gail(s, a) (see Algorithm 1). SAIL is designed to leverage the exploration mechanism offered by the adversarial reward, and to constrain the agent to the estimated support of the expert policy. Despite being a simple modification, support guidance provides strong reward shaping to address the challenges discussed in the previous section. [Algorithm 1 (SAIL): inputs are the expert trajectories τ_E, random function models Θ, an initial policy π_ω0, initial discriminator parameters w_0, and a discriminator learning rate l_D. First compute r_red = RED(Θ, τ_E); then for i = 0, 1, ..., sample a trajectory τ_i ∼ π, update the policy with π_ω(i+1) = TRPO(r_red · r_gail, π_ωi), and update the discriminator; the RED subroutine samples θ ∈ Θ and sets θ̂ = MINIMIZE(f_θ̂, f_θ, τ_E).] As both support guidance and adversarial reward are learnable from expert demonstrations, our method requires no further assumptions than standard AIL. SAIL addresses the survival bias in goal-oriented tasks by encouraging the agent to stop at the goal and complete the task. In particular, r_red shapes the adversarial reward by favoring stopping at the goal against all other actions, as stopping at the goal is on the support of the expert policy, while other actions are not. We demonstrate empirically that SAIL assigns significantly higher reward towards completing the task and corrects for the bias in Section 4.1. To improve training stability, SAIL constrains the RL agent to the estimated support of the expert policy, where r_gail provides a more reliable RL signal. As r_red tends to be very small (ideally zero) for (s, a) ∉ supp(π_E), r_sail discourages the agent from exploring those state-actions by masking away the rewards. This is a desirable property, as the quality of the RL signals beyond the support of the expert policy cannot be guaranteed. We demonstrate in Section 4.2 the improved training stability on the Mujoco benchmark tasks. We provide the pseudocode implementation of SAIL in Algorithm 1. The algorithm computes r_red by estimating the support of the expert policy, followed by iterative updates of the policy and r_gail. We apply the Trust Region Policy Optimization (TRPO) algorithm with the reward r_sail for policy updates. Reward Variants: In practice, we observe that constraining the range of the adversarial reward generally produces lower-variance policies. Specifically, we transform r_gail so that it lies in a bounded range. For ease of notation, we refer to the bounded variant as SAIL-b, and the unbounded variant as SAIL. Similarly, we denote the bounded GAIL reward as GAIL-b. We include the comparison between the reward variants in the experiments. In this section, we show that SAIL is at least as efficient as GAIL in its sample complexity for expert data, and provides comparable RL signals on the expert policy's support. We note that our analysis could be similarly applied to other AIL methods, suggesting the broad applicability of our approach. We begin from the asymptotic setting, where the number of expert trajectories tends to infinity. In this case, GAIL's, RED's, and SAIL's discriminators all ultimately recover the expert policy's support at convergence (see the respective analyses for GAIL and for RED; SAIL follows from their combination). Moreover, for both GAIL and SAIL, the expert and agent policy distributions match exactly at convergence, implying successful imitation. Therefore, it is critical to characterize the rates of convergence of the two methods, namely their relative sample complexity with respect to the number of expert demonstrations. Formally, let (s, a) ∈ supp(π_E).
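To make the reward combination concrete, the sketch below computes r_red, r_gail, and their product r_sail, assuming a RED predictor/target pair trained as above and a discriminator that outputs the probability that a state-action pair was generated by the agent. Shapes, the clamp constant, and the tiny networks in the usage lines are illustrative assumptions.

import torch
import torch.nn as nn

def red_reward(f_pred, f_target, sa, sigma=1.0):
    # r_red(s, a) = exp(-sigma * ||f_theta_hat(s, a) - f_theta(s, a)||^2)
    err = ((f_pred(sa) - f_target(sa)) ** 2).sum(dim=-1)
    return torch.exp(-sigma * err)

def gail_reward(discriminator, sa, eps=1e-8):
    # r_gail(s, a) = -log D(s, a), non-negative because D(s, a) lies in (0, 1)
    d = discriminator(sa).squeeze(-1).clamp(min=eps)
    return -torch.log(d)

def sail_reward(f_pred, f_target, discriminator, sa, sigma=1.0):
    # r_sail = r_red * r_gail: the support estimate masks the adversarial reward
    # away from the estimated support of the expert policy.
    return red_reward(f_pred, f_target, sa, sigma) * gail_reward(discriminator, sa)

d = 10
f_target, f_pred = nn.Linear(d, 64), nn.Linear(d, 64)
discriminator = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())
sa = torch.randn(5, d)
rewards = sail_reward(f_pred, f_target, discriminator, sa)  # shape (5,)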
Prototypical learning bounds for an estimator of the support r ≥ 0 provide high probability bounds of the form P(r(s, a) ≤ c log(1/δ) n^{−α}) > 1 − δ for any confidence δ ∈ (0, 1], with c a constant not depending on δ or the number n of samples (i.e., expert state-actions). Here, α > 0 represents the learning rate, namely how fast the estimator converges to the support. By choosing the combined reward r_sail, we are leveraging the faster of the learning rates α_red and α_gail with respect to support estimation. At the time being, no results are available to characterize the sample complexity of GAIL (loosely speaking, the α and c introduced above). Therefore, we proceed by focusing on a relative comparison with SAIL. In particular, we show the following (see the appendix for a proof). Proposition 1. Assume that for any (s, a) ∈ supp(π_E), the rewards for RED and GAIL have learning rates α_red and α_gail, respectively, in estimating the support. Then, for any δ ∈ (0, 1] and any (s, a) ∈ supp(π_E), the following holds with probability at least 1 − δ, where R_red and R_gail are the upper bounds for r_red and r_gail, respectively. This result shows that SAIL is at least as fast as the faster among RED and GAIL with respect to support estimation, implying that SAIL is at least as efficient as GAIL in the sample complexity for expert data. It also indicates the quality of the learned reward, as state-actions outside the expert's support should be assigned minimum reward. Proposition 2. For any (s, a) ∈ supp(π_E) and any δ ∈ (0, 1], assume the corresponding bounds hold. Then the following event holds with probability at least 1 − δ. This result shows that, on the expert policy's support, r_sail is close to r_gail up to a precision that improves with the number of expert state-actions. SAIL thus provides RL signals comparable to GAIL on the expert policy's support. It is also worth noting that the analysis could explain why r_red + r_gail is a less viable approach for combining the two RL signals. The analogous bound for the first result would be the sum of errors from the two methods, implying the slower of the two learning rates, while the second bound would improve only by a constant, as R_gail would be absent from it. Our preliminary experiments indicated that r_red + r_gail performed noticeably worse than the product. Lastly, we comment on whether the assumptions above are satisfied in practice. Following the kernel-based version of RED, we can borrow previous results from the set learning literature, which guarantee RED to have a rate of α_red = 1/2. These rates have been shown to be optimal: no estimator of the support can have faster rates than n^{−1/2}, unless additional assumptions are imposed. Learning rates for distribution matching with GANs are still an active area of research, and conclusive results characterizing the convergence rates of these estimators are not available. We refer to the literature for an in-depth analysis of the topic. We evaluate the proposed method against BC, GAIL and RED on Lunar Lander and six Mujoco control tasks including Hopper, Reacher, HalfCheetah, Walker2d, Ant, and Humanoid. We omit evaluation against methods using off-policy RL algorithms, as they are not the focus of this work. We also note that support guidance is complementary to such methods. We demonstrate that SAIL variants mitigate the survival bias in Lunar Lander (Fig. 1) from OpenAI Gym, while other baseline methods imitate the expert inconsistently. In this task, the agent is required to control a spacecraft to safely land between the flags. A human expert provided 10 demonstrations for this task as an imitation target.
We observe that even without the environment reward, Lunar Lander provides a natural RL signal by terminating episodes early when crashes are detected, thus encouraging the agent to avoid crashing. Consequently, all methods are able to successfully imitate the expert and land the spacecraft appropriately. SAIL variants perform slightly better than GAIL variants on the average reward, and achieve noticeably lower standard deviation. The average performances and the standard deviations evaluated over 50 runs are presented in Table 1. To construct a more challenging task, we disable all early termination features of the environment, thus removing the environment RL signals. In this no-terminal environment, a training episode only ends after the time limit. We present each algorithm's performance for the no-terminal setting in Table 1. SAIL variants outperform GAIL variants. Specifically, we observe that GAIL learns to land for some initial conditions, while exhibiting survival bias in other scenarios by hovering at the goal. In contrast, SAIL variants are still able to recover the expert policy. To visualize the shaping effect from support guidance, we plot the average learned reward for GAIL, SAIL-b and RED at goal states. The goal states are selected from the expert trajectories and satisfy two conditions: 1) touching the ground (the state vector has indicator variables for ground contact), and 2) having "no op" as the corresponding action. As the adversarial reward functions are dynamic, we snapshot the learned rewards when the algorithms obtain their best policies, respectively. Fig. 3 shows the average rewards for each available action, averaged across all the goal states. Compared against the other algorithms, SAIL-b assigns a significantly higher reward to "no op", which facilitates the agent learning. Though GAIL and RED still favor "no op" over other actions, the differences in reward are much smaller, causing less consistent landing behaviors. We further observe that all evaluated AIL methods oscillate between partially hovering behavior and landing behavior during policy learning. The observation suggests that our method only partially addresses the survival bias, a limitation we will tackle in future works. This is likely caused by SAIL's non-negative reward, despite the beneficial shaping effect from support estimation. For additional experiments and discussion on Lunar Lander, please refer to the appendix. Mujoco control tasks have been commonly used as the standard benchmark for AIL. We evaluate SAIL against GAIL, RED and BC on Hopper, Reacher, HalfCheetah, Walker2d, Ant and Humanoid. We adopt the same experimental setup as prior work by sub-sampling the expert trajectories every 20 samples. Consistent with the observations of that work, our preliminary experiments show that sub-sampling presents a more challenging setting, as BC is competitive with AIL when full trajectories are used. In our experiments, we also adopt the minimum number of expert trajectories specified in prior work for each task. Table 2: Episodic reward and standard deviation on the Mujoco tasks (columns: Hopper, Reacher, HalfCheetah, Walker2d, Ant, Humanoid) by different methods, evaluated over 50 runs. [method label missing]: 1056.5 ± 0.5, -9.1 ± 4.1, -0.2 ± 0.7, 2372.8 ± 8.8, 1005.5 ± 8.6, 6012.0 ± 434.9. GAIL: 3826.5 ± 3.2, -9.1 ± 4.4, 4604.7 ± 77.6, 5295.4 ± 44.1, 1013.3 ± 16.0, 8781.2 ± 3112.6. GAIL-b: 3810.5 ± 8.1, -8.3 ± 2.5, 4510.0 ± 68.0, 5388.1 ± 161.2, 3413.1 ± 744.7, 10132.5 ± 1859.3. SAIL: 3824.7 ± 6.6, -7.5 ± 2.7, 4747.5 ± 43.4, 5293.0 ± 590.9, 3330.4 ± 729.4, 9292.8 ± 3190.0. SAIL-b: 3811.6 ± 3.8, -7.4 ± 2.5, 4632.2 ± 59.1, 5438.6 ± 18.4, 4176.3 ± 203.1, 10589.6 ± 52.2.
SAIL-b achieves overall the best performance, with significantly lower standard deviation, indicating the robustness of the learned policies. More details on the experiment setup are available in the appendix. We apply each algorithm using 5 different random seeds in all Mujoco tasks. Table 2 shows the performance comparison between the evaluated algorithms. We report the mean performance and standard deviation for each algorithm over 50 evaluation runs, choosing the best policies obtained for each algorithm out of the 5 random seeds. The results show that SAIL-b is comparable to GAIL on Hopper, and outperforms the other methods on all other tasks. We note that RED significantly underperforms in the sub-sampling setting, while its original evaluation used full trajectories. Across all tasks, SAIL-b generally achieves lower standard deviation compared to other algorithms, in particular for Humanoid, indicating the robustness of the learned policies. We stress that standard deviation is also a critical metric, as it indicates the robustness of the learned policies when presented with different states. For instance, the large standard deviations in Humanoid are caused by occasional crashes, which may be highly undesirable depending on the intended applications. To illustrate the robustness of the learned policies, we plot the histogram of all 50 evaluations in Humanoid for RED, GAIL-b and SAIL-b in Fig. 2. The figure shows that SAIL-b performs consistently at expert performance. Though GAIL-b appears to be only slightly worse in average performance, the degradation is caused by occasional and highly undesirable crashes, suggesting incomplete imitation of the expert. RED performs the worst in average performance, but is consistent, with no failure modes detected. This suggests that the proposed method combines the advantages of both support guidance and adversarial learning. Comparing SAIL against SAIL-b, we observe that the bounded variant generally produces policies with smaller standard deviations and better performance, especially for Ant and Humanoid. This is likely due to the fact that SAIL-b receives equal contribution from both support guidance and adversarial learning, as r_red and r_gail have the same range in this formulation. In addition, we note that GAIL fails to imitate the expert in Ant, while GAIL-b performs significantly better. The results suggest that restricting the range of the adversarial reward could improve performance. To assess the sensitivity with respect to random seeds, we plot the training progress against the number of iterations for the evaluated algorithms in Fig. 4. Each iteration consists of 1000 environment steps. The figure reports the mean and standard deviation of each algorithm across the 5 random seeds. Fig. 4 shows that SAIL-b is more sample efficient and stable on the Reacher, Ant and Humanoid tasks, and is comparable to the other algorithms on the remaining tasks. Consistent with our analysis in Section 3.3, SAIL-b appears at least as efficient as GAIL even when the support guidance (i.e., the performance of RED) suffers from insufficient expert data in Hopper, HalfCheetah and Walker2d. In Reacher, Ant and Humanoid, SAIL-b benefits from the support guidance and achieves better performance and training stability. In particular, we note that without support guidance, GAIL fails to imitate the expert in Ant (Fig. 4e). Similar failures were also observed in prior work.
GAIL is also more sensitive to initial conditions: in Humanoid, GAIL converged to sub-optimal policies in 2 out of 5 seeds. Lastly, while RED improves noticeably faster during early training in Humanoid, it converged to a sub-optimal policy eventually. In this paper, we propose Support-guided Adversarial Imitation Learning by combining support guidance with adversarial imitation learning. Our approach is complementary to existing adversarial imitation learning algorithms, and addresses several challenges associated with them. More broadly, our results show that expert demonstrations contain rich sources of information for imitation learning. Effectively combining different sources of reinforcement learning signals from the expert demonstrations produces more efficient and stable algorithms by constraining the policy search space, and appears to be a promising direction for future research. RED and SAIL use the same setup for support estimation; we use the default networks from RED. We set σ following the heuristic from RED that (s, a) from the expert trajectories mostly receive reward close to 1. For fair comparisons, all algorithms shared hyperparameters for each task. We present them in the table below, including the discriminator learning rate l_D, discount factor γ, number of policy steps per iteration n_G, and whether the policy has fixed variance. All other hyperparameters are set to their default values from OpenAI's baselines. To compare our method with the technique of introducing virtual absorbing states (AS), we also construct a goal-terminal environment where the only terminal state is successful landing at the goal, because the AS technique cannot be directly applied in the no-terminal environment. We present the results in Appendix C. The results suggest that AS overall improves both the mean performance and standard deviations for both GAIL and SAIL. Specifically, the technique is able to mitigate the survival bias in GAIL significantly. However, SAIL still compares favorably to the technique in the goal-terminal environment. Further, since AS and support guidance are not mutually exclusive, we also combine them and report the performance. The results suggest that support guidance is compatible with AS, and achieves overall the best performance with low standard deviations. The results also suggest that both AS and support guidance partially mitigate the reward bias, but do not fully solve it. We will further explore this issue in future work.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1x3unVKPS
We unify support estimation with the family of Adversarial Imitation Learning algorithms into Support-guided Adversarial Imitation Learning, a more robust and stable imitation learning framework.
We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges. We show that current link prediction methods are generally ill-equipped to handle this task, as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data. To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges. Given a graph representing known relationships between a set of nodes, the goal of link prediction is to learn from the graph and infer novel or previously unknown relationships. For instance, in a social network we may use link prediction to power a friendship recommendation system, or in the case of biological network data we might use link prediction to infer possible relationships between drugs, proteins, and diseases. However, despite its popularity, previous work on link prediction generally focuses only on one particular problem setting: it generally assumes that link prediction is to be performed on a single large graph and that this graph is relatively complete, i.e., that at least 50% of the true edges are observed during training (e.g., see Lü &). In this work, we consider the more challenging setting of few shot link prediction, where the goal is to perform link prediction on multiple graphs that contain only a small fraction of their true, underlying edges. This task is inspired by applications where we have access to multiple graphs from a single domain but where each of these individual graphs contains only a small fraction of the true, underlying edges. For example, in the biological setting, high-throughput interactomics offers the possibility to estimate thousands of biological interaction networks from different tissues, cell types, and organisms; however, these estimated relationships can be noisy and sparse, and we need learning algorithms that can leverage information across these multiple graphs in order to overcome this sparsity. Similarly, in the e-commerce and social network settings, link prediction can often have a large impact in cases where we must quickly make predictions on sparsely-estimated graphs, such as when a service has been recently deployed to a new locale. That is to say, link prediction for a new sparse graph can benefit from transferring knowledge from other, possibly denser, graphs, assuming there is exploitable shared structure. We term this problem of link prediction from sparsely-estimated multi-graph data as few shot link prediction, analogous to the popular few shot classification setting. The goal of few shot link prediction is to observe many examples of graphs from a particular domain and leverage this experience to enable fast adaptation and higher accuracy when predicting edges on a new, sparsely-estimated graph from the same domain - a task that can also be viewed as a form of meta learning, or learning to learn (1992), in the context of link prediction.
This few shot link prediction setting is particularly challenging as current link prediction methods are generally ill-equipped to transfer knowledge between graphs in a multi-graph setting and are also unable to effectively learn from very sparse data. Present work. We introduce a new framework called Meta-Graph for few shot link prediction and also introduce a series of benchmarks for this task. We adapt the classical gradient-based meta-learning formulation for few shot classification to the graph domain. Specifically, we consider a distribution over graphs as the distribution over tasks from which a global set of parameters are learnt, and we deploy this strategy to train graph neural networks (GNNs) that are capable of few-shot link prediction. To further bootstrap fast adaptation to new graphs we also introduce a graph signature function, which learns how to map the structure of an input graph to an effective initialization point for a GNN link prediction model. We experimentally validate our approach on three link prediction benchmarks. We find that our Meta-Graph approach not only achieves fast adaptation but also converges to a better overall solution in many experimental settings, with an average improvement of 5.3% in AUC at convergence over non-meta learning baselines. The basic set-up for few shot link prediction is as follows: We assume that we have a distribution p(G) over graphs, from which we can sample training graphs G_i ∼ p(G), where each graph G_i is defined by a set of nodes V_i, edges E_i, and a matrix of real-valued node attributes X_i ∈ R^{|V_i|×d}. When convenient, we will also equivalently represent a graph as G_i = (V_i, A_i, X_i), where A_i ∈ {0, 1}^{|V_i|×|V_i|} is an adjacency matrix representation of the edges in E_i. We assume that each of these sampled graphs, G_i, is a simple graph (i.e., contains a single type of relation and no self loops) and that every node v ∈ V_i in the graph is associated with a real valued attribute vector x_v ∈ R^d from a common vector space. We further assume that for each graph G_i we have access to only a sparse subset E_i^train ⊂ E_i of the true edges during training. In terms of distributional assumptions, we assume that this p(G) is defined over a set of related graphs (e.g., graphs drawn from a common domain or application setting). Our goal is to learn a global or meta link prediction model from a set of sampled training graphs G_i ∼ p(G), i = 1...n, such that we can use this meta model to quickly learn an effective link prediction model on a newly sampled graph G* ∼ p(G). More specifically, we wish to optimize a global set of parameters θ, as well as a graph signature function ψ(G_i), which can be used together to generate an effective parameter initialization, φ_i, for a local link prediction model on graph G_i. Relationship to standard link prediction. Few shot link prediction differs from standard link prediction in three important ways: 1. Rather than learning from a single graph G, we are learning from multiple graphs {G_1, ..., G_n} sampled from a common distribution or domain. 2. We presume access to only a very sparse sample of true edges. Concretely, we focus on settings where at most 30% of the edges in E_i are observed during training, i.e., where |E_i^train| ≤ 0.3 × |E_i|. (By "true edges" we mean the full set of ground truth edges available in a particular dataset.) 3. We distinguish between the global parameters θ, which are used to encode knowledge about the underlying distribution of graphs, and the local parameters φ_i, which are optimized to perform link prediction on a specific graph G_i.
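As a concrete illustration of this setup, the snippet below builds the sparse per-graph edge splits assumed above. The 30% cap on observed edges comes from the text; the validation fraction and the toy graphs are placeholders.

import random

def split_edges(edges, train_frac=0.3, val_frac=0.2, seed=0):
    # Split one graph's true edges E_i into a sparse observed training set
    # (at most train_frac of all edges) plus held-out validation/test edges.
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_train = int(train_frac * len(edges))
    n_val = int(val_frac * len(edges))
    return (edges[:n_train],
            edges[n_train:n_train + n_val],
            edges[n_train + n_val:])

graphs = [
    [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)],  # toy graph 1 (edge list)
    [(0, 1), (1, 2), (2, 0), (2, 3)],          # toy graph 2
]
splits = [split_edges(e, seed=i) for i, e in enumerate(graphs)]
train_edges, val_edges, test_edges = splits[0]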
This distinction allows us to consider leveraging information from multiple graphs, while still allowing for individually-tuned link prediction models on each specific graph. Relationship to traditional meta learning. Traditional meta learning for few-shot classification, generally assumes a distribution p(T) over classification tasks, with the goal of learning global parameters that can facilitate fast adaptation to a newly sampled task T i ∼ p(T) with few examples. We instead consider a distribution p(G) over graphs with the goal of performing link prediction on a newly sampled graph. An important complication of this graph setting is that the individual predictions for each graph (i.e., the training edges) are not i.i.d.. Furthermore, for few shot link prediction we require training samples as a sparse subset of true edges that represents a small percentage of all edges in a graph. Note that for very small percentages we effectively break all graph structure and recover the supervised setting for few shot classification and thus simplifying the problem. We now outline our proposed approach, Meta-Graph, to the few shot link prediction problem. We first describe how we define the local link prediction models, which are used to perform link prediction on each specific graph G i. Next, we discuss our novel gradient-based meta learning approach to define a global model that can learn from multiple graphs to generate effective parameter initializations for the local models. The key idea behind Meta-Graph is that we use gradient-based meta learning to optimize a shared parameter initialization θ for the local models, while also learning a parametric encoding of each graph G i that can be used to modulate this parameter initialization in a graph-specific way (Figure 1). In principle, our framework can be combined with a wide variety of GNN-based link prediction approaches, but here we focus on variational graph autoencoders (VGAEs) (b) as our base link prediction framework. Formally, given a graph G = (V, A, X), the VGAE learns an inference model, q φ, that defines a distribution over node embeddings q φ (Z|A, X), where each row z v ∈ R d of Z ∈ R |V|×d is a node embedding that can be used to score the likelihood of an edge existing between pairs of nodes. The parameters of the inference model are shared across all the nodes in G, to define the approximate posterior, where the parameters of the normal distribution are learned via GNNs: and The generative component of the VGAE is then defined as i.e., the likelihood of an edge existing between two nodes, u and v, is proportional to the dot product of their node embeddings. Given the above components, the inference GNNs can be trained to minimize the variational lower bound on the training data: where a Gaussian prior is used for p(z). We build upon VGAEs due to their strong performance on standard link prediction benchmarks (b), as well as the fact that they have a well-defined probabilistic interpretation that generalizes many embedding-based approaches to link prediction (e.g., node2vec ). We describe the specific GNN implementations we deploy for the inference model in Section 3.3. The key idea behind Meta-Graph is that we use gradient-based meta learning to optimize a shared parameter initialization θ for the inference models of a VGAE, while also learning a parametric encoding ψ(G i) that modulates this parameter initialization in a graph-specific way. 
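As a concrete point of reference for this base model, the following is a minimal sketch of a VGAE with a two-layer GCN encoder and a dot-product decoder. The dense adjacency representation, the layer widths, and the mean-reduced KL term are illustrative simplifications rather than the authors' exact configuration; the paragraph that follows describes how the parameters of this inference model are initialized from the global parameters and the graph signature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + torch.eye(A.shape[0])
    d_inv_sqrt = torch.diag(A_tilde.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

class DenseGCNLayer(nn.Module):
    """One GCN propagation step on a dense, normalized adjacency."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)

    def forward(self, A_hat, H):
        return A_hat @ self.lin(H)                  # aggregate, then transform

class VGAE(nn.Module):
    """Inference GNNs output per-node mu and log-variance; the decoder scores an edge
    (u, v) by the dot product of the corresponding node embeddings."""
    def __init__(self, d_feat, d_hid=32, d_lat=16):
        super().__init__()
        self.gcn_in = DenseGCNLayer(d_feat, d_hid)
        self.gcn_mu = DenseGCNLayer(d_hid, d_lat)
        self.gcn_logvar = DenseGCNLayer(d_hid, d_lat)

    def forward(self, A_hat, X):
        H = F.relu(self.gcn_in(A_hat, X))
        mu, logvar = self.gcn_mu(A_hat, H), self.gcn_logvar(A_hat, H)
        Z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logits = Z @ Z.t()                                       # sigmoid(z_u . z_v) = p(A_uv = 1)
        return logits, mu, logvar

def vgae_loss(logits, A_target, mu, logvar):
    """Negative ELBO: reconstruction of observed edges plus KL to a standard Gaussian prior."""
    recon = F.binary_cross_entropy_with_logits(logits, A_target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```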
Specifically, given a sampled training graph G i, we initialize the inference model q φi for a VGAE link prediction model using a combination of two learned components: • A global initialization, θ, that is used to initialize all the parameters of the GNNs in the inference model. The global parameters θ are optimized via second-order gradient descent to provide an effective initialization point for any graph sampled from the distribution p(G). • A graph signature s Gi = ψ(G i) that is used to modulate the parameters of inference model φ i based on the history of observed training graphs. In particular, we assume that the inference model q φi for each graph G i can be conditioned on the graph signature. That is, we augment the inference model to q φi (Z|A, X, s Gi), where we also include the graph signature s Gi as a conditioning input. We use a k-layer graph convolutional network (GCN) (a), with sum pooling to compute the signature: where GCN denotes a k-layer GCN (as defined in (a) ), MLP denotes a densely-connected neural network, and we are summing over the node embeddings z v output from the GCN. As with the global parameters θ, the graph signature model ψ is optimized via second-order gradient descent. The overall Meta-Graph architecture is detailed in Figure 1 and the core learning algorithm is summarized in the algorithm block below. Result: Global parameters θ, Graph signature function ψ Initialize learning rates: α, Sample a mini-batch of graphs, G batch from p(G); The basic idea behind the algorithm is that we (i) sample a batch of training graphs, (ii) initialize VGAE link prediction models for these training graphs using our global parameters and signature function, (iii) run K steps of gradient descent to optimize each of these VGAE models, and (iv) use second order gradient descent to update the global parameters and signature function based on a held-out validation set of edges. As depicted in Fig 1, this corresponds to updating the GCN based encoder for the local link prediction parameters φ j and global parameters θ along with the graph signature function ψ using second order gradients. Note that since we are running K steps of gradient descent within the inner loop of Algorithm 1, we are also "meta" optimizing for fast adaptation, as θ and ψ are being trained via second-order gradient descent to optimize the local model performance after K gradient updates, where generally K ∈ {0, 1, . . ., 5}. We consider several concrete instantiations of the Meta-Graph framework, which differ in terms of how the output of the graph signature function is used to modulate the parameters of the VGAE inference models. For all the Meta-Graph variants, we build upon the standard GCN propagation rule (a) to construct the VGAE inference models. In particular, we assume that all the inference GNNs (Equation 1) are defined by stacking K neural message passing layers of the form: where h v ∈ R d denotes the embedding of node v at layer k of the model, N (v) = {u ∈ V : e u,v ∈ E} denotes the nodes in the graph neighborhood of v, and W (k) ∈ R d×d is a trainable weight matrix for layer k. The key difference between Equation 5 and the standard GCN propagation rule is that we add the modulation function m s G, which is used to modulate the message passing based on the graph signature s G = ψ(G). We describe different variations of this modulation below. 
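The modulation variants described next all plug into the propagation rule above (Equation 5). As a concrete reference, here is a minimal sketch of the graph signature function (a GCN with sum pooling followed by an MLP) and of a message-passing layer whose aggregated messages are modulated feature-wise by the signature output. It reuses `DenseGCNLayer` from the previous sketch; the assumption that every modulated layer shares the same hidden width, and the exact placement of the non-linearity, are illustrative choices rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphSignature(nn.Module):
    """s_G = MLP( sum_v GCN(A, X)_v ): a 2-layer GCN, sum pooling over nodes, and an MLP
    that emits one (gamma, beta) pair per modulated layer, squashed to [-1, 1] by tanh."""
    def __init__(self, d_feat, d_hid, n_mod_layers):
        super().__init__()
        self.gcn1 = DenseGCNLayer(d_feat, d_hid)
        self.gcn2 = DenseGCNLayer(d_hid, d_hid)
        self.mlp = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, 2 * d_hid * n_mod_layers))
        self.d_hid, self.n_mod_layers = d_hid, n_mod_layers

    def forward(self, A_hat, X):
        H = F.relu(self.gcn2(A_hat, F.relu(self.gcn1(A_hat, X))))
        s = torch.tanh(self.mlp(H.sum(dim=0)))            # sum pooling over the node dimension
        return s.view(self.n_mod_layers, 2, self.d_hid)   # [layer, (gamma | beta), feature]

class ModulatedGCNLayer(nn.Module):
    """GCN layer whose aggregated messages are modulated by the graph signature
    (the basic feature-wise linear variant; the gating and weighting variants described
    next replace the last line with gated or convex combinations of modulated and
    unmodulated messages)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)

    def forward(self, A_hat, H, gamma, beta):
        M = A_hat @ self.lin(H)            # standard GCN aggregation over neighbors
        return F.relu(gamma * M + beta)    # feature-wise linear modulation by s_G
```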
In all cases, the intuition behind this modulation is that we want to compute a structural signature from the input graphs that can be used to condition the initialization of the local link prediction models. Intuitively, we expect this graph signature to encode structural properties of sampled graphs G i ∼ p(G) in order to modulate the parameters of the local VGAE link prediction models and adapt it to the current graph., we experiment with basic feature-wise linear modulation to define the modulation function m s G: Here, we restrict the modulation terms β k and γ k output by the signature function to be in [−1, 1] by applying a tanh non-linearity after Equation 4. GS-Gating. Feature-wise linear modulation of the GCN parameters (Equation 6) is an intuitive and simple choice that provides flexible modulation while still being relatively constrained. However, one drawback of the basic linear modulation is that it is "always on", and there may be instances where the modulation could actually be counter-productive to learning. To allow the model to adaptively learn when to apply modulation, we extend the feature-wise linear modulation using a sigmoid gating term, ρ k (with entries), that gates in the influence of γ and β: GS-Weights. In the final variant of Meta-Graph, we extend the gating and modulation idea by separately aggregating graph neighborhood information with and without modulation and then merging these two signals via a convex combination: where we use the basic linear modulation (Equation 6) to define m s β k,γ k. Note that a simplification of Meta-Graph, where the graph signature function is removed, can be viewed as an adaptation of model agnostic meta learning (MAML) to the few shot link prediction setting. As discussed in Section 2, there are important differences in the setup for few shot link prediction, compared to traditional few shot classification. Nonetheless, the core idea of leveraging an inner and outer loop of training in Algorithm 1-as well as using second order gradients to optimize the global parameters-can be viewed as an adaptation of MAML to the graph setting, and we provide comparisons to this simplified MAML approach in the experiments below. We formalize the key differences by depicting the graphical model of MAML as first depicted in and contrasting it with the graphical model for Meta-Graph, in Figure 1. MAML when reinterpreted for a distribution over graphs, maximizes the likelihood over all edges in the distribution. On the other hand, Meta-Graph when recast in a hierarchical Bayesian framework adds a graph signature function that influencesφ j to produce the modulated parameters φ j from N sampled edges. This explicit influence of ψ is captured by the term p(φ j |ψ, φ j) in Equation 7 below: For computational tractability we take the likelihood of the modulated parameters as a point estimate -i.e., p(φ j |ψ,φ j) = δ(ψ ·φ j). We design three novel benchmarks for the few-shot link prediction task. All of these benchmarks contain a set of graphs drawn from a common domain. In all settings, we use 80% of these graphs for training and 10% as validation graphs, where these training and validation graphs are used to optimize the global model parameters (for Meta-Graph) or pre-train weights (for various baseline approaches). We then provide the remaining 10% of the graphs as test graphs, and our goal is to fine-tune or train a model on these test graphs to achieve high link prediction accuracy. 
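Before turning to the details of the benchmark splits, the inner/outer optimization of Algorithm 1 described above can be summarized in the following sketch. Here `adapt_loss(params, s_G, G, split)` is a hypothetical helper that builds the signature-conditioned VGAE inference model from an explicit list of parameters and returns its loss on the given edge split; the learning rates and the plain SGD outer update are illustrative, and a real implementation would typically use Adam for the outer step.

```python
import torch

def meta_outer_step(theta, signature, graph_batch, adapt_loss, alpha=1e-2, eta=1e-3, K=5):
    """One outer update of Meta-Graph: adapt a copy of the global parameters theta on each
    graph's training edges for K inner steps, then update theta and the signature network
    psi with second-order gradients computed on held-out validation edges."""
    outer_loss = 0.0
    for G in graph_batch:
        s_G = signature(G.A_hat, G.X)                # graph signature, kept in the autograd graph
        phi = [p.clone() for p in theta]             # local initialization: phi <- theta
        for _ in range(K):                           # inner-loop adaptation (fast adaptation)
            loss = adapt_loss(phi, s_G, G, split="train")
            grads = torch.autograd.grad(loss, phi, create_graph=True)
            phi = [p - alpha * g for p, g in zip(phi, grads)]   # differentiable update
        outer_loss = outer_loss + adapt_loss(phi, s_G, G, split="val")
    meta_params = list(theta) + list(signature.parameters())
    meta_grads = torch.autograd.grad(outer_loss, meta_params)  # gradients flow back through
    with torch.no_grad():                                      # the inner updates (second order)
        for p, g in zip(meta_params, meta_grads):
            p -= eta * g
    return float(outer_loss)
```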
Note that in this few shot link prediction setting, there are train/val/test splits at both the level of graphs and edges: for every individual graph, we are optimizing a model using the training edges to predict the likelihood of the test edges, but we are also training on multiple graphs with the goal of facilitating fast adaptation to new graphs via the global model parameters. Our goal is to use our benchmarks to investigate four key empirical questions: Q1 How does the overall performance of Meta-Graph compare to various baselines, including (i) a simple adaptation of MAML (i.e., an ablation of Meta-Graph where the graph signature function is removed), (ii), standard pre-training approaches where we pre-train the VGAE model on the training graphs before fine-tuning on the test graphs, and (iii) naive baselines that do not leverage multi-graph information (i.e., a basic VGAE without pre-training, the Adamic-Adar heuristic , and DeepWalk )? Q2 How well does Meta-Graph perform in terms of fast adaption? Is Meta-Graph able to achieve strong performance after only a small number of gradient steps on the test graphs? Q3 How necessary is the graph signature function for strong performance, and how do the different variants of the Meta-Graph signature function compare across the various benchmark settings? Q4 What is learned by the graph signature function? For example, do the learned graph signatures correlate with the structural properties of the input graphs, or are they more sensitive to node feature information? Datasets. Two of our benchmarks are derived from standard multi-graph datasets from proteinprotein interaction (PPI) networks and 3D point cloud data (FirstMM-DB) . These benchmarks are traditionally used for node and graph classification, respectively, but we adapt them for link prediction. We also create a novel multi-graph dataset based upon the AMINER citation data , where each node corresponds to a paper and links represent citations. We construct individual graphs from AMINER data by sampling ego networks around nodes and create node features using embeddings of the paper abstracts (see Appendix for details). We preprocess all graphs in each domain such that each graph contains a minimum of 100 nodes and up to a maximum of 20000 nodes. For all datasets, we perform link prediction by training on a small subset (i.e., a percentage) of the edges and then attempting to predict the unseen edges (with 20% of the held-out edges used for validation). Key dataset statistics are summarized in Table 1. Baseline details. Several baselines correspond to modifications or ablations of Meta-Graph, including the straightforward adaptation of MAML (which we term MAML in the ), a finetune baseline where we pre-train a VGAE on the training graphs observed in a sequential order and finetune on the test graphs (termed Finetune). We also consider a VGAE trained individually on each test graph (termed No Finetune). For Meta-Graph and all of these baselines we employ Bayesian optimization with Thompson sampling to perform hyperparameter selection using the validation sets. We use the recommended default hyperparameters for DeepWalk and Adamic-Adar baseline is hyperparameter-free. Q1: Overall Performance. Table 2 shows the link prediction AUC for Meta-Graph and the baseline models when trained to convergence using 10%, 20% or 30% of the graph edges. 
In this setting, we adapt the link prediction models on the test graphs until learning converges, as determined by performance on the validation set of edges, and we report the average link prediction AUC over the test edges of the test graphs. Overall, we find that Meta-Graph achieves the highest average AUC in all but one setting, with an average relative improvement of 4.8% in AUC compared to the MAML approach and an improvement of 5.3% compared to the Finetune baseline. Notably, MetaGraph is able to maintain especially strong performance when using only 10% of the graph edges for training, highlighting how our framework can learn from very sparse samples of edges. Interestingly, in the Ego-AMINER dataset, unlike PPI and FIRSTMM DB, we observe the relative difference in performance between Meta-Graph and MAML to increase with density of the training set. We hypothesize that this is due to fickle nature of optimization with higher order gradients in MAML which is somewhat alleviated in GS-gating due to the gating mechanism. With respect to computational complexity we observe a slight overhead when comparing MetaGraph to MAML which can be reconciled by realizing that the graph signature function is not updated in the inner loop update but only in outer loop. In the Appendix, we provide additional when using larger sets of training edges, and, as expected, we find that the relative gains of Meta-Graph decrease as more and more training edges are available. Q2: Fast Adaptation. Table 3 setting we only compare to the MAML, Finetune, and No Finetune baselines, as fast adaption in this setting is not well defined for the DeepWalk and Adamic-Adar baselines. In terms of fast adaptation, we again find that Meta-Graph is able to outperform all the baselines in all but one setting, with an average relative improvement of 9.4% compared to MAML and 8.0% compared to the Finetune baseline-highlighting that Meta-Graph can not only learn from sparse samples of edges but is also able to quickly learn on new data using only a small number of gradient steps. Also, we observe poor performance for MAML in the Ego-AMINER dataset dataset which we hypothesize is due to extremely low learning rates -i.e. 1e − 7 needed for any learning, the addition of a graph signature alleviates this problem. Figure 2 shows the learning curves for the various models on the PPI and FirstMM DB datasets, where we can see that Meta-Graph learns very quickly but can also begin to overfit after only a small number of gradient updates, making early stopping essential. Q3: Choice of Meta-Graph Architecture. We study the impact of the graph signature function and its variants GS-Gating and GS-Weights by performing an ablation study using the FirstMM DB dataset. Figure 3 shows the performance of the different model variants and baselines considered as the training progresses. In addition to models that utilize different signature functions we report a random baseline where parameters are initialized but never updated allowing us to assess the inherent power of the VGAE model for few-shot link prediction. To better understand the utility of using a GCN based inference network we also report a VGAE model that uses a simple MLP on the node features and is trained analogously to Meta-Graph as a baseline. 
As shown in Figure 3 many versions of the signature function start at a better initialization point or quickly achieve higher AUC scores in comparison to MAML and the other baselines, but simple modulation and GS-Gating are superior to GS-Weights after a few gradient steps. Q4: What is learned by the graph signature? To gain further insight into what knowledge is transferable among graphs we use the FirstMM DB and Ego-AMINER datasets to probe and compare the output of the signature function with various graph heuristics. In particular, we treat the output of s G = ψ(G) as a vector and compute the cosine similarity between all pairs of graph in the training set (i.e., we compute the pairwise cosine similarites between graph signatures, s G). We similarly compute three pairwise graph statistics-namely, the cosine similarity between average node features in the graphs, the difference in number of nodes, and the difference in number of edges-and we compute the Pearson correlation between the pairwise graph signature similarities and these other pairwise statistics. As shown in Table 4 we find strong positive correlation in terms of Pearson correlation coefficient between node features and the output of the signature function for both datasets, indicating that the graph signature function is highly sensitive to feature information. This observation is not entirely surprising given that we use such sparse samples of edges-meaning that many structural graph properties are likely lost and making the meta-learning heavily reliant on node feature information. We also observe moderate negative correlation with respect to the average Table 4: Pearson scores between graph signature output and other graph statistics. difference in nodes and edges between pairs of graphs for FirstMM DB dataset. For Ego-AMINER we observe small positive correlation for difference in nodes and edges. We now briefly highlight related work on link prediction, meta-learning, few-shot classification, and few-shot learning in knowledge graphs. Link prediction considers the problem of predicting missing edges between two nodes in a graph that are likely to have an edge. . Common successful applications of link prediction include friend and content recommendations , shopping and movie recommendation , knowledge graph completion and even important social causes such as identifying criminals based on past activities . Historically, link prediction methods have utilized topological graph features such as common neighbors yielding strong baselines like Adamic/Adar measure , Jaccard Index among others. Other approaches include Matrix Factorization and more recently deep learning and graph neural networks based approaches (; ;) have risen to prominence. A commonality among all the above approaches is that the link prediction problem is define over a single dense graph where the objective is to predict unknown/future links within the same graph. Unlike these previous approaches, our approach considers link prediction tasks over multiple sparse graphs which are drawn from distribution over graphs akin to real world scenario such as protein-protein interaction graphs, 3D point cloud data and citation graphs in different communities. In meta-learning or learning to learn (; 1992; ;), the objective is to learn from prior experiences to form inductive biases for fast adaptation to unseen tasks. 
Meta-learning has been particularly effective in few-shot learning tasks with a few notable approaches broadly classified into metric based approaches (; ;), augmented memory (; ;) and optimization based approaches . Recently, there are several works that lie at the intersection of meta-learning for few-shot classification and graph based learning. learn a graph between tasks in embedding space while introduce a message propagation rule between prototypes of classes. However, both these methods are restricted to the image domain and do not consider meta-learning over a distribution of graphs as done here. Another related line of work considers the task of few-shot relation prediction in knowledge graphs. developed the first method for this task, which leverages a learned matching met-ric using both a learned embedding and one-hop graph structures. More recently introduce Meta Relational Learning framework (MetaR) that seeks to transfer relation-specific meta information to new relation types in the knowledge graph. A key distinction between few-shot relation setting and the one which we consider in this work is that we assume a distribution over graphs while in the knowledge graph setting there is only a single graph and the challenge is generalizing to new types of relations within this graph. We introduce the problem of few-shot link prediction-where the goal is to learn from multiple graph datasets to perform link prediction using small samples of graph data-and we develop the Meta-Graph framework to address this task. Our framework adapts gradient-based meta learning to optimize a shared parameter initialization for local link prediction models, while also learning a parametric encoding, or signature, of each graph, which can be used to modulate this parameter initialization in a graph-specific way. Empirically, we observed substantial gains using Meta-Graph compared to strong baselines on three distinct few-shot link prediction benchmarks. In terms of limitations and directions for future work, one key limitation is that our graph signature function is limited to modulating the local link prediction model through an encoding of the current graph, which does not explicitly capture the pairwise similarity between graphs in the dataset. Extending Meta-Graph by learning a similarity metric or kernel between graphs-which could then be used to condition meta-learning-is a natural direction for future work. Another interesting direction for future work is extending the Meta-Graph approach to multi-relational data, and exploiting similarities between relation types through a suitable Graph Signature function. To construct the Ego-Aminer dataset we first create citation graphs from different fields of study. We then select the top 100 graphs in terms number of nodes for further pre-processing. Specifically, we take the 5-core of each graph ensuring that each node has a minimum of 5-edges. We then construct ego networks by randomly sampling a node from the 5-core graph and taking its two hop neighborhood. Finally, we remove graphs with fewer than 100 nodes and greater than 20000 nodes which leads to a total of 72 graphs as reported in Table 1. We list out complete when using larger sets of training edges for PPI, FIRSTMM DB and Ego-Aminer datasets. We show the for two metrics i.e. Average AUC across all test graphs. As expected, we find that the relative gains of Meta-Graph decrease as more and more training edges are available. Table 6: 5-gradient update AUC for PPI for training edge splits
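The ego-network extraction described in the appendix above for the Ego-AMINER dataset can be sketched as follows; the number of ego networks drawn per citation graph is an assumption, since the appendix does not state it.

```python
import random
import networkx as nx

def build_ego_aminer_graphs(citation_graphs, n_sampled=1, min_nodes=100, max_nodes=20000):
    """Take the 5-core of each citation graph, sample ego networks as two-hop neighborhoods
    of randomly chosen nodes, and keep only graphs within the stated size range."""
    ego_graphs = []
    for G in citation_graphs:
        core = nx.k_core(G, k=5)                         # every remaining node has degree >= 5
        if core.number_of_nodes() == 0:
            continue
        for _ in range(n_sampled):
            center = random.choice(list(core.nodes()))
            ego = nx.ego_graph(core, center, radius=2)   # two-hop neighborhood of the center
            if min_nodes <= ego.number_of_nodes() <= max_nodes:
                ego_graphs.append(ego)
    return ego_graphs
```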
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJepcaEtwB
We apply gradient-based meta-learning to the graph domain and introduce a new graph-specific transfer function to further bootstrap the process.
Generative neural networks map a standard, possibly distribution to a complex high-dimensional distribution, which represents the real world data set. However, a determinate input distribution as well as a specific architecture of neural networks may impose limitations on capturing the diversity in the high dimensional target space. To resolve this difficulty, we propose a training framework that greedily produce a series of generative adversarial networks that incrementally capture the diversity of the target space. We show theoretically and empirically that our training algorithm converges to the theoretically optimal distribution, the projection of the real distribution onto the convex hull of the network's distribution space. Generative Adversarial Nets (GAN) BID5 is a framework of estimating generative models. The main idea BID4 is to train two target network models simultaneously, in which one, called the generator, aims to generate samples that resemble those from the data distribution, while the other, called the discriminator, aims to distinguish the samples by the generator from the real data. Naturally, this type of training framework admits a nice interpretation as a twoperson zero-sum game and interesting game theoretical properties, such as uniqueness of the optimal solution, have been derived BID5. It is further proved that such adversarial process minimizes certain divergences, such as Shannon divergence, between the generated distribution and the data distribution. Simply put, the goal of training a GAN is to search for a distribution in the range of the generator that best approximates the data distribution. The range is often defined by the input latent variable z and its specific architecture, i.e., Π = {G(z, θ), θ ∈ Θ}. When the range is general enough, one could possibly find the real data distribution. However, in practice, the range is usually insufficient to perfectly describe the real data, which is typically of high dimension. As a , what we search for is in fact the I-projection BID2 of the real data distribution on Π. 1. The range of the generator Π is convex (see figure 1(a)), or it is not convex but the projection of the real data distribution on Π's convex hull (CONV Π) is in Π (see FIG0). 2. The range of the generator is non-convex and the projection of the real data distribution in CONV Π is not in the range Π (see figure 1(c)). In case 1, one can find the optimal distribution in Π to approximate real data set in CONV Π. But in case 2, using standard GANs with a single generator, one can only find the distribution in Π that is nearest to the projection. It then makes sense to train multiple generators and use a convex combination of them to better approximate the data distribution (than using a single generator in the non-convex case (see figure 1(c))).The above argument is based on the assumption that one could achieve global optimality by training, while this is not the case in general. When reaching a local optimal distribution, in order to improve performance, do we need to add more generators and restart training? In this paper, we put forward a sequential training procedure that adds generators one by one to improve the performance, without retraining the previously added generators. Our contributions can be summarized as follows.• We derive an objective function tailored for such a incremental training process. The objective function takes both the real data distribution and the pre-learned distribution into consideration. 
We show that with this new objective, we actually maximize marginal contribution when adding a new generator. We also put forward an incremental training algorithm based on the new objective function.• We prove that our algorithm always converges to the projection of real data distribution to the convex hull of the ranges of generators, which is the optimal solution with multiple generators. This property continues to hold in online settings where target distribution changes dynamically.• Our experiments show that our algorithm can overcome the local optimal issue mentioned above. We perform experiments on a synthetic dataset as well as two real world datasets, e.g., CelebA and MNIST, and conclude that our algorithm could improve the mixture distribution even in the case where the range is not sufficient enough.• Experiments also show that, compared with previous methods, our algorithm is fast and stable in reducing the divergence between mixture distribution and the real data. Recently, there have been intensive researches on improving the performance of generative adversarial neural networks. Two lines of works are closely related to our paper. They focus mainly on improving the discriminator and the generator respectively. The Unrolled GAN introduced by BID11 improves the discriminator by unrolling optimizing the objective during training, which stabilizes training and effectively reduces the mode collapse. D2GAN proposed by utilizes two discriminators to minimize the KL-divergence and the reverse KL-divergence respectively. It treats different modes more fairly, and thus avoids mode collapse. DFM introduced by brings a Denoising AutoEncoder (DAE) into the generator's objective to minimize the reconstruction error in order to get more information from the target manifold. BID12 proposed McGan based on mean and covariance feature matching to stabilize the training of GANs. Finally, WGAN introduced by employs the Wasserstein distance, which is a more appropriate measure of performance, and achieves more stable performance. These works are different from ours since they focus on the discriminator by measuring the divergence between the generated data and the real data more precisely. However, our work fixes the discriminator and tries to enrich the expressiveness of the generator by combining multiple generators.1.1.2 proposes two methods to improve the training process. The first is selfensembling GANs, which assembles the generators from different epochs to stabilize training. The other is Cascade GAN, where the authors train new generator using the data points with highest values from the discriminator. These two methods are heuristic ways to improve training, but with no theoretical guarantee. BID7 and BID3 proposed the methods called MGAN and multi-agent GANs respectively. The former introduces a classifier into the discriminator to catch different modes, while the later employ a new component into the generators' objective to promote diversity. BID1 introduces a new metric on distributions and proposes a MIX+GAN to search for an equilibrium. But all these methods need to train the multiple generators simultaneously, and none of them can deal with the case when the training process reaches a local optima. Also, these models lack flexibility, in the sense that when one tries to change the number of generators, all the generators need to be retrained. 
Another closely related work is BID15, in which the authors propose a method called AdaGAN, which is based on a robust reweighting scheme on the data set inspired from boosting. The idea is that the new generators should focus more on the previous bad training data. But AdaGAN and other boosting-like algorithms are based on the assumption that one generator could catch some modes precisely, which may not be reasonable since the generator always learns to generate the average samples among the real data set in order to obtain low divergence, especially when the generator's range is under condition of FIG0. In Section 5, we compare our algorithm with AdaGAN with different dataset. A GAN BID5 takes samples (a.k.a. latent variables z) from a simple and standard distribution as its input and generates samples in a high dimensional space to approximate the target distribution. This is done by training a generative neural network and an auxiliary discriminative neural network alternatively. An f- BID14 generalizes the adversarial training as minimizing the f-divergence between the real data distribution and the generated distribution, DISPLAYFORM0 dx. A GAN is a special f-GAN that minimizes the JensenShannon divergence. The general objective function of an f-GAN can be defined as follows: min DISPLAYFORM1 Here f * is the conjugate function of f in f-divergence; T represents a neural network regarded as the corresponding discriminator; finally, θ and ξ denote the parameters of the generator and the discriminator, respectively. The adversarial training method proposed by BID5 is playing a minimax game between the generator and the discriminator. Such a method can be caught in local optima and thus is undesirable (e.g., mode collapse).In this paper we propose a novel framework to train multiple generators sequentially: We maintain a group of generators (empty at the beginning) as well as their corresponding weights, then add new generators into the group one by one and rebalance the weights. In particular, only the newly added generator at each step is trained. The purpose here is to augment the capacity of the group of generators and mitigate the local optima issue. Define the distribution range of a generator as Π = {p | p = G(z, θ), θ ∈ Θ}, i.e., the set of distributions that the generator can produce with different parameter θ. The distribution range is determined by the distribution of input z and the architecture of the generative network. Define a generator group as G = {G 1, G 2,, . . ., G n}, where G i is the generator added in step i. We associate each generator with a weight ω i > 0. Then the mixed distribution of the group is: DISPLAYFORM0 ω i is the sum of weights. When a new generator G n+1 joins in the group G, the group becomes G = G ∪ {G n+1} and the mixed distribution becomes DISPLAYFORM1 In this section, we describe how we use a generator group to improve the performance and tackle the local optima issue mentioned previously. To train such a generator group, we propose an incremental training algorithm (algorithm 1) adding generators to the group sequentially. In algorithm 1, we use DISPLAYFORM0 repeat Build and initialize generator G i using the same network structure. Set target distribution for G i to be p target = DISPLAYFORM1 until Convergence D(·, ·) to denote the "distance" between two distributions, which can be any divergence (e.g., fdivergence or Wasserstein distance) or a general norm. 
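A compact sketch of this sequential procedure is given below, with all weights ω_i fixed to 1 as in the experiments. The residual target p_target = i·p_real − Σ_{j<i} p_{G_j} never has to be sampled directly: its expectation under the discriminator is estimated by linearity of expectation, as detailed in the next subsection. The callables `sample_real`, `sample_latent`, `make_generator`, `make_discriminator`, and the conjugate `f_star` are assumed interfaces rather than the authors' code, and the optimizer settings simply mirror the ones reported in the experiments.

```python
import torch

def train_generator_group(sample_real, sample_latent, make_generator, make_discriminator,
                          f_star, n_generators, steps_per_round=10000):
    """Grow a group of generators one at a time (Algorithm 1); earlier generators stay frozen."""
    group = []
    for i in range(1, n_generators + 1):              # the i-th generator joins the group
        G, T = make_generator(), make_discriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=5e-5, betas=(0.5, 0.9))
        opt_t = torch.optim.Adam(T.parameters(), lr=5e-5, betas=(0.5, 0.9))
        for _ in range(steps_per_round):
            x_real = sample_real()
            x_new = G(sample_latent())
            with torch.no_grad():                     # samples from the frozen group
                prev = [G_j(sample_latent()) for G_j in group]
            # E_{p_target}[T(x)] = i * E_{p_real}[T(x)] - sum_{j<i} E_{p_{G_j}}[T(x)]
            e_target = i * T(x_real).mean() - sum(T(x).mean() for x in prev)
            # the discriminator ascends the f-GAN lower bound against the new generator
            d_loss = -(e_target - f_star(T(x_new.detach())).mean())
            opt_t.zero_grad(); d_loss.backward(); opt_t.step()
            # the generator minimizes the same objective; only its own term depends on G
            g_loss = -f_star(T(G(sample_latent()))).mean()
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        group.append(G)                               # freeze G_i; the mixture is uniform over the group
    return group
```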
The key step in algorithm 1 is the choice of the target distribution for training DISPLAYFORM2 j=1 ω j p j and after adding G i, the generator group G can perfectly produce the desired distribution p real. However, in general, we have D(p target, p i) = 0 and our algorithm proceeds in a greedy fashion, i.e., it always maximizes the marginal contribution of G i to the generator group G. We devote the rest of this section to proving the above statement. In algorithm 1, we use different loss functions for each generators. The marginal contribution of the (N + 1)-th generator is as follows when we adopt f-divergence as the distance measure: DISPLAYFORM0 To get a better approximation to the real distribution, we fix the existing generators in the group and tune the parameters of the new generator to minimize the distance between the new group and the real distribution. In fact, this is equivalent to maximizing the marginal contribution of the new generator DISPLAYFORM1 DISPLAYFORM2.To show this, we first introduce the χ 2 -divergence. DISPLAYFORM3 dx. Note that χ 2 -divergence is a special case of the f-divergence: DISPLAYFORM4 In fact, with some mild assumptions on f, the f -divergence is well-approximated by χ 2 -divergence when p and q are close. The following lemma can be obtained via Taylor expansion BID2. Lemma 1. For any f-divergence with f (u), if f (u) is twice differentiable at u = 1 and f > 0, then for any q and p close to q we have: DISPLAYFORM5 Proof of proposition 1. We rewrite the objective function equation 1 for χ 2 -divergence: DISPLAYFORM6 Based on the former definition, we obtain DISPLAYFORM7 +, which concludes the proof. According to algorithm 1, in each round, a new generator G N +1 is added and the loss function is set to be D(p target, p N +1). Therefore, when training each generator G i, the target distribution only depends on the real distribution and the previous generators in G. In particular, both of them are already known (figure 2).To minimize D(p target, p G N +1), we conduct adversarial training by using an auxiliary discriminator T: DISPLAYFORM0 where by the linearity of expectation: DISPLAYFORM1.Based on these, we propose an incremental training algorithm for G N +1 as algorithm 2. In this section, we show that although our framework which trains each generator in a greedy way, the output distribution of the generator group will always converge. Furthermore, the converged distribution is the closest one to the target distribution among the set of all possible distributions that a group of generators can produce (i.e., the optimal one within the distribution range of the group of generators).Recall our notation that the distribution range of a generator is Π. By taking a convex combination of multiple generators (with the same architecture), the set of all possible output distributions becomes the convex hull of Π: DISPLAYFORM0 Our algorithm greedily optimizes each G N +1 to minimize D(p target, p G N +1). By the Pinsker's inequality, the total variation distance between p target and p G N +1 is upper bounded by D(p target, p G N +1)/2 and we can easily extend it to χ 2 -divergence by D KL (p||q) ≤ D χ 2 (p||q) + 0.42. In other words, while greedily optimizing each G N +1, the distance between p target and p G N +1 is also approximately minimized. Hence it is reasonable to assume that for each G N +1, its distance to p target is approximately minimized with some tolerance ≥ 0, i.e., p G N +1 −p target ≤ inf p − p target +. 
Under such an assumption, our algorithm approximately converges to the the optimal distribution in CONV Π: Proposition 2. For any Π that is connected and bounded, algorithm 2 approximately converges to the optimal distribution within the closure of the convex hull CONV Π of Π.To simplify the argument, we fix each ω i to be 1 and embed the discrete probability distributions into a Hilbert space. In this case, each G N +1 approximately minimizes the distance to p target = (N + 1)p real − N i=1 p Gi can be formalized as: DISPLAYFORM0 and our algorithm approximately converges to the optimal distribution in CONV Π if as N → ∞, DISPLAYFORM1 Then proposition 2 is implied by the following lemma. Lemma 2. Consider a connected and bounded subset Π of a Hilbert space H and any target ρ ∈ H. Let {p * n} ∞ n=1 be a sequence of points in Π such that for ρ target = (n + 1)ρ − nT n, DISPLAYFORM2 Corollary 1. With the finite change of target distribution, algorithm 2 can converge to the new optimal distribution within CONV Π.Due to the space limit, we send the proof to the appendix. Based on corollary 1, regardless the change of target distribution, as long as it is an finite variation, algorithm 2 can converge to the projection of new target distribution. Due to the sequential nature and the above theoretical guarantee, our algorithm naturally generalizes the dynamic online settings. We test our algorithm on a synthesized Gaussian distribution dataset and two well-known real world datasets: CelebA and MNIST, which are the complex high dimensional real world distributions. We design the experiment to test our sequential training algorithm. The main purpose is not to demonstrate high quality , e.g., high definition pictures, but to show that our algorithm can search for an appropriate distribution that significantly improved the performance of mixture distributions as the number of generators increase, especially when the generator's range is rather limited. In all experiments, we use the Adam optimizer with learning rate of 5 × 10 −5, and β 1 = 0.5, β 2 = 0.9. Finally, we set weights ω i = 1 for convenience. Metric. As the method mentioned in BID14, when we fix the generator, we can train an auxiliary neural network to maximize the derived lower bound to measure the divergence between the generated data and the real data, i.e., D f (P ||Q). Based on these theories, we import an auxiliary neural network to measure the performance of different methods. The architecture of the auxiliary neural network is the same as the discriminator used in each experiment. We train it for 50 epoches, which is enough to measure the differences. Then we take the mean value of the last 100 iterations as the final output. Synthesized data. In this part, we design some experiments in R 2 space. The dataset is sampled from 8 independent two-dimensional Gaussian distributions (i.e., the blue points in FIG7 . The model is previously proposed by BID11 .Firstly, following the experiment designed in BID11 and BID7, we choose the latent variable z in a high dimensional space as z ∼ N (0, I 256), i.e., the distribution projection is likely to be in the generator's range, which meets the condition of FIG0. In FIG7, the blue points are the real data while the corresponding colored number represents the data points generated by each generator respectively. As FIG7 shows, we train up to 4 generators to approximate the data distribution and the first generator tends to catch the data with high probability around the centre of each Gaussian. 
As the number of generators increasing, generated data tends to cover the data away from the centre in order to be complementary to previous mixture distributions and thus gains a considerable marginal profit. These demonstrate our marginal maximization algorithm can promote the mixture distributions to cover the data with low probabilities. Secondly, we reduce the dimension of z to 1, i.e., z ∼ N and simplify the corresponding network architecture, so that the condition of figure 1(c) is likely met. In this part, we compare our algorithm with the state of the art incremental training method AdaGAN BID15 and the baseline method Orignal GAN. 1 We train up to 20 generators in each experiment with the same starting generator (i.e., identical first generator for each method), then measure the D χ 2 (p||q) between real distribution and the generated mixed distribution. We repeat the experiment for 30 times to reduce the effect of random noises. Figure 5 and figure 6 illustrate the average and the best performance with different numbers of generators, respectively. According to the , our algorithm approaches to p real faster than the other two methods and achieves the best performance among all three methods. In summary, our algorithm outperforms the other two both in terms of the speed of converging to the real distribution and the quality of the final under the case of figure 1(c).MNIST. In this experiment, we run our algorithm on the MNIST dataset BID9. We design this experiment to measure the performance of our algorithm for a more complex data distribution. We choose the latent variable as z ∼ N to limit the corresponding generator range and Then we train up to 22 generators to approximate the real distribution and the is showed in figure 4. Our algorithm outperforms the Original GAN but is inferior to the AdaGAN with the first 8 generators. As the number of generators increases, AdaGAN seems to run into a bottleneck while both our algorithm and the Original GAN gradually approximate to the real data distribution. In order to analysis the convergence, we further train up to 100 generators with both our algorithm and the original GAN. In FIG4, the horizontal dash lines represent the minimum value of the Wasserstein distance for the two method respectively. As showed in figure, the distance gradually decrease with the number of generators increasing, and our algorithm is much faster to reduce the distance and can even obtain a better performance. More over, as we tends to investigate the property of each generator in the generators' group, we measure the Wasserstein distance between distribution G(z, θ) and the real data, the experiment is showed in FIG5 and the dash lines in FIG5 represent the mean value of the 100 generators. Interestingly, the shows that, in each generator, original GAN tends to search a distribution in the distribution range that is closer to the real data distribution, while our algorithm is searching for a distribution that is complementary to the generators' group (i.e., a huge decrease in the mixture condition in FIG4) even if its own performance is poor (i.e., a high distance in the in FIG5).CelebA. We also conduct our experiment on the CelebA dataset BID10. As shown in figure 1, we start with an identical generator and train up to 6 generators using different methods. The measured Wasserstein distance between mixed distribution of Group G and the real-data distribution is showed in FIG0. In this experiment, we use the training method WGAN-GP proposed by BID6. 
The experiment indicates that our algorithm outperforms the other two methods after the second generator. It demonstrates the potential of our algorithm applying to real world datasets. Proof of lemma 2. Without loss of generality, we can assume ρ = 0, since otherwise we can add an offset −ρ to the Hilbert space H. DISPLAYFORM0 Letp ∈ Π be the point that minimizes the distance between −nT n and its projectionp ⊥ on line −nT n, i.e.,p = arg min p∈Π p ⊥ + nT n, where p ⊥ = p,nTn H nTn 2 · nT n. Then we can further bound d n+1 by p + nT n, DISPLAYFORM1 On one hand, since T n ∈ CONV Π,p ⊥ can be seen as the projection ofp on T n as well, hence p −p ⊥ ≤ p − T n. Note that Π is bounded, therefore p − T n is bounded by the diameter of Π, denoted as d ≥ 0.On the other hand, suppose that p * is the point closest to ρ = 0 within CONV Π, i.e., p * = arg min p∈CONV Π p. Since Π is connected, therefore the projection of Π on −nT n is the same with the projection of CONV Π. Hence, DISPLAYFORM2 In other words, DISPLAYFORM3 If p * = inf p∈CONV Π p > 0, then we have d n = n T n ≥ n p *. Hence DISPLAYFORM4 2 /2(n + 1) p * ≤ (p * +)(n + 1) + d 2 /2 p * · ln(n + 1).Then for any δ > 0, let N be sufficiantly large such that ln N N ≤ 2δ p * /d 2, we have DISPLAYFORM5 Otherwise, p * = 0 and d n+1 ≤ + d 2 + d 2 n. Note that the upper bound is increasing in d n, hence ∀n > 0, d n ≤ d * n for d * n defined as follows: DISPLAYFORM6 For which, we can easily prove by induction that d * n ≤ n + √ nd. Therefore T n = d n ≤ + d/ √ n, which immediately completes the proof. Proof of corollary 1. Without loss of generality, we assume the optimal projection of target distribution is changed from ρ to ρ after n 0 iterations, where n 0 ∈ R + is a constant value. Then we can derive T n+n0 − ρ ≤ n· Tn−ρ n+n0 DISPLAYFORM0, where n ∈ R + is the training iteration after change. Then based on lemma 2, we obtain lim n→+∞ T n − ρ ≤ inf p∈Π p − ρ +. On the other side, for a specific n 0, T n0 − ρ ≤ T n0 − ρ + ρ − ρ is a bounded value if the variation of target distribution is limited. Finally, we can obtain lim n→+∞ T n+n0 − ρ ≤ inf p∈Π p − ρ +, which concludes the proof. dx. For KL-divergence and χ 2 -divergence, the corresponding f (t) are f KL (t) = t log(t) and f χ 2 (t) = (t−1) 2 respectively. Import an auxiliary function as: DISPLAYFORM1 Then based on the monotonicity of F(t), we have F (t) min ≥ −0.42. DISPLAYFORM2
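For reference, the divergences used in the proofs above take the standard forms (textbook definitions, consistent with f_KL(t) = t log t and f_χ²(t) = (t − 1)² as stated):

```latex
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\tfrac{p(x)}{q(x)}\right)\mathrm{d}x,
\qquad
D_{\mathrm{KL}}(P \,\|\, Q) = \int p(x)\log\tfrac{p(x)}{q(x)}\,\mathrm{d}x,
\qquad
D_{\chi^2}(P \,\|\, Q) = \int \frac{\big(p(x)-q(x)\big)^2}{q(x)}\,\mathrm{d}x .
```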
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryekdoCqF7
We propose a new method to incrementally train a mixture generative model to approximate the information projection of the real data distribution.
Generative priors have become highly effective in solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. With a generative model we can represent an image with a much lower dimensional latent codes. In the context of compressive sensing, if the unknown image belongs to the range of a pretrained generative network, then we can recover the image by estimating the underlying compact latent code from the available measurements. However, recent studies revealed that even untrained deep neural networks can work as a prior for recovering natural images. These approaches update the network weights keeping latent codes fixed to reconstruct the target image from the given measurements. In this paper, we optimize over network weights and latent codes to use untrained generative network as prior for video compressive sensing problem. We show that by optimizing over latent code, we can additionally get concise representation of the frames which retain the structural similarity of the video frames. We also apply low-rank constraint on the latent codes to represent the video sequences in even lower dimensional latent space. We empirically show that our proposed methods provide better or comparable accuracy and low computational complexity compared to the existing methods. Compressive sensing refers to a broad class of problems in which we aim to recover a signal from a small number of measurements -. Suppose we are given a sequence of measurements for t = 1,..., T as y t = A t x t + e t, where x t denotes the t th frame in the unknown video sequence, y t denotes its observed measurements, A t denotes the respective measurement operator, and e t denotes noise or error in the measurements. Our goal is to recover the video sequence (x t) from the available measurements (y t). The recovery problem becomes especially challenging as the number of measurements (in y t) becomes very small compared to the number of unknowns (in x t). Classical signal priors exploit sparse and low-rank structures in images and videos for their reconstruction -. However, the natural images exhibits far richer nonlinear structures than sparsity alone. We focus on a newly emerging generative priors that learn a function that maps vectors drawn from a certain distribution in a low-dimensional space into images in a highdimensional space. The generative model and optimization problems we use are inspired by recent work on using generative models for compressive sensing in -. Compressive sensing using generative models was introduced in, which used a trained deep generative network as a prior for image reconstruction from compressive measurements. Afterwards deep image prior (DIP) used an untrained convolutional generative model as a prior for solving inverse problems such as inpainting and denoising because of their tendency to generate natural images; the reconstruction problem involves optimization of generator network parameters. Inspired by these observations, a number of methods have been proposed for solving compressive sensing problem by optimizing generator network weights while keeping the latent code fixed at a random value,. Both DIP and deep decoder update the model weights to generate a given image; therefore, the generator can reconstruct wide range of images. One key difference between the two approaches is that the network used in DIP is highly overparameterized, while the one used in deep decoder is underparameterized. 
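As a concrete illustration of the optimization used by DIP-style priors (the latent code is fixed at random and only the untrained network weights are optimized to fit the measurements), consider the following sketch; the dense measurement matrix acting on a vectorized frame and the Adam settings are illustrative assumptions.

```python
import torch

def dip_recover(G, z, A, y, n_iters=2000, lr=1e-4):
    """Recover a single frame x from measurements y = A x + e by optimizing the weights
    of an untrained generator G while the latent code z stays fixed."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = G(z).reshape(-1)                 # generated frame, vectorized
        loss = ((A @ x_hat - y) ** 2).sum()      # measurement-consistency loss
        loss.backward()
        opt.step()
    return G(z).detach()
```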
We observed two main limitations in the DIP and deep decoder-based video recovery that we seek to address in this paper. The latent codes in DIP and deep decoder methods are initialized at random and stay fixed throughout the recovery process. Therefore, we cannot infer the structural similarities in the images from the structural similarities in the latent codes. Both of these methods train one network per image. A naive approach to train one network per frame in a video will be computationally prohibitive, and if we train a single network to generate the entire video sequence, then their performance degrades. Therefore, we propose joint optimization over network weights γ and the latent codes z t to reconstruct video sequence. Thus we learn a single generator and a set of latent codes to represent a video sequence. We observe that when we optimize over latent code alongside network weights, the temporal similarity in the video frames is reflected in the latent code representation. To exploit similarities among the frames in a video sequence, we also include low-rank constraints on the latent codes. An illustration of different types of representations we use in this paper are shown in Figure 1. In this paper, we reconstruct a video sequence from the compressive measurements in by jointly optimizing over the latent codes z t and the network parameters γ. Since the frames in a video sequence exhibit rich redundancies in their representation, we impose a low-rank constraint on the latent codes to represent the video sequence with a more compact representation of the latent codes. The key contributions of this paper are as follows. • We demonstrate that joint optimization allows us to learn a single generator network for an entire video sequence and corresponding latent codes simultaneously. We demonstrate that this approach has lower computational complexity and requires less number of parameters to reliably generate the entire video sequence. Furthermore, joint optimization retains the similarity structure of the video frames in their latent representation which leaves further scope for different tasks which involves latent space manipulation. • Consecutive frames in a video sequence share lot of similarities. To encode similarities among the reconstructed frames, we introduce low-rank constraints on the generator latent codes. This enables us to represent a video sequence with even smaller number of parameters in the latent space. We show that, in some cases, the low-rank structure on the latent codes also provides a nice low-dimensional manifold. For a single image reconstruction, deep image prior solve the following optimization to obtain optimalγ, arg min In this optimization, z is initialized randomly and kept unaltered. To jointly optimize the latent codes and generator parameters for a video sequence, we use the similar formulation as in but optimize it over the z t and γ. The ing optimization problem can be written as The reconstructed video sequence can be generated using the estimated latent codes (ẑ 1, . . .,ẑ T) and generator weights (γ) asx t = Gγ(ẑ t). We initialize latent codes with samples drawn from a Gaussian distribution and normalize them to have unit norm. We initialize γ with random weights using the initialization scheme in. Initilizing the generator with a pretrained set of weights can potentially serve as a good initialization and lead to good and faster convergence. 
We tested both variants, but observed little difference in performance; therefore, we use random initialization of the parameters in this paper. Each iteration of joint optimization consists of two steps: 1) latent code optimization and 2) network parameter optimization. After every gradient descent update of the latent codes z_t, we update the model parameters with stochastic gradient descent. In all of our experiments with joint optimization, we learn a single set of network weights for the entire sequence. We note that it is possible to divide a longer video sequence into small segments and learn a different set of network weights for each of them. At the end of our reconstruction process, we have a single set of trained weights γ̂, the reconstructed frames x̂_t, and their corresponding optimal latent codes ẑ_t. Because we optimize over the latent codes and the network weights jointly, the latent codes capture the temporal similarity of the video frames. To further exploit the redundancies in a video sequence, we assume that the variations across the sequence of images are localized, so that the sequence of latent codes can be represented in a space of lower dimension than its ambient dimension. Let us define a matrix Z whose columns are the latent codes, Z = [z_1 z_2 · · · z_T], where z_t is the latent code corresponding to the t-th image of the sequence. To impose a low-rank constraint, we solve the constrained optimization problem min_{z_1,...,z_T, γ} Σ_t ||A_t G_γ(z_t) − y_t||² subject to rank(Z) ≤ r. We solve it with a projected gradient descent method, in which we project the latent code estimates after every iteration onto the manifold of rank-r matrices. To do that, we form the matrix Z and compute its rank-r approximation using principal component analysis (PCA) or the singular value decomposition (SVD). In this manner, we can express each latent code in terms of r orthogonal basis vectors u_1, ..., u_r as z_t = Σ_{j=1}^{r} α_tj u_j, where α_tj is the weight of the corresponding basis vector. We can thus represent a video sequence with T frames using r orthogonal codes, and the low-rank representation of the latent codes requires r × k + r × T parameters instead of T × k. This corresponds to a compression factor of r/T + r/k for our latent code representation. As we observe later, we use r = 4 for k = 256 and T = 32, which gives a compression factor of 0.14 in the latent code representation. In this paper we report results for one synthetic sequence, which we refer to as 'Rotating MNIST'. In this sequence, we resize one MNIST digit to 64 × 64 and rotate it by 2° per frame for a total of 32 frames. We also experiment on different real videos from the KTH and UCF101 datasets. In Table I, we report our results for the 'Handclapping', 'Handwaving' and 'Walking' video sequences from the KTH dataset, and the 'Archery', 'Apply Eye Makeup' and 'Band Marching' video sequences from the UCF101 dataset. We centered and resized every frame in the KTH videos to 64 × 64 and in the UCF101 videos to 256 × 256 pixels. We used the well-known DCGAN architecture for our generators, except that we do not use any batch-normalization layer. The latent code dimensions for the grayscale 64 × 64, RGB 64 × 64, and RGB 256 × 256 video sequences are 64, 256, and 512, respectively. We use the Adam optimizer for the generator weights and SGD for the latent codes. Unless otherwise mentioned, we use a rank-4 constraint, because we empirically found that we need at least rank 4 for a video sequence with 32 frames to obtain comparable performance.
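A minimal sketch of this joint optimization with the rank-r projection step is given below. The generator G is assumed to map a k-dimensional code to a frame, the measurement operators A_t are taken as dense matrices acting on vectorized frames, and a single combined backward pass drives both the SGD update of the codes and the Adam update of the weights; these are simplifications of the procedure described above, with illustrative iteration counts and learning rates.

```python
import torch

def joint_recover(G, A_list, y_list, T, k, rank=4, n_iters=2000,
                  lr_weights=1e-4, lr_codes=1e-2):
    """Jointly optimize one set of generator weights and T latent codes, projecting the
    code matrix Z onto rank-r matrices (via SVD) after every iteration."""
    Z = torch.randn(T, k)
    Z = Z / Z.norm(dim=1, keepdim=True)              # unit-norm initialization of the codes
    Z.requires_grad_(True)
    opt_w = torch.optim.Adam(G.parameters(), lr=lr_weights)
    opt_z = torch.optim.SGD([Z], lr=lr_codes)
    for _ in range(n_iters):
        opt_w.zero_grad(); opt_z.zero_grad()
        loss = sum(((A_list[t] @ G(Z[t]).reshape(-1) - y_list[t]) ** 2).sum()
                   for t in range(T))                # measurement consistency over all frames
        loss.backward()
        opt_z.step()                                 # latent-code update
        opt_w.step()                                 # network-weight update
        with torch.no_grad():                        # project Z onto the rank-r manifold
            U, S, Vh = torch.linalg.svd(Z, full_matrices=False)
            Z.copy_(U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank])
    return [G(Z[t]).detach() for t in range(T)]
```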
We show comparisons with the classical total variation minimization-based TVAL3D algorithm (a 3D extension of TVAL3) and the state-of-the-art untrained generative prior-based deep decoder on denoising, inpainting, and compressive sensing tasks. We use two different deep decoder settings: an underparameterized deep decoder (UP deep decoder) and an overparameterized deep decoder (OP deep decoder). Although the authors suggested the deep decoder to be UP, we report results for the OP deep decoder as well because it shows better performance and its hyperparameters were tuned by the authors of the deep decoder. Other than denoising and inpainting, we performed compressive random projection experiments in which we used separable measurements, Y = P^T XP, where X, Y are reshaped versions of x, y as 2D matrices and P is a random projection matrix. In Table I, we report results for the denoising experiment at 20 dB SNR noise, the inpainting experiment with 80% missing pixels, and the compressive sensing experiment with 20% available measurements. From the results, we can observe that joint optimization with/without the low-rank constraint outperforms the TVAL3D algorithm and the UP deep decoder, and performs on par with the OP deep decoder. In Figure 2, we show reconstruction performance for denoising, inpainting, and compressive sensing at different measurement rates or noise levels for the 'Handwaving' video sequence; we can draw similar observations from these curves as well. We also report some reconstructions for the 'Handwaving' sequence in Figure 2. From the reconstructions, we can say that joint optimization performs on par with the competing algorithms; it performs especially well in reconstructing details from masked frames. All reported results are averages over five experiments with different random measurement matrices (or noise realizations in the case of denoising). The computational complexity of our proposed method varies with the choice of the generator structure. We have chosen the DCGAN generator structure for our experiments. We calculate the memory requirement for gradient descent using the torchsummary package. For a single 64 × 64 RGB image, the memory requirement for the UP deep decoder, OP deep decoder, and joint optimization is 2.75 MB, 66.48 MB, and 2.06 MB, respectively. For a single 256 × 256 RGB image, the memory requirement for the UP deep decoder, OP deep decoder, and joint optimization is 44.03 MB, 1239.75 MB, and 10.88 MB, respectively. For an RGB video sequence with 32 frames, the UP deep decoder requires 11,304 × 32 parameters while the OP deep decoder has 397,056 × 32 (12.7M) parameters. On the other hand, we need 4,852,736 and 6,988,544 network parameters to represent RGB 64 × 64 and 256 × 256 video sequences, respectively, with the joint optimization method and the DCGAN generator. Because of its huge memory requirement, the OP deep decoder is not suitable for optimization over an entire video sequence, whereas its low capacity hinders the UP deep decoder from generating an entire video sequence. To investigate the similarity structure in the latent codes obtained by joint optimization, we performed another experiment in which we concatenated 16 frames from each of the test sequences. The cosine similarity matrices for the video frames, the compressive measurements, the fixed (random) latent codes, the latent codes from joint optimization, and the latent codes from joint optimization with the low-rank constraint are presented in Figure 3(a)-(e). We can distinguish the video sequences from the pairwise similarity matrices of the latent codes we estimate with joint optimization.
We also observe that the low-rank constraint improves the similarity matrix. We mentioned that using the low-rank constraint we can represent the video sequences in a much lower-dimensional space. If the generator function is continuous, we expect the latent-space representation of a video sequence to retain its sequential structure in some low-dimensional representation. We demonstrate one such example using a rank-2 constraint on the latent codes while reconstructing the 'Rotating MNIST' sequence from its masked version with 80% of the pixels missing. Since we enforce the rank-2 constraint by taking the mean and the first principal component, the latent codes should fall on a line. In Figure 4, we represent the latent codes in a 2D plane using two orthogonal basis vectors; the t-th point in Figure 4 represents the latent code of the t-th frame. We can observe that the latent codes maintain the frame ordering in their 2D representation. For complex motions, a higher-dimensional representation might be needed to observe such a sequential pattern.
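The two diagnostics used above (pairwise cosine similarity of latent codes for Figure 3, and planar coordinates of low-rank codes for Figure 4) can be sketched as follows. The function names are ours, and the coordinates are obtained here with a plain SVD rather than the mean-plus-first-principal-component construction used in the paper, which is a simplification.

```python
# Sketch of the latent-code diagnostics discussed above.
import torch

def cosine_similarity_matrix(Z):
    # Z: (T, k) matrix of estimated latent codes, one row per frame.
    Zn = Z / Z.norm(dim=1, keepdim=True)
    return Zn @ Zn.T            # (T, T); entry (i, j) = cos(z_i, z_j)

def rank2_coordinates(Z):
    # Coordinates of each latent code in the plane spanned by two orthogonal
    # basis vectors of a rank-2 approximation (via SVD); point t is frame t.
    U, S, Vh = torch.linalg.svd(Z, full_matrices=False)
    return U[:, :2] * S[:2]     # (T, 2)
```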
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJgmnmn5Lr
Recover videos from compressive measurements by learning a low-dimensional (low-rank) representation directly from measurements while training a deep generator.
Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures. Based on the observation that the magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental demonstrate that the proposed method consistently outperforms the magnitude pruning on various networks including VGG and ResNet, particularly in the high-sparsity regime. The "magnitude-equals-saliency" approach has been long underlooked as an overly simplistic baseline among all imaginable techniques to eliminate unnecessary weights from over-parametrized neural networks. Since the early works of; which provided more theoretically grounded alternative of magnitude-based pruning (MP) based on second derivatives of loss function, a wide range of methods including Bayesian / information-theoretic approaches (; ; ;), pregularization (; ;), sharing redundant channels , and reinforcement learning approaches (; ;) has been proposed as more sophisticated alternatives. On the other hand, the capabilities of MP heuristics are gaining attention once more. Combined with minimalistic techniques including iterative pruning and dynamic reestablishment of connections , a recent large-scale study by claims that MP can achieve a state-of-the-art trade-off of sparsity and accuracy on ResNet-50. The unreasonable effectiveness of magnitude scores often extends beyond the strict domain of network pruning; a recent experiment by suggests an existence of an automatic subnetwork discovery mechanism underlying the standard gradient-based optimization procedures of deep, overparametrized neural networks by showing that the MP algorithm finds an efficient trainable subnetwork. These observations constitute a call to revisit the "magnitude-equals-saliency" approach for a better understanding of deep neural network itself. As an attempt to better understand the nature of MP methods, we study a generalization of magnitude scores under a functional approximation framework; by viewing MP as a relaxed minimization of distortion in layerwise operators introduced by zeroing out parameters, we consider a multi-layer extension of the distortion minimization problem. Minimization of the newly suggested distortion measure which'looks ahead' the impact of pruning on neighboring layers gives birth to a novel pruning strategy, coined lookahead pruning (LAP). In this paper, we focus on comparison of the proposed LAP scheme to its MP counterpart. We empirically demonstrate that LAP consistently outperforms the MP under various setups including linear networks, fully-connected networks, and deep convolutional and residual networks. In particular, the LAP consistently enables more than ×2 gain in the compression rate of the considered models, with increasing benefits under the high-sparsity regime. Apart from its performance, the lookahead pruning method enjoys additional attractive properties: • Easy-to-use: Like magnitude-based pruning, the proposed LAP is a simple score-based approach agnostic to model and data, which can be implemented by computationally light elementary tensor operations. Unlike most Hessian-based methods, LAP does not rely on an availability of training data except for the retraining phase. 
It also has no hyper-parameter to tune, in contrast to other sophisticated training-based and optimization-based schemes. • Versatility: As our method simply replaces the "magnitude-as-saliency" criterion with a lookahead alternative, it can be deployed jointly with algorithmic tweaks developed for magnitudebased pruning, such as iterative pruning and retraining or joint pruning and training with dynamic reconnections . The remainder of this manuscript is structured as follows: In Section 2, we introduce a functional approximation perspective toward MP and motivate LAP and its variants as a generalization of MP for multiple layer setups; in Section 3 we explore the capabilities of LAP and its variants with simple models, then move on to apply LAP to larger-scale models. We begin by a more formal description of the magnitude-based pruning (MP) algorithm . Given an L-layer neural network associated with weight tensors W 1,..., W L, the MP algorithm removes connections with smallest absolute weights from each weight tensor, until the desired level of sparsity has been achieved. This layerwise procedure is equivalent to finding a mask M whose entries are either 0 or 1, incurring a smallest Frobenius distortion, measured by min where denotes the Hadamard product, · 0 denotes the entry-wise 0 -norm, and s is a sparsity constraint imposed by some operational criteria. Aiming to minimize the Frobenius distortion (Eq.), the MP algorithm naturally admits a functional approximation interpretation. For the case of a fully-connected layer, the maximal difference between the output from a pruned and an unpruned layer can be bounded as Namely, the product of the layerwise Frobenius distortion upper bounds the output distortion of the network incurred by pruning weights. Note that this perspective on MP as a worst-case distortion minimization was already made in , which inspired an advent of the layerwise optimal brain surgery (L-OBS) procedure. A similar idea holds for convolutional layers. For the case of a two-dimensional convolution with a single input and a single output channel, the corresponding linear operator takes a form of a doubly block circulant matrix constructed from the associated kernel tensor (see, e.g.,). Here, the Frobenius distortion of doubly block circulant matrices can be controlled by the Frobenius distortion of the weight tensor of the convolutional layer. The case of multiple input/output channel or non-circular convolution can be dealt with similarly using channel-wise circulant matrices as a block. We refer the interested readers to Sedghi et al. (2019 The myopic optimization (Eq.) based on the per-layer Frobenius distortion falls short even in the simplest case of the two-layer linear neural network with one-dimensional output, where we consider predictors of a form Y = u W x and try to minimize the Frobenius distortion of u W (equivalent to 2 distortion in this case). Here, if u i is extremely large, pruning any nonzero element in the i-th row of W may incur a significant Frobenius distortion. Motivated by this observation, we consider a block approximation analogue of the magnitude-based pruning objective Eq.. Consider an L-layer neural network with associated weight tensors W 1,..., W L, and assume linear activation for simplicity (will be extended to nonlinear cases later in this section). Let J (W i) denote the Jacobian matrix corresponding to the linear operator characterized by W i. 
For pruning the i-th layer, we take into account the weight tensors of neighboring layers W i−1, W i+1 in addition to the original weight tensor W i. In particular, we propose to minimize the Frobenius distortion of the operator block An explicit minimization of the block distortion (Eq.), however, is computationally intractable in general (see Appendix D for a more detailed discussion). To avoid an excessive computational overhead, we propose to use the following score-based pruning algorithm, coined lookahead pruning (LAP), for approximating Eq.: For each entry w of W i, we prune away the weights with the smallest value of lookahead distortion (in a single step), defined as where W i | w=0 denotes the tensor whose entries are equal to the entries of W i except for having zeroed out w. We let both W 0 and W L+1 to be tensors consisting of ones. In other words, lookahead distortion (Eq.) measures the distortion (in Frobenius norm) induced by pruning w while all other weights remain intact. For three-layer blocks consisting only of fully-connected layers and convolutional layers, Eq. reduces to the following compact formula: for an edge w connected to the j-th input neuron/channel and the k-th output neuron/channel of the i-th layer, where its formal derivation is presented in Appendix E. where |w| denotes the weight of w, W [j, :] denotes the slice of W composed of weights connected to the j-th output neuron/channel, and W [:, k] denotes the same for the k-th input neuron/channel. In LAP, we compute the lookahead distortion for all weights, and then remove weights with smallest distortions in a single step (as done in MP). A formal description of LAP is presented in Algorithm 1. We also note the running time of LAP is comparable with that of MP (see Appendix G). LAP on linear networks. To illustrate the benefit of lookahead, we evaluate the performance of MP and LAP on a linear fully-connected network with a single hidden layer of 1,000 nodes, trained with MNIST image classification dataset. Fig. 2a and Fig. 2b depict the test accuracy of models pruned with each methods, before and after retraining steps. As can be expected from the discrepancy between the minimization objectives (Eqs. and), networks pruned with LAP outperform networks pruned with MP at every sparsity level, in terms of its performance before a retraining phase. Remarkably, we observe that test accuracy of models pruned with LAP monotonically increases from 91.2% to 92.3% as the sparsity level increases, until the fraction of surviving weights reaches 1.28%. At the same sparsity level, models pruned with MP achieves only 71.9% test accuracy. We also observe that LAP leads MP at every sparsity level even after a retraining phase, with an increasing margin as we consider a higher level of sparsity. Understanding LAP with nonlinear activations. Most neural network models in practice deploy nonlinear activation functions, e.g., rectified linear units (ReLU). Although the lookahead distortion was originally derived using linear activation functions, LAP can also be used for nonlinear networks, as the quantity L i (w) remains relevant to the original block approximation point of view. This is especially true when the network is severely over-parametrized. To see this, consider a case where one aims to prune a connection in the first layer of a two-layer fully-connected network with ReLU, i.e., where σ(x) = max{0, x} is applied entrywise. 
Under the over-parametrized scenario, zeroing out a single weight may alter the activation pattern of connected neurons with only negligible probability, which allows one to decouple the probability of activation of each neuron from the act of pruning each connection. This enables us to approximate the root mean square distortion of the network output introduced by pruning w of W 1 by √ p k L 1 (w), where k is the index of the output neuron that w is connected to, and p k denotes the probability of activation for the k-th neuron. In this sense, LAP (Algorithm 1) can be understood as assuming i.i.d. activations of neurons, due to a lack of an additional access to training data. In other word, LAP admits a natural extension to the regime where we assume an additional access to training data during the pruning phase. This variant, coined LAP-act, will be formally described in Appendix F, with experimental comparisons to another datadependent baseline of optimal brain damage (OBD) . Another theoretical justification of using the lookahead distortion (Eq.) for neural networks with nonlinear activation functions comes from recent discoveries regarding the implicit bias imposed by training via stochastic gradient descent . See Appendix M for a detailed discussion. As will be empiricically shown in Section 3.1, LAP is an effective pruning strategy for sigmoids and tanh activations, that are not piece-wise linear as ReLU. Batch normalization (BN), introduced by , aims to normalize the output of a layer per batch by scaling and shifting the outputs with trainable parameters. Based on our functional approximation perspective, having batch normalization layers in a neural network is not an issue for MP, which relies on the magnitudes of weights; batch normalization only affects the distribution of the input for each layer, not the layer itself. On the other hand, as the lookahead distortion (Eq.) characterizes the distortion of the multi-layer block, one must take into account batch normalization when assessing the abstract importance of each connection. The revision of lookahead pruning under the presence of batch normalization can be done fairly simply. Note that such a normalization process can be expressed as for some a, b ∈ R dim(x). Hence, we revise the lookahead pruning to prune the connections with a minimum value of where a i [k] denotes the k-th index scaling factor for the BN layer placed at the output of the i-th fully-connected or convolutional layer (if BN layer does not exist, let a i [k] = 1). This modification of LAP makes it an efficient pruning strategy, as will be empirically verified in Section 3.3. As the LAP algorithm (Algorithm 1) takes into account current states of the neighboring layers, LAP admits several variants in terms of lookahead direction, order of pruning, and sequential pruning methods; these methods are extensively studied in Section 3.2 Along with "vanilla" LAP, we consider in total, six variants, which we now describe below: Mono-directional LAPs. To prune a layer, LAP considers both preceding and succeeding layers. Looking forward, i.e., only considering the succeeding layer, can be viewed as an educated modification of the internal representation the present layer produces. Looking backward, on the other hand, can be interpreted as only taking into account the expected structure of input coming into the present layer. The corresponding variants, coined LFP and LBP, are tested. Order of pruning. 
Instead of using the unpruned tensors of preceding/succeeding layers, we also consider performing LAP on the basis of already-pruned layers. This observation brings up a question of the order of pruning; an option is to prune in a forward direction, i.e., prune the preceding layer first and use the pruned weight to prune the succeeding, and the other is to prune backward. Both methods are tested, which are referred to as LAP-forward and LAP-backward, respectively. Sequential pruning. We also consider a sequential version of LAP-forward/backward methods. More specifically, if we aim to prune total p% of weights from each layer, we divide the pruning budget into five pruning steps and gradually prune (p/5)% of the weights per step in forward/backward direction. Sequential variants will be marked with a suffix "-seq". In this section, we compare the empirical performance of LAP with that of MP. More specifically, we validate the applicability of LAP to nonlinear activation functions in Section 3.1. In Section 3.2, we test LAP variants from Section 2.3. In Section 3.3, we test LAP on VGG , ResNet , and Wide ResNet . Experiment setup. We consider five neural network architectures: The fully-connected network (FCN) under consideration is consist of four hidden layers, each with 500 neurons. The convolutional network (Conv-6) consists of six convolutional layers, followed by a fully-connected classifier with two hidden layers with 256 neurons each; this model is identical to that appearing in the work of suggested as a scaled-down variant of VGG. 2 VGG-19 is used, with an addition of batch normalization layers after each convolutional layers, and a reduced number of fully-connected layers from three to one. 3 ResNets of depths {18, 50} are used. WRN of 16 convolutional layers and widening factor 8 (WRN-16-8) is used. All networks used ReLU activation function, except for the experiments in Section 3.1. We mainly consider image classification tasks. In particular, FCN is trained on MNIST dataset , Conv-6, VGG, and ResNet are trained on CIFAR-10 dataset , and VGG, ResNet, and WRN are trained on Tiny-ImageNet. 4 We focus on the one-shot pruning of MP and LAP, i.e., models are trained with a single training-pruning-retraining cycle. All in this section are averaged over five independent trials. We provide more details on setups in Appendix A. We first compare the performance of LAP with that of MP on FCN using three different types of activation functions: sigmoid, and tanh, and ReLU. Figs. 3a to 3c depict the performance of models pruned with LAP (Green) and MP (Red) under various levels of sparsity. Although LAP was motivated primarily from linear networks and partially justified for positivehomogenous activation functions such as ReLU, the experimental show that LAP consistently outperforms MP even on networks using sigmoidal activation functions. We remark that LAP outperforms MP by a larger margin as fewer weights survive (less than 1%). Such a pattern will be observed repeatedly in the remaining experiments of this paper. In addition, we also check whether LAP still exhibits better test accuracy before retraining under the usage of nonlinear activation functions, as in the linear network case (Fig. 2b). Fig. 3d illustrates the test accuracy of pruned FCN using ReLU on MNIST dataset before retraining. 
We observe that the network pruned by LAP continues to perform better than MP in this case; the network pruned by LAP retains the original test accuracy until only 38% of the weights survive, and shows less than 1% performance drop with only 20% of the weights remaining. On the other hand, MP requires 54% and 30% to achieve the same level of performance, respectively. In other words, the models pruned with MP requires about 50% more survived parameters than the models pruned with LAP to achieve a similar level of performance before being retrained using additional training batches. Now we evaluate LAP and its variants introduced in Section 2.3 on FCN and Conv-6, each trained on MNIST and CIFAR-10, respectively. Table 1 summarizes the experimental on FCN and Table 2 summarizes the on Conv-6. In addition to the baseline comparison with MP, we also compare with random pruning (RP), where the connection to be pruned was decided completely independently. We observe that LAP performs consistently better than MP and RP with similar or smaller variance in any case. In the case of an extreme sparsity, LAP enjoys a significant performance gain; over 75% gain on FCN and 14% on Conv-6. This performance gain comes from a better training accuracy, instead of a better generalization; see Appendix L for more information. Comparing mono-directional lookahead variants, we observe that LFP performs better than LBP in the low-sparsity regime, while LBP performs better in the high-sparsity regime; in any case, LAP performed better than both methods. Intriguingly, the same pattern appeared in the case of the ordered pruning. Here, LAP-forward can be considered an analogue of LBP in the sense that they both consider layers closer to the input to be more important. Likewise, LAP-backward can be considered an analogue of LFP. We observe that LAP-forward performs better than LAP-backward in the high-sparsity regime, and vice versa in the low-sparsity regime. Our interpretation is as follows: Whenever the sparsity level is low, the importance of a carefully curating the input signal is not significant due to high redundancies in natural image signal. This causes a relatively low margin of increment by looking backward in comparison to looking forward. When the sparsity level is high, the input signal is scarce, and the relative importance of preserving the input signal is higher. Finally, we observe that employing forward/backward ordering and sequential methods leads to a better performance, especially in the high-sparsity regime. There is no clear benefit of adopting directional methods in the low-sparsity regime. The relative gain in performance with respect to LAP is either marginal, or unreliable. (Tables 3 and 4), and VGG-19, ResNet-50, and WRN-16-8 on TinyImageNet (Tables 5 to 7). For models trained on CIFAR-10, we also test LAP-forward to verify the observation that it outperforms LAP in the high-sparsity regime on such deeper models. We also report additional experimental on VGG-{11, 16} trained on CIFAR-10 in Appendix B. For models trained on Tiny-ImageNet, top-1 error rates are reported in Appendix C. From Tables 3 to 7, we make the following two observations: First, as in Section 3.2, the models pruned with LAP consistently achieve a higher or similar level of accuracy compared to models pruned with MP, at all sparsity levels. In particular, test accuracies tend to decay at a much slower rate with LAP. 
In Table 3, for instance, we observe that the models pruned by LAP retain test accuracies of 70∼80% even with less than 2% of weights remaining. In contrast, the performance of models pruned with MP falls drastically, to below 30% accuracy. This observation is consistent on both CIFAR-10 and Tiny-ImageNet datasets. Second, the advantages of considering an ordered pruning method (LAP-forward) over LAP is limited. While we observe from Table 3 that LAP-forward outperforms both MP and LAP in the highsparsity regime, the gain is marginal considering standard deviations. LAP-forward is consistently worse than LAP (by at most 1% in absolute scale) in the low-sparsity regime. In this work, we interpret magnitude-based pruning as a solution to the minimization of the Frobenius distortion of a single layer operation incurred by pruning. Based on this framework, we consider the minimization of the Frobenius distortion of multi-layer operation, and propose a novel lookahead pruning (LAP) scheme as a computationally efficient algorithm to solve the optimization. Although LAP was motivated from linear networks, it extends to nonlinear networks which indeed minimizes the root mean square lookahead distortion assuming i. τ fraction in all fully-connected layers, except for the last layer where we use (1 + q)/2 instead. For FCN, we use (p, q) = (0, 0.5). For Conv-6, VGGs ResNets, and WRN, we use (0.85, 0.8). For ResNet-{18, 50}, we do not prune the first convolutional layer. The range of sparsity for reported figures in all tables is decided as follows: we start from τ where test error rate starts falling below that of an unpruned model and report the at τ, τ + 1, τ + 2,... for FCN and Conv-6, τ, τ + 2, τ + 4,... for VGGs, ResNet-50, and WRN, and τ, τ + 3, τ + 6,... for ResNet-18. In this section, we show that the optimization in Eq. is NP-hard by showing the reduction from the following binary quadratic programming which is NP-hard : for some symmetric matrix A ∈ R n×n. Without loss of generality, we assume that the minimum eigenvalue of A (denoted with λ) is negative; if not, Eq. admits a trivial solution x = (0, . . ., 0). Assuming λ < 0, Eq. can be reformulated as: where H = A − λI. Here, one can easily observe that the above optimization can be solved by solving the below optimization for s = 1,..., n min x∈{0,1} n: i xi=s Finally, we introduce the below equality where 1 denotes a vector of ones, U is a matrix consisting of the eigenvectors of H as its column vectors, and Λ is a diagonal matrix with corresponding (positive) eigenvalues of H as its diagonal elements. The above equality shows that Eq. is a special case of Eq. by choosing W 1 = √ ΛU, W 2 = 1, W 3 = 1 and M = 1 − x. This completes the reduction from Eq. to Eq.. In this section, we provide a derivation of Eq. for the fully-connected layers. The convolutional layers can be handled similarly by substituting the multiplications in Eqs. and by the convolutions. The Jacobian matrix of the linear operator correponding to a fully-connected layer is the weight matrix itself, i.e. J (W i) = W i. From this, lookahead distortion can be reformulated as Now, we decompose the matrix product W i+1 W i W i−1 in terms of entries of W i as below: where, and j-th row of W i−1, respectively. The contribution of a single entry w: ]. Therefore, in terms of the Frobenius distortion, we conclude that which completes the derivation of Eq. for fully-connected layers. 
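To make the per-entry formula just derived concrete, the following sketch computes magnitude scores and lookahead scores for a fully-connected layer and zeroes out the lowest-scoring weights. The (out_features, in_features) weight layout, the helper names, and the example shapes are our own choices, not the paper's reference implementation.

```python
# Sketch of MP vs. LAP scoring for a fully-connected layer W_i with neighbors
# W_{i-1}, W_{i+1}; boundary layers are taken as all-ones tensors, as in the text.
import torch

def lap_scores(W_prev, W_curr, W_next):
    # L_i(w) = |w| * ||W_{i+1}[:, k]|| * ||W_{i-1}[j, :]||  for w = W_i[k, j]
    col_norms_next = W_next.norm(dim=0)      # one value per output neuron k of layer i
    row_norms_prev = W_prev.norm(dim=1)      # one value per input neuron j of layer i
    return W_curr.abs() * col_norms_next[:, None] * row_norms_prev[None, :]

def prune_by_score(W, scores, keep_ratio):
    # Keep the weights with the largest scores; MP corresponds to scores = W.abs().
    n_keep = int(keep_ratio * W.numel())
    thresh = scores.flatten().kthvalue(W.numel() - n_keep).values
    return W * (scores > thresh).float()

# Example on random layers of a 784-300-100-10 network, pruning the middle layer:
W1, W2, W3 = torch.randn(300, 784), torch.randn(100, 300), torch.randn(10, 100)
W2_mp  = prune_by_score(W2, W2.abs(),               keep_ratio=0.1)
W2_lap = prune_by_score(W2, lap_scores(W1, W2, W3), keep_ratio=0.1)
```

As described in Appendix G, the score matrix is obtained with a few tensor operations, so the overhead relative to magnitude pruning stays small.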
F LAP-ACT: IMPROVING LAP USING TRAINING DATA Recall two observations made from the example of two-layer fully connected network with ReLU activation appearing in Section 2.1: LAP is designed to reflect the lack of knowledge about the training data at the pruning phase; once the activation probability of each neuron can be estimated, it is possible to refine LAP to account for this information. In this section, we continue our discussion on the second observation. In particular, we study an extension of LAP called lookahead pruning with activation (LAP-act) which prunes the weight with smallest value of Here, W i is a scaled version of W i and w is the corresponding scaled value of w, defined by where I ij denotes the set of output indices in the j-th output neuron/channel of i-th layer (for fully connected layers, this is a singleton). Also, p k denotes the neuron's probability of activation, which can be estimated by passing the training data. We derive LAP-act (Eq.) in Appendix F.1 and perform preliminary empirical validations in Appendix F.2 with using optimal brain damage (OBD) as a baseline. We also evaluate a variant of LAP using Hessian scores of OBD instead of magnitude scores. It turns out that in the small networks (FCN, Conv-6), LAP-act outperforms OBD. Consider a case where one aims to prune a connection of a network with ReLU, i.e., where σ(x) = max{0, x} is applied entrywise. Under the over-parametrized scenario, zeroing out a single weight may alter the activation pattern of connected neurons with only negligible probability, which allows one to decouple the probability of activation of each neuron from the act of pruning each connection. From this observation, we first construct the below random distortion, following the philosophy of the linear lookahead distortion Eq. where J (W i) denotes a random matrix where ] and g i [k] is a 0-1 random variable corresponding to the activation, i.e., g i [k] = 1 if and only if the k-th output of the i-th layer is activated. However, directly computing the expected distortion with respect to the real activation distribution might be computationally expensive. To resolve this issue, we approximate the root mean-squared lookahead distortion by applying the mean-field approximation to the activation probability of neurons, i.e., all activations are assumed to be independent, as denotes the mean-field approximation of p(g). Indeed, the lookahead distortion with ReLU nonlinearity (Eq.) or three-layer blocks consisting only of the fully-connected layers and the convolutional layers can be easily computed by using the rescaled weight matrix W i: where I i,j denotes a set of output indices in the j-th output neuron/channel of the i-th layer. Finally, for an edge w connected to the j-th input neuron/channel and the k-th output neuron/channel of the i-th layer, Eq. reduces to where w denotes the rescaled value of w. This completes the derivation of Eq.. We compare the performance of three algorithms utilizing training data at the pruning phase: optimal brain damage (OBD) which approximates the loss via second order Taylor seris approximation with the Hessian diagonal , LAP using OBD instead of weight magnitudes (OBD+LAP), and LAP-act as described in this section. We compare the performances of three algorithms under the same experimental setup as in Section 3.2. 
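Under the same caveat, the LAP-act rescaling can be sketched as follows: activation probabilities are estimated from the training inputs reaching a layer, each row of the weight matrix is scaled by the square root of its output neuron's firing probability, and the lookahead score is then computed on the rescaled matrices (reusing `lap_scores` from the earlier sketch). The function names and the per-layer estimation of p_k are our simplifying assumptions.

```python
# Sketch of the LAP-act rescaling for a fully-connected ReLU layer.
import torch

@torch.no_grad()
def activation_probabilities(W, x):
    # x: (N, in_features) inputs that reach this layer on training data.
    pre_act = x @ W.T                       # (N, out_features) pre-activations
    return (pre_act > 0).float().mean(dim=0)   # p_k = fraction of inputs firing neuron k

def rescale(W, p):
    # \tilde{W}[k, :] = sqrt(p_k) * W[k, :]
    return W * p.sqrt()[:, None]

# LAP-act score for the middle layer of a three-layer block (schematic):
#   scores = lap_scores(rescale(W_prev, p_prev), rescale(W_curr, p_curr),
#                       rescale(W_next, p_next))
```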
To compute the Hessian diagonal for OBD and OBD+LAP, we use a recently introduced software package called "BackPACK," , which is the only open-source package supporting an efficient of Hessians, up to our knowledge. Note that the algorithms evaluated in this section are also evaluated for global pruning experiments in Appendix I. The experimental for FCN and Conv-6 are presented in Tables 13 and 14. Comparing to algorithms relying solely on the model parameters for pruning (MP/LAP in Tables 1 and 2), we observe that OBD performs better in general, especially in the high sparsity regime. This observation is coherent to the findings of. Intriguingly, however, we observe that applying lookahead critertion to OBD (OBD+LAP) significantly enhances to OBD significantly enhances the performance in the high sparsity regime. We hypothesize that LAP helps capturing a correlation among scores (magnitude or Hessian-based) of adjacent layers. Also, we observe that LAP-act consistently exhibits a better performance compared to OBD. This is somewhat surprising, in the sense that LAP-act only utilizes (easier-to-estimate) information about activation probabilities of each neuron to correct lookahead distortion. The average running time of OBD, OBD+LAP, and LAP-act is summarized in Table 15. We use Xeon E5-2630v4 2.20GHz for pruning edges, and additionally used a single NVidia GeForce GTX-1080 for the computation of Hessian diagonals (used for OBD, OBD+LAP) and activation probabiility (for LAP-act). We observe that LAP-act runs in a significantly less running time than OBD/OBD+LAP, and the gap widens as the number of parameters and the dimensionality of the dataset increases (from MNIST to CIFAR-10). MP comprises of three steps: computing the absolute value of the tensor, sorting the absolute values, and selecting the cut-off threshold and zero-ing out the weights under the threshold. Steps and remain the same in LAP, and typically takes O(n log n) steps (n denotes the number of parameters in a layer). On the other hand, Step is replaced by computing the lookahead distortion for each parameter w. Fortunately, this need not be computed separately for each parameter. Indeed, one can perform tensor operations to compute the squared lookahead distortion, which has the same ordering with lookahead distortion. For fully-connected layers with 2-dimensional Jacobians, the squared lookahead distortion for where 1 i denotes all-one matrix of size d i−2 × d i; multiplying 1 i denotes summing operation along an axis and duplicating summed into the axis, and 2 denotes the element-wise square operation. The case of convolutional layers can be handled similarly. We note that an implementation of Eq. Table 16, where we fixed the layerwise pruning rate to be uniformly 90%. The codes are implemented with PyTorch, and the computations have taken place on 40 CPUs of Intel Xeon E5-2630v4 @ 2.20GHz. All figures are averaged over 100 trials. We make two observations from Table 16. First, the time required for LAP did not exceed 150% of the time required for MP, confirming our claim on the computational benefits of LAP. Second, most of the added computation comes from considering the factors from batch normalization, without which the added computation load is ≈5%. In the main text, LAP is compared to the MP in the context of unstructured pruning, where we do not impose any structural constraints on the set of connections to be pruned together. 
On the other hand, the magnitude-based pruning methods are also being used popularly as a baseline for channel pruning , which falls under the category of structured pruning. MP in channel pruning is typically done by removing channels with smallest aggregated weight magnitudes; this aggregation can be done by either taking 1 -norm or 2 -norm of magnitudes. Similarly, we can consider channel pruning scheme based on an 1 or 2 aggregation of LAP distortions, which we will call LAP-1 and LAP-2 (as opposed to MP-1 and MP-2). We compare the performances of LAP-based channel pruning methods to MP-based channel pruning methods, along with another baseline of random channel pruning (denoted with RP). We test with Conv-6 (Table 17) and VGG-19 (Table 18) networks on CIFAR-10 dataset. All reported figures are averaged over five trials, experimental settings are identical to the unstructure pruning experiments unless noted otherwise. Similar to the case of unstructured pruning, we observe that LAP-based methods consistently outperform MP-based methods. Comparing 1 with 2 aggregation, we note that LAP-2 performs better than LAP-1 in both experiments, by a small margin. Among MP-based methods, we do not observe any similar dominance. Table 19 and Table 20. In this methods, we prune a fraction of weights with smallest scores (e.g. weight magnitude, lookahead distortion, Hessian-based scores) among all weights in the whole network. The suffix "-normalize" in the tables denotes that the score is normalized by the Frobenius norm of the corresponding layer's score. For MP, LAP, OBD+LAP and LAP-act, we only report the for global pruning with normalization, as the normalized versions outperform the unnormalized ones. In the case of OBD, whose score is already globally designed, we report the for both unnormalized and normalized versions. As demonstrated in Section 3.2 for fixed layerwise pruning rates, we observe that LAP and its variants perform better than their global pruning baselines, i.e. MP-normalize and OBD. We also note that LAP-normalize performs better than MP with pre-specified layerwise pruning rates (appeared in Section 3.2), with a larger gap for higher levels of sparsity. We test LAP-all on FCN under the same setup as in Section 3.2, and report the in Table 21. All figures are averaged over five trials. We observe that LAP-all achieves a similar level of performance to LAP, while LAP-all underperforms under a high-sparsity regime. We suspect that such shortfall originates from the accumulation of error terms incurred by ignoring the effect of activation functions, by which the benefits of looking further fades. An in-depth theoretical analysis for the determination of an optimal "sight range" of LAP would be an interesting future direction. As a sanity check, we compare the performance of large neural networks pruned via MP and LAP to the performance of a small network. In particular, we prune VGG-16, VGG-19, and ResNet-18 trained on CIFAR-10 dataset, to have a similar number of parameters to MobileNetV2 . For training and pruning VGGs and ResNet, we follows the prior setup in Appendix A while we use the same setup for training MobileNetV2 (Adam optimizer with learning rate of 3 · 10 −4 with batch size 60, and trained 60k steps). We observe that models pruned via LAP (and MP) exhibit better performance compared to MobileNetV2, even when pruned to have a smaller number of parameters. 
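Returning to the channel-level variants above, the ℓ1/ℓ2 aggregation can be sketched as follows. The (out_ch, in_ch, kh, kw) layout is assumed, and `lap_conv_scores` is a hypothetical convolutional analogue of the fully-connected lookahead score sketched earlier.

```python
# Sketch of channel pruning by aggregating per-weight scores per output channel.
import torch

def channel_mask(scores, keep_ratio, p=2):
    # scores: per-weight saliency tensor of a conv layer (|W| for MP, LAP score for LAP).
    per_channel = scores.flatten(1).norm(p=p, dim=1)   # one value per output channel
    n_keep = max(1, int(keep_ratio * per_channel.numel()))
    kept = per_channel.topk(n_keep).indices
    mask = torch.zeros_like(per_channel)
    mask[kept] = 1.0
    return mask                                        # broadcast over the out_ch axis

# MP-l2:  channel_mask(W.abs(), keep_ratio, p=2)
# LAP-l2: channel_mask(lap_conv_scores(W_prev, W, W_next), keep_ratio, p=2)
```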
In this section, we briefly discuss where the benefits of the sub-network discovered by LAP comes from; does LAP subnetwork have a better generalizability or expressibility? For this purpose, we look into the generalization gap, i.e., the gap between the training and test accuracies, of the hypothesis learned via LAP procedure. Below we present a plot of test accuracies (Fig. 4a) and a plot of generalization gap (Fig. 4b) for FCN trained with MNIST dataset. The plot hints us that the network structure learned by LAP may not necessarily have a smaller generalizability. Remarkably, the generalization gap of the MP-pruned models and the LAP-pruned models are very similar to each other; the benefits of LAP subnetwork compared to MP would be that it can express a better-performing architecture with a network of similar sparsity and generalizability. remains constant for any hidden neuron j over training via gradient flow. In other words, the total outward flow of weights is tied to the inward flow of weights for each neuron. This observation hints at the possibility of a relative undergrowth of weight magnitude of an'important' connection, in the case where the connection shares the same input/output neuron with other'important' connections. From this viewpoint, the multiplicative factors in Eq. take into account the abstract notion of neuronal importance score, assigning significance to connections to the neuron through which more gradient signals have flowed through. Without considering such factors, LAP reduces to the ordinary magnitude-based pruning.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryl3ygHYDB
We study a multi-layer generalization of magnitude-based pruning.
Recent literature has demonstrated promising on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary. Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average. In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently. Our indicate that hypervolume maximization presents a better compromise between sample quality and diversity, and computational cost than previous methods. Generative Adversarial Networks (GANs) BID13 offer a new approach to generative modeling, using game-theoretic training schemes to implicitly learn a given probability density. Prior to the emergence of GAN architectures, realistic generative modeling remained elusive. When offering unparalleled realism, GAN training remains fraught with stability issues. Commonly reported shortcomings involved in the GAN game are the lack of useful gradients provided by the discriminator, and mode collapse, i.e. lack of diversity in the generator's samples. Considerable research effort has been devoted in recent literature in order to overcome training instability 1 within the GAN framework. Some architectures such as BEGAN BID4 ) have applied auto-encoders as discriminators and proposed a new loss to help stabilize training. Methods such as TTUR BID16, in turn, have attempted to define schedules for updating the generator and discriminator differently. The PacGAN algorithm proposes to modify the discriminator's architecture which will receive m concatenated samples as input, while modifications to alternate updates in SGD were introduced in . These samples are jointly classified as either real or generated, and authors show that this enforces sample diversity. In SNGAN , authors introduce spectral normalization on the discriminator aiming to ensure Lipschitz continuity, which is empirically shown to consistently yield high quality samples when different sets of hyperparameters are used. Recent works have proposed to tackle GANs instability issues using multiple discriminators. propose a GAN variation in which one generator is trained against a set of discriminators, where each discriminator sees a fixed random projection of the inputs. Prior work, including GMAN BID9 has also explored training against multiple discriminators. In this paper, we build upon Neyshabur et al.'s introduced framework and propose reformulating the average loss minimization aiming to further stabilize GAN training. Specifically, we propose treating the loss signal provided by each discriminator as an independent objective function. To achieve this, we simultaneously minimize the losses using multi-objective optimization techniques. Namely, we exploit previously introduced methods in literature such as the multiple gradient descent algorithm (MGD) BID7. 
However, due to MGD's prohibitively high cost in the case of large neural networks, we propose the use of more efficient alternatives such as maximization of the hypervolume of the region defined between a fixed, shared upper bound on those losses, which we will refer to as the nadir point η *, and each of the component losses. In contrast to's approach, where the average loss is minimized when training the generator, hypervolume maximization (HV) optimizes a weighted loss, and the generator's training will adaptively assign greater importance to feedback from discriminators against which it performs poorly. Experiments performed on MNIST show that HV presents a good compromise in the computational cost-samples quality trade-off, when compared to average loss minimization or GMAN's approach (low quality and cost), and MGD (high quality and cost). Also, the sensitivity to introduced hyperparameters is studied and indicate that increasing the number of discriminators consequently increases the generator's robustness along with sample quality and diversity. Experiments on CIFAR-10 indicate the method described produces higher quality generator samples in terms of quantitative evaluation. Moreover, image quality and sample diversity are once more shown to consistently improve as we increase the number of discriminators. In summary, our main contributions are the following:1. We offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research in GANs variations and MGD, commonly employed as a general solver for multi-objective optimization. 2. We propose a new method for training multiple-discriminator GANs: Hypervolume maximization, which weighs the gradient contributions of each discriminator by its loss. The remainder of this document is organized as follows: Section 2 introduces definitions on multiobjective optimization and MGD. In Section 3 we describe prior relevant literature. Hypervolume maximization is detailed in Section 4, with experiments and presented in Section 5. Conclusions and directions for future work are drawn in Section 6. In this section we provide some definitions regarding multi-objective optimization literature which will be useful in the next sections. Henceforth, the boldface notation will be used to indicate vector-valued variables. Multi-objective optimization. A multi-objective optimization problem is defined as BID6: DISPLAYFORM0 where K is the number of objectives, Ω is the variables space and x = [x 1, x 2, ..., x n] T ∈ Ω is a decision vector or possible solution to the problem. F: Ω → R K is a set of K-objective functions that maps the n-dimensional variables space to the K-dimensional objective space. Pareto-dominance. Let x 1 and x 2 be two decision vectors. x 1 is said to dominate x 2 (denoted by x 1 ≺ x 2) if and only if f i (x 1) ≤ f i (x 2) for all i ∈ {1, 2, . . ., K} and f j (x 1) < f j (x 2) for some j ∈ {1, 2, . . ., K}. If a decision vector x is dominated by no other vector in Ω, x is said to be non-dominated. Pareto-optimality. A decision vector x * ∈ Ω is said to be Pareto-optimal if and only if there is no x ∈ Ω such that x ≺ x *, i.e. x * is a non-dominated solution. The Pareto-optimal Set (PS) is defined as the set of all Pareto-optimal solutions x ∈ Ω, i.e., P S = {x ∈ Ω|x is Pareto optimal}. 
The set of all objective vectors F(x) such that x is Pareto-optimal is called Pareto front (PF), that is P F = {F(x) ∈ R K |x ∈ P S}.Pareto-stationarity. Pareto-stationarity is a necessary condition for Pareto-optimality. For f k differentiable everywhere for all k, F is said to be Pareto-stationary at the point x if there exists a set of scalars α k, k ∈ {1, . . ., K}, such that: DISPLAYFORM1 Multiple Gradient Descent. Multiple gradient descent BID7 Schäffler et al., 2002; ) was proposed for the unconstrained case of multi-objective optimization of F(x) assuming a convex, continuously differentiable and smooth f k (x) for all k. MGD finds a common descent direction for all f k by defining the convex hull of all ∇f k (x) and finding the minimum norm element within it. Consider w * given by: DISPLAYFORM2 w * will be either 0 in which case x is a Pareto-stationary point, or w * = 0 and then w * is a descent direction for all f i (x). Similar to gradient descent, MGD consists in finding the common steepest descent direction w * t at each iteration t, and then updating parameters with a learning rate λ according to DISPLAYFORM3 3 RELATED WORK While we would prefer to always have strong gradients from the discriminator during training, the vanilla GAN makes this difficult to ensure, as the discriminator quickly learns to distinguish real and generated samples BID12, thus providing no meaningful error signal to improve the generator thereafter. BID9 proposed the Generative Multi-Adversarial Networks (GMAN) which consist in training the generator against a softmax weighted arithmetic average of K different discriminators, according to Eq. 4. DISPLAYFORM0 where DISPLAYFORM1, β ≥ 0, and L D k is the loss of discriminator k and defined as DISPLAYFORM2 where D k (x) and G(z) are the outputs of the k-th discriminator and the generator, respectively. The goal of using the proposed averaging scheme is to privilege worse discriminators and thus providing more useful gradients to the generator during training. Experiments were performed with β = 0 (equal weights), β → ∞ (only worst discriminator is taken into account), β = 1, and β learned by the generator. Models with K = {2, 5} were tested and evaluated using a proposed metric and the Inception score . However, showed that the simple average of discriminator's losses provided the best values for both metrics in most of the considered cases. Opposed to proposed training a GAN with K discriminators using the same architecture. Each discriminator D k sees a different randomly projected lower-dimensional version of the input image. Random projections are defined by a randomly initialized matrix W k, which remains fixed during training. Theoretical provided show that the distribution induced by the generator G will converge to the real data distribution p data, as long as there is a sufficient number of discriminators. Moreover, discriminative tasks in the projected space are harder, i.e. real and fake samples are more alike, thus avoiding early convergence of discriminators, which leads to common stability issues in GAN training such as mode-collapse BID12. Essentially, the authors trade one hard problem for K easier subproblems. The losses of each discriminator L D k are the same as shown in Eq. 5. However, the generator loss L G is defined as simply the sum of the losses provided by each discriminator, as shown in Eq. 6. This choice of L G does not exploit available information such as the performance of the generator with respect to each discriminator. 
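For reference, the common descent direction w* defined in Section 2 can be obtained by solving a small quadratic program over the convex-combination weights, since the problem reduces to K dimensions through the Gram matrix of the per-objective gradients. The sketch below is ours; it uses SciPy's SLSQP routine as a stand-in for any quadratic program solver.

```python
# Sketch of the MGD common descent direction: the minimum-norm element of the
# convex hull of the per-objective gradients.
import numpy as np
from scipy.optimize import minimize

def mgd_direction(grads):
    # grads: list of K flattened gradient vectors, one per objective.
    G = np.stack(grads)                      # (K, n_params)
    K = G.shape[0]
    GGt = G @ G.T                            # (K, K) Gram matrix

    objective = lambda a: a @ GGt @ a        # || sum_k a_k * grad_k ||^2
    constraints = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * K
    a0 = np.full(K, 1.0 / K)
    alpha = minimize(objective, a0, bounds=bounds, constraints=constraints,
                     method='SLSQP').x
    w_star = alpha @ G                       # common descent direction, (n_params,)
    return w_star, alpha                     # ||w_star|| near 0 => Pareto-stationary
```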
DISPLAYFORM3 3.2 HYPERVOLUME MAXIMIZATION Consider a set of solutions S for a multi-objective optimization problem. The hypervolume H of S is defined as BID10: DISPLAYFORM4, where µ is the Lebesgue measure and η * is a point dominated by all x ∈ S (i.e. f i (x) is upper-bounded by η), referred to as nadir point. H(S) can be understood as the size of the space covered by {F(x)|x ∈ S} BID3.The hypervolume was originally introduced as a quantitative metric for coverage and convergence of Pareto-optimal fronts obtained through population based algorithms BID5. Methods based on direct maximization of H exhibit favorable convergence even in challenging scenarios, such as simultaneous minimization of 50 objectives BID3 We introduce a variation of the GAN game such that the generator solves the following multi-objective problem: DISPLAYFORM0 where each DISPLAYFORM1.., K}, is the loss provided by the k-th discriminator. Training proceeds as the usual formulation BID13, i.e. with alternate updates between the discriminators and the generator. Updates of each discriminator are performed to minimize the loss described in Eq. 5.A natural choice for generator's updates is the MGD algorithm, described in Section 2. However, computing the direction of steepest descent w * before every parameter update step, as required in MGD, can be prohibitively expensive for large neural networks. Therefore, we propose an alternative scheme for multi-objective optimization and argue that both our proposal and previously published methods can all be viewed as performing computationally more efficient versions of MGD update rule without the burden of having to solve a quadratric program, i.e. computing w *, every iteration. Fleischer BID10 has shown that maximizing H yields Pareto-optimal solutions. Since MGD converges to a set of Pareto-stationary points, i.e. a super-set of the Pareto-optimal solutions, hypervolume maximization yields a sub-set of the solutions obtained using MGD.We exploit the above mentioned property and define the generator loss as the negative loghypervolume, as defined in Eq. 8: DISPLAYFORM0 where the nadir point coordinate η is an upper bound for all l k. In Fig. 1 we provide an illustrative example for the case where K = 2. The highlighted region corresponds to e V. Since the nadir point η * is fixed, V will only be maximized, and consequently L G minimized, if each l k is minimized. Figure 1: 2D example of the objective space where the generator loss is being optimized. DISPLAYFORM1 Moreover, by adapting the shown in , the gradient of L G with respect to any generator's parameter θ is given by: DISPLAYFORM2 In other words, the gradient can be obtained by computing a weighted sum of the gradients of the losses provided by each discriminator, whose weights are defined as the inverse distance to the nadir point components. This formulation will naturally assign more importance to higher losses in the final gradient, which is another useful property of hypervolume maximization. Nadir point selection. It is evident from Eq. 9 that the selection of η directly affects the importance assignment of gradients provided by different discriminators. Particularly, as the quantity min k {η − l k} grows, the multi-objective GAN game approaches the one defined by the simple average of l k. Previous literature has discussed in depth the effects of the selection of η in the case of population-based methods BID1 BID7. However, those are not readily applicable for the single-solution case. 
As will be shown in Section 5, our experiments indicate that the choice of η plays an important role in the final quality of samples. Nevertheless, this effect becomes less relevant as the number of discriminators increases. Similarly to , we propose an adaptive scheme for η such that at iteration t: η t = δ max k {l k,t}, where δ > 1 is a user-defined parameter which will be referred to as slack. This enforces min k {η − l k} to be higher when max k {l k,t} is high and low otherwise, which induces a similar behavior as an average loss when training begins and automatically places more importance on the discriminators in which performance is worse as training progresses. Extra discussion and an illustrative example of the adaptation scheme adopted is presented in Appendix G.Comparison to average loss minimization. The upper bound proven by assumes that the marginals of the real and generated distributions are identical along all random projections. Average loss minimization does not ensure equally good approximation between the marginals along all directions. In case of a trade-off between discriminators, i.e. if decreasing the loss on a given projection increases the loss with respect to another one, the distribution of losses can be uneven. With HV on the other hand, especially when η is reduced throughout training, overall loss will be kept high as long as there are discriminators with high loss. This objective tends to prefer central regions of a trade-off, in which all discriminators present a roughly equally low loss. All methods described previously for the solution of GANs with multiple discriminators, i.e. average loss minimization , GMAN's weighted average BID9 and hypervolume maximization can be defined as MGD-like two-step algorithms consisting of:Step 1 -consolidating all gradients into a single update direction (compute the set α 1,...,K);Step 2 -updating parameters in the direction returned in step 1. Definition of Step 1 for the different methods studied here can be seen in the following: DISPLAYFORM0 Average loss minimization : BID9: DISPLAYFORM1 DISPLAYFORM2 We performed three sets of experiments aiming to analyze the following aspects: (i) How alternative methods for training GANs with multiple discriminators perform in comparison to MGD; (ii) How alternative methods perform in comparison to each other in terms of sample quality and coverage; and (iii) Whether the behavior induced by HV improves the with respect to the baseline methods. Firstly, we exploited the relatively low dimensionality of MNIST and used it as testbed for a comparison of MGD with the other approaches, i.e. average loss minimization (AVG), GMAN's weighted average loss, and HV, proposed in this work. Moreover, multiple initializations and slack combinations were evaluated in order to investigate how varying the number of discriminators affects robustness to those factors. Then, experiments were performed with CIFAR-10 while increasing the number of discriminators. We evaluated HV's performance compared to baseline methods, and the effect in samples quality. We also analyzed the impact on the diversity of generated samples by using the stacked MNIST dataset . Samples of generators trained on stacked MNIST, CIFAR-10, CelebA, and Cats dataset are shown in the Appendix. In all experiments performed, the same architecture, set of hyperparameters and initialization were used for both AVG, GMAN and our proposed method. The only different aspect is the generator loss. 
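To make that single difference explicit, the three consolidation rules can be written as small weight functions over the per-discriminator losses. The sketch below is schematic: the per-discriminator generator loss, the hyperparameter values, and the function names are placeholders, and the weights are detached so that the weighted sum reproduces each method's Step 1.

```python
# Sketch of "Step 1" for the compared methods: map losses l_1..l_K to weights alpha_k.
import torch

def avg_weights(losses):
    # Equal weighting (average loss minimization).
    return torch.full_like(losses, 1.0 / losses.numel())

def gman_weights(losses, beta=1.0):
    # GMAN: softmax weighting that emphasizes discriminators with higher loss.
    return torch.softmax(beta * losses, dim=0)

def hv_weights(losses, delta=1.1):
    # Hypervolume maximization: weights from the gradient of the negative
    # log-hypervolume, with the adaptive nadir eta_t = delta * max_k l_k.
    eta = delta * losses.max()
    return 1.0 / (eta - losses)

# Usage inside a generator update (losses detached so weights act as constants):
#   l = torch.stack([loss_of_discriminator_k for each k])
#   loss_G = (hv_weights(l.detach()) * l).sum()
#   loss_G.backward()
```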
Unless stated otherwise, Adam was used to train all the models with learning rate, β 1 and β 2 set to 0.0002, 0.5 and 0.999, respectively. Mini-batch size was set to 64. The Fréchet Inception Distance (FID) BID16 was employed for comparison. Details on FID computation can be found in Appendix A. We employed MGD in our experiments with MNIST. In order to do so, a quadratic program has to be solved prior to every parameters update. For this, we used the Scipy's implementation of the Serial Least Square Quadratic Program solver 2.Three and four fully connected layers with LeakyReLU activations were used for the generator and discriminator, respectively. Dropout was also employed in the discriminator and the random projection layer was implemented as a randomly initialized norm-1 fully connected layer, reducing the vectorized dimensionality of MNIST from 784 to 512. A pretrained LeNet was used for FID computation. Experiments over 100 epochs with 8 discriminators are reported in Fig. 2 and Fig. 3. In Fig. 2, box-plots refer to 30 independent computations of FID over 10000 images sampled from the generator which achieved the minimum FID at train time. FID are measured at train time over 1000 images and the best values are reported in Fig. 3 along with the necessary time to achieve it. MGD outperforms all tested methods. However, its cost per iteration does not allow its use in more relevant datasets other than MNIST. Hypervolume maximization, on the other hand, performs closest to MGD than the considered baselines, while introducing no relevant extra cost. In Fig. 4, we analyze convergence in the Pareto-stationarity sense by plotting the norm of the update direction for each method, given by || K k=1 α k ∇l k ||. All methods converged to similar norms, leading to the that different Pareto-stationary solutions will perform differently in terms of quality of samples. FID as a function of wall-clock time is shown in Figure 22 (Appendix H).HV sensitivity to initialization and choice of δ. Analysis of the sensitivity of the performance with the choice of the slack parameter δ and initialization was performed under the following setting: models were trained for 50 epochs on MNIST with hypervolume maximization using 8, 16, 24 discriminators. Three independent runs (different initializations) were executed with each δ = {1.05, 1.5, 1.75, 2} and number of discriminators, totalizing 36 final models. FIG1 reports the box-plots obtained for 5 FID independent computations using 10000 images, for each of the 36 models obtained under the setting previously described. Results clearly indicate that increasing the number of discriminators yields much smaller variation in the FID obtained by the final model. We evaluate the performance of HV compared to baseline methods using the CIFAR-10 dataset. FID was computed with a pretrained ResNet BID15. ResNet was trained on the 10-class classification task of CIFAR-10 up to approximately 95% test accuracy. DCGAN and WGAN-GP BID14 were included in the experiments for FID reference. Same architectures as in were employed for all multi-discriminators settings. An increasing number of discriminators was used. Inception score as well as FID computed with other models are included in Appendix C.In Fig. 6, we report the box-plots of 15 independent evaluations of FID on 10000 images for the best model obtained with each method across 3 independent runs. Results once more indicate that HV outperforms other methods in terms of quality of the generated samples. 
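For reference, the quadratic program solved before each MGD parameter update (finding the minimum-norm convex combination of the per-discriminator gradients) can be set up with SciPy's SLSQP solver roughly as follows. This is a sketch of the standard min-norm formulation, not the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mgd_direction(grads):
    """grads: array of shape (K, P), one flattened gradient per discriminator loss.
    Returns w* on the probability simplex minimizing ||sum_k w_k g_k||^2,
    together with the resulting steepest-descent direction."""
    K = grads.shape[0]
    G = grads @ grads.T                       # K x K Gram matrix of the gradients

    objective = lambda w: w @ G @ w
    w0 = np.full(K, 1.0 / K)
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    w_star = res.x
    return w_star, w_star @ grads             # update direction for the generator
```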
Moreover, performance clearly improves as the number of discriminators grows. Fig. 7 shows the FID at train time, i.e. measured with 1000 generated samples after each epoch, for the best models across runs. Models trained against more discriminators clearly converge to smaller values. We report the norm of the update direction ||Σ_{k=1}^{K} α_k ∇l_k|| for each method in FIG4, Appendix C.

Figure 7: FID estimated over 1000 generated images at train time. Models trained against more discriminators achieve lower FID.

Cost under the multiple discriminator setting. We highlight that even though training with multiple discriminators may be more computationally expensive than conventional approaches, such a framework supports fully parallel training of the discriminators, a feature which is not trivially possible in other GAN settings. For example, in WGAN the discriminator is serially updated multiple times for each generator update. In Fig. 10 in Appendix C, we provide a comparison of the wall-clock time per iteration for all methods evaluated. Serial implementations of the discriminator updates with 8 and 16 discriminators were faster than WGAN-GP.

We repeat previously proposed experiments aiming to analyze how the number of discriminators impacts the sample diversity of the corresponding generator when trained using hypervolume maximization. The stacked MNIST dataset is employed, and previously reported results are used for comparison. Results for HV with 8, 16, and 24 discriminators were obtained with 10k and 26k generated images, averaged over 10 runs. The number of covered modes along with the KL divergence between the generated mode distribution and the test data are reported in Table 1.

Table 1: Number of covered modes and reverse KL divergence for stacked MNIST.
  10k images:  HV - (—) disc.     998.0 ± 1.8 modes    KL 0.120 ± 0.004
               HV - 24 disc.      998.3 ± 1.1 modes    KL 0.116 ± 0.003
  26k images:  HV - 8 disc.       776.8 ± 6.4 modes    KL 1.115 ± 0.007
               HV - 16 disc.      1000.0 ± 0.0 modes   KL 0.088 ± 0.002
               HV - 24 disc.      1000.0 ± 0.0 modes   KL 0.084 ± 0.002

As in previous experiments, results improved as we increased the number of discriminators. All evaluated models using HV outperformed DCGAN, ALI, Unrolled GAN and VEEGAN. Moreover, HV with 16 and 24 discriminators achieved state-of-the-art coverage values. Thus, the increase in model capacity from using more discriminators directly resulted in an improvement in the generator's coverage. Training details as well as architecture information are presented in Appendix B.

In this work we have shown that employing multiple discriminators is a practical approach allowing us to trade extra capacity, and thereby extra computational cost, for higher quality and diversity of generated samples. Such an approach is complementary to other advances in GAN training and can easily be used together with other methods. We introduced a multi-objective optimization framework for studying multiple-discriminator GANs, and showed strong similarities between previous work and the multiple gradient descent algorithm. The proposed approach was observed to consistently yield higher-quality samples in terms of FID. Furthermore, increasing the number of discriminators was shown to increase sample diversity and generator robustness. Deeper analysis of the quantity ||Σ_{k=1}^{K} α_k ∇l_k|| is the subject of future investigation. We hypothesize that using it as a penalty term might reduce the need for a high number of discriminators.
In BID16, the authors proposed as a quality metric the squared Fréchet distance BID11 between Gaussians defined by estimates of the first- and second-order moments of the representations obtained, for both real and generated data, from a forward pass through a pretrained classifier. They proposed the use of Inception V3 for computing the data representation and called the metric the Fréchet Inception Distance (FID), defined as: FID = ||m_d - m_g||^2 + Tr(Σ_d + Σ_g - 2(Σ_d Σ_g)^{1/2}), where m_d, Σ_d and m_g, Σ_g are estimates of the first- and second-order moments of the representations of the real and generated data distributions, respectively. We employ FID throughout our experiments for comparison of the different approaches. However, for each dataset on which FID was computed, the output layer of a classifier pretrained on that particular dataset was used instead of Inception. m_d and Σ_d were estimated on the complete test partitions, which are not used during training.

Architectures of the generator and discriminator are detailed in separate appendix tables. Batch normalization was used in all intermediate convolutional and fully connected layers of both models. We employed RMSprop to train all models, with the learning rate and α set to 0.0001 and 0.9, respectively. The mini-batch size was set to 64. The setup of previous work is employed and we build 128000 and 26000 samples for the train and test sets, respectively.

Table 4 presents the best FID (computed with a pretrained ResNet) achieved by each approach at train time, along with the epoch in which it was achieved, for each of 3 independent runs. Train-time FIDs are computed using 1000 generated images.

Table 4: Best FID obtained for each approach on 3 independent runs. FID is computed on 1000 generated images after every epoch.

In FIG4, we report the norm of the update direction ||Σ_{k=1}^{K} α_k ∇l_k|| of the best model obtained for each method. Interestingly, the different methods present similar behavior in terms of convergence in the Pareto-stationarity sense, i.e. the norm upon convergence is lower for models trained against more discriminators, regardless of the employed method.

We computed extra scores using 10000 images generated by the best model reported in Table 4, i.e. the same models used to generate the results shown in Fig. 6. Both the Inception score and FID were computed with the original implementations, while FID-VGG and FID-ResNet were computed using a VGG and a ResNet we pretrained. Results are reported relative to DCGAN's scores.

Table 5: Scores of different methods measured on generated CIFAR-10 samples. DCGAN scores are used as reference values, and reported values are the ratio between a given model's score and DCGAN's. The Inception score is better when high, whereas FIDs are better when low.

In TAB9 we present a comparison of the minimum FID-ResNet obtained during training, along with the computation cost in terms of time and space for different GANs, with both 1 and 24 discriminators. The computational cost of training GANs under a multiple-discriminator setting is higher by design, in terms of both FLOPS and memory, compared with single-discriminator settings. However, a corresponding shift in performance is the result of the additional cost. This effect was consistently observed across 4 different well-known approaches, including DCGAN, Least-Squares GAN (LSGAN), and HingeGAN. The architectures of all single-discriminator models follow DCGAN.
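As a concrete reference for the FID definition in Appendix A above, the distance between the feature-space Gaussians can be computed with a few lines of NumPy/SciPy. This is an illustrative sketch, not the evaluation code used in the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """feats_*: arrays of shape (N, D) holding classifier representations."""
    m_d, m_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_d = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    covmean = sqrtm(cov_d @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real

    diff = m_d - m_g
    return float(diff @ diff + np.trace(cov_d + cov_g - 2.0 * covmean))
```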
For the 24 discriminators models, we used the architecture described in , which consists in removing the the normalization layers from DCGAN's discriminator and further adding the projection layer, inline with previous experiments reported for CIFAR-10 upscaled to 64x64. All models were trained with minibatch size of 64 during 150 epochs. Adam BID18 Furthermore, wall-clock time per iteration for different numbers of discriminators is shown in Fig. 10 for experiments with CIFAR-10 with serial updates of discriminators. Notice that while the increase in cost in terms of FLOPS and memory is unavoidable when multiple discriminators settings is employed, wall-clock time can be made close to single discriminators cases since training with respect to different discriminators can be implemented in parallel. On the other hand, extra cost in time introduced by other frameworks such as WGAN-GP or SNGAN cannot be trivially recovered. All reported in previous sections using CIFAR-10 were obtained with an upscaled version of the dataset. Here, we thus run experiments with the dataset in its original resolution aiming to contextualize our proposed approach with respect to previously introduced methods. To do so, we repeated similar experiments as reported in - TAB4, for the model referred to as standard CNN. The same architecture is employed and the spectral normalization is removed from the discriminators. Moreover, the same projection input is added in each of the discriminators. Results in terms of both FID and Inception score, evaluated on top of 5000 generated images as in as well as with 10000 images, are reported in TAB11 for our proposed approach and our implementation of , along with the FID measured using a ResNet classifier trained in advance. As can be seen, the addition of the multiple discriminators setting along with hypervolume maximization yields a relevant shift in performance for the DCGAN-like generator, taking all evaluated metrics to levels of recently proposed GANs. In this experiment, we verify whether the proposed multiple discriminators setting is capable of generating higher resolution images. For that, we employed the CelebA at a size of 128x128. We used a similar architecture for both generator and discriminators networks as described in the previous experiments. A convolutional layer with 2048 feature maps was added to both generator and discriminators architectures due to the increase in the image size. Adam optimizer with the same set of hyperparameters as for CIFAR-10 and CelebA 64x64 was employed. We trained models with 6, 8, and 10 discriminators during 24 epochs. Samples from each generator are shown in FIG7. We show the proposed multiple-discriminators setting scales to higher resolution even in the small dataset regime, by reproducing the experiments presented in BID17. We used the same architecture for the generator. For the discriminator, we removed batch normalization from all layers and used stride equal to 1 at the last convolutional layer, after adding the initial projection step. The Cats dataset 3 was employed, we followed the same pre-processing steps, which, in our case, yielded 1740 training samples with resolution of 256x256. Our model is trained using 24 discriminators and Adam optimizer with the same hyperparameters as for CIFAR-10 and CelebA previously described experiments. In FIG3 we show generator's samples after 288 training epochs. 
One epoch corresponds to updating over 27 minibatches of size 64.

Figure 18: Cats generated using 24 discriminators after 288 training epochs.

In this experiment we illustrate and confirm results introduced in previous work, showing the effect of using an increasing number of random projections to train a GAN. We trained models using average loss minimization with 1 to 6 discriminators on the CelebA dataset for 15 epochs. Samples from the generator obtained in the last epoch are shown in FIG4. Generated samples are closer to the real data as the number of random projections (and, consequently, discriminators) increases.
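The randomly initialized norm-1 projection layer placed in front of each discriminator (784 → 512 for MNIST, as described earlier) can be sketched as follows. Normalizing each output unit's weight vector to unit norm is our assumption about how "norm-1" is enforced.

```python
import numpy as np

def make_random_projection(in_dim=784, out_dim=512, seed=0):
    """Fixed, non-trainable linear projection with unit-norm rows."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((out_dim, in_dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # each output unit has norm 1
    return W

W = make_random_projection()
x = np.random.rand(784)          # a vectorized MNIST image
x_proj = W @ x                   # the 512-dimensional input seen by one discriminator
```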
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1MB-3RcF7
We introduce hypervolume maximization for training GANs with multiple discriminators, showing performance improvements in terms of sample quality and diversity.
Designing of search space is a critical problem for neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit much smaller than the ones used in recent NAS algorithms. This search space facilitates direct selection of channel numbers and kernel sizes in convolutions. In addition, we propose a resource-aware architecture search algorithm which dynamically selects atomic blocks during training. The algorithm is further accelerated by a dynamic network shrinkage technique. Instead of a search-and-retrain two-stage paradigm, our method can simultaneously search and train the target architecture in an end-to-end manner. Our method achieves state-of-the-art performance under several FLOPS configurations on ImageNet with a negligible searching cost. We open our entire codebase at: https://github.com/meijieru/AtomNAS. Human-designed neural networks are already surpassed by machine-designed ones. Neural Architecture Search (NAS) has become the mainstream approach to discover efficient and powerful network structures (; ; ;). Although the tedious searching process is conducted by machines, humans still involve extensively in the design of the NAS algorithms. Designing of search spaces is critical for NAS algorithms and different choices have been explored. and utilize supernets with multiple choices in each layer to accommodate a sampled network on the GPU. Chen et al. (2019b) progressively grow the depth of the supernet and remove unnecessary blocks during the search. Tan & Le (2019a) propose to search the scaling factor of image resolution, channel multiplier and layer numbers in scenarios with different computation budgets. Stamoulis et al. (2019a) propose to use different kernel sizes in each layer of the supernet and reuse the weights of larger kernels for small kernels.; Tan & Le (2019b) adopts Inverted Residuals with Linear Bottlenecks (MobileNetV2 block) , a building block with light-weighted depth-wise convolutions for highly efficient networks in mobile scenarios. However, the proposed search spaces generally have only a small set of choices for each block. DARTS and related methods b; use around 10 different operations between two network nodes.;;; Stamoulis et al. (2019a) search the expansion ratios in the MobileNetV2 block but still limit them to a few discrete values. We argue that more fine-grained search space is essential to find optimal neural architectures. Specifically, the searched building block in a supernet should be as small as possible to generate the most diversified model structures. We revisit the architectures of state-of-the-art networks; Tan & Le (2019b); ) and find a commonly used building block: convolution -channel-wise operation -convolution. We reinterpret such structure as an ensemble of computationally independent blocks, which we call atomic blocks. This new formulation enables a much larger and more fine-grained search space. Starting from a supernet which is built upon atomic blocks, the search for exact channel numbers and various operations can be achieved by selecting a subset of the atomic blocks. For the efficient exploration of the new search space, we propose a NAS algorithm named AtomNAS to conduct architecture search and network training simultaneously. Specifically, an importance factor is introduced to each atomic block. A penalty term proportional to the computation cost of the atomic block is enforced on the network. 
By jointly learning the importance factors along with the weights of the network, AtomNAS selects the atomic blocks which contribute to the model capacity with relatively small computation cost. Training on large supernets is computationally demanding. We observe that the scaling factors of many atomic blocks permanently vanish at the early stage of model training. We propose a dynamic network shrinkage technique which removes the ineffective atomic blocks on the fly and greatly reduce the computation cost of AtomNAS. In our experiment, our method achieves 75.9% top-1 accuracy on ImageNet dataset around 360M FLOPs, which is 0.9% higher than state-of-the-art model (a). By further incorporating additional modules, our method achieves 77.6% top-1 accuracy. It outperforms MixNet by 0.6% using 363M FLOPs, which is a new state-of-the-art under the mobile scenario. In summary, the major contributions of our work are: 1. We propose a fine-grained search space which includes the exact number of channels and mixed operations (e.g., combination of different convolution kernels). 2. We propose an efficient end-to-end NAS algorithm named AtomNAS which can simultaneously search the network architecture and train the final model. No finetuning is needed after AtomNAS finishes. 3. With the proposed search space and AtomNAS, we achieve state-of-the-art performance on ImageNet dataset under mobile setting. Recently, there is a growing interest in automated neural architecture design. Reinforce learning based NAS methods (; b; a) are usually computational intensive, thus hampering its usage with limited computational budget. To accelerate the search procedure, ENAS represents the search space using a directed acyclic graph and aims to search the optimal subgraph within the large supergraph. A training strategy of parameter sharing among subgraphs is proposed to significantly increase the searching efficiency. The similar idea of optimizing optimal subgraphs within a supergraph is also adopted by;;;. A prominent disadvantage of the above methods is their coarse search spaces only include limited categories of properties, e.g. kernel size, expansion ratio, the number of layer, etc. Because of the restriction of search space, it is difficult to learn optimal architectures under computational resource constraints. On the contrary, our method proposes the fine-grained search space to enable searching more flexible network architectures under various resource constraints. Assuming that many parameters in the network are unnecessary, network pruning methods start from a computation-intensive model, identify the unimportant connections and remove them to get a compact and efficient network. Early method simultaneously learns the important connections and weights. However, non-regularly removing connections in these works makes it hard to achieve theoretical speedup ratio on realistic hardwares due to extra overhead in caching and indexing. To tackle this problem, structured network pruning methods (b; ; ; ;) are proposed to prune structured show that in structured network pruning, the learned weights are unimportant. This suggests structured network pruning is actually a neural architecture search focusing on channel numbers. Our method jointly searches the channel numbers and a mix of operations, which is a much larger search space. We formulate our neural architecture search method in a fine-grained search space with the atomic block used as the basic search unit. 
An atomic block is comprised of two convolutions connected by a channel-wise operation. By stacking atomic blocks, we obtain larger building blocks (e.g. residual block and MobileNetV2 block proposed in a variety of state-of-the-art models including ResNet, MobileNet V2/V3 . In Section 3.1, We first show larger network building blocks (e.g. MobileNetV2 block) can be represented by an ensembles of atomic blocks. Based on this view, we propose a fine-grained search space using atomic blocks. In Section 3.2, we propose a resource-aware atomic block selection method for end-to-end architecture search. Finally, we propose a dynamic network shrinkage technique in Section 3.3, which greatly reduces the search cost. Under the typical block-wise NAS paradigm b), the search space of each block in a neural network is represented as the Cartesian product C = i=1 P i, where each P i is the set of all choices of the i-th configuration such as kernel size, number of channels and type of operation. For example, C = {conv, depth-wise conv, dilated conv} × {3, 5} × {24, 32, 64, 128} represents a search space of three types of convolutions by two kernel sizes and four options of channel number. A block in the ing model can only pick one convolution type from the three and one output channel number from the four values. This paradigm greatly limits the search space due to the few choices of each configuration. Here we present a more fine-grained search space by decomposing the network into smaller and more basic building blocks. We denote f c,c (X) as a convolution operator, where X is the input tensor and c, c are the input and output channel numbers respectively. A wide range of manually-designed and NAS architectures share a structure that joins two convolutions by a channel-wise operation: where g is a channel-wise operator. For example, in VGG and a Residual Block , f 0 and f 1 are convolutions and g is one of Maxpool, ReLU and BN-ReLU; in a MobileNetV2 block , f 0 and f 1 are point-wise convolutions and g is depth-wise convolution with BN-ReLU in the MobileNetV2 block. Eq. can be reformulated as follows: where f ] is the operator of the i-th channel of g, and {f are obtained by splitting the kernel tensor of f 1 along the the input channel dimension. Each term in the summation can be seen as a computationally independent block, which is called atomic block. Fig. demonstrate this reformulation. By determining whether to keep each atomic block in the final model individually, the search of channel number c is enabled through channel selection, which greatly enlarges the search space. This formulation also naturally includes the selection of operators. To gain a better understanding, we first generalize Eq. as: Note the array indices i are moved to subscripts. In this formulation, we can use different types of operators for f 0i, f 1i and g i; in other words, f 0, f 1 and g can each be a combination of different operators and each atomic block can use different operators such as convolution with different kernel sizes. Formally, the search space is formulated as a supernet which is built based on the structure in Eq.; such structure satisfies Eq. and thus can be represented by atomic blocks; each of f 0, f 1 and g is a combination of operators. The new search space includes some state-of-the-art network architectures. For example, by allowing g to be a combination of convolutions with different kernel sizes, the MixConv block in MixNet (b) becomes a special case in our search space. 
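The decomposition in Eq. 2 is easy to verify numerically. Treating the two 1x1 convolutions as plain matrices (spatial dimensions dropped for brevity) and the channel-wise operator as an elementwise ReLU, the full block equals the sum of its atomic blocks; this is a small illustrative check, not code from the released repository.

```python
import numpy as np

rng = np.random.default_rng(0)
c, c_mid, c_out = 8, 24, 16               # input, intermediate, output channels
x  = rng.standard_normal(c)               # a single "pixel" with c channels
W0 = rng.standard_normal((c_mid, c))      # stands in for the first 1x1 convolution f0
W1 = rng.standard_normal((c_out, c_mid))  # second 1x1 convolution f1
g  = lambda z: np.maximum(z, 0.0)         # channel-wise operator (ReLU here)

full_block = W1 @ g(W0 @ x)

# Sum of c_mid atomic blocks f1_i(g_i(f0_i(x))), one per intermediate channel.
atomic_sum = sum(W1[:, i] * g(W0[i] @ x) for i in range(c_mid))

assert np.allclose(full_block, atomic_sum)
```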
In addition, our search space facilitates discarding any number of channels in g, resulting in a more fine-grained channel configuration. In comparison, the channel numbers are determined heuristically in Tan & Le (2019b).

In this work, we adopt a differentiable neural architecture search paradigm where the model structure is discovered in a single full pass of model training. With the supernet defined above, the final model can be produced by discarding part of the atomic blocks during training. Following DARTS, we introduce a scaling factor α to scale the output of each atomic block in the supernet. Eq. then becomes F(X) = Σ_i α_i · f_{1i}(g_i(f_{0i}(X))), where f_{0i} maps the c input channels to the i-th intermediate channel, g_i operates on that single channel, and f_{1i} maps it to the output channels. Here, each α_i is tied to an atomic block comprised of the three operators f_{0i}, g_i and f_{1i}. The scaling factors are learned jointly with the network weights. Once training finishes, the atomic blocks with factors smaller than a threshold are discarded.

We still need to address two issues related to the factor α. First, where should we put it in the supernet? The scaling parameters of BN layers can be directly used as such scaling factors. In most cases, g contains at least one BN layer and we use the scaling parameters of the last BN layer in g as α. If g has no BN layers, which is rare, we can place α anywhere between f_0 and f_1, as long as we apply regularization terms (e.g., weight decay) to the weights of f_0 and f_1 in order to prevent them from growing too large and canceling the effect of α. The second issue is how to avoid performance deterioration after discarding some of the atomic blocks. For example, DARTS discards operations with small scale factors after iterative training of model parameters and scale factors. Since the scale factors of the discarded operations are not small enough, the performance of the network is affected, and re-training is needed to adjust the weights. In order to maintain the performance of the supernet after dropping some atomic blocks, the scaling factors α of those atomic blocks should be sufficiently small. Inspired by the channel pruning work in , we add an L1 norm penalty on α, which effectively pushes many scaling factors to near-zero values. At the end of learning, atomic blocks with α close to zero are removed from the supernet. Note that since the BN scales change more dramatically during training due to the regularization term, the running statistics of the BNs might be inaccurate and need to be recalculated using the training set.

Algorithm 1: Dynamic network shrinkage
  Initialize the supernet and the exponential moving average;
  while epoch ≤ max_epoch do
    Update network weights and scaling factors α by minimizing the loss function L;
    Update α̂ via the exponential-moving-average rule;
    if total FLOPs of dead blocks ≥ Δ then
      Remove dead blocks from the supernet;
    end
    Recalculate BN statistics by forwarding some training examples;
    Validate the performance of the current supernet;
  end

With the added regularization term, the training loss is L = E + λ Σ_{i∈S} c_i |α_i|, where λ is the coefficient of the L1 penalty term, S is the index set of all atomic blocks, and E is the conventional training loss (e.g. cross-entropy loss combined with the weight decay term). Each |α_i| is weighted by a coefficient c_i which is proportional to the computation cost of the i-th atomic block. By using computation-cost-aware regularization, we encourage the model to learn network structures that strike a good balance between accuracy and efficiency. In this paper, we use FLOPs as the criterion of computation cost.
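A sketch of the resource-aware penalty term λ Σ_i c_i |α_i| described above follows. The tensor layout and per-block FLOPs bookkeeping are assumptions of ours, not the repository's actual interface.

```python
import torch

def flops_weighted_l1(alphas, flops, lam):
    """alphas: list of 1-D tensors of BN scale factors, one per search block.
    flops:  matching list of 1-D tensors with the FLOPs of each atomic block (c_i).
    Returns lam * sum_i c_i * |alpha_i|, to be added to the usual training loss E."""
    penalty = sum((c * a.abs()).sum() for a, c in zip(alphas, flops))
    return lam * penalty

# total_loss = cross_entropy + weight_decay_term + flops_weighted_l1(alphas, flops, lam)
```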
Other metrics such as latency and energy consumption can be used similarly. As a , the whole loss function L trades off between accuracy and FLOPs. Usually, the supernet is much larger than the final search . We observe that many atomic blocks become "dead" starting from the early stage of the search, i.e., their scaling factors α are close to zero till the end of the search. To utilize computational resources more efficiently and speed up the search process, we propose a dynamic network shrinkage algorithm which cuts down the network architecture by removing atomic blocks once they are deemed "dead". We adopt a conservative strategy to decide whether an atomic block is "dead": for scaling factors α, we maintain its momentumα which is updated aŝ where α t is the scaling factors at t-th iteration and β is the decay term. An atomic block is considered "dead" if bothα and α t are smaller than a threshold, which is set to 1e-3 throughout experiments. Once the total FLOPs of "dead" blocks reach a predefined threshold, we remove those blocks from the supernet. As discussed above, we recalculate BN's running statistics before deploying the network. The whole training process is presented in Algorithm 1. We show the FLOPs of a sample network during the search process in Fig. 2. We start from a supernet with 1521M FLOPs and dynamically discard "dead" atomic blocks to reduce search cost. The overall search and train cost only increases by 17.2% compared to that of training the searched model from scratch. We first describe the implementation details in Section 4.1 and then compare AtomNAS with previous state-of-the-art methods under various FLOPs constraints in Section 4.2. Finally, we provide more analysis about AtomNAS in Section 4.3. The picture on the left of Fig. 3 illustrates a search block in the supernet. Within this search block, f 0 is a 1 × 1 pointwise convolutions that expands the input channel number from C to 3 × 6C; g is a mix of three depth-wise convolutions with kernel sizes of 3 × 3, 5 × 5 and 7 × 7, and f 1 is another 1×1 pointwise convolutions that projects the channel number to the output channel number. Similar to , if the output dimension stays the same as the input dimension, we use a skip connection to add the input to the output. In total, there are 3 × 6C atomic blocks in the search block. The overall architecture of the supernet is shown in the table on the right of Fig. 3. The supernet has 21 search blocks. We use the same training configuration (e.g., RMSProp optimizer, EMA on weights and exponential learning rate decay) as ). We find that using this configuration is sufficient for our method to achieve good performance. Our are shown in Table 1 and Table 3. When training the supernet, we use a total batch size of 2048 on 32 Tesla V100 GPUs and train for 350 epochs. For our dynamic network shrinkage algorithm, we set the momentum factor β in Eq. to 0.9999. At the beginning of the training, all of the weights are randomly initialized. To avoid removing atomic blocks with high penalties (i.e., FLOPs) prematurely, the weight of the penalty term in Eq. is increased from 0 to the target λ by a linear scheduler during the first 25 epochs. By setting the weight of the L1 penalty term λ to be 1.8×10 −4, 1.2×10 −4 and 1.0×10 −4 respectively, we obtain networks with three different sizes: AtomNAS-A, AtomNAS-B, and AtomNAS-C. They have the similar FLOPs as previous state-of-the-art networks under 400M: MixNet-S (b), MixNet-M (b) and SinglePath (a). 
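The "dead block" bookkeeping used by the dynamic network shrinkage described above amounts to an exponential moving average of each |α_i| plus a threshold test. Below is a minimal sketch; the class and attribute names are ours.

```python
import numpy as np

class DeadBlockTracker:
    """Flags atomic blocks whose scale factor stays below `thresh`,
    both instantaneously and in an exponential moving average (beta = 0.9999)."""

    def __init__(self, num_blocks, beta=0.9999, thresh=1e-3):
        self.beta, self.thresh = beta, thresh
        self.ema = np.ones(num_blocks)          # optimistic initialization

    def update(self, alpha):
        alpha = np.abs(np.asarray(alpha, dtype=float))
        self.ema = self.beta * self.ema + (1.0 - self.beta) * alpha
        return (self.ema < self.thresh) & (alpha < self.thresh)   # "dead" mask

# Once the total FLOPs of the flagged blocks exceed the budget, those blocks are
# removed from the supernet and the BN running statistics are recomputed.
```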
We apply AtomNAS to search high performance light-weight model on ImageNet 2012 classification task . Table 1 compares our methods with previous state-of-the-art models, either manually designed or searched. With models directly produced by AtomNAS, our method achieves the new state-of-the-art under all FLOPs constraints. Especially, AtomNAS-C achieves 75.9% top-1 accuracy with only 360M FLOPs, and surpasses all other models, including models like PDARTS and DenseNAS which have much higher FLOPs. Techniques like Swish activation function and Squeeze-and-Excitation (SE) module † means methods use extra techniques like Swish activation and Squeeze-and-Excitation module. fair comparison with methods that use these techniques, we directly modify the searched network by replacing all ReLU activation with Swish and add SE module with ratio 0.5 to every block and then retrain the network from scratch. Note that unlike other methods, we do not search the configuration of Swish and SE, and therefore the performance might not be optimal. Extra data augmentations such as MixUp and AutoAugment are still not used. We train the models from scratch with a total batch size of 4096 on 32 Tesla V100 GPUs for 250 epochs. Simply adding these techniques improves the further. AtomNAS-A+ achieves 76.3% top-1 accuracy with 260M FLOPs, which outperforms many heavier models including MnasNet-A2. It performs as well as Efficient-B0 (a) by using 130M less FLOPs and without extra data augmentations. It also outperforms the previous state-of-the-art MixNet-S by 0.5%. In addition, AtomNAS-C+ improves the top-1 accuracy on ImageNet to 77.6%, surpassing previous state-of-the-art MixNet-M by 0.6% and becomes the overall best performing model under 400M FLOPs. Fig. 4 visualizes the top-1 accuracy on ImageNet for different models. It's clear that our fine-grained search space and the end-to-end resource-aware search method boost the performance significantly. † denotes methods using extra network modules such as Swish activation and Squeeze-and-Excitation module. ‡ denotes using extra data augmentation such as MixUp and AutoAugment. * denotes models searched and trained simultaneously. Parameters FLOPs Top-1(%) Top-5(%) MobileNetV1 4.2M 575M 70.6 89.5 MobileNetV2 3.4M 300M 72.0 91.0 MobileNetV2 (our impl.) 3.4M 301M 73.6 91.5 MobileNetV2 (1.4) 6.9M 585M 74.7 92.5 ShuffleNetV2 3.5M 299M 72.6 -ShuffleNetV2 2× 7.4M 591M 74.9 -FBNet-A 4.3M 249M 73.0 -FBNet-C 5.5M 375M 74.9 -Proxyless (mobile) 4.1M 320M 74.6 92.2 SinglePath (a) 4.4M 334M 75.0 92.2 NASNet-A 5.3M 564M 74.0 91.6 DARTS (second order) 4.9M 595M 73.1 -PDARTS (cifar 10) (b) 4.9M 557M 75.6 92.6 DenseNAS-A 7.9M 501M 75.9 92.6 FairNAS-A (b) 4 3x3 32x112x112 16x112x112 24x56x56 24x56x56 24x56x56 24x56x56 40x28x28 40x28x28 40x28x28 40x28x28 80x14x14 80x14x14 80x14x14 80x14x14 96x14x14 96x14x14 96x14x14 96x14x14 192x7x7 192x7x7 192x7x7 192x7x7 Pooling FC 320x7x7 3 5 7 Figure 5: The architecture of AtomNAS-C. Blue, orange, cyan blocks denote atomic blocks with kernel size 3, 5 and 7 respectively; the heights of these blocks are proportional to their expand ratios. We plot the structure of the searched architecture AtomNAS-C in Fig. 5, from which we see more flexibility of channel number selection, not only among different operators within each block, but also across the network. In Fig. 6a, we visualize the ratio between atomic blocks with different kernel sizes in all 21 search blocks. 
First, we notice that all search blocks have convolutions of all three kernel sizes, showing that AtomNAS learns the importance of using multiple kernel sizes in network architecture. Another observation is that AtomNAS tends to keep more atomic blocks at the later stage of the network. This is because in earlier stage, convolutions of the same kernel size costs more FLOPs; AtomNAS is aware of this (thanks to its resource-aware regularization) and try to keep as less as possible computationally costly atomic blocks. To demonstrate the effectiveness of the resource-aware regularization in Section 3.2, we compare it with a baseline without FLOPs-related coefficients c i, which is widely used in network pruning (; b). Table 2 shows the . First, by using the same L1 penalty coefficient λ = 1.0 × 10 −4, the baseline achieves a network with similar performance but using much more FLOPs; then by increasing λ to 1.5 × 10 −4, the baseline obtain a network which has similar FLOPs but inferior performance (i.e., about 1.0% lower). In Fig. 6b we visualized the ratio of different types of atomic blocks of the baseline network obtained by λ = 1.5×10 −4. The baseline network keeps more atomic blocks in the earlier blocks, which have higher computation cost due to higher input resolution. On the contrary, AtomNAS is aware of the resource constraint, thus keeping more atomic blocks in the later blocks and achieving much better performance. As the BN's running statistics might be inaccurate as explained in Section 3.2 and Section 3.3, we re-calculate the running statistics of BN before inference, by forwarding 131k randomly sampled training images through the network. Table 3 shows the impact of the BN recalibration. The top-1 accuracies of AtomNAS-A, AtomNAS-B, and AtomNAS-C on ImageNet improve by 1.4%, 1.7%, and 1.2% respectively, which clearly shows the benefit of BN recalibration. Our dynamic network shrinkage algorithm speedups the search and train process significantly. For AtomNAS-C, the total time for search-and-training is 25.5 hours. For reference, training the final architecture from scratch takes 22 hours. Note that as the supernet shrinks, both the GPU memory consumption and forward-backward time are significantly reduced. Thus it's possible to dynamically change the batch size once having sufficient GPU memory, which would further speed up the whole procedure. In this paper, we revisit the common structure, i.e., two convolutions joined by a channel-wise operation, and reformulate it as an ensemble of atomic blocks. This perspective enables a much larger and more fine-grained search space. For efficiently exploring the huge fine-grained search space, we propose an end-to-end algorithm named AtomNAS, which conducts architecture search and network training jointly. The searched networks achieve significantly better accuracy than previous state-of-the-art methods while using small extra cost. Table 4: Comparision with baseline backbones on COCO object detection and instance segmentation. Cls denotes the ImageNet top-1 accuracy; detect-mAP and seg-mAP denotes mean average precision for detection and instance segmentation on COCO dataset. The detection of baseline models are from Stamoulis et al. (2019b). SinglePath+ (b) In this section, we assess the performance of AtomNAS models as feature extractors for object detection and instance segmentation on COCO dataset . 
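As a brief aside before the transfer experiments, the BN recalibration step ablated in Table 3 above (forwarding roughly 131k training images to rebuild the running statistics) is commonly implemented along the following lines in PyTorch. This is our sketch of the standard recipe under those assumptions, not the released code.

```python
import torch

@torch.no_grad()
def recalibrate_bn(model, loader, num_images=131_072, device="cuda"):
    """Reset BN running statistics and re-estimate them on training batches."""
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None          # use a cumulative moving average instead
    model.train()                      # BN updates its statistics in train mode
    seen = 0
    for images, _ in loader:
        model(images.to(device))
        seen += images.size(0)
        if seen >= num_images:
            break
```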
We first pretrain AtomNAS models (without the Swish activation function and the Squeeze-and-Excitation (SE) module) on ImageNet, use them as drop-in replacements for the backbone in the Mask-RCNN model by building the detection head on top of the last feature map, and finetune on the COCO dataset. We use the open-source MMDetection codebase. All models are trained on COCO train2017 with batch size 16 and evaluated on COCO val2017. Following the schedule used in the open-source implementation of TPU-trained Mask-RCNN, the learning rate starts at 0.02 and is decreased by a factor of 10 at the 15th and 20th epochs. The models are trained for 23 epochs in total. Table 4 compares the results with other baseline backbone models; the detection results of the baseline models are from Stamoulis et al. (2019b). All three AtomNAS models outperform the baselines on the object detection task. These results indicate that our models transfer better than the baselines, which may be because mixed operations, i.e. multi-scale features, are more important for object detection and instance segmentation. The TPU-trained Mask-RCNN implementation is available at https://github.com/tensorflow/tpu/tree/master/models/official/mask_rcnn
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BylQSxHFwr
A new state-of-the-art on Imagenet for mobile setting
We introduce Lyceum, a high-performance computational ecosystem for robotlearning. Lyceum is built on top of the Julia programming language and theMuJoCo physics simulator, combining the ease-of-use of a high-level program-ming language with the performance of native C. Lyceum is up to 10-20Xfaster compared to other popular abstractions like OpenAI’sGymand Deep-Mind’sdm-control. This substantially reduces training time for various re-inforcement learning algorithms; and is also fast enough to support real-timemodel predictive control with physics simulators. Lyceum has a straightfor-ward API and supports parallel computation across multiple cores or machines. The code base, tutorials, and demonstration videos can be found at: https://sites.google.com/view/lyceum-anon. Progress in deep learning and artificial intelligence has exploded in recent years, due in large part to growing computational infrastructure. The advent of massively parallel GPU computing, combined with powerful automatic-differentiation tools like TensorFlow and PyTorch (b), have lead to new classes of algorithms by enabling what was once computational intractable. These tools, alongside fast and accurate physics simulators like MuJoCo and associated frameworks like OpenAI's Gym and DeepMind's dm_control , have similarly transformed various aspects of robotic control like Reinforcement Learning (RL), Model-Predictive Control (MPC), and motion planning. These platforms enable researchers to give their ideas computational form, share with collaborators, and deploy their successes on real systems. From these advances, simulation to real-world (sim2real) transfer has emerged as a promising paradigm for robotic control. A growing body of recent work suggests that robust control policies trained in simulation can successfully transfer to the real world (a; ; ; ;). Despite these advances, many algorithms are computationally inefficient and have been unable to scale to complex problem domains. Training control policies with state-of-the-art RL algorithms often takes hours to days of compute time. For example, OpenAI's extremely impressive Dactyl work required 50 hours of training time across 6144 CPU cores and 8 powerful NVIDIA V100 GPUs. Such computational budgets are available only to a select few labs. Furthermore, such experiments are seldom run only once in deep learning and especially in deep RL. Indeed, RL algorithms are notoriously sensitive to choices of hyper-parameters and require reward shaping (; ; 2018;). Thus, many iterations of the learning process may be required, with humans in the loop, to improve reward and hyperparameters, before deploying solutions in real world. This computational bottleneck often leads to a scarcity of hardware , relative to the number of papers that propose new algorithms on highly simplified and well tuned benchmark tasks . Exploring avenues to reduce experiment turn around time is thus crucial to scaling up to harder tasks and making resource-intensive algorithms and environments accessible to research labs without massive cloud computing budgets. In a similar vein, computational considerations have also limited progress in model-based control algorithms. For real-time model predictive control, the computational restrictions manifest as the requirement to compute controls in bounded time with limited local resources. 
As we will show, existing frameworks such as Gym and dm_control, while providing a convenient abstraction in Python, are too slow to meet this real-time computation requirement. As a , most planning algorithms are run offline and deployed in open-loop mode on hardware. This is unfortunate, since it does not take feedback into account which is well known to be critical for stochastic control. Our Contributions: Our goal in this work is to overcome the aforementioned computational restrictions to enable faster training of policies with RL algorithms, facilitate real-time MPC with a detailed physics simulator, and ultimately enable researchers to engage more complex robotic tasks. To this end, we develop Lyceum, a computational ecosystem that uses the Julia programming language and the MuJoCo physics engine. Lyceum ships with the main OpenAI gym continuous control tasks, along with other environments representative of challenges in robotics. Julia's unique features allow us to wrap MuJoCo with zero-cost abstractions, providing the flexibility of a high-level programming language to enable easy creation of environments, tasks, and algorithms while retaining the performance of native C/C++. This allows RL and MPC algorithms implemented in Lyceum to be 10-100X faster compared to Gym and dm_control. We hope that this speedup will enable RL researchers to scale up to harder problems without increased computational costs, as well as enable real-time MPC that looks ahead through a simulator. Recently, various physics simulators and computational ecosystems around them have transformed robot learning research. They allow for exercising creativity to quickly generate new and interesting robotic scenes, as well as quickly prototype RL solutions. We summarize the main threads of related work below. Physics simulators: MuJoCo has quickly emerged as a leading physics simulators for robot learning research. It is fast and efficienct, and particularly well suited for contact rich tasks. Numerous recent works have also demonstrated simulation to reality transfer with MuJoCo through physically consistent system identification (a) or domain randomization (; ;). Our framework wraps MuJoCo in Julia and enables programming and research with a high level language yet retaining the speed of a low level language like C. While we primarily focus on MuJoCo, we believe that similar design principles can also be extended to other simulators such as Bullet and DART . Computational ecosystems: Even though MuJoCo has been around for at least 6-7 years, it's mainstream adoption and popularity grew more recently in the past 3-4 years after the introduction of computational ecosystems around the simulator. Specifically, OpenAI's gym and DeepMind's dm_control sparked a wave of interest by providing python bindings for MuJoCo (which itself is written in C) as well as easy-to-use environments and a highlevel python API. This has enabled the RL community to quickly access physics-based environments and prototype algorithms. Unfortunately, this flexibility comes at the price of computational efficiency. Existing ecosystems are slow due to inefficiencies and poor parallelization capabilities of Python. Prior works have tried to address some of the shortcomings of Python-based frameworks by attempting to add JIT compilation to the language (; a;) but only support a subset of the language, and do not achieve the same performance as Julia. 
developed a framework similar to Gym that supports distributed computing, but it still suffers the same performance issues of Python and multi-processing. Perhaps closest to our motivation is the work of , which demonstrates the usefulness of Julia as a language for robotics. However, it uses a custom and minimalist rigid body simulator with limited contact support. In contrast, our work attempts to address the inefficiencies of existing computational ecosystems through use of Julia, and directly wraps a high-quality and extensively supported simulator like MuJoCo with zero overhead. A number of algorithmic toolkits like OpenAI Baselines , MJRL , Soft-Learning , and RL-lab ; as well as environments like Hand Manipulation Suite , Door Gym , and Surreal Robosuite have been developed around existing computational ecosystems. Our framework supports all the underlying functionality needed to transfer these advances into our ecosystem (e.g. simulator wrappers and automatic differentiation through Flux). Lyceum comes with a few popular robotics algorithms out of the box like PPO and Natural Policy Gradient for RL as well as several MPC algorithms like "Model Predictive Path Integral" . In the future, we plan to port a number of additional algorithms and advances into our ecosystem. Robotic control with RL and MPC requires unique computational considerations when designing infrastructure and ecosystems. RL algorithms are typically inherently parallel. Consider the class of policy gradient RL algorithms which have demonstrated very impressive in a variety of tasks (; OpenAI;) or evolutionary methods for policy learning. They require rolling out many trajectories to form a dataset, on which the policy is updated. These rollouts can be parallelized and so we would like a computational infrastructure that yields performance which scales linearly with increasing number of cores. RL algorithms are also notoriously sensitive to many hyperparameter details, as well as reward shaping (; ; 2018). These invariably need to be done sequentially with a human in the loop, especially to refine and design the reward functions. Thus, experiment turn around times are critical, which are largely limited by the speed of serial operations within each thread. Similar considerations also apply to real-time MPC, where sampling based algorithms like MPPI , POLO (b), or iLQR (fitted LQR) (; ;) can be parallelized and would benefit from a framework that facilitates this. In addition, real-time MPC also poses the challenge of requiring computation of action to happen within the control loop time period, to incorporate correct real-world feedback. In robotic hardware, this typically must be done with locally available (often on-board) compute, since cloud computing invariably has significantly larger latency than hardware control loop period. This places an even stronger emphasis on extremely efficient serial operations within threads to match the strict bounded time computation requirements. We now describe the structure of our ecosystem and the advantages it provides. Lyceum consists of the following Julia packages which together empower robotics researchers with the ease of use of a high-level, dynamic programming language with all performance of a compiled, low-level language. 1. Lyceum.jl, a "base" package which defines a set of abstract environment and controller interfaces along with several utilities. 2. LyceumAI.jl, a collection of various algorithms for robotic control. 3. 
MuJoCo.jl, a low-level Julia wrapper for the MuJoCo physics simulator. 4. LyceumMuJoCo.jl, a high-level "environment" abstraction similar to Gym and dm_control. Julia is a general-purpose programming language developed in 2012 at MIT with a focus on technical computing . While a full description of Julia is beyond the scope of this paper, we highlight a few key aspects that we leverage in Lyceum and believe make Julia an excellent tool for robotics and RL researchers. Julia feels like a dynamic, interpreted scripting language. Its high-level syntax, garbage-collected memory management, and feature-rich "Read-Eval-Print-Loop" (REPL) command-line interface makes programming in Julia quick and interactive. Under the hood, however, Julia leverages the LLVM backend to "just-in-time" (JIT) compile native machine code that is as fast as C for a variety of hardware platforms. This allows researchers in robotics and RL to rapidly express their ideas in code while maintaining the high-performance at runtime that is critical to many problem domains. A core philosophy of Julia is that users' code should be just as powerful and fast as the core language. Indeed, most of Julia is itself implemented in Julia! This empowers users to create powerful tools and packages like Lyceum without requiring low-level knowledge of the language or waiting for a library to implement a desired feature. This is highly beneficial for algorithmic development since the kinds of operations available to the researcher are often restricted by what functionality is afforded by the computational platform. For example, use of optimization methods based on Newton's method were out of favor in machine learning until the feature of Hessian-vector products were implemented in low-level languages like C and wrapped into packages like TensorFlow and PyTorch. While many languages and libraries provide the notion of "broadcasting" or vectorizing certain operations, Julia extends this capability to all functions and can combine multiple operations into a single in-place, fused loop. Consider the following example: which compiles down to a single loop that computes 2y 2 + exp(z) for every element in Y and Z, assigning it to it's appropriate location in X. Critically, this allows researchers to flexibly and efficiently apply several operations to a data structure without worrying about which specific vectorized function to use (e.g. BLAS's gemm function) or waiting for a library to implement it if it does not exist. One the greatest benefits of Python is the massive ecosystem of packages it provides. While Julia has many powerful tools like Flux.jl for deep-learning and Optim.jl for optimization, Julia users are also able to easily interact with Python through PyCall.jl in just a few lines of code: using PyCall so = pyimport("scipy.optimize") so.newton(x -> cos(x) -x, 1) Similarly, Julia comes with a straightforward and zero-overhead interface for calling C as shown in this snippet from MuJoCo.jl: Similar tools also exist for other languages like R and C++. This crucially allows Julia researchers to tap into a deep well of tools and libraries and allows roboticists to interact with low-level hardware drivers. Julia also comes with extensive support for distributed and shared-memory multi-threading that allows users to trivially parallelize their code. 
The following example splits the indices of X across all the available cores and does an in-place multiplication of X by 2 in parallel across an internal thread pool: This equips researchers with the power to parallelize their implementations and make maximal use of high-core count CPUs without worrying about about over-scheduling their machines with too many threads. Lyceum is designed with this feature in mind and supports parallel computation out-of-the-box. All these features and more make writing both performant and generic code easy for the programmer, contributing to a rich ecosystem of open-source Julia packages like Lyceum. To handle the 3000+ packages available, Julia comes with a built-in package manager that prevents having to separately manage a build system and helps to avoid "dependency hell" by allowing each package to provide its own binaries and maintain separate versions of its dependencies. This means less time is spent getting things to run and more time for focusing on the task at hand. At the highest level we provide Lyceum.jl, This base package contains several convenience utilities used throughout the Lyceum ecosystem for data logging, multi-threading, and controller benchmarking (i.e. measuring throughput, jitter, etc.), and more. Lyceum.jl also contains interface definitions, such as AbstractEnv which LyceumMuJoCo.jl, discussed below, implements. This interface is similar to the popular Python frameworks Gym and dm_control with a few differences: 1. The ability to arbitrarily get/set the state of the simulator, a necessary feature for modelbased methods like MPC or motion planning. Note that an important component of this is defining a proper notion of a state, which is often missing from existing frameworks. 2. Optional, in-place versions for all functions (e.g. getstate! (·) which store the return value in a user-allocated data structure. This eliminates unnecessary memory allocations and garbage collection, enabling environments to be used in tight, real-time loops. 3. An optional "evaluation" metric. Often times reward functions are heavily "shaped" and hard to interpret. The evaluation metric serves as a measure of the true task-completion reward that you want to optimize for. We expect most users will be interested in implementing their own environments, which forms a crucial part of robotics research. Indeed, different researchers may be interested in different robots performing different tasks, ranging from whole arm manipulators to legged locomotion to dexterous anthropomorphic hands. To aid this process, we provide sensible defaults for most of the API. Additionally, only a subset of these functions need to be overridden should something other than the default behavior be desired. For example, an environment can only override getobs! (·) and observationspace(·), while our framework uses the information provided by observationspace(·) to pre-allocate the required data structure and pass it to getobs! (·). Having both in-place and out-of-place operations allows users to choose convenience vs greater performance as desired. This separation of interface and implementation allows for other simulators and back-ends (e.g. RigidBodySim.jl or DART) to be used in lieu of the MuJoCo-based environments we provide should the user desire. The next package we provide is MuJoCo.jl, a low-level Julia wrapper for MuJoCo which has a one-to-one correspondence to MuJoCo's C interface. 
We then build LyceumMuJoCo.jl, the MuJoCo implementation of our AbstractEnv API, on top of MuJoCo.jl. Along with this interface we provide ports of CartPole, Swimmer, Ant, HalfCheetah, and Humanoid from the popular OpenAI Gym framework, along with two new environments, each with their own task-specific reward and evaluation functions. The first is the problem domain of reconfiguration planning (Figure 2), which has been studied earlier in the motion planning literature (kin; nie). The task is defined by a set of N total movable objects and M ⊂ N target objects. Each target object has a specified goal configuration, while the remaining N \ M objects represent movable obstacles, possibly with their own set of constraints (e.g. on the right side of Figure 2 the objects must stay on the tabletop). The reconfiguration planning domain explores interesting challenges related to: (a) reward under-specification - only desired goal configurations of a subset of objects are specified, so there is a multiplicity of equally good and valid solutions; (b) sequential manipulation - in certain cases, some objects may have to be moved out of the way to reach and manipulate other objects; (c) variable number of objects - most current work in RL is concerned with manipulating a fixed number of known objects, whereas in reconfiguration planning there can be a wide variety in the number of objects and scenes. We also provide a locomotion environment (Figure 2) in which an agent must navigate robustly through rough terrain, hills, and terraces, represented as height maps. These environments are procedurally generated, allowing the environment to be modified by changing a few parameters; they support multi-threading out-of-the-box, are non-allocating, and have zero overhead over using MuJoCo in C directly. Note that our goal in this work is not to solve these task domains, but rather to provide an interesting set of environments that the research community can work on. Coupled with these environments is LyceumAI.jl, a collection of algorithms for robotic control that similarly leverage Julia's performance and multi-threading abilities. Currently we provide implementations of "Model Predictive Path Integral Control" (MPPI), a stochastic shooting method for model-predictive control, and Natural Policy Gradient. We designed our experiments and benchmarks to address the following questions: (a) Does Julia facilitate high-performance programming in the robotics and RL domains (as discussed in Section 3)? (b) Does our computational ecosystem and wrapper lead to faster implementation and experiment time than Gym and dm_control, and in particular is Lyceum faster than real-time for MPC? To answer the above questions, we consider different models of increasing complexity: CartPole, Ant, our reconfiguration environment using HERB, and Humanoid. All experiments are performed on a 16-core Intel i9-7960X with the CPU governor pinned at 1.2GHz so as to prevent dynamic scaling or thermal throttling from affecting results. As Gym and dm_control do not come with support for parallel computing, the authors implement this functionality using Python's multiprocessing library, as recommended in several GitHub Issues by the respective library authors. In the first task, we explore the parallel scaling performance of LyceumMuJoCo.jl against Gym, dm_control, and a native C implementation using an OpenMP thread pool. We collect 1024 samples in parallel with 1 through 16 threads using the humanoid.xml model that ships with MuJoCo.
In the second task, we compare the relative sampling performance for each of the models listed above by again collecting 1024 samples in parallel, but with a fixed 16 threads. The third task examines the real-time factor using our Julia MPPI implementation and a similar implementation in Python using Gym, for each of the models listed above. The real-time factor is defined as ∆t_Simulator / ∆t_Controller, where ∆t_Simulator is the timestep that our simulator uses and ∆t_Controller is the timestep per iteration of MPPI measured by wall-clock time. A real-time factor of 1 indicates that the controller runs at the same speed as the simulator, while a real-time factor of 2 indicates that the controller runs twice as fast as the simulator. While we collect the data at the same, fixed 1.2GHz, we scale all timings up to what we would expect at 3.3GHz, the sustained clock speed of our processor, for a more realistic presentation. 6 BENCHMARK RESULTS Figure 3 illustrates how sampling throughput scales with increasing core counts for Gym, DMC, Lyceum, and a native C implementation using OpenMP. While Lyceum and C scale at 98% and 95% of linear between 1 and 16 threads, Gym and DMC only achieve 29% and 23% scaling, respectively. Additionally, the authors noted that this gap grew when experiments were run with the CPU frequency unlocked and allowed to scale to 3.3GHz. Upon profiling the Python implementation, it was discovered that a significant component of the slowdown is due to inter-process communication overhead and waiting on locks. Thus it appears that the distributed Python implementations are largely IO-bound. Figure 3 also compares sampling throughput across environments of different complexity. We provide this data in two forms: as a fraction of native C's throughput and as samples per second on a log scale. As seen, Lyceum and native C significantly outperform Gym and dm_control in all cases. The relative performance ratios between the best-performing and worst-performing implementations for Ant, CartPole, HERB, and Humanoid are 44x, 286x, 116x, and 14x, respectively. The smaller differences in the more complex environments are likely due to the fact that each process in the Gym/dm_control environments is performing more work, amortizing the overhead of inter-process communication. It should also be noted that Lyceum and C scale linearly through the entire range of threads, while Gym and dm_control appear to flatline around 10 threads. This suggests that these frameworks may be unable to benefit from higher core-count CPUs. While increased sampling throughput is critical for many algorithms, end-to-end performance matters most. Figure 1 compares an implementation of Proximal Policy Optimization (PPO) in our framework against OpenAI Baselines' implementation for 1 million timesteps in the OpenAI Gym environments Swimmer, Hopper, and Humanoid. Each experiment was averaged over three random seeds and performed on a single core with the default OpenAI Baselines hyper-parameters. For Hopper and Swimmer, our implementation yields training curves similar to OpenAI's, while lagging behind in the Humanoid experiment. This difference is likely due to the number of additions made to the core PPO algorithm in OpenAI Baselines, including reward and observation scaling, value-function gradient clipping, orthonormal parameter initialization, and more. The important difference, however, is the far shorter training time as measured by wall-clock time.
This speed-up comes from the increased sampling efficiency provided by Lyceum, as well as from the high-performance automatic differentiation framework Flux.jl. We introduced Lyceum, a new computational ecosystem for robot learning in Julia that provides the rapid prototyping and ease-of-use benefits of a high-level programming language while retaining the performance of a low-level language like C. We demonstrated that this ecosystem can obtain 10-20X speedups compared to existing ecosystems like OpenAI Gym and dm_control. We also demonstrated that this speed-up enables faster experiment times for RL algorithms, as well as real-time model-predictive control. In the future, we hope to port over algorithmic infrastructures like OpenAI's Baselines, as well as environments like the hand manipulation suite and DoorGym.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
SyxytxBFDr
A high performance robotics simulation and algorithm development framework.
There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift. Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of the source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA). Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation. Devoid of such highly impractical assumptions, we propose a novel two-stage learning process. Initially, in the procurement stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift. To achieve this, we enhance the model's ability to reject out-of-source-distribution samples by leveraging the available source data, in a novel generative classifier framework. Subsequently, in the deployment stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples. To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighting mechanism, named the Source Similarity Metric (SSM). A thorough evaluation shows the practical usability of the proposed learning framework, with superior DA performance even over state-of-the-art source-dependent approaches. Deep learning models have proven to be highly successful over a wide variety of tasks. However, a majority of these remain heavily dependent on access to a huge number of labeled samples to achieve a reliable level of generalization. A recognition model trained on a certain distribution of labeled samples (source domain) often fails to generalize when deployed in a new environment (target domain) in the presence of a discrepancy in the input distribution. Domain adaptation (DA) algorithms seek to minimize this discrepancy either by learning a domain-invariant feature representation, or by learning independent domain transformations to a common latent representation through adversarial distribution matching, in the absence of target label information. Most of the existing approaches assume a common label-set shared between the source and target domains (i.e. C s = C t), which is often regarded as Closed-set DA (see Fig. 1). Though this assumption helps in analyzing various insights of DA algorithms, it rarely holds true in real-world scenarios. Recently, researchers have independently explored two broad adaptation settings by partly relaxing the above assumption. In the first kind, Partial DA, the target label space is considered a subset of the source label space (i.e. C t ⊂ C s). This setting is more suited for large-scale universal source datasets, which will almost always subsume the label-set of a wide range of target domains. However, the availability of such a universal source is highly questionable for a wide range of input domains and tasks. In the second kind, regarded as Open-set DA, the target label space is considered a superset of the source label space (i.e. C t ⊃ C s).
The major challenge in this setting is attributed to detection of target samples from the unobserved categories in a fully-unsupervised scenario. Apart from the above two extremes, certain works define a partly mixed scenario by allowing "private" label-set for both source and target domains (i.e. C s \ C t = ∅ and C t \ C s = ∅) but with extra supervision such as few-shot labeled data or access to the knowledge of common categories . Most of the prior approaches consider each scenario in isolation and propose independent solutions. Thus, they require access to the knowledge of label-set relationship (or category-gap) to carefully choose a DA algorithm, which would be suitable for the problem in hand. Furthermore, all the prior unsupervised DA works require coexistence of source and target samples even during deployment, hence not source-free. This is highly impractical, as labeled source data may not be accessible after deployment due to several reasons such as, privacy concerns, restricted access to proprietary data, accidental loss of source data or other computational limitations in real-time deployment scenarios. Acknowledging the aforementioned shortcomings, we propose one of the most convenient DA frameworks which is ingeniously equipped to address source-free DA for all kinds of label-set relationships, without any prior knowledge of the associated category-gap (i.e. universal-DA). We not only focus on identifying the key complications associated with the challenging problem setting, but also devise insightful ideas to tackle such complications by adopting learning techniques much different from the available DA literature. This leads us to realize a holistic solution which achieves superior DA performance even over prior source-dependent approaches. We briefly review the available domain adaptation methods under the three major divisions according to the assumption on label-set relationship. a) Closed-set DA. The cluster of previous works under this setting focuses on minimizing the domain gap at some intermediate feature level either by minimizing well-defined statistical distance functions (; ; ;) or by formalizing it as an adversarial distribution matching problem (; ; ; ; inspired from the Generative Adversarial Nets . Certain prior works (; ; use GAN framework to explicitly generate target-like images translated from the source image samples, which is also regarded as pixel-level adaptation in contrast to other feature level adaptation works (; ; ; . b) Partial DA. Focusing on Partial DA, Cao et al. (2018a) proposed to achieve adversarial class-level matching by utilizing multiple domain discriminators furnishing class-level and instance-level weighting for individual data samples. Zhang et al. (2018b) proposed to utilize importance weights for source samples depending on their similarity to the target domain data using an auxilliary discriminator. To effectively address the problem of negative-transfer , Cao et al. (2018b) employed a single discriminator to achieve both adversarial adaptation and class-level weighting of source samples. c) Open-set DA. Saito et al. (2018b) proposed a more general open-set adaptation setting without accessing the knowledge of source private labels set in contrast to the prior work . They extended the source classifier to accommodate an additional "unknown" class, which is trained adversarially against the other source classes. Universal DA. 
proposed Universal DA, which requires no prior knowledge of label-set relationship similar to the proposed setting, but considers access to both source and target samples during adaptation. The problem setting for source-free domain adaptation is broadly divided into a two stage process. a) Procurement stage. In this stage, we are given full access to the labeled samples of source domain, where p is the distribution of source samples and C s denotes the label-set of the source domain. Here, the objective is to equip the model for the second stage, i.e. the Deployment stage, in the presence of a discrepancy in the distribution of input target samples. To achieve this we rely on an artificially generated negative dataset, D n = {(x n, y n): x n ∼ p n, y n ∈ C n }, where p n is the distribution of negative source samples such that C n ∩ C s = ∅. Figure 2: Latent space cluster arrangement during adaptation (see Section 3.1.1). b) Deployment stage. After obtaining a trained model from the Procurement stage, the model will have its first encounter with the unlabeled target domain samples from the deployed environment. We denote the unlabeled target data by D t = {x t : x t ∼ q}, where q is the distribution of target samples. Note that, access to the source dataset D s from the previous stage is fully restricted during adaptation in the Deployment stage. Suppose that, C t is the "unknown" label-set of the target domain. We define the common label space between the source and target domain as C = C s ∩ C t. The private label-set for the source and the target domains is represented as C s = C s \ C t and C t = C t \ C s respectively. 3.1.1 Challenges. The available DA techniques heavily rely on the adversarial discriminative (; a) strategy. Thus, they require access to the source samples to reliably characterize the source domain distribution. Moreover, these approaches are not equipped to operate in a source-free setting. Though a generative model can be used as a memory-network to realize source-free adaptation, such a solution is not scalable for large-scale source datasets (e.g. ImageNet ), as it introduces unnecessary extra parameters in addition to the associated training difficulties . This calls for a fresh analysis of the requirements beyond the solutions found in literature. In a general DA scenario, with access to source samples in the Deployment stage (specifically for Open-set or Partial DA), a widely adopted approach is to learn domain invariant features. In such approaches the placement of source category clusters is learned in the presence of unlabeled target samples which obliquely provides a supervision regarding the relationship between C s and C t. For instance, in case of Open-set DA, the source clusters may have to disperse to make space for the clusters from target private C t (see Fig. 2a to 2b). Similarly, in partial DA, the source clusters may have to rearrange themselves to keep all the target shared clusters (C = C t) separated from the source private C s (see Fig. 2a to 2c). However in a complete source-free framework, we do not have the liberty to leverage such information as source and target samples never coexist together during training. Motivated by the adversarial discriminative DA technique , we hypothesize that, inculcating the ability to reject samples that are out of the source data distribution can facilitate future source-free domain alignment using this discriminatory knowledge. Therefore, in the Procurement stage the overarching objective is two-fold. 
• Firstly, we must aim to learn a certain placement of source clusters best suited for all kinds of category-gap scenarios acknowledging the fact that, a source-free scenario does not allow us to modify the placement in the presence of target samples during adaptation (see Fig. 2d). • Secondly, the learned embedding must have the ability to reject out-of-distribution samples, which is an essential requirement for unsupervised adaptation in the presence of domain-shift. 3.1.2 Solution. In the presence of source data, we aim to restrain the model's domain and category bias which is generally inculcated as a of the over-confident supervised learning paradigms (see Fig. 4A). To achieve this goal, we adopt two regularization strategies viz. i) regularization via generative modeling and ii) utilization of a labeled simulated negative source dataset to generalize for the latent regions not covered by the given positive source samples (see Fig. 4C). How to configure the negative source dataset? While configuring D n, the following key properties have to be met. Firstly, latent clusters formed by the negative categories must lie in-between the latent clusters of positive source categories to enable a higher degree of intra-class compactness with interclass separability (Fig. 4C). Secondly, the negative source samples must enrich the source domain n. One of the key characteristics shared between the samples from source and unknown target domain is the semantics of the local part-related features specifically for image-based object recognition tasks. Relying on this assumption, we propose a systematic procedure to simulate the samples of D n by randomly compositing local regions between a pair of images drawn from the positive source dataset D s (see Fig. 3A and appendix, Algo. 2). Intuitively, composite samples x n created on image pairs from different source categories are expected to lie in-between the two positive source clusters in the latent space, thereby introducing a combinatorial amount of new class labels i.e. n. As an alternative approach, in the absence of domain knowledge (e.g. non-image datasets, or for tasks beyond image-recognition such as pose estimation), we propose to sample virtual negative instances, u n from the latent space which are away from the high confidence regions (3-sigma) of positive source clusters (Fig. 4B). For each negative sample, we assign a negative class label (one of |C n | = |Cs| C 2) corresponding to the pair of most confident source classes predicted by the classifier. Thus, we obtain D is the distribution of negative samples in the latent u-space (more details in appendix Algo. 3). Training procedure. The generative source classifier is divided into three stages; i) backbone-model M, ii) feature extractor F s, and iii) classifier D (see Fig. 3B). Output of the backbone-model is denoted as v = M (x), where x is drawn from either D s or D n. Following this, the output of F s and D are represented as u and d respectively.. Additionally, we define priors of only positive source classes as P (u s |c i) = N (u s |µ ci, Σ ci) for i = 1, 2...|C s | at Algorithm 1 Training algorithm in the Procurement stage 1: input: (xs, ys) ∈ Ds, (xn, yn) ∈ Dn; θF s, θD, θG: Parameters of Fs, D and G respectively. 
2: initialization: pretrain {θF s, θD} using cross-entropy loss on (xs, ys) followed by initialization of the sample mean µc i and covariance Σc i (at u-space) of Fs • M (xs) for xs from class ci; i = 1, 2,...|Cs| 3: for iter < M axIter do 4: where ks and kn are the index of ground-truth label ys and yn respectively. 6:; Lv = |vs −vs|; Lu = |ur −ûr| 7: Update θF s, θD, θG by minimizing LCE, Lv, Lu, and Lp alternatively using separate optimizers. 9: if (iter % U pdateIter == 0) then 10: Recompute the sample mean (µc i) and covariance (Σc i) of Fs • M (xs) for xs from class ci; n: generate fresh latent-simulated negative samples using the updated priors) the intermediate embedding Here, parameters of the normal distributions are computed during training as shown in line-10 of Algo. 1. A cross-entropy loss over these prior distributions is defined as L p (line-7 in Algo. 1), to effectively enforce intra-class compactness with inter-class separability (progression from Fig. 4B to 4C). Motivated by generative variational auto-encoder (VAE) setup , we introduce a feature decoder G, which aims to minimize the cyclic reconstruction loss selectively for the samples from positive source categories v s and randomly drawn samples u r from the corresponding class priors (i.e. L v and L u, line-6 in Algo. 1). This along with a lower weightage α for the negative source categories (i.e. at the cross-entropy loss L CE, line-6 in Algo. 1) is incorporated to deliberately bias F s towards the positive source samples, considering the level of unreliability of the generated negative dataset. 3.2.1 Challenges. We hypothesize that, the large number of negative source categories along with the positive source classes i.e. C s ∪ C n can be interpreted as a universal source dataset, which can subsume label-set C t of a wide range of target domains. Moreover, we seek to realize a unified adaptation algorithm, which can work for a wide range of category-gaps. However, a forceful adaptation of target samples to positive source categories will cause target private samples to be classified as an instance of the source private or the common label-set, instead of being classified as "unknown", i.e. one of the negative categories in C n. In contrast to domain agnostic architectures (; a; a), we resort to an architecture supporting domain specific features , as we must avoid disturbing the placement of source clusters obtained from the Procurement stage. This is an essential requirement to retain the task-dependent knowledge gathered from the source dataset. Thus, we introduce a domain specific feature extractor denoted as F t, whose parameters are initialized from the fully trained F s (see Fig. 3B). Further, we aim to exploit the learned generative classifier from the Procurement stage to complement for the purpose of separate ad-hoc networks (critic or discriminator) as utilized by the prior works (; b). We define a weighting factor (SSM) for each target sample x t, as w(x t). A higher value of this metric indicates x t's similarity towards the positive source categories, specifically inclined towards the common label space C. Similarly, a lower value of this metric indicates x t's similarity towards the negative source categories C n, showing its inclination towards the private target labels C t. Let, ps, qt be the distribution of source and target samples with labels in C s and C t respectively. 
We define, p c and q c to denote the distribution of samples from source and target domains belonging to the shared label-set C. Then, the SSM for the positive and negative source samples should lie on the two extremes, forming the following inequality: To formalize the SSM criterion we rely on the class probabilities defined at the output of source model only for the positive class labels, i.e.ŷ (k) for k = 1, 2...|C s |. Note that,ŷ (k) is obtained by performing softmax over |C s | + |C n | categories as discussed in the Procurement stage. Finally, the SSM and its complement are defined as, We hypothesize that, the above definition will satisfy Eq. 1, as a of the generative learning strategy adopted in the Procurement stage. In Eq. 2 the exponent is used to further amplify separation between target samples from the shared C and those from the private C t label-set (see Fig. 5A). b) Source-free domain adaptation. To perform domain adaptation, the objective function aims to move the target samples with higher SSM value towards the clusters of positive source categories and vice-versa at the frozen source embedding, u-space (from the Procurement stage). To achieve this, parameters of only F t network are allowed to be trained in the Deployment stage. However, the decision of weighting the loss on target samples towards the positive or negative source clusters is computed using the source feature extractor F s i.e. the SSM in Eq. 2. We define, the deployment model as h = D • F t • M (x t) using the target feature extractor, with softmax predictions over K categories obtained asẑ. Thus, the primary loss function for adaptation is defined as, Additionally, in the absence of label information, there would be uncertainty in the predictionsẑ as a of distributed class probabilities. This leads to a higher entropy for such samples. Entropy minimization is adopted in such scenarios to move the target samples close to the highly confident regions (i.e. positive and negative cluster centers from the Procurement stage) of the classifier's feature space. However, it has to be done separately for positive and negative source categories based on the SSM values of individual target samples to effectively distinguish the target-private set from the full target dataset. To achieve this, we define two different class probability vectors separately for the positive and negative source classes denoted as, z Fig. 3B ). Entropy of the target samples in the positive and negative regimes of the source classifier is obtained as n respectively. Consequently, the entropy minimization loss is formalized as, Thus, the final loss function for adapting the parameters of F t is presented as Here β is a hyper-parameter controlling the importance of entropy minimization during adaptation. We perform a thorough evaluation of the proposed source-free, universal domain adaptation framework against prior state-of-the-art models across multiple datasets. We also provide a comprehensive ablation study to establish generalizability of the approach across a variety of label-set relationships and justification of the various model components. Datasets. For all the following datasets, we resort to the experimental settings inline with the recent work by (UAN). Office-Home dataset consists of images from 4 different domains -Artistic (Ar), Clip-art (Cl), Product (Pr) and Real-world (Rw). Alphabetically, the first 10 classes are selected as C, the next 5 classes as C s, and the rest 50 as C t. 
VisDA2017 dataset comprises of 12 categories with synthetic images as the source domain and natural images as the target domain, out of which, the first 6 are chosen as C, the next 3 as C s and the rest as C t. Office-31 dataset contains images from 3 distinct domains -Amazon (A), DSLR (D) and Webcam (W). We use the 10 classes shared by Office-31 and Caltech-256 to construct the shared label-set C and alphabetically select the next 10 as C s, with the remaining 11 classes contributing to C t. To evaluate scalability, ImageNet-Caltech is also considered with 84 common classes inline with the setting in. Simulation of labeled negative samples. To simulate negative labeled samples for training in the Procurement stage, we first sample a pair of images, each from different categories of C s, to create unique negative classes in C n. Note that, we impose no restriction on how the hypothetical classes are created (e.g. one can composite non-animal with animal). A random mask is defined which splits the images into two complementary regions using a quadratic spline passing through a central image region (see Appendix Algo. 2). Then, the negative image is created by merging alternate mask regions as shown in Fig. 3A. For the I→C task of ImageNet-Caltech, the source domain (ImageNet), consisting of 1000 classes, in a large number of possible negative classes (i.e. |C n | = |Cs| C 2). We address this by randomly selecting only 600 of these negative classes for ImageNet(I), and 200 negative classes for Caltech(C) in the task C→I. In a similar fashion, we generate latent-simulated negative samples only for the selected negative classes in these datasets. Consequently, we compare two models with different Procurement stage training -(i) USFDA-a: using image-composition as negative dataset, and (ii) USFDA-b: using latent-simulated negative samples as the negative dataset. We use USFDA-a for most of our ablation experiments unless mentioned explicitly. Average accuracy on Target dataset, T avg. We resort to the evaluation protocol proposed in the VisDA2018 Open-Set Classification challenge. Accordingly, all the target private classes are grouped into a single "unknown" class and the metric reports the average of per-class accuracy over |C s | + 1 classes. In the proposed framework a target sample is marked as "unknown", if it is classified (argmax kẑ (k) ) into any of the negative |C n | classes out of total |C s | + |C n | categories. In contrast, UAN relies on a sensitive hyperparameter, as a threshold on the sample-level weighting, to mark a target sample as "unknown". Also note that, our method is completely source-free during the Deployment stage, while all other methods have access to the full source-data. Accuracy on Target-Unknown data, T unk. We evaluate the target unknown accuracy, T unk, as the proportion of actual target private samples (i.e. {(x t, y t): y t ∈ C t }) being classified as "unknown" after adaptation. Note that, UAN does not report T unk which is a crucial metric to evaluate the vulnerability of the model after its deployment in the target environment. The T avg metric fails to capture this as a of class-imbalance in the Open-set scenario (b). Hence, to realize a common evaluation ground, we train the UAN implementation provided by the authors and denote it as UAN* in further sections of this paper. We observe that, the UAN training algorithm is often unstable with a decreasing trend of T unk and T avg over increasing training iterations. 
We thus report the mean and standard deviation of the peak values of T unk and T avg achieved by UAN*, over 5 separate runs on the Office-31 dataset (see Table 7). Implementation Details. We implement our network in PyTorch and use ResNet-50 as the backbone-model M, pre-trained on ImageNet, in line with UAN. The complete architecture of the other components, with fully-connected layers, is provided in the Supplementary. A sensitivity analysis of the major hyper-parameters used in the proposed framework is provided in Fig. 5B-C and Appendix Fig. 8B. In all our ablations across the datasets, we fix the hyperparameter values as α = 0.2 and β = 0.1. We utilize the Adam optimizer with a fixed learning rate of 0.0001 for training in both the Procurement and Deployment stages (see Appendix for the code). For the implementation of UAN*, we use the hyper-parameter value w 0 = −0.5, as specified by the authors for the task A→D in the Office-31 dataset. a) Comparison with prior art. We compare our approach against UAN and other prior methods. The results are presented in Table 1 and Table 2. Clearly, our framework achieves state-of-the-art results even in a source-free setting on several tasks. Particularly in Table 2, we present the target-unknown accuracy T unk on various datasets. The table also reports the mean and standard deviation of both accuracy metrics computed over 5 random initializations on the Office-31 dataset (the last six rows). Our method is able to achieve a much higher T unk than UAN*, highlighting our superiority as a result of the novel learning approach incorporated in both the Procurement and Deployment stages. Note that both USFDA-a and USFDA-b yield similar performance across a wide range of standard benchmarks. We also perform a characteristic comparison of algorithm complexity in terms of the number of learnable parameters and the training time. In contrast to UAN, the proposed framework offers a much simpler adaptation algorithm, devoid of ad-hoc networks such as an adversarial discriminator and of additional fine-tuning of the backbone network. b) Does SSM satisfy the expected inequality? The effectiveness of the proposed learning algorithm in the case of source-free deployment relies on the formulation of the SSM, which is expected to satisfy Eq. 1. Fig. 5A shows a histogram of the SSM separately for samples from the target-shared (blue) and target-private (red) label space. The success of this metric is attributed to the generative nature of the Procurement stage, which enables the source model to mark the target-private samples as marginally more negative than the samples from the shared label space. c) Sensitivity to hyper-parameters. As we tackle DA in a source-free setting while simultaneously intending to generalize across varied category-gaps, a low sensitivity to hyperparameters would further enhance our practical usability. To this end, we fix certain hyperparameters for all our ablations (also in Fig. 6C), even across datasets (i.e. α = 0.2, β = 0.1). Thus, one can treat them as global constants, with |C n | being the only remaining hyperparameter, as variations in one while fixing the others yield a complementary effect on regularization in the Procurement stage. A thorough analysis, reported in appendix Fig. 8, clearly demonstrates the low sensitivity of our model to these hyperparameters. Figure 6: Comparison across varied label-set relationships for the task A→D in the Office-31 dataset.
A) Visual representation of label-set relationships and T avg at the corresponding instances for B) UAN* and C) our source-free model. Effectively, the direction along the x-axis (blue horizontal arrow) characterizes increasing Open-set complexity. The direction along the y-axis (red vertical arrow) shows increasing complexity of the Partial DA scenario. The pink diagonal arrow denotes the effect of a decreasing shared label space. d) Generalization across varied label-set relationships. To present this comparison in the most compelling manner, we propose the tabular form shown in Fig. 6A. We vary the number of private classes for target and source along the x and y axes respectively, with a fixed |C s ∪ C t | = 31. We compare the T avg metric at the corresponding table instances, shown in Fig. 6B-C. The results clearly highlight the superiority of the proposed framework, specifically for the more practical scenarios (close to the diagonal instances) as compared to the unrealistic Closed-set setting (|C s | = |C t | = 0). e) DA in the absence of shared categories. In universal adaptation, we seek to transfer the knowledge of the "class-separability criterion" obtained from the source domain to the deployed target environment. More concretely, it is attributed to the segregation of data samples based on some expected characteristics, such as classification of objects according to their pose, color, or shape. To quantify this, we consider an extreme case where C s ∩ C t = ∅ (A→D in Office-31 with |C s | = 15, |C t | = 16). Allowing access to a single labeled target sample from each target category (here all target classes are private), we aim to obtain a one-shot recognition accuracy (assignment of a cluster index or class label using the one-shot samples as the cluster centers at F t • M (x t)) to quantify the above metric. We obtain 64.72% accuracy for the proposed framework as compared to 13.43% for UAN*. This strongly validates our superior knowledge-transfer capability, a result of the generative classifier with labeled negative samples compensating for the target-private categories. f) Dependency on the simulated negative dataset. Conceding that a combinatorial number of negative labels can be created, we evaluate the scalability of the proposed approach by varying the number of negative classes used in the Procurement stage, selecting 0, 4, 8, 64, 150 and 190 negative classes as reported on the X-axis of Fig. 5C. For the case of 0 negative classes, denoted as |C n | * = 0 in Fig. 5C, we synthetically generate random negative features at the intermediate level u, which are at least 3-sigma away from each of the positive source priors P (u s |c i). We then make use of these feature samples, along with positive image samples, to train a (|C s | + 1)-class Procurement model with a single negative class. The results are reported in Fig. 5C for the A→D task of the Office-31 dataset, with the category relationship in line with the setting in Table 7. We observe an acceptable drop in accuracy with a decrease in the number of negative classes, validating the scalability of the approach for large-scale classification datasets (such as ImageNet). Similarly, we also evaluated our framework by combining three or more images to form such negative classes. An increasing number of negative classes (choosing 3 source classes yields more combinations than choosing 2) results in under-fitting on the positive source categories (similar to Fig. 5C, where accuracy reduces beyond a certain limit because of over-regularization). We have introduced a novel source-free, universal domain adaptation framework, acknowledging practical domain adaptation scenarios devoid of any assumption on the source-target label-set relationship.
In the proposed two-stage framework, learning in the Procurement stage is found to be highly crucial, as it aims to exploit the knowledge of class-separability in the most general form with enhanced robustness to out-of-distribution samples. Besides this, success in the Deployment stage is attributed to the well-designed learning objectives effectively utilizing the source similarity criterion. This work can be served as a pilot study towards learning efficient inheritable models in future. In this section, we describe the architecture and the training process used for the Procurement and Deployment stages of our approach. a) Design of classifier D used in the Procurement stage. Keeping in mind the possibility of an additional domain shift after performing adaptation (e.g. encountering domain W after performing the adaptation A → D in Office-31 dataset), we design the classifier's architecture in a manner which allows for dynamic modification in the number of negative classes post-procurement. We achieve this by maintaining two separate classifiers during Procurement -D src, that operates on the positive source classes, and, D neg that operates on the negative source classes (see architecture in Table 5). The final classification score is obtained by computing softmax over the concatenation of logit vectors produced by D src and D neg. Therefore, the model can be retrained on a different number of negative classes post deployment (using another negative class classifier D neg), thus preparing it for a subsequent adaptation step to another domain. b) Negative dataset generation. We propose two methods to generate negative samples for the Procurement stage, and name the models trained subsequently as USFDA-a and USFDA-b. Here, we describe the two processes: n (USFDA-a). In the presence of domain knowledge (knowledge of the task at hand, i.e. object recognition using images), we generate the negative dataset D n by compositing images taken from different classes, as described in Algo. 2. We generate random masks using quadratic splines passing through a central image region (lines 3-9). Using these masks, we merge alternate regions of the images, both horizontally and vertically, ing in 4 negative images for each pair of images (lines 10-13). To effectively cover the inter-class negative region, we randomly sample image pairs from D s belonging to different classes, however we do not impose any constraint on how the classes are selected (for e.g. one can composite images from an animal and a non-animal class). We choose 5000 pairs for tasks on Office-31, Office-Home and VisDA datasets, and 12000 for ImageNet-Caltech. Since the input source distribution (p) is fixed we first synthesize a negative dataset offline (instead of creating them on the fly) to ensure finiteness of the training set. The training algorithm for USFDA-a is given in Algo. 1. horizontal splicing 7: s2 ← − quadratic_interpolation ([(x1, 0), (dx, dy), (x2, 223) ]) vertical splicing 8: m1 ← − mask region below s1 9: m2 ← − mask region to the left of s2 10: Ia ← − m1 * I1 + (1 − m1) * I2 11: Let λ cj and l cj be the maximum eigen value and the corresponding eigen vector of Σ cj, for each class c j 5:ũ r ∼ N (µ, Σ) n (USFDA-b): Here, we perform rejection sampling as given in Algorithm 3. Here, we obtain a sample from the global source prior P (u s) = N (u s |µ, Σ), where µ and Σ are the mean and covariance computed at u-space over all the positive source image samples. 
We reject the sample if it lies within the 3-sigma bound of any class (i.e. we keep the sample if it is far away from all source class-priors, N (µ ci, Σ ci)), as shown in lines 6 to 11 in Algo. 3. A sample selected in this fashion is expected to lie in an intermediate region between the source class priors. The two classes in the vicinity of the sample are then determined by obtaining the two most confident class predictions given by the classifier D src (lines 7 and 8). Using this pair of classes, we assign a unique negative class label to the sample which corresponds to the intermediate region between the pair of classes. Note, to learn the arrangement of positive and negative clusters, the feature extractor F s must be trained using negative samples. We do this by passing the sampled latent-simulated negative instance (ũ r) through the decoder-encoder pair, (i.e. D • F s • G(ũ r)), and enforcing the cross-entropy loss to classify them into the respective negative class. The training algorithm for USFDA-b is given in Algo. 4. c) Justification of L p. The cross-entropy loss on the likelihoods (referred as L p in the paper) not only enforces intra-class compactness but also ensures inter-class separability in the embedding space, u. Since the negative samples are only an approximation of future target private classes expected to be encountered during deployment, we choose not to employ this loss for them. Such a training procedure, eventually in a natural development of bias towards the confident positive source classes. This subsequently leads to the placement of source clusters in a manner which enables source-free adaptation (See Fig. 4). (ũr,ỹr) = sample latent-simulated negative instances from D where ks and kn are the index of ground-truth label ys and yn respectively, and σ is the softmax activation. 7:; Lv = |vs −vs|; Lu = |ur −ûr| 8: Update θF s, θD, θG by minimizing LCE, Lv, Lu, and Lp alternatively using separate optimizers. 10: if (iter % U pdateIter == 0) then 11: Recompute µc i, Σc i for each source class ci; Generate D e) Use of multiple optimizers for training. In the presence of multiple loss terms, we subvert a time-consuming loss-weighting scheme search by making use of multiple Adam optimizers during training. Essentially, we define a separate optimizer for each loss term, and optimize only one of the losses (chosen in a round robin fashion) in each iteration of training. We use a learning rate of 0.0001 during training. Intuitively, the higher order moment parameters in the Adam optimizer adaptively scale the gradients as required by the loss landscape. f) Label-Set Relationships. For Office-31 dataset in the UDA setting, we use the 10 classes shared by Office-31 and Caltech-256 as the shared label-set C. These classes are: back_pack, calculator, keyboard, monitor, mouse, mug, bike, laptop_computer, headphones, projector. From the remaining classes, in alphabetical order, we choose the first 10 classes as source-private (C s) classes, and the rest 11 as target-private (C t) classes. For VisDA, alphabetically, the first 6 classes are considered C, the next 3 as C s and the last 3 comprise C t. The Office-Home dataset has 65 categories, of which we use the first 10 classes as C, the next 5 for C s, and the rest 50 classes as C t. The details of the architecture used during the Deployment stage are given in Table 7. 
Note that the Feature Decoder G used during the Procurement stage, is not available during the Deployment stage, restricting complete access to the source data. Training during the Deployment stage. The only trainable component is the Feature Extractor F t, which is initialized from F s at Deployment. Here, the SSM is calculated by passing the target images through the network trained on source data (source model), i.e for each image x t, we calculateŷ = softmax(D • F s • M (x t)). Note that the softmax is calculated over all |C s | + |C n | classes. This is done by concatenating the outputs of D src and D neg, and then calculating softmax. Then, the SSM is determined by the exponential confidence of a target sample, where confidence is the highest softmax value in the categories in |C s |. We find that widely adopted standard domain adaptation datasets such as Office-31 and VisDA often share a part or all of their label-set with ImageNet. Therefore, to validate our method's applicability when initialized from a network pretrained on an unrelated dataset, we attempt to solve the adaptation task A→D in Office-31 dataset by pretraining the ResNet-50 backbone on Places dataset . In Table 3 it can be observed that our method outperforms even source-dependent methods (e.g. UAN , which is also initialized a ResNet-50 backbone pretrained on Places dataset). In contrast to our method, the algorithm in UAN involves ResNet-50 finetuning. Therefore, we also compare against a variant of UAN with a frozen backbone network, by inserting an additional feature extractor that operates on the features extracted from ResNet-50 (similar to F s in the proposed method). The architecture of the feature extractor used for this variant of UAN is outlined in Table 6. We observe that our method significantly outperforms this variant of UAN with lesser number of trainable parameters (see Table 3). C.2 SPACE AND TIME COMPLEXITY ANALYSIS. On account of keeping the weights of the backbone network frozen throughout the training process, and devoid of ad-hoc networks such as adversarial discriminator our method makes use of significantly lesser trainable parameters when compared to previous methods such as UAN (See Table 3). Devoid of adversarial training, the proposed method also has a significantly lesser total training time for adaptation: 44 sec versus 280 sec in UAN (for the A→D task of Office-31 and batch size of 32). Therefore, the proposed framework offers a much simpler adaptation pipeline, with a superior time and space complexity and at the same time achieves state-of-the-art domain adaptation performance across different datasets, even without accessing labeled source data at the time of adaptation (See Table 3). This corroborates the superiority of our method in real-time deployment scenarios. In addition to the T avg reported in Fig. 6 in the paper, we also compare the target-unknown accuracy T unk for UAN* and our pipeline. The are presented in Figure 7. Refer the link to the code provided in the submission for details of the chosen class labels for each adaptation scenario shown in Figure 7. Clearly, our method achieves a statistically significant improvement on most of the label-set VisDA, S→ R =20, =20, relationships over UAN. This demonstrates the capability of our algorithm to detect outlier classes more efficiently than UAN, which can be attributed to the ingeniously developed Procurement stage. 
In all our experiments (across datasets as in Tables 1 and 2 and across varied label-set relationships as in Fig. 6), we fix the hyperparameters as, α = 0.2, β = 0.1, |C n | = |Cs| C 2 and b +ve /b −ve = 1. As mentioned in Section 4.3, one can treat these hyperparameters as global constants. In Fig. 8 we demonstrate the sensitivity of the model to these hyperparameters. Specifically, in Fig. 8A we show the sensitivity of the adaptation performance, to the choice of |C n | during the Procurement stage, across a spectrum of label-set relationships. In Fig. 8B we show the sensitivity of the model to α and the batch-size ratio b +ve /b −ve. Sensitivity to β is shown in Fig. 5. Clearly, the model achieves a reasonably low sensitivity to the hyperparameters, even in the challenging source-free scenario. We additionally evaluate our method in the unsupervised closed set adaptation scenario. In Table 4 we compare with the closed set domain adaptation methods DAN , ADDA , CDAN and the universal domain adaptation method UAN . Note that, DAN, ADDA and CDAN rely on the assumption of a shared label space between the source and the target, and hence are not suited for a universal setting. Furthermore, all other methods require an explicit retraining on the source data during adaptation to perform well, even in the closed-set scenario. This clearly establishes the superiority of our method in the source-free setting. We observe in our experiments that the accuracy on the source samples does not drop as a of the partially generative framework. For the experiments conducted in Fig. 5C, we observe similar classification accuracy on the source validation set, on increasing the number of negative classes from 0 to 190. This effect can be attributed to a carefully chosen α = 0.2, which is deliberately biased towards positive source samples to help maintain the discriminative power of the model even in the presence of class imbalance (i.e. |C n | |C s |). This enhances the model's generative ability without compromising on the discriminative capacity on the positive source samples. In universal adaptation, we seek to transfer the knowledge of "class separability" obtained from the source domain to the deployed target environment. More concretely, it is attributed to the segregation of data samples based on an expected characteristics, such as classification of objects according to their pose, color, or shape etc. To quantify this, we consider an extreme case where C s ∩ C t = ∅ (A→D in Office-31 with |C s | = 15, |C t | = 16). Considering access to a single labeled target sample from each target category in C t = C t, which are denoted as x cj t, where j = 1, 2,.., |C t |, we perform one-shot Nearest-Neighbour based classification by obtaining the predicted class label asĉ t = argmin cj ||F t • M (x t) − F t • M (x cj t)|| 2. Then, the classification accuracy for the entire target set is computed by comparingĉ t with the corresponding ground-truth category. We obtain 64.72% accuracy for the proposed framework as compared to 13.43% for UAN* . A higher accuracy indicates that, the samples are inherently clustered in the intermediate feature level M • F t (x t) validating an efficient transfer of "class separability" in a fully unsupervised manner. We obtain a t-SNE plot at the intermediate feature level u for both target and source samples (see Figure 9), where the embedding for the target samples is obtained as u t = F t • M (x t) and the same for the source samples is obtained as u s = F s • M (x s). 
This is because we aim to learn domain-specific features, in contrast to domain-agnostic features, as a result of the restriction imposed by the source-free scenario ("cannot disturb placement of source clusters"). Firstly, we obtain compact clusters for the source categories as a result of the partially generative Procurement stage. Secondly, the target-private clusters are placed away from the source-shared and source-private clusters, as expected, as a result of the carefully formalized SSM weighting scheme in the Deployment stage. This plot clearly validates our hypothesis. For both the Procurement and Deployment stages, we make use of the machine with the specifications mentioned in Table 8. The architecture is developed and trained in Python 2.7 with PyTorch 1.0.0.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1gd0nEFwS
A novel unsupervised domain adaptation paradigm - performing adaptation without accessing the source data ('source-free') and without any assumption about the source-target category-gap ('universal').
One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training. In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision. Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones. We propose three distinct models under our active sampling framework. An adaptive method with extremely competitive multi-tasking performance. A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem. A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor-critic methods for optimizing the multi-tasking performance directly. We demonstrate in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance. Deep Reinforcement Learning (DRL) arises from the combination of the representation power of Deep learning (DL) BID10 BID3 ) with the use of Reinforcement Learning (RL) BID28 objective functions. DRL agents can solve complex visual control tasks directly from raw pixels BID6 BID12 BID24 BID11 BID23 BID13 BID29 BID30 BID2 BID26 BID7. However, models trained using such algorithms tend to be task-specific because they train a different network for different tasks, however similar the tasks are. This inability of the AI agents to generalize across tasks motivates the field of multi-task learning which seeks to find a single agent (in the case of DRL algorithms, a single deep neural network) which can perform well on all the tasks. Training a neural network with a multi-task learning (MTL) algorithm on any fixed set of tasks (which we call a multi tasking instance (MTI)) leads to an instantiation of a multi-tasking agent (MTA) (we use the terms Multi-Tasking Network (MTN) and MTA interchangeably). Such an MTA would possess the ability to learn task-agnostic representations and thus generalize learning across different tasks. Successful DRL approaches to the goal-directed MTL problem fall into two categories. First, there are approaches that seek to extract the prowess of multiple task-specific expert networks into a single student network. The Policy Distillation framework BID20 and Actor-Mimic Networks BID16 fall into this category. These works train k task-specific expert networks (DQNs) and then distill the individual task-specific policies learned by the expert networks into a single student network which is trained using supervised learning. While these approaches eventually produce a single a network that solves multiple tasks, individual expert networks must first be trained, and this training tends to be extremely computation and data intensive. The second set of DRL approaches to multi-tasking are related to the field of transfer learning. 
Many recent DRL works BID16 BID21 BID18 BID4 attempt to solve the transfer learning problem. Progressive networks BID21 is one such framework which can be adapted to the MTL problem. Progressive networks iteratively learn to solve each successive task that is presented; thus, they do not constitute a truly on-line learning algorithm. Progressive Networks instantiate a task-specific column network for each new task. This implies that the number of parameters they require grows by a large linear factor with each new task. This limits the scalability of the approach, with the results presented in that work being limited to a maximum of four tasks. Another important limitation of this approach is that one has to decide the order in which the network trains on the tasks. In this work we propose a fully on-line multi-task DRL approach that uses networks that are comparable in size to single-task networks. In particular, our contributions are the following: 1) We propose the first successful on-line multi-task learning framework which operates on MTIs that have many tasks with very visually different high-dimensional state spaces (see FIG0 for a visual depiction of the 21 tasks that constitute our largest multi-tasking instance). 2) We present three concrete instantiations of our MTL framework: an adaptive method, a UCB-based meta-learning method and an A3C-based meta-learning method. 3) We propose a family of robust evaluation metrics for the multi-tasking problem and demonstrate that they evaluate a multi-tasking algorithm in a more sensible manner than existing metrics. 4) We provide extensive analyses of the abstract features learned by our methods and argue that most of the features help in generalization across tasks because they are task-agnostic. 5) We report results on seven distinct MTIs: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance. Previous works have only reported results on a single MTI. Our largest MTI has more than double the number of tasks present in the largest MTI on which results have been published in the Deep RL literature BID20. 6) We hence demonstrate how hyper-parameters tuned for one MTI (an instance with six tasks) generalize to other MTIs (with up to 21 tasks).

In this section, we introduce the various concepts needed to explain our proposed framework and its particular instantiations. A large class of decision problems can be cast as multi-armed bandit problems, wherein the goal is to select the arm (or action) that gives the maximum expected reward. An efficient class of algorithms for solving the bandit problem is the class of UCB algorithms BID1 BID0. The UCB algorithms carefully track the uncertainty in estimates by giving exploration bonuses to the agent for exploring the lesser-explored arms. Such UCB algorithms often maintain estimates of the average reward that an arm gives, the number of times the arm has been pulled, and other exploration factors required to tune the exploration. In the case of non-stationary bandit problems, such average estimates are required to be non-stationary as well. For solving non-stationary bandit problems, discounted UCB style algorithms are often used BID8 BID22 BID5. In our UCB-based meta-learner experiments, we use the Discounted UCB1-Tuned+ algorithm BID8. One of the ways of learning optimal control using RL is by using (parametric) actor-critic algorithms BID9. These approaches consist of two components: an actor and a critic.
The actor is a parametric function $\pi_{\theta_a}(a_t|s_t)$ mapping from states to actions, according to which the RL agent acts ($\theta_a$ are the parameters of the policy/actor). A biased but low-variance sample estimate for the policy gradient is $\nabla_{\theta_a} \log \pi_{\theta_a}(a_t|s_t)\,(Q(s_t, a_t) - b(s_t))$, where $a_t$ is the action executed in state $s_t$, $Q(s_t, a_t)$ is the action value function, and $b(s_t)$ is a state-dependent baseline used for reducing the variance in policy gradient estimation. The critic estimates $Q(s_t, a_t)$ and possibly the state-dependent baseline $b(s_t)$. Often, $b(s_t)$ is chosen to be the value function of the state, $V(s_t)$. We can also get an estimate for $Q(s_t, a_t)$ as $r_{t+1} + \gamma V(s_{t+1})$, where $\gamma$ is the discounting factor. The critic is trained using temporal difference learning BID27 algorithms like TD BID28. The objective function for the critic is: DISPLAYFORM0. In all our experiments, we use the Asynchronous Advantage Actor-Critic algorithm (A3C) as our base RL algorithm.

In the domain of Multi-Task Learning, the goal is to obtain a single agent which can perform well on all the k tasks in a given fixed MTI. The performance metrics we use are presented in Section 3.1. While there might be transfer happening between the tasks while learning, it is assumed that the agent at every point of time has access to all the tasks in the multi-task instance. The MTA acts in an action space that is the union of the action spaces of the individual tasks. We assume that the input to the MTA is such that the state features are the same, or at the least that the same feature learning mechanism will work across all the tasks. In this work, we demonstrate the effectiveness of our MTA on games from the Arcade Learning Environment BID32. While these games are visually distinct, the same feature learning mechanism, namely a Convolutional Neural Network, works well across all the games. It is also important to note that the identity of the current task is not part of the input to the MTN during training on a task. In contrast, existing methods such as BID20 give the identity of the task that the MTN is being trained on as an input. Thus, our MTN must implicitly figure out the identity of the task just from the input features and the dynamics of the task.

Previous works BID16 define the performance of an MTA on an MTI as the arithmetic mean ($p_{am}$) of the normalized game-play scores of the MTA on the various tasks in the MTI. Let $\rho_i$ be the game-play score of an MTA in task i, and $h_i$ be the target score in task i (potentially obtained from other published work). We argue that $p_{am}$ is not a robust evaluation metric. An MTA can be as good as the target on all tasks and achieve $p_{am} = 1$. However, a bad MTA can also achieve $p_{am} = 1$ by being k (total number of tasks) times better than the target on one of the tasks and being as bad as getting a score of 0 in all the other tasks. We define a better performance metric: $q_{am}$ (Equation 1). It is better because the MTA needs to be good in all the tasks in order to get a high $q_{am}$. We also define $q_{gm}$, the geometric-mean based, and $q_{hm}$, the harmonic-mean based performance metrics. DISPLAYFORM0 We evaluate the MTAs in our work on $p_{am}$, $q_{am}$, $q_{gm}$, $q_{hm}$; a plausible reconstruction of these metrics is given at the end of this passage. TAB1 reports the evaluation on $q_{am}$. Evaluations on the other metrics are reported in Appendix E.

In this section, we introduce our framework for MTL by first describing a naive framework for on-line MTL in the first subsection and then presenting our approach as an extension to this framework in the second subsection.
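Equation 1 is elided above. Based on the surrounding description (each normalized score is clipped at 1, so excess performance on one task cannot compensate for failure on another), a plausible reconstruction of the four metrics is the following; the exact form in the original may differ.

% Assumed reconstruction of the evaluation metrics (not verbatim from the source)
\begin{align}
p_{am} &= \frac{1}{k}\sum_{i=1}^{k} \frac{\rho_i}{h_i},
\qquad
q_{am} = \frac{1}{k}\sum_{i=1}^{k} \min\!\left(\frac{\rho_i}{h_i},\, 1\right), \\
q_{gm} &= \left(\prod_{i=1}^{k} \min\!\left(\frac{\rho_i}{h_i},\, 1\right)\right)^{1/k},
\qquad
q_{hm} = \frac{k}{\sum_{i=1}^{k} \min\!\left(\frac{\rho_i}{h_i},\, 1\right)^{-1}}.
\end{align}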
To avoid the computational costs of training single-task expert networks, we assume that the MTA does not have access to expert networks' predictions. Previous approaches to MTL BID16 BID20 have been off-line in nature. Before we describe the frameworks, we outline how an on-line algorithm for MTL takes inputs from different tasks. When an MTN is trained using an on-line MTL algorithm, it must be trained by interleaving data/observations from all the tasks in the MTI. An on-line MTL algorithm must decide, once every few time steps, the next task on which the MTN is to be trained. We call such decision steps task decision steps. Note that task decision steps can be event-driven (e.g., at the end of every episode) or time-driven (e.g., once every k time steps).

BA3C is a simple on-line MTL algorithm. The MTN is a single A3C network which is trained by interleaving observations from k tasks in an on-line fashion. The task decision steps in BA3C occur at the end of every episode of training for the MTN. The next task for the MTN to train on is decided uniformly at random. The full training algorithm for BA3C is given as Algorithm 2 in Appendix C. We believe that the lackluster performance of BA3C (this has been reported in BID16 as well) is because of the probability distribution according to which the agent decides which task to train on next. In BA3C's case, this is the uniform probability distribution. We posit that this distribution is an important factor in determining the multi-tasking abilities of a trained MTA.

We demonstrate our framework with the help of the LSTM BID33 version of the A3C algorithm. Our framework is inspired by active learning principles BID17 BID31 BID25. We call our framework A4C - Active sampling A3C. The overarching idea of our work is simple and effective: a multi-task learning algorithm can achieve better performance with fewer examples if it is allowed to decide which task to train on at every task decision step (thus "actively sampling" tasks), as opposed to sampling tasks uniformly. More precisely, it is better if the agent decides to train on tasks which it is currently bad at. This decision can be made based on a heuristic or using another meta-learner. We explore two different approaches for the meta-learner: posing the meta-learning problem as a multi-armed bandit problem and as a full RL problem.

Algorithm 1 (abridged):
  n ← number of episodes used for estimating current performance in any task T_i
  s_i ← list of the last n scores that the multi-tasking agent scored during training on task T_i
  p_i ← probability of training on task T_i next
  amta ← the active sampling multi-tasking agent
  meta_decider ← an instantiation of our active-learning-based task decision framework
  for train_steps = 0 to MaxSteps do
    for i in {1, ..., |T|} do
      ...

The MTN architecture is the same as that of a single-task network. The important improvement is in the way the next task for training is selected. Instead of selecting the next task to train on uniformly at random, our framework maintains, for each task T_i, an estimate of the MTN's current performance ($\rho_i$) as well as the target performance ($h_i$). These numbers are then used to actively sample tasks on which the MTA's current performance is poor. In all the methods we ensure that all tasks continue to be selected with non-zero probability during learning.
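To make the framework concrete, the following is a minimal Python sketch of the active-sampling loop above, using the adaptive softmax decider described in the next section as the meta_decider. The interface name train_for_one_episode is taken from the algorithm listings; the Task objects, the amta object and the default values are illustrative (n = 10 and tau = 0.05 are the values reported in Appendix B), and the real implementation may differ.

import math
import random
from collections import deque

def a4c_train(tasks, targets, amta, max_episodes, n=10, tau=0.05):
    """Generic A4C-style loop: at every task decision step (end of an episode),
    sample the next task from a softmax over how far the agent lags behind each target.
    amta.train_for_one_episode(task) is an assumed interface returning the episode score."""
    k = len(tasks)
    scores = [deque(maxlen=n) for _ in range(k)]          # s_i: last n training scores per task
    for _ in range(max_episodes):
        m = []
        for i in range(k):
            rho_i = sum(scores[i]) / len(scores[i]) if scores[i] else 0.0
            m.append(max((targets[i] - rho_i) / targets[i], 0.0))   # m_i = (h_i - rho_i) / h_i
        exps = [math.exp(mi / tau) for mi in m]           # softmax with temperature tau
        total = sum(exps)
        p = [e / total for e in exps]                     # every task keeps non-zero probability
        j = random.choices(range(k), weights=p)[0]        # task decision step
        scores[j].append(amta.train_for_one_episode(tasks[j]))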
We emphasize that no single-task expert networks need to be trained for our framework; published scores from other papers (such as BID26 for Atari) or even human performance can be used as the target performance. In case the task-decision problem is cast as a full RL problem, there are various definitions of state and reward that can be chosen. In what follows, we present three different instantiations of our A4C framework with particular choices of states and rewards. We experimented with other choices for state and reward definitions, and we report the ones with the best performance in our experiments. There could be other agents under the A4C framework, some potentially better than our instantiations, with other choices of state and reward functions; the design of such agents is left as future work.

We refer to this method as A5C (Adaptive Active-sampling A3C). The task decision steps in A5C occur at the end of every episode of training of the MTN. Among the methods we propose, this is the only method which does not learn the sampling distribution p (Line 15, Algorithm 1). It computes an estimate of how well the MTN can solve task $T_i$ by calculating $m_i = \frac{h_i - \rho_i}{h_i}$ for each of the tasks. The probability distribution for sampling the next task (at task decision steps) is then computed as: DISPLAYFORM0, where $\tau$ is a temperature hyper-parameter. Intuitively, $m_i$ is a task-invariant measure of how much the current performance of the MTN lags behind the target performance on task $T_i$. A higher value of $m_i$ means that the MTN is bad at task $T_i$. By actively sampling from a softmax probability distribution with $m_i$ as the evidence, our adaptive method is able to make smarter decisions about where to allocate the training resources next (i.e., which task to train on next).

This method is referred to as UA4C (UCB Active-sampling A3C). The task decision steps in UA4C occur at the end of every episode of training for the MTN. In this method, the problem of picking the next task to train on is cast as a multi-armed bandit problem, with the different arms corresponding to the various tasks in the MTI being solved. The reward for the meta-learner is defined as $r = m_i$, where i is the index of the latest task that was picked by the meta-learner and on which the MTN was trained. The reason for defining the reward in this way is that it allows the meta-learner to directly optimize for choosing those tasks that the MTN is bad at. For our experiments, we used the Discounted-UCB1-Tuned+ algorithm BID8. We used a discounted-UCB algorithm because the bandit problem is non-stationary (the more a task is trained on, the smaller the rewards corresponding to the choice of that task become). We also introduced a tunable hyper-parameter $\beta$ which controls the relative importance of the bonus term and the average reward for a given task. Using the terminology from BID8, the upper confidence bound that the meta-learner uses for selecting the next task to train on is: DISPLAYFORM0

We refer to this method as EA4C (Episodic meta-learner Active-sampling A3C). The task decision steps in EA4C occur at the end of every episode of training for the MTN (see Appendix A for a version of EA4C which makes task-decision steps every few time steps of training). EA4C casts the problem of picking the next task to train on as a full RL problem.
The idea behind casting it in this way is that by optimizing for the future sum of rewards (which are defined based on the multi-tasking performance), EA4C can learn the right sequence in which to sample tasks and hence learn a good curriculum for training the MTN. The EA4C meta-learner consists of an LSTM-A3C-based controller that learns a task-sampling policy over the next tasks as a function of the previous sampling decisions and the distributions that the meta-learner has used for decision making.

Reward definition: The reward for picking task $T_j$ at meta-learner time step t is defined as: DISPLAYFORM0 where $m_i$ was defined in Section 4.2.1, L is the set of the worst three tasks according to $1 - m_i = \frac{\rho_i}{h_i}$ (normalized task performance), and $\lambda$ is a hyper-parameter. The first part of the reward function is similar to that defined for the UCB meta-learner in Section 4.2.2. The second part of the reward function ensures that the performance of the MTN improves on the worst three tasks and thus increases the multi-tasking performance in general.

State definition: The state for the meta-learner is designed to be descriptive enough for it to learn the optimal policy over the choice of which task to train the MTN on next. To accomplish this, we pass a 3k-length vector to the meta-learner (where k is the number of tasks in the MTI) which is a concatenation of 3 vectors of length k (a small sketch of this construction is given at the end of this passage). The first vector is a normalized count of the number of times each of the tasks has been sampled by the meta-learner since the beginning of training. The second vector is the identity of the task sampled at the previous task decision step, given as a one-hot vector. The third vector is the previous sampling distribution over the tasks that the meta-learner had used to select the task on which the MTN was trained, at the last task decision step. Our definition of the meta-learner's state is just one of many possible definitions. The first and the third vectors have been included to make the state descriptive. We included the identity of the latest task on which the MTN was trained so that the meta-learner is able to learn policies which are conditioned on the actual counts of the number of times a task was sampled.

The A3C MTNs we use have the same size and architecture as a single-task network (except when our MTL algorithms need to solve MT7, which has 21 tasks), and these architectural details are the same across the different MTL algorithms that we present. The experimental details are meticulously documented in Appendix B. All our MTAs used a single 18-action shared output layer across all the different tasks in an MTI, instead of different output heads per task as used in BID20. Appendix I contains our empirical argument against using such different-output-head MTAs. It is important to note that previous works BID16 BID20 have only reported results on a single MTI. TAB3 (in Appendix B) contains the description of the seven MTIs presented to MTAs in this work. Hyper-parameters for all multi-tasking algorithms in this work were tuned on only one MTI: MT1. If an MTI consists of k tasks, then all MTNs in this work were trained on it for only k × 50 million time steps, which is half of the combined training time for all the k tasks put together (task-specific agents were trained for 100 million time steps in BID26). All the target scores in this work were taken from TAB5 of BID26.
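The following is a minimal sketch of the 3k-length EA4C meta-learner state described above; the function and argument names are illustrative assumptions, not taken from the paper's code.

import numpy as np

def ea4c_meta_state(sample_counts, last_task, last_distribution):
    """Build the 3k-length EA4C meta-learner state:
    (1) normalized sampling counts, (2) one-hot of the last sampled task,
    (3) the sampling distribution used at the last task decision step."""
    k = len(sample_counts)
    counts = np.asarray(sample_counts, dtype=np.float32)
    normalized_counts = counts / max(counts.sum(), 1.0)   # avoid division by zero early in training
    one_hot_last = np.zeros(k, dtype=np.float32)
    one_hot_last[last_task] = 1.0
    return np.concatenate([normalized_counts, one_hot_last,
                           np.asarray(last_distribution, dtype=np.float32)])

# Example: a 6-task MTI where task 2 was sampled last from a uniform distribution
state = ea4c_meta_state([3, 5, 2, 4, 1, 0], last_task=2,
                        last_distribution=[1 / 6] * 6)   # state has length 18 = 3 * 6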
We reiterate that for solving the MTL problem, it is not necessary to train single-task expert networks to arrive at the target scores; one can use scores published in other works. We conducted experiments on seven different MTIs with the number of constituent tasks varying from 6 to 21. All hyper-parameters were tuned on MT1, an MTI with 6 tasks. We demonstrate the robustness of our framework by testing all our algorithms on seven MTIs, including a 21-task MTI (MT7), which has more than double the number of tasks of any previous multi-tasking work BID20. A description of the MTIs used in this work is provided in TAB3 in Appendix B. We have performed three experiments to demonstrate the robustness of our method to the target scores chosen. These experiments and the supporting details are documented in Appendix G.

We now describe our findings from the general game-play experiments as summarized in TAB1. We observe that all our proposed models beat the BA3C agent by a large margin and obtain more than double the performance obtained by the BA3C agent. Among the proposed models, on MT1 (where the hyper-parameters were tuned), A5C performs the best. However, the performance of UA4C and EA4C is only slightly lower than that of A5C. We attribute A5C's relatively higher performance to the fact that there are many hyper-parameters to tune in the UA4C and EA4C methods, unlike A5C where only the temperature hyper-parameter had to be tuned. We tuned all the important hyper-parameters for UA4C and EA4C; however, our granularity of tuning was perhaps not very fine, which could be the reason for their slightly lower performance. The UA4C agent, however, generalizes better than the A5C agent on the larger MTIs (MT5 & MT6). Also, the performance obtained by EA4C is close to that of A5C and UA4C in all the multi-tasking instances. The MTI MT4 has been taken from BID16. On MT4, many of our agents are consistently able to obtain a performance close to $q_{am} = 0.9$. It is to be noted that Actor-Mimic networks are only able to obtain $q_{am} = 0.79$ on the same MTI. The most important test of generalization is the 21-task instance (MT7). EA4C is by far the best performing method for this instance. This clearly shows the hierarchy of generalization capabilities of our proposed methods. At the first level, the EA4C MTA can learn task-agnostic representations which help it perform well even on large-scale MTIs like MT7. Note that the hyper-parameters for all the algorithms were tuned on MT1, which is a 6-task instance. That the proposed methods can perform well on much larger instances with visually very different constituent tasks without retuning hyper-parameters is evidence of a second level of generalization: the generalization of the hyper-parameter setting across multi-tasking instances.

An important component of our framework is the set of target scores for the different tasks. There are two concerns that one might have regarding the use of target scores: 1) access to target scores implies access to trained single-task agents, which defeats the purpose of on-line multi-task learning; 2) it is unclear how to train such an active-sampling-based agent on new tasks that have never been solved. We aim to address both concerns regarding the use of target scores in our proposed framework. We reiterate that access to target scores does not imply access to trained single-task agents.
We would expect that any researcher who uses our framework would use published resources as the source of target scores, rather than training single-task networks for each of the tasks. In some cases, one might want to build an MTA prior to the existence of agents that can solve each of the single tasks. In such a case, it would be impossible to access target scores because the tasks in question have never been solved. In such cases, we propose to use a doubling-of-targets paradigm (demonstrated using Doubling UCB-based Active-sampling A3C (DUA4C) in Algorithm 7) to come up with rough estimates for the target scores, and demonstrate that our doubling-target paradigm can result in impressive performance. The doubling-target paradigm maintains an estimate of the target score for each of the tasks that the MTA needs to solve. As soon as the MTA achieves a performance that is greater than or equal to the estimated target, the estimate for the target is doubled. The idea is that, in some sense, the agent can keep improving until it hits a threshold, and then the threshold is doubled. All the hyper-parameters found by tuning UA4C on MT1 were retained; none of the hyper-parameters were retuned. This thus represents a setup which isn't very favorable for DUA4C. Figure 4 depicts the evolution of the raw performance (game-score) of the DUA4C agent trained with doubling target estimates instead of single-task networks' scores. The performance of DUA4C on different MTIs is contained in TAB2. Results on other metrics, along with training curves on various MTIs, are shown in Appendix K. We observe that even in this unfavorable setup, the performance of DUA4C is impressive. The performance could possibly improve if hyper-parameters were tuned for this specific paradigm/framework.

This section analyzes the reasons why our MTL framework A4C performs much better than the baseline (BA3C). Based on the experiments that follow, we claim that it is the task-agnostic nature of the abstract features learned in this work which allows our proposed algorithms to perform very well. An MTA could potentially perform well at the different tasks in an MTI due to the sheer representational power of a deep neural network, by learning task-specific features without generalizing across tasks. We empirically demonstrate that this is not the case for the agents proposed in our work. The experiments in this section analyze the activation patterns of the output of the LSTM controller. We call a neuron task-agnostic if it is equally responsible for the performance on many of the tasks. Before we show the task-agnostic nature of the neurons in our A4C agents, we present an intuition as to how our agents are able to overcome the problem of catastrophic forgetting. We first note that in all the agents defined under the A4C framework, a task has a higher probability of getting sampled if the $m_i$ for that task is higher. Forgetting is avoided in our agents by virtue of the sampling procedure used by the meta-learners. Say $m_1$ is the largest among all the $m_i$'s. This causes task 1 to get sampled more. Since the agent is training on task 1, it gets better at it. This leads to $m_1$ getting smaller. At some point, if $m_2$ (some other task) becomes larger than $m_1$, task 2 will start getting sampled more. At some later point, if performance on task 1 degrades due to the changes made to the network, then $m_1$ will again become larger and thus task 1 will again start getting sampled more.
It can now be argued that the performance estimates ($m_i$) could be stale for some tasks if they don't get sampled. While it is true that we don't update the score of a task until it is sampled again, we need to keep in mind that the sampling of tasks is done from a distribution across tasks. As a result, there is still a finite probability of every task getting sampled. This is analogous to exploration in RL. Note that if the probability of sampling such a task were so low that it would practically be impossible to sample it again, this would imply that performance on the task was already great. What we have observed through comprehensive experimentation is that once such good performance has been achieved on some task, degradation does not happen.

In this set of experiments, our agents trained on MT1 are executed on each of the constituent tasks for 10 episodes. A neuron is said to fire for a time step if its output has an absolute value of 0.3 or more. Let $f_{ij}$ denote the fraction of time steps for which neuron j fires when tested on task i. Neuron j fires for task i if $f_{ij} \ge 0.01$. We chose this low threshold because there could be important neurons that detect rare events. FIG4 demonstrates that for A4C, a large fraction of the neurons fire for a large subset of tasks and are not task-specific. It plots neuron index versus the fraction of time steps for which that neuron fires, for each task. The neurons have been sorted first by $|\{i : f_{ij} \ge 0.01\}|$ and then by $\sum_i f_{ij}$. Neurons to the left of the figure fire for many tasks, whereas those towards the right are task-specific. The piece-wise constant line in the figure counts the number of tasks in which a particular neuron fires, with the leftmost part signifying 6 tasks and the rightmost part signifying zero tasks. Appendix H contains the analysis for all MTIs and methods.

We also introduce a way to analyze multi-tasking agents without using any thresholds. We call this method the turn-off analysis. Here, we force the activation of one of the neurons in the LSTM output to 0 and then observe the change in the performance on individual tasks with that neuron switched off. This new score is then compared with the original score of the agent when none of the neurons were switched off, and an absolute percentage change in the scores is computed. These percentage changes are then normalized for each neuron, and thus a tasks-versus-neurons matrix A is obtained. The variance of column i of A gives a score for the task-specificity of neuron i. We then sort the columns of A in increasing order of variance and plot a heatmap of the matrix A. We conclude from Figure 6 that A4C agents learn many non-task-specific abstract features which help them perform well across a large range of tasks. Our experiments demonstrate that A4C agents learn many more task-agnostic abstract features than the BA3C agent. Specifically, observe how uniformly pink the plot corresponding to the UA4C agent is, compared to the BA3C plot.

Figure 6: Turn-off analysis heat-maps for all the agents. For BA3C, since the agent scored 0 on one of the games, normalization along the neuron was done only across the other 5 games.

We propose a framework for training MTNs which, through a form of active learning, succeeds in learning to perform on-line multi-task learning. The key insight in our work is that by choosing the task to train on, an MTA can choose to concentrate its resources on tasks in which it currently performs poorly. While we do not claim that our method solves the problem of on-line multi-task reinforcement learning definitively, we believe it is an important first step. Our method is complementary to many
of the existing works in the field of multi-task learning, such as BID20 and BID16. These methods could potentially benefit from our work. Another possible direction for future work could be to explicitly force the learned abstract representations to be task-agnostic by imposing objective-function-based regularizations. One possible regularization could be to force the average firing rate of a neuron to be the same across the different tasks.

In the EA4C method introduced in Section 4.2.3, the task-decision steps, which also correspond to one training step for the meta-learner, happen at the end of one episode of training on one of the tasks. For three of the multi-tasking instances (MT1, MT2 and MT3) that we experimented with, the total number of training steps was 300 million. Also, the average episode length of tasks in these instances is of the order of 1000 steps. Hence, the number of training steps for the meta-learner in EA4C is of the order of $3 \times 10^5$. This severely restricts the size of the neural network which is used to represent the policy of the meta-learner. To alleviate this problem we introduce a method called FA4C: Fine-grained meta-learner Active-sampling A3C. The same architecture and training procedure from EA4C are used for FA4C, except for the fact that task decision steps happen after every N steps of training the multi-tasking network, instead of at the end of an episode. The value of N was fixed to be 20. This is the same as the value of n used for n-step returns in our work as well as BID26. Observe that when the number of training steps for the multi-tasking network is 300 million, the number of training steps for the meta-learner is now of the order of 15 million. This allows the use of larger neural networks for the meta-learner policy as compared to EA4C. Since we used an LSTM in the neural network representing the multi-tasking agent's policy, we stored the state of the LSTM cells at the end of these n = 20 training steps for each of the tasks. This allows us to resume executing any of the tasks after training on one of them for just 20 steps, using these cached LSTM state cells. We now describe the reward function and state construction for FA4C.

Reward Function: Since the task decision steps for this method happen after every 20 steps of training the multi-tasking network, the meta-learner needs to be rewarded in a way that evaluates its 20-step task selection policy. It makes sense to define this reward to be proportional to the performance of the MTN during those 20 time steps, and inversely proportional to the target performance during those 20 time steps. These target scores have to be computed differently from those used by the other methods introduced in this paper, since the scores now correspond to performance over twenty time steps and not over the entire episode. The target scores for a task in FA4C can be obtained by summing the score of a trained single-task agent over twenty time steps and averaging this score over the length of the episode.
Concretely, if the single-task agent is executed for k episodes, each episode i is of length $l_i$, $1 \le i \le k$, and $r_{i,j}$ denotes the reward obtained by the agent at time step j in episode i, where $1 \le i \le k$, $1 \le j \le l_i$, then the averaged 20-step target score is given by (let $x_i = l_i / 20$): DISPLAYFORM0 This design of the target score has two potential flaws: 1) A task could be very rewarding in certain parts of the state space (and hence during a particular period of an episode) whereas it could be inherently sparsely rewarding over other parts. It would thus make sense to use different target scores for different parts of the episode. However, we believe that in an expected sense our design of the target score is feasible. 2) Access to such fine-grained target scores is hard to get. While the target scores used in the rest of the paper are simple scalars that we took from other published work BID26, for getting these $h_{fg}$'s we had to train single-task networks and obtain these fine-grained target scores. Hopefully, such re-training for targets would not be necessary once a larger fraction of researchers start open-sourcing not only their code but also their trained models. The overall reward function is the same as that for EA4C (defined in Equation 2) except for one change: $m_i$ is now defined as: DISPLAYFORM1 where $h_{i,fg}$ is the target score defined in Equation 3 for task $T_i$ and $\rho_{i,fg}$ is the score obtained by the multi-tasking agent in task $T_i$ over a duration of twenty time steps. State Function: The state used for the fine-grained meta-learner is the same as that used by the episodic meta-learner. Our experimental results show that while FA4C is able to perform better than random on some multi-tasking instances, on others it doesn't perform very well. This necessitates better experimentation and design of fine-grained meta-controllers for multi-task learning.

We first describe the seven multi-tasking instances with which we experiment. We then describe the hyper-parameters of the MTA which are common across all 5 methods (BA3C, A5C, UA4C, EA4C, FA4C) that we have experimented with in this paper. In the subsequent subsections we describe the hyper-parameter choices for A5C, UA4C, EA4C and FA4C. The seven multi-tasking instances we experimented with are documented in TAB3. The first three instances are six-task instances, meant to be the smallest instances. MT4 is an 8-task instance. It has been taken from BID16 and depicts the 8 tasks on which BID16 experimented. We experimented with this instance to ensure that some kind of comparison can be carried out on a set of tasks on which other results have been reported. MT5 and MT6 are 12-task instances and demonstrate the generalization capabilities of our methods to medium-sized multi-tasking instances. Note that even these multi-tasking instances have two more tasks than any other multi-tasking work (Policy Distillation BID20 reports results on a 10-task instance; however, we decided not to experiment with that set of tasks because the result was demonstrated with the help of a neural network which is 4 times the size of a single-task network, whereas all of our results for 6-, 8- and 12-task instances use a network which has the same size as a single-task network). Our last set of experiments is on a 21-task instance. This is in some sense a holy grail of multi-tasking, since it consists of 21 extremely visually different tasks.
The network used for this set of experiments is only twice the size of a single-task network. Hence, the MTA still needs to distill the prowess of 10.5 tasks into the number of parameters used for modeling a single-task network. Note that this multi-tasking instance is more than twice the size of that of any other previously published work in multi-tasking.

In this sub-section we document the experimental details regarding the MTN that we used in our experiments. We used the LSTM version of the A3C network and trained it using the async-rms-prop algorithm. The initial learning rate was set to $10^{-3}$ (found after hyper-parameter tuning over the set $\{7 \times 10^{-4}, 10^{-3}\}$) and it was decayed linearly over the entire training period to a value of $10^{-4}$. The value of n in the n-step returns used by A3C was set to 20. This was found after hyper-parameter tuning over the set {5, 20}. The discount factor $\gamma$ for the discounted returns was set to $\gamma = 0.99$. Entropy regularization was used to encourage exploration, as in the original A3C algorithm. The hyper-parameter which trades off optimizing for the entropy against the policy improvement is $\beta$. $\beta = 0.02$ was separately found to give the best performance for all the active sampling methods (A5C, UA4C, EA4C, FA4C) after hyper-parameter tuning over the set {0.003, 0.01, 0.02, 0.03, 0.05}. The best $\beta$ for BA3C was found to be 0.01. The six-task instances (MT1, MT2 and MT3) were trained for 300 million steps. The eight-task instance (MT4) was trained for 400 million steps. The twelve-task instances (MT5 and MT6) were trained for 600 million steps. The twenty-one-task instance was trained for 1.05 billion steps. Note that these training times were chosen to ensure that each of our methods was at least 50% more data-efficient than competing methods such as off-line policy distillation. All the models on all the instances, except the twenty-one-task instance, were trained with 16 parallel threads. The models on the twenty-one-task instance were trained with 20 parallel threads. Training and evaluation were interleaved. It is to be noted that while during the training period active sampling principles were used to improve multi-tasking performance, during the testing/evaluation period the multi-tasking network executed on each task for the same duration of time (5 episodes, each episode capped at a length of 30000). For the smaller multi-tasking instances (MT1, MT2, MT3 and MT4), after every 3 million training steps, the multi-tasking network was made to execute on each of the constituent tasks of the multi-tasking instance it is solving for 5 episodes each. Each such episode was capped at a length of 30000 to ensure that the overall evaluation time was bounded above. For the larger multi-tasking instances (MT5, MT6 and MT7) the exact same procedure was carried out for evaluation, except that evaluation was done after every 5 million training steps. The lower-level details of the evaluation scheme used are the same as those described in BID26. The evolution of this average game-play performance with training progress has been demonstrated for MT1 in FIG3. Training curves for other multi-tasking instances are presented in Appendix D. We used a low-level architecture similar to that of BID26, which in turn uses the same low-level architecture as prior work. The first three layers are convolutional layers with the same filter sizes, strides and padding as in BID26. The convolutional layers each have 64 filters.
These convolutional layers are followed by two fully connected (FC) layers and an LSTM layer. A policy and a value function are derived from the LSTM outputs using two different output heads. The number of neurons in each of the FC layers and the LSTM layer is 256. As in standard A3C, the actor and critic share all but the final layer. Each of the two functions, the policy and the value function, is realized with a different final output layer, with the value function output having no non-linearity and the policy having a softmax output non-linearity to model the multinomial distribution.

We now describe the hyper-parameters of the meta-task-decider used in each of the methods proposed in the paper. The algorithm for A5C is specified in Algorithm 3. The temperature parameter $\tau$ in the softmax function used for task selection was tuned over the set {0.025, 0.033, 0.05, 0.1}. The best value was found to be 0.05. The hyper-parameter n was set to 10. The hyper-parameter l was set to 4 million. The Discounted UCB1-Tuned+ algorithm from BID8 was used to implement the meta-task-decider. The algorithm for training UA4C agents is given as Algorithm 4. We tuned the discount factor $\gamma$ used for the meta-decider (over the set {0.8, 0.9, 0.99}) and the scaling factor for the bonus $\beta$ (over the set {0.125, 0.25, 0.5, 1}). The best hyper-parameters were found to be $\gamma = 0.99$ and $\beta = 0.25$. The meta-learner network was also a type of A3C network, with one meta-learner thread being associated with one multi-task learner thread. The task that the MTN on thread i trained on was sampled according to the policy of the meta-learner $M_i$, where $M_i$ denotes the meta-learner executing on thread i. The meta-learner was also trained using the A3C algorithm with async-rms-prop. The meta-learner used 1-step returns instead of the 20-step returns that the usual A3C algorithm uses. The algorithm for training EA4C agents is given as Algorithm 5. We tuned the $\beta_{meta}$ for the entropy regularization encouraging exploration in the meta-learner's policy over the set {0, 0.003, 0.01} and found the best value to be $\beta_{meta} = 0$. We also experimented with $\gamma_{meta}$, the discounting factor for the RL problem that the meta-learner is solving. We tuned it over the set {0.5, 0.8, 0.9} and found the best value to be $\gamma_{meta} = 0.8$. The initial learning rate for the meta-learner was tuned over the set $\{5 \times 10^{-4}, 10^{-3}, 3 \times 10^{-3}\}$, and $10^{-3}$ was found to be the optimal initial learning rate. Similar to the multi-tasking network, the learning rate was linearly annealed to $10^{-4}$ over the number of training steps for which the multi-tasking network was trained. We extensively experimented with the architecture of the meta-learner. We experimented with feed-forward and LSTM versions of EA4C and found that the LSTM versions comprehensively outperform the feed-forward versions in terms of the multi-tasking performance ($q_{am}$). We also comprehensively experimented with wide, narrow, deep and shallow networks. We found that increasing depth beyond a point (≥ 3 fully connected layers) hurt the multi-tasking performance. Wide neural networks (both shallow and deep ones) were unable to perform as well as their narrower counterparts. The number of neurons in a layer was tuned over the set {50, 100, 200, 300, 500}, and 100 was found to be the optimal number of neurons in a layer.
The number of fully-connected layers in the meta-learner was tuned over the set {1, 2, 3}, and 2 was found to be the optimal depth of the meta-controller. The best-performing architecture of the meta-learner network consists of two fully-connected layers with 100 neurons each, followed by an LSTM layer with 100 LSTM cells, followed by one linear layer each modeling the meta-learner's policy and its value function. We experimented with dropout layers in the meta-learner architecture but found no improvement, and hence did not include them in the final architecture with which all experiments were performed. All the hyper-parameters for FA4C were tuned in exactly the same way that they were tuned for EA4C. The task decision steps for FA4C were time-driven (taken at regular intervals of training the multi-tasking network) rather than event-driven (happening at the end of an episode, as in the EA4C case). While the interval corresponding to the task decision steps can in general be different from the n used for n-step returns, we chose both of them to be the same, with n = 20. This was done to allow for an easier implementation of the FA4C method. Also, 20 was large enough that one could find meaningful estimates of 20-step cumulative returns without the estimate having high variance, and small enough that the FA4C meta-learner was allowed to make a large number of updates (when the multi-tasking networks were trained for 300 million steps, as in MT1, MT2 and MT3, the FA4C meta-learner was trained for roughly 15 million steps).

Algorithm 1 contains pseudo-code for training a generic active sampling method proposed in this work. This appendix contains specific instantiations of that algorithm for all the methods proposed in this work. It also contains an algorithm for training the baseline MTA proposed in this work.

Algorithm 2 (BA3C, abridged):
  for i in {1, ..., k} do ...
  for train_steps = 0 to t do
    score_j ← bsmta.train_for_one_episode(T_j)

Algorithm 3 (A5C, abridged):
  function MULTITASKING(SetOfTasks T)
    h_i ← target score in task T_i; this could be based on expert human performance or even published scores from other technical works
    n ← number of episodes used for estimating the current average performance in any task T_i
    l ← number of training steps for which a uniformly random policy is executed for task selection; at the end of l training steps, the agent must have trained on ≥ n episodes for all tasks T_i ∈ T
    t ← total number of training steps for the algorithm
    s_i ← list of the last n scores that the multi-tasking agent scored during training on task T_i
    p_i ← probability of training on an episode of task T_i next
    τ ← temperature hyper-parameter of the softmax task-selection non-parametric policy
    amta ← the active sampling multi-tasking agent
    for i in {1, ..., k} do ...
    for train_steps = 0 to t do
      if train_steps ≥ l then ...
      score ← amta.train_for_one_episode(T_j)
      m_j ← max((h_j − score)/h_j, 0)

Algorithms 4 and 5 (UA4C and EA4C, abridged) additionally maintain, for each task i, a discounted sum of rewards X_i and its mean, with updates of the form X_i ← 0 ∀i, X_i ← γX_i ∀i, m_j ← max((h_j − score)/h_j, 0), n_j ← n_j + 1 and X_i ← X_i / n_i ∀i.

Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT2 (6 tasks).
Agents in these experiments were trained for 300 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

Figure 9: Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT3 (6 tasks). Agents in these experiments were trained for 300 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

This multi-tasking instance has 12 tasks. Although this set of tasks is medium-sized, the multi-tasking network has the same size as those used for MT1, MT2 and MT3, as well as a single-task network.

Figure 10: Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT4 (8 tasks). Agents in these experiments were trained for 400 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

FIG0: Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT5 (12 tasks). Agents in these experiments were trained for 600 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

This multi-tasking instance has 12 tasks as well.

FIG0: Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT6 (12 tasks). Agents in these experiments were trained for 600 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

This multi-tasking instance has 21 tasks. This is a large-sized set of tasks. Since a single network now needs to learn the prowess of 21 visually different Atari tasks, we roughly doubled the number of parameters in the network, compared to the networks used for MT1, MT2 and MT3 as well as a single-task network. We believe that this is a fairer large-scale experiment than those done in BID20, wherein, for a multi-tasking instance with 10 tasks, a network which has four times as many parameters as a single-task network is used.

FIG0: Comparison of performance of BA3C, A5C, UA4C, EA4C and FA4C agents along with task-specific A3C agents for MT7 (21 tasks). Agents in these experiments were trained for 1.05 billion time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

In this appendix, we document the performance of our methods on all four performance metrics ($p_{am}$, $q_{am}$, $q_{gm}$, $q_{hm}$) that have been proposed in Section 4.1. $q_{am}$ is a robust evaluation metric because the agent needs to be good in all the tasks in order to get a high score on this metric. In TAB5 we can observe a few important trends:
1. The adaptive method is a hard baseline to beat. The very fact that tasks are being sampled in accordance with the lack of performance of the multi-tasking agent on them means that the MTA benefits directly from such a strategy.
2. The UCB-based meta-learner generalizes fairly well to medium-sized instances but fails to generalize to the largest of our multi-tasking instances: MT7.
3. It is our meta-learning method EA4C which generalizes the best to the largest multi-tasking instance, MT7.
This could be because the UCB and adaptive controllers are more rigid compared to the learning-based method. TAB6 demonstrates the need for the evaluation metrics that we have proposed. Specifically, it can be seen that in the case of MT4, the non-clipped average performance is best for BA3C. However, this method is certainly not a good MTL algorithm. This happens because the uniform sampling ensures that the agent trains on the task of Enduro a lot (as can be seen in the corresponding training curves). Owing to high performance on a single task, $p_{am}$ ends up concluding that BA3C is the best multi-tasking network. We defined the $q_{gm}$ and $q_{hm}$ metrics because, in some sense, the $q_{am}$ metric can still get away with being good on only a few tasks and not performing well on all the tasks. In this limited sense, $q_{hm}$ is probably the best choice of metric to understand the multi-tasking performance of an agent. We can observe that while A5C's performance was slightly better than EA4C's performance on MT4 according to the $q_{am}$ metric, the agents are much more head-to-head as evaluated by the $q_{hm}$ metric.

To demonstrate that our framework is robust to the use of different target scores, we performed two targeted experiments. In the first experiment, we swapped out the use of single-task scores as target scores for scores obtained by human testers. These human scores were taken from previously published work. We experimented with UA4C on MT1 in this subsection. Consequently we refer to the use of human scores in UA4C as HUA4C. FIG0 depicts the evolution of the raw performance (game-score) of the HUA4C agent trained with human scores as targets instead of single-task networks' scores (FIG0: Training curve for HUA4C, when human scores are used as targets for calculating the rewards). The performance of HUA4C on all the metrics proposed in this paper is contained in TAB9. All the hyper-parameters found by tuning UA4C on MT1 were retained; none of the hyper-parameters were re-tuned. This represents a setup which isn't very favorable for HUA4C. We observe that even in this setup, the performance of HUA4C is impressive. However, it is unable to learn at all for two of the tasks and has at best mediocre performance in three others. We believe that the performance could possibly improve if hyper-parameters were tuned for this specific paradigm/framework.

To demonstrate that the impressive performance of our methods is not conditioned on the use of single-task performance as target scores, we decided to experiment with twice the single-task performance as the target scores. In some sense, twice the single-task performance represents a very optimistic estimate of how well an MTA can perform on a given task. All experiments in this sub-section are performed with A5C. Since the hyper-parameters for all the methods were tuned on MT1, understandably, the performance of our agents is better on MT1 than on MT2 or MT3. Hence we picked the multi-tasking instances MT2 and MT3 to demonstrate the effect of doubling the target scores used by A5C. We chose the twice-single-task-performance regime arbitrarily and merely wanted to demonstrate that a change in the target scores does not adversely affect our methods' performance. Note that we did not tune the hyper-parameters for the experiments in this sub-section; such tuning could potentially improve the performance further. It can be seen that in every case, the use of twice the single-task performance as target scores improves the performance of our agents.
In some cases, such as MT3, there was a large improvement.

In this section, we present the results from the firing analyses done for all the MTIs in this work. The method used to generate the following graphs has been described in Section 7.2. It can be seen from the following graphs that the active sampling methods (A5C, UA4C and EA4C) have a large fraction of neurons that fire for a large fraction of time in at least half the number of tasks in the MTI, whereas BA3C has a relatively higher fraction of task-specific neurons. This alludes to the fact that the active sampling methods have been successful in learning useful features that generalize across different tasks, hence leading to better performance. Neuron-Firing Analysis on MT1.

FIG1: Training Curves for the DUA4C agent on MT1 (6 tasks). The horizontal line represents the Single Task Agent's score. Agents in these experiments were trained for 300 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

Figure 27: Training Curves for the DUA4C agent on MT2 (6 tasks). The horizontal line represents the Single Task Agent's score. Agents in these experiments were trained for 300 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

The horizontal line represents the Single Task Agent's score. Agents in these experiments were trained for 400 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.

Figure 29: Training Curves for the DUA4C agent on MT5 (12 tasks). The horizontal line represents the Single Task Agent's score. Agents in these experiments were trained for 600 million time steps and required half the data and computation that would be required to train the task-specific agents (STA3C) for all the tasks.
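The curves above track the DUA4C agent, whose only difference from UA4C is the doubling-target estimate described earlier. A minimal Python sketch of that update is given below; the function and variable names are illustrative assumptions, not taken from the paper's code.

def update_target_estimates(targets, scores):
    """Doubling-target paradigm (DUA4C): whenever the agent's current score on a task
    reaches its estimated target, double that target estimate."""
    for i, score in enumerate(scores):
        if score >= targets[i]:
            targets[i] *= 2.0
    return targets

# Example: the agent reached the estimate on task 0, so its target doubles
targets = update_target_estimates([100.0, 500.0, 80.0], scores=[120.0, 300.0, 40.0])
# targets is now [200.0, 500.0, 80.0]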
B1nZ1weCZ
Letting a meta-learner decide the task to train on for an agent in a multi-task setting improves multi-tasking ability substantially
Numerous machine reading comprehension (MRC) datasets involve manual annotation, requiring enormous human effort, and hence the size of such datasets remains significantly smaller than the size of the data available for unsupervised learning. Recently, researchers proposed a model for generating synthetic question-and-answer data from large corpora such as Wikipedia. This model is utilized to generate synthetic data for training an MRC model before fine-tuning it using the original MRC dataset. This technique shows better performance than other general pre-training techniques such as language modeling, because the characteristics of the generated data are similar to those of the downstream MRC data. However, it is difficult to obtain high-quality synthetic data comparable to human-annotated MRC datasets. To address this issue, we propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data involving two advanced techniques: dynamically determining K answers, and pre-training the question generator on the answer-containing sentence generation task. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods, and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data. Experimental results show that our approach outperforms existing generation methods and increases the performance of state-of-the-art MRC models across a range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T without any architectural modifications to the original MRC model.

Machine reading comprehension (MRC), which finds an answer to a given question from given paragraphs called context, is an essential task in natural language processing. With the use of high-quality human-annotated datasets for this task, such as SQuAD-v1.1, SQuAD-v2.0, and KorQuAD, researchers have proposed MRC models that often surpass human performance on these datasets. These datasets commonly involve finding a short snippet within a paragraph as an answer to a given question. However, they require a significant amount of human annotation to create pairs of a question and its relevant answer from a given context. Often the size of the annotated data is relatively small compared to that of the data used in other unsupervised tasks such as language modeling. Hence, researchers often rely on the two-phase training method of transfer learning, i.e., pre-training the model using large corpora from another domain in the first phase, followed by fine-tuning it using the main MRC dataset in the second phase. Most state-of-the-art models for MRC tasks involve such pre-training methods. Peters et al. (2018) present a bidirectional contextual word representation method called ELMo, which is pre-trained on a large corpus, and its learned contextual embedding layer has been widely adapted to many other MRC models. Devlin et al. (2019a) show that pre-training with a masked language model on a large corpus and then fine-tuning on a downstream dataset results in significant performance improvements. However, pre-training on another domain task and then fine-tuning on a downstream task may suffer from performance degradation, depending on which pre-training task is used in the first phase. For example, prior work shows that the pre-training task of next sentence classification decreases performance on downstream MRC tasks.
To handle this problem, generating synthetic data similar to the those of a downstream task is crucial to obtain a properly pre-trained model. Recently, researchers have studied a model for generating synthetic MRC data from large corpora such as Wikipedia. This is essentially a form of transfer learning, by training a generation model and using this model to create synthetic data for training the MRC model, before fine-tuning on the downstream MRC dataset. suggest a two-stage synthesis network that decomposes the process of generating question-answer pairs into two steps, generating a fixed number (K) of answers conditioned on the paragraph, and question generation conditioned on the paragraph and the generated answer. Devlin et al. (2019b) introduced a pre-training technique for the question generator of this method by pretraining on the generation of next-sentence that follows the paragraph. However, choosing a fixed number (K) of candidate answers from each paragraph will lead to missing candidates if K is too small, and will lead to having lower-quality candidates if K is too big. Moreover, the next sentence generation task is not conditioned on the answer, despite the answer being a strong conditional restriction for question generation task. Also, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and hence is not the ideal candidate for pre-training question generation. To address these issues, we propose Answer-containing Sentence Generation (ASGen), a novel method for a synthetic data generator with two novel processes, dynamically predicting K answers to generate diverse questions and pre-training the question generator on answer-containing sentence generation task. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on downstream MRC datasets after training on the generated data. Experimental show that our approach outperforms existing generation methods, increasing the performance of the state-ofthe-art MRC models across a wide range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T without any architectural modifications to the MRC model. This section discusses the details of our proposed ASGen method. ASGen consists of a BERT-based generative model (BertGen) and answer-containing sentence generation pre-training (AS). First, we will describe how BertGen model generates synthetic data from Wikipedia. Next, we will explain the novel components of our methods and how we pre-trained the question generator in BertGen based on them. BertGen encodes paragraphs in Wikipedia with two separate generation networks, the answer generator and the question generator. Answer Generator. As shown in Fig. 2-, we generate the number of answer candidates K for a given context without the question by applying a fully connected feed-forward layer on the contextual embedding of classification token " [CLS] ". To make the contextual embeddings and to predict answer spans, we utilize a BERT (a) encoder (Fig. 2 -BERT Encoder-A). Depending on the predicted number K, we select the K top candidate answer spans from the context. As shown in Fig. 2 -, we use the K selected candidate answer spans as input to the question generator. Question Generator. Next, as shown in Fig. 2 -, we generate a question conditioned on each answer predicted from the answer generator. 
Specifically, we pass as input to a BERT encoder the context and an indicator for the answer span location in the context (Fig. 2 -BERT Encoder-Q). Next, a Transformer decoder generates the question word-by-word based on the encoded representation of the context and the answer span. For pre-training such a question generator on an answer-containing sentence generation task, we exclude the answer-containing sentence from the original context and train the model to generate the excluded sentence given the modified context and the answer span as input. Finally, we generate questions and answers from a large corpus, e.g., all the paragraphs in Wikipedia in this paper. After generating such data, we train the MRC model on the generated data in the first phase and then fine-tune on the downstream MRC dataset (such as SQuAD) in the second phase. In this paper, we use BERT as the default MRC model, since it exhibits state-of-the-art performance in many MRC datasets. The most natural method for humans to create a question-answer pair from a given context is to select the answer first and then create a corresponding question. In this situation, we conjecture that a human is more likely to choose as an answer a phrase that is "answer-like", such as keyphrases, nouns, dates, names, etc. There may be several answers in the context that are likely to be selected by humans as answers, especially if the context is lengthy or if it contains multiple nouns, dates, names, etc. Fig. 4, to see these characteristics, we examine the distribution of the number of answers in the SQuAD dataset and hypothesize that there exists an underlying pattern in the number of answers that occur in a context. The conventional method to generate multiple answers from a context is to draw a fixed number (K) of answers. However, this approach can generate low-quality answers if K is too big, and it can impact the number and diversity of the generated answers if K is too small. Therefore, we predict the number of answers K in a given context W = {w t} T 0 using regression as, {w where T is the number of word tokens in the context with position 0 reserved for classification token '[CLS]', and f k represents a fully connected unit with two hidden layers that have hidden dimensions equal to H and 1, respectively, where H is the hidden dimension of BERT Encoder-A. To calculate the score s i for start index i of a predicted answer span, we compute the dot product of the encoder output with a trainable start vector S. For each start index i, we calculate the span end index score e i,j for end index j in a similar manner with a trainable end vector E, but conditioned on i, i.e., where f s represents a fully connected layer with hidden dimension H and ⊕ indicates the concatenation operation. For training, we use the mean squared error loss between K and ground-truth number of answers. We also use cross-entropy loss on the s i,e i,j and ground truth start/end of the answer span for each token. Predicting the number of answers and predicting the span are jointly trained to minimize the sum of their respective losses. During inference, we choose the K top answer spans with the highest score summation of start index score and end index score, i.e., The K selected answer spans A span k are then given to the question generator as input in the form of an indication of the answer span location. In order to generate questions conditioned on different answers that may arise in a context, we generate a question for each of the K answers. 
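As a rough illustration of the answer generator described above, the sketch below regresses the number of answers K from the [CLS] embedding, scores spans with a trainable start vector and a start-conditioned end scorer, and keeps the K top spans. The hidden sizes, ReLU choices, random parameters and the span-length cap are assumptions made for the example; the actual model uses a trained BERT encoder and learned heads.

```python
import numpy as np

rng = np.random.default_rng(0)
T, hidden = 32, 64                       # toy sizes; BERT would use hidden=768
H = rng.normal(size=(T + 1, hidden))     # contextual embeddings, index 0 = [CLS]

# --- number-of-answers head: 2-layer MLP on the [CLS] embedding (regression) ---
W1, b1 = rng.normal(size=(hidden, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
k_pred = (np.maximum(H[0] @ W1 + b1, 0.0) @ W2 + b2).item()
K = max(1, int(round(k_pred)))

# --- span scoring: start scores from a trainable vector S, end scores
# --- conditioned on the start index through a small FC layer f_s ---
S_vec = rng.normal(size=hidden)
start_scores = H[1:] @ S_vec                                   # s_i, shape (T,)
E_vec = rng.normal(size=hidden)
Wf = rng.normal(size=(2 * hidden, hidden))

def end_scores(i):
    # e_{i,j}: concatenate token i's embedding with each candidate end token j
    cond = np.concatenate(
        [np.repeat(H[1 + i][None, :], T, axis=0), H[1:]], axis=1)
    return np.maximum(cond @ Wf, 0.0) @ E_vec                   # shape (T,)

# --- pick the K top spans by summed start + end score (end >= start) ---
candidates = []
for i in range(T):
    e = end_scores(i)
    for j in range(i, min(i + 10, T)):                          # cap span length
        candidates.append((start_scores[i] + e[j], i, j))
top_k_spans = sorted(candidates, reverse=True)[:K]
print(K, [(i, j) for _, i, j in top_k_spans])
```

During training, K would be supervised with a mean squared error loss and the span scores with cross-entropy, as stated above.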
Devlin et al. (2019b) previously proposed to pre-train this generation model with an unsupervised task that generates the next sentence following a given paragraph to improve generation performance. We identify several issues with this approach. The final question generation task has the form of sentence generation given an answer and a context, while the next-sentence generation has no answer component. The next-sentence generation task is not conditioned on the answer, despite the answer being a strong conditional constraint for the question generation task. Also, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and hence is not the ideal candidate for pre-training question generation. To address these issues, we modify the context to exclude the sentence containing our previously generated answer and pre-train our generator on the task of generating this excluded answercontaining sentence, conditioned on the answer and the modified context. Specifically, we exclude answer-containing sentence S ans while leaving the answer and modify the original context D to D ans as Note that we change S ans to not exclude the answer-containing sentence in the case of fine-tuning on the question generation, i.e., Afterwards, we pass the previously generated answer to the sequence-to-sequence generation model as a segmentation encoding M ans that identifies the answer part within the context, i.e., where m 0 and m 1 indicate trainable vectors corresponding to segmentation id 0 and 1, respectively. Here we tag the segmentation id for each word in the context as 0 and each word in the answer as 1. A * B indicates the operation of concatenating vector A for B many times. Next, we generate answer-containing sentence embedding W g = {w g t} T 0 using a Transformer sequence-to-sequence model (the encoder part is initialized with BERT) as Finally, we calculate the loss of the generation model with cross-entropy over generated sentence words, i.e., where y indicates a ground-truth one-hot vector of the answer-containing sentence word (the question word in the case of fine-tuning), D is the vocabulary size, and E ∈ R d×D represents a word embedding matrix shared between the BERT Encoder-Q and the Transformer decoder. In this manner, we pre-train the question generation model using a task similar to the final task of conditionally generating the question from a given answer and a context. Pre-training Dataset. To build the dataset for answer-containing sentence generation tasks (AS) and the synthetic MRC data for pre-training the downstream MRC model, we collect all paragraphs from the entire English Wikipedia dump (Korean Wikipedia dump for KorQuAD) and synthetically generate questions and answers on these paragraphs. We apply extensive filtering and cleanup to only retain high quality collected paragraphs from Wikipedia. Detailed pre-processing steps for obtaining the final Wikipedia dataset can be found in the supplemental material. Using the answer generator in ASGen (BertGen+AS), we generate 43M answer-paragraph pairs (Full-Wiki) from the final Wikipedia dataset for pre-training on answer-containing sentence generation. For ablation studies on pre-training approaches, we also sample 2.5M answer-paragraph pairs (Small-Wiki) from Full-Wiki and 25K answer-paragraph pairs (Test-Wiki) to evaluate the pretraining method. 
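The construction of one answer-containing sentence generation (AS) pre-training example can be sketched as follows. This is a simplified sketch under stated assumptions: the sentence splitter is naive, and since the text does not fully specify how the retained answer is attached to the modified context D_ans, it is simply appended here.

```python
import re

def build_as_pretraining_example(context, answer):
    """Construct one answer-containing sentence generation (AS) example.

    Returns (source_tokens, segment_ids, target_sentence): the context with the
    answer-containing sentence removed (but the answer kept), segment ids that
    mark answer tokens with 1, and the removed sentence as the generation target.
    """
    # naive sentence split; the paper does not specify the exact tokenizer
    sentences = re.split(r'(?<=[.!?])\s+', context)
    ans_idx = next(i for i, s in enumerate(sentences) if answer in s)
    target_sentence = sentences[ans_idx]

    # modified context D_ans: drop the answer-containing sentence, keep the answer
    # (here the answer is simply appended; this re-attachment is an assumption)
    modified = [s for i, s in enumerate(sentences) if i != ans_idx] + [answer]
    source_tokens = " ".join(modified).split()

    # segmentation encoding M_ans: 1 for answer tokens, 0 for all other tokens
    answer_tokens = set(answer.split())
    segment_ids = [1 if tok in answer_tokens else 0 for tok in source_tokens]
    return source_tokens, segment_ids, target_sentence

ctx = ("The tower is 324 metres tall. It was built in 1889. "
       "It is the tallest structure in Paris.")
print(build_as_pretraining_example(ctx, "324 metres"))
```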
Finally, using the question generator in ASGen (BertGen+AS), we generate one question for each answer-paragraph pair in Full-Wiki and create the final synthetic MRC data containing 43M triples of a paragraph, its question and its answer. Benchmark Datasets. In most MRC datasets, a question and a context are represented as a sequence of words, and the answer span (indices of start and end words) is annotated from the context words based on the question. Among these datasets, we choose SQuAD as the primary benchmark dataset for question generation, since it is the most popular human-annotated MRC dataset. SQuAD-v1.1 consists of crowd-sourced questions and answers based on contexts from Wikipedia articles. We compare our question generation capability with existing question generation methods such as UniLM . For fair comparison, we split the training set of SQuAD-v1.1 data into our own training and test sets, and keep the original development set as our dev set, as previously done in ,, and. We call this dataset as Test Split1 1. We also evaluate on the reversed dev-test split, called Test Split2. To evaluate the effect of generated synthetic MRC data, we evaluate the fine-tuned MRC model on the downstream MRC dataset after training on the generated synthetic data. We perform this on SQuAD-v1.1 and SQuAD-v2.0 . We also evaluate on KorQuAD which is another dataset created with the same procedure as SQuAD-v1.1 for Korean language. To show that our generated data is useful for other MRC datasets, we fine-tune and test the MRC model on QUASAR-T which is large-scale MRC dataset, after training on the synthetic data that generated from SQuAD-v1.1. Implementation Details. For the answer generator, we use BERT (a) Comparison of the Pre-training Method. We compare our question generation pre-training method, which is pre-training on answer-containing sentence generation task (AS), with a method from Devlin et al. (2019b), which is pre-training on next-sentence generation task (NS), and with a method from , which only trains question generation on final MRC dataset. We reproduced these methods on BertGen as they were described in their original work for comparison. Note that'BertGen+AS' is equivalent to'ASGen'. We generate synthetic data from Wikipedia using these approaches which are trained on the target downstream MRC datasets except for QUASAR-T. In the case of QUASAR-T, we use synthetic data which is generated by ASGen trained on SQuADv1.1. To check the effectiveness of our method on downstream MRC tasks, we evaluate our generated data on SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T by training state-of-the-art models (BERT and BERT+CLKT 2) on generated data followed by fine-tuning on the train set for each dataset. The structure of'BERT + CLKT' model is the same as that of original BERT except that the model is pre-trained for the Korean language. Due to the absence of common pre-trained BERT for Korean, we used this model as a baseline to demonstrate the effectiveness of our method. Dynamic Answer Prediction. We conducted an experiment to demonstrate the performance of our method in generating the number of answers in a given context. As shown in Table 1, in the case of fixed K, the mean absolute error from the ground-truth K gt is the smallest at K pred = 5 and the values are 1.92 and 0.99 for Test Split1 and Test Split2, respectively. 
Thresholding on the sum of the start and end logits with a fixed threshold value which minimizes the mean absolute error in an error of 2.31 and 1.12, respectively in the two splits. In contrast, our answer generator generates a more appropriate number of answers than the fixed K approach, by reducing the mean absolute error between the ground-truth K gt and the prediction K pred of 1.24 and 0.76, respectively for the two splits. Question Generation. To evaluate our question generator, we fine-tune the model on both Test Split1 and Test Split2, after pre-training answer-containing sentence generation on Full-Wiki. As shown in Table 2, ASGen outperforms existing methods by 0.9 BLEU-4 score on Split2, 24.7 for ASGen vs. 23.8 for UniLM. Moreover, our final question generation model, ASGen (Large), outperforms existing methods by a large margin in BLEU-4 score on both splits, 25.4 for ASGen (Large) vs. 22.1 for UniLM for Split1 and 28.0 for ASGen (Large) vs. 23.8 for UniLM for Split2. To show the effectiveness of our answer-containing sentence pre-training task (AS), we compare between various pre-training tasks. As shown in Table 3, AS is shown to perform better than NS, e.g. 21.5 vs. 18.2 and 24.7 vs. 19.7 in the two splits, respectively. Note that conditioning on a given answer has only a small effect on AS, e.g. 19.4 vs 19.5. This implies the performance gain is largely due to pre-training on the answer-containing sentence generation task rather than conditioning on a given answer. We also compare the BLEU-4 scores between before and after applying AS on other existing question generation models. We reproduce and use the official code of. As shown in Table 4, AS consistently improves the performance of other question generation models with no architecture changes or parameter tuning. We conduct experiments by training MRC models on the synthetic data generated by ASGen from Wikipedia before fine-tuning the model on the downstream dataset to show the effectiveness of our synthetic data generation. For each dataset, the MRC model is pre-trained on the corresponding generated synthetic data and fine-tuned on the downstream data. As shown in Table 5, the MRC model pre-trained on the synthetic data generated by ASGen shows an improvement of 1.9 F1 score on SQuAD-v1.1, 4.0 F1 score on SQuAD-v2.0, and 0.5 F1 score on KorQuAD from the state-of-the-art baseline models. Moreover, using the synthetic data generated from ASGen shows better performance than using the synthetic data generated from'BertGen+NS' on both SQuAD-v1.1 and SQuAD-v2.0 downstream data. Effects of MRC and Synthetic Data Size. Fig. 5 shows the effects of synthetic data with respect to the size of the synthetic and real MRC data. In Fig. 5-(a), where we fix the size of synthetic data as 43M, the F1 score of MRC model pre-trained on the synthetic data generated by ASGen consistently outperforms that of BertGen+NS. In particular, performance difference becomes apparent for a small size of real MRC data, while the performance gap diminishes for a large size. Such a gap may become insignificant for a sufficient size of real MRC data, but for the current size of SQuAD data (87K in total) AS still improves the performance. As shown in Fig. 5-(b), we also conducted experiments by training the MRC model using a different amounts of generated synthetic data for the same number of iterations, while using the full size of real SQuAD data. The total number of training steps for all data sizes is kept the same as that of 10M synthetic data. 
A larger size of generated data consistently gives better performance. Transfer Learning to Other Datasets. In this experiment, we first fine-tune ASGen using SQuADv1.1, and using synthetic data generated by this ASGen, we train BERT MRC model. Afterwards, we fine-tune BERT for the downstream MRC task using QUASAR-T, in order to verify that the data generated in this manner is useful for other MRC datasets. QUASAR-T has two separate datasets, one with short snippets as context, and the other with long paragraphs as context. As shown in Table 6, training with our synthetic data is shown to improve the F1 score by 2.2 and 1.7 for the two cases, respectively. Comparison of Question Generation. We qualitatively compare the generated questions after pretraining with NS and AS to demonstrate the effectiveness of our method. For the correct answer "49.6%" as shown in the first sample in Table 7, NS omitted "Fresno", which is a critical word to make the question specific, while AS's question does not suffer from this issue. Note that the word "Fresno" occurs in the answer-containing sentence. This issue also occurs in the second sample, where NS uses the word "available" rather than the more relevant words from the answer-containing sentence, but AS uses many of these words such as "most" and "popular" to generate contextually rich questions. Also, the question from NS asks about "two" libraries, while the answer has "three" libraries, showing the lack of sufficient conditioning on the answer. The third sample also shows that AS draws more context-related questions than NS by including the exact subject "TARDIS" to use for the corresponding answer in a similar vein. Machine Reading Comprehension. For MRC tasks, a large number of datasets have been proposed, most often focused on finding an answer span for a question from a given paragraph. Popular and fully human-annotated datasets include SQuAD-v1.1 , SQuAD-v2.0 , KorQuAD , and HotpotQA . However, these datasets are relatively small with around 100K samples each, which is far smaller than those datasets used for unsupervised tasks such as language modeling. Question Generation. Question generation methods have been actively studied for various purposes including data augmentation in question answering. proposed an attention-based model for question generation by encoding sentence-level as well as paragraph-level information. introduced a query-based generative model to jointly solve question generation and answering tasks. separately encoded the answer and the rest of the paragraph for proper question generation. utilized a gated self-attention encoder with a max-out unit to handle long paragraphs. Our proposed method (AS) can further improve the question generation quality of these methods by pre-training them with an answer-containing sentence generation task. Transfer Learning. Pre-training methods have been increasingly popular in natural language processing to obtain contextualized word representations. Open-GPT , BERT (a), XLNet (, and UniLM use a Transformer module to learn different styles of language models on a large dataset fol- Table 7: Examples from SQuAD-v1.1 dev set demonstrating generated questions. We compare our method (lowed by fine-tuning on the downstream task. While our approach is similar to these approaches, our pre-training task for question generator generates answer-containing sentences to learn better representations for the question generation task. Synthetic Data Generation. 
show that neural models generate better answers than using off-the-shelf tools for selecting named entities and noun phrases. proposed to separate the answer generation and the question generation. This model generates questions conditioned on generated answers, and then they evaluate the quality of the synthetic data by training an MRC model with them before fine-tuning on SQuAD. Inspired by the observations from previous studies, we improved the performance of answer generation and question generation by using a newly designed models as well as a novel pre-training technique. We propose two advanced training methods for generating high-quality and diverse synthetic data for MRC. First, we dynamically choose the K top answer spans from an answer generator and then generate the sentence containing the corresponding answer span as a pre-training task for the question generator. Using the proposed methods, we generate 43M synthetic training samples and train the MRC model before fine-tuning on the downstream MRC dataset. Our proposed method outperforms existing questions generation methods achieving new state-of-the-art on SQuAD question generation and consistently shows the performance improvement for the state-of-the-art models on SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T datasets without any architectural modification to the MRC model. We also remove all pages with less than 500 characters, as these pages are often low-quality stub articles, which removes a further 16% of the articles. We remove all "meta" namespace pages such as talk, disambiguation, user pages, portals, etc. as these often contain irrelevant text or casual conversations between editors. In order to extract usable text from the wiki-markup format of the Wikipedia articles, we remove extraneous entities from the markup including table of contents, headers, footers, links/URLs, image captions, IPA double parentheticals, category tables, math equations, unit conversions, HTML escape codes, section headings, double brace templates such as info-boxes, image galleries, HTML tags, HTML comments and all other tables. We then split the cleaned text from the pages into paragraphs, and remove all paragraphs with less than 150 characters or more than 3500 characters. Paragraphs with the number of characters between 150 to 500 were sub-sampled such that these paragraphs make up 16.5% of the final dataset, as originally done for the SQuAD dataset. Since the majority of the paragraphs in Wikipedia are rather short, of the 60M paragraphs from the final 2.4M articles, our final Wikipedia dataset contains 8.3M paragraphs. We also evaluate the question generation model from on another data split. We call this as Test-Split3. Test Split3 is obtained by dividing the original development set in SQuAD-v1.1 into two equal halves randomly and choosing one of them as development set and the other as test set while retaining the train set in SQuAD-v1.1. As shown in Table 8, the question generation model from improves the BLEU-4 score on Test-Split3 by 1.3 (w.r.t the reproduced score). As shown in Table 9, in the case of downstream MRC (EM/F1) which we dicussed in Section 4, for SQuAD v1.1 and SQuAD v2.0, we selected 5 model checkpoints from the same pretraining at varying numbers of pre-training steps. We then fine-tune each of these models on the final downstream data 3 times each, picked the best performing model and reported it's score. For KorQuAD, only 1 finetuning was performed with the final pre-trained model.
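The paragraph filtering described above can be sketched as follows; the 500-character article threshold, the 150 to 3500 character paragraph bounds and the 16.5% share of short paragraphs are taken from the text, while the paragraph delimiter and the assumption that wiki-markup cleanup and meta-page removal happen upstream are illustrative.

```python
import random

def filter_wikipedia_paragraphs(articles, short_frac=0.165, seed=0):
    """Paragraph filtering roughly following the preprocessing described above.

    `articles` is an iterable of (title, cleaned_text) pairs that have already
    had wiki-markup, tables, templates and meta pages stripped.
    """
    rng = random.Random(seed)
    short, regular = [], []
    for title, text in articles:
        if len(text) < 500:                      # drop stub articles
            continue
        for para in text.split("\n\n"):          # assumed paragraph delimiter
            n = len(para)
            if n < 150 or n > 3500:              # drop too-short / too-long paragraphs
                continue
            (short if n < 500 else regular).append(para)

    # sub-sample short (150-500 character) paragraphs so that they make up
    # ~16.5% of the final dataset, as done for the original SQuAD collection
    target_short = int(len(regular) * short_frac / (1.0 - short_frac))
    rng.shuffle(short)
    return regular + short[:target_short]
```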
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lFsREYPS
We propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data for machine reading comprehension.
Deep neural networks (DNNs) dominate current research in machine learning. Due to massive GPU parallelization, DNN training is no longer a bottleneck, and large models with many parameters and high computational effort lead common benchmark tables. In contrast, embedded devices have very limited capability. As a result, both model size and inference time must be significantly reduced if DNNs are to achieve suitable performance on embedded devices. We propose a soft quantization approach to train DNNs that can be evaluated using pure fixed-point arithmetic. By exploiting the bit-shift mechanism, we derive fixed-point quantization constraints for all important components, including batch normalization and ReLU. Compared to floating-point arithmetic, fixed-point calculations significantly reduce computational effort, whereas low-bit representations immediately decrease memory costs. We evaluate our approach with different architectures on common benchmark data sets and compare with recent quantization approaches. We achieve new state-of-the-art performance using 4-bit fixed-point models with an error rate of 4.98% on CIFAR-10. Deep neural networks (DNNs) are state of the art in many machine learning challenges, pushing recent progress in computer vision, speech recognition and object detection. However, the greatest results have been accomplished by training large models with many parameters using large amounts of training data. As a result, modern DNNs show an extensive memory footprint, and high-precision floating-point multiplications are especially expensive in terms of computation time and power consumption. When deployed on embedded devices, the complexity of DNNs is necessarily restricted by the computational capability. Therefore, efforts have been made to modify DNNs to better suit specific hardware instructions. This includes both the transfer from floating-point to fixed-point arithmetic and the reduction in bit-size. This process is termed fixed-point quantization, and especially low-bit representations simultaneously reduce memory cost, inference time, and energy consumption. A survey is given in. Furthermore, ternary-valued weights or even binary-valued weights allow replacement of many multiplications with additions. However, most quantization approaches do not fit the common structure of modern DNNs. State-of-the-art architectures (such as ResNet, DenseNet, or MobileNetV2) consist of interconnected blocks that combine a convolution or fully-connected layer, a batch normalization layer and a ReLU activation function. Each block can be optionally extended by a pooling layer, as shown in Figure 1. Since both convolution and fully-connected layers perform weighted sums, we summarize the two as a Linear component. In contrast to the block structure, recent quantization approaches focus on the Linear component while preserving floating-point batch normalization (BN) layers. This is crucial, since BN layers are folded into the preceding layer after training and consequently destroy its fixed-point representation. Even when performed separately, channel-wise floating-point multiplications make a pure fixed-point representation impossible. Furthermore, many quantization methods strictly binarize activations, which only works for very large models. In this paper, we propose a soft quantization approach to learn pure fixed-point representations of state-of-the-art DNN architectures.
Thereby, we follow the block structure and transfer all individual components into fixed-point representations before combining them appropriately. We follow the same approach as and formulate bit-size dependent fixed-point constraints for each component before transferring these constraints into regularization terms. To the best of our knowledge, we are the first to provide a soft quantization approach to learn pure fixed-point representations of DNNs. We extensively validate our approach on several benchmark data sets and with state of the art DNN architectures. Although our approach is completely flexible in bit-size, we test two special cases: • A pure fixed-point model with 4-bit weights and 4-bit activations which performs explicitly well, outperforming the floating-point baseline in many cases. • A model with ternary-valued weights and 4-bit activations that can be evaluated using additions, bit shifts and clipping operations alone (no multiplications needed). Considering that the optional pooling layer is of minor importance for quantization 2, three main components remain in each block: convolution and fully-connected layers (Linear), batch normalization (BN) and the non-linear activation function ReLU. Since each component differs in its computation task, different quantization strategies can be followed. Table 1 gives an overview of recent approaches including the respective bit-sizes during test time. Components encoded with 32 bits remain in high-precision floating-point arithmetic. use binarized weights during both forward and backward passes, but update their high-precision counterparts instead, which are kept during the whole optimization process. Thus, stochastic gradient descent is able to converge by doing small steps in the direction of the negative gradients. increased the model capacity by combining ternary-valued weights and a real-valued step-size. Since the step-size is a non-trainable parameter, its value is optimized by approximating the euclidean distance between the scaled ternary-weights and their high-precision counterparts. amplifed the approach of by discretizing both weights and activations to ±1 during the forward pass. During the backward pass, the straight-through estimator is used to estimate the local gradient of the rounding function. This way, the upstream gradient can be passed on during backpropagation. Recently, approaches have been proposed that operate on fixed-point quantization functions with learnable function parameters. investigated signed quantization functions whose uniform step-size can be learned for a given number of bits. extended this approach and learned both step-size and dynamic range of symmetric and uniform quantization functions. proposed optimization constraints to limit the overall memory costs. proposed a soft quantization approach to train DNNs with multimodal fixed-point weights. Soft quantization means to use high-precision weights during the training, but simultaneously promote posterior distributions that are well qualified for post-quantization. Another soft quantization approach by investigated regularization terms for discrete activations. proposed a Bayesian method to train DNNs with quantizing priors that in multimodal weight distributions. However, high-precision BN layers are still a critical factor for success. 
The channel-wise floatingpoint multiplications within BN significantly increase the model capacity, especially for low-bit quantization, but at the same time eliminate the possibility of pure fixed-point arithmetic on dedicated hardware. introduced a hard quantization framework to completely train DNNs in low-bit fixed-point arithmetic, including fixed-point BN layers. focus only on computations and variables within the training procedure. Since BN layers operate on different statistics after training, the fixed-point representation during evaluation stays unclear. use 32-bit fixed-point multiplications which makes it impossible to fold the BN layer into the preceding Linear layer after quantization. In this work, we complete the soft quantization approach and learn pure fixed-point representations on state of the art DNN architectures. We claim the following contributions: • We follow the block structure in Figure 1 and formulate fixed-point constraints for each component that can be used during training. The fixed-point BN constraint is an especially novel and important feature in soft quantization. The ing fixed-point block can be used in nearly all state of the art DNN architectures. • We propose an exponentially increasing regularization parameter to control the model capacity immediately at training time. A set of clipping functions improves numerical stability and accelerates training time. • We demonstrate that our fixed-point model outperforms other quantization approaches on common benchmark data sets and varying model sizes, from small-to large-scale. Fixed-point numbers consist of an N -bit -signed or unsigned -integer and a global scaling factor. The scaling factor is always a power of two whose exponent indicates the position of the decimal point: Thus, multiplications with powers of two in bit shift operations, which can significantly accelerate computations on adequate fixed-point hardware . In order to evaluate DNNs using pure fixed-point arithmetic, all individual layers must be converted into fixed-point representations and put together in a meaningful way. Depending on the layer type, different conditions must be fulfilled and we describe those conditions by several regularization terms R i. In the end, the training objective is a composition such that the actual learning task, which is described by the cost function C, and the fixed-point constraints R i are solved simultaneously during training: A quantization function maps an input signal x to a smaller set of discrete values x. If certain properties apply, the quantization function can be expressed using basic operations like scaling, rounding, and clipping. Since different layer types deal with different value ranges, we use three types of quantization functions: In this notation, · rounds to the closest integer and clip (x, min, max) truncates all values to the domain [min, max]. The uniform quantization functions are parameterized by their uniform step-sizes ∆ and the number of available bits N. Obviously, the fixed-point representation is fulfilled if and only if the step-size is a power of two, hence ∆ = 2 −f, f ∈ Z. In this case, the scaling operation is replaced by a bit shift. Therefore, we use an additional logarithmic quantizer that rounds all input values to the closest power of two. That way, we are able to formulate appropriate fixed-point constraints for all important layers. Convolution and fully-connected layers are summarized as Linear components in Figure 1 since both perform weighted sums 3. 
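The three quantization functions can be sketched as follows. The symmetric signed range, the unsigned range for activations, and rounding to the closest power of two in the log domain are our reading of the description above rather than a verbatim reproduction of the paper's equations.

```python
import numpy as np

def q_int(x, delta, n_bits):
    """Signed uniform quantizer: scale, round, clip to a symmetric N-bit range."""
    q_max = 2 ** (n_bits - 1) - 1                      # assumed symmetric range
    return delta * np.clip(np.round(x / delta), -q_max, q_max)

def q_uni(x, delta, n_bits):
    """Unsigned uniform quantizer, used for the non-negative ReLU activations."""
    q_max = 2 ** n_bits - 1
    return delta * np.clip(np.round(x / delta), 0, q_max)

def q_log(x):
    """Round (positive) values to the closest power of two -> bit-shift friendly."""
    x = np.asarray(x, dtype=np.float64)
    exponent = np.round(np.log2(np.maximum(x, 1e-12)))
    return 2.0 ** exponent

# a step-size is fixed-point compatible iff it is a power of two, e.g. 2**-2
w = np.array([-0.91, -0.12, 0.07, 0.33, 0.88])
print(q_int(w, delta=2**-2, n_bits=4))                 # multiples of 0.25
print(q_uni(np.abs(w), delta=2**-2, n_bits=4))
print(q_log(np.array([0.3, 0.02, 1.7])))               # -> 0.25, 0.015625, 2.0
```

Whenever the step-size is itself a power of two, the scaling inside these quantizers reduces to a bit shift on fixed-point hardware.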
With x l−1 being the input of basic block l, the computation can be written as a l = x l−1 * w l + b l, whereas * is either a convolution operator or a matrix-vector multiplication, and {w l, b l} is the set of parameters with w l being either a weight-tensor or -matrix. Since additions play a subordinate role for complexity, we focus on weight multiplications. For this purpose, recommend individual Gaussian priors to change the weight distribution from an unimodal distribution to a symmetric and multimodal distribution. The priors are induced by the L 2 -norm and include the symmetric quantization function: where L is the number of layers, M l the number of weights in layer l, and ∆ l the step-size in layer l. As recommended in , we use the layer-wise mean to give wide layers with many parameters a greater flexibility to compensate the quantization loss. Furthermore, we determine the layer-wise step-size on pre-trained weights by minimizing R w under the constraint of In the next chapter, however, we see that the actual step-sizes are learned within the batch normalization component. Effectively, R w gives individual Gaussian priors to each network weight with respect to the closest fixed-point mode. The priors are updated with every forward pass, enabling the weights to continuously switch between neighboring modes. The gradient with respect to a single weight is Due to real-valued weights and a unique rounding function, the partial derivative ∂Q int /∂w l,i can be assumed to be zero. The final gradient is a scaled version of the corresponding quantization error. After training, the weights are quantized as follows 3.3 BATCH NORMALIZATION BN layers are located between convolution or fully-connected layers on one side and non-linear activation functions on the other. After the training, BN layers can be folded into the preceding layers to increase efficiency. Therefore, we first derive the BN fixed-point constraint before combining both layers to one coherent fixed-point module. BN is performed channel-wise, with each channel being normalized and transformed linearly. The number of BN channels is equal to the number of output channels of the preceding convolution or fully-connected layer. If a denotes the BN input variable, the calculation is Var with c being the channel index, a c being the output variable, and {γ c, β c} being the learnable affine transformation. During training, each channel is normalized using mean and variance of the current mini batch. At test time, normalization is done by the layer statistics {µ c, σ c}, which have been continuously updated during training. Thus, each channel is first shifted and then multiplied with γ c / σ 2 c +, which can be turned into a bit-shift operation if the multiplier is a power of two. Since γ c is the only learnable parameter in this expression, we propose the following regularization term: where l is the layer index, L the number of layers, c the channel index, and C l the number of channels in layer l. Thus, we utilize the L 2 -norm to give individual fixed-point priors while taking into account that γ l,c is divided by the standard deviation after training. The corresponding gradient is Travelling in the direction of the negative gradient optimizes γ l,c in the sense that, divided by the standard deviation, the closest power of two is approximated. 
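A sketch of the weight regularizer R_w and its gradient, assuming the L2 form with a layer-wise mean as described above; because the rounding inside the quantizer is treated as having zero derivative, the gradient reduces to a scaled quantization error.

```python
import numpy as np

def q_int(x, delta, n_bits=4):
    q_max = 2 ** (n_bits - 1) - 1
    return delta * np.clip(np.round(x / delta), -q_max, q_max)

def weight_regularizer(layer_weights, deltas, n_bits=4):
    """R_w: layer-wise mean squared distance of each weight to its nearest
    fixed-point mode, plus its gradient. The rounding is assumed to have zero
    derivative, so the gradient is just a scaled quantization error."""
    loss, grads = 0.0, []
    for w, delta in zip(layer_weights, deltas):
        err = w - q_int(w, delta, n_bits)       # quantization error
        loss += np.mean(err ** 2)               # layer-wise mean: wide layers get slack
        grads.append(2.0 * err / w.size)        # d R_w / d w for this layer
    return loss, grads

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.3, size=(64, 32)), rng.normal(scale=0.1, size=(10, 64))]
loss, grads = weight_regularizer(weights, deltas=[2**-2, 2**-3])
print(loss, grads[0].shape)
```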
In doing so, both the normalization statistics {µ, σ 2} and the affine transformation {γ, β} can be adapted simultaneously such that the learning task and the fixed-point constraint are fulfilled. After training, BN parameters are quantized as follows: Folding BN layers into the preceding Linear component reduces both computation time and memory capacity. After quantizing with Equation 9, the BN calculation simplifies to Replacing the input variable by its calculation using the preceding Linear layer gives where {w l,c γ l,c, (b l,c − µ l,c) γ l,c + β l,c } is the parameter set of the folded layer. Let us see if the fixed-point constraint is still fulfilled. After training and quantization, w l consists of signed integers and a layer-wise step-size. By folding, the step-size is channel-wise multiplied with γ l,c. This turns the layer-dependent step-size into a channel-dependent step-size. Since both multipliers are powers of two, the newly created step-size is a power of two as well. Consequently, the BN fixed-point constraint enables to learn individual step-sizes that fulfill the fixed-point constraint. ReLU is the state-of-the-art non-linear activation function in DNNs. In order to approximate the non-negative ReLU output, we use the uniform but unsigned quantization function Q uni to quantize the network activations during each forward pass. During the backward pass, we utilize the STE to define the partial derivative of the rounding function as follows: ∂ x /∂x = 1. That way, the local gradients with respect to the input and the step-size are as follows: 5 The gradient with respect to the input is passed on if x does not fall into the saturation bounds, otherwise it is set to zero. The gradient with respect to the step-size is non-zero if x is positive. Furthermore, it varies inside the range [−0.5, 0.5] if x is within the range of the quantization steps. Otherwise, the gradient is equal to the highest N -bit integer. In order to fulfill the fixed-point constraint of Q uni, the step size has to be a power of two. Therefore, we use the logarithmic quantization function and propose the following regularization term with the step-size gradient where l is the layer index, L is the amount of layers, and ∆ l the step size in layer l. The gradient is a scaled version of the quantization error and approximates the fixed-point constraint during training. is the training objective according to Equation 1, with C representing the learning task, R i, i ∈ {w, γ, x}, being the particular fixed-point constraint, and λ i being the corresponding regularization parameter which controls the weighting between learning task and quantization. In fact, regularization parameters add additional cost since their values must be determined empirically on a validation set. Then again, they allow an easy control of the model capacity, which is a fundamental problem in deep learning. On one hand, too much capacity often leads to overfitting. On the other hand, there must be sufficient capacity to enable optimal convergence, especially at the beginning of the training ). Indeed, a training-time dependent regularization parameter can be used to control the model capacity immediately at training time. In this context, recommend a linearly increasing regularization parameter that shifts the weighting towards the quantization constraint. However, we have found that exponential growth is better suited to change capacity. 
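The BN folding step can be illustrated with the following sketch. Once the multiplier gamma_c / sqrt(var_c + eps) is snapped to a power of two, folding BN into the preceding layer only rescales each output channel by a power of two, so the channel-wise step-size stays fixed-point compatible. The weight layout, the eps value and the sign handling are assumptions of this sketch.

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mu, var, eps=1e-5):
    """Fold a (quantized) BN layer into the preceding Linear/conv layer.

    w: (out_channels, fan_in) weights, b: (out_channels,) bias. The per-channel
    multiplier gamma / sqrt(var + eps) is first snapped to the closest power of
    two, so the folded weights keep a power-of-two step-size."""
    scale = gamma / np.sqrt(var + eps)
    # bit-shift constraint: quantize the multiplier to the closest power of two
    scale_q = 2.0 ** np.round(np.log2(np.abs(scale))) * np.sign(scale)
    w_folded = w * scale_q[:, None]                  # channel-wise rescaling
    b_folded = (b - mu) * scale_q + beta
    return w_folded, b_folded

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)); b = np.zeros(8)
gamma, beta = rng.uniform(0.5, 2.0, 8), rng.normal(size=8)
mu, var = rng.normal(size=8), rng.uniform(0.5, 1.5, 8)
w_f, b_f = fold_batchnorm(w, b, gamma, beta, mu, var)
print(np.unique(np.abs(w_f / w)))    # each channel is scaled by a power of two
```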
The calculation is where λ is the initial value, e denotes the current training epoch, and α E is the growth factor which depends on the total number of training epochs E. In our experiments, we used the same configuration for each model on each data set: α E = 10/E, λ w = 10, λ γ = λ x = 10 −4. The values were determined on a CIFAR-10 validation set. Notice that the gradient ∂R w /∂w from Equation 4 is divided by the number of layer weights. Therefore, the corresponding start value λ w is accordingly higher. Weight clipping: Soft quantization approaches train in high precision, but aim for posterior distributions that are well qualified for post quantization. These posterior distributions are promoted by suitable quantization constraints. In case of the Linear component, the fixed-point constraint limits the potential parameter space to the discrete values ±∆ l 2 N −1 − 1. This can be utilized as prior knowledge since weights should not exceed this interval during training. For example, a weight quantization using N = 2 bits and ∆ = 1 leads to the possible quantization values {−1, 0, 1}. If a single weight already has the value −1, it is useless to optimize in the negative direction. Therefore we clip all Linear weights within −∆ l 2 N −1 − 1, ∆ l 2 N −1 − 1 after each update step to promote reasonable weight adaptation. Step-size clipping: The physical limitation of quantization steps is to be strictly positive. Furthermore, very small step-sizes could cause numerical problems in the denominator. Therefore, we limit all quantization step-sizes to be ≥ 2 −8 and clip the values after each update step, respectively. Before the update step is done, the quantizing gradients from Equations 4, 8, and 12 are scaled by the corresponding regularization parameter. For numerical stability, we clip the scaled gradients to the absolute value of 0.1. This also prevents the regularization parameter from being too high. In this section, we evaluate our fixed-point model on three common benchmark datasets: MNIST, CIFAR-10, and CIFAR-100. All experiments are done using stochastic gradient descent with a nesterov momentum of 0.9 and a linearly decreasing learning rate from 0.01 to 0.001. The batch size is 64. We compare our fixed-point model with all approaches from Table 1. Results are shown in Table 2. In order to provide a detailed comparison, we use two different bit-size configurations: 1. Fix-Net: A pure fixed-point model with 4-bit weights, 4-bit activations and bit-shift batch normalization. The performance is comparable to the floating-point baseline with all multiplications performed in fixed-point arithmetic. 2. Add-Net: A model with symmetric 2-bit weights (ternary-valued), 4-bit activations and bit-shift batch normalization. The evaluation during test time can be done without any multiplications. A simplified example of the computational graph is given in the appendix A.1. For the MNIST data set, we use 2-bit activations. MNIST is a handwritten-digits classification task. The dataset consists of 28×28 gray scale images and is divided into 60,000 training and 10,000 test samples . We use LeNet5 from and preprocess the images by subtracting the mean and dividing by the standard-deviation over the training set. Our Add-Net configuration achieves 0.65% test error after 40 epochs of training with 2-bit weights and activations. The is similar to SGM and TWN, although both only quantize convolution and fully-connected layers. Our Fix-Net further decreases test error down to 0.59%. 
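Referring back to the clipping scheme described just before the experiments (weights clipped to the representable range of ±delta*(2^(N-1) - 1), step-sizes kept at or above 2^-8, and lambda-scaled regularization gradients clipped to an absolute value of 0.1), a minimal sketch could look as follows; where exactly these operations sit inside the update loop is our assumption.

```python
import numpy as np

def apply_clipping(weights, deltas, reg_grads, lambdas, n_bits=4):
    """Clipping operations applied around each update step, as described above:
    weights stay inside the representable quantizer range, step-sizes stay
    >= 2**-8, and the lambda-scaled regularization gradients are clipped to |0.1|."""
    q_max = 2 ** (n_bits - 1) - 1
    clipped_w = [np.clip(w, -d * q_max, d * q_max) for w, d in zip(weights, deltas)]
    clipped_deltas = [max(d, 2.0 ** -8) for d in deltas]
    clipped_grads = [np.clip(lam * g, -0.1, 0.1) for g, lam in zip(reg_grads, lambdas)]
    return clipped_w, clipped_deltas, clipped_grads
```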
Both networks outperform the floating-point baseline of 0.71%. CIFAR-10 is an image classification task with 10 different classes. The data consists of 32×32 RGB images and is divided into 50,000 training and 10,000 test samples . We preprocess the images as recommended in. For detailed testing, we use three different model architectures: VGG7 from , DenseNet (L=76, k=12) from and ResNet20 from. VGG7 is a conventional CNN architecture with 7 layers, BN and many parameters. In contrast, both DenseNet and ResNet20 show an efficient architecture with comparatively less parameters. Due to their lower number of redundancies, DenseNet and ResNet20 are considered as difficult to quantize. With an error rate of 6.22%, our Add-Net performs best among all models with binary-or ternary-valued weights. The performance is comparable to SGM which uses full-precision activations and batch normalization. With an error rate of 4.98%, our Fix-Net performs best in accuracy and even outperforms the floating-point baseline of 5.42% which proves its regularization characteristic. The bit-size configuration of Fix-Net is mostly comparable to and with error rates of 6.21% and 8.30%, respectively. DenseNet: With the DenseNet architecture, Add-Net achieves 6.54% test error and outperforms the Bayesian approach of VNQ by more than 2%. SMG is slightly better with an error rate of 6.19% but quantizes only the Linear layers. The Fix-Net configuration achieves 5.63% test error and consequently beats the floating-point baseline of 5.72%. ResNet: It is the smallest architecture in comparison, with only 0.28M parameters. Our Add-Net achieves an error rate of 10.13% which is 2% higher than the floating-point baseline. However, with only approximately 70kB memory costs and no multiplications, Add-Net achieves a significant reduction in complexity, even for small models. DQ performs slightly better, with an error rate of 9.62% and floating-point BN layers. Our Fix-Net decreases test error down to 8.68% but still misses the floating-point baseline of 8.07%. Since ResNet20 already has a limited capacity, its regularization capability is also limited. CIFAR-100 uses the same RGB images as CIFAR-10, but provides 10 additional sub-classes for each class in CIFAR-10. Thus, only 500 training samples are available for each of the 100 classes, which makes CIFAR-100 a challenging classification task. We use VGG11 from and preprocess the images according to. Our Add-Net achieves 33.16% test error with at the same time lowest complexity. A visualization of the Add-Net weight distribution at different training times is given in Figure 3 in the appendix A.2. With an error rate of 30.25%, the Fix-Net configuration performs best in comparison and even outperforms the floating-point baseline of 31.42% by more than 1%. Soft quantization aims to reduce the complexity of DNNs at test time rather than at training time. Therefore, training remains in floating-point precision, but maintains consideration of dedicated quantization constraints. In this paper, we propose a novel soft quantization approach to learn pure fixed-point representations of state of the art DNN architectures. With exponentially increasing fixed-point priors and weight clipping, our approach provides self-reliant weight adaptation. In detailed experiments, we achieve new state of the art quantization . 
Especially the combination of 4-bit weights, 4-bit activations and fixed-point batch normalization layers seems quite promising. This section gives insight into the training process of the VGG11 Add-Net on CIFAR-100. To this end, Figure 3 shows the weight distribution of Layer-1, Layer-4 and Layer-7 after several epochs of training. Since weight decay is used for pre-training, the initial weights resemble a unimodal distribution with a single peak at zero (epoch 0). At the start of training, two additional peaks arise at ±∆ since layer weights are clipped to the particular quantization domain. Following this, the weights start to rearrange themselves, taking into account both the fixed-point constraint and the learning task. Figure 3 (panels at epochs 0, 20, 80 and 100): Weight distribution of Layer-1, Layer-4 and Layer-7 (from top to bottom) of VGG11 after several epochs. Since weight decay is used for pre-training, the weight distribution is unimodal at the beginning with a peak at zero. Then, our approach continuously rearranges the weights into a ternary-valued distribution, clearly visible at epoch 80. The variance of each mode is continuously decreased by the exponentially increasing regularization parameter. After 100 epochs, the weights are so close to the fixed-point centers that post-quantization does not produce a noticeable error. Note: the y-axis is scaled individually for convenience, and the x-axis for epoch 0 is wider to capture the whole distribution.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgKzlSKPH
Soft quantization approach to learn pure fixed-point representations of deep neural networks
We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via \emph{ad hoc} processing such as model pruning or filter factorization. Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces {parameter sharing} throughout the learning process. We demonstrate that such a novel weight sampling approach (and induced WSNet) promotes both weights and computation sharing favorably. By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the ed models are up to \textbf{180$\times$} smaller and theoretically up to \textbf{16$\times$} faster than the well-established baselines, without noticeable performance drop. Despite remarkable successes in various applications, including e.g. audio classification, speech recognition and natural language processing, deep neural networks (DNNs) usually suffer following two problems that stem from their inherent huge parameter space. First, most of state-of-the-art deep architectures are prone to over-fitting even when trained on large datasets BID42. Secondly, DNNs usually consume large amount of storage memory and energy BID17. Therefore these networks are difficult to embed into devices with limited memory and power (such as portable devices or chips). Most existing networks aim to reduce computational budget through network pruning BID16 BID1 BID11, filter factorization BID23 BID28, low bit representation BID36 for weights and knowledge transfering BID20. In contrast to the above works that ignore the strong dependencies among weights and learn filters independently based on existing network architectures, this paper proposes to explicitly enforce the parameter sharing among filters to more effectively learn compact and efficient deep networks. In this paper, we propose a Weight Sampling deep neural network (i.e. WSNet) to significantly reduce both the model size and computation cost of deep networks, achieving more than 100× smaller size and up to 16× speedup at negligible performance drop or even achieving better performance than the baseline (i.e. conventional networks that learn filters independently). Specifically, WSNet is parameterized by layer-wise condensed filters from which each filter participating in actual convolutions can be directly sampled, in both spatial and channel dimensions. Since condensed filters have significantly fewer parameters than independently trained filters as in conventional CNNs, learning by sampling from them makes WSNet a more compact model compared to conventional CNNs. In addition, to reduce the ubiquitous computational redundancy in convolving the overlapped filters and input patches, we propose an integral image based method to dramatically reduce the computation cost of WSNet in both training and inference. The integral image method is also advantageous because it enables weight sampling with different filter size and minimizes computational overhead to enhance the learning capability of WSNet. 
In order to demonstrate the efficacy of WSNet, we conduct extensive experiments on the challenging acoustic scene classification and music detection tasks. On each test dataset, including MusicDet200K (a self-collected dataset, as detailed in Section 4), ESC-50 (a), UrbanSound8K BID40 and DCASE BID45, WSNet significantly reduces the model size of the baseline by 100× with comparable or even higher classification accuracy. When compressing more than 180×, WSNet is only subject to negligible accuracy drop. At the same time, WSNet significantly reduces the computation cost (up to 16×). Such strongly establish the capability of WSNet to learn compact and efficient networks. Although we detailed experiments mostly limited to 1D CNNs in this paper, we will explore how the same approach can be naturally generalized to 2D CNNs in future work. In this paper we considered Acoustic Scene Classification (ASC) tasks as well as music detection tasks. ASC aims to classify the surrounding environment where an audio stream is generated given the audio input BID4. It can be applied in many different ways such as audio tagging , audio collections management , robotic navigation BID10, intelligent wearable interfaces BID50, context adaptive tasks BID41, etc. Music detection is a related task to determine whether or not a small segment of audio is music. It is usually treated as a binary classification problem given an audio segment as input, i.e., to classify the segment into two categories: music or non-music. As evident in many other areas, convolutional neural networks (CNN) have been widely used in audio classification tasks BID48 BID39. SoundNet BID2 stands out among different CNNs for sound classification due to the following two reasons. First, it is trained from the large-scale unlabeled sound data using visual information as a bridge, while many other networks are trained with smaller datasets. Secondly, SoundNet directly takes one dimensional raw wave signals as input so that there is no need to calculate time-consuming audio specific features, e.g. MFCC BID35 BID12 and spectrogram BID15. SoundNet has yielded significant performance improvements on state-of-the-art with standard benchmarks for acoustic scene classification. In this paper, we demonstrate that the proposed WSNet achieves a comparable or even better performance than SoundNet at a significantly smaller size and faster speed. Early approaches for deep model compression include BID29 BID18 that prune the connections in networks based on the second order information. Most recent works in network compression adopt weight pruning BID16 BID11 BID1; BID26 BID31, filter decomposition BID43 BID14 BID23, hashed networks BID6 BID9 and weight quantization BID17. However, although those works reduce model size, they also suffer from large performance drop. BID5 and are based on student-teacher approches which may be difficult to apply in new tasks since they require training a teacher network in advance. BID13 predicts parameters based on a few number of weight values. BID25 proposes an iterative hard thresholding method, but only achieve relatively small compression ratios. uses a binning method which can only be applied over fully connected layers. BID20 compresses deep models by transferring the knowledge from pre-trained larger networks to smaller networks. In contrast, WSNet is able to learn compact representation for both convolution layers and fully connected layers from scratch. 
The deep models learned by WSNet can significantly reduce model size compared to the baselines with comparable or even better performance. In terms of deep model acceleration, the factorization and quantization methods listed above can also reduce computation latency in inference. While irregular pruning (as done in most pruning methods BID17) incurs computational overhead, grouped pruning is able to accelerate networks. FFT BID32 and LCNN are also used to speed up computation in pratice. Comparatively, WSNet is superior because it learns networks that have both smaller model size and faster computation versus baselines. WSNet presents a class of novel models with the appealing properties of a small model size and small computation cost. Some recently proposed efficient model architectures include the class of Inception models BID22 BID9 which adopts depthwise separable convolutions, the class of Residual models BID19 BID49 which uses residual path for efficient optimization, and the factorized networks which use fully factorized convolutions. MobileNet BID21 and Flattened networks BID24 are based on factorization convolutions. ShuffleNet BID50 uses group convolution and channel shuffle to reduce computational cost. Compared with above works, WSNet presents a new model design strategy which is more flexible and generalizable: the parameters in deep networks can be obtained conveniently from a more compact representation, e.g. through the weight sampling method proposed in this paper or other more complex methods based on the learned statistic models. In this section, we describe details of the proposed WSNet for 1D CNNs. First, the notations are introduced. Secondly, we elaborate on the core components in WSNet: weight sampling along the spatial dimension and channel dimension. Thirdly, we introduce the denser weight sampling to enhance the learning capability of WSNet. Finally, we propose an integral image method for accelerating WSNet in both training and inference. Before diving into the details, we first introduce the notations used in this paper. The traditional 1D convolution layer takes as input the feature map F ∈ R T ×M and produces an output feature map G ∈ R T ×N where (T, M, N) denotes the spatial length of input, the channel of input and the number of filters respectively. We assume that the output has the same spatial size as input which holds true by using zero padded convolution. The 1D convolution kernel K used in the actual convolution of WSNet has the shape of (L, M, N) where L is the kernel size. Let k n, n ∈ {1, · · · N} denotes a filter and f t, t ∈ {1, · · · T} denotes a input patch that spatially spans from t to t + L − 1, then the convolution assuming stride one and zero padding is computed as: DISPLAYFORM0 where · stands for the vector inner product. Note we omit the element-wise activation function to simplify the notation. In WSNet, instead of learning each weight independently, K is obtained by sampling from a learned condensed filter Φ which has the shape of (L *, M *). The goal of training WSNet is thus cast to learn more compact DNNs which satisfy the condition of L * M * < LM N. To quantize the advantage of WSNet in achieving compact networks, we define the compactness of K in a learned layer in WSNet w.r.t. 
the conventional layer with independently learned weights as

compactness = LMN / (L* M*).

In the following sections, we demonstrate that WSNet learns compact networks by sampling weights along two dimensions: the spatial dimension and the channel dimension.

3.2 WEIGHT SAMPLING

3.2.1 ALONG SPATIAL DIMENSION

In conventional CNNs, the filters in a layer are learned independently, which presents two disadvantages. First, the resulting DNNs have a large number of parameters, which impedes their deployment on computation-resource-constrained platforms. Second, such over-parameterization makes the network prone to overfitting and to getting stuck in (extra introduced) local minima.

(Figure 1: Illustration of WSNet, which learns small condensed filters with weight sampling along two dimensions: the spatial dimension (bottom panel) and the channel dimension (top panel). The figure depicts the procedure of generating two consecutive filters (in pink and purple, respectively) that convolve with the input. In spatial sampling, filters are extracted from the condensed filter with a stride of S. In channel sampling, the channel of each filter is sampled repeatedly C times to match the input channel. Please refer to Section 3.2 for detailed explanations.)

To solve these two problems, a novel weight sampling method is proposed to efficiently reuse the weights among filters. Specifically, in each convolutional layer of WSNet, all convolutional filters K are sampled from the condensed filter Φ, as illustrated in Figure 1. By scanning the condensed filter with a window of size L and stride S, we can sample out N filters of filter size L. Formally, the relation between the spatial size of the condensed filter and that of the sampled filters is

L* = (N − 1) · S + L.

The compactness along the spatial dimension is therefore

LN / L* = LN / ((N − 1) · S + L),

which is approximately L/S when N ≫ L. Note that since the minimal value of S is 1, the minimal value of L* (i.e. the minimum spatial length of the condensed filter) is L + N − 1, and the maximal achievable compactness is therefore L. Although it is experimentally verified that this weight sampling strategy can learn compact deep models with negligible loss of classification accuracy (see Section 4), the maximal compactness is limited by the filter size L, as mentioned in Section 3.2.1. In order to obtain more compact networks without this limitation, we propose a channel sharing strategy for WSNet that learns by weight sampling along the channel dimension. As illustrated in Figure 1 (top panel), the actual filter used in convolution is generated by repeating the sampling C times. The relation between the channels of the filters before and after channel sampling is

M = C · M*.

Therefore, the compactness of WSNet along the channel dimension is C. As shown later in the experiments (Section 4), we observe that repeated weight sampling along the channel dimension significantly reduces the model size of WSNet without significant performance drop. One notable advantage of channel sharing is that the maximum compactness can be as large as M (i.e. when the condensed filter has a single channel), which paves the way for learning much more aggressively compressed models (e.g. more than 100× smaller than the baselines). The above analysis for weight sampling along the spatial/channel dimensions can be conveniently generalized from convolution layers to fully connected layers. For a fully connected layer, we treat its weights as a flattened vector with a single channel, along which the spatial sampling (ref.
Section 3.2.1) is performed to reduce the size of learnable parameters. For example, for the fully connected layer "fc1" in the baseline network in Table 1, its filter size, channel number and filter number are 1536, 1 and 256 respectively. We can therefore perform spatial sampling for "fc1" to learn a more compact representation. Compared with convolutional layers which generally have small filter sizes and thus have limited compactnesses along the spatial dimenstion, the fully connected layers can achieve larger compactnesses along the spatial dimension without harming the performance, as demonstrated in experimental (ref. to Section 4.2). WSNet is trained from the scratch in a similar way to conventional deep convolutional networks by using standard error back-propagation. Since every weight K l,m,n in the convolutional kernel K is sampled from the condensed filter Φ along the spatial and channel dimension, the only difference is the gradient of Φ i,j is the summation of all gradients of weights that are tied to it. Therefore, by simply recording the position mapping M: (i, j) → (l, m, n) from Φ i,j to all the tied weights in K, the gradient of Φ i,j is calculated as: DISPLAYFORM0 where L is the conventional cross-entropy loss function. In open-sourced machine learning libraries which represent computation as graphs, such as TensorFlow BID0, Equation can be calculated automatically. The performance of WSNet might be adversely affected when the size of condensed filter is decreased aggressively (i.e. when S and C are large). To enhance the learning capability of WSNet, we could sample more filters for layers with significantly reduced sizes. Specifically, we use a smaller sampling strideS (S < S) when performing spatial sampling. In order to keep the shape of weights unchanged in the following layer, we append a 1×1 convolution layer with the shape of (1,n, n) to reduce the channels of densely sampled filters. It is experimentally verified that denser weight sampling can effectively improve the performance of WSNet in Section 4. However, since it also brings extra parameters and computational cost to WSNet, denser weight sampling is only used in lower layers of WSNet whose filter number (n) is small. Besides, one can also conduct channel sampling on the added 1×1 convolution layers to further reduce their sizes. According to Equation 1, the computation cost in terms of the number of multiplications and adds (i.e. Mult-Adds) in a conventional convolutional layer is: DISPLAYFORM0 However, as illustrated in FIG0, since all filters in a layer in WSNet are sampled from a condensed filter Φ with stride S, calculating the of convolution in the conventional way as in Eq. FORMULA0 incurs severe computational redundance. Concretely, as can be seen from Eq., one item in the ouput feature map is equal to the summation of L inner products between the row vector of f and the column vector of k. Therefore, when two overlapped filters that are sampled from the condensed filter (e.g. k 1 and k 2 in FIG0) convolves with the overlapped input windows (e.g. f 1 and f 2 in FIG0), some partially repeated calculations exist (e.g. the calculations highlight in green and indicated by arrow in FIG0 . To eliminate such redundancy in convolution and speed-up WSNet, we propose a novel integral image method to enable efficient computation via sharing computations. We first calculate an inner product map P ∈ R T ×L * which stores the inner products between each row vector in the input feature map (i.e. 
F) and each column vector in the condensed filter (i.e. Φ): calculates the inner product of each row in F and each column in Φ as in Eq.. The convolution between a filter k 1 which is sampled from Φ and the input patch f 1 is then the summation of all values in the segment between (u, v) and DISPLAYFORM1 DISPLAYFORM2 is the convolutional filter size). Since there are repeated calculations when the filter and input patch are overlapped, e.g. the green segment indicated by arrow when performing convolution between k 2 and s 2, we construct the integral image I using P according to Eq.. Based on I, the convolutional between any sampled filter and input patch can be retrieved directly in time complexity of O according to Eq., e.g. the of DISPLAYFORM3 For notation definitions, please refer to Sec. 3.1. The comparisons of computation costs between WSNet and the baselines using conventional architectures are introduced in Section 3.4.The integral image for speeding-up convolution is denoted as I. It has the same size as P and can be conveniently obtained throught below formulation: DISPLAYFORM4 Based on I, all convolutional can be obtained in time complexity of O as follows DISPLAYFORM5 Recall that the n-th filter lies in the spatial range of (nS, nS + L − 1) in the condensed filter Φ. Since G ∈ R T ×N, it thus takes T N times of calculating Eq. to get G. In Eq. ∼ Eq. FORMULA11, we omit the case of padding for clear description. When zero padding is applied, we can freely get the convolutional for the padded areas even without using Eq. FORMULA11 DISPLAYFORM6 Based on Eq. ∼ Eq., the computation cost of the proposed integral image method is DISPLAYFORM7 Note the computation cost of P (i.e. Eq. FORMULA7) is the dominating term in Eq.. Based on Eq., Eq. FORMULA13 and Eq., the theoretical acceleration ratio is DISPLAYFORM8 Recall that L is the filter size and S is the pre-defined stride when sampling filters from the condensed filter Φ (ref. to Eq. FORMULA2).In practice, we adopt a variant of above method to further boost the computation efficiency of WSNet, as illustrated in FIG2 In Eq., we repeat Φ by C times along the channel dimension to Figure 3: A variant of the integral image method used in practice which is more efficient than that illustrated in FIG0. Instead of repeatedly sampling along the channel dimension of Φ to convolve with the input F, we wrap the channels of F by summing up C matrixes that are evenly divided from F along the channels, i.e. F(i, j) = DISPLAYFORM9 Since the channle ofF is only 1/C of the channel of F, the overall computation cost is reduced as demonstrated in Eq.. make it equal with the channel of the input F. However, we could first wrap the channels of F by accumulating the values with interval of L along its channel dimension to a thinner feature map F ∈ R T ×M * which has the same channel number as Φ, i.e. F(i, j) = C−1 c=0 F(i, j + cM *). Both Eq. FORMULA10 and Eq. remain the same. Then the computational cost is reduced to DISPLAYFORM10 where the first item is the computational cost of warping the channels of F to obtainF. Since the dominating term (i.e. Eq.) in Eq is smaller than in Eq., the overall computation cost is thus largely reduced. By combining Eq. and Eq., the theoretical acceleration compared to the baseline is DISPLAYFORM11 Finally, we note that the integral image method applied in WSNet naturally takes advantage of the property in weight sampling: redundant computations exist between overlapped filters and input patches. 
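Because the exact equations of this section are garbled in this copy, the following NumPy sketch is a best-effort reconstruction of the idea rather than the authors' exact formulation. It assumes the input has already been channel-warped so that its channel count matches the condensed filter's. The inner-product map P between every input row and every condensed-filter row is computed once; each output value is then a short diagonal sum over P, which an integral image accumulated along diagonals turns into an O(1) lookup.

```python
import numpy as np

T, M, L, N, S = 100, 16, 8, 32, 2           # input length/channels, filter size, #filters, stride
L_star = (N - 1) * S + L                    # spatial size of the condensed filter
F = np.random.randn(T, M)                   # (channel-warped) input feature map
Phi = np.random.randn(L_star, M)            # condensed filter

# Inner-product map: one dot product per (input row, condensed-filter row) pair.
# This costs T * L_star * M mult-adds and is the dominating term.
P = F @ Phi.T                               # (T, L_star)

# Integral image accumulated along diagonals, so any length-L diagonal sum is O(1).
I = P.copy()
for t in range(1, T):
    I[t, 1:] += I[t - 1, :-1]

def conv_out(t, n):
    """Output of filter n (rows n*S .. n*S+L-1 of Phi) applied at input position t."""
    total = I[t + L - 1, n * S + L - 1]
    if t > 0 and n * S > 0:                 # subtract the diagonal prefix if it exists
        total -= I[t - 1, n * S - 1]
    return total

# Sanity check against the naive computation for a "valid" position.
t, n = 5, 7
naive = sum(F[t + j] @ Phi[n * S + j] for j in range(L))
assert np.allclose(naive, conv_out(t, n))
```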
Different from other deep model speedup methods BID43 BID14 which require to solve time-consuming optimization problems and incur performance drop, the integral image method can be seamlessly embeded in WSNet without negatively affecting the final performance. In this section, we present the details and analysis of the in our experiments. Extensive ablation studies are conducted to verify the effectiveness of the proposed WSNet on learning compact and efficient networks. On all tested datasets, WSNet is able to improve the classification performance over the baseline networks while using 100× smaller models. When using even smaller (e.g. 180×) model size, WSNet achieves comparable performance w.r.t the baselines. In addition, WSNet achieves 2× ∼ 4× acceleration compared to the baselines with a much smaller model (more than 100× smaller). Datasets We collect a large-scale music detection dataset (MusicDet200K) from publicly available platforms (e.g. Facebook, Twitter, etc.) for conducting experiments. For fair comparison with previous literatures, we also test WSNet on three standard, publicly available datasets, i.e ESC-50, UrbanSound8K and DCASE. The details of used datasets are as follows. MusicDet200K aims to assign a sample a binary label to indicate whether it is music or not. MusicDet200K has overall 238,000 annotated sound clips. Each has a time duration of 4 seconds and is resampled to 16000 Hz and normalized BID34. Among all samples, we use 200,000/20,000/18,000 as train/val/test set. The samples belonging to "non-music" count for 70% of all samples, which means if we trivially assign all samples to be "non-music", the classification accuracy is 70%.ESC-50 (a) is a collection of 2000 short (5 seconds) environmental recordings comprising 50 equally balanced classes of sound events in 5 major groups (animals, natural soundscapes and water sounds, human non-speech sounds, interior/domestic sounds and exterior/urban noises) divided into 5 folds for cross-validation. Following BID2, we extract 10 sound clips from each recording with length of 1 second and time step of 0.5 second (i.e. two neighboring clips have 0.5 seconds overlapped). Therefore, in each cross-validation, the number of training samples is 16000. In testing, we average over ten clips of each recording for the final classification . UrbanSound8K BID40 ) is a collection of 8732 short (around 4 seconds) recordings of various urban sound sources (air conditioner, car horn, playing children, dog bark, drilling, engine idling, gun shot, jackhammer, siren and street music). As in ESC-50, we extract 8 clips with the time length of 1 second and time step of 0.5 second from each recording. For those that are less than 1 second, we pad them with zeros and repeat for 8 times (i.e. time step is 0.5 second).DCASE BID45 ) is used in the Detection and Classification of Acoustic Scenes and Events Challenge (DCASE). It contains 10 acoustic scene categories, 10 training examples per category and 100 testing examples. Each sample is a 30-second audio recording. During training, we evenly extract 12 sound clips with time length of 5 seconds and time step of 2.5 seconds from each recording. Evaluation criteria To demonstrate that WSNet is capable of learning more compact and efficient models than conventional CNNs, three evaluation criteria are used in our experiments: model size, the number of multiply and adds in calculation (mult-adds) and classification accuracy. 
Baseline networks To test the scability of WSNet to different network architectures (e.g. whether having fully connected layers or not), two baseline networks are used in comparision. The baseline network used on MusicDet200K consists of 7 convolutional layers and 2 fully connected layers, using which we demonstrate the effectiveness of WSNet on both convolutional layers and fully connected layers. For fair comparison with previous literatures, we firstly modify the state-of-theart SoundNet BID2 by applying pooling layers to all but the last convolutional layer. As can be seen in Table 5, this modification significantly boosts the performance of original SoundNet. We then use the modified SoundNet as a baseline on all three public datasets. The architectures of the two baseline networks are shown in TAB1 respectively. Weight Quantization Similar to other works BID17 BID36, we apply weight quantization to further reduce the size of WSNet. Specifically, the weights in each layer are linearly quantized to q bins where q is a pre-defined number. By setting all weights in the same bin to the same value, we only need to store a small index of the shared weight for each weight. The size of each bin is calculated as (max(Φ) − min(Φ))/q. Given q bins, we only need log 2 (q) bits to encode the index. Assuming each weight in WSNet is represented using 4 bytes float number (32 bits) without weight quantization, the ratio of each layer's size before and after weight quantization is DISPLAYFORM0 Recall that L * and M * are the spatial size and the channel number of condensed filter. Since the condition L * M * q generally holds in most layers of WSNet, weight quantization is able to reduce the model size by a factor of 32 log 2 (q). Different from BID17 BID36 which learns the quantization during training, we apply weight quantization to WSNet Table 1: Baseline-1: configurations of the baseline network used on MusicDet200K. Each convolutional layer is followed by a nonlinearity layer (i.e. ReLU), batch normalization layer and pooling layer, which are omitted in the table for brevity. The strides of all pooling layers are 2. The padding strategies adopted for both convolutional layers and fully connected layers are all "size preserving". after its training. In the experiments, we find that such an off-line way is sufficient to reduce model size without losing accuracy. Implementation details WSNet is implemented and trained from scratch in Tensorflow BID0. Following BID2, the Adam optimizer, a fixed learning rate of 0.001, and momentum term of 0.9 and batch size of 64 are used throughout experiments. We initialized all the weights to zero mean gaussian noise with a standard deviation of 0.01. In the network used on MusicDet200K, the dropout ratio for the dropout layers BID44 after each fully connected layer is set to be 0.8. The overall training takes 100,000 iterations. Ablation analysis Through controlled experiments, we investigate the effects of each component in WSNet on the model size, computational cost and classification accuracy. The comparative study of different settings of WSNet are listed in TAB2. For clear description, we name WSNets with different settings by the combination of symbols S/C/SC † /D/Q. Please refer to the caption of TAB2 for detailed meanings. Spatial sampling. We test the performance of WSNet by using different sampling stride S in spatial sampling. 
As listed in TAB2, S 2 and S 4 slightly outperforms the classification accuracy of the baseline, possibly due to reducing the overfitting of models. When the sampling stride is 8, i.e. the compactness in spatial dimension is 8 (ref. to Section 3.2.1), the classification accuracy of S 8 only drops slightly by 0.6%. Note that the maximum compactness along the spatial dimension is equal to the filter size, thus for the layer "conv7" which has a filter size of 4, its compactness is limited by 4 (highlighted by underline in spatial sampling enables WSNet to learn significantly smaller model with comparable accuracies w.r.t. the baseline. Channel sampling. Three different compactness along the channel dimension, i.e. 2, 4 and 8 are tested by comparing with baslines. It can be observed from TAB2 that C 2 and C 4 and C 8 have linearly reduced model size without incurring noticeable drop of accuracy. In fact, C 2 can even improve the accuracy upon the baseline, demonstrating the effectiveness of channel sampling in WSNet. When learning more compact models, C 8 demonstrates better performance compared to S 8 tha has the same compactness in the spatial dimension, which suggests we should focus on the channel sampling when the compactness along the spatial dimension is high. We then simultaneously perform weight sampling on both the spatial and channel dimensions. As demonstrated by the of S 4 C 4 SC † 4 and S 8 C 8 SC † 8, WSNet can learn highly compact models (more than 20× smaller than baselines) without noticeable performance drop (less than 0.5%). Denser weight sampling. Denser weight sampling is used to enhance the learning capability of WSNet with aggressive compactness (i.e. when S and C are large) and make up the performance loss caused by sharing too much parameters among filters. As shown in TAB2, by sampling 2× more filters in conv1, conv2 and conv3, S 8 C 8 SC † 8 D 2 significantly outperforms the S 8 C 8 SC †8. Above demonstrate the effectiveness of denser weight sampling to boost the performance of WSNet. Integral image for efficient computation. As evidenced in the last column in TAB2, the proposed integral image method consistently reduces the computation cost of WSNet. For S 8 C 8 SC † 8 which is 23× smaller than the baseline, the computation cost (in terms of #mult-adds) is significantly reduced by 16.4 times. Due to the extra computation cost brought by the 1×1 convolution in denser TAB2 for the meaning of symbols S/C/D. Since the input lengths for the baseline are different in each dataset, we only provide the #Mult-Adds for UrbanSound8K. Note that since we use the ratio of baseline's #Mult-Adds versus WSNet's #Mult-Adds for one WSNet, the numbers corresponding to WSNets in the column of #Mult-Adds are the same for all dataset. Table 5: Comparison with state-of-the-arts on ESC-50. All of WSNet are obtained by 10-folder validation. Please refer to TAB2 scratch init.; provided data 65.8 ± 0.25 4× DISPLAYFORM0 Piczak ConvNet (b) scratch init.; provided data 64.5 28M SoundNet BID2 scratch init.; provided data 51.1 13M SoundNet BID2 pre-training; extra data 72.9 13M filter sampling, S 8 C 8 SC † 8 D 2 achieves lower acceleration (3.8×). Group convolution BID49 can be used to alleviate the computation cost of the added 1×1 convolution layers. We will explore this direction in our future work. Weight quantization. It can be observed from TAB2 that by using 256 bins to represent each weight by one byte (i.e. 
8bits), S 8 C 8 SC † 15 A 2 Q 4 is reduced to 1/168 of the baseline's model size while incurring only 0.1% accuracy loss. The above demonstrates that the weight quantization is complementary to WSNet and they can be used jointly to effectively reduce the model size of WSNet. Since we do not use weight quantization to accelerate models in this paper, the WSNets before and after weight quantization have the same computational cost. The comparison of WSNet with other state-of-the-arts on ESC-50 is listed in Table 5. The settings of WSNet used on ESC-50, UrbanSound8K and DCASE are listed in TAB5. Compared with the baseline, WSNet is able to significantly reduce the model size of the baseline by 25 times and 45 times, while at the same time improving the accuracy of the baseline by 0.5% and 0.1% respectively. The computation costs of WSNet are listed in TAB5, from which one can observe that WSNet achieves higher computational efficiency by reducing the #Mult-Adds of the baseline by 2.3× and 2.4×, respectively. Such promising again demonstrate the effectiveness of WSNet on learning compact and efficient networks. After applying weight quantization to WSNet, its model size is reduced to only 1/180 of the baseline while the accuracy only slightly drops by 0.2%. Compared with the SoundNet trained from scratch with provided data, WSNets significantly outperform its classification accuracy by over 10% with more than 100× smaller models. Using a transfer learning approach, SoundNet BID2 that is pre-trained using a large number of unlabeled videos achieves better accuracy than WSNet. However, since the training method is DISPLAYFORM0 RNH BID37 scratch init.; provided data 77 -Ensemble BID46 scratch init.; provided data 78 -SoundNet BID2 pre-training; extra data 88 13Morthogonal to WSNet, we believe that WSNet can achieve better performance by training in a similar way as SoundNet BID2 on a large amount of unlabeled video data. We report the comparison of WSNet with state-of-the-arts on UrbanSound8k in TAB6. It is again observed that WSNet significantly reduces the model size of baseline while obtaining comparative . Both Piczak (2015b) and BID38 use pre-computed 2D features after log-mel transformation as input. In comparison, the proposed WSNet simply takes the raw wave of recordings as input, enabling the model to be trained in an end-to-end manner. As evidenced in TAB7, WSNet outperforms the classification accuracy of the baseline by 1% with a 100× smaller model. When using an even more compact model, i.e. 180× smaller in model size. The classification accuracy of WSNet is only one percentage lower than the baseline (i.e. has only one more incorrectly classified sample), verifying the effectiveness of WSNet. Compared with SoundNet BID2 that utilizes a large number of unlabeled data during training, WSNet (S 8 C 4 D 2 Q 4) that is 100× smaller achieves comparable only by using the provided data. In this paper, we present a class of Weight Sampling networks (WSNet) which are highly compact and efficient. A novel weight sampling method is proposed to sample filters from condensed filters which are much smaller than the independently trained filters in conventional networks. The weight sampling in conducted in two dimensions of the condensed filters, i.e. by spatial sampling and channel sampling. TAB2 To further verify WSNet's capacity of learning compact models, we conduct experiments on ESC-50 and MusicDet200K to compare WSNet with baselines compressed in an intuitive way, i.e. 
reducing the number of filters in each layer. If #filters in each layer is reduced by T, the overall #parameters in baselines is reduced by T 2 (i.e. the compression ratio of model size is T 2). In Figure 4 and Figure 5, we plot how baseline accuracy varies with respect to different compression ratios and the accuracies of WSNet with the same model size of compressed baselines. As shown in Figure 4 and Figure 5, WSNet outperforms baselines by a large magin across all compression ratios. Particularly, when the comparison ratios are large (e.g. 45 on ESC-50 and 42 on MusicDet200K), In this paper, we focus on WSNet with 1D convnets. Comprehensive experiments clearly demonstrate its advantages in learning compact and computation-efficient networks. We note that WSNet is general and can also be applied to build 2D convnets. In 2D convnets, each filter has three dimensions including two spatial dimensions (i.e. along X and Y directions) and one channel dimension. One straightforward extension of WSNet to 2D convnets is as follows: for spatial sampling, each filter is sampled out as a patch (with the same number of channels as in condensed filter) from condensed filter. Channel sampling remains the same as in 1D convnets, i.e. repeat sampling in the channel dimension of condensed filter. Following the notations for WSNet with 1D convnets (ref. to Sec. 3.1), we denote the filters in one layer as K ∈ R w×h×M ×N where (w, h, M, N) denote the width and height of each filter, the number of channels and the number of filters respectively. The condensed filter Φ has the shape of (W, H, M *). The relations between the shape of condensed filter and each sampled filter are: DISPLAYFORM0 where Sw and S h are the sampling strides along two spatial dimensions and C is the compactness of WSNet along channel dimension. The compactness (please refer to Sec. 3.1 for denifinition) of WSNet along spatial dimension is. However, such straightforward extension of WSNet to 2D convnets is not optimum due to following two reasons: Compared to 1D filters, 2D filters present stronger spatial dependencies between the two spatial dimensions. Nave extension may fail to capture such dependencies. It is not easy to use the integral image method for speeding up WSNet in 2D convnets as in 1D convnets. Because of above problems, we believe there are more sophisticated and effective methods for applying WSNet to 2D convnets and we would like to explore in our future work. Nevertheless, we conduct following preliminary experiments on 2D convents using above intuitive extension and verify the effectiveness of WSNet in image classification tasks (on MNIST and CIFAR10). Since both WSNet and HashNet BID6 BID9 explore weights tying, we compare them on MNIST and CIFAR10. For fair comparison, we use the same baselines used in BID6 BID9. The baseline used for MNIST is a 3-layer fully connected network with a single hidden layer containing 1,000 hidden units. The configuration of the baseline network used for CIFAR10 is listed in TAB10. All hyperparameters used training including learning rate, momentum, drop out and so on follow BID6 BID9. For each dataset, we hold out 20% of the whole training samples to form a validation set. The comparison between WSNet and HashNet on MNIST/CIFAR10 are listed in TAB11, respectively. As one can observe in TAB11, when learning networks with the same sizes, WSNet achieves significant lower error rates than HashNet on both datasets. 
Above clearly demonstrate the advantages of WSNet in learning compact models. Furthermore, we also conduct experiment on CIFAR10 with the state-of-the-art ResNet18 BID19 network as baseline. Both the network architecture and training hyperparameters follow BID19. WSNet is able to achieve 20× smaller model size with slight performance drop (0.6%). Such promising further demonstrate the effectiveness of WSNet.
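The 2D extension described above can be sketched in the same spirit. The toy NumPy code below is ours, and the grid layout used to place the patches is an assumption rather than the paper's exact scheme: each 2D filter is cut out of a condensed filter as a w×h patch and its channels are tiled, mirroring the 1D case.

```python
import numpy as np

def sample_filters_2d(condensed, num_filters, w, h, s_w, s_h, channel_repeats, cols):
    """Sample `num_filters` 2D filters of shape (w, h, M) from a condensed filter of
    shape (W, H, M*). Filter n is cut out at a grid position set by strides (s_w, s_h);
    its channels are tiled `channel_repeats` times."""
    filters = []
    for n in range(num_filters):
        x = (n % cols) * s_w                              # grid position of the n-th patch
        y = (n // cols) * s_h
        patch = condensed[x:x + w, y:y + h, :]            # (w, h, M*)
        filters.append(np.tile(patch, (1, 1, channel_repeats)))
    return np.stack(filters, axis=0)                      # (N, w, h, M)

condensed = np.random.randn(17, 17, 4).astype(np.float32)     # W=H=17, M*=4
K = sample_filters_2d(condensed, num_filters=16, w=3, h=3, s_w=2, s_h=2,
                      channel_repeats=16, cols=8)
print(K.shape)   # (16, 3, 3, 64): 9216 effective weights drawn from 1156 learnable ones
```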
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target objection detection: we find that a success rate of over 98% is needed for them to actually affect the tracking , a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as one single frame can move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards. We perform evaluation using the Berkeley Deep Drive dataset and find that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%. Since the first Adversarial Example (AE) against traffic sign image classification discovered by Eykholt et al. , several research work in adversarial machine learning (; ; a; b; b; ;) started to focus on the context of visual perception in autonomous driving, and studied AEs on object detection models. For example, Eykholt et al. and Zhong et al. studied AEs in the form of adversarial stickers on stop signs or the back of front cars against YOLO object detectors , and performed indoor experiments to demonstrate the attack feasibility in the real world. Building upon these work, most recently Zhao et al. (b) leveraged image transformation techniques to improve the robustness of such adversarial sticker attacks in outdoor settings, and were able to achieve a 72% attack success rate with a car running at a constant speed of 30 km/h on real roads. While these from prior work are alarming, object detection is in fact only the first half of the visual perception pipeline in autonomous driving, or in robotic systems in general -in the second half, the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories, called trackers, of surrounding obstacles. This is required for the subsequent driving decision making process, which needs the built trajectories to predict future moving trajectories for these obstacles and then plan a driving path accordingly to avoid collisions with them. To ensure high tracking accuracy and robustness against errors in object detection, in MOT only the detection with sufficient consistency and stability across multiple frames can be included in the tracking and actually influence the driving decisions. Thus, MOT in the visual The complete visual perception pipeline in autonomous driving, i.e., both object detection and Multiple Object Tracking (MOT) (Baidu; ; 2015; a; ; MathWorks; Udacity). perception of autonomous driving poses a general challenge to existing attack techniques that blindly target objection detection. 
For example, as shown by our analysis later in §4, an attack on objection detection needs to succeed consecutively for at least 60 frames to fool a representative MOT process, which requires an at least 98% attack success rate (§4). To the best of our knowledge, no existing attacks on objection detection can achieve such a high success rate (; ; a; b; b;). In this paper, we are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving, i.e., both object detection and object tracking, and discover a novel attack technique, called tracker hijacking, that can effectively fool the MOT process using AEs on object detection. Our key insight is that although it is highly difficult to directly create a tracker for fake objects or delete a tracker for existing objects, we can carefully design AEs to attack the tracking error reduction process in MOT to deviate the tracking of existing objects towards an attacker-desired moving direction. Such process is designed for increasing the robustness and accuracy of the tracking , but ironically, we find that it can be exploited by attackers to substantially alter the tracking . Leveraging such attack technique, successful AEs on as few as one single frame is enough to move an existing object in to or out of the headway of an autonomous vehicle and thus may cause potential safety hazards. We select 20 out of 100 randomly sampled video clips from the Berkeley Deep Drive dataset for evaluation. Under recommended MOT configurations in practice and normal measurement noise levels, we find that our attack can succeed with successful AEs on as few as one frame, and 2 to 3 consecutive frames on average. We reproduce and compare with previous attacks that blindly target object detection, and find that when attacking 3 consecutive frames, our attack has a nearly 100% success rate while attacks that blindly target object detection only have up to 25%. Contributions. In summary, this paper makes the following contributions: • We are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving, i.e., both object detection and MOT. We find that without considering MOT, an attack blindly targeting object detection needs at least a success rate of 98% to actually affect the complete visual perception pipeline in autonomous driving, which is a requirement that no existing attack technique can satisfy. • We discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. This technique exploits the tracking error reduction process in MOT, and can enable successful AEs on as few as one single frame to move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards. • The attack evaluation using the Berkeley Deep Drive dataset shows that our attack can succeed with successful AEs on as few as one frame, and only 2 to 3 consecutive frames on average, and when 3 consecutive frames are attacked, our attack has a nearly 100% success rate while attacks that blindly target object detection only have up to 25%. • Code and evaluation data are all available at GitHub (Github). Adversarial examples for object detection. Since the first physical adversarial examples against traffic sign classifier demonstrated by Eykholt et al. 
, several work in adversarial machine learning (; ; a; b; b;) have been focused on the visual perception task in autonomous driving, and more specifically, the object detection models. To achieve high attack effectiveness in practice, the key challenge is how to design robust attacks that can survive distortions in real-world driving scenarios such as different viewing angles, distances, lighting conditions, and camera limitations. For example, Lu et al. (a) shows that AEs against Faster- RCNN generalize well across a sequence of images in digital space, but fail in most of the sequence in physical world; Eykholt et al. generates adversarial stickers that, when attached to stop sign, can fool YOLOv2 object detector, while it is only demonstrated in indoor experiment within short distance; Chen et al. generates AEs based on expectation over transformation techniques, while their evaluation shows that the AEs are not robust to multiple angles, probably due to not considering perspective transformations (b). It was not until recently that physical adversarial attacks against object detectors achieve a decent success rate (70%) in fixed-speed (6 km/h and 30 km/h) road test (b). While the current progress in attacking object detection is indeed impressive, in this paper we argue that in the actual visual perception pipeline of autonomous driving, object tracking, or more specifically MOT, is a integral step, and without considering it, existing adversarial attacks against object detection still cannot affect the visual perception even with high attack success rate. As shown in our evaluation in §4, with a common setup of MOT, an attack on object detection needs to reliably fool at least 60 consecutive frames to erase one object (e.g., stop sign) from the tracking , in which case even a 98% attack success rate on object detectors is not enough (§4). MOT . MOT aims to identify objects and their trajectories in video frame sequence. With the recent advances in object detection, tracking-by-detection has become the dominant MOT paradigm, where the detection step identifies the objects in the images and the tracking step links the objects to the trajectories (i.e., trackers). Such paradigm is widely adopted in autonomous driving systems today (Baidu; ; 2015; a; ; MathWorks; Udacity), and a more detailed illustration is in Fig. 1. As shown, each detected objects at time t will be associated with a dynamic state model (e.g., position, velocity), which represents the past trajectory of the object (track| t−1). A per-track Kalman filter (Baidu; ; ; ;) is used to maintain the state model, which operates in a recursive predict-update loop: the predict step estimates current object state according to a motion model, and the update step takes the detection detc| t as measurement to update its state estimation track| t. The association between detected objects with existing trackers is formulated as a bipartite matching problem (; ;) based on the pairwise similarity costs between the trackers and detected objects, and the most commonly used similarity metric is the spatial-based cost, which measures the overlapping between bounding boxes, or bboxes (Baidu; ; ; ; ; ; ; ; ;). To reduce errors in this association, an accurate velocity estimation is necessary in the Kalman filter prediction . Due to the discreteness of camera frames, Kalman filter uses the velocity model to estimate the location of the tracked object in the next frame in order to compensate the object motion between frames. 
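To ground the tracking-by-detection description, here is a minimal sketch (ours, not any production system's code) of IoU-based association with Hungarian matching using SciPy's linear_sum_assignment; the 0.3 threshold and box format are illustrative, and in a real pipeline the tracker boxes would come from the Kalman-filter predict step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracker_boxes, detection_boxes, iou_threshold=0.3):
    """Match predicted tracker boxes to detections; returns (tracker, detection) index pairs."""
    cost = np.array([[1.0 - iou(t, d) for d in detection_boxes] for t in tracker_boxes])
    rows, cols = linear_sum_assignment(cost)              # Hungarian matching on 1 - IoU
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_threshold]

trackers   = [(100, 100, 150, 200), (300, 120, 360, 220)]  # predicted positions at time t
detections = [(305, 118, 362, 224), (102, 104, 149, 203)]  # detector output at time t
print(associate(trackers, detections))                     # [(0, 1), (1, 0)]
```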
However, as described later in §3, such error reduction process unexpectedly makes it possible to perform tracker hijacking. MOT manages tracker creation and deletion with two thresholds. Specifically a new tracker will be created only when the object has been constantly detected for a certain number of frames, this threshold will be referred to as the hit count, or H in the rest of the paper. This helps to filter out occasional false positives produced by object detectors. On the other hand, a tracker will be deleted if no objects is associated with for a duration of R frames, or called a reserved age. It prevents the tracks from being accidentally deleted due to infrequent false negatives of object detectors. The configuration of R and H usually depends on both the accuracy of detection models, and the frame rate (fps). Previous work suggest a configuration of R = 2· fps, and H = 0.2· fps , which gives a R = 60 frames and H = 6 frames for a common 30 fps visual perception system. We will show in §4 that an attack that blindly targeting object detection needs to constantly fool at least 60 frames (R) to erase an object, while our proposed tracker hijacking attack can fabricate object that last for R frames and vanish target object for H frames in the tracking by attacking as few as one frame, and only 2~3 frames on average (S4). Scope. This work focuses on the track-by-detection pipeline as described above, which has been recognized as the dominant MOT paradigm in recent literature (; ; ;) and MOT challenges . A MOT approach can choose to include one or more similarity measures to match objects across frames. Common measures include bounding box overlaps, object appearances, visual representations, and other statistical measures . As the first study on the adversarial threats against MOT, we choose the IoU-based Hungarian matching (; ;) as our target algorithm, as it is the most widely adopted and standardized similarity metric by not only very recent work (; ;), but also two real-world autonomous driving systems, i.e., Baidu Apollo (Baidu) and Autoware . This thus ensures the representativeness and practical significance of our work. Overview. Fig. 2a illustrates the tracker hijacking attack discovered in this paper, in which an AE for object detection (e.g., in the form of adversarial patches on the front car) that can fool the detection for as few as one frame can largely deviate the tracker of a target object (e.g., a front car) in MOT. As shown, the target car is originally tracked with a predicted velocity to the left at t 0. The attack starts at time t 1 by applying an adversarial patch onto the back of the car. The patch is carefully generated to fool the object detector with two adversarial goals: erase the bounding box of target object from detection , and fabricate a bounding box with similar shape that is shifted a little bit towards an attacker-specified direction. The fabricated bounding box (red one in detection at t 1) will be associated with the original tracker of target object in the tracking , which we call a hijacking of the tracker, and thus would give a fake velocity towards the attacker-desired direction to the tracker. The tracker hijacking shown in Fig. 2a lasts for only one frame, but its adversarial effects could last tens of frames, depending on the MOT parameter R and H (introduced in §2). 
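As a concrete, purely illustrative sketch of the hit-count/reserved-age bookkeeping described above (not any system's actual implementation), the following shows how a tracker is only published after H consecutive hits and only deleted after R consecutive misses:

```python
class Track:
    """Minimal tracker lifecycle: publish after `hit_count` hits, drop after `reserved_age` misses."""
    def __init__(self, hit_count=6, reserved_age=60):      # H = 0.2*fps, R = 2*fps at 30 fps
        self.hit_count, self.reserved_age = hit_count, reserved_age
        self.hits, self.misses, self.confirmed = 0, 0, False

    def update(self, matched):
        if matched:
            self.hits += 1
            self.misses = 0
            if self.hits >= self.hit_count:
                self.confirmed = True           # only now does the object enter the tracking output
        else:
            self.misses += 1
        return self.misses < self.reserved_age  # False => delete the tracker

track = Track()
for _ in range(6):
    track.update(True)                   # detected for H consecutive frames -> confirmed
print(track.confirmed)                   # True
alive = all(track.update(False) for _ in range(59))
print(alive, track.update(False))        # True False: deleted only after R missed frames
```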
For example, at time t 2 after the attack, all detection bounding boxes are back to normal, however, two adversarial effects persist: the tracker that has been hijacked with attacker-induced velocity will not be deleted until a reserved age (R) has passed, and the target object, though is recovered in the detection , will not be tracked until a hit count (H) has reached, and before that the object remains missing in the tracking . However, it's important to note that our attack may not always succeed with one frame in practice, as the recovered object may still be associated with its original tracker, if the tracker is not deviated far enough from the object's true position during a short attack duration. Our empirical show that our attack usually achieves a nearly 100% success rate when 3 consecutive frames are successfully attacked using AE (§4). Such persistent adversarial effects may cause severe safety consequences in self-driving scenarios. We highlight two attack scenarios that can cause emergency stop or even a rear-end crashes: Attack scenario 1: Target object move-in. Shown in Fig. 2b, an adversarial patch can be placed on roadside objects, e.g., a parked vehicle to deceive visual perception of autonomous vehicles passing by. The adversarial patch is generated to cause a translation of the target bounding box towards the center of the road in the detection , and the hijacked tracker will appear as a moving vehicle cutting in front in the perception of the victim vehicle. This tracker would last for 2 seconds if R is configured as 2· fps as suggested in , and tracker hijacking in this scenario could cause an emergency stop and potentially a rear-end crash. Attack scenario 2: Target object move-out. Similarly, tracker hijacking attack can also deviate objects in front of the victim autonomous vehicle away from the road to cause a crash as shown in Fig. 2c. Adversarial patch applied on the back of front car could deceive MOT of autonomous vehicle behind into believing that the object is moving out of its way, and the front car will be missing from the tracking for a duration of 200ms, if H uses the recommended configuration of 0.2· fps . This may cause the victim autonomous vehicle to crash into the front car. Input: Index of target object to be hijacked K, attacker-desired directional velocity #» v, adversarial patch area as a mask matrix patch. generate adversarial frame x with Eq. 3 attack object detector with specialized loss else 8: return X attack succeeds when target object is not associated with original tracker 9: end if 10: update current tracker with adversarial frame 11: end for Targeted MOT design. Our attack targets on first-order Kalman filter, which predicts a state vector containing position and velocity of detected objects over time. For the data association, we adopt the mostly widely used Intersection over Union (IoU) as the similarity metric, and the IoU between bounding boxes are calculated by Hungarian matching algorithm to solve the bipartite matching problem that associates bounding boxes detected in consecutive frames with existing trackers. Such combination of algorithms in the MOT is the most common in previous work (; ;) and real-world systems (Baidu). We now describe our methodology of generating an adversarial patch that manipulates detection to hijack a tracker. As detailed in Alg. 
1, given a targeted video image sequence, the attack iteratively finds the minimum required frames to perturb for a successful track hijack, and generates the adversarial patches for these frames. In each attack iteration, an image frame in the original video clip is processed, and given the index of target objects K, the algorithm finds an optimal position to place the adversarial bounding box pos in order to hijack the tracker of target object by solving Eq. 1. The attack then constructs adversarial frame against object detection model with an adversarial Previous attack that simply erase the bbox has no impact on the tracking output (b), while tracker hijacking attack that fabricates bbox with carefully chosen position successfully redirects the tracker towards attacker-specified direction (c). patch, using Eq. 3 as the loss function to erase the original bounding box of target object and fabricate the adversarial bounding box at the given location. The tracker is then updated with the adversarial frame that deviates the tracker from its original direction. If the target object in the next frame is not associate with its original tracker by the MOT algorithm, attack has succeeded; otherwise, this process is repeated for the next frame. We discuss two critical steps in this algorithm below, and please refer to the Appendix A for the complete implementation of the algorithm. Finding optimal position for adversarial bounding box. To deviate the tracker of a target object K, besides removing its original bounding box detc| t [K], the attack also needs to fabricate an adversarial box with a shift δ towards a specified direction. This turns into an optimization problem (Eq. 1) of finding the translation vector δ that maximizes the cost of Hungarian matching (M(·)) between the detection box and the existing tracker so that the bounding box is still associated with its original tracker (M ≤ λ), but the shift is large enough to give an adversarial velocity to the tracker. Note that we also limit the shifted bounding box to be overlapped with the patch to facilitate adversarial example generation, as it's often easier for adversarial perturbations to affect prediction in its proximity, especially in physical settings. Generating adversarial patch against object detection. Similar to the existing adversarial attacks against object detection models; b), we also formulate the adversarial patch generation as an optimization problem shown in Eq. 3 in Appendix. Existing attacks without considering MOT directly minimize the probability of target class (e.g., a stop sign) to erase the object from detection . However, as shown in Fig. 3b, such AEs are highly ineffective in fooling MOT as the tracker will still track for R frames even after the detection bounding box is erased. Instead, the loss function of our tracker hijacking attack incorporates two optimization objectives: minimizes the target class probability to erase the bounding box of target object; fabricates the adversarial bounding box at the attacker-desired location and in the specific shape to hijack the tracker. Details of our algorithm can be found in Appendix A, and the implementation can be found at (Github). In this section, we describe our experiment settings for evaluating the effectiveness of our tracker hijacking attack, and comparing it with previous attacks that blindly attack object detection in detail. 4.1 EXPERIMENT METHODOLOGY Evaluation metrics. 
We define a successful attack as that the detected bounding box of target object can no longer be associated with any of the existing trackers when attack has stopped. We measure the effectiveness of our track hijacking attack using the minimum number of frames that the AEs on (b) Attack success rate at R = 60 H = 6, and R = 5, H = 2 Figure 4: In normal measurement noise covariance range (a), our tracker hijacking attack would require the AE (adversarial example) to fool only 2~3 consecutive frames on average to successfully deviate the target tracker despite the (R, H) settings. We also compare the success rate of tracker hijacking with previous adversarial attack against object detectors only under different attacker capabilities, i.e., the number of consecutive frames the AE can reliably fool the object detector (b). Tracker hijacking achieves superior attack success rate (100%) even by fooling as few as 3 frames, while previous attack is only effective when the AE can reliably fools at least R consecutive frames. object detection need to succeed. The attack effectiveness highly depends on the difference between the direction vector of the original tracker and adversary's objective. For example, attacker can cause a large shift on tracker with only one frame if choosing the adversarial direction to be opposite to its original direction, while it would be much harder to deviate the tracker from its established track, if the adversarial direction happens to be the same as the target's original direction. To control the variable, we measure the number of frames required for our attack in two previous defined attack scenarios: target object move-in and move-out. Specifically, in all move-in scenarios, we choose the vehicle parked along the road as target, and the attack objective is to move the tracker to the center, while in all move-out scenarios, we choose vehicles that are moving forward, and the attack objective is to move the target tracker off the road. Dataset selection. We randomly sampled 100 video clips from Berkeley Deep Drive dataset , and then manually selected 10 suitable for the object move-in scenario, and another 10 for the object move-out scenario. For each clip, we manually label a target vehicle and annotate the patch region to be a small area at its back as shown in Fig. 3c. All videos are 30 frames per second. Implementation details. We implement our targeted visual perception pipeline using Python, with YOLOv3 as the object detection model due to its high popularity among in real-time systems. For the MOT implementation, we use the Hungarian matching implementation called linear_assignment in the sklearn package for the data association, and we provide a reference implementation of Kalman filter based on the one used in OpenCV (OpenCV). The effectiveness of attack depends on a configuration parameter of Kalman filter, called measurement noise covariance (cov). cov is an estimation about how much noise is in the system, a low cov value would give Kalman filter more confidence on the detection at time t when updating the tracker, while a high cov value would make Kalman filter to place trust more on its own previous prediction at time t − 1 than that at time t. We give a detailed introduction of configurable parameters in Kalman filter in §2 of our Appendix B. This measurement noise covariance is often tuned based on the performance of detection models in practice. 
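To make the role of the measurement noise covariance concrete, here is a minimal constant-velocity Kalman filter sketch (illustrative only; real implementations such as the OpenCV-based one mentioned above differ in detail). A larger cov makes the update step trust the detector's measurement less and the filter's own prediction more.

```python
import numpy as np

class ConstantVelocityKF:
    """1-D position/velocity Kalman filter; `cov` is the measurement noise covariance."""
    def __init__(self, x0, cov, dt=1.0):
        self.x = np.array([x0, 0.0])                      # state: [position, velocity]
        self.P = np.eye(2)                                # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
        self.H = np.array([[1.0, 0.0]])                   # only position is measured
        self.Q = 1e-2 * np.eye(2)                         # process noise
        self.R = np.array([[cov]])                        # measurement noise covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain: small when cov is large
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

for cov in (0.01, 10.0):
    kf = ConstantVelocityKF(x0=0.0, cov=cov)
    for z in (1.0, 2.0, 3.0, 10.0):                       # last measurement jumps abruptly
        kf.predict()
        state = kf.update(z)
    # low cov chases the jump toward 10; high cov moves far less from its own prediction
    print(cov, state)
```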
We evaluate our approach under different cov configurations ranging from very small (10 −3) to very large as shown in Fig. 4a, while cov is usually set between 0.01 and 10 in practice (Baidu;). Effectiveness of tracker hijacking attack. Fig. 4a shows the average number of frames that the AEs on object detection need to fool for a successful track hijacking over the 20 video clips. Although a configuration with R = 60 and H = 6 is recommended when fps is 30 , we still test different reserved age (R) and hit count (H) combinations as real-world deployment are usually more conservative and use smaller R and H (Baidu;). The show that tracker hijacking attack only requires successful AEs on object detection in 2 to 3 consecutive frames on average to succeed despite the (R, H) configurations. We also find that even with a successful AE on only one frame, our attack still has 50% and 30% success rates when cov is 0.1 and 0.01 respectively. Interestingly, we find that object move-in generally requires less frames compared with object move-out. The reason is that the parked vehicles in move-in scenarios (Fig. 2b) naturally have a moving-away velocity relative to the autonomous vehicle. Thus, compared to move-out attack, movein attack triggers a larger difference between the attacker-desired velocity and the original velocity. This makes the original object, once recovered, harder to associate correctly, making hijacking easier. Comparison with attacks that blindly target object detection. Fig. 4b shows the success rate of our attack and previous attacks that blindly target object detection (denoted as detection attack). We reproduced the recent adversarial patch attack on object detection from Zhong et al. , which targets the autonomous driving context and showed effectiveness using real-world car testing. In this attack, the objective is to erase the target class from the detection of each frame. Evaluated under two (R, H) settings, we find that our tracker hijacking attack achieves superior attack success rate (100%) even by attacking as few as 3 frames, while the detection attack needs to reliably fool at least R consecutive frames. When R is set to 60 according to the frame rate of 30 fps, the detection attack needs to have an adversarial patch that can constantly succeed at least 60 frames while the victim autonomous vehicle is driving. This means an over 98.3% (59 60) AE success rate, which has never been achieved or even got close to in prior work (b; ; ; a). Note that the detection attack still can have up to~25% success rate before R. This is because the detection attack causes the object to disappear for some frames, and when the vehicle heading changes during such disappearing period, it is still possible to cause the original object, when recovered, to misalign with the tracker predication in the original tracker. However, since our attack is designed to intentionally mislead the tracker predication in MOT, our success rate is substantially higher (3-4×) and can reach 100% with as few as 3 frames attacked. Implications for future research in this area. Today, adversarial machine learning research targeting the visual perception in autonomous driving, no matter on attack or defense, uses the accuracy of objection detection as the de facto evaluation metric . 
However, as concretely shown in our work, without considering MOT, successful attacks on detection alone do not directly imply equally or even closely successful attacks on MOT , the final output of the visual perception task in real-world autonomous driving (Baidu;). Thus, we argue that future research in this area should consider: using MOT accuracy as the evaluation metric, and, instead of solely focusing on object detection, also studying weaknesses specific to MOT or to the interaction between MOT and object detection, which is a highly under-explored research space today. This paper marks the first research effort towards both directions. Practicality improvement. Our evaluation is currently conducted entirely digitally with captured video frames, but our method should remain effective when applied to generate physical patches. For example, our proposed adversarial patch generation method can be naturally combined with techniques proposed by previous work to enhance the reliability of AEs in the physical world (e.g., the non-printability loss and expectation-over-transformation ). We leave this as future work. Generality improvement. Though in this work we focus on an MOT algorithm that uses IoU-based data association, our approach of finding the location at which to place the adversarial bounding box is generally applicable to other association mechanisms (e.g., appearance-based matching). Our AE generation algorithm against YOLOv3 should also be applicable to other object detection models with modest adaptations. We plan to provide reference implementations of more real-world end-to-end visual perception pipelines to pave the way for future adversarial learning research in self-driving scenarios. In this work, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, i.e., both object detection and MOT. We discover a novel attack technique, tracker hijacking, that exploits the tracking error reduction process in MOT and can enable successful AEs on as few as one frame to move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. The evaluation results show that on average, when 3 frames are attacked, our attack can achieve a nearly 100% success rate, while attacks that blindly target object detection only reach up to 25%. The source code and data are available at (Github). Our discovery and results strongly suggest that MOT should be systematically considered and incorporated into future adversarial machine learning research targeting visual perception in autonomous driving. Our work initiates the first research effort along this direction, and we hope that it can inspire more future research into this largely overlooked perspective. Given the targeted video image sequence, the tracker hijacking attack iteratively finds the minimum number of frames required to perturb for a successful hijack, and generates the adversarial patches for these frames. An image frame from the original video clip is given at each iteration, and we use Alg. 2 to find the optimal position pos at which to place the adversarial bounding box in order to hijack the tracker of the target object.
FINDPOS takes as input the existing trackers track|t−1, the detected objects detc|t, the index K of the target object, the attacker-desired directional vector v, and the adversarial patch area patch, and it iteratively moves the bounding box along the direction of v while keeping two invariants: the shifted bounding box must still be associated with the original tracker of the target object (Eq. 2); the shifted bounding box must always overlap with the patch (IoU(detc[K], patch) > γ). The while loop ends when the bounding box has been shifted to the farthest position from its original position along v at which the invariants still hold. The intuition behind FINDPOS is that, for the tracker to lose track of the target object once the attack has ended, the attacker needs to deviate the bounding box of the target object as far as possible while remaining within its original data association range.

Algorithm 2 Tracker Hijacking Attack - find the fabricated bounding box position
Input: existing trackers track|t−1; detected objects detc|t; MOT algorithm Trk(·)
Input: index K of the target object to be hijacked; attacker-desired directional vector v; adversarial patch area as a mask matrix patch
Output: fabricated bounding box position pos
procedure FINDPOS
    detc ← detc|t; track ← track|t−1; k ← 1
    while detc[K] is still associated with the target tracker (Eq. 2) and IoU(detc[K], patch) > γ do
        shift detc[K] one step further along v; track ← Trk(track, detc); pos ← detc[K]; k ← k + 1
    end while
    return pos
end procedure

After the target bounding box location is identified, the next step is to generate the adversarial patch against the object detection model. Similar to existing adversarial attacks against object detection models (; b), we formulate adversarial patch generation as the optimization problem shown in Eq. 3. Existing attacks that do not consider MOT directly minimize the probability of the target class (e.g., a stop sign) to erase the target from detection . However, as shown in Fig. 3b, such AEs are highly ineffective at fooling MOT, as the tracker will still track for R frames even after the detection bounding box is erased. Instead, the loss function of our tracker hijacking attack incorporates two terms: L1 minimizes the target class probability at the given location to erase the target bounding box, where the indicator 1_i selects all bounding boxes in B, before non-max suppression , that contain the center location (cx_t, cy_t) of pos, and C_i is the confidence score of bounding box i; L2 controls the fabrication of the adversarial bounding box at the given center location (cx_t, cy_t) with the given shape (w_t, h_t) to hijack the tracker. In the implementation, we use the Adam optimizer to minimize the loss by iteratively perturbing the pixels along the gradient directions within the patch area, and the generation process stops when an adversarial patch that satisfies the requirements is produced. Note that the fabrication loss L2 only needs to be used when generating the first adversarial frame in a sequence, to give the tracker an attacker-desired velocity v; afterwards λ can be set to 0 to focus only on erasing the target bounding box, similar to previous work. Thus, our attack does not add much difficulty to the optimization. The code of our implementation can be found at (Github). Alg. 3 takes the adversarial bounding box position pos for fabrication and the original bounding box for vanishing, and generates an adversarial frame x whose perturbation is limited to the patch area. Similar to existing adversarial attacks against object detection models (; b), we formulate the adversarial patch generation as an optimization problem.
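A hedged Python sketch of the FINDPOS search is given below. The association test of Eq. 2 and the patch-overlap test are passed in as callback functions, since their exact forms are defined elsewhere, and the step size is an illustrative assumption.

import numpy as np

def find_pos(target_box, v, is_associated, overlaps_patch, step=2.0, max_steps=1000):
    # Shift the target bounding box along the attacker-desired direction v as far as
    # possible while (a) it would still be associated with the original tracker and
    # (b) it still overlaps the adversarial patch area.
    #
    # target_box: numpy array (x1, y1, x2, y2); v: 2-D direction vector.
    # is_associated / overlaps_patch: predicates standing in for the paper's Eq. 2
    # association test and the IoU(detc[K], patch) > gamma test.
    v = np.asarray(v, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-9)
    shift = np.concatenate([v, v]) * step          # move both corners together
    pos = np.asarray(target_box, dtype=float)
    for _ in range(max_steps):
        candidate = pos + shift
        if not (is_associated(candidate) and overlaps_patch(candidate)):
            break                                  # an invariant is violated: stop
        pos = candidate                            # keep the farthest valid position
    return pos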
First, the algorithm identifies all bounding boxes i ∈ B in the intermediate output of the object detection model before non-max suppression , and for each box that contains the central point (cx, cy) of pos in its bounding box area it initializes 1_i ← 1, otherwise 1_i ← 0. The algorithm then uses the Adam optimizer to minimize the loss L1 + λL2, where L1 minimizes the target class probability in the vanish area, and L2 controls the fabrication of the adversarial bounding box at the given center location (cx_t, cy_t) with the given shape (w_t, h_t) to hijack the tracker. Note that the fabrication loss L2 only needs to be used when generating the first adversarial frame in a sequence, to give the tracker an attacker-desired velocity, and then λ can be set to 0 to focus only on erasing the target bounding box, similar to previous work. Also note that when calculating the pixel gradient, we apply a mask patch to the input x to restrict the perturbation area. The attack stops when the maximum number of attack iterations has been reached, and the adversarial example with the patch applied is returned. In outline: x′ ← x; for n = 0 to N, calculate the vanish loss L1 (and, for the first frame, the fabrication loss L2) and update x′ with an Adam step restricted to the patch area; return x′. The implementation is available at (Github). The main idea behind the Kalman filter is that the measurement is not always reliable, and by combining it with a statistical noise model, the estimate can be more accurate than one based on a single measurement alone. This makes the Kalman filter a natural fit for the track-by-detection pipeline, as MOT is intended to tolerate and correct occasional errors in the detection . The main principle of the Kalman filter is represented by Eq. 4, where x̂_k is the current state estimate, K_k is the Kalman gain, Z_k is the measurement value at step k, and x̂_{k−1} is the previous estimate. The equation shows that the Kalman filter performs state estimation using both the current measurement and the previous estimate, while the Kalman gain K_k is itself a variable that is updated by measurements. In MOT applications, the state estimates are the trackers, while the measurements are the detected bounding boxes at each frame. In this paper, we use a first-order Kalman filter to track the central point location (r, c) of bounding boxes, and a first-order low-pass filter with a decay factor of 0.5 to track the width and length of bounding boxes, which is the same as the implementation in the Baidu Apollo self-driving platform (Baidu). The tracker states are updated in two steps: the time update and the measurement update. The time update is performed as x̂_k^− = F_k x̂_{k−1}, P_k^− = F_k P_{k−1} F_k^T + Q_k, where F_k is the first-order state transition model and P_k is the a posteriori error covariance matrix, which is a measure of the estimated accuracy of the state estimate; Q_k is the covariance of the process noise. The measurement update is performed in the same loop as K_k = P_k^− H_k^T (H_k P_k^− H_k^T + R_k)^{−1}, x̂_k = x̂_k^− + K_k (z_k − H_k x̂_k^−), P_k = (I − K_k H_k) P_k^−, where H_k is the observation model, R_k is the covariance of the observation noise, and z_k is the observation. In particular, denoting the coordinates of the center point as (r, c), we set the state vector x and the state covariance matrix P accordingly, where cov is the variable we referred to as the measurement noise covariance value enumerated in our evaluation. From the expression of the Kalman gain in the measurement update, we can see that the gain factor K is related to variations in R. As identified by H.-G. Yeh et al. , the Kalman gain can be regarded as a ratio of the dynamic process noise to the measurement noise, i.e., K is proportional to Q/(cov·I).
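The following is a minimal sketch of the first-order (constant-velocity) Kalman filter described above, tracking a bounding-box center and exposing the measurement noise covariance cov; the process noise value and matrix layout are illustrative assumptions rather than the exact configuration used in our reference implementation.

import numpy as np

class CenterKalman:
    # Constant-velocity Kalman filter over a box center (r, c).
    # State x = [r, c, vr, vc]; measurement z = [r, c].
    def __init__(self, r, c, cov=0.1, q=1e-2):
        self.x = np.array([r, c, 0.0, 0.0])
        self.P = np.eye(4)                                      # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0   # transition model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                                  # process noise covariance
        self.R = cov * np.eye(2)                                # measurement noise covariance (cov)

    def predict(self):                                          # time update
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):                                        # measurement update
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                # small cov -> large gain
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]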
So when the cov value is small, the object tracking response is relatively fast and the tracked bounding boxes follow the detection boxes more closely; when the cov value is large, the Kalman filter trusts its own estimate more than the measurement, and the tracker is less responsive to changes in the bounding boxes, which makes our tracker hijacking attack slightly harder. In our paper, we empirically validate the impact of different cov values [0, 0.01, 0.1, 1, 10] on the effectiveness of our attack, and find that under the normal cov configuration range (0.01 to 10), our attack can reach a nearly 100% success rate by fooling 3 consecutive detection frames on average.
rJl31TNYPr
We study the adversarial machine learning attacks against the Multiple Object Tracking mechanisms for the first time.
Self-supervised learning (SlfSL), which aims to learn feature representations through ingeniously designed pretext tasks without human annotation, has achieved compelling progress in the past few years. Very recently, SlfSL has also been identified as a promising solution for semi-supervised learning (SemSL), since it offers a new paradigm for utilizing unlabeled data. This work further explores this direction by proposing a new framework to seamlessly couple SlfSL with SemSL. Our insight is that the prediction target in SemSL can be modeled as the latent factor in the predictor for the SlfSL target. Marginalizing over the latent factor naturally derives a new formulation which marries the prediction targets of these two learning processes. By implementing this framework through a simple-but-effective SlfSL approach -- rotation angle prediction, we create a new SemSL approach called Conditional Rotation Angle Prediction (CRAP). Specifically, CRAP is featured by adopting a module which predicts the image rotation angle conditioned on the candidate image class. Through experimental evaluation, we show that CRAP achieves superior performance over the other existing ways of combining SlfSL and SemSL. Moreover, the proposed SemSL framework is highly extendable. By augmenting CRAP with a simple SemSL technique and a modification of the rotation angle prediction task, our method has already achieved state-of-the-art SemSL performance. The recent success of deep learning is largely attributed to the availability of a large amount of labeled data. However, acquiring high-quality labels can be very expensive and time-consuming. Thus methods that can leverage easily accessible unlabeled data become extremely attractive. Semi-supervised learning (SemSL) and self-supervised learning (SlfSL) are two learning paradigms that can effectively utilize massive unlabeled data to bring improvement to predictive models. SemSL assumes that a small portion of the training data is provided with annotations, and the research question is how to use the unlabeled training data to generate additional supervision signals for building a better predictive model. In the past few years, various SemSL approaches have been developed in the context of deep learning. The current state-of-the-art methods, e.g., MixMatch , unsupervised data augmentation , converge to the strategy of combining multiple SemSL techniques, e.g., Π-Model , Mean Teacher , mixup , which have proven successful in the literature. SlfSL aims for the more ambitious goal of learning representations without any human annotation. The key assumption in SlfSL is that a properly designed pretext predictive task, which can be effortlessly derived from the data itself, can provide sufficient supervision to train a good feature representation. In the standard setting, the feature learning process is unaware of the downstream tasks, and it is expected that the learned features can benefit various recognition tasks. SlfSL also offers a new possibility for SemSL, since it suggests a new paradigm of using unlabeled data, i.e., using it for feature training. Recent work has shown great potential in this direction. This work further advances this direction by proposing a new framework to seamlessly couple SlfSL with SemSL. The key idea is that the prediction target in SemSL can serve as a latent factor in the course of predicting the pretext target in a SlfSL approach.
The connection between the prediction targets of these two learning processes can be established through marginalization over the latent factor, which also implies a new framework for SemSL. The key component in this framework is a module that predicts the pretext target conditioned on the target of SemSL. In this preliminary work, we implement this module by extending the rotation angle prediction method, a recently proposed SlfSL approach for image recognition. Specifically, we make its prediction conditioned on each candidate image class, and we call our method Conditional Rotation Angle Prediction (CRAP). The proposed framework is also highly extendable. It is compatible with many SemSL and SlfSL approaches. To demonstrate this, we further extend CRAP by using a simple SemSL technique and a modification of the rotation prediction task. Through experimental evaluation, we show that the proposed CRAP achieves significantly better performance than other SlfSL-based SemSL approaches, and that the extended CRAP is on par with state-of-the-art SemSL methods. In summary, the main contributions of this paper are as follows:
• We propose a new SemSL framework which seamlessly couples SlfSL and SemSL. It points out a principled way of upgrading a SlfSL method to a SemSL approach.
• Implementing this idea with a SlfSL approach, we create a new SemSL approach (CRAP) that achieves superior performance to other SlfSL-based SemSL methods.
• We further extend CRAP with a SemSL technique and an improvement to the SlfSL task. The resulting method achieves state-of-the-art SemSL performance.
Our work, CRAP, is closely related to both SemSL and SlfSL. SemSL is a long-standing research topic which aims to learn a predictor from a few labeled examples along with an abundance of unlabeled ones. SemSL methods based on different principles have been developed over the past decades, e.g., "transductive" models , multi-view style approaches and generative model-based methods , etc. Recently, consistency-regularization-based methods have become quite influential due to their promising performance in the context of deep learning. Specifically, the Π-Model requires the model's predictions to be invariant when various perturbations are added to the input data. Mean Teacher enforces a student model to produce output similar to that of a teacher model whose weights are a moving average of the student model's weights. Virtual Adversarial Training encourages the predictions for input data and its adversarially perturbed version to be consistent. More recently, mixup has emerged as a powerful SemSL regularization method which requires the output for mixed data to be close to the corresponding mixture of the outputs for the original images. In order to achieve good performance, most state-of-the-art approaches adopt the strategy of combining several existing techniques. For example, Interpolation Consistency Training incorporates Mean Teacher into the mixup regularization, and MixMatch adopts a technique that uses fused predictions as pseudo prediction targets as well as the mixup regularization. Unsupervised data augmentation upgrades the Π-Model with advanced data augmentation methods. SlfSL is another powerful paradigm which learns feature representations through training on pretext tasks whose labels are not human-annotated. Various pretext tasks have been designed in different approaches. For example, image inpainting trains a model to reproduce an arbitrary masked region of the input image.
Image colorization encourages model to perform colorization of an input grayscale image. Rotation angle prediction forces model to recognize the angle of a rotated input image. After training with the pretext task defined in a SlfSL method, the network is used as a pretrained model and can be fine-tuned for a downstream task on task-specific data. Generally speaking, it is still challenging for SlfSL method to achieve competitive performance to fully-supervised approaches. However, SlfSL provides many new insights into the use of unlabeled data and may have a profound impact to other learning paradigms, such as semi-supervised learning. SlfSL based SemSL is an emerging approach which incorporates SlfSL into SemSL. The most straightforward approach is to first perform SlfSL on all available data and then fine-tune the learned model on labeled samples. S 4 L ) is a newly proposed method which jointly train the downstream task and pretext task in a multi-task fashion without breaking them into stages. In this paper, we further advance this direction through proposing a novel architecture which explicitly links these two tasks together and ensure that solving one task is beneficial to the other. In SemSL, we are given a set of training samples {x 1, x 2, · · ·, x n} ∈ X with only a few of them X l = {x 1, x 2, · · ·, x l} ∈ X annotated with labels {y 1, y 2, · · ·, y l} ∈ Y l (usually l << n and y is considered as discrete class label here). The goal of a SemSL algorithm is to learn a better posterior probability estimator over y, i.e., p(y|x, θ) with θ denoting model parameters, from both labeled and unlabeled training samples. SlfSL aims to learn feature representations via a pretext task. The task usually defines a target z, which can be derived from the training data itself, e.g., rotation angle of the input image. Once z is defined, SlfSL is equivalent to training a predictor to model p(z|x; θ). There are two existing schemes to leverage SlfSL for SemSL. The first is to use SlfSL to learn the feature from the whole training set and then fine-tuning the network on the labeled part. The other is jointly optimizing the tasks of predicting y and z, as in the recently proposed S 4 L method. As shown in Figure 1 (a), S 4 L constructs a network with two branches and a shared feature extractor. One branch for modeling p(y|x; θ) and another branch for modeling p(z|x; θ). However, in both methods the pretext target z predictor p(z|x; θ) is implicitly related to the task of predicting y. Our framework is different in that we explicitly incorporate y into the predictor for z. Specifically, we treat y as the latent factor in p(z|x; θ) and factorize p(z|x; θ) through marginalization: Eq. 1 suggests that the pretext target predictor p(z|x; θ) can be implemented as two parts: a model to estimate p(y|x; θ) and a model to estimate z conditioned on both x and y, i.e., p(z|x, y; θ). For the labeled samples, the ground-truth y is observed and can be used for training p(y|x; θ). For unlabeled samples, the estimation from p(y|x; θ) and p(z|x, y; θ) will be combined together to make the final prediction about z. Consequently, optimizing the loss for p(z|x; θ) will also provide gradient to back-propagate through p(y|x; θ). This is in contrast to the case of S 4 L, where the gradient generated from the unlabeled data will not flow through p(y|x; θ). Theoretically, p(z|x; θ) and p(y|x; θ) can be two networks, but in practise we model them as two branches connecting to a shared feature extractor. 
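As an illustration of this two-branch design, the following is a minimal PyTorch-style sketch of the marginalization in Eq. 1; the encoder, layer sizes, and the use of rotation angles as the pretext target z are illustrative assumptions, not the exact architecture used in our experiments.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginalizedPredictor(nn.Module):
    # p(z|x) = sum_y p(y|x) * p(z|x, y), with a shared feature extractor.
    def __init__(self, feat_dim=128, num_classes=10, num_pretext=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, feat_dim), nn.ReLU())
        self.class_head = nn.Linear(feat_dim, num_classes)                    # p(y|x)
        self.pretext_heads = nn.Linear(feat_dim, num_classes * num_pretext)   # p(z|x, y)
        self.num_classes, self.num_pretext = num_classes, num_pretext

    def forward(self, x):
        h = self.encoder(x)
        p_y = F.softmax(self.class_head(h), dim=1)                            # (B, C)
        logits_z = self.pretext_heads(h).view(-1, self.num_classes, self.num_pretext)
        p_z_given_y = F.softmax(logits_z, dim=2)                              # (B, C, Z)
        p_z = (p_y.unsqueeze(2) * p_z_given_y).sum(dim=1)                     # marginalize over y
        return p_y, p_z

# Unlabeled data contribute a loss on p(z|x) only; this loss still back-propagates
# through the p(y|x) branch because of the marginalization above.
model = MarginalizedPredictor()
x = torch.randn(8, 3, 32, 32)
z = torch.randint(0, 4, (8,))            # e.g. rotation-angle labels
_, p_z = model(x)
loss = F.nll_loss(torch.log(p_z + 1e-9), z)
loss.backward()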
p(z|x; θ) as suggested by Eq. 1 is essentially a pretext target predictor with a special structure and partial observations of its latent variable, i.e., y. The benefits of using such a predictor can be understood from three perspectives. (1) p(y|x; θ) in Eq. 1 acts as a soft selector that selects p(z|x, y; θ) for predicting z. If the estimate of p(y|x; θ) is accurate, it will select p(z|x, y = ŷ(x); θ) for prediction and update, where ŷ(x) is the true class of x. This selective updating will make p(z|x, y; θ) give more accurate predictions of z when y matches ŷ(x). After such an update, p(z|x, y; θ) will in turn encourage p(y|x; θ) to attain a higher value for y = ŷ(x), since the prediction from p(z|x, y = ŷ(x); θ) is more likely to be accurate. Thus, the terms p(y|x; θ) and p(z|x, y; θ) reinforce each other during training. (2) Even if p(y|x; θ) is not accurate (this may happen at the beginning of the training process), p(z|x, y; θ) can still perform the pretext target prediction and act as an unsupervised feature learner. Thus, the features will be gradually improved in the course of training. With a better feature representation, the estimate of p(y|x; θ) will also be improved. (3) Finally, to predict z in Eq. 1, p(z|x, y; θ) needs to be evaluated for each candidate y. This is in effect similar to creating an ensemble of diversified pretext target predictors, with the combination weights given by p(y|x; θ) according to the marginalization rule. Thus, training features with Eq. 1 may enjoy the benefit of ensemble learning. Again, this will lead to better features and thus benefit the modelling of p(y|x; θ) and p(z|x, y; θ). The above framework provides a guideline for turning a SlfSL method into a SemSL algorithm: (1) replace the SlfSL predictor p(z|x; θ) with p(z|x, y; θ) and introduce a branch for p(y|x; θ); (2) optimize the prediction of z on the SemSL dataset and update the branches p(z|x, y; θ), p(y|x; θ) and their shared feature extractor; (3) use p(y|x; θ) as the downstream task predictor, or add an additional branch for training p(y|x; θ) only with the labeled data, as in S 4 L. More details about the additional branch will be explained in Section 4. In the following part, we describe an implementation of this framework, which is realized by upgrading rotation-angle-prediction-based SlfSL to its conditional version. Rotation angle prediction is a recently proposed SlfSL approach for image recognition. It randomly rotates the input image by one of the four possible rotation angles ({0°, 90°, 180°, 270°}) and requires the network to give a correct prediction of the rotation angle. Despite being extremely simple, this method works surprisingly well in practice. The underlying logic is that to correctly predict the rotation angle, the network needs to recognize the canonical view of objects from each class, which enforces the network to learn informative patterns of each image category. Following the proposed framework, we upgrade rotation angle prediction to conditional rotation angle prediction (CRAP) for semi-supervised learning. In this case, z in Eq. 1 is the rotation angle and y is the class label of the input image x. We realize p(z|x, y; θ) by allocating a rotation angle prediction branch for each class.
The prediction from each branch is then aggregated with the aid of p(y|x; θ) for the final prediction of z, as shown in Eq. 1. A more detailed schematic illustration of the CRAP method is shown in Figure 1 (b). As seen, our method adopts a network with multiple branches and a shared feature extractor. Specifically, the branches within the dashed box are called auxiliary branches, since they are only used for training and will be discarded at the test stage. They contain C rotation predictors, which correspond to p(z|x, y; θ), and a semantic classifier, which generates p(y|x; θ). The auxiliary branches and the feature extractor are trained using the procedure described in Section 3. Note that in CRAP, we do not directly use the semantic classifier from the auxiliary branches as the final classifier. Instead, we introduce an additional semantic classifier and learn it only via the loss incurred from the labeled data. This treatment is similar to S 4 L, and we find this strategy works slightly better in practice. We postulate the reason is that the p(y|x; θ) branch in the auxiliary branches is mainly trained by the supervision generated from the optimization of p(z|x; θ). Such supervision is noisy compared with the loss generated from the ground-truth y. It is better to use such a branch just for feature training, since the latter is more tolerant to noisy supervision. Remark: One potential obstacle of our model is that the number of parameters in the auxiliary branches would increase significantly with a large C. To tackle this, we propose to perform dimension reduction on the features fed into the rotation predictor. Results in Section 5.3 show that this scheme is effective, as our performance does not drop even when the dimension is reduced from 2048 to 16. The CRAP method is also highly extendable. In the following, we extend CRAP from two perspectives: improving p(y|x; θ) and improving p(z|x, y; θ). As discussed in Section 3, our method essentially introduces a network module with a special structure and partial observations of the latent variable y. Besides using labeled data to provide supervision for y, we can also use existing SemSL techniques to provide an extra loss for modeling p(y|x; θ). To implement such an extension, we employ a simple SemSL loss as follows: we rotate each image by the four angles within one batch (the predictions for the rotated images can be obtained as a byproduct of CRAP) and obtain the arithmetic average p̄ of the predicted distributions across these four rotated samples. Then we perform a sharpening operation over p̄ as in MixMatch, where C is the number of classes and T is a temperature hyper-parameter. We then use the cross-entropy between the sharpened distribution and p(y|x; θ) (in the auxiliary branches) as an additional loss. Note that other (more powerful) SemSL techniques could also be applied here. We choose the above SemSL technique simply because its operation, i.e., image rotation, has already been employed in the CRAP algorithm and thus can be reused to generate the additional SemSL loss without increasing the complexity of the algorithm. We also make another extension to CRAP by introducing an improved version of the conditional rotation prediction task. Specifically, we require the rotation prediction branch to predict the rotation angle of a mixed version of the rotated image; that is, we randomly mix the input image x_i with another randomly sampled rotated image x_j via x_mix = αx_i + (1 − α)x_j, with α sampled from [0.5, 1].
Meanwhile, the class prediction p(y|x_i; θ) is calculated from the unmixed version of the input x_i. In such a design, the network needs to recognize the rotation angle of the target object under the noisy distraction from another image, and we call this scheme denoising rotation prediction. The purpose of introducing this modified task is to make the SlfSL task more challenging and more dependent on a correct prediction from p(y|x; θ). To see this point, consider the following example. Letter 'A' is rotated by 270° and is mixed with letter 'B' rotated by 90°. Directly predicting the rotation angle for this mixed image encounters an ambiguity: whose rotation angle, A's or B's, is the right answer? In other words, the network cannot know which image class is the class of interest. This ambiguity can only be resolved from the output of p(y|x; θ), since its input is the unmixed target image. Therefore, this improved rotation prediction task relies more on a correct prediction from the semantic classifier, and training through CRAP is expected to give a stronger supervision signal to p(y|x; θ). Note that although denoising rotation prediction also uses a mixing operation, it is completely different from mixup. The latter constructs a loss that requires the output for the mixed image to be the corresponding mixture of the outputs for the original images. This loss is not applied in our method. For more algorithmic details about CRAP and the extended CRAP, please refer to Appendix A.1. In this section, we conduct experiments to evaluate the proposed CRAP method 1. The purpose of our experiments is threefold: (1) to validate whether CRAP is better than other SlfSL-based SemSL algorithms; (2) to compare CRAP and extended CRAP (denoted as CRAP+ hereafter) against state-of-the-art SemSL methods; (3) to understand the contribution of various components in CRAP. To make a fair comparison to recent works, different experimental protocols are adopted for different datasets. Specifically, for CIFAR-10, CIFAR-100 and SVHN , we directly follow the settings in . For ILSVRC-2012 , our settings are identical to except for the data pre-processing operations, for which we only use the inception crop augmentation and horizontal mirroring. We ensure that all baselines are compared under the same setting. Following the standard settings of SemSL, we test performance with different amounts of labeled samples. For CIFAR-10 and SVHN, the number of labeled images ranges over five levels: {250, 500, 1000, 2000, 4000}. For CIFAR-100, 10,000 labeled samples are used for training. For ILSVRC-2012, 10% and 1% of the images in the whole dataset are labeled. In each experiment, three independent trials are conducted for all datasets except ILSVRC-2012. See more details in Table 8 in the Appendix. First, we compare CRAP to other SlfSL-based SemSL algorithms on five datasets: CIFAR-10, CIFAR-100, SVHN, SVHN+Extra and ILSVRC-2012. Two SlfSL-based SemSL baseline approaches are considered: 1) Fine-tune: taking the model pretrained on the pretext task as an initialization and fine-tuning it with a set of labeled data; we term this method Fine-tune in the following sections. 2) S 4 L: the S 4 L method proposed in . Note that we do not include any methods which combine other SemSL techniques. For this reason, we only use our basic CRAP algorithm in the comparison in this subsection. As a reference, we also report the performance obtained by only using the labeled part of the dataset for training, denoted as Labeled-only.
The experimental results are as follows. The results are presented in Table 1. We find that the "Fine-tune" strategy leads to a mixed amount of improvement over the "Labeled-only" case. It is observed that a large improvement is obtained when the number of labeled samples ranges from 500 to 2000, but not in the 250 and 4000 settings. This might be because, on the one hand, too few labeled samples are not sufficient for effective fine-tuning, while on the other hand the significant improvement diminishes as the sample size increases. In comparison, S 4 L achieves much better accuracy in the case of few samples. This largely benefits from its downstream-task-aware design: the labeled training samples exert an influence at the feature learning stage. Our CRAP method achieves significantly better performance than those two ways of incorporating SlfSL into SemSL and nearly halves the test error of S 4 L in most cases. Table 2 shows the results of each method. Somewhat surprisingly, we find that Fine-tune and S 4 L do not necessarily outperform the Labeled-only baseline. They actually perform worse than Labeled-only on SVHN. With more training data in SVHN + Extra, S 4 L tends to bring benefits when the number of labeled samples is small, e.g., with 250 samples. In comparison, the proposed CRAP still produces significant improvement over Labeled-only in all those settings. This clearly demonstrates that a simple combination of SlfSL and SemSL may not necessarily bring improvement, and that a properly designed strategy of incorporating SlfSL into SemSL is crucial. CIFAR-100. As shown in Table 3, it is obvious that all SlfSL-based SemSL methods achieve better accuracy than Labeled-only. S 4 L leads to a marginal improvement over Fine-tune, although its performance is a little unstable across different partitions, as shown by its higher variance. Again, the proposed CRAP achieves significant improvement over those baselines. Table 4 presents the results of each method. The top block of Table 4 shows the results reported in the original S 4 L paper, and we also re-implement S 4 L based on the code of . Due to the difference in data pre-processing, the results in the upper block cannot be directly compared to those below. Again, we observe that CRAP is consistently superior to S 4 L in all settings. As mentioned in Section 4, to save computational cost we propose to reduce the dimensionality of the features fed into the rotation angle predictor when there is a large number of classes. In Table 5, we demonstrate the effect of this scheme. As seen, the test performance stays the same when the feature dimension is gradually reduced from 2048 to only 16. This clearly validates the effectiveness of the proposed scheme. In the following section, we proceed to demonstrate the performance of CRAP+, that is, the extended CRAP method incorporating the two extensions discussed in Sections 4.1 and 4.2. We compare its performance against the current state-of-the-art methods in SemSL. Similar to , several SemSL baselines are considered: Pseudo-Label, Π-Model, Mean Teacher, Virtual Adversarial Training (VAT), MixUp and MixMatch 2. Since a fair and comprehensive comparison has been done in and we strictly follow the same experimental setting, we directly compare CRAP+ to the numbers reported in . The experimental results are shown in Figure 2, Figure 3 and Table 6.
As seen from those figures and tables, the proposed CRAP+ is on par with the best-performing approaches, e.g., MixMatch, on those datasets. This clearly demonstrates the power of the proposed method. Note that the current state-of-the-art in SemSL is achieved by carefully combining multiple existing successful ideas in SemSL. In contrast, our CRAP+ achieves excellent performance via an innovative framework of marrying SlfSL with SemSL. Conceptually, the latter enjoys greater potential. In fact, CRAP might be further extended by using more successful techniques in SemSL, such as MixUp. Since the focus of this paper is to study how SlfSL can benefit SemSL, we do not pursue this direction here. Since there are several components in CRAP and CRAP+, we study the effect of adding or removing some components in order to provide additional insight into the role of each part. Specifically, we measure the effect of:
(1) only adding extension 1 to CRAP, i.e., incorporating an additional SemSL loss through sharpening operations on the semantic classifier in the auxiliary branches;
(2) further adding extension 2 to CRAP; the resulting model is identical to CRAP+;
(3) removing the semantic classifier of the main branch from CRAP; this is equivalent to using the semantic classifier in the auxiliary branches for testing;
(4) removing the rotation angle prediction branch from the auxiliary branches and adding extension 1 to CRAP; the resulting structure can be seen as a variant of only using the SemSL technique in extension 1 (but also with the classifier in the main branch);
(5) removing the whole set of auxiliary branches from CRAP, i.e., a purely supervised method with rotated data.
We conduct ablation studies on CIFAR-10 with 250 and 4000 labels, with results presented in Table 7. The main observations are: The two extensions in CRAP+ bring varying degrees of improvement; extension 1 in Section 4.1, i.e., a stronger p(y|x; θ) model, perhaps leads to the greater improvement. Using an additional semantic classifier leads to a slight performance improvement over the strategy of directly using p(y|x; θ) in the auxiliary branches for testing (the method in the third line from the bottom). Using the sharpening strategy of our extension 1 and training a SemSL method alone does not produce good performance; this indicates that the superior performance of CRAP+ does not simply come from a strong SemSL method but from its incorporation into the CRAP framework. Applying rotation as a data augmentation for labeled data (the last method in Table 7) does not lead to improved performance over the Labeled-only baseline, as can be seen by cross-referencing the results in Table 9; this shows that the advantage of CRAP does not come from the rotation data augmentation. In this work, we introduce a framework for effectively coupling SemSL with SlfSL. The proposed CRAP method is an implementation of this framework, and it shows compelling performance on several benchmark datasets compared to other SlfSL-based SemSL methods. Furthermore, two extensions are incorporated into CRAP to create an improved method which achieves comparable performance to the state-of-the-art SemSL methods.
BJxoz1rKwr
Coupling semi-supervised learning with self-supervised learning and explicitly modeling the self-supervised task conditioned on the semi-supervised one
Ranking is a central task in machine learning and information retrieval. In this task, it is especially important to present the user with a slate of items that is appealing as a whole. This in turn requires taking into account interactions between items, since intuitively, placing an item on the slate affects the decision of which other items should be chosen alongside it. In this work, we propose a sequence-to-sequence model for ranking called seq2slate. At each step, the model predicts the next item to place on the slate given the items already chosen. The recurrent nature of the model allows complex dependencies between items to be captured directly in a flexible and scalable way. We show how to learn the model end-to-end from weak supervision in the form of easily obtained click-through data. We further demonstrate the usefulness of our approach in experiments on standard ranking benchmarks as well as in a real-world recommendation system. Ranking a set of candidate items is a central task in machine learning and information retrieval. Many existing ranking systems are based on pointwise estimators, where the model assigns a score to each item in a candidate set and the resulting slate is obtained by sorting the list according to item scores . Such models are usually trained from click-through data to optimize an appropriate loss function BID17. This simple approach is computationally attractive as it only requires a sort operation over the candidate set at test (or serving) time, and can therefore scale to large problems. On the other hand, in terms of modeling, pointwise rankers cannot easily express dependencies between ranked items. In particular, the score of an item (e.g., its probability of being clicked) often depends on the other items in the slate and their joint placement. Such interactions between items can be especially dominant in the common case where display area is limited or when strong position bias is present, so that only a few highly ranked items get the user's attention. In this case it may be preferable, for example, to present a diverse set of items at the top positions of the slate in order to cover a wider range of user interests. A significant amount of work on learning-to-rank does consider interactions between ranked items when training the model. In pairwise approaches a classifier is trained to determine which item should be ranked first within a pair of items (e.g., BID13 BID17 BID6). Similarly, in listwise approaches the loss depends on the full permutation of items (e.g., BID7 BID47). Although these losses consider inter-item dependencies, the ranking function itself is pointwise, so at inference time the model still assigns a score to each item which does not depend on the scores of other items. There has been some work on trying to capture interactions between items in the ranking scores themselves (e.g., BID29 BID22 BID49 BID32 BID8). Such approaches can, for example, encourage a pair of items to appear next to (or far from) each other in the resulting ranking. Approaches of this type often assume that the relationship between items takes a simple form (e.g., submodular) in order to obtain tractable inference and learning algorithms. Unfortunately, this comes at the expense of the model's expressive power. In this paper, we present a general, scalable approach to ranking, which naturally accounts for high-order interactions.
In particular, we apply a sequence-to-sequence (seq2seq) model BID35 to the ranking task, where the input is the list of candidate items and the output is the resulting ordering. Since the output sequence corresponds to ranked items on the slate, we call this model sequence-to-slate (seq2slate). The order in which the input is processed can significantly affect the performance of such models BID39. For this reason, we often assume the availability of a base (or "production") ranker with which the input sequence is ordered (e.g., a simple pointwise method that ignores the interactions we seek to model), and view the output of our model as a re-ranking of the items. To address the seq2seq problem, we build on the recent success of recurrent neural networks (RNNs) in a wide range of applications (e.g., BID35). This allows us to use a deep model to capture rich dependencies between ranked items, while keeping the computational cost of inference manageable. More specifically, we use pointer networks, which are seq2seq models with an attention mechanism for pointing at positions in the input BID38. We show how to train the network end-to-end to directly optimize several commonly used ranking measures. To this end, we adapt RNN training to use weak supervision in the form of click-through data obtained from logs, instead of relying on ground-truth rankings, which are much more expensive to obtain. Finally, we demonstrate the usefulness of the proposed approach on a number of learning-to-rank benchmarks and in a large-scale, real-world recommendation system. The ranking problem is that of computing a ranking of a set of items (or ordered list or slate) given some query or context. We formalize the problem as follows. Assume a set of n items, each represented by a feature vector x_i ∈ R^m (which may depend on a query or context). Let π ∈ Π denote a permutation of the items, where each π_j ∈ {1, . . ., n} denotes the index of the item in position j. Our goal is to predict the output ranking π given the input items x. For instance, given a specific user query, we might want to return an ordered set of music recommendations from a set of candidates that maximizes some measure of user engagement (e.g., number of tracks played). In the seq2seq framework, the probability of an output permutation, or slate, given the inputs is expressed as a product of conditional probabilities according to the chain rule: p(π|x) = ∏_{j=1}^{n} p(π_j | π_{<j}, x). This expression is completely general and does not make any conditional independence assumptions. In our case, the conditional p(π_j |π_{<j}, x) ∈ ∆^n (a point in the n-dimensional simplex) models the probability of any item being placed at the j'th position in the ranking given the items already placed at previous positions. Therefore, this conditional exactly captures all high-order dependencies between items in the ranked list, including those due to diversity, similarity or other interactions. Our setting is somewhat different from a standard seq2seq setting in that the output vocabulary is not fixed. In particular, the same index (position) is populated by different items in different instances (queries). Indeed, the vocabulary size n itself may vary per instance in the common case where the number of items to rank can change. This is precisely the problem addressed by pointer networks, which we review next.
A pointer network uses non-parametric softmax modules, akin to the attention mechanism of BID1, and learns to point to items in its input sequence rather than predicting an index from a fixed-sized vocabulary. Our seq2slate model, illustrated in FIG0, consists of two recurrent neural networks (RNNs): an encoder and a decoder, both of which use Long Short-term Memory (LSTM) cells BID14. At each encoding step i ≤ n, the encoder RNN reads the input vector x i and outputs a d-dimensional vector e i, thus transforming the input sequence {x i} n i=1 into a sequence of latent memory states {e i} n i=1. At each decoding step j, the decoder RNN outputs a d-dimensional vector d j which is used as a query in our attention function. The attention function takes as input the query d j ∈ R d and the set of latent memory states computed by the encoder {e i} n i=1 and produces a probability distribution over the next item to include in the output sequence as follows: DISPLAYFORM0 where W enc, W dec ∈ R d×d and v ∈ R d are learned parameters in our network, denoted collectively by parameter vector θ. The probability p DISPLAYFORM1, is obtained via a softmax over the remaining items and represents the degree to which the model points to input i at decoding step j. To output a permutation, the p j i are set to 0 for items i that already appear in the slate. Once the next item π j is selected, typically greedily or by sampling (see below), its embedding x πj is fed as input to the next decoder step. The input of the first decoder step is a learned d-dimensional vector, denoted as go in FIG0. Importantly, p θ (π|x) is differentiable for ant fixed permutation π which allows gradient-based learning (see Section 3). We note the following. (i) The model makes no explicit assumptions about the type of interactions between items. If the learned conditional in Eq. is close to the true conditional in Eq., then the model can capture rich interactions-including diversity, similarity or others. We demonstrate this flexibility in our experiments (Section 4). (ii) x can represent either raw inputs or embeddings thereof, which can be learned together with the sequence model. (iii) The computational cost of inference, dominated by the sequential decoding procedure, is O(n 2), which is standard in seq2seq models with attention. We also consider a computationally cheaper single-step decoder with linear cost O(n), which outputs a single vector p 1, from which we obtain π by sorting the values (similarly to pointwise ranking). We now turn to the task of training the seq2slate model from data. A typical approach to learning in ranking systems is to run an existing ranker "in the wild" and log click-through data, which are then used to train an improved ranking model. This type of training data is relatively inexpensive to obtain, in contrast to human-curated labels such as relevance scores, ratings, or rankings BID17. Formally, each training example consists of a sequence of items {x 1, . . ., x n} and binary labels (y 1, . . ., y n), with y i ∈ {0, 1}, representing user feedback (e.g., click/no-click). Our approach easily extends to more informative feedback, such as the level of user engagement with the chosen item (e.g., time spent), but to simplify the presentation we focus on the binary case. Our goal is to learn the parameters θ of p θ (π j |π <j, x) (Eq.) such that permutations π corresponding to "good" rankings are assigned high probabilities. 
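As a concrete illustration of the decoding mechanism in Eq. 2, the following is a minimal NumPy sketch of a single pointer-network step with masking of already-placed items. The parameter shapes are illustrative, and feeding the chosen item's encoder state back as the next query is only a stand-in for running the decoder LSTM on the chosen item's embedding.

import numpy as np

def pointer_step(d_j, E, W_enc, W_dec, v, selected):
    # One decoding step: score every input item against the decoder query d_j,
    # mask items already placed on the slate, and return the distribution over
    # the next item (softmax over the remaining items).
    # d_j: (d,) decoder state; E: (n, d) encoder states; selected: set of chosen indices.
    scores = np.tanh(E @ W_enc.T + d_j @ W_dec.T) @ v      # (n,) attention scores
    scores[list(selected)] = -np.inf                         # items already on the slate
    exp = np.exp(scores - np.max(scores))
    return exp / exp.sum()

# Greedy decoding of a full slate with random parameters (illustration only):
rng = np.random.default_rng(0)
n, d = 5, 8
E = rng.normal(size=(n, d))
W_enc, W_dec, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
d_j = rng.normal(size=d)                                     # stands in for the decoder state
selected, slate = set(), []
for _ in range(n):
    p = pointer_step(d_j, E, W_enc, W_dec, v, selected)
    i = int(np.argmax(p))                                    # greedy policy
    slate.append(i)
    selected.add(i)
    d_j = E[i]                                               # stand-in for the next decoder state
print(slate)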
Various performance measures R(π, y) can be used to evaluate the quality of a permutation π given the labels y, for example, mean average precision (MAP), precision at k, or normalized discounted cumulative gain at k (NDCG@k). Generally speaking, permutations where the positive labels rank higher are considered better. In the standard seq2seq setting, models are trained to maximize the likelihood of a target sequence of tokens given the input, which can be done by maximizing the likelihood of each target token given the previous target tokens using Eq.. During training, the model is typically fed the ground-truth tokens as inputs to the next prediction step, an approach known as teacher forcing BID43. Unfortunately, this approach cannot be applied in our setting since we only have access to weak supervision in the form of labels y (e.g., clicks), rather than ground-truth permutations. Instead, we show how the seq2slate model can be trained directly from the labels y. One potential approach, which has been applied successfully in related tasks BID3 BID48, is to use reinforcement learning (RL) to directly optimize for the ranking measure R(π, y). In this setup, the objective is to maximize the expected ranking metric obtained by sequences sampled from our model: E_{π∼p_θ(·|x)} [R(π, y)]. One can use policy gradients and stochastic gradient ascent to optimize θ. The gradient is formulated using the popular REINFORCE update BID42 and can be approximated via Monte-Carlo sampling as follows: DISPLAYFORM0 where k indexes ranking instances in a batch of size B, π_k are permutations drawn from the model p_θ, and b(x) denotes a baseline function that estimates the expected rewards to reduce the variance of the gradients. RL, however, is known to be a challenging optimization problem and can suffer from sample inefficiency and difficult credit assignment. As an alternative, we propose supervised learning using the labels y. In particular, rather than waiting until the end of the output sequence (as in RL), we wish to give feedback to the model at each decoder step. Consider the first step, and recall that the model assigns a score s_i to each item in the input. We define a per-step loss ℓ(s, y) which essentially acts as a multi-label classification loss with labels y as ground truth. Two natural, simple choices for ℓ are the cross-entropy loss and the hinge loss: ℓ_xent(s, y) = −∑_i ŷ_i log p_i, ℓ_hinge(s, y) = max{0, 1 − min_{i:y_i=1} s_i + max_{j:y_j=0} s_j}, where ŷ_i = y_i / ∑_j y_j, and p_i is a softmax of s, similar to Eq.. Intuitively, with cross-entropy loss we try to assign high probabilities to positive labels (see also BID20), while hinge loss is minimized when scores of items with positive labels are higher than scores of those with negative labels. Notice that both losses are convex functions of the scores s. To improve convergence, we consider a smooth version of the hinge loss where the maximum and minimum are replaced by their smooth counterparts: smooth-max(s; γ) = (1/γ) log ∑_i e^{γ s_i} (and the smooth minimum is defined similarly, using min_i(s_i) = − max_i(−s_i)). If we simply apply a per-step loss from Eq. to all steps of the output sequence while reusing the labels y at each step, then the loss is invariant to the actual output permutations (e.g., predicting a positive item at the beginning of the sequence has the same cost as predicting it at the end). Instead, we let the loss at each decoding step j depend on the items already chosen, so no further loss is incurred after a label is predicted correctly.
In particular, for a fixed permutation π, define the sequence loss: DISPLAYFORM2 where S = {s_j}_{j=1}^{n}, and ℓ_{π<j}(s_j, y) depends only on the indices in s_j and y which are not in the prefix permutation π_{<j} = (π_1, . . ., π_{j−1}) (see Eq. FORMULA4). Including a per-step weight w_j can encourage better performance earlier in the sequence (e.g., w_j = 1/log(j + 1)). Furthermore, if optimizing for a particular slate size k is desired, one can restrict this loss to just the first k output steps. Since teacher forcing is not an option, we resort to feeding the model its own previous predictions, as in ; BID31. In this case, the permutation π is not fixed, but rather depends on the scores S. Specifically, we consider two policies for producing a permutation during training, sampling and greedy decoding, and introduce their corresponding losses. The greedy policy consists of selecting the item that maximizes p_θ(·|π_{<j}, x) at every time step j. The resulting permutation π* then satisfies π*_j = argmax_i p_θ(π_j = i|π*_{<j}), and our loss becomes L_{π*}. The greedy policy loss is not continuous everywhere, since a small change in the scores s may result in a jump between permutations, and therefore in L_{π*}. Specifically, the loss is non-differentiable when any s_j has multiple maximizing arguments. Outside this measure-zero subspace, the loss is continuous (almost everywhere), and the gradient is well-defined. Sampling policy. The sampling policy consists of drawing each π_j from p_θ(·|π_{<j}, x). The corresponding loss E[L] = ∑_π p_θ(π) L_π(θ) is differentiable everywhere, since both p_θ(π) and L_π(θ) are differentiable for any permutation π (see the appendix for a direct derivation of E[L] as a function of S). In this case, the gradient is formulated as: DISPLAYFORM0 which can be approximated by: DISPLAYFORM1 where b(x_k) is a baseline that approximates L_{π_k}(θ). Applying stochastic gradient descent intuitively decreases both the loss of any sample (right term) and the probability of drawing samples with high losses (left term). Notice that our gradient calculation differs from scheduled sampling, which instead computes the loss of the sampled sequences (right term) but ignores the probability of sampling high-loss sequences (left term). We found it helpful to include both terms, which may apply more generally to training of sequence-to-sequence models BID11. For both training policies, we minimize the loss via stochastic gradient descent over mini-batches in an end-to-end fashion. We evaluate the performance of our seq2slate model on a collection of ranking tasks. In Section 4.1 we use learning-to-rank benchmark data to study the behavior of the model. We then apply our approach to a large-scale commercial recommendation system and report the results in Section 4.2. Implementation Details. We set the hyperparameters of our model to values inspired by the literature. All experiments use mini-batches of 128 training examples and LSTM cells with 128 hidden units. We train our models with the Adam optimizer BID19 and an initial learning rate of 0.0003 decayed every 1000 steps by a factor of 0.96. Network parameters are initialized uniformly at random in [−0.1, 0.1]. To improve generalization, we regularize the model by using dropout with dropping probability p_dropout = 0.1 and L2 regularization with a penalty coefficient λ = 0.0003. Unless specified otherwise, all results use supervised training with the cross-entropy loss ℓ_xent and the sampling policy. At inference time, we report metrics for the greedy policy.
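The following is a minimal NumPy sketch of the per-step losses used above. The exact hinge expression is not reproduced in the extracted text, so the form max{0, 1 − min_{i:y_i=1} s_i + max_{j:y_j=0} s_j} is an assumption consistent with the description, and the function names are our own.

import numpy as np

def xent_loss(s, y):
    # Per-step cross-entropy: -sum_i yhat_i * log softmax(s)_i, with yhat = y / sum(y).
    y = np.asarray(y, dtype=float)
    y_hat = y / y.sum()
    log_p = s - np.max(s) - np.log(np.sum(np.exp(s - np.max(s))))   # log softmax
    return -np.sum(y_hat * log_p)

def smooth_hinge_loss(s, y, gamma=1.0):
    # Smooth hinge: max{0, 1 - min_{i:y_i=1} s_i + max_{j:y_j=0} s_j}, with
    # smooth-max(s) = (1/gamma) log sum_i exp(gamma * s_i) and the analogous smooth-min,
    # and the outer max{0, .} also smoothed the same way.
    s, y = np.asarray(s, float), np.asarray(y, int)
    pos, neg = s[y == 1], s[y == 0]
    smooth_min_pos = -np.log(np.sum(np.exp(-gamma * pos))) / gamma
    smooth_max_neg = np.log(np.sum(np.exp(gamma * neg))) / gamma
    margin = 1.0 - smooth_min_pos + smooth_max_neg
    return np.log(1.0 + np.exp(gamma * margin)) / gamma

s = np.array([2.0, -1.0, 0.5, -0.3])      # scores for 4 items
y = np.array([1, 0, 1, 0])                # click labels
print(xent_loss(s, y), smooth_hinge_loss(s, y))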
We use an exponential moving average with a decay rate of 0.99 as the baseline b(x) in Eq. FORMULA3 and Eq.. When training the seq2slate model with REINFORCE, we use R = NDCG@10 as the reward function and do not regularize the model. We also considered a bidirectional encoder RNN BID34 but found that it did not lead to significant improvements in our experiments.

To understand the behavior of the proposed model, we conduct experiments using two learning-to-rank datasets. We use two of the largest publicly available benchmarks: the Yahoo Learning to Rank Challenge data (set 1) and the Web30k dataset.

Table 1: Performance of seq2slate and other baselines on data generated with diverse-clicks.

We adapt the procedure proposed by BID18 to generate click data. The original procedure is as follows: first, a base ranker is trained from the raw data. We select this base ranker by training all models in the RankLib package and selecting the one with the best performance on each data set (MART for Yahoo and LambdaMART for Web30k). We generate an item ranking using the base model, which is then used to generate training data by simulating a user "cascade" model: a user observes each item with decaying probability 1/i^η, where i is the base rank of the item and η is a parameter of the generative model. This simulates a noisy sequential scan. An observed item is clicked if its ground-truth relevance score is above a threshold (relevant: {2, 3, 4}, irrelevant: {0, 1}); otherwise no click is generated. To introduce high-order interactions, we augment the above procedure as follows, creating a generative process dubbed diverse-clicks. When observing a relevant item, the user will only click if it is not too similar to previously clicked items (i.e., diverse enough), thus reducing the total number of clicks. Similarity is defined as being in the smallest q percentile (i.e., q = 0.5 is the median) of Euclidean distances between pairs of feature vectors within the same ranking instance: d_ij = ‖x_i − x_j‖. We use η = 0 (no decay, since clicks are sparse anyway due to the diversity term) and q = 0.5. This modification to the generative model is essential for our purpose, as the original data does not contain explicit inter-item dependencies. We also discuss variations of this model below.

Using the generated training data, we train both our seq2slate model and baseline rankers from the RankLib package: AdaRank BID46, Coordinate Ascent BID24, LambdaMART BID45, ListNet BID7, MART BID10, Random Forests BID5, RankBoost BID9, RankNet BID6. Some of these baselines use deep neural networks (e.g., RankNet, ListNet), so they are strong state-of-the-art models with comparable complexity to seq2slate. The results in Table 1 show that seq2slate significantly outperforms all the baselines, suggesting that it can better capture and exploit the dependencies between items in the data. To better understand the behavior of the model, we visualize the probabilities of the attention from Eq. for one of the test instances in Fig. 2. Interestingly, the model produces slates that are close to the input ranking, but with some items demoted to lower positions, presumably due to the interactions with previous items. We next consider several variations of the generative model and of the seq2slate model itself. Results are reported in TAB2. The rank-gain metric per example is computed by summing the position changes of all positive labels in the re-ranking, and this is averaged over all examples (queries).
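The click-generation procedure and the rank-gain metric just described are both simple to reproduce; a sketch is given below. The percentile convention for the similarity threshold and the sign convention for rank gain (positive means a positive item moved up) are assumptions where the text leaves details open.

```python
import numpy as np

def diverse_clicks(X, rel, eta=0.0, q=0.5, rng=np.random.default_rng(0)):
    """Simulate clicks for one ranking instance whose rows are in base-ranker order.
    X: (n, d) item feature vectors, rel: ground-truth relevance labels in {0,...,4}."""
    n = X.shape[0]
    dists = [np.linalg.norm(X[i] - X[j]) for i in range(n) for j in range(i + 1, n)]
    thresh = np.percentile(dists, 100 * q)   # q-th percentile of within-instance distances
    clicks = np.zeros(n, dtype=int)
    clicked = []
    for i in range(n):
        if rng.random() > 1.0 / (i + 1) ** eta:   # cascade: observe with prob 1/i^eta
            continue
        if rel[i] < 2:                            # only relevant items ({2,3,4}) can be clicked
            continue
        if all(np.linalg.norm(X[i] - X[j]) > thresh for j in clicked):
            clicks[i] = 1                         # click only if diverse enough
            clicked.append(i)
    return clicks

def rank_gain(base_order, new_order, y):
    """Sum of position changes of all positive items between base ranking and re-ranking."""
    base_pos = {item: p for p, item in enumerate(base_order)}
    new_pos = {item: p for p, item in enumerate(new_order)}
    return sum(base_pos[i] - new_pos[i] for i in np.flatnonzero(y))

# toy usage: 5 items with 3-d features
X = np.random.default_rng(1).normal(size=(5, 3))
rel = np.array([3, 0, 2, 4, 1])
y = diverse_clicks(X, rel)
print(y, rank_gain([0, 1, 2, 3, 4], [3, 0, 1, 2, 4], y))
```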
In TAB2, we compare the different training variants outlined in Section 3, namely cross entropy with the greedy or sampling policy, a smooth hinge loss with γ = 1.0, and REINFORCE. We find that supervised learning with cross entropy generally performs best, with the smooth hinge loss doing slightly worse. Our weakly supervised training methods have positive rank gain on all datasets, meaning they improve over the base ranker. Results (from TAB2 in the appendix) suggest that training with REINFORCE yields comparable results on Yahoo but significantly worse results on the more challenging Web30k dataset. We find no significant difference in performance between relying on the greedy and sampling policies during training.

Table 3: Performance compared to a competitive base production ranker on real data.

One-step decoding We compare seq2slate to the model which uses a single decoding step, referred to as the one-step decoder (see Section 2). In TAB2 we see that this model has comparable performance to the sequential decoder. This suggests that when inference time is crucial, as in many real-world systems, one might prefer the faster single-shot option. One possible explanation for the comparable performance of the one-step decoder is that the interactions in our generated data are rather simple and can be effectively learned by the encoder. By contrast, in Section 4.2 we show that on more complex real-world data, sequential decoding can perform significantly better.

Sensitivity to input order Previous work suggests that the performance of seq2seq models is often sensitive to the order in which the input is processed BID39 BID26. To test this, we consider the use of seq2slate without relying on the base ranker to order the input; instead, items are fed to the model in random order. The results in TAB2 (see shuffled data) show that the performance is indeed significantly worse in this case, which is consistent with previous studies. It suggests that reranking is an easier task than ranking from scratch.

Adaptivity to the type of interaction To demonstrate the flexibility of seq2slate, we generate data using a variant of the diverse-clicks model above. In the similar-clicks model, the user also clicks on observed irrelevant items if they are similar to previously clicked items (increasing the number of total clicks). As above, we use the pairwise distances in feature space d_ij to determine similarity. For this model we use q = 0.5, and η = 0.3 for Web30k, η = 0.1 for Yahoo, to keep the proportion of positive labels similar. The results in the appendix (see TAB4) show that seq2slate has comparable performance to the baseline rankers, with slightly better performance on the harder Web30k data. This demonstrates that our model can adapt to various types of interactions in the data.

We also apply seq2slate to a ranking problem from a large-scale commercial recommendation system. We train the model using massive click-through logs (comprising roughly O(10^7) instances) with cross-entropy loss, the greedy policy, L2-regularization and dropout. The data has item sets of varying size, with an average of n = 10.24 items per example. We learn embeddings of the raw inputs as part of training. Table 3 shows the performance of seq2slate and the one-step decoder compared to the production base ranker on test data (of roughly the same size as the training data). Significant gains are observed in all performance metrics, with sequential decoding outperforming the one-step decoder.
This suggests that sequential decoding may more faithfully capture complex dependencies between the items. Finally, we let the learned seq2slate model run in a live experiment (A/B testing). We compute the click-through rate (CTR) in each position (#clicks/#examples) for seq2slate. The production base ranker serves traffic outside the experiment, and we compute CTR per position for this traffic as well. Fig. 3 shows the difference in CTR per position, indicating that seq2slate has significantly higher CTR in the top positions. This suggests that seq2slate indeed places items that are likely to be chosen higher in the ranking. In this section we discuss additional related work. Our work builds on the recent impressive success of seq2seq models in complex prediction tasks, including machine translation BID35 BID1, parsing BID37, combinatorial optimization BID38 BID3, multi-label classification BID41 BID26, and others. Our work differs in that we explicitly target the ranking task, which requires a novel approach to training seq2seq models from weak feedback (click-through data). Most of the work on ranking mentioned above uses shallow representations. However, in recent years deep models have been used for information retrieval, focusing on embedding queries, documents and query-document pairs BID15 BID12 BID27 BID40 BID28 ) (see also recent survey by BID25). Rather than embedding individual items, in seq2slate a representation of the entire slate of items is learned and encoded in the RNN state. Moreover, learning the embeddings (x) can be easily incorporated into the training of the sequence model to optimize both simultaneously end-to-end. Closest to ours is the recent work of BID0, where an RNN is used to encode a set of items for re-ranking. Their approach uses a single decoding step with attention, similar to our one-step decoder. In contrast, we use sequential decoding, which we find crucial in certain applications (see Section 4.2). Another important difference is that their training formulation assumes availability of full rankings or relevance scores, while we focus on learning from cheap click-through data. Finally, Santa BID33 recently proposed an elegant framework for learning permutations based on the so called Sinkhorn operator. Their approach uses a continuous relaxation of permutation matrices (i.e., the set of doubly-stochastic matrices). Later, BID23 combined this with a Gumbel softmax distribution to enable efficient learning. However, this approach is focused on reconstruction of scrambled objects, and it is not obvious how to extend it to our ranking setting, where no ground-truth permutation is available. We presented a novel seq2slate approach to ranking sets of items. We found the formalism of pointer-networks particularly suitable for this setting. We addressed the challenge of training the model from weak user feedback to improve the ranking quality. Our experiments show that the proposed approach is highly scalable and can deliver significant improvements in ranking . Our work can be extended in several directions. In terms of architecture, we aim to explore the Transformer network BID36 in place of the RNN. Several variants can potentially improve the performance of our model, including beam-search inference BID44, and training with Actor-Critic BID2 or SeaRNN BID21 ) and it will be interesting to study their performance in the ranking setting. 
Finally, an interesting future work direction will be to study off-policy correction BID16. Since each term is continuous (and smooth) in S for all j and π <j, so is the entire function.
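For completeness, here is a small sketch of a surrogate loss whose gradient matches the sampling-policy update described earlier (the score-function term plus the pathwise term, with a baseline b). This is an illustrative construction in PyTorch, not the authors' implementation, and the stand-in expressions for log p_θ(π|x) and L_π are purely toy placeholders.

```python
import torch

def surrogate_loss(log_prob, seq_loss, baseline):
    """One-sample surrogate whose gradient is
    (L_pi - b) * grad log p_theta(pi | x)  +  grad L_pi(theta),
    assuming the baseline b does not depend on theta."""
    return (seq_loss.detach() - baseline) * log_prob + seq_loss

# toy check with a scalar parameter standing in for the network weights
theta = torch.tensor([0.5], requires_grad=True)
log_prob = torch.log(torch.sigmoid(theta)).squeeze()   # stand-in for log p_theta(pi | x)
seq_loss = ((1.0 - theta) ** 2).squeeze()              # stand-in for L_pi(theta)
surrogate_loss(log_prob, seq_loss, baseline=0.3).backward()
print(theta.grad)
```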
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkgHk3RctX
A pointer network architecture for re-ranking items, learned from click-through logs.
While momentum-based methods, in conjunction with the stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding on the generalization error of such methods. In practice, the momentum parameter is often chosen in a heuristic fashion with little theoretical guidance. In this work, we use the framework of algorithmic stability to provide an upper-bound on the generalization error for the class of strongly convex loss functions, under mild technical assumptions. Our bound decays to zero inversely with the size of the training set, and increases as the momentum parameter is increased. We also develop an upper-bound on the expected true risk, in terms of the number of training steps, the size of the training set, and the momentum parameter. A fundamental issue for any machine learning algorithm is its ability to generalize from the training dataset to the test data. A classical framework used to study the generalization error in machine learning is PAC learning BID0 BID1. However, the associated bounds using this approach can be conservative. Recently, the notion of uniform stability, introduced in the seminal work of Bousquet and Elisseeff BID2, is leveraged to analyze the generalization error of the stochastic gradient method (SGM) BID3. The in BID3 ) is a substantial step forward, since SGM is widely used in many practical systems. This method is scalable, robust, and widely adopted in a broad range of problems. To accelerate the convergence of SGM, a momentum term is often added in the iterative update of the stochastic gradient BID4. This approach has a long history, with proven benefits in various settings. The heavy-ball momentum method was first introduced by Polyak BID5, where a weighted version of the previous update is added to the current gradient update. Polyak motivated his method by its resemblance to a heavy ball moving in a potential well defined by the objective function. Momentum methods have been used to accelerate the backpropagation algorithm when training neural networks BID6. Intuitively, adding momentum accelerates convergence by circumventing sharp curvatures and long ravines of the sublevel sets of the objective function BID7. For example, Ochs et al. has presented an illustrative example to show that the momentum can potentially avoid local minima BID8. Nesterov has proposed an accelerated gradient method, which converges as O(1/k 2) where k is the number of iterations . However, the Netstrov momentum does not seem to improve the rate of convergence for stochastic gradient (, Section 8.3.3). In this work, we focus on the heavy-ball momentum. Although momentum methods are well known to improve the convergence in SGM, their effect on the generalization error is not well understood. In this work, we first build upon the framework in BID3 to obtain a bound on the generalization error of SGM with momentum (SGMM) for the case of strongly convex loss functions. Our bound is independent of the number of training iterations and decreases inversely with the size of the training set. Secondly, we develop an upper-bound on the optimization error, which quantifies the gap between the empirical risk of SGMM and the global optimum. Our bound can be made arbitrarily small by choosing sufficiently many iterations and a sufficiently small learning rate. Finally, we establish an upper-bound on the expected true risk of SGMM as a function of various problem parameters. 
We note that the class of strongly convex loss functions appears in several important machine learning problems, including linear and logistic regression with a weight decay regularization term. Other related works: convergence analysis of first order methods with momentum is studied in (; BID11 BID12 BID13 BID14 BID15 BID16 BID17 . Most of these works consider the deterministic setting for gradient update. Only a few works have analyzed the stochastic setting BID15 BID16 BID17 . Our convergence analysis are not directly comparable with these works due to their different assumptions regarding the properties of loss functions. In particular, we analyze the convergence of SGMM for a smooth and strongly convex loss function as in BID3, which is new. First-order methods with noisy gradient are studied in BID18 and references therein. In BID18, the authors show that there exists linear regression problems for which SGM outperforms SGMM in terms of convergence. Our main focus in this work is on the generalization, and hence true risk, of SGMM. We are aware of only one similar work in this regard, which provides stability bounds for quadratic loss functions BID19 . In this paper, we obtain stability bounds for the general case of strongly convex loss functions. In addition, unlike BID19, our show that machine learning models can be trained for multiple epochs of SGMM with bounded generalization errors. We use E[·] to denote the expectation and · to represent the Euclidean norm of a vector. We use lower-case bold font to denote vectors. We use sans-serif font to denote random quantities. Sets and scalars are represented by calligraphic and standard fonts, respectively. We consider a general supervised learning problem, where S = {z 1, · · ·, z n} denotes the set of samples of size n drawn i.i.d. from some space Z with an unknown distribution D. We assume a learning model described by parameter vector w. Let f (w; z) denote the loss of the model described by parameter w on example z ∈ Z. Our ultimate goal is to minimize the true or population risk: DISPLAYFORM0 Since the distribution D is unknown, we replace the objective by the empirical risk, i.e., DISPLAYFORM1 We assume w = A(S) for a potentially randomized algorithm A(·). In order to find an upper-bound on the true risk, we consider the generalization error, which is the expected difference of empirical and true risk: DISPLAYFORM2 Finally, to upper bound g, we consider uniform stability:Definition 1 Let S and S denote two data sets from space Z n such that S and S differ in at most one example. Algorithm A is s -uniformly stable if for all data sets S, S, we have DISPLAYFORM3 It is shown in BID3 ) that uniform stability implies generalization in expectation:Theorem 1 BID3 If A is an s -uniformly stable algorithm, then the generalization error of A is upper-bounded by s.Theorem 1 shows that it is enough to control the uniform stability of an algorithm to upper bound the generalization error. In our analysis, we will assume that the loss function satisfies the following properties. DISPLAYFORM0 We assume that the parameter space Ω is a convex set. Furthermore, for the loss function to be L-Lipschitz and and strongly convex, we further assume that Ω is compact. Since Ω is compact, the SGMM update requires projection. 
The update rule for projected SGMM is given by: DISPLAYFORM0 where P denotes the Euclidean projection onto Ω, α > 0 is the learning rate 1, µ > 0 is the momentum parameter, i t is a randomly selected index, and f (w t ; z it) is the loss evaluated on sample z it. In SGMM, we run the update iteratively for T steps and let w T denote the final output. Note that there are two typical approaches to select i t. The first approach is to select i t ∈ {1, · · ·, n} uniformly at random at each iteration. The second approach is to permutate {1, · · ·, n} randomly once and then select the examples repeatedly in a cyclic manner. Our are valid for both approaches. The key quantity of interest in this paper is the generalization error for SGMM given by: DISPLAYFORM1 since the randomness in A arises from the choice of i 0, · · ·, i T −1. In the following, we assume that the loss function f (·; z) is β-smooth, L-Lipschitz, and γ-strongly convex for all z. Theorem 2 (Stability bound) Suppose that the SGMM update is executed for T steps with constant learning rate α and momentum µ. Provided that DISPLAYFORM0 The in Theorem 2 implies that the stability bound decreases inversely with the size of the training set. It increases as the momentum parameter µ increases. These properties are also verified in our experimental evaluation. Theorem 3 (Convergence bound) Suppose that the SGMM update is executed for T steps with constant learning rate α and momentum µ. Then we have DISPLAYFORM1 whereŵ T denotes the average of T steps of the algorithm, i.e.,ŵ T = DISPLAYFORM2 Theorem 3 bounds the optimization error, i.e., the expected difference between the empirical risk achieved by SGMM and the global minimum. Upon setting µ = 0 and γ = 0 in FORMULA8, we can recover the classical bound on optimization error for SGM BID20, (, Theorem 5.2). The first two terms in vanish as T increases. The terms with negative sign improve the convergence due to the strongly convexity. The last term depends on the learning rate, α, the momentum parameter µ, and the Lipschitz constant L. This term can be controlled by selecting α sufficiently small. Proposition 1 (Upper-bound on true risk) Suppose that the SGMM update is executed for T steps with constant learning rate α and momentum µ, satisfying the conditions in Theorem 2 and DISPLAYFORM3 T, we have: DISPLAYFORM4 where DISPLAYFORM5 andŵ T as well as the constants W 0, · · ·, W 3 are defined in Theorem 3.Proposition 1 provides a bound on the expected true risk of SGMM in terms of the global minimum of the empirical risk. The bound in FORMULA11 is obtained by combining Theorem 2 and Theorem 3 and minimizing the expression over α. The choice of α simplifies considerably when µ is sufficiently small, as stated in Proposition 1. Due to the page constraint, the proof of this is provided in the supplementary material. Note that the first two terms in vanish as T increases. The last term in vanishes as the number of samples n increases. Following BID3, we track the divergence of two different iterative sequences of update rules with the same starting point. However, our analysis is more involved as the presence of momentum term requires a more careful bound on the iterative expressions. To keep the notation uncluttered, we first consider SGMM without projection and defer the discussion of projection to the end of this proof. Let S = {z 1, · · ·, z n} and S = {z 1, · · ·, z n} be two samples of size n that differ in at most one example. 
Let w T and w T denote the outputs of SGMM on S and S, respectively. We consider the updates w t+1 = G t (w t) + µ(w t − w t−1) and w t+1 = G t (w t) + µ(w t − w t−1) with G t (w t) = w t − α∇ w f (w t ; z it) and G t (w t) = w t − α∇ w f (w t ; z it), respectively, for t = 1, · · ·, T. We denote δ t ∆ = w t − w t. Suppose w 0 = w 0, i.e., δ 0 = 0. We first establish an upper-bound on E A [δ t+1] in terms of E A [δ t] and E A [δ t−1] in the following lemma, whose proof is provided in the supplementary document. DISPLAYFORM0 Using the of Lemma 1, in the following, we develop an upper bound on E A [δ T]. Let us consider the recursion DISPLAYFORM1 withδ 0 = δ 0 = 0. Upon inspecting it is clear that DISPLAYFORM2 as we simply drop the remainder of positive terms. Substituting into, we have DISPLAYFORM3 where the second inequality holds due to µ ≥ αβγ β+γ − 1 2.Noting that DISPLAYFORM4 where the second expression holds since 0 ≤ µ < αβγ 3(β+γ) is assumed. Applying the L-Lipschitz property on f (·, z), it follows that DISPLAYFORM5 Since this bound holds for all S, S and z, we obtain an upper-bound on the uniform stability and the proof is complete. Our stability bound in Theorem 2 holds for the projected SGMM update because Euclidean projection does not increase the distance between projected points (the argument is essentially analogous to BID3, Lemma 4.6)). In particular, note that Lemma 1 holds for the projected SGMM. Again, we first consider SGMM without projection and discuss the extension to projection at the end of this proof. Our proof is inspired by the convergence analysis in BID15 BID13 for a convex loss function with bounded variance and time-decaying learning rate. Different from these works, we analyze the convergence of SGMM for a smooth and strongly convex loss function with constant learning rate. To facilitate the convergence analysis, we define: DISPLAYFORM0 with p 0 = 0. Substituting into the SGMM update, the parameter recursion is given by DISPLAYFORM1 It follows that DISPLAYFORM2 Substituting p t into, the recursion can be written as DISPLAYFORM3 Upon taking the expectation with respect to i t in FORMULA0 we have DISPLAYFORM4 where we use the fact that ∇ w f (w t ; z it) ≤ L, due to L-Lipschitz, and that E it [∇ w f (w t ; z it)] = ∇ w R S (w t). Furthermore, since R S (·) is a γ-strongly convex function, for all w t and w t−1, we have DISPLAYFORM5 Substituting FORMULA1 in FORMULA0, we have DISPLAYFORM6 Taking expectation over i 0, · · ·, i t for a given S, summing for t = 0, · · ·, T, and rearranging terms, we have DISPLAYFORM7 Since · is a convex function, for all w T and w, we have DISPLAYFORM8 Furthermore, due to convexity of R S (·), we have DISPLAYFORM9 Taking expectation over S, applying inequalities and FORMULA1 into FORMULA1, and substituting w = w * S, we obtain and the proof is complete. Our convergence bound in Theorem 3 can be extended to projected SGMM. Let use denote y t+1 ∆ = w t + µ(w t − w t−1) − α∇ w f (w t ; z it). Then, for any feasible w ∈ Ω, holds for y t+1, i.e., DISPLAYFORM10 Note that the LHS of can be written as DISPLAYFORM11 We note that µw t + (1 − µ)w ∈ Ω for any w ∈ Ω and w t ∈ Ω since Ω is convex. Now in projected SGMM, we have DISPLAYFORM12 since projection a point onto Ω moves it closer to any point in Ω. This shows inequality holds, and the convergence do not change. In this section, we validate the insights obtained in our theoretical in experimental evaluation. 
Our main goal is to study how adding momentum affects the convergence and generalization of SGM. We study the performance of SGMM when applied to the notMNIST dataset. Please note that similar results are provided for the MNIST dataset in the supplementary document. We train a logistic regression model with weight decay regularization using SGMM for binary classification on the two-class notMNIST dataset that contains the images from letter classes "C" and "J", which leads to a smooth and strongly convex loss function. We set the learning rate α = 0.01. The weight decay coefficient and the minibatch size are set to 0.001 and 10, respectively. We use 100 SGMM realizations to evaluate the average performance. We compare the training and generalization performance of SGM without momentum with that of SGMM under µ = 0.5 and µ = 0.9, which are common momentum values used in practice (, Section 8.3.2).

The generalization error (with respect to cross entropy) and the training error versus the number of training samples, n, under SGMM with a fixed T = 1000 iterations are shown in FIG0, respectively, for µ = 0, 0.5, 0.9. In FIG1, we plot the generalization error (with respect to classification accuracy) and the training accuracy as a function of the number of training samples for the same dataset. First, we observe that the generalization error (with respect to both cross entropy and classification accuracy) decreases as n increases for all values of µ, which is suggested by our stability upper-bound in Theorem 2. In addition, for sufficiently large n, we observe that the generalization error increases with µ, consistent with Theorem 2. On the other hand, the training error increases as n increases, which is expected. We can observe that adding momentum reduces the training error as it improves the convergence rate. The training accuracy also improves by adding momentum, as illustrated in FIG1. In order to study the optimization error of SGMM, we show the training error and test error versus the number of epochs, under SGMM trained with n = 500 samples, in Figures 3a and 3b, respectively. We plot the classification accuracy for training and test datasets in Figures 4a and 4b, respectively. We observe that the training error decreases as the number of epochs increases for all values of µ, which is consistent with the convergence analysis in Theorem 3. Furthermore, as expected, we see that adding momentum improves the training error and accuracy. However, as the number of epochs increases, we note that the benefit of momentum on the test error and accuracy becomes negligible. This happens because adding momentum also results in a higher generalization error, thus penalizing the gain in training error.

We study the generalization error and convergence of SGMM for the class of strongly convex loss functions, under mild technical conditions. We establish an upper-bound on the generalization error, which decreases with the size of the training set and increases as the momentum parameter is increased. Secondly, we analyze the convergence of SGMM during training by establishing an upper-bound on the gap between the empirical risk of SGMM and the global minimum. Our proposed bound reduces to a classical bound on the optimization error of SGM BID20 for convex functions, when the momentum parameter is set to zero.
Finally, we establish an upper-bound on the expected difference between the true risk of SGMM and the global minimum of the empirical risk, and illustrate how it scales with the number of training steps and the size of the training set. Although our results are established for the case when the learning rate is constant, they can be easily extended to the case when the learning rate decreases with the number of iterations. We also present experimental evaluations on the notMNIST dataset and show that the numerical plots are consistent with our theoretical bounds on the generalization error and the convergence gap.
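To make the setup reproducible in outline, below is a minimal sketch of projected SGMM (the update rule stated earlier) applied to an L2-regularized logistic loss, together with a train/test gap estimate in the spirit of the experiments (α = 0.01, weight decay 0.001, T = 1000, µ ∈ {0, 0.5, 0.9}). Single-sample updates are used instead of minibatches of 10, the projection radius is arbitrary, and data loading for notMNIST is omitted; all of these are assumptions, not the authors' code.

```python
import numpy as np

def project_ball(w, radius=10.0):
    """Euclidean projection onto the compact convex set Omega = {w : ||w|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def grad_f(w, x, y, lam=1e-3):
    """Gradient of f(w; z) = log(1 + exp(-y w.x)) + (lam/2)||w||^2,
    a lam-strongly convex loss; labels y are in {-1, +1}."""
    return -y * x / (1.0 + np.exp(y * x.dot(w))) + lam * w

def risk(w, X, y, lam=1e-3):
    return np.mean(np.log1p(np.exp(-y * X.dot(w)))) + 0.5 * lam * np.sum(w ** 2)

def sgmm(X, y, alpha=0.01, mu=0.5, T=1000, seed=0):
    """Projected heavy-ball SGM:
    w_{t+1} = P( w_t + mu (w_t - w_{t-1}) - alpha * grad f(w_t; z_{i_t}) )."""
    rng = np.random.default_rng(seed)
    w_prev, w = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for _ in range(T):
        i = rng.integers(X.shape[0])          # i_t drawn uniformly at random
        w_next = project_ball(w + mu * (w - w_prev) - alpha * grad_f(w, X[i], y[i]))
        w_prev, w = w, w_next
    return w

def generalization_gap(Xtr, ytr, Xte, yte, mu, runs=100):
    """Average (test risk - empirical risk) over independent SGMM realizations."""
    gaps = []
    for r in range(runs):
        w = sgmm(Xtr, ytr, mu=mu, seed=r)
        gaps.append(risk(w, Xte, yte) - risk(w, Xtr, ytr))
    return float(np.mean(gaps))

# e.g. compare mu in {0.0, 0.5, 0.9} on two-class notMNIST features (loading not shown)
```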
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1lwRjR9YX
Stochastic gradient method with momentum generalizes.
Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions. However, in real life expert demonstrations, often the action information is missing and only state trajectories are available. We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories. Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability. Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics. Experimental evaluations show that our proposed method successfully achieves performance similar to (state, action) trajectory-based traditional imitation learning methods even in the absence of action information, with much fewer iterations compared to conventional model-free reinforcement learning methods. We also demonstrate that our method can learn to act from only video demonstrations of expert agent for simple games and can learn to achieve desired performance in less number of iterations. Reinforcement learning(RL) involves training an agent to learn a policy that accomplishes a certain task in an environment. The objective of reinforcement learning is to maximize the expected future reward from a guiding signal. BID11 showed that neural networks can be used to approximate state-action value functions used by an agent to perform discrete control based on a guiding reward. This was demonstrated in Atari games where the score was used as the reward signal. Similarly, continuous control of robotics arm was achieved by BID9 minimizing the distance between end-effector and target location. Following these, other methods such as BID20 were proposed to improve the sample efficiency of modelfree algorithms with theoretical guarantees of policy improvement in each step. These algorithms assume that a guiding reward signal is available for the agent to learn optimal behavior for a certain task. However, in most cases of natural learning, such guiding signal is not present and learning is performed by imitating an expert behavior. Imitation learning involves copying the behavior of an expert agent to accomplish the desired task. In the conventional imitation learning setting, a set of expert trajectories providing states and optimal actions τ = {s 0, a 0, s 1, a 1, ..., s n, a n) performed by an expert agent π E are available but the reward (or cost function), r E (s, a) used to achieve the expert behavior is not available. The goal is to learn a new policy π, which imitates the expert behavior by maximizing the likelihood of given demonstration trajectories. A straightforward way for imitation learning is to direct learn the optimal action to perform given the current state as proposed by; BID2. The policy π can learn to imitate the expert behavior by maximizing likelihood of the condition distribution of action given states p(a|s). This can be achieved by simply training a parameterized function (neural networks for instance) with state and action pairs from the expert trajectories. Since this involves end-to-end supervised learning, training is much more sample-efficient compared to reinforcement learning and overcomes inherent problems in model-free methods such as credit assignment BID22 ). 
However, since behavior cloning learns optimal action from a single state value only, it is unaware of the future state distribution the current action will produce. Thus, errors are compounded in the future states leading to undesired agent behavior as shown by BID18; BID17. Therefore, numerous training samples are required for behavior cloning to reduce errors in action prediction required for satisfactory imitation learning. The second approach to imitation learning involves setting up exploration in a Markov Decision Process(MDP) setting. The goal then is to recover a reward signal that best explains the expert trajectories. BID12 first introduced Inverse Reinforcement Learning(IRL), where the goal is to find a reward signalr from the trajectories such that the expert is uniquely optimal. After computing this estimated reward signal, usually, a model-free reinforcement learning performed to obtain the desired policy imitating the expert behavior by maximizing the expected discounted reward E π (t γ tr (s t, a t)). While this alleviates the problem of compounding errors as in behavior cloning, BID25 showed that estimating a unique reward function from state and action trajectories is an ill-posed problem. Following the success of Generative Adversarial Networks(GANs) BID3 ) in various fields of machine learning, adversarial learning has also been shown incorporated in the imitation learning framework. The recent work on Generative Adversarial Imitation Leaning or GAIL by BID4 showed that model-free reinforcement learning using the discriminator as a cost function can learn to imitate the expert agent with much less number of demonstrated trajectories compared to behavior cloning. Following the success of GAIL, there have extensions by BID0 to model-based generative imitation learning using a differentiable dynamics model of the environment. Robust imitation policy strategies using a combination of variational autoencoders BID7; BID16 ) and GAIL has also been proposed by BID23.The previous works assume that the expert trajectories consist of both action and state values from the optimal agent. However, optimal actions are usually not available in real-world imitation learning. For example, we often learn tasks like skipping, jump rope, gymnastics, etc. just by watching other expert humans perform the task. In this case, the optimal expert trajectories only consist of visual input, in other words, the consecutive states of the expert human with no action information. We learn to jump rope by trying to reproduce actions that in state trajectories similar to the state trajectories observed from the expert. This requires exploring the environment in a structured fashion to learn the dynamics of the rope (for jump rope) which then enables executing optimal actions to imitate the expert behavior. The recent work of BID10 presents learning from observations only with focus to transferring skills learned from source domain to an unseen target domain, using rewards obtained by feature tracking for model-free reinforcement learning. Inspired by the above method of learning in humans, we present a principled way of learning to imitate an expert from state information only, with no action information available. We first learn a distribution of the next state from the current state trajectory, used to estimate a heuristic reward signal enabling model-free exploration. The state, action and next states information from modelfree exploration is used to learn a dynamics model of the environment. 
For the case of learning in humans, this is similar to performing actions for replicating the witnessed expert state trajectories, which in turn gives information about the dynamics of the environment. Once this forward model is learned, we try to find the action that maximizes the likelihood of next state. Since the forward model gives a function approximation for the environment dynamics, we can back propagate errors through it to perform model-based policy update by end to end supervised learning. We demonstrate that our proposed network can reach, with fewer iterations, the level close to an expert agent behavior (which is a pre-trained actor network or manually provided by humans), and compare it with reinforcement learning using a hand-crafted reward or a heuristics reward that is based on prediction error of next state learned from the optimal state trajectories of the expert. We summarize the notations used in the paper in this section. Consider a Markov Decision Process (MDP) denoted as (S, A, P, r, ρ 0, γ), where S is the finite set of states, A is the set of possible actions, P: S × A → S is the transition probability distribution and r: S × A → R be the reward signal from state and actions, ρ 0 → R is the initial state distribution and γ ∈ is the discount factor. Let π: S × A → be the policy that gives the conditional distribution of actions given current state, p(a|s) and R(π) = E π [t γ t r t (s t, a t)] is the discounted reward associated with the policy. We consider expert trajectories consisting of only optimal state distribution without any action information, τ E = {s 0, s 1, ..., s n}. The trajectories sampled from model-free exploration is denoted as, τ RL = {s 0, a 0, s 1, a 1, ..., s n, a n}. We use the terms dynamic model, state-transition probability and forward model interchangeably in the paper. f E (s t, a t) denotes the non-differentiable forward model of the environment and f (s t, a t) denotes the differentiable version which is learned during the proposed training procedure. π mf denotes the model-free policy and π mb denotes the model-based policy network. While most previous works use trajectories containing both state and action information to infer a policy that imitates the expert behavior, our problem statement is to imitate an expert from optimal state trajectories only. This setting is common in many natural scenarios where only state information is available. For example, humans learning to swim by seeing videos or observing other experts swimmers, have access to the sensory stream of information containing states only. The optimal actions are obtained by trying to replicate the state trajectories in the real environment via exploration. In this work, we learn a time series predictor of next state given the current state. Subsequently, we estimate a heuristic reward signal at each step, based on the difference of predicted next state by the time series model and the actual next state taken by the policy network to learn a model-free policy with exploration. However, such heuristic methods suffer from the disadvantages of slow modelfree training to reach satisfactory policy and provide no guarantees on convergence. Therefore we resort to model-based policy learning that learns a differentiable model of the environment dynamics used to learn policy by directly supervised learning. The proposed algorithm alternates between dynamic model parameters and policy parameters update using gradient propagation through the differentiable dynamics model. 
We formalize this setup in the following sections. Following the work in Inverse Reinforcement Learning(IRL), it is possible to frame the imitation learning problem in the MDP framework. The imitation learning problem then reduces to finding a uniquely optimal reward signal that maximizes the likelihood of the observed demonstration trajectories. However, finding a uniquely optimal reward function, even with optimal trajectories containing both state and action information, is an ill-posed problem. The ill-posed nature of optimal reward estimation is further accentuated in the absence of action information in expert trajectories. As such, many solutions for parameterized families of reward estimation models maximizing the likelihood of only expert state trajectories might be sub-optimal to learn the desired task. For model-free policy learning, we use a heuristic reward estimated by the error in next state prediction at each step. We make an assumption that estimating a locally optimal reward maximizing the likelihood of next step predicting, is intuitively globally optimal for learning the desired task. A straight-forward method to obtain such heuristic reward signal from the trajectories, τ s, is to learn a time series predictive model of the next state given the current state, p(s t+1 |s 1:t) from the expert state trajectories using time series modeling. In our case, we use an exponential of the difference between predicted states and actual next state associated with the action predicted by the policy network. DISPLAYFORM0 where k is a constant controlling gain of the reward signal and σ controls the sensitivity of reward to divergence from the predicted state. This reward can be used for guiding any standard model-free reinforcement learning techniques to maximize the expected reward, R(DISPLAYFORM1 We assume on the intuition that locally optimal heuristic reward estimation will be sufficient to ensure global optimality for learning the desired task. Showing the overall architecture of the proposed method. Firstly, model-free policy is updated using reward estimation from next state mismatch which storing the samples in replay buffer. These are used to update the differentiable dynamics model of the environment, which provides the error gradients for end-to-end model-based policy update from state trajectories Consider there are m sets of expert trajectory episode each consisting of T states, given as τ E = {s 0, s 1, ..., s n}, where n = mT. We assume that the trajectories in each episode are independent of each other. For the imitation learning problem, we wish to imitate an expert agent π E from state trajectories τ E. we formulate a maximum likelihood problem for state trajectories given a parameterized model of the agent policy, given as DISPLAYFORM0 where θ represents the parameter of the model. We assume that the random variables state(s t), action(a t) and next state s t+1 form a directed graphical model as shown in figure 2 (b). Following the natural dynamics of environments in reinforcement learning setting, we assume that control action for the agent a t is conditionally independent of other state and actions in the past given the current state. Distribution of the next state is conditionally independent of the other past states given current state and action following the MDP setting. In this framework, we can frame the modelbased policy as an encoder network with action as the latent variable and the dynamics policy as the decoder network predicting the next state. 
The log-likelihood estimation loss, in this case, can be written as, DISPLAYFORM1 where θ e, theta d are the encoder and decoder parameters respectively. Learning can be performed using (s t, s t+1) pairs from the expert trajectories by minimizing the above loss, sampling action values from the posteriori distribution p(a t |s t) using standard Markov Chain Monte Carlo (MCMC) methods or variational inference methods BID7; BID16. However, the learned action from the encoder in this case will not mimic the actual control commands used to perform the desired task in the control environment. We propose a constrained minimization cost function which enforces the decoder network to mimic the environment dynamics of the agent. The proposed cost function enforces that the decoders model minimizes the loss for dynamics model prediction of next state given the current state and action, whereas the composition of encoder over the decoder minimizes prediction loss of the next state given the current state. This loss function is given as, DISPLAYFORM2 where we θ dyn are the parameters of dynamics model and θ mb are model-based policy network's parameters. Let us denote the first term of the loss term in equation 4 as model-based policy loss, L mb and the second term is referred as dynamics model loss, L dyn. As shown in FIG0, we perform alternate minimizations on the proposed cost function and training on the encoder and decoder are performed on two separate datasets. Firstly, the dynamics model parameters are updated from the experience replay samples gathered during model-free exploration. Subsequently, the updated dynamics model is used as the decoder network in the above formulation with fixed weights while the model-based policy parameters are updated by the gradient, ∇ θ mb L mb. This enforces the encoder network to act as a model-based policy that learns to predict the optimal action given the current state. During implementations, a deterministic encoder and decoder are used and a single action is sampled from the posterior distribution during training. Since our expert trajectories only consist of sensory streams containing state information only, learning from the heuristic reward in a MDP setting can be slow and does not guarantee that the reward is optimal for the desired task. Thus we resort to dynamics model based learning which can propagate error gradients with respect to action information as discussed in the above section. Consider an analogy of a robot learning to navigate through a maze of expert state information only. It must first learn the state transition model p(s t+1 |s t, a t) to navigate through the environment. Once dynamics model is learned, it can obtain the best action that takes the current state in optimal state trajectories to the next state. Solving for the desired action at each state (p(a t |s t)) is a maximum likelihood problem from the expert state trajectories which can be solved by end-to-end supervised learning. Let us assume we have a parameterized model for agent dynamics, given as s t+1 = f (s t, a t ; θ dyn), where θ dyn denotes parameters of the model. During model-free learning, we store the trajectories of (s t, a t, s t+1) that were encountered during exploration by the agent. Let us denotes the trajectories of these triplets as τ RL. For continuous state spaces, the gradient for dynamic model parameters are given as DISPLAYFORM0 which is gradient on the mean squared error loss between model predicted and true next state. 
For the discrete state space case, we can first maximize the probability of next state given the current state and action using a categorical loss. Any standard stochastic gradient descent algorithms can be used for the above optimizations. However, we use neural networks as function approximators which are shown to have approximation capacity for arbitrary non-linear function, although its nonconvex nature of optimization does always guarantee a solution that is globally optimal. Recent techniques in stochastic gradient descent (; BID6) have alleviated this problem to a large extent. If we assume the dynamics model is ideal in predicting the next state and there exists a unique action to reach the next state from the current state, then the proposed method is identical to behavior cloning, although true action information is not provided by the expert. Therefore, the performance of this method is upper bounded by the performance of behavior cloning model which learns from the true action information between states. In our formulation so far, the entire next state of the agent is predicted by the next state predictor. However, we found that predicting a part of the state which is dependent on action gives better reward structure. This is in line with the work of BID14, where the authors predict φ(s t) as the latent representation of the neural network predicting the action from consecutive states, a t = g(φ(s t), φ(s t+1)). This transformed state value, φ(s t) is also used as the input and output for the dynamics model. This transformation is beneficial for two reasons: (i) It is difficult to learn the dynamics model for high dimensional state information and thus first projecting onto a low dimensional manifold to learn the dynamics model gives a more feasible learning framework.(ii) In case of transferring the learned dynamics model between different tasks that use the same environment, a common state input is required for the dynamics model, which can be achieved such transformation. In case of learning from videos, we use the agent position in the image frames as φ(s t). For the case of linked arm reacher, we will use phi(s t) as the joint angles and joint velocities. We now outline the algorithm based on the above discussed model-based policy learning framework. Sample trajectory from model-free policy τ k ∈ π k mf and add to the replay buffer τ RL Updated the dynamics model parameter using trajectories from the replay buffer DISPLAYFORM0 Update model based parameter from expert trajectories with fixed dynamics model. DISPLAYFORM1 The above algorithm shows an iterative process where in each iteration we first train a model-free algorithm using the heuristic reward function. This step is necessary because we collect a certain amount of system dynamics data, (s t, a t, s t+1), while training the model-free policy. Then, we train a system dynamics model using the above collected data. The action policy is then trained using the system dynamics model in the model-based part, which constitutes one cycle of the training. Subsequently, we repeat this cycle again starting from the model-free part. With each iteration, we collect additional system dynamics data, which in a more precise dynamics model, leading to accurate action policy parameter gradients for updating the model-based policy. The frequency of switching between model-free replay buffer collection and model-based update can be varied depending on the complexity of dynamics model. 
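Below is a minimal sketch of the learned components in Algorithm 1: the dynamics model f(s_t, a_t; θ_dyn) fit on replay triples, the model-based policy updated by back-propagating the next-state error through the dynamics model (whose parameters are not stepped in that phase), and the heuristic reward used in the model-free phase. The network sizes, optimizers, and the Gaussian-kernel form of the reward (the exact expression is elided in Section 2.1) are assumptions; the model-free learner (DDPG/DQN) and the environment interaction are not shown.

```python
import numpy as np
import torch
import torch.nn as nn

def heuristic_reward(pred_next, actual_next, k=1.0, sigma=1.0):
    """Assumed form of the prediction-error reward: r_t = k * exp(-||s_hat - s||^2 / (2 sigma^2))."""
    err = np.linalg.norm(pred_next - actual_next)
    return k * np.exp(-err ** 2 / (2.0 * sigma ** 2))

def mlp(inp, out, hidden=64):
    return nn.Sequential(nn.Linear(inp, hidden), nn.ReLU(), nn.Linear(hidden, out))

state_dim, action_dim = 4, 2
policy = mlp(state_dim, action_dim)                   # model-based policy  pi_mb: s -> a
dynamics = mlp(state_dim + action_dim, state_dim)     # dynamics model      f: (s, a) -> s'
opt_dyn = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
opt_pol = torch.optim.Adam(policy.parameters(), lr=1e-3)
mse = nn.MSELoss()

def update_dynamics(s, a, s_next):
    """L_dyn: fit f(s, a) to s' on (s, a, s') triples from the model-free replay buffer."""
    loss = mse(dynamics(torch.cat([s, a], dim=1)), s_next)
    opt_dyn.zero_grad(); loss.backward(); opt_dyn.step()
    return loss.item()

def update_policy(s, s_next):
    """L_mb: choose actions whose predicted next state matches the expert next state;
    gradients flow through the dynamics model, but only the policy parameters are stepped."""
    pred = dynamics(torch.cat([s, policy(s)], dim=1))
    loss = mse(pred, s_next)
    opt_pol.zero_grad(); loss.backward(); opt_pol.step()
    return loss.item()

# toy batch: replay (s, a, s') triples and expert (s, s') pairs
s = torch.randn(8, state_dim); a = torch.randn(8, action_dim); s2 = torch.randn(8, state_dim)
update_dynamics(s, a, s2)
update_policy(s, s2)
```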
For state predictions from previous states, we used Long Short-Term Memory Network, proposed by BID5. For the policy and dynamics model, we use neural networks as function approximators. In this work, we assume that the state transformation φ is manually specified in each experiments, although it is possible to learn such representation by learning a common transformation between states that predicts the action, a t = g(φ(s t), φ(s t+1)). We perform experimental evaluations on three kinds of environment, (i) Robotics arm reacher in 2d, (ii) Simple 2d obstacle avoidance based reacher, (iii) Learning to play simple games from raw video demonstrations. In each experiment, we show specific strengths of the proposed algorithm as follows. We use roboschool reacher(; BID1 environment to simulate twolink robotic arm that can move in two-dimensional space. The desired task is to learn reaching a given target from random starting configurations. The arm is controlled using angular torque values of both joints. We use state values consisting of angular position, angular velocity, the end effector location of the robotic link and the position of the target. The robotic arm angles and angular velocities were used as φ(s t), which is the portion of state dependent on action. In this experiment, we assume that the true reward function is known and in addition, we have some state trajectories from the expert policy. The goal is to leverage these existing state trajectories to learn a better policy in less number of steps. Reward signal, consisting of distance potential, electricity cost and the penalty for stuck joint, which is the default reward specified for the environment, was used. Specifically, we show 500 trajectories of optimal states each with 100 steps to learn model-based policy using the proposed method. We used neural networks with hidden layers containing neurons for both model-based and model-free policies. Training was performed over 2000 episodes using Deep Deterministic Policy Gradients(DDPG) proposed by BID9.Figure 3(a) shows the comparison of proposed method against the DDPG algorithm. Our method learns the dynamics model from the model-free exploration, which quickly learns the simple environment dynamics, in this case, thereby learning an optimal policy imitating the state trajectories much faster than model-free training, which is shown in . However, we found this is due to a large number of state trajectories that are shown to the proposed method. Since our performance is upper bounded by behavior cloning , we share the same drawbacks of compounding errors and data-hungry policy learning. In this experiment, we demonstrate that proposed algorithm can be used for direct end-to-end supervised imitation learning on novel tasks without resorting to model-free reinforcement learning. We refer to this setup as one-shot imitation learning, which we demonstrate on a simple toy environment. The environment is shown in FIG1. The environment consists of an agent which can freely move in 2D space. The agent is controlled by continuous position control where the action is the change in (x, y) agent position at each time step. The goal is to reach the target location while avoiding the obstacle. Initially, we train our algorithm on an environment to avoid a single obstacle while reaching the target. We use state information as the absolute position of the agent, target and obstacle, the agent velocity and relative location of obstacle and target with respect to the agent. 
We use φ(s t) as the agent 2d position in the environment. For expert demonstration, we implemented a manually engineered policy that always avoids the obstacle. We used 1000 number of demonstrations containing only state trajectories to reach a target while avoiding a single obstacle. Out of 1000 demonstrations, 800 are used for training and 200 for validation. We first learn the time series prediction of the next state, used to compute the heuristic reward based using prediction error as discussed in section 2.1. For model-based policy, we use a MLP with hidden units. We use the same policy network for both model-based and modelfree policy. The dynamics model is also modeled as a neural network with a single hidden layer of 8 units for both state and action input. We used a switching frequency of 5 between the model-free and model-based updates. Using these setting for the proposed algorithm, we get a model-based policy and the dynamics model as output. Using the dynamics model obtained from training with demonstrations of single obstacle avoidance, we perform one-shot imitation learning to learn avoidance of two obstacles. The algorithm is presented 500 samples of expert state trajectories for avoiding two obstacles. The model-based policy in the new setting is learned by step 6 of the proposed algorithm 1 using the previously learned dynamics model. Although the state information for the policy networks might change due to an additional obstacle, since φ(s t), which is the agent 2d location, remains same in both cases, we can perform one-shot imitation learning in this case. We compare the with respect to the expert policy and behavior cloning and report the average of test reward on 50 episodes. While the expert policy achieves average test reward of 3036, and behavior cloning achieves 3939 and our imitation learning method gave a reward of 3805. This demonstrates that our proposed algorithm can be used for one-shot imitation in environments with same dynamics and can produce comparable to behavior cloning which was trained from true actions. In this experiment, we learn model-based control policies from raw pixels. We use the python reimplementation of the game Flappy bird BID8 ). In this environment, the agent has to hover through the pipes without collision by flapping its wings. The environment has gravity and with each flap, the agent receives an upward acceleration which sends it to an upward parabolic trajectory. We choose this environment due to its complicated dynamics and show that our proposed method can learn to model this dynamics. We learn action policies from raw videos of just 10 episodes each with 1000 steps. The reward is assumed to be unknown in this case and we estimate the reward by the error in prediction of the next state as mentioned in section 2.1. The control action is a single discrete command of whether to flap the bird's wings or not. We denotes this action space as {+1, −1}. For state information at each step we use 4 consecutive frames resized to (80 × 80 × 4). We also assume that the absolute position of the bird's location is available to us, which we use as φ(s t). This can be also computed by learning a simple object detector from image frames. For the next state predictor, we use an LSTM predictor that outputs the next position of the agent location given the sequence of states witnessed so far. 
The model-free reward prediction step assigns a reward based on the difference between the actual next state reached by the policy and the next state predicted by the LSTM. This reward is used to train a DQN BID11 for the model-free policy update, which collects data to train the dynamics model, which in turn trains the model-based policy network. We also train a vanilla DQN using the standard reward, which in this case is 0.1 for each time step plus a +1 bonus whenever the agent successfully passes through a pipe; to compare the various methods, we use this standard reward as the common evaluation metric. The model-based policy is a convolutional neural network (CNN) with a softmax output, so learning it is essentially a binary classification task. For the DQN model-free policy, the last layer predicts the Q-values and therefore has a linear activation. We first learn the dynamics model, approximated as a multilayer perceptron (MLP) with a single hidden layer of 16 units for both the state and action inputs; it learns to regress the next state from the current state and action by minimizing a mean squared error loss. Next, from the expert demonstration of state trajectories τ_E, we find, for each transition, the current optimal action that best explains the observed next state, a*_t = arg min_{a_t ∈ {0,1}} ||s_{t+1} − f(s_t, a_t)||, with (s_t, s_{t+1}) ∼ τ_E. The final step is to obtain the model-based policy by behavior cloning on the states s_t ∼ τ_E and the inferred optimal actions a*_t. We found that the number of +1 (flap) actions taken by the agent is far smaller than the number of −1 (no-flap) actions, which causes an imbalance in the action distribution; we therefore used class-balancing techniques both for the model-based policy update and for the direct behavior cloning baseline trained from the true actions. Figure 3(b) compares the proposed model-based method with estimated reward (mb reward pred), behavior cloning (bc), model-free RL using DQN with the standard known reward (dqn reward), and DQN with the estimated reward (dqn reward pred). Note that although the original reward provides only a constant 0.1 per step plus a +1 bonus, without per-step guidance, the estimated reward provides a dense guidance signal that leads to faster convergence, as shown in the comparison. We select samples from the model-free training for the model-based policy update using prioritized sampling with respect to the estimated reward signal; we found prioritized sampling essential for learning a good dynamics model of the environment. The results show that the model-based policy learns near-optimal behavior in far fewer steps than its model-free counterparts. Its performance is, however, upper bounded by the behavior cloning method, which achieves an average test reward (over 20 iterations) of 49.62; the expert score is 60.9. Since DQN with the estimated reward learns via exploration in an MDP setting, it can surpass the performance of behavior cloning when, as here, the number of expert demonstrations is limited.
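To make the action-inference and cloning steps used in the Flappy Bird experiment concrete, here is a minimal PyTorch sketch. The dynamics-model call signature `dynamics_model(states, actions)`, the use of indices {0, 1} for the two actions, and the cross-entropy cloning loss with class weights are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def infer_actions(dynamics_model, expert_states, expert_next_states, actions=(0, 1)):
    """For each expert transition (s_t, s_{t+1}), pick
    a*_t = argmin_a || s_{t+1} - f(s_t, a) ||^2 over the discrete action set."""
    errors = []
    for a in actions:
        a_vec = torch.full((expert_states.shape[0], 1), float(a))
        pred_next = dynamics_model(expert_states, a_vec)          # f(s_t, a)
        errors.append(((pred_next - expert_next_states) ** 2).sum(dim=-1))
    return torch.stack(errors, dim=1).argmin(dim=1)               # index into `actions`

def behaviour_clone(policy, expert_states, inferred_actions, class_weights,
                    epochs=20, lr=1e-3):
    """Supervised fit of the model-based policy on expert states and the inferred
    optimal actions, with class weights to counter the flap / no-flap imbalance."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        logits = policy(expert_states)                            # (N, num_actions)
        loss = F.cross_entropy(logits, inferred_actions, weight=class_weights)
        opt.zero_grad(); loss.backward(); opt.step()
```

When the learned dynamics model is accurate, the inferred actions coincide with the expert's true actions and this reduces to ordinary behavior cloning, which is consistent with the upper bound discussed above.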
We presented a model-based imitation learning method that can learn to act from expert state trajectories in the absence of action information. Our method uses trajectories sampled from the model-free policy's exploration to train a dynamics model of the environment. As the model-free policy improves over time, the forward model better approximates the actual environment dynamics, which leads to improved gradient flow and hence a better model-based policy, trained in a supervised fashion from the expert state trajectories. In the ideal case, when the dynamics model perfectly approximates the environment, the proposed method is equivalent to behavior cloning, even in the absence of action information. We demonstrated that the proposed method learns the desired policy in fewer iterations than conventional model-free methods. We also showed that, once the dynamics model is trained, it can be reused to transfer to other tasks in a similar environment in an end-to-end supervised manner. Future work includes a tighter integration of model-based and model-free learning for higher data efficiency, by sharing information between the model-free policy π_mf and the model-based policy π_mb and between the next-state predictor p(s_{t+1}|s_t) and the dynamics model p(s_{t+1}|s_t, a_t), and addressing the limitations of compounding errors and the need for a large number of demonstrations, for example through adversarial training that maximizes the likelihood of future state distributions.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1GDXzb0b
Learning to imitate an expert in the absence of optimal actions by learning a dynamics model while exploring the environment.
Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network there exist trainable sub-networks that perform as well as or better than the original model with a comparable number of training steps. While this discovery is insightful, finding proper sub-networks requires iterative training and pruning, and the high cost incurred limits the applications of the lottery ticket hypothesis. We show that there exists a subset of these sub-networks that converge significantly faster during training and can thus mitigate the cost issue. We conduct extensive experiments showing that such sub-networks consistently exist across various model structures for a restrictive setting of hyperparameters (e.g., carefully selected learning rate, pruning ratio, and model capacity). As a practical application of our findings, we demonstrate that such sub-networks can cut down the total time of adversarial training, a standard approach for improving robustness, by up to 49% on CIFAR-10 while achieving state-of-the-art robustness. Pruning has served as an important technique for removing redundant structure in neural networks. Proper pruning can reduce computation and storage costs without harming performance. Until recently, however, pruning was used only as a post-processing procedure, and pruning at initialization was believed to be ineffective. The recently proposed lottery ticket hypothesis shows that a deep neural network contains sub-networks which, when trained from a certain initialization obtained through pruning, perform as well as or better than the original model with comparable convergence rates. Such pairs of sub-networks and initializations are called winning tickets. This phenomenon indicates that it is possible to prune at initialization. However, finding winning tickets still requires iterative pruning and excessive training, and this high cost limits their application. Although prior work shows that winning tickets converge faster than the corresponding full models, this has only been observed on small networks, such as a convolutional neural network (CNN) with only a few convolutional layers. In this paper, we show that for a variety of model architectures there consistently exist sub-networks that converge significantly faster when trained from a certain initialization after pruning. We call these boosting tickets. We observe that the standard technique for identifying winning tickets does not always find boosting tickets; the requirements are in fact more restrictive. We extensively investigate the underlying factors that affect this boosting effect, considering three state-of-the-art large model architectures: VGG-16, ResNet-18, and WideResNet. We conclude that the boosting effect depends principally on three factors, (i) learning rate, (ii) pruning ratio, and (iii) network capacity, and we demonstrate how these factors affect it. By controlling these factors, after only one training epoch on CIFAR-10 we obtain 90.88%/90.28% validation/test accuracy (which regularly requires more than 30 training epochs) on WideResNet-34-10 with 80% of the parameters pruned. We further show that boosting tickets have a practical application in accelerating adversarial training, an effective but expensive defensive training method for obtaining models robust to adversarial examples. Adversarial examples are carefully perturbed inputs that are indistinguishable from natural inputs but can easily fool a classifier.
We first show that our observations on winning and boosting tickets extend to the adversarial training scheme. Furthermore, we observe that boosting tickets pruned from a weakly robust model can be used to accelerate the adversarial training process for obtaining a strongly robust model. On CIFAR-10 with WideResNet-34-10, we save up to 49% of the total training time (including both pruning and training) compared to the regular adversarial training process. Our contributions are summarized as follows: 1. We demonstrate that there exist boosting tickets, a special type of winning tickets that significantly accelerate the training process while still maintaining high accuracy. 2. We conduct extensive experiments to investigate the major factors affecting the performance of boosting tickets. 3. We demonstrate that winning tickets and boosting tickets exist in the adversarial training scheme as well. 4. We show that pruning a non-robust model allows us to find winning/boosting tickets for a strongly robust model, which enables an accelerated adversarial training process. 2 BACKGROUND AND RELATED WORK Network pruning has been extensively studied as a method for compressing neural networks and reducing resource consumption. However, it was previously believed that pruned networks cannot be trained from scratch. Surprisingly, recent research has shown that it is possible to prune a neural network at initialization and still reach performance similar to that of the full model. Within this category, the lottery ticket hypothesis states that a randomly initialized dense neural network contains a sub-network that is initialized such that, when trained in isolation, it learns as fast as the original network and matches its test accuracy. An iterative pruning method has been proposed to find such sub-networks. Specifically, this approach first randomly initializes the model and stores the initialization separately; the model is then trained in the standard manner until convergence. A certain proportion of the weights with the smallest magnitudes is pruned, while the remaining weights are reset to the previously stored initialization, ready to be trained again. This train-prune-reset procedure is performed several times until the target pruning ratio is reached. Using this pruning method, the resulting pruned networks can be trained to accuracy similar to that of the original full networks, which is better than a model with the same pruned structure but random initialization. One limitation of the lottery ticket hypothesis, as pointed out in prior work, is that winning tickets are found by unstructured pruning, which does not necessarily yield faster training or execution time. In addition, finding winning tickets requires training the full model beforehand, which is itself time-consuming, especially with iterative pruning. In this paper, we aim to show that there exists a subset of winning tickets, namely boosting tickets, that not only perform as well as the original model but also converge much faster. Given a classifier f: X → {1, . . ., k} and an input x ∈ X, an adversarial example x_adv is a perturbed version of x such that D(x, x_adv) < ε for some small ε > 0, yet the example is misclassified, i.e., f(x_adv) ≠ f(x). Here D(·, ·) is some distance metric, often an ℓ_p metric; in most of the literature, and in this paper, the ℓ_∞ metric is considered. The procedure of constructing such adversarial examples is often referred to as an adversarial attack.
One of the simplest attacks is the single-step Fast Gradient Sign Method (FGSM), which perturbs the input along the sign of the gradient of the loss with respect to the input: x_adv = Π_{x+S}(x + ε · sign(∇_x ℓ(f(x), y))), where Π_{x+S} is the projection operation that ensures adversarial examples stay within the ℓ_p ball S around x. Although this method is fast, the attack is weak and can be defended against easily. Its multi-step variant, Projected Gradient Descent (PGD), is one of the strongest attacks: x^{t+1} = Π_{x+S}(x^t + α · sign(∇_x ℓ(f(x^t), y))), where x^0 is initialized with a random perturbation. Since PGD requires access to the gradients over multiple steps, it incurs a high computational cost. On the defense side, currently the most successful approach is to construct adversarial examples via PGD during training and add them to the training set as data augmentation, which is referred to as adversarial training. One caveat of adversarial training is its computational cost, due to performing PGD attacks at each training step. Alternatively, using FGSM during training is much faster, but the resulting model is robust against FGSM attacks while remaining vulnerable to PGD attacks. In this paper, we show that it is possible to combine the advantages of both and quickly train a strongly robust model by exploiting boosting tickets. Prior studies have shown success in achieving both compactness and robustness of the trained networks; however, most of them either incur much higher training cost or sacrifice robustness relative to the full model. In contrast, our approach reduces training time while obtaining similar or higher robust accuracy than the original full network. We first investigate boosting tickets in the standard setting, without considering adversarial robustness. In this section, we show that with properly chosen hyperparameters we manage to find boosting tickets for VGG-16 and ResNet that can be trained much faster than the original dense networks. Detailed model architectures and the experimental setup can be found in Supplementary Section A. To find boosting tickets, we use an algorithm similar to the one for finding winning tickets, briefly described in the previous section and detailed here. First, a neural network is randomly initialized and the initialization is saved. The network is then trained until convergence, and a given proportion of the weights with the smallest magnitudes is pruned, resulting in a mask in which pruned weights are marked 0 and remaining weights are marked 1. We call this train-and-prune step pruning. The mask is then applied to the saved initialization to obtain a sub-network; this sub-network, together with its initialization, is the boosting ticket. All weights that are pruned (zeros in the mask) remain 0 during the entire subsequent training process. Finally, we retrain the sub-network. The key differences between our algorithm and the one proposed for finding winning tickets are: (i) we use a small learning rate for pruning and retrain the sub-network (ticket) with a learning-rate warmup starting from this small value; in particular, for VGG-16 we prune with learning rate 0.01 and warm up from 0.01 to 0.1 for retraining, and for ResNet-18 we prune with 0.05 and warm up from 0.05 to 0.1; (ii) we find it sufficient to prune and retrain the model only once instead of iteratively pruning multiple times. In Supplementary Section B, we show that the difference in boosting effect between tickets found by iterative pruning and by one-shot pruning is negligible. A minimal sketch of this one-shot procedure is given below.
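The sketch below illustrates the one-shot procedure just described, in PyTorch: save the initialization, train once with a small learning rate, build a magnitude mask, reset surviving weights to the saved initialization, and keep pruned weights at zero during retraining. It is a simplified global-magnitude illustration under stated assumptions (only tensors named "weight" are pruned, global thresholding), not the authors' code.

```python
import copy
import torch

def magnitude_mask(model, prune_ratio=0.8):
    """Global mask: zero out the prune_ratio fraction of weights with the
    smallest magnitudes (biases and normalization parameters left untouched)."""
    named_w = [(n, p) for n, p in model.named_parameters() if "weight" in n]
    all_w = torch.cat([p.detach().abs().flatten() for _, p in named_w])
    k = max(1, int(prune_ratio * all_w.numel()))
    threshold = torch.kthvalue(all_w, k).values
    return {n: (p.detach().abs() > threshold).float() for n, p in named_w}

def make_ticket(model_fn, train_fn, prune_ratio=0.8, prune_lr=0.01):
    """Train-and-prune once with a small learning rate, then reset to init."""
    model = model_fn()
    init_state = copy.deepcopy(model.state_dict())   # save the initialization
    train_fn(model, lr=prune_lr)                      # one full training run, small LR
    mask = magnitude_mask(model, prune_ratio)
    model.load_state_dict(init_state)                 # reset surviving weights
    return model, mask

def apply_mask(model, mask):
    """Call after every optimizer step so pruned weights stay exactly zero."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in mask:
                p.mul_(mask[n])
```

Retraining then proceeds exactly like standard training, except that `apply_mask` is called after each update and the learning rate is warmed up from `prune_lr` to the full rate.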
Note that warmup is also used in earlier work on winning tickets; however, that work proposes to use warmup from a small learning rate to a large one during pruning as well, which hinders the boosting effect, as shown in the following experiments. First, we show the existence of boosting tickets for VGG-16 and ResNet-18 on CIFAR-10 in Figure 1 and compare them to winning tickets. In particular, boosting tickets are winning tickets, in the sense that they outperform randomly initialized models; compared to winning tickets, they demonstrate equally good performance with a higher convergence rate (Figure 1 plots validation accuracy for winning tickets, boosting tickets, and randomly initialized weights; in both models, the boosting tickets show a faster convergence rate and performance equal to that of the winning tickets). Similar results on MNIST can be found in Supplementary Section C. To measure the overall convergence rate, early stopping seems a natural fit: it is commonly used to prevent overfitting, and the number of steps at which it triggers can be used to measure the convergence rate. However, early stopping is not compatible with the learning-rate scheduling we use, where the total number of steps is fixed before training. This causes two issues in our evaluation in Figure 1: (i) although the boosting tickets reach a relatively high validation accuracy much earlier than the winning tickets, the training procedure is then hindered by the large learning rate; after the learning rate drops, the performance gap between boosting tickets and winning tickets becomes negligible, so the learning-rate schedule obscures the improvement in convergence rate of boosting tickets; (ii) due to their fast convergence, boosting tickets tend to overfit, as observed for ResNet-18 after 50 epochs. To mitigate these two issues without abandoning learning-rate scheduling, we conduct another experiment in which we mimic early stopping by gradually increasing the total number of epochs from 20 to 100, still dropping the learning rate at the 50% and 75% marks. In this way, we can better understand the speed of convergence without worrying about overfitting, even with learning-rate scheduling involved. In Figure 2 we compare boosting tickets and winning tickets on VGG-16 in this manner. While the first two plots in Figure 2 show the general trend of convergence, the improvement in convergence rate is much clearer in the last four plots. In particular, the validation accuracy of the boosting tickets after 40 epochs is already on par with that of the model trained for 100 epochs, whereas the winning tickets lag well behind the boosting tickets until 100 epochs, where the two finally match. We further examine the test accuracy at the end of training for boosting and winning tickets in Table 1. The test accuracy of the winning tickets gradually increases as more training steps are allowed, while the boosting tickets achieve their highest test accuracy after 60 epochs and start to overfit at 100 epochs. Summarizing these observations, we confirm the existence of boosting tickets and state the boosting ticket hypothesis: a randomly initialized dense neural network contains a sub-network that is initialized such that, when trained in isolation, it converges faster than the original network and other winning tickets while matching their performance. In the following sections, we investigate three major components that affect the boosting effect.
As finding boosting tickets involves using a different learning rate for pruning, it is natural to assume that the performance of boosting tickets depends on the choice of learning rate. We therefore extensively investigate the influence of various learning rates, using an experimental setting similar to that of the previous section: we gradually increase the total number of epochs and use test accuracy as a measure of convergence rate. We choose four learning rates for pruning: 0.005, 0.01, 0.05, and 0.1. All of the tickets found with these learning rates improve accuracy over randomly reinitialized sub-models and thus satisfy the definition of winning tickets. As shown in the first two plots of Figure 3, tickets found with smaller learning rates tend to have stronger boosting effects. For both VGG-16 and ResNet-18, the models pruned with learning rate 0.1 show the least boosting effect, measured by test accuracy after 20 epochs of training. On the other hand, pruning with too small a learning rate compromises the final test accuracy to some extent. We therefore treat the tickets found with learning rate 0.01 as our boosting tickets for VGG-16, and those found with learning rate 0.05 as the boosting tickets for ResNet-18; these converge much faster than all the others while achieving the highest final test accuracy. Pruning ratio has been an important component for winning tickets, so we also investigate its effect on boosting tickets. Since we are interested only in the boosting effect, we use validation accuracy at early stages of training as a measure of boosting strength, to avoid drawing too many lines in the plots. In Figure 4, we show the validation accuracy after the first and fifth epochs for different pruning ratios on VGG-16 and ResNet-18. For both architectures, boosting tickets always reach much higher accuracy than randomly reinitialized sub-models, demonstrating their boosting effect. When the pruning ratio falls in the range from 60% to 90%, boosting tickets provide the strongest boosting effect, reaching around 80% and 83% validation accuracy after the first and fifth training epochs for VGG-16, and 76% and 85% for ResNet-18. Moreover, the increase in validation accuracy between the first and fifth training epochs becomes small once the boosting effect appears, indicating that convergence starts to saturate under the large initial learning rate and the model is ready for the learning-rate drop. Finally, we investigate how model capacity, including depth and width, affects boosting tickets. We use WideResNet with either the depth or the width fixed while varying the other factor. In particular, we keep the depth at 34 and increase the width from 1 to 10, and then keep the width at 10 and increase the depth from 10 to 34, comparing the boosting effects. The changes in validation accuracy of these models are shown in Figure 5. Overall, models with larger capacity show a more significant boosting effect, although the effect stops growing once the depth exceeds 22. Notably, the largest model, WideResNet-34-10, achieves 90.88% validation accuracy after only one training epoch. Although the lottery ticket hypothesis has been studied extensively, the same phenomenon in the adversarial training setting lacks a thorough understanding.
In this section, we establish two facts that make boosting tickets suitable for the adversarial setting: the lottery ticket hypothesis and the boosting ticket hypothesis carry over to adversarial training, and pruning a weakly robust model allows us to find boosting tickets for a strongly robust model, saving training cost. In the following experiment, we prune three models to obtain tickets: a naturally trained model (i.e., trained in the standard manner) and two adversarially trained models, using FGSM and PGD respectively. We then retrain the pruned models with the same PGD-based adversarial training from the same initialization. In Figure 6, we report accuracy on the original validation set and on adversarially perturbed validation examples, denoted clean accuracy and robust accuracy. We also train the pruned models from random reinitialization to validate the lottery ticket hypothesis. Unless otherwise stated, all PGD-based adversarial training follows the standard setup: PGD attacks are performed with 10 steps of size 2/255 (PGD-10) and are bounded by 8/255 in ℓ∞ norm; the FGSM attacks used in FGSM-based adversarial training are likewise bounded by 8/255. Models trained from the boosting tickets obtained with both FGSM- and PGD-based adversarial training demonstrate superior performance and faster convergence compared to the model trained from random reinitialization. This confirms that the lottery ticket hypothesis and the boosting ticket hypothesis apply to adversarial training for both clean and robust accuracy. More interestingly, the models pruned with FGSM- and PGD-based adversarial training perform almost identically. This suggests that it is sufficient to train a weakly robust model with FGSM-based adversarial training to obtain the boosting tickets, and then retrain the ticket with stronger attacks such as PGD. This finding is notable because FGSM-based adversarially trained models suffer from label leaking and learn only weak robustness: the FGSM-trained model from which we obtain our boosting tickets has 89% robust accuracy against FGSM but only 0.4% robust accuracy against a 20-step PGD attack. Nevertheless, Figure 6 shows that the subsequent PGD-based adversarial retraining of the boosting tickets obtained from that FGSM-trained model yields a genuinely robust network. Further discussion can be found in Section 5. Prior work argued, based on experiments on MNIST, that the lottery ticket hypothesis fails to hold under adversarial training. We show that winning tickets were not observed there because the models used have limited capacity. In the adversarial setting bounded by ℓ∞ ≤ 0.3, small models such as a CNN with two convolutional layers cannot yield even winning tickets when the pruning ratio is large. In Figure 7, plots (a) and (b) show the clean and robust accuracy of the pruned models at a pruning ratio of 80%: the pruned model degrades into a trivial classifier in which all examples are assigned to the same class, with 11.42%/11.42% validation/test accuracy. When we use VGG-16 instead, as shown in plots (c) and (d), winning tickets are found again. This can be explained by the fact that adversarial training requires much larger model capacity than standard training, so pruning small models can undermine their performance.
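For concreteness, the two attack constructions used throughout this section can be sketched as below, using the hyperparameters stated above (ε = 8/255, and for PGD 10 steps of size 2/255). These are the standard FGSM and PGD formulations under an ℓ∞ ball, written as an illustrative PyTorch sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Single-step attack: move each input by eps along the sign of the gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return torch.clamp(x + eps * grad.sign(), 0.0, 1.0).detach()

def pgd(model, x, y, eps=8/255, step=2/255, iters=10):
    """Multi-step attack with a random start, projected back into the eps-ball."""
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0.0, 1.0)
    for _ in range(iters):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to the ell_inf ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

During adversarial training, each minibatch (x, y) is replaced by (attack(model, x, y), y) before the usual loss and backward pass; the FGSM variant is roughly ten times cheaper per step than PGD-10, which is what makes FGSM-based pruning inexpensive.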
Since MNIST is a simple dataset, adversarial training converges within the first few epochs for both the tickets and randomly initialized models; as a result, no winning ticket exhibits a boosting effect pronounced enough to be identified as a boosting ticket on MNIST. We then repeat the experiment of Figure 2 in the adversarial training setting to better expose the improved convergence rates. The results for validation accuracy and test accuracy are presented in Figure 8 and Table 2, respectively. They suggest that training for 60 epochs is sufficient to achieve robust accuracy similar to that of the full model trained for 100 epochs. So far, we have confirmed that boosting tickets exist consistently across different models and training schemes and convey useful insights into the behavior of pruned models. In the natural training setting, however, boosting tickets are not suitable for accelerating the standard training procedure, even though they provide faster convergence, because finding them requires training the full model beforehand. The two observations in Section 4, on the other hand, enable boosting tickets to accelerate adversarial training: we can find boosting tickets with FGSM-based adversarial training, and they significantly accelerate the subsequent PGD-based adversarial training. Note that the cost of FGSM-based training is only about one tenth of that of standard 10-step PGD-based training, and is therefore almost negligible compared to the time saved by the boosting effect. In Table 3, we adversarially train WideResNet-34-10, the same architecture used in prior adversarial training work, with the proposed approach for 40, 70, and 100 epochs, and report the best accuracy and robust accuracy under various attacks over the whole training process. In particular, we perform 20-step PGD and 100-step PGD as white-box attacks, in which the attacker has access to the model parameters. More experimental results are included in the Appendix. We also report the time consumed to train each model to measure how much time boosting tickets save; all of these experiments are run on a workstation with two V100 GPUs in parallel. From Table 3 we observe that although our approach requires pruning before training, it is faster overall because it uses FGSM-based adversarial training for pruning. In particular, to reach its best robust accuracy, the original training method of Madry et al. requires 134,764 seconds on WideResNet-34-10; our approach requires 69,552 seconds, comprising 15,462 seconds to find the boosting ticket and 54,090 seconds to retrain it, saving 49% of the total training time. Not knowledge distillation. It may seem that winning and boosting tickets behave like knowledge distillation, in which knowledge learned by a large model is transferred to a small one; this conjecture would explain the boosting effect as the pruned model quickly recovering the knowledge of the full model. However, the lottery ticket framework appears to be distinct from knowledge distillation: if boosting tickets simply transferred knowledge from the full model to the pruned model, then an FGSM-based adversarially trained model should not yield tickets that improve the robustness of the sub-model against PGD attacks, since the full model itself is vulnerable to PGD attacks. Yet in Section 4.1 we observe that an FGSM-based adversarially trained model still leads to boosting tickets that accelerate PGD-based adversarial training.
We believe the cause of boosting tickets requires further investigation in future work. Accelerating adversarial training. Prior work proposes to reduce the training time of PGD-based adversarial training by recycling the gradients computed for parameter updates when constructing adversarial examples. While that approach focuses on reducing the computational cost per epoch, our method focuses on the convergence rate, i.e., reducing the number of epochs required for convergence. The two approaches are therefore compatible, and combining them to further reduce training time is a promising future direction. In this paper, we investigated boosting tickets: sub-networks coupled with an initialization that can be trained with a significantly faster convergence rate. As a practical application, we showed that in the adversarial training scheme, pruning a weakly robust model yields boosting tickets that save up to 49% of the total training time needed to obtain a strongly robust model matching state-of-the-art robustness. Finally, it is an interesting direction for future work to find boosting tickets without training the full model beforehand, as doing so is not technically necessary. Setup. We use weight decay, a decreasing learning-rate schedule (×0.1 at 50% and 75% of training), and augmented training data for training the models. We keep the setting as close as possible to the one used in the original lottery ticket work, except that we use one-shot pruning instead of iterative pruning, which makes the whole pruning-and-training process more practical in real applications. On CIFAR-10, we randomly select 5,000 of the 50,000 training images as a validation set and train the models on the rest; the reported test accuracy is measured on the full test set. Our experiments are run on four Tesla V100s, 10 Tesla P100s, and 10 2080 Tis. For the time-sensitive experiments, such as adversarial training of WideResNet-34-10 in Section 4.3, each model is trained on two Tesla V100s with data parallelism; for the remaining experiments, which only measure final test accuracy, each model is trained on a single GPU without parallelism. Table 4 summarizes the number of parameters and parameter sizes of all evaluated architectures, including VGG-16, ResNet-18, and the WideResNet variants. In Figure 9, we track the training of models obtained from both iterative pruning and one-shot pruning, plotting the validation accuracy of both alongside the corresponding randomly initialized models; we find the performance of the two, in terms of both boosting effect and final accuracy, to be indistinguishable. In this section, we report experiments on MNIST in the standard setting, using a LeNet with two convolutional and two fully connected layers for the classification task. Since we do not use learning-rate scheduling on MNIST, early stopping is used to determine the speed of convergence. In Table 5, we report the epoch at which early stopping triggers and the test accuracy, illustrating the existence of boosting tickets on MNIST: winning tickets converge at the 18th epoch, whereas boosting tickets converge at the 11th epoch, indicating faster convergence. In addition to the experimental results reported in the main text, Table 6 includes results for C&W attacks and transfer attacks, in which we attack one model with adversarial examples found by 20-step PGD on other models. We find that adversarial examples generated from one model transfer to the other models with only a slight decrease in robust error.
This indicates that our models and Madry et al.'s model share adversarial examples and, further, share decision boundaries. Table 6: Best test clean accuracy (first row), robust accuracy (second to fourth rows), transfer-attack accuracy (middle four rows), and training time for PGD-based adversarial training (last four rows) on boosting tickets obtained by FGSM-based adversarial training for various numbers of epochs on WideResNet-34-10. Overall, our adversarial training strategy based on boosting tickets saves up to 49% of the total training time while achieving higher robust accuracy than regular adversarial training of the original full model.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Sye2c3NYDB
We show the possibility of pruning to find a small sub-network with significantly higher convergence rate than the full model.
Disentangled representations, in which the higher-level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, and interpretability. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational-inference-based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over the observed data that encourages disentanglement. We also propose a new disentanglement metric that is better aligned with the qualitative disentanglement observed in the decoder's output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). Feature representations of the observed raw data play a crucial role in the success of machine learning algorithms. Effective representations should capture the underlying (abstract or high-level) latent generative factors that are relevant for the end task while ignoring inconsequential or nuisance factors. Disentangled feature representations have the property that the generative factors are revealed in disjoint subsets of the feature dimensions, such that a change in a single generative factor causes a highly sparse change in the representation. Disentangled representations offer several advantages: (i) Invariance: it is easier to derive representations that are invariant to nuisance factors by simply marginalizing over the corresponding dimensions; (ii) Transferability: they are arguably more suitable for transfer learning, as most of the key underlying generative factors appear segregated along feature dimensions; (iii) Interpretability: a human expert may be able to assign meanings to the dimensions; (iv) Conditioning and intervention: they allow for interpretable conditioning and/or intervention over a subset of the latents and for observing the effects on other nodes in the graph. Indeed, the importance of learning disentangled representations has been argued in several recent works BID5 BID37 BID50. Recognizing the significance of disentangled representations, several attempts have been made in this direction in the past BID50. Much of the earlier work assumes some sort of supervision in terms of: (i) partial or full access to the generative factors per instance BID48 BID58 BID35 BID33, (ii) knowledge about the nature of the generative factors (e.g., translation, rotation, etc.) BID29 BID11, (iii) knowledge about the changes in the generative factors across observations (e.g., sparse changes in consecutive frames of a video) BID25 BID57 BID21 BID14 BID32, or (iv) knowledge of a complementary signal used to infer representations that are conditionally independent of it 1 BID10 BID41 BID53. However, in most real scenarios we only have access to raw observations without any supervision about the generative factors. This is a challenging problem, and many of the earlier attempts have not been able to scale well to realistic settings BID51 BID15 BID13. Recently, BID9 proposed an approach to learn a generative model with disentangled factors based on Generative Adversarial Networks (GANs) BID24; however, implicit generative models like GANs lack an effective inference mechanism 2, which hinders their applicability to the problem of learning disentangled representations.
More recently, an approach based on the Variational AutoEncoder (VAE) BID34 was proposed for inferring disentangled factors. The latents inferred by that method (termed β-VAE) are empirically shown to have better disentangling properties; however, the method deviates from the basic principles of variational inference, creating increased tension between the observed data likelihood and disentanglement. This in turn leads to poor quality of the generated samples, as has been observed empirically. In this work, we propose a principled approach for inference of disentangled latent factors based on the popular and scalable framework of amortized variational inference BID34 BID55 BID23 BID49, powered by stochastic optimization BID30 BID34 BID49. Disentanglement is encouraged by introducing a regularizer over the induced inferred prior. Unlike β-VAE, our approach does not introduce any extra conflict between disentanglement of the latents and the observed data likelihood, which is reflected in the overall quality of the generated samples: it matches that of the VAE and is much better than that of β-VAE. This does not come at the cost of higher entanglement, and our approach also outperforms β-VAE in disentangling the latents as measured by various quantitative metrics. We also propose a new disentanglement metric, called Separated Attribute Predictability (SAP), which is better aligned than existing metrics with the qualitative disentanglement observed in the decoder's output. We start with a generative model of the observed data that first samples a latent variable z ∼ p(z); an observation is then generated by sampling from p_θ(x|z). The joint density of latents and observations is denoted p_θ(x, z) = p(z) p_θ(x|z). The problem of inference is to compute the posterior of the latents conditioned on the observations, i.e., p_θ(z|x) = p(z) p_θ(x|z) / p_θ(x), where p_θ(x) = ∫ p_θ(x, z) dz. We assume that we are given a finite set of samples (observations) from the true data distribution p(x). In most practical scenarios involving high-dimensional and complex data, this computation is intractable and calls for approximate inference. Variational inference takes an optimization-based approach to this problem, positing a family D of approximate densities over the latents and reducing the approximate inference problem to finding the member density that minimizes the Kullback-Leibler divergence to the true posterior, i.e., q*_x = arg min_{q ∈ D} KL(q(z) || p_θ(z|x)) BID6. The idea of amortized inference BID34 BID55 BID23 BID49 is to explicitly share information across the inferences made for each observation. One successful way of achieving this for variational inference is to have a so-called recognition model, parameterized by φ, that encodes an inverse map from observations to approximate posteriors (also referred to as a variational autoencoder, or VAE) BID34 BID49. The recognition model parameters are learned by optimizing min_φ E_x[KL(q_φ(z|x) || p_θ(z|x))], where the outer expectation is over the true data distribution p(x), from which we have samples. This can be shown to be equivalent to maximizing what is termed the evidence lower bound (ELBO): max_{θ,φ} E_x[ E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)) ] (Eq. 1). The ELBO (the objective on the right side of Eq. 1) lower bounds the log-likelihood of the observed data, and the gap vanishes at the global optimum. Often, the density forms of p(z) and q_φ(z|x) are chosen such that their KL divergence can be written analytically in closed form (e.g., p(z) is N(0, I) and q_φ(z|x) is N(µ_φ(x), Σ_φ(x))) BID34.
In such cases, the ELBO can be efficiently optimized (to a stationary point) using stochastic first order methods where both expectations are estimated using mini-batches. Further, in cases when q φ (·) can be written as a continuous transformation of a fixed base distribution (e.g., the standard normal distribution), a low variance estimate of the gradient over φ can be obtained by coordinate transformation (also referred as reparametrization) BID22 BID34 BID49. Most VAE based generative models for real datasets (e.g., text, images, etc.) already work with a relatively simple and disentangled prior p(z) having no interaction among the latent dimensions (e.g., the standard Gaussian N (0, I)) BID7 BID43 BID31 BID59. The complexity of the observed data is absorbed in the conditional distribution p θ (x|z) which encodes the interactions among the latents. Hence, as far as the generative modeling is concerned, disentangled prior sets us in the right direction. Although the generative model starts with a disentangled prior, our main objective is to infer disentangled latents which are potentially conducive for various goals mentioned in Sec. 1 (e.g., invariance, transferability, interpretability). To this end, we consider the density over the inferred latents induced by the approximate posterior inference mechanism, DISPLAYFORM0 which we will subsequently refer to as the inferred prior or expected variational posterior (p(x) is the true data distribution that we have only samples from). For inferring disentangled factors, this should be factorizable along the dimensions, i.e., DISPLAYFORM1 This can be achieved by minimizing a suitable distance between the inferred prior q φ (z) and the disentangled generative prior p(z). We can also define expected posterior as DISPLAYFORM2 If we take KL-divergence as our choice of distance, by relying on its pairwise convexity (i.e., KL( BID56, we can show that the distance between q φ (z) and p θ (z) is bounded by the objective of the variational inference: DISPLAYFORM3 DISPLAYFORM4 In general, the prior p(z) and expected posterior p θ (z) will be different, although they may be close (they will be same when p θ (x) = p θ (x|z)p(z)dz is equal to p(x)). Hence, variational posterior inference of latent variables with disentangled prior naturally encourages inferring factors that are close to being disentangled. We think this is the reason that the original VAE (Eq. ) has also been observed to exhibit some disentangling behavior on simple datasets such as MNIST BID34. However, this behavior does not carry over to more complex datasets BID4 BID39, unless extra supervision on the generative factors is provided BID35 BID33. This can be due to: (i) p(x) and p θ (x) being far apart which in turn causes p(z) and p θ (z) being far apart, and (ii) the non-convexity of the ELBO objective which prevents us from achieving the global minimum of E x KL(q φ (z|x) p θ (z|x)) (which is 0 and implies KL(q φ (z) p θ (z)) = 0). In other words, maximizing the ELBO (Eq.) might also in reducing the value of KL(q φ (z) p(z)), however, due to the aforementioned reasons, the gap between KL(q φ (z) p(z)) and E x KL(q φ (z|x) p θ (z|x)) could be large at the stationary point of convergence. Hence, minimizing KL(q φ (z) p(z)) or any other suitable distance D(q φ (z), p(z)) explicitly will give us better control on the disentanglement. 
This motivates us to add D(q φ (z) p(z)) as part of the objective to encourage disentanglement during inference, i.e., DISPLAYFORM5 where λ controls its contribution to the overall objective. We refer to this as DIP-VAE (for Disentangled Inferred Prior) subsequently. Optimizing FORMULA7 directly is not tractable if D(·, ·) is taken to be the KL-divergence KL(q φ (z) p(z)), which does not have a closed-form expression. One possibility is use the variational formulation of the KL-divergence BID45 BID46 ) that needs only samples from q φ (z) and p(z) to estimate a lower bound to KL(q φ (z) p(z)). However, this would involve optimizing for a third set of parameters ψ for the KL-divergence estimator, and would also change the optimization to a saddle-point (min-max) problem which has its own optimization challenges (e.g., gradient vanishing as encountered in training generative adversarial networks with KL or Jensen-Shannon (JS) divergences BID24 BID2 ). Taking D to be another suitable distance between q φ (z) and p(z) (e.g., integral probability metrics like Wasserstein distance BID54) might alleviate some of these issues but will still involve complicating the optimization to a saddle point problem in three set of parameters 3. It should also be noted that using these variational forms of the distances will still leave us with an approximation to the actual distance. We adopt a simpler yet effective alternative of matching the moments of the two distributions. Matching the covariance of the two distributions will amount to decorrelating the dimensions of DISPLAYFORM6 By the law of total covariance, the covariance of z ∼ q φ (z) is given by DISPLAYFORM7 where E q φ (z|x) [z] and Cov q φ (z|x) [z] are random variables that are functions of the random variable x (z is marginalized over). Most existing work on the VAE models uses q φ (z|x) having the form DISPLAYFORM8, where µ φ (x) and Σ φ (x) are the outputs of a deep neural net parameterized by φ. In this case Eq. reduces to Cov DISPLAYFORM9, which we want to be close to the Identity matrix. For simplicity, we choose entry-wise squared 2 -norm as the measure of proximity. Further, Σ φ (x) is commonly taken to be a diagonal matrix which means that cross-correlations (off-diagonals) between the latents are due to only DISPLAYFORM10. This suggests two possible options for the disentangling regularizer: DISPLAYFORM11 which we refer as DIP-VAE-II. Penalizing just the off-diagonals in both cases will lead to lowering the diagonal entries of Cov p(x) [µ φ (x)] as the ij'th off-diagonal is really a derived attribute obtained by multiplying the square-roots of i'th and j'th diagonals (for each example x ∼ p(x), followed by averaging over all examples). This can be compensated in DIP-VAE-I by a regularizer on the diagonal entries of Cov p(x) [µ φ (x)] which pulls these towards 1. We opt for two separate hyperparameters controlling the relative importance of the loss on the diagonal and off-diagonal entries as follows: DISPLAYFORM12 The regularization terms involving Cov p(x) [µ φ (x)] in the above objective can be efficiently optimized using SGD, where Cov p(x) [µ φ (x)] can be estimated using the current minibatch 4.For DIP-VAE-II, we have the following optimization problem: DISPLAYFORM13 As discussed earlier, the term DISPLAYFORM14 Penalizing the off-diagonals of Cov p(x) [µ φ (x)] in the Objective will contribute to reduction in the magnitude of its diagonals as discussed earlier. 
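As a concrete illustration of the DIP-VAE-I regularizer described above, the following PyTorch sketch estimates Cov_p(x)[µ_φ(x)] from the current minibatch and penalizes its off-diagonal entries towards 0 and its diagonal entries towards 1. The default hyperparameter values, variable names, and the way the penalty is combined with the negative ELBO are illustrative assumptions; the DIP-VAE-II variant would apply an analogous penalty to the full covariance of q_φ(z) rather than to the covariance of the means alone.

```python
import torch

def dip_vae_i_regularizer(mu, lambda_od=10.0, lambda_d=100.0):
    """mu: (batch, d) posterior means mu_phi(x) for the current minibatch.
    Pushes the sample covariance of mu_phi(x) towards the identity matrix."""
    mu_centered = mu - mu.mean(dim=0, keepdim=True)
    cov = mu_centered.t() @ mu_centered / (mu.shape[0] - 1)   # (d, d) sample covariance
    diag = torch.diagonal(cov)
    off_diag = cov - torch.diag(diag)
    return lambda_od * (off_diag ** 2).sum() + lambda_d * ((diag - 1.0) ** 2).sum()

# Illustrative use for one minibatch:
#   loss = -elbo_estimate.mean() + dip_vae_i_regularizer(mu)
#   loss.backward(); optimizer.step()
```

Because the covariance is estimated from minibatch statistics, the extra cost over a standard VAE update is a single d × d matrix product per batch.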
As the regularizer on the diagonals is not directly on DISPLAYFORM15, unlike DIP-VAE-I, it will be not be able to keep DISPLAYFORM16 ii such that their sum remains close to 1. In datasets where the number of generative factors is less than the latent dimension, DIP-VAE-II is more suitable than DIP-VAE-I as keeping all dimensions active might in splitting of an attribute across multiple dimensions, hurting the goal of disentanglement. It is also possible to match higher order central moments of q φ (z) and the prior p(z). In particular, third order central moments (and moments) of the zero mean Gaussian prior are zero, hence 2 norm of third order central moments of q φ (z) can be penalized. Recently proposed β-VAE proposes to modify the ELBO by upweighting the KL(q φ (z|x) p(z)) term in order to encourage the inference of disentangled factors: DISPLAYFORM0 where β is taken to be great than 1. Higher β is argued to encourage disentanglement at the cost of reconstruction error (the likelihood term in the ELBO). Authors report empirical with β ranging from 4 to 250 depending on the dataset. As already mentioned, most VAE models proposed in the literature, including β-VAE, work with N (0, I) as the prior p(z) and N (µ φ (x), Σ φ (x)) with diagonal Σ φ (x) as the approximate posterior q φ (z|x). This reduces the objective to DISPLAYFORM1 For high values of β, β-VAE would try to pull µ φ (x) towards zero and Σ φ (x) towards the identity matrix (as the minimum of x − ln x for x > 0 is at x = 1), thus making the approximate posterior q φ (z|x) insensitive to the observations. This is also reflected in the quality of the reconstructed samples which is worse than VAE (β = 1), particularly for high values of β. Our proposed method does not have such increased tension between the likelihood term and the disentanglement objective, and the sample quality with our method is on par with the VAE.Finally, we note that both β-VAE and our proposed method encourage disentanglement of inferred factors by pulling Cov q φ (z) (z) in Eq. towards the identity matrix: β-VAE attempts to do it by making Cov q φ (z|x) (z) close to I and E q φ (z|x) (z) close to 0 individually for all observations x, while the proposed method directly works on Cov q φ (z) (z) (marginalizing over the observations x) which retains the sensitivity of q φ (z|x) to the conditioned-upon observation.3 QUANTIFYING DISENTANGLEMENT: SAP SCORE propose a metric to evaluate the disentanglement performance of the inference mechanism, assuming that the ground truth generative factors are available. It works by first sampling a generative factor y, followed by sampling L pairs of examples such that for each pair, the sampled generative factor takes the same value. Given the inferred z x:= µ φ (x) for each example x, they compute the absolute difference of these vectors for each pair, followed by averaging these difference vectors. This average difference vector is assigned the label of y. By sampling n such minibatches of L pairs, we get n such averaged difference vectors for the factor y. This process is repeated for all generative factors. A low capacity multiclass classifier is then trained on these vectors to predict the identities of the corresponding generative factors. Accuracy of this classifier on the difference vectors for test set is taken to be a measure of disentanglement. We evaluate the proposed method on this metric and refer to this as Z-diff score subsequently. 
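The Z-diff metric described above can be sketched as follows. This is an illustrative implementation under stated assumptions: `sample_fixed_factor_pair` and `encode` are hypothetical helpers (the first returns pairs of observations sharing the value of a chosen generative factor, the second returns µ_φ(x)), and a logistic-regression classifier stands in for the low-capacity multiclass classifier; the paper's own experiments use a linear SVM.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def z_diff_score(sample_fixed_factor_pair, encode, num_factors,
                 n_votes=800, L=64, test_fraction=0.25):
    """Z-diff disentanglement metric: classify which factor was held fixed
    from the averaged absolute difference of inferred latents."""
    X, labels = [], []
    for _ in range(n_votes):
        y = np.random.randint(num_factors)
        x1, x2 = sample_fixed_factor_pair(y, L)          # two (L, x_dim) arrays
        z_diff = np.abs(encode(x1) - encode(x2)).mean(axis=0)
        X.append(z_diff)
        labels.append(y)
    X, labels = np.stack(X), np.array(labels)
    n_test = int(test_fraction * n_votes)
    clf = LogisticRegression(max_iter=1000).fit(X[n_test:], labels[n_test:])
    return clf.score(X[:n_test], labels[:n_test])        # accuracy = Z-diff score
```

The dependence on the choice of classifier visible in this sketch is one of the shortcomings of the Z-diff score discussed next.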
We observe in our experiments that the Z-diff score does not correlate well with the qualitative disentanglement at the decoder's output as seen in the latent traversal plots (obtained by varying one latent while keeping the other latents fixed). It also depends on the multiclass classifier used to obtain the score. We propose a new metric, referred to as the Separated Attribute Predictability (SAP) score, that is better aligned with the qualitative disentanglement observed in latent traversals and does not involve training any classifier. It is computed as follows. (i) We first construct a d × k score matrix S (for d latents and k generative factors) whose ij'th entry is the linear regression or classification score (depending on the type of generative factor) of predicting the j'th factor using only the i'th latent [µ_φ(x)]_i. For regression, we take this to be the R² score obtained by fitting a line (slope and intercept) that minimizes the linear regression error on the test examples; for this simple linear fit, S_ij = Cov(z_i, y_j)² / (Var(z_i) Var(y_j)), which ranges from 0 to 1, with a score of 1 indicating that a linear function of the i'th inferred latent explains all variability in the j'th generative factor. For classification, we fit one or more thresholds (real numbers) directly on the i'th inferred latent for the test examples so as to minimize the balanced classification error, and take S_ij to be the balanced classification accuracy for the j'th generative factor. For inactive latent dimensions (those with [Cov_{q_φ(z)}[z]]_ii close to 0), we take S_ij to be 0. (ii) For each column of the score matrix S, which corresponds to a generative factor, we take the difference between the top two entries (corresponding to the two most predictive latent dimensions), and then take the mean of these differences as the final SAP score. Considering only the top-scoring latent dimension for each generative factor is not enough, as it does not rule out the possibility of the factor also being captured by other latents. A high SAP score indicates that each generative factor is primarily captured in only one latent dimension. Note that a high SAP score does not rule out one latent dimension capturing two or more generative factors well; however, in many cases this is due to the generative factors themselves being correlated with each other, which can be verified empirically using ground-truth values of the generative factors (when available). Further, a low SAP score does not rule out good disentanglement in cases where two (or more) latent dimensions are strongly correlated with the same generative factor and poorly with the others. The examples generated by single latent traversals may not be realistic for such models, and DIP-VAE discourages this from happening by enforcing decorrelation of the latents. However, the SAP score computation can be adapted to such cases by grouping the latent dimensions based on their correlations and computing the score matrix at the group level, which can then be fed to the second step to obtain the final SAP score; a sketch of the basic computation for continuous factors is given below. Table 1: Z-diff score, the proposed SAP score, and reconstruction error (per pixel) on the test sets for 2D Shapes and CelebA (β1 = 4, β2 = 60, λ = 10, λ1 = 5, λ2 = 500 for 2D Shapes; β1 = 4, β2 = 32, λ = 2, λ1 = 1, λ2 = 80 for CelebA). For results on a wider range of hyperparameter values, refer to Fig. 1.
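The following NumPy sketch illustrates the SAP computation described above for continuous generative factors, using the fact that the R² of a one-dimensional linear fit equals the squared correlation. Classification-type factors, the inactive-latent rule, and latent grouping are omitted for brevity; this is an illustrative sketch rather than the authors' exact code.

```python
import numpy as np

def sap_score(latents, factors):
    """latents: (N, d) inferred mu_phi(x); factors: (N, k) ground-truth factors.
    Returns the mean, over factors, of the gap between the two best R^2 scores."""
    n, d = latents.shape
    k = factors.shape[1]
    score = np.zeros((d, k))
    for i in range(d):
        for j in range(k):
            z, y = latents[:, i], factors[:, j]
            cov = np.cov(z, y)                       # 2x2 covariance matrix
            if cov[0, 0] > 1e-12:
                # R^2 of the best 1-d linear fit = squared correlation
                score[i, j] = cov[0, 1] ** 2 / (cov[0, 0] * cov[1, 1])
    top2 = np.sort(score, axis=0)[-2:, :]            # two most predictive latents per factor
    return float(np.mean(top2[1] - top2[0]))
```

A large gap for every factor means each factor is predictable from essentially one latent dimension, which is exactly the behaviour rewarded by the metric.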
We evaluate our proposed method, DIP-VAE, on three datasets. (i) CelebA BID39: it consists of 202,599 RGB face images of celebrities; we use the 64 × 64 × 3 cropped images used in several earlier works, with 90% for training and 10% for test. (ii) 3D Chairs BID4: it consists of 1393 chair CAD models, each rendered from 31 azimuth angles and 2 elevation angles; following earlier work BID58 BID18 that ignores near-duplicates, we use a subset of 809 chair models and use the binary masks of the chairs as the observed data, with the first 80% of the models used for training and the rest for test. (iii) 2D Shapes: a synthetic dataset of binary 2D shapes generated from the Cartesian product of shape (heart, oval, and square), x-position (32 values), y-position (32 values), scale (6 values), and rotation (40 values). We consider two baselines for the task of unsupervised inference of disentangled factors: (i) the VAE BID34 BID49, and (ii) the recently proposed β-VAE. To be consistent with earlier evaluations, we use the same CNN architectures (for our encoder and decoder) and the same latent dimensions as used previously for the CelebA, 3D Chairs, and 2D Shapes datasets. Hyperparameters. For the proposed DIP-VAE-I, in all our experiments we vary λ_od in the set {1, 2, 5, 10, 20, 50, 100, 500} while fixing λ_d = 10 λ_od for 2D Shapes and 3D Chairs, and λ_d = 50 λ_od for CelebA. For DIP-VAE-II, we fix λ_od = λ_d for 2D Shapes and λ_od = 2 λ_d for CelebA. Additionally, for DIP-VAE-II we also penalize the ℓ2-norm of the third-order central moments of q_φ(z), with hyperparameter λ_3 = 200 for 2D Shapes (λ_3 = 0 for CelebA). For β-VAE, we experiment with β ∈ {1, 2, 4, 8, 16, 25, 32, 64, 100, 128, 200, 256} (where β = 1 corresponds to the VAE). We use a batch size of 400 for all 2D Shapes experiments and 100 for all CelebA experiments. For both CelebA and 2D Shapes, we report results in terms of the Z-diff score, the proposed SAP score, and the reconstruction error. Figure 1: The proposed Separated Attribute Predictability (SAP) score and the Z-diff disentanglement score as a function of average reconstruction error (per pixel) on the test set of 2D Shapes data for β-VAE and the proposed DIP-VAE; the plots are generated by varying β for β-VAE and λ_od for DIP-VAE-I and DIP-VAE-II (the number next to each point is the value of the corresponding hyperparameter). Figure 2: The proposed SAP score and the Z-diff score as a function of average reconstruction error (per pixel) on the test set of CelebA data for β-VAE and the proposed DIP-VAE; the plots are generated by varying β for β-VAE and λ_od for DIP-VAE-I and DIP-VAE-II (the number next to each point is the value of the corresponding hyperparameter). (TAB1 caption fragment: the attribute vector is computed for every attribute k using the training set, and a bias is learned by minimizing the hinge loss; accuracy on the other attributes stays about the same across all methods.) For the 3D Chairs data, only two ground-truth generative factors are available and the quantitative scores for these saturate near their peak values, hence we show only the latent traversal plots, which we assess through a subjective evaluation of reconstruction quality and disentanglement (shown in the Appendix). Disentanglement scores and reconstruction error. For the Z-diff score, in all our experiments we use a one-vs-rest linear SVM with the weight on the hinge loss C set to 0.01 and the weight on the regularizer set to 1. Table 1 shows the Z-diff scores and the proposed SAP scores, along with the reconstruction error (which directly corresponds to the data likelihood), for the test sets of CelebA and 2D Shapes.
Further, we also show how the Z-diff score and the proposed SAP score change with the reconstruction error as we vary the hyperparameter of each method (β and λ_od, respectively) in Fig. 1 (for the 2D Shapes data) and Fig. 2 (for the CelebA data). The proposed DIP-VAE-I gives a much higher Z-diff score at little to no cost in reconstruction error when compared with the VAE (β = 1) and β-VAE, for both the 2D Shapes and CelebA datasets. However, we observe in the decoder's output for single latent traversals (varying a single latent while keeping the others fixed, shown in FIG0 and FIG1) that a high Z-diff score is not necessarily a good indicator of disentanglement. Indeed, for the 2D Shapes data, DIP-VAE-I has a higher Z-diff score (98.7) and an almost order-of-magnitude lower reconstruction error than β-VAE with β = 60; however, comparing the latent traversals of β-VAE in FIG0 and DIP-VAE-I in FIG1 indicates better disentanglement for β-VAE with β = 60 (though at the cost of much worse reconstruction, where every generated sample looks like a hazy blob). On the other hand, we find the proposed SAP score to correlate well with the qualitative disentanglement seen in the latent traversal plots. This is reflected in the higher SAP score of β-VAE with β = 60 compared to DIP-VAE-I. We also observe that for the 2D Shapes data, DIP-VAE-II gives a much better trade-off between disentanglement (measured by the SAP score) and reconstruction error than both DIP-VAE-I and β-VAE, as shown quantitatively in Fig. 1 and qualitatively in the latent traversal plots in FIG0. The reason is that DIP-VAE-I enforces [Cov_{p(x)}[µ_φ(x)]]_ii to be close to 1, and this may affect disentanglement adversely by splitting a generative factor across multiple latents for 2D Shapes, where the generative factors are far fewer than the latent dimensions. For real datasets having many factors with complex generative processes, such as CelebA, DIP-VAE-I is expected to work well, which can be seen in Fig. 2 where DIP-VAE-I yields a much lower reconstruction error with a higher SAP score (as well as higher Z-diff scores). Binary attribute classification for CelebA. We also experiment with predicting the binary attribute values for each test example in CelebA from the inferred µ_φ(x). For each attribute k, we compute an attribute vector from the training set and project µ_φ(x) along this vector. A bias is learned on these scalars (by minimizing the hinge loss), which is then used for classifying the test examples. TAB1 shows the results for the attributes which show the largest change across the various methods (most other attribute accuracies do not change). The proposed DIP-VAE outperforms both VAE and β-VAE for most attributes. The performance of β-VAE gets worse as β is increased further.
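Below is a hedged sketch of this attribute probe. The paper projects µ_φ(x) onto a per-attribute direction and fits only a bias with the hinge loss; the choice of the direction as the difference of class means is an assumption made here for concreteness, since the exact construction of the attribute vector is not reproduced in this text.

import numpy as np

def attribute_probe(mu_train, y_train, mu_test, y_test, lr=0.1, steps=500):
    """mu_*: (n, d) latent means; y_*: (n,) binary attribute labels in {0, 1}."""
    # assumed attribute vector: difference of class means in latent space
    w = mu_train[y_train == 1].mean(0) - mu_train[y_train == 0].mean(0)
    s_train = mu_train @ w
    sign = 2 * y_train - 1                       # labels mapped to {-1, +1}
    b = 0.0
    for _ in range(steps):                       # hinge loss minimized over the bias only
        margin = sign * (s_train + b)
        grad_b = -(sign * (margin < 1)).mean()
        b -= lr * grad_b
    pred = ((mu_test @ w + b) > 0).astype(int)
    return (pred == y_test).mean()               # test accuracy of the linear probe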
The adversarial autoencoder BID40 also matches q_φ(z) (referred to as the aggregated posterior in that work) to the prior p(z). However, the adversarial autoencoder does not have the goal of minimizing KL(q_φ(z|x)||p_θ(z|x)), which is the primary goal of variational inference. Instead, it minimizes a reconstruction error together with D(q_φ(z), p(z)), where D is the distance induced by a discriminator that tries to classify z ∼ q_φ(z) from z ∼ p(z) by optimizing a cross-entropy loss (which induces the JS-divergence as D). This can be contrasted with our objective. Invariance and Equivariance. Disentanglement is closely connected to invariance and equivariance of representations. If R: x → z is a function that maps observations to feature representations, equivariance (with respect to T) means that a primitive transformation T of the input results in a corresponding transformation T' of the feature, i.e., R(T(x)) = T'(R(x)). Disentanglement requires that T' acts only on a small subset of the dimensions of R(x) (a sparse action). In this sense, equivariance is a more general notion encompassing disentanglement as a special case; however, this special case carries additional benefits of interpretability, ease of transferability, etc. Invariance is also a special case of equivariance, which requires T' to be the identity for R to be invariant to the action of T on the input observations. However, invariance can be obtained more easily from disentangled representations than from general equivariant representations, by simply marginalizing out the appropriate subset of dimensions. There exists a lot of prior work in the literature on equivariant and invariant feature learning, mostly in the supervised setting, which assumes knowledge about the nature of the input transformations (e.g., rotations, translations, scaling for images) BID52 BID8 BID0 BID50 BID12 BID16 BID27 BID44 BID47. We proposed a principled variational framework to infer disentangled latents from unlabeled observations. Unlike β-VAE, our variational objective does not introduce any conflict between the data log-likelihood and the disentanglement of the inferred latents, which is reflected in the empirical results. We also proposed the SAP disentanglement metric, which is much better correlated with the qualitative disentanglement seen in the latent traversals than the Z-diff score. An interesting direction for future work is to take into account sampling biases in the generative process, both natural (e.g., sampling the female gender makes it unlikely to sample a beard in CelebA face images) as well as artificial (e.g., a collection of face images that contains many more smiling faces for males than for females, misleading us to believe p(gender, smile) = p(gender)p(smile)), which makes the problem challenging and also somewhat less well defined (at least in the case of natural biases). Effective use of disentangled representations for transfer learning is another interesting direction for future work.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1kG7GZAW
We propose a variational inference based approach for encouraging the inference of disentangled latents. We also propose a new metric for quantifying disentanglement.
In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution. Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents. Moreover, the environmental stochasticity and uncertainties increase exponentially with the number of agents. Previous works incorporate various multiagent coordination mechanisms into deep learning architectures to facilitate multiagent coordination. However, none of them explicitly consider action semantics between agents, i.e., that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show that ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures. Deep reinforcement learning (DRL) has achieved a lot of success at finding optimal policies for complex single-agent tasks. However, many challenges remain in multiagent systems (MASs), since agents' behaviors are influenced by each other and the environment exhibits more stochasticity and uncertainty. Recently, a number of deep multiagent reinforcement learning (MARL) approaches have been proposed to address complex multiagent problems, e.g., the coordination of robot swarm systems and autonomous cars. One major class of works incorporates various multiagent coordination mechanisms into deep multiagent learning architectures. A centralized actor-critic architecture has been proposed to address partial observability in MASs, which also incorporates the idea of the joint action learner (JAL) to facilitate multiagent coordination. Counterfactual Multi-Agent Policy Gradients (COMA), motivated by the difference-reward mechanism, address the challenge of multiagent credit assignment. Mean-field theory has been applied to solve large-scale multiagent learning problems. More recently, the idea of leniency was extended to deep MARL together with a retroactive temperature decay schedule to address stochastic-reward problems. However, all these works ignore the natural property of action influence between agents, which we aim to exploit to facilitate multiagent coordination. Another class of works focuses on specific network structure designs to address multiagent learning problems. The value-decomposition network (VDN) learns an optimal linear value decomposition from the team reward signal, based on the assumption that the joint action-value function of the system can be additively decomposed into value functions across agents. QMIX relaxes the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, employing a network that estimates joint action-values as a complex non-linear combination of per-agent values. Recently, relational deep RL has been proposed to learn relations between environmental entities.
However, that work considers entity relations at the pixel level of raw visual data, which ignores the natural property of the influence of actions between agents. A novel network architecture called the Relational Forward Model (RFM) was proposed for predictive modeling in multiagent learning. RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode. However, RFM does not consider the problem from the perspective of the influence of each action on other agents. OpenAI designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota 2. They used a scaled-up version of PPO and adopted an attention mechanism to compute the weight of choosing the target unit, with part of the information selected from all available information as input. However, this selection does not take into account the influence of each action on other agents. There are also a number of works designing network structures for multiagent communication. However, none of the above works explicitly leverage the fact that an agent's different actions may have different impacts on other agents, which is a natural property of MASs and should be considered in the decision-making process. In multiagent settings, each agent's action set can be naturally divided into two types: one type containing actions that affect environmental information or the agent's private properties, and the other type containing actions that directly influence other agents (i.e., their private properties). Intuitively, the values of actions of different types should be estimated separately, by explicitly considering different information. We refer to the property that different actions may have different impacts on other agents as action semantics. We can leverage this action-semantics information to improve an agent's policy/Q-network design toward more efficient multiagent learning. To this end, we propose a novel network architecture, named Action Semantics Network (ASN), to characterize such action semantics for more efficient multiagent coordination. The main contributions of this paper can be summarized as follows: 1) to the best of our knowledge, we are the first to explicitly consider action semantics and design a novel network to extract it to facilitate learning in MASs; 2) ASN can be easily combined with existing DRL algorithms to boost their learning performance; 3) experimental results* on StarCraft II micromanagement and Neural MMO show that our ASN leads to better performance compared with state-of-the-art approaches in terms of both convergence speed and final performance. Stochastic games (SGs) are a natural multiagent extension of Markov decision processes (MDPs), modeling the dynamic interactions among multiple agents. Considering the fact that agents may not have access to complete environmental information, we follow previous work's settings and model multiagent learning problems as partially observable stochastic games (POSGs). A partially observable stochastic game (POSG) is defined as a tuple ⟨N, S, {A^i}, T, {R^i}, {O^i}⟩, where N is the set of agents; S is the set of states; A^i is the set of actions available to agent i (the joint action space is A = A^1 × A^2 × ... × A^n); T is the transition function that defines the transition probabilities between global states, S × A × S → [0, 1]; R^i is the reward function for agent i, S × A → R; and O^i is the set of observations for agent i.
Note that a state s ∈ S describes the environmental information and the possible configurations of all agents, while each agent i draws a private observation o^i correlated with the state, S → O^i; e.g., an agent's observation includes the agent's private information and the relative distance between itself and other agents. Formally, an observation of agent i at step t can be constructed as o^i_t = (o^{i,env}_t, m^i_t, o^{i,1}_t, ..., o^{i,i-1}_t, o^{i,i+1}_t, ..., o^{i,n}_t), where o^{i,env}_t is the observed environmental information, m^i_t is the private property of agent i (e.g., in robotics, m^i_t includes agent i's location, battery power and the health status of each component), and the rest are the observations of agent i on the other agents (e.g., in robotics, o^{i,i-1}_t includes the relative location and the exterior of agent i-1 as observed by agent i). A policy π^i: O^i × A^i → [0, 1] specifies the probability distribution over the action space of agent i. The goal of agent i is to learn a policy π^i that maximizes the expected return E[ Σ_t γ^t r^i_t ] with a discount factor γ.

3 THE ACTION SEMANTICS NETWORK ARCHITECTURE

In MASs, multiple agents interact with the environment simultaneously, which increases the environmental stochasticity and uncertainty, making it difficult to learn a consistent globally optimal policy for each agent. A number of deep multiagent reinforcement learning (MARL) approaches have been proposed to address such complex problems in MASs, by either incorporating various multiagent coordination mechanisms into deep multiagent learning architectures or designing specialized network structures to facilitate multiagent learning. However, none of them explicitly consider extracting action semantics, which we believe is a critical factor that can be leveraged to facilitate coordination in multiagent settings. Specifically, each agent's action set can be naturally classified into two types: one type containing actions that directly affect environmental information or the agent's private properties, and the other type containing actions that directly influence other agents. Therefore, if an agent's action directly influences one of the other agents, the value of performing this action should depend explicitly on the agent's observation of the environment and on the information of the agent influenced by this action, while any additional information (e.g., the part of the agent's observation concerning unrelated agents) is irrelevant and may add noise. We refer to the property that different actions may have different impacts on other agents as action semantics. However, previous works usually use all available information for estimating the values of all actions, which can be quite inefficient. To this end, we propose a new network architecture called Action Semantics Network (ASN) that explicitly considers the action semantics between agents to improve the estimation accuracy over different actions. Instead of feeding an agent's entire observation into one network, ASN consists of several sub-modules that take different parts of the agent's observation as input according to the semantics of actions. In this way, ASN can effectively avoid the negative influence of irrelevant information, and thus provide a more accurate estimate of the value of performing each action. Besides, ASN is general and can be incorporated into existing deep MARL frameworks to improve the performance of existing DRL algorithms. In the next section, we describe the ASN structure in detail.
Considering the semantic differences between actions, we classify the action set A^i of agent i into two subsets: A^i_in and A^i_out. A^i_in contains actions that affect the environmental information or the agent's private properties and do not influence other agents directly; e.g., moving to different destinations only affects the agent's own location information. A^i_out corresponds to those actions that directly influence some other agents, e.g., attacking agent j in competitive settings or communicating with agent j in cooperative settings. Following this classification, the proposed network architecture, ASN, explicitly considers the different influences of an agent's actions on other agents by dividing the network into different sub-modules, each of which takes a different part of the agent's observation as input according to the semantics of actions (shown in Figure 1). Considering an agent i and n - 1 agents in its neighborhood, ASN decouples agent i's network into n sub-modules as follows. The first one (left side of Figure 1, O2A^i) contains a network O2E^i, which generates the observation embedding e^i given the full observation o^i_t of agent i as input, and a network E2A^i (embedding-to-action), which outputs the values of all actions in A^i_in. The remaining n - 1 sub-modules (O2A^{i,j}, j ∈ N, j ≠ i) are used to estimate the values of those actions in A^i_out related to each influenced agent; they are composed of n - 1 networks (O2E^{i,j}, j ∈ N, j ≠ i) which are responsible for determining the observation embeddings related to each influenced agent, denoted e^{i,j}. Each of the n - 1 sub-modules O2A^{i,j} takes only the part of agent i's observation related to one neighboring agent j, o^{i,j}_t, as input.

Figure 1: The ASN of agent i contains n sub-modules, O2A^i and O2A^{i,j} for each neighboring agent j, each of which takes a different part of the agent's observation as input.

For value-based RL methods, at each step t, the value of executing each action a^{i,j}_t ∈ A^i_out is obtained by combining the two embeddings e^i and e^{i,j} (Equation 2), while the values of actions in A^i_in are produced directly by E2A^i. Next, we describe how ASN can be incorporated into existing deep MARL approaches, which can be classified into two paradigms: Independent Learner (IL) and Joint Action Learner (JAL). IL applies a single-agent learning algorithm to a multiagent domain and treats other agents as part of the environment. In contrast, JALs observe the actions of other agents and optimize the policy over joint actions. Following these two paradigms, we propose two classes of ASN-based MARL: ASN-IL and ASN-JAL. For ASN-IL, we focus on the case of combining ASN with PPO, a popular single-agent policy-based RL algorithm; the way ASN combines with other single-agent RL algorithms is similar. In contrast, ASN-JAL describes existing deep MARL approaches combined with ASN, e.g., QMIX and VDN. ASN-PPO: for each agent i equipped with a policy network parameterized by θ^i, ASN-PPO replaces the vanilla policy network architecture with ASN and optimizes the policy following PPO. Generally, IL ignores the existence of other agents; thus, for each agent i, the expected return J(θ^i) is optimized using the policy gradient theorem, ∇_{θ^i} J(θ^i) = E_t[ ∇_{θ^i} log π_{θ^i}(a^i_t | o^i_t) A_t ], where A_t is the advantage function at timestep t. PPO uses constraints and advantage estimation to reformulate the optimization problem as maximizing the clipped surrogate objective E_t[ min( r_t(θ^i) A_t, clip(r_t(θ^i), 1 - ε, 1 + ε) A_t ) ], where r_t(θ^i) = π_{θ^i}(a^i_t | o^i_t) / π_{θ^i_old}(a^i_t | o^i_t) is the probability ratio and θ^i_old are the policy parameters before the update. In ASN-PPO, r_t(θ^i) can then be rewritten by substituting Equation 2, so that the action probabilities are computed from the sub-module outputs. Lastly, ASN-PPO maximizes this objective (Equation 3) following PPO during each iteration.
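The following is a minimal sketch of the value-based ASN forward pass described above. Since the paper's Equation 2 is not reproduced in this text, the pairwise combination of e^i and e^{i,j} is assumed here to be an inner product, and the layer sizes and parameter layout are illustrative only.

import numpy as np

def mlp(x, w1, b1, w2, b2):
    h = np.maximum(0, x @ w1 + b1)               # one hidden ReLU layer
    return h @ w2 + b2

def asn_q_values(o_full, o_neighbors, params):
    """o_full: full observation o^i_t; o_neighbors: list of slices o^{i,j}_t."""
    e_i = mlp(o_full, *params['O2E'])            # observation embedding e^i
    q_in = mlp(e_i, *params['E2A'])              # Q-values of actions in A^i_in
    q_out = []
    for j, o_ij in enumerate(o_neighbors):
        e_ij = mlp(o_ij, *params['O2E_out'][j])  # per-neighbour embedding e^{i,j}
        q_out.append(float(e_i @ e_ij))          # assumed pairwise interaction of embeddings
    return np.concatenate([q_in, np.array(q_out)])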
ASN-QMIX: the way ASN combines with deep MARL algorithms is similar across methods, and we use QMIX as an example. Figure 2 illustrates the ASN-QMIX network structure, where for each agent i, ASN-QMIX replaces the vanilla Q-network architecture with ASN. At each step t, the individual Q-value Q^i(o^i_t, a^i_t) is first calculated following Section 3.2 and then input into the mixing network. The mixing network mixes the outputs of all agents' networks monotonically and produces the joint action-value function Q_tot(s_t, a_t). The weights of the mixing network are restricted to be non-negative and are produced by separate hypernetworks, each of which takes the state s_t as input and generates the weights of one layer of the mixing network. Finally, ASN-QMIX is trained to minimize the loss L(θ) = Σ_{b=1}^{B} (y^tot_b - Q_tot(s, a; θ))², where B is the batch size of transitions, y^tot_t = r_t + γ max_{a'} Q_tot(s', a'; θ⁻), and θ⁻ are the parameters of the target network as in DQN. Multi-action ASN: the general case in MASs is that an agent may have multiple actions which directly influence another agent; e.g., a router can send packages of different sizes to one of its neighbors, or a soldier can select different weapons to attack enemies and cause different damage. To address this, we extend the basic ASN to a generalized version, named multi-action ASN (shown in Figure 3(a)), that takes o^{i,j} as input and produces a number of embeddings e^{i,j_1}, ..., e^{i,j_m}, where m is the number of actions that directly influence agent j. After that, multi-action ASN calculates the value of performing each action using a pairwise interaction function M that combines the two embeddings e^{i,j_k} (k ∈ [1, m]) and e^i, following Equation 2. Parameter sharing between sub-modules: the parameter-sharing (PS) mechanism is widely used in MARL. If agents are homogeneous, their policy networks can be trained more efficiently using PS, which greatly reduces the training complexity. Recent work also applies PS to heterogeneous agents by adding extra information to identify the agent type. Following previous work, here we incorporate PS to enable parameter sharing between different sub-modules of ASN. The basic ASN (Figure 1) for agent i contains a number of sub-modules O2A^{i,j}, each of which takes o^{i,j} as input. In this way, if an action a^{i,j}_t ∈ A^i_out has a direct impact on any other agent j, the number of sub-modules equals the number of other agents. Training the basic ASN is therefore inefficient, since the number of sub-modules grows with the number of agents. If the other agents that agent i can directly influence are homogeneous, the sub-module parameters can be shared across those agents. Thus, in a homogeneous MAS, all influenced agents can share one sub-module (shown in Figure 3(b)); in a MAS that contains several types of agents, each type of agent can share one sub-module (mixed ASN, Figure 3(c)). Note that the basic ASN can be seen as the simplest case, which designs a sub-module for each influenced agent without PS.
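Below is a sketch of the ASN-QMIX TD target and squared loss described above. The monotonic mixing network and its hypernetworks are abstracted as callables, and the per-agent ASN Q-functions as callables returning a vector of action values; this is a simplification of the training loop, not a full QMIX implementation.

import numpy as np

def qmix_loss(batch, per_agent_q, mixer, target_per_agent_q, target_mixer, gamma=0.99):
    """batch: dict with lists of states s, joint actions a, rewards r, next states s2."""
    losses = []
    for s, a, r, s2 in zip(batch['s'], batch['a'], batch['r'], batch['s2']):
        q_taken = [q_i(s)[a_i] for q_i, a_i in zip(per_agent_q, a)]   # ASN Q of chosen actions
        q_tot = mixer(q_taken, s)                                     # monotonic mixing network
        q_next = [q_i(s2).max() for q_i in target_per_agent_q]        # per-agent greedy values
        y_tot = r + gamma * target_mixer(q_next, s2)                  # TD target with target nets
        losses.append((y_tot - q_tot) ** 2)
    return float(np.mean(losses))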
We evaluate the performance of ASN compared with different network structures, including the vanilla network (i.e., all information is aggregated and fed into one single network), the dueling network, the attention network, which is expected to learn automatically which information should be focused on (i.e., it adds an additional hidden layer to compute weights over the input and then feeds an element-wise product into the next layer), and the entity-attention network (i.e., instead of computing an attention weight for each dimension of the input, a weight is computed for each entity/agent), under various DRL approaches. Other network architectures mentioned before are not comparable here since they are orthogonal to our ASN. Our test domains include StarCraft II micromanagement and a Massively Multiplayer Online Role-Playing Game environment (Neural MMO). The details of the neural network structures and parameter settings are in the appendix. StarCraft II is a real-time strategy game with one or more humans competing against each other or a built-in game AI. Here we focus on decentralized multiagent control in which each learning agent controls an individual army entity. At each step, each agent observes the local game state, which consists of the following information for all units in its field of view: the relative distance to other units, their position and unit type (detailed in the appendix), and selects one of the following actions: move north, south, east or west, attack one of its enemies, stop, or the null action. Agents belonging to the same side receive the same joint reward at each time step, equal to the total damage dealt to the enemy units. Agents also receive a joint reward of 10 points after killing each enemy, and 200 points after killing all enemies. The game ends when all agents on one side die or the time exceeds a fixed period. Note that previous works reduce the learning complexity by manually adding a rule that forbids each agent from selecting an invalid action, e.g., attacking an opponent beyond the attack range or moving beyond the grid border. We relax this setting since it requires prior knowledge, which is hard to obtain in the real world. We are also interested in evaluating whether these rules can be learned automatically through end-to-end training. Thus, the following results are based on the setting in which each agent can select an action that causes an invalid effect; in that case, the agent stands still at the current time step. We also evaluate ASN following the previous settings (adding the manual rule in StarCraft II that forbids invalid actions), and ASN still achieves better performance; these results can be found in the appendix. In the StarCraft II 8m map (8 Marines vs 8 Marines), the agents are homogeneous, so we adopt the homogeneous ASN to evaluate whether it can efficiently characterize the action semantics between two agents. Figure 4 shows the results of the different network architectures: by taking different parts of the observation as the input of different sub-modules, ASN enables an agent to learn the right timing to attack different opponents so as to maximize its total damage on opponents. In contrast, existing network architectures simply feed all information into one network; thus an agent cannot distinguish the differences in the effects that different actions may have on the opponents and may choose a suboptimal opponent to attack, resulting in lower performance than ASN. The attention network performs better than the vanilla and dueling networks when combined with IQL, while both of them show very similar performance to the vanilla network when combined with QMIX and VDN.
However, the entity-attention network performs worst, since it is hard to figure out the useful information for each entity when all information is initially fed into one network. Since the performance differences among the other network architectures are marginal, we only present results for ASN-QMIX compared with the vanilla network under QMIX (denoted vanilla-QMIX) in the following sections. From Figure 5(a) we can observe that mixed ASN-QMIX performs better than vanilla-QMIX. The reason is that ASN efficiently identifies the action semantics between each type of two agents; thus it selects more appropriate attack targets each time and achieves better performance than vanilla-QMIX. Is ASN still effective in large-scale scenarios? We further test on a larger agent space using a 15m map. Figure 5(b) depicts the dynamics of the average win rates of ASN-QMIX and vanilla-QMIX. We can see that ASN-QMIX quickly learns average win rates of approximately 80%, while vanilla-QMIX fails, with average win rates of approximately only 20%. From Figures 4(b) and 5(b) we find that as the number of agents increases, the margin between the two methods becomes larger. Intuitively, ASN enables an agent to explicitly consider a larger number of other agents' information as the agent population grows. For the vanilla network, however, it is more difficult to identify the influence of actions on other agents from a larger amount of mixed information, which results in lower average win rates than ASN. An interesting observation for vanilla-QMIX is that its agents learn to run away to avoid being killed; testing videos can be found on our anonymous website*. Can ASN recognize the influence of different actions? We remove the manually added rule (which prevents selecting any invalid action), so agents may select an invalid action and stand still, which increases the learning difficulty. We can see that ASN-QMIX achieves an average percentage of approximately 71.9% for choosing a valid action, whereas vanilla-QMIX only achieves approximately 44.3%. This phenomenon confirms that ASN effectively exploits the action semantics between agents and enables agents to learn which actions can be chosen at each time step, facilitating more robust learning, even in large-scale MASs. Can ASN effectively improve the estimation accuracy of actions? We investigate whether ASN can efficiently characterize the action semantics and facilitate multiagent coordination. To make the analysis clearer, we test the model learned on a 15m map in two illustrative scenarios: 1) a one-on-one combat scenario in which the distance between the two agents changes dynamically; 2) a one-Marine-vs-two-Marines scenario in which the HPs (Hit Points) of the two opponents differ dynamically. Figure 6(a) shows the dynamics of the attack action's Q-value as the distance between the ASN agent and its opponent changes. We can observe that the Q-value of the ASN agent's attack on its opponent decreases as the distance between them increases, and stabilizes when the distance exceeds the attack range. In contrast, the vanilla agent keeps the Q-value of the attack action nearly unchanged. This indicates that ASN can automatically learn when an action is valid and behave appropriately, while the vanilla agent has to rely on manually added rules to avoid choosing invalid actions.
Figures 6(b) and (c) show the dynamics of the attack action's Q-values of the ASN agent and the vanilla agent as the HP difference between the two opponents changes (i.e., the HP difference equals the HP of opponent 1 minus the HP of opponent 2). We can see that the ASN agent holds a higher Q-value for attacking opponent 1 when opponent 1's HP is lower than opponent 2's, and vice versa. The symmetric curve of ASN is due to the fact that the state descriptions of the two opponents are very similar in this scenario. However, the vanilla agent always keeps a higher attack Q-value on opponent 1 than on opponent 2, which means it always selects to attack opponent 1. These results indicate that ASN can effectively exploit the action semantics between agents and improve the estimation accuracy over different actions, thus facilitating robust learning among agents. Does ASN exploit the 0-padding information? When one of the army units dies or some units are beyond the range of vision, one common practice is to use 0-paddings as the input for the observation of the dead army unit. In this section, we provide an ablation study on whether the ASN design exploits the 0-padding information. Figure 7 shows the win rates of various network architectures combined with QMIX when using 1-paddings and -1-paddings as the input for the observation of the dead army unit. We can see that ASN still performs best among all network architectures in terms of both convergence speed and final win rates. This indicates that ASN effectively extracts the action semantics between agents, instead of benefiting from the particular settings of 0-paddings. Neural MMO is a massively multiagent environment that defines combat systems for a large number of agents. Figure 8 illustrates a simple Neural MMO scene with two groups of agents on a 10×10 tile map. Each group contains 3 agents, each of which starts on any of the tiles with HP = 100. At each step, each agent loses one unit of HP, observes the local game state (detailed in the appendix) and decides on an action, i.e., it moves one tile (up, right, left, down or stop) or makes an attack using any of three attack options (shown in the left part of Figure 8: "Melee", with an attack distance of 2 and a damage of 5; "Range", with an attack distance of 4 and a damage of 2; "Mage", with an attack distance of 10 and a damage of 1). Each action that causes an invalid effect (e.g., attacking an opponent beyond the attack range or moving beyond the grid border) makes the agent stand still. Each agent gets a penalty of -0.1 if the attack fails. The game ends when all agents in one group die, and agents belonging to the same group receive a joint reward, which is the difference in total HP between their own side and the opposing side.
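The following small helper reflects the three Neural MMO attack options just described, with the ranges and damages stated in the text; it returns the damage an option deals at a given distance, or None when the attack would be invalid, and picks the highest-damage valid option.

ATTACKS = {'Melee': (2, 5), 'Range': (4, 2), 'Mage': (10, 1)}  # option -> (max distance, damage)

def attack_damage(option, distance):
    max_dist, dmg = ATTACKS[option]
    return dmg if distance <= max_dist else None   # invalid beyond the attack distance

def best_attack(distance):
    """Highest-damage option that is valid at this distance (None if all are out of range)."""
    valid = [(dmg, name) for name, (d, dmg) in ATTACKS.items() if distance <= d]
    return max(valid)[1] if valid else None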
In Neural MMO, an agent can attack one of its opponents using one of three different attack options, which can be used to evaluate whether the multi-action ASN can efficiently identify the multiple action semantics between agents. Here we adopt two kinds of multi-action ASN: ASN-M1, which shares the parameters of the first neural network layer across the three attack actions on one enemy (as shown in Figure 3), and ASN-M, which does not share them; both are evaluated in combination with independent learning approaches including PPO and A2C. We can observe that ASN performs best under all three IL approaches in terms of average reward. This is because ASN can learn to choose appropriate actions against other agents at different time steps to maximize the damage dealt to them. However, the vanilla network simply mixes all information together, which makes it difficult to identify and take advantage of the action semantics between agents; thus it achieves lower performance than ASN. Since the information is mixed from the start, the attention and entity-attention networks, although they try to learn which information should be focused on, find it hard to distinguish which part of the information is more useful, and therefore also achieve lower performance than ASN. Can ASN recognize the best actions among multiple ones? We further investigate whether ASN can efficiently exploit the different action semantics between agents and enable an agent to identify the best attack option (i.e., the attack that causes the most damage) as the distance between the agent and its opponent changes. Figure 10 shows the average attack damage of each attack option in Neural MMO when the distance between agent i and its opponent j is less than or equal to 2 (d_ij ≤ 2). The best attack option within this distance range is "Melee", since it causes the maximum damage among the three attacks. We can see that both the ASN-M1 agent and the ASN-M agent cause higher total damage than the other methods, and the ASN-M1 agent causes the highest total damage on average. However, the attention network only causes an average total damage of approximately 1.5, and the entity-attention and vanilla networks only cause an average total damage of approximately 1.0, due to their lower probability of selecting the best attack action "Melee". The two kinds of ASN have a higher probability of selecting the best attack option "Melee" than the other two networks, thus causing larger total damage. Similar results for the other distance ranges (d_ij ≤ 4, d_ij ≤ 10) can be found in the appendix: ASN always causes higher total damage than the other networks. We propose a new network architecture, ASN, to facilitate more efficient multiagent learning by explicitly investigating the semantics of actions between agents. To the best of our knowledge, ASN is the first architecture to explicitly characterize action semantics in MASs, and it can be easily combined with various multiagent DRL algorithms to boost learning performance. ASN greatly improves the performance of state-of-the-art DRL methods compared with a number of network architectures. In this paper, we only consider the direct action influence between any two agents. As future work, it is worth investigating how to model the action semantics among more than two agents. Another interesting direction is to consider action semantics between agents in continuous action spaces. Here we provide the hyperparameters for StarCraft II†, shown in Table 2. The following results present the performance of ASN-QMIX and vanilla-QMIX on different StarCraft II maps with the manual rule added (forbidding the agent to choose invalid actions). In a 10×10 tile map (where each tile can be of different kinds, e.g., rock or grass), there are two teams of agents (green and red), each of which has 3 agents. At the beginning of each episode, each agent appears on any of the 10×10 tiles. The observation of an agent is a 43-dimensional vector, in which the first 8 dimensions are: time to live, HP, remaining food (set to 0), remaining water (set to 0), current position (x and y), the amount of damage suffered, and the frozen state (1 or 0); the remaining 35 dimensions are divided equally to describe the other 5 agents' information.
The first 14 dimensions describe the information of the 2 teammates, followed by the description of the 3 opponents' information. Each observed agent's information includes the relative position (x and y), whether it is a teammate (1 or 0), HP, remaining food, remaining water, and the frozen state. Each agent chooses an action from a set of 14 discrete actions: stop; move left, right, up or down; and three different attacks against one of its opponents ("Melee", with an attack distance of 2 and a damage of 5; "Range", with an attack distance of 4 and a damage of 2; "Mage", with an attack distance of 10 and a damage of 1). Each agent gets a penalty of -0.1 if the attack fails. Agents get a -0.01 reward for each tick and a -10 penalty for being killed. The game ends when one group of agents dies or the time exceeds a fixed period, and agents belonging to the same group receive the same reward, which is the difference in total HP between their own side and the opposing side. The details of the vanilla, attention and entity-attention networks for Neural MMO are shown in Figure 13(a-c); each contains an actor network and a critic network. All actors are similar to those for StarCraft II in Figure 11, except that the GRU layer is excluded and the output is the probability of choosing each action. All critics are the same as shown in Figure 13(a). Since in Neural MMO each agent has multiple actions that directly influence each other agent, i.e., three kinds of attack options, we test two kinds of ASN variants: one (Figure 13(d)) is the multi-action ASN mentioned in the previous section, which shares the first-layer parameters among the multiple actions; the other (Figure 13(e)) is the basic homogeneous ASN, which does not share the first-layer parameters among the multiple actions. Here we provide the hyperparameters for Neural MMO, shown in Table 3. The results above present the average attack damage of each attack option under the different distance ranges between the agent and its opponent.
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryg48p4tPH
Our proposed ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them.
Spiking neural networks are being investigated both as biologically plausible models of neural computation and as a potentially more efficient type of neural network. While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has so far been proposed to convert gated recurrent neural networks. Recurrent neural networks in the form of networks of gating memory cells have been central to state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog gated LSTM cell whose neurons can be substituted with efficient stochastic spiking neurons. These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike trains. For such neurons, we approximate the effective activation function, which resembles a sigmoid. We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation. We train these LSTM networks on a noisy and a noiseless version of the original sequence prediction task from BID11, and also on a noisy and a noiseless version of a classical working-memory reinforcement learning task, the T-Maze. Substituting the analog neurons with corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks. With the manifold success of biologically inspired deep neural networks, networks of spiking neurons are being investigated as potential models for computational and energy efficiency. Spiking neural networks mimic the pulse-based communication in biological neurons, where, in brains, neurons spike only sparingly, on average 1-5 spikes per second BID0. A number of successful convolutional neural networks based on spiking neurons have been reported BID7 BID13 BID6 BID15 BID12, with varying degrees of biological plausibility and efficiency. Still, while spiking neural networks have thus been applied successfully to solve image-recognition tasks, many deep learning algorithms use recurrent neural networks (RNNs), in particular Long Short-Term Memory (LSTM) layers BID11. Compared to convolutional neural networks, LSTMs use memory cells to store selected information and various gates to direct the flow of information in and out of the memory cells. To date, the only spike-based version of LSTM has been realized for the IBM TrueNorth platform (Shrestha et al.): this work proposes a method to approximate LSTM specifically for the constraints of this neurosynaptic platform by means of a store-and-release mechanism that synchronizes the modules. This translates to a frame-based rate-coding computation, which is less biologically plausible and energy efficient than the asynchronous approach proposed here. Here, we demonstrate a gated recurrent spiking neural network that corresponds to an LSTM unit with a memory cell and an input gate. Analogous to recent work on spiking neural networks (O'Connor et al.; BID6 BID19 BID20), we first train a network with modified LSTM units that computes with analog values, and show how this LSTM network can be converted to a spiking neural network using adaptive stochastic spiking neurons that encode and decode information in spikes using a form of sigma-delta coding BID18 BID19 BID14.
In particular, we develop a binary version of the adaptive sigma-delta coding proposed in BID19: we approximate the shape of the transfer function that this model of fast-adapting spiking neurons exhibits, and we assemble the analog LSTM units using just this transfer function. Since input gating is essential for maintaining memorized information without interference from unrelated sensory inputs BID11, and to reduce complexity, we model a limited LSTM neuron consisting of an input cell, an input gating cell, a Constant Error Carousel (CEC) and an output cell. The resultant analog LSTM network is then trained on a number of classical sequential tasks, such as the noise-free and noisy Sequence Prediction and the T-Maze task BID11 BID1. We demonstrate how nearly all of the corresponding spiking LSTM neural networks correctly compute the same function as the analog version. Note that the conversion of gated RNNs to spike-based computation implies a conversion of the neural network from time-step-based behavior to the continuous-time domain: for RNNs, this means having to consider the continuous signal integration in the memory cell. We solve this time-conversion problem by analytically approximating the spiking memory cell behavior through time. Together, this work is a first step towards using spiking neural networks in such diverse and challenging tasks as speech recognition and working-memory cognitive tasks. To construct an Adaptive Spiking LSTM network, we first describe the Adaptive Spiking Neurons and approximate the corresponding activation function. Subsequently, we show how an LSTM network comprised of a spiking memory cell and a spike-driven input gate can be constructed, and we discuss how analog versions of this LSTM network are trained and converted to spiking versions. Adaptive Spiking Neuron. The spiking neurons used in this paper are Adaptive Spiking Neurons (ASNs) as described in BID2. This is a variant of an adapting Leaky Integrate-and-Fire (LIF) neuron model that includes fast adaptation to the dynamic range of input signals. The ASNs used here communicate with spikes of a fixed height h = 1 (binary output), as suggested by BID20. The behavior of the ASN is determined by the following equations: the incoming postsynaptic current, DISPLAYFORM0; the input signal, DISPLAYFORM1; the threshold, DISPLAYFORM2; and the internal state, DISPLAYFORM3; where w_i is the weight (synaptic strength) of the neuron's incoming connection; t^i_s < t denote the spike times of neuron i, and t_s < t denote the spike times of the neuron itself; φ(t) is an exponential smoothing filter with a short time constant τ_φ; ϑ_0 is the resting threshold; m_f is a variable controlling the speed of spike-rate adaptation; and τ_β, τ_γ, τ_η are the time constants that determine the rate of decay of I(t), ϑ(t) and Ŝ(t), respectively (see BID2 and BID19 for more details). As in BID2, the ASN emits spikes following a stochastic firing condition, DISPLAYFORM4, where V(t) is the membrane potential, defined as the difference between S(t) and Ŝ(t), λ_0 = 0.005 is a normalization parameter and ∆V = 0.1 is a scaling factor that defines the slope of the stochastic area.
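The following is an illustrative discrete-time sketch of this adaptive spiking neuron. The exact update equations are not reproduced in this text, so the decay and jump rules and the exponential form of the firing probability below are assumptions, kept merely consistent with the stated quantities (ϑ_0, m_f, the time constants, λ_0 and ∆V); this is not the authors' definitive model.

import numpy as np

def simulate_asn(input_current, dt=1.0, tau_beta=10., tau_gamma=10., tau_eta=10.,
                 theta0=0.3, m_f=0.18, lam0=0.005, dV=0.1, h=1.0, rng=np.random):
    I, theta, S_hat = 0.0, theta0, 0.0
    spikes = []
    for x in input_current:
        I += dt * (-I / tau_beta) + x            # decaying postsynaptic current driven by input
        S = I                                    # (smoothing filter omitted for brevity)
        theta += dt * (theta0 - theta) / tau_gamma   # threshold relaxes back to the resting value
        S_hat += dt * (-S_hat / tau_eta)             # internal approximation of the signal decays
        V = S - S_hat                            # membrane potential as defined in the text
        p = min(1.0, lam0 * np.exp(V / dV))      # assumed stochastic firing rule
        if rng.random() < p:
            spikes.append(h)                     # binary spike of fixed height h
            S_hat += theta                       # approximation jumps by the current threshold
            theta += m_f * theta                 # multiplicative spike-rate adaptation
        else:
            spikes.append(0.0)
    return np.array(spikes)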
Activation function of the Adaptive Analog Neuron. In order to create a network of ASNs that performs correctly on typical LSTM tasks, our approach is to train a network of Adaptive Analog Neurons (AANs) and then convert the resulting analog network into a spiking one, similar to O'Connor et al.; BID6; BID19. We define the activation function of the AANs as the function that maps the input signal S to the average PSC I that is perceived by the next (receiving) ASN in a defined time window. We normalize the obtained spiking activation function at the point where it reaches a plateau. We then fit the normalized spiking activation function with a sum-of-exponentials-shaped function, DISPLAYFORM5, with derivative DISPLAYFORM6, where, for the neuron parameters used, we find a = 148.7, b = -10.16, c = 3.256 and d = -1.08. Using this mapping from the AAN to the ASN (see Figure 1), the activation function can be used during training; thereafter, the ASNs are used as "drop-in" replacements for the AANs in the trained network. Unless otherwise stated, the ASNs use τ_η = τ_β = τ_γ = 10 ms, and ϑ_0 and m_f are set to 0.3 and 0.18 for all neurons. The spike height h is found such that ASN(4.8) = 1. Note that the spike height h is a normalization parameter for the activation function of the ASN model: in order to have binary communication across the network, the output weights are simply scaled by h. Adaptive Spiking LSTM. An LSTM cell usually consists of an input and an output gate, an input and an output cell, and a CEC BID11. Deviating from the original formulation, and from more recent versions where forget gates and peepholes were added (Gers et al.), the Adaptive Spiking LSTM as we present it here consists only of an input gate, input and output cells, and a CEC. As noted, to obtain a working Adaptive Spiking LSTM, we first train its analog equivalent, the Adaptive Analog LSTM. Figure 2 shows the schematic of the Adaptive Analog LSTM and its spiking analogue. It is important to note that we aim for a one-to-one mapping from the Adaptive Analog LSTM to the Adaptive Spiking LSTM. This means that while we train the Adaptive Analog LSTM network with the standard time-step representation, the conversion to the continuous-time spiking domain is achieved by presenting each input for a time window of size ∆t. Sigmoidal ASN. The original formulation of LSTM uses sigmoidal activation functions in the input gate and input cell. However, the typical activation function of real neurons resembles a half-sigmoid, and we find that the absence of a gradient for negative input is problematic during training. Here, we approximate a sigmoidal-shaped activation function by exploiting the stochastic firing condition of the ASN. Indeed, Figure 1 shows that the ASN has a non-null probability of firing even below the threshold ϑ_0. Therefore, the AAN transfer function of Eq. 6 retains a gradient in that area. Together with the maximal activation being normalized to 1 (see Eq. 6 for lim S→∞), the AAN transfer function is a good candidate for LSTM operations such as closing and opening the gates. Spiking input gate and spiking input cell. The AAN functions are used in the Adaptive Analog LSTM cell for the input gate and input cell. The activation value of the input cell is multiplied by the activation value of the input gate before it enters the CEC (see Figure 2). In the spiking version of the input gate, the outgoing signal from the ASN is accumulated in an intermediate neuron (ASN* in Figure 2). The internal state Ŝ of this neuron is then multiplied with the spikes that travel from the ASN of the input cell to the ASN of the output cell. This leads to a direct mapping from the Adaptive Analog LSTM to the Adaptive Spiking LSTM. Spiking Constant Error Carousel (CEC) and spiking output cell.
The Constant Error Carousel (CEC) is the central part of the LSTM cell and avoids the vanishing gradient problem BID11. In the Adaptive Spiking LSTM, we merge the CEC and the output cell into one ASN with an internal state that does not decay; in the brain, this could be implemented by slowly decaying (seconds) neurons BID5. The value of the CEC in the Adaptive Analog LSTM corresponds to the state I of the ASN output cell in the Adaptive Spiking LSTM. In the Adaptive Spiking LSTM, we set τ_β in Equation 1 to a very large value for the CEC cell to obtain the integrating behavior of a CEC. Since no forget gate is implemented, this results in a spiking CEC neuron that fully integrates its input. When τ_β is set to ∞, every incoming spike is added to a non-decaying PSC I. So if the state of the sending neuron (ASN_in in FIG1) has a stable inter-spike interval (ISI), then the I of the receiving neuron (ASN_out) is increased by the incoming spike height h every ISI, i.e., by h/ISI per time step. For a stochastic neuron, this corresponds to the average increase per time step. The same integrating behavior needs to be translated to the analog CEC. Since the CEC cell of the Adaptive Spiking LSTM integrates its input S every time step by S/τ_η, we can map this to the CEC of the Adaptive Analog LSTM. The CEC of a traditional LSTM without a forget gate is updated every time step as CEC(t) = CEC(t - 1) + S, with S its input value. The CEC of the Adaptive Analog LSTM is updated every time step as CEC(t) = CEC(t - 1) + S/τ_η. This is depicted in Figure 2 via a weight with value 1/τ_η after the input gate. To allow a correct continuous-time representation after the spike-coding conversion, we divide the incoming connection weight to the CEC, W_CEC, by the time window ∆t. In our approach, then, we train the Adaptive Analog LSTM as the traditional LSTM (without the τ_η factor), which effectively corresponds to setting a continuous-time time window ∆t = τ_η. Thus, to select a different ∆t, W_CEC in the spiking version has to be set to W_CEC = τ_η/∆t. The middle plot in FIG1 shows that setting τ_β to ∞ for ASN_out in a spiking network results in the same behavior as using an analog CEC that integrates with CEC(t) = CEC(t - 1) + S, since the slope of the analog CEC is indeed the same as the slope of the spiking CEC. Here, every time step in the analog experiment corresponds to ∆t = 200 ms. However, the spiking CEC still produces an error with respect to the analog CEC (the error increases for smaller ∆t, e.g. it doubles when going from 200 ms to 50 ms). This has two causes: first, the stochastic firing condition results in an irregular ISI; second, the adapting behavior of the ASN produces a transitory response that is not represented by the AAN transfer function. For these reasons, choosing larger time windows ∆t gives more stable responses. Learning rule used for training the spiking LSTM. To train the analog LSTMs on the supervised tasks, a customized truncated version of real-time recurrent learning (RTRL) was used. This is the same algorithm used in Gers et al., where the partial derivatives w.r.t. the weights W_xc and W_xi (see Figure 2) are truncated. For the reinforcement learning (RL) tasks we used RL-LSTM BID1, which uses the same customized, truncated version of RTRL that was used for the supervised tasks. RL-LSTM also incorporates eligibility traces to improve training, as well as Advantage Learning BID10. All regular neurons in the network are trained with traditional backpropagation.
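Below is a compact sketch of the analog LSTM cell used here (input cell, input gate, CEC without forget gate, output cell) and of the CEC weight mapping W_CEC = τ_η/∆t described above. The fitted AAN activation is passed in as a callable rather than hard-coded, since its closed form is not reproduced in this text; parameter names are illustrative.

import numpy as np

def analog_lstm_step(x, cec, params, aan):
    """One analog time step; `aan` is the (vectorized) AAN activation function."""
    gate = aan(x @ params['W_xi'])      # input gate (AAN activation)
    cell = aan(x @ params['W_xc'])      # input cell (AAN activation)
    cec = cec + gate * cell             # CEC integrates without decay (trained without the tau_eta factor)
    out = aan(cec)                      # output cell reads the CEC state
    return cec, out

def spiking_cec_weight(tau_eta=10.0, delta_t=200.0):
    # after conversion each analog step is presented for delta_t ms, so the
    # incoming CEC connection weight is rescaled as W_CEC = tau_eta / delta_t
    return tau_eta / delta_t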
Since the presented Adaptive Analog LSTM only has an input gate and no output or forget gate, we present four classical tasks from the LSTM literature that do not rely on these additional gates. Sequence Prediction with Long Time Lags. The main concept of LSTM, the ability of a CEC to maintain information over long stretches of time, was demonstrated in BID11 with a Sequence Prediction task: the network has to predict the next input of a sequence of p + 1 possible input symbols denoted a_1, ..., a_{p-1}, a_p = x, a_{p+1} = y. In the noise-free version of this task, every symbol is represented by the p + 1 input units, with the i-th unit set to 1 and all others to 0. At every time step a new input of the sequence is presented. As in the original formulation, we train the network with two possible sequences, (x, a_1, a_2, ..., a_{p-1}, x) and (y, a_1, a_2, ..., a_{p-1}, y), chosen with equal probability. For both sequences the network has to store a representation of the first element in the memory cell for the entire length of the sequence (p). We train 20 networks on this task for a total of 100k trials, with p = 100, on an architecture with p + 1 input units and p + 1 output units. The input units are fully connected to the output units without a hidden layer. The same sequential network construction method as in the original paper was used to prevent the "abuse problem": the Adaptive Analog LSTM cell is only included in the network after the error stops decreasing BID11. In the noisy version of the sequence prediction task, the network still has to predict the next input of the sequence, but the symbols from a_1 to a_{p-1} are presented in random order and the same symbol can occur multiple times. Therefore, only the final symbols a_p and a_{p+1} can be correctly predicted. This version of the sequence prediction task rules out the possibility that the network learns local regularities in the input stream. We train 20 networks with the same architecture and parameters as in the previous task, but for 200k trials. For both the noise-free and noisy tasks, we considered a network converged when the average error over the last 100 trials was below 0.25.
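The following short sketch generates one trial of the Sequence Prediction task just described (noise-free or noisy), with symbols encoded as one-hot vectors over the p + 1 input units; the symbol indexing is an arbitrary but consistent choice.

import numpy as np

def make_sequence(p=100, noisy=False, rng=np.random):
    # symbol indices: 0..p-2 encode a_1..a_{p-1}; p-1 encodes x; p encodes y
    x_sym, y_sym = p - 1, p
    first = x_sym if rng.random() < 0.5 else y_sym       # x- or y-sequence with equal probability
    if noisy:
        middle = list(rng.randint(0, p - 1, size=p - 1)) # random order, repeats allowed
    else:
        middle = list(range(p - 1))                      # a_1, ..., a_{p-1} in order
    ids = [first] + middle + [first]                     # the final symbol repeats the first one
    seq = np.zeros((len(ids), p + 1))
    seq[np.arange(len(ids)), ids] = 1.0                  # one-hot encoding over p + 1 units
    return seq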
At the beginning of the task the input can be either 011 or 110 (which indicates on which side of the T-junction the reward is placed). Here, we chose the corridor length N = 20. A noiseless and a noisy version of the task were defined: in the noiseless version the corridor is represented as 101, and at the T-junction 010; in the noisy version the input in the corridor is represented as a0b where a and b are two uniformly distributed random variables in a range of. While the noiseless version can be learned by LSTM-like networks without input gating BID16, the noisy version requires the use of such gates. The network consists of a fully connected hidden layer with 12 AAN units and 3 Adaptive Analog LSTMs. To increase the influence of the LSTM cell in the network, we normalized the activation functions of the AAN output cell and ASN output cell at S = 1. The same training parameters are used as in Bakker FORMULA1; we train 20 networks for each task and all networks have the same architecture. As a convergence criterion, we checked whether the network reached on average a total reward greater than 3.5 in the last 100 trials. As shown in TAB0, all of the networks that were successfully trained for the noise-free and noisy Sequence Prediction tasks could be converted into spiking networks. FIG3 shows the last 6 inputs of a noise-free Sequence Prediction task before (left) and after (right) the conversion, demonstrating the correct predictions made in both cases. Indeed, for the 19 successful networks, after presenting either x or y as the first symbol of the sequence, the average error over the last 200 ms was always below the chosen threshold of 0.25. As can be seen in Figure 6, the analog and the spiking CEC follow a comparable trend during the task, reaching similar values at the end of the simulation. Note that, in the noisy task, all the successfully trained networks were still working after the conversion: in this case, due to the input noise, the CEC values are always well separated. Finally, we found that the number of trials needed to reach the convergence criterion was, on average, lower than that reported in BID11. Similar results were obtained for the T-Maze task: all the networks were successful after the conversion in both the noise-free and noisy conditions. FIG4 shows the Q-values of a noisy T-Maze task, demonstrating the correspondence between the analog and spiking representations even in the presence of noisy inputs. However, we notice that the CECs of the spiking LSTMs reach different values compared to their analog counterparts. This is probably due to the increased network and task complexity. In general, we see that the spiking CEC value is close to the analog CEC value, while always exhibiting some deviations. Moreover, TAB0 reports the average firing rate computed per task, showing reasonably low values compatible with those recorded from real neurons. Gating is a crucial ingredient in recurrent neural networks that are able to learn long-range dependencies BID11. Input gates in particular allow memory cells to maintain information over long stretches of time regardless of the presented (irrelevant) sensory input BID11.
The ability to recognize and maintain information for later use is also what makes gated RNNs like LSTM so successful in a great many sequence-related problems, ranging from natural language processing to learning cognitive tasks BID1. To transfer deep neural networks to networks of spiking neurons, a highly effective method has been to map the transfer function of spiking neurons to analog counterparts and then, once the network has been trained, substitute the analog neurons with spiking neurons BID6; BID19. Here, we showed how this approach can be extended to gated memory units, and we demonstrated this for an LSTM network comprised of an input gate and a CEC. Hence, we effectively obtained a low-firing-rate asynchronous LSTM network. The most complex aspect of a gating mechanism turned out to be the requirement of a differentiable gating function, for which analog networks use sigmoidal units. We approximated the activation function of a stochastic Adaptive Spiking Neuron, which, like many real neurons, approximates a half-sigmoid (Fig. 1). We showed how the stochastic spiking neuron has an effective activation even below the resting threshold ϑ 0. This provides a gradient for training even in that region. The resulting LSTM network was then shown to be suitable for learning sequence prediction tasks, in both noise-free and noisy settings, and a standard working-memory reinforcement learning task. The learned network could then successfully be mapped to its spiking neural network equivalent for at least 90% of the trained analog networks. Figure 6: The values of the analog CECs and spiking CECs for the noise-free Sequence Prediction (left, only one CEC cell was used) and noise-free T-maze (right, three CEC cells were used) tasks. The spiking CEC is the internal state Ŝ of the output cell of the Adaptive Spiking LSTM. We also showed that some difficulties arise in the conversion of analog to spiking LSTM. Principally, the ASN activation function is derived for steady-state adapted spiking neurons, and this difference causes an error that may be large for fast-changing signals. Analog-valued spikes as explored in BID19 could likely resolve this issue, at the expense of some loss of representational efficiency. Although the adaptive spiking LSTM implemented in this paper does not have output gates BID11, they can be included by following the same approach used for the input gates: a modulation of the synaptic strength (sketched below). The reasons for our approach are multiple: first of all, most of the tasks do not really require output gates; moreover, modulating each output synapse independently is less intuitive and less biologically plausible than doing so for the input gates. A similar argument can be made for the forget gates, which were not included in the original LSTM formulation: here, the solution consists in modulating the decay factor of the CEC. Finally, which gates are really needed in an LSTM network is still an open question, with answers depending on the kind of task to be solved BID9 BID21. For example, the AuGMEnT framework does not use gates to solve many working memory RL tasks BID16. In addition, it has been shown by BID4; BID9 that a combination of input and forget gates can outperform LSTM on a variety of tasks while reducing the LSTM complexity.
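A hedged sketch of the gating-as-synaptic-modulation idea mentioned above: the (analog) gate activation simply scales the effective strength of each synapse feeding the CEC. The sigmoid below is only a stand-in for the ASN transfer function, which the text describes as approximately a half-sigmoid; none of the names or constants here come from the paper.

```python
import numpy as np

def gate_activation(drive):
    """Illustrative stand-in for the (roughly half-sigmoid) ASN transfer function."""
    return 1.0 / (1.0 + np.exp(-drive))

def gated_input_to_cec(w_in, pre_activity, gate_drive):
    """Effective drive to the CEC: the synaptic weight is modulated by the gate."""
    g = gate_activation(gate_drive)       # gate value in (0, 1)
    return (w_in * g) * pre_activity      # modulated synaptic strength
```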
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rk8R_JWRW
We demonstrate a gated recurrent asynchronous spiking neural network that corresponds to an LSTM unit.
CNNs are widely successful in recognizing human actions in videos, albeit with a great cost of computation. This cost is significantly higher in the case of long-range actions, where a video can span up to a few minutes, on average. The goal of this paper is to reduce the computational cost of these CNNs, without sacrificing their performance. We propose VideoEpitoma, a neural network architecture comprising two modules: a timestamp selector and a video classifier. Given a long-range video of thousands of timesteps, the selector learns to choose only a few but most representative timesteps for the video. This selector resides on top of a lightweight CNN such as MobileNet and uses a novel gating module to take a binary decision: consider or discard a video timestep. This decision is conditioned on both the timestep-level feature and the video-level consensus. A heavyweight CNN model such as I3D takes the selected frames as input and performs video classification. Using off-the-shelf video classifiers, VideoEpitoma reduces the computation by up to 50\% without compromising the accuracy. In addition, we show that if trained end-to-end, the selector learns to make better choices to the benefit of the classifier, despite the selector and the classifier residing on two different CNNs. Finally, we report state-of-the-art on two datasets for long-range action recognition: Charades and Breakfast Actions, with much-reduced computation. In particular, we match the accuracy of I3D by using less than half of the computation. A human can skim through a minute-long video in just a few seconds, and still grasp its underlying story . This extreme efficiency of the human visual and temporal information processing beggars belief. The unmatched trade-off between efficiency and accuracy can be attributed to visual attention -one of the hallmarks of the human cognitive abilities. This raises the question: can we build an efficient, yet effective, neural model to recognize minutes-long actions in videos? A possible solution is building efficient neural networks, which have a demonstrated record of success in the efficient recognition of images . Such models have been successful for recognizing short-range actions in datasets such as HMDB and UCF-101 , where analysis of only a few frames would suffice . In contrast, a long-range action can take up to a few minutes to unfold (a). Current methods fully process the long-range action video to successfully recognize it. Thus, for long-range actions, the major computational bottleneck is the sheer number of video frames to be processed. Another potential solution is attention. Not only it is biologically plausible, but also it is used in a wide spectrum of computer vision tasks, such as image classification, semantic segmentation , action recognition and temporal localization . Attention has also been applied to language understanding and graph modeling (Veličković et al., 2017). Most of these methods use soft-attention, where the insignificant visual signals are least attended to. However, such signals are still fully processed by the neural network and hence no reduction on the computation cost is obtained. Neural gating is a more conceivable choice to realize the efficiency, by completely dropping the insignificant visual signals. Recently, there has been a notable success in making neural gating differentiable . Neural gating is applied to conditional learning, and is used to gate network layers , convolutional channels , and more . 
That begs the question: can neural gating help in reducing the computational cost of recognizing minutes-long actions? That is to say, can we learn a gating mechanism to consider or discard video frames, conditioned on their video? Motivated by the aforementioned questions, we propose VideoEpitoma, a two-stage neural network for efficient classification of long-range actions without compromising the performance. The first stage is the timestep selector, in which, many timesteps of a long-range action are efficiently represented by lightweight CNN, such as MobileNet (; ;). Then, a novel gating module learns to select only the most significant timesteps -practically achieving the epitoma (Latin for summary) of this video. In the second stage, a heavyweight CNN, such as I3D , is used to effectively represent only the selected timesteps, followed by temporal modeling for the video-level recognition. This paper contributes the followings: i. VideoEpitoma, a neural network model for efficient recognition of long-range actions. The proposed model uses a novel gating module for timestep selection, conditioned on both the input frame and its context. ii. Off the shelf, our timestamp selector benefits video classification models and yields signification reduction in computation costs. We also show that if trained end-to-end, the timestep selector learns better gating mechanism to the benefit of the video classifier. iii. We present state-of-the-art on two long-range action recognition benchmarks: Charades and Breakfast Actions with significant reductions in the computational costs. Efficient Architectures. CNNs are the go-to solution when it comes to video classification. Thus, one prospective of reducing the computation of video recognition is to build efficient CNNs. Methods for pruning least important weights or filters were previously proposed. Careful design choices in very efficient 2D CNNs such as MobileNet and ShuffleNet . These 2D CNNs are extended to their 3D counterparts (ShuffleNet-3D and MobileNet-3D byKöpüklü et al. ) to learn spatiotemporal concepts for video classification. Neural architecture search is used to find the lightweight NasNet-Mobile . Long-range Actions Short-range actions in datasets such as Kinetics and UCF-101 have average length of 10 seconds. They can be practically classified with CNNs using as little as 10 frames per video, and in some cases, even 1 frame would suffice . Therefore, building efficient CNNs is a plausible choice to reduce computational cost of recognizing them. However, long-range videos (e.g. Charades and Breakfast Actions ) can take up to 5 minutes to unfold. Thus, requiring as many as a thousand frames (a; b) to be correctly classified. As such, analyzing all the frames using efficient CNNs can still be computationally expensive. In contrast, having a mechanism to select the most relevant frames can boost the efficiency. Therefore, this paper focuses on reducing the number of video frames needed for action recognition. Nevertheless, our work is orthogonal to prior works that focus on development of efficient CNN for action recognition. Conditional Computing. Another solution to reduce the computation is to dynamically route the compute graph of a neural network. The assumption is that not all input signals require the same amount of computation -some are complicated while others are seemingly easy. Thanks to categorical reparametarization , it becomes possible to discretize a continuous distribution, and effectively learn binary gating. 
One line of prior work builds a dynamic graph by gating the layers of a typical CNN, while another achieves gating at the level of convolutional channels. In the same vein, GaterNet proposes a separate gater network to learn binary gates for the backbone network. Differently, this paper focuses on gating the video frames themselves, to realize efficiency. Sampling of Video Frames. Several works discuss frame sampling for short-range videos. SCSampler learns a ranking score using trimmed vs. untrimmed video segments. Another work proposes a student-teacher model for trimmed video classification. Elsewhere, an agent is trained with reinforcement learning to learn where to look next. However, frame sampling for long-range actions is fundamentally different from that of short-range. Unlike short-range actions, in long-range actions usually a much smaller proportion of timesteps is crucial for classification. As a result, this paper focuses on frame selection solely for long-range actions, and it does not require any video-level annotation other than the video category itself. Then it temporally models these selected timesteps to arrive at the video-level feature, which is then classified. Figure 2: Bottom, the Timestep Selector learns concept kernels to represent the dominant visual concepts across the videos. Top, the gating module learns to select only a few timesteps according to their importance to the current video. Model Overview. VideoEpitoma consists of two stages: Timestep Selector and Video Classifier, see figure 1. The first stage is the Timestep Selector and consists of a lightweight CNN, LightNet, followed by a novel gating module, see figure 2. The purpose of this module is timestep gating, i.e. to take a binary decision of considering or discarding each video timestep, based on how relevant it is to the video itself. The second stage is the video classifier. Its main purpose is to learn deep and discriminative video-level representations for maximum classification accuracy. Thus, it resides on top of a heavyweight CNN, HeavyNet, followed by an off-the-shelf temporal layer for video-level representation, and a Multi-Layer Perceptron (MLP) for classification. Only the timesteps chosen by the first stage, i.e. the Timestep Selector, are considered by the second stage, i.e. the video classifier. The Timestep Selector. Conceptually speaking, a long-range action consists of a few dominant and discriminative visual concepts, based on which the video can be recognized (b; a). Take for example "Making Pancake". One can easily discriminate it by observing its dominant evidence: "Pancake", "Eggs", "Pan", and "Stove". These can be thought of as latent concepts. To represent these concepts, we opt for learning a set of N concept kernels K = {k 1, k 2, ..., k N}. K is randomly initialized, is part of the network parameters, and is learned during the training of the selector. Our concept kernels K are reminiscent of the nodes in VideoGraph (b) or the centroids in ActionVLAD. Once these concepts are learned, it becomes easier to efficiently summarize a long-range action. We traverse a video of thousands of timesteps and decide which of them to consider and which to discard, based on the similarity between the features of these timesteps and those of the latent concepts. Our assumption is that a lightweight representation of each timestep is sufficient for taking this decision. Thus, the selector depends on an efficient LightNet to represent these timesteps.
Given a long-range video v of T timesteps, each is represented as a feature x i ∈ R C×H×W using the LightNet, where C is the number of convolutional channels and H, W are the channel height and width, respectively. The Gating Module. The purpose of the gating module is to select the video timesteps, see figure 2 top. We start by comparing how relevant each timestep feature x i is to all of the concept kernels K ∈ R N×C using a dot product. The result is a vector of similarity scores, representing how relevant a timestep is to each of these concept kernels. Then we model the correlation between these similarity scores s i with a two-layer MLP with a single neuron in the output layer, denoted as α. Next, we need to convert the continuous variable α to a binary variable, such that it represents the decision of the gating module. For this, we make use of to discretize a continuous variable. Following the gating mechanism of , we add gumbel noise to α and follow with a sigmoid activation and binary thresholding, arriving at the activated gating value α. Then, each timestep feature x i is multiplied by α, to either select or discard it. A problem with the aforementioned gating mechanism is that during the feedforward pass, the classifier does not know which of the selected timesteps is more relevant than the others. As a remedy, we propose a different gating mechanism, see figure 2, top. First, a sigmoid non-linearity is applied to the gating value α to limit its lower and upper bounds, α = sigmoid(α). Then, to achieve gating, we clip α below the threshold 0.5. This modified activation function, the clipped sigmoid, fits the purpose of timestep gating well due to 3 desirable properties, see figure 3 (a simplified sketch of the full gating module is given after this passage). i. Being a relaxation of the step function makes it differentiable. ii. Retaining the sigmoid value above the threshold means that the classifier gets the chance to know, out of the selected timesteps, which is relatively more important than the others. iii. Unlike ReLU, the sigmoid activation is upper-bounded by 1, thus preventing a single timestep from dominating the others by being multiplied by an unbounded gating value α. Context Conditional Gating. Up till now, the selector learns to gate each timestep regardless of its context, i.e. the video itself. To achieve conditional gating, where both the timestep and its context affect the gating mechanism, we opt for a temporal modeling layer, self-attention, before the gating module, see figure 2, bottom. This temporal layer learns to correlate each timestep with all the others in the video before gating. For selecting timesteps during training, the gating module uses the gated-sigmoid as the activation for the gating value α. It has some desirable properties. i. Unlike ReLU, having an upper bound does not allow a timestep feature to dominate others. ii. Unlike the sigmoid, being clipped allows the network to discard insignificant timesteps, i.e. those with gating values α < 0.5. At test time, we replace the gated-sigmoid with a step function for binary gating of timesteps. Sparse Selection. The last component of the selector is to enforce sparsity on timestep selection, i.e. choose as few timesteps as possible, yet retain the classification accuracy. Loosely speaking, the selector can simply cheat by predicting gating values α just higher than the threshold 0.5, resulting in all gates opened and all timesteps selected. The selector has a natural tendency towards such behaviour, as the only loss used so far is that of classification. And the more timesteps used by the classifier, the better the classification accuracy.
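A simplified sketch of the gating module described above (the sparsity regularization that prevents the cheating behaviour is discussed immediately after this block). The sizes N, C and the MLP width are assumed for illustration; this is not the released implementation.

```python
import torch
import torch.nn as nn

class TimestepGate(nn.Module):
    """Concept-kernel similarities -> two-layer MLP -> Gumbel noise -> clipped sigmoid."""
    def __init__(self, n_concepts=128, channels=1024, hidden=64):
        super().__init__()
        self.kernels = nn.Parameter(torch.randn(n_concepts, channels))   # K
        self.mlp = nn.Sequential(nn.Linear(n_concepts, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, training=True):
        # x: (T, C) pooled timestep features from the LightNet
        sims = x @ self.kernels.t()                    # (T, N) similarity scores
        alpha = self.mlp(sims).squeeze(-1)             # (T,) gating logits
        if training:
            u = torch.rand_like(alpha).clamp(1e-6, 1 - 1e-6)
            alpha = alpha - torch.log(-torch.log(u))   # add Gumbel noise
            gate = torch.sigmoid(alpha)
            gate = gate * (gate > 0.5).float()         # clipped ("gated") sigmoid
        else:
            gate = (torch.sigmoid(alpha) > 0.5).float()  # hard step at test time
        return x * gate.unsqueeze(-1), gate
```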
To prevent such a behaviour, we apply L 0 regularization to the gating values α to enforce sparsity on the selected timesteps. We note that the sparsity regularization is necessary for a properly functioning gating mechanism. The Video Classifier. The assumption of VideoEpitoma is that having efficiently selected the most crucial timesteps from the video using the LightNet and the selector, one can opt for a much more powerful HeavyNet to effectively classify the video. Thus, the second stage of VideoEpitoma is the video classifier, see figure 1. It takes as input only the subset of timesteps chosen by the selector, which is much smaller than T. Each timestep is represented as a feature y i using the HeavyNet. Following feature extraction, we use one layer of self-attention for temporal modeling to obtain a video-level representation, followed by a two-layer MLP for the final classification. Before training VideoEpitoma, all the CNNs used, as LightNet and HeavyNet, are first fine-tuned on the videos of the dataset at hand. VideoEpitoma is trained with batch size 32 and for 100 epochs. We use Adam with learning rate 1e-3 and epsilon 1e-4. We use PyTorch and TensorFlow for our implementation. Our choice for the LightNet is MobileNetv3. As for the HeavyNet, we experiment with I3D, ShuffleNet3D and ResNet2D (the 50-layer version). It is worth mentioning that in the gating module, and during the training phase, we use gumbel noise and the clipped sigmoid to get the activated gating value α, see figure 2. In the test phase, we do not use gumbel noise, and we use a step function to get a binary gating value. Breakfast Actions. Breakfast Actions is a dataset for long-range actions, depicting cooking activities. All in all, it contains 1712 videos, divided into 1357 and 335 for training and testing, respectively. The task is video recognition into 10 classes of making different breakfasts. Added to the video-level annotation, we are given temporal annotations of 48 one-actions. In our experiments, we only use the video-level annotation, and we do not use the temporal annotation of the one-actions. The videos are long-range, with an average length of 2.3 minutes per video, which makes the dataset ideal for testing the efficiency of recognizing long-range actions. The evaluation metric is accuracy. Charades. Charades is a widely used benchmark for human action recognition. It is a diverse dataset with 157 action classes in total. The task is multi-label recognition, where each video is assigned to one or more action classes. It is divided into 8k, 1.2k and 2k videos for training, validation and test splits, respectively, covering 67 hours. On average, each video spans 30 seconds, and is labeled with 6 and 9 actions for the training and test splits, respectively. Thus, Charades meets the criteria of long-range actions. We use Mean Average Precision (mAP) for evaluation. One might raise an important question: will a Timestep Selector based on LightNet features benefit a classifier based on HeavyNet features, given the differences between the feature spaces of LightNet and HeavyNet? To answer this question, we construct a two-step experiment on Breakfast. The first step is training a stand-alone selector. For this, we train VideoEpitoma to classify the videos of Breakfast, where we choose MobileNet for both LightNet and HeavyNet. During training, we randomly sample T = 32 timesteps from each video. Since MobileNet is a 2D CNN, a timestep here is practically a video frame.
With the help of the L 0 regularization, the selector achieves a sparse selection of timesteps, using as few as 16 timesteps without degrading the classification performance. The second step is testing how the selector will benefit off-the-shelf CNN classifiers. For this, we use different CNN classifiers, previously fine-tuned on Breakfast: I3D, ShuffleNet3D and ResNet2D. Then, we measure their performance using sampled T ∈ {1, 2, 4, 8, 16} timesteps from each video. We use different sampling methods: i. random, ii. uniform and iii. timestep selector. As discussed, the output of the timestep selector is a per-timestep binary value α ∈ {0, 1} of whether to consider or discard this timestep. So, if T timesteps are processed by the selector, it is able to choose a subset of timesteps and discard the others, where the subset is much smaller than T. To evaluate the benefit of the selector, the off-the-shelf classifier then uses only this subset. As shown in figure 4, we observe that the stand-alone selector helps the off-the-shelf classifiers to retain their performance with a reduction of up to 50% of the timesteps. The same improvement is observed for three different CNN classifiers: I3D, ResNet2D and ShuffleNet3D. The main result of this experiment is the following. To realize the efficient recognition of long-range actions, reducing the number of processed timesteps is far more rewarding than reducing the processing of each timestep. In other words, our Timestep Selector is able to reduce, by more than half, the computation of the already efficient ShuffleNet3D. See the appendix for full results. Having demonstrated that a stand-alone selector can benefit off-the-shelf classifiers, we pose another question: is it possible to train VideoEpitoma end-to-end, given that the selector and the classifier operate on features from two different CNNs, LightNet and HeavyNet, with two different feature spaces? To answer this question, we do the following experiment. We train VideoEpitoma in an end-to-end fashion, where we choose the efficient MobileNet as the LightNet of the selector. As for the HeavyNet of the classifier, we explore multiple choices: I3D, ShuffleNet3D and ResNet2D. Based on our experiments, careful consideration is needed to align the timestep features of the 2D LightNet with those of the 3D HeavyNet. In a typical 3D HeavyNet, each timestep is a video snippet of m successive frames {f j, ..., f j+m}, represented as one timestep feature y i ∈ R C×H×W. Thus, the corresponding feature x i from the 2D LightNet has to be based on the middle frame of the snippet, i.e. frame f j+(m/2). Figure 5: The ratio of selected timesteps for the action categories of Breakfast. When VideoEpitoma is trained end-to-end, the Timestep Selector learns a better selection to the benefit of the classifier. Notice that the selection ratio changes from the stand-alone selector (red) to end-to-end training with the HeavyNets: ResNet2D (green), I3D (yellow) and ShuffleNet3D (blue). The findings of this experiment are as follows. Figure 5 shows the ratio of the timesteps selected by the selector for the videos of each action category of Breakfast. The ratios of the stand-alone selector (red) change when it is trained end-to-end with different HeavyNets: ResNet2D (green), I3D (yellow), and ShuffleNet3D (blue). Also, we observe that the choices of the selector when the HeavyNet is a 3D CNN tend to agree, regardless of which 3D CNN is used.
Between yellow and blue, we see agreement for 8 of 10 actions. However, the choices tend to vary between 2D and 3D HeavyNets. Between green and yellow, there is agreement for 3 out of 10 actions. From this experiment, we conclude that the gating module, depending on LightNet features, learns to select better timesteps to the benefit of the HeavyNet classifier. Gating irrelevant visual evidence is of great importance in recognizing long-range actions. For example, when discriminating the two action categories "Making Pancake" and "Preparing Coffee", we want to make a gating decision for the visual evidence of "Knife" and "Pan". It is better to discard "Knife" as it is irrelevant to both actions; this is called frame gating. However, the visual evidence of "Pan" is relevant only to "Making Pancake". Thus, it is optimal to consider it only if the action is "Making Pancake" and to discard it otherwise; this is called context gating. In the Timestep Selector, see figure 2 bottom, we use a temporal modeling layer before the gating module. It enables the correlation between a timestep feature and the video context, i.e. the other timestep features. As a result, the gating mechanism becomes conditioned on both the timestep and the video context. To verify this assumption, we conduct an ablation study. We train a variant of the Timestep Selector without the temporal modeling layer, which makes the gating mechanism conditioned only on the timestep feature. In the Timestep Selector, the gating mechanism is conditioned on both the timestep-level feature and the video-level context, which is a better conditional gating. If the gating is only frame-conditioned, the ratios of the selected timesteps for the action categories have small variance, which means the gating is less dependent on the context, i.e. the action category. On the contrary, we notice a large variance for the frame- and context-conditioned gating. The gating becomes more dependent on the action category when selecting the timesteps. We observe a drop in performance when using this variant of the Timestep Selector. The reason is that when the gating is conditioned only on the timestep feature, it acts as a saliency selector. That is to say, the gating discards only the frames not related to any of the action categories of the dataset. Figure 6, left, shows the ratio of selected timesteps for each action category of Breakfast. The frame-conditioned gating (blue) tends to select similar ratios regardless of the category. In contrast, we see more diverse ratios for the timestep- and context-conditioned gating. Figure 6, right, shows the ratio variances for the two gating mechanisms. The much higher variance for context gating means that it is more dependent on the action category than the frame gating. We conclude that the cost of selecting timesteps using the LightNet is marginal compared to that of the HeavyNet and classifier. When it comes to the recognition of long-range actions, the golden rule is: the more timesteps, the better the accuracy, and the heavier the computation. But given the huge redundancy of the visual evidence in these timesteps, there is a tradeoff between accuracy and computation. In this experiment, we explore what the effect of this tradeoff is on VideoEpitoma, and we compare against off-the-shelf CNNs. Figure 7 shows this tradeoff for three different CNNs: I3D, ResNet2D and ShuffleNet3D, while Table 1 details the exact computational budget of VideoEpitoma vs. the competing CNNs. The result of this experiment is twofold.
First, when it comes to classifying minutes-long actions, classifying a handful of carefully selected timesteps, using VideoEpitoma, is a far more rewarding solution than efficiently processing all of them, using for example ShuffleNet3D. Second, the cost of selecting these timesteps can be significantly reduced by using a lightweight 2D CNN such as MobileNet. [Displaced caption fragment: "... (RNet2D), ii. ShuffleNet3D (SNet3D) and iii. I3D. The computational cost of LightNet and the gating module is marginal compared to that of the HeavyNet. In addition, our selector retains the performance of the HeavyNet while using half of the timesteps and almost half of the computational cost."] Our final experiment tests how VideoEpitoma fares against off-the-shelf CNNs for recognizing the multi-label action videos of Charades. Charades differs from Breakfast in two ways. i. Videos of Charades are mid-range, with an average length of 0.5 minutes, compared to 2 minutes for Breakfast. ii. Charades is a multi-label classification task, with 7 labels per video and 157 labels in total, whereas Breakfast is single-label classification, with 10 labels in total. Due to these two differences, it is harder to select unrelated timesteps from the videos of Charades than from Breakfast: most of the timesteps are already relevant to recognizing the mid-range videos of Charades. Still, VideoEpitoma outperforms the off-the-shelf ResNet2D at different time scales, see figure 8. In this paper, we proposed VideoEpitoma, a neural model for efficient recognition of long-range actions in videos. We stated the fundamental differences between long-range actions and their short-range counterparts (a; b), and we highlighted how these differences influenced our way of finding a solution for efficient recognition of such videos. The outcome of this paper is VideoEpitoma, a neural model with the ability to retain the performance of off-the-shelf CNN classifiers at a fraction of the computational budget. This paper concludes the following. Rather than building an efficient CNN video classifier, we opted for an efficient selection of the most salient parts of the video, followed by an effective classification of only these salient parts. For a successful selection, we proposed a novel gating module, able to select timesteps conditioned on their importance to the video. We showed experimentally how this selection benefits off-the-shelf CNN classifiers. Furthermore, we showed how VideoEpitoma, i.e. both the selector and the classifier, improves even further when trained end-to-end. Finally, we evaluated VideoEpitoma on two benchmarks for long-range actions. We compared against related methods to highlight the efficiency of VideoEpitoma in saving computation, and its effectiveness in recognizing long-range actions.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Skx1dhNYPS
Efficient video classification using frame-based conditional gating module for selecting most-dominant frames, followed by temporal modeling and classifier.
Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings). To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing. As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters. Our method (Differentiable Perturb-and-Parse) relies on differentiable dynamic programming over stochastically perturbed edge scores. We demonstrate effectiveness of our approach with experiments on English, French and Swedish. A dependency tree is a lightweight syntactic structure exposing (possibly labeled) bi-lexical relations between words BID77 BID24, see Figure 1. This representation has been widely studied by the NLP community leading to very efficient state-of-the-art parsers BID30 BID12 BID43, motivated by the fact that dependency trees are useful in downstream tasks such as semantic parsing BID66, machine translation BID11 BID4, information extraction BID9 BID42, question answering BID8 and even as a filtering method for constituency parsing BID34, among others. Unfortunately, syntactic annotation is a tedious and expensive task, requiring highly-skilled human annotators. Consequently, even though syntactic annotation is now available for many languages, the datasets are often small. For example, 31 languages in the Universal Dependency Treebank, 1 the largest dependency annotation resource, have fewer than 5,000 sentences, including such major languages as Vietnamese and Telugu. This makes the idea of using unlabeled texts as an additional source of supervision especially attractive. In previous work, before the rise of deep learning, the semi-supervised parsing setting has been mainly tackled with two-step algorithms. On the one hand, feature extraction methods first learn an intermediate representation using an unlabeled dataset which is then used as input to train a supervised parser BID35 BID83 BID7 BID73. On the other hand, the self-training and co-training methods start by learning a supervised parser that is then used to label extra data. Then, the parser is retrained with this additional annotation BID68 BID25 BID50. Nowadays, unsupervised feature extraction is achieved in neural parsers by the means of word embeddings BID55 BID65. The natural question to ask is whether one can exploit unlabeled data in neural parsers beyond only inducing generalizable word representations. Figure 1: Dependency tree example: each arc represents a labeled relation between the head word (the source of the arc) and the modifier word (the destination of the arc). The first token is a fake root word. Our method can be regarded as semi-supervised Variational Auto-Encoder . Specifically, we introduce a probabilistic model (Section 3) parametrized with a neural network (Section 4). The model assumes that a sentence is generated conditioned on a latent dependency tree. Dependency parsing corresponds to approximating the posterior distribution over the latent trees within this model, achieved by the encoder component of VAE, see Figure 2a. The parameters of the generative model and the parser (i.e. the encoder) are estimated by maximizing the likelihood of unlabeled sentences. 
In order to ensure that the latent representation is consistent with treebank annotation, we combine the above objective with maximizing the likelihood of gold parse trees in the labeled data. Training a VAE via backpropagation requires marginalization over the latent variables, which is intractable for dependency trees. In this case, previous work proposed approximate training methods, mainly differentiable Monte-Carlo estimation BID27 BID67 and score function estimation, e.g. REINFORCE BID80. However, REINFORCE is known to suffer from high variance BID56. Therefore, we propose an approximate differentiable Monte-Carlo approach that we call Differentiable Perturb-and-Parse (Section 5). The key idea is that we can obtain a differentiable relaxation of an approximate sample by perturbing weights of candidate dependencies and performing structured argmax inference with differentiable dynamic programming, relying on the perturbed scores. In this way we bring together ideas of perturb-and-map inference BID62 BID45 and continuous relaxation for dynamic programming BID53. Our model differs from previous works on latent structured models which compute marginal probabilities of individual edges BID26; BID41. Instead, we sample a single tree from the distribution that is represented with a soft selection of arcs. Therefore, we preserve higher-order statistics, which can then inform the decoder. Computing marginals would correspond to making strong independence assumptions. We evaluate our semi-supervised parser on English, French and Swedish and show improvement over a comparable supervised baseline (Section 6).Our main contributions can be summarized as follows: we introduce a variational autoencoder for semi-supervised dependency parsing; we propose the Differentiable Perturb-and-Parse method for its estimation; we demonstrate the effectiveness of the approach on three different languages. In short, we introduce a novel generative model for learning latent syntactic structures. A dependency is a bi-lexical relation between a head word (the source) and a modifier word (the target), see Figure 1. The set of dependencies of a sentence defines a tree-shaped structure. 2 In the parsing problem, we aim to compute the dependency tree of a given sentence. Formally, we define a sentence as a sequence of tokens (words) from vocabulary W. We assume a one-to-one mapping between W and integers 1... |W|. Therefore, we write a sentence of length n as a vector of integers s of size n + 1 with 1 ≤ s i ≤ |W| and where s 0 is a special root symbol. A dependency tree of sentence s is a matrix of booleans T ∈ {0, 1} (n+1)×(n+1) with T h,m = 1 meaning that word s h is the head of word s m in the dependency tree. DISPLAYFORM0 Figure 2: (a) Illustration of our probabilistic model with random variables s, T and z for sentences, dependency trees and sentence embeddings, respectively. The gray area delimits the latent space. Solid arcs denote the generative process, dashed arcs denotes posterior distributions over the latent variables. (b) Stochastic computation graph. (c) Illustration of the decoder when computing the probability distribution of s 4, the word at position 4. Dashed arcs at the bottom represent syntactic dependencies between word at position 4 and previous positions. At each step, the LSTM takes as input an embedding of the previous word (s 0 is a special start-of-sentence symbol). 
Then, the GCN combines different outputs of the LSTM by transforming them with respect to their syntactic relation with the current position. Finally, the probability of s 4 is computed via the softmax function. More specifically, a dependency tree T is the adjacency matrix of a directed graph with n + 1 vertices v 0... v n. A matrix T is a valid dependency tree if and only if this graph is a v 0 -rooted spanning arborescence, 3 i.e. the graph is connected, each vertex has at most one incoming arc and the only vertex without incoming arc is v 0. A dependency tree is projective if and only if, for each arc v h → v m, if h < m (resp. m < h) then there exists a path with arcs T from v h to each vertex v k such that h < k < m (resp. m < k < h). From a linguistic point of view, projective dependency trees combine contiguous phrases (sequence of words) only. Intuitively, this means that we can draw the dependency tree above the sentence without crossing arcs. Given a sentence s, an arc-factored dependency parser computes the dependency tree T which maximizes a weighting function f (T ; W) = h,m T h,m W h,m, where W is a matrix of dependency (arc) weights. This problem can be solved with a O(n 2) time complexity BID75 BID51. If we restrict T to be a projective dependency tree, then the optimal solution can be computed with a O(n 3) time complexity using dynamic programming BID16. Restricting the search space to projective trees is appealing for treebanks exhibiting this property (either exactly or approximately): they enforce a structural constraint that can be beneficial for accuracy, especially in a low-resource scenario. Moreover, using a more restricted search space of potential trees may be especially beneficial in a semi-supervised scenario: with a more restricted space a model is less likely to diverge from a treebank grammar and capture non-syntactic phenomena. Finally, Eisner's algorithm BID16 can be described as a deduction system BID64, a framework that unifies many parsing algorithms. As such, our methodology could be applied to other grammar formalisms. For all these reasons, in this paper, we focus on projective dependency trees only. We now turn to the learning problem, i.e. estimation of the matrix W. We assume that we have access to a set of i.i.d. labeled sentences L = {s, T, . . .} and a set of i.i.d. unlabeled sentences U = {s, . . .}. In order to incorporate unlabeled data in the learning process, we introduce a generative model where the dependency tree is latent (Subsection 3.1). As such, we can maximize the likelihood of observed sentences even if the ground-truth dependency tree is unknown. We learn the parameters of this model using a variational Bayes approximation (Subsection 3.2) augmented with a discriminative objective on labeled data (Subsection 3.3). Under our probabilistic model, a sentence s is generated from a continuous sentence embedding z and with respect to a syntactic structure T. We formally define the generative process of a sentence of length n as: DISPLAYFORM0 This Bayesian network is shown in Figure 2a. In order to simplify notation, we omit conditioning on n in the following. T and z are latent variables and p(s|T, z) is the conditional likelihood of observations. We assume that the priors p(T) and p(z) are the uniform distribution over projective trees and the multivariate standard normal distribution, respectively. 
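Before turning to estimation, here is a small sketch of the arc-factored scoring function f(T; W) = Σ_{h,m} T_{h,m} W_{h,m} introduced earlier; the example tree and sizes are illustrative assumptions, not data from the paper.

```python
import numpy as np

def tree_score(T, W):
    """T: (n+1, n+1) 0/1 adjacency matrix with T[h, m] = 1 iff word s_h heads s_m.
    W: (n+1, n+1) matrix of arc weights produced by the scoring network."""
    return float((T * W).sum())

n = 3
W = np.random.randn(n + 1, n + 1)
T = np.zeros((n + 1, n + 1))
T[0, 2] = T[2, 1] = T[2, 3] = 1.0      # a projective tree rooted at the fake root s_0
assert (T.sum(axis=0)[1:] == 1).all()  # every non-root word has exactly one head
print(tree_score(T, W))
```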
The true distribution underlying the observed data is unknown, so we have to learn a model p θ (s|T, z) parametrized by θ that best fits the given samples: DISPLAYFORM1 Then, the posterior distribution of latent variables p θ (T, z|s) models the probability of underlying representations (including dependency trees) with respect to a sentence. This conditional distribution can be written as: DISPLAYFORM2 In the next subsection, we explain how these two quantities can be estimated from data. Computations in Equation 1 and Equation 2 require marginalization over the latent variables: DISPLAYFORM0 which is intractable in general. We rely on the Variational Auto-Encoder (VAE) framework to tackle this challenge BID27 BID67. We introduce a variational distribution q φ (T, z|s) which is intended to be similar to p θ (T, z|s). More formally, we want KL [q φ (T, z|s) p θ (T, z|s)] to be as small as possible, where KL is the Kulback-Leibler (KL) divergence. Then, the following equality holds: DISPLAYFORM1 where log p θ (s) is called the evidence. The KL divergence is always positive, therefore by removing the last term we have: DISPLAYFORM2 where the right-hand side is called the Evidence Lower Bound (ELBO). By maximizing the ELBO term, the divergence KL [q φ (T, z|s) p θ (T, z|s)] is implicitly minimized. Therefore, we define a surrogate objective, replacing the objective in Equation 1: DISPLAYFORM3 The ELBO in Equation 4 has two components. First, the KL divergence with the prior, which usually has a closed form solution. For the distribution over dependency trees, it can be computed with the semiring algorithm of BID38. Second, the non-trivial term DISPLAYFORM4 During training, Monte-Carlo method provides a tractable and unbiased estimation of the expectation. Note that a single sample from q φ (T, z|s) can be understood as encoding the observation into the latent space, whereas regenerating a sentence from the latent space can be understood as decoding. However, training a VAE requires the sampling process to be differentiable. In the case of the sentence embedding, we follow the usual setting and define q φ (z|s) as a diagonal Gaussian: backpropagation through the the sampling process z ∼ q φ (z|s) can be achieved thanks to the reparametrization trick BID27 BID67. Unfortunately, this approach cannot be applied to dependency tree sampling T ∼ q φ (T |s). We tackle this issue in Section 5. VAEs are a convenient approach for semi-supervised learning and have been successfully applied in NLP BID33 BID81 BID85 BID82. In this scenario, we are given the dependency structure of a subset of the observations, i.e. T is an observed variable. Then, the supervised ELBO term is defined as: DISPLAYFORM0 Note that our end goal is to estimate the posterior ditribution over dependency trees q φ (T |s), i.e. the dependency parser, which does not appear in the supervised ELBO. We want to explicitly use the labeled data in order to learn the parameters of this parser. This can be achieved by adding a discriminative training term to the overall loss. The loss function for training a semi-supervised VAE is: DISPLAYFORM0 where the first term is the standard loss for supervised learning of log-linear models BID23 BID36 ). In this section, we describe the neural parametrization of the encoder distribution q φ (Subsection 4.1) and the decoder distribution p θ (Subsection 4.2). A visual representation is given in Figure 2b. We factorize the encoder as q φ (T, z|s) = q φ (T |s)q φ (z|s). 
The categorical distribution over dependency trees is parametrized by a log-linear model BID36 where the weight of an arc is given by the neural network of BID30.The sentence embedding model is specified as a diagonal Gaussian parametrized by a LSTM, similarly to the seq2seq framework BID72 BID5. That is: DISPLAYFORM0 where m and v are mean and variance vectors, respectively. We use an autoregressive decoder that combines an LSTM and a Graph Convolutional Network (; . The LSTM keeps the history of generated words, while the GCN incorporate information about syntactic dependencies. The hidden state of the LSTM is initialized with latent variable z (the sentence embedding). Then, at each step 1 ≤ i ≤ n, an embedding associated with word at position i − 1 is fed as input. A special start-of-sentence symbol embedding is used at the first position. Let o i be the hidden state of the LSTM at position i. The standard seq2seq architecture uses this vector to predict the word at position i. Instead, we transform it in order to take into account the syntactic structure described by the latent variable T. Due to the autoregressive nature of the decoder, we can only take into account dependencies T h,m such that h < i and m < i. Before being fed to the GCN, the output of the LSTM is fed to distinct multi-layer perceptrons 6 that characterize syntactic relations: if s h is the head of s i, o h is transformed with MLP, if s m is a modifier of s i, o m is transformed with MLP, and lastly o i is transformed with MLP. Formally, the GCN is defined as follows: DISPLAYFORM0 The output vector g i is then used to estimate the probability of word s i. The neural architecture of the decoder is illustrated on Figure 2c. Encoder-decoder architectures are usually straightforward to optimize with the back-propagation algorithm BID40 BID37 ) using any autodiff library. Unfortunately, our VAE contains stochastic nodes that can not be differentiated efficiently as marginalization is too expensive or intractable (see Figure 2b for the list of stochastic nodes in our computation graph). BID27 and BID67 proposed to rely on a Monte-Carlo estimation of the gradient. This approximation is differentiable because the sampling process is moved out of the backpropagation path. In this section, we introduce our Differentiable Perturb-and-Parse operator to cope with the distribution over dependency trees. Firstly, in Subsection 5.1, we propose an approximate sampling process by computing the best parse tree with respect to independently perturbed arc weights. Secondly, we propose a differentiable surrogate of the parsing algorithm in Subsection 5.2. Sampling from a categorical distributions can be achieved through the Gumbel-Max trick BID20 BID44. 8 Unfortunately, this reparametrization is difficult to apply when the discrete variable can take an exponential number of values as in Markov Random Fields (MRF). BID62 proposed an approximate sampling process: each component is perturbed independently. Then, standard MAP inference algorithm computes the sample. This technique is called perturb-and-map. Arc-factored dependency parsing can be expressed as a MRF where variable nodes represent arcs, singleton factors weight arcs and a fully connected factor forces the variable assignation to describe a valid dependency tree BID71. 
Therefore, we can apply the perturb-and-map method to dependency tree sampling: DISPLAYFORM0 where G is the Gumbel distribution, that is sampling matrix P is equivalent to setting P i,j = − log(− log U i,j)) where U i,j ∼ Uniform.Algorithm 1 This function search the best split point for constructing an element given its span. b is a one-hot vector such that b i−k = 1 iff k is the best split position. DISPLAYFORM1 s ← null-initialized vec. of size j − i 3:for i ≤ k < j do 4: DISPLAYFORM2 b ← ONE-HOT-ARGMAX(s) 6: DISPLAYFORM3 has contributed the optimal objective, this function sets T i,j to 1. Then, it propagates the contribution information to its antecedents. 1: function BACKTRACK-URIGHT(i, j, T) 2: DISPLAYFORM4 for i ≤ k < j do 5: DISPLAYFORM5 6: DISPLAYFORM6 The (approximate) Monte-Carlo estimation of the expectation in Equation 3 is then defined as: DISPLAYFORM7 where denotes a Monte-Carlo estimation of the gradient, P ∼ G is sampled in the last line and EISNER is an algorithm that compute the projective dependency tree with maximum (perturbed) weight BID16. Therefore, the sampling process is outside of the backpropagation path. Unfortunately, the EISNER algorithm is built using ONE-HOT-ARGMAX operations that have illdefined partial derivatives. We propose a differentiable surrogate in the next section. We now propose a continuous relaxation of the projective dependency parsing algorithm. We start with a brief outline of the algorithm using the parsing-as-deduction formalism, restricting this presentation to the minimum needed to describe our continuous relaxation. We refer the reader to BID16 for an in-depth presentation. The parsing-as-deduction formalism provides an unified presentation of many parsing algorithms BID64 BID70. In this framework, a parsing algorithm is defined as a deductive system, i.e. as a set of axioms, a goal item and a set of deduction rules. Each deduced item represents a sub-analysis of the input. Regarding implementation, the common way is to rely on dynamic programming: items are deduced in a bottom-up fashion, from smaller sub-analyses to large ones. To this end, intermediate are stored in a global chart. For projective dependency parsing, the algorithm builds a chart whose items are of the form DISPLAYFORM0 represents a sub-analysis where every word s k, i ≤ k ≤ j is a descendant of s i and where s j cannot have any other modifier (resp. can have). The two other types are defined similarly for descendants of word s j. In the first stage of the algorithm, the maximum weight of items are computed (deduced) in a bottom-up fashion. For example, the weight WEIGHT[i j] is defined as the maximum of WEIGHT DISPLAYFORM1 assumes a dependency with head s i and modifier s j. In the second stage, the algorithm retrieves arcs whose scores have contributed to the optimal objective. Part of the pseudo-code for the first and second stages are given in Algorithm 1 and Algorithm 2, respectively. Note that, usually, the second stage is implemented with a linear time complexity but we cannot rely on this optimization for our continuous relaxation. This algorithm can be thought of as the construction of a computational graph where WEIGHT, BACKPTR and CONTRIB are sets of nodes (variables). This graph includes ONE-HOT-ARGMAX operations that are not differentiable (see line 5 in Algorithm 1). 
This operation takes as input a vector of weights v of size k and returns a one-hot vector o of the same size with o_i = 1 if and only if v_i is the element of maximum value, i.e. o_i = 1 if v_i = max_{1≤j≤k} v_j and o_i = 0 otherwise. We follow a recent trend BID22 BID45 BID18 in differentiable approximation of the ONE-HOT-ARGMAX function and replace it with the PEAKED-SOFTMAX operator: [PEAKED-SOFTMAX(v)]_i = exp(v_i/τ) / Σ_{1≤j≤k} exp(v_j/τ), where τ > 0 is a temperature hyperparameter controlling the smoothness of the relaxation: when τ → 0 (i.e. 1/τ → ∞) the relaxation becomes equivalent to ONE-HOT-ARGMAX (a minimal sketch of this operator is given at the end of this passage). With this update, the parsing algorithm is fully differentiable. Note, however, that outputs are not valid dependency trees anymore. Indeed, an output matrix T then contains continuous values that represent a soft selection of arcs. BID53 introduced an alternative but similar approach for tagging with the Viterbi algorithm. We report pseudo-codes for the forward and backward passes of our continuous relaxation of EISNER's algorithm in Appendix F. The fact that T is a soft selection of arcs, and not a combinatorial structure, does not impact the decoder. Indeed, a GCN can be run over weighted graphs: the message passed between nodes is simply multiplied by the continuous weights. This is one of the motivations for using GCNs rather than Recursive LSTMs BID74 in the decoder. On the one hand, running a GCN with a matrix that represents a soft selection of arcs (i.e. with real values) has the same computational cost as using a standard adjacency matrix (i.e. with binary elements) if we use matrix multiplication on GPU. On the other hand, a recursive network over a soft selection of arcs requires building an O(n^2) set of RNN cells that follow the dynamic programming chart, where the possible inputs of a cell are multiplied by their corresponding weight in T, which is expensive and not GPU-friendly. We ran a series of experiments on 3 different languages to test our method for semi-supervised dependency parsing: English, French and Swedish. Details about corpora can be found in Appendix C. The size of each dataset is reported in TAB0. Note that the setting is especially challenging for Swedish: the amount of unlabeled data we use here barely exceeds that of labeled data. The hyperparameters of our network are described in Appendix D. In order to ensure that we do not bias our model for the benefit of the semi-supervised scenario, we use the same parameters as BID30 for the parser. Also, we did not perform any language-specific parameter selections. This makes us hope that our method can be applied to other languages with little extra effort. We stress that no part-of-speech tags are used as input in any part of our network. For English, the supervised parser took 1.5 hours to train on an NVIDIA Titan X GPU while the semi-supervised parser without sentence embedding, which sees 2 times more instances per epoch, took 3.5 hours to train. Table 2: (a) Parsing results: unlabeled attachment score / labeled attachment score. We also report results with the parser of BID30 which uses a different discriminative loss for supervised training. (b) Recall / Precision evaluation with respect to dependency lengths for the supervised parser and the best semi-supervised parser on the English test set. Bold numbers highlight the main differences. (c) Recall / Precision evaluation with respect to dependency labels for multi-word expressions (mwe), adverbial modifiers (advmod) and appositional modifiers (appos). For each dataset, we train under the supervised and the semi-supervised scenario.
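Returning to the PEAKED-SOFTMAX relaxation described above, a minimal sketch of the operator (the experimental comparison continues below); the temperature values are illustrative.

```python
import torch

def peaked_softmax(v, tau=0.1):
    """Differentiable surrogate for one-hot-argmax: a temperature-controlled
    softmax that tends to a hard one-hot vector as tau -> 0."""
    return torch.softmax(v / tau, dim=-1)

v = torch.tensor([0.1, 2.0, 0.5])
print(peaked_softmax(v, tau=1.0))    # soft selection of the maximum
print(peaked_softmax(v, tau=0.05))   # nearly one-hot
```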
Moreover, in the semi-supervised setting, we experiment with and without latent sentence embedding z. We compare only to the model of BID30. Recently, even more accurate models have been proposed (e.g., BID12 . In principle, the ideas introduced in recent work are mostly orthogonal to our proposal as we can modify our VAE model accordingly. For example, we experimented with using bi-affine attention of BID12, though it has not turned out beneficial in our low-resource setting. Comparing to multiple previous parsers would have also required tuning each of them on our dataset, which is infeasible. Therefore, we only report with a comparable baseline, i.e. trained with a structured hinge loss BID30 BID76 . We did not perform further tuning in order to ensure that our analysis is not skewed toward one setting. Parsing are summarized in Table 2a .We observe a score increase in all three languages. Moreover, we observe that VAE performs slightly better without latent sentence embedding. We assume this is due to the fact that dependencies are more useful when no information leaks in the decoder through z. Interestingly, we observe an improvement, albeit smaller, even on Swedish, where we used a very limited amount of unlabeled data. We note that training with structured hinge loss gives stronger than our supervised baseline. In order to maintain the probabilistic interpretation of our model, we did not include a similar term in our model. We conducted qualitative analyses for English. 14 We report scores with respect to dependency lengths in Table 2b . We observe that the semi-supervised parser tends to correct two kind of errors. Firstly, it makes fewer mistakes on root attachments, i.e. the recall is similar between the two parsers but the precision of the semi-supervised one is higher. We hypothesis that root attachment errors come at a high price in the decoder because there is only a small fraction of the vocabulary that is observed with this syntactic function. Secondly, the semi-supervised parser recovers more long distance relations, i.e. the recall for dependencies with a distance superior or equal to 7 is higher. Intuitively, we assume these dependencies are more useful in the decoder: for short distance dependencies, the LSTM efficiently captures the context of the word to predict, whereas this infor-mation could be vanishing for long distances, meaning the GCN has more impact on the prediction. We also checked how the scores differ across dependency labels. We report main differences in Tables 2c. The largest improvements are obtained for multi-word expressions: this is particularly interesting because they are known to be challenging in NLP. Dependency parsing in the low-ressource scenario has been of interest in the NLP community due to the expensive nature of annotation. On the one hand, transfer approaches learn a delexicalized parser for a resource-rich language which is then used to parse a low-resource one BID2 BID52 . On the other hand, the grammar induction approach learns a dependency parser in an unsupervised manner. BID32 introduced the first generative model that outperforms the right-branching heuristic in English. Close to our work, BID6 use an auto-encoder setting where the decoder tries to rebuild the source sentence. However, their decoder is unstructured (e.g. it is not auto-regressive).Variational Auto-Encoders BID27 BID67 have been investigated in the semi-supervised settings for NLP. 
BID33 learn a semantic parser where the latent variable is a discrete sequence of symbols. BID85 successfully applied the variational method to semi-supervised morphological re-inflection where discrete latent variables represent linguistic features (e.g. tense, part-of-speech tag). BID82 proposed a semi-supervised semantic parser. Similarly to our model, they rely on a structured latent variable. However, all of these systems use either categorical random variables or the REINFORCE score estimator. To the best of our knowledge, no previous work used continuous relaxation of a dynamic programming latent variable in the VAE setting. The main challenge is backpropagation through discrete random variables. BID45 and BID22 first introduced the Gumbel-Softmax operator for the categorical distribution. There are two issues regarding more complex discrete distributions. Firstly, one have to build a reparametrization of the the sampling process. BID62 showed that low-order perturbations provide samples of good qualities for graphical models. Secondly, one have to build a good differentiable surrogate to the structured arg max operator. Early work replaced the structured arg max with structured attention BID26. However, computing the marginals over the parse forest is sensitive to numerical stability outside specific cases like non-projective dependency parsing BID41 BID78. BID53 proposed a stable algorithm based on dynamic program smoothing. Our approach is highly related but we describe a continuous relaxation using the parsing-as-deduction formalism. BID63 propose to replace the true gradient with a proxy that tries to satisfy constraints on a arg max operator via a projection. However, their approach is computationally expensive, so they remove the tree constraint on dependencies during backpropagation. A parallel line of work focuses on sparse structures that are differentiable BID49 BID59. We presented a novel generative learning approach for semi-supervised dependency parsing. We model the dependency structure of a sentence as a latent variable and build a VAE. We hope to motivate investigation of latent syntactic structures via differentiable dynamic programming in neural networks. Future work includes research for an informative prior for the dependency tree distribution, for example by introducing linguistic knowledge BID57 BID61 or with an adversarial training criterion BID46. This work could also be extended to the unsupervised scenario.where z is the sample. As such, e ∼ N is an input of the neural network for which we do not need to compute partial derivatives. This technique is called the reparametrization trick BID27 BID67. Sampling from a categorical distributions can be achieved through the Gumbel-Max trick BID20 BID44. Randomly generated Gumbel noise is added to the log-probability of every element of the sample space. Then, the sample is simply the element with maximum perturbed log-probability. Let d ∈ k be a random variable taking values in the corner of the unit-simplex of dimension k with probability: DISPLAYFORM0 where w is a vector of weights. Sampling d ∼ p(d) can be re-expressed as follows: DISPLAYFORM1 where G is the Gumbel distribution. Sampling g ∼ G is equivalent to setting g i = − log(− log u i)) where u i ∼ Uniform. If w is computed by a neural network, the sampling process is outside the backpropagation path. 
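Both tricks recalled in this appendix fit in a few lines of numpy; the sketch below uses our own function names and is only meant to make the two sampling paths explicit.

import numpy as np
rng = np.random.default_rng(0)

def sample_gaussian(mu, sigma):
    # Reparametrization trick: z = mu + sigma * e with e ~ N(0, I), so mu and
    # sigma stay on the backpropagation path while the noise e does not.
    e = rng.standard_normal(mu.shape)
    return mu + sigma * e

def sample_categorical(log_w):
    # Gumbel-Max trick: perturb (unnormalised) log-weights with Gumbel noise
    # and take the arg max; the result is an exact categorical sample.
    g = -np.log(-np.log(rng.uniform(size=log_w.shape)))
    return np.argmax(log_w + g)

log_w = np.log(np.array([0.1, 0.2, 0.7]))
counts = np.bincount([sample_categorical(log_w) for _ in range(10000)], minlength=3)
print(counts / counts.sum())               # roughly [0.1, 0.2, 0.7]

In the structured case of the main text, the same Gumbel perturbation is applied arc-wise and the arg max is replaced by a dynamic program, which is why a differentiable surrogate of that program is needed.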
English We use the Stanford Dependency conversion BID10 of the Penn Treebank BID48 with the usual section split: 02-21 for training, 22 for development and 23 for testing. In order to simulate our framework under a low-resource setting, the annotation is kept for 10% of the training set only: a labeled sentence is the sentence which has an index (in the training set) modulo 10 equal to zero. French We use a similar setting with the French Treebank version distributed for the SPMRL 2013 shared task and the provided train/dev/test split (Abeillé et al., 2000; BID69 .Swedish We use the Talbanken dataset which contains two written text parts: the professional prose part (P) and the high school students' essays part (G). We drop the annotation of (G) in order to use this section as unlabeled data. We split the (P) section in labeled train/dev/test using a pseudo-randomized scheme. We follow the splitting scheme of but fix section 9 as development instead of k-fold cross-validation. Sentence i is allocated to section i mod 10. Then, section 1-8 are used for training, section 9 for dev and section 0 for test. Encoder: word embeddings We concatenate trainable word embeddings of size 100 with external word embeddings. 15 We use the word-dropout settings of BID30. For English, external embeddings are pre-trained with the structured skip n-gram objective BID39. 16 For French and Swedish, we use the Polyglot embeddings BID3. 17 We stress out that no part-of-speech tag is used as input in any part of our network. Encoder: dependency parser The dependency parser is built upon a two-stack BiLSTM with a hidden layer size of 125 (i.e. the output at each position is of size 250). Each dependency is then weighted using a single-layer perceptron with a tanh activation function. Arc label prediction rely on a similar setting, we refer to the reader to BID30 for more information about the parser's architecture. Encoder: sentence embedding The sentence is encoded into a fixed size vector with a simple leftto-right LSTM with an hidden size of 100. The hidden layer at the last position of the sentence is then fed to two distinct single-layer perceptrons, with an output size of 100 followed by a piecewise tanh activation function, that computes means and standard deviations of the diagonal Gaussian distribution. Decoder The decoder use fixed pre-trained embeddings only. The recurrent layer of the decoder is a LSTM with an hidden layer size of 100. MLP, MLP and MLP are all single-layer perceptrons with an output size of 100 and without activation function. Training We encourage the VAE to rely on latent structures close to the targeted ones by bootstrapping the training procedure with labeled data only. In the first two epochs, we train the network with the discriminative loss only. Then, for the next two epochs, we add the supervised ELBO term (Equation 5). Finally, after the 6th epoch, we also add the unsupervised ELBO term (Equation 3). We train our network using stochastic gradient descent for 30 epochs using Adadelta with default parameters as provided by the Dynet library. In the semisupervised scenario, we alternate between labeled and unlabeled instances. The temperature of the PEAKED-SOFTMAX operator is fixed to τ = 1. Dynamic programs for parsing have been studied as abstract algorithms that can be instantiated with different semirings BID17. For example, computing the weight of the best parse relies on the R, max, + semiring. 
This semiring can be augmented with set-valued operations to retrieve the best derivation. However, a straightforward implementation would have a O(n 5) space complexity: for each item in the chart, we also need to store the set of arcs. Under this formalism, the backpointer trick is a method to implicitly constructs these sets and maintain the optimal O(n 3) complexity. Our continuous relaxation replaces the max operator with a smooth surrogate and the set values with a soft-selection of sets. Unfortunately, R, PEAKED-SOFTMAX is not a commutative monoid, therefore the semiring analogy is not transposable. We describe how we can embed a continuous relaxation of projective dependency parsing as a node in a neural network. During the forward pass, we are given arc weights W and we compute the relaxed projective dependency tree T that maximize the arc-factored weight h,m T h,m ×W h,m. Each output variable T h,m ∈ is a soft selection of dependency with head-word s h and modifier s m. During back-propagation, we are given partial derivatives of the loss with respect to each arc and we compute the ones with respect to arc weights: DISPLAYFORM0 Note that the Jacobian matrix has O(n 4) values but we do need to explicitly compute it. The space and time complexity of the forward and backward passes are both cubic, similar to Eisner's algorithm. The forward pass is a two step algorithm:1. First, we compute the cumulative weight of each item and store soft backpointers to keep track of contribution of antecedents. This step is commonly called to inside algorithm. 2. Then, we compute the contribution of each arc thanks to the backpointers. This step is somewhat similar to the arg max reconstruction algorithm. The outline of the algorithm is given in Algorithm 3.The inside algorithm computes the following variables:• a[i j][k] is the weight of item [i j] if we split its antecedent at k.• b[i j][k] is the soft backpointer to antecedents of item [i j] with split at k.• c[i j] is the cumulative weight of item [i j].and similarly for the other chart values. The algorithm is given in Algorithm 5.The backpointer reconstruction algorithm compute the contribution of each arc. We follow backpointers in reverse order in order to compute the contribution of each itemc[i j]. The algorithm is given in Algorithm 6. During the backward pass, we compute the partial derivatives of variables using the chain rule, i.e. in the reverse order of their creation: we first run backpropagation through the backpointer reconstruction algorithm and then through the inside algorithm (see Algorithm 4). Given the partial derivatives in FIG3, backpropagation through the backpointer reconstruction algorithm is straighforward to compute, see Algorithm 7. Partial derivatives of the inside algorithm's variables are given in Figure 4. ∀i < k ≤ j: DISPLAYFORM0
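To make the mechanics concrete, the following sketch applies the same two-stage scheme (an inside pass with soft backpointers followed by contribution backtracking) to a toy CKY-style recursion with a single item type. It is only an analogue under simplifying assumptions: the real chart uses Eisner's four item types and arc-factored weights, but replacing the hard max by PEAKED-SOFTMAX and redistributing contributions through the stored backpointers work as shown here.

import numpy as np

def peaked_softmax(v, tau=1.0):
    v = v / tau
    e = np.exp(v - v.max())                # subtract the max for numerical stability
    return e / e.sum()

def relaxed_inside(score, tau=1.0):
    # c[i, j] = soft-max over split points k of  c[i, k] + c[k+1, j] + score[i, j]
    n = score.shape[0]
    c = np.zeros((n, n))                   # cumulative item weights
    b = {}                                 # soft backpointers per span
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            a = np.array([c[i, k] + c[k + 1, j] for k in range(i, j)]) + score[i, j]
            b[(i, j)] = peaked_softmax(a, tau)
            c[i, j] = b[(i, j)] @ a        # weighted average instead of a hard max
    return c, b

def backtrack(b, n):
    # Propagate contributions top-down: a span hands its contribution to its
    # antecedents in proportion to the soft backpointers (cf. Algorithm 2).
    contrib = np.zeros((n, n))
    contrib[0, n - 1] = 1.0
    for length in range(n - 1, 0, -1):
        for i in range(n - length):
            j = i + length
            for idx, k in enumerate(range(i, j)):
                w = contrib[i, j] * b[(i, j)][idx]
                contrib[i, k] += w
                contrib[k + 1, j] += w
    return contrib

score = np.random.default_rng(0).normal(size=(4, 4))
c, b = relaxed_inside(score, tau=0.1)      # small tau: close to the hard algorithm
contrib = backtrack(b, 4)

Because every operation above is smooth in the input scores, the whole two-stage procedure can be differentiated with the chain rule, which is what the backward passes (Algorithms 4 and 7, Figure 4) implement for the full chart.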
BJlgNh0qKQ
Differentiable dynamic programming over perturbed input weights with application to semi-supervised VAE
DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks. Deep learning made a huge impact on a wide variety of applications BID8 BID24 BID9 BID15 BID10 BID18 and recent neural network classifiers have become excellent at detecting relevant signals (e.g., the presence of a cat) contained in input data points such as images by filtering out all other, nonrelevant and distracting components also present in the data. This separation of signal and distractors is achieved by passing the input through many layers with millions of parameters and nonlinear activation functions in between, until finally at the output layer, these models yield a highly condensed version of the signal, e.g. a single number indicating the probability of a cat being in the image. While deep neural networks learn efficient and powerful representations, they are often considered a'black-box'. In order to better understand classifier decisions and to gain insight into how these models operate, a variety techniques have been proposed BID20 BID25 BID12 BID1 BID0 BID11 BID26 BID22 BID27 BID23 BID21. These methods for explaining classifier decisions operate under the assumption that it is possible to propagate the condensed output signal back through the classifier to arrive at something that shows how the relevant signal was encoded in the input and thereby explains the classifier decision. Simply put, if the classifier detected a cat, the visualization should point to the cat-relevant aspects of the input image from the perspective of the network. Techniques that are based on this principle include saliency maps from network gradients BID1 BID20, DeConvNet (, DCN), Guided BackProp (, GBP), Figure 1: Illustration of explanation approaches. Function and signal approximators visualize the explanation using the original color channels. The attribution is visualized as a heat map of pixelwise contributions to the output Layer-wise Relevance Propagation (, LRP) and the Deep Taylor Decomposition (, DTD), Integrated Gradients BID23 and SmoothGrad BID21.The merit of explanation methods is often demonstrated by applying them to state-of-the-art deep learning models in the context of high dimensional real world data, such as ImageNet, where the provided explanation is intuitive to humans. Unfortunately, theoretical analysis as well as quantitative empirical evaluations of these methods are lacking. Deep neural networks are essentially a composition of linear transformations connected with nonlinear activation functions. Since approaches, such as DeConvNet, Guided BackProp, and LRP, back-propagate the explanations in a layer-wise fashion, it is crucial that the individual linear layers are handled correctly. In this work we show that these gradient-based methods fail to recover the signal even for a single-layer architecture, i.e. a linear model. 
We argue that therefore they cannot be expected to reliably explain a deep neural network and demonstrate this with quantitative and qualitative experiments. In particular, we provide the following key contributions:• We analyze the performance of existing explanation approaches in the controlled setting of a linear model (Sections 2 and 3).• We categorize explanation methods into three groups -functions, signals and attribution (see Fig. 1) -that require fundamentally different interpretations and are complementary in terms of information about the neural network (Section 3).• We propose two novel explanation methods -PatternNet and PatternAttribution -that alleviate shortcomings of current approaches, as discovered during our analysis, and improve explanations in real-world deep neural networks visually and quantitatively (Sections 4 and 5).This presents a step towards a thorough analysis of explanation methods and suggests qualitatively and measurably improved explanations. These are crucial requirements for reliable explanation techniques, in particular in domains, where explanations are not necessarily intuitive, e.g. in health and the sciences BID16.Notation and scope Scalars are lowercase letters (i), column vectors are bold (u), element-wise multiplication is . The covariance between u and v is cov [u, v], the covariance of u and i is cov [u, i]. The variance of a scalar random variable i is σ 2 i. Estimates of random variables will have a hat (û). We analyze neural networks excluding the final soft-max output layer. To allow for analytical treatment, we only consider networks with linear neurons optionally followed by a rectified linear unit (ReLU), max-pooling or soft-max. We analyze linear neurons and nonlinearities independently such that every neuron has its own weight vector. These restrictions are similar to those in the saliency map BID20, DCN , GBP (Springenberg Figure 2 : For linear models, i.e., a simple neural network, the weight vector does not explain the signal it detects BID5 . The data x = ya s + a d is color-coded w.r.t. the output y = w T x. Only the signal s = ya s contributes to y. The weight vector w does not agree with the signal direction, since its primary objective is canceling the distractor. Therefore, rotations of the basis vector a d of the distractor with constant signal s lead to rotations of the weight vector (right). et al., 2015), LRP BID0 and DTD BID11. Without loss of generality, biases are considered constant neurons to enhance clarity. In this section, we analyze explanation methods for deep neural network, starting with the simplest neural network setting: a purely linear model and data sampled from a linear generative model. This setup allows us to (i) fully control how signal and distractor components are encoded in the input data and (ii) analytically track how the ing explanation relates to the known signal component. This analysis allows us then to highlight shortcomings of current explanation approaches that carry over to deep neural networks. Consider the following toy example (see Fig. 2) where we generate data x as: DISPLAYFORM0 We train a linear regression model to extract y from x. By construction, s is the signal in our data, i.e., the part of x containing information about y. Using the terminology of BID5 the distractor d obfuscates the signal making the detection task more difficult. To optimally extract y, our model has to be able to filter out the distractor d. This is why the weight vector is also called the filter. 
In the example, w = [1, −1] T fulfills this convex task. From this example, we can make several observations: The optimal weight vector w does not align, in general, with the signal direction a s, but tries to filter the contribution of the distractor (see Fig. 2). This is optimally solved when the weight vector is orthogonal to the distractor w T d = 0. Therefore, when the direction of the distractor a d changes, w must follow, as illustrated on the right hand side of the figure. On the other hand, a change in signal direction a s can be compensated for by a change in sign and magnitude of w such that w T a s = 1, but the direction stays constant. The fact that the direction of the weight vector in a linear model is largely determined by the distractor implies that given only the weight vector, we cannot know what part of the input produces the output y. On the contrary, the direction a s must be learned from data. Now assume that we have additive isotropic Gaussian noise. The mean of the noise can easily be compensated for with a bias change. Therefore, we only have to consider the zero-mean case. Since isotropic Gaussian noise does not contain any correlations or structure, the only way to remove it is by averaging over different measurements. It is not possible to cancel it out effectively by using a well-chosen weight vector. However, it is well known that adding Gaussian noise shrinks the weight vector and corresponds to L2 regularization. In the absence of a structured distractor, the smallest weight vector w such that w T a s = 1 is the one in the direction of the signal. Therefore in practice both these effects influence the actual weight vector. As already indicated above, deep neural networks are essentially a composition of linear layers and non-linear activation functions. In the next section, we will show that gradient-based methods, e.g., DeConvNet, Guided BackProp, and LRP, are not able to distinguish signal from distractor in a linear model and therefore back-propagate sub-optimal explanations in deeper networks. This analysis allows us to develop improved layer-wise explanation techniques and to demonstrate quantitative and qualitative better explanations for deep neural networks. Terminology Throughout this manuscript we will use the following terminology: The filter w tells us how to extract the output y optimally from data x. The pattern a s is the direction in the data along which the desired output y varies. Both constitute the signal s = a s y, i.e., the contributing part of x. The distractor d is the component of the data that does not contain information about the desired output. In this section, we take a look at a subset of explanation methods for individual classifier decisions and discuss how they are connected to our analysis of linear models in the previous section. Fig. 1 gives an overview of the different types of explanation methods which can be divided into function, signal and attribution visualizations. These three groups all present different information about the network and complement each other. Functions -gradients, saliency map Explaining the function in input space corresponds to describing the operations the model uses to extract y from x. Since deep neural networks are highly nonlinear, this can only be approximated. The saliency map estimates how moving along a particular direction in input space influences y (i.e., sensitivity analysis) where the direction is given by the model gradient BID1 BID20. 
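This distinction between filter and signal is easy to verify numerically. The sketch below instantiates the toy generative model of Section 2 with a_s = (1, 0) and a_d = (1, 1) (our choice of concrete values) and fits an ordinary least-squares regression:

import numpy as np
rng = np.random.default_rng(0)

a_s = np.array([1.0, 0.0])            # signal direction
a_d = np.array([1.0, 1.0])            # distractor direction
y = rng.standard_normal(10000)        # target
eps = rng.standard_normal(10000)      # distractor amplitude

X = np.outer(y, a_s) + np.outer(eps, a_d)     # x = a_s * y + a_d * eps

w, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares

print(np.round(w, 2))                         # approx. [ 1, -1]
print(round(w @ a_d, 2), round(w @ a_s, 2))   # approx. 0 and 1

The recovered weight vector is orthogonal to the distractor direction but clearly not aligned with the signal direction a_s, which is exactly the point of the argument above.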
In case of a linear model y = w T x, the saliency map reduces to analyzing the weights ∂y/∂x = w. Since it is mostly determined by the distractor, as demonstrated above, it is not representing the signal. It tells us how to extract the signal, not what the signal is in a deep neural network. Signal -DeConvNet, Guided BackProp, PatternNet The signal s detected by the neural network is the component of the data that caused the networks activations. BID26 formulated the goal of these methods as " [...] to map these activities back to the input pixel space, showing what input pattern originally caused a given activation in the feature maps". In a linear model, the signal corresponds to s = a s y. The pattern a s contains the signal direction, i.e., it tells us where a change of the output variable is expected to be measurable in the input BID5. Attempts to visualize the signal for deep neural networks were made using DeConvNet BID26 and Guided BackProp BID22. These use the same algorithm as the saliency map, but treat the rectifiers differently (see Fig. 1): DeConvNet leaves out the rectifiers from the forward pass, but adds additional ReLUs after each deconvolution, while Guided BackProp uses the ReLUs from the forward pass as well as additional ones. The back-projections for the linear components of the network correspond to a superposition of what are assumed to be the signal directions of each neuron. For this reason, these projections must be seen as an approximation of the features that activated the higher layer neuron. It is not a reconstruction in input space BID26.For the simplest of neural networks -the linear model -these visualizations reduce to the gradient 1. They show the filter w and neither the pattern a s, nor the signal s. Hence, DeConvNet and Guided BackProp do not guarantee to produce the detected signal for a linear model, which is proven by our toy example in Fig. 2. Since they do produce compelling visualizations, we will later investigate whether the direction of the filter w coincides with the direction of the signal s. We will show that this is not the case and propose a new approach, PatternNet (see Fig. 1), to estimate the correct direction that improves upon the DeConvNet and Guided BackProp visualizations. Attribution -LRP, Deep Taylor Decomposition, PatternAttribution Finally, we can look at how much the signal dimensions contribute to the output through the layers. This will be referred to as the attribution. For a linear model, the optimal attribution would be obtained by element-wise multiplying the signal with the weight vector: r input = w ay, with the element-wise multiplication. BID0 introduced layer-wise relevance propagation (LRP) as a decomposition of pixel-wise contributions (called relevances). BID11 extended this idea and proposed the deep Taylor decomposition (DTD). The key idea of DTD is to decompose the activation of a neuron in terms of contributions from its inputs. This is achieved using a first-order Taylor expansion around a root point x 0 with w T x 0 = 0. The relevance of the selected output neuron i is initialized with its output from the forward pass. The relevance from neuron i in layer l is re-distributed towards its input as: DISPLAYFORM0 To obtain the relevance for neuron i in layer l−1 the incoming relevances from all connected neurons j in layer l are summed r DISPLAYFORM1 Here we can safely assume that w T x > 0 because a non-active ReLU unit from the forward pass stops the re-distribution in the backward pass. 
This is identical to how a ReLU stops the propagation of the gradient. The difficulty in the application of the deep Taylor decomposition is the choice of the root point x 0, for which many options are available. It is important to recognize at this point that selecting a root point for the DTD corresponds to estimating the distractor x 0 = d and, by that, the signalŝ = x − x 0. PatternAttribution is a DTD extension that learns from data how to set the root point. Summarizing, the function extracts the signal from the data by removing the distractor. The attribution of output values to input dimensions shows how much an individual component of the signal contributes to the output, which is what LRP calls relevance. Visualizing the function has proven to be straightforward BID1 BID20. In contrast, visualizing the signal BID5 BID26 BID22 and the attribution BID0 BID11 BID23 is more difficult. It requires a good estimate of what is the signal and what is the distractor. In the following section we first propose a quality measure for neuron-wise signal estimators. This allows us to evaluate existing approaches and, finally, derive signal estimators that optimize this criterion. These estimators will then be used to explain the signal (PatternNet) and the attribution (PatternAttribution). All mentioned techniques as well as our proposed signal estimators treat neurons independently, i.e., the full explanation will be a superposition of neuron-wise explanations. Recall that the input data x comprises both signal and distractor: x = s + d, and that the signal contributes to the output but the distractor does not. Assuming the filter w has been trained sufficiently well to extract y, we have DISPLAYFORM0 Note that estimating the signal based on these conditions alone is an ill-posed problem. We could limit ourselves to linear estimators of the formŝ = u(w T u) −1 y, with u a random vector such that DISPLAYFORM1 For such an estimator, the signal estimateŝ = u w T u −1 y satisfies w Tŝ = y. This implies the existence of an infinite number of possible rules for the DTD as well as infinitely many back-projections for the DeConvNet family. To alleviate this issue, we introduce the following quality measure ρ for a signal estimator S(x) =ŝ that will be written with explicit variances and covariances using the shorthandsd = x − S(x) and y = w T x: DISPLAYFORM2 This criterion introduces an additional constraint by measuring how much information about y can be reconstructed from the residuals x −ŝ using a linear projection. The best signal estimators remove most of the information in the residuals and thus yield large ρ(S). Since the correlation is invariant to scaling, we constrain v Td to have variance σ 2 v Td = σ 2 y. Finding the optimal v for a fixed S(x) amounts to a least-squares regression fromd to y. This enables us to assess the quality of signal estimators efficiently. Let us now discuss two signal estimators that have been used in previous approaches. S x -the identity estimator The naive approach to signal estimation is to assume the entire data is signal and there are no distractors: DISPLAYFORM0 With this being plugged into the deep Taylor framework, we obtain the z-rule BID11 which is equivalent to LRP BID0. For a linear model, this corresponds to r = w x as the attribution. It can be shown that for ReLU and max-pooling networks, the z-rule reduces to the element-wise multiplication of the input and the saliency map BID17 BID6. 
This means that for a whole network, the assumed signal is simply the original input image. It also implies that, if there are distractors present in the data, they are included in the attribution: r = w x = w s + w d. When moving through the layers by applying the filters w during the forward pass, the contributions from the distractor d are cancelled out. However, they cannot be cancelled in the backward pass by the element-wise multiplication. The distractor contributions w d that are included in the LRP explanation cause the noisy nature of the visualizations based on the z-rule. S w -the filter based estimator The implicit assumption made by DeConvNet and Guided BackProp is that the detected signal varies in the direction of the weight vector w. This weight vector has to be normalized in order to be a valid signal estimator. In the deep Taylor decomposition framework this corresponds to the w 2 -rule and in the following signal estimator: DISPLAYFORM1 For a linear model, this produces an attribution of the form w w w T w y. This estimator does not reconstruct the proper signal in the toy example of section 2. Empirically it is also sub-optimal in our experiment in Fig. 3. We suggest to learn the signal estimator S from data by optimizing the previously established criterion. A signal estimator S is optimal with respect to Eq. if the correlation is zero for all possible v: ∀v, cov[y,d]v = 0. This is the case when there is no covariance between y andd. Because of linearity of the covariance and sinced = x − S(x) the above condition leads to DISPLAYFORM0 It is important to recognize that the covariance is a summarizing statistic and consequently the problem can still be solved in multiple ways. We will present two possible solutions to this problem. Note that when optimizing the estimator, the contribution from the bias neuron will be considered 0 since it does not covary with the output y. S a -The linear estimator A linear neuron can only extract linear signals s from its input x. Therefore, we could assume a linear dependency between s and y, yielding a signal estimator: DISPLAYFORM1 Plugging this into Eq. and optimising for a yields DISPLAYFORM2 Note that this solution is equivalent to the approach commonly used in neuro-imaging BID5 despite different derivation. With this approach we can recover the signal of our toy example in section 2. It is equivalent to the filter-based approach only if the distractors are orthogonal to the signal. We found that the linear estimator works well for the convolutional layers. However, when using this signal estimator with ReLUs in the dense layers, there is still a considerable correlation left in the distractor component (see Fig. 3).S a+− -The two-component estimator To move beyond the linear signal estimator, it is crucial to understand how the rectifier influences the training. Since the gate of the ReLU closes for negative activations, the weights only need to filter the distractor component of neurons with y > 0. Since this allows the neural network to apply filters locally, we cannot assume a global distractor component. We rather need to distinguish between the positive and negative regime: DISPLAYFORM3 Even though signal and distractor of the negative regime are canceled by the following ReLU, we still need to make this distinction in order to approximate the signal. Otherwise, information about whether a neuron fired would be retained in the distractor. 
Thus, we propose the two-component signal estimator: DISPLAYFORM4 Next, we derive expressions for the patterns a + and a −. We denote expectations over x within the positive and negative regime with E + [x] and E − [x], respectively. Let π + be the expected ratio of inputs x with w T x > 0. The covariance of data/signal and output become: DISPLAYFORM5 Assuming both covariances are equal, we can treat the positive and negative regime separately using Eq. to optimize the signal estimator: FORMULA12 and solving for a + yields the required parameter (a − analogous). DISPLAYFORM6 DISPLAYFORM7 The solution for S a+− reduces to the linear estimator when the relation between input and output is linear. Therefore, it solves our introductory linear example correctly. Based on the presented analysis, we propose PatternNet and PatternAttribution as illustrated in Fig. 1. PatternNet yields a layer-wise back-projection of the estimated signal to input space. The signal estimator is approximated as a superposition of neuron-wise, nonlinear signal estimators S a+− in each layer. It is equal to the computation of the gradient where during the backward pass the weights of the network are replaced by the informative directions. In Fig. 1, a visual improvement over DeConvNet and Guided Backprop is apparent. PatternAttribution exposes the attribution w a + and improves upon the layer-wise relevance propagation (LRP) framework BID0. It can be seen as a root point estimator for the DeepTaylor Decomposition (DTD). Here, the explanation consists of neuron-wise contributions of the estimated signal to the classification score. By ignoring the distractor, PatternAttribution can reduce the noise and produces much clearer heat maps. By working out the back-projection steps in the Deep-Taylor Decomposition with the proposed root point selection method, it becomes obvious that PatternAttribution is also analogous to the backpropagation operation. In this case, the weights are replaced during the backward pass by w a +. To evaluate the quality of the explanations, we focus on the task of image classification. Nevertheless, our method is not restricted to networks operating on image inputs. We used Theano BID2 and BID4 for our implementation. We restrict the analysis to the well-known ImageNet dataset BID13 using the pre-trained VGG-16 model BID19. Images were rescaled and cropped to 224x224 pixels. The signal estimators are trained on the first half of the training dataset. The vector v, used to measure the quality of the signal estimator ρ(x) in Eq. FORMULA5, is optimized on the second half of the training dataset. This enables us to test the signal estimators for generalization. All the presented here were obtained using the official validation set of 50000 samples. The validation set was not used for training the signal estimators, nor for training the vector v to measure the quality. Consequently our are obtained on previously unseen data. The linear and the two component signal estimators are obtained by solving their respective closed form solutions (Eq. and Eq. FORMULA15 ). With a highly parallelized implementation using 4 GPUs this could be done in 3-4 hours. This can be considered reasonable given that several days are required to train the actual network. The quality of a signal estimator is assessed with Eq.. Solving it with the closed form solution is computationally prohibitive since it must be repeated for every single weight vector in the network. 
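For a single neuron with weight vector w, the closed-form solutions themselves take only a few lines; the sketch below (plain numpy, our own helper names, X holding one example per row) shows the linear pattern of S_a and the positive-regime pattern of S_a+−, with the bias contribution treated as zero as described above.

import numpy as np

def linear_pattern(X, y):
    # S_a:  a = cov[x, y] / sigma_y^2   (one neuron; X is (N, d), y is (N,))
    y_c = y - y.mean()
    cov_xy = (X * y_c[:, None]).mean(axis=0)
    return cov_xy / y_c.var()

def positive_pattern(X, w, b=0.0):
    # S_a+-: pattern of the positive regime of a ReLU neuron y = w^T x + b,
    # a_+ = (E_+[x y] - E_+[x] E[y]) / (w^T (E_+[x y] - E_+[x] E[y]))
    y = X @ w + b
    pos = y > 0                            # expectations E_+ are over this regime
    cov_plus = (X[pos] * y[pos][:, None]).mean(axis=0) - X[pos].mean(axis=0) * y.mean()
    return cov_plus / (w @ cov_plus)

Computing the patterns is therefore comparatively cheap; what remains expensive is assessing ρ(S), since the regression from the residuals x − S(x) to y has to be repeated for every neuron.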
Therefore we optimize the equivalent least-squares problem using stochastic mini-batch gradient descent with until convergence. This was implemented on a NVIDIA Tesla K40 and took about 24 hours per optimized signal estimator. After learning to explain, individual explanations are computationally cheap since they can be implemented as a back-propagation pass with a modified weight vector. As a , our method produces explanations at least as fast as the work by BID3 on real time saliency. However, our method has the advantage that it is not only applicable to image models but is a generalization of the theory commonly used in neuroimaging BID5. Measuring the quality of signal estimators In Fig. 3 we present the from the correlation measure ρ(x), where higher values are better. We use random directions as baseline signal estimators. Clearly, this approach removes almost no correlation. The filter-based estimator S w succeeds in removing some of the information in the first layer. This indicates that the filters are similar to the patterns in this layer. However, the gradient removes much less information in the higher layers. Overall, it does not perform much better than the random estimator. This implies that the weights do not correspond to the detected stimulus in a neural network. Hence the implicit assumptions about the signal made by DeConvNet and Guided BackProp is not valid. The optimized estimators remove much more of the correlations across the board. For convolutional layers, S a and S a+− perform comparably in all but one layer. The two component estimator S a+− is best in the dense layers. Image degradation The first experiment was a direct measurement of the quality of the signal estimators of individual neurons. The second one is an indirect measurement of the quality, but it considers the whole network. We measure how the prediction (after the soft-max) for the initially selected class changes as a function of corrupting more and more patches based on the ordering assigned by the attribution (see BID14 . This is also related to the work by BID27 . In this experiment, we split the image in non-overlapping patches of 9x9 pixels. We compute the attribution and sum all the values within a patch. We sort the patches in decreasing order based on the aggregate heat map value. In step n = 1..100 we replace the first n patches with the their mean per color channel to remove the information in this patch. Then, we measure how this influences the classifiers output. We use the estimators from the previous experiment to obtain the function-signal attribution heat maps for evaluation. A steeper decay indicates a better heat map. Results are shown in Fig. 4 . The baseline, in which the patches are randomly ordered, performs worst. The linear optimized estimator S a performs quite poorly, followed by the filter-based estimator S w . The trivial signal estimator S x performs just slightly better. However, the two component model S a+− leads to the fastest decrease in confidence in the original prediction by a large margin. Its excellent quantitative performance is also backed up by the visualizations discussed next. Qualitative evaluation In FIG1, we compare all signal estimators on a single input image. For the trivial estimator S x, the signal is by definition the original input image and, thus, includes the distractor. Therefore, its noisy attribution heat map shows contributions that cancel each other in the neural network. The S w estimator captures some of the structure. 
The optimized estimator S a in slightly more structure but struggles on color information and produces dense heat maps. The two component model S a+− on the right captures the original input during signal estimation and produces a crisp heat map of the attribution. FIG2 shows the visualizations for six randomly selected images from ImageNet. PatternNet is able to recover a signal close to the original without having to resort to the inclusion of additional rectifiers in contrast to DeConvNet and Guided BackProp. We argue that this is due to the fact that the optimization of the pattern allows for capturing the important directions in input space. This contrasts with the commonly used methods DeConvNet, Guided BackProp, LRP and DTD, for which the correlation experiment indicates that their implicit signal estimator cannot capture the true signal in the data. Overall, the proposed approach produces the most crisp visualization in addition to being measurably better, as shown in the previous section. Additonally, we also contrast our methods to the prediction-differences analysis by BID27 in the supplementary material. Relation to previous methods Our method can be thought of as a generalization of the work by BID5, making it applicable on deep neural networks. Remarkably, our proposed approach can solve the toy example in section 2 optimally while none of the previously published methods for deep learning are able to solve this BID0 BID11 BID21 BID23 BID27 BID3 BID26 BID22 . Our method shares the idea that to explain a model properly one has to learn how to explain it with Zintgraf et al. FORMULA5 and BID3 . Furthermore, since our approach is after training just as expensive as a single back-propagation step, it can be applied in a real-time context, which is also possible for the work done by BID3 but not for BID27 . Understanding and explaining nonlinear methods is an important challenge in machine learning. Algorithms for visualizing nonlinear models have emerged but theoretical contributions are scarce. We have shown that the direction of the model gradient does not necessarily provide an estimate for the signal in the data. Instead it reflects the relation between the signal direction and the distracting noise contributions ( Fig. 2). This implies that popular explanation approaches for neural networks (DeConvNet, Guided BackProp, LRP) do not provide the correct explanation, even for a simple linear model. Our reasoning can be extended to nonlinear models. We have proposed an objective function for neuron-wise explanations. This can be optimized to correct the signal visualizations (PatternNet) and the decomposition methods (PatternAttribution) by taking the data distribution into account. We have demonstrated that our methods constitute a theoretical, qualitative and quantitative improvement towards understanding deep neural networks. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement NO 657679, the BMBF for the Berlin Big Data Center BBDC (01IS14013A), a hardware donation from NVIDIA. We thank Sander Dieleman, Jonas Degraeve, Ira Korshunova, Stefan Chmiela, Malte Esders, Sarah Hooker, Vincent Vanhoucke for their comments to improve this manuscript. We are grateful to Chris Olah and Gregoire Montavon for the valuable discussions. In this section we will give an overview of the visualization algorithms to clarify their actual implementation for ReLu networks. 
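The sketch below summarises the different backward rules for a single dense layer followed by a ReLU; convolutional layers behave analogously, and max-pooling is treated the same way by all methods. The pattern matrix A is assumed to have been estimated as described in Section 4, and the variable names are ours.

import numpy as np

def backward_rules(W, A, x, r):
    # Forward pass of the layer: z = W x, activation = relu(z).
    # r is the signal/relevance arriving from the layer above (same shape as z);
    # A holds one learned pattern per output neuron (same shape as W).
    z = W @ x
    fwd_mask = (z > 0).astype(float)       # ReLU gate remembered from the forward pass

    saliency = W.T @ (r * fwd_mask)                         # plain gradient
    deconvnet = W.T @ np.maximum(r, 0.0)                    # rectify the backward signal only
    guided_bp = W.T @ (np.maximum(r, 0.0) * fwd_mask)       # rectify both directions
    patternnet = A.T @ (r * fwd_mask)                       # weights replaced by patterns
    patternattribution = (W * A).T @ (r * fwd_mask)         # weights replaced by W ⊙ A
    return saliency, deconvnet, guided_bp, patternnet, patternattribution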
This shows the similarities and differences between all approaches. For every visualization approach, the back-projection through a max-pooling layer follows only the path that was active in the forward pass. To create the Prediction-Differences analysis visualizations BID27, we used the open-source code provided by the authors with the default parameter settings provided for VGG. Figure 7: Visualization of random images from ImageNet (validation set). The leftmost column shows the ground truth, the predicted label, and the classifier's confidence. Comparison of the proposed methods PatternNet and PatternAttribution with the Prediction-Differences approach by BID27.
Hkn7CBaTW
Without learning, it is impossible to explain a machine learning model's decisions.
Graph neural networks have shown promising on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks. Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation. Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance for unseen graphs. To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e. Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem and attention to use node information explicitly. These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations. With intensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performances on four benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information. The source code will be publicly available in the near future. Graphs are universal data representations that exist in a wide variety of real-world problems, such as analyzing social networks , forecasting traffic flow , and recommending products based on personal preferences . Owing to breakthroughs in deep learning, recent graph neural networks (GNNs) have achieved considerable success on diverse graph problems by collectively aggregating information from graph structures; ). As a , much research in recent years has focused on how to aggregate the feature representations of neighbor nodes so that the dependence of graphs is effectively utilized. The majority of studies have predominantly depended on edges to aggregate the neighboring nodes' features. These edge-based methods are premised on the concept of relational inductive bias within graphs , which implies that two connected nodes have similar properties and are more likely to share the same label . While this approach leverages graphs' unique property of capturing relations, it appears less capable of generalizing to new or unseen graphs (b). To improve the neighborhood aggregation scheme, some studies have incorporated node information; They fully utilize node information and reduce the effects of relational (edge) information. A recent approach, graph attention networks (GAT), employs the attention mechanism so that weights used for neighborhood aggregation differ according to the feature of nodes (Veličković et al., 2018). This approach has yielded impressive performance and has shown promise in improving generalization for unseen graphs. Regardless of neighborhood aggregation schemes, most methods, however, suffer from a common problem where neighborhood information is considered to a limited degree . For example, graph convolutional networks (GCNs) only operate on data that are closely connected due to oversmoothing, which indicates the "washing out" of remote nodes' features via averaging. Consequently, information becomes localized and access to global information is restricted , leading to poor performance on datasets in which only a small portion is labeled. 
In order to address the aforementioned issues, we propose a novel method, Graph Entities with Step Mixture via random walk (GESM), which considers information from all nodes in the graph and can be generalized to new graphs by incorporating random walk and attention. Random walk enables our model to be applicable to previously unseen graph structures, and a mixture of random walks alleviates the oversmoothing problem, allowing global information to be included during training. Hence, our method can be effective, particularly for nodes in the periphery or a sparsely labeled dataset. The attention mechanism also advances our model by considering node information for aggregation. This enhances the generalizability of models to diverse graph structures. To validate our approach, we conducted experiments on four standard benchmark datasets: Cora, Citeseer, and Pubmed, which are citation networks for transductive learning, and protein-protein interaction (PPI) for inductive learning, in which test graphs remain unseen during training. In addition to these experiments, we verified whether our model uses information of remote nodes by reducing the percentage of labeled data. The experimental demonstrate the superior performance of GESM on inductive learning as well as transductive learning for datasets. Moreover, our model achieved enhanced accuracy for datasets with reduced label rates, indicating the contribution of global information. The key contributions of our approach are as follows: • We present graphs with step mixture via random walk, which can adaptively consider local and global information, and demonstrate its effectiveness through experiments on public benchmark datasets with few labels. • We propose Graph Entities with Step Mixture via random walk (GESM), an advanced model which incorporates attention, and experimentally show that it is applicable to both transductive and inductive learning tasks, for both nodes and edges are utilized for the neighborhood aggregation scheme. • We empirically demonstrate the importance of propagation steps by analyzing its effect on performance in terms of inference time and accuracy. Step-0 Step-1 Step- Figure 1: Random walk propagation procedure. From left to right are step-0, step-1, step-2, and stepinfinite. The values in each node indicate the distribution of a random walk. In the leftmost picture, only the starting node has a value of 100, and all other nodes are initialized to zero. As the number of steps increases, values spread throughout the graph and converge to some extent. Random walk, which is a widely used method in graph theory, mathematically models how node information propagates throughout the graph. As shown in Figure 1, random walk refers to randomly moving to neighbor nodes from the starting node in a graph. For a given graph, the transition matrix P, which describes the probabilities of transition, can be formulated as follows: where A denotes the adjacency matrix of the graph, and D the diagonal matrix with a degree of nodes. The probability of moving from one node to any of its neighbors is equal, and the sum of the probabilities of moving to a neighboring node adds up to one. Let u t be the distribution of the random walk at step t (u 0 represents the starting distribution). The t step random walk distribution is equal to multiplying P, the transition matrix, t times. 
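This propagation is easy to reproduce numerically; below is a minimal numpy illustration of the transition matrix and the t-step distribution on an arbitrarily chosen four-node graph (the graph and step count are ours).

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

P = A / A.sum(axis=0)                  # column-normalised transition matrix P = A D^-1

u = np.array([1.0, 0.0, 0.0, 0.0])     # all probability mass on the starting node
for t in range(50):
    u = P @ u                          # u_t = P^t u_0
print(np.round(u, 3))                  # ~[0.25, 0.25, 0.375, 0.125], the steady state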
In other words, The entries of the transition matrix are all positive numbers, and each column sums up to one, indicating that P is a matrix form of the Markov chain with steady-state. One of the eigenvalues is equal to 1, and its eigenvector is a steady-state . Therefore, even if the transition matrix is infinitely multiplied, convergence is guaranteed. The attention mechanism was introduced in sequence-to-sequence modeling to solve long-term dependency problems that occur in machine translation . The key idea of attention is allowing the model to learn and focus on what is important by examining features of the hidden layer. In the case of GNNs , GATs (Veličković et al., 2018) achieved stateof-the-art performance by using the attention mechanism. Because the attention mechanism considers the importance of each neighboring node, node features are given more emphasis than structural information (edges) during the propagation process. Consequently, using attention is advantageous for training and testing graphs with different node features but the same structures (edges). Given the many benefits of attention, we incorporate the attention mechanism to our model to fully utilize node information. The attention mechanism enables different importance values to be assigned to nodes of the same neighborhood, so combining attention with mixture-step random walk allows our model to adaptively highlight features with salient information in a global scope. Let G = (V, E) be a graph, where V and E denote the sets of nodes and edges, respectively. Nodes are represented as a feature matrix X ∈ R n×f, where n and f respectively denote the number of nodes and the input dimension per node. A label matrix is Y ∈ R n×c with the number of classes c, and a learnable weight matrix is denoted by W. The adjacency matrix of graph G is represented as A ∈ R n×n. The addition of self-loops to the adjacency matrix is A = A + I n, and the column normalized matrix of A is = AD −1 with 0 = I n. Most graph neural networks suffer from the oversmoothing issue along with localized aggregation. Although JK-Net tried to handle oversmoothing by utilizing GCN blocks with mulitple propagation, it could not completely resolve the issue as shown in Figure 4b. We therefore propose Graph Step Mixture (GSM), which not only separates the node embedding and propagation process but also tackles oversmoothing and localized aggregation issues through a mixture of random walk steps. GSM has a simple structure that is composed of three stages, as shown in Figure 2. Input X passes through a fully connected layer with a nonlinear activation. The output is then multiplied by a normalized adjacency matrix for each random walk step that is to be considered. The for each step are concatenated and fed into another fully connected layer, giving the final output. The entire propagation process of GSM can be formulated as: where is the concatenation operation, s is the maximum number of steps considered for aggregation, and k is the normalized adjacency matrix multiplied k times. As can be seen from Equation 3, weights are shared across nodes. In our method, the adjacency matrix is an asymmetric matrix, which is generated by random walks and flexible to arbitrary graphs. On the other hand, prior methods such as JK-Net and MixHop , use a symmetric Laplacian adjacency matrix, which limits graph structures to given fixed graphs. 
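A shape-level sketch of Equation 3 makes the step-mixture explicit: every intermediate propagation step is kept and concatenated rather than only the deepest step being used. The function below is illustrative only (plain numpy, our own parameter names); the full model additionally uses dropout, a softmax output, and, for GESM, multi-head attention in place of the first fully connected layer.

import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1.0)

def gsm_forward(X, A_hat, W1, W2, s):
    # X: (n, f) node features; A_hat: (n, n) normalised adjacency with self-loops;
    # W1: (f, h); W2: ((s + 1) * h, c) with c output classes.
    H = elu(X @ W1)                       # shared node embedding
    steps, Z = [H], H
    for _ in range(s):
        Z = A_hat @ Z                     # one more random-walk propagation step
        steps.append(Z)                   # keep the step-k result instead of discarding it
    mixture = np.concatenate(steps, axis=1)   # [H, A_hat H, ..., A_hat^s H]
    return mixture @ W2                   # class logits (softmax omitted)

The design choice is that the final layer, rather than the propagation depth, decides how much weight the local and global neighbourhoods receive.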
Figure: (a) Traditional global aggregation scheme; (b) our step-mixture scheme (step-1, step-2, step-3).
For the concatenation operation, localized sub-graphs are concatenated with global graphs, which allows the neural network to adaptively select global and local information through learning (see Figure 3). While traditional graph convolution methods consider aggregated information within three steps by Â(Â(ÂXW )W )W, our method can take all previous aggregations into account. To develop our base model, which depends on edge information, we additionally adopt the attention mechanism so that node information is emphasized for aggregation, i.e., Graph Entity Step Mixture (GESM). We simply modify the nonlinear transformation of the first fully connected layer in GSM by replacing it with the attention mechanism denoted by H multi (see Equations 3 and 4). As described in Equation 4, we employ multi-head attention, where H multi is the concatenation of m attention layers and α is the coefficient of attention computed using the concatenated features of a node and its neighboring nodes. By incorporating attention into our base model, we can avoid or ignore noisy parts of the graph, providing a guide for the random walk. Utilizing attention can also improve combinatorial generalization for inductive learning, where training and testing graphs are completely different. In particular, datasets with the same structure but different node information can benefit from our method, because these datasets can only be distinguished by node information. Focusing on node features for aggregation can thus provide more reliable results in inductive learning.
The time complexity of our base model is O(s × l × h), where s is the maximum number of steps considered for aggregation, l is the number of non-zero entries in the adjacency matrix, and h is the hidden feature dimension. As suggested by , we can assume h ≪ l under realistic assumptions. Our model is therefore highly efficient, with time complexity O(s × l), which is on par with vanilla GCN.
Transductive learning. We utilize three benchmark datasets for node classification: Cora, Citeseer, and Pubmed. These three datasets are citation networks, in which the nodes represent documents and the edges correspond to citation links. The edge configuration is undirected, and the feature of each node consists of word representations of a document. Detailed statistics of the datasets are described in Table 1. For experiments on datasets with the public label rate, we follow the transductive experimental setup of. Although all of the nodes' feature vectors are accessible, only 20 node labels per class are used for training. Accordingly, only 5.1% of labels for Cora, 3.6% for Citeseer, and 0.3% for Pubmed are available for training. In addition to experiments with public label rate settings, we conducted experiments using datasets where labels were randomly split into a smaller set for training. To check whether our model can propagate node information to the entire graph, we reduced the label rate of Cora to 3% and 1%, Citeseer to 1% and 0.5%, and Pubmed to 0.1%, and followed the experimental settings of for these datasets with low label rates. For all experiments, we report the results using 1,000 test nodes and use 500 validation nodes.
Inductive learning. We use the protein-protein interaction (PPI) dataset, which is preprocessed by Veličković et al. (2018). As detailed in Table 1, the PPI dataset consists of 24 different graphs, where 20 graphs are used for training, 2 for validation, and 2 for testing.
The test set remains completely unobserved during training. Each node is multi-labeled with 121 labels and has 50 features regarding gene sets and immunological signatures. For transductive learning, we compare our model with a number of state-of-the-art models according to the results reported in the corresponding papers. Our model is compared with the baseline models specified in (Veličković et al., 2018), such as label propagation (LP), graph embeddings via random walk (DeepWalk), and Planetoid. We also compare our model with models that use self-supervised learning (Union), learnable graph convolution (LGCN), GCN-based multi-hop neighborhood mixing (JK-GCN and MixHop), multi-scale graph convolutional networks (AdaLNet), and maximal entropy transition (PAN). We further include models that utilize a teleport term during propagation (APPNP), models that conduct convolution via spectral filters, such as ChebyNet, GCN, SGC, and GWNN, and models that adopt attention between nodes, such as GAT (Veličković et al., 2018) and AGNN. For inductive learning tasks, we compare our model against four baseline models. These include models that use sampling and aggregation (GraphSAGE-LSTM) and jumping knowledge (JK-LSTM), along with GAT and LGCN, which are also used in the transductive setting.
Regarding the hyperparameters of our transductive learning models, we used different settings for datasets with the public split and the random split. We set the dropout probability such that 0.3 of the units were kept for the public split and 0.6 were kept for the random split. We set the number of attention heads to m = 8 for GESM. The size of the hidden layer h ∈ {64, 512} and the maximum number of steps used for aggregation s ∈ {10, 30} were adjusted for each dataset. We trained for a maximum of 300 epochs with L2 regularization λ = 0.003 and learning rate lr = 0.001. We report the average classification accuracy of 20 runs. For inductive learning, the size of all hidden layers was the same, with h = 256 for both GSM, which consisted of two fully connected layers at the beginning, and GESM. We set the number of steps s = 10 for GSM, and s = 5, m = 15 for GESM. L2 regularization and dropout were not used for inductive learning (Veličković et al., 2018). We trained our models for a maximum of 2,000 epochs with learning rate lr = 0.008. The evaluation metric was the micro-F1 score, and we report the averaged results of 10 runs. For all the models, the nonlinearity function of the first fully connected layer was an exponential linear unit (ELU). Our models were initialized using Glorot initialization and were trained to minimize the cross-entropy loss using the Adam optimizer. We employed an early stopping strategy based on the loss and accuracy of the validation sets, with a patience of 100 epochs.
Results on benchmark datasets. Table 2 summarizes the comparative evaluation experiments for transductive and inductive learning tasks. In general, not only are there a small number of methods that can perform both transductive and inductive learning tasks, but the performance of such methods is not consistently high. Our methods, however, are ranked in the top-3 for every task, indicating that our method can be applied to any task with large predictive power. For transductive learning tasks, the experimental results of our methods are higher than or equivalent to those of other methods. As can be identified from the table, our base model GSM, which is computationally efficient and simple, outperforms many existing baseline models.
These results indicate the significance of considering both global and local information and of using random walks. It can also be observed that GESM yielded more impressive results than GSM, suggesting the importance of considering node information in the aggregation process. For the inductive learning task, our base model GSM, which employs an edge-based aggregation method, does not invariably obtain the highest accuracy. However, our model with attention, GESM, significantly improves the performance of GSM by learning the importance of neighborhood nodes, and surpasses the results of GAT, despite the fact that GAT consists of more attention layers. These results for unseen graphs are in good agreement with the results shown by Veličković et al., in which reducing the influence of structural information improved generalization.
Results on datasets with low label rates. To demonstrate that our methods can consider global information, we experimented on sparsely labeled versions of the transductive learning datasets with low label rates. As indicated in Table 3, our models show remarkable performance even on datasets with low label rates. In particular, we can further observe the superiority of our methods by inspecting Tables 2 and 3, in which our methods trained on only 3% of the Cora dataset outperformed some other methods trained on 5.1% of the data. Because both GSM and GESM showed enhanced accuracy, it could be speculated that using a mixture of random walks played a key role in the experiments; the improved results can be explained by our methods adaptively selecting node information from local and global neighborhoods, and allowing peripheral nodes to receive information.
Oversmoothing and Accuracy. As shown in Figure 4a, GCN, SGC, and GAT (Veličković et al., 2018) suffer from oversmoothing. GCN and GAT show severe degradation in accuracy after the 8th step; the accuracy of SGC does not drop as much as GCN and GAT but nevertheless gradually decreases as the step size increases. The proposed GSM, unlike the others, maintains its performance without any degradation, because no rank loss occurs and oversmoothing is overcome by step mixture. Interestingly, JK-Net also keeps the training accuracy regardless of the step size by using GCN blocks with multiple steps, according to Figure 4a. We further compared the test accuracy of GSM with JK-Net, a similar approach to our model, with regard to the step size. To investigate the adaptability of GSM and JK-Net to larger steps, we concatenated features after the 10th step. As shown in Figure 4b, GSM outperforms JK-Net, even though both methods use concatenation to alleviate the oversmoothing issue. These results are in line with the fact that JK-Net obtains global information in a way similar to GCN or GAT. Consequently, the larger the step, the more difficult it is for JK-Net to maintain performance. GSM, on the other hand, maintains a steady performance, which confirms that GSM does not collapse even for large step sizes.
Figure 5: Test accuracy of GSM and GESM under the public (5.1%), 3%, and 1% label rates. We also observe the effect on accuracy as the number of steps increases under these three labeling conditions for GSM and GESM. As represented in Figure 5, it is evident that considering remote nodes can contribute to the increase in accuracy. By taking into account more data within a larger neighborhood, our model can make reliable decisions, resulting in improved performance.
Inspection of the figure also indicates that the accuracy converges faster for datasets with higher label rates, presumably because a small number of walk steps is sufficient to explore the entire graph. Moreover, the addition of attention benefits performance in terms of higher accuracy and faster convergence.
Inference time. As shown in Figure 6, the computational complexity of all models increases linearly as the step size increases. We can observe that the inference time of GSM is faster than that of GCN, especially when the number of steps is large. The inference time of GESM is much faster than that of GAT (Veličković et al., 2018) while providing higher accuracies and stable results (see Appendix A). Our methods are both fast and accurate due to the sophisticated design with a mixture of random walk steps.
Embedding Visualization. Figure 7 (t-SNE plot of the last hidden layer trained on the Cora dataset) visualizes the hidden features of Cora from our models by using the t-SNE algorithm. The figure illustrates the difference between edge-based and node-based aggregation. While the nodes are closely clustered in the result from GSM, they are scattered in that of GESM. According to the results in Table 2, the more closely clustered GSM does not generally produce better results than the loosely clustered GESM, which supports findings that the attention mechanism helps models ignore or avoid noisy information in graphs.
Table 5: Average test set accuracy and standard deviation over 100 random train/validation/test splits with 20 runs. Top-3 results for each column are highlighted in bold, and top-1 values are underlined.
Model | Coauthor CS | Coauthor Physics | Amazon Computers | Amazon Photo
MLP | 88.3 ± 0.7 | 88.9 ± 1.1 | 44.9 ± 5.8 | 69.6 ± 3.8
LogReg | 86.4 ± 0.9 | 86.7 ± 1.5 | 64.1 ± 5.7 | 73.0 ± 6.5
LP | 73.6 ± 3.9 | 86.6 ± 2.0 | 70.8 ± 8.1 | 72.6 ± 11.1
GCN | 91.1 ± 0.5 | 92.8 ± 1.0 | 82.6 ± 2.4 | 91.2 ± 1.2
GraphSAGE | 91.3 ± 2.8 | 93.0 ± 0.8 | 82.4 ± 1.8 | 91.4 ± 1.3
GAT (Veličković et al., 2018) | 90.5 ± 0.6 | 92.5 ± 0.9 | 78.0 ± 19.0 | 85.7 ± 20.3
GSM (our base model) | 91.8 ± 0.4 | 93.3 ± 0.6 | 79.2 ± 2.0 | 89.3 ± 1.9
GESM (GSM+attention) | 92.0 ± 0.5 | 93.7 ± 0.6 | 79.3 ± 1.7 | 90.0 ± 2.0
For an in-depth verification of overfitting, we extended our experiments to four new node classification datasets. Coauthor CS and Coauthor Physics are co-authorship graphs from the KDD Cup 2016 challenge, in which nodes are authors, features represent the article keywords of each author's papers, and class labels indicate each author's most active research areas. Amazon Computers and Amazon Photo are co-purchase graphs of Amazon, where nodes represent items and edges indicate that items have been purchased together. The node features are bag-of-words representations of product reviews, and class labels represent product categories. Detailed statistics of the datasets are described in Table 4, and we followed the experimental setup of. We used the same values for each hyperparameter (unified size: 64, step size: 15, number of attention heads for GAT and GESM: 8) without tuning. The results in Table 5 show that our proposed methods do not overfit to a particular dataset. Moreover, in comparison to GAT, the performance of GESM is more accurate and more stable.
We visualized the distribution of attention vectors. Figure 8a plots the distribution of neighbors with equal importance, and Figure 8b displays the distribution of attention-weighted neighbors that we trained with GESM. Although both figures look similar to some degree, we can conjecture that GESM slightly adjusts the weight values, contributing to improved performance.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1eWbkSFPS
Simple and effective graph neural network with mixture of random walk steps and attention
Basis pursuit is a compressed sensing optimization in which the l1-norm is minimized subject to model error constraints. Here we use a deep neural network prior instead of l1-regularization. Using known noise statistics, we jointly learn the prior and reconstruct images without access to ground-truth data. During training, we use alternating minimization across an unrolled iterative network and jointly solve for the neural network weights and the training set image reconstructions. At inference, we fix the weights and pass the measurements through the network. We compare reconstruction performance between unsupervised and supervised (i.e. with ground-truth) methods. We hypothesize this technique could be used to learn reconstruction when ground-truth data are unavailable, such as in high-resolution dynamic MRI.
Deep learning in tandem with model-based iterative optimization, i.e. model-based deep learning, has shown great promise at solving imaging-based inverse problems beyond the capabilities of compressed sensing. These networks typically require hundreds to thousands of examples for training, consisting of pairs of corrupted measurements and the desired ground-truth image. The reconstruction is then trained in an end-to-end fashion, in which data are reconstructed with the network and compared to the ground-truth. In many cases, collecting a large set of fully sampled data for training is expensive, impractical, or impossible. In this work, we present an approach to model-based deep learning without access to ground-truth data. We take advantage of (known) noise statistics for each training example and formulate the problem as an extension of basis pursuit denoising with a deep convolutional neural network (CNN) prior in place of image sparsity. During training, we jointly solve for the CNN weights and the reconstructed training set images. At inference time, we fix the weights and pass the measured data through the network. As proof of principle, we apply the technique to undersampled, multi-channel magnetic resonance imaging (MRI). We compare our Deep Basis Pursuit (DBP) formulation with and without supervised learning, as well as to MoDL, a recently proposed unrolled model-based network that uses ground-truth data for training. We show that in the unsupervised setting, we are able to approach the image reconstruction quality of supervised learning, thus opening the door to applications where collecting fully sampled data is not possible.
We focus on the discretized linear signal model under additive white Gaussian noise, y = Ax + v, where x ∈ C N is the vectorized unknown image, A ∈ C M×N is the discretized forward model describing the imaging system, y ∈ C M is a vector of the acquired measurements, and v ∼ N c (0, σ 2 I) is a complex-valued Gaussian noise vector. We are interested in the ill-posed regime, where M < N. To make the inverse problem well-posed, x is commonly solved for through a regularized least-squares, arg min x ||y − Ax|| 2 2 + λQ(x), where Q(x) is a suitable regularization term and λ > 0 is the strength of the regularization. An alternative, equivalent formulation that directly accounts for the model error due to noise is the constrained problem, arg min x Q(x) subject to ||y − Ax|| 2 ≤ ε, where ε = σ √ M is the square-root of the expected noise power in the measurements. When an l1-norm is used for regularization, this is known as basis pursuit denoising, and it provides an intuitive formulation as it finds the best (sparsest) representation given a noise error constraint.
CNNs have recently been used to solve imaging inverse problems, relying on the network architecture and training data to learn the inverse mapping. When a large corpus of training data is available, it is possible to learn the inverse mapping directly from under-sampled measurements, typically by first transforming the measurements to the image domain either through the adjoint operation A * y or through a conventional reconstruction. Except for the initial transformation, these models do not take advantage of knowledge of the imaging system in the network architecture. Thus, they require substantial training data and are prone to overfitting and CNN artifacts. More recently, network architectures that combine both CNN blocks and data consistency blocks incorporating knowledge of the forward model have grown in popularity, as they allow for robustness against CNN artifacts and training with limited data,. These architectures are inspired by conventional first-order iterative algorithms intended to solve the unconstrained problem, and typically alternate between data consistency and manifold projection. To facilitate training with backpropagation, the iterative algorithms are unrolled for a finite number of steps and optimized in an end-to-end manner. As the network is differentiable, gradient updates can be computed through the application of the forward operator with auto-differentiation. For a particular network architecture, we can view the image reconstruction as a feed-forward network where F w is a deep network parameterized by weights w that operates on the measurements and optionally incorporates knowledge of the forward model. Given a training set of inputs {y and corresponding ground-truth images {x, the network weights can be trained in a traditional end-to-end fashion by minimizing the average training loss as measured by the loss function L: For inference, the weights are fixed and new measurements are reconstructed through a forward pass of. Inspired by other model-based deep learning architectures -, we propose a new unrolled network based on basis pursuit denoising, which we call Deep Basis Pursuit (DBP). We assume the noise statistics of the measurements are known and we use them to selfregularize the solution. In turn, we propose to train in an unsupervised fashion in the measurement domain, taking advantage of explicit control of the error between the measurements and the output of the network. We first describe the DBP model, and then discuss training the in an unsupervised fashion without ground-truth data. We combine the data consistency constraint of basis pursuit denoising with the 2 -norm incorporating a CNN auto-encoder. The DBP optimization is given by arg min where N w (x) ≡ x − R w (x) is a CNN parameterized by weights w that aims to estimate noise and aliasing,. In other words, R w (x) represents a denoised version of x. In this way, we seek to find the "cleanest" representation of x while allowing for the expected data inconsistency due to noise. To approximately solve, we consider an alternating minimization, repeated N 1 times: x k = arg min Subproblem is a forward pass through the CNN. Subproblem is convex and can solved with ADMM. We introduce the slack variable z = Ax and the dual variable u, and apply the following update steps, repeated N 2 times: where ρ > 0 is the ADMM penalty parameter and L2Proj(z,) is the projection of z onto the 2 -ball of radius. 
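The text above does not reproduce the explicit update equations, but the L2Proj(z, ε) step is the standard projection onto the l2-ball of radius ε. Below is a hedged structural sketch of that projection inside a scaled-form ADMM loop; the variable names, the solve_x placeholder, and the real-valued setting are our own assumptions, not the paper's exact algorithm.

import numpy as np

def l2_proj(z, radius):
    # Project z onto the l2-ball of the given radius (identity if already inside).
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

def admm_ball_constraint(A, y, x0, eps, solve_x, n_iter=10):
    # Generic scaled-form ADMM skeleton enforcing ||y - A x||_2 <= eps via a slack z = A x.
    # solve_x is a user-supplied routine (e.g. a few conjugate-gradient steps) for the
    # quadratic x-update; real-valued toy setting for simplicity.
    x = x0.copy()
    z = A @ x
    u = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x = solve_x(x, z - u)                 # x-update: least-squares-type subproblem
        v = A @ x + u
        z = y + l2_proj(v - y, eps)           # z-update: keep residual inside the eps-ball around y
        u = u + A @ x - z                     # dual update
    return x

In the unrolled DBP network, a data-consistency block of roughly this shape would be interleaved with the CNN step R_w, following the alternating minimization described above.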
The update steps are amenable to matrix-free optimization, as the forward and adjoint calculations can be represented as computationally efficient operators. In particular, subproblem can be approximately solved with N 3 iterations of the Conjugate Gradient Method. Altogether, we can view DBP as an unrolled optimization alternating between CNN layers and data consistency layers, as shown in Fig. 1. At each layer, the same CNN is used, though it is possible in general to relax this requirement. For a fixed CNN R w, the DBP model is a special case of:xw ≡ Fw(y; A,), wherẽ w = (w, ρ) are the network parameters, and the network uses measurements together with knowledge of the system and noise power to return an estimate of the image. and ground-truth training data are available, the network weights can be trained in a traditional end-to-end fashion according to. When ground-truth data are not available, we consider a loss functionL imposed in the measurement domain: The measurement loss can be a useful surrogate for the true loss, as the measurements contain (noisy) information about the ground-truth image,. Thus, we may hope to learn about the image statistics given a large-enough training set that includes a diversity of measurements. We consider the application to under-sampled, multichannel MRI. The MRI reconstruction task is well-suited to DBP, as the noise statistics are Gaussian and can be measured during a short pre-scan. We first describe the multi-channel MRI forward operator and general sampling strategy. Then we discuss the experimental setup, including the dataset and implementation details. In multi-channel MRI, the signal is measured by an array of receive coils distributed around an object, each with a spatially-varying sensitivity profile. In the measurement model, the image is linearly mixed with each coil sensitivity profile, Fourier transformed, and sampled. We can describe the measurement model as A = (P F S 1) · · · (P F S C) ∈ C M ×N, where C is the number of receive coils, S c ∈ C N ×N is a diagonal operator containing the spatial sensitivity profile of the c th coil along the diagonal, F is the Fourier transform operator, and P ∈ {0, 1} M C ×N is a diagonal operator that selects the sampled frequencies. Data: We used the "Stanford Fully Sampled 3D FSE Knees" dataset from mridata.org, containing 3D Cartesian proton-density knee scans of 20 healthy volunteers. Each 3D volume consisted of 320 slices with matrix size 320×256 and was scanned with an 8-channel receive coil array. Although each slice is fully sampled, in practice the "ground-truth" data itself has noise. To aid in experimental comparison, "noise-free" ground-truth data were created by averaging the data from seven adjacent slices. For each slice, the spatial sensitivity profiles of each coil were estimated using ESPIRiT, a self-calibrated parallel imaging method. Ground-truth images were reconstructed by solving using the fully sampled data without regularization. Each slice was then passed through the forward model and retrospectively under-sampled using a different variable-density Poisson-disc sampling pattern, with a 16×16 calibration region and acceleration factor R ≈ 12. Slices from the first 16 volunteers were used for training, discarding the first and last 20 edge slices of each volume (4,384 slices). Similarly, slices from the next two volunteers were used for validation (548 slices), and slices from the last two volunteers were used for testing (548 slices). 
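For reference, a minimal sketch of the multi-channel MRI forward operator A = (P F S 1) · · · (P F S C) described at the start of this section, and its adjoint (our own array conventions; the orthonormal Cartesian FFT and shapes are assumptions):

import numpy as np

def mri_forward(x, sens, mask):
    # x: (ny, nx) complex image; sens: (C, ny, nx) coil sensitivities S_c; mask: (ny, nx) binary sampling P.
    coil_images = sens * x[None, :, :]                   # S_c x for each coil c
    kspace = np.fft.fft2(coil_images, norm="ortho")      # F S_c x
    return mask[None, :, :] * kspace                     # P F S_c x (undersampled k-space)

def mri_adjoint(y, sens, mask):
    # Adjoint A*: zero-filled inverse FFT followed by coil combination with conjugate sensitivities.
    imgs = np.fft.ifft2(mask[None, :, :] * y, norm="ortho")
    return np.sum(np.conj(sens) * imgs, axis=0)

Matrix-free operators like these are what make the conjugate-gradient x-update practical, since A and its adjoint are applied as FFTs rather than stored as explicit matrices.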
We added complex-valued Gaussian noise with standard deviation σ = 0.01 to the noise-free, averaged data. Implementation: For all experiments we used a Euclidean norm loss function for training. When training with ground-truth (supervised), the loss was applied in the image domain. For unsupervised training, the loss was applied in the measurement (Fourier) domain. We used a U-Net architecture for the CNN autoencoder, with separate input channels for the real and imaginary components. The U-Net consisted of three encoding layers with ReLU activation functions and 64, 128, and 256 channels, respectively, followed by three similar decoding layers. A final convolutional layer with no activation function mapped the decoder back to two channels. All convolutions used a 3 × 3 kernel size. For comparison, MoDL was also implemented using the same unrolled parameters and CNN architecture. All networks were implemented in PyTorch. Evaluation: DBP was separately trained with and without ground-truth data. We also trained MoDL with ground-truth data. In addition, we also evaluated parallel imaging and compressed sensing (PICS) using BART, with 1 -Wavelet regularization parameter optimized over the validation set. Normalized root mean-squared error (NRMSE) was used to compare reconstructions. V. Fig. 2 shows the mean NRMSE on the training set for each epoch. In addition to a performance gap between supervised and unsupervised learning, unsupervised DBP has noisier updates, likely because the loss function in the measurement domain is a noisy surrogate to the NRMSE. Fig. 3 shows the NRMSE across the validation set for different numbers of unrolls during inference. Even though the networks were trained with 5 unrolls, best performance is seen for different number of unrolls (6, 10 and 12 for MoDL, unsupervised DBP, and supervised DBP, respectively). Compared to MoDL, the DBP formulation behaves more stably as the number of unrolls increases, which may be due to the hard data consistency constraint. At the optimal number of unrolls, unsupervised DBP outperforms PICS. Fig. 4. Box plot of test set NRMSE for supervised and unsupervised DBP at two different unrolls -the first matching unrolls at training, and the second chosen to minimize validation set mean NRMSE. Also shown is PICS NRMSE for optimized regularization on validation set. Fig. 5 shows some of the intermediate output stages for the supervised and unsupervised DBP networks, indicating that similar structure is learned in both CNNS; however, the supervised DBP appears to better amplify and denoise features in the image. The magnitude reconstructions and error maps of a representative slice from the test set are shown in Fig. 6. Supervised DBP achieves the lowest error, followed by unsupervised DBP, PICS, and MoDL. Small details in edge sharpness are retained with DBP, but reduced with MoDL and PICS. There are strong connections to iterative optimization and unrolled deep networks,,. Jointly optimizing over the images and weights could be viewed a non-linear extension to dictionary learning. Nonetheless, there is a cost in reconstruction error when moving to unsupervised learning, highlighting the importance of a large training data set to offset the missing ground-truth information. The choice of measurement loss function and data SNR may also greatly impact the quality. Fortunately, in many practical settings there is an abundance of undersampled or corrupted measurement data available for training. 
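To make the distinction between the two training objectives concrete, here is a minimal sketch (our own illustrative code, with forward_op standing in for the known measurement model A) of the image-domain loss used for supervised training and the measurement-domain loss used for unsupervised training:

import torch

def supervised_loss(x_hat, x_ref):
    # Image-domain Euclidean loss: requires the ground-truth image x_ref.
    return torch.mean(torch.abs(x_hat - x_ref) ** 2)

def unsupervised_loss(x_hat, y, forward_op):
    # Measurement-domain Euclidean loss: compares A(x_hat) to the acquired data y,
    # so no ground-truth image is needed during training.
    return torch.mean(torch.abs(forward_op(x_hat) - y) ** 2)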
In conclusion, the combination of basis pursuit denoising and deep learning can take advantage of undersampled data and provides a means to learn model-based deep learning reconstructions without access to ground-truth images.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BylRn72cUH
We present an unsupervised deep learning reconstruction for imaging inverse problems that combines neural networks with model-based constraints.
Deep learning approaches usually require a large amount of labeled data to generalize. However, humans can learn a new concept from only a few samples. One of the high-cognition human capabilities is to learn several concepts at the same time. In this paper, we address the task of classifying multiple objects by seeing only a few samples from each category. To the best of the authors' knowledge, there is no dataset specially designed for few-shot multiclass classification. We design a task of multi-object few-shot classification and an environment for easily creating controllable datasets for this task. We demonstrate that the proposed dataset is sound using a method which is an extension of prototypical networks.
Deep learning approaches are usually capable of solving a classification problem when a large labeled dataset is available during training BID12; BID4. However, when only a few samples of a new category are shown to a trained classifier, it either fails to generalize or overfits on the new samples. Humans, however, can easily generalize their prior knowledge to learn a new concept even with one sample. Few-shot learning approaches are proposed to address this gap between the human capability of learning a new concept with only a very few labeled samples and the machine capability of generalizing to a new concept. mini-ImageNet and tiered-ImageNet BID19 are two main datasets that were developed to help the research community address the problem of few-shot classification. Although humans are capable of learning a new concept with only a very few samples, learning several new concepts at the same time, with only a very few samples of each, is considered a high-cognition task BID1 and is very challenging even for humans. It is still an active area of study to understand how humans are capable of doing this. There could be many factors involved in this high-cognition process, and there are many hypotheses around it. One popular hypothesis is that the brain is able to learn a good representation that has high capacity and can generalize well BID7. Studying the reasons behind the human high-cognitive capability of learning a few new concepts in parallel and with only a very few samples is out of the scope of this paper. However, in this paper, we propose to extend the few-shot learning problem to the multi-object few-shot classification problem, moving a step towards filling the gap between the human cognitive capability of learning multiple new concepts in parallel with only a few samples and machine learning approaches. To do so, our first step is to define a dataset and a setup to address this problem, and an evaluation metric to measure our progression towards solving it. We argue that the existing datasets are not desirable for this task. Omniglot BID13, mini-ImageNet, and tiered-ImageNet are designed for single-object classification. Datasets such as MS COCO BID15 and Pascal VOC BID5 have multiple object classes, but they are not well suited for few-shot learning. The issue is the high imbalance of class co-occurrence (for example, the 'human' label occurs with all other classes). Therefore it is hard to prevent the learner from "sneak peeking" new concepts. To sum it up, this work's contribution is two-fold:
1. We propose the new task of multi-object few-shot classification to test a model's ability to disentangle and represent several objects on an image (see Section 3), and propose an extension to prototypical-style models to efficiently solve the task (Section 3.1);
2.
We construct a new dataset which provides a clean and controlled environment of multi-object images (see Section 4), and we provide the framework, benchmarks, and code for the community to explore controlled scenarios and alternative few-shot classification tasks.
The problem of learning new concepts with a small number of labeled samples is usually referred to as few-shot learning BID14. Two of the most famous datasets to address the problem of few-shot classification are mini-ImageNet and tiered-ImageNet BID19. Both of these datasets address the problem of few-shot classification of single objects. BID10 addresses the problem of few-shot object detection in natural images. There are two main groups of approaches addressing this problem: (i) optimization-based frameworks and (ii) metric-based frameworks. The optimization-based framework BID0 BID6 BID20 is a class of algorithms that learn how to quickly learn new concepts. A notable performant approach that does not fall into these two categories is SNAIL BID16. Meta-learning approaches work by learning a parameterized function that maps labeled training sets into classifiers. The metric-based framework learns a representation that minimizes intra-class distances while maximizing the distances between different classes. These approaches usually rely on an episodic training framework: the model is trained with sub-tasks (episodes) in which there are only a few training samples for each category. Matching networks train a similarity function between images. In each episode, an attention mechanism (over the encoded support) is used as a similarity measure for one-shot classification. In prototypical networks, a metric space is learned where embeddings of queries of one category are close to the centroid (or prototype) of the support of the same category, and far away from the centroids of other classes in the episode. Due to the simplicity and performance of this approach, many methods have extended this work. For instance, BID19 propose a semi-supervised few-shot learning approach and show that leveraging unlabeled samples outperforms purely supervised prototypical networks. BID10 propose to augment the support set by generating hallucinated examples. Task-dependent adaptive metric (TADAM) BID17 relies on conditional batch normalization BID18 to provide task adaptation (based on task representations encoded by visual features) to learn a task-dependent metric space.
In order to test the ability to disentangle unseen objects on a given image, we propose a task of multi-object few-shot classification. Few-shot classification. First, we briefly summarize single-object few-shot classification BID14. While a dataset for supervised learning is comprised of a number of input-label pairs, a dataset D for meta-learning is comprised of a number of tasks. In a common K-shot N-way classification, every task has a support set S = {(x i, y i)} i of size KN and a query set Q = {(x j, y j)} j. The learner is given a number of support-query pairs for training. For testing, the learner is given an unseen support set S and asked to predict the label for a certain query Q = x, whose label is one of the labels in the support. Usually, prototypes are computed in the following way. First, the images from the support set are embedded as e i = CNN(x i); then the prototype for each class is computed as the average embedding p n = mean i:y i =n e i for all n ∈ 1... N. In this work, we propose to extend this task to the multi-category case.
We define a K-shot N-way D-dense task to have a support set S = {(x i, Y i)} i, where each Y i is a tuple of D labels corresponding to the objects on image x i. This way, the learner is exposed to every object class exactly K times, and therefore our methods can be compared more easily across varying D. Similarly, the query Q = {(x i, Y i)} i is a set of images, each containing D objects with ground-truth labels Y i. A naïve approach requires a pseudo-label space exponential in D to represent all possible combinations. The learner can only be exposed to a limited number of possible combinations of objects, and the exponential size of the label space quickly surpasses the few shots (commonly from 5 to 20) available for training. We propose multi-prototype networks to tackle the aforementioned exponential explosion. For simplicity, in this work we assume that the labels are given in the order that objects appear on the image x i from left to right. This is a major simplification, and we will lift it in future work. To extend the proto-net setup, we train a model to produce D embeddings for every image. The rest of the procedure is identical to the proto-net: every query is compared to the prototypes, and the distance to the correct prototype is pushed down while all the rest are pushed up.
We aim to have a controlled environment for reliable experiments. To achieve this, we develop a dataset based on ShapeNet 3D models rendered with Blender in a setup similar to CLEVR. This provides us the flexibility to construct single- or multiple-object tasks and change the task parameters - the number of shots and ways. The dataset along with the generation tools will be made publicly available. In the next sections we describe in detail the dataset generation procedure. Then, to compare the complexity of the dataset to existing ones, we run a popular model, TADAM BID17, in a traditional single-object-per-image setup. Then, we increase the number of objects per image and report results with our proposed model. We generated the dataset using methods similar to the visual reasoning dataset CLEVR BID9. To increase the variability of the object classes, we used 3D meshes from the ShapeNet BID3 dataset. Beforehand, we randomly split the 55 ShapeNet classes into three subsets for training, validation, and testing (see Appendix A). The sizes of the subsets are 35, 10, and 10, respectively. We render images task-by-task. We uniformly sample the classes used in the task. Then, we sample a mesh of each class K times. We distribute the objects over canvases. Finally, we render the images by randomly placing meshes on each canvas. In order to produce a representation of an object independent of color, we strip the texture and apply a random color and a random texture of either rubber or metal. FIG3 demonstrates some samples from the test split. The model should be able to tackle these variations.
We train a prototypical network and TADAM with a 12-block residual feature extractor BID8. We summarize our results in TAB0 and compare to performance on mini-ImageNet. As can be seen, our dataset is simpler than mini-ImageNet, but it cannot be trivially solved by close-to-state-of-the-art methods even in the simplest single-object case. Therefore, we conclude that the dataset is neither trivial nor too hard. Having shown that the proposed dataset is sound for the single-object task, we increase the number of objects per image and apply the proposed MultiProtoNet (see Section 3.1). In all experiments below, we used a 12-block residual network that produces a corresponding number of embeddings per image.
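A minimal sketch of the prototype computation and the per-object query scoring described above (our own simplified PyTorch code; it assumes the encoder already returns D embeddings per image and that the object order is known, as in the simplification discussed earlier):

import torch

def build_prototypes(support_emb, support_labels, num_classes):
    # support_emb: (num_images, D, dim); D embeddings per support image.
    # support_labels: (num_images, D) integer labels aligned with those embeddings.
    flat_emb = support_emb.reshape(-1, support_emb.shape[-1])
    flat_lab = support_labels.reshape(-1)
    protos = torch.stack([flat_emb[flat_lab == c].mean(dim=0) for c in range(num_classes)])
    return protos                                         # (num_classes, dim)

def classify_queries(query_emb, protos):
    # query_emb: (num_queries, D, dim). Each of the D embeddings is assigned to the
    # nearest prototype by Euclidean distance, as in standard prototypical networks.
    flat = query_emb.reshape(-1, query_emb.shape[-1])
    dists = torch.cdist(flat, protos)                     # (num_queries * D, num_classes)
    return dists.argmin(dim=1).reshape(query_emb.shape[0], query_emb.shape[1])

During training, the negative distances would be passed through a softmax over classes and the cross-entropy to the correct label of each of the D objects would be minimized.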
All networks are optimized with Adam and use the Euclidean distance metric. The experiments are summarized in TAB1. We notice that while the accuracy drops significantly when transitioning from single to multiple objects, it does not drop as much from 2 to 3 objects.
In this work we introduced a task of few-shot multi-object classification and an environment for generating datasets for this task. We compared the proposed dataset to existing ones in the single-object case. Then, we used a simple extension of prototypical networks to conduct experiments in the multi-object case. We believe that this task will help diagnose metric-learning models that need to disentangle several objects on an image. One of the future directions we are taking is to lift the limitation of known object order (Section 3.1). Then we plan to use stronger feature extractors BID17 and extend the work to more natural data.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1gxgiA4uN
We introduce a diagnostic task which is a variation of few-shot learning and introduce a dataset for it.
We present a new approach to defining a sequence loss function to train a summarizer by using a secondary encoder-decoder as a loss function, alleviating a shortcoming of word-level training for sequence outputs. The technique is based on the intuition that if a summary is a good one, it should contain the most essential information from the original article, and therefore should itself be a good input sequence, in lieu of the original, from which a summary can be generated. We present experimental results where we apply this additional loss function to a general abstractive summarizer on a news summarization dataset. The result is an improvement in the ROUGE metric and an especially large improvement in human evaluations, suggesting enhanced performance that is competitive with specialized state-of-the-art models.
Neural networks are a popular solution to the problem of text summarization, the task of taking as input a piece of natural language text, such as a paragraph or a news article, and generating a more succinct text that captures the most essential information from the original. One popular type of neural network that has achieved state-of-the-art results is the attentional encoder-decoder neural network. In an encoder-decoder network, the encoder scans over the input sequence by ingesting one word token at a time to create an internal representation. The decoder is trained to compute a probability distribution over next words conditioned on a sequence prefix. A beam search decoder is typically used to find a high-likelihood output sequence based on these conditional word probability distributions. Since the next word depends heavily on previous words, the decoder has little hope of producing a correct distribution over next words unless it has the correct previous words. Thus the decoder is typically trained using teacher forcing, where the reference sequence prefix is always given to the decoder at each decoding step. In other words, regardless of what distributions are output by the decoder in training for timesteps (1, ..., t−1), at timestep t it is given the reference sequence prefix (y * 1, ..., y * t−1). Training at the sequence level can alleviate this discrepancy, but requires a differentiable loss function. In the Related Work section we review previous efforts. We present a novel approach to address the problem by defining a loss function at the sequence level using an encoder-decoder network as a loss function. In training, the summarizer's beam-search-decoded output sequence is fed as input into another network called the recoder. The recoder is an independent attentional encoder-decoder trained to produce the reference summary. Our experiments show that adding the recoder as a loss function improves a general abstractive summarizer on the popular CNN/DailyMail dataset, achieving significant improvements in the ROUGE metric and an especially large improvement in human evaluations.
We first give a high-level overview of an attentional encoder-decoder model for sequence-to-sequence learning. We present the specific abstractive model of , which serves as the baseline model in our experiments for comparison. We chose this as the baseline model because it is a general and popular model whose concepts have often appeared in other abstractive summarization works, and its results have often been used as a comparison baseline.
The ideas presented in this paper are largely independent of the specific summarizer and may be applicable to the training of other sequence to sequence models, many of which are also trained at the word loss level using teacher forcing. Since our focus is to demonstrate the effectiveness of our new loss function, the generality of the model, which does not account for specifics of the summarization problem such as article or sentence structure , suits our purpose well. The source article and target summary are each treated as a sequence of tokens. A token represents either a word or a punctuation mark. Let (x i) be the sequence of tokens in an article or input sequence, where i ranges from 1 to the length of the input sequence. Let (y t) be the sequence of word tokens in the summary or output sequence, where 1 ≤ t. The goal is to train a neural network to compute, at each timestep t, the probability distribution P (y t |y 1, ..., y t−1) over next tokens y t given previous tokens (y 1, ..., y t−1). In the teacher forcing training method, the network is trained to minimize the loss J ml, also called the maximum likelihood loss, defined as the average over timesteps t of (J ml) t = − log P (y * t |y * 1, ..., y * t−1) where the sequence prefix y * 1,..., y * t−1 is from the reference summary. At test time, the network computes P (y t |y 1, ..., y t−1) where y 1,..., y t−1 are chosen by a beam search decoder based on previously output distributions. The sequence ends when the special token STOP, appended to the end of all reference summaries in training, is output. Each x i is embedded as a dense vector w i using a v × d weight matrix W emb, where v is the size of the vocabulary and d is the number of hidden dimensions, a hyperparameter. The weights W emb are learned as part of training. Each w i is given in order as input into an LSTM to get hidden states (h f i), and also in reverse order into another LSTM to get backward states (h b i). The two state sequences are then stacked to form hidden states (h i) for reference by the decoding phase's attention mechanism. The decoding phase also uses an LSTM. At each output timestep t, a token y is embedded as a dense vector using the same W emb and fed into the LSTM, and the LSTM outputs a probability distribution over next tokens. For t = 0, y is the special placeholder symbol START. For t > 0, when training using teacher forcing, y = y * t−1 is the corresponding token from the reference sequence. When beam search decoding at test time, y = y t−1 is chosen based on the decoder's output probabilities at step t − 1. This difference in information about previous output tokens causes discrepancy between training and test conditions. The decoder makes use of an additional piece of information called attention , which allows it to "read" the encoder's output states. At each timestep with decoder LSTM state s t, the attention vector is computed as Finally, another linear transformation using a weight matrix V and weight vector b is used to map these outputs in the hidden space, through a softmax layer, into output probabilities over the vocabulary. That is, where V, V, b, b are trainable parameters. The pointer generator mechanism allows the decoder to copy a token directly from the input. It is particularly helpful for out-of-vocabulary tokens. 
The decoder "copies" source tokens with a distribution based on the attention vector a t and mixes it in with weight 1 − p gen, where p gen is a trainable function of the context vectors, states, and next word input. The final distribution P (y t |y 1, ..., y t−1) is the combined The summarizer loss function J S = J ml is computed based on this distribution. In one model variant, a coverage loss J cov that encourages attention vectors a t to look at different encoder states i is also added to the final loss function. That is, Full details on the pointer-generator and coverage mechanisms are found in. We have omitted them here for brevity. The recoder does not depend on these particulars of the summarization model. We focus on the quantities P vocab, P, and the total summarizer loss J S. Specific details of how these are computed are not needed in the following sections. We are now ready to present our main contribution. The recoder is a neural network model that takes the decoded sequence from the summarizer as input and is trained to output the reference summary. The purpose of the recoder is to serve as a sophisticated loss function to help train the summarizer, backpropagating errors to the summarizer's decoded output sequence during training. The intuition is that a good output sequence from the summarizer should allow the recoder to produce a good summary. In principle, the recoder can be any sequence to sequence model that can backpropagate to its inputs. One obvious choice is to use the same model structure as the summarizer itself. For our experiments we found it was sufficient to use the same network structure with lower dimensions, with the same attentional pointer-generator encoder-decoder network as the summarizer, but with half the number of hidden dimensions (from 256 to 128) and a instead of an LSTM in the encoder. This helped reduce the amount of memory required in training. Our first task is to represent the summarizer's decoded outputs as inputs to the recoder in a way that is differentiable. A beam search decoder will find a sequence (y 1, y 2, ...) of high average log probability − log P (y t |y 1, ..., y t−1) over timesteps t. We cannot use this discrete token sequence directly, but we can look at the underlying signals that determine the choice of each token. The output token y t is chosen based on the computed probability distributions P t (y t |y 1, ..., y t−1). Let us denote this distribution by P t for short. For a beam search of width k, the chosen token y t will have a probability in P t that is among the k highest. The exact choice is determined by the beam search mechanism, so we do not have a continuous function that relates P t directly to y t. However, P t does determine the range of choices, and feeding it as an input to the recoder can ensure that it contains the right information. Since each P t is computed by the summarizer based on the decoded prefix sequence (y 1, ..., y t−1), propagating errors to P t improves the summarizer via exposure to what it would see at test time, even if the summarizer is not "aware" of the search mechanism and cannot optimize for it. Since P t has dimensions equal to the size of the vocabulary (v = 50000 in the experiments), we multiply P t by a weight matrix to produce a dense representation in a lower dimensional space (128 in the experiments), just as we did for the one-hot token inputs of the summarizer. 
While we can use a new weight matrix here, reusing the same embedding matrix W emb as the summarizer helps avoid increasing the number of parameters. For the same reason we also reuse the summarizer's mapping V back to vocabulary space. Together they account for 90% of the summarizer's parameters. The input to the recoder is the sequence (w R t) where w R t = P t W emb with P t treated as a vector of length v. By training the recoder to output the reference summary, errors are propagated to its inputs w R t and then to P t. Note that although the summarizer's input w i is in effect W emb multiplied with one-hot vectors corresponding to x i, P t is the output of a softmax function and will be more evenly distributed than a one-hot vector. Embedded representations w R t will thus be a weighted sum of embedding vectors instead of individual ones like w i. The recoder can accommodate this difference since it is an independent network with its own parameters. The recoder can be trained jointly with the summarizer from scratch. We have found that it also suffices to add the recoder to the pretrained summarizer and continue training jointly using their combined losses. The recoder is trained using teacher forcing to produce the reference summary, analogously to J ml. To be specific, we minimize the loss function J R ml equal to the average of − log P R (y * t |y * 1, ..., y * t−1) across timesteps t, where P R denotes the recoder's output distributions. This maximum likelihood loss is sufficient for the recoder because its output is never used for decoding at test time. Meaningful recoder output depends on relevant information from the original article being encoded in its input w R t. However, J R ml only places requirements on the presence of information. We have not placed any requirements on brevity, so longer sequences w R t would likely yield better under this loss metric, barring any confounding effects from too much information. The effect is only constrained because recoder training is performed jointly with the summarizer, and summaries that are longer than reference summaries would tend to do worse with respect to J s. Intuitively, J R ml encourages recall of information. We can add a loss function on length to counterbalance for precision. The end of an output sequence is determined when the special STOP token is output. Actual words are likely to convey more useful information to the recoder, so training using J R ml lowers the STOP token's output probability. We can control length by applying a loss to every other token for timesteps beyond a desired length. We define the length loss as the average over t of The function Penalty(t) ≥ 1 defines a penalty curve for timesteps beyond a desired length, while hyperparameter λ defines the desired tradeoff between precision and recall. Other ways to control output length are possible, such as by explicitly adding the desired length into decoder state; , although these methods require changes to the model. Article some teenagers get driving lessons from their parents. other teens are taught by licensed instructors. but malia obama is n't your average 16-year-old: her driving lessons were provided by the u.s. secret service. asked who taught malia how to drive, first lady michelle obama told celebrity chef and daytime talk-show host rachael ray in an interview that it was the armed agents who provide around-the-clock security for the family. [...] 
pgen cov first lady michelle obama told celebrity chef and daytime talk-show host rachael ray in an interview that it was the armed agents who provide around-the-clock security for the family. mrs. obama has n't driven herself in seven or eight years. pgen cov+recoder malia obama, seen with her mother michelle obama in april 2009, reportedly was taught how to drive by secret service agents. but malia obama is n't your average 16-year-old: her driving lessons were provided by the u.s. secret service. asked who taught malia how to drive, first lady michelle obama said in an interview that it was the armed agents who provide around-the-clock security for the family. reference michelle obama told talk-show host rachael ray that secret service agents taught her daughter malia how to drive. mrs. obama has n't driven herself in seven or eight years, she said. she added that driving gives malia' a sense of normalcy,' helping her feel like the rest of her friends who are also driving. The final recoder loss J R is comprised of the teacher forced recoder loss and the length loss Abstracting away individual components of the summarizer's and recoder's loss functions, the end to end training is performed using a combination of their losses J: In Figure 1 comparing output summaries from the baseline pgen cov model trained with J S (described in Experiments section below) and the pgen cov+recoder model additionally trained with loss J S + J R, the latter's output contains mention of "malia", a relevant name in the article that also appears in the reference summary. If we trained only with the recoder loss J R, we would encourage the summarizer to output the right information, but the output may not conform to a language model, except to the extent it helps make the output more intelligible for the recoder. For the purpose of illustration, we continued training the pgen cov model using only J R. It produced the following summary for one of the articles in the CNN/DailyMail dataset: boston's miserable winter is now also its snowiest season it had a special 2.9 inches pushed the city into 108.6 inches in one season. This example output contains some relevant information but has grammatical errors. Summarization is a well studied problem, including both extractive and abstractive approaches;;. Many of the existing abstractive models are based on encoder-decoders and trained using teacher forcing. In this work we focus on improving the training of models such as these, and we have picked one particular model as the baseline for improvement. Many aspects of the By using an unspecialized model with commonly occurring elements as baseline, we hope the same concepts can apply to more advanced and specialized models. There are a number of related techniques to address exposure bias from teacher forcing in a sequence learning problem. One class of such techniques is motivated by the ideas of reinforcement learning (RL) for training actors to make discrete decisions The encoder-decoder is analogous to an actor being trained to learn a policy for choosing an output word at each time step. These decisions are evaluated in training not based on its output probabilities, but based on a reward function defined on the full output sequence, selected via beam search or other methods. Broadly speaking, there are a couple ways to turn rewards into gradients for training probability distributions at each timestep. One technique is the REINFORCE algorithm, Cast in these terms, the recoder serves the role of both components. 
However, instead of training toward a heuristic such as ROUGE, the recoder uses an encoder-decoder network, allowing it to account for complexities in evaluation of quality that are as sophisticated as the model being trained. An algorithm can only be as good as the metric toward which it is trained, and the recoder helps ease this upper bound on quality. In short, the key difference from reinforcement learning approaches is that the recoder replaces a simple reward heuristic with an encoder-decoder and its loss function. Another approach to account for the beam search decoded output sequence is. In this work, beam search is performed as part of training, and a loss is defined that penalizes the reference sequence prefix falling out of the beam. on two nonsummarization word sequence tasks, a differentiable approximation to beam search is used, and loss is defined on the sequence using Hamming distance. While these approaches can account for the decoder test time sequence output, they do not have the flexibility to credit alternative summaries that differ slightly in phrasing. In comparison, our approach can account for a range of correct summaries that differ from one another, regardless of subsequence overlap. Since the recoder is itself a type of summarization network, it too can keep pace should further improvements in summarizers be developed that output more varied summaries of high quality. However, using a differentiable approximation to beam search enhances backpropagation, and may be a direction for future improvement. The idea of re-encoding an output summary also appears in , where StraightThrough Gumbel-Softmax reparameterization is used to derive a loss function that allows backpropagation. Their reconstruction cycle loss variant is closest in concept to our work, except that since there is no reference summary in their problem, they train their analogue of the recoder to produce the original source. We did not take this approach because in general the summary is a lossy representation of the original, so training the recoder to produce it would subject it a large source of error that it cannot possibly reduce. In the extension to scheduled sampling for machine translation, a decoded sequence prefix is used in training to predict the next reference word after weighting over possible alignments, which allows for flexibility in word ordering. For the summarization problem, alternative summaries that differ by more than word alignment can still be of high quality. The idea of using a second model to help train the primary model has appeared in other contexts.; for machine translation, the decoded output of a translation model is fed as input into a second backtranslation model. The two separate models translate back and forth between source and target languages. We can think of the backtranslation model from target to source language as serving an analogous role to the recoder, although the aim of these works has been to generate synthetic training data to augment limited human training data, rather than to compute gradients directly in an end-to-end model for the purpose of improving the original network. In contrast to the translation problem, we also cannot expect the original article to be recreated from the summary, which in general will be a lossy representation of the article, so the recoder is asked to recreate the reference summary and not the original article. 
An analogous idea also appears in the problem of sentence compression (Févry et al.), where decoding from a shortened version of a sentence back to the original sentence is analogous to backtranslating from target to source language. In the related work on dual learning, the two models for the forward and backward problems share parameters to mutual benefit. In contrast, the recoder shares no parameters with the summarizer except the vocabulary mappings, relying instead on direct backpropagation. In sequence-level knowledge distillation for machine translation, the decoded output sequence of a translation model is used to train a student model to mimic the original model's output; the aim there is to better distill knowledge into a smaller student model, not to improve the original model. We ran experiments to test the effectiveness of adding the loss function to an attentional encoder-decoder summarization model. The TensorFlow code, which is based on the published code for that model, will also be made public along with trained models. We started with the provided trained summarizer models, added the recoder losses, and continued training. The additional training consumes about double the memory and computation time, as it requires beam search decoding as part of training, using a beam width of 4. The additional time was about 24 hours for each model, using an Nvidia Tesla P100 GPU on a cloud service provider. Test times are not affected since the recoder is only used during training. We ran experiments using the two versions of the baseline model, with and without the coverage mechanism. The model with coverage is pgen cov, while pgen omits the coverage mechanism. In terms of loss functions, pgen cov is trained using a loss of J_S = J_ml + J_cov as presented above, while pgen is trained using J_S = J_ml. In either case, the comparison models with the recoder are additionally trained using J_S + J^R. We used the summarizer's (P_vocab)_t directly in place of P_t in computing the recoder inputs, bypassing backpropagation to the pointer-generator mechanism, which is most helpful for out-of-vocabulary words that will be treated as UNK for w^R_t anyway. This yielded similar results while converging faster and requiring less computational resources. We fixed the length penalty Penalty(t) to be a graduated curve based on 1.04 for timesteps beyond L, where L is the length of the reference summary for the example. We experimented with different settings of λ to see its effects on output lengths and quality. The final setting of λ = 0.1 used for the ROUGE comparison in Table 1, discussed below, was selected based on its having the highest ROUGE-1 score on the validation set, where scores appeared in the same relative order as on the test set shown in Table 2. In line with the baseline and for comparison with previous work, we first assess quality using the ROUGE metric. The recoder can in principle capture more complex notions of quality than ROUGE, so this heuristic cannot fully capture all aspects of improvement that the human evaluations described below can. Nonetheless, since we do not train toward this metric, it should serve as an independent indicator of overall quality improvements. Our retrained baseline models achieve scores comparable to those previously reported, e.g. 39.47 vs 39.53 for ROUGE-1. The pretrained models provided with the published code resulted in lower scores (differences may be due to changes in the underlying software libraries over time), but we were able to achieve these comparable scores by further training with a lower learning rate. The results are shown in Table 1.
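Since the comparisons that follow are reported as ROUGE scores, a minimal sketch of ROUGE-1 F1 (clipped unigram overlap) may be useful. This is an illustrative simplification, not the official ROUGE toolkit (no stemming, stopword handling, or bootstrap resampling).

```python
from collections import Counter

def rouge_1_f1(candidate_tokens, reference_tokens):
    """Clipped unigram-overlap F1 between a candidate and a reference summary."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum(min(count, ref[tok]) for tok, count in cand.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# e.g. rouge_1_f1("the agents taught malia".split(),
#                 "secret service agents taught malia how to drive".split())
```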
For pgen, we see that adding the recoder loss results in an improvement in scores from (36.08, 15.60, 32.95) to (37.07, 16.02, 33.83). For the higher-scoring pgen cov baseline, adding the recoder loss improves scores from (39.47, 17.37, 36.26) to (40.44, 18.15, 36.90). In both cases we see close to a 1-point improvement in ROUGE-1, and a smaller improvement in ROUGE-2 and ROUGE-L, versus the baseline. We see that adding the recoder to two variations of a strong abstractive summarizer has improved their performance by a significant amount, even though we have not changed the models, only the loss function used to train them. These scores are also an improvement over using the first 3 sentences from the article as the summary (lead-3), which had outperformed the baseline models. To get a sense of the magnitude of improvement, we look at one recently published model that uses the same non-anonymized version of the dataset without further processing, allowing for a relatively direct comparison. The "hierarchical" model is an abstractive model that accounts for sentence-level structure. Its overall results are close to those of pgen cov+recoder, which has higher ROUGE-1 and ROUGE-2 scores but lower ROUGE-L. Applying the recoder to a generic baseline has resulted in ROUGE gains that match those of a model with more advanced mechanisms. There are other sophisticated abstractive models with higher reported ROUGE scores, especially those that are extractive hybrids or employ RL techniques, with a ROUGE-1 score as high as 41.69 in the latter line of work after eliminating duplicate trigrams in post-processing. If we also eliminated duplicate trigrams in post-processing, the pgen cov+recoder scores would be (40.71, 18.20, 37.12), but a full treatment should also eliminate duplicate trigrams during training, requiring a far more complex implementation. These models are generally more tuned to the specific problem and metric, such as accounting for sentence boundaries or including the ROUGE metric as a reward, and it is reasonable that they achieve higher scores on this metric. Since the recoder has the potential to capture notions of similarity beyond the n-gram overlap used in ROUGE, we performed further evaluation using human readers. We ran human evaluation experiments comparing both the pgen cov+recoder λ = 0.1 model and the λ = 0.2 model against the pgen cov baseline. The λ = 0.2 model's average summary length of 68.7 is close to the baseline's 66.1, which minimizes differences due to length. From the CNN/DailyMail test set examples where the model gave beam-search-decoded outputs that differed from the baseline (95.2% for the λ = 0.1 model and 95.5% for the λ = 0.2 model, out of 11490 examples), we randomly sampled 300 examples for each model. Workers from Amazon Mechanical Turk were shown up to the first 400 tokens of the article (the same as the model inputs), the reference summary, and the two generated summaries, randomly assigned to side A and side B. They were asked to select an overall preference between A and B, and then a preference in terms of readability and relevance. Since we do not need a confident score for any one example for the purpose of evaluating the model, we limited each example to one worker and each worker to 5 examples to increase diversity across example-worker pairs. Workers were asked the following questions with no other guidance on interpretation: • Which summary is better overall? • Which summary has better readability? • Which summary contains information that is more relevant? The preference results are shown in Table 3.
We see that pgen cov+recoder was preferred overall 60.3% (λ = 0.1) and 55.0% (λ = 0.2) of the time over the pgen cov baseline. If we account for the remaining 4.8% and 4.5% of cases where the models gave outputs identical to the baseline by assigning equal preference to them, the preference ratios would be adjusted slightly to 59.8% and 54.8% overall, and 62.1% and 55.1% for relevance, respectively. The overall improvement may be largely explained by the improvement in relevance, as the relevance preference differed from the overall preference for only 19 and 25 examples, respectively. The most direct comparison from previous work may be the human evaluations that compared a model (rnn-ext+abs+RL+rerank), with a ROUGE-1 score of 40.88, against the pgen cov baseline. That study allowed a choice of "Equally good/bad" which our survey did not, but if we assign those ratings equally to the two sides, their results suggest a preference for relevance 52.8% of the time. If we similarly split "equal" ratings in the head-to-head human evaluations of another study, their overall preference would be 59.3%. However, that comparison pitted a model (m7) with a ROUGE-1 score of 41.69 against a baseline model (m3) that had only achieved a ROUGE-1 score of 38.01, which is lower than our pgen cov comparison baseline. Differences such as post-processing and survey format make these comparisons imprecise, but they give us a sense of the magnitude of improvement reflected by a 54.8%-59.8% overall preference over the pgen cov baseline. We have presented the use of an encoder-decoder as a sophisticated loss function for sequence outputs in the problem of summarization. The recoder allows us to define a differentiable loss function on the decoded output sequence during training. Experimental results using both ROUGE and human evaluations show that adding the recoder when training a general abstractive summarizer significantly boosts its performance, without requiring any changes to the model itself. In future work we may explore whether the general concept of using a model as a loss function has wider applicability to other problems.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SylkzaEYPS
We present the use of a secondary encoder-decoder as a loss function to help train a summarizer.
Existing unsupervised video-to-video translation methods fail to produce translated videos which are frame-wise realistic, semantic information preserving and video-level consistent. In this work, we propose a novel unsupervised video-to-video translation model. Our model decomposes the style and the content, uses a specialized encoder-decoder structure and propagates the inter-frame information through bidirectional recurrent neural network (RNN) units. The style-content decomposition mechanism enables us to achieve long-term style-consistent video translation as well as provides us with a good interface for modality flexible translation. In addition, by changing the input frames and style codes incorporated in our translation, we propose a video interpolation loss, which captures temporal information within the sequence to train our building blocks in a self-supervised manner. Our model can produce photo-realistic, spatio-temporally consistent translated videos in a multimodal way. Subjective and objective experimental results validate the superiority of our model over the existing methods. Recent image-to-image translation (I2I) works have achieved astonishing results by employing Generative Adversarial Networks (GANs). Most of the GAN-based I2I methods mainly focus on the case where paired data exists (Isola et al. (2017b); Wang et al. (2018b)). However, with the cycle-consistency loss introduced in CycleGAN, promising performance has also been achieved for unsupervised image-to-image translation. While there is an explosion of papers on I2I, its video counterpart is much less explored. Compared with the I2I task, video-to-video translation (V2V) is more challenging. Besides the frame-wise realism and semantic preservation requirements, which are also present in the I2I task, V2V methods additionally need to consider the temporal consistency issue in order to generate sequence-wise realistic videos. Consequently, directly applying I2I methods to each frame of the video will not work out, as those methods fail to utilize temporal information and cannot assure any temporal consistency within the sequence. In their seminal work, Wang et al. (2018a) combined optical flow and video-specific constraints and proposed a general solution for V2V in a supervised way. Their sequential generator can generate long-term high-resolution video sequences. However, their vid2vid model (Wang et al. (2018a)) relies heavily on labeled data. Based on the I2I CycleGAN approach, previous methods on unsupervised V2V proposed to design spatio-temporal translators or losses to achieve more temporally consistent results while preserving semantic information. In order to generate temporally consistent video sequences, a 3DCycleGAN method was proposed which adopts 3D convolution in the generator and discriminator of the CycleGAN framework to capture temporal information. However, since the small 3D convolution operator (with temporal dimension 3) only captures dependency between adjacent frames, 3DCycleGAN cannot exploit temporal information for generating long-term consistent video sequences. Furthermore, the vanilla 3D discriminator is also limited in capturing complex temporal relationships between video frames. As a result, when the gap between the input and target domains is large, 3DCycleGAN tends to sacrifice image-level realism and generates blurry and gray outcomes. The RecycleGAN approach designed a Recycle loss for jointly modeling the spatio-temporal relationship between video frames.
They trained a temporal predictor to predict the next frame based on two past frames, and plugged the temporal predictor into the cycle loss to impose a spatio-temporal constraint on the traditional image translator. As the temporal predictors can be trained from video sequences in the source and target domains in a self-supervised manner, the recycle loss is more stable than the 3D discriminator loss of 3DCycleGAN. The RecycleGAN method achieved state-of-the-art unsupervised V2V results. Despite its success in translation scenarios with less variety, such as face-to-face or flower-to-flower, we have experimentally found that applying RecycleGAN to translate videos between domains with a large gap is still challenging. We think the following two points are the major reasons which affect the application of the RecycleGAN method in complex scenarios. On the one hand, the translator in RecycleGAN processes input frames independently, which limits its capacity to exploit temporal information; on the other hand, its temporal predictor only imposes the temporal constraint between a few adjacent frames, so the generated video content might still shift abnormally: a sunny scene could change to a snowy scene in the following frames. A concurrent work incorporates optical flow to add motion cycle consistency and motion translation constraints. However, this Motion-guided CycleGAN still suffers from the same two limitations as the RecycleGAN method. In this paper, we propose UVIT, a novel method for unsupervised video-to-video translation. We assume that a temporally consistent video sequence should simultaneously be: 1) long-term style consistent and 2) short-term content consistent. Style consistency requires the whole video sequence to have the same style and ensures that the video frames are overall realistic, while content consistency refers to the appearance continuity of contents in adjacent video frames and ensures that the video frames are dynamically vivid. Compared with previous methods, which mainly focused on imposing short-term consistency between frames, we have additionally considered the long-term consistency issue, which is crucial for generating visually realistic video sequences. Figure 1: Overview of our proposed UVIT model: given an input video sequence, we first decompose it into content via the Content Encoder and style via the Style Encoder. Then the content is processed by special RNN units (TrajGRUs) to obtain the content used for translation and interpolation recurrently. Finally, the translation content and the interpolation content are decoded into the translated video and the interpolated video together with the style latent variable. We depict here the video translation loss (orange), the cycle consistency loss (violet), the video interpolation loss (green) and the style encoder loss (blue). To simultaneously impose style and content consistency, we adopt an encoder-decoder architecture as the video translator. Given an input frame sequence, a content encoder and a style encoder first extract its content and style information, respectively. Then, a bidirectional recurrent network propagates the inter-frame content information. Updating this information with the single-frame content information, we get the spatio-temporal content information. Finally, making use of conditional instance normalization, the decoder takes the style information as the condition and utilizes the spatio-temporal content information to generate the translation result.
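The following is a minimal NumPy sketch of the style-conditioned normalization step just described; how the scale and shift are produced from the style code (e.g. via a small MLP) is the authors' design choice and is only assumed here.

```python
import numpy as np

def conditional_instance_norm(content, gamma, beta, eps=1e-5):
    """Normalise each channel of a content feature map, then re-scale and
    shift it with parameters derived from the style code.

    content      -- (C, H, W) content features for one frame
    gamma, beta  -- (C,) scale and shift predicted from the style code z
    """
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]
```

Keeping gamma and beta fixed for every frame of a clip is what ties the whole decoded sequence to a single style.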
An illustration of the proposed architecture can be found in figure 1. By applying the same style code to decode the content feature for a specific translated video, we can produce a long-term consistent video sequence, while the recurrent network helps us combine multi-frame content information to achieve content consistent outputs. The conditional decoder also provides us with a good interface to achieve modality flexible video translation. Besides using the style dependent content decoder and bidirectional RNNs to ensure long-term and short-term consistency, another advantage of the proposed method lies in our training strategy. Due to our flexible Encoder-RNN-Decoder architecture, the proposed translator can benefit from the highly structured video data and being trained in a self-supervised manner. Concretely, by removing content information from frame t and using posterior style information, we use our Encoder-RNNDecoder translator to solve the video interpolation task, which can be trained by video sequences in each domain in a supervised manner. In the RecycleGAN method, proposed to train video predictors and plugged them into the GAN losses to impose spatio-temporal constraints. They utilize the structured video data in an indirect way: using video predictor trained in a supervised way to provide spatio-temporal loss for training video translator. In contrast, we use the temporal information within the video data itself, all the components, i.e. Encoders, RNNs and Decoders, can be directly trained with the proposed video interpolation loss. The processing pipelines of using our Encoder-RNN-Decoder architecture for the video interpolation and translation tasks can be found in figure 2, more details of our video interpolation loss can be found in section 2. The main contributions of our paper are summarized as follows: 1. a novel Encoder-RNN-Decoder framework which decomposes content and style for temporally consistent and modality flexible unsupervised video-to-video translation; 2. a novel video interpolation loss that captures the temporal information within the sequence to train translator components in a self-supervised manner; 3. extensive experiments showing the superiority of our model at both video and image level. Let A be the video domain A, a 1:T = {a 1, a 2, ..., a T} be a sequence of video frames in A, let B be the video domain B, b 1:T = {b 1, b 2, ..., b T} be a sequence of video frames in B. For example, they can be sequences of semantic segmentation labels or scene images. Our general goal of unsupervised video-to-video translation is to train a translator to convert videos between domain A and domain B with many-to-many mappings, so that the distribution of the translated video would be close to that of the real target domain video. More concretely, to generate the style consistent video sequence, we assume each video frame has a style latent variable z. Let z a ∈ Z A and z b ∈ Z B be the style latent variables in domain A and B, respectively. Our target is to align the conditional distribution of translated videos and target domain videos, i.e. P (b The style information can be drawn from the prior or encoded from the style encoder in an example-based way. In addition, taking the prior subset information (rain, snow, day, night, etc.) as label and incorporating that into the style code, we can also achieve deterministic control for the style of the output. 
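As a concrete illustration of the style code described above, the sketch below composes a sub-domain label with a stochastic part. The 21-dimensional size and the sub-domain names follow the experiments section later in the paper, but the exact split between the two parts is an assumption.

```python
import numpy as np

def make_style_code(subset_label=None, num_subsets=5, noise_dim=16,
                    rng=np.random.default_rng()):
    """Illustrative style code: one-hot sub-domain label (day, sunset, rain,
    snow, night) concatenated with a stochastic part for multimodal outputs."""
    label = np.zeros(num_subsets)
    if subset_label is not None:
        label[subset_label] = 1.0           # deterministic sub-domain control
    noise = rng.standard_normal(noise_dim)  # stochastic multimodal part
    return np.concatenate([label, noise])   # 21-dimensional code

# The same code is shared by all frames of a clip, giving long-term style
# consistency; resampling only the noise part yields different, e.g., sunsets.
z_sunset = make_style_code(subset_label=1)
```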
In this work, we assume a shared content space such that corresponding frames in the two domains are mapped to the same latent content code, as in UNIT. To achieve the goal of unsupervised video-to-video translation, we propose an Encoder-RNN-Decoder translator which contains the following components: • Two content encoders CE_A and CE_B, which extract the frame-wise content information in the common spatial content space (e.g., CE_A(a_t) = l_t). • Two style encoders SE_A and SE_B, which encode video frames into the respective style domains (e.g., SE_A(a_{1:T}) = z_a^post, the posterior style latent variable). In practice, we usually take the first frame to conduct style encoding (SE_A(a_1) = z_a^post). • Two Trajectory Gated Recurrent Units (TrajGRUs), TrajGRU_forw and TrajGRU_back, which propagate the inter-frame content information in the forward and the backward direction to form the forward content l_t^forw and backward content l_t^back recurrently. • One Merge Module, Merge, which adaptively combines l_t^forw and l_t^back. Without the l_t from the current frame, it obtains the interpolation content l_t^interp; using l_t to update l_t^interp, it obtains the translation content l_t^trans. • Two conditional content decoders CD_A and CD_B, which take the spatio-temporal content information and the style code to generate the output frame. They can produce the interpolation frame (e.g., CD_A(l_t^interp, z_a) = a_t^interp), where z_a is the prior style latent variable of domain A drawn from the prior distribution. Combining the above components, we obtain two conditional video translation mappings, one from domain A to B and one from B to A. In order to achieve style-consistent translation results, we let all the frames in a video sequence share the same style code z_a (z_b). Besides imposing long-term style consistency, another benefit of the conditional generator is modality flexible translation. By assigning part of the dimensions of the style code to encode subset labels in the training phase, we are able to control the subset style of the translated video in a deterministic way. As we propose to use the video interpolation loss to train the translator components in a self-supervised manner, we also define the video interpolation mappings, one within each domain. Though the interpolation mapping is conducted within each domain, the interpolation and translation mappings use exactly the same building blocks. An illustration of the translation and interpolation mappings is provided in figure 2; the translation branch additionally updates the interpolation content with the content (l_t) from the current frame (a_t). Video translation loss. The translated video frames should be similar to the real samples in the target domain. Both an image-level discriminator and a video-level discriminator are added to ensure image-level and video-level quality. Here we adopt a relativistic LSGAN loss; the loss for domain B is defined over both discriminators, and the loss for domain A is defined in the same way. Video interpolation loss. The interpolated video frames should be close to the ground-truth frames; at the same time, they should be realistic compared to other frames in the domain. The loss term in domain A therefore combines a reconstruction term against the real frames a_{2:T−1} with an adversarial term; because of the characteristics of the bidirectional TrajGRUs, only frames from time 2 to T−1 are taken to compute the video interpolation loss, and the loss for domain B is defined in the same way. Cycle consistency loss. This loss is added to ensure semantic consistency: the loss term in domain A compares input frames with their reconstructions after translating to domain B and back, and the loss for domain B is defined in the same way. Style encoder loss.
To train the style encoder, the style reconstruction loss and style adversarial loss are defined as follows: Here, z Our objective for the Generator: Here G are the generator modules, which consist of CE A, CE B, SE A, SE B, T rajGRU f orw, T rajGRU back, M erge, CD A and CD B. Our objective for the Discriminator: Here, D are discriminator modules, which consist of We aim to solve the optimization problem: Implementation details: Our model is trained with 6 frames per batch, with a resolution of 128 × 128. This enables us to train our model with a single Titan Xp GPU. During test time, we follow the experimental setting of Wang et al. (2018a) and load video clips with 30 frames. These 30 frames are divided into 7 smaller sequences of 6 frames with overlap. They all share the same style code to be style consistent. Please note that our model can be easily extended to process video sequences with any lengths. Details of the network architecture are attached in Appendix A.2. We use the Viper dataset . Viper has semantic label videos and scene image videos. There are 5 subsets for the scene videos: day, sunset, rain, snow and night. The large diversity of scene scenarios makes this dataset a very challenging testing bed for the unsupervised V2V task. We quantitatively evaluate translation performance by different methods on the imageto-label and the label-to-image mapping tasks. We further conduct the translation between different subsets of the scene videos for qualitative analysis. Before comparing the proposed UVIT with state-of-the-art approaches, we first conduct ablation study experiments to emphasize our contributions. We provide experimental to show the effect of style-conditioned translation and the effectiveness of the proposed video interpolation loss. UVIT utilizes an Encoder-RNN-Decoder architecture and adopts a conditional decoder to ensure the generated video sequence to be style consistent. The conditional decoder also provides us with a good interface to achieve modality flexible video translation. In our implementation, we use a 21-dimensional vector as the style latent variable to encode the subset label as well as the stochastic part. By changing the subset label, we are able to control the subset style of the generated video in a deterministic way. Meanwhile, by changing the stochastic part, we can generate various video sequences in a stochastic way. In figure 3, we use the same semantic label sequence to generate video sequences with different sub-domain labels. In figure 4, inducing the same subset label -sunset but changing the stochastic part of the style latent variable, we present different sunset videos generated from the same semantic label sequence. Figure 3 and figure 4 clearly show the effectiveness of the proposed conditional video translation mechanism. Please note that the training of our method does not rely on the subset labels, we incorporate subset labels for the purpose of a deterministic controllable translation. Without the subset labels, we can still generate multimodal style consistent in a stochastic way. Video Interpolation Loss: In this part, we provide ablation experiments to show the effectiveness of the proposed video interpolation loss. We conduct ablation studies on both the image-to-label and the label-to-image tasks. Besides comparing UVIT with and without video interpolation loss, we also train UVIT with image reconstruction loss , which only uses image-level information to train encoder-decoder architectures in a self-supervised manner. 
We denote UVIT trained without the video interpolation loss as "UVIT w/o vi-loss" and UVIT trained without the video interpolation loss but with an image reconstruction loss as "UVIT w/o vi w ir loss". We follow the experimental setting of RecycleGAN and use semantic segmentation metrics to evaluate the image-to-label results quantitatively. We report the Mean Intersection over Union (mIoU), Average Class Accuracy (AC) and Pixel Accuracy (PA) achieved by the different methods in Table 1. For the label-to-image task, we use the Fréchet Inception Distance (FID) to evaluate the feature distribution distance between translated videos and ground truth videos. As in vid2vid (Wang et al. (2018a)), we use the pretrained I3D model to extract features from videos. We use the semantic labels from the respective sub-domains to generate videos and evaluate the FID score on all the subsets of the Viper dataset. The FID scores achieved by the proposed UVIT and its ablations can be found in Table 2. On both the image-to-label and label-to-image tasks, the proposed video interpolation loss plays a crucial role for UVIT to achieve good translation results. In addition, compared with the image-level image reconstruction loss, the video interpolation loss can effectively incorporate temporal information and delivers better video-to-video translation results. Table 2: Ablation study: Label-to-image FID. More details can be found in Section 3.1. Image-to-label mapping: We use exactly the same setting as in our ablation study to compare UVIT with RecycleGAN on the image-to-label mapping task. The mIoU, AC and PA values achieved by the proposed UVIT and the competing methods are listed in Table 3. The results clearly validate the advantage of our method over the competing approaches in terms of preserving semantic information. Table 5: Label-to-image: Human Preference Score. Vid2vid is a supervised method and the other methods are unsupervised approaches; more details can be found in Section 3.2. Label-to-image mapping: In this setting, we compare the quality of the translated video sequences produced by different methods. We first report the FID score on all the sub-domains of the Viper dataset in the same setting as our ablation experiments. As the original RecycleGAN method cannot produce long-term style-consistent video sequences, we also report the results achieved by our improved version of RecycleGAN. Concretely, we develop a conditional version which formally controls the style of the generated video sequences in a similar way to our UVIT model, and denote this conditional version as improved RecycleGAN. The FID scores of the different methods are shown in Table 4. The proposed UVIT achieves a better FID on all 5 sub-domains. To thoroughly evaluate the visual quality of the video translation results, we conduct a subjective evaluation on the Amazon Mechanical Turk (AMT) platform. We compare the proposed UVIT with 3DCycleGAN and RecycleGAN. The video-level and image-level human preference scores (HPS) are reported in Table 5. For reference, we also compare the video-level quality between UVIT and the supervised vid2vid model (Wang et al. (2018a)). Meanwhile, an image-level quality comparison between UVIT and CycleGAN (the image translation baseline) is also included. The results demonstrate the effectiveness of our proposed UVIT model. In the video-level comparison, our unsupervised UVIT model outperforms the competing unsupervised RecycleGAN and 3DCycleGAN by a large margin, and achieves comparable results to the supervised benchmark.
In the image-level comparison, UVIT achieves better HPS than both the V2V competing approaches and the image-to-image baseline. A qualitative example in figure 5 also shows that UVIT model produces a more content consistent video sequence. It could not be achieved by simply introducing the style control without the specialized network structure to record the inter-frame information. Besides translating video sequences between image and label domains, we also train models to translate video sequences between different image subsets and different video datasets. In figure 6, we provide visual examples of video translation from Sunset to Day scenes in the Viper dataset. More of translation between Viper and Cityscapes datasets can be found in our Appendix. In this paper, we have proposed UVIT, a novel method for unsupervised video-to-video translation. A novel Encoder-RNN-Decoder architecture has been proposed to decompose style and content in the video for temporally consistent and modality flexible video-to-video translation. In addition, we have designed a video interpolation loss which utilizes highly structured video data to train our translators in a self-supervised manner. Extensive experiments have been conducted to show the effectiveness of the proposed UVIT model. Without using any paired training data, the proposed UVIT model is capable of producing excellent multimodal video translation , which are image-level realistic, semantic information preserving and video-level consistent. Image level discriminator loss. This loss term (for D img A in domain A) is defined as follows: for domain B is defined in the same way. Video level discriminator loss. This loss term (for D vid A in domain A) is defined as follows: for domain B is defined in the same way. Style latent variable discriminator loss. This loss term (for D Z A in style domain A) is defined as follows: for style domain B is defined in the same way. The Trajectory Gated Recurrent Units (TrajGRUs) can actively learn the locationvariant structure in the video data. It uses the input and hidden state to generate the local neighborhood set for each location at each time, thus warping the previous state to compensate for the motion information. We take two TrajGRUs to propagate the inter-frame information in both directions in the shared content space. With the video being in a resolution of 128 × 128, we use a single Titan Xp GPU to train our network for 3 to 4 days to get a mature model. Due to the GPU memory limitation, the batch size is set to be one. Currently, the frame per clip is 6. Feeding more frames per clip may improve the ability of our model to capture the content dependency in a longer range. However, it requires more GPU memory. The same requirement holds if we want to achieve a higher resolution and display more details. An example of style inconsistency of RecyceGAN is shown in figure 7. A qualitative example of the mapping between images and labels can be found at figure 8, which shows that our UVIT model can output semantic preserving and consistent segmentation labels. More on the label-to-image mapping comparison of UVIT and Improved RecycleGAN are plotted in figure 9 and figure 10. More on label sequences to image sequences with multimodality are plotted in figure 11. The Cityscapes dataset has real-world street scene videos. As a supplement, we conduct qualitative analysis on the translation between scene videos of Cityscapes and Viper dataset. The is organized in figure 12.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkevCyrFDS
A temporally consistent and modality flexible unsupervised video-to-video translation framework trained in a self-supervised manner.
Providing transparency of AI planning systems is crucial for their success in practical applications. In order to create a transparent system, a user must be able to query it for explanations about its outputs. We argue that a key underlying principle for this is the use of causality within a planning model, and that argumentation frameworks provide an intuitive representation of such causality. In this paper, we discuss how argumentation can aid in extracting causalities in plans and models, and how they can create explanations from them. Explainability of AI decision-making is crucial for increasing trust in AI systems, efficiency in human-AI teaming, and enabling better implementation into real-world settings. Explainable AI Planning (XAIP) is a field that involves explaining AI planning systems to a user. Approaches to this problem include explaining planner decision-making processes as well as forming explanations from the models. Past work on model-based explanations includes an iterative approach BID14 as well as using explanations for more intuitive communication with the user BID5. With respect to human-AI teaming, the more helpful and illustrative the explanations, the better the performance of the system overall. Research into the types of questions and motivations a user might have includes work with contrastive questions BID9. These questions are structured as' Why F rather than G?', where F is some part (i.e. action(s) in a plan) of the original solution and G is something the user imagines to be better. While contrastive questions are useful, they do not consider the case when a user doesn't have something else in mind (i.e. G) or has a more general question about the model. This includes the scenario in which the user's understanding of the model is incomplete or inaccurate. Research in the area of model reconciliation attempts to address this knowledge gap BID1.More broadly, questions such as' Why A?', where A is an action in the plan, or'How G?', where G is a (sub)goal, must be answerable and explainable. Questions like these are inherently based upon definitions held in the domain related to a particular problem and solution. The user's motivation behind such questions can vary: he could think the action is unnecessary, be unsure as to its effects, or think there is a better option. Furthermore, questions regarding particular state information may arise, such as'Why A here?' and'Why can't A go here?'. For these, explanations that include relevant state information would vastly improve their efficiency when communicating with a user BID9. This is especially true for long plans, when a user does not have access to a domain, or the domain is too complex to be easily understood. Thus, extracting relevant information about action-state causality from the model is required. In the space of planning, causality underpins a variety of research areas including determining plan complexity BID6 and heuristics BID7. Many planners also can create causal graph visualizations of plans for a user to interact with BID12. The general structure of causality in planning is'action causes state'. Indirectly, this can be seen as'action enables action', where the intermediary state is sufficient for the second action to occur. Hilton describes different'causal chains' which mirror the types of causality found in planning; action-state causality can be identified as either a'temporal' or'unfolding' chain, while action-action causality is similar to an'opportunity chain' BID8. 
For now, we will focus on these two types of general causality. To represent the causality of a model, argumentation is a good candidate; as detailed by BID0, argumentation frameworks and causal models can be viewed as two versions of one entity. A recent related work uses argumentation for explainable scheduling (Cyras et al. 2019). We consider an ASPIC + (Modgil and Prakken 2013) style framework with defeasible rules capturing the relationships between actions in a plan and strict rules capturing actionstate causality. This structure allows more than a causal representation of a plan; it allows multiple types of causality to be distinguished and different causal'chunks' to be created and combined to be used as justification for explanations. In this paper we present an initial approach for using argumentation to represent causality, which can then be used to form more robust explanations. In the following sections, a motivating scenario will be introduced and used to showcase our current approaches of abstracting causalities and state information into argumentation frameworks. Consider a simple logistics scenario in which three trucks are tasked with delivering three packages to different locations. The user analyzing the planner output has the plan as well as a general, non-technical understanding of the model and the goals of the problem; the user knows that trucks can move between certain waypoints that have connecting roads of differing lengths, there are refueling stations at waypoints B and E, and some subgoals of the problem are to have package 1 delivered to waypoint C, package 2 delivered to waypoint G, and package 3 delivered to waypoint D. The user is also aware that the three trucks and three packages are at waypoint A in the initial state. A basic map of the domain and plan are shown in FIG1, respectively. Even with a simple and intuitive problem such as this, questions may arise which cannot be answered trivially. One such question is'Why drive truck 1 to waypoint E?'. Addressing this question requires the causal consequence of applying the action; in other words, how does driving truck 1 to waypoint E help in achieving the goal(s)?As discussed previously, tracking state information throughout a plan can be useful for explanations. This is especially true when values of state variables are not obvious at any given point in a plan and their relevance to a question is not known. A question such as'Why drive truck 3 to waypoint B?' has this property. These two questions will be addressed in the following sections. As mentioned above, in this paper we will make use of ASPIC + as the underlying argumentation system from which explanations are constructed. However, what we are suggesting is not limited to ASPIC +; we can imagine using most formal argumentation systems to reason in this way. For a full description of ASPIC + see BID10. In this paper we only make use of the ability to construct arguments, and so that is the only aspect of the system that we describe. We start with a language L, closed under negation. A reasoner is then equipped with a set Rules of strict rules, denoted φ 1,..., φ n → φ, and defeasible rules, denoted φ 1,..., φ n ⇒ φ, where φ 1,..., φ n, φ are all elements of L. A knowledge base ∆ is then a set of elements K from L and a set Rules. From ∆ it is possible to construct a set of arguments A(∆), where an argument A is made up of some subset of K, along with a sequence of rules, that lead to a . 
Given this, Prem(·) returns all the premises, Conc(·) returns the conclusion, and TopRule(·) returns the last rule in the argument. An argument A is then:
• φ if φ ∈ K, with Prem(A) = {φ}; Conc(A) = φ; Sub(A) = {A}; and TopRule(A) = undefined.
• A_1,..., A_n → φ if A_i, 1 ≤ i ≤ n, are arguments and there exists a strict rule of the form Conc(A_1),..., Conc(A_n) → φ.
• A_1,..., A_n ⇒ φ if A_i, 1 ≤ i ≤ n, are arguments and there exists a defeasible rule of the form Conc(A_1),..., Conc(A_n) ⇒ φ.
Then, given K = {a; b} and Rules = {a → c; b, c ⇒ d}, we have the following arguments: A_1: a; A_2: b; A_3: A_1 → c; and A_4: A_2, A_3 ⇒ d. When applied to planning, these arguments define a subsection of a causal chain, as will be described below. In order to utilize causality in explanations, the causal links between actions in a plan need to be extracted and abstracted into a framework. This process is planner-independent, so it requires only the plan, problem, and domain as inputs. An algorithm is used to extract the causalities, which then form a knowledge base of causal links. This can then be used by an argumentation engine to construct arguments representing the causal 'chunks' in a plan. From this, questions of the forms 'Why A?' and 'How G?' can be addressed. This process is described in the following sections. To extract causal relationships between actions in a plan, an algorithm similar to the one used in BID2 for detecting action dependencies is utilized: it finds connections between one action's effects and another's preconditions from the domain to form a knowledge base. In general terms we can think of these causal links as statements in some logical language of the form:
((load truck t1 p1), (drive truck t1 wpC)) ⇒ (unload truck t1 p1)
(drive truck t1 wpC) ⇒ (drive truck t1 wpD)
(unload truck t1 p1) ⇒ p1 at wpC
(drive truck t1 wpD) ⇒ (drive truck t1 wpE)
and so on for the remaining actions of the plan. Given a knowledge base, the argumentation engine can construct a sequence of arguments with defeasible rules:
A_1: (load truck t1 p1)
A_2: (drive truck t1 wpC)
A_3: A_1, A_2 ⇒ (unload truck t1 p1)
A_4: A_3 ⇒ p1 at wpC
A_5: A_2 ⇒ (drive truck t1 wpD)
A_6: A_5 ⇒ (drive truck t1 wpE)
A_7: A_6 ⇒ (refuel truck t1)
A_8: A_7 ⇒ (drive truck t1 wpF)
A_9: A_8 ⇒ (drive truck t1 wpG)
A_10: A_9 ⇒ (unload truck t1 p2)
A_11: A_10 ⇒ p2 at wpG
These arguments summarize the causal structure of part of the plan (i.e. a 'causal chunk' as defined in Section 4.3), culminating in argument A_11, which can then be presented to a user who is seeking explanations. A visualization of these arguments can be seen in FIG4. We define the notion of a causal 'chunk' as any subsection(s) of the causal chain(s) extracted from the plan or model and then combined. Intuitively, these chunks can focus on one 'topic' (e.g. a state variable or object instance) to provide a higher-level abstraction of causality rather than just the individual causal links. The argument A_11, which represents such a causal chunk, shows only the action-action causalities (i.e. from just one causal chain) involving the object truck 1. These chunks are created by searching through the Rules of the framework for those pertaining to a specific 'topic'. Given arguments such as A_11, we propose two methods of structuring explanations. The first method is allowing the user to engage the system in a dialogue. For our example, the question 'Why e?',
where e is the action of driving truck 1 to waypoint E could be used to query the system: why e Following work such as BID11, the system replies to this query by building an argument for e, in this case A 6, and using this to provide a suitable response, which might be by returning Conc(A 5), since A 5 ⇒ e. Thus the system could reply with:d, which leads to e where d is drive truck t1 wpD. The user could then continue to unpack the causal chunk by asking: why d and so on. This would provide the user with the causalities which enabled action e to be applied. The same could be done using a forward approach where the argument A 6 is expanded until a subgoal is reached, if possible (e.g. A 11). The user can then ask:why e and the system responds with:e leads to f as in A 7: A 6 ⇒ f. Iteratively, this would show how e leads to some goal or subgoal. Reversing this process will also explain how a goal is reached. The second method of structuring explanations is detailed in Section 5.2, and can be applied to this example similarly. Using a similar method as above, causalities held within the state space of the plan are extracted and represented as a knowledge base. An algorithm is used that iterates through the effects of actions from a plan and extracts the state variables they alter. They can then be used to answer questions such as' Why A here?' and'Why can't A go here?'. In general terms, we define these dependencies as being statements in some logical language of the form: DISPLAYFORM0 which denote the statements'a causes ∆x a' and'b causes ∆y c and ∆z c'. Here, a, b are actions in the plan, and x, y, z are state variables. The x 0, y 0, z 0 denote the values of those variables in the initial state while x f, y f, z f denote the final values in the goal state; ∆x a denotes the change in x after applying action a. Applying this to our logistics example and the question,'Why drive truck 3 to waypoint B?', these strict rules are relevant: DISPLAYFORM1 From these, it is clear the truck's fuel level is too low in the initial state to go anywhere besides waypoint B (see FIG1 . However, it is not clear why the truck does not just stay put. Alone, these rules do not provide a full explanation, but they can be added to the action-action causal chains for more complete explanations. When used in conjunction, the causal traces and opportunity traces form a strong basis of justification for an explanation (see FIG5 for a visual representation). Using the example from before, the relevant defeasible rules from the causal chain are: DISPLAYFORM0 where the of the second rule is a subgoal of the problem, perhaps previously unknown to the user. That is, because the problem requires all trucks to have a minimum amount of fuel at the end, truck 3 had to refuel but could not deliver any packages due to its low initial fuel amount. Thus, combining arguments from both types of causal chains more aptly answers this question. A method for seamlessly creating explanations from this structure is an intended future work. For now, it is possible to extract both the defeasible rules and strict rules governing the causal effects related to a specific topic and present them to a user. How to determine which rules are relevant to a specific user question and how to combine the rules to form higher-level causal chunks are ongoing works. One possible method of creating relevant causal chunks is to extract all rules related to a specific'topic' (e.g. state variable). 
For the variable't3 fuel', all actions which alter it will be extracted along with any actions that enable the altering actions from the defeasible rules. Additionally, any (sub)goals containing't3 fuel' will be extracted. Together, these form a chunk representing the causes of changes to't3 fuel' as well as its relationship to the (sub)goals. The arguments below represent the causal'chunk': DISPLAYFORM1 where the of A 3 is a subgoal of the problem. When unpacked iteratively, the arguments in the causal chunk centred on't3 fuel' would give a similar output explanation as in the example in Section 4.3. For example, a user asking the question' Why b?' where b is the action (drive truck 3 to waypoint B) would either receive the response: t3 fuel is 2 enables b or the response:b causes t3 fuel decrease 2 and enables c if using a forward chaining approach, where c is the premise of the of A 2, (refuel truck t3). This process would continue until the subgoal t3 fuel >5 is reached. However, identifying what state variables are relevant given a user question is not trivial. The question'Why drive truck 3 to waypoint B?' has no mention of the truck's fuel, so its relevance must be deduced from the plan, problem and domain. Another method of providing explanations is through a graph structure, as depicted in Figure 5. Given a query, the relevant causal chunks would be identified and represented in the graph with individual actions and state changes as nodes and the causal rules between them as edges. This approach could also help explain question of the form, Why can't A go here?, as inapplicable actions (ones not in the plan) can be shown. Developing a robust system such as this is important future work. Figure 5: An example graph with the queried action in blue and nodes contained in the't3 fuel' chunk in orange, and I and G the initial and goal states. Dashed edges denote defeasible rules; solid edges denote strict rules. We acknowledge that this is a preliminary step and more work is required to expand on the ideas presented in this paper. One such future work involves defining exactly what questions, which range from action-specific to model-based, can be answered and explained using our approach. Also, how these questions are captured from a user is an open question. The query,'Why didn't truck 3 deliver any packages?' can be answered using the causal information captured in the framework, but how one converts this question to a form that the system understands requires further research. Potential methods for communicating a user question include a dialogue system or Natural Language Processing techniques. Along with expanding the set of questions that can be addressed, extensions to the argumentation framework itself should be considered. Better methods for creating causal'chunks' for specific user questions are needed. It may be advantageous to use argumentation schemes to help identify relevant topics of chunks and which causal chains should be included from the framework. This relates to the idea of'context' and identifying the motivation of a question. If the system can be more precise in extracting the relevant information, the explanations themselves will be more effective. Related to this is the need to explore other ways of presenting an explanation to a user. Research into the efficacy of explanations and how to properly assess the effectiveness of the explanations in practice are future areas of research, and will require user studies. 
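As an illustration of how such causal chunks and 'why' questions could be handled programmatically, the following sketch operates on defeasible rules from the running example. The rule representation and helper names are hypothetical and not taken from the paper's implementation.

```python
# Defeasible rules (premises => conclusion) extracted from the example plan.
DEFEASIBLE_RULES = [
    ((("load", "t1", "p1"), ("drive", "t1", "wpC")), ("unload", "t1", "p1")),
    ((("drive", "t1", "wpC"),), ("drive", "t1", "wpD")),
    ((("drive", "t1", "wpD"),), ("drive", "t1", "wpE")),
    ((("unload", "t1", "p1"),), ("p1_at", "wpC")),
]

def why(action, rules):
    """Backward step for 'Why <action>?': the premises of any rule whose
    conclusion is the queried action (i.e. the actions/states enabling it)."""
    return [premises for premises, conclusion in rules if conclusion == action]

def causal_chunk(topic, rules):
    """Collect every rule that mentions the topic (an object or state
    variable), forming a chunk that a dialogue can unpack iteratively."""
    return [r for r in rules
            if any(topic in term for term in r[0] + (r[1],))]

print(why(("drive", "t1", "wpE"), DEFEASIBLE_RULES))  # [(('drive','t1','wpD'),)]
print(len(causal_chunk("t1", DEFEASIBLE_RULES)))      # all rules about truck 1
```

Iterating the backward step until a plan's initial facts are reached, or chaining forward until a (sub)goal appears as a conclusion, mirrors the dialogue-based unpacking of a causal chunk described above.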
Our starting point will be the approach outlined in Section 4.3 which has been shown empirically to be effective in contexts such as human-robot teaming BID13. In this paper we proposed an initial approach to explainable planning using argumentation in which causal chains are extracted from a plan and model and abstracted into an argumentation framework. Our hypothesis is that this allows ease of forming and communicating explanations to a user. Furthermore, causal'chunks' can be created by combining relevant causal links from the chains which explain the causalities surrounding one'topic'. We believe these help with making more precise explanations, and that chunks can be used to provide hierarchical explanations. Overall, the approach is a first step towards exploiting the intuitive functionality of argumentation in order to use causality for explanations.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byef4anQcE
Argumentation frameworks are used to represent causality of plans/models to be utilized for explanations.
Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach. Deep learning methods have proven themselves as powerful computational tools in many disciplines, and within it a topic of strongly growing interest is deep learning for point-based data sets. These Lagrangian representations are challenging for learning methods due to their unordered nature, but are highly useful in a variety of settings from geometry processing and 3D scanning to physical simulations, and since the seminal work of , a range of powerful inference tasks can be achieved based on point sets. Despite their success, interestingly, no works so far have taken into account time. Our world, and the objects within it, naturally move and change over time, and as such it is crucial for flexible point-based inference to take the time dimension into account. In this context, we propose a method to learn temporally stable representations for point-based data sets, and demonstrate its usefulness in the context of super-resolution. An inherent difficulty of point-based data is their lack of ordering, which makes operations such as convolutions, which are easy to perform for Eulerian data, unexpectedly difficult. Several powerful approaches for point-based convolutions have been proposed (; ;), and we leverage similar neural network architectures in conjunction with the permutation-invariant Earth Mover's Distance (EMD) to propose a first formulation of a loss for temporal coherence. In addition, several works have recognized the importance of training point networks for localized patches, in order to avoid having the network to rely on a full view of the whole data-set for tasks that are inherently local, such as normal estimation , and super-resolution ). This also makes it possible to flexibly process inputs of any size. Later on we will demonstrate the importance of such a patch-based approach with sets of changing cardinality in our setting. A general challenge here is to deal with varying input sizes, and for super-resolution tasks, also varying output sizes. Thus, in summary we target an extremely challenging learning problem: we are facing permutation-invariant inputs and targets of varying size, that dynamically move and deform over time. 
In order to enable deep learning approaches in this context, we make the following key contributions: Permutation invariant loss terms for temporally coherent point set generation; A Siamese training setup and generator architecture for point-based super-resolution with neural networks; Enabling improved output variance by allowing for dynamic adjustments of the output size; The identification of a specialized form of mode collapse for temporal point networks, together with a loss term to remove them. We demonstrate that these contributions together make it possible to infer stable solutions for dynamically moving point clouds with millions of points. More formally, we show that our learning approach can be used for generating a point set with an increased resolution from a given set of input points. The generated points should provide an improved discretization of the underlying ground truth shape represented by the initial set of points. For the increase, we will target a factor of two to three per spatial dimension. Thus, the network has the task to estimate the underlying shape, and to generate suitable sampling positions as output. This is generally difficult due to the lack of connectivity and ordering, and in our case, positions that move over time in combination with a changing number of input points. Hence it is crucial that the network is able to establish a temporally stable latent space representation. Although we assume that we know correspondences over time, i.e., we know which point at time t moved to a new location at time t + ∆t, the points can arbitrarily change their relative position and density over the course of their movement, leading to a substantially more difficult inference problem than for the static case. Deep learning with static point sets was first targeted in PointNet via order-invariant networks, while PointNet++ (extended this concept to generate features for localized groups similar to a convolution operation for grid-based data. This concept can be hierarchically applied to the generated groups, in order to extract increasingly abstract and global features. Afterwards, the extracted features can be interpolated back to the original point cloud. The goal to define point convolutions has been explored and extended in several works. The MCNN approach phrased convolution in terms of a Monte Carlo integration. PointCNN defined a pointwise convolution operator using nearest neighbors, while extension-restriction operators for mapping between a point cloud function and a volumetric function were used in. The PU-Net proposed a network for upsampling point clouds, and proposed a similar hierarchical network structure of PointNets along the lines of PointNet++ to define convolutions. Being closer to our goals, we employ this approach for convolutional operations in our networks below. We do not employ the edge-aware variant of the PU-Net (b) here, as we focus on temporal changes in our work. Permutation invariance is a central topic for point data, and was likewise targeted in other works . The Deep Kd-network defined a hierarchical convolution on point clouds via kd-trees. PointProNets employed deep learning to generate dense sets of points from sparse and noisy input points for 3D reconstruction applications. PCPNet , as another multi-scale variant of PointNet, has demonstrated high accuracy for estimating local shape properties such as normal or curvature. 
P2PNet ) used a bidirectional network and extends PointNet++ to learn a transformation between two point clouds with the same cardinality. Recently, the area of point-based learning has seen a huge rise in interest. One focus here are 3D segmentation problems, where numerous improvements were proposed, e.g., by SPLATNet , SGPN (a), SpiderCNN , PointConv , SONEt and 3DRNN . Other networks such as Flex Convolution , the SuperPoint Graph , and the fully convolutional network focused on large scale segmentation. Additional areas of interest are shape classification (b; ; ;) and object detection , and hand pose tracking . Additional works have targeted rotation and translation invariant inference , and point cloud autoencoders ). A few works have also targeted generative models based on points, e.g., for point cloud generation, and with adversarial approaches. It is worth noting here that despite the huge interest, the works above do not take into account temporally changing data, which is the focus of our work. A notable exception is an approach for scene flow, in order to estimate 3D motion directly on the basis of point clouds. This work is largely orthogonal to ours, as it does not target generative point-based models. We assume an input point cloud, where d includes 3 spatial coordinates and optionally additional features. Our goal is to let the network f s (X) infer a functionỸ which approximates a desired super-resolution output point cloud Y = {y 1, y 2, ..., y n} of size n ∈ [1, n max] with y i ∈ R 3, i.e. f s (X) =Ỹ ≈ Y. For now we assume that the number of output points n is defined by multiplying k with a user-defined upsampling factor r, i.e. n = rk. Figure 2a) illustrates the data flow in our super-resolution network schematically. We treat the upsampling problem as local one, i.e., we assume that the inference problem can be solved based on a spatially constrained neighborhood. This allows us to work with individual patches extracted from input point clouds. At the same time, it makes it possible to upsample adaptively, for example, by limiting the inference to relevant areas, such as complex surface structures. For the patch extraction we use a fixed spatial radius and normalize point coordinates within each patch to lie in the range of [−1, 1]. Our first building block is a measure for how well two point clouds represent the same object or scene by formulating an adequate spatial loss function. , we base our spatial loss L S on the Earth Mover's Distance (EMD), which solves an assignment problem to obtain a differentiable bijective mapping φ:ỹ → y. With φ we can minimize differences in position for arbitrary orderings of the points clouds via: Figure 2: a) Schematic overview of fs(X). Black arrows represent scalar data. Point data is depicted as colored arrows with the color indicating data cardinality (brown=k, red = kmax, green = nmax, blue = n, and purple =ñ). b) Siamese network setup for temporal loss calculation. When not taking temporal coherence explicitly into account, the highly nonlinear and ill-posed nature of the super-resolution problem can cause strong variations in the output even for very subtle changes in the input. This in significant temporal artifacts that manifest themselves as flickering. In order to stabilize the output while at the same time keeping the network structure as small and simple as possible, we propose the following training setup. 
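As a concrete illustration of the EMD-based spatial loss L_S described above, the following sketch computes the bijective mapping φ with the Hungarian algorithm and averages the squared position differences. This is only a small-scale stand-in: the assignment solver, normalisation, and batching used in the actual method may differ.

```python
# Sketch of the EMD-based spatial loss L_S, assuming equal-sized point sets.
# The bijection phi is computed with the Hungarian algorithm, which is only
# feasible for small patches and may differ from the paper's EMD solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_assignment(pred, target):
    """Return indices phi such that target[phi[i]] is matched to pred[i]."""
    cost = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1) ** 2
    row, col = linear_sum_assignment(cost)
    phi = np.empty(len(pred), dtype=int)
    phi[row] = col
    return phi

def spatial_loss(pred, target):
    phi = emd_assignment(pred, target)
    return np.mean(np.sum((pred - target[phi]) ** 2, axis=-1))

pred = np.random.rand(64, 3)    # generated points
target = np.random.rand(64, 3)  # ground-truth points
print(spatial_loss(pred, target))
```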
Given a sequence of high resolution point clouds Y^t, with t indicating time, we can compute a velocity v_i^t for each point. For this we use a finite difference v_i^t = (y_i^{t+1} − y_i^t) / ∆t, where we assume, without loss of generality, ∆t = 1, i.e. the time step is normalized to one. For training, the low resolution inputs X can now be generated from Y via down-sampling by a factor of r, which yields a subset of points with velocities. Details of our data generation process will be given below. To train a temporally coherent network with the Y^t sequences, we employ a Siamese setup shown in Figure 2b. We evaluate the network several times (3 times in practice) with the same set of weights and moving inputs, in order to enforce that the output behaves consistently. In this way we avoid recurrent architectures that would have to process the high resolution outputs multiple times. In addition, we can compute temporal derivatives from the input points, and use them to control the behavior of the generated output. Under the assumption of slowly moving inputs, which theoretically could be ensured for training, a straightforward way to enforce temporal coherence would be to minimize the difference between the generated positions at consecutive time steps in terms of an L2 norm (we denote this term L2V). While this reduces flickering, it does not constrain the change of velocities, i.e., the acceleration. This results in high frequency jittering of the generated point positions. The jitter can be reduced by also including the previous state at time step t − 1 and constraining the acceleration, i.e., the second temporal difference of the generated positions, in terms of its L2 norm (we denote this term L2A). However, a central problem of a direct temporal constraint via these L2 terms is that it consistently leads to a highly undesirable clustering of generated points around the center point. This is caused by the fact that the training procedure as described so far is unbalanced, as it only encourages minimizing changes. The network cannot learn to reconstruct realistic, larger motions in this way, but rather can trivially minimize the loss by contracting all outputs to a single point. For this reason, we instead use the estimated velocity of the ground truth point cloud sequence, computed with a forward difference in time, to provide the network with a reference. By using the EMD-based mapping φ established for the spatial loss in Eq. 1, we can formulate the temporal constraint in a permutation invariant manner as a velocity term L_EV that penalizes the deviation of each generated point's motion from the motion of its assigned ground truth point. Intuitively, this means the generated outputs should mimic the motion of the closest ground truth points. As detailed for the L2-based approaches above, it makes sense to also take the ground truth acceleration into account to minimize rapid changes of velocity over time. We can likewise formulate this in a permutation invariant way w.r.t. ground truth points via an acceleration term L_EA. We found that a combination of L_EV and L_EA together with the spatial loss L_S from Eq. 1 provides the best results, as we will demonstrate below. First, we will introduce the additional loss terms of our algorithm. Existing network architectures are typically designed for processing a fixed amount of input and output points. However, in many cases, and especially for a localized inference of super-resolution, the number of input and output points varies significantly. While we can safely assume that no patch exceeds the maximal number of inputs k_max (this can be ensured by working on a subset), it can easily happen that a certain spatial region has fewer points.
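The temporal terms L_EV and L_EA described above can be sketched as follows. The sketch assumes three network evaluations (t−1, t, t+1) as in the Siamese setup, reuses the EMD assignment φ from the spatial loss, and uses simple first and second finite differences; the exact stencil and normalisation in the paper may differ.

```python
# Sketch of the permutation-invariant temporal terms.  L_EV compares the
# velocity of each generated point with the ground-truth velocity of its
# EMD-matched point, and L_EA does the same for accelerations.
import numpy as np

def velocity_loss(pred_t, pred_tp1, gt_t, gt_tp1, phi):
    pred_vel = pred_tp1 - pred_t                 # motion of generated points
    gt_vel = (gt_tp1 - gt_t)[phi]                # matched ground-truth motion
    return np.mean(np.sum((pred_vel - gt_vel) ** 2, axis=-1))

def acceleration_loss(pred_tm1, pred_t, pred_tp1, gt_tm1, gt_t, gt_tp1, phi):
    pred_acc = pred_tp1 - 2.0 * pred_t + pred_tm1
    gt_acc = (gt_tp1 - 2.0 * gt_t + gt_tm1)[phi]
    return np.mean(np.sum((pred_acc - gt_acc) ** 2, axis=-1))

# toy usage; phi would come from the EMD assignment at time t
n = 64
phi = np.random.permutation(n)
pred = [np.random.rand(n, 3) for _ in range(3)]
gt = [np.random.rand(n, 3) for _ in range(3)]
print(velocity_loss(pred[1], pred[2], gt[1], gt[2], phi),
      acceleration_loss(pred[0], pred[1], pred[2], gt[0], gt[1], gt[2], phi))
```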
Simply including more distant points could guarantee that we have a full set of samples, but this would mean the network has to be invariant to scaling, and to produce features at different spatial scales. Instead, we train our network for a fixed spatial size, and ensure that it can process varying numbers of inputs. For inputs with fewer than k_max points, we pad the input vector to have a fixed size. Here, we ensure that the padding values are not misinterpreted by the network as being point data. Therefore, we pad X with p ∈ {−2}^d, which represents a value outside the regular patch coordinate range [−1, 1]. The first convolutional layer in our network then filters out the padded entries using a mask computed from these entries. The entries of p allow us to compute the mask on the fly throughout the whole network, without having to explicitly pass the input size k through the network. For an input of size k, our network has the task to generate ñ = rk points. As the size of the network output is constant at r·k_max, the outputs are likewise masked with M_out to truncate them to length ñ for all loss calculations, e.g., the EMD mappings. Thus, as shown in Figure 2a, ñ is used to truncate the point cloud Ȳ = {ȳ_1, ȳ_2, ..., ȳ_{n_max}} via the mask M_out to form the final output Ỹ. Note that in Sec. 3.1 we have, for simplicity, assumed that n = rk; however, in practice the number of ground truth points n varies. As such, ñ only provides an approximation of the true number of target points in the ground truth data. While the approximation is accurate for planar surfaces and volumes, it is less accurate in the presence of detailed surface structures that are smaller than the spatial frequency of the low-resolution data. We have analyzed the effect of this approximation in Fig. 3. The histograms show that the strongly varying output counts are an important factor in practice, and Fig. 4 additionally shows the improvement in terms of target shape that results from incorporating variable output sizes. In general, ñ provides a good approximation for our data sets. However, as there is a chance to infer an improved estimate of the correct output size based on the input points, we have experimented with training a second network to predict ñ in conjunction with a differentiable M_out. While this could be an interesting feature for future applications, we have not found it to significantly improve the results. As such, the evaluations and results below will use the analytic calculation, i.e., ñ = rk. For each input point the network generates r output points, which can be seen as individual groups g: ψ(g) = {Ỹ_i | i ∈ [rg + 1, r(g + 1)]}. These groups of size r in the output are strongly related to the input points they originate from.
Figure 6 (panels: L2 Loss, Velocity Only, Velocity + Acceleration): Ablation study for our temporal loss formulation. Black points indicate targets, while green points are generated (both shown as time average). a) Result from previous work; b) with the L2V loss; c) the proposed velocity loss LEV; d) our full loss formulation with LEV + LEA. While (a) has difficulties approximating the target shape and the flickering output is visible as blurred positions, the additional loss terms (esp. in (c) and (d)) improve the result.
Table 1: Quantitative results for the different terms of our loss functions, first for our 2D ablation study and then for our 3D versions. The first three columns contain spatial, the next four temporal metrics. L_N = ||ñ − n||_2^2 is given as a measure of accuracy in terms of the size of the generated outputs (it is not part of the training).
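A minimal sketch of the padding and masking scheme described above is given below. The value −2 marks padded entries, the input mask is derived from those entries, and the output is truncated to ñ = rk points; the layer-level details are placeholders rather than the exact architecture.

```python
# Sketch of the padding/masking scheme: pad patches to k_max points with the
# out-of-range value -2, derive a mask from those entries, and truncate the
# fixed-size output to n_tilde = r * k points.
import numpy as np

K_MAX, R = 100, 9

def pad_patch(points):                        # points: (k, 3) with k <= K_MAX
    k = len(points)
    padded = np.full((K_MAX, 3), -2.0)        # -2 lies outside the [-1, 1] patch range
    padded[:k] = points
    return padded

def input_mask(padded):
    return (padded[:, 0] > -2.0).astype(np.float32)    # 1 for real points, 0 for padding

def truncate_output(raw_output, k):
    """raw_output: (R * K_MAX, 3); keep only the first n_tilde = R * k points."""
    return raw_output[: R * k]

patch = np.random.uniform(-1, 1, size=(37, 3))
padded = pad_patch(patch)
print(input_mask(padded).sum(), truncate_output(np.zeros((R * K_MAX, 3)), len(patch)).shape)
```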
Networks that focus on maintaining temporal coherence for the dynamically changing output tend to slide into local minima where r output points are attached as a fixed structure to the input point location. This manifests itself as visible halo-like structures that move statically with the input. Although temporal coherence is good in this case, these cluster-like structures lead to gaps and suboptimal point distributions in the output, particularly over time. These structures can be seen as a form of temporal mode collapse that can be observed in other areas of deep learning, such as GANs. To counteract this effect, we introduce an additional mingling loss term L_M (Eq. 6) that prevents the formation of clusters by pushing the individual points of a group apart.
Figure 5: Left, a result without the mingling loss from Eq. 6; right, with it (a single point group highlighted in orange). The former has many repeated copies of a single pattern, which the mingling loss manages to distribute.
Note that in contrast to previously used repulsion losses, L_M encourages points to globally mix rather than just locally repelling each other. While a repulsion term can lead to a deterioration of the generated outputs, our formulation preserves spatial structure and temporal coherence while leading to well distributed points, as is illustrated in Fig. 5. In combination with the spatial and temporal terms from above, this leads to our final loss function L_final = L_S + γ L_EV + µ L_EA + ν L_M, with weighting terms γ, µ, ν. We train our network in a fully supervised manner with simulated data. To illustrate the effect of our temporal loss functions, we employ them in conjunction with established network architectures from previous work. Details of the data generation and network architectures are given in the appendix. We first discuss our data generation and training setup, then illustrate the effects of the different terms of our loss function, before showing results for more complex 3D data sets.
Figure 7: Our method applied to an animation of a moving spider. (a) Input point cloud, (b) three frames of our method, (c) a detail from previous work (top) and our method (bottom). Note that our method at the bottom preserves the shape with fewer outliers, and leads to a more even distribution of points, despite generating fewer points in total (see Table 2).
Figure 8: a) shows averaged latent space values for 100 random patch sequences of our 2D data set. The green curve shows our method with temporal coherence loss, while the pink curve was generated without it. The same data is shown in frequency space in (b), where the red curve represents the frequency of the data with temporal loss, and the blue curve the frequency of the data without. These graphs highlight the reduced amount of high frequency changes in the latent space with temporal loss, esp. in frequency space, where the red curve almost entirely lies below the blue one. (c) contains frequency information for the latent space content of the same 100 patch sequences, but with a random order. In this case, the blue and red curves both contain significant amounts of high frequencies, i.e., our method reliably identifies strongly changing inputs.
Table 2: Point counts for the 3D examples of our video. Input counts together with output counts for previous work (P.W.) and our proposed network are shown. Factor columns contain the increase in point set size from input to output. As previous work cannot handle flexible output counts, a fixed number of points is generated per patch, leading to a huge number of redundant points. However, our network flexibly adapts the output size and leads to a significantly smaller number of generated points that cover the object or volume more evenly.
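The mingling idea can be illustrated with the following sketch. Note that this is not the paper's Eq. 6: it is one simple way to encode the notion that each generated point should mix with points from other groups rather than stay clustered with its own group. The combined objective mirrors L_final with the 2D weights γ = µ = 10 and ν = 0.001 reported in the appendix.

```python
# Illustrative stand-in for the mingling idea (NOT the paper's Eq. 6): each
# point is pushed to be no farther from its nearest out-of-group neighbour
# than from its nearest in-group neighbour, so groups spread and mix.
import numpy as np

def mingling_loss(points, group_ids):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    same = group_ids[:, None] == group_ids[None, :]
    nearest_same = np.where(same, d, np.inf).min(axis=1)
    nearest_other = np.where(~same, d, np.inf).min(axis=1)
    return np.mean(np.maximum(0.0, nearest_other - nearest_same))

def final_loss(l_s, l_ev, l_ea, l_m, gamma=10.0, mu=10.0, nu=0.001):
    # combined objective L_final = L_S + gamma*L_EV + mu*L_EA + nu*L_M
    return l_s + gamma * l_ev + mu * l_ea + nu * l_m

pts = np.random.rand(90, 3)
groups = np.repeat(np.arange(10), 9)        # r = 9 output points per input point
print(mingling_loss(pts, groups))
```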
As our focus is on temporal coherence, which is best seen in motion, we refer readers to the supplemental video at https://www.dropbox.com/sh/btrzxavn34qftfe/AAADpIBME0eguA4ew4ylvCX_a?dl=0 in order to fully evaluate the resulting quality. Ablation Study. We evaluate the effectiveness of our loss formulation with a two dimensional ablation study. An exemplary patch of this study is shown in Fig. 6. In order to compare our method to previous work, we have trained a previously proposed method for point-cloud super-resolution, the PU-Net, which internally uses a PointNet++, with our data set, the only difference being that we use zero-padding here. This architecture will be used in the following comparisons to previous work. Fig. 6a) shows a result generated with this network. As this figure contains an average of multiple frames to indicate temporal stability, the blurred regions, esp. visible on the right side of Fig. 6a), indicate erroneous motions in the output. For this network, the difficulties of temporally changing data and varying output sizes additionally lead to a suboptimal approximation of the target points, which is also visible in terms of an increased L_S loss in Table 1. While the L2V loss in Fig. 6b) significantly reduces motion, and leads to an improved shape as well as an improved L_S loss, its motions are overly constrained. E.g., at the bottom of the shown patch, the generated points should follow the black input points, but in (b) the generated points stay in place. In addition, the lack of permutation invariance leads to an undesirable clustering of generated points in the patch center. Both problems are removed with L_EV in Fig. 6c), which unfortunately still contains small scale jittering motions. These are removed by L_EA in Fig. 6d), which shows the result of our full algorithm. The success of our approach for dynamic output sizes is also shown in the L_N column of Table 1, which contains an L2 error w.r.t. the ground truth size of the outputs. Temporally Coherent Features. A central goal of our work is to enable the learning of features that remain stable over time. To shed light on how our approach influences the established latent space, we analyze its content for different inputs. The latent space in our case consists of a 256-dimensional vector that contains the features extracted by the first set of layers of our network. Fig. 8 contains a qualitative example for 100 randomly selected patch sequences from our test data set, where we collect input data by following the trajectory of each patch center for 50 time steps to extract coherent data sets. Fig. 8a) shows the averaged latent space content over time for these sequences. While the model trained with temporal coherence (green curve) is also visually smoother, the difference becomes clearer when considering temporal frequencies. We measure averaged frequencies of the latent space dimensions over time, as shown in Fig. 8b,c). We quantify the differences by calculating the integral of the frequency spectrum f̂, weighted by the frequency x to emphasize high frequencies, i.e., ∫ x · f̂(x) dx. Hence, small values are better. As shown in Fig. 8b), the version trained without our loss formulations contains significantly more high frequency content.
This is also reflected in the weighted integrals, which are 36.56 for the method without temporal loss, and 16.98 for the method with temporal loss. To verify that our temporal model actually establishes a stable temporal latent space instead of ignoring temporal information altogether, we evaluate the temporal frequencies for the same 100 inputs as above, but with a randomized order over time. In this case, our model correctly identifies the incoherent inputs, and yields similarly high frequencies as the regular model with 28.44 and 35.24, respectively. More details in Appendix C. In addition, we evaluated the changes of generated outputs over time w.r.t. ground truth motion. For this we mapped the generated point cloudsỸ t = {y t 1, y t 2, ..., y t n} for 100 frames to evenly and dense sampled ground-truth points on the original mesh Y t = {y t 1, y t 2, ..., y t n} (the moving man shown in Fig. 1). This gives us a dense correlation between the data and the generated point clouds. For the mapping we used an assignment based on nearest neighbors. γ:ỹ → y. Using γ we divideỸ t into n subsetsŶ i = {ỹ j |γ(ỹ j) = y i } which correlate with the corresponding ground-truth points. For each subset we can now compute the mean position 1 |Ŷi| ŷ∈Ŷiŷ and the sample density |Ŷ i | measured by the number of points assigned to a ground-truth sample position. The temporal change of these values are of particular interest. The change of the mean positions should correspond to the ground-truth changes, while the change of the density should be as small as possible. We have evaluated the error of the first and second derivative of positions, as well as the first and second derivative of density (see Fig. 9 and Table 3). As can be seen from the plots, our method leads to clear improvements for all measured quantities. The individual spikes that are visible for both versions in the position errors (c,d) most likely correspond to sudden changes of the input motions for which our networks undershoots by producing a smooth version of the motion. Table 3: Measurements averaged over 100 frames for a version of our network without temporal loss ("w/o") and with our full temporal loss formulation ("with"). The left table shows the for the error evaluation of the velocity and the acceleration, whereas in the right table one can see the variance of the density derivatives. Our patch-based approach currently relies on a decomposition of the input volumes into patches over time, as outlined in Appendix A. As all of the following involve temporal data, full sequences are provided in the accompanying video. We apply our method to several complex 3D models to illustrate its performance. Fig. 7 shows the input as well as several frames generated with our method for an animation of a spider. Our method produces an even and temporally stable reconstruction of the object. In comparison, Fig. 7b) shows the output from the previous work architecture. It exhibits uneven point distributions and outliers, e.g., above the legs of the spider, in addition to uneven motions. A second example for a moving human figure is shown in Fig. 1. In both examples, our network covers the target shape much more evenly despite using much fewer points, as shown in Table 2. Thanks to the flexible output size of our network, it can adapt to sparsely covered regions by generating correspondingly fewer outputs. 
The previous work architecture, with its fixed output size, needs to concentrate the fixed number of output points within the target shape, leading to an unnecessarily large point count. In order to demonstrate the flexibility of our method, we also apply it to a volumetric moving point cloud obtained from a liquid simulation. Thanks to the patch-based evaluation of our network, it is agnostic to the overall size of the input volume. In this way, it can be used to generate coherent sets with millions of points. These examples also highlight our method's capabilities for generalization. While the 3D model was only trained on data from physics simulations, as outlined above, it learns stable features that can be flexibly applied to volumetric as well as to surface-based data. The metrics in Table 1 show that for both 2D and 3D cases, our method leads to significantly improved quality, visible in lower loss values for spatial as well as temporal terms. Another interesting field of application for our algorithm are physical simulations. Complex simulations such as fluids, often employ particle-based representations. On the one hand, the volume data is much larger than surface-based data, which additionally motivates our dynamic output. On the other hand, time stability plays a very important role for physical phenomena. Our method produces detailed outputs for liquids, as can be seen in our supplemental video. Convergence graphs for the different versions are shown in Fig. 12 of the supplemental material. These graphs show that our method not only successfully leads to very low errors in terms of temporal coherence, but also improves spatial accuracy. The final values of L S for the 2D case are below 0.05 for our algorithm, compared to almost 0.08 for previous work. For 3D, our approach yields 0.04 on average, in contrast to ca. 0.1 for previous work. We have proposed a first method to infer temporally coherent features for point clouds. This is made possible by a combination of a novel loss function for temporal coherence in combination with enabling flexible truncation of the . In addition we have shown that it is crucial to prevent static patterns as easy-to-reach local minima for the network, which we avoid with the proposed a mingling loss term. Our super-resolution above demonstrate that our approach takes an important first step towards flexible deep learning methods for dynamic point clouds. Looking ahead, our method could also be flexibly combined with other network architectures or could be adopted for other applications. Specifically, a combination with PSGN could be used to generate point clouds from image sequences instead of single images. Other conceivable applications could employ methods like with our approach for generating animated meshes. Due to the growing popularity and ubiquity of scanning devices it will, e.g., be interesting to investigate classification tasks of 3D scans over time as future work. Apart from that, physical phenomena such as elastic bodies and fluids can likewise be represented in a Lagrangian manner, and pose interesting challenges and complex spatio-temporal changes. Data Generation We employ a physical simulation to generate our input and output pairs for training. This has the advantage that it leads a large variety of complex motions, and gives full control of the generation process. More specifically, we employ the IISPH algorithm, a form of Lagrangian fluid simulator that efficiently generates incompressible liquid volumes. 
These simulations also have the advantage that they inherently control the density of the point sampling thanks to their volume conserving properties. In order to generate input pairs for training, we randomly sample regions near the surface and extract points with a given radius around a central point. This represents the high-resolution target. To compute the low-resolution input, we downsample the points with a Poisson-disk sampling to compute a point set with the desired larger spacing. In order to prevent aliasing from features below the coarse resolution, we perform a pass of surface fairing and smoothing before downsampling. Due to the large number of patches that can be extracted from these simulations, we did not find it necessary to additionally augment the generated data sets. Examples of the low-and high-resolution pairs are shown in the supplemental material. Below we will demonstrate that models trained with this data can be flexibly applied to moving surface data as well as new liquid configurations. The surface data is generated from animated triangle meshes that were resampled with bicubic interpolation in order to match a chosen average per-point area. This pattern was generated once and then propagated over time with the animation. When applying our method to new liquid simulations, we do not perform any downsampling, but rather use all points of a low-resolution simulation directly, as a volumetric re-sampling over time is typically error prone, and gives incoherent low resolution inputs. Given a moving point cloud, we decompose it into temporally coherent patches in the following manner: We start by sampling points via a Poisson-disk sampling in a narrow band around the surface, e.g., based on a signed distance function computed from the input cloud. These points will persist as patch centers over time, unless they move too close to others, or too far away from the surface, which triggers their deletion. In addition, we perform several iterations for every new frame to sample new patches for points in the input cloud that are outside all existing patches. Note that this resampling of patches over time happens instantaneously in our implementation. While a temporal fading could easily be added, we have opted for employing transitions without fading, in order to show as much of the patch content as possible. Network Architecture and Training Our architecture heavily relies on a form of hierarchical point-based convolutions. I.e., the network extracts features for a subset of the points and their nearest neighbors. For the point convolution, we first select a given number of group centers that are evenly distributed points from a given input cloud. For each group center, we then search for a certain number of points within a chosen radius (a fraction of the [-1,1] range). This motivates our choice for a coordinate far outside the regular range for the padded points from Sec. 3.2. They are too far away from all groups by construction, so they are filtered out without any additional overhead. In this way, both feature extraction and grouping operations work flexibly with the varying input sizes. Each group is then processed by a PointNet-like sub-structure , yielding one feature vector per group. The is a set of feature vectors and the associated group position, which can be interpreted as a new point cloud to repeatedly apply a point convolution. In this way, the network extracts increasingly abstract and global features. 
The last set of features is then interpolated back to the original points of the input. Afterwards a sub-pixel convolution layer is used to scale up the point cloud extended with features and finally the final position vectors are generated with the help of two additional shared, fully connected layers. While we keep the core network architecture unmodified to allow for comparisons with previous work, an important distinction of our approach is the input and output masking, as described in Sec. 3.2. Our point data was generated with a mean point spacing, i.e., Poisson disk radius, of 0.5 units. For the 2D tests, an upscaling factor of r = 9 was used. For this purpose, patches with a diameter of 5 were extracted from the low-resolution data and patches with a diameter of 15 from the high-resolution data. We used the thresholds k max = 100 and n max = 900. For the loss, we used γ = 10, µ = 10, and ν = 0.001. The network was trained with 5 epochs for a data set with 185k pairs, and a batch size of 16, the learning rate was 0.001 with a decay of 0.003. For the 3D below, the scaling factor r was set to 8. The diameter of the patches was 6 for the low-resolution data and 12 for the high-resolution data, with k max = 1280 and n max = 10240. The loss parameters were γ = µ = 5, with ν = 0.001. Learning rate and decay were the same for training, but instead we used 10 epochs with 54k patches in 3D, and a batch size of 4. The input feature vector is processed in the first part of our network, which consists of four point convolutions. We use (n g, r g, [l 1, ..., l d]) to represent a level with n g groups of radius r g and [l 1, ..., l d] the d fully connected layers with the width l i (i = 1, ..., d). The parameters we use are (k max, 0.25, ), (k max /2, 0.5, ), (k max /4, 0.6, ) and (k max /8, 0.7, ). We then use interpolation layers to distribute the features of each convolution level among the input points. In this step, we reduce the output of each convolution layer with one shared, fully-connected layer per level, to a size of 64 and then distribute the features to all points of the input point cloud depending on their position. This extends the points of our original point cloud with 256 features. Fig. 11 shows a visual overview of the data flow in our network. Afterwards, we process the data in r separate branches consisting of two shared, fully interconnected layers with 256 and 128 nodes. The output is then processed with two shared fully connected layers of 64 and 3 nodes. Finally, we add our ing data to the input positions that have been repeated r times. This provides an additional skip connection which leads to slightly more stable . All convolution layers and fully interconnected layers use a tanh activation function. For the input feature vector, we make use of additional data fields in conjunction with the point positions. Our network also accepts additional features such as velocity, density and pressure of the SPH simulations used for data generation. For inputs from other sources, those values could be easily computed with suitable SPH interpolation kernels. In practice, we use position, velocity and pressure fields. Whereas the first two are important (as mentioned in Sec. 3.1), the pressure fields turned out to have negligible influence. In this section we give details for the frequency evaluation of Sec. 4. 
In order to measure the stability of the latent space against temporal changes, we evaluated the latent space of our network with and without temporal loss, once for 100 ordered patch sequences and once for 100 un-ordered ones. The central latent space of our network consists of the features generated by the point-convolution layers in the first part of the network and is 256 dimensional (see Fig. 11). To obtain information about its general behavior, we average the latent space components over all 100 patch sequences, subtract the mean, and normalize the ing vector w.r.t. maximum value for each data set. The is a time sequence of scalar values representing the mean deviations of the latent space. The Fourier transform of these vectorsf, are shown in Fig. 8, and were used to compute the weighted frequency content Figure 10: Examples from our synthetic data generation process. In (a,b) each a high resolution reference frame is shown in purple, and in green the down-sampled low resolution frames generated from it. The training data is generated by sampled patches from these volumes. Figure 11: An overview of our network architecture. The first row shows the hierarchical point convolutions, while the bottom rows illustrate the processing of extracted features until the final output point coordinates are generated. Figure 12: Convergence plots for the training runs of our different 2D and 3D versions. The combined loss only illustrates convergence behavior for each method separately, as weights and terms differ across the four variants. LM for previous work is not minimized, and only given for reference.
BJeKh3VYDH
We propose a generative neural network approach for temporally coherent point clouds.
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The ing methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art. The code for reproducing our work is available at https://git.io/fAjOJ. Recent research has shown that neural networks exhibit significant vulnerability to adversarial examples, or slightly perturbed inputs designed to fool the network prediction. This vulnerability is present in a wide range of settings, from situations in which inputs are fed directly to classifiers BID23 BID3 to highly variable real-world environments BID12. Researchers have developed a host of methods to construct such attacks BID7 BID17 BID2 BID15, most of which correspond to first order (i.e., gradient based) methods. These attacks turn out to be highly effective: in many cases, only a few gradient steps suffice to construct an adversarial perturbation. A significant shortcoming of many of these attacks, however, is that they fundamentally rely on the white-box threat model. That is, they crucially require direct access to the gradient of the classification loss of the attacked network. In many real-world situations, expecting this kind of complete access is not realistic. In such settings, an attacker can only issue classification queries to the targeted network, which corresponds to a more restrictive black box threat model. Recent work BID4 BID1 ) provides a number of attacks for this threat model. BID4 show how to use a basic primitive of zeroth order optimization, the finite difference method, to estimate the gradient from classification queries and then use it (in addition to a number of optimizations) to mount a gradient based attack. The method indeed successfully constructs adversarial perturbations. It comes, however, at the cost of introducing a significant overhead in terms of the number of queries needed. For instance, attacking an ImageNet BID21 classifier requires hundreds of thousands of queries. Subsequent work improves this dependence significantly, but still falls short of fully mitigating this issue (see Section 4.1 for a more detailed analysis). We revisit zeroth-order optimization in the context of adversarial example generation, both from an empirical and theoretical perspective. We propose a new approach for generating black-box adversarial examples, using bandit optimization in order to exploit prior information about the gradient, which we show is necessary to break through the optimality of current methods. We Table 1: Summary of effectiveness of 2 and ∞ ImageNet attacks on Inception v3 using NES, bandits with time prior (Bandits T), and bandits with time and data-dependent priors (Bandits T D). Note that in the first column, the average number of queries is calculated only over successful attacks, and we enforce a query limit of 10,000 queries. 
For purposes of direct comparison, the last column calculates the average number of queries used for only the images that NES (previous SOTA) was successful on. Our most powerful attack uses 2-4 times fewer queries, and fails 2-5 times less often.
Adversarial examples are natural inputs to a machine learning system that have been carefully perturbed in order to induce misbehaviour of the system, under a constraint on the magnitude of the perturbation (under some metric). For image classifiers, this misbehaviour can be either classification as a specific class other than the original one (the targeted attack) or misclassification (the untargeted attack). For simplicity, and to keep the presentation of the overarching framework focused, in this paper we restrict our attention to the untargeted case. Both our algorithms and the whole framework can, however, be easily adapted to the targeted setting. Also, we consider the most standard threat model, in which adversarial perturbations must have ℓp-norm, for some fixed p, less than some budget εp. Suppose that we have some classifier C(x) with a corresponding classification loss function L(x, y), where x is some input and y its corresponding label. In order to generate a misclassified input from some input-label pair (x, y), we want to find an adversarial example x' which maximizes L(x', y) but still remains εp-close to the original input. We can thus formulate our adversarial attack problem as the following constrained optimization task: max_{x' : ||x' − x||_p ≤ εp} L(x', y). First order methods tend to be very successful at solving the problem despite its non-convexity BID7 BID2 BID15. A first order method used as the backbone of some of the most powerful white-box adversarial attacks for ℓp-bounded adversaries is projected gradient descent (PGD). This iterative method, given some input x and its correct label y, computes a perturbed input x_k by applying k steps of the update x_l = Π_{B_p(x, ε)}(x_{l−1} + η s_l), with x_0 = x. Here, Π_S is the projection onto the set S, B_p(x, ε) is the ℓp ball of radius ε around x, η is the step size, and ∂U is the boundary of a set U. Also, as is standard in continuous optimization, we make s_l be the projection of the gradient ∇_x L(x_{l−1}, y) at x_{l−1} onto the unit ℓp ball. This way we ensure that s_l corresponds to the unit ℓp-norm vector that has the largest inner product with ∇_x L(x_{l−1}, y). (Note that, in the case of the ℓ2-norm, s_l is simply the normalized gradient, but in the case of, e.g., the ℓ∞-norm, s_l corresponds to the sign vector sgn(∇_x L(x_{l−1}, y)) of the gradient.) So, intuitively, the PGD update perturbs the input in the direction that (locally) increases the loss the most. Observe that due to the projection in the update, x_k is always a valid perturbation of x, as desired. The projected gradient descent (PGD) method described above is designed to be used in the context of so-called white-box attacks, that is, in the setting where the adversary has full access to the gradient ∇_x L(x, y) of the loss function of the attacked network. In many real-world situations, however, this kind of complete access is not available; in the corresponding, more realistic black-box setting, the adversary only has access to an oracle that returns, for a given input (x, y), the value of the loss L(x, y). One might expect that PGD is thus not useful in such a black-box setting. It turns out, however, that this intuition is incorrect.
Specifically, one can still estimate the gradient using only such value queries. (In fact, this kind of estimator is the backbone of so-called zeroth-order optimization frameworks BID22.) The most canonical primitive in this context is the finite difference method. This method estimates the directional derivative of L in a direction v as D_v L(x, y) = v^T ∇_x L(x, y) ≈ (L(x + δv, y) − L(x, y)) / δ. Here, the step size δ > 0 governs the quality of the gradient estimate. Smaller δ gives more accurate estimates but also decreases reliability, due to precision and noise issues. Consequently, in practice, δ is a tunable parameter. Now, we can just use finite differences to construct an estimate of the gradient. To this end, one can find the d components of the gradient by estimating the inner products of the gradient with all the standard basis vectors e_1, ..., e_d, i.e., ∇_x L(x, y) ≈ Σ_{i=1}^{d} e_i · (L(x + δ e_i, y) − L(x, y)) / δ. We can then easily implement the PGD attack (c.f. the update above) by using this estimator in place of the true gradient. Indeed, BID4 were the first to use finite difference methods in this basic form to power a PGD-based adversarial attack in the black-box setting. This basic attack was shown to be successful but, since the number of queries it requires is proportional to the dimension, its resulting query complexity was prohibitively large. For example, the Inception v3 BID24 classifier on the ImageNet dataset has dimensionality d=268,203 and thus this method would require 268,204 queries. (It is worth noting, however, that BID4 developed additional methods to, at least partially, reduce this query complexity.) In the light of the above discussion, one can wonder if the algorithm can be made more query-efficient. A natural idea here would be to avoid fully estimating the gradient and rely instead only on its imperfect estimators. This gives rise to the following question: How accurate a gradient estimate is necessary to execute a successful PGD attack? We examine this question first in the simplest possible setting: one in which we only take a single PGD step (i.e., the case of k = 1). Previous work BID7 indicates that such an attack can already be quite powerful. So, we study how the effectiveness of this attack varies with gradient estimator accuracy. Our experiments, shown in FIG1, suggest that it is feasible to generate adversarial examples without estimating correctly even most of the coordinates of the gradient. For example, in the context of ℓ∞ attacks, setting a randomly selected 20% of the coordinates in the gradient to match the true gradient (and making the remaining coordinates have random sign) is sufficient to fool the classifier on more than 60% of images with single-step PGD. Our experiments thus demonstrate that an adversary is likely to be able to cause a misclassification by performing the iterated PGD attack, even when driven by a gradient estimate that is largely imperfect.
FIG1 (x-axis: percent of ImageNet coordinates; y-axis: adversariality rate; series: top-k, random-k): The fraction of correctly estimated coordinates of sgn(∇_x L(x, y)) required to successfully execute the single-step PGD (also known as FGSM) attack, with ε = 0.05. In the experiment, for each k, the top k percent of the signs of the coordinates, chosen either by magnitude (top-k) or randomly (random-k), are set correctly, and the rest are set to +1 or −1 at random. The adversariality rate is the portion of 1,000 random ImageNet images misclassified after one FGSM step. For example, estimating only 20% of coordinates correctly leads to misclassification for > 60% of images.
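The finite-difference estimator and the resulting black-box PGD step can be sketched as follows for the ℓ∞ case. Here loss_fn stands for the value oracle L(·, y); the toy loss, image size, and hyper-parameters are illustrative assumptions, not the referenced implementation.

```python
# Sketch: coordinate-wise finite-difference gradient estimate (d + 1 queries)
# followed by one projected gradient ascent step under an l_inf constraint.
import numpy as np

def fd_gradient(loss_fn, x, delta=1e-2):
    """Estimate the gradient of loss_fn at x with one query per coordinate."""
    base = loss_fn(x)
    x_flat = x.reshape(-1).copy()
    grad = np.zeros_like(x_flat)
    for i in range(x_flat.size):
        x_pert = x_flat.copy()
        x_pert[i] += delta
        grad[i] = (loss_fn(x_pert.reshape(x.shape)) - base) / delta
    return grad.reshape(x.shape)

def pgd_linf_step(x, x_orig, grad_est, eta=0.01, eps=0.05):
    x_new = x + eta * np.sign(grad_est)                  # steepest ascent w.r.t. l_inf
    return np.clip(x_new, x_orig - eps, x_orig + eps)    # project back onto the eps-ball

# toy usage with a synthetic quadratic "loss" standing in for the query oracle
x0 = np.random.rand(8, 8)
loss_fn = lambda z: float(np.sum((z - 0.5) ** 2))
g = fd_gradient(loss_fn, x0)
x1 = pgd_linf_step(x0, x0, g)
```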
The above discussion makes it clear that successful attacks do not require a perfect gradient estimation, provided this estimate is suitably constructed. It is still unclear, however, how to efficiently find this kind of imperfect but helpful estimator. Continuous optimization methodology suggests that the key characteristic needed from our estimator is for it to have a sufficiently large inner product with the actual gradient. We thus capture this challenge as the following gradient estimation problem: Definition 1 (Gradient estimation problem). For an input/label pair (x, y) and a loss function L, let g * = ∇ x L(x, y) be the gradient of L at (x, y). Then the goal of the gradient estimation problem is to find a unit vector g maximizing the inner product DISPLAYFORM0 from a limited number of (possibly adaptive) function value queries L(x, y). (The expectation here is taken over the randomness of the estimation algorithm.)One useful perspective on the above gradient estimation problem stems from casting the recovery of g * in as an underdetermined vector estimation task. That is, one can view each execution of the finite difference method (see) as computing an inner product query in which we obtain the value of the inner product of g * and some chosen direction vector A i. Now, if we execute k such queries, and k < d (which is the regime we are interested in), the information acquired in this process can be expressed as the following (underdetermined) linear regression problem Ag * = y, where the rows of the matrix A correspond to the queries A 1,..., A k and the entries of the vector y gives us the corresponding inner product values. Relation to compressive sensing. The view of the gradient estimation problem we developed bears striking similarity to the compressive sensing setting BID5. Thus one might wonder if the toolkit of that area could be applied here. Compressive sensing crucially requires, however, certain sparsity structure in the estimated signal (here, in the gradient g *) and, to our knowledge, the loss gradients do not exhibit such a structure. (We discuss this further in Appendix B.)The least squares method. In light of this, we turn our attention to another classical signal-processing method: norm-minimizing 2 least squares estimation. This method approaches the estimation problem posed in by casting it as an undetermined linear regression problem of the form Ag * = b, where we can choose the matrix A (the rows of A correspond to inner product queries with g *). Then, it obtains the solution g to the regression problem by solving: min DISPLAYFORM1 A reasonable choice for A (via BID11 and related ) is the distancepreserving random Gaussian projection matrix, i.e. A ij normally distributed. The ing algorithm turns out to yield solutions that are approximately those given by Natural Evolution Strategies (NES), which ) previously applied to black-box attacks. In particular, in Appendix A, we prove the following theorem. Theorem 1 (NES and Least Squares equivalence). Letx N ES be the Gaussian k-query NES estimator of a d-dimensional gradient g and letx LSQ be the minimal-norm k-query least-squares estimator of g. For any p > 0, with probability at least 1 − p we have that DISPLAYFORM2 Note that when we work in the underdetermined setting, i.e., when k d (which is the setting we are interested in), the right hand side bound becomes vanishingly small. Thus, the equivalence indeed holds. 
In fact, using the precise statement (given and proved in Appendix A), we can show that Theorem 1 provides us with a non-vacuous equivalence bound. Further, it turns out that one can exploit this equivalence to prove that the algorithm proposed in Ilyas et al. FORMULA1 is not only natural but optimal, as the least-squares estimate is an information-theoretically optimal gradient estimate in the regime where k = d, and an error-minimizing estimator in the regime where k << d. Theorem 2 (Least-squares optimality (Proof in Appendix A)). For a linear regression problem y = Ag with known A and y, unknown g, and isotropic Gaussian errors, the least-squares estimator is finite-sample efficient, i.e. the minimum-variance unbiased (MVU) estimator of the latent vector g. Theorem 3 (Least-squares optimality (Proof in Meir FORMULA1). In the underdetermined setting, i.e. when k << d, the minimum-norm least squares estimate (x LSQ in Theorem 1) is the minimumvariance (and thus minimum-error, since bias is fixed) estimator with no empirical loss. The optimality of least squares strongly suggests that we have reached the limit of query-efficiency of black-box adversarial attacks. But is this really the case? Surprisingly, we show that an improvement is still possible. The key observation is that the optimality we established of least-squares (and by Theorem 1, the NES approach in) holds only for the most basic setting of the gradient estimation problem, a setting where we assume that the target gradient is a truly arbitrary and completely unknown vector. However, in the context we care about this assumption does not hold -there is actually plenty of prior knowledge about the gradient available. Firstly, the input with respect to which we compute the gradient is not arbitrary and exhibits locally predictable structure which is consequently reflected in the gradient. Secondly, when performing iterative gradient attacks (e.g. PGD), the gradients used in successive iterations are likely to be heavily correlated. The above observations motivate our focus on prior information as an integral element of the gradient estimation problem. Specifically, we enhance Definition 1 by making its objective DISPLAYFORM0, where I is prior information available to us. This change in perspective gives rise to two important questions: does there exist prior information that can be useful to us?, and does there exist an algorithmic way to exploit this information? We show that the answer to both of these questions is affirmative. Consider a gradient ∇ x L(x, y) of the loss function corresponding to some input (x, y). Does there exist some kind of prior that can be extracted from the dataset {x i}, in general, and the input (x, y) in particular, that can be used as a predictor of the gradient? We demonstrate that it is indeed the case, and give two example classes of such priors. Time-dependent priors. The first class of priors we consider are time-dependent priors, a standard example of which is what we refer to as the "multi-step prior." We find that along the trajectory taken by estimated gradients, successive gradients are in fact heavily correlated. 
We show this empirically by taking steps along the optimization path generated by running the NES estimator at each point, and plotting the normalized inner product (cosine similarity) between successive gradients, given by Figure 2 demonstrates that there indeed is a non-trivial correlation between successive gradientstypically, the gradients of successive steps (using step size from) have a cosine similarity of about 0.9. Successive gradients continue to correlate at higher step sizes: Appendix B shows that the trend continues even at step size 4.0 (a typical value for the total perturbation bound ε). This indicates that there indeed is a potential gain from incorporating this correlation into our iterative optimization. To utilize this gain, we intend to use the gradients at time t − 1 as a prior for the gradient at time t, where both the prior and the gradient estimate itself evolve over iterations. DISPLAYFORM0 Data-dependent priors. We find that the time-dependent prior discussed above is not the only type of prior one can exploit here. Namely, we can also use the structure of the inputs themselves to reduce query complexity (in fact, the existence of such data-dependent priors is what makes machine learning successful in the first place).In the case of image classification, a simple and heavily exploited example of such a prior stems from the fact that images tend to exhibit a spatially local similarity (i.e. pixels that are close together tend to be similar). We find that this similarity also extends to the gradients: specifically, whenever two coordinates (i, j) and DISPLAYFORM1 To corroborate and quantify this phenomenon, we compare ∇ x L(x, y) with an average-pooled, or "tiled", version (with "tile length" k) of the same signal. An example of such an average-blurred gradient can be seen in Appendix B. More concretely, we apply to the gradient the mean pooling operation with kernel size (k, k, 1) and stride (k, k, 1), then upscale the spatial dimensions by k. We then measure the cosine similarity between the average-blurred gradient and the gradient itself. Our , shown in Figure 3, demonstrate that the gradients of images are locally similar enough to allow for average-blurred gradients to maintain relatively high cosine similarity with the actual gradients, even when the tiles are large. Our suggest that we can reduce the dimensionality of our problem by a factor of k 2 (for reasonably large k) and still estimate a vector pointing close to the same direction as the original gradient. This factor, as we show later, leads to significantly improved black-box adversarial attack performance. Given the availability of these informative gradient priors, we now need a framework that enables us to easily incorporate these priors into our construction of black-box adversarial attacks. Our proposed method builds on the framework of bandit optimization, a fundamental tool in online convex optimization BID9. In the bandit optimization framework, an agent plays a game that consists of a sequence of rounds. In round t, the agent must choose a valid action, and then by playing the action incurs a loss given by a loss function t (·) that is unknown to the agent. After playing the action, he/she only learns the loss that the chosen action incurs; the loss function is specific to the round t and may change arbitrarily between rounds. 
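The data-dependent "tiling" prior can be sketched as follows. This is a toy illustration assuming a gradient stored as an (H, W, C) NumPy array with local spatial structure; the synthetic gradient, image size, and tile lengths are ours and do not reproduce the paper's experiments.

```python
import numpy as np

def tile_gradient(grad, k):
    """Mean-pool grad over (k, k, 1) blocks and upsample back, giving the
    'tiled' (average-blurred) gradient used as a data-dependent prior."""
    H, W, C = grad.shape
    assert H % k == 0 and W % k == 0, "toy example assumes divisible dims"
    pooled = grad.reshape(H // k, k, W // k, k, C).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Synthetic "gradient" with local spatial structure: a smooth field plus noise.
rng = np.random.default_rng(0)
smooth = np.add.outer(np.sin(np.linspace(0, 3, 224)), np.cos(np.linspace(0, 3, 224)))
grad = smooth[..., None] + 0.1 * rng.normal(size=(224, 224, 3))

for k in (2, 4, 8, 16):
    print(f"tile length {k:2d}: cosine = {cosine(grad, tile_gradient(grad, k)):.3f}")
```

The same cosine-similarity measurement, applied to real loss gradients rather than this synthetic field, is what produces the curves summarized in Figure 3.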
The goal of the agent is to minimize the average loss incurred over all rounds, and the success of the agent is usually quantified by comparing the total loss incurred to that of the best expert in hindsight (the best single-action policy). By the nature of this formulation, the rounds of this game can not be treated as independent -to perform well, the agent needs to keep track of some latent record that aggregates information learned over a sequence of rounds. This latent record usually takes a form of a vector v t that is constrained to a specified (convex) set K. As we will see, this aspect of the bandit optimization framework will provide us with a convenient way to incorporate prior information into our gradient prediction. An overview of gradient estimation with bandits. We can cast the gradient estimation problem as an bandit optimization problem in a fairly direct manner. Specifically, we let the action at each round t be a gradient estimate g t (based on our latent vector v t), and the loss t correspond to the (negative) inner product between this prediction and the actual gradient. Note that we will never have a direct access to this loss function t but we are able to evaluate its value on a particular prediction vector g t via the finite differences method (which is all that the bandits optimization framework requires us to be able to do).Just as this choice of the loss function t allows us to quantify performance on the gradient estimation problem, the latent vector v t will allow us to algorithmically incorporate prior information into our predictions. Looking at the two example priors we consider, the time-dependent prior will be reflected by carrying over the latent vector between the gradient estimations at different points. Data-dependent priors will be captured by enforcing that our latent vector has a particular structure. For the specific prior we quantify in the preceding section (data-dependent prior for images), we will simply reduce the dimensionality of the latent vector via average-pooling ("tiling"), removing the need for extra queries to discern components of the gradient that are spatially close. We now describe our bandit framework for adversarial example generation in more detail. Note that the algorithm is general and can be used to construct black-box adversarial examples where the perturbation is constrained to any convex set (p -norm constraints being a special case). We discuss the algorithm in its general form, and then provide versions explicitly applied to the 2 and ∞ cases. As previously mentioned, the latent vector v t ∈ K serves as a prior on the gradient for the corresponding round t -in fact, we make our prediction g t be exactly v t projected onto the appropriate space, and thus we set K to be an extension of the space of valid adversarial perturbations (e.g. R n for 2 examples, [−1, 1] n for ∞ examples). Our loss function t is defined as DISPLAYFORM0 for a given gradient estimate g, where we access this inner product via finite differences. Here, L(x, y) is the classification loss on an image x with true class y. The crucial element of our algorithm will thus be the method of updating the latent vector v t. We will adapt here the canonical "reduction from bandit information" BID9. 
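As a sketch of how the round loss ℓt(g) = -⟨∇xL(x, y), g⟩ can be evaluated with function values only, the snippet below forms a two-query finite-difference estimate of the inner product. The callable `loss_fn`, the step size `delta`, and the toy quadratic black box are our own placeholder choices, not the paper's implementation.

```python
import numpy as np

def inner_product_query(loss_fn, x, y, g, delta=0.1):
    """Two-query finite-difference estimate of <grad_x L(x, y), g> for a
    unit-norm direction g, using only function values of the black box."""
    g = g / np.linalg.norm(g)
    return (loss_fn(x + delta * g, y) - loss_fn(x - delta * g, y)) / (2 * delta)

def bandit_round_loss(loss_fn, x, y, g, delta=0.1):
    """The round loss l_t(g) = -<grad_x L(x, y), g>, accessed purely through
    function-value queries to the classification loss."""
    return -inner_product_query(loss_fn, x, y, g, delta)

# Toy check against a differentiable "black box": L(x, y) = ||x - y||^2 / 2,
# whose gradient at x is (x - y).
loss_fn = lambda x, y: 0.5 * np.sum((x - y) ** 2)
x, y = np.ones(10), np.zeros(10)
g = np.random.default_rng(0).normal(size=10)
print(bandit_round_loss(loss_fn, x, y, g))
```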
Specifically, our update procedure is parametrized by an estimator ∆ t of the gradient ∇ v t (v), and a first-order update step DISPLAYFORM1, which maps the latent vector v t and the estimated gradient of t with respect to v t (which we denote ∆ t) to a new latent vector v t+1. The ing general algorithm is presented as Algorithm 1.In our setting, we make the estimator ∆ of the gradient −∇ v ∇L(x, y), v of the loss be the standard spherical gradient estimator (see BID9). We take a two-query estimate of the expectation, and employ antithetic sampling which in the estimate being computed as DISPLAYFORM2 Algorithm 1 Gradient Estimation with Bandit Optimization DISPLAYFORM3 for each round t = 1,..., T do // Our loss in round t is t (g t) = − ∇ x L(x, y init), g t g t ← v t−1 6: DISPLAYFORM0 where u is a Gaussian vector sampled from N (0, {q 1, q 2} ← {v + δu, v − δu} // Antithetic samples 4: DISPLAYFORM1 // Note that due to cancellations we can actually evaluate ∆ with only two queries to L return ∆ A crucial point here is that the above gradient estimator ∆ t parameterizing the bandit reduction has no direct relation to the "gradient estimation problem" as defined in Section 2.4. It is simply a general mechanism by which we can update the latent vector v t in bandit optimization. It is the actions g t (equal to v t) which provide proposed solutions to the gradient estimation problem from Section 2.4.The choice of the update rule A tends to be natural once the convex set K is known. For K = R n, we can simply use gradient ascent: DISPLAYFORM0 and the exponentiated gradients (EG) update when the constraint is an ∞ bound (i.e. K = [−1, 1] n ): DISPLAYFORM1 Finally, in order to translate our gradient estimation algorithm into an efficient method for constructing black-box adversarial examples, we interleave our iterative gradient estimation algorithm with an iterative update of the image itself, using the boundary projection of g t in place of the gradient (c.f. FORMULA1). This in a general, efficient, prior-exploiting algorithm for constructing black-box adversarial examples. The ing algorithm in the 2 -constrained case is shown in Algorithm 3. We evaluate our bandit approach described in Section 3 and the natural evolutionary strategies (NES) approach of on their effectiveness in generating untargeted adversarial examples. We consider both the 2 and ∞ threat models on the ImageNet BID21 dataset, in terms of success rate and query complexity. We further investigate loss and gradient estimate quality over the optimization trajectory in each method. To show the method extends to other datasets, DISPLAYFORM0 x 0 ← x init // Adversarial image to be constructed 5:while C(x) = y init do 6: DISPLAYFORM1 7: ∆ t ← GRAD-EST(x t−1, y init, v t−1) // Estimated Gradient of t DISPLAYFORM2 v t ← v t−1 + η · ∆ t 10: DISPLAYFORM0 we also compare to NES in the CIFAR-∞ threat model; in all threat models, we show on Inception-v3, Resnet-50, and VGG16 classifiers. In evaluating our approach, we test both the bandit approach with time prior (Bandits T), and our bandit approach with the given examples of both the data and time priors (Bandits T D). We use 10,000 and 1,000 randomly selected images (scaled to) to evaluate all approaches on ImageNet and CIFAR-10 respectively. For NES, Bandits T, and Bandits T D we found hyperparameters (given in Appendix C, along with the experimental parameters) via grid search. 
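The following is a condensed, self-contained sketch of the ∞-constrained bandit attack described above, under our own naming and placeholder hyperparameters. It assumes `ip_query(x, y, g)` returns a finite-difference estimate of ⟨∇xL(x, y), g⟩ (for example, built from two loss queries as in the earlier sketch) and that `classify(x)` returns the model's predicted label. The exponentiated-gradient step is written in one standard coordinate-wise form for the box [-1, 1]^n and may differ in detail from the paper's exact update.

```python
import numpy as np

def grad_est(ip_query, x, y, v, delta=0.01):
    """Two-query antithetic spherical estimator of the gradient (w.r.t. the
    latent v) of r(v) = <grad_x L(x, y), v>, where ip_query returns a
    finite-difference estimate of such inner products."""
    u = np.random.default_rng().normal(size=v.shape)
    u /= np.linalg.norm(u)
    r_plus = ip_query(x, y, v + delta * u)
    r_minus = ip_query(x, y, v - delta * u)
    return (r_plus - r_minus) / (2 * delta) * u

def bandit_attack_linf(ip_query, classify, x0, y0, eps=0.05, eta=0.01,
                       h=0.01, max_iters=1000):
    """Sketch of the l_inf bandit attack: update the latent prior v with an
    exponentiated-gradient step, then move the image by h * sign(v) and
    project back onto the eps-ball around the original image x0."""
    v = np.zeros_like(x0)                    # latent gradient prior in [-1, 1]^n
    x = x0.copy()
    for _ in range(max_iters):
        if classify(x) != y0:
            return x                         # misclassified: attack succeeded
        d = grad_est(ip_query, x, y0, v)     # bandit update direction
        p = (v + 1) / 2                      # map [-1, 1] to [0, 1]
        pos = p * np.exp(eta * d)
        neg = (1 - p) * np.exp(-eta * d)
        v = 2 * (pos / (pos + neg)) - 1      # exponentiated-gradient step
        x = np.clip(x + h * np.sign(v), x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)             # stay a valid image
    return x
```

The latent vector v persists across iterations, which is exactly how the time-dependent prior enters the attack; swapping the EG step for plain gradient ascent and the sign step for a normalized step recovers the 2-constrained variant.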
For ImageNet, we record the effectiveness of the different approaches in both threat models in Table 1 (2 and ∞ perturbation constraints), where we show the attack success rate and the mean number of queries (of the successful attacks) needed to generate an adversarial example for the Inception-v3 classifier ( for other classifiers in Appendix F). For all attacks, we limit the attacker to at most 10,000 oracle queries. As shown in Table 1, our bandits framework with both data-dependent and time prior (Bandits T D), is six and three times less failure-prone than the previous state of the art (NES) in the ∞ and 2 settings, respectively. Despite the higher success rate, our method actually uses around half as many queries as NES. In particular, when restricted to the inputs on which NES is successful in generating adversarial examples, our attacks are 2.5 and 5 times as query-efficient for the ∞ and 2 settings, respectively. In Appendix G, we also compare against the AutoZOOM method of BID25, where we show that our Bandits T D method at a higher 100% success rate is over 6 times as query-efficient. Finally, we also have similar for CIFAR-10 under the ∞ threat model, which can be found in Appendix E.We also further quantify the performance of our methods in terms of black-box attacks, and gradient estimation. Specifically, we first measure average queries per success after reaching a certain success rate (Figure 4a), which indicates the dependence of the query count on the desired success rate. The data shows that for any fixed success rate, our methods are more query-efficient than NES, and (due to the exponential trend) suggest that the difference may be amplified for higher success rates. We then plot the loss of the classifier over time (averaged over all images), and performance on the gradient estimation problem for both ∞ and 2 cases (which, crucially, corresponds directly to the expectation we maximize in. We show these three plots for ∞ in Figure 4, and show the for 2 (which are extremely similar) in Appendix D, along with CDFs showing the success of each method as a function of the query limit. We find that on every metric in both threat models, our methods strictly dominate NES in terms of performance. All known techniques for generating adversarial examples in the black-box setting so far rely on either iterative optimization schemes (our focus) or so-called substitute networks and transferability. In the first line of work, algorithms use queries to gradually perturb a given input to maximize a corresponding loss, causing misclassification. BID19 presented the first such iterative attack on a special class of binary classifiers. Later, BID26 Figure 4: (left) Average number of queries per successful image as a function of the number of total successful images; at any desired success rate, our methods use significantly less queries per successful image than NES, and the trend suggests that this gap increases with the desired success rate. (center) The loss over time, averaged over all images; (right) The correlation of the latent vector with the true gradient g, which is precisely the gradient estimation objective we define.real-world system with black-box attacks. Specifically, they fool PDF document malware classifier by using a genetic algorithms-based attack. Soon after, described the first black-box attack on deep neural networks; the algorithm uses a greedy search algorithm that selectively changes individual pixel values. 
BID4 were the first to design black-box attack based on finite-differences and gradient based optimization. The method uses coordinate descent to attack black-box neural networks, and introduces various optimizations to decrease sample complexity. Building on the work of BID4, designed a black-box attack strategy that also uses finite differences but via natural evolution strategies (NES) to estimate the gradients. They then used their algorithm as a primitive in attacks on more restricted threat models. In a concurrent line of work, BID20 introduce a method for attacking models with so-called substitute networks. Here, the attacker trains a model -called a substitute network -to mimic the target network's decisions (obtained with black-box queries), then uses (white-box) adversarial examples for the substitute network to attack the original model. Adversarial examples generated with these methods BID20; BID14 tend to transfer to a target MNIST or CIFAR classifier. We note, however, that for attacking single inputs, the overall query efficiency of this type of methods tends to be worse than that of the gradient estimation based ones. Substitute models are also thus far unable to make targeted black-box adversarial examples. We develop a new, unifying perspective on black-box adversarial attacks. This perspective casts the construction of such attacks as a gradient estimation problem. We prove that a standard least-squares estimator both captures the existing state-of-the-art approaches to black-box adversarial attacks, and actually is, in a certain natural sense, an optimal solution to the problem. We then break the barrier posed by this optimality by considering a previously unexplored aspect of the problem: the fact that there exists plenty of extra prior information about the gradient that one can exploit to mount a successful adversarial attack. We identify two examples of such priors: a "time-dependent" prior that corresponds to similarity of the gradients evaluated at similar inputs, and a "data-dependent" prior derived from the latent structure present in the input space. Finally, we develop a bandit optimization approach to black-box adversarial attacks that allows for a seamless integration of such priors. The ing framework significantly outperforms state-of-the-art by a factor of two to six in terms of success rate and query efficiency. Our thus open a new avenue towards finding priors for construction of even more efficient black-box adversarial attacks. We thank Ludwig Schmidt for suggesting the connection between LSQ and NES. AM supported in part by NSF grants CCF-1553428 and CNS-1815221. LE supported in part by a Siebel Foundation Scholarship and IBM Watson AI grant. AI supported by an Analog Devices Fellowship. Theorem 1 (NES and Least Squares equivalence). Letx N ES be the Gaussian k-query NES estimator of a d-dimensional gradient g and letx LSQ be the minimal-norm k-query least-squares estimator of g. For any p > 0, with probability at least 1 − p we have that DISPLAYFORM0 and in particular, DISPLAYFORM1 with probability at least 1 − p, where DISPLAYFORM2 Proof. Let us first recall our estimation setup. We have k query vectors δ i ∈ R d drawn from an i.i.d Gaussian distribution whose expected squared norm is one, i.e. δ i ∼ N (0, DISPLAYFORM3 Let the vector y ∈ R k denote the inner products of δ i s with the gradient, i.e. DISPLAYFORM4 We define the matrix A to be a k × d matrix with the δ i s being its rows. That is, we have Ag = y. 
Now, recall that the closed forms of the two estimators we are interested in are given bŷ DISPLAYFORM5 which implies that DISPLAYFORM6 We can bound the difference between these two inner products as DISPLAYFORM7 Now, to bound the first term in, observe that DISPLAYFORM8 and thus DISPLAYFORM9 (Note that the first term in the above sum has been canceled out.) This gives us that DISPLAYFORM10 as long as AA T − I ≤ 1 2 (which, as we will see, is indeed the case with high probability). Our goal thus becomes bounding AA T − I = λ max (AA T − I), where λ max (·) denotes the largest (in absolute value) eigenvalue. Observe that AA T and −I commute and are simultaneously diagonalizable. As a , for any 1 ≤ i ≤ k, we have that the i-th largest eigenvalue λ i (AA T − I) of AA T − I can be written as DISPLAYFORM11 So, we need to bound DISPLAYFORM12 To this end, recall that E[AA T] = I (since the rows of A are sampled from the distribution N (0, 1 d I)), and thus, by the covariance estimation theorem of BID6 (see Corollary 7.2) (and union bounding over the two relevant events), we have that DISPLAYFORM13 and thus DISPLAYFORM14 with probability at least 1 − k k+1 p. To bound the second term in, we note that all the vectors δ i are chosen independently of the vector g and each other. So, if we consider the set {ĝ,δ 1, . . .,δ k} of k + 1 corresponding normalized directions, we have (see, e.g., BID8) that the probability that any two of them have the (absolute value of) their inner product be larger than some ε = 2 log(2(k+1)/p) d is at most DISPLAYFORM15.On the other hand, we note that each δ i is a random vector sampled from the distribution N (0, DISPLAYFORM16, so we have that (see, e.g., Lemma 1 in BID13), for any 1 ≤ i ≤ k and any ε > 0, DISPLAYFORM17.Theorem 2 (Least-Squares Optimality). For a fixed projection matrix A and under the following observation model of isotropic Gaussian noise: y = Ag + ε where ε ∼ N (0, εId), the least-squares estimator as in Theorem 1,x LSQ = A T (AA T) −1 y is a finite-sample efficient (minimum-variance unbiased) estimator of the parameter g. Proving the theorem requires an application of the Cramer-Rao Lower Bound theorem:Theorem 3 (Cramer-Rao Lower Bound). Given a parameter θ, an observation distribution p(x; θ), and an unbiased estimatorθ that uses only samples from p(x; θ), then (subject to Fisher regularity conditions trivially satisfied by Gaussian distributions), DISPLAYFORM0 Now, note that the Cramer-Rao bound implies that if the variance of the estimatorθ is the inverse of the Fisher matrix,θ must be the minimum-variance unbiased estimator. Recall the following form of the Fisher matrix: DISPLAYFORM1 Now, suppose we had the following equality, which we can then simplify using the preceding equation: DISPLAYFORM2 Multiplying the preceding by [I(θ)] −1 on both the left and right sides yields: DISPLAYFORM3 which tells us that is a sufficient condition for finite-sample efficiency (minimal variance). We show that this condition is satisfied in our case, where we have y ∼ Ag + ε,θ =x LSQ, and θ = g. We begin by computing the Fisher matrix directly, starting from the distribution of the samples y: DISPLAYFORM4 ∂ log p(y; g) DISPLAYFORM5 Using FORMULA1, DISPLAYFORM6 Finally, note that we can write: DISPLAYFORM7 which concludes the proof, as we have shown thatx LSQ satisfies the condition, which in turn implies finite-sample efficiency. Claim 1. 
Applying the precise bound that we can derive from Theorem 1 on an ImageNet-sized dataset (d = 300000) and using k = 100 queries (what we use in our ∞ threat model and ten times that used for our 2 threat model), DISPLAYFORM8 For 10 queries, DISPLAYFORM9 B OMITTED FIGURES Compressed sensing approaches can, in some cases, solve the optimization problem presented in Section 2.4. However, these approaches require sparsity to improve over the least squares method. Here we show the lack of sparsity in gradients through a classifier on a set of canonical bases for images. In FIG4, we plot the fraction of 2 weight accounted for by the largest k components in randomly chosen image gradients when using two canonical bases: standard and wavelet (db4). While lack of sparsity in these bases does not strictly preclude the existence of a basis on which gradients are sparse, it suggests the lack of a fundamental structural sparsity in gradients through a convolutional neural network. We show in FIG6 that the correlation between successive gradients on the NES trajectory are signficantly correlated, even at much higher step sizes (up to 2 norm of 4.0, which is a typical value for ε, the total adversarial perturbation bound and thus an absolute bound on step size). This serves as further motivation for the time-dependent prior. The average number of queries used per successful image for each method when reaching a specified success rate: we compare NES, Bandits T (our method with time prior only), and Bandits T D (our method with both data and time priors) and find that our methods strictly dominate NES-that is, for any desired sucess rate, our methods take strictly less queries per successful image than NES. Here, we give for the CIFAR-10 dataset, comparing our best method (Bandits T D) and NES. We train Inception-v3, ResNet-50, and VGG16 classifiers by fine-tuning the standard PyTorch ImageNet classifiers. As such, all images are upsampled to 224 × 224 (299 × 299) for. Just as for ImageNet, we use a maximum ∞ perturbation of 0.05, where images are scaled to. Table 5: Summary of effectiveness of ∞ CIFAR10 attacks on Inception v3, ResNet-50, and VGG16 (I, R, V) using NES and bandits with time and data-dependent priors (Bandits T D). Note that in the first column, the average number of queries is calculated only over successful attacks, and we enforce a query limit of 10,000 queries. For purposes of direct comparison, the last column calculates the average number of queries used for only the images that NES (previous SOTA) was successful on. Our most powerful attack uses 2-4 times fewer queries, and fails 2-22 times less often. Table 1 ), VGG16, and ResNet50 classifiers. Note that we do not fine-tune the hyperparameters to the new classifiers, but simply use the hyperparameters found for Inception-v3. Nevertheless, our best method consistently outperforms NES on black-box attacks. Table 6: Summary of effectiveness of ∞ and 2 ImageNet attacks on Inception v3, ResNet-50, and VGG16 (I, R, V) using NES and bandits with time and data-dependent priors (Bandits T D). Note that in the first column, the average number of queries is calculated only over successful attacks, and we enforce a query limit of 10,000 queries. For purposes of direct comparison, the last column calculates the average number of queries used for only the images that NES (previous SOTA) was successful on. Our most powerful attack uses 2-4 times fewer queries, and fails 2-5 times less often. 
To compare with the method of BID25, we consider the same classifier and dataset (Inception-v3 and ImageNet) under the same ℓ2 threat model. Note that BID25 use the mean rather than the maximum ℓ2 perturbation to evaluate their attacks (since their method is based on a Lagrangian relaxation). To ensure a fair comparison, we compare the average number of queries needed to reach adversarial examples within a perturbation budget of 2 · 10^-4, which is explicitly reported by Tu et al. For the bandits approach, we use Bandits T (the bandits method with the time prior) and Bandits T D (the bandits method with both time and data priors), and run both methods until 100% success is reached. We use the same hyperparameters as in the untargeted ImageNet experiments (given in Appendix C). Our findings, given in Table 7, show that our best method achieves a 100% success rate and an over 6-fold reduction in queries. Note that the method of BID25 achieves a 100% success rate in general, but only constrains the mean ℓ2 perturbation, and thus actually achieves a strictly lower success rate under this perturbation threshold.
Table 7: Comparison against the coordinate-based, query-efficient finite-difference attacks of BID25, using the ImageNet dataset, with a maximum per-pixel-normalized ℓ2 constraint of 0.0002 (equal to the maximum ℓ2 threshold reported by BID25). For our methods (Bandits T and Bandits T D) we use the same hyperparameters as in our comparison to NES, which are given in Appendix C.
Method                       Avg. Queries    Success Rate
AutoZOOM-BiLin BID25         15,064          <100%
AutoZOOM-AE BID25            14,914          <100%
Bandits T (Ours)             4,455           100%
Bandits T D (Ours)           2,297           100%
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkMiWhR5K7
We present a unifying view of black-box adversarial attacks as a gradient estimation problem, and introduce a framework (based on bandit optimization) for integrating priors into gradient estimation, leading to significantly improved performance.
Collecting high-quality, large scale datasets typically requires significant resources. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through multitask learning with self-supervised tasks on unlabeled data. To this end, we trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks. We describe three self-supervised learning tasks that can operate on any large, unlabeled audio corpus. We demonstrate that, in a scenario with limited labeled training data, one can significantly improve the performance of a supervised classification task by simultaneously training it with these additional self-supervised tasks. We show that one can improve performance on a diverse sound events classification task by nearly 6\% when jointly trained with up to three distinct self-supervised tasks. This improvement scales with the number of additional auxiliary tasks as well as the amount of unsupervised data. We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance. Although audio tag classification does not require the fine temporal resolution found in raw audio 91 waveforms, our chosen auxiliary tasks (or any arbitrary auditory task for which we may desire our 92 model to be sufficient) require higher temporal resolutions. To satisfy this, we chose to build our 93 model following the WaveNet architecture. are processed using small, task-specific neural networks built atop a task-agnostic trunk. The trunk architecture principally follows the structure of WaveNet, with several blocks of stacked, dilated, and causal convolutions between every convolution layer. Outputs from the trunk are fed into task-specific heads (details in Section 3.1).As shown Figure 1, our WaveNet trunk is composed of N blocks, where each block consists of S dilated causal convolution layers, with dilation factors increasing from 1 to 2 S − 1, residual connections and saturating nonlinearities. We label the blocks using b = 1, · · ·, N. We use indices ∈ [1 + (b − 1)S, bS] to label layers in block b. Each layer,, of the WaveNet trunk consists of a "residual atom" which involves two computations, labeled as "Filter" and "Gate" in the figure. Each residual atom computation produces a hidden state vector h and a layer output x defined via DISPLAYFORM0 where denotes element-wise products, represents the regular convolution operation, denotes dilated convolutions with a dilation factor of 2 mod bS if is a layer in block b + 1, σ denotes the sigmoid function and W gate and W f ilter are the weights for the gate and filter, respectively. The first (= 0) layer -represented as the initial stage marked "1 × 1 Conv" in Figure 1 -applies causal convolutions to the raw audio waveforms X = (X 1, X 2, · · ·, X T), sampled at 16 kHz, to produce an output DISPLAYFORM1 Given the structure of the trunk laid out above, any given block b has an effective receptive field of 1 + b(2 S − 1). Thus the total effective receptive field of our trunk is τ = 1 + N (2 S − 1). Following an extensive hyperpameter search over various configurations, we settled on [N = 3] blocks comprised of [S = 6] layers each for our experiments. Thus our trunk has a total receptive field of τ = 190, which corresponds to about 12 milliseconds of audio sampled at 16kHz. 
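A minimal PyTorch sketch of the trunk described above is given below. Since the residual-atom equations are garbled in the extracted text, the gating is reproduced in the standard WaveNet form (tanh filter times sigmoid gate); the channel width, causal-padding scheme, and omission of skip connections are our own simplifications. The final line checks the receptive-field formula 1 + N(2^S - 1) = 190 for N = 3, S = 6.

```python
import torch
import torch.nn as nn

class GatedResidualLayer(nn.Module):
    """One dilated, causal convolution layer with WaveNet-style gating
    (tanh filter * sigmoid gate) and a residual connection."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.dilation = dilation
        self.filter = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        padded = nn.functional.pad(x, (self.dilation, 0))   # left pad => causal
        h = torch.tanh(self.filter(padded)) * torch.sigmoid(self.gate(padded))
        return x + h                               # residual connection

class Trunk(nn.Module):
    """N blocks of S layers each, with dilations 1, 2, ..., 2**(S-1) per block."""
    def __init__(self, channels=64, n_blocks=3, layers_per_block=6):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            GatedResidualLayer(channels, 2 ** s)
            for _ in range(n_blocks) for s in range(layers_per_block))

    def forward(self, wav):                        # wav: (batch, 1, time), 16 kHz
        x = self.inp(wav)
        for layer in self.layers:
            x = layer(x)
        return x                                   # (batch, channels, time)

print(1 + 3 * (2 ** 6 - 1))                        # receptive field = 190 samples
```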
As indicated above, each task-specific head is a simple neural network whose input data is first 102 constrained to pass through a trunk that it shares with other tasks. Each head is free to process this 103 input to its advantage, independent of the other heads. Each task also specifies its own objective function, as well as a task-specific optimizer, with cus-105 tomized learning rates and annealing schedules, if necessary. We arbitrarily designate supervised 106 tasks as the primary tasks and refer to any self-supervised tasks as auxiliary tasks. In the experiments 107 reported below, we used "audio tagging" as the primary supervised classification task and "next-step 108 prediction", "noise reduction" and "upsampling" as auxiliary tasks training on various amounts of unlabeled data. The parameters used for each of the task specific heads can be found in TAB1 accompanying supplement to this paper. Figure 2: The head architectures were designed to be simple, using only as few layers as necessary to solve the task. Simpler head architectures force the shared trunk to learn a representation suitable for multiple audio tasks. The next-step prediction task can be succinctly formalized as follows: given a sequence 113 {x t−τ +1, · · ·, x t} of frames of an audio waveform, predict the next value x t+1 in the sequence. This prescription allows one to cheaply obtain arbitrarily large training datasets from an essentially 115 unlimited pool of unlabeled audio data. Our next-step prediction head is a 2-layer stack of 1×1 convolutional layers with ReLU nonlinearities 117 in all but the last layer. The first layer contains 128 units, while the second contains a single output unit. The head takes in τ frames of data from the trunk, where τ is the trunk's effective receptive field, and 119 produces an output which represents the model's prediction for the next frame of audio in the sequence. The next-step head treats this as a regression problem, using the mean squared error of the difference 121 between predicted values and actual values as a loss function, i.e. given inputs {x t−τ +1, · · ·, x t}, the head produces an output y t from which we compute a loss L MSE (t) = (y t − x t+1) 2 and then 123 aggregate over the frames to get the total loss. We would like to note that the original WaveNet implementation treated next-step prediction as a 125 classification problem, instead predicting the bin-index of the audio following a µ-law transform. We 126 found that treating the task as a regression problem worked better in multitask situations but make no claims on the universality of this choice. In defining the noise reduction task, we adopt the common approach of treating noise as an additive 130 random process on top of the true signal: if {x t} denotes the clean raw audio waveform, we obtain 131 the noisy version viax t:= x t + ξ t where ξ t an arbitrary noise process. For the denoising task, the 132 model is trained to predict the clean sample, x t, given a window x t− well-adapted to solving either task. Thus, our noise reduction head has a structure similar to the 136 next-step head. It is trained to minimize a smoothed L1 loss between the clean and noisy versions of 137 the waveform inputs, i.e. for each frame t, the head produces an outputŷ t, and we compute the loss DISPLAYFORM0 and then aggregate over frames to obtain the total loss. We used the smooth L1 loss because it 139 provided a more stable convergence for the denoising task than mean squared error. 
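A sketch of the shared head structure and the two losses is shown below, assuming the 128-unit, two-layer 1×1-convolution design described above; the trunk channel width and the alignment of output frames to regression targets are simplifications of ours.

```python
import torch.nn as nn

class RegressionHead(nn.Module):
    """Two-layer stack of 1x1 convolutions (128 hidden units, ReLU in the
    hidden layer) mapping trunk features to one predicted sample per frame."""
    def __init__(self, trunk_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(trunk_channels, 128, kernel_size=1),
            nn.ReLU(),
            nn.Conv1d(128, 1, kernel_size=1))

    def forward(self, features):                # (batch, trunk_channels, time)
        return self.net(features).squeeze(1)   # (batch, time)

# Next-step prediction: regress the next sample x_{t+1} (mean squared error).
next_step_head, next_step_criterion = RegressionHead(), nn.MSELoss()
# Noise reduction: predict the clean sample from the noisy input (smooth L1).
denoise_head, denoise_criterion = RegressionHead(), nn.SmoothL1Loss()
```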
In the same spirit as the denoising task, one can easily create an unsupervised upsampling task Again, given the formal similarity of the upsampling task to the next-step prediction and noise-151 reduction tasks, we used an upsampling head with a structure virtually identical to those described 152 above. As with the denoising task, we used a smooth L1 loss function (see eqn. FORMULA2 having manually-verified labels and the remaining 5763 having non-verified labels, meaning they 177 were automatically categorized using user-provided metadata. The test set is composed of 1600 audio 178 clips with manually-verified labels which are used for the final scoring. The Librispeech dataset 1 (comprised of read English speech sampled at 16 kHz) was used as a proxy for a large unlabeled dataset. The models described below were trained using clips from either the 182 "train-clean-100" or "train-other-500 versions". Models trained with 5, 50 and 100 hours of unlabeled 183 data were sourced from "train-clean-100", while the model trained with 500 hours was sourced entirely from "train-other-500". Due to memory constraints, we limited the duration of each utterance 185 to 2 seconds which we obtained by cropping from a random position in the original clip. This dataset 186 was only used to train the auxiliary tasks. We trained the model using raw audio waveform inputs taken from the FSDKaggle2018 and Lib-189 rispeech datasets. All code for the experiments described here was written in the PyTorch framework. All audio samples were first cropped to two seconds in duration and downsampled to 16 kHz. To normalize for the variation in onset times for different utterances, the 2 seconds were randomly both the main task and the auxiliary tasks, heuristically favoring performance on the main task. We jointly trained the model on all tasks simultaneously by performing a forward pass for each task, important parameters of the model can be found in TAB4 of the accompanying supplement to this 217 paper. As discussed above, we used audio tagging as the main task to investigate whether supervised First, we trained a purely supervised model on 2 seconds of non-silence audio extracted using random In this experiment, we added each of the self-supervised tasks to the baseline model discussed above, simultaneously training them using 100 hours of unlabeled data sampled from the Librispeech dataset 236 along with the main supervised task. We notice that, addition of any self-supervised task showed an average improvement of 4.6% to the MAP@3 score compared to the main task's baseline performance. Adding a pair of tasks gave an average improvement of 4.55% over baseline, showing no improvement over adding a single task. Training with three additional tasks yielded the best with an improvement of 5.33% over the main task. Looking at MAP@3 scores throughout training showed that convergence in every multitask setting was stable, with gradual improvements for increasing number of tasks. The best performance values on the test sets for a sequence of task additions can be found in TAB1.The set of experiments described above demonstrate that, for a fixed amount of unlabeled data (100 hours), simultaneously training a supervised task with various self-supervised tasks yields a significant improvement in the main task's performance. To further test how performance changes with increasing amounts of data, we re-trained our model while varying the amount of unlabeled data used to train the auxiliary tasks. 
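The joint training described above can be sketched as one update step per task through the shared trunk. The dictionary-based plumbing, the per-task weights, and the assumption that each task's optimizer covers both the trunk and that task's head are our own choices, not the paper's exact training loop.

```python
def multitask_step(trunk, heads, losses, optimizers, weights, batches):
    """One joint update: for each task, run its minibatch through the shared
    trunk and its head, compute the task-specific (weighted) loss, and take
    a step with that task's optimizer."""
    # Example task set: {"tagging", "next_step", "denoise", "upsample"}, where
    # each optimizer is assumed to cover the trunk and that task's head.
    for task, (inputs, targets) in batches.items():
        optimizers[task].zero_grad()
        features = trunk(inputs)                 # shared representation
        preds = heads[task](features)
        loss = weights[task] * losses[task](preds, targets)
        loss.backward()
        optimizers[task].step()
```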
We noticed that even without any additional unlabeled data, the MAP@3 score with three additional tasks was significantly better than the score obtained on a single task. This demonstrates that addition of self-supervised tasks improves the performance of main task. Increasing the size of the unlabeled data for the auxiliary tasks increases the size of the multitask benefit (Figure 3).The MAP@3 Scores at different levels of unlabaled data showed progressive improvement to 0.656, 0.669, with 5 and 50 hours respectively. We observed a peak MAP@3 score of 0.694 with 500 hours of unlabeled data, which is an improvement of 8.94% over the main task's baseline performance. Next, we explore several approaches to data augmentation and compare them with multitask learning. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as noise injection, and pitch shifting. We compared our proposed method with traditional data augmentation strategies by retraining our model only for the main task after applying the aforementioned augmentations to the FSDKaggle2018 training data. The MAP@3 values for the data augmentation experiments on the test sets can be found in TAB2. We observed a peak MAP@3 score of 0.703 with pitch shifting augmentation which is similar in scale to that of our best multitask performance gains. In an attempt to observe how both the techniques work together, we combined data augmentation with multitask learning and obtain an MAP@3 score of 0.726 which was the best score among all the experiments we conducted. Figure 3: Improved MAP@3 scores with increasing amounts of unlabeled data. Shown are the MAP@3 scores on test set when the main task is trained with 3 auxiliary tasks with 0, 5, 50, 100, and 500 hours of unlabeled data respectively. The amount of labelled data is held constant for the whole experiment. We see a smooth increase in performance with increasing amounts of unlabeled data. where one has a limited quantity of labeled data. We have also shown that the performance of the 244 supervised task improves by increasing either the number of auxiliary self-supervised tasks or the 245 quantity of unlabeled data or both. We attain a peak performance boost of 8.94% over the baseline 246 with the inclusion of 3 self-supervised tasks when trained with additional 500 hours of unlabeled data. Finally, our multitask learning scheme further benefits when the training data for the data-constrained 248 task is augmented using standard techniques. Since our suggest that the performance gain with 249 our approach is additive when used with data augmentation, it may be interesting to use multitask 250 learning with other augmentation approaches to observe if they complement each other in different 251 settings. We have strived to systematically present our within a coherent multitask learning framework.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryl-BQRisQ
Improving label efficiency through multi-task learning on auditory data
Information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context. We introduce a dataset for sequential procedural (how-to) text generation from images in cooking domain. The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps. We setup a baseline motivated by the best performing model in terms of human evaluation for the Visual Story Telling (ViST) task. In addition, we introduce two models to incorporate high level structure learnt by a Finite State Machine (FSM) in neural sequential generation process by: Scaffolding Structure in Decoder (SSiD) Scaffolding Structure in Loss (SSiL). These models show an improvement in empirical as well as human evaluation. Our best performing model (SSiL) achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model. We also conducted human evaluation of the generated grounded recipes, which reveal that 61% found that our proposed (SSiL) model is better than the baseline model in terms of overall recipes, and 72.5% preferred our model in terms of coherence and structure. We also discuss analysis of the output highlighting key important NLP issues for prospective directions. Interpretation is heavily conditioned on context. Real world interactions provide this context in multiple modalities. In this paper, the context is derived from vision and language. The description of a picture changes drastically when seen in a sequential narrative context. Formally, this task is defined as: given a sequence of images I = {I 1, I 2, ..., I n} and pairwise associated textual descriptions, T = {T 1, T 2, ..., T n}; for a new sequence I, our task is to generate the corresponding T. Figure 1 depicts an example for making vegetable lasagna, where the input is the first row and the output is the second row. We call this a'storyboard', since it unravels the most important steps of a procedure associated with corresponding natural language text. The sequential context differentiates this task from image captioning in isolation. The narration of procedural content draws slight differentiation of this task from visual story telling. The dataset is similar to that presented by with an apparent difference between stories and instructional in-domain text which is the clear transition in phases of the narrative. This task supplements the task of ViST with richer context of goal oriented procedure (how-to). This paper attempts at capturing this high level structure present in procedural text and imposing this structure while generating sequential text from corresponding sequences of images. Numerous online blogs and videos depict various categories of how-to guides for games, do-ityourself (DIY) crafts, technology, gardening etc. This task lays initial foundations for full fledged storyboarding of a given video, by selecting the right junctions/clips to ground significant events and generate sequential textual descriptions. However, the main focus of this work is generating text from a given set of images. We are going to focus on the domain of cooking recipes in the rest of this paper, leaving the exploration of other domains to future. The two important dimensions to address in text generation are content and structure. In this paper, we discuss our approach in generating more structural/coherent cooking recipes by explicitly modeling the state transitions between different stages of cooking (phases). 
We address the question of generating textual interpretation of Figure 1: Storyboard for the recipe of vegetable lasagna the procedure depicted as a sequence of pictures (snapped at different instances of time as the procedure progresses). We introduce a framework to apply traditional FSMs to enhance incorporation of structure in neural text generation. We plan to explore backpropable variants in place of FSMs in future to design structure aware generation models. The two main contributions of this paper are:1. A dataset of 16k recipes targeted for sequential multimodal procedural text generation. 2. Two models (SSiD: Structural Scaffolding in Decoder,and SSiL: Structural Scaffolding in Loss) for incorporating high level structure learnt by an FSM into a neural text generation model to improve structure/coherence. The rest of the paper is organized as follows. Section 2 describes the related work performed along the lines of planning while generating, understanding food and visual story telling. Section 3 describes the data we gathered for this task and related statistics. In Section 4, we describe our models: a baseline model (Glocal), SSiD and SSiL in detail. Section 5 presents the attained by each of these models both empirically and qualitatively. Section 6 concludes this work and presents some future directions. Why domain constraint? BID21 and BID14 demonstrated that the predictive ability of a seq2seq model improves as the language corpus is reduced to a specialized domain with specific actions. Our choice of restricting domain to recipes is inspired from this, where the set of events are specialized (such as 'cut', 'mix', 'add') although we are not using event representations explicitly. These specialized set of events are correlated to phases of procedural text as described in the following sections. Planning while writing content: A major challenge faced by neural text generation BID18 while generating long sequences is the inability to maintain structure, contravening the coherence of the overall generated text. This aspect was also observed in various tasks like summarization BID17, story generation BID9. Pre-selecting content and planning to generate accordingly was explored by BID26 and BID19 in contrast to generate as you proceed paradigm. BID8 adapt a hierarchical approach to generate a premise and then stories to improve coherence and fluency. BID31 experimented with static and dynamic schema to realize the entire storyline before generating. However, in this work we propose a hierarchical multi task approach to perform structure aware generation. Comprehending Food: Recent times have seen large scale datasets in food, such as Recipe1M BID20, Food-101 and bench-marking challenges like iFood challenge 1. Food recognition BID0 addresses understanding food from a vision perspective. worked on generating cooking instructions by inferring ingredients from an image. BID32 proposed a method to generate procedure segments for YouCook2 data. In NLP domain, this is studied as generating procedural text by including ingredients as checklists BID15 or treating the recipe as a flow graph BID22. Our work is at the intersection of two modalities (language and vision) by generating procedural text for recipes from a sequence of images. BID2 worked on reasoning non-mentioned causal effects thereby improving the understanding and generation of procedural text for cooking recipes. This is done by dynamically tracking entities by modeling actions using state transformers. 
is a sequential vision to language task demonstrating differences between descriptions in isolation and stories in sequences. Along similar lines, BID10 created VideoStory dataset with videos posted on social media with the task of generating a multi-sentence story captions for them. BID29 proposed a late fusion based model for ViST challenge. BID16 attained the highest scores on human readability in this task by attending to both global and local contexts. We use this as our baseline model and propose two techniques on top of this baseline to impose structure needed for procedural text. Food is one of the most photographed subject on the instagram network which led to coining the term foodstagram. We identified two how-to blogs: instructables 2 and snapguide.com 3, comprising stepwise instructions (images and text) of various how-to activities like games, crafts etc,. We gathered 16,441 samples with 160,479 photos 4 for food, dessert and recipe topics. We used 80% for training, 10% for validation and 10% for testing our models. In some cases, there are multiple images for the same step and we randomly select an image from the set of images. We indicate that there is a potential space for research here, in selecting most distinguishing/representative/meaningful image. Details of the datasets are presented in Table 1. The distribution of the topics is visualized here 5. A trivial extension could be done on other domains like gardening, origani crafts, fixing guitar strings etc, which is left for future work. Data Sources # Recipes # Avg Steps instructables 9,101 7.14 snapguide 7,340 13.01 Table 1: Details of dataset for storyboarding recipes We first describe a baseline model for the task of storyboarding cooking recipes in this section. We then propose two models with incremental improvements to incorporate the structure of procedural text in the generated recipes: SSiD (Scaffolding Structure in Decoder) and SSiL (Scaffolding Structure in Loss). The architecture of scaffolding structure is presented in FIG1, of which different aspects are described in the following subsections. We have a sequence of images at different phases of cooking as our input and the task is to generate step wise textual descriptions of the recipe. The baseline model is inspired from the best performing system in ViST challenge with respect to human evaluation BID16. The images are first resized into 224 X 224. Image features for each step are extracted from the penultimate layer of pre-trained ResNet-152. These features are then passed through an affinity layer to obtain an image feature of dimension 1024. To maintain the context of the entire recipe (global context), the sequence of these image features are passed through a two layered Bi-LSTM with a hidden size of 1024. To maintain specificity of the current image (local context), the image features for the current step are concatenated using a skip connection to the output of the Bi-LSTM to obtain glocal representation. Dropout of 0.5 is applied systematically at the affinity layer to obtain the image feature representation and after the Bi-LSTM layer. Batch normalization is applied with a momentum 0.01. This completes the encoder part of the sequence to sequence architecture. These glocal vectors are used for decoding each step. These features are passed through a fully connected layer to obtain a representation of 1024 dimension followed by a non-linear transformation using ReLU. 
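A minimal PyTorch sketch of this glocal encoder is given below, using the stated dimensions (1024-d affinity projection, two-layer Bi-LSTM with hidden size 1024, dropout 0.5, batch-norm momentum 0.01, and a final 1024-d fully connected layer with ReLU); the exact placement of dropout and batch normalization is a simplification of ours.

```python
import torch
import torch.nn as nn

class GlocalEncoder(nn.Module):
    """Glocal context encoder: per-step ResNet-152 features are projected by
    an affinity layer, passed through a 2-layer Bi-LSTM over the recipe steps
    (global context), concatenated with the current step's features (local
    context), and projected to a 1024-d representation with a ReLU."""
    def __init__(self, resnet_dim=2048, feat_dim=1024, hidden=1024):
        super().__init__()
        self.affinity = nn.Linear(resnet_dim, feat_dim)
        self.bn = nn.BatchNorm1d(feat_dim, momentum=0.01)
        self.dropout = nn.Dropout(0.5)
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.project = nn.Sequential(
            nn.Linear(2 * hidden + feat_dim, feat_dim), nn.ReLU())

    def forward(self, resnet_feats):             # (batch, steps, 2048)
        b, s, _ = resnet_feats.shape
        local = self.dropout(self.affinity(resnet_feats))
        local = self.bn(local.reshape(b * s, -1)).reshape(b, s, -1)
        global_ctx, _ = self.bilstm(local)       # (batch, steps, 2 * hidden)
        global_ctx = self.dropout(global_ctx)
        glocal = torch.cat([global_ctx, local], dim=-1)
        return self.project(glocal)              # one 1024-d vector per step
```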
These features are then passed through a decoder LSTM for each step in the recipe which are trained by teacher forcing. The overall coherence in generation is addressed by feeding the decoder state of the previous step to the next one. This is a seq2seq model translating one modality into another. The model is optimized using Adam with a learning rate of 0.001 and weight decay of 1e-5. DISPLAYFORM0 The model described above does not explicitly cater to the structure of the narration of recipes in the generation process. However, we know that procedural text has a high level structure that carries a skeleton of the narrative. In the subsequent subsections, we present two models that impose this high level narrative structure as a scaffold. While this scaffold lies external to the baseline model, it functions on imposing the structure in decoder (SSiD) and in the loss term (SSiL). There is a high level latent structure involved in a cooking recipe that adheres to transitions between steps, that we define as phases. Note that the steps and phases are different here. To be specific, according to our definition, one or more steps map to a phase (this work does not deal with multiple phases being a part of a single step). Phases may be'listing ingredients','baking','garnishing' etc., The key idea of the SSiD model is to incorporate the sequence of phases in the decoder to impose structure during text generation 6.There are two sources of supervision to drive the model: multimodal dataset M = {I, T} from Section 3, unimodal textual recipes 7 U to learn phase sequences. Finer phases are learnt using clustering followed by an FSM. 6 To validate the hypothesis of operating FSM with phases over the neural baseline model we have in place, we first performed proof of concept experiments with the step-wise titles present in our instructables dataset. Here, the content words after removal of the stop words for words with high tf-idf values are defined as phases. However, for the actual model, these phases are latent states learnt through an FSM. 7 www.ffts.com/recipes.htmClustering: K-Means clustering is performed on the sentence embeddings with compositional ngram features BID24 on each step of the recipe in U. Aligning with our intuition, when k is 3, it is observed that these clusters roughly indicate categories of desserts, drinks and main course foods (pizza, quesadilla etc,). However, we need to find out finer categories of the phases corresponding to the phases in the recipes. We use k-means clustering to obtain the categories of these phases. We experimented with different number of phases P as shown in Table 2. For example, let an example recipe comprise of 4 steps i.e, a sequence of 4 images. At this point, each recipe can be represented as a hard sequence of phases r = p 1, p 2, p 3, p 4. The phases learnt through clustering are not ground truth phases. We explore the usage of an FSM to individually model hard and a softer representation of the phase sequences by leveraging the states in an FSM. We first describe how the hard representation is modeled. The algorithm was originally developed for building language models for limited token sets in grapheme to phoneme prediction. The iterative algorithm starts with an ergodic state for all phase types and uses entropy to find the best state split that would maximize the prediction. This is presented in Algorithm 1. 
As opposed to phase sequences, each recipe is now represented as a state sequence (decoded from FSM) i.e, r = s 1, s 2, s 3, s 4 (hard states). This is a hard representation of the sequence of states. We next describe how a soft representation of these states is modeled. Since the phases are learnt in an unsupervised fashion and the ground truth of the phases is not available, we explored a softer representation of the states. We hypothesize that a soft representation of the states might smooth the irregularities of phases learnt. From the output of the FSM, we obtain the state transition probabilities from each state to every other state. Each state s i can be represented as q ij ∀ j ∈ S (soft states), where q ij is the state transition probability from s i to s j and S is the total number of states. This is the soft representation of state sequences. The structure in the recipe is learnt as a sequence of phases and/or states (hard or soft). This is the structural scaffold that we would like to incorporate in the baseline model. In SSiD model, for each step in the recipe, we identify which phase it is in using the clustering model and use the phase sequence to decode state transitions from the FSM. The state sequences are concatenated to the decoder in the hard version and the state transition probabilities are concatenated in the decoder in the soft version at every time step. At this point, we have 2 dimensions, one is the complexity of the phases (P) and the other is the complexity of the states in FSM (S). Comprehensive of searching this space is presented in Table 2. We plan to explore the usage of hidden markov model in place of FSM in future. The score is the each entropy times the number of examples going through that state; end else end Once the best split is found, split moving all incoming arcs of that type to the new state (subtracting them from old one). end In addition to imposing structure via SSiD, we explored measuring the deviation of the structure learnt through phase/state sequences from the original structure. This leads to our next model where the deviation of the structure in the generated output from that of the original structure is reflected in the loss. The decoded steps are passed through the clustering model to get phase sequences and then state transition probabilities are decoded from FSM for the generated output. We go a step further to investigate the divergence between the phases of generated and original steps. This can also be viewed as hierarchical multi-task learning BID28. The first task is to decode each step in the recipe (which uses a cross entropy criterion, L 1). The second task uses KL divergence between phase sequences of decoded and original steps to penalize the model (say, L 2).When there are τ steps in a recipe, we obtain o(s τ 1) and g(s τ 1) as the distributions of phases comprising of soft states for the original and generated recipes respectively. We measure the KL divergence(D KL) between these distributions: DISPLAYFORM0 Each task optimizes different functions and we minimize the combination of the two losses. DISPLAYFORM1 This combined loss is used to penalize the model. Here, α is obtained from KL annealing function that gradually increases the weight of KL term from 0 to 1 during train time. The two dimensions explored in clustering and FSM are the number of phases that are learnt in unsupervised manner (P) and the number of states attained through state splitting algorithm in FSM (S). 
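A sketch of the SSiL objective described above is shown below. The exact form in which the two losses are combined is garbled in the extracted text, so the weighting L1 + α·L2 with a KL-annealed α is our assumption, and the extraction of soft-state distributions from the generated and original recipes is abstracted into the function's inputs.

```python
import torch.nn.functional as F

def ssil_loss(step_logits, step_targets, gen_state_probs, orig_state_probs,
              alpha):
    """Combined SSiL objective. L1 is the token-level cross entropy of the
    decoded steps; L2 is D_KL(o(s) || g(s)) between the soft-state (phase)
    distributions of the original and generated recipes; alpha comes from a
    KL-annealing schedule that grows from 0 to 1 during training. The exact
    combination L1 + alpha * L2 is an assumption."""
    l_ce = F.cross_entropy(step_logits.reshape(-1, step_logits.size(-1)),
                           step_targets.reshape(-1))
    # F.kl_div(log q, p) computes KL(p || q); here p = original, q = generated.
    l_kl = F.kl_div(gen_state_probs.clamp_min(1e-8).log(),
                    orig_state_probs, reduction="batchmean")
    return l_ce + alpha * l_kl
```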
The results of searching this space for the best configuration are presented in Table 2.

Table 2: BLEU scores for different numbers of phases (P) and states (S).

The BLEU score BID25 is the highest when P is 40 and S is 100. Fixing these values, we compare the models proposed in TAB4. The models with hard phases and hard states are not as stable as the one with soft phases, since backpropagation affects the impact of the scaffolded phases. Upon manual inspection, a key observation is that for the SSiD model, most of the recipes followed a similar structure. It seemed to be conditioned on a global structure learnt from all recipes rather than the current input. However, the SSiL model seems to generate recipes that are conditioned on the structure of each particular example.

Human Evaluation: We have also performed human evaluation by conducting a user preference study to compare the baseline with our best-performing SSiL model. We randomly sampled generated outputs of 20 recipes and asked 10 users to answer two preference questions: preference for the overall recipe based on the images, and preference for the structurally coherent recipe. For the second question, we gave examples of what structure and phases mean in a recipe. Our SSiL model was preferred 61% and 72.5% of the time for overall and structural preferences respectively. This shows that while there is a viable space to build models that improve structure, generating an edible recipe needs to be explored to improve the overall preference.

The following generated outputs (from Figure 3) illustrate the three models, annotated with weaknesses (-) and strengths (+):

(Glocal baseline) "This is a simple recipe for making a delicious chicken salad. You will need: a butter knife a plate of bread flour a little bit of salt a dash of pepper flakes a couple of tablespoons of olive oil a pinch of sugar. Add butter evenly on the pan. Put the chicken on the grill and set aside." - Ingredients phase wrongly identified. - Wrong ingredients. - Improper .

(SSiD) "This is a simple recipe for making a delicious and easy dish. Ingredients: 4 pounds chicken 2 tsp salt, ½ tsp sugar, marinara sauce, mozzarella cheese (i used provolone). Tools: a knife, an oven for the chicken, tongs. Mix all ingredients in a bag. Add butter evenly on the pan. Serve the baked chicken wings and enjoy the evening!" - Learnt majority structure (step 1). + Got 'tongs' right because of separate tools mention. - The action of baking is not explicitly mentioned (before 'baked' wings).

(SSiL) "You will need: 5 pounds of chicken wings, ½ cup all purpose flour, ½ tsp salt, 2 tsp of paprika, melted butter, silicon mat, baking pan. Preheat oven to 450 F. Mix dry ingredients in the dry ziplock bag. Place a mat on the baking pan and spread butter evenly on it. Spread the chicken pieces on butter on the baking pan. Bake until crispy for 30 minutes. Serve and enjoy!" + Global context of baking maintained in preheating. + Non-repetitive ingredients phase. + Referring expressions (baking pan -> it). - Not mentioned tools (tongs).

Figure 3 presents the generated text from the three models with an analysis described below. Referring Expressions: Introducing referring expressions is a key aspect of coherence BID4, as seen in the case of 'baking pan' being referred to as 'it' in the SSiL model. Context Maintenance: Maintaining the overall context explicitly has an effect on the generation of each step. This is reflected in the SSiL model where 'preheating' is discussed in the second step although the image does not show an oven. This structure is learnt from the baking step that appears later.
Schema for Procedural Text: Explicitly modeling structure for procedural text has enabled the model to conclude the recipe in the SSiD and SSiL models by generating words like 'serve' and 'enjoy'. Lacking this structure, the Glocal model talks about setting aside at the end of the recipe. Precision of Entities and Actions: The SSiD model introduces 'sugar' in the ingredients after generating 'salt'. A brief manual examination revealed that this co-occurrence is a common phenomenon. Similarly, sauce and cheese are wrongly generated. The SSiL model misses 'tongs' in the first step. There is an inherent trade-off between detailing and presenting a concise overview. For instance, one might not need detail on how onions are cut in comparison to how the layering of cheese is executed. Although we are not explicitly addressing the issue of identifying complicated and trivial steps, a storyboard format implicitly takes care of this by briefing in the pictorial representation and detailing in the text. This draws parallels with multimodal summarization. Our main focus in this paper is instilling structure learnt from FSMs into neural models for sequential procedural text generation with multimodal data. Recipes are being presented in the form of graphic novels, reflecting the cultural change in expectations of presenting instructions. With this change, a storyboard is a comprehensive representation of the important events. In this direction, we gather a dataset of 16k recipes where each step has text and associated images. The main difference between the ViST dataset and our dataset is that ours is targeted at procedural, how-to text (specifically presenting cooking recipes in this work). We set up a baseline inspired by the best-performing model in ViST in the category of human evaluation. We learn a high-level structure of the recipe as a sequence of phases and a sequence of hard and soft representations of states learnt from a finite state machine. We propose two techniques for incorporating the structure learnt from this as a scaffold. The first model imposes structure on the decoder (SSiD) and the second imposes structure on the loss function (SSiL) by modeling it as a hierarchical multi-task learning problem. We show that our proposed approach (SSiL) improves upon the baseline and achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model. We plan on exploring backpropagable variants as a scaffold for structure in the future. We also plan to extend these models to other domains present in these sources of data. There is no standard way to explicitly evaluate the high-level structure learnt in this task, and we would like to explore evaluation strategies for it.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJeQE8LYdV
The paper presents two techniques to incorporate high level structure in generating procedural text from a sequence of images.
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the \emph{Stein gradient estimator}, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity. Modelling is fundamental to the success of technological innovations for artificial intelligence. A powerful model learns a useful representation of the observations for a specified prediction task, and generalises to unknown instances that follow similar generative mechanics. A well established area of machine learning research focuses on developing prescribed probabilistic models BID8, where learning is based on evaluating the probability of observations under the model. Implicit probabilistic models, on the other hand, are defined by a stochastic procedure that allows for direct generation of samples, but not for the evaluation of model probabilities. These are omnipresent in scientific and engineering research involving data analysis, for instance ecology, climate science and geography, where simulators are used to fit real-world observations to produce forecasting . Within the machine learning community there is a recent interest in a specific type of implicit models, generative adversarial networks (GANs) BID10, which has been shown to be one of the most successful approaches to image and text generation BID56 BID2 BID5. Very recently, implicit distributions have also been considered as approximate posterior distributions for Bayesian inference, e.g. see BID25; BID53; BID22; BID19; BID29; BID15; BID23; BID48. These examples demonstrate the superior flexibility of implicit models, which provide highly expressive means of modelling complex data structures. Whilst prescribed probabilistic models can be learned by standard (approximate) maximum likelihood or Bayesian inference, implicit probabilistic models require substantially more severe approximations due to the intractability of the model distribution. Many existing approaches first approximate the model distribution or optimisation objective function and then use those approximations to learn the associated parameters. However, for any finite number of data points there exists an infinite number of functions, with arbitrarily diverse gradients, that can approximate perfectly the objective function at the training datapoints, and optimising such approximations can lead to unstable training and poor . Recent research on GANs, where the issue is highly prevalent, suggest that restricting the representational power of the discriminator is effective in stabilising training (e.g. see BID2 BID21 . 
However, such restrictions often intro- A comparison between the two approximation schemes. Since in practice the optimiser only visits finite number of locations in the parameter space, it can lead to over-fitting if the neural network based functional approximator is not carefully regularised, and therefore the curvature information of the approximated loss can be very different from that of the original loss (shown in (a)). On the other hand, the gradient approximation scheme (b) can be more accurate since it only involves estimating the sensitivity of the loss function to the parameters in a local region.duce undesirable biases, responsible for problems such as mode collapse in the context of GANs, and the underestimation of uncertainty in variational inference methods BID49.In this paper we explore approximating the derivative of the log density, known as the score function, as an alternative method for training implicit models. An accurate approximation of the score function then allows the application of many well-studied algorithms, such as maximum likelihood, maximum entropy estimation, variational inference and gradient-based MCMC, to implicit models. Concretely, our contributions include:• the Stein gradient estimator, a novel generalisation of the score matching gradient estimator BID16, that includes both parametric and non-parametric forms; • a comparison of the proposed estimator with the score matching and the KDE plug-in estimators on performing gradient-free MCMC, meta-learning of approximate posterior samplers for Bayesian neural networks, and entropy based regularisation of GANs. Given a dataset D containing i.i.d. samples we would like to learn a probabilistic model p(x) for the underlying data distribution p D (x). In the case of implicit models, p(x) is defined by a generative process. For example, to generate images, one might define a generative model p(x) that consists of sampling randomly a latent variable z ∼ p 0 (z) and then defining x = f θ (z). Here f is a function parametrised by θ, usually a deep neural network or a simulator. We assume f to be differentiable w.r.t. θ. An extension to this scenario is presented by conditional implicit models, where the addition of a supervision signal y, such as an image label, allows us to define a conditional distribution p(x|y) implicitly by the transformation x = f θ (z, y). A related methodology, wild variational inference BID25 BID22 ) assumes a tractable joint density p(x, z), but uses implicit proposal distributions to approximate an intractable exact posterior p(z|x). Here the approximate posterior q(z|x) can likewise be represented by a deep neural network, but also by a truncated Markov chain, such as that given by Langevin dynamics with learnable step-size. Whilst providing extreme flexibility and expressive power, the intractability of density evaluation also brings serious optimisation issues for implicit models. This is because many learning algorithms, e.g. maximum likelihood estimation (MLE), rely on minimising a distance/divergence/discrepancy measure D[p||p D], which often requires evaluating the model density (c.f. BID33 BID25 . Thus good approximations to the optimisation procedure are the key to learning implicit models that can describe complex data structure. In the context of GANs, the Jensen-Shannon divergence is approximated by a variational lower-bound represented by a discriminator BID3 BID10 . 
Related work for wild variational inference BID22 BID29 BID15 BID48) uses a GAN-based technique to construct a density ratio estimator for q/p 0 BID46 BID47 BID50 BID30 and then approximates the KL-divergence term in the variational lower-bound: DISPLAYFORM0 In addition, BID22 and BID29 exploit the additive structure of the KLdivergence and suggest discriminating between q and an auxiliary distribution that is close to q, making the density ratio estimation more accurate. Nevertheless all these algorithms involve a minimax optimisation, and the current practice of gradient-based optimisation is notoriously unstable. The stabilisation of GAN training is itself a recent trend of related research (e.g. see BID36 BID2 . However, as the gradient-based optimisation only interacts with gradients, there is no need to use a discriminator if an accurate approximation to the intractable gradients could be obtained. As an example, consider a variational inference task with the approximate pos- DISPLAYFORM1 Notice that the variational lower-bound can be rewritten as DISPLAYFORM2 the gradient of the variational parameters φ can be computed by a sum of the path gradient of the first term (i.e. DISPLAYFORM3) and the gradient of the entropy term DISPLAYFORM4. Expanding the latter, we have DISPLAYFORM5 in which the first term in the last line is zero BID35. As we typically assume the tractability of ∇ φ f, an accurate approximation to ∇ z log q(z|x) would remove the requirement of discriminators, speed-up the learning and obtain potentially a better model. Many gradient approximation techniques exist BID44 BID9 BID57 BID7, and in particular, in the next section we will review kernel-based methods such as kernel density estimation BID39 and score matching BID16 in more detail, and motivate the main contribution of the paper. We propose the Stein gradient estimator as a novel generalisation of the score matching gradient estimator. Before presenting it we first set-up the notation. Column vectors and matrices are boldfaced. The random variable under consideration is x ∈ X with X = R d×1 if not specifically mentioned. To avoid misleading notation we use the distribution q(x) to derive the gradient approximations for general cases. As Monte Carlo methods are heavily used for implicit models, in the rest of the paper we mainly consider approximating the gradient g(DISPLAYFORM0 We use x i j to denote the jth element of the ith sample x i . We also denote the matrix form of the col- DISPLAYFORM1 T ∈ R K×d, and its approximation DISPLAYFORM2 We start from introducing Stein's identity that was first developed for Gaussian random variables BID42 BID43 then extended to general cases BID11 . Let h : R d×1 → R d ×1 be a differentiable multivariate test function which maps x to a column vector DISPLAYFORM0 T . We further assume the boundary condition for h: DISPLAYFORM1 This condition holds for almost any test function if q has sufficiently fast-decaying tails (e.g. Gaussian tails). Now we introduce Stein's identity BID43 BID11 ) DISPLAYFORM2 in which the gradient matrix term DISPLAYFORM3 This identity can be proved using integration by parts: for the ith row of the matrix h(x)∇ x log q(x) T, we have DISPLAYFORM4 Observing that the gradient term ∇ x log q(x) of interest appears in Stein's identity, we propose the Stein gradient estimator by inverting Stein's identity. 
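As a brief aside, Stein's identity can be checked numerically for a distribution with a known score. The snippet below uses a standard Gaussian q, whose score is -x, and the fast-decaying test function h(x) = exp(-||x||^2 / 2) * x; the identity predicts that the averaged matrix vanishes as the number of samples grows. This is purely illustrative and not part of the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 200_000
x = rng.standard_normal((K, d))                     # samples from q = N(0, I)
score = -x                                          # grad_x log q(x) for N(0, I)

# Test function h(x) = exp(-||x||^2 / 2) * x and its Jacobian.
w = np.exp(-0.5 * np.sum(x ** 2, axis=1))
h = w[:, None] * x                                  # (K, d)
jac_h = w[:, None, None] * (np.eye(d)[None] - x[:, :, None] * x[:, None, :])

# Stein's identity: E_q[ h(x) grad_x log q(x)^T + grad_x h(x) ] = 0.
lhs = np.einsum('ki,kj->ij', h, score) / K + jac_h.mean(axis=0)
print(np.abs(lhs).max())                            # close to 0, shrinking as K grows
```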
As the expectation in is intractable, we further approximate the above with Monte Carlo (MC): DISPLAYFORM5 with err ∈ R d ×d the random error due to MC approximation, which has mean 0 and vanishes as K → +∞. Now by temporarily denoting FORMULA14 can be rewritten as DISPLAYFORM6 DISPLAYFORM7 Thus we consider a ridge regression method (i.e. adding an 2 regulariser) to estimate G: DISPLAYFORM8 with || · || F the Frobenius norm of a matrix and η ≥ 0. Simple calculation shows that DISPLAYFORM9 where DISPLAYFORM10 One can show that the RBF kernel satisfies Stein's identity.In this case h(x) = K(x, ·), d = +∞ and by the reproducing kernel property DISPLAYFORM11 In this section we derive the Stein gradient estimator again, but from a divergence/discrepancy minimisation perspective. Stein's method also provides a tool for checking if two distributions q(x) andq(x) are identical. If the test function set H is sufficiently rich, then one can define a Stein discrepancy measure by DISPLAYFORM0 see BID11 for an example derivation. When H is defined as a unit ball in an RKHS induced by a kernel K(x, ·), and BID6 showed that the supremum in can be analytically obtained as (with K xx shorthand for K(x, x)): DISPLAYFORM1 which is also named the kernelised Stein discrepancy (KSD). BID6 showed that for C 0 -universal kernels satisfying the boundary condition, KSD is indeed a discrepancy measure: S 2 (q,q) = 0 ⇔ q =q. BID12 further characterised the power of KSD on detecting non-convergence cases. Furthermore, if the kernel is twice differentiable, then using the same technique as to derive one can compute KSD by FORMULA17 is equivalent to the V-statistic of KSD if h(x) = K(x, ·), and we have the following: DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 is the solution of the following KSD V-statistic minimisation problem DISPLAYFORM5 One can also minimise the U-statistic of KSD to obtain gradient approximations, and a full derivation of which, including the optimal solution, can be found in the appendix. In experiments we use Vstatistic solutions and leave comparisons between these methods to future work. There exist other gradient estimators that do not require explicit evaluations of ∇ x log q(x), e.g. the denoising auto-encoder (DAE) BID52 BID51 BID0 which, with infinitesimal noise, also provides an estimate of ∇ x log q(x) at convergence. However, applying such gradient estimators in a double-loop optimisation procedure since the gradient approximation is repeatedly required for fitting implicit distributions, which can be significantly slower than the proposed approach. Therefore we focus on "quick and dirty" approximations and only include comparisons to kernel-based gradient estimators in the following. A naive approach for gradient approximation would first estimate the intractable densityq(x) ≈ q(x) (up to a constant), then approximate the exact gradient by DISPLAYFORM0, then differentiated through the KDE estimate to obtain the gradient estimator: DISPLAYFORM1 Interestingly for translation invariant kernels K(x, x) = K(x − x) the KDE gradient estimator can be rewritten asĜ KDE = −diag (K1) −1 ∇, K. Inspecting and comparing it with the Stein gradient estimator, one might notice that the Stein method uses the full kernel matrix as the pre-conditioner, while the KDE method computes an averaged "kernel similarity" for the denominator. We conjecture that this difference is key to the superior performance of the Stein gradient estimator when compared to the KDE gradient estimator (see later experiments). 
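To make the preceding comparison concrete, here is a compact sketch of both estimators for the RBF kernel: the non-parametric Stein estimator in its ridge-regularised V-statistic form, G_hat = -(K + eta*I)^{-1} <grad, K>, and the KDE plug-in estimator, which differs only in using diag(K 1)^{-1} as the pre-conditioner. The bandwidth, the regulariser eta and the Gaussian sanity check are illustrative choices, and this is not the authors' released implementation.

```python
import numpy as np

def rbf_kernel(X, sigma2):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma2))

def grad_kernel(X, K, sigma2):
    # <grad, K>_ij = sum_k dK(x^i, x^k) / dx^k_j for the RBF kernel
    return (K.sum(axis=1, keepdims=True) * X - K @ X) / sigma2

def stein_gradient_estimator(X, sigma2=1.0, eta=0.01):
    """Non-parametric Stein estimate of grad log q at the K samples in X (K x d)."""
    K = rbf_kernel(X, sigma2)
    return -np.linalg.solve(K + eta * np.eye(len(X)), grad_kernel(X, K, sigma2))

def kde_gradient_estimator(X, sigma2=1.0):
    """KDE plug-in estimate; only the pre-conditioner differs from the Stein one."""
    K = rbf_kernel(X, sigma2)
    return -grad_kernel(X, K, sigma2) / K.sum(axis=1, keepdims=True)

# Sanity check on a standard Gaussian, whose exact score is -x.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
for estimator in (stein_gradient_estimator, kde_gradient_estimator):
    print(estimator.__name__, np.mean((estimator(X, sigma2=2.0) - (-X)) ** 2))
```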
The KDE method only collects the similarity information between x k and other samples x j to form an estimate of ∇ x k log q(x k), whereas for the Stein gradient estimator, the kernel similarity between x i and x j for all i, j = k are also incorporated. Thus it is reasonable to conjecture that the Stein method can be more sample efficient, which also implies higher accuracy when the same number of samples are collected. The KDE gradient estimator performs indirect approximation of the gradient via density estimation, which can be inaccurate. An alternative approach directly approximates the gradient ∇ x log q(x) by minimising the expected 2 error w.r.t. the approximationĝ(DISPLAYFORM0 It has been shown in BID16 that this objective can be reformulated as DISPLAYFORM1 The key insight here is again the usage of integration by parts: after expanding the 2 loss objective, the cross term can be rewritten as DISPLAYFORM2, if assuming the boundary condition FORMULA10 forĝ (see FORMULA13). The optimum of FORMULA0 is referred as the score matching gradient estimator. The 2 objective is also called Fisher divergence BID18 which is a special case of KSD by selecting K(x, x) = δ x=x. Thus the Stein gradient estimator can be viewed as a generalisation of the score matching estimator. The comparison between the two estimators is more complicated. Certainly by the Cauchy-Schwarz inequality the Fisher divergence is stronger than KSD in terms of detecting convergence. However it is difficult to perform direct gradient estimation by minimising the Fisher divergence, since (i) the Dirac kernel is non-differentiable so that it is impossible to rewrite the divergence in a similar form to, and (ii) the transformation to involves computing ∇ xĝ (x). So one needs to propose a parametric approximation to G and then optimise the associated parameters accordingly, and indeed BID38 and BID45 derived a parametric solution by first approximating the log density up to a constant as logq(x):= K k=1 a k K(x, x k) + C, then minimising to obtain the coefficientsâ score k and constructing the gradient estimator aŝ DISPLAYFORM3 Therefore the usage of parametric estimation can potentially remove the advantage of using a stronger divergence. Conversely, the proposed Stein gradient estimator FORMULA18 is non-parametric in that it directly optimises over functions evaluated at locations {x k} K k=1. This brings in two key advantages over the score matching gradient estimator: (i) it removes the approximation error due to the use of restricted family of parametric approximations and thus can be potentially more accurate; (ii) it has a much simpler and ubiquitous form that applies to any kernel satisfying the boundary condition, whereas the score matching estimator requires tedious derivations for different kernels repeatedly (see appendix).In terms of computation speed, since in most of the cases the computation of the score matching gradient estimator also involves kernel matrix inversions, both estimators are of the same order of complexity, which is O(K 3 + K 2 d) (kernel matrix computation plus inversion). Low-rank approximations such as the Nyström method BID40 BID55 ) can enable speed-up, but this is not investigated in the paper. Again we note here that kernel-based gradient estimators can still be faster than e.g. the DAE estimator since no double-loop optimisation is required. Certainly it is possible to apply early-stopping for the inner-loop DAE fitting. 
However the ing gradient approximation might be very poor, which leads to unstable training and poorly fitted implicit distributions. Though providing potentially more accurate approximations, the non-parametric estimator has no predictive power as described so far. Crucially, many tasks in machine learning require predicting gradient functions at samples drawn from distributions other than q, for example, in MLE q(x) corresponds to the model distribution which is learned using samples from the data distribution instead. To address this issue, we derive two predictive estimators, one generalised from the nonparametric estimator and the other minimises KSD using parametric approximations. Predictions using the non-parametric estimator. Let us consider an unseen datum y. If y is sampled from q, then one can also apply the non-parametric estimator for gradient approximation, given the observed data X = {x 1, ..., DISPLAYFORM0 then the non-parametric Stein gradient estimator computed on X ∪ {y} is ĝ(y) DISPLAYFORM1 with ∇ y K(·, y) denoting a K × d matrix with rows ∇ y K(x k, y), and ∇ y K(y, y) only differentiates through the second argument. Then we demonstrate in the appendix that, by simple matrix calculations and assuming a translation invariant kernel, we have (with column vector 1 ∈ R K×1): DISPLAYFORM2 In practice one would store the computed gradientĜ Stein V, the kernel matrix inverse (K + ηI) −1 and η as the "parameters" of the predictive estimator. For a new observation y ∼ p in general, one can "pretend" y is a sample from q and apply the above estimator as well. The approximation quality depends on the similarity between q and p, and we conjecture here that this similarity measure, if can be described, is closely related to the KSD.Fitting a parametric estimator using KSD. The non-parametric predictive estimator could be computationally demanding. Setting aside the cost of fitting the "parameters", in prediction the time complexity for the non-parametric estimator is O(K 2 + Kd). Also storing the "parameters" needs O(Kd) memory forĜ Stein V. These costs make the non-parametric estimator undesirable for high-dimensional data, since in order to obtain accurate predictions it often requires K scaling with d as well. To address this, one can also minimise the KSD using parametric approximations, in a similar way as to derive the score matching estimator in Section 3.3.2. More precisely, we define a parametric approximation in a similar fashion as FORMULA0, and in the appendix we show that if the RBF kernel is used for both the KSD and the parametric approximation, then the linear coefficients FIG0 T can be calculated analytically:â DISPLAYFORM3 with X the "gram matrix" that has elements X ij = (x i) T x j. Then for an unseen observation y ∼ p the gradient approximation returns ∇ y log q(y) ≈ (â DISPLAYFORM4 T ∇ y K(·, y). In this case one only maintains the linear coefficientsâ Stein V and computes a linear combination in prediction, which takes O(K) memory and O(Kd) time and therefore is computationally cheaper than the non-parametric prediction model. We present some case studies that apply the gradient estimators to implicit models. Detailed settings (architecture, learning rate, etc.) are presented in the appendix. Implementation is released at https://github.com/YingzhenLi/SteinGrad. We first consider a simple synthetic example to demonstrate the accuracy of the proposed gradient estimator. 
More precisely we consider the kernel induced Hamiltonian flow (not an exact sampler) BID45 on a 2-dimensional banana-shaped object: x ∼ B(x; b = 0.03, v = 100) ⇔ x 1 ∼ N (x 1 ; 0, v), x 2 = + b(x 2 1 − v), ∼ N (; 0, 1). The approximate Hamiltonian flow is constructed using the same operator as in Hamiltonian Monte Carlo (HMC) BID31, except that the exact score function ∇ x log B(x) is replaced by the approximate gradients. We still use the exact target density to compute the rejection step as we mainly focus on testing the accuracy of the gradient estimators. We test both versions of the predictive Stein gradient estimator (see section 3.4) since we require the particles of parallel chains to be independent with each other. We fit the gradient estimators on K = 200 training datapoints from the target density. The bandwidth of the RBF kernel is computed by the median heuristic and scaled up by a scalar between. All three methods are simulated for T = 2, 000 iterations, share the same initial locations that are constructed by target distribution samples plus Gaussian noises of standard deviation 2.0, and the are averaged over 200 parallel chains. We visualise the samples and some MCMC statistics in FIG1. In general all the ing Hamiltonian flows are HMC-like, which give us the confidence that the gradient estimators extrapolate reasonably well at unseen locations. However all of these methods have trouble exploring the extremes, because at those locations there are very few or even no training data-points. Indeed we found it necessary to use large (but not too large) bandwidths, in order to both allow exploration of those extremes, and ensure that the corresponding test function is not too smooth. In terms of quantitative metrics, the acceptance rates are reasonably high for all the gradient estimators, and the KSD estimates (across chains) as a measure of sample quality are also close to that computed on HMC samples. The returned estimates of E[x 1] are close to zero which is the ground true value. We found that the non-parametric Stein gradient estimator is more sensitive to hyper-parameters of the dynamics, e.g. the stepsize of each HMC step. We believe a careful selection of the kernel (e.g. those with long tails) and a better search for the hyper-parameters (for both the kernel and the dynamics) can further improve the sample quality and the chain mixing time, but this is not investigated here. One of the recent focuses on meta-learning has been on learning optimisers for training deep neural networks, e.g. see BID1. Could analogous goals be achieved for approximate inference? In this section we attempt to learn an approximate posterior sampler for Bayesian neural networks (Bayesian NNs, BNNs) that generalises to unseen datasets and architectures. A more detailed introduction of Bayesian neural networks is included in the appendix, and in a nutshell, we consider a binary classification task: p(y = 1|x, θ) = sigmoid(NN θ (x)), p 0 (θ) = N (θ; 0, I). After observing the training data D = {(x n, y n)} N n=1, we first obtain the approximate posterior DISPLAYFORM0 p(y n |x n, θ), then approximate the predictive distribution for a new observation as p(y DISPLAYFORM1 . 
In this task we define an implicit approximate posterior distribution q φ (θ) as the following stochastic normalising flow θ t+1 = f (θ t, ∇ t, t): given the current location θ t and the mini-batch data {(x m, y m)} M m=1, the update for the next step is DISPLAYFORM2 The coordinates of the noise standard deviation σ φ (θ t, ∇ t) and the moving direction ∆ φ (θ t, ∇ t) are parametrised by a coordinate-wise neural network. If properly trained, this neural network will learn the best combination of the current location and gradient information, and produce approximate posterior samples efficiently on different probabilistic modelling tasks. Here we propose using the variational inference objective computed on the samples {θ k t} to learn the variational parameters φ. Since in this case the gradient of the log joint distribution can be computed analytically, we only approximate the gradient of the entropy term H[q] as in, with the exact score function replaced by the presented gradient estimators. We report the using the non-parametric Stein gradient estimator as we found it works better than the parametric version. The RBF kernel is applied for gradient estimation, with the hyper-parameters determined by a grid search on the bandwidth σ 2 ∈ {0.25, 1.0, 4.0, 10.0, median trick} and η ∈ {0.1, 0.5, 1.0, 2.0}.We briefly describe the test protocol. We take from the UCI repository BID24 six binary classification datasets (australian, breast, crabs, ionosphere, pima, sonar), train an approximate sampler on crabs with a small neural network that has one 20-unit hidden layer with ReLU activation, and generalise to the remaining datasets with a bigger network that has 50 hidden units and uses sigmoid activation. We use ionosphere as the validation set to tune ζ. The remaining 4 datasets are further split into 40% training subset for simulating samples from the approximate sampler, and 60% test subsets for evaluating the sampler's performance. (BID54) evaluated on the test datasets directly. In summary, SGLD returns best in KSD metric. The Stein approach performs equally well or a little better than SGLD in terms of test-LL and test error. The KDE method is slightly worse and is close to MAP, indicating that the KDE estimator does not provide a very informative gradient for the entropy term. Surprisingly the score matching estimator method produces considerably worse (except for breast dataset), even after carefully tuning the bandwidth and the regularisation parameter η. Future work should investigate the usage of advanced recurrent neural networks such as an LSTM BID14, which is expected to return better performance. GANs are notoriously difficult to train in practice. Besides the instability of gradient-based minimax optimisation which has been partially addressed by many recent proposals BID36 BID2 BID5, they also suffer from mode collapse. We propose adding an entropy regulariser to the GAN generator loss. Concretely, assume the generative model p θ (x) is implicitly defined by x = f θ (z), z ∼ p 0 (z), then the generator's loss is defined bỹ DISPLAYFORM0 where J gen (θ) is the original loss function for the generator from any GAN algorithm and α is a hyper-parameter. In practice (the gradient of) FORMULA0 is estimated using Monte Carlo. We empirically investigate the entropy regularisation idea on the very recently proposed boundary equilibrium GAN (BEGAN) BID5 ) method using (continuous) MNIST, and we refer to the appendix for the detailed mathematical set-up. 
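As a hedged sketch only, the entropy-regularised generator loss can be written as below, using the proxy described later in the appendix: the estimated scores are treated as constants and dotted with the generated samples, so that backpropagation through x = f_theta(z) carries the entropy gradient. Here `stein_scores` stands for the output of any of the gradient estimators above, and the additive sign assumes the regulariser enters the objective as -alpha * H[p_theta]; none of the names come from the original code.

```python
import torch

def entropy_regularised_loss(j_gen, x_gen, stein_scores, alpha=0.3):
    """j_gen: original generator loss; x_gen: (K, d) samples; stein_scores: (K, d)."""
    g_hat = stein_scores.detach()                    # no gradient through the estimator
    # proxy for -H[p]: (1/K) * sum_k g_hat(x_k)^T x_k
    neg_entropy_proxy = (g_hat * x_gen).sum(dim=1).mean()
    return j_gen + alpha * neg_entropy_proxy
```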
In this case the non-parametric V-statistic Stein gradient estimator is used. We use a convolutional generative network and a convolutional auto-encoder and select the hyper-parameters of BEGAN γ ∈ {0.3, 0.5, 0.7}, α ∈ and λ = 0.001. The Epanechnikov kernel K(x, x): DISPLAYFORM1 2 ) is used as the pixel values lie in a unit interval (see appendix for the expression of the score matching estimator), and to ensure the boundary condition we clip the pixel values into range [10 −8, 1 − 10 −8]. The generated images are visualised in FIG3. BEGAN without the entropy regularisation fails to generate diverse samples even when trained with learning rate decay. The other three images clearly demonstrate the benefit of the entropy regularisation technique, with the Stein approach obtaining the highest diversity without compromising visual quality. We further consider four metrics to assess the trained models quantitatively. First 500 samples are generated for each trained model, then we compute their nearest neighbours in the training set using l 1 distance, and obtain a probability vector p by averaging over these neighbour images' label vectors. In FIG4 we depict the entropy of p (top left), averaged l 1 distances to the nearest neighbour (top right), and the difference between the largest and smallest elements in p (bottom right). The error bars are obtained by 5 independent runs. These demonstrate that the Stein approach performs significantly better than the other two, in that it learns a better generative model not only faster but also in a more stable way. Interestingly the KDE approach achieves the lowest average l 1 distance to nearest neighbours, possibly because it tends to memorise training examples. We next train a fully connected network π(y|x) on MNIST that achieves 98.16% text accuracy, and compute on the generated images an empirical estimate of the inception score BID36 DISPLAYFORM2. High inception score indicates that the generate images tend to be both realistic looking and diverse, and again the Stein approach out-performs the others on this metric by a large margin. Concerning computation speed, all the three methods are of the same order: 10.20s/epoch for KDE, 10.85s/epoch for Score, and 10.30s/epoch for Stein. 1 This is because K < d (in the experiments K = 100 and d = 784) so that the complexity terms are dominated by kernel computations (O(K 2 d)) required by all the three methods. Also for a comparison, the original BEGAN method without entropy regularisation runs for 9.05s/epoch. Therefore the main computation cost is dominated by the optimisation of the discriminator/generator, and the proposed entropy regularisation can be applied to many GAN frameworks with little computational burden. We have presented the Stein gradient estimator as a novel generalisation to the score matching gradient estimator. With a focus on learning implicit models, we have empirically demonstrated the efficacy of the proposed estimator by showing how it opens the door to a range of novel learning tasks: approximating gradient-free MCMC, meta-learning for approximate inference, and unsupervised learning for image generation. Future work will expand the understanding of gradient estimators in both theoretical and practical aspects. Theoretical development will compare both the V-statistic and U-statistic Stein gradient estimators and formalise consistency proofs. 
Practical work will improve the sample efficiency of kernel estimators in high dimensions and develop fast yet accurate approximations to matrix inversion. It is also interesting to investigate applications of gradient approximation methods to training implicit generative models without the help of discriminators. Finally it remains an open question that how to generalise the Stein gradient estimator to non-kernel settings and discrete distributions. In this section we provide more discussions and analytical solutions for the score matching estimator. More specifically, we will derive the linear coefficient a = (a 1, ..., a K) for the case of the Epanechnikov kernel. A.1 SOME REMARKS ON SCORE MATCHING Remark. It has been shown in BID37; BID0 that de-noising autoencoders (DAEs) BID52, once trained, can be used to compute the score function approximately. Briefly speaking, a DAE learns to reconstruct a datum x from a corrupted input x = x+σ, ∼ N (0, I) by minimising the mean square error. Then the optimal DAE can be used to approximate the score function as ∇ x log p(x) ≈ 1 σ 2 (DAE * (x)−x). applied this idea to train an implicit model for image super-resolution, providing some promising in some metrics. However applying similar ideas to variational inference can be computationally expensive, because the estimation of ∇ z log q(z|x) is a sub-routine for VI which is repeatedly required. Therefore in the paper we deploy kernel machines that allow analytical solutions to the score matching estimator in order to avoid double loop optimisation. Remark. As a side note, score matching can also be used to learn the parameters of an unnormalised density. In this case the target distribution q would be the data distribution andq is often a Boltzmann distribution with intractable partition function. As a parameter estimation technique, score matching is also related to contrastive divergence BID13, pseudo likelihood estimation BID17, and DAEs BID51 BID0. Generalisations of score matching methods are also presented in e.g. BID27 BID28. The derivations for the RBF kernel case is referred to BID45, and for completeness we include the final solutions here. Assume the parametric approximation is defined as logq(x) = K k=1 a k K(x, x k) + C, where the RBF kernel uses bandwidth parameter σ. then the optimal solution of the coefficientsâ score = (Σ + ηI) DISPLAYFORM0 The Epanechnikov kernel is defined as DISPLAYFORM0, where the first and second order gradients w.r.t. DISPLAYFORM1 Thus the score matching objective with logq(DISPLAYFORM2 with the matrix elements DISPLAYFORM3 Define the "gram matrix" X ij = (x i) T x j, we write the matrix form of Σ as DISPLAYFORM4 Thus with an l 2 regulariser, the fitted coefficients arê DISPLAYFORM5 1. The V-statistic of KSD is the following: given samples x k ∼ q, k = 1,..., K and recall DISPLAYFORM0 The last term ∇ x j,x l K jl will be ignored as it does not depend on the approximationĝ. Using matrix notations defined in the main text, readers can verify that the V-statistic can be computed as DISPLAYFORM1 Using the cyclic invariance of matrix trace leads to the desired in the main text. The U-statistic of KSD removes terms indexed by j = l in, in which the matrix form is DISPLAYFORM2 with the jth row of ∇diag(K) defined as ∇ x j K(x j, x j). For most translation invariant kernels this extra term ∇diag(K) = 0, thus the optimal solution ofĜ by minimising KSD U-statistic iŝ DISPLAYFORM3 Let us consider an unseen datum y. 
If y is sampled from the q distribution, then one can also apply the non-parametric estimator for gradient approximations, given the observed data X = {x 1, ..., x K} ∼ q. Concretely, if writingĝ(y) ≈ ∇ y log q(y) ∈ R d×1 then the non-parametric Stein gradient estimator (using V-statistic) is DISPLAYFORM0 with ∇ y K(·, y) denoting a K ×d matrix with rows ∇ y K(x k, y), and ∇ y K(y, y) only differentiates through the second argument. Thus by simple matrix calculations, we have: DISPLAYFORM1 For translation invariant kernels, typically ∇ y K(y, y) = 0, and more conveniently, DISPLAYFORM2 Thus equation FORMULA2 can be further simplified to (with column vector 1 ∈ R K×1) ∇ y log q(y) DISPLAYFORM3 The solution for the U-statistic case can be derived accordingly which we omit here. We define a parametric approximation in a similar way as for the score matching estimator: DISPLAYFORM0 Now we show the optimal solution of a = (a 1, ..., a K) T by minimising. To simplify derivations we assume the approximation and KSD use the same kernel. First note that the gradient of the RBF kernel is DISPLAYFORM1 Substituting FORMULA5 into FORMULA2: DISPLAYFORM2 DISPLAYFORM3 We first consider summing the j, l indices in ♣. Recall the "gram matrix" X ij = (x i) T x j, the inner product term in ♣ can be expressed as X kk + X jl − X kl − X jk. Thus the summation over j, l can be re-written as DISPLAYFORM4 And thus ♣ = 1 σ 4 a T Λa. Similarly the summation over j, l in ♠ can be simplified into Similarly we can derive the solution for KSD U-statistic minimisation. The U statistic can also be represented in quadratic form S 2 U (q,q) = C +♣ + 2♠, with♠ = ♠ and DISPLAYFORM5 DISPLAYFORM6 Summing over the j indices for the second term, we have DISPLAYFORM7 Working through the analogous derivations reveals thatâ DISPLAYFORM8 We describe the detailed experimental set-up in this section. All experiments use Adam optimiser BID20 with standard parameter settings. We start by reviewing Bayesian neural networks with binary classification as a running example. In this task, a normal deep neural network is constructed to predict y = f θ (x), and the neural network is parameterised by a set of weights (and bias vectors which we omit here for simplicity DISPLAYFORM0 . In the Bayesian framework these network weights are treated as random variables, and a prior distribution, e.g. Gaussian, is also attached to them: p 0 (θ) = N (θ; 0, I). The likelihood function of θ is then defined as DISPLAYFORM1 and p(y = 0|x, θ) = 1 − p(y = 1|x, θ) accordingly. One can show that the usage of Bernoulli distribution here corresponds to applying cross entropy loss for training. After framing the deep neural network as a probabilistic model, a Bayesian approach would find the posterior of the network weights p(θ|D) and use the uncertainty information encoded in it for future predictions. By Bayes' rule, the exact posterior is DISPLAYFORM2 p(y n |x n, θ), and the predictive distribution for a new input x * is DISPLAYFORM3 Again the exact posterior is intractable, and approximate inference would fit an approximate posterior distribution q φ (θ) parameterised by the variational parameters φ to the exact posterior, and then use it to compute the (approximate) predictive distribution.p(y * = 1|x *, D) ≈ p(y * = 1|x *, θ)q φ (θ)dθ. 
Since in practice analytical integration for neural network weights is also intractable, the predictive distribution is further approximated by Monte Carlo: DISPLAYFORM4 Now it remains to fit the approximate posterior q φ (θ), and in the experiment the approximate posterior is implicitly constructed by a stochastic flow. For the training task, we use a one hidden layer neural network with 20 hidden units to compute the noise variance and the moving direction of the next update. In a nutshell it takes the ith coordinate of the current position and the gradient θ t (i), ∇ t (i) as the inputs, and output the corresponding coordinate of the moving direction ∆ φ (θ t, ∇ t)(i) and the noise variance σ φ (θ t, ∇ t)(i). Softplus non-linearity is used for the hidden layer and to compute the noise variance we apply ReLU activation to ensure non-negativity. The step-size ζ is selected as 1e-5 which is tuned on the KDE approach. For SGLD step-size 1e-5 also returns overall good . The training process is the following. We simulate the approximate sampler for 10 transitions and sum over the variational lower-bounds computed on the samples of every step. Concretely, the maximisation objective is DISPLAYFORM5 where T = 100 and q t (θ) is implicitly defined by the marginal distribution of θ t that is dependent on φ. In practice the variational lower-bound L VI (q t) is further approximated by Monte Carlo and data sub-sampling: DISPLAYFORM6 strategies are considered to approximate the contribution of the entropy term. Given K samples x 1,..., x k ∼ p θ (x), The first proposal considers a plug-in estimate of the entropy term with a KDE estimate of p θ (x), which is consistent with the KDE estimator but not necessary with the other two (as they use kernels when representing log p θ (x) or ∇ x log p θ (x)). The second one uses a proxy of the entropy loss −H[p] ≈ 1 K K k=1 ∇ x k log p θ (x k) T x k with generated samples {x k} and ∇ x k log p θ (x k) approximated by the gradient estimator in use. In the experiment, we construct a deconvolutional net for the generator and a convolutional autoencoder for the discriminator. The convolutional encoder consists of 3 convolutional layers with filter width 3, stride 2, and number of feature maps. These convolutional layers are followed by two fully connected layers with units. The decoder and the generative net have a symmetric architecture but with stride convolutions replaced by deconvolutions. ReLU activation function is used for all layers except the last layer of the generator, which uses sigmoid non-linearity. The reconstruction loss in use is the squared 2 norm || · || 2 2. The randomness p 0 (z) is selected as uniform distribution in [-1, 1] as suggested in the original paper BID5. The minibatch size is set to K = 100. Learning rate is initialised at 0.0002 and decayed by 0.9 every 10 epochs, which is tuned on the KDE model. The selected γ and α values are: for KDE estimator approach γ = 0.3, αγ = 0.05, for score matching estimator approach γ = 0.3, αγ = 0.1, and for Stein approach γ = 0.5 and αγ = 0.3. The presented use the KDE plug-in estimator for the entropy estimates (used to tune β) for the KDE and score matching approaches. Initial experiments found that for the Stein approach, using the KDE entropy estimator works slightly worse than the proxy loss, thus we report using the proxy loss. An advantage of using the proxy loss is that it directly relates to the approximate gradient. 
Furthermore we empirically observe that the performance of the Stein approach is much more robust to the selection of γ and α when compared to the other two methods.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJi9WOeRb
We introduced a novel gradient estimator using Stein's method, and compared with other methods on learning implicit models for approximate inference and image generation.
Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it with learning frameworks. This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures. Noise models that are distributed with different density functions are given common magnitude levels via Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison. The basic are conforming with the most of the common notions in machine learning, and also introduces some novel heuristics and recommendations on noise injection. The new approaches will provide better understanding on optimal learning procedures for image classification. Convolutional Neural Networks (CNNs) find an ever-growing field of application throughout image and sound processing tasks, since the success of AlexNet in the 2012 ImageNet competition. Yet, training these networks still keeps the need of an "artistic" touch: even the most cited state-of-the-art studies employ wildly varying set of solvers, augmentation and regularization techniques . In this study, one of the crucial data augmentation techniques, noise injection, will be thoroughly analysed to determine the correct way of application on image processing tasks. Adding noise to the training data is not a procedure that is unique to the training of neural architectures: additive and multiplicative noise has long been used in signal processing for regression-based methods, in order to create more robust models . The technique is also one of the oldest data augmentation methods employed in the training of feed forward networks, as analysed by , yet it is also pointed out in the same study that while using additive Gaussian noise is helpful, the magnitude of the noise cannot be selected blindly, as a badly-chosen variance may actually harm the performance of the ing network (see and for more examples). The main reasons for noise injection to the training data can be listed as such in a non-excluding manner: first of all, injection of any noise type makes the model more robust against the occurrence of that particular noise over the input data (see and for further reference), such as the cases of Gaussian additive noise in photographs, and Gaussian-Poisson noise on low-light charge coupled devices . Furthermore, it is shown that the neural networks optimize on the noise magnitude they are trained on . Therefore, it is important to choose the correct type and level of the noise to augment the data during training. Another reason for noise addition is to encourage the model to learn the various aspects of each class by occluding random features. Generally, stochastic regularization techniques embedded inside the neural network architectures are used for this purpose, such as Dropout layers, yet it is also possible to augment the input data for such purposes as in the example of "cutout" regularization proposed by. The improvement of the generalization capacity of a network is highly correlated with its performance, which can be scored by the accuracy over a predetermined test set. 
There have been similar studies conducted on the topic, with the example of one study which focuses on the effects of noise injection on the training of deep networks and the possible denoising methods, yet it fails to provide a proper methodology to determine the level of noise to be injected into the training data, and uses PSNR as the comparison metric between different noise types, which is highly impractical (see Section 3). To resolve these issues, this study focuses on ways to determine which noise types to combine the training data with, and at which levels, in addition to the validity of active noise injection techniques, while experimenting on a larger set of noise models. In the structure of this work, the effect of injecting different types of noise into images for varying CNN architectures is assessed based on their performance and noise robustness. Their interaction and relationship with each other are analyzed over (also noise-injected) validation sets. Finally, as a follow-up study, proper ways of adding or applying noise to a CNN for image classification tasks are discussed. Noise can be, somewhat broadly, defined as an unwanted component of the image. It can be sourced from the environment in which the image is taken, the device utilized to take the image, or the medium of communication that is used to convey the image from the source to the receiver. According to its properties and nature, noise and image can be analytically decomposed as additive or multiplicative, but some noise types cannot be described by either of these classes. Let f(x) denote an image signal. This signal can be decomposed into two components in an additive manner as f(x) = g(x) + n(x), where g(x) denotes the desired component of the image and n(x) stands for the unwanted noise component. The most commonly encountered variant of this noise class is Gaussian noise, whose multivariate probability density function can be written as p(x) = (2π)^(-n/2) |Σ|^(-1/2) exp(-(x - m)^T Σ^(-1) (x - m) / 2), where m and Σ denote the n-dimensional mean vector and the symmetric covariance matrix with rank n, respectively. In images, the mean vector m is generally zero, therefore the distribution is centered and the magnitude is controlled by the variance. This study also follows these assumptions. Again, let f(x) denote an image signal. This signal can also be decomposed into respective desired and noise components as f(x) = g(x)(1 + n(x)). The noise component in this model is called multiplicative noise. The most common variant in this case is called speckle noise, which may have different density functions; in this study a Gaussian is assumed. Similar to the additive noise, the mean is assumed to be 0 and the magnitude refers to the variance. Speckle noise can be encountered in coherent-light imaging, such as in the cases of SAR images and images with laser-based illumination, but it may also be observed in other digital images. There exist many other noise types that cannot be modeled by additive or multiplicative decompositions. The most common of these types are listed below; their effects on the performance of CNNs are also analysed in this study. Salt and pepper (S&P) noise. This noise manifests itself as a basic image degradation, for which only a few pixels in an image are noisy, but they are extremely noisy in the sense that the affected pixels are either completely black or completely white. The magnitude of this noise is determined by the probability of a pixel becoming completely black (i.e. pepper) or completely white (i.e. salt) rather than staying unchanged.
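The three noise models introduced so far can be sketched as simple corruption functions for images scaled to [0, 1]; the clipping back to the valid range is an illustrative choice, and the S&P function uses the equal salt/pepper split made explicit in the next paragraph of the text.

```python
import numpy as np

def add_gaussian(img, var, rng):
    """Additive zero-mean Gaussian noise with variance `var`."""
    return np.clip(img + rng.normal(0.0, np.sqrt(var), img.shape), 0.0, 1.0)

def add_speckle(img, var, rng):
    """Multiplicative (speckle) noise: f = g * (1 + n), n ~ N(0, var)."""
    return np.clip(img * (1.0 + rng.normal(0.0, np.sqrt(var), img.shape)), 0.0, 1.0)

def add_salt_and_pepper(img, prob, rng):
    """`prob` is the total probability of a pixel being corrupted."""
    out = img.copy()
    u = rng.random(img.shape)
    out[u < prob / 2] = 0.0                          # pepper
    out[(u >= prob / 2) & (u < prob)] = 1.0          # salt
    return out
```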
The probabilities for the pepper and salt cases are assumed to be equal, and the total probability of degradation of a pixel is referred to as the magnitude of the noise. Poisson noise. Also referred to as photon-counting noise or shot noise, this noise type has a particular probability mass function, p(k; λ) = λ^k e^(-λ) / k!, where λ stands for both the variance and the mean of the distribution. As Poisson noise is signal-dependent, it does not have a direct magnitude parameter similar to other noise types; therefore a magnitude factor c is used that divides the intensity values of all pixels, from which the distribution is sampled, and the result is returned to the original range by multiplying by the same factor. Occlusion noise. Although it is not generally referred to as a noise type, occlusion of important features can happen for a number of reasons in an image: the image may be cropped or damaged, or a particular object in it may be hidden by an obstacle such as a tree. This noise is realized with zero-intensity squares appearing on the image, and the magnitude is determined according to the size of the square, as the shape of the occluding object does not have a large impact on the final performance of the model. As listed in this section, five different types of noise and their various combinations are added or applied to the images, with varying magnitudes. Robustness against each of these noise types is also assessed. In general, Peak Signal-to-Noise Ratio (PSNR), and similarly Mean Squared Error (MSE), are the most commonly used quality metrics in the image processing field. For an 8-bit two-dimensional MxN image f̂(n_1, n_2) and its noise-free counterpart f(n_1, n_2), the MSE is defined as MSE = (1 / (MN)) Σ_{n_1=0}^{M-1} Σ_{n_2=0}^{N-1} (f(n_1, n_2) - f̂(n_1, n_2))^2. From the above definition, PSNR (in dB) can be derived as PSNR = 10 log_10 (255^2 / MSE). There are several limitations of using PSNR as the image quality metric of a data set: it is shown that PSNR loses its validity as a quality metric when the content and/or codec of the images are different, as in that case the correlation between subjective quality and PSNR is highly reduced. Also, even though the sensitivity of PSNR to Gaussian noise is very high, the metric is unable to present similar performance for different types of perturbation (Horé &). There exists another widely accepted image quality metric called Structural Similarity (SSIM), which resolves or alleviates some of the above-listed problems. The Structural Similarity between two non-negative image signals x and y, whose means, standard deviations and covariance are denoted by µ_x, µ_y, σ_x, σ_y and σ_xy respectively, can be expressed as SSIM(x, y) = ((2 µ_x µ_y + C_1)(2 σ_xy + C_2)) / ((µ_x^2 + µ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)), where C_1 and C_2 are constants to avoid instability. This metric combines luminance, contrast and structure information in order to provide an assessment of similarity in the range from 0 to 1, where 1 stands for the highest similarity. In image classification tasks, the objective is mostly to classify the depicted objects or beings according to human perception. Therefore, it is crucial for the metric used to be consistent with human opinions: it is shown in several studies that SSIM provides a quality metric closer to the average opinion of the public than PSNR. Furthermore, when sampled from the same noise distributions over identical images of a data set, the distribution of PSNR values of the noisy images has significantly higher kurtosis than that of their SSIM counterparts for every noise type except S&P.
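The remaining two noise models and the two quality metrics can be sketched in the same style, reusing scikit-image's reference PSNR/SSIM implementations; the literal λ = intensity / c scaling for Poisson noise and the colour-image assumption (channel_axis=-1) are interpretations of the text rather than the authors' exact code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_poisson(img, c, rng):
    # lambda = intensity / c, sample, then rescale by the same magnitude factor c.
    return np.clip(rng.poisson(img / c) * c, 0.0, 1.0)

def add_occlusion(img, rel_side, rng):
    """Place a zero-intensity square whose side is `rel_side` of the shorter edge."""
    h, w = img.shape[:2]
    side = int(rel_side * min(h, w))
    y = rng.integers(0, h - side + 1)
    x = rng.integers(0, w - side + 1)
    out = img.copy()
    out[y:y + side, x:x + side] = 0.0
    return out

def quality(clean, noisy):
    """Return (PSNR, SSIM) for images in [0, 1] with a trailing channel axis."""
    return (peak_signal_noise_ratio(clean, noisy, data_range=1.0),
            structural_similarity(clean, noisy, data_range=1.0, channel_axis=-1))
```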
This kurtosis behavior is demonstrated over a subset of the ImageNet dataset called Imagewoof (accessible via github.com/fastai/imagenette), with 10 classes and 12454 images in total on the training set. Each type of noise listed in Section 2 is applied to each image, sampling from the same distribution, and the quality metrics are recorded. The utilized noise magnitudes are 0.2 variance for Gaussian noise, 0.2 variance for speckle noise, 0.05 total probability for S&P noise, 0.2 scaling factor for Poisson noise, and 0.3 relative side length for occlusion noise, which are all chosen to provide sensible values from the metrics. The distribution of the metrics can be seen in Figure 1 and Table 1. This is interpreted as PSNR having a propensity to produce more outliers than SSIM for the same levels of noise. For the reasons listed above, SSIM will be used as the primary metric throughout this study. Effects of injecting different noise types into the training data are evaluated for different magnitudes and types of noise, as listed in Section 2. The chosen datasets are two different subsets of the ImageNet dataset, namely Imagenette and Imagewoof, each consisting of 10 different classes with 12894 and 12454 training samples respectively and 500 test samples (both can be accessed via github.com/fastai/imagenette). The former dataset contains ten easily classified classes of the original set, while the latter task is the classification of ten different dog breeds and requires the network to successfully learn the particularly small features of the classes. The image data range is henceforth from 0 to 1. In order to select the magnitudes of each noise component, a sweep of the mean SSIM (MSSIM) over an array of noise magnitudes for each noise type is conducted over 200 images of the Imagewoof dataset. The resulting graph can be seen in Figure 2. Very similar results are also observed when the same procedure is conducted on the Imagenette dataset. According to the shapes of the curves, a quadratic polynomial is fitted to the relative side length of occlusion noise, and logarithmic fits are used for the rest. The look-up table for the fittings can be seen in Table 2; these are also the degradations applied in the experiments. An exemplary application of the noise can be seen in Figure 3. As the training model, an 18-layer deep residual network (ResNet18V2) as proposed by is chosen, because it is a well-known architecture and also sufficiently deep: demonstrate that the performance of deep networks is more sensitive to data augmentation than that of their shallower counterparts. The residual connections and layering structure of ResNet18V2 exhibit architectural properties similar to the most often utilized CNNs in the field. The Adam solver with learning rate 1e-4 is preferred for training. Models are trained for 20 epochs. No dropout or weight decay is used for regularization purposes, and Batch Normalization layers are used in accordance with. The chosen CNN architecture is trained with noise injected into the training data for all noise models described in Section 2, for magnitudes corresponding to the respective MSSIM values from Table 2. A total of 52 networks are trained (25 on noisy data and 1 on original data for each dataset). The categorical accuracy of each trained network on the validation set can be seen in Figures 4 and 5, and the accuracies of the CNNs, depending on the noise they are trained with, on noisy test sets can be observed in Figure 8. The latter can be considered as the robustness test of the trained networks.
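The magnitude calibration described above (sweep the noise magnitude, record mean SSIM over a sample of training images, fit a curve, then invert it so a target MSSIM maps back to a magnitude) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the logarithmic fit form applies to the homogeneous noise types (a quadratic was used for occlusion), the sample size and helper names are assumptions, and the noise-injection and SSIM helpers from the previous sketch are reused.

```python
# Sketch (assumed details): calibrate a noise magnitude so that injecting the
# noise yields a desired mean SSIM (MSSIM) over a sample of training images.
import numpy as np
from skimage.metrics import structural_similarity

def mssim(images, noise_fn, magnitude):
    """Mean SSIM between clean images and their noise-injected versions."""
    scores = [structural_similarity(img, noise_fn(img, magnitude),
                                    data_range=1.0, channel_axis=-1)
              for img in images]
    return float(np.mean(scores))

def fit_magnitude_curve(images, noise_fn, magnitudes):
    """Sweep magnitudes, record MSSIM, and fit MSSIM = a * log(m) + b."""
    ys = np.array([mssim(images, noise_fn, m) for m in magnitudes])
    a, b = np.polyfit(np.log(magnitudes), ys, deg=1)
    return a, b

def magnitude_for_target(target_mssim, a, b):
    """Invert the fitted curve to find the magnitude giving the target MSSIM."""
    return float(np.exp((target_mssim - b) / a))

# Example usage (assuming `sample_images` holds ~200 training images in [0, 1]
# and `gaussian_noise` is the injection helper from the previous sketch):
# a, b = fit_magnitude_curve(sample_images, gaussian_noise, np.linspace(0.01, 0.5, 20))
# var_for_08 = magnitude_for_target(0.8, a, b)  # variance expected to give MSSIM ~ 0.8
```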
One of the most important features of these heatmaps, the robustness of the models against the noise injected into their input, is also plotted for each dataset individually in Figures 6 and 7. There are several confirmations to acquire from this set of results for the literature: first of all, there exists a trade-off between noise robustness and clean set accuracy. Yet contrary to the common notion, we believe that the data presents a highly valid optimum for this exchange in our study. As can be seen from Figures 6 and 7, in order to create a model that is robust against a particular kind of noise while maintaining its performance, one must apply a level of degradation that results in 0.8 MSSIM over the training data. We believe that as long as the noise or perturbation is somewhat homogeneously distributed, this rule of thumb will hold for all image classification tasks. However, the same cannot be said for non-homogeneously distributed noise models, as SSIM (and also PSNR, as demonstrated in Section 3) fails to capture the level of degradation appropriately for such a verdict (see Occlusion in Figures 6 and 7). A second confirmation of the current literature is the fact that the neural networks optimize for the noise level they are trained with, as seen again in Figures 6 and 7, and also in the diagonals of Figure 8. Yet, the level of this optimization is quite small after 0.5 MSSIM, featuring similar robustness for each trained model. Therefore, it is not particularly necessary to determine the noise level of a dataset, or to sample the noise from a predetermined interval, as long as the MSSIM does not drop below 0.5, in which case noise removal techniques need to be considered for better models. As noted above, the occlusion noise type will not be thoroughly analyzed in this section because the quality metric has failed to provide sufficient comparative data for this discussion. Yet, the performance data and the lack of robustness the other models exhibit towards this particular noise type show that "cutout" regularization as presented by is a crucial part of data augmentation in addition to any other perturbation or noise injection technique. A way to further extend the contribution of this method would be to alternate the intensity level of the patches from 0 to 255 for 8-bit images, which can be a topic of further research. For the rest of the noise types, Gaussian, speckle and Poisson noises are observed to increase the performance of the model while boosting the robustness, and their effects exhibit the possibility of interchangeable usage. For image classification tasks involving RGB images of daily objects, injection of only one of these noise types at the above-mentioned level is believed to be sufficient, as repetition of the clusters can be observed in Figure 8. Among these three, Gaussian noise is recommended considering the model performance results. S&P noise contamination, on the other hand, may not be resolved by injection of the former noise types, as the other models are not sufficiently robust against it. Therefore, at this point one of two methodologies is suggested: either S&P noise can be removed by simple filtering techniques, or S&P noise can be applied in an alternating manner with Gaussian noise during data augmentation. The former approach is recommended for the simplicity of the training procedure.
The constant behaviour of the models towards occlusion noise in Figures 6, 7 and 8 unfortunately does not have a satisfactory explanation, despite several diagnostics of the training procedure. A longer training procedure, which was not feasible in our experiments because of the model count, may resolve these undesirable results. In this study, an extensive analysis of noise injection into training data has been conducted. The results confirmed some of the notions in the literature, while also providing new rules of thumb for CNN training. As further targets of research, the extension of "cutout" regularization as described in the above paragraphs, and the distribution behavior of the SSIM and PSNR metrics in Figure 2 with regard to the work of Horé &, may be pursued.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkeKtyHYPS
Ideal methodology to inject noise to input data during CNN training
State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains. Different from these methods which do not model the feature distributions explicitly, in this paper, we explore explicit feature distribution modeling for UDA. In particular, we propose Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions. With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains. In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations. The first one minimizes the distances between the corresponding Gaussian component means of the source and target data. The second one minimizes the pseudo negative log likelihood of generating the target features from source feature distribution. To learn both discriminative and domain invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together. Extensive experiments are conducted over two UDA tasks. Our approach yields a large margin in the Digits Image transfer task over state-of-the-art approaches. More remarkably, DMPN obtains a mean accuracy of 81.4% on VisDA 2017 dataset. The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t hyper-parameter changes. Recent advances in deep learning have significantly improved state-of-the-art performance for a wide range of applications. However, the improvement comes with the requirement of a massive amount of labeled data for each task domain to supervise the deep model. Since manual labeling is expensive and time-consuming, it is therefore desirable to leverage or reuse rich labeled data from a related domain. This process is called domain adaptation, which transfers knowledge from a label rich source domain to a label scarce target domain . Domain adaptation is an important research problem with diverse applications in machine learning, computer vision (; ;) and natural language processing . Traditional methods try to solve this problem via learning domain invariant features by minimizing certain distance metric measuring the domain discrepancy, for example Maximum Mean Discrepancy (MMD) (; ; and correlation distance . Then labeled source data is used to learn a model for the target domain. Recent studies have shown that deep neural networks can learn more transferable features for domain adaptation . Consequently, adaptation layers have been embedded in the pipeline of deep feature learning to learn concurrently from the source domain supervision and some specially designed domain discrepancy losses;; ). However, none of these methods explicitly model the feature distributions of the source and target data to measure the discrepancy. Inspired from the recent works by and , which have shown that modeling feature distribution of a training set improves classification performance, we explore explicit distribution modeling for UDA. We model the feature distributions as Gaussin mixture distributions, which facilitates us to measure the discrepancy between the source and target domains. Our proposed method, i.e., DMPN, works as follows. We train a deep network over the source domain data to generate features following a Gaussian mixture distribution. The network is then used to assign pseudo labels to the unlabeled target data. 
To learn both discriminative and domain invariant features, we fine-tune the network to minimize the cross-entropy loss on the labeled source data and domain discrepancy losses. Specifically, we propose two new domain discrepancy losses by exploiting the explicit Gaussian mixture distributions of the deep features. The first one minimizes the distances between the corresponding Gaussian component means between the source and target data. We call it Gaussian Component Mean Matching (GCMM). The second one minimizes the negative log likelihood of generating the target features from the source feature distribution. We call it Pseudo Distribution Matching (PDM). Extensive experiments on Digits Image transfer tasks and synthetic-to-real image transfer task demonstrate our approach can provide superior than state-of-the-art approaches. We present our proposed method in Section 3, extensive experiment and analysis in Section 4 and in Section 5. Domain adaptation is an important research problem with diverse applications in machine learning, computer vision (; ;) and natural language processing . According to the survey , traditional domain adaptation methods can be organized into two categories: feature matching and instance re-weighting. Feature matching aims to reduce the domain discrepancy via learning domain invariant features by minimizing certain distance metric, for example Maximum Mean Discrepancy (MMD) (; ;, correlation distance , Central Moment Discrepancy (CMD) and et al. Then labeled source data is used to learn a model for the target domain. Instance reweighting aims to reduce the domain discrepancy via re-weighting the source instances according to their importance weights with respect to the target distribution . In the era of deep learning, studies have shown that deep neural networks can learn more transferable features for domain adaptation , therefore, domain adaptation layers have been embedded in the pipeline of deep feature learning to learn concurrently from the source domain supervision and some specially designed domain discrepancy losses;; ). Some recent works , , add a domain discriminator into the deep feature learning pipeline, where a feature generator and a domain discriminator are learned adversarially to generate domain invariant features. All these works can be categorized as the feature matching type of domain adaptation method. However, none of them models the feature distributions of the source and target data for distribution matching. In this paper, we show that explicitly modeling the feature distributions enables us to measure the domain discrepancy more easily and helps us to propose new domain discrepancy losses. Prototypical network (PN) was first proposed in for few shot learning, which shows that learning PN is equivalent to performing mixture density estimation on the deep features with an exponential density. Recently, in's and's works, it has been shown that modeling the deep feature distribution of a training set as Gaussian mixture distribution improves classification performance. As Gaussian density belongs to one type of exponential density, the models proposed in's and's works are variants of PN. However, the two works study the classification problem in a single domain, which is different from our work on the problem of domain adaptation. , prototypical networks are first applied for domain adaptation. 
Multi-granular domain discrepancy minimization at both class-level and sample-level are employed in to reduce the domain difference and achieves state-of-the-art in various domain adaptation tasks. However, in's work, the deep feature distribution is modeled implicitly when they apply PN for UDA, in our work, we explicitly model the deep feature distribution as Gaussian mixture distribution for UDA. In Unsupervised Domain Adaptation (UDA), we are given N s labeled samples in the source domain and N t unlabeled samples in the target domain. The source and target samples share the same set of labels and are sampled from probability distributions P s and P t respectively with P s = P t. The goal is to transfer knowledge learnt from the labeled source domain to the unlabeled target domain. We model the deep embedded features of the source data as a Gaussian mixture distribution where the Gaussian component means act as the prototypes for each class. Let {µ be the Gaussian component means and covariance matrices of the Gaussian mixture distribution, then the posterior distribution of a class y given the embedded feature f can be expressed as in Eqn. 1 where f = F (x, θ), F: X → R d is the embedding function with parameter θ and d is the dimension of the embedded feature, p(c) is the prior probability of class c and C is the total number of classes. With labeled source data, a classification loss L cls can be computed as the cross-entropy between the posterior probability distribution and the one-hot class label as shown in Eq. 2 and following , a log likelihood regularization term L lkd can be defined as in Eq. 3, where f The final loss function L GM for training a network with Gaussian mixture feature distribution is defined as, where ϕ is a non-negative weighting coefficient. Notice, the distribution parameters {µ are learned automatically from data. To match the deep feature distributions between the source and target data, we propose to match the corresponding Gaussian component means between them. We utilize the network learnt on the labeled source data to assign pseudo labels to target samples. As such, we denote the target samples with pseudo labels asD t = {( . We empirically estimate the Gaussian component means {µ where D s c andD t c denote the sets of source/target samples from class c, f where || · || is the L 2 norm between two vectors. Intuitively, if the source features and target features follow the same Gaussian mixture distribution, then the Gaussian component means of the same class from the two domains will be the same. Thus minimizing L GCM M helps to reduce the domain discrepancy. Better illustrated in Fig. 1 1 {µ, as the latter are learned directly from data and are used to assign pseudo labels for target data. Figure 1: Illustration of the overall training objective. This figure displays the model after we finish pre-training it with the labeled source data on L GM . Different colors represent different classes. Dotted ellipses represent Gaussian mixture distribution of the source embedded features. The amorphous shapes represent pseudo labeled target feature distribution before we optimize the network further on the overall objective function in Eqn. 7. GCMM loss tries to bring the corresponding Gaussian component means between the source data and pseudo labeled target data closer, represented by the black two-way arrows. Minimizing GCMM brings the feature distributions of the source and target domains closer, thus reducing the domain discrepancy. 
PDM loss tries to match the pseudo target feature distribution to the source Gaussian mixture distribution, represented by the colored one-way arrow. Minimizing PDM increases the likelihood of target features on the source feature distribution, thus reducing the domain discrepancy. Best viewed in color. On the pseudo labeled target dataD t, we further propose to match the embedded target feature distribution with the source Gaussian mixture feature distribution via minimizing the following pseudo negative log likelihood loss, which we denoted as L P DM : Minimizing L P DM 2 maximizes the likelihood of the pseudo labeled target features on the source Gaussian mixture feature distribution. To achieve that, the network is enforced to learn an embedding function which produces similar embedded feature distributions between the source data and target data. Otherwise, this term will induce a large loss value and dominate the overall objective function to be minimized. Therefore, minimizing L P DM helps to reduce the domain discrepancy. As we are using pseudo labeled target data to calculate this domain discrepancy loss function, we term it as Pseudo Distribution Matching (PDM) loss. Furthermore, while minimizing GCMM loss brings the source and target feature distribution closer, minimizing PDM loss shapes the target feature distribution to be similar as the source Gaussian mixture distribution. Thus, these two loss functions complement each other to reduce the distribution discrepancy. Better illustrated in Fig. 1. The overall training objective of DMPN can be written as follows: where minimizing the first two terms of the objective function helps the model to learn discriminative features with the supervision from the labeled source data, and minimizing the last two terms helps to match the embedded feature distributions between the source and target domains so that the learned classifier from the labeled source data can be directly applied in the target domain. The whole model is illustrated in Fig. 1. Training Procedure. To train DMPN, we first pre-train a network with labeled source data on L GM. Then mini-batch gradient descent algorithm is adopted for further optimization of the network on 2 Notice, gradient from LP DM does not back-propagate to update {µ . We learn source distribution parameters only from labeled source data. Eqn. 7, where half of the samples in the mini-batch are from labeled source data D s and the other half are from unlabeled target data D t . To obtain pseudo labels for the unlabeled target data, we use the learned source distribution parameters to calculate the class probabilities for each target data point as in Eqn. 1 and assign the class with the largest probability as the pseudo label. To remedy the error of the self-labeling, we took similar approach as in and to filter unlabeled target data points whose maximum predicted class probability is smaller than some threshold. Apart from that, we also propose to weight the contribution of each sample to the discrepancy loss based on the predicted probability. In this way, less confidently predicted target samples will make smaller contributions in the training process. Inference. For inference, we first apply the learned embedding function F on the target data, then we will use the learned distribution parameters to calculate the class probabilities for each target data point as in Eqn. 1. Finally, we output the class with the largest probability for each target data point as our prediction. 
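The three quantities at the core of DMPN (the Gaussian-mixture posterior of Eqn. 1, the GCMM loss and the PDM loss) can be summarised in a short sketch. This is a simplified reading of the equations rather than the released implementation: identity covariances and a uniform class prior are assumed so the posterior reduces to a softmax over negative squared distances, tensor names are illustrative, and the detach on the source component means follows the note above that gradients from the PDM loss do not update the source distribution parameters.

```python
# Sketch (simplified assumptions): identity covariance per Gaussian component
# and a uniform class prior.
import torch
import torch.nn.functional as F

def gm_posterior(feats, means):
    """p(c | f) for features [B, d] under Gaussian components with means [C, d]."""
    sq_dist = torch.cdist(feats, means).pow(2)          # [B, C]
    return F.softmax(-0.5 * sq_dist, dim=1)

def gcmm_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Match corresponding per-class (empirical) component means of source and target."""
    loss, used = 0.0, 0
    for c in range(num_classes):
        s_mask, t_mask = src_labels == c, tgt_pseudo == c
        if s_mask.any() and t_mask.any():
            mu_s = src_feats[s_mask].mean(dim=0)
            mu_t = tgt_feats[t_mask].mean(dim=0)
            loss = loss + (mu_s - mu_t).norm()
            used += 1
    return loss / max(used, 1)

def pdm_loss(tgt_feats, tgt_pseudo, src_means, weights=None):
    """Pseudo negative log likelihood of target features under the source mixture.
    src_means is detached so the source distribution parameters are not updated."""
    mu = src_means.detach()[tgt_pseudo]                  # [B, d]
    nll = 0.5 * (tgt_feats - mu).pow(2).sum(dim=1)       # identity-covariance Gaussian
    if weights is not None:                               # optional confidence weighting
        nll = nll * weights
    return nll.mean()
```

In the confidence-weighted variant described above, `weights` would be the maximum posterior probability returned by `gm_posterior` for each pseudo-labeled target sample; filtering instead drops samples whose maximum probability falls below a threshold.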
There is another type of domain adaptation problem, called Supervised Domain Adaptation (SDA) in the literature. In SDA, we are provided with a large amount of labeled source data and a small amount of labeled target data, the goal is to find a hypothesis that works well in the target domain. By employing pseudo labeled target data in the training process, our method can be considered as working on a generalized problem of SDA, where the labeled target data is noisy. has proved that we can bound the target error of a domain adaptation algorithm that minimizes a convex combination of empirical source and target error in SDA as follows: where is the convex combination of the source and target error with γ, f s and f t are the labeling function in the source and target domains respectively, h is a hypothesis in class H, | measures the domain discrepsancy in the hypothesis space H and λ = s (h *) + t (h *) is the combined error in two domains of the joint ideal hypothesis h * = arg min h∈H s (h) + t (h). Denote the noise ratio of the target labeling function to be ρ, the convex combination of the source and noisy target error as˜ γ (h) = γ t (h) + (1 − γ) s (h), where t (h) is the target error on the noisy target labeling function, then we can bound the target error as follows: In summary, this bound is decomposed into three parts: the domain discrepancy d H∆H, the error λ of the ideal joint hypothesis and the noise ratio ρ of the pseudo labels. In DMPN, we minimize the first term through minimizing the domain discrepancy losses, as d H∆H is small when the source features and target features have similar distribution and minimizing the domain discrepancy losses makes the source and target feature to distribute similarly. The second term is assumed to be small, as otherwise there is no classifier that performs well on both domains. Finally, during training, as we continuously improve the accuracy of the classifier for target data, we get more and more accurate predictions, thus reducing the noise ratio ρ. We empirically verify that ρ is decreasing in Section 4.2. digits from'0' to'9'. The MNIST dataset consists of 70k images and the USPS dataset has 9.3k images. Unlike MNIST and USPS, the SVHN (S) dataset is a real-world Digits dataset of house numbers in Google street view images and contains 100k cropped Digits images. We follow the standard evaluation protocol . We consider three directions of adaptation: M → U, U → M and S → M. For the transfer between MNIST and USPS, we sample 2k images from MNIST training set and 1.8k images from USPS training set for adaptation and evaluation is reported on the standard test sets: MNIST, USPS. For S → M, we use the whole training set SVHN and MNIST for adaptation and evaluation is reported on the standard test set MNIST. In addition, we use the same CNN architecture, namely a simple modified version of to be diagonal and the prior probability to be p(c) = 1/C when pre-training the network on the labeled source data. The three trade-off parameters ϕ, α and β in Eqn. 7 are simply set to be 0.1, 1, 0.1. We strictly follow and set the embedding dimension d as 10/512 for Digits/synthetic-to-real image transfer. We implement DMPN with Pytorch. We use ADAM with 0.0005 weight decay and 0.9/0.999 momentum for training and set the mini-batch size to be 128/120 in Digits/synthetic-to-real image transfer. We train the network for 350 epochs for the Digits Image transfer tasks. 
The learning rate is initially set to be 1e-5 for the covariance matrices and 1e-3 for the other parameters 4 and is decayed by 0.1 at epoch 150 and 250. For the synthetic-to-real image transfer, we fix the learning rate to be 1e-6 and train the network for 100 epochs 4. Finally, for the Digits Image transfer tasks, we apply weighted PDM loss to remedy the labeling error, where each sample is weighted by the maximum predicted class probability. For the synthetic-to-real image transfer task, we apply filtering to remedy the labeling error, where only target examples with maximum predicted probability over 0.8 is used for training. Following the standard, for Digits Image transfer tasks, we adopt the classification accuracy on target domain as evaluation metric and for synthetic-to-real image transfer, we use the average per class accuracy for evaluation metric. We will publish our code upon acceptance. Compared Methods. To demonstrate the benefits of our proposed method, we compare it with the following approaches: Source-only directly exploits the classification model trained on source domain to classify target samples. separates the source feature learning and target feature learning using different networks and use a domain discriminator to learn domain invariant features. JAN aligns the joint distribution of the network activation of multiple layers across domains. MCD employs task-specific decision boundaries to align the distributions of source and target domains. CDAN+E adds a conditional adversarial classifier on the deep feature learning pipeline to learn domain invariant features. S-En+Mini-aug modifies the mean teacher variant of temporal ensembling for UDA. TPN is the first work to apply PN for UDA. TPN gen is the variant trained only with general-purpose domain discrepancy loss. DMPN is our proposed method. DMPN GCM M and DMPN P DM are trained only with GCMM loss and PDM loss respectively. Train-on-target is an oracle that trained on labeled target samples. Table 1 shows the of all methods for the two tasks. Overall, our proposed method achieves superior than all the existing methods. For the Digits Image transfer tasks, DMPN has improved the accuracy for M → U, U → M and S → M by 2.6%, 0.7% and 3.8% respectively compared to the second best. We have made great advancement considering the second best accuracy are already quite high. For the task S → M, due to convergence reasons, we have added batch normalization layers to the original CNN architectures. For fair comparison, we have re-run some experiments on other methods by adding batch normalization layers to them. For methods whose public code are not available, we simply report the accuracy with the original CNN architecture. For ADDA, adding batch normalization layer has improved its accuracy from 76.0% to 83.6%, which has an increase of 7.6% of accuracy. However, we doubt adding batch normalization layers will have the same effect on TPN, as TPN already has a quite high accuracy. Nonetheless, we think our accuracy of 96.8% on this task will be difficult for the other methods to surpass even with batch normalization layers. For the Synthetic-to-real image transfer task, we only compare with methods without extensive data augmentations and our method has increased the state-of-the-art single model mean accuracy by 1.0%. TPN gen reduces the domain discrepancy via minimizing the pairwise Reproducing Kernel Hilbert Space (RKHS) distances among the corresponding prototypes of the source-only, target-only and source-target data. 
In DMPN GCMM we minimize the L 2 distance between the corresponding Gaussian component means of the source and target data. The L 2 distance can be viewed as the distance in a Linear RKHS space. The calculation of our proposed GCMM loss is much simpler than the general purpose loss, yet with explicitly modeling of the feature distributions, DMPN GCMM has a gain of accuracy of 3.0%, 1.1%, 6.3% and 6.6% Ablation Analysis. In Table 1, combining GCMM loss and PDM loss helps to increase the accuracy , showing that the two domain discrepancy losses are compatible to each other. DMPN GCMM performs better than or similar to almost all other domain adaptation methods and DMPN PDM performs better than most of them. Convergence Analysis. Figure 2 (a) shows the training progress of DMPN. The GCMM loss and PDM loss keep decreasing with more training epochs. The prediction accuracy on the unlabeled target data keeps increasing. And the noise ratio ρ decreases along the training process, from the initial value of 38.6% decreases to 22.9%, which supports our theoretical analysis in Section 3.5. Figure 3 shows the t-SNE visualizations of the source and target embedded features during training, which shows that target classes are becoming increasingly well discriminated by the source classifier. shows the sensitivity analysis on the hyper-parameters α, β and ϕ with the other hyper-parameters fixed. Overall, the experiment show that we can get similar accuracy or even better when changing the hyper-parameters in a certain range, demonstrating that our method is robust against hyper-parameter changes. The sensitivity analysis on the confidence threshold is in the Appendix A.2, which shows our method is robust against threshold value. In this paper, we propose Distribution Matching Prototypical Network (DMPN) for Unsupervised Domain Adaptation (UDA) where we explicitly model and match the deep feature distribution of the source and target data as Gaussian mixture distributions. Our work fills the gap in UDA where stateof-the-art methods assume the deep feature distributions of the source and target data are unknown when minimizing the discrepancy between them. We propose two new domain discrepancy losses based on the Figure 4: Sensitivity analysis on confidence threshold. Fig. 4 shows the sensitivity analysis of our method on different values of confidence threshold on VisDA 2017 dataset. The experiment show that we can get similar accuracy or even better when changing the confidence threshold in a certain range, demonstrating that our method is robust against hyper-parameter changes. A.3 OFFICE-HOME TRANSFER Table 3 presents experiment of state-of-the-art UDA methods and our method on OfficeHome dataset. Our method gives the best accuracy in all transfer tasks, showing the effectiveness of our method. In this experiment, we train the network for 100 epochs. The learning rate is initially set to be 1e-5 for all the parameters and is decayed by 0.1 at epoch 60 and 80. 1+e −γp respectively, where γ is set to be the default value 10, p is the training process changing from 0 to 1.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1eX1yrKwB
We propose to explicitly model deep feature distributions of source and target data as Gaussian mixture distributions for Unsupervised Domain Adaptation (UDA) and achieve superior results in multiple UDA tasks than state-of-the-art methods.
Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents. We propose to decompose a complex environment using a task-agnostic world graphs, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment. The nodes of a world graph are important waypoint states and edges represent feasible traversals between them. Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data and 2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions. We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail. Many real-world applications, e.g., self-driving cars and in-home robotics, require an autonomous agent to execute different tasks within a single environment that features, e.g. high-dimensional state space, complex world dynamics or structured layouts. In these settings, model-free reinforcement learning (RL) agents often struggle to learn efficiently, requiring a large amount of experience collections to converge to optimal behaviors. Intuitively, an agent could learn more efficiently by focusing its exploration in task-relevant regions, if it has knowledge of the high-level structure of the environment. We propose a method to 1) learn and 2) use an environment decomposition in the form of a world graph, a task-agnostic abstraction. World graph nodes are waypoint states, a set of salient states that can summarize agent trajectories and provide meaningful starting points for efficient exploration (; ;). The directed and weighted world graph edges characterize feasible traversals among the waypoints. To leverage the world graph, we model hierarchical RL (HRL) agents where a high-level policy chooses a waypoint state as a goal to guide exploration towards task-relevant regions, and a low-level policy strives to reach the chosen goals. Our framework consists of two phases. In the task-agnostic phase, we obtain world graphs by training a recurrent variational auto-encoder (VAE) (; ;) with binary latent variables over trajectories collected using a random walk policy and a curiosity-driven goal-conditioned policy . World graph nodes are states that are most frequently selected by the binary latent variables, while edges are inferred from empirical transition statistics between neighboring waypoints. In the task-specific phase, taking advantage of the learned world graph for structured exploration, we efficiently train an HRL model . In summary, our main contributions are: • A task-agnostic unsupervised approach to learn world graphs, using a recurrent VAE with binary latent variables and a curiosity-driven goal-conditioned policy. • An HRL scheme for the task-specific phase that features multi-goal selection (Wide-thenNarrow) and navigation via world graph traversal. 4. 
On its traversal course to wide goal, agent hits final target and exits.: waypoints selected by the manager: waypoints initiates traversal: trajectories directly from worker actions: exit point: agent: final goal from manager close to selected waypoints: trajectories from world graph traversal Figure 1: Top Left: overall pipeline of our 2-phase framework. Top Right (world graph discovery): a subgraph exemplifies traversal between waypoint states (in blue), see Section 3 for more details. Bottom (Hierarhical RL): an example rollout from our proposed HRL policy with Wide-then-Narrow Manager instructions and world graph traversals, solving a challenging Door-Key task, see Section 4 for more details. • Empirical evaluations on multiple tasks in complex 2D grid worlds to validate that our framework produces descriptive world graphs and significantly improves both sample efficiency and final performance on these tasks over baselines, especially thanks to transfer learning from the unsupervised phase and world graph traversal. An understanding of the environment and its dynamics is essential for effective planning and control in model-based RL. For example, a robotics agent often locates or navigates by interpreting a map (; ;). Our exploration strategy draws inspiration from active localization, where robots are actively guided to investigate unfamiliar regions . Besides mapping, recent works (; ;) learn to represent the world with generative latent states (; ; Racanière et al., 2017). If the latent dynamics are also extrapolated, the latent states can assist planning (a;) or model-based RL . While also aiming to model the world, we approach this as abstracting both the structure and dynamics of the environment in a graph representation, where nodes are states from the environment and edges encode actionable efficient transitions between nodes. Existing works (; ; ;) have shown benefits of such graph abstractions but typically select nodes only subject to a good coverage the observed state space. Instead, we identify a parsimonious subset of states that can summarize trajectories and provide more useful intermediate landmarks, i.e. waypoints, for navigating complex environments. Our method for estimating waypoint states can be viewed as performing automatic (sub)goal discovery. Subgoal and subpolicy learning are two major approaches to identify a set of temporally-extended actions, "skills", that allow agents to efficiently learn to solve complex tasks. Subpolicy learning identifies policies useful to solve RL tasks, such as option-based methods and subtask segmentations . Subgoal learning, on the other hand, identifies "important states" to reach (Şimşek et al., 2005). Previous works consider various definitions of "important" states: frequently visited states during successful task completions , states introducing the most novel information , bottleneck states connecting densely-populated regions (;Şimşek et al., 2005), or environment-specific heuristics . Our work draws intuition from unsupervised temporal segmentation and imitation learning . We define "important" states (waypoints) as the most critical states in recovering action sequences generated by some agent, which indicates that these states contain the richest information about the executed policy . We propose a method for learning a world graph G w, a task-agnostic abstraction of an environment that captures its high-level structure and dynamics. 
In this work, the primary use of world graphs is to accelerate reinforcement learning of downstream tasks. The nodes of G w, denoted by a set of waypoints states s p ∈ V p, are generically "important" for accomplishing tasks within the environment, and therefore useful as starting points for exploration. Our method identifies such waypoint states from interactions with the environment. In addition, we embed feasible transitions between nearby waypoint states as the edges of G w. In this work, we define important states in the context of learning G w (see Section 2 for alternative definitions). That is, we wish to discover a small set of states that, when used as world graph nodes, concisely summarize the structure and dynamics of the environment. Below, we describe 1) how to collect state-action trajectories and an unsupervised learning objective to identify world graph nodes, and 2) how the graph's edges (i.e., how to transition between nodes) are formed from trajectories. The structure and dynamics of an environment are implicit in the state-action trajectories observed during exploration. To identify world graph nodes from such data, we train a recurrent variational autoencoder (VAE) that, given a sequence of state-action pairs, identifies a subset of the states in the sequence from which the full action sequence can be reconstructed (Figure 2). In particular, the VAE infers binary latent variables that controls whether each state in the sequence is used by the generative decoder, i.e., whether a state is "important" or not. Binary Latent VAE The VAE consists of an inference, a generative and a prior network. These are structured as follows: the input to the inference network q φ is a trajectory of state-action pairs observed from the environment τ ={(s t, a t)} T t=0, with s={s t} T t=0 and a={a t} T t=0 denoting the state and action sequences respectively. The output of the inference network is the approximated posterior over a sequence z={z t} T t=0 of binary latent variables, denoted as q φ (z|a, s). The generative network p θ computes a distribution over the full action sequence a using the masked state sequence, where s t is masked if z t =0 (we fix z 0 =z T =1 during training), denoted as p θ (a|s, z). Finally, a state-conditioned p ψ (z t |s t) given by the prior network p ψ for each s t encodes the empirical average probability that state s t is activated for reconstruction. This choice encourages inference to select within a consistent subset of states for use in action reconstruction. In particular, the waypoint Algorithm 1: Identifying waypoint states V p and learning a goal-conditioned policy π g Result: Waypoint states V p and a goal-conditioned policy π g Initialize network parameters for the recurrent variational inference model V Initialize network parameters for the goal-conditioned policy π g Initialize V p with the initial position of the agent, i.e. V p = {s 0 =} while VAE reconstruction error has not converged do for n ← 1 to N do Sample random waypoint s p ∈ V p Navigate agent to s p and perform T -step rollout using a randow walk policy: T Navigate agent to s p and perform T -step rollout using π g with goal g n: τ Re-label π g rewards with action reconstruction error as curiosity bonus: end Perform policy gradient update of π g using τ π and r π Update V using τ r and τ π Update V p as set of states with largest prior mean αs αs+βs. 
end states V p are chosen as the states with the largest prior means and during training, once every few iterations, V p is updated based on the current prior network. Objective Formally, we optimize the VAE using the following evidence lower bound (ELBO): To ensure differentiablity, we apply a continuous relaxation over the discrete z t. We use the Beta distribution p ψ (z t) = Beta(α t, β t) for the prior and the Hard Kumaraswamy distribution q ψ (z t |a, z) = HardKuma(α t,β t) for the approximate posterior, which resembles the Beta distribution but is outside the exponential family . This choice allows us to sample 0s and 1s without sacrificing differentiability, accomplished via the stretch-and-rectify procedure and the reparametrization trick . Lastly, to prevent the trivial solution of using all states for reconstruction, we use a secondary objective L 0 to regularize the L 0 norm of z at a targeted value µ 0 , the desired number of selected states out of T steps, e.g. for when T = 25, we set µ 0 = 5, meaning ideally 5 out of 25 states are activated for action reconstruction. Another term L T to encourage temporal separation between selected states by targeting the number of 0/1 switches among z at 2µ 0: See Appendix A for details on training the VAE with binary z t, including integration of the Hard Kumaraswamy distribution and how to regularize the statistics of z. Naturally, the latent structure learned by the VAE depends on the trajectories used to train it. Hence, collecting a rich set of trajectories is crucial. Here, we propose a strategy to bootstrap a useful set of trajectories by alternately exploring the environment based on the current iteration's V p and updating the VAE and V p, repeating this cycle until the action reconstruction accuracy plateaus (Algorithm 1). During exploration, we use action replay to navigate the agent to a state drawn from the current iteration's V p. Although resetting via action replay assumes our underneath environment to be deterministic, in cases where this resetting strategy is infeasible, it may be modified so long as to allow the exploration starting points to expand as the agent discovers more of its environment. For each such starting point, we collect two rollouts. In the first rollout, we perform a random walk to explore the nearby region. In the second rollout, we perform actions using a goal-conditioned policy π g (GCP), setting the final state reached by the random walk as the goal. Both rollouts are used for trianing the VAE and the latter is also used for training π g. GCP provides a venue to integrate intrinsic motivation, such as curiosity (; ; ;) to generate more diverse rollouts. Specifically, we use the action reconstruction error of the VAE as an intrinsic reward signal when training π g. This choice of curioisty also prevents the VAE from collapsing to the simple behaviors of a vanilla π g. The final stage is to construct the edges of G w, which should ideally capture the environment dynamics, i.e. how to transition between waypoint states. Once VAE training is complete and V p is fixed, we collect random walk rollouts from each of the waypoints s p ∈ V p to estimate the underlying adjacency matrix . More precisely, we claim a directed edge s p → s q if there exists a random walk trajectory from s p to s q that does not intersect a third waypoint. We also consider paths taken by π g (starting at s p and setting s q as the goal) and keep the shortest observed path from s p to s q as a world graph edge transition. 
We use the action sequence length of the edge transition between adjacent waypoints as the weight of the edge. As shown experimentally, a key benefit of our approach is the ability to plan over G w. To navigate from one waypoint to another, we can use dynamic programming to output the optimal traversal of the graph. World graphs present a high-level, task-agnostic abstraction of the environment through waypoints and feasible transition routes between them. A key example of world graph applications for taskspecific RL is structured exploration: instead of exploring the entire environment, RL agents can use world graphs to quickly identify task-relevant regions and bias low-level exploration to these regions. Our framework to leverage world graphs for structured exploration consists of two parts: 1. Hierarchical RL wherein the high-level policy selects subgoals from V p. 2. Traversals using world graph edges. Formally, an RL agent learning to solve a task is formulated as a Markov Decision Process: at time t, the agent is in a state s t, executes an action a t via a policy π(a t |s t) and receives a rewards r t. The agent's goal is to maximize its cumulative expected return R = E (st,at)∼π,p,p0 t≥0 γ t r t, where p(s t+1 |s t, a t), p 0 (s 0) are the transition and initial state distributions. To incorporate world graphs with RL, we use a hierarchical approach based on the Feudal Network (FN) , depicted in Figure 3. A standard FN Collect randomly spawned balls, each ball gives +1 reward. To end an episode, the agent has to exit at a designated point. Balls are located randomly, dense reward. Agents receive a single reward r ≤ 1 proportional to the number of balls collected upon exiting. Balls are located randomly, sparse reward. Spawn lava blocks at random locations each time step that immediately terminates the episode if stepped on. Stochastic environment. Multiple objects: lava and balls are randomly located, dense reward. Agent has to pick up a key to open a door (reward +1) and reach the exit point on the other side (reward +1). Walls, door and key are located randomly. Agents have additional actions: pick and toggle. Table 1: An overview of tasks used to evaluate the benefit of using world graphs. Visualizations can be found in Appendix D. decomposes the policy of the agent into two separate policies that receive distinct streams of reward: a high-level policy ("Manager") learns to propose subgoals; a low-level policy ("Worker") receives subgoals from the Manager as inputs and is rewarded for taking actions in the environment that reach the subgoals. The Manager receives the environment reward defined by the task and therefore must learn to emit subgoals that lead to task completion. The Manager and Worker do not share weights and operate at different temporal resolutions: the Manager only outputs a new subgoal if either the Worker reaches the chosen one or a subgoal horizon c is exceeded. For all our experiments, policies are trained using advantage actor-critic (A2C), an on-policy RL algorithm (; ; b). To ease optimization, the feature extraction layers of the Manager and Worker that encode s t are initialized with the corresponding layers from π g, the GCP learned during world graph discovery phase. More details are in Appendix B. 
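The traversal planning referred to above, finding the cheapest route between two waypoints along directed world-graph edges weighted by action-sequence length, is standard shortest-path search. The sketch below is illustrative; the adjacency format and function names are assumptions rather than the authors' code.

```python
# Sketch: plan the cheapest waypoint-to-waypoint route on the world graph.
# Edges are directed and weighted by the length of the recorded action sequence.
import heapq

def plan_traversal(graph, start, goal):
    """graph: {waypoint: [(neighbor, cost), ...]}; returns a waypoint sequence or None."""
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if cost > best.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None  # goal not reachable from start along world-graph edges

# Toy example: three waypoints with a cheaper two-hop route than the direct edge.
toy = {"A": [("B", 3), ("C", 10)], "B": [("C", 2)], "C": []}
assert plan_traversal(toy, "A", "C") == ["A", "B", "C"]
```

The returned waypoint sequence is then executed either by replaying the stored edge action sequences (deterministic environments) or by handing each intermediate waypoint to the goal-conditioned policy, as described in the traversal procedure below.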
To incorporate the world graph, we introduce a Manager policy that factorizes subgoal selection as follows: a wide policy π w (g w t |s t) selects a waypoint state as the wide goal g w ∈ V p, and a narrow policy π n (g n t |s t, g The wide-then-narrow subgoal format simplifies the search space for the Manager policy. Using waypoints as wide goals also makes it possible to leverage the edges of the world graph for planning and executing the planned traversals. This process breaks down as follows: 1. When to Traverse: When the agent encounters a waypoint state s t ∈ V p, a "traversal" is initiated if s t has a feasible connection in G w to the active wide goal g w t . 2. Planning: Upon triggering a traversal, the optimal traversal route from the initiating state to g w t is estimated from the G w edge weights using classic dynamic programming planning . This yields a sequence of intermediate waypoint states. 3. Execution: Execution of graph traversals depends on the nature of the environment. If deterministic, the agent simply follows the action sequences given by the edges of the traversal. Otherwise, the agent uses the pretrained GCP π g to sequentially reach each of the intermediate waypoint states along the traversal (we fine-tune π g in parallel where applicable). If the agent fails to reach the next waypoint state within a certain time limit, it stops its current pursuit and a new (g w, g n) pair is received from the Manager. World graph traversal allows the Manager to assign task-relevant wide goals g w that can be far away from the agent yet still reachable, which consequentially accelerates learning by focusing exploration around the task-relevant region near g w. We now assess each component of our framework on a set of challenging 2D grid worlds. Our ablation studies demonstrate the following benefits of our framework: • It improves sample efficiency and performance over the baseline HRL model. • It benefits tasks varying in envirionment scale, task type, reward structure, and stochasticity. • The identified waypoints provide superior world representations for solving downstream tasks, as compared to graphs using randomly selected states as nodes. Implementation details, snippets of the tasks and mazes are in Appendix C-D. For our ablation studies, we construct 2D grid worlds of increasing sizes (small, medium and large) along with challenging tasks with different reward structures, levels of stochasticity and logic (summarized in Table 1). In all tasks, every action taken by the agent receives a negative reward penalty. We follow a rigorous evaluation protocol (; ;): each experiment is repeated with 3 training seeds. 10 additional validation seeds are used to pick the model with the best reward performance. This model is then tested on 100 testing seeds. We report mean reward and standard deviation. We ablate each of the following components in our framework and compare against non-hierarchical (A2C) and hierarchical baselines (FN): 1. initializing the feature extraction layers of the Manager and Worker from π g, 2. applying Wide-then-Narrow Manager (WN) goal instruction, and 3. allowing the Worker to traverse along G w. Results are shown in Table 2. In sum, each component improves performance over the baselines. Wide and narrow goals Using two goal types is a highly effective way to structure the Manager instructions and enables the Worker to differentiate the transition and local task-solving phases. 
We note that for small MultiGoal, agents do not benefit much from G w traversal: it can rely solely on the guidance from WN goals to master both phases. However with increasing maze size, the Worker struggles to master traversals on its own and thus fails solving the tasks. World Graph Traversal As conjectured in Section 4.3, the performance gain of our framework can be explained by the larger range and more targeted exploration strategy. In addition, the Worker We see that 1) traversal speeds up convergence, 2) V rand gives higher variance and slightly worse performance than Vp. Right: comparing with or without πg initialization on Vp, all models use WN. We see that initializing the task-specific phase with the task-agnostic goal-conditioned policy significantly boosts learning. does not have to learn long distance transitions with the aid of G w traversals. Figure 4 confirms that G w traversal speeds up convergence and its effect becomes more evident with larger mazes. Note that the graph learning stage only need 2.4K iterations to converge. Even when taking these additional environment interactions into account, G w traversal still exhibits superior sample efficiency, not to mention that the graph is shared among all tasks. Moreover, solving Door-Key involves a complex combination of sub-tasks: find and pick up the key, reach and open the door and finally exit. With limited reward feedback, this is particularly difficult to learn. The ability to traverse along G w enables longer-horizon planning on top of the waypoints, thanks to which the agents boost the success rate on medium Door-Key from 0.56±0.02 to 0.75±0.06. To highlight the benefit of establishing the waypoints learned by the VAE as nodes for G w, we compare against using a G w constructed around randomly selected states (V rand). The edges of the random-node graph are formed in the same way as described in Section 3.3 and its feature extractor is also initialized from π g. Although granting knowledge acquired during the unsupervised phase to V rand is unfair to V p, deploying both initialization and traversal while only varying V rand and V p isolates the effect from the nodes to the best extent. The comparative (in Table 3, learning curves for MultiGoal in Figure 4) suggest V p generally outperforms V rand. Door-Key is the only task in which the two matches. However, V rand exhibits a large variance, implying that certain sets of random states can be suitable for this task, but using learned waypoints gives strong performance more consistently. Initialization with GCP Initializing the weights of the Worker and Manager feature extractors from π g (learned during the task-agnostic phase) consistently benefits learning. In fact, we observe that models starting from scratch fail on almost all tasks within the maximal number of training iterations, unless coupled with G w traversal, which is still inferior to using π g -initialization. Particularly, for the small MultiGoal-Stochastic environment, there is a high chance that a lava square blocks traversal; therefore, without the environment knowledge from π g transferred by weight initialization, the interference created by the episode-terminating lava prevents the agent from learning the task. We have shown that world graphs are powerful environment abstractions, which, in particular, are capable of accelerating reinforcement learning. Future works may extend their applications to more challenging RL setups, such as real-world multi-task learning and navigation. 
It is also interesting to generalize the proposed framework to learn dynamic world graphs for evolving environments, and to apply world graphs to multi-agent problems, where agents become part of the world graphs of other agents. As illustrated in the main text, the main objective for the recurrent VAE is the following evidence lower bound, with derivation: log p(a|s) = log ∫ p(a|s, z) p(z|s) dz = log ∫ p(a|s, z) [p(z|s) / q(z|a, s)] q(z|a, s) dz ≥ E_{q(z|a,s)}[log p(a|s, z)] − KL(q(z|a, s) ‖ p(z|s)). The inference network q_ψ takes in the trajectories of state-action pairs τ and at each time step approximates the posterior of the corresponding latent variable z_t. The prior network p_ψ takes the state s_t at each time step and outputs the state-conditioned prior p_ψ(s_t). We choose the Beta as the prior distribution and the HardKuma as the approximate posterior to relax the discrete latent variables to continuous surrogates. The Kuma distribution Kuma(α, β) highly resembles the Beta distribution in shape but does not come from the exponential family. Similar to the Beta, the Kuma distribution also ranges from bimodal (when α ≈ β) to unimodal (α/β → 0 or α/β → ∞). Also, when α = 1 or β = 1, Kuma(α, β) = Beta(α, β). We observe empirically better performance when we fix β = 1 for the Kuma approximate posterior. One major advantage of the Kuma distribution is its simple Cumulative Distribution Function (CDF), F(x; α, β) = 1 − (1 − x^α)^β. It is therefore amenable to the reparametrization trick by sampling from a uniform distribution u ∼ U(0, 1) and setting x = (1 − (1 − u)^{1/β})^{1/α}. Lastly, the KL-divergence between the Kuma and Beta distributions can be approximated in closed form, where Ψ is the Digamma function, γ the Euler constant, and the approximation uses the first few terms of a Taylor series expansion. We take the first 5 terms here. Next, we make the Kuma distribution "hard" by following the steps in prior work. First stretch the support to (l = 0 − ε_1, r = 1 + ε_2), ε_1, ε_2 > 0, so that the resulting CDF takes the form F((x − l)/(r − l); α, β). Then, non-negligible probabilities for 0's and 1's are attained by rectifying all samples below 0 to 0 and above 1 to 1, leaving other values as they are, i.e. z = min(1, max(0, t)); a short sampling sketch is given below. Lastly, we impose two additional regularization terms L_0 and L_T on the approximate posteriors. As described in the main text, L_0 prevents the model from selecting all states to reconstruct {a_t} by restraining the expected L0 norm of z = (z_1 ⋯ z_{T−1}) to be approximately at a targeted value µ_0. In other words, this objective adds the constraint that there should be roughly µ_0 activated z_t = 1 given a sequence of length T. The other term L_T encourages temporally isolated activation of z_t, meaning the number of transitions between 0 and 1 among the z_t's should roughly be 2µ_0. Note that both expectations in Equation 2 have closed forms for HardKuma. Lagrangian Relaxation. The overall optimization objective consists of action sequence reconstruction, the KL-divergence between the posterior and prior, L_0 and L_T (Equation 12). We tune the objective weights λ_i using Lagrangian relaxation, treating the λ_i's as learnable parameters and performing alternating optimization between the λ_i's and the model parameters. We observe that as long as their initialization is within a reasonable range, the λ_i's converge to a local optimum. We observe this approach to produce efficient and stable mini-batch training. Optimizing composite neural networks like HRL is sensitive to weight initialization, due to their complexity and lack of clear supervision at various levels.
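As a concrete illustration of the stretch-and-rectify construction described above, here is a minimal sketch of reparametrized HardKuma sampling in PyTorch. It assumes the standard Kumaraswamy inverse CDF and the stretch limits reported in the appendix; all variable names are illustrative and this is not the authors' exact implementation.

```python
import torch

def hard_kuma_sample(alpha, beta, l=-0.1, r=1.1, eps=1e-6):
    """Reparametrized HardKuma sample: inverse-CDF Kuma draw, stretched to (l, r), then rectified to [0, 1].

    alpha, beta: tensors of Kuma(alpha, beta) parameters (beta is fixed to 1 for the paper's posterior).
    """
    u = torch.rand_like(alpha).clamp(eps, 1.0 - eps)
    # Inverse CDF of Kuma(alpha, beta): x = (1 - (1 - u)^(1/beta))^(1/alpha)
    s = (1.0 - (1.0 - u).pow(1.0 / beta)).pow(1.0 / alpha)
    t = l + (r - l) * s                     # stretch the support to (l, r)
    return t.clamp(0.0, 1.0)                # rectify: point masses at exactly 0 and 1

alpha = torch.full((5,), 0.5)
beta = torch.ones(5)
print(hard_kuma_sample(alpha, beta))
```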
Therefore, taking inspiration from prevailing pre-training procedures in computer vision and NLP, we take advantage of the weights learned by π_g during world graph discovery when initializing the Worker and Manager policies for downstream HRL, as π_g has already implicitly embodied much information about the environment dynamics. More specifically, we extract the weights of the feature extractor, i.e. the state encoder, and use them as the initial weights for the state encoders of the HRL policies. Our empirical results demonstrate that such weight initialization consistently improves performance and validates the value of skill/knowledge transfer from the GCP. The model code folder, including all architecture details, is shared in the comments. Our models are optimized with Adam using mini-batches of size 128, thus spawning 128 asynchronous agents to explore. We use an initial learning rate of 0.0001, with ε = 0.001, β_1 = 0.9, β_2 = 0.999; gradients are clipped to 40 for the inference and generation nets. For HardKuma, we set l = −0.1 and r = 1.1. The maximum sequence length for the BiLSTM is 25. The total number of training iterations is 3600 and the model usually converges around 2400 iterations. We train the prior, inference, and generation networks end-to-end. We initialize the λ_i's (see Lagrangian Relaxation) to be λ_1 = 0.01 (KL-divergence), λ_2 = 0.06 (L_0), λ_3 = 0.02 (L_T). After each update of the latent model, we update the λ_i's, whose initial learning rate is 0.0005, by maximizing the original objective, in a manner similar to using Lagrangian multipliers. At the end of optimization, the λ_i's converge to locally optimal values. For example, with the medium maze, λ_1 = 0.067 for the KL-term, λ_2 = 0.070 for L_0 and λ_3 = 0.051 for the L_T term. The total number of waypoints |V_p| is set to be 20% of the size of the full state space. The procedure by which the Manager and the Worker send and receive orders, using either traversal paths among V_p from the replay buffer for deterministic environments or π_g for stochastic ones, is as follows (a schematic rendering of the resulting reward assignment is given below): 1. The Manager gives a wide-narrow subgoal pair (g_w, g_n). 2. The agent takes actions based on the Worker policy π_ω conditioned on (g_w, g_n) and reaches a new state s. If s ∈ V_p, g_w has not yet been met, and there exists a valid path s → g_w based on the edge paths from the world graph, the agent then either follows the replayed actions or π_g to reach g_w. If π_g still does not reach the desired destination within a certain number of steps, the agent is stopped wherever it stands; π_g can also be fine-tuned here. 3. The Worker receives a positive reward for reaching g_w for the first time. 4. If the agent reaches g_n, the Worker also receives a positive reward and terminates this horizon. 5. The Worker receives a negative reward for every action taken except during traversal; the Manager receives a negative reward for every action taken, including traversal. 6. When either g_n is reached or the maximum time step for this horizon is met, the Manager renews its subgoal pair. The training of the Worker policy π_ω follows the same A2C algorithm as π_g. The training of the Manager policy π_m also follows a similar procedure, but as it operates at a lower temporal resolution, its value function regresses against the t_m-step discounted reward, where t_m covers all actions and rewards generated by the Worker.
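The following is a schematic sketch of the per-step reward assignment implied by the interaction protocol above. The numeric values and function signature are illustrative assumptions, not the paper's tuned constants.

```python
def hierarchical_rewards(reached_gw_first_time, reached_gn, in_traversal,
                         step_penalty=0.01, goal_bonus=1.0):
    """Per-step rewards for the Worker and Manager under the interaction protocol above.

    The Worker is rewarded for reaching g_w the first time and for reaching g_n (which
    terminates the horizon); it pays the step penalty except during graph traversal.
    The Manager pays the step penalty for every action, including traversal steps.
    """
    worker = 0.0 if in_traversal else -step_penalty
    manager = -step_penalty
    if reached_gw_first_time:
        worker += goal_bonus
    if reached_gn:
        worker += goal_bonus
    horizon_done = reached_gn
    return worker, manager, horizon_done

print(hierarchical_rewards(False, True, False))   # (0.99, -0.01, True)
```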
When using the Wide-then-Narrow instruction, the policy gradient for the Manager policy π_m becomes: E_{(s_t,a_t)∼π,p,p_0} [ A_{m,t} ∇ log ( π_w(g_{w,t}|s_t) π_n(g_{n,t}|s_t, g_{w,t}, s_{w,t}) ) ] + ∇ [ H(π_w) + H(π_n(·|g_{w,t})) ], where A_{m,t} is the Manager's advantage at time t. Also, for the Manager, as the size of the action space scales linearly with |S|, the exact entropy of π_m can easily become intractable: essentially there are on the order of |V_p| × N² possible actions. To calculate the entropy exactly, all of them have to be summed over, H = − Σ_{w∈V_p} Σ_{w_n∈s_w} π_n(w_n|s_w, s_t) π_w(w|s_t) log [ π_n(w_n|s_w, s_t) π_w(w|s_t) ], which easily becomes computationally intractable. Thus in practice we resort to an effective alternative, H(π_w) + H(π_n(·|g_{w,t})), sketched below. Pseudo-code for Manager training is in Algorithm 2. For training the HRL policies, we inherit most hyperparameters from those used when training π_g, as the Manager and the Worker both share similar architectures with π_g. The hyperparameters used when training π_g follow those from the cited prior work. Because the tasks used in the HRL experiments are more difficult than the generic goal-reaching task, we set the maximal number of training iterations to 100K and training is stopped early if model performance reaches a plateau. The number of rollout steps for each iteration is 60. Hyperparameters specific to HRL are the horizon c = 20 and the size of the Manager's local attention range (that is, the neighborhood around g_w within which g_n is selected), which is N = 5 for the small and medium mazes, and N = 7 for the large maze.
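Below is a minimal sketch of the tractable entropy surrogate H(π_w) + H(π_n(·|g_w)) used in place of the exact joint entropy over all wide/narrow goal pairs. Tensor shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def factorized_entropy(wide_logits, narrow_logits):
    """Tractable entropy surrogate H(pi_w) + H(pi_n(.|g_w)).

    wide_logits:   [batch, |V_p|]  logits over waypoint (wide) goals
    narrow_logits: [batch, N*N]    logits over the local neighborhood of the sampled wide goal
    The exact joint entropy would require summing over roughly |V_p| * N^2 goal pairs.
    """
    def entropy(logits):
        log_p = F.log_softmax(logits, dim=-1)
        return -(log_p.exp() * log_p).sum(dim=-1)
    return entropy(wide_logits) + entropy(narrow_logits)

wide = torch.randn(4, 100)    # e.g. 100 candidate waypoints
narrow = torch.randn(4, 25)   # e.g. a 5x5 neighborhood (N = 5)
print(factorized_entropy(wide, narrow).shape)   # torch.Size([4])
```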
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkgRe1SFDS
We learn a task-agnostic world graph abstraction of the environment and show how using it for structured exploration can significantly accelerate downstream task-specific RL.
We introduce the notion of property signatures, a representation for programs and program specifications meant for consumption by machine learning algorithms. Given a function with input type τ_in and output type τ_out, a property is a function of type: (τ_in, τ_out) → Bool that (informally) describes some simple property of the function under consideration. For instance, if τ_in and τ_out are both lists of the same type, one property might ask ‘is the input list the same length as the output list? ’. If we have a list of such properties, we can evaluate them all for our function to get a list of outputs that we will call the property signature. Crucially, we can ‘guess’ the property signature for a function given only a set of input/output pairs meant to specify that function. We discuss several potential applications of property signatures and show experimentally that they can be used to improve over a baseline synthesizer so that it emits twice as many programs in less than one-tenth of the time. Program synthesis is a longstanding goal of computer science research (; ; ; Shaw; ;), arguably dating to the 1940s and 50s . Deep learning methods have shown promise at automatically generating programs from a small set of input-output examples (; ; b; 2019b). In order to deliver on this promise, we believe it is important to represent programs and specifications in a way that supports learning. Just as computer vision methods benefit from the inductive bias inherent to convolutional neural networks , and likewise with LSTMs for natural language and other sequence data , it stands to reason that ML techniques for computer programs will benefit from architectures with a suitable inductive bias. We introduce a new representation for programs and their specifications, based on the principle that to represent a program, we can use a set of simpler programs. This leads us to introduce the concept of a property, which is a program that computes a boolean function of the input and output of another program. For example, consider the problem of synthesizing a program from a small set of input-output examples. Perhaps the synthesizer is given a few pairs of lists of integers, and the user hopes that the synthesizer will produce a sorting function. Then useful properties might include functions that check if the input and output lists have the same length, if the input list is a subset of the output, if element 0 of the output list is less than element 42, and so on. The outputs of a set of properties can be concatenated into a vector, yielding a representation that we call a property signature. Property signatures can then be used for consumption by machine learning algorithms, essentially serving as the first layer of a neural network. In this paper, we demonstrate the utility of property signatures for program synthesis, using them to perform a type of premise selection as in. More broadly, however, we envision that property signatures could be useful across a broad range of problems, including algorithm induction , improving code readability , and program analysis . More specifically, our contributions are: • We introduce the notion of property signatures, which are a general purpose way of featurizing both programs and program specifications (Section 3). • We demonstrate how to use property signatures within a machine-learning based synthesizer for a general-purpose programming language. 
This allows us to automatically learn a useful set of property signatures, rather than choosing them manually (Sections 3.2 and 4). • We show that a machine learning model can predict the signatures of individual functions given the signature of their composition, and describe several ways this could be used to improve existing synthesizers (Section 5). • We perform experiments on a new test set of 185 functional programs of varying difficulty, designed to be the sort of algorithmic problems that one would ask on an undergraduate computer science examination. We find that the use of property signatures leads to a dramatic improvement in the performance of the synthesizer, allowing it to synthesize over twice as many programs in less than one-tenth of the time (Section 4). An example of a complex program that was synthesized only by the property signatures method is shown in Listing 1. For our experiments, we created a specialized programming language, called Searcho 1 (Section 2), based on strongly-typed functional languages such as Standard ML and Haskell. Searcho is designed so that many similar programs can be executed rapidly, as is needed during a large-scale distributed search during synthesis. We release 2 the programming language, runtime environment, distributed search infrastructure, machine learning models, and training data from our experiments so that they can be used for future research. Listing 1: A program synthesized by our system, reformatted and with variables renamed for readability. This program returns the sub-list of all of the elements in a list that are distinct from their previous value in the list. In Inductive Program Synthesis, we are given a specification of a program and our goal is to synthesize a program meeting that specification. Inductive Synthesis is generally divided into Programming by Example (PBE) and Programming by Demonstration (PBD). This work is focused on PBE. In PBE, we are given a set of input/output pairs such that for each pair, the target program takes the input to the corresponding output. Existing PBE systems include , , and. A PBE specification might look like Listing 2: 1 io_pairs = [,,,] Listing 2: An example PBE specification. for which a satisfying solution would be the function squaring its input. Arbitrarily many functions satisfy this specification. It is interesting but out of scope 3 to think about ways to ensure that the synthesis procedure recovers the'best' or'simplest' program satisfying the specification. Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (; ; ;). We would like to focus on the synthesis of programs in a Turing complete language, but this presents technical challenges: First, general purpose languages such as C++ or Python are typically quite complicated and sometimes not fully specified; this makes it a challenge to search over partial programs in those languages. Second, sandboxing and executing code written in these languages is nontrivial. Finally, searching over and executing many programs in these languages can be quite slow, since this is not what they were designed for. For these reasons, we have created a general-pupose, Turing complete programming language and runtime. The programming language is called Searcho and it and its runtime have been designed specifically with program synthesis in mind. 
The language can roughly be thought of as a more complicated version of the simply typed lambda calculus or as a less complicated version of Standard ML or OCaml. 4 Searcho code is compiled to bytecode and run on the Searcho Virtual Machine. Code is incrementally compiled, which means that the standard library and specification can be compiled once and then many programs can be pushed on and popped off from the stack in order to check them against the specification. Searcho is strongly typed with algebraic datatypes 5 Searcho includes a library of 86 functions, all of which are supported by our synthesizer. This is a significantly larger language and library than have been used in previous work on neural program synthesis. We have also implemented a baseline enumerative synthesizer. The main experiments in this paper will involve plugging the outputs of a machine learning model into the configuration for our baseline synthesizer to improve its performance on a set of human-constructed PBE tasks. Consider the PBE specification in Listing 3: 1, 2345, 34567], Listing 3: An example PBE Specification. We can see that the function concatenating the input list to its reverse will satisfy the specification, but how can we teach this to a computer? we take the approach of training a machine learning model to do premise selection for a symbolic search procedure. But how do we get a representation of the specification to feed to the model? , the model acts only on integers and lists of integers, constrains all integers to lie in [−256, 256], has special-case handling of lists, and does not deal with polymorphic functions. It would be hard to apply this technique to the above specification, since the first example contains unbounded integers, the second example contains a different type than the first 6, and the third and fourth examples contain recursive data structures (lists of characters and lists of integers respectively). Thankfully, we can instead learn a representation that is composed of the outputs of multiple other programs running on each input/output pair. We will call these other programs properties. Consider the three properties in Listing 4. 1 all_inputs_in_outputs ins outs = all (map (\x -> x in outs) ins) 2 ouputs_has_dups ins outs = has_duplicates (outs) 3 input_same_len_as_output ins outs = (len ins) == (len outs) Listing 4: Three function projections that can act on the specification from Listing 3. 4 In this paper, we will present illustrative programs in Haskell syntax to make them more broadly readable. Searcho programs will be presented in Searcho syntax, which is similar. 5 Types have been shown to substantially speed up synthesis. See e.g. Figure 6 of. 6 So any function satisfying the spec will be parametrically polymorphic. Each of these three programs can be run on all 4 of the input output pairs to yield a Boolean. The first always returns True for our spec, as does the second. The third always returns False on the given examples, although note that it would return True if the examples had contained the implicit base case of the empty list. Thus, we can write that our spec has the'property signature' [True, True, False]. How is this useful? From the first property we can infer that we should not throw away any elements of the input list. From the third we might guess that we have to add or remove elements from the input list. Finally, the second might imply that we need to create copies of the input elements somehow. 
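Since the paper's properties are Searcho programs, the following is only an illustrative Python analogue of the Listing 4 properties, evaluated on a small specification for the "input list concatenated with its reverse" function; the example input/output pairs here are invented for illustration, but the resulting per-pair outputs reproduce the signature [True, True, False] discussed above.

```python
# Python analogues of the three Listing 4 properties.
all_inputs_in_outputs = lambda ins, outs: all(x in outs for x in ins)
output_has_dups = lambda ins, outs: len(outs) != len(set(outs))
input_same_len_as_output = lambda ins, outs: len(ins) == len(outs)

properties = [all_inputs_in_outputs, output_has_dups, input_same_len_as_output]

def signature_on_pair(io_pair):
    ins, outs = io_pair
    return [p(ins, outs) for p in properties]

# Illustrative spec for a function returning the input list concatenated with its reverse.
spec = [([1, 2, 3], [1, 2, 3, 3, 2, 1]), ([7], [7, 7])]
print([signature_on_pair(pair) for pair in spec])
# [[True, True, False], [True, True, False]]
```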
This does not narrow our search down all the way, but it narrows it down quite a lot. Since the properties are expressed in the same language as the programs we are synthesizing, we can emit them using the same synthesizer. Later on, we will describe how we enumerate many random properties and prune them to keep only the useful ones. The property signatures that we consider in our experiments contain thousands of values. Since the output of these properties is either always True, always False, or sometimes True and sometimes False, a neural network can learn embeddings for those three values and it can be fed a vector of such values, one for each applicable property, as the representation of a program specification. Now we describe our representation for a program f:: τ in → τ out. Each property is a program p:: (τ in, τ out) → Bool that represents a single "feature" of the program's inputs and outputs which might be useful for its representation. 7 In this section, we assume that we have determined a sequence P = [p 1 . . . p n] of properties that are useful for describing f, and we wish to combine them into a single representation of f. Later, we will describe a learning principle for choosing relevant properties. We want the property signature to summarize the output of all the properties in P over all valid inputs to f. To do this, we first extend the notion of property to a set of inputs in the natural way. If S is a set of values of type τ in and p ∈ P, we define p(S) = {p(x, f (x)) | x ∈ S}. Because p(S) is a set of booleans, it can have only three possible values, either p(S) = {True}, or p(S) = {False}, or p(S) = {True, False}, corresponding respectively to the cases where p is always true, always false, or neither. To simplify notation slightly, we define the function Π as Π({True}) = AllTrue, Π({False}) = AllFalse, and Π({True, False}) = Mixed. Finally, we can define the property signature sig(P, f) for a program f and a property sequence P as where V (τ in) is the possibly infinite set of all values of type τ in. Computing the property signature for f could be intractable or undecidable, as it might require proving difficult facts about the program. Instead, in practice, we will compute an estimated property signature for a small set of input-output pairs S io. The estimated property signature summarizes the actions of P on S io rather than on the full set of inputs V (τ in). Formally, the estimated property signature is This estimate gives us an under-approximation of the true signature of f in the following sense: If we have sig(P, S) = Mixed, we must also have sig(P, f) = Mixed. If sig(P, S) = AllTrue, then either sig(P, f) = AllTrue or sig(P, f) = Mixed, and similarly with AllFalse. Estimated property signatures are particularly useful for synthesis using PBE, because we can compute them from the input-output pairs that specify the synthesis task, without having the definition of f. Thus we can use estimated property signatures to'featurize' PBE specifications for use in synthesis. How do we choose a set of properties that will be useful for synthesis? Given a training set of random programs with random input/output examples, we generate many random properties. We then prune the random properties based on whether they distinguish between any of the programs. 
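Before turning to how the properties themselves are chosen and pruned, the Π summarization behind estimated property signatures (Section 3.1 above) can be sketched in the same illustrative Python style; the strings AllTrue, AllFalse and Mixed stand in for the three summary values, and everything else here is invented for illustration.

```python
def summarize(bools):
    """The Pi map over a property's outputs on the example set."""
    values = set(bools)
    if values == {True}:
        return "AllTrue"
    if values == {False}:
        return "AllFalse"
    return "Mixed"

def estimated_signature(properties, io_pairs):
    """Estimated property signature of a specification given only its example pairs."""
    return [summarize([p(i, o) for i, o in io_pairs]) for p in properties]

props = [lambda i, o: len(i) == len(o),         # same length?
         lambda i, o: all(x in i for x in o)]   # output drawn entirely from the input?
spec = [([1, 2], [2, 1]), ([3], [3, 3])]
print(estimated_signature(props, spec))          # ['Mixed', 'AllTrue']
```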
Then, given a test suite of programs, we do an additional pruning step: among all properties that give the same value for every element of the test suite, we keep the shortest property, because of Occam's razor considerations. Given these'useful' properties, we can train a premise selector to predict library function usage given properties. Specifically, from the remaining properties, we compute estimated property signatures for each function in the training set, based on its input output examples. Then we use the property signature as the input to a feedforward network that predicts the number of times each library function appears in the program. In Section 4, we will give more details about the architecture of this premise selector, and evaluate it for synthesis. For now, we point out that this premise selector could itself be used to find useful properties, by examining which properties are most useful for the model's predictions. Experiments in the next section will establish that property signatures let our baseline synthesizer emit programs it previously could not, but we think that they can have broader utility: • They allow us to represent more types of functions. Property signatures can automatically deal with unbounded data types, recursive data types, and polymorphic functions. • They reduce dependency on the distribution from which examples are drawn. If the user of a synthesizer gives example inputs distributed differently than the training data, the'estimated' properties might not change much. • They can be used wherever we want to search for functions by semantics. Imagine a search engine where users give a specification, the system guesses a property signature, and this signature guess is used to find all the pre-computed functions with similar semantics. • Synthesized programs can themselves become new properties. For example, once I learn a program for primality checking, I can use primality checking in my library of properties. We design an experiment to answer the following question: Can property signatures help us synthesize programs that we otherwise could not have synthesized? As we will show, the answer is yes! How Does the Baseline Synthesizer Work? Our baseline synthesizer is very similar to that in and works by filling in typed holes 9. That is, we infer a program type τ in → τ out from the specification and the synthesizer starts with a empty'hole' of type τ in → τ out and then fills it in all possible ways allowed by the type system. Many of these ways of filling-in will yield new holes, which can in turn be filled by the same technique. When a program has no holes, we check if it satisfies the spec. We order the programs to expand by their cost, where the cost is essentially a sum of the costs of the individual operations used in the program. At the beginning of the procedure, the synthesizer is given a configuration, which is essentially a weighted set of pool elements that it is allowed to use to fill in the holes. A pool element is a rewrite rule that replaces a hole with a type-correct Searcho program, which may itself contain its own, new holes. In our synthesizer, there is one possible pool element for each of the 86 library functions in Searcho, which calls the library function, with correctly-typed holes for each of its arguments. The configuration will specify a small subset of these pool elements to use during search. It is through the configuration that we will use machine learning to inform the search procedure, as we describe later. 
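A minimal sketch of the premise-selection model used to inform the search: learned embeddings for the three summary values feed a fully connected network that predicts how many times each of the 86 library functions appears in the target program. Layer sizes and the exact loss are illustrative assumptions; the predicted counts are then used to bias which pool elements go into a configuration.

```python
import torch
import torch.nn as nn

class PremiseSelector(nn.Module):
    """Property signature -> predicted library-function usage counts (sketch)."""
    def __init__(self, num_properties, num_library_fns=86, embed_dim=8, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(3, embed_dim)   # 0=AllTrue, 1=AllFalse, 2=Mixed
        self.net = nn.Sequential(
            nn.Linear(num_properties * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_library_fns),
        )

    def forward(self, signature_ids):             # [batch, num_properties] ints in {0, 1, 2}
        x = self.embed(signature_ids).flatten(1)
        return self.net(x)                        # predicted count for each library function

model = PremiseSelector(num_properties=1000)
fake_signatures = torch.randint(0, 3, (4, 1000))
print(model(fake_signatures).shape)               # torch.Size([4, 86])
```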
See Appendix A.1 for further details on this baseline system. How is the Training Data Generated? Our test corpus contains programs with 14 different types. For each of those 14 types, we randomly sample configurations and then randomly generate training programs for each configuration, pruning for observational equivalence. We generate up to 10,000 semantically distinct programs for each type, though of course some function types admit fewer distinct programs than this (e.g. Bool → Bool). We also generate and prune random properties as described in Section 3.2. See Listing 5 for examples of useful properties that were generated. How was the Test Set Constructed? We've constructed a test set of 185 human-generated programs ranging in complexity from a single line to many nested function calls with recursion. Programs in the test set include computing the GCD of two integers, computing the n-th Fibonacci number, computing the intersection of two sets, and computing the sum of all pairs in two lists. We ensure that none of the test functions appear in the training set. See the open source code for more details on this. What is the Architecture of the Model? As mentioned above, we train a neural network to predict the number of times each pool element will appear in the output. This neural network is fully connected, with learned embeddings for each of the values AllTrue, AllFalse and Mixed. How does the Model Output Inform the Search Procedure? Since we have a large number of pool elements, we can't run the synthesizer with all pool elements if we want to find programs of reasonable length. This is both because we will run out of memory and because it will take too long. Thus, we randomly sample configurations with fewer pool elements. We then send multiple such configurations to a distributed synthesis server that tries them in parallel. When we use the model predictions, we sample pool elements in proportion to the model's predicted number of times that pool element appears. The baseline samples pool elements in proportion to their rate of appearance in the training set. We ran 3 different runs of our distributed synthesizer for 100,000 seconds with and without the aid of property signatures. The baseline synthesizer solved 28 test programs on average. With property signatures, the synthesizer solved an average of 73 test programs. See Figure 1 for more discussion. [Figure 1 caption: Comparison of synthesis with and without property signatures. The x-axis denotes time elapsed in seconds; roughly speaking, we let the distributed synthesizer run for 1 day. The y-axis represents the cumulative number of programs synthesized. On average, the baseline solved 28 of the test programs, while the baseline enhanced with property signatures solved 73 test programs (around 2.6 times as many programs). Both the baseline and the run with property signatures were run with three different random seeds. Altogether, this experiment provides strong evidence that property signatures can be useful.] Indeed, it can be seen from the figure that not only did the synthesizer solve many more test programs using property signatures, but it did so much faster, synthesizing over twice as many programs in one-tenth of the time of the baseline. Most programs involve composing functions with other functions. Suppose that we are trying to solve a synthesis problem from a set of input/output examples, and during the search we create a partial program of the form f(g(x)) for some unknown g. Since we know f, we know its property signature. Since we have the program specification, we also have the estimated property signature for f ∘ g := f(g(x)). If we could somehow guess the signature for g, we could look it up in a cache of previously computed functions keyed by signature. If we found a function matching the desired signature, we would be done. If no matching function exists in the cache, we could start a smaller search with only the signature of g as the target, then use that result in our original search. We could attempt to encode the relationship between f and g into a set of formal constraints and pass that to a solver of some kind (De Moura & Bjørner, 2008), and while that is potentially an effective approach, it may be difficult to scale to a language like Searcho. Instead, we can simply train a machine learning model to predict the signature of g from the signature of f and the signature of f ∘ g. Here we present an experiment to establish a proof of concept of this idea. First, we generated a data set of 10,000 random functions taking lists of integers to lists of integers. Then we randomly chose 50,000 pairs of functions from this list, arbitrarily designating one as f and one as g. We then computed the signatures of f, g and f ∘ g for each pair, divided the data into a training set of 45,000 elements and a test set of 5,000 elements, and trained a small fully connected neural network to predict the signature of g from the other two signatures. On the test set, this model had 87.5% accuracy, which is substantially better than chance. We inspected the predictions made on the test set and found interesting examples like the one in Listing 6, where the model has learned to do something you might (cautiously) refer to as logical deduction on properties. This is suggestive of the expressive power of property signatures. It also points toward exciting future directions for research into neurally guided program synthesis. Listing 6: Example of a successful prediction made by our composition predictor model. The property in question checks whether all the elements of the output list are members of the input list. For f, the value is AllTrue, and for f ∘ g the value is Mixed. The model doesn't know g or its signature, but correctly predicts that the value of this property for g must be Mixed. There is substantial prior work on program synthesis in general. We can hardly do it justice here, but see the existing surveys for more detailed treatment. Property Based Testing: Function properties are similar to the properties from Property Based Testing, a software testing methodology popularized by the QuickCheck library that has now spread to many contexts. QuickCheck properties are human-specified and operate on functions, while our properties operate on input/output pairs. Automated Theorem Proving: Synthesizing programs using machine learning is related to the idea of proving theorems using machine learning. Synthesis and theorem proving are formally related as well. Most existing work on program synthesis approaches the problem from the perspective of programming language design. Our baseline synthesizer borrows many ideas from this line of work; one notable approach uses refinement types (roughly, a decidable version of dependent types) to give program specifications, allowing the type-checker to discard many candidate programs.
Property signatures can be thought of as a compromise between refinement types and dependent types: we can write down specifications with them that would be impossible to express in refinement types, but we can only check those specifications empirically. More recently, researchers have used machine learning to synthesize and understand programs. We have mentioned , but see all of: introduces the idea of features: a predecessor to the idea of properties. Features differ from properties in that they are hand-crafted rather than learned, and that they were applied only on a limited string processing domain. The relationship between this work and merits special discussion. Aside from the inclusion of property signatures, they differ in the following ways: • We use a more expressive DSL. Their DSL only allows linear control flow with a small set of functions, whereas our language is Turing complete (it has looping, recursion, etc). We also have a larger set of allowed component functions: 86 vs. 34. • Their machine learning method does not work straightforwardly for arbitrary programs. Their training and test programs only deal with integers and lists of integers, while we have 14 different function types. It would thus not be feasible to compare the techniques on anything but a tiny subset of our existing test set. • The test cases in are generated from their enumerative synthesizer. It is therefore guaranteed that the synthesizer will be able to emit them in a reasonable amount of time during testing, so their demonstrated improvements are'merely' speed-ups. Our test cases are human generated, and over half of the programs synthesized using property signatures were not synthesized at all 10 given over a day of time. In this work, we have introduced the idea of properties and property signatures. We have shown that property signatures allow us to synthesize programs that a baseline otherwise was not able to synthesize, and have sketched out other potential applications as well. Finally, we have open sourced all of our code, which we hope will accelerate future research into ML-guided program synthesis. The top-down synthesizer that we use as a baseline in this work. In a loop until a satisfying program is found or we run out of time, we pop the lowest-cost partial program from the queue of all partial programs, then we fill in the holes in all ways allowed by the type system, pushing each new partial program back onto the queue. If there are no holes to fill, the program is complete, and we check it against the spec. The cost of a partial program is the sum of the costs of its pool elements, plus a lower bound on the cost of filling each of its typed holes, plus the sum of the costs of a few special operations such as tuple construction and lambda abstraction. This section contains details on the baseline synthesizer that did not fit into the main text. Figure 2 gives a more formal description of the basic synthesis algorithm. Listing 7 shows an example trajectory of partial program expansions. Listing 7: The trajectory the synthesizer took to generate the swap function, which just swaps the two elements of a tuple. Since it knows it needs to take a tuple of ints as an argument and return a tuple of ints, it starts with a hole of type (Int, Int) in line 1. It then converts that hole into a tuple of holes, both of type Int in line 2, fills one of the holes with a reference to one of the arguments in line 3, and fills in the final hole with a reference to the other argument in line 4. 
Note that this listing doesn't show all programs attempted, it just shows the sequence of partial programs that led to the final solution. We have conducted an experiment to compare premise selection using Property Signatures to the premise selection algorithm from . This required considerable modifications to the experimental procedure. First, since the premise-selection part of DeepCoder can only handle Integers and lists of Integers, we restricted the types of our training and test functions. In particular, we read through and found four function types in use: The types of f and g in 8 are taken directly from . The types of h and k are inferred from examples given in the appendix of . Their DSL does not technically have tuples, but we have wrapped the inputs of their'two-input-functions' in tuples for convienence. Second, since DeepCoder can only handle integers betwen −255 and 255, we first re-generated all of our random inputs (used for 'hashing' of generated training data) to lie in that range. We then generated random training functions of the above four types. We then made a data set of training functions associated with 5 input-output pairs, throwing out pairs where any of the outputs were outside the aforementioned range, and throwing out functions where all outputs contained some number outside that range. Third, of the examples in our test set with the right types, we modified their input output pairs in a similar way. We filtered out functions that could not be so modified. After doing so, we were left with a remaining test suite of 32 functions. Finally, we trained a model to predict functions-to-use from learned embeddings of the input-output pairs, as in DeepCoder. We didn't see a description of how functions with multiple inputs had their inputs embedded, so we elected to separate them with a special character, distinct from the null characters that are used to pad lists. Compared with the Property Signatures method, this technique in far fewer synthesized test set programs. We did 3 random restarts for each of DeepCoder, Property Signatures, and the Random Baseline (recall that the random baseline itself is already a relatively sophisticated synthesis algorithm -it's just the configurations that are random). The 3 DeepCoder runs synthesized an average of 3.33 test programs, while the Property Signature runs (trained on the same modified training data and tested on the same modified test data) synthesized 16.33. The random baseline synthesized 3 programs on average. A priori, this seems like a surprisingly large gap, but it actually fits with what we know from existing literature. observe something similar: which is that DeepCoder-esque techniques tend to generalize poorly to a a test set where the input-output pairs come from a different distribution than they do in training. This is the case in our experiment, and it will be the case in any realistic setting, since the test set will be provided by users. Property Signatures are (according to our experiments) much less sensitive to such shift. This makes intuitive sense: whether an input list is half the length of an output list (for instance) is invariant to the particular distribution of members of the list. Note that even if Property Signatures did not outperform DeepCoder on this subset of our test set, they would still constitute an improvement due to their allowing us to operate on arbitrary programs and inputs types.
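To connect the Appendix A.1 description (Figure 2) and the Listing 7 trajectory, here is a deliberately tiny, runnable Python rendering of the same loop: pop the cheapest partial program, fill its leftmost hole with each allowed pool element, and test complete programs against the IO spec. The arithmetic grammar, costs, and spec below are invented stand-ins for Searcho's typed holes and pool elements, not the actual system.

```python
import heapq
import itertools

POOL = [("x", 1.0), ("1", 1.0), ("(? + ?)", 2.0)]     # (expansion, cost)
SPEC = [(0, 2), (3, 5), (10, 12)]                      # io pairs for f(x) = x + 2

def satisfies_spec(expr):
    return all(eval(expr, {}, {"x": x}) == y for x, y in SPEC)

def synthesize(max_expansions=100000):
    counter = itertools.count()                        # tie-breaker for the heap
    queue = [(0.0, next(counter), "?")]                # start from a single hole
    for _ in range(max_expansions):
        if not queue:
            return None
        cost, _, prog = heapq.heappop(queue)
        if "?" not in prog:
            if satisfies_spec(prog):
                return prog                            # cheapest spec-satisfying program
            continue
        for expansion, elem_cost in POOL:
            child = prog.replace("?", expansion, 1)    # fill the leftmost hole
            heapq.heappush(queue, (cost + elem_cost, next(counter), child))

print(synthesize())                                    # e.g. '(x + (1 + 1))'
```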
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylHspEKPr
We represent a computer program using a set of simpler programs and use this representation to improve program synthesis techniques.
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (eg. Atari) then we can construct agents that solve social dilemmas in this environment. Bilateral cooperative relationships, where individuals face a choice to pay personal costs to give larger benefits to others, are ubiquitous in our daily lives. In such situations mutual cooperation can lead to higher payoffs for all involved but there always exists an incentive to free ride. In a seminal work BID3 asks a practical question: since social dilemmas are so ubiquitous, how should a person behave when confronted with one? In this work we will take up a variant of that question: how can we construct artificial agents that can solve complex bilateral social dilemmas? First, we must define what it means to'solve' a social dilemma. The simplest social dilemma is the two player, repeated Prisoner's Dilemma (PD). Here each player chooses to either cooperate or defect each turn. Mutual cooperation earns high rewards for both players. Defection improves one's payoff but only at a larger cost to one's partner. For the PD, BID2 suggest the strategy of tit-for-tat (TFT): begin by cooperating and in later turns copy whatever your partner did in the last turn. TFT and its variants (eg. Win-Stay-Lose-Shift, BID37) have been studied extensively across many domains including the social and behavioral sciences, biology, and computer science. TFT is popular for several reasons. First, it is able to avoid exploitation by defectors while reaping the benefits of cooperation with cooperators. Second, when TFT is paired with other conditionally cooperative strategies (eg. itself) it achieves cooperative payoffs. Third, it is error correcting because after an accidental defection is provides a way to return to cooperation. Fourth, it is simple to explain to a partner and creates good incentives: if one person commits to using TFT, their partner's best choice is to cooperate rather than try to cheat. Our contribution is to expand the idea behind to TFT to a different environment: one shot Markov social dilemmas that require function approximation (eg. deep reinforcement learning). We will work with the standard deep RL setup: at training time, our agent is given access to the Markov social dilemma and can use RL to compute a strategy. At test time the agent is matched with an unknown partner and gets to play the game with that partner once. We will say that the agent can solve a social dilemma if it can satisfy the four TFT properties listed above. We call our strategy approximate (because we use RL function approximation) Markov (because the game is Markov) tit-for-tat (amTFT) which we show can solve more complex Markov social dilemmas. 
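For reference, the classic tit-for-tat rule for the repeated Prisoner's Dilemma mentioned above fits in a few lines; this toy snippet is purely illustrative and is not part of the amTFT construction that follows.

```python
def tit_for_tat(partner_history):
    """Classic TFT: cooperate on the first turn, then copy the partner's previous move.

    `partner_history` is the list of the partner's past actions, each "C" or "D".
    """
    return "C" if not partner_history else partner_history[-1]

print([tit_for_tat([]), tit_for_tat(["C", "D"])])   # ['C', 'D']
```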
The first issue amTFT needs to tackle is that unlike in the PD'cooperation' and'defection' are no longer simple labeled strategies, but rather sequences of choices. amTFT uses modified self-play 1 to learn two policies at training time: a fully cooperative policy and a'safe' policy (we refer to this as defection). The second issue is that we are considering a setup where our agent will only play the social dilemma once at test time. Thus the goal of amTFT is to intelligently switch between the learned policies within a single game.3 amTFT performs this as follows: at each time step during test time the amTFT agent computes the gain from the action their partner actually chose compared to the one prescribed by the cooperative policy. This can be done either using a learned Q function or via policy rollouts. We refer to this as a per period debit. If the total debit is below a threshold amTFT behaves according to the cooperative policy. If the debit is above the threshold, the agent switches to the defecting policy for k turns and then returns to cooperation. This k is computed such that the partner's gains (debit) are smaller than the losses they incur (k lost turns of cooperation).We show both analytically and experimentally that amTFT can solve Markov social dilemmas (in the Axelrod sense defined above). Our experiments using a grid-world, Coins, and a modification of an Atari game where players must learn from pixels, the Pong Player's Dilemma also demonstrate that an important component of amTFT is defining a partner's'defection' in terms of value and not actions. This choice makes amTFT robust to a partner using one of a class of outcome-equivalent cooperative policies as well function approximation, important properties for scaling agents beyond simple games. We note that for the purposes of this paper we define the'cooperative' the policies as the ones which maximize the sum of both players' payoff. This definition seems natural for the case of the symmetric games we study (and is the one that is typically used in eg. the literature on the evolution of cooperation). However, it is well known that human social preferences take into account distribution (eg. inequity BID15), various forms of altruism BID1 BID44, and context dependent concerns (eg. social norms, see BID46 ; BID23 ; BID43 for how social norms affect economic games and can be manipulated in the lab). Thus when applying amTFT in other circumstances the correct'focal point' needs to be chosen. The automatic determination of focal points is an important topic for future research but far beyond the scope of this paper. However, we note that once this focal point is determined the amTFT algorithm can be used exactly as in this paper simply by swapping out the cooperative objective function during training time. A large literature on the'folk theorem' asks whether in a repeated game there exists an equilibrium which maintains cooperative payoffs using strategies which take as input histories of observations BID19 BID13 and output stage-game actions. A computer science branch of this literature asks whether it is possible to compute such equilibria either in repeated matrix games BID32 or in repeated Markov games BID4. These works are related to our questions but have two key differences: first, they focus on switching strategies across iterations of a repeated game rather than within a single game. 
Second, perhaps more importantly, this literature focuses on finding equilibria unlike the Axelrod setup which focuses on finding a'good' strategy for a single agent. This difference in focus is starkly illustrated by TFT itself because both agents choosing TFT is not an equilibrium (since if one agent commits to TFT the partner's best response is not TFT, but rather always cooperate). 1 We note that one advantage of amTFT is that it requires no additional machinery beyond what is required by standard self-play, thus if we can construct competitive agents in some environment (eg. Atari, BID34) then we can also construct agents that solve social dilemmas in that environment.2 In the PD this action is'defect' but in most real social dilemmas this is not the case. For example social dilemmas occur naturally in economic situations where agents face a choice of trading and realizing gains from trade or simply producing everything they need on their own. In this case a safe policy is the outside option of'stop transacting with this agent.'3 The focus on a single test game is a way in which the problem we consider differs from what is normally studied in the literature on maintaining cooperation in repeated games BID19 BID32 BID4. In a standard'folk theorem' setup agents play a game repeatedly and maintain cooperation in one iteration of a game by threats of defection in the next iteration. A second related literature focuses on learning and evolution in games BID18 BID47 BID51 BID38 BID9 with recent examples applying deep learning to this question BID41. Though there is a large component of this literature focusing on social dilemmas, these works typically are interested how properties of the environment (eg. initial states, payoffs, information, learning rules used) affect the final state of a set of agents that are governed by learning or evolutionary dynamics. This literature gives us many useful insights, but is not usually focused on the question of design of a single agent as we are. A third literature focuses on situations where long term interactions with the same partner means that a good agent needs to either to discern a partner's type BID31 or be able shape the adaptation of a learning partner BID4 BID17. BID4 use reward shaping in the Prisoner's Dilemma to construct'leader' agents that convince'followers' to cooperate and BID17 uses a policy gradient learning rule which includes an explicit model of the partner's model. These works are related to ours but deal with situations where interactions are long enough for the partner to learn (rather than a single iteration) and require either explicit knowledge about the game structure BID4 or the partner's learning rule BID17.There is a recent surge of interest in using deep RL to construct agents that can get high payoffs in multi-agent environments. Much of this literature focuses either on zero-sum environments (; BID52 BID8 BID24 ;) or coordination games without an incentive to defect BID33 BID16 BID45; BID42 BID28 BID11 BID14 BID22 ) and uses self-play to construct agents that can achieve good outcomes. 4 We show that in the presence of social dilemmas applying this self-play approach naively often leads to bad outcomes. Finally, there is a large literature using the repeated PD to study human decision-making in social dilemmas BID20 BID7. In addition, recent work in cognitive science has begun to use more complex games and RL techniques quite related to ours . 
However, while this work provides useful insights into potentially useful strategies the main objective of this work is to understand human decision-making, not to actively improve the construction of agents. We now turn to formalizing our main idea. We will work with a generalization of Markov decision problems:Definition 1 BID49 ) A (finite, 2-player) Markov game consists of a set of states S = {s 1, . . ., s n}; a set of actions for each player DISPLAYFORM0 which tells us the probability distribution on the next state as a function of current state and actions; a reward function for each player R i: S × A 1 × A 2 → R which tells us the utility that player gains from a state, action tuple. We assume rewards are bounded. Players can choose between policies which are maps from states to probability distributions on actions π i: S → ∆(A i). We denote by Π i the set of all policies for a player. Through the course of the paper we will use the notation π to refer to some abstract policy andπ to learned approximations of it (eg. the output of a deep RL procedure).Definition 2 A value function for a player i inputs a state and a pair of policies V i (s, π 1, π 2) and gives the expected discounted reward to that player from starting in state s. We assume agents discount the future with rate δ which we subsume into the value function. A related object is the Q function for a player i inputs a state, action, and a pair of policies Q i (s, π 1, π 2) and gives the expected discounted reward to that player from starting in state s taking action a and then continuing according to π 1, π 2 afterwards. We will be talking about strategic agents so we often refer to the concept of a best response: Definition 3 A policy for agent j denoted π j is a best response starting at state s to a policy π i if for any π j and any s along the trajectory generated by these policies we have DISPLAYFORM1 We denote the set of such best responses as BR j (π i, s). If π j obeys the inequality above for any choice of state s we call it a perfect best response. The set of stable states in a game is the set of equilibria. We call a policy for player 1 and a policy for player 2 a Nash equilibrium if they are best responses to each other. We call them a Markov perfect equilibrium if they are perfect best responses. We are interested in a special set of policies:Definition 4 Cooperative Markov policies starting from state s (π DISPLAYFORM2 We let the set of cooperative policies be denoted by Π C i (c). Let the set of policies which are cooperative from any state be the set of perfectly cooperative policies. A social dilemma is a game where there are no cooperative policies which form equilibria. In other words, if one player commits to always cooperate, there is a way for their partner to exploit them and earn higher rewards at their expense. Note that in a social dilemma there may be policies which achieve the payoffs of cooperative policies because they cooperate on the trajectory of play and prevent exploitation by threatening non-cooperation on states which are never reached by the trajectory. The state representation used plays an important role in determining whether equilibria which achieve cooperative payoffs exist. Specifically, a policy which rewards cooperation today with cooperation tomorrow must be able to remember whether cooperation happened yesterday. In both of our example games, Coins and the PPD, if the game is played from the pixels without memory maintaining cooperation is impossible. 
This is because the current state does not contain information about past behavior of one's partner. Thus, some memory is required to create policies which maintain cooperation. This memory can be learned (eg. an RNN) or it can be an explicitly designed summary statistic (our approach). However, adding memory does not remove equilibria where both players always defect, so adding memory does not imply that self-play will find policies that maintain cooperation BID17 BID47. In the appendix we show that even in the simplest situation, the one memory repeated PD, always defecting equilibria can be more robust attractors than ones which maintain cooperation. amTFT is designed to get around this problem by using modified self-play to explicitly construct the cooperative and cooperation maintaining strategies as well as then switching rule. We begin with the theory behind amTFT. We begin with a social dilemma where pure cooperators can be exploited. We aim to construct a simple meta-policy which incentivizes cooperation along the path of play by switching intelligently between policies in response to its partner. We assume that cooperative polices are exchangeable. That is, for any pair (π DISPLAYFORM0 and that all pairs give a unique distribution of the total rewards between the two players. If policies are not exchangeable or can give different distributions of the total payoff then in addition to having a cooperation problem, we also have a coordination problem (ie. in which particular way should agents cooperate? how should gains from cooperation be split?). This is an important question, especially if we want our agents to interact with humans, and is related to the notion of choosing focal points in coordination/bargaining games. However, a complete solution is beyond the scope of this work and will often depend on contextual factors. See eg. BID48; BID46; BID26; BID42; BID6 for more detailed discussion. For the social dilemma to be solvable, there must be strategies with worse payoffs to both players. Consider an equilibrium (π DISPLAYFORM1 2) which has worse payoffs for player 2. We assume that (π DISPLAYFORM2 is an equilibrium even if played for a finite time, which we call π D -dominance. We use π D -dominance to bound the payoffs of a partner during the execution of a punishment phase, thus it is a sufficient but not necessary condition. We discuss in the Appendix how this assumption can be relaxed. To define this formally, we first introduce the notation of a compound policy π X k Z which is a policy that behaves according to X for k turns and then Z afterwards. Definition 5 We say a game is π D dominant (for player 2) if for any k, any state s, and any policy DISPLAYFORM3 ).In theory, with access to π C, π D, their Q functions, and no noise or function approximation, we can construct amTFT as follows. Suppose the amTFT agent plays as player 1 (the reverse is symmetric).At the start of the game the amTFT agent begins in phase C. If the phase is C then the agent plays according to π C. At each time step, if the agent is in a C phase, the agent looks at the action a 2 chosen by their partner. The agent computes DISPLAYFORM4 If d > 0 then starting at the next time step when state s is reached the agent enters into a D phase where they choose according to π D for k periods. k is computed such that DISPLAYFORM5 Here α > 1 controls how often an agent can be exploited by a pure defector. After this k is over the agent returns to the C phase. 
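A schematic sketch of the switching rule just described: play the cooperative policy while the partner's accumulated debit stays below a threshold, otherwise switch to the defect policy for k periods before returning to cooperation. The interfaces for the policies, the debit estimate, and the computation of k are assumptions for illustration; the concrete estimators used in practice are described with Algorithm 1 below.

```python
class AmTFTAgent:
    """Phase-switching meta-policy over a learned cooperative policy and a defect policy (sketch)."""

    def __init__(self, cooperate_policy, defect_policy, estimate_debit, compute_k,
                 threshold=1.0):
        self.pi_c, self.pi_d = cooperate_policy, defect_policy
        self.estimate_debit, self.compute_k = estimate_debit, compute_k
        self.threshold = threshold
        self.debit = 0.0
        self.defect_steps_left = 0

    def act(self, state):
        if self.defect_steps_left > 0:           # D phase: punish for k steps
            self.defect_steps_left -= 1
            return self.pi_d(state)
        return self.pi_c(state)                  # C phase: cooperate

    def observe_partner(self, state, partner_action):
        # Only judge the partner's deviations while in the cooperative phase.
        if self.defect_steps_left == 0:
            self.debit += self.estimate_debit(state, partner_action)
            if self.debit > self.threshold:
                self.defect_steps_left = self.compute_k(self.debit)
                self.debit = 0.0                 # return to cooperation after the D phase
```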
The amTFT strategy gives a nice guarantee: if the condition DISPLAYFORM6 on the discount rate δ holds, then if player 1 is an amTFT agent, a fully omniscient player 2 maximizes their payoffs by behaving according to π^C_2 when player 1 is in a C phase and π^D_2 when player 1 is in a D phase. Thus, if agents start in the C phase and there is no noise, they cooperate forever. If they start in a D phase, they eventually return to a C phase. The proof is quite simple and we relegate it to the Appendix. However, we now see that amTFT has the desiderata we have asked for: it is easy to explain, it cooperates with a pure cooperator, it does not get completely exploited by a pure defector,6 and it incentivizes cooperation along the trajectory of play. We now use RL methods to construct the components required for amTFT by approximating the cooperative and defect policies as well as the switching policy. To construct the required policies we use self-play and two reward schedules: selfish and cooperative. In the selfish reward schedule each agent i treats the other agent just as a part of their environment and tries to maximize their own reward. We assume that RL training converges, and we call the converged policies under the selfish reward schedule π̂^D_i. In the cooperative reward schedule each agent gets rewards both from their own payoff and the rewards the other agent receives. That is, we modify the reward function so that it is DISPLAYFORM0. [Footnote 6: In an infinite-length game amTFT will get exploited an infinite number of times as it tries to return to cooperation after each D phase. One potential way to avoid this is to increase α at each D phase. Footnote 7: This makes amTFT subtly different from TFT. TFT requires one's partner to cooperate even during the D phase for the system to return to cooperation. By contrast, amTFT allows any action during the D phase; this makes it similar to the rPD strategy of Win-Stay-Lose-Shift, or Pavlov BID37.] We call the converged policy and value function approximations π̂^C_i and Q̂^CC_i. In this paper we are agnostic to which learning algorithm is used to compute policies. In general there can be convergence issues with selfish self-play BID18 BID9 BID40, while in the cooperative reward schedule the standard RL convergence guarantees apply. The latter is because cooperative training is equivalent to one super-agent controlling both players and trying to optimize a single scalar reward. With the value functions and policies in hand from the procedure above, we can construct an amTFT meta-policy. For the purposes of this construction, we consider agent 1 as the amTFT agent (but everything is symmetric). The amTFT agent keeps a memory state (W_t, b_t), both of which start at 0. The amTFT agent sees the action a of their partner at time t and approximates the gain from this deviation as DISPLAYFORM1. To compute this debit we can either use learned Q functions or we can simply use rollouts. The amTFT agent accumulates the total payoff balance of their partner as W_t = W_{t−1} + D_t. If W_t is below a fixed threshold T, the amTFT agent chooses actions according to π^C. If W_t crosses the threshold T, the amTFT agent uses rollouts to compute a k such that the partner loses more from the compound policy π̂^{D_k C} relative to cooperation than some constant α times the current debit. The hyperparameters T and α trade off robustness to approximation error and noise. Raising T allows for more approximation error in the calculation of the debit but relaxes the incentive constraints on the agent's partner. Raising α makes the cost of defection higher but makes false positives more costly.
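As a small illustration of the two reward schedules described above, the sketch below reshapes the per-step rewards during self-play; the environment and agent interfaces (env.step, agent.update) are hypothetical placeholders for whatever RL algorithm is used.

```python
def reshape_rewards(r1, r2, schedule):
    """Selfish: each agent keeps its own reward. Cooperative: both agents are
    trained on r1 + r2, which is equivalent to one super-agent optimizing a
    single scalar reward."""
    if schedule == 'selfish':
        return r1, r2
    if schedule == 'cooperative':
        return r1 + r2, r1 + r2
    raise ValueError(schedule)

def selfplay_episode(env, agent1, agent2, schedule):
    s, done = env.reset(), False
    while not done:
        a1, a2 = agent1.act(s), agent2.act(s)
        s_next, (r1, r2), done = env.step(a1, a2)
        r1_train, r2_train = reshape_rewards(r1, r2, schedule)
        agent1.update(s, a1, r1_train, s_next, done)
        agent2.update(s, a2, r2_train, s_next, done)
        s = s_next
```

Training with schedule='selfish' yields the π̂^D policies and training with schedule='cooperative' yields the π̂^C policies that amTFT switches between.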
The algorithm is formalized below. Algorithm 1: Approximate Markov Tit-for-Tat (for agent 1). When its partner chooses action a in state s during a C phase, the amTFT agent (a) performs B rollouts of length M in which the partner first takes the observed action a and both players then follow π̂^C; (b) performs B rollouts of length M in which the partner instead first takes the recommended action π̂^C_2(s) and both players then follow π̂^C, which we call the 'counterfactual path'; and (c) takes the difference in the average total reward to the partner from the two paths and uses that as D_t. This is an estimate of the reward of the one-shot deviation to a from the recommended strategy π̂^C_2 (DISPLAYFORM3, DISPLAYFORM4). This procedure is an unbiased estimator of Q^CC in the limit of large B and M but is computationally intensive at test time.8 In games where an action today can only affect payoffs up to M periods from now, it suffices to use rollouts of length M and elide the continuation value. The value-based construction gives amTFT a particular robustness property: if the partner is not using π̂^C_2 exactly but is using a policy that is outcome-equivalent to it, the estimated D_t values will end up being 0 in expectation and so the amTFT agent will continue to cooperate. We will see in our experiments that this property is important to the success of amTFT in real Markov social dilemmas. We test amTFT in two environments: one grid-world and one where agents must learn from raw pixels. In the grid-world game Coins, two players move on a 5 × 5 board. The game has a small probability of ending at every time step; we set this so the average game length is 500 time steps. Coins of different colors appear on the board periodically, and a player receives a reward of 1 for collecting (moving over) any coin. However, if a player picks up a coin of the other player's color, the other player loses 2 points. The payoff for each agent at the end of each game is just their own point total. The strategy which maximizes total payoff is for each player to only pick up coins of their own color; however, each player is tempted to pick up the coins of the other player's color. We also look at an environment where strategies must be learned from raw pixels. Following prior work, we alter the reward structure of Atari Pong so that whenever an agent scores a point they receive a reward of 1 and the other player receives −2. We refer to this game as the Pong Player's Dilemma (PPD). In the PPD the only (jointly) winning move is not to play. However, a fully cooperative agent can be exploited by a defector. We are interested in constructing general strategies which scale beyond tabular games, so we use deep neural networks for state representation in both setups. We use standard setups, so we relegate the details of the networks as well as the training to the appendix. We perform both Selfish (self-play with reactive agents receiving their own rewards) and Cooperative (self-play with both agents receiving the sum of rewards) training for both games. We train 100 replicates for Coins and 18 replicates for the PPD. In both games Selfish training leads to suboptimal behavior while Cooperative training does find policies that implement socially optimal outcomes. In Coins, π̂^D agents converge to picking up coins of all colors while social π̂^C agents learn to only pick up matching coins. In the PPD, selfishly trained agents learn to compete and try to score while prosocially trained agents gently hit the ball back and forth. In two Markov social dilemmas we find that standard self-play converges to defecting strategies while modified self-play finds cooperative, but exploitable, strategies.
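The rollout estimate of D_t in Algorithm 1 can be sketched as below. The helper rollout(state, first_partner_action, length), which simulates one path in which the partner first takes the given action and both players then follow π̂^C, returning the partner's total reward along that path, is hypothetical, and the default values of B and M here are arbitrary.

```python
def estimate_debit(state, observed_a2, piC2, rollout, B=20, M=40):
    """Estimate D_t: the partner's average gain on the observed path over the
    counterfactual (recommended) path."""
    recommended_a2 = piC2(state)
    observed = [rollout(state, observed_a2, M) for _ in range(B)]
    counterfactual = [rollout(state, recommended_a2, M) for _ in range(B)]
    return sum(observed) / B - sum(counterfactual) / B
```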
We use the results of these two training schedules to construct π̂^C and π̂^D. [Footnote 8: A less computationally demanding way to execute amTFT is to use a model to approximate Q^CC directly. This is difficult in practice since any bias in the model is accumulated across periods and because the model needs to be accurate everywhere, not just on the trajectory of π^C. In the appendix we discuss some results on learning a model Q̂; improving the efficiency of such procedures is an important direction for future work.] Figure 2: In two Markov social dilemmas, amTFT satisfies the Axelrod desiderata: it mostly cooperates with itself, is robust against defectors, and incentivizes cooperation from its partner. The 'Grim' strategy based on BID4 behaves almost identically to pure defection in these social dilemmas. The result of standard self-play is π^D. The full tournament of all strategies against each other is shown in the Appendix. We evaluate the performance of various Markov social dilemma strategies in a tournament. To construct a matchup between two strategies we construct agents and have them play a fixed-length iteration of the game. Note that at training time we use a random-length game but at test time we use a fixed-length one so that we can compare payoffs more efficiently. We use 1000 replicates per strategy pair to compute the average expected payoff. We compare π̂^C, π̂^D, and amTFT. We also compare the direct adaptation of the construction in BID4. Recall that the folk theorem algorithm maintains equilibria by threat of deviation later: if either agent's behavior in game iteration t does not accord with the cooperative policy, both agents switch to a different policy in the next repetition of the game. We adapt this to the single test game setting as follows: the agent computes policies π̂^C, π̂^D. If their partner j takes an action a in a state s where a ≠ π̂^C_j(s), the agent switches to π̂^D forever. We call this the Grim Trigger strategy due to its resemblance to the rPD strategy of the same name. In both games we ask how well the strategies satisfy Axelrod's desiderata from the introduction. Specifically, we would like to measure whether a strategy avoids exploitation, cooperates with conditional cooperators, and incentivizes its partner to cooperate. Let S_i(X, Y) be the average reward to player i when a policy of type X is matched with type Y. The metric DISPLAYFORM0 measures how safe a strategy is from exploitation by a defector. The larger this value, the more badly π_X is exploited by a pure defector. We measure a strategy's ability to achieve cooperative outcomes with policies of their same type as DISPLAYFORM1. This measure can be thought of as quantifying two things. First, how much social welfare is achieved in a world where everyone behaves according to strategy X. Second, while we cannot enumerate all possible conditionally cooperative strategies, in the case of Grim and amTFT this serves as an indicator of how well they would behave against a particular conditional cooperator: themselves. Finally, we measure if X incentivizes cooperation from its partner. For this we use the measure DISPLAYFORM2. The higher this number, the better off a partner is from committing to pure cooperation rather than trying to cheat. Figure 2 shows our metrics evaluated for the strategies of always cooperate, always defect, amTFT and Grim. Pure cooperation is fully exploitable and pure defection gets poor payoffs when matched with itself. Neither pure strategy incentivizes cooperation.
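Since the formulas behind DISPLAYFORM0, DISPLAYFORM1 and DISPLAYFORM2 are elided, the sketch below gives one plausible instantiation of the three metrics that is consistent with the surrounding description; the exact definitions and sign conventions in the original may differ. S(X, Y) is assumed to return the pair of average rewards (to the X player, to the Y player) over many fixed-length test games, with 'C' and 'D' denoting the pure cooperator and pure defector.

```python
def exploitability(S, X):
    """How much a pure defector earns against X beyond what it earns against
    another defector; larger means X is exploited more badly."""
    return S(X, 'D')[1] - S('D', 'D')[1]

def self_match_welfare(S, X):
    """Total payoff when X plays itself: social welfare in an all-X world, and
    a proxy for how X fares against a conditional cooperator (itself)."""
    r1, r2 = S(X, X)
    return r1 + r2

def incentive_to_cooperate(S, X):
    """How much better off a partner is committing to pure cooperation rather
    than pure defection when facing X."""
    return S(X, 'C')[1] - S(X, 'D')[1]
```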
amTFT avoids being exploited by defectors, does well when paired with itself and incentivizes cooperative strategies from its partner. We also see that inferring a partner's cooperation using the value function (amTFT) is much more stable than inferring it via actions (Grim). The above show that amTFT is a good strategy to employ in a mixed environment which includes some cooperators, some tit-for-tat agents and some defectors. We consider what happens if we fix the one player (the Teacher) to use a fixed policy but let the other player be a selfish deep RL agent (the Learner). We perform the retraining in the domain of Coins. 9 This retraining procedure can also be used as an additional metric of the exploitability of a given strategy, rather than asking whetherπ D can exploit it, we ask whether a learner trying to maximize its own payoff can find some way to cheat. Recall that when selfish RL agents played with each other, they converged to the Selfish'grab all coins' strategy. We see that Learners paired with purely cooperative teachers learn to exploit the teachers, learners paired withπ D also learn to exploit (this learning happens much slower because a fully trainedπ D policy is able to grab coins very quickly and thus it is hard for a blank slate agent to learn at all), however learners paired with amTFT learn to cooperate. Note that choosing amTFT as a strategy leads to higher payoffs for both the Learner and the Teacher, thus even if we only care about the payoffs accrued to our own agent we can do better with amTFT than a purely greedy strategy. Humans are remarkably adapted to solving bilateral social dilemmas. We have focused on how to give artificial agents this capability. We have shown that amTFT can maintain cooperation and avoid exploitation in Markov games. In addition we have provided a simple construction for this strategy that requires no more than modified self-play. Thus, amTFT can be applied to social dilemmas in many environments. Our emphasize the importance of treating agents as fundamentally different than other parts of the environment. In particular, agents have beliefs, desires, learn, and use some form of optimization while objects follow simple fixed rules. An important future direction for constructing cooperative agents is to continue to incorporate ideas from inverse reinforcement learning BID0 BID36 and cognitive science BID5 BID26 to construct agents that exhibit some theory of mind. There is a growing literature on hybrid systems which include both human and artificial agents BID10 BID50. In this work we have focused on defining'cooperation' as maximizing the joint payoff. This assumption seems reasonable in symmetric situations such as those we have considered, however, as we discuss in the introduction it may not always be appropriate. The amTFT construction can be easily modified to allow other types of focal points simply by changing the modified reward function used in the training of the cooperative strategies (for example by using the inequity averse utility functions of BID15). However moving forward in constructing agents that can interact in social dilemmas with humans will require AI designers (and their agents) to understand and adapt to human cooperative and moral intutions BID27; BID21 BID39 In a social dilemma there exists an equilibrium of mutual defection, and there may exist additional equilibria of conditional cooperation. Standard self-play may converge to any of these equilibria. 
When policy spaces are large, it is often the case that simple equilibria of constant mutual defection have larger basins of attraction than policies which maintain cooperation. We can illustrate this with the simple example of the repeated Prisoner's Dilemma. Consider a PD with payoffs of 0 to mutual defection, 1 for mutual cooperation, w > 1 for defecting on a cooperative partner and −s for being defected on while cooperating. Consider the simplest possible state representation where the set of states is the pair of actions played last period and let the initial state be (C, C) (this is the most optimistic possible setup). We consider RL agents that use policy gradient ( displayed here come from using Adam BID25, similar were obtained with SGD though convergence speed was much more sensitive to the setting of the learning rate) to learn policies from states (last period actions) to behavior. Note that this policy space contains TFT (cooperate after (C, C), (D, C), defect otherwise), Grim Trigger (cooperate after (C, C), defect otherwise) and Pavlov or Win-Stay-Lose-Shift (cooperate after (C, C), (D, D), defect otherwise BID37 ) which are all cooperation maintaining strategies (though only Grim and WSLS are themselves full equilibria).Each episode is defined as one repeated PD game which lasts a random number of periods with stopping probability of stopping.05 after each period. Policies in the game are maps from the onememory state space {(C, C), (D, C), (C, D), (D, D)} to either cooperation or not. These policies are trained using policy gradient and the REINFORCE algorithm . We vary w and set s = 1.5w such that (C, C) is the most efficient strategy always. Note that all of these parameters are well within the range where humans discover cooperative strategies in experimental applications of the repeated PD BID7. Figure 4 shows that cooperation only robustly occurs when it is a dominant strategy for both players (w < 0) and thus the game is no longer a social dilemma. 10.10 Note that these use pairwise learning and therefore are different from evolutionary game theoretic on the emergence of cooperation BID38. Those show that indeed cooperation can robustly emerge in these kinds of strategy spaces under evolutionary processes. Those differ because they rely on the following argument: suppose we have a population of defectors. This can be invaded by mutants of TFT because TFT can try cooperation in the first round. If it is matched with a defector, it loses once but it then defects for the rest of the time, if it is matched with another TFT then they cooperate for a long time. Thus, for Figure 4: Results from training one-memory strategies using policy gradient in the repeated Prisoner's Dilemma. Even in extremely favorable conditions self-play fails to discover cooperation maintaining strategies. Note that temptation payoff.5 is not a PD and here C is a dominant strategy in the stage game. To prove the theorem we will apply the one deviation principle. To show this, we fix player 1 to be an amTFT agent and look at player 2. Note that from the point of view of player 2 this is now a Markov game with a state representation of (s, k) where if k = 0 player 1 behaves according to π C and if k > 0 player 1 is in the D phase and thus behaves according to π D k C.We consider the policy for player 2 of'play π C 2 when player 1 is in the C phase and play π D 2 when player 1 is in the D phase.' 
Recall by the Principle of Optimality if there does not exist a one shot deviation a at any state under which player 2 earns a higher payoff, then there does not exist a better policy than the one prescribed. Consider starting at k > 0. The suggested policy has player 2 play π DISPLAYFORM0. By π D -dominance this is the best response to π DISPLAYFORM1 so there are no one-shot deviations in the D phase. Let us consider what happens in the C phase (k = 0). By the assumption of the theorem at any state s we know that DISPLAYFORM2 Let {r t 2 (s, π 1, π 2)} be the per-period reward stream (note here each r is a random variable) for player 2 induced by the policies π 1, π 2. Since DISPLAYFORM3 where δ is the discount rate. Because rewards are bounded then for any > 0 there exists k such that DISPLAYFORM4 sufficiently long games the risk of one round of loss is far smaller than the potential fitness gain of meeting another mutant. Thus TFT can eventually gain a foothold. It is clear why in learning scenarios such arguments cannot apply. That is, the first k terms time steps approximate the full discounted expectation arbitrarily well. This also means that for some k DISPLAYFORM5 From any state, the highest profit an agent can make from deviating from π C 2 with a single action with an amTFT partner is d *. However we have shown that there exists a length k such that moving to D for k turns costs the agent more than d * δ. Therefore there is no time during the C phase they wish to deviate. This completes the proof. DISPLAYFORM6 The reason we made the π D -dominance assumption is to bound the expected payoff of an agent playing against π D k C and therefore bound the necessary length of a D phase after a particular deviation. However, in order to compute what the length of the D phase the amTFT agent needs access to the best response policy to π D k C, or its associated value function. With π D -dominance we assume that π D is that best response. Even if π D -dominance does not strictly hold, it is likely a sufficient approximation. If necessary however, one can train an RL agent on episodes where their partner plays π D k C, where k is observed. This allows one to approximate the best response policy to π D k C which will then give us what we need to compute the responses to deviations from π C in the D phase that incentivize full cooperation. We used rollouts to calculate the debit to the amTFT's partner at each time period. This estimator has good performance for both PPD and Coins given their reward structure. It is also possible to use a learned model of Q. Learning a sufficiently accurate modelQ is challenging for several reasons. First, it has to have very low bias, since any bias inQ will be accumulated over periods. Second, the one-shot deviation principle demands thatQ be accurate for all state-action pairs, not just those sampled by the policies (π C, π C). Standard on-policy value function estimation will only produce accurate estimates of Q at states sampled by the cooperative policies. As an example, in Coins, since the cooperative policies never collect their partner's coinsQ for these state-action pairs may be inaccurate. We found that it was possible in Coins to learn a modelQ to calculate debit without policy rollouts using the same neural network architecture that was used to train the policies. However, we found that in order to train aQ model accurate enough to work well we had to use a modified training procedure. 
After finishing Selfish and Cooperative training, we perform a second step of training using a fixed (converged)π C. In order to sample states off the path ofπ C during this step, the learner behaves according to a mixture of π C, π D, and random policies while the partner continues according tô π C.Q is updated via off-policy Bellman iteration. We found this modified procedure produced â Q function that was good enough to maintain cooperation (though still not as efficient as rollouts). For more complex games, an important area for future work is to develop methodologies to compute more accurate approximations of Q or combine aQ model with rollouts effectively. For Coins there are four actions (up, down, left, right), and S is represented as a 4 × 5 × 5 binary tensor where the first two channels encode the location of the each agent and the other two channels encode the location of the coin (if any exist). At each time step if there is no coin on the board a coin is generated at a random location with a random color, with probability 0.1.A policy π(s; θ): s → ∆(a) is learned via the advantage actor critic algorithm. We use a multi-layer convolutional neural network to jointly approximate the policy π and state-value functionV. For this small game, a simpler model could be used, but this model generalizes directly to games with higher-dimensional 2D state spaces (e.g. environments with obstacles). For a given board size k, the model has log 2 (k) + 1 repeated layers, each consisting of a 2D convolution with kernel size 3, followed by batch normalization and ReLU. The first layer has stride 1, while the successive layers each have stride 2, which decreases the width and height from k to k/2 while doubling the number of channels. For the 5 × 5 board, channel sizes are 13, 26, 52, 104. From these 104 features, π is computed via a linear layer with 4 outputs with softmax, to compute a distribution over actions, while the value function is computed via a single-output linear layer. The actor and critic are updated episodically with a common learning rate -at the end of each game we update the model on a batch of episodes via ∆θ i = λ A t ∂V (s t) ∂θ i +à t log π(s t, a t) ∂π(s t, a t) ∂θ i where A is the advantage A t = r t + δV (s t+1) − V (s t)andà is the advantaged normalized over all episodes and periods in the batch DISPLAYFORM0 We train with a learning rate of 0.001, continuation probability.998 (i.e. games last on average 500 steps), discount rate 0.98, and a batch size of 32. We train for a total of 40, 000 games. We use the arcade learning environment modified for 2-player play as proposed in , with modified rewards of +1 for scoring a point and -2 for being scored on. We train policies directly from pixels, using the pytorch-a3c package https://github.com/ikostrikov/ pytorch-a3c. Policies are trained directly from pixels via A3C BID35. Inputs are rescaled to 42x42 and normalized, and we augment the state with the difference between successive frames with a frame skip of 8. We use 38 threads for A3C, over a total of 38,000 games (1,000 per thread). We use the default settings from pytorch-a3c: a discount rate of 0.99, learning rate of 0.0001, 20-step returns, and entropy regularization weight of 0.01.The policy is implemented as a convolutional neural network with four layers, following pytorch-a3c. Each layer uses a 3x3 kernel with stride 2, followed by ELU. The network has two heads for the actor and critic. 
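A sketch of the four-layer convolutional actor-critic used for the PPD, in PyTorch, is given below. The 3x3 kernels with stride 2, ELU activations, 42x42 inputs and the two heads follow the text; the channel width (32), the padding of 1, the 2-channel input (current frame plus frame difference) and the 6-action Pong action space are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PongActorCritic(nn.Module):
    """Four conv layers (3x3 kernels, stride 2) with ELU, then actor/critic heads."""
    def __init__(self, in_channels=2, n_actions=6, width=32):
        super().__init__()
        self.convs = nn.ModuleList()
        c = in_channels
        for _ in range(4):
            self.convs.append(nn.Conv2d(c, width, kernel_size=3, stride=2, padding=1))
            c = width
        self.flat_dim = width * 3 * 3      # 42 -> 21 -> 11 -> 6 -> 3 spatially
        self.actor = nn.Linear(self.flat_dim, n_actions)   # policy head
        self.critic = nn.Linear(self.flat_dim, 1)          # value head

    def forward(self, x):                  # x: (batch, in_channels, 42, 42)
        for conv in self.convs:
            x = F.elu(conv(x))
        x = x.flatten(start_dim=1)
        return self.actor(x), self.critic(x)

# Example forward pass on a dummy normalized observation.
net = PongActorCritic()
logits, value = net(torch.zeros(1, 2, 42, 42))
probs = torch.softmax(logits, dim=-1)
```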
We elide the LSTM layer used in the pytorch-a3c library, as we found it to be unnecessary. Figure 5 (panels: (a) Coins results, (b) PPD results): Results of the tournament in two Markov social dilemmas. Each cell contains the average total reward of the row strategy against the column strategy. amTFT achieves close to cooperative payoffs with itself and achieves close to the defect payoff against defectors. Its partner also receives a higher payoff for cooperation than defection.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJIN_4lA-
How can we build artificial agents that solve social dilemmas (situations where individuals face a temptation to increase their payoffs at a cost to total welfare)?
The reparameterization trick has become one of the most useful tools in the field of variational inference. However, the reparameterization trick is based on the standardization transformation which restricts the scope of application of this method to distributions that have tractable inverse cumulative distribution functions or are expressible as deterministic transformations of such distributions. In this paper, we generalized the reparameterization trick by allowing a general transformation. Unlike other similar works, we develop the generalized transformation-based gradient model formally and rigorously. We discover that the proposed model is a special case of control variate indicating that the proposed model can combine the advantages of CV and generalized reparameterization. Based on the proposed gradient model, we propose a new polynomial-based gradient estimator which has better theoretical performance than the reparameterization trick under certain condition and can be applied to a larger class of variational distributions. In studies of synthetic and real data, we show that our proposed gradient estimator has a significantly lower gradient variance than other state-of-the-art methods thus enabling a faster inference procedure. Most machine learning objective function can be rewritten in the form of an expectation: where θ is a parameter vector. However, due to the intractability of the expectation, it's often impossible or too expensive to calculate the exact gradient w.r.t θ, therefore it's inevitable to estimate the gradient ∇ θ L in practical applications. Stochastic optmization methods such as reparameterization trick and score function methods have been widely applied to address the stochastic gradient estimation problem. Many recent advances in large-scale machine learning tasks have been brought by these stochastic optimization tricks. Like in other stochastic optimzation related works, our paper mainly focus on variational inference tasks. The primary goal of variational inference (VI) task is to approximate the posterior distribution in probabilistic models . To approximate the intractable posterior p(z|x) with the joint probability distribution p(x, z) over observed data x and latent random variables z given, VI introduces a parameteric family of distribution q θ (z) and find the best parameter θ by optimizing the Kullback-Leibler (KL) divergence D KL (q(z; θ) p(z|x)). The performance of VI methods depends on the capacity of the parameteric family of distributions (often measured by Rademacher complexity) and the ability of the optimizer. In this paper, our method tries to introduce a better optimizer for a larger class of parameteric family of distributions. The main idea of our work is to replace the parameter-independent transformation in reparameterization trick with generalized transformation and construct the generalized transformation-based (G-TRANS) gradient with the velocity field which is related to the characteristic curve of the sublinear partial differential equation associated with the generalized transformation. Our gradient model further generalizes the G-REP and provides a more elegant and flexible way to construct gradient estimators. We mainly make the following contributions: 1. We develop a generalized transformation-based gradient model based on the velocity field related to the generalized transformation and explicitly propose the unbiasedness constraint on the G-TRANS gradient. 
The proposed gradient model provides a more powerful and flexible way to construct gradient estimators. 2. We show that our model is a generalization of the score function method and the reparameterization trick. Our gradient model can reduce to the reparameterization trick by enforcing a transport equation constraint on the velocity field. We also show our model's connection to the control variate method. 3. We propose a polynomial-based gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework, and show its superiority over similar works on several experiments. The rest of this paper is organized as follows. In Sec. 2 we review stochastic gradient variational inference (SGVI) and stochastic gradient estimators. In Sec. 3 we propose the generalized transformation-based gradient. In Sec. 4 we propose the polynomial-based G-TRANS gradient estimator. In Sec. 5 we study the performance of our gradient estimator on synthetic and real data. In Sec. 6 we review the related works. In Sec. 7 we conclude this paper and discuss future work. To obtain the best variational parameter θ, rather than minimize the KL divergence D_KL(q(z; θ) || p(z|x)), we usually choose to maximize the evidence lower bound (ELBO). The entropy term H[q(z; θ)] is often assumed to be available analytically and is usually omitted in the procedure of stochastic optimization. This stochastic optimization problem is the basic setting for our method and experiments. Without extra description, we only consider the simplified version of the ELBO, L(θ) = E_{q(z;θ)}[log p(x, z)] = E_{q(z;θ)}[f(z)]. Generally, this expectation is intractable to compute, let alone its gradient. Therefore, a common stochastic optimization method for the VI task is to construct a Monte Carlo estimator for the exact gradient of the ELBO w.r.t. θ. Among those gradient estimators, the score function method and the reparameterization trick are the most popular and widely applied. Score function method. The score function estimator, also called the log-derivative trick or REINFORCE, is a general way to obtain unbiased stochastic gradients of the ELBO. The simplest variant of the score function gradient estimator is defined as ∇_θ E_{q(z;θ)}[f(z)] = E_{q(z;θ)}[f(z) ∇_θ log q(z; θ)], and then we can build the Monte Carlo estimator by drawing samples from the variational distribution q_θ(z) independently. Although the score function method is very general, the resulting gradient estimator suffers from high variance. Therefore, it's necessary to apply variance reduction (VR) methods such as Rao-Blackwellization and control variates in practice. Reparameterization trick. In the reparameterization trick, we assume that there is an invertible and continuously differentiable standardization function φ(z, θ) that can transform the variational distribution q(z; θ) into a distribution s(ρ) that doesn't depend on the variational parameter θ, as follows: ρ = φ(z, θ), z = φ^{-1}_θ(ρ). Then the reparameterization trick can turn the computation of the gradient of the expectation into the expectation of the gradient, ∇_θ E_{q(z;θ)}[f(z)] = E_{s(ρ)}[∇_θ f(φ^{-1}_θ(ρ))]. Although this reparameterization can be done for many commonly used distributions, such as the Gaussian distribution, it's hard to find appropriate standardization functions for a number of standard distributions, such as Gamma, Beta or Dirichlet, because the standardization functions will inevitably involve special functions. On the other hand, though the reparameterization trick is not as generally applicable as the score function method, it does result in a gradient estimator with lower variance.
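As a generic numerical illustration of the two estimators reviewed above (not code from this paper), consider a Gaussian variational distribution q(z; μ, σ) and the test function f(z) = z²; both estimators target d/dμ E_q[f(z)] = 2μ, but the reparameterization estimator typically has far lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    return z ** 2                      # simple differentiable test function

def score_function_grad_mu(mu, sigma, n=10000):
    z = rng.normal(mu, sigma, size=n)
    # d/dmu log N(z; mu, sigma^2) = (z - mu) / sigma^2
    return np.mean(f(z) * (z - mu) / sigma ** 2)

def reparameterization_grad_mu(mu, sigma, n=10000):
    eps = rng.normal(0.0, 1.0, size=n)
    z = mu + sigma * eps               # z = mu + sigma * eps, with eps ~ N(0, 1)
    df_dz = 2.0 * z                    # analytic derivative of f
    return np.mean(df_dz * 1.0)        # dz/dmu = 1

print(score_function_grad_mu(1.0, 1.0), reparameterization_grad_mu(1.0, 1.0))
# Both are close to 2.0; the second estimate fluctuates much less across runs.
```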
Define a random variable ρ by an invertible differentiable transformation ρ = φ(z, θ), where φ is commonly called generalized standardization transformation since it's dependent on the variational parameter θ. Theorem 3.1. Let θ be any component of the variational parameter θ, the probability density function of ρ be w(ρ, θ) and where The proof details of the Theorem.3.1 are included in the Appendix. A.1. We refer to the gradient ∂L ∂θ with v θ satisfying the unbiasedness constraint as generalized transformation-based (G-TRANS) gradient. We can construct the G-TRANS gradient estimator by choosing v θ of specific form. In the following, we demonstrate that the score function gradient and reparameterization gradient are special cases of our G-TRANS gradient model associating with special velocity fields. Remark. The score function method is a special case of the G-TRANS model when The standardization function φ doesn't depend on the parameter θ when v θ = 0 according to the velocity field equation (Equ.5). Conversely, for any φ that doesn't depend on θ, we have = 0, thus the ing gradient estimator has a same variance as the score function estimator. Remark. The reparameterization trick is a special case when The detailed computation to obtain the transport equation (Equ.7) is included in the Appendix. A.1. The transport equation is firstly introduced by , however, their work derive this equation by an analog to the optimal transport theory. In 1-dimensional case, for any standardization distributions w(ρ) that doesn't depend on the parameter θ, the variance of the ing gradient estimator is some constant (for fixed θ) determined by the unique 1-dimensional solution of the transport equation. For the existence of the velocity field v θ and the generalized standardization transformation φ(z, θ), g(z, θ) must satisfy some strong differential constraints . We can see that the G-TRANS model is a special case of the control variate method with a complex differential structure. This connection to CV means our gradient model can combine the advantages of CV and generalized reparameterization. Theorem.3.1 transforms the generalized unbiased reparameterization procedure into finding the appropriate velocity field that satisfy the unbiasedness constraint. It's possible to apply variational optimization theory to find the velocity field with the least estimate variance, however, the solution to the Euler-Lagrange equation contains f (z) in the integrand which makes it impractical to use in real-world model (See Appendix. A.2 for details). By introducing the notion of velocity field, we provide a more elegant and flexible way to construct gradient estimator without the need to compute the Jacobian matrix for a specific transformation. In the next section, we introduce a polynomial-based G-TRANS gradient estimator that cannot be incorporated into any other existing generalized reparameterized gradient framework and is better than the reparameterization gradient estimator theoretically. In this section, we always assume that the base distribution q(z, θ) can be factorized as where N is the dimension of the random variable z, θ i is a slice of θ and θ i share no component with θ j if i = j. We consider an ad-hoc velocity field family: We always assume v θ ah to be continuous which guarantees the existence of the solution to the velocity field equation. We verify in the Appendix. A.3 that v θ ah (z, θ) satisfy the unbiasedness constraint if h(z, θ) is bounded. 
It's easy to see that the gradient estimator that from v θ ah is more general than the score function method or reparameterization trick since they are two special cases when h(z, θ) = 0 or h(z, θ) = f (z) respectively. In this paper, we mainly consider a more special family of the v θ ah (z, θ): where zi ∂q(z,θ) ∂θ dz i ), but their properties are similar (we present some theoretical of v θ dp in the Appendix. A.4). Therefore we only consider v θ poly (z, θ) here. We refer to v θ poly as polynomial velocity field. Proposition 4.1. For distributions with analytical high order moments such as Gamma, Beta or Dirichlet distribution, the expectation are polynomials of random variable z. Therefore, for distribution with analytical high order moments, With Proposition.4.1, we can write the G-TRANS gradient for the polynomial velocity field as: Thus we can construct a G-TRANS gradient estimator based upon the polynomial velocity field with a samplez drawn from q(z, θ): The polynomial-based G-TRANS gradient estimator has a form close to control variate, thus cannot be induced by any other existing generalized reparameterized gradient framework. In the following, we show that the polynomial-based G-TRANS gradient estimator performs better than the reparameterization gradient estimator under some condition. dz N ), then the gradient estimator ed from polynomial velocity field has a smaller variance than the reparameterization gradient estimator. Proof. Since E q [P k (z, θ) ∂ ∂θ log q] can be resolved analytically, we have then by reorganizing the expression Var(− ∂f ∂zi), we can prove this proposition. As an example about how to choose a good polynomial, for ), we can obtain a polynomial-based G-TRANS gradient estimator that is better than the reparameterization gradient estimator according to the Proposition.4.2. And we can adjust the value of C i (θ) to obtain better performance. According to the approximation theory, we can always find a polynomial P k (z, θ) that is close enough to f (z), and in this case, we can dramatically reduce the variance of the ing gradient estimator. For example, within the convergence radius, we can choose P k (z, θ) to be the k-th degree Taylor polynomial of f (z) with the remainder |f (z) − P k (z, θ)| being small. In the practical situation, however, it's often difficult to estimate the coefficients of the polynomial P k (z, θ). And when k is large, we need to estimate O(N k) coefficients which is almost impossible in real-world applications. Therefore in the following experiments, we only consider k < 2. In this section, we use a Dirichlet distribution to approximate the posterior distribution for a probilistic model which has a multinomial likelihood with a Dirichlet prior. We use Gamma distributions to simulate Dirichlet distributions. If Then the problem we study here can be written as: with f (z) being the multinomial log-likelihood. We use shape parameter α = (α 1, . . ., α K) to parameterize the variational Dirichlet distribution. To construct polynomial-based G-TRANS gradient estimator for the factorized distribution K k=1 Gamma(z k ; α k, 1), we need an accurate and fast way to approximate the derivative of the lower incomplete gamma function (part of the gamma CDF) w.r.t the shape parameter. The lower incomplete gamma function γ(α, z) is a special function and does not admit analytical expression for derivative w.r.t. the shape parameter. However, for small α and z, we have In practice, we take the first 200 terms from this power series. 
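The power series itself is elided above; a standard expansion that fits the description is γ(α, z) = Σ_{k≥0} (−1)^k z^(α+k) / (k! (α+k)), whose term-by-term derivative with respect to α can be truncated at 200 terms as stated. The sketch below uses that classical series; whether it is exactly the series intended in the paper is an assumption, and, as noted, the truncation is only reliable for small α and z.

```python
import math

def dgamma_dalpha(alpha, z, terms=200):
    """d/d(alpha) of the lower incomplete gamma function gamma(alpha, z),
    using the series sum_k (-1)^k z^(alpha+k) / (k! (alpha+k))."""
    log_z = math.log(z)
    base = z ** alpha                  # holds (-1)^k z^(alpha+k) / k!
    total = 0.0
    for k in range(terms):
        ak = alpha + k
        # derivative of z^(alpha+k) / (alpha+k) with respect to alpha
        total += base * (log_z / ak - 1.0 / ak ** 2)
        base *= -z / (k + 1)           # advance to the next term of the series
    return total
```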
And the approximation error is smaller than 10 −9 when α < 5 and z < 20 with double precision floating point number. For large α, we use central finite difference to approximate the derivative. This approximation scheme for lower incomplete gamma function can also be used to construct polynomial-based G-TRANS gradient estimator for distributions that can be simulated by the Gamma distribution such as Beta distribution and Dirichlet distribution. We follow the experiment setting in. Fig.1 shows the ing variance of the first component of the gradient based on samples simulated from a Dirichlet distribution with K = 100 components, and gradients are computed with N = 100 trials. We use P 1 (z) = c · z to construct the G-TRANS gradient estimator, and we assign 0.2,0 and −0.1 to c successively as α 1 increases. Results. From Fig.1, we can see that the IRG method and our G-TRANS gradient estimator has obviously lower gradient variance than the RSVI (even with the shape augmentation trick ) or G-REP method. Further, our G-TRANS gradient estimator outperforms the IRG method when α 1 is large though there is no obvious difference between these two methods when α 1 is small. In this section, we study the performance of our G-TRANS gradient estimator on the Sparse Gamma deep exponential family (DEF) model with the Olivetti faces dataset that consists of 64 × 64 gray-scale images of human faces in 8 bits. We follow the Sparse Gamma DEF setting in where the DEF model is specified by: . C is the polynomial coefficient, B denotes shape augmentation and optimal concentration is α = 2. Here n is the number of observations, is the layer number, k denotes the k-th component in a specific layer and d is the dimension of the output layer (layer 0). z n,k is local random variable, w k,k is global weight that connects different layers like deep neural networks, and x n,d denotes the set of observations. We use the experiment setting in. α z is set to 0.1, all priors on the weights are set to Gamma(0.1, 0.3), and the top-layer local variables priors are set to Gamma(0.1, 0.1). The model consists of 3 layers, with 100, 40, and 15 components in each. All variational Gamma distributions are parameterized by the shape and mean. For non-negative variational parameters θ, the transfomration θ = log(1 + exp(ϑ)) is applied to avoid constrained optimization. In this experiment, we use the step-size sequence ρ n proposed by: δ = 10 −16, t = 0.1, η = 0.75 is used in this experiment. The best of RSVI is reproduced with B = 4 . We still use P 1 (z) = c · z to construct the G-TRANS gradient estimator and we use c = −10.0 for all time. Results. From Fig.2, We can see that G-TRANS achieves significant improvements in the first 1000 runs and exceeds RSVI though with a slower initial improvement. G-TRANS achieves obviously better accuracy than ADVI, BBVI, G-REP and RSVI, and keeps improving the ELBO even after 75000 runs. G-TRANS is faster than the IRG in early training stage which means G-TRANS has a lower gradient variance. However, this speed advantage of G-TRANS gradually decreases as the step size goes down in the later training stage. There are already some lines of research focusing on extending the reparameterization trick to a larger class of distributions. The G-REP generalizes the reparameterization gradient by using a standardization transformation that allows the standardization distribution to depend weakly on variational parameters. 
Our gradient model gives a more elegant expression of the generalized reparameterized gradient than that of G-REP which decomposes the gradient as g rep + g cor. Different from G-REP, our model hides the transformation behind the velocity field thus the expensive computation of the Jacobian matrix of the transformation is evaded. And it's more flexible to construct gradient estimator with the velocity field than the very detailed transformation. The RSVI develops a similar generalized reparameterized gradient model with the tools from rejection sampling literatures. RSVI introduces a score function gradient term to compensate the gap that is caused by employing the proposal distribution of a rejection sampler as a surrogate distribution for reparameterization gradient, although the score function gradient term can often be ignored in practice to reduce the gradient variance at the cost of small bias. Unlike RSVI, our gradient estimator can be constructed with deterministic procedure which avoids the additional stochasticity introduced by the accept-reject steps thus lower gradient variance. The path-wise derivative is closely related to our model. They obtain the transport equation by an analog to the displacement of particles, while we derive the transport euqation for reparameterization gradient by rigorous mathematical deduction. The path-wise gradient model can be seen as a special case of our G-TRANS gradient model. Their work only focus on standard reparameterization gradient while our model can admit generalized transformation-based gradient. The velocity field used in their work must conform to the transport equation while we only require the velocity field to satisfy the unbiasedness constraint. The implicit reparameterization gradient (IRG) differentiates from the path-wise derivative only by adopting a different method for multivariate distributions. There are also some other works trying to address the limitations of standard reparameterization. applies implicit reparameterization for mixture distributions and uses approximations to the inverse CDF to derive gradient estimators. Both work involve expensive computation that cannot be extended to large-scale variational inference. expressed the gradient in a similar way to G-REP and automatically estimate the gradient in the context of stochastic computation graphs, but their work is short of necessary details therefore cannot be applied to general variational inference task directly. ADVI transforms the random variables such that their support are on the reals and then approximates transformed random variables with Gaussian variational posteriors. However, ADVI struggles to approximate probability densities with singularities as noted by. We proposed a generalized transformation-based (G-TRANS) gradient model which extends the reparameterization trick to a larger class of variational distributions. Our gradient model hides the details of transformation by introducing the velocity field and provides a flexible way to construct gradient estimators. Based on the proposed gradient model, we introduced a polynomial-based G-TRANS gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework. In practice, our gradient estimator provides a lower gradient variance than other state-of-the-art methods, leading to a fast converging process. For future work, We can consider how to construct G-TRANS gradient estimators for distributions that don't have analytical high-order moments. 
We can also utilize the from the approximation theory to find certain kinds of high-order polynomial functions that can approximate the test function effectively with cheap computations for the coefficients. Constructing velocity fields with the optimal transport theory is also a promising direction. A.1 PROOF OF THEOREM.3.1 We assume that transformed random variable ρ = φ(z, θ) is of the same dimension as z. And we assume that there exists ψ(ρ, θ) that satisfy the constraint z = ψ(φ(z, θ), θ). Firstly, by the change-of-variable technique, we have Take derivative w.r.t θ (any component of θ) at both sizes, we have With the rule of determinant derivation, we have Substitute the Equ.19 into Equ.18, we have, we obtain the first of the Theorem.3.1. As for the second part, we have Thus we obtain the second part of the Theorem.3.1. Proof ends. As a by-product, if we make ∂ ∂θ w(φ(z, θ), θ) = 0, we can obtain the transport equation for the reparameterization trick: And ∂ ∂θ w(φ(z, θ), θ) = 0 also means that the standardization distribution is independent with θ which is the core of the reparameterization trick. For the simplicity of the proof, we only consider the 1-dimensional here. And denote where with the unbiased constraint, we have E q(z,θ) [r(z, θ)] = E q(z,θ) [f (z) ∂q(z,θ) ∂θ q(z,θ) ] = const, so we need to consider the term E q(z,θ) [(r θ (z, θ)) 2 ] only. According to the Euler-Lagrange equation, we have Simplify it, we have (f ∂q ∂θ Then we have Thus we have which is usually intractable in real world practice. Here we verify that v If h(z, θ) is bounded, we have Therefore, E q θ [If we take the dual polynomial velocity field v θ dp in the G-TRANS framework, we can reach a dual to the Proposition.4.2: Proposition A.1. If Cov(P k ∂ log q(z,θ) ∂θ, (2f − P k) ∂ log q(z,θ) ∂θ ) > 0, then the gradient estimator ed from dual polynomial velocity field has a smaller gradient variance than the score function gradient estimator. The proof is similar to that of Proposition.4.2.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lqSC4YvB
We propose a novel generalized transformation-based gradient model and propose a polynomial-based gradient estimator based upon the model.
The fault diagnosis in a modern communication system is traditionally supposed to be difficult, or even impractical for a purely data-driven machine learning approach, for it is a humanmade system of intensive knowledge. A few labeled raw packet streams extracted from fault archive can hardly be sufficient to deduce the intricate logic of underlying protocols. In this paper, we supplement these limited samples with two inexhaustible data sources: the unlabeled records probed from a system in service, and the labeled data simulated in an emulation environment. To transfer their inherent knowledge to the target domain, we construct a directed information flow graph, whose nodes are neural network components consisting of two generators, three discriminators and one classifier, and whose every forward path represents a pair of adversarial optimization goals, in accord with the semi-supervised and transfer learning demands. The multi-headed network can be trained in an alternative approach, at each iteration of which we select one target to update the weights along the path upstream, and refresh the residual layer-wisely to all outputs downstream. The actual show that it can achieve comparable accuracy on classifying Transmission Control Protocol (TCP) streams without deliberate expert features. The solution has relieved operation engineers from massive works of understanding and maintaining rules, and provided a quick solution independent of specific protocols. A telecommunications network is a collection of distributed devices, entirely designed and manufactured by humans for a variety of transmission, control and management tasks, striving to provide a transparent channel between external terminals, via an actual internal relay process node by node. As a typical conversation in the style of client and server, the two linked nodes send their messages in the form of packets, encapsulated the load with miscellaneous attributes in headers to ensure the correctness, consistency, and smoothness of the entire process. A typical header includes packet sequence number, source and destination addresses, control bits, error detection codes, etc. The large-scale network cannot always work ideally, due to its inherent complexity inside massive devices and their interactions. When there is a malfunction of a device, either caused by the traffic overload, or software bugs, or hardware misconfiguration, or malicious attacks, it will be reflected on the packet streams that pass through, such as packet loss, timeout, out of order, etc. System administrators captured those suspicious streams and sent back to the service center for cautious offline analysis, which is time-consuming and domain-specific. The primary challenge of automatic diagnosis is that, it is almost impossible to formalize all the logic inside the system and make them available to artificial intelligence. A typical modern communication system consists of tens of thousands devices end-to-end and runs based on a list of hundreds of protocols layer-by-layer BID6 ). If we could figure out the latent states of protocols by constructing specific features from raw bytes, the subsequent classification tasks would be quite straightforward and easy to implement. For instance, the Transmission Control Protocol (TCP) relies on sequence numbers to judge the receiving order of packets, which may be just big integers roughly linearly growing from the view of machine learning models. 
Another example is a few critical control bits may reside among much more useless bits, such as checksum codes, which is harmful noises for models. Even we have the patience to dive into all the industrial protocols and build up an exhausted feature library; eventually, we will fail again to achieve the target of automation, one of the main advantages of the modern data-driven approach. Another difficulty is scarce of labeled samples. In spite of there are seemingly numerous packet flows running through the Internet all the time, the real valid faults occur at random and occupy only a tiny portion of whole traffic volume. The actual labeled data are usually collected from the archive of fault cases, which is hard to have enough samples for all possible categories, or cannot at least cover them completely. The previous works on this issue mainly follow two technical routes: 1) a traditional two-phase framework, using expert features and some general-propose classifiers BID1 ); 2) an end-to-end approach based on deep learning for automatic feature extraction . All these prior arts seldom use the generative models, which is usually more promising for expressing structural relationship among random variables. And they may fuse 1-2 data sources in semi-supervised setting , but not scale to even more data sources. In this paper, we resort to a generative model to mimic the messages in a terminals conversation and enrich the target data domain from two abundant but different information sources: labeled but from simulation, and genuine but unlabeled. The transfer and semi-supervised demands are integrated into an intuitive framework, composed of a connected graph of multiple simple Generative Adversarial Networks (GANs)' components, trained in an alternative optimization approach. The contribution of this paper includes: 1) combine three kinds of data sources in a generative approach, to solve the small-sample problem with a simulation environment; 2) extend the two players in usual GANs to a system of multiple ones, still keeping its merit of end-to-end training; 3) verify its effect on our practice problem of packet sequence classification. The left of paper is organized as below: first, we introduce the previous work selectively in network anomaly detection and the research frontier in the generative neural network. Next, we present the model and algorithm in detail with feature design at different levels. The of experiments are followed in Section 4. Finally, we conclude the whole article. The anomaly detection in communication packets has been long-term studied, either for the Quality of Service (QoS) management or instruction detection. The article BID1 summarized the past works based on their applied technologies, which almost cover all popular machine learning methods before 2012.1 The works after that mainly switched to deep learning as it goes popular, which are surveyed by ourselves. In general, all of these keep developing with both two aspects: the level of automation and the ability of models. BID4 started to train neural networks (NN) on labeled data to build models for more categories of anomalies than hard-coded rules, with an expert feature library and three-layer perceptron classifiers. Later, BID0 verified the feasibility of Self Organizing Map (SOM) in an unsupervised scenario, where the unexpected packets were automatically saved for analyzers. 
BID2 used the online learning to quickly adapt to the newly occurred attacking samples feedbacked by users, without the effort of retraining. BID11 used the self-taught learning to enrich the dataset from unlabeled live stream, and build up a comprehensive feature space for embedding. The enhancement can be observed even using a simple K-Nearest-Neighbors. On the other hand, the neural network models also advance consistently. The early attempts include PCA NNLiu et al. FORMULA3, Wavelet NNSun et al., etc. Yin et al. (2017 used Recurrent Neural Network (RNN) on classifying 5 categories of packet flows, and achieved obviously better than models ignoring the temporal orders. BID16 designed an NN with 6-dimensional manual feature vectors and 3 hidden layers for inherent mapping as input and claimed accuracy improvements after testing. also used self-taught learning, similar to BID11, but with more sophisticated models. It extracted features automatically from unlabeled data by a sparse auto-encoder, and classify them by a single hidden layer perceptron. To our best knowledge, it is the first time we employ the generative neural networks for the semisupervised and transfers learning simultaneously on this problem. The classical GANs compose of two neural components contesting with each other in a zero-sum game BID7 ). It can be extended to more components for the following purposes (but not limited to): 1) reflecting the relationship between multiple random variables. BID12 solved the multi-class classification by adding an extra classifier to denote the conditional probability p(y|X). The newly classifier can shift the burden of predicting labels from identifying fake samples, which is blended in the previous work BID14. This triple-player formulation makes the model even clearer and brings more stableness during training. In multi-view learning, BID3 defined one discriminator for each view and enabled the distribution estimation over all possible output y if any subset of view on a particular input X is fixed. 2) Enhancing the training process. BID8 addressed the mode collapse problem in GANs by training many generators, which envisions to discriminate itself to an additional classifier and fool the original discriminator meanwhile. It improved the steadiness of training significantly. Instead, BID5 used multiple discriminators to ensemble a more powerful one; on the other hand, the training process is retarded by a predefined function (similar to soft-max) on the top of them to match the generator's capability better. In this paper, we only focus on the former purpose. The packet stream in reality is a sequence of attributed events e = {(t i, {a i,j})|i ∈ N, j ∈ N }, where t i is the timestamp sealed by the receiver, and a i,j is a tuple of key-value parsed from packet headers. The label c of e can be K classes of anomalies containing 1 special class for normality. To focus the main aspect of our problem, we prefer to simplify it by two assumptions: 1) anomalies can only happen at one side of the communication, such as server side, to prevent the number of possible situations from blowing up to K 2. It seldom happens that, both terminals have problems simultaneously and produce a complicated interaction that can fully not be diagnosed by a onesided model. 
In fact, we can train two individual models, one per side, so the records from the client side can be removed from the training set in our experiments. 2) The continuous-valued (fine-grained, on the order of 10^-6 s) timestamps are ignored, and only their ascending index is kept, from 1 to T. We insert dummy packets, which replicate the sequence id of the previous packet and fill all other fields with 0, to denote the occurrence of a timeout between two consecutive items. The overall number of dummy packets is informative to the models since it indicates for how many periods the opposite side has not responded. This is justified because most protocols are indifferent to the exact time intervals during sending/receiving unless they exceed the predefined timeout threshold. The available content of the attributes depends on how much effort we want to spend on an accurate inspection of the packet headers: the more clearly we want to know the state of every running system, the more we need to know about the details of a given protocol. There are 3 levels of feature engineering: 1) raw bytes of the headers, which need little effort; 2) numerical values (sequence index, integer or Boolean) parsed with a data scheme indicating their positions and necessity; 3) the latent states defined in the protocols, based on complete domain knowledge and Finite-State Machine (FSM)-driven analysis. For instance, a packet at level 1 may be only a binary sequence, like 1001...1000, which is unreadable for humans. At level 2 (FIG0), it becomes a data structure with proper data types, but without the semantics defined by the protocol draft; the array of structures can further be arranged into a multi-dimensional vector. At level 3 (FIG0), the inherent states are explicitly parsed out, and the sequence finally reaches its most compact representation, a categorical vector. An NN can digest all levels above, although extra effort is needed for the discrete values at level 3, discussed later in Sec. 3.2.2. FIG0 shows a simplified version of the FSM for a sender during TCP's transmission phase. In the typical situation, the sender shuttles between the Ready and Listen states. When an error occurs in the direction the data travels, the system retreats to a previous position and resends the datum; when the acknowledgement packet is lost in the opposite direction, a timeout is triggered and the system resends the last piece of data. In practice, these states are implicitly encoded in the headers, and the program has to recover them based on the TCP protocol. Assume that the N-dimensional sequence Y = {y_t : t ∈ T} is generated by repeatedly applying a function G to a latent M-dimensional vector z: DISPLAYFORM0 where c is a categorical variable that switches between the types of anomalies, and θ denotes the parameters of G. We assume that, if z ∼ N(0, σ^2) and c ∼ Cat(π), the resulting Y conforms to the distribution from which our observations Ŷ come. In the adversarial approach, we guide the training of G by minimizing the distance (cross entropy, etc.) between p(Y, c) and the observed p̂(Y, c), via the transformation of an extra binary discriminator D_r: DISPLAYFORM1 where H is the cross entropy between two distributions. Similarly, we define a function G_s that transforms Y to its correspondence Y_s in the simulated world, and a function D_s to discriminate these from real simulation data: DISPLAYFORM2 For the unlabeled data, we define a D_c to distinguish them from the generated samples without labels c: DISPLAYFORM3
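The adversarial objectives referenced by the equation placeholders above are not reproduced in this text, so the following sketch only illustrates the standard binary cross-entropy form such terms usually take; the non-saturating generator loss and all function names are our assumptions, not the authors' exact formulation. Each discriminator scores one pair of paths (real vs. generated, simulated vs. translated, unlabeled vs. unconditionally generated), and the generator is scored by how well it fools them.

```python
import numpy as np

def bce(p, target):
    # Binary cross entropy H(target, p) for a batch of discriminator outputs p in (0, 1).
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def discriminator_loss(p_on_data, p_on_generated):
    # Score genuine inputs as 1 and generated inputs as 0.
    return bce(p_on_data, 1.0) + bce(p_on_generated, 0.0)

def generator_loss(p_on_generated):
    # Non-saturating form: the generator tries to make the discriminator output 1.
    return bce(p_on_generated, 1.0)

def combined_generator_objective(loss_real_path, loss_sim_path, loss_unlabeled_path,
                                 lambda_s=1.0, lambda_c=1.0):
    # Weighted combination of the three path losses, in the spirit of the overall goal (Eq. 8).
    return loss_real_path + lambda_s * loss_sim_path + lambda_c * loss_unlabeled_path
```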
Figure 2: The layout of the building blocks in our solution. The arrows denote the information flow among the blocks; they are connected as three forward paths in (a), representing the real, simulated and unlabeled information sources, colored orange, green, and blue. The noise driving the whole process is assumed to be sampled from two independent sources, z ∼ N(0, σ^2) and c ∼ Cat(π). The π are estimated directly from the real labeled data. In (b), the white block C is trained with a fixed G, which is drawn shaded. The overall optimization goal is a convex combination of Eqs. 3, 5 and 7: DISPLAYFORM4 where λ_s, λ_c are coefficients that adjust the relative importance of the targets. Once we obtain p(Y, c) in the implicit form of G, the classifier p(c|Y) can be derived with the marginal distribution p(c) according to Bayes' rule. The neural components and their connections are shown in Fig. 2. There are 3 information sinks in Fig. 2a), each of which connects to exactly 2 data sources and corresponds to one part of the loss function in Eq. 8; minimizing them concurrently amounts to making the data coming from the different paths look the same as far as possible. Note that the graph of blocks is connected, directed and acyclic, so any topological order of the nodes (which must exist, by graph theory) is viable for a gradient descent approach; this is convenient for designing the optimization heuristics in Sec. 3.2.3. The trained G is frozen in a secondary training process in Fig. 2b), to consistently supply the classifier C with labeled pseudo-samples until convergence. The neural blocks in Fig. 2 can be built from even more fine-grained modules: 1) an input layer concatenating a vector z (or y) and an optional one-hot vector for c; 2) a Long Short-Term Memory (LSTM) layer mapping a sequence to a vector; 3) an LSTM layer mapping, in reverse, a vector to a sequence; 4) a fully connected layer; 5) a sigmoid (or softmax) layer classifying real/fake samples (or outputting labels). The generator G is built from 1 + 3, G_s from 2 + 3, and C from 2 + 5; all of D, D_s, D_u share the same structure, which is 2 + 1 + 5. The modules 2, 3 and 4 are kept as a single hidden layer here, since multi-layer structures were tried in practice and their improvement was found to be negligible (a minimal sketch of this block composition follows below). For the discrete state sequence at feature level 3, it is feasible to map the discrete values into continuous vectors globally with Word2vec during preprocessing, since C is what we are ultimately interested in for diagnosis and the intermediate results are not visible to end users. We need a heuristic to guide the overall optimization of G, depicted in Algo. 1. In every mini-batch iteration of training, we select the forward path whose loss function contributes most to Eq. 8, weighted by the λs, and update the parameters θ of the blocks along that path by gradient descent. All three sub-losses are re-evaluated after G has been modified. If the selected target turns out not to contribute to the overall goal, the update is rolled back and the algorithm switches to another path; the failure record of a target is not carried over to the next batch. For the individual blocks, RMSprop is preferred for components containing a recurrent layer, including G, G_s and C, while all discriminators use Adam instead. The real labeled data are collected from the archive of fault cases, probed at the core section of a wireless telecom site in service.
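Since the implementation described later is based on Keras, the composition of the modules listed above can be sketched with standard Keras layers. Everything below is a minimal, hypothetical reconstruction: the sizes follow values quoted in the experimental setup (noise dimension 20, LSTM width 10, sequence length 500, 5 classes, 7 parsed attributes), but the exact architectures, names and feature dimension are assumptions.

```python
from tensorflow.keras import layers, Model

T, NOISE_DIM, HIDDEN, N_CLASSES, FEATURE_DIM = 500, 20, 10, 5, 7

def build_generator():
    # G = module 1 (input concat) + module 3 (vector-to-sequence LSTM).
    z = layers.Input((NOISE_DIM,))
    c = layers.Input((N_CLASSES,))                        # one-hot anomaly type
    h = layers.Concatenate()([z, c])
    h = layers.RepeatVector(T)(h)                         # "one-to-many" via repetition
    h = layers.LSTM(HIDDEN, return_sequences=True)(h)
    y = layers.TimeDistributed(layers.Dense(FEATURE_DIM))(h)
    return Model([z, c], y)

def build_discriminator(conditional=True):
    # D, D_s, D_u = module 2 (sequence-to-vector LSTM) + module 1 + module 5.
    y = layers.Input((T, FEATURE_DIM))
    h = layers.LSTM(HIDDEN)(y)
    inputs = [y]
    if conditional:                                       # the labeled-path discriminator also sees c
        c = layers.Input((N_CLASSES,))
        h = layers.Concatenate()([h, c])
        inputs.append(c)
    p = layers.Dense(1, activation="sigmoid")(h)
    return Model(inputs, p)

def build_classifier():
    # C = module 2 + module 5 with a softmax output over the K classes.
    y = layers.Input((T, FEATURE_DIM))
    h = layers.LSTM(HIDDEN)(y)
    p = layers.Dense(N_CLASSES, activation="softmax")(h)
    return Model(y, p)
```

Under the alternating heuristic of Algo. 1, each mini-batch would evaluate the three path losses, apply a gradient step to the blocks on the path with the largest weighted contribution, and roll the update back if the overall objective does not improve.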
The problematic TCP streams fall into 5 categories: 1) uplink packet loss at random, 2) downlink packet loss at random, 3) periodic packet loss when the load exceeds capacity, 4) checksum errors at random, 5) healthy but falsely alarmed. Faults in the connection establishment and release phases are not considered here. In the historical records, all we have are 18 samples, unevenly distributed over categories 1-5 as 5, 7, 3, 1, and 2. The unlabeled data are captured from the same site and comprise 2000 real samples, sufficient for training purposes. Although this set is much larger than the labeled one, an arbitrary, short-term collection can hardly contain valid anomalies; it provides a reference for the average service quality of a specific site. The simulation is conducted in a mirror environment in the lab, with a similar, possibly simplified, configuration. The 5 types of anomalies are generated in equal proportion, with 400 records each. The errors here are much more pronounced than in reality: 1) for uplink and 2) downlink packets, the probability of loss is 50%; 3) the stream spends 50% of the time transmitting over the throughput limit; 4) the probability of a checksum error is 50%. The sequence lengths of all synthetic data are fixed to 500. The preprocessing of the TCP header follows the levels discussed in Sec. 3.1, where the latent states of TCP include normal, resend, and timeout, distilled from 7 useful attributes and also from 24 bytes of raw binary records. A simple FSM is defined to compress the attributes into a few states, according to TCP's standard logic. All sequences are split into chunks of size 500, and shorter ones are padded with 0. The performance on our multi-class problem is evaluated by the accuracy Acc = N_correct / N, which is the only metric the operations engineers care about. They rank all possible causes in descending order of the model's output and take the whole list as a recommendation, which acts as a shortcut leading them to the underlying cause much more quickly than their existing manual approach. We measure two variants of accuracy: averaging over samples, as above, and averaging over classes (a per-class average of Acc_c for c = 1, ..., K), to emphasize the performance on the minority classes; a small sketch of both metrics is given below. 3-fold cross-validation is used for the real labeled data, with the other 2 datasets always added to the training partition of every fold. The program is based on Keras and a GAN library. The one-to-many recurrent layer of Keras is implemented in the form of many-to-many, by repeating the input vector to the same length as the output. The dimension of the noise input is set to 20, and the hidden dimension of the LSTM is 10, with L2 regularization. The learning rates of all components are set to 10^-3; all other parameters are kept at their defaults. We have two kinds of factors to combine into a workable model: feature levels and data sources, shown in Tab. 1. The solution in Fig. 2a) can be trimmed according to the available data sources. In group 1 of Tab. 1, we give two reference models for comparison: one (Line 1) is the dummy model which always predicts the most frequent class, and the other (Line 2) is the result obtained if we use the simulation data for both training and testing with a single standalone classifier. With the deliberately amplified anomalies and evenly distributed classes, that performance can be quite ideal and serves, to some extent, as an empirical upper limit for diagnosis.
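For reference, the two accuracy variants can be computed as below. The class-averaged version is written as a plain macro average, which is one plausible reading of the weighting described above; the exact formula in the source is not legible, so this is an assumption.

```python
import numpy as np

def sample_accuracy(y_true, y_pred):
    # Acc = N_correct / N, averaged over samples.
    return float(np.mean(y_true == y_pred))

def class_averaged_accuracy(y_true, y_pred, num_classes):
    # Each class contributes equally regardless of its size, which emphasises minority classes.
    per_class = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            per_class.append(float(np.mean(y_pred[mask] == c)))
    return float(np.mean(per_class))

# Tiny example with the 5 anomaly categories.
y_true = np.array([0, 0, 1, 1, 2, 3, 4])
y_pred = np.array([0, 1, 1, 1, 2, 3, 0])
print(sample_accuracy(y_true, y_pred), class_averaged_accuracy(y_true, y_pred, 5))
```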
It can be observed in groups 4-5 that the simulation data are crucial for a substantial improvement, since they add more typical anomalies, while the unlabeled data contribute only slightly by supplying normal samples. The improvement in weighted accuracy is more obvious, reaching more than twice that of the dummy model. On the other hand, the features still play an essential role in our problem. The level 3 features always perform better than the other two levels, while level 1 in particular fails to produce any meaningful result. Level 2 can approach the best performance with the help of massive data, offering an accuracy that is always worse but still acceptable, without the effort of understanding the protocols. The evolution of the losses of the 3 discriminators is shown in FIG2, and that of the classifier in FIG2. All loss curves of the GAN components converge to roughly horizontal limits, with somewhat mutually influenced fluctuations caused by the continuous attempts to find a better equilibrium in every batch. However, these efforts seem merely to make the overall loss tremble rather than move to lower values. The variance on the simulated data is clearly much smaller than on the real data, which may be attributed to the sufficient data size and the even distribution among classes, whereas the real data are imbalanced and have few samples with which to validate improvements during training. We terminated the whole process at iteration 10^4 and used the trained G to obtain a corresponding C, which is much easier to train as its loss decreases steadily to its convergence level, shown in FIG2. In this paper, the widely needed semi-supervised and transfer learning requirements have been implemented in an integrated way, via a system of cooperative and adversarial neural blocks. Its effectiveness has been verified in our application of packet flow classification, and we hope it will become a widely adopted method in this specific domain. The work also suggests that complex machine learning tasks and their compound loss functions can be mapped directly onto connected networks, and that their optimization process can be designed over the entire graph rather than over each component's hierarchical layers. In future work, we may study how to apply this approach to larger-scale tasks, and analyze theoretically whether an equilibrium exists and why we can always reach it.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
SJjADecmf
semi-supervised and transfer learning on packet flow classification, via a system of cooperative or adversarial neural blocks
Our work addresses two important issues with recurrent neural networks: they are over-parameterized, and the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard data-sets reveal that KRU can reduce the number of parameters in the recurrent weight matrix by three orders of magnitude compared to existing recurrent models, without trading away statistical performance. These results in particular show that while there are advantages in having a high-dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced. Deep neural networks have defined the state of the art in a wide range of problems in computer vision, speech analysis, and natural language processing BID28 BID36. However, these models suffer from two key issues. They are over-parametrized; thus training and inference take a very long time. Learning deep models is difficult because of the poor conditioning of the matrices that parameterize the model. These difficulties are especially relevant to recurrent neural networks. Indeed, the number of distinct parameters in RNNs grows as the square of the size of the hidden state, in contrast to convolutional networks, which enjoy weight sharing. Moreover, poor conditioning of the recurrent matrices results in gradients that explode or vanish exponentially fast along the time horizon. This problem prevents RNNs from capturing long-term dependencies BID22 BID5. There exists an extensive body of literature addressing over-parametrization in neural networks. BID31 first studied the problem and proposed to remove unimportant weights in neural networks by exploiting second-order information. Several techniques which followed include low-rank decomposition BID13, training a small network on the soft targets predicted by a big pre-trained network BID2, low-bit-precision training BID12, hashing BID8, etc. A notable exception is deep fried convnets BID44, which explicitly parameterize the fully connected layers in a convnet with a computationally cheap and parameter-efficient structured linear operator, the Fastfood transform BID29. These techniques are primarily aimed at feed-forward fully connected networks, and very few studies have focused on the particular case of recurrent networks BID1. The problem of vanishing and exploding gradients has also received significant attention. BID23 proposed an effective gating mechanism in their seminal work on LSTMs. Later, this technique was adopted by other models such as the Gated Recurrent Units (GRU) BID10 and the Highway networks BID39 for recurrent and feed-forward neural networks respectively. Other popular strategies include gradient clipping BID37 and orthogonal initialization of the recurrent weights. More recently, BID1 proposed to use a unitary recurrent weight matrix. The use of norm-preserving unitary maps prevents the gradients from exploding or vanishing, and thus helps to capture long-term dependencies.
The ing model called unitary RNN (uRNN) is computationally efficient since it only explores a small subset of general unitary matrices. Unfortunately, since uRNNs can only span a reduced subset of unitary matrices their expressive power is limited BID42. We denote this restricted capacity unitary RNN as RC uRNN. Full capacity unitary RNN (FC uRNN) BID42 proposed to overcome this issue by parameterizing the recurrent matrix with a full dimensional unitary matrix, hence sacrificing computational efficiency. Indeed, FC uRNN requires a computationally expensive projection step which takes O(N 3) time (N being the size of the hidden state) at each step of the stochastic optimization to maintain the unitary constraint on the recurrent matrix. BID35 in their orthogonal RNN (oRNN) avoided the expensive projection step in FC uRNN by parametrizing the orthogonal matrices using Householder reflection vectors, it allows a fine-grained control over the number of parameters by choosing the number of Householder reflection vectors. When the number of Householder reflection vector approaches N this parametrization spans the full reflection set, which is one of the disconnected subset of the full orthogonal set. BID25 also presented a way of parametrizing unitary matrices which allows fine-grained control on the number of parameters. This work called as Efficient Unitary RNN (EURNN), exploits the continuity of unitary set to have a tunable parametrization ranging from a subset to the full unitary set. Although the idea of parametrizing recurrent weight matrices with strict unitary linear operator is appealing, it suffers from several issues: Strict unitary constraints severely restrict the search space of the model, thus making the learning process unstable. Strict unitary constraints make forgetting irrelevant information difficult. While this may not be an issue for problems with non-vanishing long term influence, it causes failure when dealing with real world problems that have vanishing long term influence 4.7. BID20 have previously pointed out that the good performance of strict unitary models on certain synthetic problems is because it exploits the biases in these data-sets which favors a unitary recurrent map and these models may not generalize well to real world data-sets. More recently BID41 have also studied this problem of unitary RNNs and the authors found out that relaxing the strict unitary constraint on the recurrent matrix to a soft unitary constraint improved the convergence speed as well as the generalization performance. Our motivation is to address the problems of existing recurrent networks mentioned above. We present a new model called Kronecker Recurrent Units (KRU). At the heart of KRU is the use of Kronecker factored recurrent matrix which provide an elegant way to adjust the number of parameters to the problem at hand. This factorization allows us to finely modulate the number of parameters required to encode N × N matrices, from O(log(N)) when using factors of size 2 × 2, to O(N 2) parameters when using a single factor of the size of the matrix itself. We tackle the vanishing and exploding gradient problem through a soft unitary constraint BID26 BID20 BID11 BID41. Thanks to the properties of Kronecker matrices BID40, this constraint can be enforced efficiently. Please note that KRU can readily be plugged into vanilla real space RNN, LSTM and other variants in place of standard recurrent matrices. 
However in case of LSTMs we do not need to explicitly enforce the approximate orthogonality constraints as the gating mechanism is designed to prevent vanishing and exploding gradients. Our experimental on seven standard data-sets reveal that KRU and KRU variants of real space RNN and LSTM can reduce the number of parameters drastically (hence the training and inference time) without trading the statistical performance. Our core contribution in this work is a flexible, parameter efficient and expressive recurrent neural network model which is robust to vanishing and exploding gradient problem. The paper is organized as follows, in section 2 we restate the formalism of RNN and detail the core motivations for KRU. In section 3 we present the Kronecker recurrent units (KRU). We present our experimental findings in section 4 and section 5 concludes our work. DISPLAYFORM0 Hidden and output bias σ, L(ŷ, y) Point-wise non-linear activation function and the loss function 2 RECURRENT NEURAL NETWORK FORMALISM TAB0 summarizes some notations that we use in the paper. We consider the field to be complex rather than real numbers. We will motivate the choice of complex numbers later in this section. Consider a standard recurrent neural network BID14. Given a sequence of T input vectors: x 0, x 1,..., x T −1, at a time step t RNN performs the following: DISPLAYFORM0 DISPLAYFORM1 whereŷ t is the predicted value at time step t. The total number of parameters in a RNN is c(DN + N 2 + N + M + M N), where c is 1 for real and 2 for complex parametrization. As we can see, the number of parameters grows quadratically with the hidden dimension, i.e., O(N 2). We show in the experiments that this quadratic growth is an over parametrization for many real world problems. Moreover, it has a direct impact on the computational efficiency of RNNs because the evaluation of Wh t−1 takes O(N 2) time and it recursively depends on previous hidden states. However, other components Ux t and Vh t can usually be computed efficiently by a single matrix-matrix multiplication for each of the components. That is, we can perform U[x 0, . . ., x T] and V[h 0, . . ., h T −1], this is efficient using modern BLAS libraries. So to summarize, if we can control the number of parameters in the recurrent matrix W, then we can control the computational efficiency. The vanishing and exploding gradient problem refers to the decay or growth of the partial derivative of the loss L with respect to the hidden state h t i.e. ∂L ∂ht as the number of time steps T grows BID1. By the application of the chain rule, the following can be shown BID1: DISPLAYFORM0 From Equation 3, it is clear that if the absolute value of the eigenvalues of W deviates from 1 then ∂L ∂ht may explode or vanish exponentially fast with respect to T − t. So a strategy to prevent vanishing and exploding gradient is to control the spectrum of W. Although BID1 and BID42 use complex valued networks with unitary constraints on the recurrent matrix, the motivations for such models are not clear. We give a simple but compelling reason for complex-valued recurrent networks. The absolute value of the determinant of a unitary matrix is 1. Hence in the real space, the set of all unitary (orthogonal) matrices have a determinant of 1 or −1, i.e., the set of all rotations and reflections respectively. Since the determinant is a continuous function, the unitary set in real space is disconnected. 
Consequently, with the real-valued networks we cannot span the full unitary set using the standard continuous optimization procedures. On the contrary, the unitary set is connected in the complex space as its determinants are the points on the unit circle and we do not have this issue. As we mentioned in the introduction BID25 uses this continuity of unitary space to have a tunable continuous parametrization ranging from subspace to full unitary space. Any continuous parametrization in real space can only span a subset of the full orthogonal set. For example, the Householder parametrization BID35 suffers from this issue. We consider parameterizing the recurrent matrix W as a Kronecker product of F matrices DISPLAYFORM0 Where each W f ∈ C P f ×Q f and DISPLAYFORM1 To illustrate the Kronecker product of matrices, let us consider the simple case when ∀ f {P f = Q f = 2}. This implies F = log 2 N. And W is recursevly defined as follows: DISPLAYFORM2 DISPLAYFORM3 When ∀ f {p f = q f = 2} the number of parameters is 8 log 2 N and the time complexity of hidden state computation is O(N log 2 N). When ∀ f {p f = q f = N} then F = 1 and we will recover standard complex valued recurrent neural network. We can span every Kronecker representations in between by choosing the number of factors and the size of each factor. In other words, the number of Kronecker factors and the size of each factor give us fine-grained control over the number of parameters and hence over the computational efficiency. This strategy allows us to design models with the appropriate trade-off between computational budget and statistical performance. All the existing models lack this flexibility. The idea of using Kronecker factorization for approximating Fisher matrix in the context of natutal gradient methods have recently recieved much attention. The algorithm was originally presented in BID33 and was later extended to convolutional layers, distributed second order optimization BID3 and for deep reinforcement learning BID43. However Kronecker matrices have not been well explored as learnable parameters except BID45 ) used it's spectral property for fast orthogonal projection and BID46 used it as a layer in convolutional neural networks. Poor conditioning in vanishing or exploding gradients. Unfortunately, the standard solution which consists of optimization on the strict unitary set suffers from the retention of noise over time. Indeed, the small eigenvalues of the recurrent matrix can represent a truly vanishing long-term influence on the particular problem and in that sense, there can be good or bad vanishing gradients. Consequently, enforcing strict unitary constraint (forcing the network to never forget) can be a bad strategy. A simple solution to get the best of both worlds is to enforce unitary constraint approximately by using the following regularization: DISPLAYFORM0 Please note that these constraints are enforced on each factor of the Kronecker factored recurrent matrix. This procedure is computationally very efficient since the size of each factor is typically small. It suffices to do so because if each of the Kronecker factors {W 0, . . ., W F −1} are unitary then the full matrix W is unitary BID40 and if each of the factors are approximately unitary then the full matrix is approximately unitary. We apply soft unitary constraints as a regularizer whose strength is cross-validated on the validation set. This type of regularizer has recently been exploited for real-valued models. 
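To make the factorization and the relaxed constraint concrete, here is a small NumPy sketch: it assembles the recurrent matrix from complex 2x2 factors (so an N x N matrix costs only 4 log2(N) complex entries) and computes a per-factor penalty of the form ||W_f^H W_f - I||_F^2, which is our reading of the soft unitary regularizer since the equation itself is not reproduced above. In KRU the penalty would be added to the training loss with a cross-validated amplitude.

```python
import numpy as np

def kron_recurrent_matrix(factors):
    # W = W_0 ⊗ W_1 ⊗ ... ⊗ W_{F-1}; with 2x2 factors this encodes an
    # N x N matrix (N = 2^F) with only 4F complex parameters.
    W = factors[0]
    for Wf in factors[1:]:
        W = np.kron(W, Wf)
    return W

def soft_unitary_penalty(factors):
    # Penalise each factor's deviation from unitarity; cheap because factors are small.
    penalty = 0.0
    for Wf in factors:
        q = Wf.shape[1]
        delta = Wf.conj().T @ Wf - np.eye(q)
        penalty += np.sum(np.abs(delta) ** 2)
    return penalty

rng = np.random.default_rng(0)
F = 4                                   # N = 2^4 = 16 hidden units
factors = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(F)]
W = kron_recurrent_matrix(factors)      # 16 x 16 recurrent matrix from 16 small factors
print(W.shape, soft_unitary_penalty(factors))
```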
BID11 showed that enforcing approximate orthogonality constraint on the weight matrices make the network robust to adversarial samples as well as improve the learning speed. In metric learning BID26 have shown that it better conditions the projection matrix thereby improving the robustness of stochastic gradient over a wide range of step sizes as well asthe generalization performance. BID20 and BID41 have also used this soft unitary contraints on standard RNN after identifying the problems with the strict unitary RNN models. However the computational complexity of naively applying this soft constraint is O(N 3). This is prohibitive for RNNs with large hidden state unless one considers a Kronecker factorization. Existing deep learning libraries such as Theano BID6, Tensorflow BID0 and Pytorch BID38 do not support fast primitives for Kronecker products with arbitrary number of factors. So we wrote custom CUDA kernels for Kronecker forward and backward operations. All our models are implemented in C++. We will release our library to reproduce all the which we report in this paper. We use tanh as activation function for RNN, LSTM and our model KRU-LSTM. Whereas RC uRNN, FC uRNN and KRU uses complex rectified linear units BID1. Copy memory problem BID23 tests the model's ability to recall a sequence after a long time gap. In this problem each sequence is of length T + 20 and each element in the sequence come from 10 classes {0, . . ., 9}. The first 10 elements are sampled uniformly with replacement from {1, . . ., 8}. The next T − 1 elements are filled with 0, the'blank' class followed by 9, the'delimiter' and the remaining 10 elements are'blank' category. The goal of the model is to output a sequence of T + 10 blank categories followed by the 10 element sequence from the beginning of the input sequence. The expected average cross entropy for a memory-less strategy is FORMULA3, we choose the training and test set size to be 100K and 10K respectively. All the models were trained using RMSprop with a learning rate of 1e−3, decay of 0.9 and a batch size of 20. For both the settings T = 1000 and T = 2000, KRU converges to zero average cross entropy faster than FC uRNN. All the other baselines are stuck at the memory-less cross entropy. The are shown in figure 1. For this problem we do not learn the recurrent matrix of KRU, We initialize it by random unitary matrix and just learn the input to hidden, hidden to output matrices and the bias. We found out that this strategy already solves the problem faster than all other methods. Our model in this case is similar to a parametrized echo state networks (ESN). ESNs are known to be able to learn long-term dependencies if they are properly initialized BID24. We argue that this data-set is not an ideal benchmark for evaluating RNNs in capturing long term dependencies. Just a unitary initialization of the recurrent matrix would solve the problem. Following BID1 we describe the adding problem BID23. Each input vector is composed of two sequences of length T. The first sequence is sampled from U. In the second sequence exactly two of the entries is 1, the'marker' and the remaining is 0. The first 1 is located uniformly at random in the first half of the sequence and the other 1 is located again uniformly at random in the other half of the sequence. The network's goal is to predict the sum of the numbers from the first sequence corresponding to the marked locations in the second sequence. 
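As a concrete illustration of the task just described, the following snippet generates adding-problem batches in the stated format. It mirrors the textual description rather than the authors' exact data pipeline, so shapes and sampling details are assumptions.

```python
import numpy as np

def adding_problem_batch(batch_size, T, rng=None):
    # inputs has shape (batch, T, 2): channel 0 is the uniform random sequence,
    # channel 1 is the marker sequence with exactly two ones, one in each half.
    rng = rng or np.random.default_rng()
    values = rng.uniform(0.0, 1.0, size=(batch_size, T))
    markers = np.zeros((batch_size, T))
    first = rng.integers(0, T // 2, size=batch_size)
    second = rng.integers(T // 2, T, size=batch_size)
    markers[np.arange(batch_size), first] = 1.0
    markers[np.arange(batch_size), second] = 1.0
    # Target: sum of the two values at the marked positions.
    targets = values[np.arange(batch_size), first] + values[np.arange(batch_size), second]
    return np.stack([values, markers], axis=-1), targets

x, y = adding_problem_batch(4, T=100)
print(x.shape, y.shape)   # (4, 100, 2) (4,)
```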
We evaluate four settings as in BID1 with T =100, T =200, T =400, and T =750. For all four settings, KRU uses a hidden dimension N of 512 with 2x2 Kronecker factors which corresponds to ≈3K parameters in total. We use a RNN of N = 128 (≈ 17K parameters), LSTM of N = 128 (≈ 67K parameters), RC uRNN of N = 512 (≈ 7K parameters), FC uRNN of N = 128 (≈ 33K parameters). The train and test set sizes are chosen to be 100K and 10K respectively. All the models were trained using RMSprop with a learning rate of 1e−3 and a batch size of 20 or 50 with the best are being reported here. The are presented in figure 2. KRU converges faster than all other baselines even though it has much fewer parameters. This shows the effectiveness of soft unitary constraint which controls the flow of gradients through very long time steps and thus deciding what to forget and remember in an adaptive way. LSTM also converges to the solution and this is achieved through its gating mechanism which controls the flow of the gradients and thus the long term influence. However LSTM has 10 times more parameters than KRU. Both RC uRNN and FC uRNN converges for T = 100 but as we can observe, the learning is not stable. The reason for this is that RC uRNN and FC uRNN retains noise since they are strict unitary models. Please note that we do not evaluate RC uRNN for T = 400 and T = 750 because we found out that the learning is unstable for this model and is often diverging. Results on adding problem for T =100, T =200, T =400 and T =750. KRU consistently outperforms the baselines on all the settings with fewer parameters. As outlined by, we evaluate the Pixel by pixel MNIST task. MNIST digits are shown to the network pixel by pixel and the goal is to predict the class of the digit after seeing all the pixels one by one. We consider two tasks: Pixels are read from left to right from top or bottom and Pixels are randomly permuted before being shown to the network. The sequence length for these tasks is T = 28 × 28 = 784. The size of the MNIST training set is 60K among which we choose 5K as the validation set. The models are trained on the remaining 55K points. The model which gave the best validation accuracy is chosen for test set evaluation. All the models are trained using RMSprop with a learning rate of 1e−3 and a decay of 0.9.The are summarized in FIG3 and table 2. On the unpermuted task LSTM achieve the state of the art performance even though the convergence speed is slow. Recently a low rank plus diagonal gated recurrent unit (LRD GRU) have shown to achieves 94.7 accuracy on permuted MNIST with 41.2K parameters whereas KRU achieves 94.5 with just 12K parameters i.e KRU has 3x parameters less than LRD GRU. Please also note that KRU is a simple model without a gating mechanism. KRU can be straightforwardly plugged into LSTM and GRU to exploit the additional benefits of the gating mechanism which we will show in the next experiments with a KRU-LSTM. We now consider character level language modeling on Penn TreeBank data-set BID32. Penn TreeBank is composed of 5017K characters in the training set, 393K characters in the validation set and 442K characters in the test set. The size of the vocabulary was limited to 10K most frequently occurring words and the rest of the words are replaced by a special <UNK> character BID36. The total number of unique characters in the data-set is 50, including the special <UNK> character. All our models were trained for 50 epochs with a batch size of 50 and using ADAM BID27. 
We use a learning rate of 1e−3 which was found through cross-validation with default beta parameters BID27 ). If we do not see an improvement in the validation bits per character (BPC) after each epoch then the learning rate is decreased by 0.30. Back-propagation through time (BPTT) is unrolled for 30 time frames on this task. We did two sets of experiments to have fair evaluation with the models whose were available for a particular parameter setting BID35 and also to see how the performance evolves as the number of parameters are increased. We present our in table 3. We observe that the strict orthogonal model, oRNN fails to generalize as well as other models even with a high capacity recurrent matrix. KRU and KRU-LSTM performs very close to RNN and LSTM with fewer parameters in the recurrent matrix. Please recall that the computational bottleneck in RNN is the computation of hidden states 2.1 and thus having fewer parameters in the recurrent matrix can significantly reduce the training and inference time. Recently HyperNetworks BID18 have shown to achieve the state of the art performance of 1.265 and 1.219 BPC on the PTB test set with 4.91 and 14.41 million parameters respectively. This is respectively 13 and 38 times more parameters than the KRU-LSTM model which achieves 1.47 test BPC. Also Recurrent Highway Networks (RHN) BID47 proved to be a promising model for learning very deep recurrent neural networks. Running experiments, and in particular exploring meta-parameters with models of that size, requires unfortunately computational means beyond what was at our disposal for this work. However, there is no reason that the consistent behavior and improvement observed on the other reference baselines would not generalize to that type of large-scale models. BID9 our main objective here is to have a fair evaluation of different recurrent neural networks. We took the baseline RNN and LSTM models of BID9 whose model sizes were chosen to be small enough to avoid overfitting. We choose the model size of KRU and KRU-LSTM in such way that it has fewer parameters compared to the baselines. As we can in the table 4 both our models (KRU and KRU-LSTM) overfit less and generalizes better. We also present the wall-clock running time of different methods in the figure 4. BID9 100 ≈20K 10K 8.82 9.10 5.64 9.03 LSTM BID9 Framewise phoneme classification BID16 is the problem of classifying the phoneme corresponding to a sound frame. We evaluate the models for this task on the real world TIMIT data-set BID15. TIMIT contains a training set of 3696 utterances among which we use 184 as the validation set. The test set is composed of 1344 utterances. We extract 12 Mel-Frequency Cepstrum Coefficients (MFCC) BID34 ) from 26 filter banks and also the log energy per frame. We also concatenate the first derivative, ing in a feature descriptor of dimension 26 per frame. The frame size is chosen to be 10ms and the window size is 25ms. The number of time steps to which back-propagation through time (BPTT) is unrolled corresponds to the length of each sequence. Since each sequence is of different length this implies that for each sample BPTT steps are different. All the models are trained for 20 epochs with a batch size of 1 using ADAM with default beta parameters BID27. The learning rate was cross-validated for each of the models from η ∈ {1e−2, 1e−3, 1e−4} and the best are reported here. The best learning rate for all the models was found out to be 1e−3 for all the models. 
Again if we do not observe a decrease in the validation error after each epoch, we decrease the learning rate by a factor of γ ∈ {1e−1, 2e−1, 3e−1} which is again cross-validated. Figure 5 summarizes Figure 5: KRU and KRU-LSTM performs better than the baseline models with far less parameters in the recurrent weight matrix on the challenging TIMIT data-set BID15. This significantly bring down the training and inference time of RNNs. Both LSTM and KRU-LSTM converged within 5 epochs whereas RNN and KRU took 20 epochs. A similar was obtained by BID16 using RNN and LSTM with 4 times less parameters respectively than our models. However in their work the LSTM took 20 epochs to converge and the RNN took 70 epochs. We have also experimented with the same model size as that of BID16 and have obtained very similar as in the table but at the expense of longer training times. Here we study the properties of soft unitary constraints on KRU. We use Polyphonic music modeling data-sets BID7: JSB Chorales and Piano-midi, as well as TIMIT data-set for this set of experiments. We varied the amplitude of soft unitary constraints from 1e − 7 to 1e − 1, the higher the amplitude the closer the recurrent matrix will be to the unitary set. All other hyper-parameters, such as the learning rate and the model size are fixed. We present our studies in the figure 6. As we increase the amplitude we can see that the recurrent matrix is getting better conditioned and the spectral norm or the spectral radius is approaching towards 1. As we can see that the validation performance can be improved using this simple soft unitary constraints. For JSB Chorales the best validation performance is achieved at an amplitude of 1e − 2, whereas for Piano-midi it is at 1e − 1.For TIMIT phoneme recognition problem, the best validation error is achieved at 1e − 5 but as we increase the amplitude further, the performance drops. This might be explained by a vanishing long-term influence that has to be forgotten. Our model achieve this by cross-validating the amplitude of soft unitary constraints. These experiments also reveals the problems of strict unitary models such as RC uRNN BID1, FC uRNN BID42, oRNN BID35 and EURNN BID25 ) that they suffer from the retention of noise from a vanishing long term influence and thus fails to generalize. A popular heuristic strategy to avoid exploding gradients in RNNs and thereby making their training robust and stable is gradient clipping. Most of the state of the art RNN models use gradient clipping for training. Please note that we are not using gradient clipping with KRU. Our soft unitary constraints offer a principled alternative to gradient clipping. Moreover BID19 recently showed that gradient descent converges to the global optimizer of linear recurrent neural networks even though the learning problem is non-convex. The necessary condition for the global convergence guarantee requires that the spectral norm of recurrent matrix is bounded by 1. This seminal theoretical also inspires to use regularizers which control the spectral norm of the recurrent matrix, such as the soft unitary constraints. We have presented a new recurrent neural network model based on its core a Kronecker factored recurrent matrix. Our core reason for using a Kronecker factored recurrent matrix stems from it's elegant algebraic and spectral properties. Kronecker matrices are neither low-rank nor block-diagonal but it is multi-scale like the FFT matrix. 
Kronecker factorization provides a fine control over the model capacity and it's algebraic properties enable us to design fast matrix multiplication algorithms. It's spectral properties allow us to efficiently enforce constraints like positive semi-definitivity, unitarity and stochasticity. As we have shown, we used the spectral properties to efficiently enforce a soft unitary constraint. Experimental show that our approach out-perform classical methods which uses O(N 2) parameters in the recurrent matrix. Maybe as important, these experiments show that both on toy problems (§ 4.1 and 4.2), and on real ones (§ 4.3, 4.4,, and § 4.6), while existing methods require tens of thousands of parameters in the recurrent matrix, competitive or better than state-of-the-art performance can be achieved with far less parameters in the recurrent weight matrix. These surprising provide a new and counter-intuitive perspective on desirable memory-capable architectures: the state should remain of high dimension to allow the use of high-capacity networks to encode the input into the internal state, and to extract the predicted value, but the recurrent dynamic itself can, and should, be implemented with a low-capacity model. From a practical standpoint, the core idea in our method is applicable not only to vanilla recurrent neural networks and LSTMS as we showed, but also to a variety of machine learning models such as feed-forward networks BID46, random projections and boosting weak learners. Our future work encompasses exploring other machine learning models and on dynamically increasing the capacity of the models on the fly during training to have a perfect balance between computational efficiency and sample complexity. Given a sequence of T input vectors: x 0, x 1,..., x T −1, let us consider the operation at the hidden layer t of a recurrent neural network: DISPLAYFORM0 By the chain rule, DISPLAYFORM1 where σ is the non-linear activation function and J k+1 = diag(σ (z k+1)) is the Jacobian matrix of the non-linear activation function. DISPLAYFORM2 From equation 14 it is clear the norm of the gradient is exponentially dependent upon two factors along the time horizon:• The norm of the Jacobian matrix of the non-linear activation function J k+1.• The norm of the hidden to hidden weight matrix W.These two factors are causing the vanishing and exploding gradient problem. Since the gradient of the standard non-linear activation functions such as tanh and ReLU are bounded between, J k+1 does not contribute to the exploding gradient problem but it can still cause vanishing gradient problem. LSTM networks presented an elegant solution to the vanishing and exploding gradients through the introduction of gating mechanism. Apart from the standard hidden state in RNN, LSTM introduced one more state called cell state c t. LSTM has three different gates whose functionality is described as follows: DISPLAYFORM0 Decides what information to keep and erase from the previous cell state. DISPLAYFORM1 Decides what new information should be added to the cell state.• Output gate (W o, U o, b o):Decides which information from the cell state is going to the output. In addition to the gates, LSTM prepares candidates for the information from the input gate that might get added to the cell state through the action of input gate. 
Let's denote the parameters describing the function that prepares this candidate information as W c, U c, b c.Given a sequence of T input vectors: x 0, x 1,..., x T −1, at a time step t LSTM performs the following: DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 where σ and τ are the point-wise sigmoid and tanh functions. indicates element-wise multiplication. The first three are gating operations and the 4th one prepares the candidate information. The 5th operation updates the cell-state and finally in the 6th operation the output gate decided what to go into the current hidden state. Unitary evolution RNN (uRNN) proposed to solve the vanishing and exploding gradients through a unitary recurrent matrix, which is for the form: DISPLAYFORM0 Where: DISPLAYFORM1 Diagonal matrices whose diagonal entries are of the from D kk = e iθ k, implies each matrix have N parameters, (θ 0, . . ., θ N −1).• F and F −1: Fast Fourier operator and inverse fast Fourier operator respectively.• R 1, R 2: Householder reflections. DISPLAYFORM2 The total number of parameters for this uRNN operator is 7N and the matrix vector can be done N log(N) time. It is parameter efficient and fast but not flexible and suffers from the retention of noise and difficulty in optimization due its unitarity. Orthogonal RNN (oRNN) parametrizes the recurrent matrices using Householder reflections. DISPLAYFORM3 where DISPLAYFORM4 and DISPLAYFORM5 where DISPLAYFORM6 The number of parameters in this parametrization is O(N K). When N = K = 1 and v = 1, it spans the rotation subset and when v = −1, it spans the full reflection subset. Consider a matrix W ∈ C N ×N factorized as a Kronecker product of F matrices W 0,..., W F −1, DISPLAYFORM0 Where each W i ∈ C Pi×Qi respectively and DISPLAYFORM1 DISPLAYFORM2 Proof. DISPLAYFORM3 For simplicity here we use real number notations. Consider a dense matrix X ∈ R M ×K and a Kronecker factored matrix DISPLAYFORM0 The computational complexity first expanding the Kronecker factored matrix and then computing the matrix product is O(M N K). This can be reduced by exploiting the recursive definition of Kronecker matrices. For examples when N = K and ∀ f {P f = Q f = 2}, the matrix product can be computed DISPLAYFORM1 The matrix product in 29 can be recursively defined as DISPLAYFORM2 Please note that the binary operator is not the standard matrix multiplication operator but instead it denotes a strided matrix multiplication. The stride is computed according to the algebra of Kronecker matrices. Let us define Y recursively: DISPLAYFORM3 Combining equation 34 and 32 DISPLAYFORM4 We use the above notation for Y in the algorithm. That is the algorithm illustrated here will cache all the intermediate outputs (Y 0, . . ., Y F −1) instead of just Y F −1. These intermediate outputs are then later to compute the gradients during the back-propagation. This cache will save some computation during the back-propagation. If the model is just being used for inference then the algorithm can the organized in such a way that we do not need to cache the intermediate outputs and thus save memory. Algorithm for computing the product between a dense matrix and a Kronecker factored matrix34 is given below 1. All the matrices are assumed to be stored in row major order. For simplicity the algorithm is illustrated in a serial fashion. Please note the lines 4 to 15 except lines 9-11 can be trivially parallelized as it writes to independent memory locations. The GPU implementation exploits this fact. 
Algorithm 1 That is, the Kronecker layer is parametrized by a Kronecker factored matrix W = ⊗ F −1 f =0 W f stored as it factors {W 0, . . ., W F −1} and it takes an input X and produces output Y = Y F −1 using the algorithm 1.The following algorithm 2 computes the Gradient of the Kronecker factors: {gW 0, . . ., gW F −1} and the Jacobian of the input matrix gX given the Jacobian of the output matrix: gY = gY F −1.Algorithm 2 Gradient computation in a Kronecker layer. Input: Input matrix X ∈ R M ×K, Kronecker factors {W 0, . . ., W F −1}: W f ∈ R p f ×q f, Size of each Kronecker factors {(P 0, Q 0),..., (P F −1, Q F −1)}: DISPLAYFORM5
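The listings of Algorithms 1 and 2 and the CUDA kernels are not reproduced here, but the core idea, contracting one small factor at a time instead of materialising W, can be sketched in NumPy as follows. This is a reference implementation of the factor-by-factor product (it also handles rectangular factors), not the authors' in-place strided algorithm.

```python
import numpy as np

def kron_matvec(factors, x):
    # Computes y = (W_0 ⊗ W_1 ⊗ ... ⊗ W_{F-1}) x without forming the full matrix.
    # x is reshaped into a tensor with one axis per factor; each factor is then
    # contracted along its own axis.
    t = x.reshape([Wf.shape[1] for Wf in factors])
    for axis, Wf in enumerate(factors):
        t = np.tensordot(Wf, t, axes=([1], [axis]))  # contract the input index
        t = np.moveaxis(t, 0, axis)                  # put the output index back in place
    return t.reshape(-1)

# Check against the explicit Kronecker product on a small example.
rng = np.random.default_rng(1)
factors = [rng.normal(size=(2, 2)) for _ in range(3)]   # encodes an 8 x 8 matrix
x = rng.normal(size=8)
W_full = factors[0]
for Wf in factors[1:]:
    W_full = np.kron(W_full, Wf)
print(np.allclose(W_full @ x, kron_matvec(factors, x)))  # True
```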
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1YynweCb
Our work presents a Kronecker factorization of recurrent weight matrices for parameter-efficient and well-conditioned recurrent neural networks.
This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure. Representation learning is a fundamental problem in Machine learning and holds the promise to enable data-efficient learning and transfer to new tasks. Researchers working in domains like Computer Vision and Natural Language Processing have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks. A case in point is the example of the FC7 features from the AlexNet image classification architecture that have been used for many other vision problems . The effectiveness of learned representations has given new impetus to research in representation learning, leading to a lot of work being done on the development of techniques for inducing representations from data having desirable properties like disentanglement and compactness (; ; ;). Many popular techniques for generating representation are based on the Variational AutoEncoders (VAE) model . The use of deep networks as universal function approximators has facilitated very rapid advancements which samples generated from these models often being indistinguishable from natural data. While the quality of generated examples can provide significant convincing evidence that a generative model is flexible enough to capture the variability in the data distribution, it is far from a formal guarantee that the representation is fit for other purposes. In fact, if the actual goal is learning good latent representations, evaluating generative models only based on reconstruction fidelity and subjective quality of typical samples is neither sufficient nor entirely necessary, and can be even misleading. In this paper, we uncover the problematic failure mode where representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data. One example of such problematic behaviour can be seen in Figure 1. 
We identify a cause for this shortcoming in the classical Vari-ational Auto-encoder (VAE) objective, the evidence lower bound (ELBO), that fails to control the behaviour of the encoder out of the support of the empirical data distribution. We show this behaviour of the VAE can lead to extreme errors in the recovered representation by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure. Figure 1: An illustration of the intrinsic fragility of VAE representations. Outputs from a Variational Autoencoder with encoder f and decoder g parametrized by η and θ, respectively, trained on CelebA. Conditioned on the encoder input X a = x a the decoder output X = g(f (x a)) = (g • f)(x a) is shown on the top row. When the original example is perturbed with a carefully selected vector d such that X b = X a + d with d ≤, the output X turns out to be perceptually very different. Such examples suggest that either the representations Z a and Z b are very different (the encoder is not smooth), or the decoder is very sensitive to small changes in the representation (the decoder is not smooth), or both. We identify the source of the problem primarily as the encoder and propose a practical solution. It is clear that if learned representations are overly sensitive to irrelevant changes in the input (for example, small changes in the pixels of an image or video, or inaudible frequencies added to an audio signal), models that rely on these representations are naturally susceptible to make incorrect predictions when inputs are changed. We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations. Based on these observations, we make the following contributions: 1. We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal. We also show that in the case the target is chosen a pairwise conditional random field with attractive potentials, this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space. This insight provides us a flexible class of robustness metrics for controlling representations learned by VAEs. 2. We develop a modification to training algorithms for VAEs to improve robustness of learned representations, using an external selection mechanism for obtaining transformed examples and by enforcing the corresponding representations to be close. 
As a particular selection mechanism, we adopt attacks in adversarial supervised learning to attacks to the latent representation. Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks. 3. We show that alternative models proposed in the literature, in particular β-VAE model used for explicitly controlling the learned representations, or Wasserstein Generative Adversarial Networks (GANs) can also be interpreted in our framework as variational lower bound maximization. 4. We show empirically using simulation studies on MNIST, color MNIST and CelebA datasets, that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training. Modern generative models are samplers p(X|θ) for generating realizations from an ideal target distribution π(X), also known as the data distribution. In practice π(X) is unknown in the sense that it is hard to formally specify. Instead, we have a representative data set X, samples that are assumed to be conditionally independently drawn from the data distribution π(X) of interest. We will refer to the empirical distribution asπ(X) =, thereby also learning a generator. The VAE corresponds to the latent variable model p(X|Z, θ)p(Z) with latent variable Z and observation X. The forward model p(X|Z = z, θ) (the decoder) is represented using a neural network g with parameters θ, usually the mean of a Gaussian N (X; g(z; θ), vI x ) where v is a scalar observation noise variance and I x is an identity matrix. The prior is usually a standard Gaussian p(Z = z) = N (z; 0, I z). The exact posterior over latent variables p(Z|X = x, θ) is approximated by a probability model q(Z|X = x, η) with parameters η. A popular choice here is a multivariate Gaussian N (Z; µ(x; η), Σ(x; η)), where the mapping f such that (µ, Σ) = f (x, η) is chosen to be a neural network (with parameters η to be learned from data). We will refer to the pair f, g as an encoder-decoder pair. Under the above assumptions, VAE's are trained by maximizing the following form of the ELBO using stochastic gradient descent (SGD), The gradient of the Kullback-Leibler (KL) divergence term above (see A.1) is available in closed form. An unbiased estimate of the gradient of the first term can be obtained via sampling z from q using the reparametrization trick , aided by automatic differentiation. Under the i.i.d. assumption, where each data point x (n), for n = 1... N is independently drawn from the model an equivalent batch ELBO objective can be defined as where the empirical distribution of observed data is denoted asπ (See E.1 for a derivation). This form makes it more clear that the variational lower bound is only calculating the distance between the encoder and decoder under the support of the empirical distribution. To see how this locality leads to a fragile representation, we construct a VAE with discrete latents and observations. We let X ∈ {1, . . ., N x} and Z ∈ {1, . . ., N z} and define the following system of conditional distributions as the decoder and encoder models as: where ω(u) = cos(2πu). These distributions can be visualized by heatmaps of probability tables where i and j are row and column indicies, respectively Figure 2. 
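The exact conditional tables of this toy construction are not reproduced above, so the sketch below should be read as one plausible von-Mises-like parametrization consistent with the description (ω(u) = cos 2πu, a mean parameter per row, a fixed concentration). It builds tabular encoder and decoder distributions, draws a multinomial 'dataset' s, and evaluates the batch ELBO over the empirical counts, whose terms vanish wherever s_i = 0.

```python
import numpy as np

def vonmises_table(means, concentration, n_out):
    # Rows are conditional distributions over {0, ..., n_out-1}, each peaked at a
    # circular mean; the normalisation and concentration here are assumptions.
    grid = np.arange(n_out) / n_out
    logits = concentration * np.cos(2 * np.pi * (grid[None, :] - means[:, None]))
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def batch_elbo(s, q_zx, p_xz, p_z):
    # Batch ELBO over the empirical counts s_i; rows with s_i = 0 contribute nothing,
    # which is exactly the locality problem discussed in the text.
    ll = np.log(p_xz.T + 1e-12)                                   # log p(X=i | Z=j)
    kl = np.sum(q_zx * (np.log(q_zx + 1e-12) - np.log(p_z + 1e-12)), axis=1)
    return float(np.sum(s * (np.sum(q_zx * ll, axis=1) - kl)) / s.sum())

Nx, Nz = 60, 40
rng = np.random.default_rng(0)
p_x_given_z = vonmises_table(rng.uniform(size=Nz), 8.0, Nx)       # decoder table (Nz x Nx)
q_z_given_x = vonmises_table(rng.uniform(size=Nx), 8.0, Nz)       # encoder table (Nx x Nz)
p_z = np.ones(Nz) / Nz                                            # uniform prior
s = rng.multinomial(200, np.ones(Nx) / Nx)                        # a sampled 'dataset'
print(batch_elbo(s, q_z_given_x, p_x_given_z, p_z))
```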
This particular von Mises-like parametrization is chosen to avoid boundary effects due to the finite latent and observable spaces. The prior p(Z) is taken as uniform, and is not shown. Note that this parametrization emulates a high-capacity network that can model any functional relationship between latent states and observations, while being qualitatively similar to a standard VAE model with conditionally Gaussian decoder and encoder functions. In reality, the true target density is not available but we would have a representative sample. To simulate this scenario, we sample a 'dataset' from a discrete target distribution π(X): this is merely a draw from a multinomial distribution, yielding a multinomial vector s with entries s_i that give the count of how many times we observe x = i. The results of such an experiment are depicted in Figure 3(a) (see caption for details). This picture reveals several important properties of a VAE approximation. Figure 3: (a) Results obtained by optimizing the ELBO for a VAE, illustrating the fragility of the encoder. The subfigure with the title 'Data' (π̂(X)) is a random sample from the true target 'Target' (π(X)) on the right. The resulting encoder q(Z|X) and decoder p(X|Z) are shown as 'Q' and 'P', respectively. The vertical and horizontal axes correspond to latents Z and observations X, respectively. Integrating over the decoder distribution using a uniform prior p(Z) over the latents, we obtain the model marginal. (b) The results obtained by a smooth encoder. Both the decoder and the representation (encoder) are smoother while essentially having a similar fitting quality. 1. After training, we observe that when j and j′ are close, the corresponding conditionals p(X|Z = j) and p(X|Z = j′) are close (hence the corresponding decoder mean parameters m_j and m_j′ are close; see the middle panel of Fig. 3(a) with the title P showing the decoder). This smoothness is perhaps surprising at first sight: in this example, we could arbitrarily permute columns of the decoder and still get the same marginal distribution. Technically speaking, given a uniform prior p(Z), the marginal likelihood p(X|θ) is entirely invariant with respect to permutations of the latent state. In fact, if the encoder distribution were not constrained, we could also permute the columns of the encoder to keep the ELBO invariant. In Appendix E.2, we provide an argument for why the choice of a unimodal encoder model and optimization of the variational objective lead naturally to smooth decoder functions. 2. The encoders found by the VAE, on the other hand, are not smooth at all, despite the fact that the model shows a relatively good fit. This behaviour cautions us against judging generative models only by the quality of their samples, obtained by traversing the latent space and generating conditional samples from the decoder. The quality of the decoder does not seem to be a proxy for the robustness of the representation. The fragility of representations is inherited from the ELBO objective. For the entire dataset, a batch ELBO that involves the counts s_i can be written as a sum over symbols i weighted by s_i, whose last expression is proportional to the negative KL divergence between two tabular distributions. As such, whenever s_i is zero, the contribution of row i of the encoder distribution vanishes and the corresponding parameters µ_i and σ_i do not affect the lower bound. In a sense, the objective does not enforce any structure on the encoder outside of the positions of the data points in the training set.
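A small numerical sketch can make the last point concrete. The exact batch ELBO expression is not reproduced above, so the form below, a count-weighted sum of per-symbol tabular ELBOs under a uniform prior, is an assumed instantiation consistent with the description; rows of the encoder table with s_i = 0 contribute nothing and therefore receive no gradient.

import numpy as np

def batch_elbo(P, Q, s, prior=None):
    # P[i, j] = p(X=i | Z=j), Q[i, j] = q(Z=j | X=i), s[i] = observation count of symbol i
    Nx, Nz = Q.shape
    prior = np.full(Nz, 1.0 / Nz) if prior is None else prior
    per_symbol = (Q * (np.log(P + 1e-12) + np.log(prior)[None, :] - np.log(Q + 1e-12))).sum(axis=1)
    return float((s * per_symbol).sum())

Nx, Nz = 40, 20
P = np.random.dirichlet(np.ones(Nx), size=Nz).T              # columns sum to one: p(X | Z=j)
Q = np.random.dirichlet(np.ones(Nz), size=Nx)                 # rows sum to one:    q(Z | X=i)
s = np.random.multinomial(200, np.random.dirichlet(np.ones(Nx)))
value = batch_elbo(P, Q, s)
# Perturbing any row Q[i] with s[i] == 0 leaves `value` (and its gradients) unchanged.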
Figure 3 also shows that in the out-of-sample regime (i.e., for i where π̂(X = i) = 0) the encoder is entirely initialization dependent, hence no learning takes place. We would also expect that the resulting representations would be fragile, in the sense that a small perturbation of an observation can result in a large change in the encoder output. In this section, we will adopt a strategy for training the encoder that is guaranteed not to change the original objective of the decoder when maximizing the lower bound, while obtaining a smoother representation. The key idea of our approach is that we assume an external selection mechanism that is able to provide a new fictive data point x′ in the vicinity of each observation x in our data set. Here, "in the vicinity" means that we desire that the corresponding latent state of the original data point z = f(x; η) and the latent state of the fictitious point z′ = f(x′; η) should be close to each other in some sense. Assuming the existence of such an external selection mechanism, we first define an augmented distribution that is a pairwise conditional Markov random field (CRF) model, where we take c(Z_a, Z_b) as a pairwise cost function. A natural choice here would be, for example, the squared Euclidean distance ||Z_a − Z_b||². Moreover, we choose a nonnegative coupling parameter γ ≥ 0. For any pairwise distribution Q(Z_a, Z_b), the ELBO takes a form that may suggest the SE has to maintain a pairwise approximation distribution Q(Z_a, Z_b). However, this turns out not to be necessary. Given the encoder, the marginals of Q are fixed, so the only remaining terms that depend on the pair distribution are the final two terms of the bound. We note that these two terms are just the objective function of the entropy-regularized optimal transport problem. If we view Q(Z_a, Z_b) as a transport plan, the first term is maximal when the expected cost is minimal, while the second term is maximal when the variational distribution factorizes. In retrospect, this link is perhaps not that surprising, as the Wasserstein distance, the solution of the optimal transport problem, is itself defined as the solution to a variational problem: consider a set Γ of joint densities Q(Z_a, Z_b) with the property that Q has fixed marginals. The Wasserstein divergence¹, denoted by WD, is defined as the solution of the optimization problem with respect to the pairwise distribution Q, where c(Z_a, Z_b) is a function that specifies the 'cost' of transferring a unit of probability mass from Z_a to Z_b. It is important to note that with our choice of the particular form of the variational distribution Q(Z_a, Z_b) we can ensure that we are still optimizing a lower bound of the original problem. We can achieve this by simply integrating out X′, effectively ignoring the likelihood term for the fictive observations. Our choice does not modify the original objective of the decoder, due to the fact that the marginals are fixed given η. To see this, take the exponent of the bound and integrate over the unobserved X′: log p(X = x|θ) = log ∫ dX′ p(X = x, X′|θ). We name the resulting lower bound B_SE the Smooth Encoder ELBO (SE-ELBO). The gradient of B_SE with respect to the decoder parameters θ is identical to the gradient of the original VAE objective B. This is intuitive: as x′ is an artificially generated sample, we should use only terms that depend on x and not on x′. Another advantage of this choice is that it is possible to optimize the decoder and encoder concurrently, as in the standard VAE.
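For intuition about the additional encoder regularizer that this construction produces, the following sketch computes the squared 2-Wasserstein distance between two diagonal Gaussian encoder marginals; this is the closed form the unregularized transport cost takes for Gaussians with commuting covariances (see Appendix A), while the entropic correction used in the paper is omitted here for brevity.

import numpy as np

# Sketch, not the paper's exact regularizer: W2^2 between N(mu_a, diag(var_a)) and N(mu_b, diag(var_b)).
def w2_squared_diag_gauss(mu_a, var_a, mu_b, var_b):
    mean_term = np.sum((mu_a - mu_b) ** 2)
    cov_term = np.sum((np.sqrt(var_a) - np.sqrt(var_b)) ** 2)
    return mean_term + cov_term

# Example: marginals q(Z|x_a) and q(Z|x_b) produced by an amortized encoder.
mu_a, var_a = np.zeros(8), np.ones(8)
mu_b, var_b = 0.1 * np.ones(8), 1.2 * np.ones(8)
penalty = w2_squared_diag_gauss(mu_a, var_a, mu_b, var_b)   # enters the SE-ELBO scaled by gamma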
Only an additional term enters for the regularization of the encoder, where the marginals obtained via amortized inference, q(Z_a|x_a, η) and q(Z_b|x_b, η), are forced to be close in a regularized Wasserstein distance sense, with coupling strength γ. Effectively, we are doing data augmentation for smoothing the representations obtained by the encoder without changing the actual data distribution. In Appendix E.3, we also provide an argument about the smoothness of the corresponding encoder mapping, justifying the name. The resulting algorithm is a simple modification to the standard VAE and is summarized below. Adversarial attacks are one of the most popular approaches for probing trained models in supervised tasks, where the goal of an adversarial attack is finding small perturbations to an input example that would maximally change the output, e.g., flip a classification decision or significantly change a prediction. The perturbed input is called an adversarial example, and these extra examples are used, along with the original data points, for training adversarially robust models. As extra samples are also included, such a training procedure is referred to as data augmentation. However, in unsupervised learning and density estimation, data augmentation is not a valid approach, as the underlying empirical distribution would be altered by introducing new points. (¹We use the term divergence to distinguish the optimal transport cost from the corresponding metric. This distinction is reminiscent of the distinction between the Euclidean divergence ||·||² and the Euclidean distance ||·||.) However, as we let the encoder target a different distribution than the actual decoder, we can actually use the extra, self-generated samples to improve desirable properties of a given model. Hence this approach could also be interpreted as a 'self-supervised' learning approach, where we bias our search for a 'good encoder' and the data selection mechanism acts like a critic, carefully providing examples that should lead to similar representations. In this paper we will restrict ourselves to Projected Gradient Descent (PGD) attacks, popular in adversarial training, as a selection mechanism, where the goal of the attacker is to find a point that introduces the maximum difference in the latent representation as measured by the Wasserstein distance. In other words, we implement our selection mechanism by approximately solving a constrained optimization problem: maximize the latent Wasserstein distance subject to a norm bound on the perturbation. This attack is assigned a certain iteration budget L for a given radius, which we refer to as the selection iteration budget and the selection radius, respectively. We note that a similar attack mechanism has been proposed for generative models, where one of the proposed attacks directly optimizes against differences between source and target latent representations. Note that our method is not restricted to a particular selection mechanism; indeed, any two inputs that should give a similar latent representation could be used as candidates. Goal and protocol. In our experiments, we have tested and compared the adversarial accuracy of representations learned using a VAE and our smooth encoder approach. We adopt a two-step experimental protocol, where we first train encoder-decoder pairs agnostic to any downstream task.
Then we fix the representation, that is, we freeze the encoder parameters and only use the mean of the encoder as the representation, and train a simple linear classifier based on the fixed representation using standard techniques. In this supervised stage, no adversarial training technique is employed. Ideally, we hope that such an approach will provide a degree of adversarial robustness without the need for a costly, task-specific adversarial training procedure. To evaluate the robustness of the resulting classifier, for each data point in the test set we search for an adversarial example using an untargeted attack that tries to change the classification decision. The adversarial accuracy is reported as the percentage of examples for which the attack is not able to find an adversarial example. The VAE and SE decoder and encoder are implemented using standard MLP and ConvNet architectures. The selection procedure for SE training is implemented as a projected gradient descent optimization (a PGD attack) with a selection iteration budget of L iterations, maximizing the Wasserstein distance between q(Z|X = x) and q(Z|X = x + δ) with respect to the perturbation δ, where ||δ||_∞ is bounded by the selection radius. Further details about the experiment can be found in Appendix C.1. We run simulations on ColorMNIST, MNIST and CelebA datasets. ColorMNIST is constructed from the MNIST dataset by coloring each digit artificially with colors corresponding to seven of the eight corners of the RGB cube (excluding black). We present the results with the strongest attack we have experimented with: a PGD attack with 100 iterations and 10 restarts. We observe that for weaker attacks (such as 50 iterations with no restarts), the adversarial accuracy is typically much higher. For the ColorMNIST dataset, the results are shown in Figure 4, where we test the adversarial accuracy of representations learned by our method and compare it to a VAE. We observe that the adversarial accuracy of a VAE representation quickly drops towards zero, while SE can maintain adversarial accuracy in both tasks. In particular, we observe that for the arguably simpler color classification task, we are able to obtain close to perfect test accuracy using representations learned by both the VAE and SE. However, when the classifiers are attacked using PGD, the adversarial accuracy quickly drops with increasing radius, while the accuracy degrades more gracefully in the SE case. In Figure 5, we show the robustness behaviour of the method for different architectures. A ConvNet seems to perform relatively better than an MLP, but these results show that the VAE representation is not robust, irrespective of the architecture. We have also carried out controlled experiments with random selection instead of the more costly untargeted adversarial attacks (see Appendix C.1, Figure 7(a) for further results). We observe some limited improvements in adversarial accuracy with SE using random selection compared to the VAE, but training an SE with adversarial selection seems to be much more effective. We note that the selection iteration budget was lower (L = 20 with no restarts) than the attack iteration budget (100 with 10 restarts) during evaluation. It was not practical to train the encoder with more powerful selection attacks, so it remains to be seen whether the tendency of increased accuracy with increased iteration budgets would continue.
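As a concrete illustration of the selection step used during SE training described above, here is a minimal PGD sketch that perturbs an input within an L∞ ball so as to maximize the (diagonal Gaussian) Wasserstein distance between the encoder marginals at x and at x + δ. The encoder interface, the use of the closed-form diagonal W2 from the earlier sketch, and the step size are assumptions for illustration, not the exact training code.

import torch

def select_fictive_point(encoder, x, radius=0.1, step=0.02, L=20):
    # Assumes encoder(x) returns (mean, log-variance) of a diagonal Gaussian over Z.
    mu_a, logvar_a = encoder(x)
    mu_a, logvar_a = mu_a.detach(), logvar_a.detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(L):
        mu_b, logvar_b = encoder(x + delta)
        w2 = ((mu_a - mu_b) ** 2).sum() + \
             ((logvar_a.mul(0.5).exp() - logvar_b.mul(0.5).exp()) ** 2).sum()
        grad, = torch.autograd.grad(w2, delta)
        with torch.no_grad():
            delta += step * grad.sign()          # ascent step: maximize the latent W2
            delta.clamp_(-radius, radius)        # project onto the L_inf ball (selection radius)
    return (x + delta).detach()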
We also observe that essentially the same level of adversarial accuracy can be obtained with a small fraction of the available labels (see Appendix C.1, Figure 8 for further results). We have also repeated our experiments on the CelebA dataset, a large collection of high-resolution face images labeled with 40 attribute labels per example. We have used 17 of the attribute labels as the targets of 17 different downstream classification tasks. The results are shown in Table 1. The results clearly illustrate that we can achieve much more robust representations than a VAE. It is also informative to investigate specific adversarial examples to understand the failure modes. In Figure 6 we show two illustrative examples from CelebA. Here we observe that attacks on the SE representations are much more structured and semantically interpretable. In our exploratory investigations, we qualitatively observe that the reconstructions corresponding to the adversarial examples are almost always legitimate face images with clearly recognizable features. This also seems to support our earlier observation that VAE decoders are typically smooth while the encoders are inferring non-robust features. Our approach seems to be a step towards obtaining more robust representations. Figure 6: (a) In the VAE case, the attacker is able to find a perturbation that maps the representation to that of a bearded person. However, the perturbation here does not seem to have a particular visible structure. (b) The SE representation is attacked by a perturbation that can clearly be identified as drawing a beard on the image. In this case, the attack is able to fool the classifier, and the image generated from the representation is that of a person with beard and mustache. In the second example (c), the VAE representation seems to be attacked by exploiting the non-smooth nature of the encoder; the attacker is able to identify an adversarial example that maps the latent representation to one in the vicinity of a clearly different person with the desired features, as can be seen from the corresponding reconstruction. In contrast, in the SE case (d), the attack adds a much more structured perturbation, and in this example it was actually not successful in switching the decision. Additionally, from the reconstruction it is evident that a latent feature is attacked that seems to control the receding hairline. Table 1: Comparison of nominal (Nom) and adversarial (Adv) accuracy (in percentage) on 17 downstream tasks using a VAE and an SE trained with a selection radius of 0.1. The experiment is carried out using the experimental protocol described in the text (Section 4). The adversarial evaluation on CelebA uses an attack radius of 0.1 and an attack iteration budget of 100 with 10 restarts. The literature on deep generative models and representation learning is quite extensive and rapidly expanding. There is a plethora of models, but some approaches have been particularly popular in recent years: Generative Adversarial Networks (GANs) and VAEs. While the connection of our approach to VAEs is evident, there is also a connection to GANs. In the appendix, we provide details showing that a GAN decoder can be viewed as an instance of a particular smooth encoder. Our method is closely related to the β-VAE, which, in order to control the learned representations, replaces the original variational objective with another one that explicitly trades data fidelity against prior fidelity. In the appendix, we show that this method can also be viewed as an instance of the smooth encoders.
Wasserstein distance minimization has been applied in generative models as an alternative objective for fitting the decoder. Following the general framework sketched in, the terms of the variational decomposition of the marginal likelihood can be modified in order to change the observation model or the regulariser. For example, Wasserstein AutoEncoders (WAE), or sliced propose to replace data fidelity and/or the KL terms with a Wasserstein distance. Our approach is different from these approaches as we do not propose to replace the likelihood as a fundamental principle for data fitting. In contrast, the Wasserstein distance formulation naturally emerges from the particular model choice and the corresponding variational approximation. Our approach involves an adversarial selection step. The word'Adversarial' is an overloaded term in generative modelling so it is important to mention differences between our approach. Adversarial Variational Bayes is a well known technique in the literature that aims to combine the empirical success of GANs with the probabilistic formulation of VAEs, where the limiting functional form of the variational distribution can be replaced by blackbox inference . This approach also does not modify the original VAE objective, however, the motivation here is different as the aim is developing a more richer family. In our view, for learning useful representations, when the decoder is unknown, the advantage of having a more powerful approximating family is not clear yet; this can even make the task of learning a good representation harder. Adversarial Autoencoders , Adversarially Learned Inference (ALI) and BiGANs (Bidirectional GANs) are also techniques that combine ideas from GANs and VAEs for learning generative models. The key idea is matching an encoder process q(z|x)p(x) and to the decoder process p(z)p(x|z) using an alternative objective, rather than by minimizing the KL divergence as done by the variational inference (see (??)). In this formulation, p(x) is approximated by the empirical data distribution, and p(z) is the prior model of a VAE. The encoder q(z|x) and decoder p(x|z) are modelled using deep networks. This approach is similar to Wasserstein autoencoders that propose to replace the likelihood principle. The idea of improving VAEs by capturing the correlation structure between data points using MRFs and graphical models has been also been recently proposed under the name Correlated Variational Auto-Encoders (CVAEs). Our approach is similar, however we introduce the correlation structure not between individual data points but only between true data points and artificially selected data points. We believe that correctly selecting such a correlation structure of the individual data points can be quite hard in practice, but if such prior knowledge is available, CVAE can be indeed a much more powerful model than a VAE. We note that a proposal for automatically learning such a correlation structure is also recently proposed by . In this paper, we have introduced a method for improving robustness of latent representations learned by a VAE. It must be stressed that our goal is not building the most powerful adversarially robust supervised classifier, but obtaining a method for learning generic representations that can be used for several tasks; the tasks can be even unknown at the time of learning the representations. 
While the nominal accuracy of an unsupervised approach is expected to be inferior to a supervised training method that is informed by extra label information, we observe that significant improvements in adversarial robustness can be achieved by our approach that forces smooth representations. The KL divergence between two Gaussian distributions translates to a well known divergence in the parameters (in the general case this is a Bregman divergence) where P a = N (µ a, Σ a) and P b = N (µ b, Σ b) are Gaussian densities with mean µ · and covariance matrix Σ ·, and | · | denotes the determinant for a matrix argument, and Tr denotes the trace. The KL divergence consists of two terms, the first term is the scale invariant divergence between two covariance matrices also known as a Itakuro-Saito divergence and the second term is a Mahalonobis distance between the means. The KL divergence is invariant to the choice of parametrization or the choice of the coordinate system. Consider a set Γ of joint densities Q(Z a, Z b) with the property that Q has fixed marginals Q a (Z a) and The Wasserstein divergence WD is defined as the solution of the optimization problem with respect to pairwise distribution Q where c(z a, z b) is a function that specifies the'cost' of transferring a unit of probability mass from z a to z b. The 2 -Wasserstein distance W 2 2 for two Gaussians has an interesting form. The optimum transport plan, where the minimum of is attained, is given where b. It can be checked that this optimal Guassian density is degenerate in the sense that there exists a linear mapping between z a and z b: where A 1/2 denotes the matrix square root, a symmetric matrix such that (A 1/2) 2 = A for a symmetric positive semidefinite matrix A. The 2 -Wasserstein distance is the value attained by the optimum transport plan Entropy Regularized 2 -Wasserstein is the value attained by the minimizer of the following functional where H is the entropy of the joint distribution Q. Using the form in subject to the semidefinite constraint The entropy of a Gaussian Q(z a, z b) is given by the Schur formula Here, D is the dimension of the vector (z a, z b). The entropy regularized problem has a solution where we need to minimizẽ Taking the derivative and setting to zero we obtain a particular Matrix Ricatti equation that gives us a closed form formula for the specific entropy regularized Wasserstein distance For the case of two univariate Gaussians, i.e., when the joint distribution has the form the solution is given by the solution of the scalar quadratic equation. We take the root that gives a feasible solution as the minimizer. In the scalar case, this is the solution that satisfies where we have defined It can be easily checked that the other root is infeasible. For the scalar ψ case we obtain where D z is the dimension of the latent representation, and µ k and Σ k are the k'th component of the output of a neural network with parameters η. Similarly, x i denotes the i'th component of the observation vector x of size D x. For optimization, we need an unbiased estimate of the gradient of the SE-ELBO with respect to encoder parameters η and decoder parameters θ: Given x, we first select a fictive sample x via a selection mechanism, in this case as an adversarial attack as explained in section 3.1. 
Sample a latent representation and calculate the associated prediction The terms of the SE-ELBO can be calculated as We always train decoder-encoder pairs with identical architectures using both the standard VAE ELBO and the SE ELBO with a fixed γ. Then, in each case by fixing the encoder (that is essentially using the same representation) and by only using the mean activations of the encoders, we train linear classifiers using standard training for solving several downstream tasks. For both encoder and decoder networks we use a 4 layer multi layer perceptron (MLP) and a convolutional network (ConvNET) architectures with 200 units of ReLU activations at each layer. We carried out experiments with latent space dimensions of 32, 64 and 128, corresponding to an output sizes of an encoder with 64, 128 and 256 units, with two units per dimensions to encode the mean and the log-variance parameters of a fully factorized Gaussian condition distribution. The training is done using the Adam optimizer. Each network (both the encoder and decoder) are randomly initialized and trained for 300K iterations. GANs are presented as neural sampling models of observations x of form x = f (ζ; η) where f is typically a deep neural network with parameters η, and ζ is a realization from some simple distribution p(Z). In the context of GANs, the function f is called a generator. When the dimension of x is bigger than the dimension of ζ, the density p(x) induced by the transformation f is inevitably a degenerate distribution. Since f is continuous, and it is concentrated on a subset of the data space X f ≡ {x : ∃ζ, x = f (ζ; η)}. Our use of letter f and parameters η is deliberate and we will illustrate in the sequel that the generator network of a GANs is actually analogous to a smooth encoder, where the roles of the latent variables and observations are switched, but we will first review GANs. To fit a degenerate distribution to a dataset, the GAN approach adopts a strategy where the generator is co-trained with a second neural network d(x; w) with parameters w with the following objective where D real (x) is the empirical data distribution. This objective is (unfortunately) referred as an adversarial objective in the literature, not to be confused with adversarial attack mechanism in the context of supervised learning. The function d is called a discriminator. After replacing expectations with finite sample averages, this objective enforces that in a dataset that contains both synthetically generated (fake) and real examples, the classification function d should increase the correct classification rate by discriminating fakes from real examples while the generator f should decrease the detection rate of fake examples. When 0 ≤ d(·) ≤ 1, which is the case for a classifier, one can also write the objective as where l(x; w) = log d(x; w). This form also highlights an interesting alternative formulation and an interpretation in terms of optimal transport. In fact, not long after the seminal work of , the mechanism beyond the GAN objective and its direct connection to the theory of optimal transport has been recognized by the seminal paper where the problem is further framed as with the constraint that |l(x; w) − l(x; w)| ≤ c(x,x), i.e. l is a Lipschitz function for some L where c(x,x) ≤ L x −x. Here, D fake (x; θ) is the fitted density ofx = f (ζ; η). This is the dual formulation of the optimal transport problem, that can be understood as an economic transaction between a customer and a shipment company. 
Here, the difference l(x; w) − l(x; w) can be interpreted as the profit made by the company for the shipment of one unit of mass from x and tox, and the Lipschitz condition ensures that it makes still sense for the customer to make use of the services of the company rather than simply doing the transport of her own. The customer wants to pay less, so she should minimize the profit of the company. This can be achieved by changing the desired delivery distribution D fake by adjusting θ, so that the transfer from the fixed source distribution D real is minimized. Ideally, when D fake = D real, there is nothing to transfer and no cost is incurred. This objective also minimizes the Wasserstein distance between the actual data distribution D real and the fake data distribution D fake as given by the generator. Once the GAN objective can be viewed as minimizing a particular Wasserstein distance, it is rather straightforward to view it as a maximizer of a particular ELBO corresponding to a particular smooth encoder, albeit in one where the positions of the observations and the latents are exchanged and a very large coupling coefficient γ is chosen. Moreover, the variational marginals have specific forms: One marginal Q a (X) is chosen as the empirical data distribution and the other marginal is chosen as having the form of a neural sampler The artificial extended target becomes It can be seen that the ELBO in this case becomes Now, by taking the coupling γ sufficiently large, the coupling term dominates the lower bound and we obtain the Wasserstein minimization objective. The random draws from p(Z) become the selection mechanism. Moreover, the terms that depend on the artificial target p(Z|X, θ) become also irrelevant so in this regime the problem becomes just solving the optimal transport problem between Q a and Q b. A link between entropic GANs and VAEs is also pointed at in the literature, albeit for calculating a likelihood for. However, our motivations as well as the interpretation of the connection is quite different and we view the GAN decoder as an instance of the smooth encoder. Targeting the encoder to an augmented distribution different than the decoder us the freedom to express some extensions of VAE in the same framework. One of such extensions is the β-VAE, quite often used for controlling representations replaces the original variational objective with the following objective The justification in the original paper is obtained from an implicit robustness criteria where D KL (q(Z|X a = x, η)||p(Z)) < and β appears in a Lagrangian formulation. have also provided an alternative justification. In our formulation, β can be simply interpreted as a dispersion term that is related to the number of points selected by the selection mechanism. To see this, suppose the selection mechanism chooses β − 1 points x b,i where i = 1... β − 1 that are identical to the true observation x = x b,i = x i for i = 1... β − 1. Now, instead of integrating out Z 1:β−1, we choose a variational distribution with identical marginals of form The variational lower bound becomes identical to the β-VAE objective as where the last step follows due to the functional form of the variational distribution. E TECHNICAL In section 2.2, we have defined a batch ELBO. To see the connection to VAE ELBO we first define the empirical data distribution π(X) = 1 N N i=1 δ(X − x i). 
We can now write log p(X = x|θ) ≥ E {log p(X = x|Z, θ)} q(Z|X=x,η) − D KL (q(Z|X = x, η)||p(Z)) ≡ B x (η, θ) E {log p(X = x i |Z, θ)} q(Z|X=xi,η) E {log q(Z|X = x i, η)} q(Z|X=xi,η) E {log p(Z)} q(Z|X=xi,η) = E {log p(X|Z, θ)} q(Z|X,η)π(X) − E {log q(Z|X, η)} q(Z|X,η)π(X) +E {log p(Z)} q(Z|X,η)π(X) −E {log π(X)} q(Z|X,η)π(X) + E {log π(X)} π(X) = −D KL (q(Z|X, η)π(X)||p(X|Z, θ)p(Z)) + const This shows that the ELBO is minimizing the KL distance between one exact and one approximate factorization of the joint distribution p(X, Z) = p(X|Z, θ)p(Z) ≈ q(Z|X, η)π(X). In the context of a VAE, the smoothness of the decoder is implicitly enforced by the highly constrained encoder distribution and the dynamics of an SGD based training. In the sequel, we will illustrate that, if two latent coordinates are sufficiently close, the decoder mean mapping is forced to be bounded. In a standard VAE, the encoder output for each data point is conditionally Gaussian as q(Z|X = x; η) = N (f µ (x; η), f Σ (x; η)). The decoder is chosen as p(X|Z = z; η) = N (g(z; θ), vI). The decoder parameters θ under the ELBO depend only on the data fidelity term x − g(z; θ) 2 /v. For a moment, assume that the encoder is fixed and focus on a single data point x. During training, a set of latent state vectors z i for i = 1... T are sampled from the conditionally Gaussian encoder distribution. When the dimension of the latent space D z is large, these samples z i will be with high probability on the typical set. The typical set of a nondegenerate Gaussian distribution is approximately the surface of a Mahalanobis ball, a compact hyper-ellipsoid M (x) centered at f µ (x; η) with scaling matrix f Σ (x; η) 1/2. If we assume that the training procedure is able to reduce the error in the sense that x − g(z i ; θ) ≤ E for all z i where E is a bound on the error magnitude for z i sampled from the encoder, the decoder is forced to give the same output for each point approximately on M (x). For a point z a drawn from q(Z|X = x; η) we have z a − f µ (x; η) K ≈ D z with high probability where K = f Σ (x; η) −1 and x K ≡ √ x Kx. For a point z b independently drawn from q(Z|X = x; η), by the triangle inequality we have g(z a ; θ) − g(z b ; θ) ≤ 2E where the Mahalanobis distance where λ min is the smallest eigenvalue of the covariance matrix. Hence the distance is also bounded when the variance is not degenerate and minimum distance will be on the order of z a − z b ≈ 2 √ D z λ min so we expect the ratio to be bounded We see that the ELBO objective enforces the decoder to be invariant on the typical set of q(Z|X = x; η), where most of the probability mass is concentrated. Now, for each data point x, the corresponding latent space hyper-ellipsoid M (x) are forced to be large in the sense of having a large determinant by the entropy term of the encoder that promotes large log-determinant. The size of M (x) is also controlled by the prior fidelity term, avoiding blowing up. Hence the union ∪ x∈X M (x), where X is the dataset, will approximately cover the latent space when the encoder has converged and on each hyper-ellipsoid M (x) the decoder will be enforced to be smooth. In this section we show that the smooth encoder training forces a small Lipschitz constant for the encoder mean mapping. To simplify the argument, we will assume that the variance mapping of the encoder would be a constant function that does not vary with x, i.e., f Σ (x; η) = Σ(η). 
The latter assumption could be removed by considering a metric on the joint space of means and covariances. Using the adversarial selection mechanism, during training we solve the following problem using PGD: x* = arg max_{x′: ||x′ − x||_p ≤ ε} WD(q(Z|X = x, η), q(Z|X = x′, η)). Assuming that PGD finds the global maximum at the boundary of the ε-ball, where ||x − x*||_p = ε, under the constant variance assumption for the encoder we can see that the Wasserstein divergence simply becomes the squared distance between the mean mappings, WD(q(Z|X = x, η), q(Z|X = x*, η)) = ||f_µ(x; η) − f_µ(x*; η)||²₂. We know that the SE-ELBO objective has to minimize this distance for any coupling term γ, so the procedure actually tries to reduce the local Lipschitz constant L(x) around the data point x, L(x) = ||f_µ(x; η) − f_µ(x*; η)|| / ||x − x*||_p ≤ E/ε, and thus promotes smoothness, where E is an upper bound on the change in the representation, ||f_µ(x; η) − f_µ(x*; η)|| ≤ E.
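To close the loop on this argument, a tiny diagnostic sketch: it estimates the local Lipschitz quantity L(x) above from a data point and its adversarially selected neighbour. The encoder-mean function and the way x* is obtained (e.g., the PGD selection sketched earlier) are assumptions of the sketch rather than prescriptions.

import numpy as np

# Empirical local Lipschitz estimate of the encoder mean mapping around x.
def local_lipschitz(encoder_mean, x, x_star, p=np.inf):
    num = np.linalg.norm(encoder_mean(x) - encoder_mean(x_star), ord=2)
    den = np.linalg.norm((x - x_star).ravel(), ord=p)
    return num / max(den, 1e-12)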
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1gfFaEYDS
We propose a method for computing adversarially robust representations in an entirely unsupervised way.
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows to model complex interactions while being more global in its search compared to other greedy approaches. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference. Structure learning and causal inference have many important applications in different areas of science such as genetics, biology and economics. Bayesian networks (BN), which encode conditional independencies using directed acyclic graphs (DAG), are powerful models which are both interpretable and computationally tractable. Causal graphical models (CGM) are BNs which support interventional queries like: What will happen if someone external to the system intervene on variable X? Recent work suggests that causality could partially solve challenges faced by current machine learning systems such as robustness to out-of-distribution samples, adaptability and explainability. However, structure and causal learning are daunting tasks due to both the combinatorial nature of the space of structures and the question of structure identifiability. Nevertheless, these graphical models known qualities and promises of improvement for machine intelligence renders the quest for structure/causal learning appealing. The problem of structure learning can be seen as an inverse problem in which the learner tries to infer the causal structure which has generated the observation. In this work, we propose a novel score-based method for structure learning named GraN-DAG which makes use of a recent reformulation of the original combinatorial problem of finding an optimal DAG into a continuous constrained optimization problem. In the original method named NOTEARS, the directed graph is encoded as a weighted adjacency matrix W which represents coefficients in a linear structural equation model (SEM). To enforce acyclicity, the authors propose a constraint which is both efficiently computable and easily differentiable. Most popular score-based methods for DAG learning usually tackle the combinatorial nature of the problem via greedy search procedures relying on multiple heuristics. Moving toward the continuous paradigm allows one to use gradient-based optimization algorithms instead of handdesigned greedy search algorithms. Our first contribution is to extend the work of to deal with nonlinear relationships between variables using neural networks (NN). GraN-DAG is general enough to deal with a large variety of parametric families of conditional probability distributions. To adapt the acyclicity constraint to our nonlinear model, we use an argument similar to what is used in and apply it first at the level of neural network paths and then at the level of graph paths. Our adapted constraint allows us to exploit the full flexibility of NNs. 
On both synthetic and real-world tasks, we show GraN-DAG outperforms other approaches which leverage the continuous paradigm, including DAG-GNN, a recently and independently developed nonlinear extension of NOTEARS which uses an evidence lower bound as score. Our second contribution is to provide a missing empirical comparison to existing methods that support nonlinear relationships but tackle the optimization problem in its discrete form using greedy search procedures, such as CAM. We show that GraN-DAG is competitive on the wide range of tasks we considered. We suppose the natural phenomenon of interest can be described by a random vector X ∈ R^d entailed by an underlying CGM (P_X, G), where P_X is a probability distribution over X and G = (V, E) is a DAG. Each node i ∈ V corresponds to exactly one variable in the system. Let π_i^G denote the set of parents of node i in G and let X_{π_i^G} denote the random vector containing the variables corresponding to the parents of i in G. We assume there are no hidden variables. In a CGM, the distribution P_X is said to be Markov to G, which means we can write the probability density function (pdf) as p(x) = ∏_{i=1}^d p_i(x_i | x_{π_i^G}). A CGM can be thought of as a BN in which directed edges are given a causal meaning, allowing it to answer queries regarding interventional distributions. In general, it is impossible to recover G given only samples from P_X. It is, however, customary to rely on a set of assumptions to render the structure fully or partially identifiable. Definition 1. Given a set of assumptions A on a CGM M = (P_X, G), its graph G is said to be identifiable from P_X if there exists no other CGM M̃ = (P̃_X, G̃) satisfying all assumptions in A such that G̃ ≠ G and P̃_X = P_X. There are many examples of graph identifiability for continuous variables as well as for discrete variables. These results are obtained by assuming that the conditional pdf p_i, ∀i, belongs to a specific parametric family P. For example, if one assumes an additive noise model X_i := f_i(X_{π_i^G}) + N_i, where f_i is a nonlinear function satisfying some mild regularity conditions, then G is identifiable from P_X (see prior work for the complete theorem and its proof). We will make use of this result in Section 4. Structure learning is the problem of learning G using a data set of n samples {x^(1), ..., x^(n)} from P_X. Score-based approaches cast this estimation problem as an optimization problem over the space of DAGs, i.e. Ĝ = arg max_{G∈DAG} Score(G). The score is usually the maximum likelihood of the data given a certain model. Most score-based methods embrace the combinatorial nature of the problem via greedy search procedures. We now present the NOTEARS approach, which tackles the problem from a continuous optimization perspective. To cast the combinatorial optimization problem into a continuous constrained one, NOTEARS proposes to encode the graph G on d nodes as a weighted adjacency matrix U ∈ R^{d×d} which represents (possibly negative) coefficients in a linear structural equation model (SEM) of the form X_i := u_i^⊤ X + N_i ∀i, where N_i is a noise variable. Let G_U be the directed graph associated with the SEM and let A_U be the (binary) adjacency matrix associated with G_U. One can see that the entries of A_U are nonzero exactly where the corresponding entries of U are nonzero. To make sure G_U is acyclic, the authors propose the following constraint on U: Tr(e^{U∘U}) − d = 0, where e^M := Σ_{k=0}^∞ M^k / k! is the matrix exponential and ∘ is the Hadamard product. It can be shown that G_U is acyclic iff the constraint is satisfied (see the original work for a proof). The authors propose to use a regularized negative least squares score (maximum likelihood for a Gaussian noise model).
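The acyclicity constraint just described is easy to compute numerically; the sketch below evaluates h(U) = Tr(exp(U ∘ U)) − d and checks that it vanishes for a strictly upper-triangular (hence acyclic) weight matrix. Variable names and the toy example are illustrative only.

import numpy as np
from scipy.linalg import expm

def notears_constraint(U):
    # h(U) = Tr(exp(U o U)) - d, zero exactly when the weighted graph encoded by U is acyclic
    d = U.shape[0]
    return np.trace(expm(U * U)) - d

U = np.triu(np.random.randn(4, 4), k=1)        # strictly upper triangular -> a DAG
assert abs(notears_constraint(U)) < 1e-8
U[2, 0] = 0.5                                  # creates a 2-cycle between nodes 0 and 2 (U[0, 2] is generically nonzero)
h = notears_constraint(U)                      # strictly positive once a cycle exists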
The resulting continuous constrained problem maximizes this regularized score subject to the acyclicity constraint, where X ∈ R^{n×d} is the design matrix containing all n samples. The nature of the problem has been drastically changed: we went from a combinatorial to a continuous problem. The difficulties of combinatorial optimization have been replaced by those of non-convex optimization, since the feasible set is non-convex. Nevertheless, a standard numerical solver for constrained optimization such as an augmented Lagrangian method (AL) can be applied to get an approximate solution. 3 GraN-DAG: Gradient-based neural DAG learning. We propose a new nonlinear extension to the framework presented in Section 2.3. For each variable X_i, we learn a fully connected neural network with L hidden layers parametrized by φ^(i) = {W^(i)_(1), ..., W^(i)_(L+1)}, where W^(i)_(ℓ) is the ℓth weight matrix of the ith NN. Each NN takes as input X_{−i} ∈ R^d, i.e. the vector X with the ith component masked to zero, and outputs θ^(i) ∈ R^m, the m-dimensional parameter vector of the desired distribution family for variable X_i. The fully connected NNs have the usual feed-forward form, where g is a nonlinearity applied element-wise. Let φ := {φ^(1), ..., φ^(d)} represent all parameters of all d NNs. Without any constraint on its parameter φ^(i), neural network i models the conditional pdf p_i(x_i | x_{−i}; φ^(i)). Note that the product ∏_i p_i(x_i | x_{−i}; φ^(i)) is not a valid joint pdf since it does not decompose according to a DAG. We now show how one can constrain φ to make sure the product of all conditionals outputted by the NNs is a valid joint pdf. The idea is to define a new weighted adjacency matrix A_φ, similar to the matrix U encountered in Section 2.3, which can be directly used inside the constraint of Equation 3 to enforce acyclicity. Before defining the weighted adjacency matrix A_φ, we need to focus on how one can make some NN outputs unaffected by some inputs. Since we will discuss properties of a single NN, we drop the NN superscript (i) to improve readability. We will use the term neural network path to refer to a computation path in a NN. For example, in a NN with two hidden layers, the sequence of weights (W_(1) h1 j, W_(2) h2 h1, W_(3) k h2) is a NN path from input j to output k. We say that a NN path is inactive if at least one weight along the path is zero. We can loosely interpret the path product |W_(1) h1 j| |W_(2) h2 h1| |W_(3) k h2| ≥ 0 as the strength of the NN path, where a path product equals zero if and only if the path is inactive. Note that if all NN paths from input j to output k are inactive (i.e. the sum of their path products is zero), then output k does not depend on input j anymore, since the information in input j will never reach output k. The sum of all path products from every input j to every output k can be easily computed by taking the product of all the weight matrices in absolute value, C := |W_(L+1)| ... |W_(2)| |W_(1)|, where |W| is the element-wise absolute value of W. It can be verified that C_kj is the sum of all NN path products from input j to output k. To have θ independent of variable X_j, it is sufficient to have Σ_{k=1}^m C_kj = 0. This is useful since, to render our architecture acyclic, we need to force some neural network inputs to be inactive (this corresponds to removing edges in our graph). We now define a weighted adjacency matrix A_φ that can be used in the constraint of Equation 3, where C^(i) denotes the connectivity matrix of the NN associated with variable X_i. As the notation suggests, A_φ ∈ R^{d×d}_{≥0} depends on all weights of all NNs.
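To make the construction concrete, here is a sketch of the connectivity computation for a set of per-variable NNs. The exact definition of A_φ is not reproduced above, so aggregating each C^(i) by summing over its output dimensions is an assumed (but natural) choice; the acyclicity function reuses the trace-of-matrix-exponential idea from Section 2.3.

import numpy as np
from scipy.linalg import expm

def connectivity(Ws):
    # Ws = [W1, ..., W_{L+1}] for one NN; C[k, j] = sum of path products from input j to output k
    C = np.abs(Ws[0])
    for W in Ws[1:]:
        C = np.abs(W) @ C
    return C

def weighted_adjacency(all_Ws):
    # all_Ws[i] holds the weight matrices of the NN for variable X_i
    d = len(all_Ws)
    A = np.zeros((d, d))
    for i, Ws in enumerate(all_Ws):
        A[:, i] = connectivity(Ws).sum(axis=0)   # assumed aggregation: total connectivity of each input into NN i
        A[i, i] = 0.0                            # input i is masked for its own NN
    return A

def acyclicity(A):
    return np.trace(expm(A)) - A.shape[0]        # A is already nonnegative, so no Hadamard square is needed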
Moreover, it can effectively be interpreted as a weighted adjacency matrix, similarly to what we presented in Section 2.3, since its (i, j) entry being zero guarantees that the output of the NN for variable X_j is unaffected by the input X_i. We denote by G_φ the directed graph entailed by the parameter φ. We can now write our adapted acyclicity constraint by plugging A_φ into the constraint of Equation 3. This guarantees acyclicity. The argument is identical to the linear case, except that now we rely on an implication instead of an equivalence. We propose solving the maximum likelihood optimization problem in which π_i^φ denotes the set of parents of variable i in the graph G_φ. Note that the objective is a valid log-likelihood function when the constraint is satisfied. As suggested in prior work, we apply an augmented Lagrangian approach to get an approximate solution to this program. Augmented Lagrangian methods consist of optimizing a sequence of subproblems whose exact solutions are known to converge to a stationary point of the constrained problem under some regularity conditions. We approximately solve each subproblem using RMSprop, a stochastic gradient descent variant popular for training NNs. We empirically compare GraN-DAG to various baselines (both in the continuous and combinatorial paradigms), namely DAG-GNN, NOTEARS, RESIT and CAM. We first present a comparison on synthetic data sets. We sampled 10 graphs (e.g. with 50 nodes and an average of 200 edges) and data distributions of the form X_i := f_i(X_{π_i^G}) + N_i with f_i ∼ GP, and evaluated different methods using SHD and SID (we report the average and the standard deviation over those data sets). Note that we are in the identifiable case presented in Section 2.2. GraN-DAG, NOTEARS and CAM all make the correct Gaussian assumption in their respective models. In Table 1 we report a subset of our results. GraN-DAG outperforms other continuous approaches while being competitive with the best performing discrete approach we considered. In addition, we considered a well-known real-world data set which measures the expression level of different proteins and phospholipids in human cells (the ground truth graph has 11 nodes and 17 edges). We found GraN-DAG to be competitive with other approaches. Our implementation of GraN-DAG is available online.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryl6nX398r
We are proposing a new score-based approach to structure/causal learning leveraging neural networks and a recent continuous constrained formulation to this problem
We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers. Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models. In this paper, we design provably optimal attacks against a set of classifiers. We demonstrate how this problem can be framed as finding strategies at equilibrium in a two player, zero sum game between a learner and an adversary and consequently illustrate the need for randomization in adversarial attacks. The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies in the zero-sum game. We develop a series of scalable noise generation algorithms for deep neural networks, and show that it outperforms state-of-the-art attacks on various image classification tasks. Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers. The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes. In this paper, we study adversarial attacks that induce misclassification when a learner has access to multiple classifiers. One of the most pressing concerns within the field of AI has been the welldemonstrated sensitivity of machine learning algorithms to noise and their general instability. Seminal work by has shown that adversarial attacks that produce small perturbations can cause data points to be misclassified by state-of-the-art models, including neural networks. In order to evaluate classifiers' robustness and improve their training, adversarial attacks have become a central focus in machine learning and security BID21 BID17 BID23.Adversarial attacks induce misclassification by perturbing data points past the decision boundary of a particular class. In the case of binary linear classifiers, for example, the optimal perturbation is to push points in the direction perpendicular to the separating hyperplane. For non-linear models there is no general characterization of an optimal perturbation, though attacks designed for linear classifiers tend to generalize well to deep neural networks BID21.Since a learner may aggregate decisions using multiple classifiers, a recent line of work has focused on designing attacks on an ensemble of different classifiers BID31 BID0 BID13. In particular, this line of work shows that an entire set of state-of-the-art classifiers can be fooled by using an adversarial attack on an ensemble classifier that averages the decisions of the classifiers in that set. Given that attacking an entire set of classifiers is possible, the natural question is then:What is the most effective approach to design attacks on a set of multiple classifiers?The main challenge when considering attacks on multiple classifiers is that fooling a single model, or even the ensemble classifier (i.e. the model that classifies a data point by averaging individual predictions), provides no guarantees that the learner will fail to classify correctly. 
Models may have different decision boundaries, and perturbations that affect one may be ineffective on another. Furthermore, a learner can randomize over classifiers and avoid deterministic attacks (see Figure 1). c 2 c 1 Figure 1: Illustration of why randomization is necessary to compute optimal adversarial attacks. In this example using binary linear classifiers, there is a single point that is initially classified correctly by two classifiers c1, c2, and a fixed noise budget α in the ℓ2 norm. A naive adversary who chooses a noise perturbation deterministically will always fail to trick the learner since she can always select the remaining classifier. An optimal adversarial attack in this scenario consists of randomizing with equal probability amongst both noise vectors. In this paper, we present a principled approach for attacking a set of classifiers which proves to be highly effective. We show that constructing optimal adversarial attacks against multiple classifiers is equivalent to finding strategies at equilibrium in a zero sum game between a learner and an adversary. It is well known that strategies at equilibrium in a zero sum game can be obtained by applying the celebrated Multiplicative Weights Update framework, given an oracle that computes a best response to a randomized strategy. The main technical challenge we address pertains to the characterization and implementation of such oracles. Our main contributions can be summarized as follows:• We describe the Noise Synthesis FrameWork (henceforth NSFW) for generating adversarial attacks. This framework reduces the problem of designing optimal adversarial attacks for a general set of classifiers to constructing a best response oracle in a two player, zero sum game between a learner and an adversary; • We show that NSFW is an effective approach for designing adversarial noise that fools neural networks. In particular, applying projected gradient descent on an appropriately chosen loss function as a proxy for a best response oracle achieves performance that significantly improves upon current state-of-the-art attacks (see in Figure 2); • We show that applying projected gradient descent on an appropriately chosen loss function is a well-principled approach. We do so by proving that for linear classifiers such an approach yields an optimal adversarial attack if the equivalent game has a pure Nash equilibrium. This is shown via a geometric characterization of the decision boundary space which reduces the problem of designing optimal attacks to a convex program; • If the game does not have a pure Nash equilibrium, there is an algorithm for finding an optimal adversarial attack for linear classifiers whose runtime is exponential in the number of classifiers. We show that finding an optimal strategy in this case is NP-hard. Paper organization. Following a discussion on related work, in Section 2 we formulate the problem of designing optimal adversarial noise and show how it can be modeled as finding strategies at equilibrium in a two player, zero sum game. Afterwards, we discuss our approach for finding such strategies using MWU and proxies for best response oracles. In Section 2.1, we justify our approach by proving guarantees for linear classifiers. Lastly, in Section 3, we present our experiments. Additional related work. The field of adversarial attacks on machine learning classifiers has recently received widespread attention from a variety of perspectives BID1 BID9 BID25 BID3. 
In particular, a significant amount of effort has been devoted to computing adversarial examples that induce misclassification across multiple models BID22 BID21. There has been compelling evidence which empirically demonstrates the effectiveness of ensembles as way of both generating and defending against adversarial attacks. For example, BID31 establish the strengths of ensemble training as a defense against adversarial attacks. Conversely, provide the first set of experiments showing that attacking an ensemble classifier is an effective way of generating adversarial examples that transfer to the underlying models. Relative to their investigation, our work differs in certain key aspects. Rather than analyzing adversarial noise from a security perspective and developing methods for black-box attacks, we approach the problem from a theoretical point of view and introduce a formal characterization of the optimal attack against a set of classifiers. Furthermore, by analyzing noise in the linear setting, we design algorithms for this task that have strong guarantees of performance. Through our experiments, we demonstrate how these algorithms motivate a natural extension for noise in deep learning that achieves state-of-the-art . Given a set of point-label pairs {( DISPLAYFORM0 where DISPLAYFORM1, a deterministic adversarial attack is a totally ordered set of noise vectors, V = (v 1, . . ., v m) ∈ R d×m. We say that q is an adversarial attack if q is a distribution over sets of noise vectors. An adversarial attack q is α-bounded if for all sets V that have non-zero probability under q, each individual noise vector v i ∈ V has bounded norm, e.g ||v i || p ≤ α. We focus on the case where each vector v i is bounded to have ℓ 2 norm less than a fixed value α, however, our model can be easily extended to a variety of norms. For a given classifier c: DISPLAYFORM0, a realization of the adversarial attack, V = (v 1, . . ., v m), induces misclassification on (x j, y j) if c(x j + v j) ∕ = y j. Given a finite set of classifiers C and a data set S = {(x i, y i)} m i=1 of point-label pairs as above, an optimal adversarial attack is a distribution q over sets of noise vectors that maximizes the minimum 0-1 loss of the classifiers in C: DISPLAYFORM1 Optimal adversarial attacks are equilibrium strategies in a zero sum game. An equivalent interpretation of the optimization problem described in Equation FORMULA3 is that of a best response in a two player, zero sum game played between a learner and an adversary. When the learner plays classifier c ∈ C and the adversary plays an attack V, the payoff to the adversary is M (c, DISPLAYFORM2, which is the average 0-1 loss of the learner.2 The learner and the adversary can choose to play randomized strategies p, q over classifiers and noise vectors yielding expected payout E(c,V)∼(p,q) M (c, V). The (mixed) equilibrium strategy of the game is the pair of distributions p, q that maximize the minimum loss DISPLAYFORM3 Computing optimal adversarial attacks via MWU. As discussed above, the optimization problem of designing optimal adversarial attacks reduces to that of finding strategies at equilibrium in a zero sum game. 
It is well known that the celebrated Multiplicative Weight Updates algorithm can be used to efficiently compute equilibrium strategies of zero sum games when equipped with a best response oracle that finds an optimal set of perturbations for any strategy chosen by the learner: DISPLAYFORM4 Our framework for generating adversarial noise applies the Multiplicative Weight Updates algorithm as specified in Algorithm 1. The algorithm returns distributions p, q that are within δ of the equilibrium value of the game λ = min DISPLAYFORM5 ln n δ 2 ) calls to a best response oracle.3 In this work, we focus on developing attacks on neural networks and linear models. Yet, our framework is general enough to generate optimal attacks for any domain in which one can approximate a best response. We analyze the convergence of NSFW in Appendix G.Approximating a best response. Given the framework described above, the main challenge is in computing a best response strategy. To do so, at every iteration, as a proxy for a best response, we apply projected gradient descent (PGD) to an appropriately chosen surrogate loss function. In particular, given DISPLAYFORM6 we aim to solve: DISPLAYFORM7 ℓ is a loss function that depends on the type of attack (targeted vs. untargeted) and the type of classifiers in C (linear vs. deep). We introduce a series of alternatives for ℓ in the following section. As we will now show, maximizing the loss of the learner by applying PGD to a weighted sum of loss functions is a well-principled approach to computing best responses as it is guaranteed to converge to the optimal solution in the case where C is composed of linear classifiers. While there are generally no guarantees for solving non-convex optimization problems of this sort for deep neural networks, in Section 3, we demonstrate the effectiveness of our approach by showing that it experimentally improves upon current state-of-the-art attacks. Input: DISPLAYFORM0 The main theoretical insight that leads to provable guarantees for generating adversarial noise is a geometric characterization of the underlying structure of adversarial attacks. Regardless of the type of model, selecting a distribution over classifiers partitions the input space into disjoint regions, each of which is associated with a single loss value for the learner. Given a distribution over classifiers played by the learner, computing a best response strategy for the adversary then reduces to a search problem. In this problem, the search is for points in each region that lie within the noise budget and can be misclassified. The best response is to select the region which induces the maximal loss. In the case of linear classifiers, the key observation is that the regions are convex. As a , designing optimal adversarial attacks reduces to solving a series of quadratic programs. Lemma 1. Selecting a distribution p over a set C of n linear classifiers, partitions the input space R d into k n disjoint, convex sets T j such that: DISPLAYFORM0 2. There exists a finite set of numbers a 1,... a k n, not necessarily all unique, such that DISPLAYFORM1 Proof Sketch (see full proof in Appendix C). Each set T j is defined according to the predictions of the classifiers c i ∈ C on points x ∈ T j. In particular, each region T j is associated with a unique label vector DISPLAYFORM2 Since the prediction of each classifier is the same for all points in a particular region, the loss of the learner i∈[n] p[i]ℓ 0-1 (c i, x, y) is constant over the entire region. 
Convexity then follows by showing that each T j is an intersection of hyperplanes. This characterization of the underlying geometry now allows us to design best response oracles for linear classifiers via convex optimization. For our analysis, we focus on the case where C consists of "one-vs-all" classifiers. In the appendix, we show how our can be generalized to other methods for multilabel classification by reducing these other approaches to the "one-vs-all" case. Given k classes, a "one-vs-all" classifier c i consists of k linear functions c i, DISPLAYFORM3 On input x, predictions are made according to the rule c i (x) = arg max j c i,j (x). Lemma 2. For linear classifiers, implementing a best response oracle reduces to the problem of minimizing a quadratic function over a set of k n convex polytopes. Proof Sketch (see full proof in Appendix C). The main idea behind this lemma is that given a distribution over classifiers, the loss of the learner can be maximized individually for each point (x, y) ∈ S. Furthermore, by Lemma 1, the loss can assume only finitely many values, each of which is associated with a particular convex region T j of the input space. Therefore, to compute a best response, we can iterate over all regions and choose the one associated with the highest loss. To find points in each region T j, we can simply minimize the ℓ 2 norm of a perturbation v such that x + v ∈ T j, which can be framed as minimizing a quadratic function over a convex set. These give an important characterization, but it also shows that the number of polytopes is exponential in the number of classifiers. To overcome this difficulty, we demonstrate how when there exists a pure strategy Nash equilibrium (PSNE), that is a single set of noise vectors V where every vector is bounded by α and min ci∈C M (c i, V) = 1, PGD applied to the reverse hinge loss, ℓ r, is guaranteed to converge to a point that achieves this maximum for binary classifiers. More generally, given a label vector s j ∈ [k] n, PGD applied to the targeted reverse hinge loss, ℓ t, converges to a point within the noise budget that lies within the specified set T j. We define ℓ r and ℓ t as follows: DISPLAYFORM4 The proof follows standard arguments for convergence of convex and β-smooth functions. Theorem 1. Given any precision > 0 and noise budget α > 0:• For a finite set of linear binary classifiers C and a point (x, y), running PGD for T = 4α/ iterations on the objective DISPLAYFORM5 converges to a point that is within of the pure strategy Nash equilibrium f (x + v *), if such an equilibrium exists;• For a finite set of linear multilabel classifiers C, given a label vector s j ∈ [k] n and a distribution p over C, running PGD for T = 4α/ iterations on the objective DISPLAYFORM6 Proof Sketch. From the definition of the reverse hinge loss, we see that ℓ r (c i, x ′, y) = 0 if and only if ℓ 0-1 (c i, x ′, y) = 1. Similarly, the targeted loss ℓ t (c i, x ′, j) is 0 if and only if c i predicts x ′ to have label j. For linear classifiers, both of these functions are convex and β-smooth. Hence PGD converges to a global minimum, which is zero if there exists a pure equilibrium in the game. The requirement that there exist a feasible point x ′ within T j is not only sufficient, it is also necessary in order to avoid a brute force search. Designing an efficient algorithm to find the region associated with the highest loss is unlikely as the decision version of the problem is NP-hard even for binary linear classifiers. 
We state the theorem below and defer the proof to the appendix. Theorem 2. Given a set C of n binary, linear classifiers, a number B, a point (x, y), noise budget α, and a distribution p, finding v with ||v|| 2 ≤ α s.t. the loss of the learner is exactly B is NP-complete. As we show in the following section, this hardness does not limit our ability to compute optimal adversarial examples. Most of the problems that have been examined in the context of adversarial noise suppose that the learner has access only to a small number of classifiers (e.g., fewer than 5) BID8 BID0 BID31 BID13. In such cases we can solve the convex program over all regions and find an optimal adversarial attack, even when a pure Nash equilibrium does not exist. We evaluate the performance of NSFW at fooling a set of classifiers by comparing against noise generated by using state-of-the-art attacks against an ensemble classifier. Recent work by BID31 and others demonstrates how attacking an ensemble of a set of classifiers generates noise that improves upon all previous attempts at fooling multiple classifiers. We test our methods on deep neural networks on MNIST and ImageNet, as well as on linear classifiers where we know that NSFW is guaranteed to converge to the optimal adversarial attack. We use the insights derived from our theoretical analysis of linear models to approximate a best response oracle for this new setting. Specifically, at each iteration of NSFW we compute a best response as in Equation FORMULA9 by running PGD on a weighted sum of untargeted reverse hinge losses, ℓ ut, introduced in this domain by BID4. Given a network c i, we denote c i,j (x) to be the probability assigned by the model to input x belonging to class j (the jth output of the softmax layer of the model). DISPLAYFORM0 For MNIST, the set of classifiers C consists of 5 convolutional neural networks, each with a different architecture, that we train on the full training set of 55k images (see Appendix for details). All classifiers (models) were over 97% accurate on the MNIST test set. For ImageNet, C consists of the InceptionV3, DenseNet121, ResNet50, VGG16, and Xception models with pre-trained weights downloaded from the Keras repository BID6 BID12 BID27 BID7 BID30 BID14. Figure 2: Visual comparison of misclassification using state-of-the-art adversarial attacks. We compare the level of noise necessary to induce similar levels of misclassification by attacking an ensemble classifier using the (from left to right) Fast Gradient Method (FGM), the Madry attack, and the Momentum Iterative Method (MIM) versus applying NSFW (rightmost column) on the same set of classifiers. To induce a maximum of 17% accuracy across all models, we only need to set α to be 300 for NSFW. For the MIM attack on the ensemble we need to set α = 2000. For FGM and the Madry attack, the noise budget must be further increased to 8000. To evaluate the merits of our approach, we compare our results against attacks on the ensemble composed of C, as suggested in prior work. More specifically, we create an ensemble by averaging the outputs of the softmax layers of the different networks using equal weights. We generate baseline attacks by attacking the ensemble using the Fast Gradient Method by BID11, the Projected Gradient Method by Madry et al., and the Momentum Iterative Method by BID8, which we download from the Cleverhans library BID24. We select the noise budget α by comparing against the average ℓ2 distortion reported by similar papers in the field.
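As a concrete illustration of this best-response proxy, the sketch below runs PGD (with Adam) on the p-weighted sum of untargeted reverse hinge losses over an ℓ2 ball of radius α. The displayed definition of ℓ ut is not reproduced in this excerpt, so the Carlini-Wagner-style margin on softmax probabilities used here, max(0, c i,y (x') - max j≠y c i,j (x')), is an assumption consistent with the description above (it is zero exactly when the model misclassifies); models, p, x, y and alpha stand for the networks in C, the current MWU distribution, a data point and the noise budget.

```python
import torch

def untargeted_reverse_hinge(probs, y):
    # probs: softmax output of one model for a single input, shape [num_classes].
    # Zero exactly when the model no longer assigns the top probability to the true label y.
    other = torch.cat([probs[:y], probs[y + 1:]])
    return torch.clamp(probs[y] - other.max(), min=0.0)

def best_response_pgd(models, p, x, y, alpha, steps=5000, lr=0.01):
    """Approximate best response: minimize the p-weighted sum of untargeted reverse
    hinge losses over perturbations v with l2 norm at most alpha."""
    v = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        loss = sum(p_i * untargeted_reverse_hinge(m(x + v), y) for p_i, m in zip(p, models))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            norm = v.norm(p=2)
            if norm > alpha:              # project back onto the noise budget
                v.mul_(alpha / norm)
            # (in practice x + v is also clipped to the valid pixel range)
    return v.detach()
```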
For MNIST, we base our choice on the values reported by BID4 and choose a noise budget of 3.0. For ImageNet, we compare against the budgets used in prior work. In that work, the authors run similar untargeted experiments on ImageNet with 100 images and report a noise budget of 22 when measured as the root mean squared deviation. Converted to the ℓ 2 norm, this corresponds to α ≥ 8500 (see footnote 6). We found this noise budget to be excessive, yielding images comparable to those in the leftmost column in Figure 2. Therefore, we chose α = 300 (roughly 3.5% of the total distortion used in that work), which ensures that the perturbed images are visually indistinguishable from the originals to the human eye (see rightmost column in Figure 2). Table 1: Accuracies of ImageNet models under different noise algorithms using a noise budget of 300.0 in the ℓ2 norm. Entry (i, j) indicates the accuracy of each model j when evaluated on noise from attack i. The last two columns report the mean and max accuracy of the classifiers on a particular attack. We see that NSFW significantly outperforms noise generated by an ensemble classifier for all choices of attack algorithms. Footnote 4: Specific details regarding model architectures as well as the code for all our experiments can be found in our repository, which will be made public after the review period in order to comply with anonymity guidelines. The test set accuracies of all ImageNet classifiers are displayed on the Keras website. Footnote 5: The Momentum Iterative Method won the 2017 NIPS adversarial attacks competition. Footnote 6: The authors define the root mean squared deviation of two points x, x′ as sqrt(Σ_i (x_i − x′_i)² / N), where N is the dimension of x. For ImageNet, our images are of dimension 224×224×3, while for MNIST they are of size 28 × 28 × 1. For further perspective, if we convert our noise budgets from the ℓ2 norm to RMSD, our budgets would correspond to 0.77 and 0.11 for ImageNet and MNIST respectively. For our experiments, we ran NSFW for 50 MWU iterations on MNIST models and for 10 iterations on ImageNet classifiers. We use far fewer iterations than the theoretical bound since we found that in practice NSFW converges to the equilibrium solution in only a small number of iterations (see Figure 5 in Appendix A). At each iteration of the MWU we approximate a best response as described in Equation 3 by running PGD using the Adam optimizer on a sum of untargeted reverse hinge losses. Specifically, we run the optimizer for 5k iterations with a learning rate of 0.01. At each iteration, we clip images to lie in the valid pixel range. Finally, for evaluation, for both MNIST and ImageNet we selected 100 images uniformly at random from the set of images in the test sets that were correctly classified by all models. In Table 1, we report the empirical accuracy of all classifiers in the set C when evaluated on NSFW as well as on the three baseline attacks. To compare their performance, we highlight the average and maximum accuracies of models in C when attacked using a particular noise solution. From Table 1, we see that on ImageNet our algorithm results in solutions that robustly optimize over the entire set of models using only a small amount of noise. The maximum accuracy of any classifier is 17% under NSFW, while the best ensemble attack yields a max accuracy of only 68%. If we wish to generate a similar level of performance from the ensemble baselines, we would need to increase the noise budget to 8000 for FGM and the Madry attack and to 2000 for the Momentum Iterative Method.
We present a visual comparison of the different attacks under these noise budgets required to achieve accuracy of 17% in Figure 2. On MNIST, we find similar results. NSFW yields a max accuracy of 22.6% compared to the next best of 48% generated by the Madry attack on the ensemble. We summarize the results for MNIST in Table 2 presented in Appendix A. As seen in the previous section, noise generated by directly attacking an ensemble of classifiers significantly underperforms NSFW at robustly fooling the underlying models. In this section, we aim to understand this phenomenon by analyzing how the decision boundary of the ensemble model compares to that of the different networks. In particular, we visualize the class boundaries of convolutional neural networks using the algorithm proposed by BID28 for generating saliency maps. The class saliency map indicates which features (pixels) are most relevant in classifying an image to have a particular label. Therefore, they serve as one way of understanding the decision boundary of a particular model by highlighting which dimensions carry the highest weight. In FIG0, we see that the class saliency maps for individual models exhibit significant diversity. The ensemble of all 5 classifiers appears to contain information from all models; however, certain regions that are of central importance for individual models are relatively less prominent in the ensemble saliency map. Compared to our approach, which calculates individual gradients for classifiers in C, creating an ensemble classifier obfuscates key information regarding the decision boundary of individual models. We make this discussion rigorous by analyzing the linear case in Appendix B. FIG3 caption: NSFW on linear multiclass models using different noise functions and varying the noise budget α. NSFW-Oracle corresponds to running Algorithm 1 using the best response oracle described in Lemma 2. Similarly, NSFW-Untargeted shows the results of running NSFW and applying PGD to a weighted sum of untargeted losses as in Equation FORMULA9. The label iteration method is described below. Lastly, the ensemble attack corresponds to the optimal noise on an equal weights ensemble of models in C. On the right, we illustrate the convergence of NSFW on linear binary classifiers with maximally different decision boundaries to compare against the convergence rate observed for neural nets in Figure 5 and better understand when weight adaptivity is necessary. In addition to evaluating our approach on neural networks, we performed experiments with linear classifiers. Since we have a precise characterization of the optimal attack on a set of linear classifiers, we can rigorously analyze the performance of different methods in comparison to the optimum. We train two sets of 5 linear SVM classifiers on MNIST, one for binary classification (digits 4 and 9) and another for multiclass (first 4 classes, MNIST 0-3). To ensure a diversity of models, we randomly zero out up to 75% of the dimensions of the training set for each classifier. Hence, each model operates on a random subset of features. All models achieve test accuracies of above 90%. For our experiments, we select 1k points from each dataset that are correctly classified by all models. In order to better compare across different best response proxies, we further extend NSFW by incorporating the label iteration method as another heuristic to generate untargeted noise.
Given a point (x, y), the iterative label method attempts to calculate a best response by running PGD on the targeted reverse hinge loss for every label j ∈ [k] \ {y} and choosing the attack associated with the minimal loss. Compared to the untargeted reverse hinge loss, it has the benefit of being convex. As for deep learning classifiers, we compare our results to the noise generated by the optimal attack on an ensemble of models in C. Since the class of linear classifiers is convex, creating an equal weights ensemble by averaging the weight vectors results in just another linear classifier. We can compute the optimal attack by running the best response oracle described in Section 2.1 for the special case where C consists of a single model and then scaling the noise to have norm equal to α. As seen in the leftmost plot in FIG3, even for linear models there is a significant difference between the optimal attack and other approaches. Specifically, we observe an empirical gap between NSFW equipped with the best response oracle as described in Lemma 2 vs. NSFW with proxy best response oracles, e.g. the oracle that runs PGD on appropriately chosen loss functions. This difference in performance is consistent across a variety of noise budgets. Our main takeaway is that in theory and in practice, there is a significant benefit in applying appropriately designed best response oracles. Lastly, on the right in FIG3, we illustrate how the adaptivity of MWU is in general necessary to compute optimal attacks. While for most cases, NSFW converges to the equilibrium solution almost immediately, if the set of classifiers is sufficiently diverse, running NSFW for a larger number of rounds drastically boosts the quality of the attack. (See Appendix A for details.) Designing adversarial attacks when a learner has access to multiple classifiers is a non-trivial problem. In this paper we introduced NSFW, which is a principled approach that is provably optimal on linear classifiers and empirically effective on neural networks. The main technical crux is in designing best response oracles, which we achieve through a geometrical characterization of the optimization landscape. We believe NSFW can generalize to domains beyond those in this paper. A ADDITIONAL EXPERIMENTS AND DETAILS ON EXPERIMENTAL SETUP Figure 5: Fast convergence of NSFW on MNIST and ImageNet deep learning models. NSFW-Untargeted corresponds to running NSFW and applying PGD on a sum of untargeted reverse hinge losses as described in Section 3.1. The dotted lines correspond to running the indicated attack on the ensemble of models in C. For both datasets, we find that NSFW converges almost immediately to the equilibrium noise solution. These misclassification results can also be examined in Tables 1 and 2. We now discuss further details regarding the setup of the experiments presented in Section 3. In the case of deep learning, we set hyperparameters for all the baseline attacks (Fast Gradient Method, Madry attack, and the Momentum Iterative Method) by analyzing the values reported in the original papers. When running the Projected Gradient Method by Madry et al., given a noise budget α, we run the algorithm for 40 iterations with a step size of α/40 × 1.25 so as to mimic the setup of the authors. In the case of the Momentum Iterative Method, we run the attack for 5 iterations with a decay factor µ = 1.0 and a step size of α/5 as specified in BID8. FGM has no hyperparameters other than the noise budget.
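For comparison, the ensemble baseline described in Section 3 can be sketched as follows: the softmax outputs of the models are averaged with equal weights and attacked with an ℓ2 PGD loop that mirrors the Madry-attack settings above (40 iterations, step size α/40 × 1.25). This is a minimal stand-in rather than the Cleverhans implementation used in the experiments, and the cross-entropy objective and normalization constants are assumptions.

```python
import torch

def ensemble_probs(models, x):
    # Equal-weights ensemble: average of the softmax outputs of the individual networks.
    return torch.stack([m(x) for m in models]).mean(dim=0)

def ensemble_pgd_attack(models, x, y, alpha, iters=40):
    """l2 PGD ascent on the ensemble's cross-entropy loss within a ball of radius alpha."""
    step = alpha / iters * 1.25
    v = torch.zeros_like(x)
    for _ in range(iters):
        v.requires_grad_(True)
        probs = ensemble_probs(models, x + v)
        loss = -torch.log(probs[y] + 1e-12)                     # cross-entropy w.r.t. the true label
        grad, = torch.autograd.grad(loss, v)
        with torch.no_grad():
            v = v + step * grad / (grad.norm(p=2) + 1e-12)      # ascent step of fixed l2 length
            norm = v.norm(p=2)
            if norm > alpha:                                    # project onto the noise budget
                v = v * (alpha / norm)
    return v.detach()
```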
For all methods, we clip solutions to lie within the desired pixel range and noise budget. When comparing different algorithms to compute best responses for linear multiclass classifiers as described in Section 3.3, we run the NSFW algorithm with α = 0.2k for a range of integer values k. In the case of binary classifiers (FIG4), we find that the margins are smaller, and hence run NSFW with α = 0.05 + 0.1k for a range of integer values k. For each value of α and choice of noise function, we run NSFW for 50 iterations. The one exception is that, for the multiclass experiments with α equal to 0.2 or 0.4, we ran the best response oracle for only 20 iterations due to computational constraints. When optimizing the loss of the learner through gradient descent (e.g., when using PGD on appropriately chosen losses), we set the number of iterations to 3k and the learning rate to 0.01. We set up the weight adaptivity experiment described at the end of Section 3.3 (rightmost plot of FIG3) as follows. We train 5 linear binary SVM classifiers on our binary version of the MNIST dataset. For each classifier, we zero out 80% of the input dimensions so that each model has nonzero weights for a strictly different subset of features, thereby ensuring maximum diversity in the decision boundaries across models. Table 2: Classification accuracies for deep learning MNIST models under different noise algorithms. As in the ImageNet case, we find that the NSFW algorithm improves upon the performance of state-of-the-art attacks and robustly optimizes over the entire set of classifiers. Moreover, we find that, for all attacks, there is a significant difference between the average and maximum accuracy of classifiers in C, further highlighting the need to design noise algorithms that are guaranteed to inhibit the performance of the best possible model. FIG3: NSFW equipped with the best response outperforms other approaches at generating noise for linear models. Furthermore, we see there is a performance gap between gradient based approaches and the theoretically optimal one that leverages convex programming. In order to generate noise, we select 500 points uniformly at random from the test set that were correctly classified by all models. We then run NSFW equipped with the best response oracle described in Lemma 2 for 50 iterations with a noise budget of 0.4. In this section, we provide a brief theoretical justification as to why methods designed to attack an ensemble constructed by averaging individual models in a set of classifiers underperform our approach. Attacks on ensemble classifiers, as considered in prior work, typically consist of applying gradient based optimization to an ensemble model E(C, p) made up of classifiers C and ensemble weights p. For concreteness, consider the simplest case where C is composed of linear binary classifiers. To find adversarial examples, we run gradient descent on a loss function such as the reverse hinge loss that is 0 if and only if the perturbed example x′ = x + v with true label y is misclassified by c i. Assuming x′ is not yet misclassified by the ensemble, running SGD on the ensemble classifier with the reverse hinge loss function results in a gradient update step of ∇ℓ r (E(C, p), x′, y) = Σ i p[i] w i. This is undesirable for two main reasons: • First, the ensemble obscures valuable information about the underlying objective.
If x ′ is misclassified by a particular model c i but not the ensemble, c i still contributes p[i]w i to the ing gradient and biases exploration away from promising regions of the search space;• Second, fooling the ensemble does not guarantee that the noise will transfer across the underlying models. Assuming the true label y is -1 and that x ′ is correctly classified by all models, ℓ r (E(C, p), x ′, y) = 0 if and only if there exists a subset of classifiers DISPLAYFORM0 Hence, the strength of an ensemble classifier is only as good as its weakest weighted majority. Lemma 1. Selecting a distribution p over a set C of n linear classifiers, partitions the input space R d into k n disjoint, convex sets T j such that:1. For each T j, there exists a unique label vector s j ∈ [k] n such that for all x ∈ T j and c i ∈ C, c i (x) = s j,i, where s j,i is a particular label in [k].2. There exists a finite set of numbers a 1,... a k n, not necessarily all unique, such that n i=1 p[i]ℓ 0-1 (c i, x, y) = a j for a fixed y and all x ∈ T j 3. R d \ j T j is a set of measure zero. Proof. Given a label vector s j, we define each T j as the set of points x where c i (x) = s j,i for all i ∈ [n]. This establishes a bijection between the elements of [k] n and the sets T j. All the T j are pairwise disjoint since their corresponding label vectors in [k] n must differ in at least one index and by construction each classifier can only predict a single label for x ∈ T j.To see that these sets are convex, consider points x 1, x 2 ∈ T j and an arbitrary classifier c i ∈ C s.t. c i (x) = z for all x ∈ T j. If we let x ′ = γx 1 + (1 − γ)x 2 where γ ∈ then the following holds for all j ∈ [k] where j ∕ = z: DISPLAYFORM0 Furthermore, for each T j, there exists a number a j ∈ R ≥0 such that the expected loss of the learner DISPLAYFORM1 Since the distribution p is fixed, the loss of the learner is uniquely determined by the correctness of the predictions of all the individual classifiers c i. Since these are the same for all points in T j, the loss of the learner remains constant. Lastly, the set R d \ i T i is equal to the set of points x where there are ties for the maximum valued classifier. This set is a subset of the set of points K that lie at the intersection of two hyperplanes: DISPLAYFORM2 Finally, we argue that K has measure zero. For all ε > 0, x ∈ K, there exists an x ′ such that ||x − x ′ || 2 < ε and x ′ / ∈ K since the intersection of two distinct hyperplanes is of dimension two less than the overall space. Therefore, R d \ i T i must also have measure zero. Lemma 2. For linear classifiers, implementing a best response oracle reduces to the problem of minimizing a quadratic function over a set of k n convex polytopes. Proof. We outline the proof as follows. Given a distribution p over C, the loss of the learner DISPLAYFORM3 can be optimized individually for each v t since the terms in the sum are independent from one another. We leverage our from Lemma 1 to demonstrate how we can frame the problem of finding the minimum perturbation v j such that x + v j ∈ T j as the minimization of a convex function over a convex set. Since the loss of the learner is constant for points that lie in a particular set T j, we can find the optimal solution by iterating over all sets T j and selecting the perturbation with ℓ 2 norm less than α that is associated with the highest loss. 
The best response oracle then follows by repeating the same process for each point (x, y).Given a point (x, y) solving for the minimal perturbation v such that x + v ∈ T j can be expressed as the minimization of a quadratic function subject to n(k − 1) linear inequalities. DISPLAYFORM4 Each constraint in can be expressed as k − 1 linear inequalities. For a particular z ∈ [k], c i ∈ C we write c i (x + v) = z as c i,z (x + v) > c i,l (x + v) for all l ∕ = z. Lastly, squaring the norm of the vector is a monotonic transformation and hence does not alter the underlying minimum. Here we extend the from our analysis of linear classifiers to other methods for multilabel classification. In particular, we show that any "all-pairs" or multivector model can be converted to an equivalent "one-vs-all" classifier and hence all of our also apply to these other approaches. All-Pairs. In the "all-pairs" approach, each linear classifier c consists of k 2 linear predictors c i,j trained to predict between labels i, j ∈ [k]. As per convention, we let c i,j (x) = −c j,i (x). Labels are chosen according to the rule: DISPLAYFORM0 Given an "all-pairs" classifier c, we show how it can be transformed into a "one-vs-all" classifier c DISPLAYFORM1 Multivector. Lastly, we extend our to multilabel classification done via class-sensitive feature mappings and the multivector construction by again reducing to the "one-vs-all" case. Given a function Ψ: DISPLAYFORM2 n, labels are predicted according to the rule: DISPLAYFORM3 While there are several choices for the Ψ, we focus on the most common, the multivector construction:Ψ(x, y) = 0,..., 0 DISPLAYFORM4, 0,..., 0 DISPLAYFORM5 DISPLAYFORM6 This in effect ensures that becomes equivalent to that of the "one-vs-all" approach: DISPLAYFORM7 E CONVERGENCE ANALYSIS OF PROJECTED GRADIENT DESCENT Theorem 1. Given any precision > 0 and noise budget α > 0:• For a finite set of linear binary classifiers C and a point (x, y), running PGD for T = 4α/ iterations on the objective f (v) = n i=i p[i]ℓ r (c i, x + v, y) converges to a point that is within of the pure strategy Nash equilibrium f (x + v *), if such an equilibrium exists;• For a finite set of linear multilabel classifiers C, given a label vector s j ∈ [k] n and a distribution p over C, running PGD for T = 4α/ iterations on the objective f (v) = n i=i p[i]ℓ t (c i, x+v, s j,i) converges to a point x+v (T) such that f (x+v (T) )−f (x+v *) ≤ where x + v * ∈ T j and ||v * || 2 ≤ α, if such a point exists. Proof. We know that if a function f is convex and β-smooth, then running projected gradient descent over a convex set, in the following rate of convergence, where v is the optimal solution and v is the initial starting point (See Theorem 3.7 in BID2). DISPLAYFORM8 Given n classifiers, the objective is Furthermore, since v * is a pure strategy Nash equilibrium, f (v) = 0 and the maximum difference between f (v) − f (v), for any v, is bounded by: DISPLAYFORM9 Since ||v (T) − v || 2 ≤ α, we have that: DISPLAYFORM10 Lastly, we can normalize all the w i such that ||w i || 2 = 1 without changing the predictions of the c i and arrive at our desired . For the multiclass case, we have that: Using the fact that all weight vectors w i,j can be transformed to have ℓ 2 norm equal to 1, we have that DISPLAYFORM11 DISPLAYFORM12. Lastly, we can check that ℓ t is β-smooth with β = α n i=1 p[i], which yields the same bound as in the binary case. Theorem 2. 
Given a set C of n binary, linear classifiers, a number B, a point (x, y), noise budget α, and a distribution p, finding v with ||v|| 2 ≤ α s.t. the loss of the learner is exactly B is NP-complete. Proof. We can certainly verify in polynomial time that a vector v induces a loss of B simply by calculating the 0-1 loss of each classifier. Therefore the problem is in NP.To show hardness, we reduce from Subset Sum. Given n numbers p 1,... p n and a target number B, 11 we determine our input space to be R n, the point x to be the origin, the label y = −1, and the noise budget α = 1. Next, we create n binary classifiers of the form c i (x) = 〈e i, x〉 where e i is the ith standard basis vector. We let p i be the probability with which the learner selects classifier c i. We claim that there is a subset that sums to B if and only if there exists a region T j ⊂ R n on which the learner achieves loss B. Given the parameters of the reduction, the loss of the learner is determined by the sum of the probability weights of classifiers c i such that c i (x + v) = +1 for points x + v ∈ T j. If we again identify sets T j with sign vectors s j ∈ {±1} n as per Lemma 2, there is a bijection between the sets T j and the power set of {p 1, . . ., p n}. A number p i is in a subset U j if the ith entry of s j is equal to +1.Lastly, we can check that there are feasible points within each set T j and hence that all subsets within the original Subset Sum instance are valid. Each T j simply corresponds to a quadrant of R n. For any ε > 0 and for any T j, there exists a v j with ℓ 2 norm less than ε such that x + v j ∈ T j. Therefore, there is a subset U j that sums to B if and only if there is a region T j in which the learner achieves loss B.11 Without loss of generality, we can assume that instances of Subset Sum only have values in the range. We can reduce from the more general case by simply normalizing inputs to lie in this range.12 We can again normalize values so that they form a valid probability distribution.
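To make the region-wise search of Lemma 2 concrete, the sketch below enumerates label vectors s ∈ [k]^n and, for each region T j whose constant loss improves on the best found so far, solves the quadratic program that looks for the smallest perturbation entering that region. It is a minimal illustration of the exponential-time exact oracle for "one-vs-all" linear classifiers, assumed here to be given as weight matrices Ws[i] and bias vectors bs[i]; it is not the code used in the paper's experiments.

```python
import itertools
import numpy as np
import cvxpy as cp

def best_response_oracle(Ws, bs, p, x, y, alpha, eps=1e-6):
    """Exact best response for one-vs-all linear classifiers at a single point (x, y).
    Ws[i]: (k, d) weight matrix of classifier i; bs[i]: (k,) biases; p: distribution
    over the n classifiers. Runtime is exponential in n (one QP per region)."""
    n, k = len(Ws), Ws[0].shape[0]
    best_loss, best_v = -1.0, None
    for s in itertools.product(range(k), repeat=n):       # one region T_j per label vector
        loss = sum(p[i] for i in range(n) if s[i] != y)    # 0-1 loss is constant on T_j
        if loss <= best_loss:
            continue
        v = cp.Variable(x.shape[0])
        constraints = []
        for i in range(n):
            for l in range(k):
                if l != s[i]:                              # classifier i must predict label s[i]
                    w_diff = Ws[i][s[i]] - Ws[i][l]
                    b_diff = bs[i][s[i]] - bs[i][l]
                    constraints.append(w_diff @ (x + v) + b_diff >= eps)
        problem = cp.Problem(cp.Minimize(cp.sum_squares(v)), constraints)
        problem.solve()
        if v.value is not None and np.linalg.norm(v.value) <= alpha:
            best_loss, best_v = loss, v.value
    return best_v, best_loss
```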
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkl4M3R5K7
Paper analyzes the problem of designing adversarial attacks against multiple classifiers, introducing algorithms that are optimal for linear classifiers and which provide state-of-the-art results for deep learning.
Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analyses followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximate an exact variational NE of the game.
In the CL setting, the action is a mapping from the state, usually referred to as a feedback policy or simply a policy, so the agent can adapt its actions based on feedback from the environment (the state transition) at every time step. For deterministic systems, both OL and CL solutions can be optimal and coincide in value. But for a stochastic system, an OL strategy consisting of a precomputed sequence of actions cannot adapt to the stochastic dynamics, so it is unlikely to be optimal. Thus, CL solutions are usually preferred over OL solutions. For dynamic games, the situation is more involved than for OCPs, see, e.g., BID1. In an OL dynamic game, agents' actions are functions of time, so that an OL equilibrium can be visualized as a set of state-action trajectories. In a CL dynamic game, agents' actions depend on the current state variable, so that, at every time step, they have to consider how their opponents would react to deviations from the equilibrium trajectory that they have followed so far, i.e., a CL equilibrium might be visualized as a set of trees of state-action trajectories. The sets of OL and CL equilibria are generally different even for deterministic dynamic games BID10 BID5. The CL analysis of dynamic games with continuous variables is challenging and has only been addressed for simple cases. The situation is even more complicated when we consider coupled constraints, since each agent's actions must belong to a set that depends on the other agents' actions. These games, where the agents interact strategically not only with their rewards but also at the level of the feasible sets, are known as generalized Nash equilibrium problems BID3. There is a class of games, named Markov potential games (MPGs), for which the OL analysis shows that NE can be found by solving a single OCP; see BID6 BID25 for recent surveys on MPGs. Thus, the benefit of MPGs is that solving a single OCP is generally simpler than solving a set of coupled OCPs. MPGs appear often in economics and engineering applications, where multiple agents share a common resource (a raw material, a communication link, a transportation link, an electrical transmission line) or limitations (a common limit on the total pollution in some area). Nevertheless, to our knowledge, no previous study has provided a practical method for finding CL Nash equilibria (CL-NE) for continuous MPGs. Indeed, to our knowledge, no previous work has proposed a practical method for finding or approximating CL-NE for any class of Markov games with continuous variables and coupled constraints. State-of-the-art works on learning CL-NE for general-sum Markov games did not consider coupled constraints and assumed finite state-action sets BID18 BID16. In this work, we extend previous OL analysis due to BID26 BID23 and tackle the CL analysis of MPGs with coupled constraints. We assume that the agents' policies lie in a parametric set. This assumption makes derivations simpler, allowing us to prove that, under some potentiality conditions on the reward functions, a game is an MPG. We also show that, similar to the OL case, the Nash equilibrium (NE) for the approximate game can be found as an optimal policy of a related OCP.
This is a practical approach for finding or at least approximating NE, since if the parametric family is expressive enough to represent the complexities of the problem under study, we can expect that the parametric solution will approximate an equilibrium of the original MPG well (under mild continuity assumptions, small deviations in the parametric policies should translate to small perturbations in the value functions). We remark that this parametric policy assumption has been widely used for learning the solution of single-agent OCPs with continuous state-action sets; see, e.g., BID9; BID17 BID24 BID20. Here, we show that the same idea can be extended to MPGs in a principled manner. Moreover, once we have formulated the related OCP, we can apply reinforcement learning techniques to find an optimal solution. Some recent works have applied deep reinforcement learning (DRL) to cooperative Markov games BID4 BID22, which are a particular case of MPGs. Our results show that similar approaches can be used for more general MPGs. We provide sufficient and necessary conditions on the agents' reward function for a stochastic game to be an MPG. Then, we show that a closed-loop Nash equilibrium can be found (or at least approximated) by solving a related optimal control problem (OCP) that is similar to the MPG but with a single-objective reward function. We provide two ways to obtain the reward function of this OCP: i) computing the line integral of a vector field composed of the partial derivatives of the agents' reward, which is theoretically appealing since it has the form of a potential function but difficult to obtain for complex parametric policies; ii) and as a separable term in the agents' reward function, which can be obtained easily by inspection for any arbitrary parametric policy. We illustrate the proposed approach by applying DRL to a noncooperative Markov game that models a communications engineering application (in addition, we illustrate the differences with the previous standard approach by solving a classic resource sharing game analytically in the appendix). Let N {1, . . ., N} denote the set of agents. Let a k,i be the real vector of length A k that represents the action taken by agent k ∈ N at time i, where A k ⊆ R A k is the set of actions of agent k ∈ N. Let A k∈N A k denote the set of actions of all agents, that is, the Cartesian product of every agent's action space, such that A ⊆ R A, where A = k∈N A k. The vector that contains the actions of all agents at time i is denoted a i ∈ A. Let X ⊆ R S denote the set of states of the game, such that x i is a real vector of length S that represents the state of the game at time i, with components x i (s): DISPLAYFORM0 Note that the dimensionality of the state set can be different from the number of agents (i.e., possibly S ≠ N). State transitions are determined by a probability distribution over the future state, conditioned on the current state-action pair: DISPLAYFORM1 where we use boldface notation for denoting random variables. State transitions can be equivalently expressed as a function, f: X × A × Θ → X, that depends on some random variable θ i ∈ Θ, with distribution p θ (·|x i, a i), such that DISPLAYFORM2 We include a vector of C constraint functions, DISPLAYFORM3, where g c: X × A → R; and define the constraint sets for i = 0: C 0 A ∩ {a 0 : g(x 0, a 0) ≤ 0}; and for i = 0,..., ∞: C i {X ∩ {x i : DISPLAYFORM4, which determine the feasible states and actions.
The instantaneous reward of each agent, r k,i, is also a random variable conditioned on the current state-action pair: DISPLAYFORM5 We assume that θ i and σ k,i are independent of each other and of any other θ j and σ k,j, at every time step j ≠ i, given x i and a i. Let π k : X → A k and π : X → A denote the policy for agent k and all agents, respectively, such that: DISPLAYFORM6 Let Ω k and Ω = k∈N Ω k denote the policy spaces for agent k and for all agents, respectively, such that π k ∈ Ω k and π ∈ Ω. Note that Ω(X) = A. Introduce also π −k: X → A −k as the policy of all agents except that of agent k. Then, by slightly abusing notation, we write: DISPLAYFORM7 The general (i.e., nonparametric) stochastic game with Markov dynamics consists of a multiobjective variational problem with design space Ω and objective space R N, where each agent aims to find a stationary policy that maximizes its expected discounted cumulative reward, for which the vector of constraints, g, is satisfied almost surely: DISPLAYFORM8 Similar to static games, since there might not exist a policy that maximizes every agent's objective, we will rely on the Nash equilibrium (NE) as the solution concept. But rather than trying to find a variational NE solution for this game, we propose a more tractable approximate game by constraining the policies to belong to some finite-dimensional parametric family. Introduce the set of parametric policies, Ω w, as a finite-dimensional function space with parameter w ∈ W ⊆ R W: Ω w {π(·, w): w ∈ W}. Note that for a given w, the parametric policy is still a mapping from states to actions: π(·, w): X → A. Let w k ∈ W k ⊆ R W k denote the parameter vector of length W k for the parametrized policy π k, so that it lies in the finite-dimensional space Ω w k DISPLAYFORM9 Let w −k denote the parameters of all agents except that of agent k, so that we can also write: DISPLAYFORM10 In addition, we use w k to denote the -th component of DISPLAYFORM11. By constraining the policy of G 1 to lie in Ω w k, we obtain a multiobjective optimization problem with design space W: DISPLAYFORM12 The solution concept in which we are interested is the parametric closed-loop Nash equilibrium (PCL-NE), which consists of a parametric policy for which no agent has an incentive to deviate unilaterally. DISPLAYFORM13 Since G 2 is similar to G 1 but with an extra constraint on the policy set, loosely speaking, we can see a PCL-NE as a projection of some NE of G 1 onto the manifold spanned by the parametric family of choice. Hence, if the parametric family has arbitrary expressive capacity (e.g., a neural network with enough neurons in the hidden layers), we can expect that the resulting PCL-NE evaluated on G 1 will approximate arbitrarily closely the performance of an exact variational equilibrium. We consider the following general assumptions. Assumption 1 The state and parameter sets, X and W, are nonempty and convex. Assumption 2 The reward functions r k are twice continuously differentiable in X × W, ∀k ∈ N. Assumption 3 The state-transition function, f, and constraints, g, are continuously differentiable in X × W, and satisfy some regularity conditions (e.g., Mangasarian-Fromovitz). Assumption 4 The reward functions r k are proper, and there exists a scalar B such that the level sets DISPLAYFORM0 are nonempty and bounded ∀k ∈ N.
We say that r k is proper if: DISPLAYFORM1 In this section, we review the standard approach for tackling CL dynamic games (González-Sánchez and Hernández-). For simplicity, we consider deterministic game and no constraints: DISPLAYFORM0 First, it inverts f to express the policy in reduced form, i.e., as a function of current and future states: DISPLAYFORM1 This implicitly assumes that such function h: X × X → A exists, which might not be the case if f is not invertible. Next, π k is replaced with in each r k: DISPLAYFORM2 where r k: X×X → R is the reward in reduced-form. Then, the Euler equation (EE) and transversality condition (TC) are obtained from r k for all k ∈ N and used as necessary optimality conditions: DISPLAYFORM3 When r k are concave for all agents, and X ⊆ R + (i.e., X = {x i : DISPLAYFORM4, these optimality conditions become sufficient for Nash equilibrium (González-Sánchez and Hernández-, Theorem 4.1). Thus, the standard approach consists in guessing parametric policies from the space of functions Ω, and check whether any of these functions satisfies the optimality conditions. We illustrate this procedure with a well known resource-sharing game named "the great fish war" due to BID11, with Example 1 in Appendix A.Although the standard approach sketched above (see also Appendix A) has been the state-of-the-art for the analysis of CL dynamic games, it has some drawbacks: i) The reduced form might not exist; ii) constraints are not handled easily and we have to rely in ad hoc arguments for ensuring feasibility; iii) finding a specific parametric form that satisfies the optimality conditions can be extremely difficult since the space of functions is too large; and iv) the rewards have to be concave for all agents in order to guarantee that any policy that satisfies the conditions is an equilibrium. In order to overcome these issues, we propose to first constrain the set of policies to some parametric family, and then derive the optimality conditions for this parametric problem; as opposed to the standard approach that first derives the optimality conditions of G 1, and then guesses a parametric form that satisfies them. Based on this insight, we will introduce MPG with parametric policies as a class of games that can be solved with standard DRL techniques by finding the solution of a related (single-objective) OCP. We explain the details in the following section. In this section, we extend the OL analysis of BID25 to the CL case. We define MPGs with CL information structure; introduce a parametric OCP; provide verifiable conditions for a parametric approximate game to be an MPG in the CL setting; show that when the game is an MPG, we can find a PCL-NE by solving the parametric OCP with a specific objective function; and provide a practical method for obtaining such objective function. First, we define MPGs with CL information structure and parametric policies as follows. Definition 2 Given a policy family π(·, w) ∈ Ω w, game is an MPG if and only if there is a function J: X × W × Σ → R, named the potential, that satisfies the following condition ∀k ∈ N: DISPLAYFORM0 Definition 2 means that there exists some potential function, J, shared by all agents, such that if some agent k changes its policy unilaterally, the change in its reward, r k, equals the change in J.The main contribution of this paper is to show that when is a MPG, we can find one PCL-NE by solving a related parametric OCP. 
The generic form of such parametric OCP is as follows: DISPLAYFORM1 where we replaced the multiple objectives (one per agent) with the potential J as the single objective. This is convenient since solving a single-objective OCP is generally much easier than solving the Markov game. However, we still have to find out how to obtain J. The following theorem formalizes the relationship between G 2 and P 1 and shows one way to obtain J (proof in Appendix C). Theorem 1 Let Assumptions 1-4 hold. Let the reward functions satisfy the following ∀k, j ∈ N: DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 where the expected value is taken component-wise. Then, the game is an MPG that has a PCL-NE equal to the solution of the OCP. The potential J that is the instantaneous reward for the OCP is given by the line integral: DISPLAYFORM5 where η(z) (η k (z)) S m=1 and ξ(z) (ξ k (z)) k∈N are piecewise smooth paths in X and W, respectively, with components ξ k (z) (ξ k, (z)) W k =1, such that the initial and final state-action conditions are given by (η, ξ) and (η = x i, ξ = w). From FORMULA2, we can see that J is obtained through the line integral of a vector field whose components are the partial derivatives of the agents' rewards (see Appendix C), and hence the name potential function. Note also that Theorem 1 proves that any solution to P 1 is also a PCL-NE of G 2, but we remark that there may be more equilibria of the game that are not solutions to P 1 (see Appendix C). The usefulness of Theorem 1 is that, once we have the potential function, we can formulate and solve the related OCP for any specific parametric policy family. This is a considerable improvement over the standard approach. On one hand, if the chosen parametric policy family contains the optimal solution, then we will obtain the same equilibrium as the standard approach. On the other hand, if the chosen parametric family does not contain the optimal solution, the standard approach will fail, while our approach will always provide a solution that is an approximation (a projection onto Ω w) of an exact variational equilibrium. Moreover, as mentioned above, we can expect that the more expressive the parametric family, the more accurate the approximation to the variational equilibrium. In Appendix B, we show how to solve "the great fish war" game with the proposed framework, yielding the same solution as with the standard approach, with no loss of accuracy. Although expressing J as a line integral of a field is theoretically appealing, if the parametric family is involved (as is usually the case for expressive policies like deep neural networks), then the line integral might be difficult to evaluate. The following results show how to obtain J easily by visual inspection. First, the following corollary follows trivially from FORMULA10 - FORMULA25 and shows that cooperative games, where all agents have the same reward, are MPGs, and the potential equals the reward: Corollary 1 Cooperative games, where all agents have a common reward, such that DISPLAYFORM6 are MPGs; and the potential function equals the common reward function. Second, we address noncooperative games, and show that the potential can be found by inspection as a separable term that is common to all agents' reward functions. Interestingly, we will also show that a game is an MPG in the CL setting if and only if all agents' policies depend on disjoint subsets of components of the state vector.
More formally, introduce X π k as the set of state vector components that influence the policy of agent k and introduce a new state vector, x π k, and let x π −k,i be the vector of components that do not influence the policy of agent k: DISPLAYFORM7 In addition, introduce X r k as the set of components of the state vector that influence the reward of agent k directly (not indirectly through any other agent's policy), and define the state vectors: DISPLAYFORM8 Introduce also the union of these two subsets, DISPLAYFORM9, and its corresponding vectors: DISPLAYFORM10 Then, the following theorem allows us to obtain the potential function (proof in Appendix D). Theorem 2 Let Assumptions 1-4 hold. Then, the game is an MPG if and only if: i) the reward function of every agent can be expressed as the sum of a term common to all agents plus another term that depends neither on its own state-component vector, nor on its policy parameter: DISPLAYFORM11 (25) and ii) the following condition on the non-common term holds: DISPLAYFORM12 Moreover, if this condition holds, then the common term, J, equals the potential function. Note that the condition holds in the following cases: i) when Θ k = 0, as in the cooperative case described in Corollary 1; ii) when Θ k does not depend on the state but only on the parameter vector, i.e., Θ k: ∏ j∈N, j≠k W j → R, as in "the great fish war" example described in Appendix B; or iii) when all agents have disjoint state-component subsets, i.e., X Θ k ∩ X Θ j = ∅, ∀(k, j) ∈ {N × N : k ≠ j}. An interesting insight from Theorem 2 is that a dynamic game that is potential when it is analyzed in the OL case (i.e., the policy is a predefined sequence of actions) might not be potential when analyzed in the CL parametric setting. This is straightforward since the potentiality condition provided for the OL case in earlier analyses differs from the CL condition established here. In order to apply Theorems 1 and 2, we are implicitly assuming that there exists a solution to the OCP. We finish this section by showing that this is actually the case in our setting (proof in Appendix E). In other words, Prop. 1 shows that there exists a deterministic policy that achieves the optimal value of P 1, which is also an NE of G 2 if the potentiality conditions of Theorem 1 (or equivalently those of Theorem 2) hold. We remark that there might be many other (possibly stochastic) policies that are also NE of the game. In this section, we show how to use the proposed MPG framework to learn an equilibrium of a communications engineering application. We extend the Medium Access Control (MAC) game presented in BID25 to stochastic dynamics and rewards (where previous OL solutions would fail), and use the Trust Region Policy Optimization (TRPO) algorithm BID20, which is a reliable policy-search reinforcement learning method that approximates the policy with a deep neural network, to learn a policy that is a PCL-NE of the game. We consider a MAC uplink scenario with N = 4 agents, where each agent is a user that sets its transmitter power aiming to maximize its data rate and battery lifespan. If multiple users transmit at the same time, they will interfere with each other and decrease their rate, using their batteries inefficiently, so that they have to find an equilibrium. Let x k,i ∈ [0, B k,max] X k denote the battery level for each agent k ∈ N, which is discharged proportionally to the transmitted power. Let a k,i ∈ [0, P k,max] A k be the transmitted power for the k-th user, where the constants P k,max and B k,max stand for the maximum allowed transmitter power and battery level, respectively.
The system state is the vector with all users' battery levels: x i = (x k,i) k∈N ∈ X; such that S = N and all state vector components are unshared, i.e., X = k∈N X k ⊂ R N, and X k = {k}. We remark that although each agent's battery depletion level depends directly on its action and its previous battery level only, it also depends indirectly on the strategies and battery levels of the rest of the agents. The game can be formalized as follows: DISPLAYFORM0 where h k is the random fading channel coefficient for user k, α is the weight for the battery reward term, and δ is the discharging factor. First of all, note that each agent's policy and reward depend only on its own battery level, x k,i. Therefore, we can apply Theorem 2 and establish that the game is an MPG, with potential function: DISPLAYFORM1 Thus, we can formulate the OCP with the single objective given by this potential. Since the battery level is a positive term in the reward, the optimal policy will make the battery deplete in finite time (a formal argument can be derived from the transversality condition). Moreover, since δ k,i ≥ 0, the episode gets into a stationary (i.e., terminal) state once the battery has been depleted. We have chosen the reward to be convex. The reason is that, in order to compute a benchmark solution, we can solve the finite time-horizon convex OCP exactly with a convex optimization solver, e.g., CVX BID7, and use the results as a baseline for comparing with the solution learned by a DRL algorithm. Nevertheless, standard solvers do not allow us to include random variables. To surmount this issue, we generated 100 independent sequences of samples of h k,i and δ k,i for all k ∈ N and length T = 100 time steps each, and obtained two solutions with them. We set |h k,i | 2 as in DISPLAYFORM2, where v k,i is uniform in [0.5, 1], |h 1 | 2 = 2.019, |h 2 | 2 = 1.002, |h 3 | 2 = 0.514 and |h 4 | 2 = 0.308; and δ k,i is uniform in [0.7, 1.3]. The first solution is obtained by averaging the sequences and building a deterministic convex problem with the average sequence, which yielded an optimal value V cvx = 33.19. We consider V cvx to be an estimator of the optimal value of the stochastic OCP. The second solution is obtained by building 100 deterministic problems, solving them, and averaging their optimal values, which yielded an optimal value V avg,cvx = 34.90. We consider V avg,cvx to be an upper bound estimate of the optimal value of the stochastic OCP (Jensen's inequality). The batteries depleted to a level x T < 10 −6 in all cases, confirming that a time horizon of T = 100 steps is sufficient. We remark that these benchmark solutions required complete knowledge of the game. When we have no prior knowledge of the dynamics and rewards, the proposed approach allows us to learn a PCL-NE of the game by using any DRL method that is suitable for continuous states and actions, like TRPO BID20, DDPG or A3C BID15. DRL methods learn by interacting with a black-box simulator, such that at every time step i, agents observe state x i, take action a i = π w (x i) and observe the new stochastic battery levels and reward values, with no prior knowledge of the reward or state-dynamics functions. As a proof of concept, we perform simulations with TRPO, approximating the policy with a neural network with 3 hidden layers of 32 neurons per layer and ReLU activation functions, and an output layer that is the mean of a Gaussian distribution. Each iteration of TRPO uses a batch of 4000 simulation steps (i.e., tuples of state transitions, actions and rewards). The step-size is 0.01. FIG0 shows the results.
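To make the black-box simulator concrete, the following Python/NumPy sketch implements one plausible version of the MAC environment described above. The exact reward expression is elided in the text, so the rate term log(1 + SINR), the noise power, and the battery dynamics x_{k,i+1} = max(0, x_{k,i} - delta_{k,i} a_{k,i}) are assumptions made for illustration; the |h_k|^2 constants and the uniform ranges for v_{k,i} and delta_{k,i} are taken from the text, while b_max, p_max and alpha are placeholder values.

import numpy as np

class MACGameEnv:
    """Sketch of the MAC uplink simulator: state = battery levels, action = transmit powers."""
    def __init__(self, n_agents=4, b_max=10.0, p_max=2.0, alpha=0.1,
                 h2=(2.019, 1.002, 0.514, 0.308), noise=1.0, seed=0):
        self.n, self.b_max, self.p_max, self.alpha = n_agents, b_max, p_max, alpha
        self.h2, self.noise = np.asarray(h2, dtype=float), noise
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.x = np.full(self.n, self.b_max)            # battery levels
        return self.x.copy()

    def step(self, a):
        a = np.clip(a, 0.0, self.p_max) * (self.x > 0)  # no power once the battery is empty
        v = self.rng.uniform(0.5, 1.0, self.n)          # random fading factor v_{k,i}
        gains = self.h2 * v                             # assumed |h_{k,i}|^2 = |h_k|^2 v_{k,i}
        interference = np.sum(gains * a) - gains * a
        sinr = gains * a / (self.noise + interference)
        rewards = np.log1p(sinr) + self.alpha * self.x  # assumed rate + weighted battery term
        delta = self.rng.uniform(0.7, 1.3, self.n)      # discharging factor delta_{k,i}
        self.x = np.maximum(0.0, self.x - delta * a)
        return self.x.copy(), rewards

env = MACGameEnv()
x = env.reset()
x, r = env.step(np.full(4, 1.0))   # all users transmit at power 1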
After 400 iterations, TRPO achieves an optimal value V trpo = 32.34, which is 97.44% of V cvx, and 92.7% of the upper bound V avg,cvx. We have extended previous results on MPGs with constrained continuous state-action spaces, providing practical conditions and a detailed analysis of Nash equilibria with parametric policies, showing that a PCL-NE can be found by solving a related OCP. Having established a relationship between an MPG and an OCP is a significant step for finding an NE, since we can apply standard optimal control and reinforcement learning techniques. We illustrated the theoretical results by applying TRPO (a well-known DRL method) to an example engineering application, obtaining a PCL-NE that yields near-optimal results, very close to an exact variational equilibrium. A EXAMPLE: THE "GREAT FISH WAR" GAME - STANDARD APPROACH Let us illustrate the standard approach described in Section 3 with a well-known resource-sharing game named "the great fish war" due to BID11. We follow (González-Sánchez and Hernández-Lerma, Sec. 4.2). Example 1. Let x i be the stock of fish at time i, in some fishing area. Suppose there are N countries obtaining reward from fish consumption, so that they aim to solve the following game: DISPLAYFORM0 where x 0 ≥ 0 and 0 < α < 1 are given. In order to solve G fish, let us express each agent's action as: DISPLAYFORM1 so that the rewards can also be expressed in reduced form, as required by the standard approach: DISPLAYFORM2 Thus, the Euler equations for every agent k ∈ N and all t = 0,..., ∞ become: DISPLAYFORM3 Now, the standard method consists in guessing a family of parametric functions that replaces the policy, and checking whether such a parametric policy satisfies the Euler equations for some parameter vector. Let us try with policies that are linear mappings of the state: DISPLAYFORM4 By substituting this policy into the Euler equations, we obtain the following set of equations: DISPLAYFORM5 Fortunately, it turns out that this system has a solution (which might not be the case for other policy parametrizations), with parameters given by: DISPLAYFORM6 Since 0 < α < 1 and 0 ≤ γ < 1, it is apparent that w k > 0 and the constraint π k (x i) ≥ 0 holds for all x i ≥ 0. Moreover, since k∈N w k < 1, we have that x i+1 ≥ 0 for any x 0 ≥ 0. In addition, since x i is a resource and the actions must be nonnegative, it follows that lim i→∞ x i = 0 (there is no reason to save some resource). Therefore, the transversality condition holds. Since the rewards are concave, the states are non-negative and the linear policies with these coefficients satisfy the Euler and transversality equations, we conclude that they constitute an equilibrium (González-Sánchez and Hernández-Lerma, Theorem 4.1). B EXAMPLE: "GREAT FISH WAR" GAME - PROPOSED APPROACH In this section, we illustrate how to apply the proposed approach to the same "great fish war" example, obtaining the same results as with the standard approach. Example 2. Consider "the great fish war" game described in Example 1. In order to use our approach, we replace the generic policy with the specific policy mapping of our preference. We choose the linear mapping, π k (x i) = w k x i, to be able to compare the results with those obtained with the standard approach. Thus, we have the following game: DISPLAYFORM7 Let us verify conditions FORMULA9 - FORMULA9. For all k, j ∈ N we have: DISPLAYFORM8 DISPLAYFORM9 Since conditions FORMULA9 - FORMULA9 hold, we conclude that FORMULA5 is an MPG. By applying the line integral FORMULA2, we obtain: DISPLAYFORM10 Now, we can solve the OCP with this potential function.
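The potential above is obtained, per Theorem 1, as the line integral of a conservative vector field. As a purely illustrative aid, the sketch below integrates a given gradient field numerically along a straight path and checks it against a known potential; grad_field, z0 and z1 are placeholder names, and the toy field used in the check is not the fish-war field (whose components are in the elided formulas above).

import numpy as np

def potential_difference(grad_field, z0, z1, num=2000):
    """Approximate J(z1) - J(z0) = integral_0^1 F(z(t)) . z'(t) dt along the straight
    path z(t) = z0 + t (z1 - z0), assuming F = grad_field is conservative."""
    ts = np.linspace(0.0, 1.0, num)
    dz = z1 - z0
    vals = np.array([grad_field(z0 + t * dz) @ dz for t in ts])
    return np.sum((vals[1:] + vals[:-1]) / 2.0 * np.diff(ts))   # trapezoidal rule

# Toy check with a known potential J(z) = -0.5 ||z||^2, whose gradient field is F(z) = -z.
J = lambda z: -0.5 * z @ z
F = lambda z: -z
z0, z1 = np.zeros(3), np.array([1.0, -2.0, 0.5])
print(potential_difference(F, z0, z1), J(z1) - J(z0))   # the two numbers should agree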
For this particular problem, it is easy to solve the KKT system in closed form. Introduce a shorthand: DISPLAYFORM11 The Euler-Lagrange equation for this problem becomes: DISPLAYFORM12 The optimality condition with respect to the policy parameter becomes: DISPLAYFORM13 Let us solve for β i in: DISPLAYFORM14 Replacing FORMULA6 and the state-transition dynamics in FORMULA6, we obtain the following set of equations: DISPLAYFORM15 Hence, the parameters can be obtained as: DISPLAYFORM16 This is exactly the same solution that we obtained in Example 1 with the standard approach. We remark that for the standard approach, we were able to obtain the policy parameters since we put the correct parametric form of the policy in the Euler equation. If we had used another parametric family without a linear term, the Euler equations might have no solution and we would have got stuck. In contrast, with our approach, we could freely choose any other form of the parametric policy, and always solve the KKT system of the approximate game. Broadly speaking, we can say that the more expressive the parametric family, the more likely that the optimal policy of the original game will be accurately approximated by the optimal solution of the approximate game. Proof: The proof mimics the OL analysis from BID25. Let us build the KKT systems for the game and the OCP with parametric policies. For game, each agent's Lagrangian is given ∀k ∈ N by DISPLAYFORM0 where DISPLAYFORM1 ∈ R C are the vectors of multipliers at time i (which are random since they depend on θ i and x i), and we introduced: DISPLAYFORM2 Introduce a shorthand for the instantaneous Lagrangian of agent k: DISPLAYFORM3 The discrete time stochastic Euler-Lagrange equations applied to each agent's Lagrangian are different from the OL case studied in BID25 (see also (, Sec. 6 .1)), since we only take into account the variation with respect to the state: DISPLAYFORM4 where 0 S denotes the vector of length S. The transversality condition is given by DISPLAYFORM5 In addition, we have an optimality condition for the policy parameter w k: DISPLAYFORM6 From these first-order optimality conditions, we obtain the KKT system for every agent k ∈ N and all time steps i = 1,..., ∞: DISPLAYFORM7 DISPLAYFORM8 DISPLAYFORM9 DISPLAYFORM10 where λ k,i−1 is considered deterministic since it is known at time i. Now, we derive the KKT system of optimality conditions for the OCP. The Lagrangian for is given by: DISPLAYFORM11 where DISPLAYFORM12 ∈ R C are the corresponding multipliers, which are random variables since they depend on θ i and x i. By taking the discrete time stochastic EulerLagrange equations and the optimality condition with respect to the policy parameter for the OCP, we obtain are a KKT system for the OCP: i = 1,..., ∞: DISPLAYFORM13 DISPLAYFORM14 DISPLAYFORM15 DISPLAYFORM16 DISPLAYFORM17 where β i−1 is known at time i and includes the multipliers related to x i−1.By comparing FORMULA8 - FORMULA9 and FORMULA2 - FORMULA9, we conclude that both KKT systems are equal if the following holds ∀k ∈ N and i = 1,..., ∞: DISPLAYFORM18 DISPLAYFORM19 DISPLAYFORM20 Since Assumption 4 ensures existence of primal variable for the OCP, Assumption 3 guarantee the existence of dual variables that satisfy its KKT system. By applying and replacing the dual variables of the KKT of the game with the OCP dual variables for every agent, we obtain a system of equations where the only unknowns are the user strategies. This system is similar to the OCP in the primal variables. 
Therefore, the OCP primal solution also satisfies the KKT necessary conditions of the game. Moreover, from the potentiality condition, it is straightforward to show that this primal solution of the OCP is also a PCL-NE of the MPG (see also BID25, Theorem 1)).Introduce the following vector field:F (x i, w, σ i) ∇ (xi,w) J (x i, π(x i, w), σ i ).Since F is conservative by construction (, Theorems 10.4, 10.5 and 10.9), conditions- are equivalent to FORMULA10 - FORMULA25 and we can calculate a potential J through line integral. Proof: We can rewrite game by making explicit that the actions from the policy mapping, which yields an expression that reminds the OL problem but with extra constraints: DISPLAYFORM0 where it is clear that: a i (a k,i, a −k,i) = π(x i, w) Rewrite also OCP with explicit dependence on the actions: DISPLAYFORM1 By following the Euler-Lagrange approach described in Theorem 1, we have that the KKT systems for game and OCP are equal if the dual variables are equal (including new extra dual variables for the equality constraints that relate the action and the policy) and the following first-order conditions hold ∀k ∈ N and i = 1,..., ∞: DISPLAYFORM2 E ∇ a k,i r k x r k,i, a k,i, a −k,i, σ k,i = E ∇ a k,i J (x i, a i, σ i).The benefit of this reformulation is that the gradient in is taken with respect to the components in X r k only (instead of the whole set X), at the cost of replacing with the sequence of conditions. We have to realize that a k,i is indeed a function of variables x π k,i and w k. In order to understand the influence of this variable change, we use the identity a k,i = π w k (x π k,i) and apply the chain rule to both sides of, obtaining: DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 In addition, since J is proper, it must be upper bounded, i.e., ∃U ∈ R, such that J ≤ U. Then, we have: DISPLAYFORM7 Since B ≤ U, we have that DISPLAYFORM8 Therefore, the level sets are bounded. From Assumption 2 the fact that J can be obtained from line integral FORMULA2, and fundamental theorem of calculus, we deduce that J is continuous. Therefore, we conclude that these level sets are also compact. Thus, we can use (, Prop. 3.1.7, see also Sections 1.2 and 3.6) to ensure existence of an optimal policy.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJm7VfZA-
We present general closed loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning approximate closed-loop Nash equilibrium.
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state-of-the-art results for compression of full-size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results. Bits back coding (; Hinton & van Camp) is a method for performing lossless compression using a latent variable model. In an ideal implementation, the method can achieve an expected message length equal to the variational free energy, i.e., the negative of the evidence lower bound (ELBO) of the model. Bits back was first introduced to form a theoretical argument for using the ELBO as an objective function for machine learning (Hinton & van Camp). The first implementation of bits back coding made use of first-in-first-out (FIFO) arithmetic coding (AC). However, the implementation did not achieve optimal compression, due to an incompatibility between a FIFO coder and bits back coding, and its use was only demonstrated on a small dataset of 8×8 binary images. We extend BB-ANS to hierarchical latent variable models and call the resulting method 'Hierarchical Latent Lossless Compression' (HiLLoC). In our experiments (Section 4), we demonstrate that HiLLoC can be used to compress color images from the ImageNet test set at rates close to the ELBO, outperforming all of the other codecs which we benchmark. We also demonstrate the speedup, of nearly three orders of magnitude, resulting from vectorization. We release an open source implementation based on 'Craystack', a Python package which we have written for general prototyping of lossless compression with ANS. In this section we briefly describe the BB-ANS algorithm first introduced by Townsend et al. We begin by giving a high-level description of the ANS LIFO entropy coder, along with a new notation for describing the basic ANS operations. Throughout the rest of the paper we use log to mean the base two logarithm, usually denoted log 2, and we measure message lengths in bits. As an entropy coder, ANS was designed for compressing sequences of discretely distributed symbols. It achieves a compressed message length equal to the negative log-probability (information content) of the sequence plus an implementation dependent constant, which is usually less than 32 bits. For long sequences, the constant overhead has a negligible contribution to the overall compression rate. Thus, by Shannon's source coding theorem, ANS coding is guaranteed to be near-optimal for long sequences. There are two basic operations defined by ANS, which we will refer to as 'push' and 'pop'. Push encodes a symbol by adding it to an existing message. It has the signature push: (message, symbol) → message. Pop is the inverse of push, and may be used to decode a symbol and recover a message identical to that before pushing. pop: message → (message, symbol). When multiple symbols are pushed in sequence, they must be popped using the precise inverse procedure, which means popping the symbols in the opposite order. This is why ANS is referred to as a last-in-first-out coder, or a stack. The push and pop operations require access to a probabilistic model of symbols, summarized by a probability mass function p over the alphabet of possible symbols.
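For intuition about the push/pop interface, a minimal unbounded-precision range-ANS coder can be written in a few lines of Python. This is only a sketch of the interface described above, not Craystack's implementation (which keeps a bounded, vectorised state and renormalises); symbol frequencies are assumed to be positive integers.

def make_codec(freqs):
    """freqs: dict mapping symbol -> integer frequency. Returns (push, pop) closures."""
    M = sum(freqs.values())
    starts, c = {}, 0
    for s in freqs:
        starts[s] = c
        c += freqs[s]

    def push(state, s):
        f, cs = freqs[s], starts[s]
        return (state // f) * M + cs + (state % f)   # message length grows by ~log2(M/f) bits

    def pop(state):
        slot = state % M
        for s in freqs:                              # linear search, for clarity only
            if starts[s] <= slot < starts[s] + freqs[s]:
                f, cs = freqs[s], starts[s]
                return f * (state // M) + slot - cs, s

    return push, pop

push, pop = make_codec({'a': 3, 'b': 1})
state = 1 << 16                                      # some initial message
for s in 'aab':
    state = push(state, s)
out = []
for _ in range(3):
    state, s = pop(state)
    out.append(s)
print(''.join(out))    # 'baa': symbols come back in reverse (last-in-first-out) order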
The way that symbols are encoded depends on the model, and pushing a symbol s according to p results in an increase in message length of log(1/p(s)). Popping s results in an equal reduction in message length. For details on how the ANS operations are implemented, see the ANS literature. Note that any model/mass function can be used for the pop operation, i.e. there is no hard restriction to use the distribution that was used to encode the message. In this way, rather than decoding the same data that was encoded, pop can actually be used to sample a symbol from a different distribution. The pop method itself is deterministic, so the source of randomness for the sample comes from the data contained within the message. This sampling operation, which can be inverted by pushing the sample back onto the stack, is essential for bits back coding. For convenience, we introduce the shorthand notation s → p(·) for encoding (pushing) a symbol s according to p, and s ← p(·) for decoding (popping). Suppose we have a model for data x which involves a latent variable z. A sender and receiver wish to communicate a sample x. They have access to a prior on z, denoted p(z), a likelihood p(x | z) and a (possibly approximate) posterior q(z | x), but not the marginal distribution p(x). Without access to p(x), sender and receiver cannot directly code x using ANS. However, BB-ANS specifies an indirect way to push and pop x. It does not require access to the marginal p(x), but rather uses the prior, conditional, and posterior from the latent variable model. Table 1(a) shows, in order from the top, the three steps of the BB-ANS pushing procedure which the sender can perform to encode x. The 'Variables' column shows the variables known to the sender before each step. Table 1(b) shows the inverse steps which the receiver can use to pop x, with the 'Variables' column showing what is known to the receiver after each step. After decoding x, the third step of popping, z → q(· | x), is necessary to ensure that BB-ANS pop is a precise inverse of push. Table 1: Indirectly pushing and popping x using BB-ANS. → and ← denote pushing and popping respectively. ∆L denotes the change in message length resulting from each operation. The three steps to push/pop are ordered, starting at the top of the table and descending. The change in message length from BB-ANS can easily be derived by adding up the quantities in the ∆L column of Table 1. For encoding we get ∆L = log q(z | x) − log p(x | z) − log p(z). Taking the expectation over z gives the expected message length for a datum x, which is the negative of the evidence lower bound (ELBO), also known as the free energy. This is a commonly used training objective for latent variable models. The above equation implies that latent variable models trained using the ELBO are implicitly being trained to minimize the expected message length of lossless compression using BB-ANS. Note that, as Table 1 shows, the first step of encoding a data point, x, using BB-ANS is to, counterintuitively, decode (and thereby sample) a latent z ← q(· | x). This requires that there is already a buffer of random data pushed to the ANS coder, which can be popped. This data used to start the encoding process is recovered after the final stage of decoding, hence the name 'bits back'. If we have multiple samples to compress, then we can use 'chaining', which is essentially repeated application of the procedure in Table 1. In Section 3.4 we describe how we build up an initial buffer of compressed data by using a different codec to code the first images in a sequence.
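The three encoding steps of Table 1 and their inverses can be written schematically as follows. The helper codecs (pop_q, push_lik, push_prior and their counterparts) are assumed to exist and to push/pop according to the discretized q(z | x), p(x | z) and p(z); this sketch shows only the control flow and the order of operations, not a complete codec.

def bbans_push(state, x, pop_q, push_lik, push_prior):
    """Encode x: pop z from q(.|x) (this 'borrows' bits from the message), then push x and z."""
    state, z = pop_q(state, x)      # z <- q(.|x)
    state = push_lik(state, x, z)   # x -> p(.|z)
    state = push_prior(state, z)    # z -> p(.)
    return state

def bbans_pop(state, push_q, pop_lik, pop_prior):
    """Decode x by inverting the three steps in the opposite order, returning the borrowed bits."""
    state, z = pop_prior(state)     # z <- p(.)
    state, x = pop_lik(state, z)    # x <- p(.|z)
    state = push_q(state, z, x)     # z -> q(.|x)
    return state, x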
We now discuss the techniques we introduce to scale up BB-ANS. When all of the layers in the generative and recognition networks of a VAE are either convolutional or elementwise functions (i.e. the VAE has no densely connected layers), then it is possible to evaluate the recognition network on images of any height and width, and similarly to pass latents of any height and width through the generative network to generate an image. Thus, such a VAE can be used as a (probabilistic) model for images of any size. We exploit this fact, and show empirically in Section 4 that, surprisingly, a fully convolutional VAE trained on 32 × 32 images can perform well (in the sense of having a high ELBO) as a model for 64 × 64 images as well as far larger images. This in turn corresponds to a good compression rate, and we implement lossless compression of arbitrary sized images by using a VAE in this way. The primary computational bottlenecks in the original BB-ANS implementation were loops over data and latent variables occurring in the Python interpreter. We have been able to vectorize these, achieving an implementation which can scale to large ImageNet images. The effect of vectorization on runtime is shown in Figure 4. A vectorized implementation of ANS was described in using SIMD instructions. This works by expanding the size of the ANS stack head, from a scalar to a vector, and interleaving the output/input bit stream. We implement this in our lossless compression library, Craystack, using Numpy. Please refer to the Craystack code and to for more detail. We ensure that the compression rate overhead to vectorization is low by using the BitKnit technique described in , see Appendix D for more detail. Having vectorized, we found that most of the compute time for our compression was spent in neural net inference, whether running on CPU or GPU, which we know to already be reasonably well optimized. In Craystack, we further generalize the ANS coder using Numpy's n-dimensional array view interface, allowing the stack head to be'shaped' like an n-dimensional array, or a nested Python data-structure containing arrays. We can then use a shape which fits that of the data that we wish to encode or decode. When coding data according to a VAE we use an ANS stack head shaped into a pair of arrays, matching the shapes of the observation x and the latent z. This allows for a straightforward implementation and clarifies the lack of data dependence between certain operations, such as the and z → p(·) during encoding, which can theoretically be performed concurrently. This vectorized encoding process is visualized in Figure 2. It is standard for state of the art latent variable models to use continuous latent variables. Since ANS operates over discrete probability distributions, if we wish to use BB-ANS with such models it is necessary to discretize the latent space so that latent samples can be communicated. described a static discretization scheme for the latents in a simple VAE with a single layer of continuous latent variables, and showed that this discretization has a negligible impact on compression rate. The addition of multiple layers of stochastic variables to a VAE has been shown to improve performance (; ; Maaløe et al., 2019; Sønderby et al., 2016). Motivated by this, we propose a discretization scheme for hierarchical VAEs with multiple layers of latent variables. The discretization described in is formed by dividing the latent space into intervals of equal mass under the prior p(z). 
For a hierarchical model, the prior on each layer depends on the previous layers: It isn't immediately possible to use the simple static scheme from , since the marginals p(z 1),..., p(z L−1) are not known. estimate these marginals by sampling, and create static bins based on the estimates. They demonstrate that this approach can work well. We propose an alternative approach, allowing the discretization to vary with the context of the latents we are trying to code. We refer to our approach as dynamic discretization. In dynamic discretization, instead of discretizing with respect to the marginals of the prior, we discretize according to the conditionals in the prior, p(z l | z l+1:L). Specifically, for each latent layer l, we partition each dimension into intervals which have equal probability mass under the conditional p(z l | z l+1:L). This directly generalizes the scheme used in BB-ANS . Dynamic discretization is more straightforward to implement because it doesn't require callibrating the discretization to samples. However it imposes a restriction on model structure, in particular it requires that posterior inference is done top-down. This precludes the use of Bit-Swap. In Section 3.3.1 we contrast the model restriction from dynamic discretization with the bottom-up, Markov restriction imposed by Bit-Swap itself. We give further details about the dynamic discretization implementation we use in Appendix A. Figure 3: Graphical models representing the generative and inference models with HiLLoC and Bit-Swap, both using a 3 layer latent hierarchy. The dashed lines indicate dependence on the fixed observation. The first stage of BB-ANS encoding is to pop from the posterior, z 1:L ← q(· | x). When using dynamic discretization, popping the layer z l requires knowledge of the discretization used for z l and thus of the conditional distribution p(z l | z l+1:L). This requires the latents z l+1:L to have already been popped. Because of this, latents in general must be popped (sampled) in'top-down' order, i.e. z L first, then z L−1 and so on down to z 1. The most general form of posterior for which top-down sampling is possible is This is illustrated, for a hierarchy of depth 3, in Figure 3b. The Bit-Swap technique requires that inference be done bottom up, and that generative and inference models must both be a Markov chain on z 1,..., z L, and thus cannot use skip connections. These constraints are illustrated in Figure 3c,d. Skip connections have been shown to improve model ELBO in very deep models (Sønderby et al., 2016; Maaløe et al., 2019). HiLLoC does not have this constraint, and we do utilize skip connections in our experiments. As discussed in Section 3.3, our dynamic discretization method precludes the use of Bit-Swap for reducing the one-time cost of starting a BB-ANS chain. We propose instead to use a significantly simpler method to address the high cost of coding a small number of samples with BB-ANS, namely we code the first samples using a different codec. The purpose of this is to build up a sufficiently large buffer of compressed data to permit the first stage of the BB-ANS algorithm -to pop a latent sample from the posterior. In our experiments we use the'Free Lossless Image Format' (FLIF) to build up the buffer. We chose this codec because it performed better than other widely used codecs, but in principal any lossless codec could be used. 
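As a concrete illustration of the dynamic discretization described in Section 3.3, the sketch below constructs equal-mass bins under a Gaussian conditional prior p(z_l | z_{l+1:L}) = N(mu, sigma^2) and maps a continuous latent value to a bin index. The Gaussian form and the function names are assumptions for illustration; in the actual codec the bin indices are what get pushed/popped, and latents are reconstructed from the bin centres.

import numpy as np
from scipy.stats import norm

def equal_mass_bins(mu, sigma, n_bins):
    """Edges and centres of n_bins intervals of equal mass under N(mu, sigma^2)."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    edges = norm.ppf(qs, loc=mu, scale=sigma)                 # -inf and +inf at the two ends
    centres = norm.ppf((qs[:-1] + qs[1:]) / 2, loc=mu, scale=sigma)
    return edges, centres

def discretize(z, edges):
    """Index of the bin containing the continuous latent value z."""
    return np.searchsorted(edges, z, side='right') - 1

edges, centres = equal_mass_bins(mu=0.0, sigma=1.0, n_bins=8)
i = discretize(0.3, edges)
print(i, centres[i])    # bin index and the reconstructed value used further down the hierarchy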
The amount of previously compressed data required to pop a posterior sample from the ANS stack (and therefore start the BB-ANS chain) is roughly proportional to the size of the image we wish to compress, since in a fully convolutional model the size of the latent space is determined by the image size. We can exploit this to allow us to obtain a better compression rate than FLIF as quickly as possible. We do so by partitioning the first images we wish to compress with HiLLoC into smaller patches. These patches require a smaller data buffer, and thus we can use the superior HiLLoC coding sooner than if we attempted to compress full images. We find experimentally that, generally, larger patches have a better coding rate than smaller patches. Therefore we increase the size of the image patches being compressed with HiLLoC as more images are compressed and the size of the data buffer grows, until we finally compress full images once the buffer is sufficiently large. For our experiments on compressing full ImageNet images, we compress 32×32 patches, then 64×64, then 128×128 before switching to coding the full size images directly. Note that since our model can compress any shape image, we can compress the edge patches which will have different shape if the patch size does not divide the image dimensions exactly. Using this technique means that our coding rate improves gradually from the FLIF coding rate towards the coding rate achieved by HiLLoC on full images. We compress only 5 ImageNet images using FLIF before we start compressing 32×32 patches using HiLLoC. Using Craystack, we implement HiLLoC with a ResNet VAE (RVAE) . This powerful hierarchical latent variable model achieves ELBOs comparable to state of the art autoregressive models 2. In all experiments we used an RVAE with 24 stochastic hidden layers. The RVAE utilizes skip connections, which are important to be able to effectively train models with such a deep latent hierarchy. See Appendix E for more details. We trained the RVAE on the ImageNet 32 training set, then evaluated the RVAE ELBO and HiLLoC compression rate on the ImageNet 32 test set. To test generalization, we also evaluated the ELBO and compression rate on the tests sets of ImageNet64, CIFAR10 and full size ImageNet. For full size ImageNet, we used the partitioning method described in 3.4. The are shown in Table 2. For HiLLoC the compression rates are for the entire test set, except for full ImageNet, where we use 2000 random images from the test set. Table 2 shows that HiLLoC achieves competitive compression rates on all benchmarks, and state of the art on full size ImageNet images. The fact that HiLLoC can achieve state of the art compression on ImageNet relative to the baselines, even under a change of distribution, is striking. This provides strong evidence of its efficacy as a general method for lossless compression of natural images. Naively, one might expect a degradation of performance relative to the original test set when changing the test distribution-even more so when the resolution changes. However, in the settings we studied, the opposite was true, in that the average per-pixel ELBO (and thus the compressed message length) was lower on all other datasets compared to the ImageNet 32 validation set. In the case of CIFAR, we conjecture that the reason for this is that its images are simpler and contain more redundancy than ImageNet. 
This theory is backed up by the performance of standard compression algorithms which, as shown in Table 2, also perform better on CIFAR images than they do on ImageNet 32. We find the compression rate improvement on larger images more surprising. We hypothesize that this is because pixels at the edge of an image are harder to model because they have less context to reduce uncertainty. The ratio of edge pixels to interior pixels is lower for larger images, thus we might expect less uncertainty per pixel in a larger image. To demonstrate the effect of vectorization we timed ANS of single images at different, fixed, sizes, using a fully vectorized and a fully serial implementation. The are shown in Figure 4, which clearly shows a speedup of nearly three orders of magnitude for all image sizes. We find that the run times for encoding and decoding are roughly linear in the number of pixels, and the time to compress an average sized ImageNet image of 500 × 374 pixels (with vectorized ANS) is around 29s on a desktop computer with 6 CPU cores and a GTX 1060 GPU. Our experiments demonstrate HiLLoC as a bridge between large scale latent variable models and compression. To do this we use simple variants of pre-existing VAE models. Having shown that bits back coding is flexible enough to compress well with large, complex models, we see plenty of work still to be done in searching model structures (i.e. architecture search), optimizing with a trade-off between compression rate, encode/decode time and memory usage. Particularly pertinent for HiLLoC is latent dimensionality, since compute time and memory usage both scale with this. Since the model must be stored/transmitted to use HiLLoC, weight compression is also highly relevant. This is a well-established research area in machine learning . Our experiments also demonstrated that one can achieve good performance on a dataset of large images by training on smaller images. This is promising, but future work should be done to discover what the best training datasets are for coding generic images. One question in particular is whether could be improved by training on larger images and/or images of varying size. We leave this to future work. Another related direction for improvement is batch compression of images of different sizes using masking, analogous to how samples of different length may be processed in batches by recurrent neural nets. Whilst this work has focused on latent variable models, there is also promise in applying state of the art fully observed auto-regressive models to lossless compression. We look forward to future work investigating the performance of models such as WaveNet (van den) for lossless audio compression as well as PixelCNN++ and the state of the art models in for images. Sampling speed for these models, and thus decompression, scales with autoregressive sequence length, and can be very slow. This could be a serious limitation, particularly in common applications where encoding is performed once but decoding is performed many times. This effect can be mitigated by using dynamic programming , and altering model architecture , but on parallel architectures sampling/decompression is still significantly slower than with VAE models. On the other hand, fully observed models, as well as the flow based models of and, do not require bits back coding, and therefore do not have to pay the one-off cost of starting a chain. Therefore they may be well suited to situations where one or a few i.i.d. samples are to be communicated. 
Similar to the way that we use FLIF to code the first images for our experiments, one could initially code images using a fully observed model then switch to a faster latent variable model once a stack of bits has been built up. We presented HiLLoC, an extension of BB-ANS to hierarchical latent variable models, and show that HiLLoC can perform well with large models. We open-sourced our implementation, along with the Craystack package for prototyping lossless compression. We have also explored generalization of large VAE models, and established that fully convolutional VAEs can generalize well to other datasets, including images of very different size to those they were trained on. We have described how to compress images of arbitrary size with HiLLoC, achieving a compression rate superior to the best available codecs on ImageNet images. We look forward to future work reuniting machine learning and lossless compression. After discretizing the latent space, the latent variable at layer l can be treated as simply an index i l into one of the intervals created by the discretization. As such, we introduce the following notation for pushing and popping according to a discretized version of the posterior. Where is the distribution over the intervals of the discretized latent space for z l, with interval masses equal to their probability under q(z l |z l+1:L, x). The discretization is created from splitting the latent space into equal mass intervals under p(z l |z l+1:L). The mass of a given interval under some distribution is the CDF at the upper bound of the interval minus the CDF at the lower end of the interval. We have usedz to indicate that these will be discrete z l values that are reconstructed from the indices i l. In practise we takez l (i l) to be the centre of the interval indexed by i l. It is important to note that the Q l has an implicit dependence on the previous prior distributions p(z k |z k+1:L) for k ≥ l, as these prior distributions are required to calculatez l+1:L and the discretization of the latent space. Since we discretize each latent layer to be intervals of equal mass under the prior, the prior distribution over the indices i l becomes a uniform distribution over the interval indices, U (i l), which is not dependent on i =l. Note that this allows us to push/pop the i l according to the prior in parallel. The full encoding and decoding procedures with a hierarchical latent model and the dynamic discretization we have described are shown in Table 3. Note that the operations in the two tables are ordered top to bottom. Variables Operation Table 3: The BB-ANS encoding and decoding operations, in order from the top, for a hierarchical latent model with l layers. The Q l are posterior distributions over the indices i l of the discretized latent space for the lth latent, z l. The discretization for the lth latent is created such that the intervals have equal mass under the prior. Here we describe a codec to compress a set of images of arbitrary size. The encoder now adds the dimensions of the image being coded to the stream of compressed data, such that the decoder knows what shape the image will be before decoding it. Since we are using a vectorized ANS coder, as described in Section 3.2, we resize the top of the coder in between each coding/decoding step such that the size of the top of the coder matches the sizes of the image and latents being coded. The codec is detailed in Table 4. 
To make the resizing procedure efficient, we resize via'folding' the top of the vectorized ANS coder such that we are roughly halving/doubling the number of individual ANS coders each time we fold. This makes the cost of the resize logarithmic with the size difference between the vectorized coder and the targeted size. The achieved compression rate on the entire ImageNet validation set is displayed in Table 5. The autoregressive component of the PixelVAE generative model leads to an asymmetry between the times required for compression and decompression. Compression with the PixelVAE model is readily parallelizable across pixels, since we already have access to the pixel values we wish to compress and thus also the conditional distributions on each pixel. However, decompression (equivalently, sampling) is not parallelizable across pixels, since we must decompress a pixel value in order to give us access to the conditional distribution on the next pixel. This means the time complexity of decompression is linear in the number of pixels, making it prohibitively slow for most image sizes. To ensure that the compression rate overhead from using vectorization is low, we use a technique from the BitKnit codec . When we reach the end of encoding, we could simply concatenate the integers in the (vector) stack head to form the final output message. However, this is inefficient because the stack head is not uniformly distributed. As discussed in , elements of the top of the stack have a probability mass roughly Equivalently, the length of h is approximately uniformly distributed. More detailed discussion and an empirical demonstration of this is given by. An efficient way to form the final output message at the end of decoding, is to fold the stack head vector by repeatedly encoding half of it onto the other half, until only a scalar remains, using the above distribution for the encoding. We implement this technique in Craystack and use it for our experiments. The number of (vectorized) encode steps required is logarithmic in the size (i.e. the number of elements) of the stack head. Some of the overhead from vectorization also comes at the start of encoding, when, in existing implementations, the elements of the stack head vector are initialized to copies of a fixed constant. Information from these copies ends up in the message and introduces redundancy which scales with the size of the head. This overhead can be removed by initializing the stack head to a vector of length 1 and then growing the length of the stack head vector gradually as more random data is added to the stack, by decoding new stack head vector elements according to the distribution. A full description of the RVAE architecture is given in , and a full implementation can be found in our repository https://github.com/hilloc-submission/hilloc, but we give a short description below. The RVAE is a hierarchical latent model, trained by maximization of the usual evidence lower bound (ELBO) on the log-likelihood: Take the latent hierarchy to be depth L, such that the latents are z 1:L. There are skip connections in both the generative model, p(x, z 1:L), and the inference model, q(z 1:L | x). Due to our requirement of using dynamic discretization, we use a top-down inference model 7. This means that we can write And the ELBO as Where D KL is the KL divergence. As in , the KL terms are individually clamped as max(D KL, λ), where λ is some constant. 
This is an optimization technique known as free bits, and aims to prevent latent layers in the hierarchy collapsing such that the posterior is equal to the prior. Each layer in the hierarchy consists of a ResNet block with two sets of activations. One set of activations are calculated bottom-up (in the direction of x to z L), and the other are calculated top-down. The bottom-up activations are used only within q(z 1:L | x), whereas the top-down activations are used by both q(z 1:L | x) and p(x, z 1:L). Every conditional distribution on a latent z l is parameterized as a diagonal Gaussian distribution, with mean and covariance a function of the activations within the ResNet block, and the conditional distribution on x is parameterized by a discretized logistic distribution. Given activations for previous ResNet blocks, the activations at the following ResNet block are a combination of stochastic and deterministic features of the previous latent layer, as well as from skip connections directly passing the previous activations. The features are calculated by convolutions. Note also that all latent layers are the same shape. Since we retained the default hyperparameters from the original implementation, each latent layer has 32 feature maps and spatial dimensions half those of the input (e.g. 7 Note that in , this is referred to as bidirectional inference.
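For reference, the free-bits clamp described above amounts to a one-line modification of the ELBO; the value of lam below is a placeholder, since the text does not state the value used.

def free_bits_elbo(log_px_given_z, kl_per_layer, lam=0.1):
    """ELBO with per-layer 'free bits': each KL term is clamped from below at lam,
    so a layer whose KL falls under lam is no longer pushed further towards the
    prior, which is what discourages posterior collapse of that layer."""
    return log_px_given_z - sum(max(kl, lam) for kl in kl_per_layer)

print(free_bits_elbo(-120.0, [0.02, 3.5, 1.2]))   # toy numbers, in nats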
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1lZgyBYwS
We scale up lossless compression with latent variables, beating existing approaches on full-size ImageNet images.
State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input. In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image. Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved. We hypothesize that this counter intuitive behavior is a naturally occurring of the high dimensional geometry of the data manifold. As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres. For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error. In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size $O(1/\sqrt{d})$. Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound. As a of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples. There has been substantial work demonstrating that standard image models exhibit the following phenomenon: most randomly chosen images from the data distribution are correctly classified and yet are close to a visually similar nearby image which is incorrectly classified BID22. This is often referred to as the phenomenon of adversarial examples. These adversarially found errors can be constructed to be surprisingly robust, invariant to viewpoint, orientation and scale BID3. Despite some theoretical work and many proposed defense strategies BID6 BID18 BID20 ) the cause of this phenomenon is still poorly understood. There have been several hypotheses proposed regarding the cause of adversarial examples. We briefly survey some of them here. One common hypothesis is that neural network classifiers are too linear in various regions of the input space, BID17. Another hypothesis is that adversarial examples are off the data manifold BID2 a; BID16. BID6 argue that large singular values of internal weight matrices may cause the classifier to be vulnerable to small perturbations of the input. Alongside works endeavoring to explain adversarial examples, others have proposed defenses in order to increase robustness. Some works increase robustness to small perturbations by changing the non-linearities used BID14, distilling a large network into a small network BID20, or using regularization BID6. Other works explore detecting adversarial examples using a second statistical model BID7 BID0 BID11 BID19. However, many of these methods have been shown to fail BID4 BID7. Finally, adversarial training has been shown in many instances to increase robustness BID18 BID15 BID22. Despite some progress on increasing robustness to adversarial perturbations, local errors have still been shown to appear for distances just beyond what is adversarially trained for BID21. This phenomenon is quite intriguing given that these models are highly accurate on the test set. We hypothesize that this behavior is a naturally occurring of the high dimensional nature of the data manifold. 
In order to begin to investigate this hypothesis, we define a simple synthetic task of classifying between two concentric high dimensional spheres. This allows us to study adversarial examples in a setting where the data manifold is well defined mathematically and where we have an analytic characterization of the decision boundary learned by the model. Even more importantly, we can naturally vary the dimension of the data manifold and study the effect of the input dimension on the geometry of the generalization error of neural networks. Our experiments and theoretical analysis on this dataset demonstrate the following:
• A similar behavior to that of image models occurs: most randomly chosen points from the data distribution are correctly classified and yet are "close" to an incorrectly classified input. This behavior occurs even when the test error rate is less than 1 in 10 million.
• For this dataset, there is a fundamental tradeoff between the amount of generalization error and the average distance to the nearest error. In particular, we show that any model which misclassifies a small constant fraction of the sphere will be vulnerable to adversarial perturbations of size O(1/√d).
• Neural networks trained on this dataset naturally approach this theoretical optimal tradeoff between the measure of the error set and the average distance to nearest error. This implies that in order to linearly increase the average distance to nearest error, the error rate of the model must decrease exponentially.
• We also show that models trained on this dataset may become extremely accurate even when ignoring a large fraction of the input.
We conclude with a detailed discussion about the connection between adversarial examples for the sphere and those for image models. The data distribution is mathematically described as two concentric spheres in d dimensions: we generate a random x ∈ R d where ||x|| 2 is either 1.0 or R, with equal probability assigned to each norm (for this work we choose R = 1.3). We associate with each x a target y such that y = 0 if ||x|| 2 = 1 and y = 1 if ||x|| 2 = R. Studying a synthetic high dimensional dataset has many advantages:
• The probability density of the data p(x) is well defined and is uniform over all x in the support. We can also sample uniformly from p(x) by sampling z ∼ N (0, I) and then setting x = z/||z|| 2 or x = Rz/||z|| 2.
• There is a theoretical max margin boundary which perfectly separates the two classes (the sphere with radius (R + 1)/2).
• We can design machine learning models which provably can learn a decision boundary which perfectly separates the two spheres.
• We can control the difficulty of the problem by varying d and R.
Our choice of R = 1.3 was a bit arbitrary and we did not explore in detail the relationship between adversarial examples and the distance between the two spheres. Additionally, our choice to restrict the data distribution to be the shells of the two spheres was made to simplify the problem further. In our experiments we investigate training on this dataset in two regimes. First, the online setting where each minibatch is a uniform sample from p(x) (N = ∞). Second, where there is a fixed training set of size N and the network is trained for many epochs on this finite sample. Our first experiment used an input dimensionality of d = 500. We then train a 2-hidden-layer ReLU network with 1000 hidden units on this dataset.
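The data distribution above is straightforward to sample; the following NumPy sketch follows the construction described in the text (z ~ N(0, I), normalised to radius 1 for y = 0 or R for y = 1).

import numpy as np

def sample_spheres(n, d, R=1.3, seed=0):
    """Return n points of the concentric-spheres dataset in d dimensions with labels."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, d))
    x = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the unit sphere
    y = rng.integers(0, 2, size=n)                      # inner (0) or outer (1), equal probability
    x[y == 1] *= R
    return x, y

x, y = sample_spheres(5, 500)
print(np.linalg.norm(x, axis=1))   # norms are 1.0 or 1.3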
Figure 1: Visualizing a 2d slice of the input space where the subspace is spanned by: 2 randomly chosen directions (left), 1 random and 1 "adversarial direction" (center), and 2 orthogonal "adversarial directions" (right). The data manifold is indicated in black and the max margin boundary in red. The green area indicates points which are classified by the ReLU network as being on the inner sphere. In the last plot, the projection of the entire outer sphere is misclassified despite the fact that the error rate of the model is less than 1 in 10 million.

We applied batch normalization (Ioffe & Szegedy, 2015) to the two hidden layers, but not to the readout layer. We train with minibatch SGD, minimizing the sigmoid cross entropy loss. We use the Adam optimizer for 1 million training steps with minibatch size 50 and learning rate 0.0001. Because this is training in the online setting with batch size 50 and 1 million training steps, 50 million data points were used during training. We evaluated the final model on 10 million uniform samples from each sphere (20 million points in total) and observed no errors on these finite samples. Thus the error rate of this model is unknown; we only have a statistical upper bound on the error rate. Despite this, we are able to adversarially find errors on the data manifold by performing gradient descent on the spheres (see Section 3.1). There are two types of adversarial examples we generate using this method. The first are worst-case examples, where we iterate the attack until the attack objective converges and do not restrict to a local region around the starting point. The second type are nearest neighbor examples, where we terminate the attack on the first misclassification found. In Figure 1 we visualize the decision boundary by taking different 2d projections of the 500 dimensional space. When we take a random projection, the model has closely approximated the max margin boundary on this projection. Note the model naturally interpolates between the two spheres despite only being trained on samples from the surfaces of the spheres. By contrast, when we take a 2d projection where one basis vector is a worst-case adversarial example, the model's decision boundary is highly warped along this "adversarial direction". There are points of norm > 2 for which the model is confident are on the inner sphere. We can also take a slice where the x and y axes are an orthogonal basis for the subspace spanned by two different worst-case examples. Although the last plot shows that the entire projection of the outer sphere is misclassified, the volume of this error region is exceedingly small due to the high dimensional space. Despite being extremely rare, these misclassifications appear close to randomly sampled points on the sphere. The mean L2 distance to the nearest error on the data manifold is 0.18; by comparison, two randomly sampled points on the inner sphere are typically around √2 ≈ 1.41 apart. If we look for the nearest point in between the two spheres which is classified as being on the outer sphere, then we get an average L2 distance of 0.0796, and an average norm of 1.07. Thus the nearest example of the other class is typically about half the distance to the theoretical margin. This phenomenon of very unlikely but local errors appears only when the spheres are high dimensional. In FIG0 (right), we visualize the same model trained on 100 samples in the case where d = 2. The model makes no errors on the data manifold.
In our experiments, the highest dimension at which we were able to train the ReLU net such that no errors can be found adversarially (local or not) is around d = 60.

FIG0: We consider the ReLU net trained on 50 million samples from two 500 dimensional spheres of radius 1.0 and 1.3. We evaluate the accuracy of this network on the entire space using a theoretical decision boundary of 1.15. For each norm considered we plot the accuracy among 10000 random samples. We see the accuracy rapidly increases as we move away from the margin. As we move far enough away we no longer observe errors on the random samples. However, we are able to adversarially find errors at norms as extreme as 0.6 and 2.4. Right: We trained the same ReLU net on 100 samples from the data distribution when d = 2. By visualizing predictions on a dense subset of the entire space it appears that the model makes no errors on either circle.

Several recent works have hypothesised that adversarial examples are off the data manifold BID2 a; BID16. We wanted to test if adversarial examples were off the data manifold. To that end we designed an attack which specifically produces adversarial examples on the data manifold, which we call a manifold attack. Traditional attack methods for image models start with an input x and target class ŷ, and find an input x̂ that maximizes P(ŷ | x̂) subject to the constraint ||x − x̂|| < ε, where || · || is often chosen to be the L∞ norm. The manifold attack maximizes P(ŷ | x̂) subject to the constraint ||x̂||2 = ||x||2. This ensures that the produced adversarial example is of the same class as the starting point and lies in the support of the data distribution. We solve this constrained problem using projected gradient descent (PGD), where for the projection step we project back onto the sphere by normalizing ||x̂||2. Because this attack only produces adversarial examples on the data manifold, their probability under the data distribution is identical to that of correctly classified points in that p(x) = p(x adv). It is difficult to reason about the learned decision boundary of the ReLU network. To obtain a more complete understanding of the decision boundary, we next study a simpler model. The network, dubbed "the quadratic network", is a single hidden layer network where the pointwise non-linearity is a quadratic function, σ(x) = x 2. There is no bias in the hidden layer, and the output simply sums the hidden activations, multiplies by a scalar and adds a bias. With hidden dimension h the network has d × h + 2 learnable parameters. The logit is written as DISPLAYFORM0, where W 1 ∈ R h×d, 1 is a column vector of h 1's, and, finally, w and b are learned scalars. In the Appendix, we show that the output of this network can be rewritten in terms of per-coordinate coefficients α i. When we train the quadratic network with h = 1000 using the same setup as in Section 3 we arrive at the perfect solution: all of the α i ∈ [1/R 2, 1] and there are no adversarial examples. This again was in the online learning setup where each minibatch was an iid sample from the data distribution. The story is different, however, if we train on a finite sample from the data distribution. In particular, if we sample N = 10 6 data points from p(x) as a fixed finite training set and train using the same setup, we arrive at a model which empirically has a very low error rate (randomly sampling 10 million datapoints from each sphere results in no errors), but for which there are adversarial examples.
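A minimal sketch of the manifold attack is given below. To keep it self-contained, it attacks a toy quadratic-network-style decision function whose gradient is available in closed form; the coefficients alpha_i and the threshold b are made up for illustration, whereas the paper differentiates through the trained network and maximises the probability of the wrong class.

import numpy as np

def manifold_attack(x, logit_grad, steps=1000, lr=0.01):
    """Gradient ascent on the logit, projecting back onto the sphere of the
    starting norm after every step so the iterate stays on the data manifold."""
    r = np.linalg.norm(x)
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + lr * logit_grad(x_adv)
        x_adv = r * x_adv / np.linalg.norm(x_adv)      # projection step
    return x_adv

# Toy classifier: logit(x) = sum_i alpha_i x_i^2 - b, with a few 'incorrect' alpha_i.
d, R = 500, 1.3
rng = np.random.default_rng(1)
alpha = rng.uniform(0.9, 1.1, d)
alpha[:25] = 1.6                                       # coefficients outside [1/R^2, 1]
b = alpha.mean() * (1 + R ** 2) / 2
logit = lambda x: np.sum(alpha * x ** 2) - b
logit_grad = lambda x: 2 * alpha * x

z = rng.normal(size=d)
x0 = z / np.linalg.norm(z)                              # a typical, correctly classified inner point
x_adv = manifold_attack(x0, logit_grad)
print(logit(x0) < 0, logit(x_adv) > 0, np.linalg.norm(x_adv))   # an error found at the same norm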
In fact, 394 out of 500 of the learned αᵢ are incorrect in that αᵢ ∉ [1/R², 1] (for a complete histogram see Figure 3). We can use the Central Limit Theorem (CLT) to estimate the error rate of the quadratic network from the αᵢ (Section 4.1). The estimated error rate of this particular model is ≈ 10⁻¹¹. Note, we are applying the CLT at the tails of the distribution, so it is unclear how accurate this estimate is. However, we found the CLT closely approximates the error rate in the regime where it is large enough to estimate numerically. Next we augmented the above setup with a "perfect" initialization; we initialize the quadratic network at a point for which all of the αᵢ are "correct" but there are non-zero gradients due to the sigmoid cross-entropy loss. The network is initialized at a point where the sigmoid probability of y = 1 for the inner and outer spheres is 0.0016 and 0.9994, respectively. As shown in Figure 3, continued training from this initialization results in a rapid divergence of the worst-case and average-case loss. Although the average loss on the test set decreases with further training, the worst case rapidly increases and adversarial examples can once again be found after 1000 training steps. This behavior results from the fact that the training objective (average sigmoid cross-entropy loss) does not directly track the accuracy of the models. It also demonstrates how the worst and average case losses may diverge when the input is high dimensional. We can use the CLT to analytically estimate the accuracy of the quadratic network in terms of the αᵢ. The following proposition estimates the error rate on the inner sphere:

Proposition 4.1 Consider the decision boundary of the quadratic network of the form Σ_{i=1}^d αᵢ xᵢ² = 1. Let Z ∼ N(0, 1), and let z ∼ S₀ denote that the vector z is uniformly distributed on the inner sphere. Let μ and σ denote the mean and standard deviation of Σᵢ αᵢ zᵢ² under z ∼ S₀. Then the error rate on the inner sphere can be estimated as P_{z∼S₀}[Σᵢ αᵢ zᵢ² > 1] ≈ P[Z > (1 − μ)/σ].

Proposition 4.1 implies that there are many settings of αᵢ which obtain very low error rates. As long as E[αᵢ] ≈ (1 + R⁻²)/2 and their variance is not too high, the model will be extremely accurate. The histogram in Figure 3 illustrates this: the learned model has an error rate of 10⁻¹¹ but 80% of the αᵢ are incorrect. For a typical sample, the model sums incorrect numbers together and obtains the correct answer. Flexibility in choosing αᵢ while maintaining good accuracy increases dramatically with the input dimension. To illustrate this further, consider a special case of the quadratic network where the decision boundary is of the form Σ_{i=1}^k xᵢ² = b. This simplified model has two parameters: k, the number of dimensions the model looks at, and b, a threshold separating the two classes. How large does k need to be in order for the model to obtain a desired error rate (assuming b is chosen optimally based on k)? We answer this question using the CLT approximation in Proposition 4.1. In Figure 4 we plot the fraction of input dimensions needed to obtain a desired accuracy using this simplified model; as the input dimension grows, the ratio k/d needed quickly decreases. For example, if d = 3000 then the model can obtain an estimated error rate of 10⁻¹⁴ while only looking at 50% of the input.
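As a rough illustration of the CLT estimate used above, the sketch below computes approximate inner- and outer-sphere error rates of a quadratic network from its αᵢ. The variance approximation (roughly (2/d²)·Σ(αᵢ − ᾱ)² for a weighted sum of squared coordinates on the unit sphere) is a simplification, and the exact constants used in the paper's proposition may differ; function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def clt_error_rates(alpha, R=1.3):
    """CLT estimate of the error rates of the boundary sum_i alpha_i x_i^2 = 1.

    For x uniform on a sphere of radius r, sum_i alpha_i x_i^2 has mean
    r^2 * mean(alpha) and variance roughly (2 r^4 / d^2) * sum_i (alpha_i - mean(alpha))^2,
    so a normal approximation gives the tail probabilities below.
    """
    alpha = np.asarray(alpha, dtype=float)
    d = alpha.size
    mu = alpha.mean()
    sigma = np.sqrt(2.0 * np.sum((alpha - mu) ** 2)) / d
    inner_error = norm.sf((1.0 - mu) / sigma)          # inner-sphere point classified as outer
    outer_error = norm.cdf((1.0 / R**2 - mu) / sigma)  # outer-sphere point classified as inner
    return inner_error, outer_error

# Illustration: alphas spread around the "ideal" value (1 + R^-2) / 2 in d = 500.
rng = np.random.default_rng(0)
alpha = rng.normal(loc=(1 + 1.3 ** -2) / 2, scale=0.2, size=500)
print(clt_error_rates(alpha))   # both error rates are astronomically small
```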
As demonstrated in Section 3, neural networks trained on the sphere dataset exhibit a similar phenomenon to that of image datasets: most random samples from the data distribution are both correctly classified and close to a nearby misclassified point. In this work, we do not attempt to compare the geometry of the natural image manifold to that of the sphere, but we can explain why this property occurs on the sphere dataset. Let S₀ be the sphere of radius 1 in d dimensions and fix E ⊆ S₀ (we interpret E to be the set of points on the inner sphere which are misclassified by some model). For x ∈ S₀ let d(x, E) denote the L2 distance between x and the nearest point in the set E. Let d(E) = E_{x∼S₀}[d(x, E)] denote the average distance from a uniformly sampled point on the sphere to the set E. Finally, let µ(E) denote the measure of E as a fraction of the sphere (so µ(S₀) = 1). We prove the following theorem in the Appendix:

Theorem 5.1 Consider any model trained on the sphere dataset. Let p ∈ [0, 1.0] denote the accuracy of the model on the inner sphere, and let E denote the points on the inner sphere the model misclassifies (so in measure µ(E) = 1 − p). Then d(E) ≤ O(Φ⁻¹(p)/√d), where Φ⁻¹ denotes the inverse of the standard normal cdf.

This theorem directly links the probability of an error on the test set to the average distance to the nearest error, independently of the model. Any model which misclassifies a small constant fraction of the sphere must have errors close to most randomly sampled data points, no matter how the model errors are distributed on the sphere. At a high level it follows as a direct corollary of an isoperimetric inequality of BID8. The error set E of fixed measure µ(E) which maximizes the average distance to the nearest error d(E) is a "cap", which is a set of the form E = {x ∈ S₀ : xᵢ > α} (or more generally the sphere intersected with a half space). When d is large we can estimate d(E) by using the fact that for x chosen randomly on S₀ the distribution of a single coordinate xᵢ is approximately N(0, 1/d). This illustrates the counter-intuitive property of high dimensional spheres: for large d, a set of measure, say, 0.01% concentrated near a pole will extend all the way to within O(1/√d) of the equator, and a randomly chosen x ∼ p(x) will with high probability lie close to the equator. Theorem 5.1 gives an optimal trade-off between the amount of generalization error and the average distance to the nearest error. We can examine how the error sets of actual trained neural networks compare with this optimal bound. We do this comparison in Figure 5. We train three different architectures on the sphere dataset when d = 500: the first is a "small" ReLU network with 1000 hidden units and 2 hidden layers (ReLU-h1000-d2), the second is the quadratic network with 1000 hidden units (Quad-h1000), and the third is a "large" ReLU network with 2000 hidden units and depth 8 (ReLU-h2000-d8). We train these networks with a varying number of samples from the data distribution, in particular N ∈ {1000, 5000, 10000, 100000, ∞}. We then sample the performance of the networks several times during training, computing both the error rate of the model and the average distance to the nearest error. The error rate is estimated from 100000 random samples from the data distribution and the average distance is estimated from 100 random points, running PGD for 1000 steps with step size 0.001 (searching only on the data manifold for errors).
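Before looking at the comparison itself, here is a back-of-the-envelope sketch of the "optimal" curve implied by the cap argument above: for a cap of measure µ(E) = 1 − p, the boundary coordinate sits at roughly Φ⁻¹(p)/√d, which also approximates the average distance from a random point to the cap. Constants are approximate and the helper name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def cap_distance(error_rate, d):
    """Approximate average distance to the nearest error when the error set is
    a cap {x : x_i > alpha} of measure `error_rate` on the unit sphere.

    A single coordinate of a uniform point on the sphere is roughly
    N(0, 1/d), so a cap of measure mu starts at alpha ~ Phi^{-1}(1 - mu) / sqrt(d),
    and a typical point (which lies near the equator) is about that far from it.
    """
    return norm.ppf(1.0 - error_rate) / np.sqrt(d)

# A model with error rate ~1e-11 on 500-dimensional spheres: even in the best
# case the nearest error is only ~0.3 away on average, versus ~1.41 between
# two random points on the sphere.
print(cap_distance(1e-11, d=500))
```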
Figure 5: We compare the average distance to the nearest error with the error rate for 3 networks trained on the sphere dataset. All errors are reported for the inner sphere. The 3 networks are trained with 5 different training set sizes, and their performance is measured at various points during training (the networks eventually become too accurate to appear on this graph, as the error rate will be too small to estimate statistically). Amazingly, we observe that the trade-off between the amount of error and the average distance to the nearest error closely tracks what is optimal, as would be observed if all the errors were concentrated near a pole of the sphere.

Each point on the plot is a network at a certain snapshot during training (when N ≥ 10000, the networks later in training become so accurate that the error rate cannot be estimated statistically from 100000 samples; these points do not appear in the graph). Surprisingly, we see that the trade-off between the average distance to the nearest error and the amount of error is close to what would be optimal if all the errors were concentrated near a "cap" (as represented by a black line). Note that there is some noise in estimating the error rates and average distances (for example, PGD is not guaranteed to find the closest error), so some networks when sampled appear slightly better than optimal. This plot suggests that the decision boundaries of these networks are all well behaved given the amount of test error observed. For the quadratic network, the error set on the inner sphere is of the form E = {x ∈ Rᵈ : ||x||₂ = 1, Σᵢ αᵢ xᵢ² > 1}. Geometrically this is the area of the sphere which is outside the ellipsoid decision boundary. If αᵢ > 1 for 2 or more i, then E is a connected region. This gives some intuition as to why the quadratic network might approach the optimal tradeoff between d(E) and µ(E). Perhaps more interesting is that both the small and large ReLU networks have similar tradeoffs between d(E) and µ(E), even though they have much more complicated architectures. The large ReLU network, for example, has over 29 million parameters. Despite the complexity, the error region of this large network demonstrates a similar tradeoff as the quadratic network. In this work we attempted to gain insight into the existence of adversarial examples for image models by studying a simpler synthetic dataset. After training different neural network architectures on this dataset we observe a similar phenomenon to that of image models: most random points in the data distribution are both correctly classified and close to a misclassified point. We then explained this phenomenon for this particular dataset by proving a theoretical tradeoff between the error rate of a model and the average distance to the nearest error, independently of the model. We also observed that several different neural network architectures closely match this theoretical bound. Theorem 5.1 is significant because it reduces the question of why models are vulnerable to adversarial examples to the question of why there is a small amount of classification error. It is unclear if anything like Theorem 5.1 would hold for an image manifold, and future work should investigate if a similar principle applies. Our work suggests that even a small amount of classification error may sometimes logically force the existence of many adversarial examples.
This could explain why fixing the adversarial example problem has been so difficult despite substantial research interest. For example, one recent work uses adversarial training to increase robustness in the L∞ metric (BID18). Although this did increase the size, ε, of the perturbation needed to reliably produce an error, local errors still remain for ε larger than those adversarially trained for (BID21). Several defenses against adversarial examples have been proposed recently which are motivated by the assumption that adversarial examples are off the data manifold (BID2a; BID16). Our results challenge whether this assumption holds in general: as shown in Section 3, there are local errors both on and off the data manifold. Our results raise many questions as to whether it is possible to completely solve the adversarial example problem without reducing test error to 0. The test error rate of state-of-the-art image models is non-zero; this implies that a constant fraction of the data manifold is misclassified, and the test error is an unbiased estimate of µ(E). Perhaps this alone is an indication that local adversarial errors exist. The concentric spheres dataset is an extremely simple problem which is unlikely to capture all of the complexities of the geometry of a natural image manifold. Thus we cannot reach the same conclusions about the nature of adversarial examples for real-world datasets. However, we hope that the insights gained from this very simple case will point the way forward to exploring how complex real-world datasets lead to adversarial examples.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyUkxxZ0b
We hypothesize that the vulnerability of image models to small adversarial perturbation is a naturally occurring result of the high dimensional geometry of the data manifold. We explore and theoretically prove this hypothesis for a simple synthetic dataset.
It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies. However, one limitation of this formulation is the difficulty to generalize beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks. Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space. In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation. To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework. We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase. Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback. Unsupervised learning has played a major role in the recent progress of deep learning. Some of the earliest work of the present deep learning era posited unsupervised pre-training as a method for overcoming optimization difficulties inherent in contemporary supervised deep neural networks . Since then, modern deep neural networks have enabled a renaissance in generative models, with neural decoders allowing for the training of large scale, highly expressive families of directed models (; Van den) as well as enabling powerful amortized variational inference over latent variables . We have repeatedly seen how representations from unsupervised learning can be leveraged to dramatically improve sample efficiency in a variety of supervised learning domains . In the reinforcement learning (RL) setting, the coupling between behavior, state visitation, and the algorithmic processes that give rise to behavior complicate the development of "unsupervised" methods. The generation of behaviors by means other than seeking to maximize an extrinsic reward has long been studied under the psychological auspice of intrinsic motivation (; ;), often with the goal of improved exploration (Şimşek and ; ;). However, while exploration is classically concerned with the discovery of rewarding states, the acquisition of useful state representations and behavioral skills can also be cast as an unsupervised (i.e. extrinsically unrewarded) learning problem for agents interacting with an environment. In the traditional supervised learning setting, popular classification benchmarks have been employed (with labels removed) as unsupervised representation learning benchmarks, wherein the acquired representations are evaluated based on their usefulness for some downstream task (most commonly the original classification task with only a fraction of the labels reinstated). Analogously, we propose removing the rewards from an RL benchmark environment for unsupervised pre-training of an agent, with their subsequent reinstatement testing for dataefficient adaptation. This setup emulates scenarios where unstructured interaction with the environment, or a closely related environment, is relatively inexpensive to acquire and the agent is expected to perform one or more tasks defined in this environment in the form of rewards. 
The current state-of-the-art for RL with unsupervised pre-training comes from a class of algorithms which, independent of reward, maximize the mutual information between latent variable policies and their behavior in terms of state visitation, an objective which we refer to as behavioral mutual information (; ; ;). These objectives yield policies which exhibit a great deal of diversity in behavior, with variational intrinsic control (, VIC) and diversity is all you need (, DIAYN) even providing a natural formalism for adapting to the downstream RL problem. However, both methods suffer from poor generalization and a slow inference process when the reward signal is introduced. The fundamental problem faced by these methods is the requirement to effectively interpolate between points in the latent behavior space, as the most task-appropriate latent skill likely lies "between" those learnt during the unsupervised period. The construction of conditional policies which efficiently and effectively generalize to latent codes not encountered during training is an open problem for such methods. Our main contribution is to address this generalization and slow inference problem by making use of another recent advance in RL, successor features . Successor features (SF) enable fast transfer learning between tasks that differ only in their reward function, which is assumed to be linear in some features. Prior to this work, the automatic construction of these reward function features was an open research problem. We show that, despite being previously cast as learning a policy space, behavioral mutual information (BMI) maximization provides a compelling solution to this feature learning problem. Specifically, we show that the BMI objective can be adapted to learn precisely the features required by SF. Together, these methods give rise to an algorithm, Variational Intrinsic Successor FeatuRes (VISR), which significantly improves performance in the RL with unsupervised pre-training scenario. In order to illustrate the efficacy of the proposed method, we augment the popular 57-game Atari suite with such an unsupervised phase. The use of this well-understood collection of tasks allows us to position our contribution more clearly against the current literature. VISR achieves human-level performance on 12 games and outperforms all baselines, which includes algorithms that operate in three regimes: strictly unsupervised, supervised with limited data, and both. As usual, we assume that the interaction between agent and environment can be modeled as a Markov decision process . An MDP is defined as a tuple M ≡ (S, A, p, r, γ) where S and A are the state and action spaces, p(·|s, a) gives the nextstate distribution upon taking action a in state s, and γ ∈ is a discount factor that gives smaller weights to future rewards. The function r: S × A × S → R specifies the reward received at transition s a − → s; more generally, we call any signal defined as c: S × A × S → R a cumulant . As previously noted, we consider the scenario where the interaction of the agent with the environment can be split into two stages: an initial unsupervised phase in which the agent does not observe any rewards, and the usual reinforcement learning phase in which rewards are observable. During the reinforcement learning phase the goal of the agent is to find a policy π: S → A that maximizes the expected return G t = ∞ i=0 γ i R t+i, where R t = r(S t, A t, S t+1). 
A principled way to address this problem is to use methods derived from dynamic programming, which heavily rely on the concept of a value function. The action-value function of a policy π is defined as Q^π(s, a) ≡ E^π[G_t | S_t = s, A_t = a], the expected return obtained by taking action a in state s and following π thereafter. Based on Q^π we can compute a greedy policy π′(s) ∈ argmax_a Q^π(s, a); π′ is guaranteed to do at least as well as π, that is, Q^{π′}(s, a) ≥ Q^π(s, a) for all (s, a). The computation of Q^π and π′ are called policy evaluation and policy improvement, respectively; under certain conditions their successive application leads to the optimal value function Q*, from which one can derive an optimal policy by acting greedily. The alternation between policy evaluation and policy improvement is at the core of many RL algorithms, which usually carry out these steps only approximately. Clearly, if we replace the reward r(s, a, s′) with an arbitrary cumulant c(s, a, s′) all the above still holds. In this case we will use Q^π_c to refer to the value of π under cumulant c, and the associated optimal policies will be referred to as π_c, where π_c(s) is the greedy policy on Q*_c(s, a). Usually it is assumed, either explicitly or implicitly, that during learning there is a cost associated with each transition in the environment, and therefore the agent must learn a policy as quickly as possible. Here we consider that such a cost is only significant in the reinforcement learning phase, and therefore during the unsupervised phase the agent is essentially free to interact with the environment as much as desired. The goal in this stage is to collect information about the environment to speed up the reinforcement learning phase as much as possible. In what follows we will make this definition more precise. Following Barreto et al. (2017), we assume that there exist features φ(s, a, s′) ∈ R^d such that the reward function which specifies a task of interest can be written as r(s, a, s′) = φ(s, a, s′)ᵀ w, where w ∈ R^d are weights that specify how desirable each feature component is, or a 'task vector' for short. Note that, unless we constrain φ somehow, this assumption is not restrictive in any way: for example, by making φ_i(s, a, s′) = r(s, a, s′) for some i we can clearly recover the rewards exactly. Barreto et al. note that this decomposition allows one to write the value of a policy π as Q^π(s, a) = E^π[Σ_{i=0}^∞ γ^i φ_{t+i} | S_t = s, A_t = a]ᵀ w ≡ ψ^π(s, a)ᵀ w, where φ_t = φ(S_t, A_t, S_{t+1}) and ψ^π(s, a) are the successor features (SFs) of π. SFs can be seen as multidimensional value functions in which φ(s, a, s′) play the role of rewards, and as such they can be computed using standard RL algorithms (Szepesvári, 2010). One of the benefits provided by SFs is the possibility of quickly evaluating a policy π. Suppose that during the unsupervised learning phase we have computed ψ^π; then, during the supervised phase, we can find a w ∈ R^d by solving a regression problem based on the reward decomposition above and then compute Q^π through the SF decomposition. Once we have Q^π, we can apply policy improvement to derive a policy π′ that will likely outperform π. Since π was computed without access to the reward, it is not deliberately trying to maximize it. Thus, the solution π′ relies on a single step of policy improvement over a policy that is agnostic to the rewards. It turns out that we can do better than that by extending the strategy above to multiple policies. Let e: (S → A) → R^k be a policy-encoding mapping, that is, a function that turns policies π into vectors in R^k. Borsa et al.'s universal successor features (USFs) are defined as ψ(s, a, e(π)) ≡ ψ^π(s, a). Note that, using USFs, we can evaluate any policy π by simply computing Q^π(s, a) = ψ(s, a, e(π))ᵀ w. Now that we can compute Q^π for any π, we should be able to leverage this information to improve our previous solution based on a single policy.
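A minimal sketch of this single-policy fast-evaluation step, assuming the features φ and the successor features ψ are already available as arrays; the names are illustrative, not the actual implementation.

```python
import numpy as np

def infer_task_vector(phis, rewards):
    """Least-squares fit of w in r ≈ phi(s, a, s')ᵀ w from a few rewarded
    transitions observed at the start of the RL phase.
    phis: [n_transitions, d] feature matrix; rewards: [n_transitions]."""
    w, *_ = np.linalg.lstsq(phis, rewards, rcond=None)
    return w

def evaluate_policy(psi, w):
    """Given successor features psi of shape [n_states, n_actions, d],
    return Q(s, a) = psi(s, a)ᵀ w for every state-action pair."""
    return psi @ w

def improved_policy(psi, w):
    """One step of policy improvement: act greedily with respect to Q."""
    return evaluate_policy(psi, w).argmax(axis=-1)
```

Extending this evaluation from a single policy to several at once is the subject of the next paragraph.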
This is possible through generalized policy improvement (, GPI). Let ψ be USFs, let π 1, π 2,..., π n be arbitrary policies, and let It can be shown that is a strict generalization of, in the sense that Q π (s, a) ≥ Q πi (s, a) for all π i, s, and a. This can be extended to the case in which holds only approximately and ψ is replaced by a universal successor feature approximator (USFA) ψ θ ≈ ψ(s, a) . The above suggests an approach to leveraging unsupervised pre-training for more dataefficient reinforcement learning. First, during the unsupervised phase, the agent learns a USFA ψ θ. Then, the rewards observed at the early stages of the RL phase are used to find an approximate solution w for. Finally, n policies π i are generated and a policy π is derived through. If the approximations used in this process are reasonably accurate, π will be an improvement over π 1, π 2,.., π n. However, in order to actually implement the strategy above we have to answer two fundamental questions: (i) Where do the features φ in come from? (ii) How do we define the policies π i used in? It turns out that these questions allow for complementary answers, as we discuss next. Features φ should be defined in such a way that the down-stream task reward is likely to be a simple function of them (see). Since in the RL with unsupervised pre-training regime the task reward is not available during the long unsupervised phase, this amounts to utilizing a strong inductive bias that is likely to yield features relevant to the rewards of any'reasonable' task. One such bias is to only represent the subset of observation space that the agent can control . This can be accomplished by maximizing the mutual information between a policy conditioning variable and the agent's behavior. There exist many algorithms that maximize this quantity through various means and for various definitions of'behavior' . The objective F(θ) is to find policy parameters θ that maximize the mutual information (I) between some policy-conditioning variable, z, and some function f of the trajectory τ induced by the conditioned policy, where H is the entropy of some variable: While in general z will be a function of the state , it is common to assume that z is drawn from a fixed (or at least state-independent) distribution for the purposes of stability . This simplifies the objective to minimizing the conditional entropy of the conditioning variable given the trajectory. When the trajectory is sufficiently long, this corresponds to sampling from the steady state distribution induced by the policy. Commonly f is assumed to return the final state, but for simplicity we will consider that f samples a single state s uniformly over τ π θ. This intractable conditional distribution can be lower-bounded by a variational approximation (q) which produces the loss function used in practice (see Section 8.1 for a derivation based on) The variational parameters can be optimized by minimizing the negative log likelihood of samples from the true conditional distribution, i.e., q is a discriminator trying to predict the correct z from behavior. However, it is not obvious how to optimize the policy parameters θ, as they only affect the loss through the non-differentiable environment. The appropriate intrinsic reward function can be derived (see Section 8.2 for details) through application of the REINFORCE trick, which in log q(z|s) serving this role. Figure 1: VISR model diagram. 
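A small sketch of the intrinsic reward and discriminator objective derived above, for the common case of a categorical conditioning variable z; the discriminator is assumed to map a batch of states to logits over n_skills possible codes, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def intrinsic_reward(discriminator, states, z):
    """Behavioral-mutual-information reward r_t = log q(z | s_t) for states
    visited by the policy conditioned on skill index z."""
    log_q = F.log_softmax(discriminator(states), dim=-1)   # [T, n_skills]
    return log_q[:, z]                                     # [T]

def discriminator_loss(discriminator, states, z):
    """q is trained by maximum likelihood: predict the conditioning variable
    from states visited under the corresponding conditioned policy."""
    target = torch.full((len(states),), z, dtype=torch.long)
    return F.cross_entropy(discriminator(states), target)
```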
In practice w t is also fed into ψ as an input, which also allows for GPI to be used (see Algorithm 1 in Appendix). For the random feature baseline, the discriminator q is frozen after initialization, but the same objective is used to train ψ. Traditionally, the desired product of this optimization was the conditional policy (π). While the discriminator q could be used for imitating demonstrated behaviors (i.e. by inferring the most likely z for a given τ), for down-stream RL it was typically discarded in favor of explicit search over all possible z . In the next section we discuss an alternative approach to leverage the behaviors learned during the unsupervised phase. The primary motivation behind our proposed approach is to combine the rapid task inference mechanism provided by SFs with the ability of BMI methods to learn many diverse behaviors in an unsupervised way. We begin by observing that both approaches use vectors to parameterize tasks. In the SF formulation tasks correspond to linear weightings w of features φ(s). The reward for a task given by w is r SF (s; w) = φ(s) T w. BMI objectives, on the other hand, define tasks using conditioning vectors z, with the reward for task z given by r BM I (s; z) = log q(z|s). We propose restricting conditioning vectors z to correspond to task-vectors w of the SFs formulation. The restriction that z ≡ w, in turn, requires that r SF (s; w) = r BM I (s; w), which implies that the BMI discriminator q must have the form log q(w|s) = φ(s) T w. One way to satisfy this requirement is by restricting the task vectors w and features φ(s) to be unit length and paremeterizing the discriminator q as the Von Mises-Fisher distribution with a scale parameter of 1. Note that this form of discriminator differs from the standard choice of parameterizing q as a multivariate Gaussian, which does not satisfy equation 10. With this variational family for the discriminator, all that is left to complete the base algorithm is to factorize the conditional policy into the policy-conditional successor features (ψ) and the task vector (w). This is straightforward as any conditional policy can be represented by a UVFA , and any UVFA can be represented by a USFA given an appropriate feature basis, such as the one we have just derived. Figure 1 shows the ing model. Training proceeds as in other algorithms maximizing BMI: by randomly sampling a task vector w and then trying to infer it from the state produced by the conditioned policy (in our case w is sampled from a uniform distribution over the unit circle). The key difference is that in VISR the structure of the conditional policy (equation 5) enforces the task/dynamics factorization as in SF (equations 2 and 4), which in turn reduces task inference to a regression problem derived from equation 2. Now that SFs have been given a feature-learning mechanism, we can return to the second question raised at the end of Section 3: how can we obtain a diverse set of policies over which to apply GPI? Recall that we are training a USFA ψ(s, a, e(π)) whose encoding function is e(π) = w (that is, π is the policy that tries to maximize the reward in for a particular value of w). So, the question of which policies to use with GPI comes down to the selection of a set of vectors w. One natural w candidate is the solution for a regression problem derived from. Let us call this solution w base, that is, φ(s, a, s) w base ≈ r(s, a, s). But what should the other task vectors w's be? 
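Before turning to that question, here is a brief sketch of the reward parameterization just introduced: with φ(s) and w constrained to unit length and q a Von Mises-Fisher distribution with scale 1, log q(w | s) reduces (up to a constant) to φ(s)ᵀw, which serves both as the intrinsic reward and as the discriminator's training signal. Network and helper names are illustrative.

```python
import torch
import torch.nn.functional as F

def visr_reward(phi_net, states, w):
    """VISR intrinsic reward r(s; w) = phi(s)ᵀ w with unit-length phi and w."""
    phi = F.normalize(phi_net(states), dim=-1)   # [T, d], unit length
    w = F.normalize(w, dim=-1)                   # [d], unit length
    return phi @ w                               # [T]

def feature_loss(phi_net, states, w):
    """Training phi maximizes log q(w | s) on states visited by the policy
    conditioned on w, i.e. it minimizes the negative inner product."""
    return -visr_reward(phi_net, states, w).mean()
```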
Given that task vectors are sampled from a uniform distribution over the unit circle during training, there is no single subset that has any privileged status. So, following, we sample additional w's on the basis of similarity to w base. Since the discriminator q enforces similarity on the basis of probability under a Von Mises-Fisher distribution, these additional w's are sampled from such a distribution centered on w base, with the concentration parameter κ acting as a hyper-parameter specifying how diverse the additional w's should be. Calculating the improved policy is thus done as follows: Our experiments are divided in four groups corresponding to Sections 6.1 to 6.4. First, we assess how well VISR does in the RL setup with an unsupervised pre-training phase described in Section 2. Since this setup is unique in the literature on the Atari Suite, for the full two-phase process we only compare to ablations on the full VISR model and a variant of DIAYN adapted for these tasks (Table 1, bottom section). In order to frame performance relative to prior work, in Section 6.2 we also compare to for algorithms that operate in a purely unsupervised manner (Table 1, top section). Next, in Section 6.3, we contrast VISR's performance to that of standard RL algorithms in a low data regime (Table 1, middle section). Finally, we assess how well the proposed approach of inferring the task through the solution of a regression derived from does as compared to random search. To evaluate VISR, we impose a two-phase setup on the full suite of 57 Atari games . Agents are allowed a long unsupervised training phase (250M steps) without access to rewards, followed by a short test phase with rewards (100k steps). The full VISR algorithm includes features learned through the BMI objective and GPI to improve the execution of policies during both the training and test phases (see Algorithm 1 in the Appendix). The main baseline model, RF VISR, removes the BMI objective, instead learning SFs over features given by a random convolutional network (the same architecture as the φ network in the full model). The remaining ablations remove GPI from each of these models. The ablation shown in Table 1 (bottom) confirm that these components of VISR play complementary roles in the overall functioning of our model (also see Figure 2a). In addition, DIAYN has been adapted for the Atari domain, using the same training and testing conditions, base RL algorithm, and network architecture as VISR . With the standard 50-dimensional categorical z, performance was worse than random. While decreasing the dimensionality to 5 (matching that of VISR) improved this, it was still significantly weaker than even the ablated versions of VISR. Table 1: Atari Suite comparisons. @N represents the amount of RL interaction utilized. M dn is median, M is mean, > 0 is the number of games with better than random performance, and > H is the number of games with human-level performance as defined in. Top: unsupervised learning only (Sec. 6.2). Mid: data-limited RL (Sec. 6.3). Bottom: RL with unsupervised pre-training (Sec. 6.1). Standard deviations given in Table 2 (Appendix). Comparing against fully unsupervised approaches, our main external baseline is the Intrinsic Curiosity Module . This uses forward model prediction error in some feature-space to produce an intrinsic reward signal. Two variants have been evaluated on a 47 game subset of the Atari suite . 
One uses random features as the basis of their forward model (RF Curiosity), and the other uses features learned via an inverse-dynamics model (IDF Curiosity). It is important to note that, in addition to the extrinsic rewards, these methods did not use the terminal signals provided by the environment, whereas all other methods reported here do use them. The reason for not using the terminal signal was to avoid the possibility of the intrinsic reward reducing to a simple "do not die" signal. To rule this out, an explicit "do not die" baseline was run (Pos Reward NSQ), wherein the terminal signal remains and a small constant reward is given at every time-step. Finally, the full VISR model was run purely unsupervised. In practice this means not performing the fast-adaptation step (i.e. reward regression), instead switching between random w vectors every 40 time-steps (as is done during the training phase). Results shown in Table 1 (top and bottom) make it clear that while VISR is not a particularly outstanding in the unsupervised regime, when allowed 100k steps of RL it can vastly outperform these existing unsupervised methods on all criteria. Comparisons to reinforcement learning algorithms in the low-data regime are largely based on similar analysis by on the 26 easiest games in the Atari suite (as judged by above random performance for their algorithm). In that work the authors introduce a model-based agent (SimPLe) and show that it compares favorably to standard RL algorithms when data is limited. Three canonical RL algorithms are compared against: proximal policy optimization (PPO) , Rainbow , and DQN. For each, the from the lowest data regime reported in the literature are used. In addition, we also compare to a version of N-step Q-learning (NSQ) that uses the same codebase and base network architecture as VISR. Results shown in Table 1 (middle) indicate that VISR is highly competitive with the other RL methods. Note that, while these methods are actually solving the full RL problem, VISR's performance is based exclusively on the solution of a linear regression problem (equation 2). Obviously, this solution can be used to "warm start" an agent which can then refine its policy using any RL algorithm. We expect this version of VISR to have even better performance. In the previous , it was assumed that solving the linear reward-regression problem is the best way to infer the appropriate task vector. suggest a simpler approach: exhaustive search. As there are no guarantees that extrinsic rewards will be linear in the learned features (φ), it is not obvious which approach is best in practice. We hypothesize that exploiting the reward-regression task inference mechanism provided by VISR should yield more efficient inference than random search. To show this, 50 episodes (or 100k steps, whichever comes first) are rolled out using a trained VISR, each conditioned on a task vector chosen uniformly on a 5-dimensional sphere. From these initial episodes, one can either pick the task vector corresponding to the trajectory with the highest return (random search), or combine the data across all episodes and solve the linear regression problem. In each condition the VISR policy given by the inferred task vector is executed for 30 episodes and the average returns compared. As shown in Figure 2b, linear regression substantially improves performance despite using data generated specifically to aid in random search. 
The mean performance across all 57 games was 109.16 for reward-regression, compared to random search at 63.57. Even more dramatically, the median score for reward-regression was 8.99 compared to random search at 3.45. Overall, VISR outperformed the random search alternative on 41 of the 57 games, with one tie, using the exact same data for task inference. This corroborates the main hypothesis of this paper, namely, that endowing features derived from BMI with the fast task-inference provided by SFs gives rise to a powerful method able to quickly learn competent policies when exposed to a reward signal. Our suggest that VISR is the first algorithm to achieve notable performance on the full Atari task suite in a setting of few-step RL with unsupervised pre-training, outperforming all baselines and buying performance equivalent to hundreds of millions of interaction steps compared to DQN on some games (Figure 2c). As a suggestion for future investigations, the somewhat underwhelming for the fully unsupervised version of VISR suggest that there is much room for improvement. While curiosity-based methods are transient (i.e., asymptotically their intrinsic reward vanishes) and lack a fast adaptation mechanism, they do seem to encourage exploratory behavior slightly more than VISR. A possible direction for future work would be to use a curiosity-based intrinsic reward inside of VISR, to encourage it to better explore the space of controllable policies. Another interesting avenue for future investigation would be to combine the approach recently proposed by to enforce the policies computed by VISR to be not only distinguishable but also far apart in a given metric space. By using SFs on features that maximize BMI, we proposed an approach, VISR, that solves two open questions in the literature: how to compute features for the former and how to infer tasks in the latter. Beyond the concrete method proposed here, we believe bridging the gap between BMI and SFs is an insightful contribution that may inspire other useful methods. For convenience, we can refer to maximizing F(θ) as minimizing the loss function for parameters θ = (θ π, θ q), where θ π and θ q refer to the parameters of the policy π and variational approximation q, respectively. We can minimize L θ with respect to θ q, the parameters of q, using back-propagation. However, properly adjusting the parameters of π, θ π, is more difficult, as we lack a differentiable model of the environment. We now show that we can still derive an appropriate score function estimator using the log-likelihood or REINFORCE trick . Since in this section we will be talking about θ π only (that is, we will not discuss θ q), we will drop the subscript and refer to the parameters of π as simply θ. Let τ be a length T trajectory sampled under policy π, and let p θ be the probability of the trajectory τ under the combination of the policy and environment transition probabilities. We can compute the gradient of p θ with respect to θ as: This means that we can adjust p θ to make τ more likely under it. 
If we interpret p θ as the distribution induced by the policy π, then minimizing corresponds to maximizing the following value function: We can then use the policy gradient theorem to calculate the gradient of our loss function with respect to the parameters of the policy, θ, for trajectories τ beginning in state s, Since standard policy gradient (with rewards r t = log q(z|s t)) can be expressed as: Figure 3: VISR features φ learned by a variational distribution q(w|s) in a 10-by-10 gridworld. we can conclude that log q(z | s) serves the role of the reward function and treat it as such for arbitrary reinforcement learning algorithms (n-step Q-learning is used throughout this paper). The complexity and black-box nature of the Atari task suite make any significant analisis of the representations learned by VISR difficult (apart from their indirect effect on fastinference). Thus, in order to analyze the representation learned by VISR we have conducted a much smaller-scale experiment on a standard 10-by-10 grid-world. Here VISR still uses the full 5-sphere for its space of tasks, but it is trained with a much smaller network architecture for both the successor features ψ and variational approximation φ (both consist of 2 fullyconnected layers of 100 units with ReLU non-linearities, the latter L2-normalized so as to make mean predictions on the 5-sphere). We train this model for longer than necessary (960,000 trajectories of length 40 for 38,400,000 total steps) so as to best capture what representations might look like at convergence. Figure 3 shows each of the 5 dimension of φ across all states of the grid-world. It should be noted that, since these states were observed as one-hot vectors, all of the structure present is the of the mutual information training objective rather than any correlations in the input space. Figure 4 shows 49 randomly sampled reward functions, generated by sampling a w vector uniformly on the 5-sphere and taking the inner product with φ. This demonstrates that the space of φ contains many different partitionings of the state-space, which lends credence to our claim that externally defined reward functions are likely to be not far outside of this space, and thus fast-inference can yield substantial benefits. Figure 5 shows the 49 value functions corresponding to the reward function sampled in Figure 4. These value functions were computed via generalized policy improvement over the policies from 10 uniformly sampled w's. The clear correspondance between these value functions and their respective reward functions demonstrate that even though VISR is tasked with learning an infinite space of value functions, it does not significantly suffer from underfitting. These value functions can be thought of as the desired cumulative state-occupancies, and appear to represent distinct regions of the state space. A distributed reinforcement learning setup was utilized to accelerate experimentation as per. This involved having 100 separate actors, each running on its own instance of the environment. After every roll-out of 40 steps, the experiences are added to a queue. This queue is used by the centralized learner to calculate all of the losses and change the weights of the network, which are then passed back to the actors. The roll-out length implicitly determines other hyper-parameters out of convenience, namely the amount of backpropagation through time is done before truncation , as the sequential structure of the data is lost outside of the roll-out window. 
The task vector w is also resampled every 40 steps for similar reasons.

Figure 5: The approximations to the optimal value functions for the reward functions in Figure 4, computed by VISR through GPI on 10 randomly sampled policies.

All results (modulo some reported from other papers) are the average of 3 random seeds per game per condition. Due to the high computational cost of the controlled fast-inference experiments, for the experiments comparing the effect of training steps on fast-inference performance (e.g. Figure 6), an online evaluation scheme was utilized. Rather than actually performing no-reward reinforcement learning as 2 distinct phases, reward information was exposed to 5 of the 100 actors, which used the task vector resulting from solving the reward regression via OLS. This regression was continuously solved using the most recent 100,000 experiences from these actors.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJeAHkrYDS
We introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide fast task inference through the successor features framework.
Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn strong supervised models like convolutional neural networks. However, these models trained on one data domain may not generalize well to other domains unequipped with annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. To this end, we propose to learn discriminative feature representations of patches based on label histograms in the source domain, through the construction of a disentangled space. With such representations as guidance, we then use an adversarial learning scheme to push the feature representations in target patches to the closer distributions in source ones. In addition, we show that our framework can integrate a global alignment process with the proposed patch-level alignment and achieve state-of-the-art performance on semantic segmentation. Extensive ablation studies and experiments are conducted on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios. Recent deep learning-based methods have made significant progress on vision tasks, such as object recognition BID17 and semantic segmentation BID19, relying on large-scale annotations to supervise the learning process. However, for a test domain different from the annotated training data, learned models usually do not generalize well. In such cases, domain adaptation methods have been developed to close the gap between a source domain with annotations and a target domain without labels. Along this line of research, numerous methods have been developed for image classification BID29 BID8, but despite recent works on domain adaptation for pixel-level prediction tasks such as semantic segmentation BID14, there still remains significant room for improvement. Yet domain adaptation is a crucial need for pixel-level predictions, as the cost to annotate ground truth is prohibitively expensive. For instance, road-scene images in different cities may have various appearance distributions, while conditions even within the same city may vary significantly over time or weather. Existing state-of-the-art methods use feature-level BID14 or output space adaptation BID31 to align the distributions between the source and target domains using adversarial learning BID11 BID37. These approaches usually exploit the global distribution alignment, such as spatial layout, but such global statistics may already differ significantly between two domains due to differences in camera pose or field of view. Figure 1 illustrates one example, where two images share a similar layout, but the corresponding grids do not match well. Such misalignment may introduce an incorrect bias during adaptation. Instead, we consider to match patches that are more likely to be shared across domains regardless of where they are located. One way to utilize patch-level information is to align their distributions through adversarial learning. However, this is not straightforward since patches may have high variation among each other and there is no guidance for the model to know which patch distributions are close. 
Motivated by recent advances in learning disentangled representations BID18 BID24, we adopt a similar approach by considering label histograms of patches as a factor and learn discriminative Figure 1: Illustration of the proposed patch-level alignment against the global alignment that considers the spatial relationship between grids. We first learn discriminative representations for source patches (solid symbols) and push a target representation (unfilled symbol) close to the distribution of source ones, regardless of where these patches are located in the image.representations for patches to relax the high-variation problem among them. Then, we use the learned representations as a bridge to better align patches between source and target domains. Specifically, we utilize two adversarial modules to align both the global and patch-level distributions between two domains, where the global one is based on the output space adaptation BID31, and the patch-based one is achieved through the proposed alignment by learning discriminative representations. To guide the learning process, we first use the pixel-level annotations provided in the source domain and extract the label histogram as a patch-level representation. We then apply K-means clustering to group extracted patch representations into K clusters, whose cluster assignments are then used as the ground truth to train a classifier shared across two domains for transferring a learned discriminative representation of patches from the source to the target domain. Ideally, given the patches in the target domain, they would be classified into one of K categories. However, since there is a domain gap, we further use an adversarial loss to push the feature representations of target patches close to the distribution of the source patches in this clustered space (see Figure 1). Note that our representation learning can be viewed as a kind of disentanglement guided by the label histogram, but is different from existing methods that use pre-defined factors such as object pose BID18.In experiments, we follow the domain adaptation setting in BID14 and perform pixellevel road-scene image segmentation. We conduct experiments under various settings, including the synthetic-to-real, i.e., GTA5 BID27 )/SYNTHIA BID28 to Cityscapes BID5 ) and cross-city, i.e., Cityscapes to Oxford RobotCar BID23 scenarios. In addition, we provide extensive ablation studies to validate each component in the proposed framework. By combining global and patch-level alignments, we show that our approach performs favorably against state-of-the-art methods in terms of accuracy and visual quality. We note that the proposed framework is general and could be applicable to other forms of structured outputs such as depth, which will be studied in our future work. The contributions of this work are as follows. First, we propose a domain adaptation framework for structured output prediction by utilizing global and patch-level adversarial learning modules. Second, we develop a method to learn discriminative representations guided by the label histogram of patches via clustering and show that these representations help the patch-level alignment. Third, we demonstrate that the proposed adaptation method performs favorably against various baselines and state-of-the-art methods on semantic segmentation. Within the context of this work, we discuss the domain adaptation methods, including image classification and pixel-level prediction tasks. 
In addition, algorithms that are relevant to learning disentangled representations are discussed in this section. Domain Adaptation. Domain adaptation approaches have been developed for the image classification task via aligning the feature distributions between the source and target domains. Conventional methods use hand-crafted features BID10 BID7 to minimize the discrep-ancy across domains, while recent algorithms utilize deep architectures BID8 BID32 to learn domain-invariant features. One common practice is to adopt the adversarial learning scheme BID9 and minimize the Maximum Mean Discrepancy BID20. A number of variants have been developed via designing different classifiers BID21 and loss functions BID33. In addition, other recent work aims to enhance feature representations by pixel-level transfer BID1 and domain separation BID0.Compared to the image classification task, domain adaptation for structured pixel-level predictions has not been widely studied. BID14 first introduce to tackle the domain adaptation problem on semantic segmentation for road-scene images, e.g., synthetic-to-real images. Similar to the image classification case, they propose to use adversarial networks and align global feature representations across two domains. In addition, a category-specific prior is extracted from the source domain and is transferred to the target distribution as a constraint. However, these priors, e.g., object size and class distribution, may be already inconsistent between two domains. Instead of designing such constraints, the CDA method BID36 applies the SVM classifier to capture label distributions on superpixels as the property to train the adapted model on the target domain. Similarly, as proposed in BID4, a class-wise domain adversarial alignment is performed by assigning pseudo labels to the target data. Moreover, an object prior is extracted from Google Street View to help alignment for static objects. The above-mentioned domain adaptation methods on structured output all use a global distribution alignment and some class-specific priors to match statistics between two domains. However, such class-level alignment does not preserve the structured information like the patches. In contrast, we propose to learn discriminative representations for patches and use these learned representations to help patch-level alignment. Moreover, our framework does not require additional priors/annotations and the entire network can be trained in an end-to-end fashion. Compared to the recently proposed output space adaptation method BID31 ) that also enables end-to-end training, our algorithm focuses on learning patch-level representations that aid the alignment process. Learning Disentangled Representation. Learning a latent disentangled space has led to a better understanding for numerous tasks such as facial recognition BID26, image generation BID3 BID24, and view synthesis BID18 BID35. These approaches use pre-defined factors to learn interpretable representations of the image. BID18 propose to learn graphic codes that are disentangled with respect to various image transformations, e.g., pose and lighting, for rendering 3D images. Similarly, BID35 synthesize 3D objects from a single image via an encoder-decoder architecture that learns latent representations based on the rotation factor. Recently, AC-GAN BID24 ) develops a generative adversarial network (GAN) with an auxiliary classifier conditioned on the given factors such as image labels and attributes. 
Although these methods present promising on using the specified factors and learning a disentangled space to help the target task, they focus on handling the data in a single domain. Motivated by this line of research, we propose to learn discriminative representations for patches to help the domain adaptation task. To this end, we take advantages of the available label distributions and naturally utilize them as a disentangled factor, in which our framework does not require to pre-define any factors like conventional methods. In this section, we describe our proposed domain adaptation framework for predicting structured outputs, our adversarial learning scheme to align distributions across domains, and the use of discriminative representations for patches to help the alignment. Given the source and target images I s, I t ∈ R H×W ×3 and the source labels Y s, our goal is to align the predicted output distribution O t of the target data with the source distribution O s. As shown in FIG0 (a), we use a loss function for supervised learning on the source data to predict the structured output, and an adversarial loss is adopted to align the global distribution. Based on this baseline model, we further incorporate a classification loss in a clustered space to learn patch-level discriminative representations F s from the source output distribution O s, shown in FIG0 (b). For target data, we employ another adversarial loss to align the patch-level distributions between F s and F t, where the goal is to push F t to be closer to the distribution of F s.Objective Function. As described in FIG0 (b), we formulate the adaptation task as composed of the following loss functions: DISPLAYFORM0 where L s and L d are supervised loss functions for learning the structured prediction and the discriminative representation on source data, respectively, while Γ denotes the clustering process on the ground truth label distribution. To align the target distribution, we utilize global and patch-level adversarial loss functions, which are denoted as L g adv and L l adv, respectively. Here, λ's are the weights for different loss functions. The following sections describe details of the baseline model and the proposed framework. Figure 3 shows the main components and loss functions of our method. We first adopt a baseline model that consists of a supervised cross-entropy loss L s and an output space adaptation module using L g adv for global alignment as shown in FIG0 (a). The loss L s can be optimized by a fully-convolutional network G that predicts the structured output with the loss summed over the spatial map indexed with h, w and the number of categories C: DISPLAYFORM0 where O s = G(I s) ∈ is the predicted output distribution through the softmax function and is up-sampled to the size of the input image. Here, we will use the same h and w as the index for all the formulations. For the adversarial loss L g adv, we follow the practice of GAN training by optimizing G and a discriminator D g that performs the binary classification to distinguish whether the output prediction is from the source image or the target one. DISPLAYFORM1 Then we optimize the following min-max problem for G and D g, with inputs to the functions dropped for simplicity: min DISPLAYFORM2 3.3 PATCH-LEVEL ALIGNMENT WITH DISCRIMINATIVE REPRESENTATIONS Figure 1 shows that we may find transferable structured output representations shared across source and target images from smaller patches rather than from the entire image or larger grids. 
Based on this observation, we propose to perform a patch-level domain alignment. Specifically, rather than naively aligning the distributions of all patches between two domains, we first perform clustering Figure 3: The proposed network architecture that consists of a generator G and a categorization module H for learning discriminative patch representations. In the clustered space, solid symbols denote source representations and unfilled ones are target representations pulled to the source distribution.on patches from the source-domain examples using ground truth segmentation labels to construct a set of prototypical patch patterns. Then, we let patches from the target domain adapt to this disentangled (clustered) space of source patch representations by guiding them to select the closest cluster regardless of the spatial location via adversarial objective. In the following, we describe details of the proposed patch-level alignment. Learning Discriminative Representations. In order to learn a disentangled space, class labels BID30 or pre-defined factors BID24 are usually provided as supervisory signals. However, it is non-trivial to assign some sort of class membership to individual patches of an image. One may apply unsupervised clustering of image patches using pixel representations, but it is unclear whether the constructed clustering would separate patches in a semantically meaningful way. In this work, we take advantage of already available per-pixel annotations in the source domain to construct semantically disentangled space of patch representations. To achieve this, we use label histograms for patches as the disentangled factor. We first randomly sample patches from source images, use a 2-by-2 grid on patches to extract spatial label histograms, and concatenate them into a vector, where each histogram is a 2 · 2 · C dimensional vector. Second, we apply K-means clustering on these histograms, whereby the label for any patch can be assigned as the cluster center with the closest distance on the histogram. To incorporate this clustered space during training the network G on source data, we add a classification module H after the predicted output O s, to simulate the procedure of constructing the label histogram and learn a discriminative representation. We denote the learned representation as F s = H(G(I s)) ∈ U ×V ×K through the softmax function, where K is the number of clusters. Here, each data point on the spatial map F s corresponds to a patch of the input image, and we obtain the group label Γ(Y s) for each patch accordingly. Then the learning process to construct the clustered space can be formulated as a cross-entropy loss: DISPLAYFORM3 Patch-level Adversarial Alignment. The ensuing task is to align the representations of target patches to the clustered space constructed in the source domain. To this end, we utilize another adversarial loss between F s and F t, where F t is generated in the same way as described above. Our goal is to align patches regardless of where they are located in the image, that is, without the spatial and neighborhood supports. Thus, we reshape F by concatenating the K-dimensional vectors along the spatial map, which in U · V independent data points. We note that a similar effect can be achieved by using a convolution layer with a proper stride and kernel size. 
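The construction of the clustered patch space described above can be sketched with numpy and scikit-learn: 2-by-2 spatial label histograms are extracted from randomly sampled source patches and grouped with K-means (K = 50 as in the experiments). The patch size and the number of sampled patches are illustrative assumptions, and labels outside [0, C), such as an ignore label, are simply dropped from the histograms here.

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_histogram(label_patch, num_classes):
    """Concatenate class histograms over a 2x2 grid into a 2*2*C vector."""
    h, w = label_patch.shape
    cells = [label_patch[:h // 2, :w // 2], label_patch[:h // 2, w // 2:],
             label_patch[h // 2:, :w // 2], label_patch[h // 2:, w // 2:]]
    hists = []
    for c in cells:
        valid = c[c < num_classes]                    # drop ignore labels
        hists.append(np.bincount(valid, minlength=num_classes) / max(valid.size, 1))
    return np.concatenate(hists)

def build_clustered_space(label_maps, patch=64, num_classes=19, K=50, n=10000, seed=0):
    """Sample source patches, extract histograms, and fit K-means on them."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n):
        y = label_maps[rng.integers(len(label_maps))]
        i = rng.integers(y.shape[0] - patch)
        j = rng.integers(y.shape[1] - patch)
        feats.append(patch_histogram(y[i:i + patch, j:j + patch], num_classes))
    return KMeans(n_clusters=K, n_init=10).fit(np.stack(feats))
```

The cluster returned by `.predict()` for a patch's histogram plays the role of Γ(Y_s) in the classification loss above.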
We denote this reshaped data asF and formulate the adversarial objective: DISPLAYFORM4 where D l is the discriminator to classify whether the feature representationF is from the source or the target domain. Finally, we integrate and into the min-max problem in: DISPLAYFORM5 3.4 NETWORK OPTIMIZATION Following the standard procedure for training a GAN BID11, we alternate the optimization between three steps: 1) update the discriminator D g, 2) update the discriminator D l, and 3) update the network G and H while fixing the discriminators. Update the Discriminator D g. We train the discriminator D g to distinguish between the source output distribution (labeled as 1) and the target distribution (labeled as 0). The maximization problem on D g in FORMULA6 is equivalent to minimizing the binary cross-entropy loss: DISPLAYFORM6 Update the Discriminator D l. Similarly, we train the discriminator D l to classify whether the feature representationF is from the source or the target domain: DISPLAYFORM7 Update the Network G and H. The goal of this step is to push the target distribution closer to the source distribution using the optimized D g and D l, while maintaining good performance on the main tasks using G and H. As a , the minimization problem in FORMULA6 is the combination of two supervised loss functions, namely, FORMULA1 and FORMULA4, with two adversarial loss functions, where the adversarial ones can be expressed as binary cross-entropy loss functions that assign the source label to the target distribution: DISPLAYFORM8 We note that updating H would also update G through back-propagation, and thus the feature representations are enhanced in G. In addition, we only require G during the testing phase, so that runtime is unaffected compared to the baseline approach. Discriminator. For the discriminator D g using a spatial map O as the input, we adopt an architecture similar to but use fully-convolutional layers. It contains 5 convolution layers with kernel size 4 × 4, stride 2 and channel numbers {64, 128, 256, 512, 1}. In addition, a leaky ReLU activation BID22 ) is added after each convolution layer, except the last layer. For the discriminator D l, input data is a K-dimensional vector and we utilize 3 fully-connected layers similar to BID33, with leaky ReLU activation and channel numbers {256, 512, 1}. Generator. The generator consists of the network G with a categorization module H. For a fair comparison, we follow the framework used in BID31 ) that adopts DeepLab-v2 BID2 with the ResNet-101 architecture BID13 pre-trained on ImageNet BID6 ) as our baseline network G. To add the module H on the output prediction O, we first use an adaptive average pooling layer to generate a spatial map, where each data point on the map has a desired receptive field corresponding to the size of extracted patches. Then this pooled map is fed into two convolution layers and a feature map F is produced with the channel number K. Figure 3 illustrates the main components of the proposed architecture. Implementation Details. We implement the proposed framework using the PyTorch toolbox on a single Titan X GPU with 12 GB memory. To train the discriminators, we use the Adam optimizer BID16 with initial learning rate of 10 −4 and momentums set as 0.9 and 0.99. For learning the generator, we use the Stochastic Gradient Descent (SGD) solver where the momentum is 0.9, the weight decay is 5 × 10 −4 and the initial learning rate is 2.5 × 10 −4. 
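The two discriminators described above can be sketched directly from their stated configurations; the leaky-ReLU slope and the convolution padding are assumptions not specified in the text.

```python
import torch.nn as nn

def make_Dg(num_classes):
    """Fully-convolutional output-space discriminator: 4x4 convs, stride 2,
    channels {64, 128, 256, 512, 1}, leaky ReLU after all but the last layer."""
    chans = [num_classes, 64, 128, 256, 512, 1]
    layers = []
    for i in range(5):
        layers.append(nn.Conv2d(chans[i], chans[i + 1], kernel_size=4, stride=2, padding=1))
        if i < 4:
            layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def make_Dl(K):
    """Patch-level discriminator on K-dimensional vectors: 3 fully-connected
    layers with channels {256, 512, 1} and leaky ReLU activations."""
    return nn.Sequential(
        nn.Linear(K, 256), nn.LeakyReLU(0.2, inplace=True),
        nn.Linear(256, 512), nn.LeakyReLU(0.2, inplace=True),
        nn.Linear(512, 1),
    )
```

For the Cityscapes setting these would be instantiated as, for example, make_Dg(19) and make_Dl(50).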
For all Table 1: Ablation study on GTA5-to-Cityscapes using the ResNet-101 network. We also show the corresponding loss functions used in each setting. the networks, we decrease the learning rates using the polynomial decay with a power of 0.9, as described in BID2. During training, we use λ d = 0.01, λ g adv = λ l adv = 0.0005 and K = 50 for all the experiments. Note that we first train the model only using the loss L s for 10K iterations to avoid initially noisy predictions and then train the network using all the loss functions for 100K iterations. More details of the hyper-parameters such as image and patch sizes are provided in the appendix. DISPLAYFORM0 We evaluate the proposed framework for domain adaptation on semantic segmentation. We first conduct an extensive ablation study to validate each component in the algorithm on the GTA5-toCityscapes (synthetic-to-real) scenario. Second, we show that our method performs favorably against state-of-the-art approaches on numerous benchmark datasets and settings. We evaluate our domain adaptation method on semantic segmentation under various settings, including synthetic-to-real and cross-city scenarios. First, we adapt the synthetic GTA5 BID27 dataset to the Cityscapes BID5 dataset that contains real road-scene images. Similarly, we use the SYNTHIA BID28 dataset with a larger domain gap compared to Cityscapes images. For the above experiments, we follow BID14 to split the training and test sets. To overcome the realistic case when two domains are in different cities under various weather conditions, we adapt Cityscapes with sunny images to the Oxford RobotCar BID23 dataset that contains rainy scenes. We manually select 10 sequences in the Oxford RobotCar dataset annotated with the rainy tag, in which we randomly split them into 7 sequences for training and 3 for testing. We sequentially sample 895 images as training images and annotate 271 images with per-pixel semantic segmentation ground truth as the test set for evaluation. The annotated ground truth will be made publicly available. For all the experiments, intersection-over-union (IoU) ratio is used as the metric to evaluate different methods. In Table 1, we conduct an ablation study on the GTA5-to-Cityscapes scenario to understand the impact of different loss functions and design choices in the proposed framework. Loss Functions. In the first row of Table 1, we show different steps of the proposed method, including disentanglement, global alignment, and patch-level alignment. Interestingly, we find that adding disentanglement without any alignments (L s + L d) also improves the performance (from 36.6% to 38.8%), which demonstrates that the learned feature representation enhances the discrimination and generalization ability. Finally, as shown in the last of the second row, our method that combines both the global and patch-level alignments achieve the highest IoU as 43.2%.Impact on L d and L l adv. In the first two of the second row, we conduct experiments to validate the effectiveness of our patch-level alignment. We show that both losses, L d and L l adv, are necessary to assist this alignment process. Removing either of them will in performance loss, i.e., 1.9% and 1.5% lower than our final . The reason behind this is that, L d is to construct a clustered space so that L l adv can then effectively perform patch-level alignment in this space. Without ReshapedF. 
In the module H that transforms the output distribution to the clustered space, the features are reshaped as independent data pointsF to remove the spatial relationship and are then used as the input to the discriminator D l. To validate the usefulness, we show that without the reshaping process, the performance drops 2.4% in IoU. This matches our assumption that patches with similar representations should be aligned regardless of their locations. Visualization of Feature Representations. In FIG1, we show the t-SNE visualization BID34 of the patch-level features in the clustered space of our method and compare with the one without patch-level adaptation. The shows that with adaptation in the clustered space, the features are embedded into groups and the source/target representations overlap to each other well. Example patch visualizations are provided in the appendix. In this section, we compare the proposed method with state-of-the-art algorithms under various scenarios, including synthetic-to-real and cross-city cases. Synthetic-to-real Case. We first present experimental for adapting GTA5 to Cityscapes in TAB1. The methods in the upper group adopt the VGG-16 architecture as the base network and we show that our approach performs favorably against state-of-the-art adaptations via feature BID14 BID36, pixel-level BID15, and output space BID31 alignments. In the bottom group, we further utilize the stronger ResNet-101 base network and compare our with BID31 under two settings, i.e., feature and output space adaptations. We show that the proposed method improves the IoU with a gain of 1.8% and achieves the best IoU on 14 out of the 19 categories. In TAB2, we show for adapting SYNTHIA to Cityscapes and similar improvements are observed comparing with state-of-the-art methods. In addition, we shows visual comparisons in Figure 5 and more are presented in the appendix. Cross-city Case. Adapting between real images across different cities and conditions is an important scenario for practical applications. We choose a challenge case where the weather condition is different (i.e., sunny v.s rainy) in two cities by adapting Cityscapes to Oxford RobotCar. The proposed Target Image Ground Truth Before Adaptation Global Alignment Ours Figure 5: Example for GTA5-to-Cityscapes. Our method often generates the segmentation with more details (e.g., sidewalk and pole) while producing less noisy regions. BID31, we run the authors' released code and obtain a mean IoU of 63.6%, which is 1.4% lower than the proposed method. Further and comparisons are provided in the appendix. In this paper, we present a domain adaptation method for structured output via a general framework that combines global and patch-level alignments. The global alignment is achieved by the output space adaptation, while the patch-level one is performed via learning discriminative representations of patches across domains. To learn such patch-level representations, we propose to construct a clustered space of the source patches and adopt an adversarial learning scheme to push the target patch distributions closer to the source ones. We conduct extensive ablation study and experiments to validate the effectiveness of the proposed method under numerous challenges on semantic segmentation, including synthetic-to-real and cross-city scenarios, and show that our approach performs favorably against existing algorithms. 
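For reference, the per-class IoU used in all of the comparisons above can be computed from a confusion matrix accumulated over the test set; a minimal sketch follows (treating 255 as the ignore label is an assumption following common Cityscapes practice).

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_label=255):
    mask = gt != ignore_label
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def per_class_iou(conf):
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return inter / np.maximum(union, 1)   # IoU = TP / (TP + FP + FN); mean over classes gives mIoU
```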
To train the model in an end-to-end manner, we randomly sample one image from each of the source and target domain (i.e., batch size as 1) in a training iteration. Then we follow the optimization strategy as described in Section 3.4 of the paper. TAB3 shows the image and patch sizes during training and testing. Note that, the aspect ratio of the image is always maintained (i.e., no cropping) and then the image is down-sampled to the size as in the table. BID12 can be used as a loss in our model to push the target feature representation F t to one of the source clusters. To add this regularization, we replace the adversarial loss on the patch level with an entropy loss as in BID21 (u,v,k), H is the information entropy function, σ is the softmax function, and τ is the temperature of the softmax. The model with adding this entropy regularization achieves the IoU as 41.9%, that is lower than the proposed patchlevel adversarial alignment as 43.2%. The reason is that, different from the entropy minimization approach that does not use the source distribution as the guidance, our model learns discriminative representations for the target patches by pushing them closer to the source distribution in the clustered space guided by the label histogram. DISPLAYFORM0 In FIG3, we show example patches from the source and target domains corresponding to the t-SNE visualization. For each group in the clustered space via t-SNE, we show that source and target patches share high similarity between each other, which demonstrates the effectiveness of the proposed patch-level alignment. In TAB4, we present the complete for adapting Cityscapes (sunny condition) to Oxford RobotCar (rainy scene). We compare the proposed method with the model without adaptation and the output space adaptation approach BID31. More qualitative are provided in FIG4 and 8. We provide more visual comparisons for GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes scenarios from Figure 9 to Figure 11. In each row, we present the of the model without adaptation, output space adaptation BID31, and the proposed method. We show that our approach often yields better segmentation outputs with more details and produces less noisy regions. We sequentially show images in a video and their adapted segmentations generated by our method. Target Image Ground Truth Before Adaptation Global Alignment Ours Figure 9: Example of adapted segmentation for the GTA5-to-Cityscapes setting. For each target image, we show before adaptation, output space adaptation BID31, and the proposed method. Target Image Ground Truth Before Adaptation Global Alignment Ours Figure 10: Example of adapted segmentation for the GTA5-to-Cityscapes setting. For each target image, we show before adaptation, output space adaptation BID31, and the proposed method. Target Image Ground Truth Before Adaptation Global Alignment Ours Figure 11: Example of adapted segmentation for the SYNTHIA-to-Cityscapes setting. For each target image, we show before adaptation, output space adaptation BID31, and the proposed method.
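A sketch of the entropy regularizer discussed above as an alternative to the patch-level adversarial loss is given below. Because the formula is not reproduced cleanly in the text, the exact normalization and the temperature value are assumptions of this sketch.

```python
import torch

def entropy_loss(feat_t, tau=1.0, eps=1e-8):
    """feat_t: (N, K, U, V) pre-softmax patch scores for target images.
    Minimizing the mean per-location entropy pushes each target patch towards a
    single cluster, but without using the source distribution as guidance."""
    p = torch.softmax(feat_t / tau, dim=1)
    ent = -(p * torch.log(p + eps)).sum(dim=1)   # entropy over the K clusters
    return ent.mean()
```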
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1xFhiC9Y7
A domain adaptation method for structured output via learning patch-level discriminative feature representations
Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are applicable to real-world black-box models such as autonomous cars, need less knowledge and are easier to apply than transfer-based attacks and are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox). Figure 1: (Left) Taxonomy of adversarial attack methods. The Boundary Attack is applicable to realworld ML algorithms because it only needs access to the final decision of a model (e.g. class-label or transcribed sentence) and does not rely on model information like the gradient or the confidence scores. (Right) Application to the Clarifai Brand Recognition Model. Many high-performance machine learning algorithms used in computer vision, speech recognition and other areas are susceptible to minimal changes of their inputs BID26. As a concrete example, a modern deep neural network like VGG-19 trained on object recognition might perfectly recognize the main object in an image as a tiger cat, but if the pixel values are only slightly perturbed in a specific way then the prediction of the very same network is drastically altered (e.g. to bus). These so-called adversarial perturbations are ubiquitous in many machine learning models and are often imperceptible to humans. Algorithms that seek to find such adversarial perturbations are generally denoted as adversarial attacks. Adversarial perturbations have drawn interest from two different sides. On the one side, they are worrisome for the integrity and security of deployed machine learning algorithms such as autonomous cars or face recognition systems. Minimal perturbations on street signs (e.g. turning a stop-sign into a 200 km/h speed limit) or street lights (e.g. turning a red into a green light) can have severe consequences. 
On the other hand, adversarial perturbations provide an exciting spotlight on the gap between the sensory information processing in humans and machines and thus provide guidance towards more robust, human-like architectures. Adversarial attacks can be roughly divided into three categories: gradient-based, score-based and transfer-based attacks (cp. Figure 1). Gradient-based and score-based attacks are often denoted as white-box and oracle attacks respectively, but we try to be as explicit as possible as to what information is being used in each category 1. A severe problem affecting attacks in all of these categories is that they are surprisingly straight-forward to defend against:• Gradient-based attacks. Most existing attacks rely on detailed model information including the gradient of the loss w.r.t. the input. Examples are the Fast-Gradient Sign Method (FGSM), the Basic Iterative Method (BIM) BID11, DeepFool , the Jacobian-based Saliency Map Attack (JSMA) BID20, Houdini BID5 and the Carlini & Wagner attack BID2. Defence: A simple way to defend against gradient-based attacks is to mask the gradients, for example by adding non-differentiable elements either implicitly through means like defensive distillation BID21 or saturated non-linearities BID18, or explicitly through means like non-differentiable classifiers BID15 ).• Score-based attacks. A few attacks are more agnostic and only rely on the predicted scores (e.g. class probabilities or logits) of the model. On a conceptual level these attacks use the predictions to numerically estimate the gradient. This includes black-box variants of JSMA BID17 and of the Carlini & Wagner attack BID4 as well as generator networks that predict adversarials BID8. Defence: It is straight-forward to severely impede the numerical gradient estimate by adding stochastic elements like dropout into the model. Also, many robust training methods introduce a sharp-edged plateau around samples BID28 which not only masks gradients themselves but also their numerical estimate.• Transfer-based attacks. Transfer-based attacks do not rely on model information but need information about the training data. This data is used to train a fully observable substitute model from which adversarial perturbations can be synthesized BID22. They rely on the empirical observation that adversarial examples often transfer between models. If adversarial examples are created on an ensemble of substitute models the success rate on the attacked model can reach up to 100% in certain scenarios BID13. Defence: A recent defence method against transfer attacks BID28, which is based on robust training on a dataset augmented by adversarial examples from an ensemble of substitute models, has proven highly successful against basically all attacks in the 2017 Kaggle Competition on Adversarial Attacks 2.The fact that many attacks can be easily averted makes it often extremely difficult to assess whether a model is truly robust or whether the attacks are just too weak, which has lead to premature claims of robustness for DNNs BID3.This motivates us to focus on a category of adversarial attacks that has so far received fairly little attention:• Decision-based attacks. 
Direct attacks that solely rely on the final decision of the model (such as the top-1 class label or the transcribed sentence).The delineation of this category is justified for the following reasons: First, compared to score-based attacks decision-based attacks are much more relevant in real-world machine learning applications where confidence scores or logits are rarely accessible. At the same time decision-based attacks have the potential to be much more robust to standard defences like gradient masking, intrinsic stochasticity or robust training than attacks from the other categories. Finally, compared to transferbased attacks they need much less information about the model (neither architecture nor training data) and are much simpler to apply. There currently exists no effective decision-based attack that scales to natural datasets such as ImageNet and is applicable to deep neural networks (DNNs). The most relevant prior work is a variant of transfer attacks in which the training set needed to learn the substitute model is replaced by a synthetic dataset (b). This synthetic dataset is generated by the adversary alongside the training of the substitute; the labels for each synthetic sample are drawn from the black-box model. While this approach works well on datasets for which the intra-class variability is low (such as MNIST) it has yet to be shown that it scales to more complex natural datasets such as CIFAR or ImageNet. Other decision-based attacks are specific to linear or convex-inducing classifiers BID6 BID14 BID19 and are not applicable to other machine learning models. The work by BID0 basically stands between transfer attacks and decision-based attacks in that the substitute model is trained on a dataset for which the labels have been observed from the black-box model. This attack still requires knowledge about the data distribution on which the black-box models was trained on and so we don't consider it a pure decision-based attack. Finally, some naive attacks such as a line-search along a random direction away from the original sample can qualify as decision-based attacks but they induce large and very visible perturbations that are orders of magnitude larger than typical gradient-based, score-based or transfer-based attacks. Throughout the paper we focus on the threat scenario in which the adversary aims to change the decision of a model (either targeted or untargeted) for a particular input sample by inducing a minimal perturbation to the sample. The adversary can observe the final decision of the model for arbitrary inputs and it knows at least one perturbation, however large, for which the perturbed sample is adversarial. The contributions of this paper are as follows:• We emphasise decision-based attacks as an important category of adversarial attacks that are highly relevant for real-world applications and important to gauge model robustness.• We introduce the first effective decision-based attack that scales to complex machine learning models and natural datasets. 
The Boundary Attack is conceptually surprisingly simple, extremely flexible, requires little hyperparameter tuning and FORMULA6 is competitive with the best gradient-based attacks in both targeted and untargeted computer vision scenarios.• We show that the Boundary Attack is able to break previously suggested defence mechanisms like defensive distillation.• We demonstrate the practical applicability of the Boundary Attack on two black-box machine learning models for brand and celebrity recognition available on Clarifai.com. Throughout the paper we use the following notation: o refers to the original input (e.g. an image), y = F (o) refers to the full prediction of the model F (·) (e.g. logits or probabilities), y max is the predicted label (e.g. class-label). Similarly,õ refers to the adversarially perturbed image,õ k refers to the perturbed image at the k-th step of an attack algorithm. Vectors are denoted in bold. The basic intuition behind the boundary attack algorithm is depicted in Figure 2: the algorithm is initialized from a point that is already adversarial and then performs a random walk along the boundary between the adversarial and the non-adversarial region such that it stays in the adversarial region and the distance towards the target image is reduced. In other words we perform rejection sampling with a suitable proposal distribution P to find progressively smaller adversarial perturbations according to a given adversarial criterion c. The basic logic of the algorithm is described in Algorithm 1, each individual building block is detailed in the next subsections. DISPLAYFORM0 Algorithm 1: Minimal version of the Boundary Attack. The Boundary Attack needs to be initialized with a sample that is already adversarial 3. In an untargeted scenario we simply sample from a maximum entropy distribution given the valid domain of the input. In the computer vision applications below, where the input is constrained to a range of per pixel, we sample each pixel in the initial imageõ 0 from a uniform distribution U. We reject samples that are not adversarial. In a targeted scenario we start from any sample that is classified by the model as being from the target class. The efficiency of the algorithm crucially depends on the proposal distribution P, i.e. which random directions are explored in each step of the algorithm. The optimal proposal distribution will generally depend on the domain and / or model to be attacked, but for all vision-related problems tested here a very simple proposal distribution worked surprisingly well. The basic idea behind this proposal distribution is as follows: in the k-th step we want to draw perturbations η k from a maximum entropy distribution subject to the following constraints:1. The perturbed sample lies within the input domain, DISPLAYFORM0 2. The perturbation has a relative size of δ, DISPLAYFORM1 3. The perturbation reduces the distance of the perturbed image towards the original input by a relative amount, DISPLAYFORM2 Adjusting step-size of #1 50% of orthogonal perturbations should be within adversarial region Success rate of total perturbation should be higher then threshold (e.g. 25%).classified incorrectly (adversarial) Figure 2: (Left) In essence the Boundary Attack performs rejection sampling along the boundary between adversarial and non-adversarial images. (Center) In each step we draw a new random direction by (#1) drawing from an iid Gaussian and projecting on a sphere, and by (#2) making a small move towards the target image. 
(Right) The two step-sizes (orthogonal and towards the original input) are dynamically adjusted according to the local geometry of the boundary. In practice it is difficult to sample from this distribution, and so we resort to a simpler heuristic: first, we sample from an iid Gaussian distribution η k i ∼ N and then rescale and clip the sample such that FORMULA1 and FORMULA2 hold. In a second step we project η k onto a sphere around the original image o such DISPLAYFORM0 and FORMULA1 hold. We denote this as the orthogonal perturbation and use it later for hyperparameter tuning. In the last step we make a small movement towards the original image such that and hold. For high-dimensional inputs and small δ, the constraint will also hold approximately. A typical criterion by which an input is classified as adversarial is misclassification, i.e. whether the model assigns the perturbed input to some class different from the class label of the original input. Another common choice is targeted misclassification for which the perturbed input has to be classified in a given target class. Other choices include top-k misclassification (the top-k classes predicted for the perturbed input do not contain the original class label) or thresholds on certain confidence scores. Outside of computer vision many other choices exist such as criteria on the worderror rates. In comparison to most other attacks, the Boundary Attack is extremely flexible with regards to the adversarial criterion. It basically allows any criterion (including non-differentiable ones) as long as for that criterion an initial adversarial can be found (which is trivial in most cases). The Boundary Attack has only two relevant parameters: the length of the total perturbation δ and the length of the step towards the original input (see Fig. 2). We adjust both parameters dynamically according to the local geometry of the boundary. The adjustment is inspired by Trust Region methods. In essence, we first test whether the orthogonal perturbation is still adversarial. If this is true, then we make a small movement towards the target and test again. The orthogonal step tests whether the step-size is small enough so that we can treat the decision boundary between the adversarial and the non-adversarial region as being approximately linear. If this is the case, then we expect around 50% of the orthogonal perturbations to still be adversarial. If this ratio is much lower, we reduce the step-size δ, if it is close to 50% or higher we increase it. If the orthogonal perturbation is still adversarial we add a small step towards the original input. The maximum size of this step depends on the angle of the decision boundary in the local neighbourhood (see also Figure 2). If the success rate is too small we decrease, if it is too large we increase it. Typically, the closer we get to the original image, the flatter the decision boundary becomes and the smaller has to be to still make progress. The attack is converged whenever converges to zero. We quantify the performance of the Boundary Attack on three different standard datasets: MNIST , CIFAR-10 (BID10) and ImageNet-1000 BID7. To make the comparison with previous as easy and transparent as possible, we here use the same MNIST and CIFAR networks as BID2 4. In a nutshell, both the MNIST and CIFAR model feature nine layers with four convolutional layers, two max-pooling layers and two fully-connected layers. For all details, including training parameters, we refer the reader to BID2. 
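A compact numpy sketch of a single proposal-and-rejection step of the Boundary Attack, as outlined in Algorithm 1 and Figure 2, is given below. Here is_adversarial stands for any decision-based criterion (for example, top-1 misclassification) and only queries the final model decision; inputs are assumed to lie in [0, 1].

```python
import numpy as np

def boundary_attack_step(o, o_adv, is_adversarial, delta=0.1, eps=0.1, rng=None):
    """One step: orthogonal perturbation on the sphere around o, then a small
    move towards o.  Returns the (possibly unchanged) adversarial and a flag."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(o - o_adv)

    # 1) draw an iid Gaussian, rescale to relative size delta, stay in the input
    #    domain, and project onto the sphere of radius d around the original o
    eta = rng.standard_normal(o.shape)
    eta *= delta * d / np.linalg.norm(eta)
    cand = np.clip(o_adv + eta, 0.0, 1.0)
    cand = o - (o - cand) * d / np.linalg.norm(o - cand)
    cand = np.clip(cand, 0.0, 1.0)
    if not is_adversarial(cand):
        return o_adv, False                      # reject the orthogonal step

    # 2) small movement towards the original image, shrinking the distance by eps
    cand = np.clip(cand + eps * (o - cand), 0.0, 1.0)
    if is_adversarial(cand):
        return cand, True
    return o_adv, False
```

In the full attack this step is iterated, with delta and eps adjusted from the observed success rates as described above, until eps converges to zero.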
On ImageNet we use the pretrained networks VGG-19 , ResNet-50 BID9 and Inception-v3 BID27 provided by Keras 5.We evaluate the Boundary Attack in two settings: an untargeted setting in which the adversarial perturbation flips the label of the original sample to any other label, and a targeted setting in which the adversarial flips the label to a specific target class. In the untargeted setting we compare the Boundary Attack against three gradient-based attack algorithms:• Fast-Gradient Sign Method (FGSM). FGSM is among the simplest and most widely used untargeted adversarial attack methods. In a nutshell, FGSM computes the gradient g = ∇ o L(o, c) that maximizes the loss L for the true class-label c and then seeks the smallest for which o+ ·g is still adversarial. We use the implementation in Foolbox 0.10.0 BID24 ).• DeepFool. DeepFool is a simple yet very effective attack. In each iteration it computes for each class = 0 the minimum distance d(, 0) that it takes to reach the class boundary by approximating the model classifier with a linear classifier. It then makes a corresponding step in the direction of the class with the smallest distance. We use the implementation in Foolbox 0.10.0 BID24. DISPLAYFORM0 The attack by Carlini & Wagner BID2 ) is essentially a refined iterative gradient attack that uses the Adam optimizer, multiple starting points, a tanh-nonlinearity to respect box-constraints and a max-based adversarial constraint function. We use the original implementation provided by the authors with all hyperparameters left at their default values 4.To evaluate the success of each attack we use the following metric: let η A,M (o i) ∈ R N be the adversarial perturbation that the attack A finds on model M for the i-th sample o i. The total score S A for A is the median squared L2-distance across all samples, DISPLAYFORM1 For MNIST and CIFAR we evaluate 1000 randomly drawn samples from the validation set, for ImageNet we use 250 images. In the untargeted setting an adversarial is any image for which the predicted label is different from the label of the original image. We show adversarial samples synthesized by the Boundary Attack for each dataset in Figure 3. The score FORMULA6 Despite its simplicity the Boundary Attack is competitive with gradient-based attacks in terms of the minimal adversarial perturbations and very stable against the choice of the initial point (Figure 5). This finding is quite remarkable given that gradient-based attacks can fully observe the model whereas the Boundary Attack is severely restricted to the final class prediction. To compensate for this lack of information the Boundary Attack needs many more iterations to converge. As a rough measure for the run-time of an attack independent of the quality of its implementation we tracked the number of forward passes (predictions) and backward passes (gradients) through the network requested by each of the attacks to find an adversarial for ResNet-50: averaged over 20 samples and under the same conditions as before, DeepFool needs about 7 forward and 37 backward passes, the Carlini & Wagner attack requires 16.000 forward and the same number of backward passes, and the Boundary Attack uses 1.200.000 forward passes but zero backward passes. While that (unsurprisingly) makes the Boundary Attack more expensive to run it is important to note that the Boundary Attacks needs much fewer iterations if one is only interested in imperceptible perturbations, see figures 4 and 6. 
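For concreteness, a PyTorch sketch of the FGSM baseline described above follows; it uses the standard sign-of-gradient formulation, and the grid of candidate epsilon values is an illustrative assumption (the attack, as described, searches for the smallest epsilon that is still adversarial).

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilons=None):
    """x: (1, C, H, W) image in [0, 1]; label: (1,) true class index."""
    if epsilons is None:
        epsilons = torch.linspace(0.001, 0.3, 100)
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    g = x.grad.sign()                          # direction that increases the loss
    for eps in epsilons:
        x_adv = torch.clamp(x.detach() + eps * g, 0.0, 1.0)
        if model(x_adv).argmax(dim=1).item() != label.item():
            return x_adv                       # smallest eps on the grid that flips the label
    return None
```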
We can also apply the Boundary Attack in a targeted setting. In this case we initialize the attack from a sample of the target class that is correctly identified by the model. A sample trajectory from the starting point to the original sample is shown in FIG2. After around 10 4 calls to the model 4.4e-08 4.4e-08 4.5e-08 4.6e-08 4.8e-089.8e-08 9.9e-08 9.9e-08 1.0e-07 1.0e-07 As discussed in the introduction, many attack methods are straight-forward to defend against. One common nuisance is gradient masking in which a model is implicitely or explicitely modified to yield masked gradients. An interesting example is the saturated sigmoid network BID18 in which an additional regularization term leads the sigmoid activations to saturate, which in turn leads to vanishing gradients and failing gradient-based attacks.Another example is defensive distillation BID21. In a nutshell defensive distillation uses a temperature-augmented softmax of the type DISPLAYFORM0 and works as follows:1. Train a teacher network as usual but with temperature T.2. Train a distilled network-with the same architecture as the teacher-on the softmax outputs of the teacher. Both the distilled network and the teacher use temperature T.3. Evaluate the distilled network at temperature T = 1 at test time. Initial were promising: the success rate of gradient-based attacks dropped from close to 100% down to 0.5%. It later became clear that the distilled networks only appeared to be robust because they masked their gradients of the cross-entropy loss BID3: as the temperature of the softmax is decreased at test time, the input to the softmax increases by a factor of T and so the probabilities saturate at 0 and 1. This leads to vanishing gradients of the cross-entropy loss w.r.t. to the input on which gradient-based attacks rely. If the same attacks are instead applied to the logits the success rate recovers to almost 100% BID2.Decision-based attacks are immune to such defences. To demonstrate this we here apply the Boundary Attack to two distilled networks trained on MNIST and CIFAR. The architecture is the same as in section 3 and we use the implementation and training protocol by BID2 which is available at https://github.com/carlini/nn_robust_attacks. Most importantly, we do not operate on the logits but provide only the class label with maximum probability to the Boundary Attack. The size of the adversarial perturbations that the Boundary Attack finds is fairly similar for the distilled and the undistilled network. This demonstrates that defensive distillation does not significantly increase the robustness of network models and that the Boundary Attack is able to break defences based on gradient masking. In many real-world machine learning applications the attacker has no access to the architecture or the training data but can only observe the final decision. This is true for security systems (e.g. face identification), autonomous cars or speech recognition systems like Alexa or Cortana. In this section we apply the Boundary Attack to two models of the cloud-based computer vision API by Clarifai 6. The first model identifies brand names in natural images and recognizes over 500 brands. The second model identifies celebrities and can recognize over 10.000 individuals. Multiple identifications per image are possible but we only consider the one with the highest confidence score. It is important to note that Clarifai does provide confidence scores for each identified class (but not for all possible classes). 
However, in our experiments we do not provide this confidence score to the Boundary Attack. Instead, our attack only receives the name of the identified object (e.g. Pepsi or Verizon in the brand-name detection task).We selected several samples of natural images with clearly visible brand names or portraits of celebrities. We then make a square crop and resize the image to 100 × 100 pixels. For each sample we make sure that the brand or the celebrity is clearly visible and that the corresponding Clarifai We show five samples for each model alongside the adversarial image generated by the Boundary Attack in FIG3. We generally observed that the Clarifai models were more difficult to attack than ImageNet models like VGG-19: while for some samples we did succeed to find adversarial perturbations of the same order (1e −7) as in section 3 (e.g. for Shell or SAP), most adversarial perturbations were on the order of 1e −2 to 1e −3 ing in some slightly noticeable noise in some adversarial examples. Nonetheless, for most samples the original and the adversarial image are close to being perceptually indistinguishable. In this paper we emphasised the importance of a mostly neglected category of adversarial attacksdecision-based attacks-that can find adversarial examples in models for which only the final decision can be observed. We argue that this category is important for three reasons: first, attacks in this class are highly relevant for many real-world deployed machine learning systems like autonomous cars for which the internal decision making process is unobservable. Second, attacks in this class do not rely on substitute models that are trained on similar data as the model to be attacked, thus making real-world applications much more straight-forward. Third, attacks in this class have the potential to be much more robust against common deceptions like gradient masking, intrinsic stochasticity or robust training. We also introduced the first effective attack in this category that is applicable to general machine learning algorithms and complex natural datasets: the Boundary Attack. At its core the Boundary Attack follows the decision boundary between adversarial and non-adversarial samples using a very simple rejection sampling algorithm in conjunction with a simple proposal distribution and a dynamic step-size adjustment inspired by Trust Region methods. Its basic operating principlestarting from a large perturbation and successively reducing it-inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the Boundary attack is also extremely flexible in terms of the possible adversarial criteria and performs on par with gradient-based attacks on standard computer vision tasks in terms of the size of minimal perturbations. The mere fact that a simple constrained iid Gaussian distribution can serve as an effective proposal perturbation for each step of the Boundary attack is surprising and sheds light on the brittle information processing of current computer vision architectures. Nonetheless, there are many ways in which the Boundary attack can be made even more effective, in particular by learning a suitable proposal distribution for a given model or by conditioning the proposal distribution on the recent history of successful and unsuccessful proposals. Decision-based attacks will be highly relevant to assess the robustness of machine learning models and to highlight the security risks of closed-source machine learning systems like autonomous cars. 
We hope that the Boundary Attack will inspire future work in this area.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyZI0GWCZ
A novel adversarial attack that can directly attack real-world black-box machine learning models without transfer.
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset. Deep recurrent models like long short-term memory (LSTM) recurrent neural networks (RNNs) have become a standard building block in modern approaches to language modeling, with applications in speech recognition, input decoding for mobile keyboards, and language translation. Because language usage varies widely by problem domain and dataset, training a language model on data from the right distribution is critical. For example, a model to aid typing on a mobile keyboard is better served by training data typed in mobile apps rather than from scanned books or transcribed utterances. However, language data can be uniquely privacy sensitive. In the case of text typed on a mobile phone, this sensitive information might include passwords, text messages, and search queries. In general, language data may identify a speaker-explicitly by name or implicitly, for example via a rare or unique phrase-and link that speaker to secret or sensitive information. Ideally, a language model's parameters would encode patterns of language use common to many users without memorizing any individual user's unique input sequences. However, we know convolutional NNs can memorize arbitrary labelings of the training data BID22 and recurrent language models are also capable of memorizing unique patterns in the training data BID5. Recent attacks on neural networks such as those of BID19 underscore the implicit risk. The main goal of our work is to provide a strong guarantee that the trained model protects the privacy of individuals' data without undue sacrifice in model quality. We are motivated by the problem of training models for next-word prediction in a mobile keyboard, and use this as a running example. This problem is well suited to the techniques we introduce, as differential privacy may allow for training on data from the true distribution (actual mobile usage) rather than on proxy data from some other source that would produce inferior models. However, to facilitate reproducibility and comparison to non-private models, our experiments are conducted on a public dataset as is standard in differential privacy research. The remainder of this paper is structured around the following contributions:1. We apply differential privacy to model training using the notion of user-adjacent datasets, leading to formal guarantees of user-level privacy, rather than privacy for single examples.4. In extensive experiments in §3, we offer guidelines for parameter tuning when training complex models with differential privacy guarantees. 
We show that a small number of experiments can narrow the parameter space into a regime where we pay for privacy not in terms of a loss in utility but in terms of an increased computational cost. We now introduce a few preliminaries. Differential privacy (DP) BID10 BID8 BID9 ) provides a well-tested formalization for the release of information derived from private data. Applied to machine learning, a differentially private training mechanism allows the public release of model parameters with a strong guarantee: adversaries are severely limited in what they can learn about the original training data based on analyzing the parameters, even when they have access to arbitrary side information. Formally, it says: Definition 1. Differential Privacy: A randomized mechanism M: D → R with a domain D (e.g., possible training datasets) and range R (e.g., all possible trained models) satisfies (, δ)-differential privacy if for any two adjacent datasets d, d ∈ D and for any subset of outputs S ⊆ R it holds that DISPLAYFORM0 The definition above leaves open the definition of adjacent datasets which will depend on the application. Most prior work on differentially private machine learning (e.g. BID7 BID4 ; BID0 BID21 BID16) deals with example-level privacy: two datasets d and d are defined to be adjacent if d can be formed by adding or removing a single training example from d. We remark that while the recent PATE approach of BID16 can be adapted to give user-level privacy, it is not suited for a language model where the number of classes (possible output words) is large. For problems like language modeling, protecting individual examples is insufficient-each typed word makes its own contribution to the RNN's training objective, so one user may contribute many thousands of examples to the training data. A sensitive word or phrase may be typed several times by an individual user, but it should still be protected.2 In this work, we therefore apply the definition of differential privacy to protect whole user histories in the training set. This user-level privacy is ensured by using an appropriate adjacency relation:Definition 2. User-adjacent datasets: Let d and d be two datasets of training examples, where each example is associated with a user. Then, d and d are adjacent if d can be formed by adding or removing all of the examples associated with a single user from d. Model training that satisfies differential privacy with respect to datasets that are user-adjacent satisfies the intuitive notion of privacy we aim to protect for language modeling: the presence or absence of any specific user's data in the training set has an imperceptible impact on the (distribution over) the parameters of the learned model. It follows that an adversary looking at the trained model cannot infer whether any specific user's data was used in the training, irrespective of what auxiliary information they may have. In particular, differential privacy rules out memorization of sensitive information in a strong information theoretic sense. Our private algorithm relies heavily on two prior works: the FederatedAveraging (or FedAvg) algorithm of BID14, which trains deep networks on user-partitioned data, and the moments accountant of BID0, which provides tight composition guarantees for the repeated application of the Gaussian mechanism combined with amplification-via-sampling. While we have attempted to make the current work as self-contained as possible, the above references provide useful . 
FedAvg was introduced by BID14 for federated learning, where the goal is to train a shared model while leaving the training data on each user's mobile device. Instead, devices download the current model and compute an update by performing local computation on their dataset. It is worthwhile to perform extra computation on each user's data to minimize the number of communication rounds required to train a model, due to the significantly limited bandwidth when training data remains decentralized on mobile devices. We observe, however, that FedAvg is of interest even in the datacenter when DP is applied: larger updates are more resistant to noise, and fewer rounds of training can imply less privacy cost. Most importantly, the algorithm naturally forms peruser updates based on a single user's data, and these updates are then averaged to compute the final update applied to the shared model on each round. As we will see, this structure makes it possible to extend the algorithm to provide a user-level differential privacy guarantee. We also evaluate the FederatedSGD algorithm, essentially large-batch SGD where each minibatch is composed of "microbatches" that include data from a single distinct user. In some datacenter applications FedSGD might be preferable to FedAvg, since fast networks make it more practical to run more iterations. However, those additional iterations come at a privacy cost. Further, the privacy benefits of federated learning are nicely complementary to those of differential privacy, and FedAvg can be applied in the datacenter as well, so we focus on this algorithm while showing that our also extend to FedSGD.Both FedAvg and FedSGD are iterative procedures, and in both cases we make the following modifications to the non-private versions in order to achieve differential privacy: A) We use random-sized batches where we select users independently with probability q, rather than always selecting a fixed number of users. B) We enforce clipping of per-user updates so the total update has bounded L 2 norm. C) We use different estimators for the average update (introduced next). D) We add Gaussian noise to the final average update. The pseudocode for DP-FedAvg and DP-FedSGD is given as Algorithm 1. In the remainder of this section, we introduce estimators for C) and then different clipping strategies for B). Adding the sampling procedure from A) and noise added in D) allows us to apply the moments accountant to bound the total privacy loss of the algorithm, given in Theorem 1. Finally, we consider the properties of the moments accountant that make training on large datasets particular attractive. Bounded-sensitivity estimators for weighted average queries Randomly sampling users (or training examples) by selecting each independently with probability q is crucial for proving low privacy loss through the use of the moments accountant BID0. However, this procedure produces variable-sized samples C, and when the quantity to be estimated f (C) is an average rather than a sum (as in computing the weighted average update in FedAvg or the average loss on a minibatch in SGD with example-level DP), this has ramifications for the sensitivity of the query f. Specifically, we consider weighted databases d where each row k ∈ d is associated with a particular user, and has an associated weight w k ∈. This weight captures the desired influence of the, 1 for all users k W = k∈d w k for each round t = 0, 1, 2,... 
do C t ← (sample users with probability q) DISPLAYFORM0 Algorithm 1: The main loop for DP-FedAvg and DP-FedSGD, the only difference being in the user update function (UserUpdateFedAvg or UserUpdateFedSGD). The calls on the moments accountant M refer to the API of BID1. row on the final outcome. For example, we might think of row k containing n k different training examples all generated by user k, with weight w k proportional to n k. We are then interested in a bounded-sensitivity estimate of f (C) = k∈C w k ∆ k k∈C w k for per-user vectors ∆ k, for example to estimate the weighted-average user update in FedAvg. Let W = k w k. We consider two such estimators: DISPLAYFORM1 Notef f is an unbiased estimator, since E[k∈C w k] = qW. On the other hand,f c matches f exactly as long as we have sufficient weight in the sample. For privacy protection, we need to control the sensitivity of our query functionf, defined as S(f) = max C,k f (C ∪ {k}) −f (C) 2, where the added user k can have arbitrary data. The lower-bound qW min on the denominator off c is necessary to control sensitivity. Assuming each w k ∆ k has bounded norm, we have: Lemma 1. If for all users k we have w k ∆ k 2 ≤ S, then the sensitivity of the two estimators is bounded as DISPLAYFORM2 A proof is given in Appendix §A.Clipping strategies for multi-layer models Unfortunately, when the user vectors ∆ k are gradients (or sums of gradients) from a neural network, we will generally have no a priori bound 3 S such that ∆ k ≤ S. Thus, we will need to "clip" our updates to enforce such a bound before applying f f orf c. For a single vector ∆, we can apply a simple L 2 projection when necessary: and report the value of for which (, δ)-differential privacy holds after 1 to 10 6 rounds. For large datasets, additional rounds of training incur only a minimal additional privacy loss. However, for deep networks it is more natural to treat the parameters of each layer as a separate vector. The updates to each of these layers could have vastly different L 2 norms, and so it can be preferable to clip each layer separately. DISPLAYFORM3 Formally, suppose each update DISPLAYFORM4 We consider the following clipping strategies, both of which ensure the total update has norm at most S:1. Flat clipping Given an overall clipping parameter S, we clip the concatenation of all the layers as ∆ k = π(∆ k, S). 2. Per-layer clipping Given a per-layer clipping parameter S j for each layer, we set DISPLAYFORM5 j. The simplest model-independent choice is to take DISPLAYFORM6 for all j, which we use in experiments. We remark here that clipping itself leads to additional bias, and ideally, we would choose the clipping parameter to be large enough that nearly all updates are smaller than the clip value. On the other hand, a larger S will require more noise in order to achieve privacy, potentially slowing training. We treat S as a hyper-parameter and tune it. A privacy guarantee Once the sensitivity of the chosen estimator is bounded, we may add Gaussian noise scaled to this sensitivity to obtain a privacy guarantee. A simple approach is to use an (, δ)-DP bound for this Gaussian mechanism, and apply the privacy amplification lemma and the advanced composition theorem to get a bound on the total privacy cost. We instead use the Moments Accountant of BID0 to achieve much tighter privacy bounds. 
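Putting these pieces together, the following numpy sketch shows flat and per-layer clipping of a per-user update and one round of DP-FedAvg with the fixed-denominator estimator in the equally-weighted case (w_k = 1, W = K). The user_update routine, standing for local training on one user's data, and the hyperparameter values are assumptions of this sketch rather than the exact experimental settings.

```python
import numpy as np

def flat_clip(delta, S):
    """pi(delta, S): project the update onto the L2 ball of radius S."""
    n = np.linalg.norm(delta)
    return delta * min(1.0, S / n) if n > 0 else delta

def per_layer_clip(layers, S):
    """Clip each of the m layers to S_j = S / sqrt(m); the total norm is then <= S."""
    Sj = S / np.sqrt(len(layers))
    return [flat_clip(l, Sj) for l in layers]

def dp_fedavg_round(theta, users, user_update, q=1e-3, S=20.0, z=1.0, rng=None):
    rng = rng or np.random.default_rng()
    W = len(users)                                  # sum of weights, w_k = 1
    sampled = [u for u in users if rng.random() < q]
    total = np.zeros_like(theta)
    for u in sampled:
        total += flat_clip(user_update(theta, u), S)
    avg = total / (q * W)                           # fixed-denominator estimator f_f
    sensitivity = S / (q * W)                       # sensitivity of f_f (Lemma 1)
    noise = rng.normal(0.0, z * sensitivity, size=theta.shape)
    return theta + avg + noise
```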
The moments accountant for the sampled Gaussian mechanism upper bounds the total privacy cost of T steps of the Gaussian mechanism with noise N(0, σ²) for σ = z · S, where z is a parameter, S is the sensitivity of the query, and each row is selected with probability q. Given a δ > 0, the accountant gives an ε for which this mechanism satisfies (ε, δ)-DP. The following theorem is a slight generalization of the results in BID0; see §A for a proof sketch.

Theorem 1. For the estimators f̂_f and f̂_c, the moments accountant of the sampled Gaussian mechanism correctly computes the privacy loss with noise scale z = σ/S(f̂) and T steps, where the sensitivity bound is S(f̂_f) = S/(qW) for f̂_f and S(f̂_c) = 2S/(qW_min) for f̂_c.

Differential privacy for large datasets We use the implementation of the moments accountant from BID1. The moments accountant makes strong use of amplification via sampling, which means increasing dataset size makes achieving high levels of privacy significantly easier. Table 1 summarizes the privacy guarantees offered as we vary some of the key parameters. The takeaway from this table is that as long as we can afford the cost in utility of adding noise proportional to z times the sensitivity of the updates, we can get reasonable privacy guarantees over a large range of parameters. The size of the dataset has a modest impact on the privacy cost of a single query (the 1 round column), but a large effect on the number of queries that can be run without significantly increasing the privacy cost (compare the 10^6 round column). For example, on a dataset with 10 users, the privacy upper bound is nearly constant between 1 and 10^6 calls to the mechanism (that is, rounds of the optimization algorithm).

There is only a small cost in privacy for increasing the expected number of (equally weighted) users C̄ = qW = qK selected on each round, as long as C̄ remains a small fraction of the size of the total dataset. Since the sensitivity of an average query decreases like 1/C̄ (and hence the amount of noise we need to add decreases proportionally), we can increase C̄ until we arrive at a noise level that does not adversely affect the optimization process. We show empirically that such a level exists in the experiments.

In this section, we evaluate DP-FedAvg while training an LSTM RNN tuned for language modeling in a mobile keyboard. We vary noise, clipping, and the number of users per round to develop an intuition of how privacy affects model quality in practice. We defer our experimental results on FedSGD, as well as on models with larger dictionaries, to Appendix §D. To summarize, they show that FedAvg gives better privacy-utility trade-offs than FedSGD, and that our empirical results extend to larger dictionaries with relatively little need for additional parameter tuning despite the significantly larger models. Some less important plots are deferred to §C.

Model structure The goal of a language model is to predict the next word in a sequence s_t from the preceding words s_0 ... s_{t−1}. The neural language model architecture used here is a variant of the LSTM recurrent neural network BID13 trained to predict the next word (from a fixed dictionary) given the current word and a state vector passed from the previous time step. LSTM language models are competitive with traditional n-gram models BID20 and are a standard baseline for a variety of ever more advanced neural language model architectures BID12 BID15 BID11. Our model uses a few tricks to decrease the size for deployment on mobile devices (total size is 1.35M parameters), but is otherwise standard.
We evaluate using AccuracyTop1, the probability that the word to which the model assigns highest probability is correct. Details on the model and evaluation metrics are given in §B. All training began from a common random initialization, though for real-world applications pre-training on public data is likely preferable (see §B for additional discussion).Dataset We use a large public dataset of Reddit posts, as described by BID2. Critically for our purposes, each post in the database is keyed by an author, so we can group the data by these keys in order to provide user-level privacy. We preprocessed the dataset to K = 763, 430 users each with 1600 tokens. Thus, we take w k = 1 for all users, so W = K. We writeC = qK = qW for the expected number of users sampled per round. See §B for details on the dataset and preprocessing. To allow for frequent evaluation, we use a relatively small test set of 75122 tokens formed from random held-out posts. We evaluate accuracy every 20 rounds and plot metrics smoothed over 5 evaluations (100 rounds).Building towards DP: sampling, estimators, clipping, and noise Recall achieving differential privacy for FedAvg required a number of changes (§2, items A-D). In this section, we examine the impact of each of these changes, both to understand the immediate effects and to enable the selection of reasonable parameters for our final DP experiments. This sequence of experiments also provides a general road-map for applying differentially private training to new models and datasets. For these experiments, we use the FedAvg algorithm with a fixed learning rate of 6.0, which we verified was a reasonable choice in preliminary experiments. 4 In all FedAvg experiments, we used a local batch size of B = 8, an unroll size of 10 tokens, and made E = 1 passes over the local dataset; thus FedAvg processes 80 tokens per batch, processing a user's 1600 tokens in 20 batches per round. First, we investigate the impact of changing the estimator used for the average per-round update, as well as replacing a fixed sample of C = 100 users per round to a variable-sized sample formed by selecting each user with probability q = 100/763430 for an expectation ofC = 100 users. None of these changes significantly impacted the convergence rate of the algorithm (see Figure 5 in §C). In particular, the fixed denominator estimatorf f works just as well as the higher-sensitivity clipped-denominator estimatorf c. Thus, in the remaining experiments we focus on estimatorf f. Next, we investigate the impact of flat and per-layer clipping on the convergence rate of FedAvg. The model has 11 parameter vectors, and for per-layer clipping we simply chose to distribute the clipping budget equally across layers with S j = S/ √ 11. Figure 2 shows that choosing S ∈ has at most a small effect on convergence rate. Finally, Figure 3 shows the impact of various levels of per-coordinate Gaussian noise N (0, σ 2) added to the average update. Early in training, we see almost no loss in convergence for a noise of σ = 0.024; later in training noise has a larger effect, and we see a small decrease in convergence past σ = 0.012. These experiments, where we sample only an expected 100 users per round, are not sufficient to provide a meaningful privacy guarantee. We have S = 20.0 andC = qW = 100, so the sensitivity of estimatorf f is 20/100.0 = 0.2. Thus, to use the moments accountant with z = 1, we would need to add noise σ = 0.2 (dashed red vertical line), which destroys accuracy. 
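The per-round structure described above (select each user with probability q, then run E = 1 pass of local SGD in batches of B = 8 sequences before clipping the resulting delta) can be sketched as follows. The gradient function and parameter names are placeholders standing in for the actual model; this is not the authors' code.

import numpy as np

def sample_users(num_users, q, rng):
    # Variable-sized sample: each user is selected independently with probability q.
    return np.flatnonzero(rng.random(num_users) < q)

def user_update_fedavg(theta, user_batches, local_grad, lr=6.0, epochs=1, S=15.0):
    # user_batches: the user's 1600 tokens split into 20 batches of B = 8 sequences (80 tokens each).
    local = theta.copy()
    for _ in range(epochs):
        for batch in user_batches:
            local -= lr * local_grad(local, batch)
    delta = local - theta
    return delta * min(1.0, S / (np.linalg.norm(delta) + 1e-12))  # flat clipping of the per-user update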
Estimating the accuracy of private models for large datasets Continuing the above example, if instead we choose q soC = 1250, set the L 2 norm bound S = 15.0, then we have sensitivity Table 3: Count histograms recording how many of a model's (row's) top 10 predictions are found in the n = 10, 50, or 100 most frequent words in the corpus. Models that predict corpus top-n more frequently have more mass to the right.15/1250 = 0.012, and so we add noise σ = 0.012 and can apply the moments account with noise scale z = 1. The computation is now significantly more computationally expensive, but will give a guarantee of (1.97, 10 −9)-differential privacy after 3000 rounds of training. Because running such experiments is so computationally expensive, for experimental purposes it is useful to ask: does using an expected 1250 users per round produce a model with different accuracy than a model trained with only 100 expected users per round? If the answer is no, we can train a model with C = 100 and a particular noise level σ, and use that model to estimate the utility of a model trained with a much larger q (and hence a much better privacy guarantee). We can then run the moments accountant (without actually training) to numerically upper bound the privacy loss. To test this, we trained two models, both with S = 15 and σ = 0.012, one withC = 100 and one withC = 1250; recall the first model achieves a vacuous privacy guarantee, while the second achieves (1.97, 10 −9)-differential privacy after 3000 rounds. Figure 7 in §C shows the two models produce almost identical accuracy curves during training. Using this observation, we can use the accuracy of models trained withC = 100 to estimate the utility of private models trained with much largerC. See also Figure 6 in §C, which also shows diminishing returns for larger C for the standard FedAvg algorithm. FIG1 compares the true-average fixed-sample baseline model (see Figure 5 in §C) with models that use varying levels of clipping S and noise σ atC = 100. Using the above approach, we can use these experiments to estimate the utility of LSTMs trained with differential privacy for different sized datasets and different values ofC. TAB2 shows representative values settingC so that z = 1. For example, the model with σ = 0.003 and S = 15 is only worse than the baseline by an additive −0.13% in AccuracyTop1 and achieves (4.6, 10 −9)-differential privacy when trained with C = 5000 expected users per round. As a point of comparison, we have observed that training on a different corpus can cost an additive −2.50% in AccuracyTop1. Adjusting noise and clipping as training progresses FIG1 shows that as training progresses, each level of noise eventually becomes detrimental (the line drops somewhat below the baseline). This suggests using a smaller σ and correspondingly smaller S (thus fixing z so the privacy cost of each round is unchanged) as training progresses. FIG3 (and Figure 8 in §C) shows this can be effective. We indeed observe that early in training (red), S in the 10 -12.6 range works well (σ = 0.006 -0.0076). However, if we adjust the clipping/noise tradeoff after 4885 rounds of training and continue for another 6000, switching to S = 7.9 and σ = 0.0048 performs better. Comparing DP and non-DP models While noised training with DP-FedAvg has only a small effect on predictive accuracy, it could still have a large qualitative effect on predictions. 
We hy-pothesized that noising updates might bias the model away from rarer words (whose embeddings get less frequent actual updates and hence are potentially more influenced by noise) and toward the common "head" words. To evaluate this hypothesis, we computed predictions on a sample of the test set using a variety of models. At each s t we intersect the top 10 predictions with the most frequent 10, 50, 100 words in the dictionary. So for example, an intersection of size two in the top 50 means two of the model's top 10 predictions are in the 50 most common words in the dictionary. Table 3 gives histograms of these counts. We find that better models (higher AccuracyTop1) tend to use fewer head words, but see little difference from changingC or the noise σ (until, that is, enough noise has been added to compromise model quality, at which point the degraded model's bias toward the head matches models of similar quality with less noise). In this work, we introduced an algorithm for user-level differentially private training of large neural networks, in particular a complex sequence model for next-word prediction. We empirically evaluated the algorithm on a realistic dataset and demonstrated that such training is possible at a negligible loss in utility, instead paying a cost in additional computation. Such private training, combined with federated learning (which leaves the sensitive training data on device rather than centralizing it), shows the possibility of training models with significant privacy guarantees for important real world applications. Much future work remains, for example designing private algorithms that automate and make adaptive the tuning of the clipping/noise tradeoff, and the application to a wider range of model families and architectures, for example GRUs and character-level models. Our work also highlights the open direction of reducing the computational overhead of differentially private training of non-convex models. Proof of Lemma 1. For the first bound, observe the numerator in the estimatorf f can change by at most S between neighboring databases, by assumption. The denominator is a constant. For the second bound, the estimatorf c can be thought of as the sum of the vectors w k ∆ k divided by max(qW min, k∈C ∆ k). Writing Num(C) for the numerator k∈C w k ∆ k, and Den(C) for the denominator max(qW min, k∈C w k), the following are immediate for any C and C def = C ∪ {k}: DISPLAYFORM0 Here in the last step, we used the fact that f c (C) ≤ S. The claim follows. Proof of Theorem 1. It suffices to verify that 1. the moments (of the privacy loss) at each step are correctly bounded; and, 2. the composability holds when accumulating the moments of multiple steps. At each step, users are selected randomly with probability q. If in addition the L 2 -norm of each user's update is upper-bounded by S, then the moments can be upper-bounded by that of the sampled Gaussian mechanism with sensitivity 1, noise scale σ/S, and sampling probability q. Our algorithm, as described in FIG1, uses a fixed noise variance and generates the i.i.d. noise independent of the private data. Hence we can apply the composability as in Theorem 2.1 in BID0.We obtain the theorem by combining the above and the sensitivity boundsf f andf c. Model The first step in training a word-level recurrent language model is selecting the vocabulary of words to model, with remaining words mapped to a special "UNK" (unknown) token. 
Training a fully differentially private language model from scratch requires a private mechanism to discover which words are frequent across the corpus, for example using techniques like distributed heavyhitter estimation BID6 BID3. For this work, we simplified the problem by pre-selecting a dictionary of the most frequent 10,000 words (after normalization) in a large corpus of mixed material from the web and message boards (but not our training or test dataset).Our recurrent language model works as follows: word s t is mapped to an embedding vector e t ∈ R by looking up the word in the model's vocabulary. The e t is composed with the state emitted by the model in the previous time step s t−1 ∈ R 256 to emit a new state vector s t and an "output embedding" o t ∈ R 96. The details of how the LSTM composes e t and s t−1 can be found in BID13. The output embedding is scored against the embedding of each item in the vocabulary via inner product, before being normalized via softmax to compute a probability distribution over the vocabulary. Like other standard language modeling applications, we treat every input sequence as beginning with an implicit "BOS" (beginning of sequence) token and ending with an implicit "EOS" (end of sequence) token. Unlike standard LSTM language models, our model uses the same learned embedding for the input tokens and for determining the predicted distribution on output tokens from the softmax. 6 This reduces the size of the model by about 40% for a small decrease in model quality, an advantageous tradeoff for mobile applications. Another change from many standard LSTM RNN approaches is that we train these models to restrict the word embeddings to have a fixed L 2 norm of 1.0, a modification found in earlier experiments to improve convergence time. In total the model has 1.35M trainable parameters. Initialization and personalization For many applications public proxy data is available, e.g., for next-word prediction one could use public domain books, Wikipedia articles, or other web content. In this case, an initial model trained with standard (non-private) algorithms on the public data (which is likely drawn from the wrong distribution) can then be further refined by continuing with differentially-private training on the private data for the precise problem at hand. Such pre-training is likely the best approach for practical applications. However, since training models purely on private data (starting from random initialization) is a strictly harder problem, we focus on this scenario for our experiments. Our focus is also on training a single model which is shared by all users. However, we note that our approach is fully compatible with further on-device personalization of these models to the particular data of each user. It is also possible to give the central model some ability to personalize simply by providing information about the user as a feature vector along with the raw text input. LSTMs are well-suited to incorporating such additional context. We evaluate using AccuracyTop1, the probability that the word to which the model assigns highest probability is correct (after some minimal normalization). We always count it as a mistake if the true next word is not in the dictionary, even if the model predicts UNK, in order to allow fair comparisons of models using different dictionaries. In our experiments, we found that our model architecture is competitive on AccuracyTop1 and related metrics (Top3, Top5, and perplexity) across a variety of tasks and corpora. 
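A small sketch of the AccuracyTop1 metric as described above: the model's single highest-probability word must match the true next word, and a true next word outside the dictionary is always counted as a mistake, even if the model predicts UNK. The function and variable names are ours.

import numpy as np

def accuracy_top1(logits, target_words, vocab, unk_id):
    # logits: [num_positions, vocab_size]; target_words: the true next word at each position.
    preds = np.argmax(logits, axis=-1)
    correct = 0
    for pred, word in zip(preds, target_words):
        target_id = vocab.get(word, unk_id)
        if target_id != unk_id and pred == target_id:
            correct += 1                      # out-of-dictionary targets never count as correct
    return correct / max(len(target_words), 1)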
Dataset The Reddit dataset can be accessed through Google BigQuery (Reddit Comments Dataset). Since our goal is to limit the contribution of any one author to the final model, it is not necessary to include all the data from users with a large number of posts. On the other hand, processing users with too little data slows experiments (due to constant per-user overhead). Thus, we use a training set where we have removed all users with fewer than 1600 tokens (words), and truncated the remaining K = 763,430 users to have exactly 1600 tokens. We intentionally chose a public dataset for research purposes, but carefully chose one with a structure and contents similar to private datasets that arise in real-world language modeling tasks such as predicting the next word in a mobile keyboard. This allows for reproducibility, comparisons to non-private models, and inspection of the data to understand the impact of differential privacy beyond coarse aggregate statistics (as in Table 3).

Figure 5: Comparison of sampling strategies and estimators. Fixed sample is exactly C = 100 users per round, and variable sample selects uniformly with probability q for C̄ = 100. The true average corresponds to f, fixed denominator is f̂_f, and clipped denominator is f̂_c.

(Figure caption, partial: a smaller value would actually be better when doing private training. FedSGD is more sensitive to noise than FedAvg, likely because the updates are smaller in magnitude.)

Experiments with SGD We ran experiments using FedSGD taking B = 1600, that is, computing the gradient on each user's full local dataset. To allow more iterations, we used C̄ = 50 rather than 100. Examining Figures 9 and 10, we see S = 2 and σ = 2·10^-3 are reasonable values, which suggests for private training we would need in expectation qW = S/σ = 1500 users per round, whereas for FedAvg we might choose S = 15 and σ = 10^-2 for C̄ = qW = 1000 users per round. That is, the relative effect of the ratio of the clipping level to noise is similar between FedAvg and FedSGD. However, FedSGD takes a significantly larger number of iterations to reach equivalent accuracy. Fixing z = 1, C̄ = 5000 (the value that produced the best accuracy for a private model in TAB2) and a total of 763,430 users gives (3.81, 10^-9)-DP after 3000 rounds and (8.92, 10^-9)-DP after 20000 rounds, so there is indeed a significant cost in privacy to these additional iterations.

Models with larger dictionaries We repeated experiments on the impact of clipping and noise on models with 20000 and 30000 token dictionaries, again using FedAvg training with η = 6, equally weighted users with 1600 tokens, and C̄ = 100 expected users per round. The larger dictionaries give only a modest improvement in accuracy, and do not require changing the clipping and noise parameters despite having significantly more parameters. Results are given in FIG1.

Other experiments We experimented with adding an explicit L2 penalty on the model updates (not the full model) on each user, hoping this would decrease the need for clipping by preferring updates with a smaller L2 norm. However, we saw no positive effect from this.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJ0hF1Z0b
User-level differential privacy for recurrent neural network language models is possible with a sufficiently large dataset.
Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps. In this work, we describe and evaluate a novel mixed-size training regime that mixes several image sizes at training time. We demonstrate that models trained using our method are more resilient to image size changes and generalize well even on small images. This allows faster inference by using smaller images at test time. For instance, we receive a 76.43% top-1 accuracy using ResNet50 with an image size of 160, which matches the accuracy of the baseline model with 2x fewer computations. Furthermore, for a given image size used at test time, we show this method can be exploited either to accelerate training or the final test accuracy. For example, we are able to reach a 79.27% accuracy with a model evaluated at a 288 spatial size for a relative improvement of 14% over the baseline. Figure 1: Test accuracy per image size, models trained on specific sizes (ResNet50, ImageNet). Convolutional neural networks are successfully used to solve various tasks across multiple domains such as visual , audio (van den), language and speech . While scale-invariance is considered important for visual representations , convolutional networks are not scale invariant with respect to the spatial resolution of the image input, as a change in image dimension may lead to a non-linear change of their output. Even though CNNs are able to achieve state-of-theart in many tasks and domains, their sensitivity to the image size is an inherent deficiency that limits practical use cases and requires evaluation inputs to match training image size. For example, demonstrated that networks trained on specific image size, perform poorly on other image sizes at evaluation, as shown in Figure 1. Several works attempted to achieve scale invariance by modifying the network structure . However, the most common method is to artificially enlarge the dataset using a set of label-preserving transformations also known as "data augmentation" . Several of these transformations scale and crop objects appearing within the data, thus increasing the network's robustness to inputs of different scale. Although not explicitly trained to handle varying image sizes, CNNs are commonly evaluated on multiple scales post training, such as in the case of detection and segmentation tasks. In these tasks, a network that was pretrained with fixed image size for classification is used as the backbone of a larger model that is expected to adapt to a wide variety of image sizes. In this work, we will introduce a novel training regime, "MixSize" for convolutional networks that uses stochastic image and batch sizes. The main contributions of the MixSize regime are: • Reducing image size sensitivity. We show that the MixSize training regime can improve model performance on a wide range of sizes used at evaluation. • Faster inference. As our mixed-size models can be evaluated at smaller image sizes, we show up to 2× reduction in computations required at inference to reach the same accuracy as the baseline model. • Faster training vs. high accuracy. We show that reducing the average image size at training leads to a trade-off between the time required to train the model and its final accuracy. 
2 RELATED WORK Deep convolutional networks are traditionally trained using fixed-size inputs, with spatial dimensions H × W and a batch size B. The network architecture is configured such that the spatial dimensions are reduced through strided pooling or convolutions, with the last classification layer applied on a 1 × 1 spatial dimension. Modern convolutional networks usually conclude with a final "global" average pooling (;, that reduces any remaining spatial dimensions with a simple averaging operation. Modifying the spatial size of an input to a convolutional layer by a factor γ, will yield an output with size scaled by the same factor γ. This modification does not require any change to the number of parameters of the given convolutional layer, nor its underlying operation. Small changes in the expected size can occur, however, due to padding or strides performed by the layer. It was observed by practitioners and previous works that a network trained on a specific input dimension can still be used at inference using a modified image size to some extent . Moreover, evaluating with an image size that is larger than used for training can improve accuracy up to a threshold, after which it quickly deteriorates . showed a computational-vs-accuracy trade-off in scaling image size used to train and evaluate with a convolutional network. This finding is consistent with past findings, which demonstrated that training with a larger image size can in a larger classification error . In addition, previous works explored the notion of "progressive resizing" -increasing image size as training progresses to improve model performance and time to convergence. More recently, demonstrated that CNNs can be trained using a fixed small image size and fine-tuned posttraining to a larger size, with which evaluation will be performed. This procedure reduced the traintest discrepancy caused by the change in image size and allowed faster training time and improved accuracy -at the cost of additional fine-tuning procedure and additional computations at inference time. In this work we will further explore the notion of using multiple image sizes at training, so the CNN performance will be resilient to test time changes in the image size. Deep neural network training can be distributed across many computational units and devices. The most common distribution method is by "data-parallelism"-computing an average estimate of the gradients using multiple, separably computed data samples. As training NN models is done using batch-SGD method and its variants, scaling this process across more computational devices while maintaining similar utilization for each device inflates the global batch size. Large batch training is known to affect the generalization capabilities of the networks and to require modification of the regime used for its optimization. While several works claimed that large-batch training leads to an inherent "generalization gap" , more recent works demonstrated that this gap is largely caused from an insufficient number of optimization steps performed and can be partly mitigated by hyper-parameter tuning . In order to cope with the changes in the training dynamics of the network, several modifications to the optimization procedure have been proposed such as a linear or a square-root scaling of the learning rate with respect to the batch size growth. Other modifications include per-layer gradient scaling schemes and optimizer modifications . 
Several works also explored using incremented batch-sizes in order to decrease the number of training iterations required to reach the desired accuracy. Recent work by introduced the notion of "Batch Augmentation" (BA)-increasing the batch size by augmenting several instances of each sample within the same batch. BA aids generalization across a wide variety of models and tasks, with the expense of an increased computational effort per step. A similar method called "Repeated Augmentation" (RA) was proposed by. It was also demonstrated that BA may allow to decrease the number of training steps needed to achieve a similar accuracy and also mitigate I/O throughput bottlenecks . As previous works investigated mostly homogeneous training settings (e.g., using a fixed batch size), an open question still exists on the utility of rapidly varying batch-sizes. We will explore this notion and suggest a new optimizer modification that enables training with multiple varying batch-sizes with limited hyper-parameter tuning. The traditional practice of training convolutional networks using fixed-size images holds several shortcomings. First, CNNs are commonly evaluated using a different size than that used for training (; ;) and it was observed that classification accuracy may degrade above or below a certain size threshold (and Figure 1). To remedy these issues, we suggest a stochastic training regime, where image sizes can change in each optimization step. Motivation. In order to motivate our method, we first evaluate the impact of the image size on the training progress of a CNN -by examining gradient statistics during training 2. Specifically, in Table 1 we measured the correlation of the gradients across image sizes. We see that gradients computed across different scales of the same image have a strong correlation compared to those obtained across different images. This correlation is especially apparent during the first stages of training and decreases as the model converges. This suggests that the small image gradients can be used as an approximation of the full image gradients, with a smaller computational footprint. Therefore, using large images along the entire training process may be sub-optimal in terms of computational resource utilization. More specifically, as the gradients of images of different size are highly correlated at the initial steps of training, it may prove beneficial to sacrifice spatial size in favor of batch size that can be increased. To do so, we suggest the following. The MixSize training regime. We suggest "MixSize", a stochastic training regime, where input sizes can vary in each optimization step. In this regime, we modify the spatial dimensions H, W (height and width) of the input image size 3, as well as the batch size. The batch size is changed either by the number of samples used, denoted B, or the number of batch-augmentations for each sample , denoted D ("duplicates"). To simplify our notation and use-cases, we will follow the common practice of training on square images and use S = H = W. Formally, in the MixSize regime, these sizes can be described as random variables sharing a single discrete distribution where ∀i: p i ≥ 0 and i p i = 1. 1.03e −6 1.44e 6.24e 1.95e 6.34e 2.26e As the computational cost of each training step is approximately proportional to S 2 ·B·D, we choose these sizes to reflect an approximately fixed budget for any choice i such that Thus the computational and memory requirements for each step are constant. Benefits and Trade-offs. 
We will demonstrate that using such a MixSize regime can have a positive impact on the resiliency of trained networks to the image size used at evaluation. That is, mixed-size networks will be shown to have better accuracy across a wide range of sizes. This entails a considerable saving in computations needed for inference, especially when using smaller models. Furthermore, given a fixed budget of computational and time resources (per step), we can now modify our regime along spatial and batch axes. We will explore two trade-offs: • Decrease number of iterations per epoch -by enlarging B at the expense of S. • Improve generalization per epoch -by enlarging D at the expense of S. MixSize regimes continuously change the statistics of the model's inputs, by modifying the image size as well as batch-size. This behavior may require hyper-parameter tuning and may also affect size-dependent layers such as batch normalization . To easily adapt training regimes to the use of MixSize as well as improve their final performance, we continue to describe two methods we found useful: Gradient Smoothing and Batch-norm calibration. Training with varying batch and spatial sizes inadvertently leads to a change in the variance of the accumulated gradients. For example, in Table 1, the gradient variance is larger when computed over a small image size (unsurprisingly). This further suggests that the optimization regime should be adapted to smaller spatial sizes, in a manner similar to learning-rate adaptations that are used for large-batch training. This property was explored in previous works concerning large-batch regimes, in which a learning rate modification was suggested to compensate for the variance reduction for larger batch-sizes. Unfortunately, the nature of this modification can vary from task to task or across models , with solutions such as a square-root scaling , linear scaling or a fixed norm ratio . Here we suggest changing both the spatial size as well as the batch size, which is also expected to modify the variance of gradients within each step and further complicates the choice of optimal scaling. Previous works suggested methods to control the gradient norm by gradient normalization and gradient clipping . These methods explicitly disable or limit the gradient's norm used for each optimization step, but also limit naturally occurring variations in gradient statistics. We suggest an alternative solution to previous approaches, which we refer to as "Gradient smoothing". Gradient smoothing mitigates the variability of gradient statistics when image sizes are constantly changing across training. We introduce an exponentially moving weighted average of the gradients' normḡ t (scalar) which is updated according toḡ t = αḡ t−1 + (1 − α)g t where We normalize the gradients used for each step by the smoothing coefficient, such that each consecutive step is performed with gradients of similar norm. For example, for the vanilla SGD step, we use a weight update rule of the form This running estimate of gradient norm is similar to the optimizer suggested by , which keeps a per-layer estimate of gradient moments. Gradient smoothing, however, is designed to adapt globally (across all layers) to the batch and spatial size modification and can be used regardless of the optimization method used. We found gradient smoothing to be mostly beneficial in regimes where multiple varying batch sizes are used. 
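One possible reading of the gradient-smoothing rule above, sketched in PyTorch: keep an exponential moving average ḡ_t of the global gradient norm and divide each step's gradients by it, so that steps taken at different spatial and batch sizes have comparable magnitude. This is our interpretation of the description, not the authors' code, and the value of α is a free hyper-parameter.

import torch

class GradientSmoother:
    def __init__(self, alpha=0.99, eps=1e-8):
        self.alpha, self.eps, self.g_bar = alpha, eps, None

    @torch.no_grad()
    def apply(self, parameters):
        params = [p for p in parameters if p.grad is not None]
        g_t = torch.sqrt(sum((p.grad ** 2).sum() for p in params))        # current global gradient norm
        self.g_bar = g_t if self.g_bar is None else self.alpha * self.g_bar + (1 - self.alpha) * g_t
        for p in params:
            p.grad.div_(self.g_bar + self.eps)                            # normalize by the running norm

# usage: loss.backward(); smoother.apply(model.parameters()); optimizer.step()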
Figure 5a in the Appendix demonstrates how gradient smoothing reduces the gap between gradient norms of different sizes. Measuring test error on the same model shows a slight advantage for gradient-smoothing (Appendix Figure 5b). As demonstrated by , using a different image size at evaluation may incur a discrepancy between training and evaluation protocols, caused by using different data pre-processing. suggested a post-training procedure, where a network trained on a specific fixed-size is fine-tuned on another size, later used for evaluation. Their solution required 10s of training epochs, amounting to 1000s of full forward and back-propagation computations, along with parameter updates for batch-norm and classifier layers. In contrast, we surmise that for networks trained with mixed-regimes, discrepancy issues mainly arise from the use of the batch-norm layers and can be solved by targeting them specifically. Batch-norm layers introduce a discrepancy between training and test evaluations , as at inference a running estimate of the mean and variance (of training data) are used instead of the actual mean and variance values. This difference is emphasized further in the use of varying image size, as changing the spatial size of an input map can significantly modify the measured variance of that map. While a fine-tuning process per image size can eliminate this discrepancy , we offer a simpler alternative. For each evaluated size, we calibrate the mean and variance estimates used for that size by computing an average value over a small number of training examples. This calibration requires only a few (100s) feed-forward operations with no back-propagation or parameter update and takes only a few seconds on a single GPU. Interestingly, we highlight the fact that although this process has little or no effect on models trained using a fixed-size input, it does improve our mixed-size models considerably on a wide range of image sizes. CIFAR10/100. First, we examine our method using the common visual datasets CIFAR10/100 As CIFAR datasets are limited in size, we consider the following balanced stochastic regime chosen: The regime was designed to be centered around the mean value of 28. As the original image size used for training is 32 × 32, we are now able to increase either the batch size or number of duplicates for each training step by a factor of 32 2 S 2 such that S 2 ·B ·D is approximately constant. We denote our modified mixed-size regimes as B + for an increased effective batch-size and D + for an increased number of BA duplicates of the same ratio. We used our sampling strategy to train and compare our regime to the baseline . We use the original hyper-parameters without modification. For the B + regime, use our gradient smoothing method, as described in Section 4.1. For each , we measure our final test accuracy on the original 32 × 32 image size. We also perform batch-norm calibration as described in Section 4.2. From Table 2, we see that our MixSize regimes on CIFAR datasets yield two possible improvements: • Reduced number of training steps to achieve a similar test accuracy using B + regime. • Better test accuracy when using D + regime. Training progress on the CIFAR10 using ResNet44 is depicted in Figure 2. Interestingly, although designed only to reduce training time, we can see that our B + regime also improves accuracy in some cases. 
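The per-step sampling used by these CIFAR regimes can be sketched as below: a spatial size is drawn from the chosen distribution, and the freed compute (32/S)² is spent either on a larger batch (B+) or on more batch-augmentation duplicates (D+), keeping S²·B·D roughly constant. The support and probabilities shown are placeholders, not the exact regime used in the paper.

import random

BASE_SIZE, BASE_BATCH, BASE_DUP = 32, 64, 1
SIZES = [24, 28, 32]            # assumed example support, centred near 28
PROBS = [0.25, 0.50, 0.25]      # assumed example probabilities

def sample_step(regime="B+"):
    S = random.choices(SIZES, weights=PROBS, k=1)[0]
    budget = (BASE_SIZE / S) ** 2                                     # compute freed relative to 32x32 training
    if regime == "B+":
        return S, int(round(BASE_BATCH * budget)), BASE_DUP           # spend it on batch size
    return S, BASE_BATCH, max(1, int(round(BASE_DUP * budget)))       # "D+": spend it on duplicates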
This improvement can be attributed to a regularization effect induced by changing image sizes during training, also manifested by an increase in training error throughout its progress. ImageNet. We also perform large scale experiments using the ImageNet dataset to confirm our findings. We used the ResNet-50 model, with the training regime suggested by that consists of base learning rate of 0.1, decreased by a factor of 10 on epochs 30, 60, 80, stopping at epoch 90. We used the base batch size of 256 over 4 devices and L 2 regularization over weights of convolutional layers. We used the standard data augmentation and did not incorporate any additional regularization or augmentation techniques. Additionally, we also used the EfficientNet-B0 model suggested by. We used the same data augmentation and regularization as the original paper, but opted for a shorter training regime with a momentum-SGD optimizer that consisted of a cosine-annealed learning rate over 200 epochs starting from an initial base 0.1 value. For the ImageNet dataset, we use the following stochastic regime found by cross-validation on several alternatives (see Appendix D): While the original training regime consisted of images of size 224×224, our proposed regime makes for an average image size ofS ×S = 144 × 144. This regime was designed so that the reduced spatial size can be used to increase the corresponding batch size or the number of BA duplicates, as described in Section 3. We are first interested in accelerating the time needed for convergence of the tested models using our B + scheme. We enlarge the batch size used for each spatial size by a factor of 224 2 S 2 such that S 2 · B is kept approximately fixed. As the average batch size is larger than B o, which was used with the original optimization hyper-parameters, we scale the learning rate linearly as suggested by by a factor ofB Bo. We note that for the proposed regimes we did not require any learning rate warm-up, due to the use of gradient smoothing. As can be seen in Figure 3, regime B + enables training with approximately 2.7× less training steps, while reaching a better-than-baseline accuracy of 76.61%. As sizes were chosen to reflect in approximately equal computational cost per iteration, B + regime offers a similar improvement in total wall-clock time. Next, we perform a similar experiment with a D + regime, where the number of BA duplicates is similarly increased with respect to D o instead of the batch size. This scaling with an average duplicates ofD = 3. As the computational cost for each step remains approximately constant, as well as the number of required steps per epochs, training a model under this regime requires an equal wall-clock time. However, the increased batch-augmentation improves the final test accuracy to 78.04%, approximately 7% relative improvement over the 76.4% baseline. Next, we examine how MixSize affects the ing model resiliency to changes in the image size during test-time. We evaluated the models by varying the test-time image sizes around the original 224 spatial size: S = 224 + 32 · m, m ∈ {−6, ..., 6}. The common evaluation procedure for ImageNet models first scales the image to a 256 smallest dimension and crops a center 224 × 224 image. We adapt this regime for other image sizes by scaling the smallest dimension to 8 7 S (since 8 7 · 224 = 256) and then cropping the center S × S patch. 
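The evaluation protocol just described, together with the batch-norm calibration used for mixed-regime models (detailed next), can be sketched with torchvision transforms. This is an illustrative reconstruction rather than the authors' evaluation code.

import torch
from torchvision import transforms

def eval_transform(S):
    return transforms.Compose([
        transforms.Resize(int(round(8 * S / 7))),   # scale the smallest dimension to 8S/7
        transforms.CenterCrop(S),                   # crop the central S x S patch
        transforms.ToTensor(),
    ])

@torch.no_grad()
def calibrate_batchnorm(model, train_loader, num_batches=200):
    # Refresh batch-norm running statistics at the target size: forward passes only,
    # no back-propagation and no parameter updates.
    model.train()
    for i, (images, _) in enumerate(train_loader):
        if i >= num_batches:
            break
        model(images)
    return model.eval()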
Models trained with a mixed regime were calibrated to a specific evaluation size by measuring batch-norm statistics for 200 batches of training samples. We note that for original fixed-size regimes this calibration procedure ed with degraded and so we report accuracy without calibration for these models. We did not use any fine-tuning procedure post training for any of the models. As can be seen in Figure 4a, the baseline model trained using a fixed size, reaches 76.4% top-1 accuracy at the same 224 spatial size it was trained on. As observed previously, the model continues to slightly improve beyond that size, to a maximum of 76.8% accuracy. However, it is apparent that the model's performance quickly degrades when evaluating with sizes smaller than 224. We compare these with a D + regime, trained with an average size ofS = 144. As described earlier, this model requires the same time and computational resources as the baseline model. However, due to the decreased average size, we were able to leverage more than 1 duplicates per batch on average, which improved the model's top-1 accuracy to 77.14% at size 224. Furthermore, we find that the model performs much more favorably at image sizes smaller than 224, scoring an improved (over baseline) accuracy of 76.43% at only 160×160 spatial size. We analyzed an alternative regime S, where the average spatial size is larger at 208 × 208 (for more details see Appendix D). The model trained with the S regime offers a similar improvement in accuracy, only across a larger spatial size, as it observed an average size of 208 × 208 during training. Figure 4a demonstrates that while all three models (Fixed with S = 224, S and S ) were trained with the same compute and memory budget, mixed-size regimes offer superior accuracy over a wide range of evaluation sizes. Specifically, mixed-regime at S = 208 dominates the baseline fixed-size regime at all sizes, while our mixed regime at S = 144 achieves best at sizes smaller than 224. We also compared the classification performance across evaluated image sizes, using networks trained on a variety of fixed sizes and our mixed regimes. As a baseline, we use obtained by (trained with repeated augmentations, without fine-tuning) and compare them with mixed-regime models trained with an equal computational budget, by setting the base number of BA duplicates to D = 2. As can be seen in Figure 4b, mixed-regime trained models offer a wider range of resolutions with close-to-baseline accuracy (within a 2% change) and perform better than their fixed-size counterparts at all sizes. As the number of floating-point operations (flops) grows linearly with the number of pixels, using a mixed regime significantly improves accuracy per compute at evaluation. We further note that our S model reaches a top accuracy of 79.27% at a 288 × 288 evaluation size. In this work, we introduced and examined a performance trade-off between computational load and classification accuracy governed by the input's spatial size. We suggested stochastic image size regimes, which randomly change the spatial dimension as well as the batch size and the number of augmentation (duplicates) in the batch. Stochastic regime benefits are threefold: reduced number of training iterations; or improved model accuracy (generalization) and improved model robustness to changing the image size. We believe this approach may have a profound impact on the practice of training convolutional networks. 
Given a computational and time budget, stochastic size regimes may enable to train networks faster, with better , as well as to target specific image sizes that will be used at test time. As the average size chosen to train is reflected in the optimal operating point for evaluation resolution, mixed regimes can be used to create networks with better performance across multiple designated use cases. + regime creates two batch sizes: 256 and 2, 048 respectively. Gradient smoothing helps to reduce gap between gradient norms at difference batch sizes and improves final accuracy. We wish to consider training regimes with varying image sizes, such that the average image size is smaller than the desired evaluation size. For example, for the height dimension H, we wish to obtain an average size ofH = i p i H i such thatH < H o. We consider three alternatives for image size variations: • Increase image size from small to large, where each image size is used for number of epochs E i = p i E total, where E total is the total number training epochs required. • Using a random image size for each epoch, keeping the epoch number for each size at E i • Sampling image size per training step at probability p i As can be seen in Figure 6, we found that random sampling regimes performed better than scaling image size from small to large . While sampling both at epoch and step time frames performed similarly, replacing sizes on each step seemed to converge faster and to have less noise in measured test accuracy. We note that these behaviours may partly stem from the use of batch-normalization which is sensitive to the image size used at evaluation or insufficient hyper-parameter tuning for each specific size (e.g., spiking error at the end of the small-to-large regime). Considering these findings, we continue to perform our experiments using the third regime -sampling image size per training step. We used alternative size regimes balanced around 224, named S and S. They can be described by the following distributions:
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HylUPnVKvH
Training convnets with mixed image size can improve results across multiple sizes at evaluation
We propose a simple technique for encouraging generative RNNs to plan ahead. We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states). We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task. Recurrent Neural Networks (RNNs) are the basis of state-of-art models for generating sequential data such as text and speech. RNNs are trained to generate sequences by predicting one output at a time given all previous ones, and excel at the task through their capacity to remember past information well beyond classical n-gram models BID6 BID27. More recently, RNNs have also found success when applied to conditional generation tasks such as speech-to-text BID9, image captioning BID61 and machine translation.RNNs are usually trained by teacher forcing: at each point in a given sequence, the RNN is optimized to predict the next token given all preceding tokens. This corresponds to optimizing one-stepahead prediction. As there is no explicit bias toward planning in the training objective, the model may prefer to focus on the most recent tokens instead of capturing subtle long-term dependencies that could contribute to global coherence. Local correlations are usually stronger than long-term dependencies and thus end up dominating the learning signal. The consequence is that samples from RNNs tend to exhibit local coherence but lack meaningful global structure. This difficulty in capturing long-term dependencies has been noted and discussed in several seminal works (; BID6 BID27 BID45 .Recent efforts to address this problem have involved augmenting RNNs with external memory BID14 BID18 BID22, with unitary or hierarchical architectures BID0 BID51, or with explicit planning mechanisms BID23 . Parallel efforts aim to prevent overfitting on strong local correlations by regularizing the states of the network, by applying dropout or penalizing various statistics BID41 BID64 BID15 BID32 BID39 . Figure 1: The forward and the backward networks predict the sequence s = {x 1, ..., x 4} independently. The penalty matches the forward (or a parametric function of the forward) and the backward hidden states. The forward network receives the gradient signal from the log-likelihood objective as well as L t between states that predict the same token. The backward network is trained only by maximizing the data log-likelihood. During the evaluation part of the network colored with orange is discarded. The cost L t is either a Euclidean distance or a learned metric ||g(h DISPLAYFORM0 with an affine transformation g. Best viewed in color. In this paper, we propose TwinNet, 1 a simple method for regularizing a recurrent neural network that encourages modeling those aspects of the past that are predictive of the long-term future. 
Succinctly, this is achieved as follows: in parallel to the standard forward RNN, we run a "twin" backward RNN (with no parameter sharing) that predicts the sequence in reverse, and we encourage the hidden state of the forward network to be close to that of the backward network used to predict the same token. Intuitively, this forces the forward network to focus on the past information that is useful to predicting a specific token and that is also present in and useful to the backward network, coming from the future (Fig. 1).In practice, our model introduces a regularization term to the training loss. This is distinct from other regularization methods that act on the hidden states either by injecting noise BID32 or by penalizing their norm BID31 BID39, because we formulate explicit auxiliary targets for the forward hidden states: namely, the backward hidden states. The activation regularizer (AR) proposed by BID39, which penalizes the norm of the hidden states, is equivalent to the TwinNet approach with the backward states set to zero. Overall, our model is driven by the intuition (a) that the backward hidden states contain a summary of the future of the sequence, and (b) that in order to predict the future more accurately, the model will have to form a better representation of the past. We demonstrate the effectiveness of the TwinNet approach experimentally, through several conditional and unconditional generation tasks that include speech recognition, image captioning, language modelling, and sequential image generation. To summarize, the contributions of this work are as follows:• We introduce a simple method for training generative recurrent networks that regularizes the hidden states of the network to anticipate future states (see Section 2);• The paper provides extensive evaluation of the proposed model on multiple tasks and concludes that it helps training and regularization for conditioned generation (speech recognition, image captioning) and for the unconditioned case (sequential MNIST, language modelling, see Section 4);• For deeper analysis we visualize the introduced cost and observe that it negatively correlates with the word frequency (more surprising words have higher cost). Given a dataset of sequences S = {s 1, . . ., s n}, where each s k = {x 1, . . ., x T k} is an observed sequence of inputs x i ∈ X, we wish to estimate a density p(s) by maximizing the log-likelihood of the observed data L = n i=1 log p(s i). Using the chain rule, the joint probability over a sequence x 1,..., x T decomposes as: DISPLAYFORM0 This particular decomposition of the joint probability has been widely used in language modeling BID7 BID40 and speech recognition BID5. A recurrent neural network is a powerful architecture for approximating this conditional probability. At each step, the RNN updates a hidden state h f t, which iteratively summarizes the inputs seen up to time t: h DISPLAYFORM1 where f symbolizes that the network reads the sequence in the forward direction, and Φ f is typically a non-linear function, such as a LSTM cell BID27 or a GRU. Thus, h f t forms a representation summarizing information about the sequence's past. The prediction of the next symbol x t is performed using another non-linear transformation on top of h DISPLAYFORM2, which is typically a linear or affine transformation (followed by a softmax when x t is a symbol). 
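The standard forward model described above (an embedding, a recurrent update Φ_f, and an affine output layer followed by a softmax, trained by one-step-ahead prediction) can be written compactly in PyTorch. The GRU cell and the layer sizes are our own illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardLM(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # h_t = Phi_f(x_t, h_{t-1})
        self.out = nn.Linear(dim, vocab_size)           # Psi_f, followed by a softmax

    def forward(self, x):
        # x: [batch, T] token ids; teacher forcing: the state after reading x_t predicts x_{t+1}.
        h, _ = self.rnn(self.emb(x[:, :-1]))
        logits = self.out(h)
        return F.cross_entropy(logits.transpose(1, 2), x[:, 1:])   # mean negative log-likelihood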
The basic idea of our approach is to encourage h f t to contain information that is useful to predict x t and which is also compatible with the upcoming (future) inputs in the sequence. To achieve this, we run a twin recurrent network that predicts the sequence in reverse and further require the hidden states of the forward and the backward networks to be close. The backward network updates its hidden state according to: DISPLAYFORM3 and predicts DISPLAYFORM4 using information only about the future of the sequence. Thus, h f t and h b t both contain useful information for predicting x t, coming respectively from the past and future. Our idea consists in penalizing the distance between forward and backward hidden states leading to the same prediction. For this we use the Euclidean distance (see Fig. 1): DISPLAYFORM5 where the dependence on x is implicit in the definition of h f t and h b t. The function g adds further capacity to the model and comes from the class of parameterized affine transformations. Note that this class includes the identity tranformation. As we will show experimentally in Section 4, a learned affine transformation gives more flexibility to the model and leads to better . This relaxes the strict match between forward and backward states, requiring just that the forward hidden states are predictive of the backward hidden states. The total objective maximized by our model for a sequence s is a weighted sum of the forward and backward log-likelihoods minus the penalty term, computed at each time-step: DISPLAYFORM0 where α is an hyper-parameter controlling the importance of the penalty term. In order to provide a more stable learning signal to the forward network, we only propagate the gradient of the penalty term through the forward network. That is, we avoid co-adaptation of the backward and forward networks. During sampling and evaluation, we discard the backward network. The proposed method can be easily extended to the conditional generation case. The forward hiddenstate transition is modified to h DISPLAYFORM1 where c denotes the task-dependent conditioning information, and similarly for the backward RNN.Bidirectional neural networks BID49 have been used as powerful feature extractors for sequence tasks. The hidden state at each time step includes both information from the past and the future. For this reason, they usually act as better feature extractors than the unidirectional counterpart and have been successfully used in a myriad of tasks, e.g. in machine translation, question answering BID10 and sequence labeling BID37. However, it is not straightforward to apply these models to sequence generation BID65 due to the fact that the ancestral sampling process is not allowed to look into the future. In this paper, the backward model is used to regularize the hidden states of the forward model and thus is only used during training. Both inference and sampling are strictly equivalent to the unidirectional case. Gated architectures such as LSTMs BID27 and GRUs BID13 have been successful in easing the modeling of long term-dependencies: the gates indicate time-steps for which the network is allowed to keep new information in the memory or forget stored information. BID20; Dieng et al. FORMULA1; BID18 effectively augment the memory of the network by means of an external memory. Another solution for capturing long-term dependencies and avoiding gradient vanishing problems is equipping existing architectures with a hierarchical structure BID51. 
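Returning to the penalty and total objective defined above, the sketch below computes the twin cost between cotemporal states of a forward and a backward network, blocking its gradient from reaching the backward network as described. The alignment convention (the forward state that predicts x_{t+1} is matched to the backward state that predicts the same token) and the learned affine g as a linear layer follow our reading of the equations; they are illustrative, not the authors' code.

import torch
import torch.nn as nn

def twin_cost(h_f, h_b_rev, g):
    # h_f:     [batch, T-1, dim]; h_f[:, t] is the forward state that predicts x_{t+1}.
    # h_b_rev: [batch, T-1, dim]; backward states computed on the reversed sequence,
    #          so after flipping, h_b[:, t] is the backward state that predicts x_t.
    h_b = h_b_rev.flip(1)
    # Match states that predict the same token (x_2 ... x_{T-1}); detach stops the
    # penalty gradient from flowing into the backward network.
    diff = g(h_f[:, :-1]) - h_b[:, 1:].detach()
    # Euclidean distance per time step, averaged (a squared distance is a common alternative).
    return torch.linalg.norm(diff, dim=-1).mean()

# Toy usage with random states; during training the total loss would be
# nll_forward + nll_backward + alpha * twin_cost(...), and the backward network
# is discarded at sampling and evaluation time.
g = nn.Linear(256, 256)
loss = twin_cost(torch.randn(4, 19, 256), torch.randn(4, 19, 256), g)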
Other works tackled the vanishing gradient problem by making the recurrent dynamics unitary BID0. In parallel, inspired by recent advances in "learning to plan" for reinforcement learning BID52 BID55, recent efforts try to augment RNNs with an explicit planning mechanism BID23 to force the network to commit to a plan while generating, or to make hidden states predictive of the far future BID34. Regularization methods such as noise injection are also useful to shape the learning dynamics and to prevent local correlations from taking over the learning process. One of the most popular methods for neural network regularization is dropout BID53. Dropout in RNNs was proposed in BID41, and was later extended in BID50 BID15, where recurrent connections are dropped at random. Zoneout BID32 modifies the hidden state to regularize the network by effectively creating an ensemble of recurrent networks of different lengths. BID31 introduce a "norm stabilization" regularization term that ensures that consecutive hidden states of an RNN have a similar Euclidean norm. Recently, BID39 proposed a set of regularization methods that achieve state-of-the-art results on the Penn Treebank language modeling dataset. Other RNN regularization methods include weight noise BID19, gradient clipping BID45, and gradient noise BID42.
We now present experiments on conditional and unconditional sequence generation, and analyze the results in an effort to understand the performance gains of TwinNet. First, we examine conditional generation tasks such as speech recognition and image captioning, where the results show clear improvements over the baseline and other regularization methods. Next, we explore unconditional language generation, where we find our model does not significantly improve on the baseline. Finally, to further determine which tasks the model is well-suited to, we analyze a sequential imputation task, where we can vary the task from unconditional to strongly conditional.
We evaluated our approach on conditional generation for character-level speech recognition, where the model is trained to convert the speech audio signal into a sequence of characters. The forward and backward RNNs are trained as conditional generative models with soft attention. The context information c is an encoding of the audio sequence and the output sequence s is the corresponding character sequence. We evaluate our model on the Wall Street Journal (WSJ) dataset, closely following the setting described in BID4. We use 40 mel-filter bank features with deltas and delta-deltas, together with their energies, as the acoustic inputs to the model; these features are generated according to the Kaldi s5 recipe BID46. The resulting input feature dimension is 123. We observe the Character Error Rate (CER) on our validation set, and we early stop on the best CER observed so far. We report CER for both our validation and test sets.
TAB0: We compare the attention model for speech recognition ("Baseline", BID4); the regularizer proposed by BID31 ("Stabilizing Norm"); and the penalty on the L2 norm of the forward states BID39 ("AR"), which is equivalent to TwinNet when all the hidden states of the backward network are set to zero. We report the results of our model ("TwinNet") both with g = I, the identity mapping, and with a learned g.
Model | Test CER | Valid CER
Baseline | 6.8 | 9.0
Baseline + Gaussian noise | 6.9 | 9.1
Baseline + Stabilizing Norm | 6.6 | 9.0
Baseline + AR | 6.5 | 8.9
Baseline + TwinNet (g = I) | 6.6 | 8.7
Baseline + TwinNet (learnt g) | 6.2 | 8.4
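For reference, CER is the character-level edit (Levenshtein) distance between the hypothesis and the reference transcript, normalized by the number of reference characters; the following is a minimal sketch of this metric (our illustration, not the evaluation code used in the paper).

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two character sequences (single-row DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (free if characters match)
    return d[len(hyp)]

def cer(references, hypotheses):
    """Character Error Rate over a corpus: total edits / total reference characters."""
    edits = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
    chars = sum(len(r) for r in references)
    return edits / chars

# e.g. cer(["the cat"], ["the bat"]) == 1 / 7
```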
For all our models and the baseline, we follow the setup in BID4 and pretrain the model for 1 epoch, within this period, the context window is only allowed to move forward. We then perform 10 epochs of training, where the context window looks freely along the time axis of the encoded sequence, we also perform annealing on the models with 2 different learning rates and 3 epochs for each annealing stage. We use the AdaDelta optimizer for training. We perform a small hyper-parameter search on the weight α of our twin loss, α ∈ {2.0, 1.5, 1.0, 0.5, 0.25, 0.1}, and select the best one according to the CER on the validation set. Results We summarize our findings in TAB0. Our best performing model shows relative improvement of 12% comparing to the baseline. We found that the TwinNet with a learned metric (learnt g) is more effective than strictly matching forward and hidden states. In order to gain insights on whether the empirical usefulness comes from using a backward recurrent network, we propose two ablation tests. For "Gaussian Noise," the backward states are randomly sampled from a Gaussian distribution, therefore the forward states are trained to predict white noise. For "AR," the backward states are set to zero, which is equivalent to penalizing the norm of the forward hidden states BID39. Finally, we compare the model with the "Stabilizing Norm" regularizer BID31, that penalizes the difference of the norm of consecutive forward hidden states. Results shows that the information included in the backward states is indeed useful for obtaining a significant improvement. Analysis The training/validation curve comparison for the baseline and our network is presented in FIG1. 4 The TwinNet converges faster than the baseline and generalizes better. The L2 cost raises in the beginning as the forward and backward network start to learn independently. Later, due to the pressure of this cost, networks produce more aligned hidden representations. FIG2 provides examples of utterances with L2 plotted along the time axis. We observe that the high entropy words produce spikes in the loss for such words as "uzi." This is the case for rare words which are hard to predict from the acoustic information. To elaborate on this, we plot the L2 cost averaged over a word depending on the word frequency. The average distance decreases with the increasing frequency. The histogram comparison FIG1 ) for the cost of rare and frequent words reveal that the not only the average cost is lower for frequent words, but the variance is higher for rare words. Additionally, we plot the dependency of the L2 cost cross-entropy cost of the forward network FIG1 ) to show that the conditioning also plays the role in the entropy of the output, the losses are not absolutely correlated. We evaluate our model on the conditional generation task of image captioning task on Microsoft COCO dataset BID35. The MS COCO dataset covers 82,783 training images and 40,504 images for validation. Due to the lack of standardized split of training, validation and test data, we follow Karpathy's split BID28 BID61. These are 80,000 training images and 5,000 images for validation and test. We do early stopping based on the validation CIDEr scores and we report BLEU-1 to BLEU-4, CIDEr, and Meteor scores. To evaluate the consistency of our method, we tested TwinNet on both encoder-decoder ('Show&Tell', BID59 and soft attention ('Show, Attend and Tell', BID61 image captioning models. 
We use a Resnet BID25 with 101 and 152 layers, pre-trained on ImageNet for image classification. The last layer of the Resnet is used to extract 2048-dimensional input features for the attention model BID61. We use an LSTM with 512 hidden units for both "Show & Tell" and soft attention. Both models are trained with the Adam BID29 optimizer with a learning rate of 10^−4. TwinNet showed consistent improvements over "Show & Tell" (Table 2). For the soft attention model we observe small but consistent improvements for the majority of scores.
Table 2: Results for image captioning on the MS COCO dataset; the higher the better for all metrics (BLEU-1 to BLEU-4, METEOR, and CIDEr). We reimplement both Show&Tell BID59 and Soft Attention BID61 in order to add the twin cost. We use two types of image features, extracted either with Resnet-101 or Resnet-152.
DeepVS BID28 | 62.5 / 45.0 / 32.1 / 23.0 / 19.5 / 66.0
ATT-FCN BID63 | 70.9 / 53.7 / 40.2 / 30.4 / 24.3 / -
Show & Tell BID59 | - / - / - / 27.7 / 23.7 / 85.5
Soft Attention BID61 | 70.7 / 49.2 / 34.4 / 24.3 / 23.9 / -
Hard Attention BID61 | 71.8 / 50.4 / 35.7 / 25.0 / 23.0 / -
MSM BID62 | 73.0 / 56.5 / 42.9 / 32.5 / 25.1 / 98.6
Adaptive Attention BID36 | 74
Table 3: (left) Test set negative log-likelihood for binarized sequential MNIST, where the marked entries denote lower performance of our model with respect to the baselines. (right) Perplexity on WikiText-2 and Penn Treebank BID39. AWD-LSTM refers to the model of BID39 trained with the official implementation at http://github.com/salesforce/awd-lstm/.
Model | MNIST NLL
DBN 2hl BID16 | ≈84.55
NADE BID57 | 88.33
EoNADE-5 2hl BID47 | 84.68
DLGM 8 BID48 | ≈85.51
DARN 1hl | ≈84.13
DRAW | ≤80.97
P-Forcing (3-layer) BID33 | 79.58
PixelRNN (1-layer) BID44 | 80.75
PixelRNN (7-layer) BID44 | 79.20
PixelVAE BID24 | 79.02
MatNets BID1 | 78.50
Baseline LSTM (3 layers) | 79.87
+ TwinNet (3 layers) |
Baseline LSTM (3 layers) + dropout | 79.59
+ TwinNet (3 layers) | 79.12
Model | Perplexity
LSTM BID64 | 82.2 / 78.4
4-layer LSTM BID38 | 67.9 / 65.4
5-layer RHN BID38 | 64
BID38 | 78.1 / 75.6
1-layer LSTM BID38 | 69.3 / 65.9
2-layer LSTM BID38 | 69.1 / 65.9
AWD-LSTM | 68.7 / 65.8
+ TwinNet | 68.0 / 64.9
We investigate the performance of our model in pixel-by-pixel generation for sequential MNIST. We follow the setting described by BID33: we use an LSTM with 3 layers of 512 hidden units for both the forward and backward LSTMs, batch size 20, learning rate 0.001, and clip the gradient norms to 5. We use Adam BID29 as our optimization algorithm and decay the learning rate by half after 5, 10, and 15 epochs. Our results are reported in Table 3 (left). Our baseline LSTM implementation achieves 79.87 nats on the test set. We observe that adding the TwinNet regularization cost consistently improves performance in this setting, by about 0.52 nats. Adding dropout to the baseline LSTM is beneficial. Further gains were observed by adding both dropout and the TwinNet regularization cost. This last model achieves 79.12 nats on the test set. Note that this is competitive with deeper models such as PixelRNN BID44 (7 layers) and PixelVAE BID24, which uses an autoregressive decoder coupled with a deep stochastic auto-encoder. As a last experiment, we report results obtained on a language modelling task using the Penn Treebank and WikiText-2 datasets BID39. We augment the state-of-the-art AWD-LSTM model BID39 with the proposed TwinNet regularization cost. The results are reported in Table 3 (right). In this paper, we presented a simple recurrent neural network model that has two separate networks running in opposite directions during training.
Our model is motivated by the fact that states of the forward model should be predictive of the entire future sequence. This may be hard to obtain by optimizing one-step ahead predictions. The backward path is discarded during the sampling and evaluation process, which makes the sampling process efficient. Empirical show that the proposed method performs well on conditional generation for several tasks. The analysis reveals an interpretable behaviour of the proposed loss. One of the shortcomings of the proposed approach is that the training process doubles the computation needed for the baseline (due to the backward network training). However, since the backward network is discarded during sampling, the sampling or inference process has the exact same computation steps as the baseline. This makes our approach applicable to models that requires expensive sampling steps, such as PixelRNNs BID44 and WaveNet (a). One of future work directions is to test whether it could help in conditional speech synthesis using WaveNet. We observed that the proposed approach yield minor improvements when applied to language modelling with PennTree bank. We hypothesize that this may be linked to the amount of entropy of the target distribution. In these high-entropy cases, at any time-step in the sequence, the distribution of backward states may be highly multi-modal (many possible futures may be equally likely for the same past). One way of overcoming this problem would be to replace the proposed L2 loss (which implicitly assumes a unimodal distribution of the backward states) by a more expressive loss obtained by either employing an inference network BID30 or distribution matching techniques BID17. We leave that for future investigation.
BydLzGb0Z
The paper introduces a method of training generative recurrent networks that helps to plan ahead. We run a second RNN in a reverse direction and make a soft constraint between cotemporal forward and backward states.
Deep generative models seek to recover the process with which the observed data was generated. They may be used to synthesize new samples or to subsequently extract representations. Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition. This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations. We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level. A human study reveals that the ing generative model is better at generating images that are more faithful to the reference distribution. Generative modelling approaches to representation learning seek to recover the process with which the observed data was generated. It is postulated that knowledge about the generative process exposes important factors of variation in the environment (captured in terms of latent variables) that may subsequently be obtained using an appropriate posterior inference procedure. Therefore, the structure of the generative model is critical in learning corresponding representations. Deep generative models of images rely on the expressiveness of neural networks to learn the generative process directly from data BID11 BID24 BID38. Their structure is determined by the inductive bias of the neural network, which steers it to organize its computation in a way that allows salient features to be recovered and ultimately captured in a representation BID6 BID7 BID24. Recently, it has been shown that independent factors of variation, such as pose and lighting of human faces may be recovered in this way BID5.A promising but under-explored inductive bias in deep generative models of images is compositionality at the representational level of objects, which accounts for the compositional nature of the visual world and our perception thereof BID3 BID37. It allows a generative model to describe a scene as a composition of objects (entities), thereby disentangling visual information in the scene that can be processed largely independent of one another. It provides a means to efficiently learn a more accurate generative model of real-world images, and by explicitly Figure 1: A scene (right) is generated as a composition of objects and . considering objects at a representational level, it serves as an important first step in recovering corresponding object representations. In this work we investigate object compositionality for Generative Adversarial Networks (GANs; BID11), and present a general mechanism that allows one to incorporate corresponding structure in the generator. Starting from strong independence assumptions about the objects in images, we propose two extensions that provide a means to incorporate dependencies among objects and . In order to efficiently represent and process multiple objects with neural networks, we must account for the binding problem that arises when superimposing multiple distributed representations BID18. 
Following prior work, we consider different representational slots for each object BID13 BID34, and a relational mechanism that preserves this separation accordingly. We evaluate our approach on several multi-object image datasets, including three variations of Multi-MNIST, a multi-object variation of CIFAR10, and CLEVR. In particular, the latter two mark a significant improvement in terms of complexity compared to datasets that have been considered in prior work on unconditional multi-object image generation and multi-object representation learning. In our experiments we find that our generative model learns about the individual objects and the background of a scene, without prior access to this information. By disentangling this information at a representational level, it generates novel scenes efficiently through composing individual objects and background, as can be seen in Figure 1. As a quantitative experiment we compare to a strong baseline of popular GANs (Wasserstein and Non-saturating) with recent state-of-the-art techniques (Spectral Normalization, Gradient Penalty) optimized over multiple runs. A human study reveals that the proposed generative model outperforms this baseline in generating better images that are more faithful to the reference distribution.
Generative Adversarial Networks (GANs; BID11) are a powerful class of generative models that learn a stochastic procedure to generate samples from a distribution P(X). Traditionally GANs consist of two deterministic functions: a generator G(z) and a discriminator (or critic) D(x). The goal is to find a generator that accurately transforms samples from a prior distribution z ∼ P(Z) to match samples from the target distribution x ∼ P(X). This can be done by using the discriminator to implement a suitable objective for the generator, in which it should behave adversarially with respect to the goal of the discriminator in determining whether samples x were sampled from P(X) or G(P(Z)), respectively. These objectives can be summarized as a minimax game with the following value function:
min_G max_D V(D, G) = E_{x∼P(X)}[log D(x)] + E_{z∼P(Z)}[log(1 − D(G(z)))].
When the generator and the discriminator are implemented with neural networks, optimization may proceed through alternating (stochastic) gradient descent updates of their parameters with respect to this value function. However, in practice this procedure might be unstable and the minimax formulation is known to be hard to optimize. Many alternative formulations have been proposed and we refer the reader to BID30 and BID26 for a comparison. Following the recommendations of BID26 we consider two practical reformulations of this objective in this paper: Non-Saturating GAN (NS-GAN; BID11), in which the generator maximizes the probability of generated samples being real, and Wasserstein GAN (WGAN;), in which the discriminator minimizes the Wasserstein distance between G(P(Z)) and P(X). For both formulations we explore two additional techniques that have proven to work best on a variety of datasets and architectures: the gradient penalty from BID15 to regularize the discriminator, and spectral normalization BID33 to normalize its gradient flow. We propose to structure the generator of a GAN to generate images as compositions of individual objects and background. In this case it consists of K = 4 object generators (shared weights) that each generate an image from a separate latent vector ẑ_i. These are obtained by having each z_i ∼ P(Z) participate in a relational stage, which allows each representation to be updated as a function of all others. Alternatively, ẑ_i = z_i if no relations are to be modelled.
On the top, a background generator (unique weights) generates a background image from a separate latent vector z_b ∼ P(Z_b), which optionally participates in the relational stage. The whole system is trained end-to-end as in the standard GAN framework, and the final image is obtained by composing (in this case using alpha compositing) the outputs of all generators. In order to formulate the structure required to achieve object compositionality in neural networks, we primarily focus on the corresponding type of generalization behavior that we are interested in. It is concerned with independently varying the different visual primitives (objects) that an image is composed of, requiring these to be identified at a representational level and described in a common format. We account for the binding problem (BID18 BID32 BID41) that may arise in combining these object representations to arrive at a final image. In the following subsections we initially present structure that assumes strict object independence (Section 3.1), then relax this assumption by incorporating relational structure (Section 3.2), and finally allow for the possibility of unstructured background and occlusion (Section 3.3). If we assume that images in P(X) are composed of objects that are strictly independent of one another, then (without loss of generality) we may structure our latent variables accordingly. For images having K objects, we consider K i.i.d. vector-valued random variables Z_i that each describe an object at a representational level. K copies of a deterministic generator G(z) transform samples from each Z_i into images, such that their superposition results in the corresponding scene:
x̃ = Σ_{i=1}^{K} G(z_i).
When each copy of G generates an image of a single object, the resulting generative model efficiently describes images in P(X) in a compositional manner. Each object is described in terms of the same features (i.e. the Z_i's are i.i.d.) and the weights among the generators are shared, such that any acquired knowledge in generating a specific object is transferred across all others. Hence, rather than having to learn about all combinations of objects (including their individual variations) that may appear in an image, it suffices to learn about the different variations of each individual object instead. Notice that the generators cannot communicate, which prevents degenerate solutions from being learned. This comes at a cost in that relations among the objects cannot be modelled in this way. An additional concern is the sum itself, which assumes that images only consist of objects, and that their values can be summed in pixel-space. We will address these concerns in the following, using the superposition of generators as a backbone for object compositionality in our approach.
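As a minimal sketch of this strictly-independent composition (our illustration, not the paper's DCGAN-based code; the small fully-connected generator and the latent size are assumptions), K copies of the same generator map i.i.d. latents to images that are summed and clipped to form the scene.

```python
import torch
import torch.nn as nn

class ObjectGenerator(nn.Module):
    """Single object generator G(z); the same weights are reused for every slot."""
    def __init__(self, z_dim=64, img_shape=(3, 64, 64)):
        super().__init__()
        self.img_shape = img_shape
        out = 1
        for d in img_shape:
            out *= d
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.img_shape)

def compose_independent(generator, z):
    # z: (batch, K, z_dim) -- i.i.d. object latents. Applying one shared generator to every
    # slot means knowledge about generating one object transfers to all others.
    batch, K, z_dim = z.shape
    imgs = generator(z.view(batch * K, z_dim)).view(batch, K, *generator.img_shape)
    return imgs.sum(dim=1).clamp(0.0, 1.0)   # superposition of the K object images

G = ObjectGenerator()
scene = compose_independent(G, torch.randn(8, 4, 64))   # a batch of scenes with K = 4 objects
```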
When specific design choices are made, computation of this kind provides an efficient means to learn about relations between objects and update their representations accordingly BID4. A single head of an attention block updates z_i in the following way: first, a query vector q_i = MLP_q(z_i), a key vector k_i = MLP_k(z_i), and a value vector v_i = MLP_v(z_i) are computed for each z_i. Next, the interaction of an object i with all other objects (including itself) is computed as a weighted sum of their value vectors,
a_i = Σ_j softmax_j( q_i · k_j / √d ) v_j,
where d = dim(v_i) and each MLP corresponds to a multi-layer perceptron. The weights are determined by computing dot-products between q_i and all key vectors, followed by softmax normalization. Finally, the resulting update vector a_i is projected back to the original size of z_i using MLP_up before being added, i.e. ẑ_i = z_i + MLP_up(a_i). Additional heads (modelling different interactions) use different parameters for each MLP. In this case their outputs are combined with another MLP to arrive at a final ẑ_i. Complex relationships among objects can be modelled by using multiple attention blocks to iteratively update z_i. A detailed overview of these computations can be found in Appendix B, and an overview in FIG0.
Up until this point we have assumed that an image is entirely composed of objects, which may be prohibitive for complex visual scenes. For example, certain objects that only appear in the "background" may not occur frequently enough, nor have a regular enough visual appearance, for a model to consider them as such. One could reason that certain visual primitives (those that can be varied independently and re-composed accordingly) will be discovered from the observed data, whereas all other remaining visual information is captured as background by another component. However, these are conflicting assumptions, as the latent representations z_i (and the corresponding generator) would now need to describe objects that assume a regular visual appearance, as well as background that is not regular in its visual appearance at all. Therefore, we consider an additional background generator (see FIG0) having its own set of weights to generate the background from a separate vector of latent variables z_b ∼ P(Z_b). We consider two different variations of this addition, one in which z_b participates in the relational stage, and one in which it does not. A remaining challenge is in combining objects with background and occlusion. A straightforward adaptation of the sum to incorporate pixel-level weights would require the background generator to assign a weight of zero to all pixel locations where objects appear, thereby increasing the complexity of generating the background exponentially. Instead, we require the object generators to generate an additional alpha channel for each pixel, and use alpha compositing to combine the outputs of the different generators and the background through repeated application of the "over" operation, x ← α_k ⊙ x_k + (1 − α_k) ⊙ x, where x_k and α_k are the image and alpha channel produced by object generator k.
4 RELATED WORK
Inductive biases aimed at object compositionality have been previously explored, both in the context of generative models and multi-object representation learning. One line of work models an image as a spatial mixture of image patches, utilizing multiple copies of the same function to arrive at a compositional solution. Different implementations consider RBMs, VAEs BID34, or (recurrent) auto-encoders inspired by EM-like inference procedures BID12 to generate these patches. They consider objects at a representational level, and recent work has shown a means to efficiently model interactions between them (van BID39).
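For concreteness, the following is a minimal PyTorch-style sketch of the relational stage and the compositing step described in Sections 3.2 and 3.3 above (our illustration, not the authors' implementation; the single attention head, the layer sizes, and the use of plain linear layers for each MLP are assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBlock(nn.Module):
    """One head of dot-product attention over the K object latents (relational stage)."""
    def __init__(self, z_dim=64, key_dim=32):
        super().__init__()
        self.q = nn.Linear(z_dim, key_dim)   # MLP_q
        self.k = nn.Linear(z_dim, key_dim)   # MLP_k
        self.v = nn.Linear(z_dim, key_dim)   # MLP_v
        self.up = nn.Linear(key_dim, z_dim)  # MLP_up projects the update back to dim(z)

    def forward(self, z):
        # z: (batch, K, z_dim)
        q, k, v = self.q(z), self.k(z), self.v(z)
        attn = F.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)  # (batch, K, K)
        a = attn @ v                          # weighted sum of value vectors
        return z + self.up(a)                 # z_i <- z_i + MLP_up(a_i)

def alpha_composite(rgba_objects, background):
    # rgba_objects: (batch, K, 4, H, W) with an alpha channel per object;
    # background: (batch, 3, H, W). Repeated application of the "over" operation.
    out = background
    for i in reversed(range(rgba_objects.shape[1])):      # fixed compositing order
        rgb, alpha = rgba_objects[:, i, :3], rgba_objects[:, i, 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out
```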
However, neither of these approaches are capable of modelling complex visual scenes that incorporate unstructured as well as interactions among objects. A conceptually different line of work relies on recurrent neural networks to iteratively model multiple objects in an image, one at a time. BID14 proposes to use attention to arrive at this solution, whereas Eslami et al. FORMULA0 considers objects explicitly. This approach has also been explored in the context of GANs. Im et al. FORMULA0 generates images iteratively by accumulating outputs of a recurrent generator, and BID27 propose to combine these outputs using alpha compositing. BID44 extends this approach further, by considering a separate generator for the , using spatial transformations to integrate a foreground image. They briefly explore multi-object image generation on a dataset consisting of two non-overlapping MNIST digits, yet their approach requires prior knowledge about the size of the objects, and the number of objects to generate. This is information typically unavailable in the real world and not required for our method. A more general concern is the difficulty in modelling relations among objects when they are generated one at a time. Information about the objects must be stored and updated in the memory of the RNN, which is ill-suited for this task without incorporating corresponding relational structure . It prevents relations from being learned efficiently, and requires the RNN to commit to a plan in its first step, without the possibility to revisit this decision. Recent work in GANs is increasingly focusing on incorporating (domain-specific) architectural structure in the generator to generate realistic images. BID29 considers a Spatial Transformer Network as a generator, and proposes an iterative scheme to remove or add objects to a scene. BID22 propose image generation by conditioning on explicit scene graphs to overcome the limitations of standard GANs in generating scenes composed of multiple objects that require relations to be taken into account. BID43 propose a similar approach but condition on a stochastic and-or graph instead. BID1 considers a framework to generate images composed of two objects, conditioned on images of each single object. In our approach we make use of an implicit graph structure (as implemented by our relational mechanism) to model relations among objects, and do not rely on prior information about individual objects (in the form of conditioning). We test different aspects of the proposed structure on several multi-object datasets. We are particularly interested in verifying that images are generated as compositions of objects and that the relational and structure is properly utilized. To that extend, we study how the incorporated structure affects the quality and the content of generated images. Datasets We consider five multi-object datasets. 3 The first three are different variations of Multi-MNIST (MM), in which each image consists of three MNIST digits that were rescaled and drawn randomly onto a 64 × 64 canvas. In Independent MM, digits are chosen randomly and there is no relation among them. The Triplet variation requires that all digits in an image are of the same type, requiring relations among the digits to be considered during the generative process. Similarly RGB Occluded MM requires that each image consist of exactly one red, green, and blue digit. 
The fourth dataset (CIFAR10 + MM) is a variation of CIFAR10 BID25 in which the digits from RGB Occluded MM are drawn onto a randomly choosen (resized) CIFAR10 image. Our final dataset is CLEVR BID21, which we downsample to 160 × 240 followed by center-cropping to obtain 128 × 128 images. Samples from each dataset can be seen in in Appendix C. Evaluation A popular evaluation metric in comparing generated images by GANs is the Fréchet Inception Distance (FID; BID16). It computes the distance between two empirical distributions of images (one generated, and a reference set) as the Fréchet (Wasserstein-2) distance between two corresponding multivariate Gaussian distributions that were estimated from the Inception-features computed for each image. Although pior work found that FID correlates well with perceived human quality of images on standard image datasets BID16; BID30, we find that FID is of limited use when considering image datasets in which the dominant visual aspects are determined by multiple objects. Our in Section 5.2 suggest that FID can not be used to verify whether image distributions adhere to certain properties, such as the number of objects. We hypothesize that this inability is inherent to the Inception embedding having been trained only for single object classification. 0 Generator 1 Generator 2 Composed Image Generator 0 Generator 1 Generator 2 Composed Image Generator 0 Generator 1 Generator 2 Composed ImageTo compensate for this we conduct two different studies among humans, 1) to compare images generated by our models to a baseline, and 2) to answer questions about the content of generated images. The latter allows us to verify whether generated images are probable samples from our image distribution, eg. by verifying that they have the correct number of objects. As conducting human evaluation of this kind is not feasible for large-scale hyperparameter search we will continue to rely on FID to select the "best" models during hyper-parameter selection. Details of these human studies can be found in Appendix B.Set-up Each model is optimized with ADAM using a learning rate of 10 −4, and batch size 64 for 1M steps. We compute the FID (using 10K samples) every 20K steps, and select the best set of parameters accordingly. On each dataset, we compare GANs that incorporate our proposed structure to a strong baseline that does not. In both cases we conduct extensive grid searches covering on the order of 40-50 hyperparameter configurations for each dataset, using ranges that were previously found good for GAN BID30 BID26. Each configuration is ran with 5 different seeds to be able to estimate its variance. An overview of each hyper-parameter search can be found in Appendix B, and samples of our best models in Appendix C.Composing On Independent MM and Triplet MM we sum the outputs of the object generators as in, followed by clipping. On all other datasets we use alpha compositing with a fixed order. In this case the object generators output an additional alpha channel, except for RGB Occluded MM in which we obtain alpha values by thresholding the output of each object generator for simplicity. Notation In reporting our we will break down the obtained when incorporating structure in GAN across the different structural parts. In particular we will denote k-GAN to describe a generator consisting of K = k components, k-GAN rel. if it incorporates relational structure and k-GAN ind. if it does not. Additionally we will append "bg." when the model includes a separate generator. 
Since any variation incorporates multiple components, we will use k-GAN to refer to GANs that incorporate any of the proposed structure as a collective. We will use GAN to refer to the collection of GANs with different hyperparameters in our baseline. Utilizing Structure In analyzing the output of each generator for k-GAN, we consistently find that the final image is generated as a composition of images consisting of individual objects and In the case of CLEVR, in which images may have a greater number of objects than the number of components K that we used during training, we find that the generator continues to learn a factored solution. Visual primitives are now made up of multiple objects, examples of which can be seen at the bottom rows in FIG2. A similar tendency was also found when analyzing generated images by k-GAN ind. when k > 3 on Multi-MNIST. The generator decodes part of its latent space as "no digit" as an attempt at generating the correct number of digits. Generator 0 Generator 1 Generator 2 Generator 3 Generator 4 Generator 5 Composed Image Generator 0 Generator 1 Generator 2 Generator 3 Generator 4 Generator 5 Composed Image Generator 0 Generator 1 Generator 2 Generator 3 Generator 4 Generator 5 Composed Image Generator 0 Generator 1 Generator 2 Generator 3 Generator 4 Generator 5 Composed ImageFrom the generated samples in Appendix C we observe that relations among the objects are correctly captured in most cases. In analyzing the generator we find that it sometimes generates a single object together with the . It rarely generates more than one object, confirming that although it is capable, it is indeed more efficient to generate images as compositions of objects. Latent Traversal We explore the degree to which the relational structure affects our initial independence assumption about objects. If it were to cause the latent representations to be fully dependent on one another then our approach would no longer be compositional in the strict sense. Note that although we have a clear intuition in how this mechanism should work, there is no corresponding constraint in the architecture. We conduct an experiment in which we traverse the latent space of a single latent vector in k-GAN rel., by adding a random vector to the original sample with fixed increments and generating an image from the ing latent vectors. Several examples can be seen in Figure 5b. In the first row it can be seen that as we traverse the latent space of a single component the blue digit 9 takes on the shape of a 3, whereas the visual presentation of the others remain unaffected. Similarly in the second and third row the green digits are transformed, while other digits remain fixed. Hence, by disentangling objects at a representational level the underlying representation is more robust to common variations in image space. We observe this behavior for the majority of the generated samples, confirming to a large degree our own intuition of how the relational mechanism should be utilized. When we conduct the same Figure 5: Three generated images by a) GAN and b) 5-GAN rel. bg., when traversing the latent space of a single (object) generator at different increments. On the right it can be seen that in each case only a single digit is transformed, whereas the visual presentation of the others remains unaffected. 
In the case of GAN (left) the entire scene changes.latent traversal on the latent space of GAN for which the information encoding different objects is entangled, it in a completely different scene (see Figure 5a). FID We train k-GAN and GAN on each dataset, and compare the FID of the models with the lowest average FID across seeds. On all datasets but CLEVR we find that k-GAN compares favorably to our baseline, although typically by a small margin. A break-down of the FID achieved by different variations of k-GAN reveals several interesting observations FIG8 ). In particular, it can be observed that the lowest FID on Independent MM is obtained by 4-GAN without relational structure. This is surprising as each component is strictly independent and therefore 4-GAN ind. is unable to consistently generate 3 digits. Indeed, if we take a look at the generated samples in Figure 11, then we frequently observe that this is the case. It suggests that FID is unable to account for these properties of the generated images, and renders the small FID differences that we observed inconclusive. FIG8 does reveal some large FID differences across the different variations of k-GAN on Triplet MM, and RGB Occluded MM. It can be observed that the lack of a relational mechanism on these datasets is prohibitive (as one would expect), ing in poor FID for k-GAN ind. Simultaneously it confirms that the relational mechanism is properly utilized when relations are present. We asked humans to compare the images generated by k-GAN rel. (k=3,4,5) to our baseline on RGB Occluded MM, CIFAR10 + MM and CLEVR, using the configuration with a generator for the last two datasets. For each model we select the 10 best hyper-parameter configurations (lowest FID), from which we each generate 100 images. We asked up to three raters for each image and report the majority vote or "Equal" if no decision can be reached. FIG4 reports the when asking human raters to compare the visual quality of the generated images by k-GAN to those by GAN. It can be seen that k-GAN compares favorably across all datasets, and in particular on RGB Occluded MM and CIFAR10 + MM we observe large differences. We find that k-GAN performs better even when k > 3, which can be attributed to the relational mechanism, allowing all components to agree on the correct number of digits. In a second study we asked humans to report specific properties of the generated images (number of objects, number of digits, etc.), a complete list of which can be found in Appendix B. Here our goal was to asses if the generated images by k-GAN are more faithful to the reference distribution. The on RGB Occluded MM are summarized in FIG4. It can be seen that k-GAN more frequently generates images that have the correct number of objects, number of digits, and that satisfy all properties simultaneously (color, digit count, shapes). The difference between the correct number of digits and correct number of objects suggests that the generated objects are often not recognizable as digits. This does not appear to be the case from the generated samples in Appendix C, suggesting that the raters may not have been familiar enough with the variety of MNIST digits. On CIFAR10 + MM FIG5 ) it appears that GAN is able to accurately generate the correct number of objects, although the addition of makes it difficult to provide a comparison in this case. 
On the other hand if we look at the number of digits, then we find that k-GAN outperforms GAN by the same margin, as one would expect compared to the in FIG4.In comparing the generated images by k-GAN and GAN on CLEVR we noticed that the former generated more crowded scenes (containing multiple large objects in the center), and more frequently generated objects with distorted shapes or mixed colors. On the other hand we found cases in which k-GAN generated scenes containing "flying" objects, a by-product of the fixed order in which we apply. We asked humans to score images based on these properties, which confirmed these observations (see FIG5), although some differences are small. The experimental confirm that the proposed structure is beneficial in generating images of multiple objects, and is utilized according to our own intuitions. In order to benefit maximally from this structure it is desirable to be able to accurately estimate the (minimum) number of objects in the environment in advance. This task is ill-posed as it relies on a precise definition of "object" that is generally not available. In our experiments on CLEVR we encounter a similar situation in which the number of components does not suffice the potentially large number of objects in the environment. Here we find that it does not render the proposed structure useless, but instead each component considers "primitives" that correspond to multiple objects. One concern is in being able to accurately determine foreground, and when combining the outputs of the object generators using alpha compositing. On CLEVR we observe cases in which objects appear to be flying, which is the of being unable to route the information content of a "foreground" object to the corresponding "foreground" generator as induced by the fixed order in which images are composed. Although in principle the relational mechanism may account for this distinction, a more explicit mechanism may be preferred BID31.We found that the pre-trained Inception embedding is not conclusive in reasoning about the validity of multi-object datasets. Similarly, the discriminator may have difficulties in accurately judging images from real / fake without additional structure. Ideally we would have a discriminator evaluate the correctness of each object individually, as well as the image as a whole. The use of a patch discriminator BID20, together with the alpha channel of each object generator to provide a segmentation, may serve a starting point in pursuing this direction. We have argued for the importance of compositionality at the representational level of objects in deep generative models of images, and demonstrated how corresponding structure may be incorporated in the generator of a GAN. On a benchmark of multi-object datasets we have shown that the proposed generative model learns about individual objects and in the process of synthesizing samples. A human study revealed that this leads to a better generative model of images. We are hopeful that in disentangling information corresponding to different objects at a representational level these may ultimately be recovered. Hence, we believe that this work is an important contribution towards learning object representations of complex real-world images without any supervision. A EXPERIMENT The generator and discriminator neural network architectures in all our experiments are based on DCGAN BID35. Object Generators k-GAN ind. introduces K = k copies of an object generator (i.e. 
tied weights, DCGAN architecture) that each generate and image from an independent sample of a 64-dimensional UNIFORM(-1, 1) prior P (Z).Relational Structure When a relational stage is incorporated (k-GAN rel.) each of the z i ∼ P (Z) are first updated, before being passed to the generators. These updates are computed using one or more attention blocks, which integrate Multi-Head Dot-Product Attention (MHDPA; BID40) with a post-processing step. A single head of an attention block updates z i according to.In our experiments we use a single-layer neural network (fully-connected, 32 ReLU) followed by LayerNorm BID2 up. If multiple heads are present, then their outputs are concatenated and transformed by a single-layer neural network (fully-connected, 64 ReLU) followed by LayerNorm to obtain the neŵ z i. If the relational stage incorporates multiple attention blocks that iteratively update z i, then we consider two variations: using unique weights for each MLP in each block, or sharing their weights across blocks. Background Generation When a generator is incorporated (eg. k-GAN rel. bg) it uses the same DCGAN architecture as the object generators, yet maintains its own set of weights. It receives as input its own latent sample z b ∼ P (Z b), again using a UNIFORM(-1, 1) prior, although one may in theory choose a different distribution. We explore both variations in which z b participates in the relational stage, and in which it does not. Composing In order to obtain the final generated image, we need to combine the images generated by each generator. In the case of Independent MM and Triplet MM we simply sum the outputs of the different generators and clip their values to. On RGB Occluded MM we combine the different outputs using alpha compositing, with masks obtained by thresholding the output of each generator at 0.1. On CIFAR10 + MM and CLEVR we require each of the object generators to generate an additional alpha channel by adding an additional feature map in the last layer of the generator. These are then combined with the generated (opaque) using alpha compositing, i.e. through repeated application of. Each model is optimized with ADAM using a learning rate of 0.0001, and batch size 64 for 1 000 000 steps. Each generator step is followed by 5 discriminator steps, as is considered best practice in training GANs. Checkpoints are saved at every 20 000 th step and we consider only the checkpoint with the lowest FID for each hyper-parameter configuration. FID is computed using 10 000 samples from a hold-out set. Baseline We conduct an extensive grid search over 48 different GAN configurations to obtain a strong GAN baseline on each dataset. It is made up of hyper-parameter ranges that were found to be successful in training GANs on standard datasets BID26.
BJgEjiRqYX
We propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition
Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel. We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it. First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive. Second, that stability and performance are improved by using LOLA , especially in more competitive scenarios. And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones. Multi-agent RL is used for many fully-competitive, zero-sum games (; ;) with the objective of finding a single, best learned agent (or team) at test time. In these games, a "best" agent is one that can outplay all opponents . We cannot evaluate its play by looking at the reward that agent achieves against a given opponent because the reward received will be a function of the choice of opponent. 1 In contrast, MARL in fully cooperative games has usually been able to make the assumption that we get to pick our team, and so can use self-play to try and achieve a maximal possible reward. (b). In this way, fully-cooperative MARL compares the maximum performance between learning algorithms and architectures by optimising the joint reward of the team. We investigate the space of partially competitive games known as general-sum games, where there is some amount of common interest and some amount of conflict. In this case, care must be taken in defining the "best" agent: it is not necessarily the agent that does as well or better than all opponents because, to achieve the highest possible reward in a general-sum game, agents might have to cooperate to some extent . For example, consider an agent playing iterated prisoner's dilemma. The agent that always defects will never have a reward worse than its opponent, but, when playing against a tit-for-tat agent, it will also not achieve the total reward of an agent that is also playing a tit-for-tat strategy. Since the maximum expected reward may only be possible by cooperating, agents must learn how to coordinate with each other. This can be done by training together or learning to understand opponent intentions at test time by observing their actions. The latter allows for ad-hoc comparison of learned agents at test time; the former requires comparing learning algorithms trained together. The latter should then require a sequential/iterative game, so there is time to infer opponent intentions before acting . It may also require meta-learning or other modifications to understand and adapt to these intentions as it seems current self-play methods are insufficient to adapt to ad-hoc play even against different versions of their own architecture . Work in general-sum MARL has mostly worked on analysing learning algorithms' ability to cooperate and resolve social dilemmas (a; ;) To date, investigations of emergent communication have remained mostly in the realm of fullycooperative games; ). For continuous communication, claimed to learn in mixed cooperative-competitive scenarios. 
However, their setup uses parameter sharing between opponents; their "mixed" case is non-competitive (and implicitly cooperative); their competitive game is actually two stages-one fully cooperative and one fully competitive; and, their in the competitive scenario are simply to mask out all communication. Previous attempts to learn discrete emergent protocols by selfish agents in competitive games have failed unless additional, more complex learning rules are adopted . In the latter case, they compare learning algorithms trained together as opposed to learned agents at test time to avoid a significant issue of comparing emergent communication agents-i.e., different protocols. Since an emergent protocol only has meaning between the agents that learned it together, comparing two learned agents at test time would require that they infer each others' protocols without training with them. This seems impossible, and it is more reasonable to frame it as a meta-learning problem where agents can have a brief adaptation period to synchronise their protocols. Meta-learning is beyond the scope of this paper, so we follow in comparing learning algorithms trained together. We also take a more principled approach than previous work by guaranteeing no communication through the action space, using more rigorous, quantitative criteria for evaluating communication, and precisely setting the levels of competitiveness. Notably, this work does not propose new architectures or learning rules but aims to take a critical look at existing beliefs and draw important distinctions as did for natural language emergence. Our experiments use the simple emergent-communication framework known as a sender-receiver game (or "referential game" ), which finds extensive use in economics and philosophy among others. In the classic game, the sender is given a target value to be communicated to the receiver via a message. The receiver receives the sender's message and must decode it to predict the target value. Both players are rewarded according to the negative of the receiver's prediction error. In this fully-cooperative setting, players often coordinate a protocol to transfer information as effectively as possible . The messages between the sender and the receiver can be categorised as "cheap talk": messages are costless, non-binding, non-verifiable and may affect the receiver's beliefs . Our work benefits greatly from -a seminal work in classical game theory. They study possible fixed communication equilibria under competition by giving the sender and receiver different targets and creating a conflict of interest. They perform a static analysis and prove the existence of a Nash equilibrium where the amount of information communicated is proportional to the alignment between the players' interests; however no informative equilibrium exists when interest diverge too greatly. In contrast, we do not look for existence of an equilibrium but do a dynamic analysis and show the feasibility of communication using standard learning rules in RL. We do not explicitly aim for equilibria but look at the information transfer of communication protocols in flux (and therefore out of equilibrium). This is more in line with previous work in emergent communication as well as evolutionary signalling . To investigate a range of competitive scenarios, we introduce a modified sender-receiver game with a continuous-bias variable, b, that represents the agents' conflict of interest, ranging from fully cooperative to fully competitive. 
The two players, the Sender (S) and the Receiver (R), have corresponding targets (T^s and T^r), which are represented by angles on a circle that are b degrees apart: T^r = (T^s + b) mod 360°. The game starts with the sender's target being sampled uniformly from the circle, T^s ∼ Uniform[0°, 360°). The sender is given its target as input and outputs a message, m = S(T^s), consisting of a single, discrete token from a vocabulary, m ∈ V. The receiver is given the message and outputs a scalar action a = R(m). The goal of each agent is to make the receiver's action as close as possible to its own target value. After the receiver acts, both players get a loss between the action and their respective targets, L^s_1 = |a − T^s| and L^r_1 = |a − T^r|, measured as angular distances on the circle. By using an L1 loss between the angle of the target and the action, a game with bias b = 0° is fully cooperative, and a game with the maximum bias b = 180° is fully competitive or constant-sum (a generalisation of zero-sum; see Appendix A.1 for proof). All values in between, b ∈ (0°, 180°), represent the spectrum of partially cooperative/competitive general-sum games. Figure 1a gives an instance of this game; the game's algorithm is given in Algorithm 1 in Appendix B. This can be seen as the classic sender-receiver game modified to cover the range of cooperative/competitive games. Both agents are implemented as MLPs with two hidden layers and ReLU nonlinearities between all layers. The targets are sampled from the circle; the sender takes its target, T^s, as input and outputs a categorical distribution over a vocabulary from which we sample a message (its output). The receiver takes the message as input and deterministically outputs its action, a. Errors are calculated using the L1 loss on the circle. The sender estimates its loss using the score function estimator, also known as REINFORCE, and has an added entropy regularisation term. Since the loss is differentiable with respect to the receiver, it is trained directly with gradient descent, so we are training in the style of a stochastic computation graph. We train for 30 epochs of 250 batches, with batch size 64, and set the circumference of our circle to 36 (so that a loss of 90° is an error of 9). Both agents are trained using Adam. To evaluate, we use a fixed test set of 100 equidistant points and take the arg max of output distributions instead of sampling. We do all hyperparameter searches with Oríon, using random search with a fixed budget of searches. We perform a hyperparameter search over both agents' learning rates, hidden layer sizes, the vocabulary size, and entropy regularisation (when used). We always report results for given hyperparameters averaged over 5 random seeds, and we average our metric for hyperparameter search over the last 10 epochs to capture some level of stability as well as performance. All hyperparameter search spaces are available in the config files of the code repository. To evaluate the communication that emerged over the cheap-talk channel, we can simply look at the sum of agents' L1 losses. Under non-communication (or uninformative communication), we know that the receiver will just guess a point at random, and the average loss for both players is the expected value of the loss when the action is drawn uniformly, i.e. 90°. Therefore, any error for either agent below 90° is evidence of information transfer. Furthermore, since there is no other action space for agents to communicate in, the information transfer must be happening in the emergent communication space. Therefore, the lower L^r_1 + L^s_1 is, the more informative the learned protocol is; and the most informative protocol will have the lowest possible sum of losses, min(L^r_1 + L^s_1) = b.
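To make the game concrete, here is a minimal sketch of one round (our illustration; the stand-in random sender and receiver replace the learned MLP agents), using the circular L1 distance and reproducing, for the stand-ins, the 90° no-communication baseline mentioned above.

```python
import random

CIRCLE = 360.0

def circular_l1(a, t):
    """L1 distance between two angles, measured along the circle."""
    d = abs(a - t) % CIRCLE
    return min(d, CIRCLE - d)

def play_round(sender, receiver, bias, vocab_size=16):
    t_s = random.uniform(0.0, CIRCLE)          # sender target ~ Uniform[0, 360)
    t_r = (t_s + bias) % CIRCLE                # receiver target, b degrees away
    message = sender(t_s, vocab_size)          # single discrete token (cheap talk)
    action = receiver(message)                 # scalar angle
    return circular_l1(action, t_s), circular_l1(action, t_r)  # both losses depend only on the action

# Stand-in agents: an uninformative sender and a random receiver give roughly 90 degrees of
# loss each, i.e. the no-communication baseline.
random_sender = lambda target, v: random.randrange(v)
random_receiver = lambda message: random.uniform(0.0, CIRCLE)
losses = [play_round(random_sender, random_receiver, bias=60.0) for _ in range(10000)]
print(sum(l_s for l_s, _ in losses) / len(losses))   # close to 90
```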
Therefore, the lower L r 1 + L s 1 is, the more informative the learned protocol is; and, the most informative protocol will have the lowest loss min To show this comparison, we always plot the loss under uninformative communication (90 •) and the loss for each agent if they were to both fairly split the bias (b/2). While we have found evidence of information transfer, does that necessarily mean our agents have learned to communicate? For example, our hyperparameter search could find a minimal learning rate for the sender, such that it is essentially static, and a normal configuration for the receiver. The game would then become not one of learning a protocol between two agents, but rather just a receiver learning the sender's initial random mapping of targets to messages. The receiver could then dominate the sender by always choosing a = T r, which would yield L r 1 + L s 1 = b; namely, the optimal sum of losses and, therefore, optimal information transfer. This situation is clearly not what we are looking for, but it would be permissible, or potentially even encouraged, under an information-transfer objective (as measured by the sum of agents' L 1 losses). It is, therefore, necessary to delineate the differences in communication; here, we can look to extant in signalling . One perspective on information transfer is that of manipulation of receivers by senders or vice-versa ; this manifests as the domination of one agent over the other. We note that these situations are modelled as cue-reading or sensory manipulation, respectively, and are distinct from signalling-i.e., communication . Accordingly, communication requires both agents to receive a net benefit , which implies some degree of cooperation . For the fully-cooperative case, previous metrics of joint reward , or even influence of communication , are sufficient to drive the hyperparameter search. But for competitive scenarios, neither of these can distinguish between manipulation and cooperation . Since our focus is on the emergence of cooperative communication, we are looking for settings where both agents perform better than either their fully-exploited losses (L 2) as our hyperparameter-search metric. We can view our partially competitive scenario as having a common-interest loss (180 • − b), in which both agents are fully cooperative, and a conflict-of-interest loss (b), in which both agents are fully competitive. The sum of L 1 losses optimises only for the common interest, whereas L 2 prefers a more fair division of the conflict-of-interest loss in addition to optimising common interest (see proof in Appendix A.2). We use the L 2 metric only on hyperparameter search and keep L 1 as our game's loss to maintain a constant-sum game for the fully competitive case. • because the game is constant-sum and therefore trivially L •, but for completeness you can see the of a hyperparameter search with b = 180 • in Appendix C Figure 16. We report our in Figure 2 and find that agents do learn to cooperatively communicate without any special learning rules contrary to current literature. We can see that the performance decreases proportionately to the bias, meaning the sender is less informative with messages, forcing the receiver to be less accurate in its own guesses. This matches the theoretical of; information transfer with communication is inversely proportional to the conflict of interest. Plots for each b are in shown in Appendix C Figure 6. 
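The role of the squared-error search metric can be seen with two candidate outcomes for the same bias b: the receiver playing its own target (full exploitation, L_r = 0 and L_s = b, as described above) and the fair halfway split. Both have the same L1 sum, i.e., the same information transfer, but only the squared sum prefers the fair split. The numbers below are our own illustration.

```python
# why hyperparameters are searched with a sum of *squared* errors:
# both outcomes transfer the same amount of information (equal L1 sums),
# but only the squared sum distinguishes a fair split from one agent dominating.
b = 60.0  # conflict of interest in degrees

dominated = (0.0, b)        # receiver plays its own target: L_r = 0, L_s = b
fair      = (b / 2, b / 2)  # action halfway between the two targets

for name, (l_r, l_s) in [("dominated", dominated), ("fair split", fair)]:
    print(f"{name:10s}  sum L1 = {l_r + l_s:6.1f}   sum L2 = {l_r**2 + l_s**2:7.1f}")
# dominated   sum L1 =   60.0   sum L2 =  3600.0
# fair split  sum L1 =   60.0   sum L2 =  1800.0
```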
For the curve, we still plot the L 1 losses to maintain consistency and to make clearer the comparison to the no-communication baseline and the optimal information transfer (common interest maximisation). We find that our are basically unchanged between the different hyperparameter metrics; a relatively fair and useful protocol is learned by the agents, but this deteriorates in more competitive scenarios. This is clear when comparing the stability and relative efficacy of protocols in b = 30 •, 60 •, shown in Figures 2b, 2c, and that of b = 90 • shown in Figure 2d. We can understand this through the lens of honest communication, which can be taken advantage of in highly competitive scenarios. If, for example, the sender communicates, with complete honesty, its own coordinates, then the receiver can take advantage of this and choose its location exactly so that L r = 0 and L s = b. Comparing this situation to non-communication (L r = L s = 90 •), it is clear that even fully-exploited communication is a strictly dominant strategy for b < 90 • (i.e, when the game is more cooperative than competitive). For more competitive cases, fully-exploitable communication is no longer dominant, and active communication now requires both agents to cautiously cooperate. To achieve this cooperation, we • ] we show the training curve of the best hyperparameters found in 2b,2c,2d. We plot the test loss over training epochs and showing the mean and standard deviation over 5 seeds, finding that for b < 90 • we find stable and relatively fair communication is naturally learned propose using LOLA (a)-a learning rule, resembling theory-of-mind, that allows us to backpropogate through n steps of the opponent's learning. LOLA was able to emerge cooperative behaviour in an iterated prisoner's dilemma, so it is a prime candidate for resolving our game in a similar situation. We experiment with LOLA in three configurations-LOLA on the sender, LOLA on the receiver, LOLA on both-and do a similar hyperparameter search, with the added search space of the LOLA learning rate. Per the improvements made by , we replace the receiver's score function estimate with the DiCE estimator, and we backpropogate through exact copies of opponents as opposed to using opponent modelling. We show our in Figure 3a with extended plots in Figures 8, 9, 10 in Appendix C. We find that LOLA on the sender is ineffective, but LOLA on the receiver and on both agents does indeed lead to better performance. This implies that emerging communication in competitive scenarios necessitates cooperation and that this cooperation can be found through explicit opponent modelling. Furthermore, comparing the curves of basic agents (Figure 2d) with those of LOLA agents (Figure 3c) shows that gains in performance are not from one agent dominating the other, but from both agents improving and increasing stability. We also look at the performance of n-step LOLA, which backpropogates through n > 1 steps of opponent learning. Figure 3b demonstrates that 2-step LOLA slightly outperforms 1-step LOLA, but 3-step LOLA does not provide any increase over 2-step. We see from Figure 3d that the increase comes mostly from stability of learning and slight improvement on the part of the sender. • where our original setup does very poorly. 3b shows that higher step LOLA improves slightly further but not past 2-step. 
Best feasible communication protocols found for b = 90 • using 1-step (3c) and 2-step LOLA (3d) on both agents demonstrates that the gains in performance over the basic setup shown in Figure 2d are not just from one agent doing better (though the sender is doing better) but both agents improving in performance and stability. Shaded area is standard deviation over 5 seeds Another axis to consider is whether discrete or continuous communication lends itself better to learning with selfish agents. To compare, we make the sender's message a real-valued scalar and appropriately change its output distribution to be a Gaussian, for which it learns the mean and variance (concretely described in Algorithms 2 in Appendix B). We, again, run hyperparameter searches, and we consider training our baseline training with a REINFORCE Sender and deterministic receiver as well as training both agents with 1-step LOLA. Our in Figure 4a suggest that the learned protocols for continuous communication are all highly informative and near optimal. However, in all cases, the receiver is learning to manipulate the sender, and there is little evidence of cooperative communication. Indeed, we found no cases of both agents having a net benefit (L r, L s < 90 •) in any of the hyperparameter runs for continuous REINFORCE-deterministic agents past b = 90 •, and we only two cases of net benefit for LOLA-1 agents. Comparing this to discrete communication with the same LOLA-1 agents in Figure 4f, we can clearly see that they have a preference for more cooperative behaviour. Thus, we find that discrete messages are an important component in emerging cooperative self-interested communication. First and foremost, we show evidence against the current notion that selfish agents do not learn to communicate, and we hope our findings encourage more research into communication under The comparison between discrete and continuous communication for both the REINFORCE-deterministic setup as well as 1-step LOLA agents is shown in Figure 4a. We see that though overall continuous communication can achieve highest information transfer, the gains in performance seem to mostly from manipulation of the sender by the receiver. Two examples are shown for REINFORCE agents in Figures 4b,4c. To find a trend, we plot all 100 hyperparameter runs for b ∈ between continuous and discrete communication using 1-step LOLA agents in Figures 4d,4e,4f,4g. We find that manipulation is the common in continuous communication though individual cooperative points can sometimes be found. In general, continuous communication does not lend itself to cooperative communication competition. We have shown three important properties of communication. First, a game being more cooperative than competitive is sufficient to naturally emerge communication. Second, we've clarified the distinction between information transfer, communication, and manipulation, providing motivation for a better quantitative metric to measure emergent communication in competitive environments. Next, we've found that LOLA improves effective selfish communication and, using our metric, we find it does so by improving both agents' performance and stability. Finally, we've shown that using a discrete communication channel encourages the learning of cooperative commu-nication in contrast to the continuous communication channel setting, where we find little evidence of cooperation. 
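For reference, below is a minimal sketch of the 1-step LOLA idea used in the experiments above: differentiate your own loss through one anticipated naive-gradient update of the opponent. This is our simplified version for fully differentiable losses on a toy bilinear game; the paper's agents instead use score-function (REINFORCE/DiCE) gradients and backpropagate through exact copies of the opponent.

```python
import torch

def lola_grad(theta_a, theta_b, loss_a, loss_b, opponent_lr=0.1):
    """Gradient for agent A that differentiates through one naive gradient step of
    agent B (1-step LOLA with an exact opponent copy, no opponent modelling).
    loss_a / loss_b take (theta_a, theta_b) and return scalar losses."""
    # simulate B's next step, keeping the graph so d(theta_b_next)/d(theta_a) exists
    grad_b = torch.autograd.grad(loss_b(theta_a, theta_b), theta_b, create_graph=True)[0]
    theta_b_next = theta_b - opponent_lr * grad_b
    # A's gradient of its own loss, evaluated after B's anticipated update
    return torch.autograd.grad(loss_a(theta_a, theta_b_next), theta_a)[0]

# tiny usage on a bilinear toy game (not the paper's circular game)
theta_a = torch.tensor([0.5], requires_grad=True)
theta_b = torch.tensor([-0.3], requires_grad=True)
loss_a = lambda a, b: (a * b + a**2).sum()
loss_b = lambda a, b: (-a * b + b**2).sum()
print(lola_grad(theta_a, theta_b, loss_a, loss_b))  # A would apply this with its optimiser
```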
In fully-cooperative emergent communication, both agents fully trust each other, so cooperatively learning a protocol is mutually beneficial. In competitive MARL, the task is to use an existing protocol (or action space) to compete with the other agent. Selfish emergent communication combines the two: the inherent competitiveness of using the protocol to win is tempered by the inherent cooperativeness of learning it, because without some agreement on meanings, agents cannot use those meanings to compete. Thus, the agents must learn a protocol and use that protocol simultaneously. In this way, even while competing, selfish agents emerging a communication protocol must learn to cooperate.

A.1 PROOF OF FULLY COOPERATIVE/FULLY COMPETITIVE GAME

For b = 0°, T_s = T_r, so trivially L_s = L_r and the game is fully cooperative. For b = 180°, T_r = (T_s + 180°) mod 360°, and the sum of losses is always L_s + L_r = 180°, so the game is constant-sum and fully competitive; a visual demonstration is given in Figure 5. Intuitively, moving the action a distance d towards one agent's target moves it a distance d away from the other agent's target. Assume without loss of generality that T_s < T_r, so T_r = T_s + 180°, and write d = |T_s − a| for the raw angular difference. If d ≤ 360° − d, then L_s = d, and the shortest arc from a to the antipodal target T_r has length 180° − d, so L_s + L_r = 180°. The case d ≥ 360° − d follows by symmetry on the circle. Hence the sum of losses L_r + L_s always equals 180°, so the game is constant-sum and therefore fully competitive.

Figure 5: The game with maximal bias 180°, showing that the sum of L1 losses is L_r + L_s = 180°.

A.2 PROOF OF L2 FAIRNESS

Assume without loss of generality that T_s < T_r = T_s + b. Minimising the sum of squared losses (a − T_s)² + (a − T_r)² over the action a (setting the derivative to zero) gives a = (T_s + T_r)/2 = T_s + b/2; that is, the sum of L2 losses is minimised when the action lies halfway between the two agents' targets.

Algorithm 1 Circular Biased Sender-Receiver Game
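The algorithm listing itself is not reproduced in this text. As a stand-in, here is a small numerical check (ours, not the paper's Algorithm 1) of the two appendix claims: with b = 180° the losses always sum to 180°, and the sum of squared losses is minimised by the halfway action.

```python
import numpy as np

def circ_l1(a, t):
    d = np.abs(a - t) % 360.0
    return np.minimum(d, 360.0 - d)

rng = np.random.default_rng(0)
t_s = rng.uniform(0, 360, size=10_000)
a = rng.uniform(0, 360, size=10_000)

# A.1: with b = 180 the two losses always sum to 180, i.e. the game is constant-sum
t_r = (t_s + 180.0) % 360.0
print(np.allclose(circ_l1(a, t_s) + circ_l1(a, t_r), 180.0))   # True

# A.2: the sum of squared losses is minimised by the halfway action T_s + b/2
b = 60.0
grid = np.linspace(0, 360, 3601)
sums = [circ_l1(x, 10.0) ** 2 + circ_l1(x, 10.0 + b) ** 2 for x in grid]
print(grid[int(np.argmin(sums))])   # ~40.0, i.e. 10 + b/2
```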
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1liIlBKvS
We show that communication can emerge among selfish agents, contrary to the current view in ML
Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed. We find out that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross entropy loss. To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet). During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with the geometry. We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces. Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting it only learns intrinsic patterns by reducing the memorizing capacity of the baseline DNN. It remains an open question why DNNs, typically with far more model parameters than training samples, can achieve such small generalization error. Previous work used various complexity measures from statistical learning theory, such as VC dimension , Radamacher complexity BID1, and uniform stability BID2 BID10, to provide an upper bound for the generalization error, suggesting that the effective capacity of DNNs, possibly with some regularization techniques, is usually limited. However, the experiments by showed that, even with data-independent regularization, DNNs can perfectly fit the training data when the true labels are replaced by random labels, or when the training data are replaced by Gaussian noise. This suggests that DNNs with data-independent regularization have enough capacity to "memorize" the training data. This poses an interesting question for network regularization design: is there a way for DNNs to refuse to (over)fit training samples with random labels, while exhibiting better generalization power than conventional DNNs when trained with true labels? Such networks are very important because they will extract only intrinsic patterns from the training data instead of memorizing miscellaneous details. One would expect that data-dependent regularizations should be a better choice for reducing the memorizing capacity of DNNs. Such regularizations are typically enforced by penalizing the standard softmax cross entropy loss with an extra geometric loss which regularizes the feature geometry BID8; ). However, regularizing DNNs with an extra geometric loss has two disadvantages: First, the output of the softmax layer, usually viewed as a probability distribution, is typically inconsistent with the feature geometry enforced by the geometric loss. Therefore, the geometric loss typically has a small weight to avoid jeopardizing the minimization of the softmax loss. Second, we find that DNNs with such regularization can still perfectly (over)fit random training samples or random labels. The reason is that the geometric loss (because of its small weight) is ignored and only the softmax loss is minimized. This suggests that simply penalizing the softmax loss with a geometric loss is not sufficient to regularize DNNs. Instead, the softmax loss should be replaced by a validation loss that is consistent with the enforced geometry. 
More specifically, every training batch B is split into two sub-batches, the geometry batch B g and the validation batch B v. The geometric loss l g is imposed on the features of B g for them to exhibit a desired geometric structure. A semi-supervised learning algorithm based on the proposed feature geometry is then used to generate a predicted label distribution for the validation batch, which combined with the true labels defines a validation loss on B v. The total loss on the training batch B is then defined as the weighted sum l = l g + λl v. Because the predicted label distribution on B v is based on the enforced geometry, the geometric loss l g can no longer be neglected. Therefore, l g and l v will be minimized simultaneously, i.e., the geometry is correctly enforced (small l g) and it can be used to predict validation samples (small l v). We call such DNNs Geometrically-Regularized-Self-Validating neural Networks (GRSVNets). See FIG0 for a visual illustration of the network architecture. GRSVNet is a general architecture because every consistent geometry/validation pair can fit into this framework as long as the loss functions are differentiable. In this paper, we focus on a particular type of GRSVNet, the Orthogonal-Low-rank-Embedding-GRSVNet (OLE-GRSVNet). More specifically, we impose the OLE loss on the geometry batch to produce features residing in orthogonal subspaces, and we use the principal angles between the validation features and those subspaces to define a predicted label distribution on the validation batch. We prove that the loss function obtains its minimum if and only if the subspaces of different classes spanned by the features in the geometry batch are orthogonal, and the features in the validation batch reside perfectly in the subspaces corresponding to their labels (see FIG0). We show in our experiments that OLE-GRSVNet has better generalization performance when trained on real data, but it refuses to memorize the training samples when given random training data or random labels, which suggests that OLE-GRSVNet effectively learns intrinsic patterns. Our contributions can be summarized as follows:• We proposed a general framework, GRSVNet, to effectively impose data-dependent DNN regularization. The core idea is the self-validation of the enforced geometry with a consistent validation loss on a separate batch of features.• We study a particular case of GRSVNet, OLE-GRSVNet, that can produce highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal.• OLE-GRSVNet achieves better generalization performance when compared to DNNs with conventional regularizers. And more importantly, unlike conventional DNNs, OLEGRSVNet refuses to fit the training data (i.e., with a training error close to random guess) when the training data or the training labels are randomly generated. This implies that OLE-GRSVNet never memorizes the training samples, only learns intrinsic patterns. Many data-dependent regularizations focusing on feature geometry have been proposed for deep learning BID8; ). The center loss produces compact clusters by minimizing the Euclidean distance between features and their class centers. LDMNet extracts features sampling a collection of low dimensional manifolds. The OLE loss BID8 ) increases inter-class separation and intra-class similarity by embedding inputs into orthogonal low-rank subspaces. 
However, as mentioned in Section 1, these regularizations are imposed by adding the geometric loss to the softmax loss, which, when viewed as a probability distribution, is typically not consistent with the desired geometry. Our proposed GRSVNet instead uses a validation loss based on the regularized geometry so that the predicted label distribution has a meaningful geometric interpretation. The way in which GRSVNets impose geometric loss and validation loss on two separate batches of features extracted with two identical baseline DNNs bears a certain resemblance to the siamese network architecture BID4 used extensively in metric learning BID3 BID6 BID7; ). The difference is, unlike contrastive loss BID6 and triplet loss in metric learning, the feature geometry is explicitly regularized in GRSVNets, and a representation of the geometry, e.g., basis of the low-rank subspace, can be later used directly for the classification of test data. Our work is also related to two recent papers (; BID0 addressing the memorization of DNNs. empirically showed that conventional DNNs, even with data-independent regularization, are fully capable of memorizing random labels or random data. BID0 argued that DNNs trained with stochastic gradient descent (SGD) tend to fit patterns first before memorizing miscellaneous details, suggesting that memorization of DNNs depends also on the data itself, and SGD with early stopping is a valid strategy in conventional DNN training. We demonstrate in our paper that when data-dependent regularization is imposed in accordance with the validation, GRSVNets will never memorize random labels or random data, and only extracts intrinsic patterns. An explanation of this phenomenon is provided in Section 4. DISPLAYFORM0 As pointed out in Section 1, the core idea of GRSVNet is to self-validate the geometry using a consistent validation loss. To contextualize this idea, we study a particular case, OLE-GRSVNet, where the regularized feature geometry is orthogonal low-rank subspaces, and the validation loss is defined by the principal angles between the validation features and the subspaces. The OLE loss was originally proposed by. Consider a K-way classification problem. DISPLAYFORM0 Let X c denote the submatrix of X formed by inputs of the c-th class. proposed to learn a linear transformation T: R d → R d that maps data from the same class X c into a low-rank subspace, while mapping the entire data X into a high-rank linear space. This is achieved by solving: DISPLAYFORM1 where · * is the matrix nuclear norm, which is a convex lower bound of the rank function on the unit ball in the operator norm . The norm constraint T 2 = 1 is imposed to avoid the trivial solution T = 0. It is proved by that the OLE loss is always nonnegative, and the global optimum value 0 is obtained if TX c ⊥TX c, ∀c = c. BID8 later used OLE loss as a data-dependent regularization for deep learning. Given a baseline DNN that maps a batch of inputs X into the features Z = Φ(X; θ), the OLE loss on Z is DISPLAYFORM2 The OLE loss is later combined with the standard softmax loss for training, and we will henceforth call such network "softmax+OLE." Softmax+OLE significantly improves the generalization performance, but it suffers from two problems because of the inconsistency between the softmax loss and the OLE loss: First, the learned features no longer exhibit the desired geometry of orthogonal low-rank subspaces. 
Second, as will be shown in Section 4, softmax+OLE is still capable of memorizing random data or random labels, i.e., it has not reduced the memorizing capacity of DNNs. We will now explain how to incorporate OLE loss into the GRSVNet framework. First, let us better understand the geometry enforced by the OLE loss by stating the following theorem. DISPLAYFORM0.e., the column spaces of Z c and Z c are orthogonal. The proof of Theorem 1, as well as those of the remaining theorems, is detailed in the Appendix. Note that Theorem 1, which ensures that the OLE loss is minimized if and only if features of different classes are orthogonal, is a much stronger than that by. We then need to define a validation loss l v that is consistent with the geometry enforced by l g. A natural choice would be the principal angles between the validation features and the subspaces spanned by {Z c} K c=1. Now we detail the architecture for OLE-GRSVNet. Given a baseline DNN, we split every training batch X ∈ R d×|B| into two sub-batches, the geometry batch X g ∈ R d×|Bg| and the validation batch X v ∈ R d×|Bv|, which are mapped by the same baseline DNN into features Z g = Φ(X g ; θ) and DISPLAYFORM1 ) is imposed on the geometry batch to ensure span(Z For any feature z = Φ(x; θ) ∈ Z v in the validation batch, its projection onto the subspace span(Z g c) is proj c (z) = U c U * c z. The cosine similarity between z and proj c (z) is then defined as the (unnormalized) probability of x belonging to class c, i.e., DISPLAYFORM2 where a small ε is chosen for numerical stability. The validation loss for x is then defined as the cross entropy between the predicted distributionŷ = (ŷ 1, . . .,ŷ K) T ∈ R K and the true label y ∈ {1, . . ., K}. More specifically, let Y v ∈ R 1×|Bv| andŶ v ∈ R K×|Bv| be the collection of true labels and predicted label distributions on the validation batch, then the validation loss is defined as DISPLAYFORM3 where δ y is the Dirac distribution at label y, and H(·, ·) is the cross entropy between two distributions. The empirical loss l on the training batch X is then defined as DISPLAYFORM4 See FIG0 for a visual illustration of the OLE-GRSVNet architecture. Because of the consistency between l g and l v, we have the following theorem: Theorem 2. For any λ > 0, and any geometry/validation splitting of X = [X g, X v] satisfying X v contains at least one sample for each class, the empirical loss function defined in is always nonnegative. l(X, Y) = 0 if and only if both of the following conditions hold true: Figure 2: Training and testing accuracy of different networks on the SVHN dataset with random labels or random data (Gaussian noise). Note that softmax, sotmax+wd, and softmax+OLE can all perfectly (over)fit the random training data or training data with random labels. However, OLE-GRSVNet refuses to fit the training data when there is no intrinsically learnable patterns.• The features of the geometry batch belonging to different classes are orthogonal, i.e., Moreover, if l < ∞, then rank(span(Z g c)) ≥ 1, ∀c, i.e., Φ(·; θ) does not trivially map data into 0. DISPLAYFORM5 Remark: The requirement that λ > 0 is crucial in Theorem 2, because otherwise the network can map every input into 0 and achieve the minimum. This is validated in our numerical experiments. Before delving into the implementation details of OLE-GRSVNet, we first present two toy experiments to illustrate our proposed framework. 
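Before the toy experiments, a compact sketch of the two loss terms just defined may be useful. This is our reading of the equations above, with features stored as rows rather than columns and without the small singular-value thresholding discussed later in the implementation section; it is an illustration, not the authors' code.

```python
import torch

def ole_loss(z, y, num_classes):
    """Geometric loss on the geometry batch: sum of per-class nuclear norms minus the
    nuclear norm of the whole batch (non-negative; zero iff class subspaces are orthogonal)."""
    total = torch.linalg.matrix_norm(z, ord="nuc")
    per_class = sum(torch.linalg.matrix_norm(z[y == c], ord="nuc")
                    for c in range(num_classes) if (y == c).any())
    return per_class - total

def validation_loss(z_g, y_g, z_v, y_v, num_classes, eps=1e-6):
    """Validation loss: class scores of a validation feature are its cosine similarities
    with its projections onto the subspaces spanned by the geometry-batch features."""
    scores = []
    for c in range(num_classes):
        zc = z_g[y_g == c]
        if len(zc) == 0:                      # empty class in this sub-batch: zero score
            scores.append(torch.zeros(len(z_v)))
            continue
        U, _, _ = torch.linalg.svd(zc.T, full_matrices=False)  # basis of span(Z_g^c)
        proj = z_v @ U @ U.T                                    # project onto the subspace
        scores.append((proj * z_v).sum(1) / (proj.norm(dim=1) * z_v.norm(dim=1) + eps))
    scores = torch.stack(scores, dim=1).clamp_min(eps)      # (|B_v|, K), unnormalised
    probs = scores / scores.sum(dim=1, keepdim=True)        # predicted label distribution
    return -probs[torch.arange(len(y_v)), y_v].log().mean() # cross entropy with true labels

def grsv_loss(z_g, y_g, z_v, y_v, num_classes, lam=1.0):
    """Total loss on a training batch split into a geometry and a validation sub-batch."""
    return ole_loss(z_g, y_g, num_classes) + lam * validation_loss(z_g, y_g, z_v, y_v, num_classes)

# toy usage with random features standing in for the network output Phi(x; theta)
z_g, y_g = torch.randn(64, 32), torch.randint(0, 10, (64,))
z_v, y_v = torch.randn(64, 32), torch.randint(0, 10, (64,))
print(grsv_loss(z_g, y_g, z_v, y_v, num_classes=10))
```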
We use VGG-11 as the baseline architecture, and compare the performance of the following four DNNs: (a) The baseline network with a softmax classifier (softmax). (b) VGG-11 with weight decay (softmax+wd). (c) VGG-11 regularized by penalizing the softmax loss with the OLE loss (softmax+OLE) (d) OLE-GRSVNet. We first train these four DNNs on the Street View House Numbers (SVHN) dataset with the original data and labels without data augmentation. The test accuracy and the PCA embedding of the learned test features are shown in FIG0. OLE-GRSVNet has the highest test accuracy among the comparing DNNs. Moreover, because of the consistency between the geometric loss and the validation loss, the test features produced by OLE-GRSVNet are even more discriminative than softmax+OLE: features of the same class reside in a low-rank subspace, and different subspaces are (almost) orthogonal. Note that in FIG0, features of only four classes out of ten (though ideally it should be three) have nonzero 3D embedding (Theorem 2).Next, we train the same networks, without changing hyperparameters, on the SVHN dataset with either (a) randomly generated labels, or (b) random training data (Gaussian noise). We train the DNNs for 800 epochs to ensure their convergence, and the learning curves of training/testing accuracy are shown in Figure 2. Note that the baseline DNN, with either data-independent or conventional data-dependent regularization, can perfectly (over)fit the training data, while OLE-GRSVNet refuses to memorize the training data when there are no intrinsically learnable patterns. In another experiment, we generate three classes of one-dimensional data in R 10: the data points in the i-th class are i.i.d. samples from the Gaussian distribution with the standard deviation in the i-th coordinate 50 times larger than other coordinates. Each class has 500 data points, and we randomly shuffle the class labels after generation. We then train a multilayer perceptron (MLP) with 128 neurons in each layer for 2000 epochs to classify these low dimensional data with random labels. We found out that only three layers are needed to perfectly classify these data when using a softmax classifier. However, after incrementally adding more layers to the baseline MLP, we found out that OLE-GRSVNet still refuses to memorize the random labels even for 100-layer MLP. This further suggests that OLE-GRSVNet refuses to memorize training data by brute force when there is no intrinsic patterns in the data. A visual illustration of this experiment is shown in the Appendix. We provide an intuitive explanation for why OLE-GRSVNet can generalize very well when given true labeled data but refuses to memorize random data or random labels. By Theorem 2, we know that OLE-GRSVNet obtains its global minimum if and only if the features of every random training batch exhibit the same orthogonal low-rank-subspace structure. This essentially implies that OLEGRSVNet is implicitly conducting O(N |B|)-fold data augmentation, where N is the number of training data, and |B| is the batch size, while conventional data augmentation by the manipulation of the inputs, e.g., random cropping, flipping, etc., is typically O(N). This poses a very interesting question: Does it mean that OLE-GRSVNet can also memorize random data if the baseline DNN has exponentially many model parameters? Or is it because of the learning algorithm (SGD) that prevents OLE-GRSVNet from learning a decision boundary too complicated for classifying random data? 
Answering this question will be the focus of our future research. Most of the operations in the computational graph of OLE-GRSVNet FIG0 ) explained in Section 3 are basic matrix operations. The only two exceptions are the OLE loss (Z g → l g ((Z g))) and the SVD (Z g → (U 1, . . ., U K)). We hereby specify their forward and backward propagations. According to the definition of the OLE loss in, we only need to find a (sub)gradient of the nuclear norm to back-propagate the OLE loss. The characterization of the subdifferential of the nuclear norm is explained by. More specifically, assuming m ≥ n for simplicity, let U ∈ R m×m, Σ ∈ R m×n, V ∈ R n×n be the SVD of a rank-s matrix A. DISPLAYFORM0 ) ] be the partition of U, V, respectively, where U ∈ R m×s and V ∈ R n×s, then the subdifferential of the nuclear norm at A is: DISPLAYFORM1 where · 2 is the spectral norm. Note that to use, one needs to identify the rank-s column space of A, i.e., span(U ) to find a subgradient, which is not necessarily easy because of the existence of numerical error. BID8 intuitively truncated the numerical SVD with a small parameter chosen a priori to ensure the numerical stability. We show in the following theorem using the backward stability of SVD that such concern is, in theory, not necessary. DISPLAYFORM2 and δU 2, δV 2, δA 2 are all O(ε), where ε is the machine error. If rank(A) = s ≤ n, and the smallest singular value DISPLAYFORM3 However, in practice we did observe that using a small threshold (10 −6 in this work) to truncate the numerical SVD can speed up the convergence, especially in the first few epochs of training. With the help of Theorem 3, we can easily find a stable subgradient of the OLE loss in. Unlike the computation of the subgradient in Theorem 3, we have to threshold the singular vectors of Z g c, because the desired output U c should be an orthonormal basis of the low-rank subspace span(Z g c).In the forward propagation, we threshold the singular vectors U c such that the smallest singular value is at least 1/10 of the largest singular value. As for the backward propagation, one needs to know the Jacobian of SVD, which has been explained by BID9. Typically, for a matrix A ∈ R n×n, computing the Jacobian of the SVD of A involves solving a total of O(n 4) 2 × 2 linear systems. We have not implemented the backward propagation of SVD in this work because this involves technical implementation with CUDA API. In our current implementation, the node (U 1, . . ., U K) is detached from the computational graph during back propagation, i.e., the validation loss l v is only propagated back through the path l v →Ŷ v → Z v → θ. Our rational is this: The validation loss l v can be propagated back through two paths: DISPLAYFORM0 The first path will modify θ so that Z v c moves closer to U c, while the second path will move U c closer to Z v c. Cutting off the second path when computing the gradient might decrease the speed of convergence, but numerical experiments suggest that the training process is still well-behaved under such simplification. With such simplification, the only extra computation is the SVD of a mini-batch of features, which is negligible (<5%) when compared to the time of training the baseline network. In this section, we demonstrate the superiority of OLE-GRSVNet when compared to conventional DNNs in two aspects: (a) It has greater generalization power when trained on true data and true labels. 
(b) Unlike conventionally regularized DNNs, OLE-GRSVNet refuses to memorize the training samples when given random training data or random labels. We use similar experimental setup as in Section 4. The same four modifications to three baseline architectures (VGG-11,16,19 DISPLAYFORM0 The performance of the networks are tested on the following datasets:• MNIST. The MNIST dataset contains 28 × 28 grayscale images of digits from 0 to 9. There are 60,000 training samples and 10,000 testing samples. No data augmentation was used.• SVHN. The Street View House Numbers (SVHN) dataset contains 32 × 32 RGB images of digits from 0 to 9. The training and testing set contain 73,257 and 26,032 images respectively. No data augmentation was used.• CIFAR. This dataset contains 32 × 32 RGB images of ten classes, with 50,000 images for training and 10,000 images for testing. We use "CIFAR+" to denote experiments on CIFAR with data augmentation: 4 pixel padding, 32 × 32 random cropping and horizontal flipping. All networks are trained from scratch with the "Xavier" initialization BID5. SGD with Nesterov momentum 0.9 is used for the optimization, and the batch size is set to 200 (a 100/100 split for geometry/validation batch is used in OLE-GRSVNet). We set the initial learning rate to 0.01, and decrease it ten-fold at 50% and 75% of the total training epochs. For the experiments with true labels, all networks are trained for 100, 160 epochs for MNIST, SVHN, respectively. For CIFAR, we train the networks for 200, 300, 400 epochs for VGG-11, VGG16, VGG-19, respectively. In order to ensure the convergence of SGD, all networks are trained for 800 epochs for the experiments with random labels. The mean accuracy after five independent trials is reported. The weight decay parameter is always set to µ = 10 −4. The weight for the OLE loss in "softmax+OLE" is chosen according to BID8. More specifically, it is set to 0.5 for MNIST and SVHN, 0.5 for CIFAR with VGG-11 and VGG-16, and 0.25 for CIFAR with VGG-19. For OLE-GRSVNet, the parameter λ in is determined by cross-validation. More specifically, we set λ = 10 for MNIST, λ = 5 for SVHN and CIFAR with VGG-11 and VGG-16, and λ = 1 for CIFAR with VGG-19. Table 1 reports the performance of the networks trained on the original data with real or randomly generated labels. The numbers without parentheses are the percentage accuracies on the test data when networks are trained with real labels, and the numbers enclosed in parentheses are the accuracies on the training data when given random labels. Accuracies on the training data with real labels Table 1: Testing or training accuracies when trained on training data with real or random labels. The numbers without parentheses are the percentage accuracies on the testing data when networks are trained with real labels. The numbers enclosed in parentheses are the accuracies on the training data when networks are trained with random labels. The mean accuracy after five independent trials is reported. This suggests that OLE-GRSVNet outperforms conventional DNNs on the testing data when trained with real labels. Moreover, unlike conventional DNNs, OLE-GRSVNet refuses to memorize the training data when trained with random labels. (always 100%) and accuracies on the test data with random labels (always close to 10%) are omitted from the table. As we can see, similar to the experiment in Section 4, when trained with real labels, OLE-GRSVNet exhibits better generalization performance than the competing networks. 
But when trained with random labels, OLE-GRSVNet refuses to memorize the training samples like the other networks because there are no intrinsically learnable patterns. This is still the case even if we increase the number of training epochs to 2000.We point out that by combining different regularization and tuning the hyperparameters, the test error of conventional DNNs can indeed be reduced. For example, if we combine weight decay, conventional OLE regularization, batch normalization, data augmentation, and increase the learning rate from 0.01 to 0.1, the test accuracy of CIFAR can be pushed to 91.02%. However, this does not change the fact that such network can still perfectly memorize training samples when given random labels. This corroborates the claim by that conventional regularization appears to be more of a tuning parameter instead of playing an essential role in reducing network capacity. We proposed a general framework, GRSVNet, for data-dependent DNN regularization. The core idea is the self-validation of the enforced geometry on a separate batch using a validation loss consistent with the geometric loss, so that the predicted label distribution has a meaningful geometric interpretation. In particular, we study a special case of GRSVNet, OLE-GRSVNet, which is capable of producing highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal. When trained on benchmark datasets with real labels, OLE-GRSVNet achieves better test accuracy when compared to DNNs with different regularizations sharing the same baseline architecture. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize and overfit the training data when trained on random labels or random data. This suggests that OLE-GRSVNet effectively reduces the memorizing capacity of DNNs, and it only extracts intrinsically learnable patterns from the data. Although we provided some intuitive explanation as to why GRSVNet generalizes well on real data and refuses overfitting random data, there are still open questions to be answered. For example, what is the minimum representational capacity of the baseline DNN (i.e., number of layers and number of units) to make even GRSVNet trainable on random data? Or is it because of the learning algorithm (SGD) that prevents GRSVNet from learning a decision boundary that is too complicated for random samples? Moreover, we still have not answered why conventional DNNs, while fully capable of memorizing random data by brute force, typically find generalizable solutions on real data. These questions will be the focus of our future work. It suffices to prove the case when K = 2, as the case for larger K can be proved by induction. In order to simplify the notation, we restate the original theorem for K = 2:Theorem. Let A ∈ R N ×m and B ∈ R N ×n be matrices of the same row dimensions, and [A, B] ∈ R N ×(m+n) be the concatenation of A and B. We have DISPLAYFORM0 Moreover, the equality holds if and only if A * B = 0, i.e., the column spaces of A and B are orthogonal. Proof. The inequality and the sufficient condition for the equality to hold is easy to prove. More specifically, DISPLAYFORM1 Moreover, if A * B = 0, then DISPLAYFORM2 where |A| = (A * A) 1 2. Therefore, DISPLAYFORM3 Next, we show the necessary condition for the equality to hold, i.e., DISPLAYFORM4 DISPLAYFORM5 | be a symmetric positive semidefinite matrix. 
We DISPLAYFORM6 Let DISPLAYFORM7 be the orthonormal eigenvectors of |A|, |B|, respectively. Then DISPLAYFORM8 Similarly, DISPLAYFORM9 Suppose that [A, B] * = A * + B *, then DISPLAYFORM10 Therefore, both of the inequalities in this chain must be equalities, and the first one being equality only if G = 0. This combined with the last equation in FORMULA2 implies DISPLAYFORM11 APPENDIX B PROOF OF THEOREM 2Proof. First, l is defined in equation FORMULA8 as DISPLAYFORM12 The nonnegativity of l g (Z g) is guaranteed by Theorem 1. The validation loss l v (Y v,Ŷ v) is also nonnegative since it is the average (over the validation batch) of the cross entropy losses: DISPLAYFORM13 Therefore l = l g + λl v is also nonnegative. Next, for a given λ > 0, l(X, Y) obtains its minimum value zero if and only if both l g (Z g) and l v (Y v,Ŷ v) are zeros.• By Theorem 1, l g (Z g) = 0 if and only if span(Z g c)⊥ span(Z g c), ∀c = c.• According to, l v (Y v,Ŷ v) = 0 if and only ifŷ(x) = δ y, ∀x ∈ X v, i.e., for every x ∈ X v c, its feature z = Φ(x; θ) belongs to span(Z g c).At last, we want to prove that if λ > 0, and X v contains at least one sample for each class, then rank(span(Z g c)) ≥ 1 for any c ∈ {1, . . ., K}. If not, then there exists c ∈ {1, . . ., K} such that rank(span(Z g c)) = 0. Let x ∈ X v be a validation datum belonging to class y = c. The predicted probability of x belonging to class c is defined in: DISPLAYFORM14 Thus we have DISPLAYFORM15
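The proof above has not survived extraction well, but the statement of the theorem has. Here is a quick numerical illustration (our own check) of both directions: the nuclear norm of a column concatenation is at most the sum of the parts' nuclear norms, with equality when the column spaces are orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()   # nuclear norm

# generic A, B: strict inequality ||[A, B]||_* < ||A||_* + ||B||_*
A, B = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
print(nuc(np.hstack([A, B])), "<", nuc(A) + nuc(B))

# orthogonal column spaces (A^T B = 0): equality holds
A = np.vstack([rng.normal(size=(4, 3)), np.zeros((4, 3))])
B = np.vstack([np.zeros((4, 3)), rng.normal(size=(4, 3))])
print(np.isclose(nuc(np.hstack([A, B])), nuc(A) + nuc(B)))   # True
```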
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1GSBsRcFX
we propose a new framework for data-dependent DNN regularization that can prevent DNNs from overfitting random data or random labels.
End-to-end automatic speech recognition (ASR) commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate (WER). This suggests that predicting sequences of words directly may be helpful instead. However, training with word-level supervision can be more difficult due to the sparsity of examples per label class. In this paper we analyze an end-to-end ASR model that combines a word-and-character representation in a multi-task learning (MTL) framework. We show that it improves on the WER and study how the word-level model can benefit from character-level supervision by analyzing the learned inductive preference bias of each model component empirically. We find that by adding character-level supervision, the MTL model interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model). End-to-end automatic speech recognition (ASR) allows for learning a direct mapping from audio signals to character outputs. Usually, a language model re-scores the predicted transcripts during inference to correct spelling mistakes BID16. If we map the audio input directly to words, we can use a simpler decoding mechanism and reduce the prediction time. Unfortunately, word-level models can only be trained on known words. Out-of-vocabulary (OOV) words have to be mapped to an unknown token. Furthermore, decomposing transcripts into sequences of words decreases the available number of examples per label class. These shortcomings make it difficult to train on the word-level BID2.Recent works have shown that multi-task learning (MTL) BID8 on the word-and character-level can improve the word-error rate (WER) of common end-to-end speech recognition architectures BID2 BID3 BID18 BID21 BID22 BID24 BID29. MTL can be interpreted as learning an inductive bias with favorable generalization properties BID6. In this work we aim at characterizing the nature of this inductive bias in word-character-level MTL models by analyzing the distribution of words that they recognize. Thereby, we seek to shed light on the learning process and possibly inform the design of better models. We will focus on connectionist temporal classification (CTC) BID15. However, the analysis can also prove beneficial to other modeling paradigms, such as RNN Transducers BID14 or Encoder-Decoder models, e.g., BID5 BID9.Contributions. We show that, contrary to earlier negative BID2 BID27, it is in fact possible to train a word-level model from scratch on a relatively small dataset and that its performance can be further improved by adding character-level supervision. Through an empirical analysis we show that the ing MTL model combines the preference biases of word-and character-level models. We hypothesize that this can partially explain why word-character MTL improves on only using a single decomposition, such as phonemes, characters or words. Several works have explored using words instead of characters or phonemes as outputs of the end-toend ASR model BID2 BID27. Soltau et al. BID27 found that in order to solve the problem of observing only few labels per word, they needed to use a large dataset of 120, 000 hours to train a word-level model directly. Accordingly, Audhkhasi et al. BID2 reported difficulty to train a model on words from scratch and instead fine-tuned a pre-trained character-level model after replacing the last dense layer with a word embedding. 
MTL enables a straightforward joint training procedure to integrate transcript information on multiple levels of granularity. Treating word-and character-level transcription as two distinct tasks allows for combining their losses in a parallel BID21 BID22 BID28 BID29 or hierarchical structure BID13 BID20 BID24. Augmenting the commonly-used CTC loss with an attention mechanism can help with aligning the predictions on both character-and word-level BID3 BID12 BID22. All these MTL methods improve a standard CTC baseline. Finding the right granularity of the word decomposition is in itself a difficult problem. While Li et al. BID22 used different fixed decompositions of words, sub-words and characters, it is also possible to optimize over alignments and decompositions jointly BID23. Orthogonal to these works different authors have explored how to minimize WER directly by computing approximate gradients BID25 BID32.When and why does MTL work? Earlier theoretical work argued that the auxiliary task provides a favorable inductive bias to the main task BID6. Within natural language processing on text several works verified empirically that this inductive bias is favorable if there is a certain notion of relatedness between the tasks BID4 BID7 BID26. Here, we investigate how to characterize the inductive bias learned via MTL for speech recognition. The CTC loss is defined as follows BID15: DISPLAYFORM0 where x is the audio input, commonly a spectrogram, and π is a path that corresponds to the groundtruth transcript z. The squashing function B maps a path π to the output z by first merging repetitions and then deleting so-called blank tokens. The gradient of the CTC loss can be computed efficiently using a modified forward-backward algorithm. Typically, π t is a categorical random variable over the corresponding output alphabet A = {a, b, c, ...,}. Here, is the blank token which encodes the empty string. This output representation enables the model to be able to transcribe any word possible without a specified alignment. Character-level CTC models are often supplemented by an external language model that can significantly improve the accuracy of the ASR. This is because these models still make spelling mistakes despite being trained on large amounts of data BID0.By using an alphabet of words one can ensure that there are no misspellings. The alphabet could contain, for example, the most common words found in the training set. This has the advantage that any word is guaranteed to be spelled correctly and that costly re-scoring on a character-level is avoided. However, by using a word-level decoding, we can no longer predict rare or new words. In this case the model has to be content with outputting an unknown token. Another challenge when using a word-level model is label sparsity. While we will observe many examples of a single character, there will be fewer for a single word, making overfitting more likely. We aim at counter-acting these shortcomings by making use of character-level information during training, similar to Audhkhasi et al. BID2.In this work we combine word-and character-level models via an MTL loss and denote this a word-character-level model. We treat each output-level prediction as a separate task and form a linear combination of the losses. The MTL loss is then defined as DISPLAYFORM1 where λ ≥ 0 defines a hyperparameter to weight the influence of the character-level CTC loss L char against the word-level CTC loss L word. 
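Concretely, the combined objective applies the standard CTC loss (which sums the path posteriors $p(\pi|x)$ over all paths collapsing to the transcript) twice, once per output level, and adds the two terms with weight λ. A minimal sketch of such a two-headed model follows; the dummy encoder, layer names, and tensor shapes are our own placeholders, not the paper's Wav2Letter-style architecture.

```python
import torch
import torch.nn as nn

class WordCharCTC(nn.Module):
    """Two-headed CTC model: a shared encoder with a character-level and a word-level
    output layer, trained with L = L_word + lam * L_char."""
    def __init__(self, encoder, hidden, n_chars, n_words, lam=1.0):
        super().__init__()
        self.encoder = encoder                       # any module producing (T, N, hidden) frames
        self.char_head = nn.Linear(hidden, n_chars)  # character alphabet incl. blank
        self.word_head = nn.Linear(hidden, n_words)  # word vocabulary incl. blank and unknown
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)
        self.lam = lam

    def forward(self, x, feat_len, char_tgt, char_len, word_tgt, word_len):
        h = self.encoder(x)                          # (T, N, hidden)
        loss_char = self.ctc(self.char_head(h).log_softmax(-1), char_tgt, feat_len, char_len)
        loss_word = self.ctc(self.word_head(h).log_softmax(-1), word_tgt, feat_len, word_len)
        return loss_word + self.lam * loss_char

# toy usage with a dummy frame encoder and padded target tensors
enc = nn.Sequential(nn.Linear(40, 256), nn.ReLU())   # stands in for the conv encoder
model = WordCharCTC(enc, hidden=256, n_chars=32, n_words=9411)
x = torch.randn(100, 8, 40)                          # (T, N, n_mels)
feat_len = torch.full((8,), 100, dtype=torch.long)
char_tgt, char_len = torch.randint(1, 32, (8, 30)), torch.full((8,), 30, dtype=torch.long)
word_tgt, word_len = torch.randint(1, 9411, (8, 6)), torch.full((8,), 6, dtype=torch.long)
print(model(x, feat_len, char_tgt, char_len, word_tgt, word_len))
```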
In our experiments we set it to 1, giving equal contribution to both loss terms, but other choices may improve the performance. Alternatively, one could try to estimate this weight based on the uncertainty BID17 or gradient norm BID10 of each loss term. We experimented with these approaches, but did not observe any significant improvement in performance over the equally-weighted loss. We trained our models using a convolutional architecture which is based on Wav2Letter BID11. Details can be found in the appendix. Compared to recurrent neural networks, convolutional neural networks avoid iterative computation over time and suffer less from the vanishing/exploding gradient problem. They achieve comparable performance in terms of WER BID11 BID31. We performed all experiments on read news articles from the Wall Street Journal (WSJ) BID30. This dataset has relatively little noise and allows us to focus on the influence of word frequency and word length. We used the si284 subset for training, and dev92 for validation. For the character-level model we used 32 different characters which include the space-character and a blank token. To define the output alphabet for the word-level model, we included all words that appeared at least 5 times in the training set in addition to a blank and an unknown token. This corresponds to an alphabet of 9411 units with an OOV rate of 9 % on the training set, and 10 % on the validation set, which represents a lower bound for the achievable WER of a word-level model. For the MTL model we let word-and character-level model share every layer but the last. To decode the output on the character-and word-level, we used greedy decoding. In order to get rid of unknown tokens in our prediction, we employed the following heuristic BID21: For each unknown token predicted on the word-level, we substituted the corresponding word on the character-level that was defined at the same time step. To compare our we also trained word-and character-only models. For optimization we used the Adam-optimizer BID19 with a learning rate of 5e−4 and a batch size of 16 to fit the whole model into the memory of one GPU. We applied batch normalization and dropout. For the input data, we transformed each utterance into spectrograms over 20 ms with a shift of 10 ms using 40 log-mel coefficients, standardized per spectrogram. We ran each experiment for 100 epochs, corresponding to 233, 838 updates. MTL performance. The of our experiments can be found in FIG0. It shows the learning curve for the word-and character-level components by measuring the WER on the validation set. The dashed line shows the achieved WER using a character-level model without joint word-level training. We observe that MTL converges faster and to a lower WER of 23 %, which is 5 percentage points lower than the character-level component of the MTL network, or the single-task character-level baseline. Using a beam search decoder with a lexicon constraint on the character-level model reduces the WER from 28 % to a WER of 24 %, which is still higher than our MTL error. This shows that MTL performs favorably even without a language model. A word-level-only model achieved the same performance as the character-level baseline on this dataset. Contrary to the findings of Audhkhasi et al. BID2, this shows that it is indeed possible to train a word-level model from scratch, even without a large amount of training data. 
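The unknown-token decoding heuristic described above can be sketched as follows. Note one simplification on our part: the paper substitutes the character-level word defined at the same CTC time step, whereas this toy version aligns the two outputs by word position.

```python
def merge_predictions(word_pred, char_pred, unk="<unk>"):
    """Replace every unknown token in the word-level greedy prediction with the word
    the character-level model produced at the corresponding position."""
    char_words = char_pred.split()
    merged = []
    for i, w in enumerate(word_pred):
        if w == unk and i < len(char_words):
            merged.append(char_words[i])   # fall back to the character-level spelling
        else:
            merged.append(w)
    return " ".join(merged)

print(merge_predictions(["the", "<unk>", "said"], "the zylophone said"))
# -> "the zylophone said"
```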
While the combined decoding only gives an improvement of 0.7 percentage points in terms of WER, it eliminates unknown-token predictions which might make transcripts more readable. Characterizing the inductive bias. Arpit et al. BID1 have shown that a neural network trained with stochastic gradient descent learns easier examples first. We argue that we can characterize the preference bias of our model and learning algorithm by showing which examples are easy to classify in the particular representation that each of the models is learning. Since ASR models are usually evaluated in terms of WER, we consider which words each model is learning. To this end we chose a relatively clean dataset and considered the attributes frequency and length to describe a word. We trained each model for 4 epochs and recorded the distribution of the recognized words during training. Since we are not given a perfect alignment between speech and ground-truth transcript, we define a word as being recognized if it is both present in the greedy prediction on the validation set and the corresponding ground-truth transcript. FIG1 shows how the distribution of recognized words changes during training. We see that the word-level model is biased towards recognizing the most common words and slowly learns less frequent words over time. This makes sense since more weight is given to the corresponding examples. While the same effect is present in the character-level model, it covers the complete support of the word frequency distribution in the same number of steps. On the other hand for the length distribution, we see that the word-level model covers all words independent of its length within the beginning of training. The character-level model focuses strongly on shorter words before it covers the whole range of the word length distribution. If we compare the learning dynamics of both models, we find that each model learns words with different characteristics more easily. If we take a look at the MTL model, we see that it combines both biases and arrives at learning a distribution that is much more uniform across both word frequency and word length. We hypothesize that putting more emphasis on the tail of each of these distributions combines the strengths of the two models and makes them perform better, especially in distributions that follow a power law such as word frequency rank. In contrast to earlier studies in the literature, we found that, even on a relatively small dataset, training on a word-level can be feasible. Furthermore, we found that combining a word-level model with character-level supervision in MTL can improve noticeably. To gain a better understanding of this, we characterized the inductive bias of word-character MTL in ASR by comparing the distributions of recognized words at the beginning of training. We found that adding character-level supervision to a word-level interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model). This effect could be even more pronounced on harder datasets than WSJ, such as medical communication data where many long words are infrequent, but very important. Further analysis of word distributions in terms of pitch, noise and acoustic variability could provide additional insight.
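For completeness, the "recognised word" statistic that drives the analysis above can be computed as in the sketch below (our reading: a word counts as recognised for an utterance if it appears both in the greedy prediction and in the reference transcript; frequency and length histograms are then accumulated over these words).

```python
from collections import Counter

def recognised_words(predictions, references):
    """A word is recognised for an utterance if it occurs in both the greedy
    prediction and the corresponding ground-truth transcript."""
    rec = Counter()
    for pred, ref in zip(predictions, references):
        for w in set(pred.split()) & set(ref.split()):
            rec[w] += 1
    return rec

rec = recognised_words(["the cat sat"], ["the black cat sat down"])
length_hist = Counter(len(w) for w in rec.elements())   # word-length distribution
print(sorted(rec), dict(length_hist))                    # ['cat', 'sat', 'the'] {3: 3}
```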
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1GySqOojm
Multi-task learning improves word-and-character-level speech recognition by interpolating the preference biases of its components: frequency- and word length-preference.
Discretizing floating-point vectors is a fundamental step of modern indexing methods. State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data. In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere. As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss. Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks. Furthermore, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyzer that can be applied with any subsequent quantization technique. Recent work BID27 proposed to leverage the pattern-matching ability of machine learning algorithms to improve traditional index structures such as B-trees or Bloom filters, with encouraging results. In their one-dimensional case, an optimal B-Tree can be constructed if the cumulative density function (CDF) of the indexed value is known, and thus they approximate this CDF using a neural network. We emphasize that the CDF itself is a mapping between the indexed value and a uniform distribution in [0, 1]. In this work, we wish to generalize such an approach to multi-dimensional spaces. More precisely, as illustrated by FIG0, we aim at learning a function that maps real-valued vectors to a uniform distribution over a d-dimensional sphere, such that a fixed discretizing structure, for example a fixed binary encoding (sign of components) or a regular lattice quantizer, offers competitive coding performance. Our approach is evaluated in the context of similarity search, where methods often rely on various forms of learning machinery BID12 BID45; in particular there is a substantial body of literature on methods producing compact codes BID20. Yet the problem of jointly optimizing a coding stage and a neural network remains essentially unsolved, partly because it is difficult to optimize through a discretization function.
FIG0: It is learned end-to-end, yet the part of the network in charge of the discretization operation is fixed in advance, thereby avoiding optimization problems. The learnable function f, namely the "catalyzer", is optimized to increase the quality of the subsequent coding stage.
FIG1 (input, λ = 0, λ = 0.01, λ = 0.1, λ → ∞): Illustration of our method, which takes as input a set of samples from an unknown distribution. We learn a neural network that aims at preserving the neighborhood structure in the input space while best covering the output space (uniformly). This trade-off is controlled by a parameter λ. The case λ = 0 keeps the locality of the neighbors but does not cover the output space. On the opposite, when the loss degenerates to the differential entropic regularizer (λ → ∞), the neighbors are not maintained by the mapping. Intermediate values offer different trade-offs between neighbor fidelity and uniformity, which is a proper input for an efficient lattice quantizer (depicted here by the hexagonal lattice A2).
For this reason, most efforts have been devoted to networks producing binary codes, for which optimization tricks exist, such as soft binarization or stochastic relaxation, which are used in conjunction with neural networks BID28 BID18. However it is difficult to improve over more powerful codes such as those produced by product quantization BID20, and recent solutions addressing product quantization require complex optimization procedures BID24 BID34.In order to circumvent this problem, we propose a drastic simplification of learning algorithms for indexing. We learn a mapping such that the output follows the distribution under which the subsequent discretization method, either binary or a more general quantizer, performs better. In other terms, instead of trying to adapt an indexing structure to the data, we adapt the data to the index. Our technique requires to jointly optimize two antithetical criteria. First, we need to ensure that neighbors are preserved by the mapping, using a vanilla ranking loss BID40 BID6 BID44. Second, the training must favor a uniform output. This suggests a regularization similar to maximum entropy BID36, except that in our case we consider a continuous output space. We therefore propose to cast an existing differential entropy estimator into a regularization term, which plays the same "distribution-matching" role as the Kullback-Leiber term of variational auto-encoders BID9.As a side note, many similarity search methods are implicitly designed for the range search problem (or near neighbor, as opposed to nearest neighbor BID15 BID0), that aims at finding all vectors whose distance to the query vector is below a fixed threshold. For real-world high-dimensional data, range search usually returns either no neighbors or too many. The discrepancy between near-and nearest-neighbors is significantly reduced by our technique, see Section 3.3 and Appendix C for details. Our method is illustrated by FIG1. We summarize our contributions as follows:• We introduce an approach for multi-dimensional indexing that maps the input data to an output space in which indexing is easier. It learns a neural network that plays the role of an adapter for subsequent similarity search methods.• For this purpose we introduce a loss derived from the Kozachenko-Leonenko differential entropy estimator to favor uniformity in the spherical output space.• Our learned mapping makes it possible to leverage spherical lattice quantizers with competitive quantization properties and efficient algebraic encoding.• Our ablation study shows that our network can be trained without the quantization layer and used as a plug-in for processing features before using standard quantizers. We show quantitatively that our catalyzer improves performance by a significant margin for quantization-based (OPQ BID10) and binary (LSH BID5) method. This paper is organized as follows. Section 2 discusses related works. Section 3 introduces our neural network model and the optimization scheme. Section 4 details how we combine this strategy with lattice assignment to produce compact codes. The experimental section 5 evaluates our approach. Generative modeling. Recent models such as Generative Adversarial Networks (GANs) BID13 or Variational Auto-Encoders (VAEs) BID23 ) learn a mapping between an isotropic Gaussian distribution and the empirical distribution of a training set. Our approach maps an empirical input distribution to a uniform distribution on the spherical output space. 
Another distinction is that GANs learn a unidirectional mapping from the latent code to an image (decoder), whereas VAEs learn a bidirectional mapping (encoder -decoder). In our work, we focus on learning the encoder, whose goal is to pre-process input vectors for subsequent indexing. Dimensionality reduction and representation learning. There is a large body of literature on the topic of dimensionality reduction, see for instance the review by BID43. Relevant work includes self-organizing maps BID26, the stochastic neighbor embedding BID14 and the subsequent t-SNE approach BID42, which is tailored to low-dimensional spaces for visualisation purposes. Both works are non-linear dimensionality reduction aiming at preserving the neighborhood in the output space. Learning to index and quantize. The literature on product compact codes for indexing is most relevant to our work, see BID45 BID9 for an overview of the topic. Early popular highdimensional approximate neighbor methods, such as Locality Sensitive Hashing BID15 BID11 BID5 BID0, were mostly relying on statistical guarantees without any learning stage. This lack of data adaptation was subsequently addressed by several works. The Iterative quantization (ITQ) BID12 modifies the coordinate system to improve binarization, while methods inspired by Vector Quantization and compression BID20 BID1 BID47 BID17 have gradually emerged as strong competitors for estimating distances or similarities with compact codes. While most of these works aim at reproducing target (dis-)similarity, some recent works directly leverage semantic information in a supervised manner with neural networks BID28 BID18 BID24 BID38.Lattices, also known as Euclidean networks, are discrete subsets of the Euclidean space that are of particular interest due to their space covering and sphere packing properties BID7. They also have excellent discretization properties under some assumptions about the distribution, and most interestingly the closest point of a lattice is determined efficiently thanks to algebraic properties BID37. This is why lattices have been proposed BID0 BID19 as hash functions in LSH. However, for real-world data, lattices waste capacity because they assume that all regions of the space have the same density BID35. In this paper, we are interested in spherical lattices because of their bounded support. Entropy regularization appears in many areas of machine learning and indexing. For instance, BID36 argue that penalizing confident output distributions is an effective regularization. BID8 use entropy regularization to speed up computation of optimal transport distances. Another proposal by BID4 in an unsupervised learning context, is to spread the output by enforcing input images to map to points drawn uniformly on a sphere. Interestingly, most recent works on binary hashing introduce some form of entropic regularization. Deep hashing BID28 employs a regularization term that increases the marginal entropy of each bit. SUBIC BID18 extends this idea to one-hot codes. Our proposal is inspired by prior work for one-dimensional indexing BID27. However their approach based on unidimensional density estimation can not be directly translated to the multidimensional case. Our strategy is to train a neural network f that maps vectors from a d in -dimensional space to the hypersphere of a d out -dimensional space S dout. Let us first introduce our regularizer, which we design to spread out points uniformly across S dout. 
With the knowledge of the density of points p, we could directly maximize the differential entropy of the output distribution; since p is not available, we instead use an estimate of the differential entropy as a proxy. It was shown by Kozachenko and Leonenko (see e.g. BID3) that, defining ρ_{n,i} = min_{j≠i} ||f(x_i) − f(x_j)||, the differential entropy of the distribution can be estimated by
H_n = α_n + β_n + (d_out / n) Σ_{i=1}^n log ρ_{n,i},
where α_n and β_n are two constants that depend on the number of samples n and the dimensionality of the data d_out. Ignoring the affine components, we define our entropic regularizer as
L_KoLeo = −(1/n) Σ_{i=1}^n log ρ_{n,i}.
This loss also has a satisfactory geometric interpretation: closest points are pushed away, with a strength that is non-decreasing and concave. This ensures diminishing returns: as points get away from each other, the marginal impact of increasing the distance becomes smaller. We enforce the outputs of the neural network to follow the same neighborhood structure as in the input space by adopting the triplet loss BID6 BID44
L_rank = max(0, ||f(x) − f(x⁺)|| − ||f(x) − f(x⁻)||),
where x is a query, x⁺ a positive match, x⁻ a negative match. The positive matches are obtained by computing the k_pos nearest neighbors of each point x in the training set in the input space. The negative matches are generated by taking the k_neg-th nearest neighbor of f(x) in (f(x_1), ..., f(x_n)). In order to speed up the learning, we compute the k_neg-th nearest neighbor of every point in the dataset at the beginning of each epoch and use these throughout the epoch. Note that we do not need to use a margin, as its effect is essentially superseded by our regularizer. Our overall loss combines the triplet loss and the entropy regularizer, as
L = L_rank + λ L_KoLeo,
where the parameter λ ≥ 0 controls the trade-off between ranking quality and uniformity. Choice of λ. The marginal distributions for these two views are much more uniform with our KoLeo regularizer, which is a consequence of the higher uniformity in the high-dimensional latent space.
Figure 3: Histograms of the distance between a query point and its 1st (resp. 100th) nearest neighbors, in the original space (left) and after our catalyzer (right). In the original space, the two histograms have a significant overlap, which means that the 100th nearest neighbor of one query often lies at a smaller distance than the 1st neighbor of another query. This gap is significantly reduced by our catalyzer.
Qualitative evaluation of the uniformity. Figure 3 shows the histogram of the distance to the nearest (resp. 100th nearest) neighbor, before applying the catalyzer (left) and after (right). The overlap between the two distributions is significantly reduced by the catalyzer. We evaluate this quantitatively by measuring the probability that the distance between a point and its nearest neighbor is larger than the distance between another point and its 100th nearest neighbor. In a very imbalanced space, this value is 50%, whereas in a uniform space it should approach 0%. In the input space, this probability is 20.8%, and it goes down to 5.0% in the output space thanks to our catalyzer.
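As an illustration, here is a minimal PyTorch sketch of the two loss terms above and their combination. It assumes L2-normalized outputs and uses the margin-free hinge form described in the text; the names, the eps guard against log(0), and all implementation details are ours, not the authors'.

import torch
import torch.nn.functional as F

def koleo_regularizer(z, eps=1e-8):
    """Kozachenko-Leonenko entropic regularizer: pushes every point away from its
    nearest neighbour.  z: (n, d_out), assumed L2-normalized."""
    dist = torch.cdist(z, z)                  # (n, n) pairwise Euclidean distances
    dist.fill_diagonal_(float("inf"))         # exclude the self-distance
    rho = dist.min(dim=1).values              # distance to the nearest neighbour
    return -torch.log(rho + eps).mean()       # affine constants of the estimator dropped

def triplet_loss(z_query, z_pos, z_neg):
    """Margin-free ranking loss: the positive match should be closer than the negative."""
    d_pos = (z_query - z_pos).norm(dim=1)
    d_neg = (z_query - z_neg).norm(dim=1)
    return F.relu(d_pos - d_neg).mean()

def catalyzer_loss(z_query, z_pos, z_neg, lam):
    """Overall objective: ranking quality plus lambda times uniformity."""
    return triplet_loss(z_query, z_pos, z_neg) + lam * koleo_regularizer(z_query)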
Visualization of the output distribution. While FIG1 illustrates our method with the 2D disk as an output space, we are interested in mapping input samples to a higher-dimensional hypersphere. FIG2 proposes a visualization of the high-dimensional density from a different viewpoint, with the Deep1M dataset mapped in 8 dimensions. We sample 2 planes randomly in R^{d_out} and project the dataset points (f(x_1), ..., f(x_n)) on them. For each column, the 2 figures are the angular histograms of the points with a polar parametrization of this plane. The area inside the curve is constant and proportional to the number of samples n. A uniform angular distribution produces a centered disk, and less uniform distributions look like unbalanced potatoes. The densities we represent are marginalized, so if the distribution looks non-uniform then it is non-uniform in d_out-dimensional space, but the reverse is not true. Yet one can compare the results obtained for different regularization coefficients, which shows that our regularizer has a strong uniformizing effect on the mapping, ultimately resembling that of a uniform distribution for λ = 1. In this section we describe how our method interplays with discretization, at training and at search time. We consider two parameter-free coding methods: binarization, and a fixed set of points on the unit sphere provided by a spherical lattice quantizer. A key advantage of a fixed coding structure like ours is that compressed-domain distance computations between codes do not depend on external meta-data. This is in contrast with quantization-based methods like product quantization, which require centroids to be available at search time. Binary features are obtained by applying the sign function to the coordinates. We relax this constraint at train time by replacing the sign with the identity function, and the binarization is used only to cross-validate the regularization parameter on the validation set. As discussed by BID35, lattices impose a rigid partitioning of the feature space, which is suboptimal for arbitrary distributions, see FIG1. In contrast, lattices offer excellent quantization properties for a uniform distribution BID7. Thanks to our regularizer, we are closer to uniformity in the output space, making lattices an attractive choice. We consider the simplest spherical lattice, the integer points of norm r, a set we denote S_d^r. Given a vector x ∈ R^{d_in}, we compute its catalyzed features f(x), and find the nearest lattice point on S_d^r using the assignment operation, which formally minimizes the quantization error:
q(f(x)) = argmin_{c ∈ S_d^r} ||c − f(x)||_2.
This assignment can be computed very efficiently (see Appendix B for details). Given a query y and its representation f(y), we approximate the similarity between y and x using the code, as the inner product ⟨f(y), q(f(x))⟩. This is an asymmetric comparison, because the query vectors are not quantized BID20. When the quantizer is used as a layer, it takes a vector in R^d and returns the quantized version of this vector in the forward pass, and passes the gradient unchanged to the previous layer in the backward pass. This heuristic is referred to as the straight-through estimator in the literature, and is often used for discretization steps, see e.g., van den Oord et al.
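A common way to implement the straight-through trick just described is sketched below in PyTorch; `assign` stands for any hard assignment function (for instance the spherical lattice assignment detailed in Appendix B) and is a placeholder of ours, not the authors' code.

import torch

class StraightThroughQuantizer(torch.nn.Module):
    """Fixed (parameter-free) quantizer with a straight-through gradient:
    the forward pass returns the quantized vector, the backward pass copies
    gradients to the input unchanged."""
    def __init__(self, assign):
        super().__init__()
        self.assign = assign          # hard assignment function, e.g. lattice rounding

    def forward(self, x):
        q = self.assign(x)
        # x + (q - x).detach() equals q in the forward pass, but its gradient
        # with respect to x is the identity, so training can proceed end-to-end.
        return x + (q - x).detach()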
This section presents our experimental results. We focus on the class of similarity search methods that represent the database vectors with a compressed representation BID5 BID20 BID12 BID10, which makes it possible to store very large datasets in memory BID30 BID39. All experiments have two phases. In the first phase (encoding), all vectors of a database are encoded into a representation (e.g. 32, 64 bits). Encoding consists of a vector transformation followed by a quantization or binarization stage. The second phase is the search phase: a set of query vectors is transformed, then the codes are scanned exhaustively and compared with the transformed query vector, and the top-k nearest vectors are returned. Datasets and metrics. We use two benchmark datasets, Deep1M and BigAnn1M. Deep1M consists of the first million vectors of the Deep1B dataset BID2. The vectors were obtained by running a convnet on an image collection, reduced to 96 dimensions by principal component analysis and subsequently ℓ2-normalized. We also experiment with BigAnn1M BID21, which consists of SIFT descriptors BID29. Both datasets contain 1M vectors that serve as a reference set, 10k query vectors and a very large training set, of which we use 500k elements for training and 1M vectors as a validation set to cross-validate the hyperparameters d_out and λ. We also experiment on the full Deep1B and BigAnn datasets, which contain 1 billion elements. We evaluate methods with the recall at k performance measure, which is the proportion of queries for which the returned top k candidates contain the ground-truth nearest neighbor (for k ∈ {1, 10, 100}). Training. For all methods, we train our neural network on the training set, cross-validate d_out and λ on the validation set, and use a different set of vectors for evaluation. In contrast, some works carry out training on the database vectors themselves BID33 BID31 BID12, in which case the index is tailored to a particular fixed set of database vectors. Our model is a 3-layer perceptron, with ReLU non-linearity and hidden dimension 1024. The final linear layer projects the data to the desired output dimension d_out, followed by ℓ2-normalization. We use batch normalization BID16 and train our model for 300 epochs with Stochastic Gradient Descent, with an initial learning rate of 0.1 and a momentum of 0.9. The learning rate is decayed to 0.05 (resp. 0.01) at the 80-th epoch (resp. 120-th). We evaluate the lattice-based indexing proposed in Section 4, and compare it to more conventional methods based on quantization, namely PQ BID20 and Optimized Product Quantization (OPQ) BID10. We use the Faiss BID22 implementation of PQ and OPQ and assign one byte per sub-vector (each individual quantizer has 256 centroids). For our lattice, we vary the value of r to increase the quantizer size, hence generating curves for each value of d_out. Figure 5 provides a comparison of these methods. On both datasets, the lattice quantizer strongly outperforms PQ and OPQ for most code sizes.
Figure 5: Comparison of the performance of the product lattice vs OPQ on Deep1M (left) and BigAnn1M (right). Our method maps the input vectors to a d_out-dimensional space, which is then quantized with a lattice of radius r. We obtain the curves by varying the radius r.
Impact of the hyperparameters. Varying the rank parameters k_pos and k_neg did not significantly impact the performance, so we fixed them to k_pos = 10 and k_neg = 50, respectively. For a fixed number of bits, varying the dimension d_out is a trade-off between a good representation and an easily compressible one. When d_out is small, we can use a large r for a very small quantization error, but there are not enough dimensions to represent the degrees of freedom of the underlying data. A larger d_out allows for better representations but suffers from a coarser approximation. Figure 5 shows that for low bitrates, small dimensions perform better because the approximation quality dominates, whereas for higher bitrates, larger dimensions are better because the representation quality dominates. Similarly, the regularizer λ needs to be set to a large value for small dimensions and low bitrates, but higher dimensions and higher bitrates require lower values of λ (cf. Appendix A for details).
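The architecture described in the training paragraph above can be sketched in PyTorch as follows; the exact ordering of batch normalization and activations is an assumption on our part, since the text only lists the components.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Catalyzer(nn.Module):
    """3-layer perceptron with ReLU, hidden width 1024 and batch normalization,
    followed by a linear projection to d_out and L2-normalization onto the sphere."""
    def __init__(self, d_in, d_out, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, d_out),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # project onto the unit sphere

# optimizer matching the reported schedule (initial lr 0.1, momentum 0.9);
# the lr decay at epochs 80 and 120 would be handled by a scheduler.
model = Catalyzer(d_in=96, d_out=8)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)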
Large-scale experiments. We experiment with the full Deep1B (resp. BigAnn) dataset, which contains 1 billion vectors, with 64-bit codes. At that scale, the recall at 10 drops to 26.1% for OPQ and to 37.8% for the lattice quantizer (resp. 21.3% and 36.5%). As expected, the recall performance is lower than for the 1-million-vector database, but the precision advantage of the lattice quantizer is maintained at large scale. Comparison to the state of the art. Additive quantization variants BID1 BID32 BID34 are currently the state-of-the-art encodings for vectors in terms of accuracy. However, their encoding stage involves an iterative optimization process that is prohibitively slow for practical use cases. For example, Competitive Quantization's reported complexity is 15× slower than OPQ. Table 1 compares our results with LSQ BID32, a recent variant that is close to the state of the art and for which open-source code is available. We show that our Catalyzer + Lattice variant is 14× faster for an accuracy that is competitive or well above that of LSQ. To our knowledge, this is the first time that such competitive results are reported for a method that can be used in practice at a large scale. Our search time is a bit slower: computing 1M asymmetric distances takes 7.5 ms with the Catalyzer + Lattice instead of 4.9 ms with PQ. This is due to our decoding procedure, which does not rely on precomputed tables as used in PQ. Ablation study. As a sanity check, we first replace our catalyzer by a PCA that reduces the dimensionality to the same size as our catalyzer, followed by ℓ2-normalization. This significantly decreases the performance of the lattice quantizer, as can be seen in Table 1. We also evaluate the impact of training end-to-end, compared to training without the quantization layer. Table 1 shows that end-to-end training has a limited impact on the overall performance for 64 bits, sometimes even decreasing performance. This may be partly due to the approximation induced by the straight-through estimation, which handicaps end-to-end training. Another reason is that the KoLeo regularizer narrows the performance gap induced by discretization. In other terms, our method trained without the discretization layer yields a general-purpose network (hence the name catalyzer), on which we can apply any binarization or quantization method. Table 1 shows that OPQ is improved when applied on top of catalyzed features, for example increasing the recall@10 from 63.6 to 71.1. Binary hashing. We also show the interest of our method as a catalyzer for binary hashing, compared to two popular methods BID5 BID12: LSH maps Euclidean vectors to binary codes that are then compared with the Hamming distance. A set of m fixed projection directions is drawn randomly and isotropically in R^{d_in}, and each vector is encoded into m bits by taking the sign of the dot product with each direction. ITQ is another popular hashing method, which improves on LSH by using an orthogonal projection that is optimized to maximize the correlation between the original vectors and the bits. Table 2 compares our catalyzer to LSH and ITQ. Note that a simple sign function is applied to the catalyzed features.
Table 2: Performance (1-recall at 10, %) with LSH, on Deep1M and BigAnn1M, as a function of the number of bits per index vector. All results are averaged over 5 runs with different random seeds. Our catalyzer gets a large improvement in binary codes over LSH and ITQ.
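For concreteness, a small NumPy sketch of the two binary encoders being compared (random-projection LSH and the sign of catalyzed features), together with a brute-force Hamming search; the dimensions and names are illustrative and not taken from the paper.

import numpy as np

def lsh_binarize(x, directions):
    """LSH baseline: m random isotropic directions, one bit per sign of the dot product."""
    return x @ directions.T > 0                     # (n, m) boolean codes

def catalyzed_binarize(z):
    """Binary codes from catalyzed features: one bit per sign of each output coordinate."""
    return z > 0

def hamming_search(query_code, db_codes, k=10):
    """Indices of the k database codes closest to the query in Hamming distance."""
    dist = (db_codes != query_code[None, :]).sum(axis=1)
    return np.argsort(dist)[:k]

# toy usage
rng = np.random.default_rng(0)
directions = rng.standard_normal((64, 96))          # m = 64 bits, d_in = 96 (illustrative)
x = rng.standard_normal((1000, 96))
codes = lsh_binarize(x, directions)
print(hamming_search(codes[0], codes, k=5))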
The catalyzer improves the performance by 2-9 percentage points in all settings, from 32 to 128 bits. We train a neural network that maps input features to a uniform output distribution on a unit hypersphere, making high-dimensional indexing more accurate, in particular with fast and rigid lattice quantizers or a trivial binary encoding. To the best of our knowledge, this is the first work on multi-dimensional data that demonstrates that it is competitive to adapt the data distribution to a rigid quantizer, instead of adapting the quantizer to the input data. This has several benefits: rigid quantizers are fast at encoding time, and vectors can be decoded without carrying around codebooks or auxiliary tables. We open-sourced the code corresponding to the experiments at https://github.com/facebookresearch/spreadingvectors. The optimal value of the regularizer λ decreases with the dimension, as shown by TAB2.
TAB2: Optimal values of the regularization parameter λ for Deep1M, using a fixed radius of r = 10.
We consider the set of integer points of norm r,
S_d^r = {c ∈ Z^d : ||c||_2 = r}.
Atoms. We define a "normalization" function N of vectors: it consists in taking the absolute value of their coordinates and sorting them in decreasing order. We call "atoms" the set of vectors that can be obtained by normalizing the vectors of S_d^r. Encoding and enumerating. To solve Equation 5, we apply the following steps:
1. normalize y with N, storing the permutation σ that sorts the coordinates of |y|;
2. exhaustively search the atom z that maximizes the dot product N(y)·z;
3. apply to z the inverse permutation σ^{-1} of the permutation that sorts y, to obtain z';
4. the nearest vector (z''_1, ..., z''_d) is given by z''_i = sign(y_i) z'_i for all i = 1..d.
To encode a vector z ∈ S_d^r, we proceed from N(z):
1. each atom is assigned a range of codes, so z is encoded relative to the start of N(z)'s range;
2. encode the permutation using combinatorial number systems BID25. There are d! permutations, but the permutation of equal components is irrelevant, which divides the number of combinations. For example, one atom is the normalized form of 8!/(2!2!4!) = 240 vectors of S_8^{√10};
3. encode the signs of the non-zero elements. In the example above, there are 4 sign bits.
Decoding proceeds in the reverse order. Encoding 1M vectors takes about 0.5 s on our reference machine, which is faster than PQ (1.9 s). In other terms, the quantization time is negligible w.r.t. the preprocessing by the catalyzer. FIG7 shows how our method achieves a better agreement between range search and k-nearest-neighbor search on Deep1M. In this experiment, we consider different thresholds ε for the range search and perform a set of queries for each ε. Then we measure how many vectors we must return, on average, to achieve a certain recall in terms of the nearest neighbors in the original space. Without our mapping, there is a large variance in the number of results for a given ε. In contrast, after the mapping it is possible to use a unique threshold to find most neighbors. For example: to obtain 80% recall, the search in the original space requires setting ε = 0.54, which returns 700 results per query on average, while in the transformed space ε = 0.38 returns just 200 results. Observe the much better agreement in the latent spherical space.
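To illustrate the four assignment steps above, here is a small NumPy sketch. The enumeration of atoms is assumed to be precomputed elsewhere (here a tiny hand-written example in dimension 4 with squared norm 10); this is our own illustrative code, not the released implementation.

import numpy as np

def assign_to_spherical_lattice(y, atoms):
    """Nearest point of the spherical lattice, following the four steps above.
    `atoms` holds the normalized lattice points (non-negative integer coordinates,
    sorted in decreasing order), assumed to be enumerated beforehand."""
    order = np.argsort(-np.abs(y))            # permutation sigma sorting |y| in decreasing order
    n_y = np.abs(y)[order]                    # N(y)
    best = atoms[np.argmax(atoms @ n_y)]      # atom maximizing the dot product with N(y)
    z = np.empty_like(best)
    z[order] = best                           # undo the permutation
    return np.sign(y) * z                     # restore the signs of y (zero coordinates stay zero)

# tiny example: d = 4, squared norm 10, atoms written by hand
atoms = np.array([[3.0, 1.0, 0.0, 0.0], [2.0, 2.0, 1.0, 1.0]])
y = np.array([0.1, -0.8, 0.7, 0.2])
print(assign_to_spherical_lattice(y, atoms))   # -> [ 1. -2.  2.  1.]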
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkGuG2R5tm
We learn a neural network that uniformizes the input distribution, which leads to competitive indexing performance in high-dimensional space
Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by , which applies the variance reduction technique directly to the online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in , and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. In reinforcement learning (RL), policy evaluation aims to obtain the expected long-term reward of a given policy and plays an important role in identifying the optimal policy that achieves the maximal cumulative reward over time;;. The temporal difference (TD) learning algorithm, originally proposed by , is one of the most widely used policy evaluation methods, which uses the Bellman equation to iteratively bootstrap the estimation process and continually update the value function in an incremental way. In practice, if the state space is large or infinite, function approximation is often used to find an approximate value function efficiently. Theoretically, TD with linear function approximation has been shown to converge to the fixed point solution with i.i.d. samples and Markovian samples in;. The finite sample analysis of TD has also been studied in;; Dalal et al. (2018a);. Since each iteration of TD uses one or a mini-batch of samples to estimate the mean of the gradient 1, TD learning usually suffers from the inherent variance, which substantially degrades the convergence accuracy. Although a diminishing stepsize or very small constant stepsize can reduce the variance;; Dalal et al. (2018a), they also slow down the convergence significantly. Two approaches have been proposed to reduce the variance. The first approach is the so-called batch TD, which takes a fixed sample set and transforms the empirical mean square projected Bellman error (MSPBE) into an equivalent convex-concave saddle-point problem. Due to the finite-sample nature of such a problem, stochastic variance reduction techniques for conventional optimization can be directly applied here to reduce the variance. In particular, showed that and can be applied to improve the performance of batch TD algorithms, and proposed two variants of SVRG to further save the computation cost. However, the analysis of batch TD does not take into account the statistical nature of the training samples, which are generated by a MDP. Hence, there is no guarantee of such obtained solutions to be close to the fixed point of TD learning. The second approach is the so-called TD with centering (CTD) algorithm proposed in , which introduces the variance reduction idea to the original TD learning algorithm. For the sake of better reflecting its major feature, we refer to CTD as Variance Reduced TD (VRTD) throughout this paper. Similarly to the SVRG in , VRTD has outer and inner loops. The beginning of each inner-loop (i.e. 
each epoch) computes a batch of sample gradients so that each subsequent inner loop iteration modifies only one sample gradient in the batch gradient to reduce the variance. The main difference between VRTD and batch TD is that VRTD applies the variance reduction directly to TD learning rather than to a transformed optimization problem in batch TD. empirically verified that VRTD has better convergence accuracy than vanilla TD learning, some technical errors in the analysis in have been pointed out in follow up studies Dalal et al. (2018a); Narayanan and Szepesvári. Furthermore, as we discuss in Section 3, the technical proof in regarding the convergence of VRTD also has technical errors so that their do not correctly characterize the impact of variance reduction on TD learning. Given the recent surge of interest in the finite time analysis of the vanilla;; Dalal et al. (2018a), it becomes imperative to reanalyze the VRTD and accurately understand whether and how variance reduction can help to improve the convergence accuracy over vanilla TD. Towards this end, this paper specifically addresses the following central questions. • For i.i.d. sampling, it has been shown in that vanilla TD converges only to a neighborhood of the fixed point for a constant stepsize and suffers from a constant error term caused by the variance of the stochastic gradient at each iteration. For VRTD, does the variance reduction help to reduce such an error and improve the accuracy of convergence? How does the error depend on the variance reduction parameter, i.e., the batch size for variance reduction? • For Markovian sampling, it has been shown in; that the convergence of vanilla TD further suffers from a bias error due to the correlation among samples in addition to the variance error as in i.i.d. sampling. Does VRTD, which was designed to have reduced variance, also enjoy reduced bias error? If so, how does the bias error depend on the batch size for variance reduction? Our main contributions are summarized in Table 1 and are described as follows. For i.i.d. sampling, we show that a slightly modified version of VRTD (for avoiding bias error) converges linearly to a neighborhood of the fixed point solution for a constant stepsize α, with the variance error at the order of O(α/M), where M is the batch size for variance reduction. This clearly reduces the corresponding variance error O(α) of vanilla TD in. For Markovian sampling, we show that VRTD has the same linear convergence and the same variance error reduction over the vanilla; as i.i.d. sampling. More importantly, the variance reduction in VRTD also attains a substantially reduced bias error at the order of O(1/M) over the vanilla; , where the bias error is at the order of O(α). Therefore, vanilla TD typically needs to decrease the stepsize α in order to reduce the variance and bias errors, which however slows down the convergence. In contrast, VRTD can increase the batch size to reduce both errors while still keeping the stepsize at a desired constant level to maintain fast convergence, as can be observed in our experiments. At the technical level, our analysis of bias error for Markovian sampling takes a different path from the techniques used in;;. Due to the batch average of stochastic gradients adopted by VRTD to reduce the variance, we apply a concentration bound established in Dedecker and Gouëzel for Markovian samples. 
This shows that the correlation among samples in different epochs is eliminated due to the concentration to a deterministic average, and the correlation among samples within each epoch is implicitly captured by the parameters in the concentration inequality. Such an analysis also explicitly explains why the variance reduction step can also reduce the bias error. On-policy TD learning and variance reduction. On-policy TD learning aims to minimize the Mean Squared Bellman Error (MSBE) when samples are drawn independently from the stationary distribution of the corresponding MDP. The non-asymptotic convergence under i.i.d. sampling has been established in Dalal et al. (2018a) for TD with linear function approximation and for TD with overparameterized neural network approximation. The convergence of averaged linear SA with constant stepsize has been studied in. In the Markovian setting, the non-asymptotic convergence has been studied for on-policy TD in;;;. proposed a variance reduced CTD algorithm (called VRTD in this paper), which directly applies variance reduction technique to the TD algorithm. The analysis of VRTD provided in has technical errors. The aim of this paper is to provide a technically solid analysis for VRTD to characterize the advantage of variance reduction. Variance reduced batch TD learning. algorithms are generally designed for policy evaluation by solving an optimization problem on a fixed dataset. , the empirical MSPBE is first transformed into a quadratic convex-concave saddle-point optimization problem and variance reduction methods of and were then incorporated into a primal-dual batch gradient method. applied two variants of variance reduction methods to solve the same saddle point problems, and showed that those two methods can save gradient computation cost. We note that due to the extensive research in TD learning, we include here only studies that are highly related to our work, and cannot cover many other interesting topics on TD learning such as asymptotic convergence of TD learning Tadić; , off-policy TD learning Sutton et al. (2008; 2009);;; , two time-scale TD algorithms Dalal et al. (2018b); , fitted TD algorithms , etc. The idea of the variance reduction algorithm proposed in as well as the analysis techniques that we develop in this paper can potentially be useful for these algorithms. We describe the problem of value function evaluation over a Markov decision process (MDP) (S, A, P, r, γ), where each component is explained in the sequel. Suppose S ⊂ R d is a compact state space, and A is a finite action set. Consider a stationary policy π, which maps a state s ∈ S to the actions in A via a probability distribution π(·|s). At time-step t, suppose the process is in some state s t ∈ S, and an action a t ∈ A is taken based on the policy π(·|s t). Then the transition kernel P = P(s t+1 |s t, a t) determines the probability of being at state s t+1 ∈ S in the next time-step, and the reward r t = r(s t, a t, s t+1) is received, which is assumed to be bounded by r max. We denote the associated Markov chain by p(s |s) = a∈A p(s |s, a)π(a|s), and assume that it is ergodic. Let µ π be the induced stationary distribution, i.e., s p(s |s)µ π (s) = µ π (s). We define the value function for a policy π as v with φ i (s) for i = 1, 2, · · · d denoting the fixed basis feature functions of state s, and θ ∈ R d is a parameter vector. Let Φ be the |S| × d feature matrix (with rows indexed by the state and columns corresponding to components of θ). 
The linear function approximation can be written in the vector form asv(θ) = Φθ. Our goal is to find the fixed-point parameter θ The TD learning algorithm performs the following fixed-point iterative update to find such θ *. where α t > 0 is the stepsize, and A xt and b xt are specified below. For i.i.d. samples generated from the distribution µ π, we denote the sample as x t = (s t, r t, s t), and A xt = φ(s t)(γφ(s t) − φ(s t)) and b xt = r(s t)φ(s t). For Markovian samples generated sequentially from a trajectory, we denote the sample as x t = (s t, r t, s t+1), and in this case A xt = φ(s t)(γφ(s t+1) − φ(s t)) and b xt = r(s t)φ(s t). We further define the mean gradient g(θ) = Aθ + b where We call g(θ) as gradient for convenience due to its analogous role as in the gradient descent algorithm. It has been shown that the iteration in eq. converges to the fix point θ * = −A −1 b at a sublinear rate O(1/t) with diminishing stepsize α t = O(1/t) using both Markovian and i.i.d. samples; Dalal et al. (2018a);. Throughout the paper, we make the following standard assumptions;;;;. Assumption 1 (Problem solvability). The matrix A is non-singular. Assumption 3 (Geometric ergodicity). The considered MDP is irreducible and aperiodic, and there exist constants κ > 0 and ρ ∈ such that where d T V (P, Q) denotes the total-variation distance between the probability measures P and Q. Assumption 1 requires the matrix A to be non-singular so that the optimal parameter θ * = −A −1 b is well defined. Assumption 2 can be ensured by normalizing the basis functions. Assumption 3 holds for any time-homogeneous Markov chain with finite state-space and any uniformly ergodic Markov chains with general state space. In this section, we first introduce the variance-reduced TD (VRTD) algorithm proposed in for Markovian sampling and then discuss the technical errors in the analysis of VRTD in. 3.1 Since the standard TD learning takes only one sample in each update as can be seen in eq., it typically suffers from a large variance. This motivates the development of the VRTD algorithm in (named as CTD in). VRTD is formally presented in Algorithm 2, and we briefly introduce the idea below. The algorithm runs in a nested fashion with each inner-loop (i.e., each epoch) consists of M updates. At the beginning of the m-th epoch, a batch of M samples are acquired and a batch gradient g m (θ m−1) is computed based on these samples as an estimator of the mean gradient. Then, each inner-loop update randomly takes one sample from the batch, and updates the corresponding component in g m (θ m−1). Here, Π R θ in Algorithm 2 denotes the projection operator onto a norm ball with the radius R θ. The idea is similar to the SVRG algorithm proposed in for conventional optimization. Since a batch gradient is used at each inner-loop update, the variance of the gradient is expected to be reduced. 
Algorithm 1 Variance Reduced TD with i.i.d. samples
Input: batch size M, learning rate α and initialization θ̃_0
1: for m = 1, 2, ..., S do
2:   θ_{m,0} = θ̃_{m−1}
3:   Sample a set B_m with M samples independently from the distribution µ_π
4:   Compute the batch gradient g_m(θ̃_{m−1}) = (1/M) Σ_{x∈B_m} g_x(θ̃_{m−1})
5:   for t = 0, 1, ..., M − 1 do
6:     Sample x_{j_{m,t}} independently from the distribution µ_π
7:     θ_{m,t+1} = θ_{m,t} + α [ g_{x_{j_{m,t}}}(θ_{m,t}) − g_{x_{j_{m,t}}}(θ̃_{m−1}) + g_m(θ̃_{m−1}) ]
8:   end for
9:   Set θ̃_m = θ_{m,t} for a randomly chosen t ∈ {1, 2, ..., M}
10: end for
Output: θ̃_S
Algorithm 2 Variance Reduced TD with Markovian samples
Input: batch size M, learning rate α and initialization θ̃_0
1: for m = 1, 2, ..., S do
2:   θ_{m,0} = θ̃_{m−1}
3:   Compute the batch gradient g_m(θ̃_{m−1}) over the samples {x_{(m−1)M}, ..., x_{mM−1}} of the trajectory
4:   for t = 0, 1, ..., M − 1 do
5:     Sample j_{m,t} uniformly at random in {(m − 1)M, ..., mM − 1} from the trajectory
6:     θ_{m,t+1} = Π_{R_θ} [ θ_{m,t} + α ( g_{x_{j_{m,t}}}(θ_{m,t}) − g_{x_{j_{m,t}}}(θ̃_{m−1}) + g_m(θ̃_{m−1}) ) ]
7:   end for
8:   Set θ̃_m = θ_{m,t} for a randomly chosen t ∈ {1, 2, ..., M}
9: end for
Output: θ̃_S
3.2 In this subsection, we point out the technical errors in the analysis of VRTD in , which thus fails to provide the correct variance reduction performance for VRTD. At the high level, the batch gradient g_m(θ̃_{m−1}) computed at the beginning of each epoch m necessarily introduces a non-vanishing variance error for a fixed stepsize, because it cannot exactly equal the mean (i.e. population) gradient g(θ̃_{m−1}). Furthermore, due to the correlation among samples, the gradient estimator in expectation (with regard to the randomness of the sample trajectory) does not equal the mean gradient, which should further cause a non-vanishing bias error in the convergence bound. Unfortunately, the convergence bound in indicates an exact convergence to the fixed point, which contradicts the aforementioned general understanding. More specifically, if the batch size M = 1 (with properly chosen λ_A, defined as λ_A := 2|λ_max(A + Aᵀ)|), VRTD reduces to the vanilla TD. However, the exact convergence in Theorem 3 in does not agree with that of vanilla TD characterized in the recent studies ; , which has variance and bias errors. In Appendix A, we further provide a counter-example to show that one major technical step for characterizing the convergence bound in does not hold. The goal of this paper is to provide a rigorous analysis of VRTD to characterize its variance reduction performance. As aforementioned, the convergence of VRTD suffers from two types of errors: the variance error due to inexact estimation of the mean gradient and the bias error due to Markovian sampling. In this section, we first focus on the first type of error and study the convergence of VRTD under i.i.d. sampling. We then study the Markovian case to further analyze the bias. In both cases, we compare the performance of VRTD to that of the vanilla TD described in eq. to demonstrate its advantage. For i.i.d. samples, it is expected that the bias error due to the time correlation among samples does not exist. However, if we directly applied VRTD (Algorithm 2), originally designed for Markovian samples, there would be a bias term due to the correlation between the batch gradient estimate and the inner-loop updates. Thus, we slightly modify Algorithm 2 into Algorithm 1 to avoid this bias error in the convergence analysis with i.i.d. samples. Namely, at each inner-loop iteration, we draw a new sample from the stationary distribution µ_π for the update rather than randomly selecting one from the batch of samples drawn at the beginning of the epoch as in Algorithm 2. In this way, the new independent samples avoid the correlation with the batch gradient evaluated at the beginning of the epoch. Hence, Algorithm 1 does not suffer from an extra bias error.
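To make the update rule concrete, below is a small NumPy sketch of the i.i.d. variant (Algorithm 1) with linear function approximation. The `sample()` generator and all names are ours; for simplicity the sketch returns the last inner iterate rather than a uniformly chosen one.

import numpy as np

def g_x(theta, phi_s, r, phi_s_next, gamma):
    """Per-sample TD 'gradient'  g_x(theta) = A_x theta + b_x,  with
    A_x = phi(s)(gamma*phi(s') - phi(s))^T and b_x = r*phi(s)."""
    return phi_s * (r + gamma * phi_s_next @ theta - phi_s @ theta)

def vrtd_iid(sample, theta0, alpha, M, num_epochs, gamma):
    """Sketch of VRTD with i.i.d. samples.  `sample()` is a hypothetical function
    returning a tuple (phi_s, r, phi_s_next) drawn from the stationary distribution."""
    theta_tilde = theta0.copy()
    for _ in range(num_epochs):
        batch = [sample() for _ in range(M)]
        # batch gradient at the reference point theta_tilde (variance reduction anchor)
        g_ref = np.mean([g_x(theta_tilde, *x, gamma) for x in batch], axis=0)
        theta = theta_tilde.copy()
        for _ in range(M):
            x = sample()                       # fresh i.i.d. sample for each inner update
            correction = g_x(theta, *x, gamma) - g_x(theta_tilde, *x, gamma)
            theta = theta + alpha * (correction + g_ref)
        theta_tilde = theta                    # the paper picks a random inner iterate
    return theta_tilde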
To understand the convergence of Algorithm 1 at the high level, we first note that the sample batch gradient cannot estimate the mean gradient g(θ m−1) exactly due to its population nature. Then, we define e m (θ m) = g m (θ m−1) − g(θ m−1) as such a gradient estimation error, our analysis (see Appendix C) shows that after each epoch update, we have where F m,0 denotes the σ-field that includes all the randomness in sampling and updates before the m-th epoch. The first term in the right-hand side of eq. captures the contraction property of Algorithm 1 and the second term corresponds to the variance of the gradient estimation error. It can be seen that due to such an error term, Algorithm 1 is expected to have guaranteed convergence only to a neighborhood of θ *, when applying eq. iteratively. Our further analysis shows that such an error term can still be well controlled (to be small) by choosing an appropriate value for the batch size M, which captures the advantage of the variance reduction. The following theorem precisely characterizes the non-asymptotic convergence of Algorithm 1. Theorem 1. Consider the VRTD algorithm in Algorithm 1. Suppose Assumptions 1-3 hold. Set a constant stepsize α < λ A 8(1+γ) 2 and the batch size M > where (1+γ) 2 (with C 1 < 1 due to the choices of α and M), and We note that the convergence rate in eq. can be written in a simpler form as Theorem 1 shows that Algorithm 1 converges linearly (under a properly chosen constant stepsize) to a neighborhood of the fixed point solution, and the size of the neighborhood (i.e., the error term) has the order of O(α M), which can be made as small as possible by properly increasing the batch size M. This is in contrast to the convergence of the vanilla TD, which suffers from the constant error term with order O(α) for a fixed stepsize. Thus, a small stepsize α is required in vanilla TD to reduce the variance error, which, however, slows down the practical convergence significantly. In contrast, this is not a problem for VRTD, which can attain a high accuracy solution while still maintaining fast convergence at a desirable stepsize. We further note that if we have access to the mean gradient g(θ m−1) in each epoch m, then the error term ||θ m − θ * || 2 2 becomes zero, and Algorithm 1 converges linearly to the exact fixed point solution, as the iteration number m goes to infinity with respect to the conditional number C 1, which is a positive constant and less than 1. This is similar to the conventional convergence of SVRG for strongly convex optimization. However, the proof here is very different. , the convergence proof relies on the relationship between the gradient and the value of the objective function, but there is not such an objective function in the TD learning problem. Thus, the convergence of the parameter θ needs to be developed by exploiting the structure of the Bellman operator. In this section, we study the VRTD algorithm (i.e., Algorithm 2) with Markovian samples, in which samples are generated from one single MDP path. In such a case, we expect that the convergence of VRTD to have both the variance error due to the gradient estimation (similar to the case with i.i.d. samples) and the bias error due to the correlation among samples. To understand this at the high level, we define the bias at each iteration as ). Then our analysis (see Appendix D) shows that after each epoch update, we have The first term on the right-hand side of eq. captures the epochwise contraction property of Algorithm 2. 
The second term is due to the variance of the gradient estimation, which captures how well the batch gradient g m (θ *) approximates the mean gradient g(θ *) (note that g(θ *) = 0). Such a variance term can be shown to decay to zero as the batch size gets large similarly to the i.i.d. case. The third term captures the bias introduced by the correlation among samples in the m-th epoch. To quantitatively understand this error term, we provide the following lemma that characterizes how the bias error is controlled by the batch size M. Lemma 1. For any m > 0 and any θ ∈ B θ, which is a ball with the radius R θ, we have where the expectation is over the random trajectory, θ is treated as a fixed variable, and 0 < C 0 < ∞ is a constant depending only on the MDP. In Lemma 1, the constant C 0 depends proportionally on the coupling time and the returning time of the underlying Markov chain. Specifically, the coupling time indicates how fast the Markov chain converges to the stationary distribution, and the returning time, which depends on the nature of the stationary distribution, captures the expected time that the Markov chain returns to the same state. Although it is in general difficult to obtain an explicit form for C 0, the value of C 0 is typically small if the Markov chain has a small mixing time and the stationary distribution is less degenerate. Lemma 1 shows that the bias error diminishes as the batch size M increases and the algorithm approaches to the fixed point θ *. To explain why this happens, the definition of ξ m (θ) immediately yields the following bound: The first term on the right-hand-side of eq. can be bounded by the concentration property for the ergodic process as g m (θ) = → g(θ). As M increases, the randomness due to the gradient estimation is essentially averaged out due to the variance reduction step in VRTD, which implicitly eliminates its correlation from samples in the previous epochs. As a comparison, the bias error in vanilla TD has been shown to be bounded by E[ξ n (θ)] = O(α log(1/α));. In order to reduce the bias and achieve a high convergence accuracy, the stepsize α is required to be small, which causes the algorithm to run very slowly. The advantage of VRTD is that the bias can be reduced by choosing a sufficiently large batch size M so that the stepsize can still be kept at a desirable constant to guarantee fast convergence. Theorem 2. Consider the VRTD algorithm in Algorithm 2. Suppose Assumptions 1-3 hold. Set the constant stepsize α < λ A 12(1+γ) 2 and the batch size M > 1 0.5αλ A −6α 2 (1+γ) 2. Then, we have where 0.5αλ A −3α 2 (1+γ) 2 (with C 1 < 1 due to the choices for α and M),. We note that the convergence rate in eq. can be written in a simpler form as Theorem 2 shows that VRTD (i.e., Algorithm 2) with Markovian samples converges to a neighborhood of θ * at a linear rate, and the size of the neighborhood (i.e., the convergence error) decays sublinearly with the batch size M. More specifically, the first term in the right-hand side of eq. captures the linear convergence of the algorithm, the second term corresponds to the sum of the cumulative gradient estimation error and the cumulative bias error. For the fixed stepsize, the total convergence error is dominated by the sum of those two error terms with the order O(1/M). Therefore, the variance reduction in Algorithm 2 reduces both the variance and the bias of the gradient estimator. In this section, we provide numerical to verify our theoretical . We consider an MDP with γ = 0.95 and |S| = 50. 
Each transition probability are randomly sampled from and the transitions were normalized to one. The expected reward for each transition is also generated randomly in and the reward on each transition was sampled without noise. Each component of the feature matrix Φ ∈ R 50×4 is randomly and uniformly sampled between 0 and 1. The baseline for comparison is the vanilla TD algorithm, which corresponds to the case with M = 1 in our figure. We conduct two experiments to investigate how the batch size M for variance reduction affects the performance of VRTD with i.i.d. and Markovian samples. In the Markovian setting, we sample the data from a MDP trajectory. In the i.i.d. setting, we sample the data independently from the corresponding stationary distribution. In both experiments, we set the constant stepsize to be α = 0.1 and we run the experiments for five different batch sizes: M = 1, 50, 500, 1000, 2000. Our are reported in Figure 1. All the plots report the square error over 1000 independent runs. In each case, the left figure illustrates the convergence process over the number of gradient computations and the right figure shows the convergence errors averaged over the last 10000 iterations for different batch size values. It can be seen that in both i.i.d. and Markovian settings, the averaged error decreases as the batch size increases, which corroborates both Theorem 1 and Theorem 2. We also observe that increased batch size substantially reduces the error without much slowing down the convergence, demonstrating the desired advantage of variance reduction. Moreover, we observe that the error of VRTD with i.i.d samples is smaller than that of VRTD with Markovian samples under all batch size settings, which indicates that the correlation among Markovian samples introduces additional errors. In this paper, we provided the convergence analysis for VRTD with both i.i.d. and Markovian samples. We developed a novel technique to bound the bias of the VRTD gradient estimator. Our demonstrate the advantage of VRTD over vanilla TD on the reduced variance and bias errors by the batch size. We anticipate that such a variance reduction technique and our analysis tools can be further applied to other RL algorithms. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W.. OpenAI Gym. In this section, we use a counter-example to show that one major technical step for characterizing the convergence bound in does not hold. Consider Step 4 in the proof of Theorem 3 in. For the following defined (θ) where Ψ denotes the stationary distribution of the corresponding Markov chain, claimed that the following inequality holds This is not correct. Consider the following counter-example. Let the batch size M = 3 and the dimension of the feature vector be one, i.e., Φ ∈ R |S|×1. Hence, all variables in eq. and eq. are scalars. Since the steps for proving eq. in do not have specific requirements for the transition kernel, eq. should hold for any distribution of v. Thus, suppose v follows the uniform distribution over [−3, 3]. Further assume that in the n-th epoch, the samples of v are given by {1, 2, −3}. Recall that E(·|F n) is the average over the batch samples in the n-th epoch. We have: Substituting the above values into eq. yields which obviously does not hold in general when θ = θ *. Consequently the second statement in Theorem 3 of , which is critically based on the above erroneous steps, does not hold. 
Hence, the first statement in the same theorem whose proof is based on the second statement cannot hold either. Lemma 2. For any x i = (s i, r i, s i) (i.i.d. sample) or x i = (s i, r i, s i+1) (Markovian sample), we have A xi 2 ≤ 1 + γ and b xi 2 ≤ r max. Proof. First consider the case when samples are i.i.d. Due to the definition of A xi, we have Following similar steps, we can obtain the same upper bounds for the case with Markovian samples. Lemma 3. Let G = (1 + γ)R θ + r max. Consider Algorithm 2. For any m > 0 and 0 ≤ t ≤ M − 1, Proof. First, we bound g xj m,t (θ m,t) 2 as follows. Following the steps similar to the above, we have g xj m,t (θ m−1) 2 ≤ G. Finally for where eq. follows from the last fact g xj m,t (θ m−1) 2 ≤ G. Proof. Recalling the definition of g xi, and applying Lemma 2, we have Lemma 5. Considering Algorithm 2 with Markovian samples. We have Proof. We first derive Following the steps similar to the above, we can derive Recall that B m is the sample batch drawn at the beginning of each m-th epoch and x i,j denotes the sample picked at the j-th iteration in the i-th epoch in Algorithm 1. We denote σ(θ 0) as a trivial σ-field whenθ 0 is a deterministic vector. Let σ(A ∪ B) indicate the smallest σ-field that contains both A and B. Then, we construct a set of σ-fields in the following incremental way. The proof of Theorem 1 proceeds along the following steps. Step 1: Iteration within the m-th epoch For the m-th epoch, we consider the last update (i.e., the M -th iteration in the epoch), and decompose its error into the following form. First, consider the third term in the right-hand side of eq., we have Then, by taking the expectation conditioned on F m,M −1 on both sides of eq., we have where (i) follows from the fact that and (ii) follows from the inequality E[(X − EX) 2 ] ≤ EX 2 and Lemma 2. Then, taking the For all 1 ≤ i ≤ M, we have Then, arranging terms in eq. and using the above fact yield Finally, dividing eq. by [αλ A − 4α 2 (1 + γ) 2 ]M on both sides yields Step 2: Bounding the variance error where eq. follows from Lemma 4. Step 3: Iteration over m epoches First, we substitute eq. into eq. to obtain where we define Taking the expectation of eq. conditioned on F m−1,0 and following the steps similar to those in step 1 to upper bound E θ m−1 − θ * 2 2 F m−1,0, we obtain Then, by following the above steps for (m − 1) times, we have which yields the desirable . We define σ(S) to be the σ-field of all sample trajectories {x 1, x 2, ...} and recall that j m,t is the index of the sample picked at the t-th iteration in the m-th epoch in Algorithm 2. Then we define a set of σ-fields in the following incremental way: We first prove Lemma 1, which is useful for step 4 in the main proof in Theorem 2 provided in Section D.2. Proof. Recall the definition of the bias term: where and To bound eq. and eq., we apply the concentration inequality over Markov chains developed in Dedecker and Gouëzel. We first introduce such a concentration bound as follows. Theorem 3 (Dedecker and Gouëzel, Theorem 2). Let {X n} be an irreducible aperiodic Markov chain which is geometrically ergodic on a space S. Let π be its stationary distribution. There exists a constant C 0 depending on the Markov chain (see the detailed definition of C 0 in Dedecker and Gouëzel) with the following property. Let n ∈ N. Let K(x 0, · · ·, x n−1) be a function of n variables on S n. 
Then for all t > 0, where µ is the stationary distribution of the Markov chain and 0 ≤ L i < +∞ is a constant that satisfies: Since the MDP in Algorithm 2 satisfies Assumption 3, it satisfies the assumptions in Theorem 3. Then applying Theorem 3 to each W n,(i,j) and V n,i, we have and where 0 < C 0 < ∞ is a constant depending on the MDP parameters. Then, substituting eq. into eq. and eq. into eq. yield and Then we derive the following two bounds: and Finally, substituting eq. and eq. into eq. yields D.2 PROOF OF THEOREM 2 Step 1: Iteration within the m-th inner loop For the m-th inner loop, we consider the last update (i.e., the M -th iteration in the epoch), and decompose its error into the following form. First, consider the third term in the right-hand side of eq.. Then, by taking the expectation conditioned on F m,(M −1) on both sides of eq., we have where (i) follows from the fact that, and (iii) follows from Lemma 2. We further consider the last term in eq.: Then, taking the expectation conditioned on F m,M −1 on both sides of eq. yields where (i) follows by plugging eq. into its preceding step and from the fact that for θ ∈ R d. Then, by applying eq. iteratively, we have Arranging the terms in eq. yields Then, substituting eq. into eq., we obtain Subtracting 0.5λ 2 |F m,0 ] on both sides of eq. yields Then, dividing eq. by [0.5αλ A − 3α 2 (1 + γ) 2 ]M on both sides, we obtain For simplicity, let 0.5λ A −3α(1+γ) 2. Then we rewrite eq.: Step 2 by following similar steps in the previous steps, we obtained By following the above steps for (m − 1) times, we have Then taking the expectation of σ(S) (which contains the randomness of the entire sample trajectory) on both sides of eq. yields where the second term in the right hand side of eq. corresponds to the bias error and the third term corresponds to the variance error. Without loss of generality, we consider the case when j > i as follows: computations required by VRTD (i.e., Algorithm 1) under i.i.d. sampling to attain such an -accuracy solution is at most Proof. Given the values of α and M in the theorem, it can be easily checked that E||θ m − θ * || 2 ≤ for m = log. Then the total number of gradient computations is given by 2mM that yields the desired order given in the theorem. As a comparison, consider the vanilla TD algorithm studied in Proof. Given the values of α and M in the theorem, it can be easily checked that E||θ m − θ * || 2 ≤ for m = log. Then the total number of gradient computations is given by 2mM that yields the desired order given in the theorem. As a comparison, consider the vanilla TD algorithm studied in gradient computations in total to obtain an -accuracy solution. Hence, in the Markovian setting, VRTD outperforms vanilla TD in terms of the total computational complexity by a factor of log 1. To intuitively explain, we first note that the correlation among data samples in the Markovian case also causes a bias error in addition to the variance error. For VRTD, due to the variance reduction scheme, the bias and variance errors are kept at the same level (with respect to the batch size) so that the bias error does not cause order-level increase in the computational complexity for VRTD. However, for vanilla TD, the bias error dominates the variance error, which turns out to require more iterations to attain an -accurate solution, and yields an additional log 1 factor in the total complexity compared to VRTD. The finite-time convergence rate of vanilla TD under i.i.d. 
and Markovian sampling has been characterized in;. However, these studies did not provide the overall computational complexity, i.e., the total number of gradient computations to achieve an -accuracy solution. This section provides such an analysis based on their convergence for completeness. only when it reaches the goal and 0 otherwise. Each transition probability is randomly sampled from and normalized to one, and each component of the feature matrix Φ ∈ R 16×4 is also randomly sampled from. Given the feature matrix and the transition probability, the ground truth value of θ * can be calculated, which is used to evaluate the error in the experiments. We set the stepsize to be α = 0.1 and run vanilla TD (M = 1) and VRTD with the batch sizes M = 50, 500, 1000, 2000. Note that M = 1 corresponds to the base line vanilla TD. We compute the squared error over 1000 independent runs. The left plot in Figure 2 shows the convergence process over the number of gradient computations and the right plot in Figure 2 shows the convergence error averaged over the last 10000 iterations. It can be observed that VRTD achieves much smaller error than TD, and increasing the batch size for VRTD substantially reduces the error without much slowing down the convergence. Mountain Car is a game in OpenAI Gym, which is driven by an MDP with an infinite state space and a finite action space. At each time step, an agent randomly chooses an action ∈ {push left, push right, no push}. In this problem, the ground truth value of θ * is not known. In order to quantify the performance of VRTD, we apply the error metric known as the norm of the expected TD update given by NEU= E[δφ] 2 2, where δ is the temporal difference;. The state sample is transformed into a feature vector with the dimension 20 using an approximation of a RBF kernel. The agent follows a random policy in our experiment and we initialize θ 0 = 0. At t = 0, the agent starts from the lowest point, receives a reward of −1 at each time step, and returns to the starting point every time it reaches the goal. We set the stepsize to be α = 0.2 and run vanilla TD (M = 1) and VRTD with batch size M = 1000. After every 10000 gradient computations, learning is paused and the NEU is computed by averaging over 1000 test samples. We conduct 1000 independent runs and the are reported by averaging over these runs. Figure 3 shows the convergence process of the NEU versus the number of gradient computations. It can been seen that VRTD achieves smaller NEU than vanilla TD. Upon the request of one reviewer, we provide an additional experiment to compare the performance of VRTD given in Algorithm 2 (under constant stepsize) with the TD algorithm (under a changing stepsize as suggested by the reviewer). We adopt the same setting of Frozen Lake as in Appendix G.1. Let VRTD take a batch size M = 5000 and stepsize α = 0.1. For a fair comparison, we start TD with the same constant stepsize α = 0.1 and then reduce the stepsize by half whenever the error stops decrease. The comparison is reported in Figure 4, where both curves are averaged over 1000 independent runs. The two algorithms are compared in terms of the squared error versus the total number of gradient computations (equivalently, the total number of samples being used). It can be seen that VRTD reaches the required accuracy much faster than TD.
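To make the evaluation pipeline above concrete, the following sketch shows (i) how a ground-truth θ* can be computed for a small tabular problem such as Frozen Lake by solving the projected Bellman (TD) fixed-point equation, and (ii) how the NEU metric used for Mountain Car can be estimated from test transitions. This is an illustrative sketch under assumed conventions (per-state expected rewards, features handed around as numpy arrays, an assumed discount factor), not the code used for the reported experiments.

import numpy as np

# (i) Ground-truth TD fixed point for a small MDP (e.g. Frozen Lake with 16 states and
#     4-dimensional features). theta* solves  Phi^T D (I - gamma P) Phi theta = Phi^T D r,
#     where D = diag(mu) and mu is the stationary distribution of the transition matrix P.
def td_fixed_point(P, r, Phi, gamma):
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmax(np.real(evals))])
    mu = mu / mu.sum()                                    # stationary distribution of P
    D = np.diag(mu)
    A = Phi.T @ D @ (np.eye(len(r)) - gamma * P) @ Phi
    b = Phi.T @ D @ r
    return np.linalg.solve(A, b)                          # theta*

# (ii) NEU metric for the case where theta* is unknown (Mountain Car):
#      NEU = || E[ delta * phi ] ||_2^2, with delta the TD error on held-out transitions.
def neu(theta, transitions, gamma):
    updates = [(r + gamma * phi_next @ theta - phi @ theta) * phi
               for (phi, r, phi_next) in transitions]
    g = np.mean(updates, axis=0)
    return float(g @ g)

# Example with random quantities in the shapes described above (16 states, d = 4):
rng = np.random.default_rng(0)
P = rng.uniform(size=(16, 16)); P /= P.sum(axis=1, keepdims=True)
r = rng.uniform(size=16)
Phi = rng.uniform(size=(16, 4))
theta_star = td_fixed_point(P, r, Phi, gamma=0.95)        # gamma = 0.95 is an assumed value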
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1ly10EKDS
This paper provides a rigorous study of variance-reduced TD learning and characterizes its advantage over vanilla TD learning
We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously. While deep learning has ushered in great advances in automated image understanding, it still suffers from the same weaknesses as all other machine learning techniques: when trained with images obtained under specific conditions, deep networks typically perform poorly on images acquired under different ones. This is known as the domain shift problem: the changing conditions cause the statistical properties of the test, or target, data, to be different from those of the training, or source, data, and the network's performance degrades accordingly. Domain adaptation aims to address this problem, especially when annotating images from the target domain is difficult, expensive, or downright infeasible. The dominant trend is to map images to features that are immune to the domain shift, so that the classifier works equally well on the source and target domains (; ; . In the context of deep learning, the standard approach is to find those features using a single architecture for both domains (; ; ;). Intuitively, however, as the domains have different properties, it is not easy to find one network that does this effectively for both. A better approach is to allow domains to undergo different transformations to arrive at domain-invariant features. This has been the focus of recent work (; Bermúdez-Chacón et al., 2018; ;, where source and target data pass through two different networks with the same architecture but different weights, nonetheless related to each other. In this paper, we introduce a novel, even more flexible paradigm for domain adaptation, that allows the different domains to undergo different computations, not only in terms of layer weights but also in terms of number of operations, while selectively sharing subsets of these computations. This enables the network to automatically adapt to situations where, for example, one domain depicts simpler images, such as synthetic ones, which may not need as much processing power as those coming from more complex domains, such as images taken in-the-wild. Our formulation reflects the intuition that source and target domain networks should be similar because they solve closely related problems, but should also perform domain-specific computations to offset the domain shift. To turn this intuition into a working algorithm, we develop a multibranch architecture that sends the data through multiple network branches in parallel. What gives it the necessary flexibility are trainable gates that are tuned to modulate and combine the outputs of these branches, as shown in, each of which processes the data in parallel branches, whose outputs are then aggregated in a weighted manner by a gate to obtain a single response. 
To allow for domain-adaptive computations, each domain has its own set of gates, one for each computational unit, which combine the branches in different ways. As a , some computations are shared across domains while others are domain-specific. computations should be carried out for each one. As an additional benefit, in contrast to previous strategies for untying the source and target streams (;, our formulation naturally extends to more than two domains. In other words, our contribution is a learning strategy that adaptively adjusts the specific computation to be performed for each domain. To demonstrate that it constitutes an effective approach to extracting domain-invariant features, we implement it in conjunction with the popular domain classifier-based method of . Our experiments demonstrate that our Domain Adaptive Multibranch Networks, which we will refer to as DAMNets, not only outperform the original technique of , but also the state-of-the-art strategy for untying the source and target weights of , which relies on the same domain classifier. We will make our code publicly available upon acceptance of the paper. Domain Adaptation. Domain adaptation has achieved important milestones in recent years (; ; ; ;), with deep learning-based methods largely taking the lead in performance. The dominant approach to deep domain adaptation is to learn a domain-invariant data representation. This is commonly achieved by finding a mapping to a feature space where the source and target features have the same distribution.; Long et al. (2015; ; , the distribution similarity was measured in terms of Maximum Mean Discrepancy , while other metrics based on second-and higher-order statistics were introduced in;;. , the distribution alignment process was disambiguated by exploiting the class labels, and in Häusser et al.; by leveraging anchor points associating embeddings between the domains. Another popular approach to learning domain-invariant features is to train a classifier to recognize the domain from which a sample was drawn, and use adversarial training to arrive to features that the classifier can no longer discriminate (; ; . This idea has spawned several recent adversarial domain adaptation classification , semantic segmentation ), and active learning techniques, and we will use such a classifier. Closest in spirit to our approach are those that do not share the weights of the networks that process the source and target data (; Bermúdez-Chacón et al., 2018; ; . , the weights were simply allowed to vary freely. ; Bermúdez-Chacón et al., it was shown that regularizing them to remain close to each other was beneficial. More recently, proposed to train small networks to map the source weights to the target ones. While these methods indeed untie the source and target weights, the source and target data still undergo the same computations, i.e., number of operations. In this paper, we argue that the amount of computation, that is, the network capacity, should adapt to each domain and reflect their respective complexities. We rely on a domain classifier as in; Ganin et al. (2016; . However, we do not force the source and target samples to go through the same transformations, which is counterintuitive since they display different appearance statistics. Instead, we start from the premise that they should undergo different computations and use domain-specific gates to turn this premise into our DAMNet architecture. Dynamic Network Architectures. 
As the performance of a neural network is tightly linked to its structure, there has been a recent push towards automatically determining the best architecture for the problem at hand. While neural architecture search techniques (; ; ; ;) aim to find one fixed architecture for a given dataset, other works have focused on dynamically adapting the network structure at inference time (; ; ; ;). In particular, in;;; , gates were introduced for this purpose. While our DAMNets also rely on gates, their role is very different: first, we work with data coming from different domains, whereas these gated methods, with the exception of , were all designed to work in the single-domain scenario. Second, and more importantly, these techniques aim to define a different computational path for every test sample. By contrast, we seek to determine the right computation for each domain. Another consideration is that we freeze our gates for inference while these methods must constantly update theirs. We believe this to be illsuited to domain adaptation, particularly because learning to adapt the gates for the target domain, for which only unlabeled data is available, is severely under-constrained. This lack of supervision may be manageable when one seeks to define operations for a whole domain, but not when these operations are sample-specific. We now describe our deep domain adaptation approach, which automatically adjusts the computations that the different domains undergo. We first introduce the multibranch networks that form the backbone of our DAMNet architecture and then discuss training in the domain adaptation scenario. is an aggregation of the outputs of parallel computations, or branches, f Let us first consider a single domain. In this context, a traditional deep neural network can be thought of as a sequence of N f operations f (i) (·) 1≤i≤N f, each transforming the output of the previous one. Given an input image x, this can be expressed as As a general convention, each operation f (i) (·) can represent either a single layer or multiple ones. Our formulation extends this definition by replacing each f (i) by multiple parallel computations, as shown in Fig. 2. More specifically, we replace each f (i) by a computational unit {f K} consisting of K parallel branches. Note that this K can be different at each stage of the network and should therefore be denoted as K (i). However, to simplify notation, we drop this index below. Given this definition, we write the output of each computational unit as whereΣ(·) is an aggregation operator that could be defined in many ways. It could be a simple summation that gives all outputs equal importance, or, at the opposite end of the spectrum, a multiplexer that selects a single branch and ignores the rest. To cover the range between these two alternatives, we introduce learnable gates that enable the network to determine what relative importance the different branches should be given. Our gates perform a weighted combination of the branch outputs. Each gate is controlled by a set of K activation weights {φ, and a unit returns If ∀j, φ (i) j = 1, the gate performs a simple summation. If φ (i) j = 1 for a single j and 0 for the others, it behaves as a multiplexer. The activation weights φ (i) j enable us to modulate the computational graph of network block f (i). To bound them and encourage the network to either select or discard each branch in a computational unit, we write them in terms of sigmoid functions with adaptive steepness. 
That is, where the g j s are learnable unbounded model parameters, and π (i) controls the plasticity of the activation-the rate at which φ j varies between the extreme values 0 and 1 for block i. During training, we initially set π (i) to a small value, which enables the network to explore different gate configurations. We then apply a cooling schedule on our activations, by progressively increasing π (i) over time, so as to encourage the gates to reach a firm decision. Note that our formulation does not require, that is, we do not require the aggregated output x (i) to be a convex combination of the branch outputs f ). This is deliberate because allowing the activation weights to be independent from one another provides additional flexibility for the network to learn general additive relationships. Finally, a Multibranch Network is the concatenation of multiple computational units, as shown in Fig. 1. For the aggregation within each unit f (i) to be possible, the f (i) j s' outputs must be of matching shapes. Furthermore, as in standard networks, two computational units can be attached only if the output shape of the first one matches the input shape of the second. Although it would be possible to define computational units at any point in the network architecture, in practice, we usually take them to correspond to groups of layers that are semantically related. For example, one would group a succession of convolutions, pooling and non-linear operations into the same computational unit. Our goal is to perform domain adaptation, that is, leverage a large amount of labeled images, drawn from a source domain, to train a model for a target domain, whose data distribution is different and for which we only have access to unlabeled images To this end, we extend the gated networks of Section 3.1 by defining two sets of gates, one for the source domain and one for the target one. Let {(φ s) be the corresponding source and target activation weights for computational unit f (i), respectively. Given a sample x d coming from a domain d ∈ {s, t}, we take the corresponding output of the i-th computational unit to be Note that under this formulation, the domain identity d of the sample is required in order to select the appropriate The concatenated computational units forming the DAMNet encode sample x from domain d into a feature vector z = f (x, d). Since the gates for different domains are set independently from one another, the outputs of the branches for each computational unit are combined in a domainspecific manner, dictated by the activation weights (φ d) j. Therefore, the samples are encoded to a common space, but arrive to it through potentially different computations. Fig. 3 depicts this process. Ultimately, the network can learn to share weights for computational unit f (i) by setting; Rozantsev et al. (2018;, it can learn to use more computation for one domain than for the other by setting (φ s) while having only a single non-zero (φ t) Figure 3: Computational graphs for the source (top) and target (bottom) domains, for the same network. While both domains share the same computational units, their outputs are obtained by different aggregations of their inner operations, e.g., in the first unit, the source domain does not use the middle two operations, whereas the target domain does; by contrast, both exploit the fourth operation. In essence, this scheme adapts the amount of computation that each domain is subjected to. 
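As a concrete illustration of the gating mechanism and the domain-specific aggregation just described, the sketch below implements one computational unit with K parallel branches and per-domain gate parameters g_j, with activations φ_j = sigmoid(π · g_j). It is a minimal PyTorch sketch, not the authors' implementation: the linear placeholder branches, the two-domain example, and the handling of the plasticity π are all assumptions made only for illustration.

import torch
import torch.nn as nn

class GatedUnit(nn.Module):
    """One computational unit: K parallel branches whose outputs are aggregated by
    domain-specific gates. Minimal sketch; real branches would be convolutional blocks."""
    def __init__(self, branches, num_domains):
        super().__init__()
        self.branches = nn.ModuleList(branches)               # the K parallel operations f_j
        # One unbounded, learnable gate parameter g_j per (domain, branch) pair.
        # Initialized to zero so that every activation starts at sigmoid(0) = 0.5.
        self.g = nn.Parameter(torch.zeros(num_domains, len(branches)))

    def forward(self, x, domain, pi=1.0):
        # Gate activations phi_j = sigmoid(pi * g_j); pi controls how sharply the unit
        # commits to keeping or discarding each branch (the "plasticity" in the text).
        phi = torch.sigmoid(pi * self.g[domain])
        outs = [f(x) for f in self.branches]                   # branch outputs of matching shape
        # Weighted aggregation; deliberately not constrained to be a convex combination.
        return sum(w * o for w, o in zip(phi, outs))

# Example: one unit with two placeholder branches, shared by a source and a target domain.
unit = GatedUnit([nn.Linear(8, 16), nn.Linear(8, 16)], num_domains=2)
x = torch.randn(4, 8)
y_src = unit(x, domain=0, pi=0.1)    # early in training: soft gates (small pi)
y_tgt = unit(x, domain=1, pi=0.1)    # same input, potentially different branch mixture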
The above formulation treats all branches for each computational unit as potentially sharable between domains. However, it is sometimes desirable not to share at all. For example, batchnormalization layers that accumulate and update statistics of the data over time, even during the forward pass, are best exposed to a single domain to learn domain-specific statistics. We allow for this by introducing computational units whose gates are fixed, yet domain specific, and that therefore act as multiplexers. After the last computational unit, a small network p y operates directly on the encodings and returns the class assignmentŷ = p y (z), thus subjecting the encodings for all samples to the same set of operations. The formulation outlined above extends naturally to more than two domains, by assigning one set of gates per domain. This enables us to exploit annotated data from different source domains, and even to potentially handle multiple target domains simultaneously. In this generalized case, we introduce governing sets of gates with activations φ d1,..., φ d D for D different domains. They act in the same way as in the two-domain case and the overall architecture remains similar. When training our models, we jointly optimize the gate parameters (g d) (i) j, from Eq. 4, along with the other network parameters using standard back-propagation. To this end, we make use of a composite loss function, designed to encourage correct classification for labeled samples from the source domain(s) and align the distributions of all domains, using labeled and unlabeled samples. This loss can be expressed as where and u are the sets of labeled and unlabeled samples, respectively, and where we assumed, without loss of generality, that the samples are ordered. The first term in this loss, L y (y,ŷ), is the standard cross-entropy, which compares the groundtruth class probabilities y with the predicted onesŷ = p y (z), where, as discussed in Section 3.2.1, ) is the feature encoding of sample x from domain d. For the second term, which encodes distribution alignment, we rely on the domain confusion strategy of , which is commonly used in existing frameworks. Specifically, for D domains, we make use of an auxiliary domain classifier network p d that predicts a D-dimensional vector of domain probabilitieŝ d given the feature vector z. Following the gradient reversal technique of Ganin & Lempitsky, where d is the D-dimensional binary vector encoding the ground-truth domain, d i indicates the i-th element of d, andd = p d (R(z)), with R the gradient reversal pseudofunction of that enables to incorporate adversarial training directly into back-propagation. That is, with this loss, standard back-propagation trains jointly the domain classifier to discriminate the domains and the feature extractor f (·) to produce features that fool this classifier. When training is complete and the gates have reached a stable state, the branches whose activations are close to zero are deactivated. This prevents the network from performing computations that are irrelevant and allows us to obtain a more compact network to process the target data. Since we rely on the domain confusion loss to train our model, we treat the Domain-Adversarial Neural Network (DANN) method of , as our first baseline. To demonstrate the benefits of our approach over simply untying the source and target stream parameters, we compare our approach against the Residual Parameter Transfer (RPT) method of , which constitutes the state of the art in doing so. 
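Since the composite objective and the gradient-reversal trick are only described in prose above (the display equations did not survive extraction), the following is a minimal sketch of how they are commonly realized. The relative weight lambd between the task term and the domain-confusion term, and the exact batching of labeled versus unlabeled samples, are assumptions rather than values taken from the paper.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient-reversal pseudo-function R(z): identity in the forward pass,
    gradient multiplied by -lambd in the backward pass."""
    @staticmethod
    def forward(ctx, z, lambd=1.0):
        ctx.lambd = lambd
        return z.view_as(z)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def damnet_loss(class_logits, labels, labeled_mask, domain_logits, domain_ids, lambd=1.0):
    """Composite objective: supervised cross-entropy on labeled (source) samples plus the
    domain-confusion term on all samples. The weight lambd is an assumption; the paper's
    exact balancing of the two terms is not reproduced here."""
    task_loss = nn.functional.cross_entropy(class_logits[labeled_mask], labels[labeled_mask])
    domain_loss = nn.functional.cross_entropy(domain_logits, domain_ids)
    return task_loss + lambd * domain_loss

# Usage: pass the encodings z through the reversal layer before the domain classifier,
#   domain_logits = domain_classifier(GradReverse.apply(z))
# so that a single backward pass trains the classifier to tell domains apart while pushing
# the gated feature extractor toward domain-invariant encodings.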
Note that RPT also relies on the domain confusion loss, which makes our comparison fair. In addition, we report the of directly applying a model trained on the source domain to the target, without any domain adaptation, which we refer to as "No DA". We also provide the oracle accuracy of a model trained on the fully-labeled target domain, referred to as "On TD". We adapt different network architectures to the multibranch paradigm for different adaptation problems. For all cases, we initialize our networks' parameters by training the original versions of those architectures on the source domains, either from scratch, for simple architectures, or by fine-tuning weights learned on ImageNet, for very deep ones. We then set the parameters of all branches to the values from the corresponding layers. We perform this training on the predefined training splits, when available, or on 75% of the images, otherwise. The initial values of the gate parameters are defined so as to set the activations to 1 K, for each of the K branches. This prevents our networks from initially favoring a particular branch for any domain. To train our networks, we use Stochastic Gradient Descent with a momentum of 0.9 and a variable learning rate defined by the annealing schedule of as µ p = µ0 (1+α·p) β, where p is the training progress, relative to the total number of training epochs, µ 0 is the initial learning rate, which we take to be 10 −2, and α = 10 and β = 0.75 as in. We eliminate exploding gradients by 2 -norm clipping. Furthermore, we modulate the plasticity of the activations at every gate as π (i) = 1 − p, that is, we make π (i) decay linearly as training progresses. As data preprocessing, we apply mean subtraction, as in. We train for 200 epochs, during which the network is exposed to all the image data from the source and target domains, but only to the annotations from the source domain(s). Our "On TD" oracle is trained on either the preset training splits, when available, or our defined training data, and evaluated on the corresponding test data. For the comparison to this oracle to be meaningful, we follow the same strategy for our DAMNets. That is, we use the unlabeled target data from the training splits only and report on the testing splits. This protocol differs from that of , which relied on a transductive evaluation, where all the target images, training and test ones, were seen by the networks during training. We evaluate our method in the task of image recognition for which we use several domain adaptation benchmark problems: Digits, which comprises three domains: MNIST , MNIST-M , and SVHN ; Office (Saenko et al., Table 1 : Domain Adaptation datasets and . We compare the accuracy of our DAMNet approach with that of DANN and of RPT , for image classification tasks commonly used to evaluate domain adaptation methods. Our DAMNets yield a significant accuracy boost in the presence of large domain shifts, particularly when using more than one source domain. A more comprehensive evaluation on all datasets is provided in Appendix D. Office-Home: Art (A), Clipart (C), Product (P), Real (R) Setup. As discussed in Section 3, our method is general and can work with any feed-forward network architecture. To showcase this, for the digit recognition datasets, we apply it to the LeNet and SVHNet architectures , which are very simple convolutional networks, well suited for small images. 
, we employ LeNet when using the synthetic datasets MNIST and MNIST-M as source domains, and SVHNet when SVHN acts as source domain. We extend these architectures to multibranch ones by defining the computational units as the groups of consecutive convolution, pooling and non-linear operations defined in the original model. For simplicity, we use as many branches within each computational unit as we have domains, and all branches from a computational unit follow the same architecture, which we provide in Appendix A, Figures 1 and 2. As backbone network to process all the rest of the datasets, we use a ResNet-50 , with the bottleneck layer modification of. While many multibranch configurations can be designed for such a deep network, we choose to make our gated computational units coincide with the layer groupings defined in , namely conv1, conv2 x, conv3 x, conv4 x, and conv5 x. The ing multibranch network is depicted in Appendix A, Figure 4. We feed our DAMNets images resized to 224 × 224 pixels, as expected by ResNet-50. Results. The for the digit recognition and Office-Home datasets are provided in Table 1. Results for Office and VisDA17 datasets are presented in Appendix D. Our approach outperforms the baselines in all cases. For the Digits datasets, in addition to the traditional two-domain setup, we also report when using two source domains simultaneously. Note that the reference method RPT does not apply to this setting, since it was designed to transform a single set of source parameters to the target ones. Altogether, our method consistently outperforms the others. Note that the first two columns correspond to the combinations reported in the literature. We believe, however, that the SVHN MNIST one is quite artificial, since, in practice, one would typically annotate simpler, synthetic images and aim to use real ones at test time. We therefore also report synthetic SVHN cases, which are much more challenging. The multi-source version of our method achieves a significant boost over the baselines in this scenario. To further demonstrate the potential of our approach in this setting, we replaced its backbone with the much deeper ResNet-50 network and applied it on upscaled versions of the images. As shown in the column indicated by a, this allowed us to achieve an accuracy close to 80%, which is remarkable for such a difficult adaptation task. On Office-Home, the gap between DAMNet and the baselines is again consistent across the different domain pairs. Note that, here, because of the relatively large number of classes, the overall performance is low for all methods. Importantly, our show that we gain performance by training on more than one source domain, and by leveraging all synthetic domains to transfer to the real one, our approach reaches an accuracy virtually equal to that of using full supervision on the target domain. Despite our best efforts, we were unable to obtain convincing for RPT using the authors' publicly available code, as for this dataset were not originally reported for RPT. Gate dynamics. To understand the way our networks learn the domain-specific branch assignments, we track the state of the gates for all computational units over all training epochs. In Figure 4, we plot the corresponding evolution of the gate activations for the DSLR+Webcam Amazon task on Office. Note that our DAMNet leverages different branches over time for each domain before reaching a firm decision. 
Interestingly, we can see that, with the exception of the first unit, which performs low-level computations, DSLR and Webcam share all branches. By contrast, Amazon, which has a significantly different appearance, mostly uses its own branches, except in two computational units. This evidences that our network successfully understands when domains are similar and can thus use similar computations. No adaptation 0.377 DANN 0.715 ADDA 0.731 Two-stream 0.732 RPT 0.743 DAMNet 0.792 We evaluate our method for the detection of drones from video frames, on the UAV-200 dataset , which contains examples of drones both generated artificially and captured from real video footage. Full details and example images are provided in Appendix B.3 Setup. Our domain adaptation leverages both the synthetic examples of drones, as source domain, and the limited amount of annotated real drones, as target domain, as well as the negative examples, to predict the class of patches from the validation set of real images. We follow closely the supervised setup and network architecture of , including the use of AdaDelta as optimizer, cross-entropy as loss function, and average precision as evaluation metric. Our multibranch computational units are defined as groupings of successive convolutions, nonlinearities, and pooling operations. The details of the architecture are provided in Appendix A, Figure 3. Results. Our method considerably surpasses all the others in terms of average precision, as shown in Table 2, thus validating DAMNets as effective models for leveraging synthetic data for domain adaptation in real-world problems. We validate the effectiveness of our method as a feature extractor, by combining it with the Maximum Classifier Discrepancy (MCD) method of. As MCD operates on the extracted encodings, we replace the encoding strategy that MCD uses, which is the same as DANN, with our DAMNet. Or, in other words, we replace the domain classifier in our approach with the corresponding MCD term. Specifically, we use a single computational unit with two branches, each of which replicates the architectures proposed in. We present the of combining MCD with DAMNet in Table 3. In all tested scenarios, we obtain improvements over using MCD as originally proposed. To obtain more insights about specific branch decisions, we evaluate the effects of adding extra branches to the network, as well as using branches with different capacities. When computational units are composed of branches of different capacities, DAMNets often assign branches with more capacity to more complex domains. To exemplify this, we trained a modified multibranch SVHNet for adaptation between MNIST and SVHN. Instead of the identical branches originally used, we replace the second branch in each computational unit with a similar branch where the convolution operation is performed by 1x1 rather than 5x5 kernels. These second branches, with Figure 4: Evolution of the gates' activations for each of the computational units in a multibranch ResNet-50 network, for the Office DSLR + Webcam Amazon domain adaptation problem. In the top two rows, we show the gates for the source domains and in the bottom row for the target one. All branches are initialized to parameters obtained from a single ResNet-50 trained on ImageNet. Note how for the first computational unit, conv1, each domain chooses to process the data with different branches. In the remaining units, the two source domains, which have similar appearance, share all the computations. 
By contrast, the target domain still uses its own branches in conv3 x, and conv4 x to account for its significantly different appearance. When arriving at conv 5x, the data has been converted to a domain-agnostic representation, and hence the same branch can operate on all domains. 25 times fewer parameters each, are mostly used by the simpler domain-MNIST in this case. We provide the gate evolution that reflects this in Appendix C, Figures 5 and 6. We explore the effects of using more branches than domains, so as to provide the networks with alternative branches from where to choose. In particular, we explore the case where K = D + 1. We evaluate multibranch LeNet and ResNet architectures under this setting. We show the gate activation evolution in Appendix C, Figures 7 and 8. During the training process, we have observed that the networks quickly choose to ignore extra branches when K > D. This suggests that they did not contribute to the learning of our feature extraction. We did not find experimental evidence to support that K > D is beneficial. We have introduced a domain adaptation approach that allows for adaptive, separate computations for different domains. Our framework relies on computational units that aggregate the outputs of multiple parallel operations, and on a set of trainable domain-specific gates that adapt the aggregation process to each domain. Our experiments have demonstrated the benefits of this approach over the state-of-the-art weight untying strategy; the greater flexibility of our method translates into a consistently better accuracy. Although we only experimented with using the same branch architectures within a computational unit, our framework generalizes to arbitrary branch architectures, the only constraint being that their outputs are of commensurate shapes. An interesting avenue for future research would therefore be to automatically determine the best operation to perform for each domain, for example by combining our approach with neural architecture search strategies. Figure 1: Multibranch LeNet. This architecture is a multibranch extension to the LeNet used by DANN (Figure 2 : Multibranch SVHNet. This architecture is a multibranch extension to the SVHNet used by DANN ( . We preserve the groupings described in the original paper . N denotes the number of classes in the dataset. MNIST consists of black and white images of handwritten digits from 0 to 9. All images are of size 28 × 28 pixels. The standard training and testing splits contain 60,000 and 10,000 examples, respectively. MNIST-M is synthetically generated by randomly replacing the foreground and pixels of random MNIST samples with natural images. Its image size is 32 × 32, and the standard training and testing splits contain 59,001 and 9,001 images, respectively. SVHN , the Street View House Numbers dataset, consists of natural scene images of numbers acquired from Google Street View. Its images are also of size 32 × 32 pixels, and its preset training and testing splits are of 73,257 and 26,032 images, respectively. The SVHN images are centered at the desired digit, but contain clutter, visual artifacts, and distractors from its surroundings. Office is a multiclass object recognition benchmark dataset, containing images of 31 categories of objects commonly found in office environments. It contains color images from three different domains: 2,817 images of products scraped from Amazon, 498 images acquired using a DSLR digital camera, and 795 images captured with a webcam. 
The images are of arbitrary sizes and aspect ratios. Figure 5: Gate evolution for a multibranch SVHN network with branches of different capacities. Branch 1 is the original branch that applies 5x5 convolutions to the image, whereas branch 2 is a similar architecture but with 1x1 convolutions instead. The network quickly recognizes that SVHN requires a more complex processing and hence assigns the respective branch to it for computational units 1 and 3. Figure 6: Gate evolution for a multibranch LeNet network with branches of different capacities. We have simplified the architecture to encapsulate the feature extraction into a single computational unit in this case. Similarly to the above, we modify the second branch for a simpler computation. The original branches apply convolution operations to extract 32 channels with a 5x5 kernel, and then to extract 48 channels from those with a 5x5 kernel. We replace them in the second branch with 24 channels 3x3 kernel and 48 channels 1x1 kernel convolutions, respectively, which yields commensurate shapes with the original branch, but with more than 20 times fewer parameters. Unlike in the above experiment, we do not force the gates to open or close. The network still assigns combinations of branches that reflect the difference in visual complexity of the domains. Table 4: Domain Adaptation . We compare the accuracy of our DAMNet approach with that of DANN and of RPT , for image classification tasks commonly used to evaluate domain adaptation methods. As illustrated in Appendix B, different source and target domain combinations present various degrees of domain shift, and some combinations are clearly more challenging than others. Our DAMNets yield a significant accuracy boost in the presence of large domain shifts, particularly when using more than one source domain. and Evaluated with a ResNet-50 * Results reported as Average Precision
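The parameter-count claims in the two gate-evolution captions above can be checked with a few lines of arithmetic. The sketch below counts kernel weights only (biases ignored); the channel widths for the SVHNet branch and the three input channels for the LeNet branch are assumptions, but the ratios they produce match the "25 times fewer" and "more than 20 times fewer" figures quoted in the captions.

# Quick check of the parameter-count claims above (kernel weights only, biases ignored).
def conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

# SVHNet variant: same channel widths, 5x5 kernels replaced by 1x1 kernels.
# The ratio is independent of the (assumed) channel widths.
print(conv_params(64, 64, 5) / conv_params(64, 64, 1))         # -> 25.0, i.e. 25x fewer

# LeNet variant: (32 ch, 5x5) + (48 ch, 5x5)  vs  (24 ch, 3x3) + (48 ch, 1x1),
# assuming 3 input channels.
orig = conv_params(3, 32, 5) + conv_params(32, 48, 5)
slim = conv_params(3, 24, 3) + conv_params(24, 48, 1)
print(orig / slim)                                             # -> ~22.7, i.e. >20x fewer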
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJxycxHKDS
A Multiflow Network is a dynamic architecture for domain adaptation that learns potentially different computational graphs per domain, so as to map them to a common representation where inference can be performed in a domain-agnostic fashion.
The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA ), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO. Proximal Policy Optimization is one of the most sample-efficient on-policy algorithms. However, it relies on a synchronous architecture for collecting experiences, which is closely tied to its trust region optimization objective. Other architectures such as IMPALA can achieve much higher throughputs due to the asynchronous collection of samples from workers. Yet, IMPALA suffers from reduced sample efficiency since it cannot safely take multiple SGD steps per batch as PPO can. The new agent, Importance Weighted Asynchronous Architectures with Clipped Target Networks (IMPACT), mitigates this inherent mismatch. Not only is the algorithm highly sample efficient, it can learn quickly, training 30 percent faster than IMPALA. At the same time, we propose a novel method to stabilize agents in distributed asynchronous setups and, through our ablation studies, show how the agent can learn in both a time and sample efficient manner. In our paper, we show that the algorithm IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency. In our experiments, we demonstrate in the experiments that IMPACT exceeds state-of-the-art agents in training time (with same hardware) while maintaining similar sample efficiency with PPO's. The contributions of this paper are as follows: 1. We show that when collecting experiences asynchronously, introducing a target network allows for a stabilized surrogate objective and multiple SGD steps per batch (Section 3.1). 2. We show that using a circular buffer for storing asynchronously collected experiences allows for smooth trade-off between real-time performance and sample efficiency (Section 3.2). 3. We show that IMPACT, when evaluated using identical hardware and neural network models, improves both in real-time and timestep efficiency over both synchronous PPO and IMPALA (Section 4). into a large training batch and the learner performs minibatch SGD. IMPALA workers asynchronously generate data. IMPACT consists of a batch buffer that takes in worker experience and a target's evaluation on the experience. The learner samples from the buffer. 
Reinforcement Learning assumes a Markov Decision Process (MDP) setup defined by the tuple (S, A, p, γ, r) where S and A represent the state and action space, γ ∈ is the discount factor, and p: S × A × S → R and R: S × A → R are the transition dynamics and reward function that models an environment. Let π(a t |s t): S × A → denote a stochastic policy mapping that returns an action distribution given state s t ∈ S. Rolling out policy π(a t |s t) in the environment is equivalent to sampling a trajectory τ ∼ P(τ), where τ:= (s 0, a 0, ...., a T −1, s T, a T). We can compactly define state and state-action marginals of the trajectory distribution p π (s t) and p π (s t, a t) induced by the policy π(a t |s t).The goal for reinforcement learning aims to maximize the following objective: When θ parameterizes π(a t |s t), the policy is updated according to the Policy Gradient Theorem : where π θ (s t, a t) is an estimator of the advantage function. The advantage estimator is usually defined as the 1-step TD error, π θ (s t, a t) = r(s t, a t) + γV (s t+1) −V (s t), whereV (s t) is an estimation of the value function. Policy gradients, however, suffer from high variance and large update-step sizes, oftentimes leading to sudden drops in performance. Per iteration, Proximal Policy Optimization (PPO) optimizes policy π θ from target π θold via the following objective function where r t (θ) = π θ (at|st) π θ old (at|st) and is the clipping hyperparameter. In addition, many PPO implementations use GAE-λ as a low bias, low variance advantage estimator for t (b). PPO's surrogate objective contains the importance sampling ratio r t (θ), which can potentially explode if π θold is too far from π θ. . PPO's surrogate loss mitigates this with the clipping function, which ensures that the agent makes reasonable steps. Alternatively, PPO can also be seen as an adaptive trust region introduced in TRPO (a). In Figure 1a, distributed PPO agents implement a synchronous data-gathering scheme. Before data collection, workers are updated to π old and aggregate worker batches to training batch D train. The learner performs many mini-batch gradient steps on D train. Once the learner is done, learner weights are broadcast to all workers, who start sampling again. In Figure 1b, IMPALA decouples acting and learning by having the learner threads send actions, observations, and values while the master thread computes and applies the gradients from a queue of learners experience . This maximizes GPU utilization and allows for increased sample throughput, leading to high training speeds on easier environments such as Pong. As the number of learners grows, worker policies begin to diverge from the learner policy, ing in stale policy gradients. 
To correct this, the IMPALA paper utilizes V-trace to correct the distributional shift: where, V φ is the value network, π θ is the policy network of the master thread, µ θ is the policy network of the learner thread, and c j = min c, Input: Batch size M, number of workers W, circular buffer size N, replay coefficient K, target update frequency t target, weight broadcast frequency t frequency, learning rates α and β 1: Randomly initialize network weights (θ, w) 2: Initialize target network (θ, w) ← (θ, w) 3: Create W workers and duplicate (θ, w) to each worker 4: Initialize circular buffer C(N, K) 5: for t = 1,.., T do Compute policy and value network gradients Update policy and value network weights If t ≡ 0 (mod t frequency), broadcast weights to workers 13: end for Worker-i Input: Worker sample batch size S 1: repeat 2: for t = 1,..., S do Store (s t, a t, r t, s t+1) ran by θ i in batch B i end for 6: If broadcasted weights exist, set θ i ← θ 8: until learner finishes 3 IMPACT ALGORITHM Like IMPALA, IMPACT separates sampling workers from learner workers. Algorithm 1 and Figure 1c describe the main training loop and architecture of IMPACT. In the beginning, each worker copies weights from the master network. Then, each worker uses their own policy to collect trajectories Since π worker may differ per worker, using this ratio in trust region conflicts across multiple batches. Since π learner is updated after each batch from the worker, only a single SGD step can be taken per batch. The IMPACT objective allows for multiple SGD steps per async batch and has a stable trust region. Figure 2: In asynchronous PPO, there are multiple candidate policies from which the trust region can be defined: π workeri, the policy of the worker process that produced the batch of experiences, π learner, the current policy of the learner process, and π target, the policy of a target network. Introducing the target network allows for both a stable trust region and multiple SGD steps per batch of experience collected asynchronously from workers, improving sample efficiency. Since workers can generate experiences asynchronously from their copy of the master policy, this also allows for good real-time efficiency. and sends the data (s t, a t, r t) to the circular buffer. Simultaneously, workers also asynchronously pull policy weights from the master learner. In the meantime, the target network occasionally syncs with the master learner every t target iterations. The master learner then repeatedly draws experience from the circular buffer. Each sample is weighted by the importance ratio of πtarget. The target network is used to provide a stable trust region (Figure 2), allowing multiple steps per batch (i.e., like PPO) even in the asynchronous setting (i.e., with the IMPALA architecture). In the next section, we describe the design of this improved objective. PPO gathers experience from previous iteration's policy π θold, and the current policy trains by importance sampling off-policy experience with respect to π θ. In the asynchronous setting, worker i's policy, denoted as π workeri, generates experience for the policy network π θ. The probability that batch B comes from worker i can be parameterized as a categorical distribution i ∼ D(α 1, ..., α n). We include this by adding an extra expectation to the importance-sampled policy gradient objective (IS-PG) : Since each worker contains a different policy, the agent introduces a target network for stability (Figure 2). 
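Because the display equation for the clipped surrogate did not survive extraction, the sketch below restates the standard PPO objective that IMPACT builds on. It is the textbook form, not code from the paper; in the asynchronous setting discussed above, the log-probabilities in the denominator would come from the worker (or target) policy rather than from π_θold.

import torch

def ppo_clip_loss(logp_new, logp_behaviour, advantages, eps=0.2):
    """Standard PPO clipped surrogate, written as a loss to minimize.

    logp_new:       log pi_theta(a_t|s_t) under the policy being optimized
    logp_behaviour: log-probabilities under the policy that generated the data
                    (pi_theta_old in synchronous PPO; the worker/target policy in the
                    asynchronous setting discussed above)
    advantages:     advantage estimates, e.g. from GAE-lambda
    """
    ratio = torch.exp(logp_new - logp_behaviour)                     # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()                     # negate: optimizers minimize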
Off-policy agents such as DDPG and DQN update target networks with a moving average. For IMPACT, we periodically update the target network with the master network. However, training with importance weighted ratio π θ πtarget can lead to numerical instability, as shown in Figure 3. To prevent this, we clip the importance sampling ratio from worker policy,π workeri, to target policy, π target: where β = 1 ρ. In the experiments, we set ρ as a hyperparameter with ρ ≥ 1 and β ≤ 1. To see why clipping is necessary, when master network's action distribution changes significantly over few training iterations, worker i's policy, π workeri, samples data outside that of target policy, π target, leading to large likelihood ratios, In (b), we show the target network update frequency is robust to a range of choices. We try target network update frequency ttarget equal to the multiple (ranging from 1/16 and 16) of n = N · K, the product of the size of circular buffer and the replay times for each batch in the buffer. large IS ratios to ρ. Figure 10 in Appendix E provides additional intuition behind the target clipping objective. We show that the target network clipping is a lower bound of the IS-PG objective. For ρ > 1, the clipped target ratio is larger and serves to augment advantage estimator t. This incentivizes the agent toward good actions while avoiding bad actions. Thus, higher values of ρ encourages the agent to learn faster at the cost of instability. We use GAE-λ with V-trace . The V-trace GAE-λ modifies the advantage function by adding clipped importance sampling terms to the summation of TD errors: where c i = min c, πtarget(aj |sj) πworker i (aj |sj) (we use the convention t−1 j=t c j = 1) and δ i V is the importance sampled 1-step TD error introduced in V-trace. IMPACT uses a circular buffer (Figure 4) to emulate the mini-batch SGD used by standard PPO. The circular buffer stores N batches that can be traversed at max K times. Upon being traversed K times, a batch is discarded and replaced by a new worker batch. For motivation, the circular buffer and the target network are analogous to mini-batching from π old experience in PPO. When target network's update frequency n = N K, the circular buffer is equivalent to distributed PPO's training batch when the learner samples N minibatches for K SGD iterations. This is in contrast to standard replay buffers, such as in ACER and APE-X, where transitions (s t, a t, r t, s t+1) are either uniformly sampled or sampled based on priority, and, when the buffer is full, the oldest transitions are discarded . We investigate the performance of the clipped-target objective relative to prior work, which includes PPO and IS-PG based objectives. Specifically, we consider the following ratios below: For all three experiments, we truncate all three ratios with PPO's clipping function: c(R) = clip(R, 1−, 1+) and train in an asynchronous setting. Figure 4 (a) reveals two important takeaways: first, R 1 suffers from sudden drops in performance midway through training. Next, R 2 trains stably but does not achieve good performance. We theorize that R 1 fails due to the target and worker network mismatch. During periods of training where the master learner undergoes drastic changes, worker action outputs vastly differ from the learner outputs, ing in small action probabilities. This creates large ratios in training and destabilizes training. We hypothesize that R 2 fails due to different workers pushing and pulling the learner in multiple directions. 
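Before turning to the ablation results, one way to realize the circular buffer described above is sketched below: it keeps at most N worker batches and retires a batch after it has been traversed K times. The uniform sampling rule and the evict-oldest-on-push behaviour are assumptions; the text only specifies the N-slot / K-traversal semantics.

import random
from collections import deque

class CircularBuffer:
    """Sketch of the circular buffer described above: it holds up to N worker batches,
    and each batch may be traversed at most K times before it is discarded."""
    def __init__(self, N, K):
        self.N, self.K = N, K
        self.slots = deque()             # each entry: [batch, remaining_traversals]

    def push(self, batch):
        if len(self.slots) == self.N:    # buffer full: the oldest stored batch is replaced
            self.slots.popleft()
        self.slots.append([batch, self.K])

    def sample(self):
        idx = random.randrange(len(self.slots))
        entry = self.slots[idx]
        entry[1] -= 1                    # use up one of its K allowed traversals
        batch = entry[0]
        if entry[1] == 0:                # traversed K times: discard and await a fresh batch
            del self.slots[idx]
        return batch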
The learner moves forward with the most recent worker's suggestions without developing a proper trust region, ing in many worker's suggestions conflicting with each other. The loss function, R 3 shows that clipping is necessary and can help facilitate training. By clipping the target-worker ratio, we make sure that the ratio does not explode and destabilize training. Furthermore, we prevent workers from making mutually destructive suggestions by having a target network provide singular guidance. In Section 3.2, an analogy was drawn between PPO's mini-batching mechanism and the circular buffer. Our primary benchmark for target update frequency is n = N · K, where N is circular buffer size and K is maximum replay coefficient. This is the case when PPO is equivalent to IMPACT. In Figure 4 (b), we test the frequency of updates with varying orders of magnitudes of n. In general, we find that agent performance is robust to vastly differing frequencies. However, when n = 1 ∼ 4, the agent does not learn. Based on empirical , we theorize that the agent is able to train as long as a stable trust region can be formed. On the other hand, if update frequency is too low, the agent is stranded for many iterations in the same trust region, which impairs learning speed. Counter to intuition, the tradeoff between time and sample efficiency when K increases is not necessarily true. In Figure 4b and 4c, we show that IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency. When K = 2, IMPACT performs the best in both time and sample efficiency. Our reveal that wall-clock time and sample efficiency can be optimized based on tuning values of K in the circular buffer. We investigate how IMPACT attains greater performance in wall clock-time and sample efficiency compared with PPO and IMPALA across six different continuous control and discrete action tasks. We tested the agent on three continuous environments (Figure 5): HalfCheetah, Hopper, and Humanoid on 16 CPUs and 1 GPU. The policy networks consist of two fully-connected layers of 256 units with nonlinear activation tanh. The critic network shares the same architecture as the policy network. For consistentency, same network architectures were employed across PPO, IMPALA, and IMPACT. For the discrete environments (Figure 6), Pong, SpaceInvaders, and Breakout were chosen as common benchmarks used in popular distributed RL libraries . Additional experiments for discrete environments are in the Appendix. These experiments were ran on 32 CPUs and 1 GPU. The policy network consists of three 4x4 and one 11x11 conv layer, with nonlinear activation ReLU. The critic network shares weights with the policy network. The input of the network is a stack of four 42x42 down-sampled images of the Atari environment. The hyper-parameters for continuous and discrete environments are listed in the Appendix B table 1 and 2 respectively. Figures 5 and 6 show the total average return on evaluation rollouts for IMPACT, IMPALA and PPO. We train each algorithm with three different random seeds on each environment for a total time of three hours. According to the experiments, IMPACT is able to train much faster than PPO and IMPALA in both discrete and continuous domains, while preserving same or better sample efficiency than PPO. Our reveal that continuous control tasks for IMPACT are sensitive to the tuple (N, K) for the circular buffer. N = 16 and K = 20 is a robust choice for continuous control. 
Although higher K inhibits workers' sample throughput, the increased sample efficiency from replaying experiences results in an overall reduction in training wall-clock time and higher reward. For discrete tasks, N = 1 and K = 2 works best. Empirically, agents learn faster from new experience than from replaying old experience, showing how exploration is crucial to achieving high asymptotic performance in discrete environments. Figure 7: Performance of IMPACT with respect to the number of workers in both continuous and discrete control tasks. Figure 7 shows how IMPACT's performance scales relative to the number of workers. More workers means increased sample throughput, which in turn increases training throughput (the rate at which the learner consumes batches). With the learner consuming more worker data per second, IMPACT can attain better performance in less time. However, as the number of workers increases, the observed gains in performance begin to decline. Distributed RL architectures are often used to accelerate training. Gorila and A3C use workers to compute gradients to be sent to the learner. A2C and IMPALA send experience tuples to the learner. Distributed replay buffers, introduced in ACER and Ape-X, collect worker experience and define an overarching heuristic for learner batch selection. IMPACT is the first to fully incorporate the sample-efficiency benefits of PPO in an asynchronous setting. Surreal PPO also studies training with PPO in the asynchronous setting, but does not consider adaptation of the surrogate objective or IS-correction. Its use of a target network for broadcasting weights to workers is also entirely different from IMPACT's. Consequently, IMPACT is able to achieve better results in both real-time and sample efficiency. Off-policy methods, including DDPG and QProp, utilize target networks to stabilize learning the Q function. This use of a target network is related to but different from IMPACT's, which uses the network to define a stable trust region for the PPO surrogate objective. In conclusion, we introduce IMPACT, which extends PPO with a stabilized surrogate objective for asynchronous optimization, enabling greater real-time performance without sacrificing timestep efficiency. We show the importance of the IMPACT objective to stable training, and show it can outperform tuned PPO and IMPALA baselines in both real-time and timestep metrics. In Figure 9, we gradually add components to IMPALA until the agent is equivalent to IMPACT. Starting from IMPALA, we gradually add PPO's objective function, the circular replay buffer, and target-worker clipping. In particular, IMPALA with PPO's objective function and circular replay buffer is equivalent to an asynchronous variant of PPO (APPO). APPO fails to perform as well as synchronous distributed PPO, since PPO is an on-policy algorithm. In Figure 6, IMPALA performs substantially worse than the other agents in continuous environments. We postulate that IMPALA suffers from low asymptotic performance here since its objective is an importance-sampled version of the Vanilla Policy Gradient (VPG) objective, which is known to suffer from high variance and large update-step sizes. We found that for VPG, higher learning rates encourage faster learning in the beginning but performance drops to negative return later in training. In Appendix B, for IMPALA, we heavily tuned the learning rate, finding that small learning rates stabilize learning at the cost of low asymptotic performance.
Prior work also reveals that agents that use VPG fail to attain good performance in non-trivial continuous tasks. Our results with IMPALA reach similar performance to other VPG-based algorithms. A3C, the closest neighbor to IMPALA, uses workers to compute gradients from the VPG objective to send to the learner thread. A3C performs well in InvertedPendulum yet flounders in the other continuous environments. The following ratios represent the objective functions for the different ablation studies. In the plots (Figure 10), we set the advantage function to be one, i.e. Â t = 1. • IS ratio: According to Figure 10, the IS ratio is large when π worker_i assigns low probability to the sampled action. The IMPACT target ε-clip is a lower bound of the PPO ε-clip. In a distributed asynchronous setting, the trust region suffers from larger variance stemming from off-policy data. The IMPACT target ε-clip ratio mitigates this by encouraging conservative and reasonable policy-gradient steps.
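To make the behaviour shown in Figure 10 concrete, the toy snippet below (ours, not from the paper) contrasts the unclipped worker-to-target IS correction with its clipped counterpart, assuming the correction takes the form π target /π worker_i suggested by the discussion above; ρ = 2 is a placeholder value:

```python
import numpy as np

rho = 2.0
beta = 1.0 / rho                          # clip range [beta, rho] for the IS correction

pi_target = 0.5                           # probability of the sampled action under the target network
pi_worker = np.linspace(0.01, 1.0, 100)   # worker probabilities, including very small ones

raw_ratio = pi_target / pi_worker               # unclipped: explodes as pi_worker -> 0
clipped_ratio = np.clip(raw_ratio, beta, rho)   # bounded to [1/rho, rho]

print(raw_ratio.max(), clipped_ratio.max())     # 50.0 vs 2.0 with these placeholder numbers
```

As π worker_i → 0 the raw ratio grows without bound, whereas the clipped ratio never exceeds ρ, which is what keeps the resulting policy-gradient steps conservative.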
BJeGlJStPr
IMPACT helps RL agents train faster by decreasing training wall-clock time and increasing sample efficiency simultaneously.
In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows to extend well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets. Learning good representations is seen by many machine learning researchers as the main reason behind the tremendous successes of the field in recent years . In image analysis , natural language processing or reinforcement learning , groundbreaking rely on efficient and flexible deep learning architectures that are capable of transforming a complex input into a simple vector while retaining most of its valuable features. The universal approximation theorem (; ; ;) provides a theoretical framework to analyze the expressive power of such architectures by proving that, under mild hypotheses, multi-layer perceptrons (MLPs) can uniformly approximate any continuous function on a compact set. This provided a first theoretical justification of the strong approximation capabilities of neural networks, and was the starting point of more refined analyses providing valuable insights into the generalization capabilities of these architectures (; ; ;). Despite a large literature and state-of-the-art performance on benchmark graph classification datasets, graph neural networks yet lack a similar theoretical foundation . Universality for these architectures is either hinted at via equivalence with approximate graph isomorphism tests (k-WL tests in Xu et al. 2019; Maron et al. 2019a), or proved under restrictive assumptions (finite node attribute space in Murphy et al. 2019). In this paper, we introduce Colored Local Iterative Procedure 1 (CLIP), which tackles the limitations of current Message Passing Neural Networks (MPNNs) by showing, both theoretically and experimentally, that adding a simple coloring scheme can improve the flexibility and power of these graph representations. More specifically, our contributions are: 1) we provide a precise mathematical definition for universal graph representations, 2) we present a general mechanism to design universal neural networks using separability, 3) we propose a novel node coloring scheme leading to CLIP, the first provably universal extension of MPNNs, 4) we show that CLIP achieves state of the art on benchmark datasets while significantly outperforming traditional MPNNs as well as recent methods on graph property testing. The rest of the paper is organized as follows: Section 2 gives an overview of the graph representation literature and related works. Section 3 provides a precise definition for universal representations, as well as a generic method to design them using separable neural networks. In Section 4, we show that most state-of-the-art representations are not sufficiently expressive to be universal. Then, using the analysis of Section 3, Section 5 provides CLIP, a provably universal extension of MPNNs. 
Finally, Section 6 shows that CLIP achieves state-of-the-art accuracies on benchmark graph classification tasks, as well as outperforming its competitors on graph property testing problems. The first works investigating the use of neural networks for graphs used recurrent neural networks to represent directed acyclic graphs. More generic graph neural networks were later introduced, and may be divided into two categories. 1) Spectral methods, which perform convolution in the Fourier domain of the graph through the spectral decomposition of the graph Laplacian. 2) Message passing neural networks, sometimes simply referred to as graph neural networks, which are based on the aggregation of neighborhood information through a local iterative process. This category contains most state-of-the-art graph representation methods such as DeepWalk, graph attention networks, GraphSAGE or GIN. Recently, it was shown that MPNNs are, at most, as expressive as the Weisfeiler-Lehman (WL) test for graph isomorphism. This surprising result led to several works proposing MPNN extensions to improve their expressivity and ultimately tend towards universality. However, these graph representations are either only as powerful as the k-WL test, or provide universal graph representations under the restrictive assumption of a finite node attribute space. Other recent approaches require tensors of quadratic order in the size of the considered graphs. Some more powerful GNNs have been studied and benchmarked on classical real-world datasets and on graph property testing: a set of problems that classical MPNNs cannot handle. Our work thus provides a more general and powerful result of universality, matching the original definition of universality for MLPs. In this section we present the theoretical tools used to design our universal graph representation. More specifically, we show that separable representations are sufficiently flexible to capture all relevant information about a given object, and may be extended into universal representations. Let X, Y be two topological spaces; then F(X, Y) (resp. C(X, Y)) denotes the space of all functions (resp. continuous functions) from X to Y. Moreover, for any group G acting on a set X, X/G denotes the set of orbits of X under the action of G (see Appendix B for more details). Finally, ∥·∥ is a norm on R d, and P n is the set of all permutation matrices of size n. In what follows, we assume that all the considered topological spaces are Hausdorff: each pair of distinct points can be separated by two disjoint open sets. This assumption is rather weak (e.g. all metric spaces are Hausdorff) and is verified by most topological spaces commonly encountered in the field of machine learning. Let X be a set of objects (e.g. vectors, images, graphs, or temporal data) to be used as input information for a machine learning task (e.g. classification, regression or clustering). In what follows, we denote as a vector representation of X a function f: X → R d that maps each element x ∈ X to a d-dimensional vector f(x) ∈ R d. A standard setting for supervised representation learning is to define a class of vector representations F d ⊂ F(X, R d) (e.g. convolutional neural networks for images) and use the target values (e.g. image classes) to learn a good vector representation in light of the supervised learning task (i.e. one vector representation f ∈ F d that leads to a good accuracy on the learning task).
In order to present more general results, we will consider neural network architectures that can output vectors of any size, i.e. F ⊂ ∪_{d∈N*} F(X, R d), and will denote by F d the set of d-dimensional vector representations of F. A natural characteristic to ask from the class F is to be generic enough to approximate any vector representation, a notion that we will denote as universal representation. In other words, F is a universal representation of a normed space X if and only if, for any continuous function φ: X → R d, any compact K ⊂ X and any ε > 0, there exists f ∈ F d such that max_{x∈K} ∥φ(x) − f(x)∥ ≤ ε. One of the most fundamental theorems of neural network theory states that one-hidden-layer MLPs are universal representations of the m-dimensional vector space R m. Theorem 1. Let ϕ: R → R be a continuous non-polynomial activation function. For any compact K ⊂ R m and d ∈ N*, two-layer neural networks with activation ϕ are uniformly dense in the set C(K, R d). However, for graphs and structured objects, universal representations are hard to obtain due to their complex structure and invariance to a group of transformations (e.g. permutations of the node labels). We show in this paper that a key topological property, separability, may lead to universal representations of those structures. Loosely speaking, universal representations can approximate any vector-valued function. It is thus natural to require that these representations are expressive enough to separate each pair of dissimilar elements of X. Definition 2 (Separability). A set of functions F ⊂ F(X, Y) is said to separate points of X if for every pair of distinct points x and y, there exists f ∈ F such that f(x) ≠ f(y). We will say that F is separable if its 1-dimensional representations F 1 separate points of X. Separability is rather weak, as we only require the existence of different outputs for every pair of distinct inputs. Unsurprisingly, we now show that it is a necessary condition for universality (see Appendix A for all the detailed proofs). Proposition 1. Let F be a universal representation of X; then F 1 separates points of X. While separability is necessary for universal representations, it is also key to designing neural network architectures that can be extended into universal representations. More specifically, under technical assumptions, separable representations can be composed with a universal representation of R d (such as MLPs) to become universal. Theorem 2. For all d ≥ 0, let M d be a universal approximation of R d, and let F be a class of vector representations of X that is continuous, separable and stable by concatenation. Then the class of compositions {ψ ◦ f: f ∈ F d, ψ ∈ M d, d ∈ N*} is a universal representation of X. Stability by concatenation is verified by most neural network architectures, as illustrated for MLPs in Figure 1. The proof of Theorem 2 relies on the Stone-Weierstrass theorem, whose assumptions are continuity, separability, and the fact that the class of functions is an algebra. Fortunately, composing a separable and concatenable representation with a universal representation automatically leads to an algebra, and thus the applicability of the Stone-Weierstrass theorem and the desired result. A complete derivation is available in Appendix A. Since MLPs are universal representations of R d, Theorem 2 implies a convenient way to design universal representations of more complex object spaces: create a separable representation and compose it with a simple MLP (see Figure 2). Corollary 1. A continuous, concatenable and separable representation of X composed with an MLP is universal.
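Corollary 1 suggests a simple design recipe; the following PyTorch-style sketch (ours, purely illustrative) shows the two-step structure, where `feature_map` stands for any continuous, concatenable and separable representation and the MLP head plays the role of M d:

```python
import torch
import torch.nn as nn

class UniversalHead(nn.Module):
    """Compose a separable representation f : X -> R^d with an MLP, as in Corollary 1."""

    def __init__(self, feature_map: nn.Module, feat_dim: int, out_dim: int, hidden: int = 128):
        super().__init__()
        self.feature_map = feature_map       # assumed continuous, concatenable, separable
        self.mlp = nn.Sequential(            # universal approximator on R^d (Theorem 1)
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.mlp(self.feature_map(x))   # psi composed with f
```

Separability of `feature_map` is the only structural requirement on the first stage; the MLP supplies the approximation power on R d.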
Note that many neural networks of the deep learning literature have this two steps structure, including classical image CNNs such as AlexNet or Inception . In this paper, we use Corollary 1 to design universal graph and neighborhood representations, although the method is much more generic and may be applied to other objects. In this section, we first provide a proper definition for graphs with node attributes, and then show that message passing neural networks are not sufficiently expressive to be universal. Consider a dataset of n interacting objects (e.g. users of a social network) in which each object i ∈ 1, n has a vector attribute v i ∈ R m and is a node in an undirected graph G with adjacency matrix A ∈ R n×n. Definition 3. The space of graphs of size n with m-dimensional node attributes is the quotient space where A is the adjacency matrix of the graph, v contains the m-dimensional representation of each node in the graph and the set of permutations matrices P n is acting on (v, A) by Moreover, we limit ourselves to graphs of maximum size n max, where n max is a large integer. This allows us to consider functions on graphs of different sizes without obtaining infinite dimensional spaces and infinitely complex functions that would be impossible to learn via a finite number of samples. We thus define Graph m = n≤nmax Graph m,n. More details on the technical topological aspects of the definition are available in Appendix B, as well as a proof that Graph m is Hausdorff. A common method for designing graph representations is to rely on local iterative procedures. Following the notations of , a message passing neural network (MPNN) is made of three consecutive phases that will create intermediate node representations x i,t for each node i ∈ 1, n and a final graph representation x G as described by the following procedure: 1) Initialization: All node representations are initialized with their node attributes 2) Aggregation and combination: T local iterative steps are performed in order to capture larger and larger structural characteristics of the graph. 3) Readout: This step combines all final node representations into a single graph representation: where READOUT is permutation invariant. Unfortunately, while MPNNs are very efficient in practice and proven to be as expressive as the Weisfeiler-Lehman algorithm , they are not sufficiently expressive to construct isomorphism tests or separate all graphs (for example, consider k-regular graphs without node attributes, for which a small calculation shows that any MPNN representation will only depend on the number of nodes and degree k ). As a direct application of Proposition 1, MPNNs are thus not expressive enough to create universal representations. In this section, we present Colored Local Iterative Procedure (CLIP), an extension of MPNNs using colors to differentiate identical node attributes, that is able to capture more complex structural graph characteristics than traditional MPNNs. This is proved theoretically through a universal approximation theorem in Section 5.3 and experimentally in Section 6. CLIP is based on three consecutive steps: 1) graphs are colored with several different colorings, 2) a neighborhood aggregation scheme provides a vector representation for each colored graph, 3) all vector representations are combined to provide a final output vector. We now provide more information on the coloring scheme. 
In order to distinguish non-isomorphic graphs, our approach consists in coloring the nodes of the graph that have identical attributes. This idea is inspired by classical graph isomorphism algorithms that use colors to distinguish nodes, and may be viewed as an extension of the one-hot encodings used for graphs without node attributes. For any k ∈ N, let C k be a finite set of k colors. These colors may be represented as one-hot encodings (C k is then the natural basis of R k) or, more generally, as any finite set of k elements. At initialization, we first partition the nodes into groups of identical attributes V 1, ..., V K ⊆ {1, ..., n}. Then, for a subset V k of size |V k|, we give to each of its nodes a distinct color from C |V k| (hence using a set of colors of size |V k|). For example, Figure 3 shows two colorings of the same graph, which is decomposed into three groups V 1, V 2 and V 3 containing nodes with attributes a, b and c respectively. Since V 1 contains only two nodes, a coloring of the graph will attribute two distinct colors (depicted as blue and red) to these nodes. More precisely, the set of colorings C(v, A) of a graph G = (v, A) is defined as the set of all node-to-color assignments obtained in this way, i.e. all assignments that give distinct colors of C |V k| to the nodes within each group V k. In the CLIP algorithm, we add a coloring scheme to an MPNN in order to distinguish identical node attributes. This is achieved by modifying the initialization and readout phases of MPNNs as follows. We first select a set C k ⊆ C(v, A) of k distinct colorings uniformly at random (see the definition of C(v, A) above). Then, for each coloring c ∈ C k, node representations are initialized with their node attributes concatenated with their color: x i,0 = (v i, c i). This step is performed for all colorings c ∈ C k, using a universal set representation as the aggregation function, in which ψ and ϕ are MLPs with continuous non-polynomial activation functions and ψ(x, y) denotes the result of ψ applied to the concatenation of x and y. The aggregation scheme we propose is closely related to DeepSet, and a direct application of Corollary 1 proves the universality of our architecture. More details, as well as the proof of universality, are available in Appendix C. 3. Colored readout: This step performs a maximum over all possible colorings in order to obtain a final coloring-independent graph representation. In order to keep the stability by concatenation, the maximum is taken coefficient-wise, where ψ is an MLP with continuous non-polynomial activation functions. We treat k as a hyper-parameter of the algorithm and call k-CLIP (resp. ∞-CLIP) the algorithm using k colorings (resp. all colorings, i.e. k = |C(v, A)|). Note that, while our focus is graphs with node attributes, the approach used for CLIP is easily extendable to similar data structures such as directed or weighted graphs with node attributes, graphs with node labels, graphs with edge attributes or graphs with additional attributes at the graph level. As the colorings are chosen at random, the CLIP representation is itself random as soon as k < |C(v, A)|, and the number of colorings k will impact the variance of the representation. However, ∞-CLIP is deterministic and permutation invariant, as MPNNs are permutation invariant. The separability is less trivial and is ensured by the coloring scheme. Theorem 3. The ∞-CLIP algorithm with one local iteration (T = 1) is a universal representation of the space Graph m of graphs with node attributes. The proof of Theorem 3 relies on showing that ∞-CLIP is separable and applying Corollary 1.
This is achieved by fixing a coloring on one graph and identifying all nodes and edges of the second graph using the fact that all pairs (v i, c i) are dissimilar (see Appendix D). Similarly to the case of MLPs, only one local iteration is necessary to ensure universality of the representation. This rather counter-intuitive is due to the fact that all nodes can be identified by their color, and the readout function can aggregate all the structural information in a complex and non-trivial way. However, as for MLPs, one may expect poor generalization capabilities for CLIP with only one local iteration, and deeper networks may allow for more complex representations and better generalization. This point is addressed in the experiments of Section 6. Moreover, ∞-CLIP may be slow in practice due to a large number of colorings, and reducing k will speed-up the computation. Fortunately, while k-CLIP is random, a similar universality theorem still holds even for k = 1. Theorem 4. The 1-CLIP algorithm with one local iteration (T = 1) is a random representation whose expectation is a universal representation of the space Graph m of graphs with node attributes. The proof of Theorem 4 relies on using ∞-CLIP on the augmented node attributes v i = (v i, c i). As all node attributes are, by design, different, the max over all colorings in Eq. disappears and, for any coloring, 1-CLIP returns an ε-approximation of the target function (see Appendix D). Remark 1. Note that the variance of the representation may be reduced by averaging over multiple samples. Moreover, the proof of Theorem 4 shows that the variance can be reduced to an arbitrary precision given enough training epochs, although this may lead to very large training times in practice. As the local iterative steps are performed T times on each node and the complexity of the aggregation depends on the number of neighbors of the considered node, the complexity is proportional to the number of edges of the graph E and the number of steps T. Moreover, CLIP performs this iterative aggregation for each coloring, and its complexity is also proportional to the number of chosen colorings k = |C k |. Hence the complexity of the algorithm is in O(kET). Note that the number of all possible colorings for a given graph depends exponentially in the size of the groups V 1,..., V K, and thus ∞-CLIP is practical only when most node attributes are dissimilar. This worst case exponential dependency in the number of nodes can hardly be avoided for universal representations. Indeed, a universal graph representation should also be able to solve the graph isomorphism problem. Despite the existence of polynomial time algorithms for a broad class of graphs , graph isomorphism is still quasi-polynomial in general . As a , creating a universal graph representation with polynomial complexity for all possible graphs and functions to approximate is highly unlikely, as it would also induce a graph isomorphism test of polynomial complexity and thus solve a very hard and long standing open problem of theoretical computer science. In this section we show empirically the practical efficiency of CLIP and its relaxation. We run two sets of experiments to compare CLIP w.r.t. state-of-the-art methods in supervised learning settings: i) on 5 real-world graph classification datasets and ii) on 4 synthetic datasets to distinguish structural graph properties and isomorphism. 
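Before detailing the experimental protocol, the coloring and readout steps of Section 5 can be made concrete with a short sketch (a simplified illustration of ours, not the authors' code: the MPNN and its readout are passed in as a black box, colors are padded to a common dimension, and the coefficient-wise maximum is taken directly over the k readouts):

```python
import random
from collections import defaultdict

import numpy as np

def sample_coloring(node_attrs, max_colors):
    """One random coloring: nodes with identical attributes receive distinct one-hot colors.
    Colors are padded to a common dimension max_colors (a simplification of ours)."""
    groups = defaultdict(list)
    for i, a in enumerate(node_attrs):
        groups[tuple(a)].append(i)
    colors = np.zeros((len(node_attrs), max_colors))
    for members in groups.values():
        assert len(members) <= max_colors
        for node, c in zip(members, random.sample(range(len(members)), len(members))):
            colors[node, c] = 1.0                      # distinct color within the group
    return colors

def k_clip(node_attrs, adjacency, k, mpnn_readout, max_colors):
    """k-CLIP: run the MPNN on k random colorings and take a coefficient-wise maximum."""
    attrs = np.asarray(node_attrs, dtype=float)
    outs = []
    for _ in range(k):
        c = sample_coloring(node_attrs, max_colors)
        x0 = np.concatenate([attrs, c], axis=1)        # initialization x_i,0 = (v_i, c_i)
        outs.append(mpnn_readout(x0, adjacency))       # any MPNN + permutation-invariant readout
    return np.max(np.stack(outs), axis=0)              # coloring-independent graph representation
```

With k = |C(v, A)| and an exhaustive enumeration of colorings instead of sampling, this reduces to ∞-CLIP.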
Both experiments follow the same experimental protocol as described in: 10-fold cross validation with grid search hyper-parameter optimization. More details on the experimental setup are provided in Appendix E. We performed experiments on five benchmark datasets extracted from standard social networks (IMDBb and IMDBm) and bio-informatics databases (MUTAG, PROTEINS and PTC). All dataset characteristics (e.g. size, classes), as well as the experimental setup, are available in Appendix E. Following standard practices for graph classification on these datasets, we use one-hot encodings of node degrees as node attributes for IMDBb and IMDBm , and perform singlelabel multi-class classification on all datasets. We compared CLIP with six state-of-the-art baseline algorithms: 1) WL: Weisfeiler-Lehman subtree kernel , 2) AWL: Anonymous Walk Embeddings , 3) DCNN: Diffusion-convolutional neural networks , 4) PS: PATCHY-SAN , 5) DGCNN: Deep Graph CNN and 6) GIN: Graph Isomorphism Network . WL and AWL are representative of unsupervised methods coupled with an SVM classifier, while DCNN, PS, DGCNN and GIN are four deep learning architectures. As the same experimental protocol as that of was used, we present their reported on Table 1. Table 1 shows, CLIP can achieve state-of-the-art performance on the five benchmark datasets. Moreover, CLIP is consistent across all datasets, while all other competitors have at least one weak performance. This is a good indicator of the robustness of the method to multiple classification tasks and dataset types. Finally, the addition of colors does not improve the accuracy for these graph classification tasks, except on the MUTAG dataset. This may come from the small dataset sizes (leading to high variances) or an inherent difficulty of these classification tasks, and contrasts with the clear improvements of the method for property testing (see Section 6.2). More details on the performance of CLIP w.r.t. the number of colors k are available in Appendix E. Remark 2. In three out of five datasets, none of the recent state-of-the-art algorithms have statistically significantly better than older methods (e.g. WL). We argue that, considering the high variances of all classification algorithms on classical graph datasets, graph property testing may be better suited to measure the expressiveness of graph representation learning algorithms in practice. We now investigate the ability of CLIP to identify structural graph properties, a task which was previously used to evaluate the expressivity of graph kernels and on which the Weisfeiler-Lehman subtree kernel has been shown to fail for bounded-degree graphs . The performance of our algorithm is evaluated for the binary classification of four different structural properties: 1) connectivity, 2) bipartiteness, 3) triangle-freeness, 4) circular skip links (see Appendix E for precise definitions of these properties) against three competitors: a) GIN, arguably the most efficient MPNN variant yet published , b) Ring-GNN, a permutation invariant network that uses the ring of matrix addition and multiplication , c) RP-GIN, the Graph Isomorphism Network combined with Relational Pooling, as described by , which is able to distinguish certain cases of non-isomorphic regular graphs. We provide all experimental details in Appendix E. Table 2: Classification accuracies of the synthetic datasets. k-RP-GIN refers to a relational pooling averaged over k random permutations. We report Ring-GNN from. 
Connectivity Bipartiteness Triangle-freeness Circular skip links mean ± std mean ± std mean ± std mean ± std max min Table 2 shows that CLIP is able to capture the structural information of connectivity, bipartiteness, triangle-freeness and circular skip links, while MPNN variants fail to identify these graph properties. Furthermore, we observe that CLIP outperforms RP-GIN, that was shown to provide very expressive representations for regular graphs , even with a high number of permutations (the equivalent of colors in their method is set to k = 16). Moreover, both for k-RP-GIN and k-CLIP, the increase of permutations and colorings respectively lead to higher accuracies. In particular, CLIP can capture almost perfectly the different graph properties with as little as k = 16 colorings. In this paper, we showed that a simple coloring scheme can improve the expressive power of MPNNs. Using such a coloring scheme, we extended MPNNs to create CLIP, the first universal graph representation. Universality was proven using the novel concept of separable neural networks, and our experiments showed that CLIP is state-of-the-art on both graph classification datasets and property testing tasks. The coloring scheme is especially well suited to hard classification tasks that require complex structural information to learn. The framework is general and simple enough to extend to other data structures such as directed, weighted or labeled graphs. Future work includes more detailed and quantitative approximation depending on the parameters of the architecture such as the number of colors k, or number of hops of the iterative neighborhood aggregation. Proof of Theorem 2. The proof relies on the Stone-Weierstrass theorem we recall below. We refer to (, Theorem 7.32) for a detailed proof of the following classical theorem. Theorem 5 (Stone-Weierstrass). Let A be an algebra of real functions on a compact Hausdorff set K. If A separates points of K and contains a non-zero constant function, then A is uniformly dense in C(K, R). We verify that under the assumptions of Theorem 2 the Stone-Weierstrass theorem applies. In this setting, we first prove the theorem for m = 1 and use induction for the general case. Let K ⊂ X be a compact subset of X. We will denote and will proceed in two steps: we first show that A 0 is uniformly dense in C(K, R), then that A is dense in A 0, hence proving Theorem 2. Proof. The subset A 0 contains zero and all constants. Let f, g ∈ A 0 so that, and by assumption ϕ ∈ F. We have so that f + g ∈ A 0 and we conclude that A 0 is a vectorial subspace of C(K, R). We proceed similarly for the product in order to finish the proof of the lemma. Because F 1 separates the points of X by assumption, A 0 also separates the points of X. Indeed, let x = y two distinct points of X so that ∃f ∈ F such that f (x) = f (y). There exists g ∈ C(R d, R) such that g(f (x)) = g(f (y)). From Theorem 5 we deduce that A 0 is uniformly dense in C(K, R) for all compact subsets K ⊂ X. Lemma 2. For any compact subset K ⊂ X, A is uniformly dense in A 0. Proof. Let > 0 and h = ψ 0 • f ∈ A 0 with f ∈ F and ψ 0 ∈ C(R d, R). Thanks to the continuity of f, the imageK = f (K) is a compact of R d. By Theorem 1 there exists an MLP ψ such that This last lemma completes the proof in the case m = 1., f ∈ F} and proceed in a similar manner than Lemma 2 by decomposing and applying Lemma 1 for each coefficient function Proof of Proposition 1. Assume that there exists x, y ∈ X s.t. ∀f ∈ F 1, f (x) = f (y). 
Then K = {x, y} is a compact subset of X and let φ ∈ C(K, R) be such that φ(x) = 1 and φ(y) = 0. Thus, for all f ∈ F 1, max z∈{x,y} φ(z) − f (z) ≥ 1/2 which contradicts universality (see Definition 1). In what follows, X is always a topological set and G a group of transformations acting on X. The orbits of X under the action of G are the sets Gx = {g · x : g ∈ G}. Moreover, we denote as X /G the quotient space of orbits, also defined by the equivalence relation: x ∼ y ⇐⇒ ∃g ∈ G s.t. x = g · y. As stated in Section 5, graphs with node attributes can be defined using invariance by permutation of the labels. We prove here that the ing spaces are Hausdorff. Lemma 3 ((, I, §8. 3)). Let X be a Hausdorff space and R an equivalence relation of X. Then X /R is Hausdorff if and only if any two distinct equivalence classes in X are contained in disjoints saturated open subsets of X. Thanks to this lemma we prove the following proposition. Proposition 2. Let G a finite group acting on an Hausdorff space X, then the orbit space X /G is Hausdorff. Proof. Let Gx and Gy two distinct classes with disjoint open neighbourhood U and V. By finiteness of G, the application π: Suppose that there exists z ∈Ũ ∩Ṽ, then π(z) ∈ π(U) ∩ π(V) and we finally get that Gz ⊂ U ∩ V = ∅. ThereforeŨ ∩Ṽ is empty and X /G is Hausdorff by Lemma 3. Proposition 2 directly implies that the spaces Graph m and Neighborhood m are Hausdorff. We now provide more details on the aggregation and combination scheme of CLIP, and show that a simple application of Corollary 1 is sufficient to prove its universality for node neighborhoods. Each local aggregation step takes as input a couple (x i, {x j} j∈Ni ) where x i ∈ R m is the representation of node i, and {x j} j∈Ni is the set of vector representations of the neighbors of node i. In the following, we show how to use Corollary 1 to design universal representations for node neighborhoods. Definition 5. The set of node neighborhoods for m-dimensional node attributes is defined as where the set of permutation matrices P n is acting on R n×m by P · v = P v. The main difficulty to design universal neighborhood representations is that the node neighborhoods of Definition 5 are permutation invariant w.r.t. neighboring node attributes, and hence require permutation invariant representations. The graph neural network literature already contains several deep learning architectures for permutation invariant sets (; ; ;), among which PointNet and DeepSet have the notable advantage of being provably universal for sets. Following Corollary 1, we compose a separable permutation invariant network with an MLP that will aggregate both information from the node itself and its neighborhood. While our final architecture is similar to Deepset , this section emphasizes that the general universality theorems of Section 3 are easily applicable in many settings including permutation invariant networks. The permutation invariant set representation used for the aggregation step of CLIP is as follows: where ψ and ϕ are MLPs with continuous non-polynomial activation functions and ψ(x, y) denotes the of the MLP ψ applied to the concatenation of x and y. Theorem 6. The set representation described in Eq. Taking ψ(x, y) = y and ε = 1/3 max{|S 1 |, |S 2 |}, we have which proves separability and, using Corollary 1, the universality of the representation. Proof of Theorem 3. 
First of all, as the activation functions of the MLPs are continuous, CLIP is made of continuous and concatenable functions, and is thus also continuous and concatenable. Second, as the node aggregation step (denoted NODEAGGREGATION below) is a universal set representation (see Appendix C), it is capable of approximating any continuous function. We will thus first replace this function by a continuous function φ, and then show that the still holds for NODEAGGREGATION by a simple density argument. Let 2 ) be two distinct graphs of respective sizes n 1 and n 2 (up to a permutation). If n 1 = n 2, then ψ(x) = x and φ(x) = 1 returns the number of nodes, and hence x G 1 = n 1 = n 2 = x G 2. Otherwise, let V = {v k i} i∈ 1,n 1,k∈{1,2} be the set of node attributes of G 1 and G 2, c 1 be a coloring of G 1, ψ(x) = x and φ be a continuous function such that, ∀x ∈ V and S ⊂ V, The existence of φ ∈ C(R m, R) is assured by Urysohn's lemma (see e.g. (, lemma 2.12) ). Then, x G counts the number of matching neighborhoods for the best coloring, and we have x G 1 = n 1 and x G 2 ≤ n 1 − 1. Finally, taking ε < 1/2n 1 in the definition of universal representation leads to the desired , as then, using an ε-approximation of φ as NODEAGGREGATION, we have Proof of Theorem 4. Consider a continuous function ψ: nmax and we define φ: Moreover, observe that for any coloring c ∈ C(v, A), ∞-CLIP and 1-CLIP applied to ((v, c), A) returns the same , as all node attributes are dissimilar (by definition of the colorings) and C((v, c), A) = ∅. Finally, 1-CLIP applied to (v, A) is equivalent to applying 1-CLIP to ((v, C), A) where C is a random coloring in C(v, A), and Eq. thus implies that any random sample of 1-CLIP is within an ε error of the target function ψ. As a , its expectation is also within an ε error of the target function ψ, which proves the universality of the expectation of 1-CLIP. E.1 REAL-WORLD DATASETS Table 3 summarizes the characteristics of all benchmark graph classification datasets used in Section 6.1. We now provide complementary information on these datasets. Social Network Datasets (IMDBb, IMDBm): These datasets refer to collaborations between actors/actresses, where each graph is an ego-graph of every actor and the edges occur when the connected nodes/actors are playing in the same movie. The task is to classify the genre of the movie that the graph derives from. IMDBb is a single-class classification dataset, while IMDBm is multi-class. For both social network datasets, we used one-hot encodings of node degrees as node attribute vectors. Bio-informatics Datasets (MUTAG, PROTEINS, PTC): MUTAG consists of mutagenic aromatic and heteroaromatic nitrocompounds with 7 discrete labels. PROTEINS consists of nodes, which correspond to secondary structureelements and the edges occur when the connected nodes are neighbors in the amino-acidsequence or in 3D space. It has 3 discrete labels. PTC consists of chemical compounds that reports the carcinogenicity for male and female rats and it has 19 discrete labels. For all bio-informatics datasets we used the node labels as node attribute vectors. Experimentation protocol: We follow the same experimental protocol as described in , and thus report the provided in this paper corresponding to the accuracy of our six baselines in Table 1. We optimized the CLIP hyperparameters by grid search according to 10-fold cross-validated accuracy means. 
We use 2-layer MLPs, an initial learning rate of 0.001 and decreased the learning rate by 0.5 every 50 epochs for all possible settings. For all datasets the hyperparameters we tested are: the number of hidden units within {32, 64}, the number of colorings c ∈ {1, 2, 4, 8}, the number of MPNN layers within {1, 3, 5}, the batch size within {32, 64}, and the number of epochs, that means, we select a single epoch with the best cross-validation accuracy averaged over the 10 folds. Note that standard deviations are fairly high for all models due to the small size of these classic datasets. Table 4 summarizes the performances of CLIP while increasing the number of colorings k. Overall we can see a small increase in performances and a reduction of the variances when k is increasing. Nevertheless we should not jump to any since none of the models are statistically significantly better than the others. In Section 6.2 we evaluate the expressive power of CLIP on benchmark synthetic datasets. Our goal is to show that CLIP is able to distinguish basic graph properties, where classical MPNN cannot. We considered a binary classification task and we constructed balanced synthetic datasets 2 for each of the examined graph properties. The 20-node graphs are generated using Erdös-Rényi model (Erdös and Rényi, 1959) (and its bipartite version for the bipartiteness) with different probabilities p for edge creation. All nodes share the same (scalar) attribute. We thus have uninformative feature vectors. In particular, we generated datasets for different classical tasks: 1) connectivity, 2) bipartiteness, 3) triangle-freeness, and 4) circular skip links . In the following, we present the generating protocol of the synthetic datasets and the experimentation setup we used for the experiments. In every case of synthetic dataset we follow the same pattern: we generate a set of random graphs using Erdös-Rényi model, which contain a specific graph property and belong to the same class and by proper edge addition we remove this property, thus creating the second class of graphs. By this way, we assure that we do not change different structural characteristics other than the examined graph property. -Connectivity dataset: this dataset consists of 1000 (20-node) graphs with 500 positive samples and 500 negative ones. The positive samples correspond to disconnected graphs with two 10-node connected components selected among randomly generated graphs with an Erdös-Rényi model probability of p = 0.5. We constructed negative samples by adding to positive samples a random edge between the two connected components. -Bipartiteness dataset: this dataset consists of 1000 (20-node) graphs with 500 positive samples and 500 negative ones. The positive samples correspond to bipartite graphs generated with an Erdös-Rényi (bipartite) model probability of p = 0.5. For the negative samples (non-bipartite graphs) we chose the positive samples and for each of them we added an edge between randomly selected nodes from the same partition, in order to form odd cycles 3. -Triangle-freeness dataset: this dataset consists of 1000 (20-node) graphs with 500 positive samples and 500 negative ones. The positive samples correspond to triangle-free graphs selected among randomly generated graphs with an Erdös-Rényi model probability of p = 0.1. We constructed negative samples by randomly adding new edges to positive samples until it creates at least one triangle. -Circular skip links: this dataset consists of 150 graphs of 41 nodes as described in . 
The Circular Skip Links graphs are undirected regular graphs with node degree 4. We denote a Circular skip link graph by G n,k an undirected graph of n nodes, where (i, j) ∈ E holds if and only if |i − j| ≡ 1 or k(mod n) This is a 10-class multiclass classification task whose objective is to classify each graph according to its isomorphism class. Experimentation protocol: We evaluate the different configurations of CLIP and its competitors GIN and RP-GIN based on their hyper-parameters. For the architecture implementation of the GIN, we followed the best performing architecture, presented in. In particular, we used the summation as the aggregation operator, MLPs as the combination level for the node embedding generation and the sum operator for the readout function along with its refined version of concatenated graph representations across all iterations/layers of GIN, as described in. In all the tested configurations for CLIP and its competitors (GIN, RP-GIN) we fixed the number of layers of the MLPs and the learning rate: we chose 2-layer MLPs and we used the Adam optimizer with initial learning rate of 0.001 along with a scheduler decaying the learning rate by 0.5 every 50 epochs. Concerning the other hyper-parameters, we optimized: the number of hidden units within {16, 32, 64} (except for the CSL task where we only use 16 hidden units to be fair w.r.t. RP-GIN
rJxt0JHKvS
This paper introduces a coloring scheme for node disambiguation in graph neural networks based on separability, proven to be a universal MPNN extension.
In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. Here, we use DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as Irish traditional reel. Algorithmic music composition is almost as old as computers themselves, dating back to the 1957 "Illiac suite" . Since then, automated music composition evolved with technology, progressing from the first rule-and-randomness based methods to the sophisticated tools made possible by modern-day machine learning (see Fernández & and for detailed surveys on history and state of the art of algorithmic music composition). One of the first machine learning (ML) approaches to music generation was , who used the common notion of entropy as a measurement to build what they termed a multiple viewpoint system. Standard feedforward neural networks have difficulties with sequence based information such as music. Predicting the next note of a piece, when only based on the current note, does not account for long-range context or structure (such as key and musical sections) which help give coherence to compositions. As music is traditionally represented as sequences of notes, recurrent neural networks are a natural tool for music (especially melody) generation, and multiple groups used RNNs fairly successfully for a variety of types of music. used a sequential model for composition in 1989, and used the adapted LSTM structure to successfully generate music that had both short-term musical structure and contained the higher-level context and structure needed. Subsequently, there have been a number of RNN-based melody generators (; ; ; ; ; ; . Other approaches such as MidiNet by , though not RNNs, also leveraged the sequential representation of music. Using an RNN architecture provides a lot of flexibility when generating music, as an RNN has the ability to generate pieces of varying length. However, in some styles of music this is not as desired. This is true of traditional Irish music -and especially their jigs and reels. These pieces have a more rigid format where the varying length can prevent capturing the interplay between the phrases of the piece. Finding jigs and reels to train on was made easy by an excellent database of Irish traditional melodies in ABC notation (a text based format), publicly available at TheSessionKeith. Several RNN-based generators were trained on the melodies from TheSession, most notably Sturm et al. , as well as. It is natural to view music, and in particular melodies, as sequential data. However, to better represent long-term dependencies it can be useful to present music as a two-dimensional form, where related parts and occurrences of long patterns end up aligned. This benefit is especially apparent in forms of music where a piece consists of a well-defined, fixed-length components, such as reels in Irish music. These components are often variations on the same theme, with specific rules on where repeats vs. changes should be introduced. Aligning them allows us to use vertical spatial proximity to capture these dependencies, while still representing the local structure in the sequence by horizontal proximity. 
In this project, we leverage such two-dimensional representation of melodies for non-sequential melody generation. We focus on melody generation using deep convolutional generative adversarial networks (DCGANs) without recurrent components for fixed-format music such as reels. This approach is intended to capture higher-level structures in the pieces (like sections), and better mimic interplay between smaller parts (musical motifs). More specifically, we use dilations of several semantically meaningful lengths (a bar or a phrase) to further capture the dependencies. Dilated convolutions, introduced by , have been used in a number of applications over the last several years to capture long-range dependencies, notably in WaveNet . However, they are usually combined with some recurrent component even when used for a GAN-based generation such as in or. Not all techniques applicable to images can be used for music, however: pooling isn't effective, as the average of two pitches can create notes which fall outside of the 12-semitone scale (which is the basis the major and minor scale as well as various modes). This is reflected in the architecture of our discriminator, with dilations and towers as the main ingredients. In spite of emigration and a well-developed connection to music influences from its neighbouring cultures -especially Scotland -Irish traditional music has kept many of its elements and itself influenced many forms of music. Its influences can be found in the folk music of Newfoundland and Quebec and in Bluegrass from the Appalachian region of United States. A significant part of the traditional Irish music is instrumental dance music, where a tune following a set format is played on a solo instrument, such as a fiddle, tin whistle, accordion or flute. The core of the piece is usually monophonic, so while harmonies and embellishments can be introduced during performances, they are considered extra and not usually encoded when a piece is transcribed. Generally the melody consists of two distinct 8-bar parts, which in turn consist of two 4-bar phrases each. Usually, phrases represent a pattern which repeats with variations from phrase to phrase, with one (most often third) phrase deviating most from the pattern, and the final phrase returning to it. When performing, each part is repeated twice; sometimes the second repeat of a part is different enough that it is transcribed separately, making the transcription from 16 to 32 bars long. The whole piece is usually performed three (or more) times, especially when accompanying a dance. The two most common types of dances are reels and jigs: the reels have time signature of 4/4 (that is, each bar contains 4 units, where each unit is a quarter note), whereas a jig has a time signature 6/8 (that is, each bar contains 6 units, where each unit is an eighth note). Many Irish tunes are in a major key, with D major being most common. However, a number of tunes use the Dorian and Mixolydian modes (starting with 2nd or 5th note with respect to the relative major scale, respectively). The natural minor key (Aeolian mode) is less common. Overall, it is uncommon for a tune to incorporate notes not in its key and to go outside of a 2-octave range (either of these would make it hard to play on a tin whistle). Traditionally, musicians learned tunes by ear. However, the relative simplicity of the structure of Irish tunes has lent itself well to a text-based representation, culminating in the ABC notation developed by Chris Walshaw in the 1970s. 
There, the key and default length of a note are specified in a header (together with the title, time signature, etc.), and the melody is then encoded by letters: A-G, where C represents middle C, and a-g, which represent the notes an octave higher than A-G. Additionally there are modifiers which raise and lower notes by an octave, as well as characters which represent sharps and flats. As in staff notation, sharps and flats which are implied by the key are not explicitly written into the piece. These letters are also appended with duration modifiers: for example, if the ABC header sets the default note length as 1/8, then D represents an eighth note (which is a D), D2 represents a quarter note, and D/2 represents a sixteenth note. For readability, bar separators | and spacing are often included: for example, the sequence "d2dA BAFA|ABdA BAFA|ABde f2ed|Beed egfe" encodes the first phrase of a reel called "Merry Blacksmith", where the key was specified to be D major and the default note length is set to 1/8. Nowadays, ABC notation is widely used, especially for Irish folk music, with additional features for representing ornaments, repetitions, and so on. In particular, this is the default representation for tunes in databases such as TheSession which we used for this project. The TheSession dataset contains over 31,000 tunes, of which 10,000 are reels in a major key. However, after removing improperly formatted tunes, tunes with features we did not want to generate (such as triplets), and tunes that had more than 16 bars, we were left with 820 reels, which we used as our training data. The samples in ABC notation were then converted into numerical vectors, with 16 numbers per bar (so 256 numbers total), corresponding to 16 possible notes (normalized midi values). This encoding did not preserve information about the duration of each note, only their (normalized) midi values. However, we found that with simple postprocessing (converting a sequence of occurrences of the same note into a note of a longer duration, starting with an occurrence on a beat) we were able to recover the tunes with not much more variation than would be introduced in performances. One of the main ideas behind our encoding was representing the resulting vectors of 256 values as a 64x4 image, with midi values corresponding to pixel values. The resulting image had a line for each phrase of the tune, and the vertical alignment of the corresponding notes and bars let us exploit the long-range dependencies among the phrases through their spatial proximity. Intuitively, this is akin to building a mental map of a space explored by touch: though the touch information is sequential, the resulting map can be two- or three-dimensional, with a significant amount of information encoded by proximity in the second and third dimensions.
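A minimal sketch (ours, not the authors' code) of this reshaping, assuming one value per sixteenth-note slot and placeholder normalization bounds; each 4-bar phrase becomes one row of the array (the paper writes the dimensions as 64x4):

```python
import numpy as np

SLOTS_PER_BAR = 16    # sixteenth-note grid in 4/4
BARS = 16             # a 16-bar reel
PHRASES = 4           # four 4-bar phrases

def tune_to_image(midi_values, lo=50, hi=90):
    """Map 256 midi pitches (one per sixteenth-note slot) to a phrase-aligned image in [-1, 1].
    lo and hi are placeholder normalization bounds covering a two-octave-plus range."""
    v = np.asarray(midi_values, dtype=np.float32)
    assert v.size == SLOTS_PER_BAR * BARS
    v = 2.0 * (v - lo) / (hi - lo) - 1.0                          # match the generator's tanh range
    return v.reshape(PHRASES, SLOTS_PER_BAR * BARS // PHRASES)    # one row per 4-bar phrase

def image_to_midi(img, lo=50, hi=90):
    """Invert the mapping, rounding back to integer midi values."""
    v = (np.asarray(img).reshape(-1) + 1.0) / 2.0 * (hi - lo) + lo
    return np.rint(v).astype(int)
```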
Traditionally, sequential data is learned using some variation of RNNs. Simple implementations of RNNs (standalone GRU and LSTM cells) tend to struggle with capturing long-distance dependencies; a popular solution to this issue is to use bi-directional RNNs (that is, a pair of layers that run in opposite directions) with an attention mechanism (using an auxiliary trainable network to better relate elements of the output sequence to those of the input sequence) so that information from multiple sections of the sequence can be captured. A similar effort at capturing long-distance dependencies has been applied to Convolutional Neural Networks (CNNs) too. Yu et al. have proposed dilated convolutions as a solution to the problem: in a dilated convolution, the convolution kernel is not a contiguous block, but rather subsections of the kernel are separated by some number of pixels from each other. As a result, the convolution operation is not limited to the immediate neighbourhood, and long-distance dependencies are made available to the kernel. While attention-based and bi-directional RNNs have proven to be quite successful at capturing long-distance dependencies, we believe that there are a few notable benefits of being able to use a DCGAN for generating sequential data, as opposed to an RNN: • DCGANs that incorporate dilations allow for domain-specific heuristics to be passed to the neural network, by means of the dilation rates and the convolution kernel dimensions. This allows for a whole new dimension of flexibility and fine-tuning that is not available through RNNs. • A DCGAN yields both a discriminator and a generator, as opposed to just a generator. • Unlike an RNN, a GAN does not require a meaningful or domain-specific seed to be passed as an input to the generator; a vector of random noise can be used instead, making it easier to generate novel outputs from the generator. GANs have traditionally worked well with pictures, but to our knowledge existing music generation with GANs either relies on RNNs in the discriminator, as in SeqGAN, or generates the melody in a sequential bar-after-bar process, as in MidiNet. The discriminator starts off with a 6-tower convolution network (that is, 6 convolutions run side-by-side, as opposed to being dependent on each other). We chose to use dilated filters on 5 of the towers to capture relative information between bars and phrases (we shall refer to this as the global context), with the remaining tower being a contiguous 2x9 convolution to also capture the structure of the immediate neighbourhood (local context). Having multiple towers each learning different aspects gives us versatility and helps avoid the "tunnel vision" nature of other generative music models. This can be viewed in Figure 1. All the towers use 32 convolution filters, and use zero-padding for edge pixels that might not have a sufficient neighbourhood to be covered by each filter. The outputs of the convolutions from each tower are then stacked horizontally, and passed to a second (regular) layer of convolution made of 64 filters, using a 3x3 kernel, a 2x2 stride, and with the convolution being applied only to pixels that have a sufficient neighbourhood for the convolution filter. The output of this layer is then flattened, and passed to a dense layer consisting of 1024 neurons, which is in turn followed by a sigmoid neuron that generates the prediction of the discriminator. No batch normalization is applied to any of the layers of the discriminator. Unlike the discriminator, the generator does not make use of dilations or towers. We start off with a dense layer of size 32x2x256 (where the last dimension is the number of filters). The dense layer is then reshaped into 256 filters of size 32x2, and passed through two layers of deconvolution, each halving the number of filters, while using a stride of 1x1 and a kernel size of 2x5. A last deconvolution layer (also 2x5 in kernel size) is used to bring down the filters into one image, with a stride of 2x2 to return to the original image dimension of 64x4; this layer also uses a tanh activation to place the output values in the same range as the training dataset. All layers in the generator use batch normalization.
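The discriminator just described can be rendered roughly as follows (a Keras-style sketch of ours, not the authors' code: the dilation rates, the 2x3 kernels and the activation choices of the dilated towers are placeholder guesses, since only the contiguous 2x9 tower, the filter counts and the later layers are specified above, and tower outputs are concatenated channel-wise rather than literally "stacked horizontally"; the input is assumed to be the phrase-aligned image with a single channel):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(height=4, width=64):
    """Six parallel convolution towers over the phrase-aligned melody image,
    followed by a strided convolution, a dense layer and a sigmoid prediction."""
    inp = layers.Input(shape=(height, width, 1))

    towers = []
    for dilation in [(1, 2), (1, 4), (1, 8), (1, 16), (2, 16)]:    # assumed bar/phrase-scale gaps
        towers.append(layers.Conv2D(32, (2, 3), dilation_rate=dilation,
                                    padding="same", activation="relu")(inp))
    towers.append(layers.Conv2D(32, (2, 9), padding="same",        # contiguous local-context tower
                                activation="relu")(inp))

    x = layers.Concatenate()(towers)                                # combine the 6 tower outputs
    x = layers.Conv2D(64, (3, 3), strides=(2, 2), padding="valid", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)                  # real vs. generated
    return tf.keras.Model(inp, out)                                 # note: no batch normalization
```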
After our GAN has generated music in the form of a 64x4 matrix, we map the real values to notes in ABC notation, rounding to the nearest note in the D major key. We then take each beat and merge notes with the same pitch together, so a sequence of four 16th notes C starting from the 1st, 5th, 9th or 13th note in a bar becomes a quarter note C instead. The resulting ABC-notated music can then be converted into sheet music or played with midi generators. Our approach is similar to the model by as both train on text-based musical notation and aim to learn Irish traditional instrumental music that has a distinct structure. The models in Magenta are considered to be state of the art in music generation at the moment. For this reason, we compared our samples with the Folk-RNN and Magenta models - specifically the MelodyRNN model in Magenta - trained using the same dataset of 820 curated reels. We generated an equivalent number of samples from each model and compared the samples. Since our goal is for the GAN to learn the global patterns, we use measures that highlight similarity in structure. The first metric we chose was the normalized mean Fréchet distance. The Fréchet distance is a measure of similarity between vectors which prioritizes the ordering and magnitude of the values in those vectors. Since the order and distribution of notes are important for a tune to be considered structurally similar to another, this metric is well suited to our setting. We normalize this Fréchet distance so we can view the changes between the phrases better. The results of this comparison are found in Figure 2. Each group records the normalized Fréchet distance between corresponding combinations of phrases, i.e., the first group highlights the normalized mean Fréchet distance between phrases 1 and 2 for all the distributions. We can see that our generated samples compare favorably with the Folk-RNN model and both exhibit structure similar to the training set. It is important to note that the MelodyRNN used here is the default model without the attention mechanism. Another measure to visualize similarity between distributions is the t-distributed stochastic neighbourhood embedding (t-SNE) algorithm developed by van der Maaten & Hinton. This algorithm groups points that are similar in a higher dimension closer together in low dimensions, making it useful to visualize similarity between the tunes generated by both models along with the training data. Here, we set the tunable parameter perplexity to 50. This increases the importance of global similarity between vectors. Figure 3 shows distributions of the training data and samples generated by our model, Folk-RNN and Magenta trained on the same data. Since t-SNE is stochastic and can generate different visualizations for each run, we display visualizations produced by two runs of the t-SNE algorithm. It can be observed that most of the time (>75%), the tunes generated by our model seem to lie within the distribution of the training data, although they only cover a subset of the training data. Both RNN models seem to mimic a slightly different distribution, where we can think of the tunes generated by Magenta as a subset of the Folk-RNN ones, with Folk-RNN generating by far the most diverse set of samples. A third metric is looking at the distribution of notes in the samples. Figure 4 shows the frequency distribution of notes as midi key values in the samples.
As music theory would suggest, the tonic of the key (in this case, the midi value corresponding to D) is the most common, with the 3rd and 5th of the key more frequent than the rest of the scale. Both our model and FolkRNN created a set of samples which mimics this distribution of the notes in the key, however Magenta seems to have a more uniform distribution of note frequencies. Overall, these metrics show that music generated by our DCGAN is comparable to melodies generated by RNNs on the same symbolic data. The use of dilations allows for our model to learn global tune structure. Converting sequential data into a format which implicitly encodes temporal information as spatial information is an effective way of generating samples of such data as whole pieces. Here, we explored this approach for melody generation of fixed-length music forms, such as an Irish reel, using non-recurrent architecture for the discriminator CNN with towers and dilations, as well as a CNN for the GAN itself. One advantage of this approach is that the model learns global and contextual information simultaneously, even with a small model. LSTMs and other approaches need a much larger model to be able to learn both the contextual neighboring note sequences and global melody structure. In future work, we would like to introduce boosting in order to capture the structure of the distribution more faithfully, and increase the range of pieces our model can generate. Natural extensions of the model would be to introduce multiple channels to capture durations better (for example, as in), and add polyphony (ie, using some form of piano roll representation). Another direction could be to experiment with higher-dimensional representation of the sequence data, to better capture several types of dependencies simultaneously. Additionally, it would be interesting to apply it to other kinds of fixed-length sequential data with long-range patterns.
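For reference, the structural metric used in the comparisons above is the discrete Fréchet distance between phrase vectors; the paper's exact normalization is not spelled out, so the sketch below shows only the core distance.

```python
import numpy as np

def discrete_frechet(p, q):
    """Standard discrete Fréchet distance between two 1-D sequences (memoized DP)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n, m = len(p), len(q)
    ca = np.full((n, m), -1.0)

    def c(i, j):
        if ca[i, j] >= 0:
            return ca[i, j]
        d = abs(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i, j] = d
        elif i == 0:
            ca[i, j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i, j] = max(c(i - 1, 0), d)
        else:
            ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i, j]

    return c(n - 1, m - 1)

# Compare phrase 1 and phrase 2 of a (dummy) 4x64 reel image.
reel = np.random.rand(4, 64)
print(discrete_frechet(reel[0], reel[1]))
```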
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkePOCNtPH
Representing melodies as images with semantic units aligned we can generate them using a DCGAN without any recurrent components.
Neural machine translation (NMT) systems have reached state-of-the-art performance in translating text and are widely deployed. Yet little is understood about how these systems function or break. Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations. Such pathological translations are problematic because they are deeply damaging to user trust and easy to find. We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them. We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems approaches, and regularization techniques, and show that data augmentation significantly reduces hallucination frequency. Finally, we analyze networks that produce hallucinations and show signatures of hallucinations in the attention matrix and in the stability measures of the decoder. Neural machine translation (NMT) systems are language translation systems based on deep learning architectures BID10 BID1 BID31. In the past few years, NMT has vastly improved and has been deployed in production systems, for example at Google BID33, Facebook BID15, Microsoft BID17, and many others. As NMT systems are built on deep learning methodology, they exhibit both the strengths and weaknesses of the approach. For example, NMT systems are competitive with state-of-the-art performance BID6 and scale well to very large datasets BID23, but like most large deep learning systems, NMT systems are poorly understood. For example, in many commercial translation systems, entering repeated words many times occasionally results in strange translations, a phenomenon which has been highly publicized BID12. More broadly, recent work shows that NMT systems are highly sensitive to noise in the input tokens BID3 and also susceptible to adversarial inputs BID9. When there is an error in translation, it can be challenging to either understand why the mistake occurred or engineer a fix. Here we continue the study of noise in the input sequence and describe a type of phenomenon that is particularly pernicious, whereby inserting a single additional input token into the source sequence can completely divorce the translation from the input sentence. For example, here is a German input sentence translated to English (reference) by a small NMT system: Source: Caldiero sprach mit E! Nachrichten nach dem hart erkämpften Sieg, noch immer unter dem Schock über den Gewinn des Großen Preises von 1 Million $. Reference: Caldiero spoke with E! News after the hard-fought victory, still in shock about winning the $1 million grand prize. NMT Translation: Caldiero spoke with E, after the hard won victory, still under the shock of the winning of the Grand Prix of 1 million $. Since its invention, researchers have been working to better understand NMT. For example, moving from the original Seq2Seq model BID31 BID11 to models that utilize attention mechanisms BID1 resulted in improved translation quality BID22 and better interpretability BID14. Studies identified the most critical components of the LSTM BID16 and the role of the LSTM in language modeling BID19 more broadly. Following these explorations in interpretability, recent work has focused on robust NMT, studying the effects of input noise and aiming to reduce variation from typos BID3 and synonym choice BID9 in the discrete input set used by NMT systems.
Both BID3 and BID9 have discovered that NMT systems are highly sensitive to input noise, and both used adversarial training to help stabilize NMT systems (either with black-box adversarial training or by augmenting an adversarial loss). There has been work in understanding how to handle some of the pathologies in RNNs for language modeling and translation, for example, using scheduled sampling to handle mismatches between training and test sets BID5. In parallel, there has also been work in understanding RNNs through the framework of dynamical systems BID30 BID28 BID21 BID25 BID2 and understanding their capacity in language tasks BID13. For example, it is known that continuous-time vanilla RNNs exhibit high-dimensional chaos BID27, which can be driven out by strong input BID24. RNNs exhibiting chaos can be beneficial; for example, RNNs capable of chaos in the absence of input can serve as strong initialization for training RNNs when they are input driven BID29, but caution must be used, as unbridled chaos can be viewed as a source of dynamical noise. This has led to efforts to rid RNNs of chaos altogether BID21. Efforts related to improving optimization may also have dynamically regularizing effects, e.g. BID0. Given the complexities and slow compute times of recurrent systems, there have also been attempts to rid NMT of recurrence BID18 BID15 BID32. We further expand on these studies by highlighting the specific pathology of hallucinations, systematically studying those hallucinations, and analyzing them from a dynamical systems perspective. Models: In this paper, we use a standard RNN-based encoder-decoder NMT model with attention. Specifically, we study the NMT model described in BID33, known as GNMT. We use the GNMT model and its public implementation 2. Formally, given an input sequence x 1:S of length S, the NMT model first encodes the input sequence x 1:S into a set of vectors z 1:S = f enc (x 1:S) using its encoder f enc. The task of the decoder, f dec, is to generate the translation y 1:T one symbol y i at a time, given the encoding z 1:S and the previously generated symbols y <i. The decoder f dec is implemented as a conditional sequence model BID1, where the distribution over y 1:T is conditioned on x 1:S. The decoder internally makes use of an attention mechanism f att to query the encoder, summarizing z 1:S for each output symbol y i; putting it all together, y i = f dec (y <i, f att (z 1:S)) (also see Figure 7 for a detailed model schematic). Finally, the conditional probability of the target sequence is modelled as p(y 1:T | x 1:S) = ∏_{i=1}^{T} p(y i | y <i, x 1:S), and the log of this conditional likelihood is maximized given a set of source-target pairs (x, y) during training. We study models that are significantly smaller and less complex than those typically used in state-of-the-art or production systems, for the sake of research tractability. We use a single-layer bidirectional LSTM in the encoder f enc and a two-layered unidirectional LSTM in the decoder f dec with an additive attention mechanism as f att BID8. The word embedding dimensions and each LSTM hidden cell (both in the encoder and decoder) are set to 256. We refer to this model as the canonical model. Unless otherwise stated, we used the Adam optimizer BID20 with a learning rate of 0.001, a constant learning rate schedule, and clipped gradients to a maximum norm of 5.0 during training.
Given these hyper-parameter and architectural choices, we trained 10 canonical models with different random seeds to observe how parameter initialization variability played a role in our . All additional model variants we describe later were also trained 10 times with the same 10 different random seeds. Each model was trained for 1M steps (updates) with a mini-batch of size 128 and the training checkpoint with the best BLEU score on the development set was selected. The central goal of our study was to understand how various modelling choices affected the frequency of hallucinations. In order to isolate the effects of modeling changes, all model variants we study in this paper were identical to the canonical model except for a single change. This means, for example, that our model with 512 hidden units also is 2 layers deep, etc. We performed a simple hyper-parameter search for the canonical model, and did not perform additional hyper-parameter searches for any additional models. All models we present are well trained with a BLEU score of at least 20.0 on the test set using greedy decoding, a reasonable score for 2-layer models with 256 hidden units. With beam search decoding, our canonical models achieve an average BLEU score of 25.66.Inference: Generating a translation of the input sequence, or formally finding an output sequence that maximizes the conditional log-probability,ŷ = argmax y log p(y|x), is a major challenge in NMT since the exact inference (or decoding) is intractable. NMT uses approximate decoding techniques which we also have used in this paper. The simplest approximate decoding technique, greedy decoding, chooses the most-likely symbol under the conditional probabilitŷ y t = argmax i log p(y t = i|ŷ <i, x 1:S), outputting a single best local prediction by keeping track of a single hypothesis k, at each time step. Another approximate decoding technique, beam search, improves upon greedy decoding by keeping track of multiple hypotheses (beams), where k >1 at each time step of the decoding, compared to k=1 in greedy-decoding. To maintain simplicity in our canonical model we used greedy decoding. Note that production systems will often perform beam search to find a more probable translation than one generated by greedy search. We also ran an additional set of experiments with beam search. We trained all models with the German to English WMT De→En 2016 dataset (4,500,966 examples) BID7, validated with the WMT De→En 2015 development set (2,169 examples). We then used the WMT De→En 2016 test set (2,999 examples) to compute the hallucination percentage for each model. For the input and output of all NMT models in consideration, we used sub-word tokens extracted by Byte-Pair Encoding (BPE) BID26. To construct a vocabulary of unique tokens, we first combined the tokenized source and target corpora BID2, and then learned a joint BPE code with an 8k merge operations budget, ing in 12,564 unique tokens. Further, in order to study the effect of larger vocabulary sizes, for some experiments we repeated the same process with 16,000 and 32,000 BPE codes and ended up with vocabularies having 19,708 and 36,548 unique tokens respectively. Note that we used the same vocabulary for both source and target side languages. 
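For concreteness, here is a minimal sketch of the greedy (k = 1) decoding loop described above. The `encode`/`step` interface is hypothetical and does not correspond to the actual GNMT implementation; it only illustrates keeping a single hypothesis per step.

```python
import torch

def greedy_decode(model, src_ids, bos_id, eos_id, max_len):
    """Greedy decoding: at each step keep only the single most likely token."""
    memory = model.encode(src_ids)              # z_1:S from the encoder (assumed API)
    state = None
    token = torch.tensor([bos_id])
    output = []
    for _ in range(max_len):
        logits, state = model.step(token, state, memory)  # p(y_t | y_<t, x), assumed API
        token = logits.argmax(dim=-1)           # k = 1: single best local prediction
        if token.item() == eos_id:
            break
        output.append(token.item())
    return output
```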
Algorithm 1: Computing the percentage of hallucinations in an NMT model
Select a model; fix a random seed.
Select groups of subword tokens with the following attributes:
- Common tokens: the 100 most common subword tokens
- Mid-frequency tokens: a random sample of 100 subword tokens between common and rare tokens
- Rare tokens: the 100 least common subword tokens
- Punctuation: all punctuation tokens
for every sentence in the test corpus (e.g. the WMT De→En 2016 test set) do
  if the adjusted BLEU between the reference sentence and the translated sentence > 0.09 then
    for every selected token do
      for every location in (beginning, end, second-to-end, randomly in the middle) do
        put the selected token at the selected location in the byte-pair encoded input sequence;
        translate the perturbed input sequence;
        if the adjusted BLEU between the translated, perturbed sentence and the translated, unperturbed sentence < 0.01 then
          this sentence can be perturbed to produce a hallucination
Informally, a hallucination is a translation of a perturbed input sentence that has almost no words in common with the translation of the unperturbed sentence. Here, we use the term perturb to mean adding a single token to the input source sequence. This is based on an initial observation that adding a rare token to an input sequence reliably caused a model to generate a hallucination, e.g. adding a Chinese character token to a German to English translation. We expanded and systematized this discovery into a brute-force search procedure (Algorithm 1) by splitting our tokens into several types: common (100 most common German tokens), rare (100 least common German tokens), mid-frequency (100 tokens randomly sampled from the remaining German tokens), and punctuation tokens. Additionally, we attempted to perturb each sentence by inserting a token at one of several positions: beginning, end, second to the end, and randomly in the middle. We did this for every sentence in our test set and collected statistics for each model variant. To define a quantitative threshold for a hallucination, we modified the BLEU score, which is used to compare a reference sequence with a translated sequence. Briefly, the BLEU score is a common metric for translation which measures weighted n-gram overlaps while penalizing short translations. We modified the BLEU score by re-weighting the n-grams in the BLEU computation to favor having any words in common between the two sentences (a weight of 1.0 for unigrams and 0.8 for bigrams, disregarding other n-grams). Then, we call only sentences that have an adjusted BLEU score of less than 0.01 hallucinations. For examples of sentence pairs with different adjusted BLEU scores, see Section 8.2 in the Appendix. Not all translations are good even before adding a perturbation token to the input sequence. To strengthen our results on hallucinations, we first excluded these poor translations by computing the adjusted BLEU score between the reference translation and the translation produced by the unperturbed input sequence. We kept sentences that had an adjusted BLEU of ≥ 0.09. We chose a value of 0.09 because it seemed to maintain enough context that you could tell the translation and the reference were related. We describe four common hallucination patterns: grammatically correct output that bears no relation to the input text, ungrammatical output with oscillatory structure, output that remains largely in the source language, and finally terse jumps to the end of the sequence. We also observe translations that are ungrammatical nonsense.
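A small sketch of the adjusted BLEU and hallucination test from Algorithm 1 follows. The paper only states the per-n-gram weights (1.0 for unigrams, 0.8 for bigrams), so combining them with the standard log-linear BLEU formula and brevity penalty is an assumption on our part.

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def adjusted_bleu(reference, hypothesis, weights=(1.0, 0.8)):
    """Re-weighted BLEU using only unigram and bigram clipped precisions."""
    log_score = 0.0
    for n, w in enumerate(weights, start=1):
        hyp, ref = ngram_counts(hypothesis, n), ngram_counts(reference, n)
        overlap = sum((hyp & ref).values())        # clipped n-gram matches
        total = max(sum(hyp.values()), 1)
        if overlap == 0:
            return 0.0
        log_score += w * math.log(overlap / total)
    # Standard brevity penalty.
    bp = 1.0 if len(hypothesis) > len(reference) else math.exp(
        1 - len(reference) / max(len(hypothesis), 1))
    return bp * math.exp(log_score)

def is_hallucination(base_translation, perturbed_translation):
    # The perturbed translation shares almost no words with the unperturbed one.
    return adjusted_bleu(base_translation, perturbed_translation) < 0.01
```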
While still highly undesirable, we note that a user should at least be able spot and reject these additional hallucination patterns. We show that hallucinations can be easily evoked by inserting tokens in the source sequence. We used Algorithm 1 to quantify how susceptible a given model is to hallucination. In particular, we studied what types of perturbations (location, and token type) are more effective at inducing hallucinations. With this method, we found that, on average, 73% of all sentences in the WMT De→En test set can be perturbed to hallucination in the canonical model. We studied how beam search, number of hidden units, vocabulary size, and decoding scheme affected hallucination percentages FIG0. We found that changing the number of hidden units to both 512 and 1024 from 256 and changing the vocabulary size-from 8K to 16K BPE codes did not significantly decrease the hallucination percentage. However, beam search and a vocabulary size increase corresponding to 32K BPE codes did significantly lower the mean percentage of hallucinations. We also studied how different types of perturbations impacted the hallucination percentage of the canonical model FIG0. By far, adding a perturbing token to the beginning of the input sequence induces the most hallucinations in the canonical model. We were curious if BLEU scores were predictive of hallucination percentage. We plot the BLEU vs. hallucination percentage of all models we study (FIG1 . Surprisingly, we found that hallucination percentage does not decrease as BLEU score increases. However, since we did not study all possible models, we urge caution in interpreting these . What changes can we make to the model to make it more robust to hallucinations? We investigated the effect of three different methodologies, simple regularizations, data augmentation and regularizations on the dynamics in state space. We tested if a model variation significantly reduces hallucinations by performing a one-sided Mann-Whitney U between the canonical distribution of models and the distribution of models that use the model variant. We use a p-value of 0.05.Simple Regularizations: We choose dropout, L2 regularization on embeddings (L2E), L2 regularization on recurrent weights (L2R) and L2 regularization on all weights (L2) as straight-forward regularization techniques to be applied. For dropout, we created a model with dropout in all feedforward layers, with a keep probability of 0.9. Next, we implemented L2 regularization on the We augmented the training data by perturbing all training sentences with a random token (either common, rare, mid-frequency, or punctuation) at either the beginning, end, second-to-end, or randomly in the middle while keeping the reference translation the same. This doubled our training set. We then trained a canonical model with the augmented training set, and found that data augmentation helps decrease hallucination percentages. We call this model DA.Dynamical Regularizations: We wondered if hallucinations could be reduced by providing a more exact initial state for the decoder, so we trained additional models where the initial state of the decoder was tied to last step of the encoder (TDIS). Note that the canonical model sets the decoder initial state as a vector of zeros. As a second regularization method that operates on the state space, we used Chaos-free network (CFN) BID21 which by premise cannot produce chaos. We replaced the LSTM cell with the CFN in a set of experiments, again using 256 hidden units. 
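The data-augmentation scheme described above can be summarized by the small sketch below (our own illustration): a single token from one of the four pools is inserted at one of the four positions, and the reference translation is kept unchanged.

```python
import random

def perturb(source_tokens, token_pool):
    """Insert one randomly chosen token at a randomly chosen position."""
    token = random.choice(token_pool)              # common / rare / mid-frequency / punctuation
    position = random.choice(["begin", "end", "second_to_end", "middle"])
    s = list(source_tokens)
    if position == "begin":
        s.insert(0, token)
    elif position == "end":
        s.append(token)
    elif position == "second_to_end":
        s.insert(max(len(s) - 1, 0), token)
    else:
        s.insert(random.randrange(1, max(len(s), 2)), token)
    return s

# Each training pair (x, y) is duplicated as (perturb(x), y), doubling the training set.
```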
Dropout, L2E, and DA all resulted in statistically significant decreases in hallucination percentage, with DA being by far the most effective at decreasing hallucinations. On the contrary, switching out the recurrent cell for the CFN did not reduce hallucinations. Although data augmentation dramatically reduced hallucinations in the canonical model, it requires knowing the kind of perturbations that one would use to induce a hallucination. To study how fine-grained one's knowledge must be, we trained the canonical model on a different training set where we withheld two types of data augmentation: perturbing at the beginning or with common tokens (we call this model DA w/o beginning or common). We then compared this model with the canonical model trained with the full DA training set (Figure 4). We found that DA w/o beginning or common yields much higher hallucination percentages when tested by perturbing at the beginning or with common tokens in comparison to the DA model. However, we also saw a reduction in hallucination percentage for common and beginning tokens when compared to the canonical model. This indicates that DA can still provide some protection against hallucinations, even if the exact perturbations are not known. Figure 4: Effects of data-augmented (DA) training when including all perturbation types vs excluding common and beginning perturbation types. We trained two models, one including all perturbation types for DA training, and the other excluding common and beginning perturbation types. We then examined the hallucination percentage of each perturbation type for both of these models and studied whether a DA model would be less prone to hallucinate when perturbed with types of tokens or positions it had not been trained against. The red star shows that DA w/o beginning or common had a statistically significantly reduced mean compared to the canonical model trained without DA. Additionally, we wondered if hallucinations were present in NMT architectures that were not recurrent. Thus, we studied the Transformer model (TR) BID32. To make our results easily accessible to NMT practitioners, we chose a hyperparameter set from those given in the popular Tensor2Tensor library BID3 that was closest to our cutoff BLEU score when using greedy decoding. These models are trained with the parameters from transformer tiny (2 hidden layers, 128 hidden size, 512 filter size, and 4 heads) and have a greedy BLEU score of 17.5, which is a little lower than our GNMT models. We find the transformer model hallucinates significantly less than the canonical model, but can still be perturbed to hallucinate on average 15% of the time (FIG2). We present these results with many caveats. Unlike the canonical model, this model is trained with many types of regularization (dropout, attention dropout, label smoothing, relu dropout, and a larger batch size) and a longer input sequence length (256 versus 50 in the canonical model). Unfortunately, training with no regularization or a sequence length of 50 dramatically reduced the BLEU score for the parameter combinations we tried, and thus we decided to present these results with caveats instead of a model without regularization and a comparable sequence length. We observed a large difference between attention matrices of normal translations and of hallucinations. Attention networks in normal translations tend to study the entire input sequence throughout decoding. In French to English and other language pairs that are grammatically aligned (German to English is somewhat aligned), this often results in a strong diagonal in the attention matrix.
The attention matrix, when translating hallucinations, however, shows the model attending to only a few tokens. We give an example comparison in FIG3, top panels. For additional attention matrices, see Section 9. We wanted to quantify this difference in a way that does not require strong alignment between languages, i.e. one expects English to French to result in a largely diagonal matrix but not English to Turkish, so we used information entropy to compute a statistic that describes the difference between attention matrices during a decode that resulted in a hallucination and those that resulted in a normal translation. Specifically, at each output of the decoder, the attention network gives a distribution over input tokens. We averaged these distributions across all decoded output tokens, resulting in a distribution of average attention weight over the input tokens. We treated this as a discrete distribution and computed the entropy, −∑_t p(x t) log p(x t), where x t is the input token at time t, for each example, resulting in a distribution of entropy values over all decoded sequences. We then compared the entropy of the average attention distributions between hallucinations and correct translations (FIG3). This figure shows a significant difference between the entropy values for hallucination sequences. As a control, we show there is no significant difference between original input sequences and perturbed input sequences for sentences that cannot be perturbed to hallucination. Note that, in real-world scenarios where a ground-truth translation is not available, the entropy of the average attention distribution may be useful to detect hallucinations. FIG3 caption: A. Normal translations produce attention matrices that show a distribution of weight across most tokens in the source sequence throughout decoding (x-axis: source sequence, y-axis: decoded sequence). B. However, during hallucinations, the attention network tends to place weight on only a few input tokens; the majority of the weight throughout decoding is placed on the "." in the source sequence. C, D. We used an entropy measure to quantify how distributed the attention is over the input source sequence. Shown are the distributions for normal translations (C) and hallucinations (D) for the original (blue) and perturbed (green) attention entropy values. The means of the entropy distributions for hallucinations are statistically significantly different (Mann-Whitney U test, p < 0.05).
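The entropy statistic defined above is straightforward to compute; a minimal sketch follows (our own illustration, with a dummy attention matrix rather than real decoder output).

```python
import numpy as np

def attention_entropy(attention):
    """Entropy of the average attention distribution.

    `attention` has shape (decoded_length, source_length); each row is the
    attention distribution over source tokens for one output token.
    """
    avg = attention.mean(axis=0)
    avg = avg / avg.sum()                  # renormalize the averaged weights
    avg = np.clip(avg, 1e-12, None)        # avoid log(0)
    return -np.sum(avg * np.log(avg))

# A normal translation spreading attention over S tokens approaches log(S),
# while a hallucination fixating on one source token has near-zero entropy.
S = 12
uniform = np.full((8, S), 1.0 / S)
peaked = np.zeros((8, S)); peaked[:, 0] = 1.0
print(attention_entropy(uniform), attention_entropy(peaked))  # ~log(12) vs ~0
```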
The breakdown of the attention module seems to signal that the encoder and the decoder have been decoupled, and the decoder ignores context from the encoder and samples from its language model. Two possibilities are that broken attention modules are the root cause of decoupling, or that they are a symptom of further breakdown in the dynamics of the decoder. Examining the causes and types of hallucinations (instability of translation, translations decoupled from the input, and oscillations in translations) led us to believe that hallucinations result from a dynamical process gone wrong in the decoder (which might be caused by the encoder or attention module). Further, many model variants that decrease hallucinations (like L2 regularization) can be viewed as regularizing the dynamics of the model. Thus, we explore differences in the decoder between translating a hallucinating or non-hallucinating sentence by comparing the hidden states of the decoder and analyzing the stability of the system (FIG4). In this section, we are interested in how perturbations change the decoding pipeline and thus study an unchanged input sentence (which we call "original" and denote by x o) and its perturbations (x p). We perturbed all source sentences in the test set with all types of tokens (common, rare, mid-frequency, and punctuation) at all locations (beginning, end, second-to-end, and randomly) and sorted all perturbations into two groups: those that result in hallucinations and those that do not. We hypothesize that differences in translation occur early in decoding and focus on the first timestep the decoder receives context from the encoder, t = 1. We studied both the distance between (left panel) and the ratio of the norms (middle panel) of the perturbed decoder hidden states h 1 (x p) and the original decode, h 1 (x o), at decode step t = 1. Both resulted in obviously different distributions as a function of whether the perturbation resulted in a hallucination. Given these differences (FIG4), it seemed natural to attempt to causally reduce the number of hallucinations by re-normalizing h 1 (x p) to the norm of h 1 (x o). We tried this for all perturbations of all original sentences in our test set and did not see a reduction in hallucination percentage. We wondered if a hallucination results from exciting an unstable mode of the decoder early in the decode. To study this, we defined an unstable subspace of the original decode, U (x o), as the subspace spanned by the eigenvectors of the Jacobian ∂h_1/∂h_0 (x o) with corresponding eigenvalues greater than 1. We projected the normalized hidden state of the perturbed input, x p, onto this subspace to obtain a measure E(x o, x p), where ĥ is h normalized to 1. We did this for every original sentence in our test set that had at least 10 hallucinations (around half of all original sentences). We show the count (up to 10) of perturbed sentences such that E(x o, x p) > 1 when x p resulted in a hallucination (red) and when it did not (blue) in FIG4 (right panel). Finally, we also studied the stability exponents of the decoder, focusing on how the eigenvalues of the Jacobian of the hidden states of the decoder, ∂h_T/∂h_0 (x), changed as a function of whether or not a perturbation resulted in a hallucination, for models trained with and without data augmentation (shown in Appendix 10). In this paper we uncovered and studied a hallucination-like phenomenon whereby adding a single additional token into the input sequence causes complete mistranslation. We showed that hallucinations are common in the NMT architecture we examined, as well as in its variants. We note that hallucinations appear to be model specific. We showed that the attention matrices associated with hallucinations were statistically different on average from those associated with input sentences that could not be perturbed. Finally, we proposed a few methods to reduce the occurrence of hallucinations. Our model has two differences from production systems: for practical reasons we studied a small model and used a limited amount of training data. Given these differences, it is likely that our model shows more hallucinations than a quality production model. However, given news reports of strange translations in popular public translation systems BID12, the dynamical nature of the phenomenon, the fact that input datasets are noisy and finite, and that our most effective technique for preventing hallucinations is a data augmentation technique that requires knowledge of hallucinations, it would be surprising to discover that hallucinations did not occur in production systems.
While it is not entirely clear what should happen when a perturbing input token is added to an input source sequence, it seems clear that having an utterly incorrect translation is not desirable. This phenomenon appeared to us like a dynamical problem. Here are two speculative hypotheses: perhaps a small problem in the decoder is amplified via iteration into a much larger problem. Alternatively, perhaps the perturbing token places the decoder state in a poorly trained part of state space, the dynamics jump around wildly for while until an essentially random well-trodden stable trajectory is found, producing the remaining intelligible sentence fragment. Many of our can be interpreted from the vantage of dynamical systems as well. For example, we note that the NMT networks using CFN recurrent modules were highly susceptible to perturbations in our experiments. This highlights the difficulty of understanding or fixing problems in recurrent networks. Because the CFN is embedded in a larger graph that contains an auto-regressive loop, there is no guarantee that the chaos-free property of the CFN will transfer to the larger graph. The techniques we used to reduce hallucinations can also be interpreted as dynamical regularization. For example, L2 weight decay is often discussed in the context of generalization. However, for RNNs L2 regularization can also be thought of as dynamically conditioning a network to be more stable. L2 regularization of input embeddings likely means that rare tokens will have optimization pressure to reduce the norm of those embeddings. Thus, when rare tokens are inserted into an input token sequence, the effects may be reduced. Even the data augmentation technique appears to have stability effects, as Appendix 10 shows the overall stability exponents are reduced when data augmentation is used. Given our experimental , do we have any recommendations for those that engineer and maintain production NMT systems? Production models should be tested for hallucinations, and when possible, the attention matrices and hidden states of the decoder should be monitored. Our on reducing hallucinations suggest that standard regularization techniques such as Dropout and L2 weight decay on the embeddings are important. Further, data augmentation seems critical and we recommend inserting randomly chosen perturbative tokens in the input sentence as a part of the standard training regime (while monitoring that the BLEU score does not fall). We note a downside of data augmentation is that, to some extent, it requires knowing the types of the pathological phenomenon one desires to train against. Figure 7: Schematic of the NMT decoder. The input sequence, x 1:S, is encoded by a bidirectional encoder (not shown) into a sequence of encodings, z 1:S. The attention network, f att, computes a weighted sum of these encodings (computed weights not shown), based on conditioning information from h and provides the weighted encoding to the 2-layer decoder, f dec, as indicated by the arrows. The decoder proceeds forward in time producing the translation one step at a time. As the decoder proceeds forward, it interacts with both the attention network and also receives as input the decoded output symbol from the previous time step. Examples of pairs of sentences with different adjusted BLEU scores are as follows: As seen above, an adjusted BLEU score of < 0.01 means the two sentences have very few words in common. 
We defined a spectrum of stability exponents for the decoder and compared them between normal translations and hallucinations (FIG6). Concretely, we studied the stability of the decoder as a function of a given input token sequence, x 1:S of length S (denoted x below). The sequence x 1:S is run through the encoder, whose output is processed by the attention network, finally delivering an input to the decoder. For a given input token sequence, the decoder runs until it produces an end-of-sequence token, resulting in an output token sequence y 1:T of length T (or until it reaches a maximal decoded sequence length T > 3S). We were interested in studying the Jacobian ∂h_T/∂h_0, as many stability properties can be deduced from it. We note that if one is interested in studying ∂h_t/∂x_s, the iterative process described by ∂h_T/∂h_0 would still be critical to understand due to the chain rule. We defined our spectrum of stability exponents in analogy with Lyapunov exponents, but adapted for finite time, by studying a finite-time version of the Oseledets matrix typically used in the study of chaotic dynamical systems. In particular, the i-th stability exponent is defined as λ_i(x) = (1/2T) log(α_i(x)), where α_i(x) is the i-th eigenvalue of the positive semi-definite symmetric matrix [∂h_T/∂h_0]^T [∂h_T/∂h_0] (x), and h(t) is the decoder state at time t concatenated across all layers. We used auto-differentiation software to exactly compute the Jacobian ∂h_T/∂h_0 (x), so the complexities of the decoder circuitry were handled naturally (shown in Appendix, Section 8.1) BID4. We show the distribution of stability exponents, comparing between all input sequences that could be made to hallucinate and all those that could not (FIG6). We show these for both the canonical model and the model trained with data augmentation. There are two observations. First, the means of the distribution of the stability exponents for the canonical model, averaged over those sentences that could be perturbed to hallucinate, are statistically different from the exponents averaged over sentences that could not be perturbed to hallucinate. Second, the distributions of the model trained with data augmentation show significantly reduced exponents in comparison to the canonical model.
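A minimal sketch of computing these finite-time stability exponents with automatic differentiation follows. The `decoder_step` function is a hypothetical state update h_t -> h_{t+1} with inputs and attention context held fixed; the real decoder also consumes the previously emitted token, which is ignored here.

```python
import torch
from torch.autograd.functional import jacobian

def stability_exponents(decoder_step, h0, T):
    """Finite-time stability exponents lambda_i(x) = (1/2T) log(alpha_i(x))."""
    def rollout(h):
        for _ in range(T):
            h = decoder_step(h)
        return h
    J = jacobian(rollout, h0)                      # exact Jacobian d h_T / d h_0
    M = J.T @ J                                    # finite-time Oseledets-style matrix
    alphas = torch.linalg.eigvalsh(M).clamp_min(1e-30)
    return torch.log(alphas) / (2 * T)

# Toy example: a contracting linear map has all exponents equal to log(0.5) < 0.
W = 0.5 * torch.eye(8)
print(stability_exponents(lambda h: W @ h, torch.randn(8), T=20))
```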
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxTk3vB3m
We introduce and analyze the phenomenon of "hallucinations" in NMT, or spurious translations unrelated to source text, and propose methods to reduce its frequency.
For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations—feature-additivity and feature-selection—lead to fundamentally different instance-wise explanations. In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals. The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably. However, neural networks often rely on unreasonable correlations, even when producing correct decisions. We introduce a verification framework for explanatory methods under the feature-selection perspective. Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings. We validate the efficacy of our evaluation by showing the failure modes of current explainers. We aim for this framework to provide a publicly available,1 off-the-shelf evaluation when the feature-selection perspective on explanations is needed. A large number of post-hoc explanatory methods have recently been developed with the goal of shedding light on highly accurate, yet black-box machine learning models (a; ; ; ; b; ;). Among these methods, there are currently at least two widely used perspectives on explanations: feature-additivity (a; ; ;) and feature-selection (; ;), which we describe in detail in the sections below. While both shed light on the overall behavior of a model, we show that, when it comes to explaining the prediction on a single input in isolation, i.e., instance-wise explanations, the two perspectives lead to fundamentally different explanations. In practice, explanatory methods adhering to different perspectives are being directly compared. For example, and compare L2X, a feature-selection explainer, with LIME (a) and SHAP , two feature-additivity explainers. We draw attention to the fact that these comparisons may not be coherent, given the fundamentally different explanation targets, and we discuss the strengths and limitations of the two perspectives. Secondly, while current explanatory methods are successful in pointing out catastrophic biases, such as relying on headers to discriminate between pieces of text about Christianity and atheism (a), it is an open question to what extent they are reliable when the model that they aim to explain (which we call the target model) has a less dramatic bias. This is a difficult task, precisely because the ground-truth decision-making process of neural networks is not known. Consequently, when applied to complex neural networks trained on real-world datasets, a prevalent way to evaluate the explainers is to assume that the target models behave reasonably, i.e., that they did not rely on irrelevant correlations. For example, in their morphosyntactic agreement paradigm, Pörner et al. assume that a model that predicts if a verb should be singular or plural given the tokens before the verb, must be doing so by focusing on a noun that the model had identified as the subject. 
Such assumptions may be poor, since recent works show a series of surprising spurious correlations in human-annotated datasets, on which neural networks learn to heavily rely (; ;). Therefore, it is not reliable to penalize an explainer for pointing to tokens that just do not appear significant to us. We address the above issue by proposing a framework capable of generating evaluation tests for the explanatory methods under the feature-selection perspective. Our tests consist of pairs of (target model, dataset). Given a pair, for each instance in the dataset, the specific architecture of our model allows us to identify a subset of tokens that have zero contribution to the model's prediction on the instance. We further identify a subset of tokens clearly relevant to the prediction. Hence, we test if explainers rank zero-contribution tokens higher than relevant tokens. We instantiated our framework on three pairs of (target model, dataset) on the task of multi-aspect sentiment analysis. Each pair corresponds to an aspect and the three models (of same architecture) have been trained independently. We highlight that our test is not a sufficient test for concluding the power of explainers in full generality, since we do not know the whole ground-truth behaviour of the target models. Indeed, we do not introduce an explanation generation framework but a framework for generating evaluation tests for which we provide certain guarantees on the behaviour of the target model. Under these guarantees we are able to test the explainers for critical failures. Our framework therefore generates necessary evaluation tests, and our metrics penalize explainers only when we are able to guarantee that they produced an error. To our knowledge, we are the first to introduce an automatic and non-trivial evaluation test that does not rely on speculations on the behavior of the target model. Finally, we evaluate L2X , a feature-selection explainer, under our test. Even though our test is specifically designed for feature-selection explanatory methods, since, in practice, the two types of explainers are being compared, and, since LIME (a) and SHAP are two very popular explainers, we were interested in how the latter perform on our test, even though they adhere to the feature-additivity perspective. Interestingly, we find that, most of the time, LIME and SHAP perform better than L2X. We will detail in Section 5 the reasons why we believe this is the case. We provide the error rates of these explanatory methods to raise awareness of their possible modes of failure under the feature-selection perspective of explanations. For example, our findings show that, in certain cases, the explainers predict the most relevant token to be among the tokens with zero contribution. We will release our test, which can be used off-the-shelf, and encourage the community to use it for testing future work on explanatory methods under the feature-selection perspective. We also note that our methodology for creating this evaluation is generic and can be instantiated on other tasks or areas of research. The most common instance-wise explanatory methods are feature-based, i.e., they explain a prediction in terms of the input unit-features (e.g., tokens for text and super-pixels for images). 
Among the feature-based explainers, there are two major types of explanations: (i) feature-additive: provide signed weights for each input feature, proportional to the contributions of the features to the model's prediction (a; ; ;), and (ii) feature-selective: provide a (potentially ranked) subset of features responsible for the prediction (; ;). We discuss these explanatory methods in more detail in Section 3. Other types of explanations are (iii) example-based : identify the most relevant instances in the training set that influenced the model's prediction on the current input, and (iv) human-level explanations (; ;): explanations that are similar to what humans provide in real-world, both in terms of arguments (human-biases) and form (full-sentence natural language). In this work, we focus on verifying feature-based explainers, since they represent the majority of current works. While many explainers have been proposed, it is still an open question how to thoroughly validate their faithfulness to the target model. There are four types of evaluations commonly performed: 1. Interpretable target models. Typically, explainers are tested on linear regression and decision trees (e.g., LIME (a)) or support vector representations (e.g., MAPLE ). While this evaluation accurately assesses the faithfulness of the explainer to the target model, these very simple models may not be representative for the large and intricate neural networks used in practice. 2. Synthetic setups. Another popular evaluation setup is to create synthetic tasks where the set of important features is controlled. For example, L2X was evaluated on four synthetic tasks: 2-dim XOR, orange skin, nonlinear additive model, and switch feature. While there is no limit on the complexity of the target models trained on these setups, their synthetic nature may still prompt the target models to learn simpler functions than the ones needed for real-world applications. This, in turn, may ease the job for the explainers. 3. Assuming a reasonable behavior. In this setup, one identifies certain intuitive heuristics that a high-performing target model is assumed to follow. For example, in sentiment analysis, the model is supposed to rely on adjectives and adverbs in agreement with the predicted sentiment. Crowd-sourcing evaluation is often performed to assert if the features produced by the explainer are in agreement with the model's prediction . However, neural networks may discover surprising artifacts to rely on, even when they obtain a high accuracy. Hence, this evaluation is not reliable for assessing the faithfulness of the explainer to the target model. 4. Are explanations helping humans to predict the model's behaviour? In this evaluation, humans are presented with a series of predictions of a model and explanations from different explainers, and are asked to infer the predictions (outputs) that the model will make on a separate set of examples. One concludes that an explainer E1 is better than an explainer E2 if humans are consistently better at predicting the output of the model after seeing explanations from E1 than after seeing explanations from E2 . While this framework is a good proxy for evaluating the real-world usage of explanations, it is expensive and requires considerable human effort if it is to be applied on complex real-world neural network models. 
In contrast to the above, our evaluation is fully automatic, and the target model is a non-trivial neural network trained on a real-world task for which we provide guarantees on its inner workings. Our framework is similar in scope with the sanity check introduced by. However, their test filters for the basic requirement that an explainer should provide different explanations for a model trained on real data than when the data and/or model are randomized. Our test is therefore more challenging and requires a stronger fidelity of the explainer to the target model. As mentioned before, current explanatory methods adhere to two major perspectives of explanations: Perspective 1 (Feature-additivity): For a model f and an instance x, the explanation of the prediction f (x) consists of a set of contributions {w^x_i (f)}_i for each feature i of x such that the sum of the contributions of the features in x approximates f (x), i.e., ∑_i w^x_i (f) ≈ f (x). Many explanatory methods adhere to this perspective (; ; a). For example, LIME (a) learns the weights via a linear regression on the neighborhood (explained below) of the instance. unified this class of methods by showing that the only set of feature-additive contributions that verify three desired constraints (local accuracy, missingness, and consistency; we refer to their paper for details) are given by the Shapley values from game theory: w^x_i (f) = ∑_{x′ ⊆ x \ {i}} (|x′|! (|x| − |x′| − 1)! / |x|!) [f (x′ ∪ {i}) − f (x′)], where the sum enumerates over all subsets x′ of features in x that do not include the feature i, and | · | denotes the number of features of its argument. Thus, the contribution of each feature i in the instance x is an average of its contributions over a neighborhood of the instance. Usually, this neighborhood consists of all the perturbations given by masking out combinations of features in x; see, e.g., (a;). show that the choice of the neighborhood is critical, and it is an open question what neighborhood is best to use in practice. Perspective 2 (Feature-selection): For a model f and an instance x, the explanation of f (x) consists of a sufficient (ideally small) subset S(x) of (potentially ranked) features that alone lead to (almost) the same prediction as the original one, i.e., f (S(x)) ≈ f (x). Several recent methods adhere to this perspective. For example, L2X learns S(x) by maximizing the mutual information between S(x) and the prediction. However, it assumes that the number of important features per instance, i.e., |S(x)|, is known, which is usually not the case in practice. A downside of this perspective is that it may not always be true that the model relied only on a (small) subset of features, as opposed to using all the features. However, this can be the case for certain tasks, such as sentiment analysis. To better understand the differences between the two perspectives, in Figure 1 we provide the instance-wise explanations that each perspective aims to provide for a hypothetical sentiment analysis regression model, where 0 is the most negative and 1 the most positive score. Figure 1 (example instances): x1: "The movie was good, it was actually nice."; x2: "The movie was nice, in fact, it was very good." We note that our hypothetical model is not far, in behaviour, from what real-world neural networks learn, especially given the notorious biases in the datasets.
For example, show that natural language inference neural networks trained on SNLI may heavily rely on the presence of a few specific tokens in the input, which should not even be, in general, indicators for the correct target class, e.g., "outdoors" for the entailment class, "tall" for the neutral class, and "sleeping" for the contradiction class. In our examples in Figure 1, we clearly see the differences between the two perspectives. For the instance x 1, the feature-additive explanation tells us that "nice" was the most relevant feature, with a weight of 0.4, but also that "good" had a significant contribution of 0.3. While for this instance alone, our model relied only on "nice" to provide a positive score of 0.7, it is also true that, if "nice" was not present, the model would have relied on "good" to provide a score of 0.6. Thus, we see that the feature-additive perspective aims to provide an average explanation of the model on a neighborhood of the instance, while the feature-selective perspective aims to tell us the pointwise features used by the model on the instance in isolation, such as "nice" for instance x 1. An even more pronounced difference between the two perspectives is visible on instance x 2, where the ranking of features is now different. The feature-selective explanation ranks "good" and "nice" as the two most important features, while on the instance x 2 in isolation, the model relied on the tokens "very" and "good", that the feature-selection perspective would aim to provide. Therefore, we see that, while both perspectives of explanations give insights into the model's behavior, one perspective might be preferred over the other in different real-world use-cases. In the rest of the paper, we propose a verification framework for the feature-selection perspective of instance-wise explanations. Our proposed verification framework leverages the architecture of the RCNN model introduced by. We further prune the original dataset on which the RCNN had been trained to ensure that, for each datapoint x, there exists a set of tokens that have zero contribution (irrelevant features) and a set of tokens that have a significant contribution (clearly relevant features) to RCNN's prediction on x. We further introduce a set of metrics that measure how explainers fail to rank the irrelevant tokens lower than the clearly revelant ones. We describe each of these steps in detail below. The RCNN. The RCNN consists of two modules: a generator followed by an encoder, both instantiated with recurrent convolutional neural networks . The generator is a bidirectional network that takes as input a piece of text x and, for each of its tokens, outputs the parameter of a Bernoulli distribution. According to this distribution, the RCNN selects a subset of tokens from x, called S x = generator(x), and passes it to the encoder, which makes the final prediction solely as a function of S x. Thus: There is no direct supervision on the subset selection, and the generator and encoder were trained jointly, with supervision only on the final prediction. The authors also used two regularizers: one to encourage the generator to select a short sub-phrase, rather than disconnected tokens, and a second to encourage the selection of fewer tokens. At training time, to circumvent the non-differentiability introduced by the intermediate sampling, the gradients for the generator were estimated using a REINFORCE-style procedure . 
This intermediate hard selection facilitates the existence of tokens that do not have any contribution to the final prediction. aimed for S x to be the sufficient rationals for each prediction, the model might have learned an internal (emergent) communication protocol that encodes information from the non-selected via the selected tokens, which we call a handshake. For example, the RCNN could learn a handshake such as the one in Figure 2, where the feature "good" was important in all three cases, but not selected in the first two. Eliminating handshakes. Our goal is to gather a dataset D such that for all x ∈ D, the set of non-selected tokens, which we denote N x = x \ S x, has zero contribution to the RCNN's prediction on x. Equivalently, we want to eliminate instances that contain handshakes. We show that: The proof is in Appendix B. On our example in Figure 2, on the instance "The movie was very good.", the model selects "very" and predicts a score of 1. However, if we input the instance consisting of just "very", the model will not select anything 2 and would return a score of 0.5. Thus, Equation 7 indeed captures the handshake in this example. From now on, we refer to non-selected tokens as irrelevant or zero-contribution interchangeably. On the other hand, we note that S Sx = S x does not necessarily imply that there was a handshake. There might be tokens (e.g., the or a at the ends of the selection sequence(s)) that might have been selected in the original instance x and that become non-selected in the instance formed by S x without significantly changing the actual prediction. However, since it would be difficult to differentiate between such a case and an actual handshake, we simply prune the dataset by retaining only the instances for which S Sx = S x. At least one clearly relevant feature. With our pruning above, we ensured that the non-selected tokens have no contribution to the prediction. However, we are yet not sure that all the non-selected tokens are relevant to the prediction. In fact, it is possible that some tokens (such as "the" or "a") are actually noise, but have been selected only to ensure that the selection is a contiguous sequence (as we mentioned, the RCNN was penalized during training for selecting disconnected tokens). Since we do not want to penalize explainers for not differentiating between noise and zero-contribution features, we further prune the dataset such that there exists at least one selected token which is, without any doubt, clearly relevant for the prediction. To ensure that a given selected token s is clearly relevant, we check that, when removing the token s, the absolute change in prediction with respect to the original prediction is higher than a significant threshold τ. Precisely, if for the selected token s ∈ S x, we have that |encoder(S x −s)−encoder(S x)| ≥ τ, then the selected token s is clearly relevant for the prediction. Thus, we have further partitioned S x into S x = SR x ∪ SDK x, where SR x are the clearly relevant tokens, and SDK x are the rest of the selected tokens for which we do not know if they are relevant or noise (SDK stands for "selected don't know"). We see a diagram of this partition in Figure 3. We highlight that simply because a selected token alone did not make a change in prediction higher than a threshold does not mean that this token is not relevant, as it may be essential in combination with other tokens. 
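To make the dataset construction concrete, the following sketch implements the two pruning steps just described, assuming black-box `generator` and `encoder` callables (the generator returning one 0/1 decision per token at inference time, the encoder returning a scalar prediction); the handling of duplicate tokens is simplified and all names are illustrative.

```python
def selection(generator, tokens):
    """S_x: the tokens kept by the generator's hard selection."""
    mask = generator(tokens)                       # one 0/1 decision per token
    return tuple(t for t, m in zip(tokens, mask) if m)

def build_eval_instance(generator, encoder, tokens, tau=0.1):
    """Return (S_x, SR_x, N_x) for one instance, or None if it is pruned.

    The instance is kept only if (i) S_{S_x} = S_x, i.e. the no-handshake
    condition of Equation 7 holds, and (ii) at least one selected token moves
    the prediction by at least `tau` when removed individually (SR_x non-empty).
    """
    s_x = selection(generator, tokens)
    if selection(generator, list(s_x)) != s_x:     # potential handshake -> prune
        return None
    base = encoder(s_x)
    sr_x = tuple(s for i, s in enumerate(s_x)
                 if abs(encoder(s_x[:i] + s_x[i + 1:]) - base) >= tau)
    if not sr_x:                                   # no clearly relevant token -> prune
        return None
    n_x = tuple(t for t in tokens if t not in s_x)
    return s_x, sr_x, n_x
```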
Our procedure only ensures that the tokens that change the prediction by a given (high) threshold are indeed important and should therefore be ranked higher than any of the non-selected tokens, which have zero contribution. We thus further prune the dataset to retain only the datapoints x for which |SR x | ≥ 1, i.e., there is at least one clearly relevant token per instance. Evaluation metrics. First, we note that our procedure does not provide an explainer in itself, since we do not give an actual ranking, nor any contribution weights, and it is possible for some of the tokens in SDK x to be even more important than tokens in SR x. However, we guarantee the following two properties: P1: All tokens in N x have to be ranked lower than any token in SR x. The first most important token has to be in S x. We evaluate explainers that provide a ranking over the features. We denote by r 1 (x), r 2 (x),..., r n (x) the ranking (in decreasing order of importance) given by an explainer on the n = |x| features in the instance x. Under our two properties above, we define the following error metrics: (A) Percentage of instances for which the most important token provided by the explainer is among the non-selected tokens: where 1 is the indicator function. (B) Percentage of instances for which at least one non-selected token is ranked higher than a clearly relevant token: (C) Average number of non-selected tokens ranked higher than any clearly relevant token: where last si = argmax j {r j (x) ∈ SR x } is the lowest rank of the clearly relevant tokens. Metric (A) shows the most dramatic failure: the percentage of times when the explainer tells us that the most relevant token is one of the zero contribution ones. Metric (B) shows the percentage of instances for which there is at least an error in the explanation. Finally, metric (C) quantifies the number of zero-contribution features that were ranked higher than any clearly relevant feature. In this work, we instantiate our framework on the RCNN model trained on the BeerAdvocate corpus, 3 on which the RCNN was initially evaluated . BeerAdvocate consists of a total of ≈.100K human-generated multi-aspect beer reviews, where the three considered aspects are appearance, aroma, and palate. The reviews are accompanied with fractional ratings originally between 0 and 5 for each aspect independently. The RCNN is a regression model with the goal to predict the rating, rescaled between 0 and 1 for simplicity. Three separate RCNNs are trained, one for each aspect independently, with the same default settings. 4 With the above procedure, we gathered three datasets D a, one for each aspect a. For each dataset, we know that for each instance x ∈ D a, the set of non-selected tokens N x has zero contribution to the prediction of the model. For obtaining the clearly relevant tokens, we chose a threshold of τ = 0.1, since the scores are in, and the ground-truth ratings correspond to {0, 0.1, 0.2, . . ., 1}. Therefore, a change in prediction of 0.1 is to be considered clearly significant for this task. We provide several statistics of our datasets in Appendix A. For example, we provide the average lengths of the reviews, of the selected tokens per review, of the clearly relevant tokens among the selected, and of the non-selected tokens. We note that we usually obtained 1 or 2 clearly relevant tokens per datapoints, showing that our threshold of 0.1 is likely very strict. However, we prefer to be more conservative in order to ensure high guarantees on our evaluation test. 
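Returning to the error metrics defined above, they can be computed directly from an explainer's ranking. A minimal sketch, assuming the ranking is a list of feature indices in decreasing order of importance and that it covers all features of the instance:

```python
def metric_errors(ranking, clearly_relevant, non_selected):
    """Per-instance error indicators for metrics (A), (B) and (C).

    `ranking` lists feature indices in decreasing order of importance, as
    produced by an explainer; `clearly_relevant` and `non_selected` are the
    index sets SR_x and N_x built for this instance.
    """
    a_err = ranking[0] in non_selected          # (A) top feature has zero contribution
    last_sr = max(ranking.index(i) for i in clearly_relevant)   # lowest rank of SR_x
    ahead = [i for i in ranking[:last_sr] if i in non_selected]
    b_err = len(ahead) > 0                      # (B) some N_x token outranks an SR_x token
    c_count = len(ahead)                        # (C) N_x tokens ranked above all of SR_x
    return a_err, b_err, c_count

# Dataset-level scores: metrics (A) and (B) are the percentage of instances with
# a_err / b_err set, and metric (C) is the mean of c_count over all instances.
```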
We also provide the percentages of datapoints eliminated in order to ensure the no-handshake condition (Equation 7). Evaluating explainers. We test three popular explainers: LIME (a), SHAP , and L2X . We used the code of the explainers as provided in the original repositories, 5 with their default settings for text explanations, with the exception that, for L2X, we set the dimension of the word embeddings to 200 (the same as in the RCNN), and we also allowed training for a maximum of 30 epochs instead of 5. As mentioned in Section 3, LIME and SHAP adhere to the feature-additivity perspective, hence our evaluation is not directly targeting these explainers. However, we see in Table 1 that, in practice, LIME and SHAP outperformed L2X on the majority of the metrics, even though L2X is a featureselection explainer. We hypothesize that a major limitation of L2X is the requirement to know the number of important features per instance. Indeed, L2X learns a distribution over the set of features by maximizing the mutual information between subsets of K features and the response variable, where K is assumed to be known. In practice, one usually does not know how many features per instance a model relied on. To test L2X under real-world circumstances, we used as K the average number of tokens highlighted by human annotators on the subset manually annotated by. We obtained an average K of 23, 18, and 13 for the three aspects, respectively. In Table 1, we see that, on metric (A), all explainers are prone to stating that the most relevant feature is a token with zero contribution, as much as 14.79% of the time for LIME and 12.95% of the time for L2X in the aroma aspect. We consider this the most dramatic form of failure. Metric (B) shows that both explainers can rank at least one zero-contribution token higher than a clearly relevant feature, i.e., there is at least one mistake in the predicted ranking. Finally, metric (C) shows that, in average, SHAP only places one zero-contribution token ahead of a clearly relevant token for the first two aspects and around 9 tokens for the third aspect, while L2X places around 3-4 zero-contribution tokens ahead of a clearly relevant one for all three aspects. Figure 4: Explainers' rankings (with top 5 features on the right-hand side) on an instance from the palate aspect in our evaluation dataset. Qualitative Analysis. In Figure 6, we present an example from our dataset of the palate aspect. More examples in Appendix C. The heatmap corresponds to the ranking determined by each explainer, and the intensity of the color decreases linearly with the ranking of the tokens. 6 We only show in the heatmap the first K = 10 ranked tokens, for visibility reasons. Tokens in S x are in bold, and the clearly relevant tokens from SR x are additionally underlined. The first selected by the explainer is marked wth a rectangular. Additionally the 5 ranks tokens by each explainer are on the right-hand side. Firstly, we notice that both explainers are prone to attributing importance to nonselected tokens, with LIME and SHAP even ranking the tokens "mouthfeel" and "lacing" belonging to N x as first two (most important). Further, "gorgeous", the only relevant word used by the model, did not even make it in top 13 tokens for L2X. Instead, L2X gives "taste", "great", "mouthfeel" and "lacing" as most important tokens. We note that if the explainer was evaluated by humans assuming that the RCNN behaves reasonably, then this choice could have well been considered correct. 
In this work, we first shed light on an important distinction between two widely used perspectives of explanations. Secondly, we introduced an off-the-shelf evaluation test for post-hoc explanatory methods under the feature-selection perspective. To our knowledge, this is the first automatic verification framework offering guarantees on the behaviour of a non-trivial real-world neural network. We presented the error rates on different metrics for three popular explanatory methods to raise awareness of the types of failures that these explainers can produce, such as incorrectly predicting even the most relevant token. While instantiated on a natural language processing task, our methodology is generic and can be adapted to other tasks and other areas. For example, in computer vision, one could train a neural network that first makes a hard selection of super-pixels to retain, and subsequently makes a prediction based on the image where the non-selected super-pixels have been blurred. The same procedure of checking for zero contribution of non-selected super-pixels would then apply. We also point out that the core algorithms in the majority of the current post-hoc explainers are also domain-agnostic. Therefore, we expect our evaluation to provide a representative view of the fundamental limitations of the explainers.

We provide the statistics of our dataset in Table 2. N is the number of instances that we retain with our procedure, |x| is the average length of the reviews, and |S_x|, |SR_x|, and |N_x| are the average numbers of selected tokens, selected tokens that give an absolute difference of prediction of at least 0.1 when eliminated individually, and non-selected tokens, respectively. In parentheses are the standard deviations. The column %(S_{S_x} ≠ S_x) provides the percentage of instances eliminated from the original BeerAdvocate dataset due to a potential handshake. Finally, %(|SR_x| = 0) shows the percentage of datapoints (out of the non-handshake ones) further eliminated due to the absence of a selected token with an absolute effect of at least 0.1 on the prediction.

Proof: We note that if there is a handshake in the instance x, i.e., at least one non-selected token x_k ∈ N_x is actually influencing the final prediction via an internal encoding of its information into the selected tokens, then the model should have a different prediction when x_k is eliminated from the instance, i.e., RCNN(x) ≠ RCNN(x − x_k). Equivalently, if RCNN(x − x_k) = RCNN(x), then x_k could not have been part of a handshake. Thus, if the RCNN gives the same prediction when eliminating all the non-selected tokens, it means that there was no handshake for the instance x, and hence the tokens in N_x have indeed zero contribution. Hence, we have that: RCNN(x − N_x) = RCNN(x) =⇒ no handshake in x (Equation 8). Since x − N_x = S_x, Equation 8 rewrites as: RCNN(S_x) = RCNN(x) =⇒ no handshake in x (Equation 9). From Equation 2, we further rewrite Equation 9 as: encoder(generator(S_x)) = encoder(generator(x)) =⇒ no handshake in x (Equation 10). Since, by definition, generator(x) = S_x, we have that: encoder(S_{S_x}) = encoder(S_x) =⇒ no handshake in x (Equation 11). Hence, it is sufficient to have S_{S_x} = S_x in order to satisfy the left-hand-side condition of Equation 11, which finishes our proof.

Figure 5: Explainers' rankings (with the top 5 features on the right-hand side) on an instance from the appearance aspect in our evaluation. The figure shows the review "nice brown "grolsch" like bottle (good for re-use). pours a dark yellow color with a lot of head in the beginning which laces well. very fizzy. smells like fruit, maybe some apple and blueberry. no mouthfeel whatsover. besides being wet and a small initial alcohol, i could n't feel anything. tastes of fruit and not much alcohol, but i can start to feel a slight warming as i finish off the bottle. better than most american lagers, but very smooth. i think i would normally drink this too fast." highlighted by each explainer, with top-5 tokens: LIME — laces, yellow, lot, dark, of; SHAP — fizzy, laces, head, nice, yellow; L2X — laces, …, lot, whatsover, nice.
[ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1e-0kBYPB
An evaluation framework based on a real-world neural network for post-hoc explanatory methods
Planning in high-dimensional space remains a challenging problem, even with recent advances in algorithms and computational power. We are inspired by efference copy and sensory reafference theory from neuroscience. Our aim is to allow agents to form mental models of their environments for planning. The cerebellum is emulated with a two-stream, fully connected, predictor network. The network receives as inputs the efference as well as the features of the current state. Building on insights gained from knowledge distillation methods, we choose as our features the outputs of a pre-trained network, yielding a compressed representation of the current state. The representation is chosen such that it allows for fast search using classical graph search algorithms. We display the effectiveness of our approach on a viewpoint-matching task using a modified best-first search algorithm. As we manipulate an object in our hands, we can accurately predict how it looks after some action is performed. Through our visual sensory system, we receive high-dimensional information about the object. However, we do not hallucinate its full-dimensional representation as we estimate how it would look and feel after we act. But we feel that we understood what happened if there is an agreement between the experience of the event and our predicted experience. There has been much recent work on methods that take advantage of compact representations of states for search and exploration. One of the advantages of this approach is that finding a good representation allows for faster and more efficient planning. This holds in particular when the latent space is of a much lower dimensionality than the one where the states originally live in. Our central nervous system (CNS) sends a command (efferent) to our motor system, as well as sending a copy of the efferent to our cerebellum, which is our key organ for predicting the sensory outcome of actions when we initiate a movement and is responsible for fine motor control. The cerebellum then compares the of the action (sensory reafference) with the intended consequences. If they differ, then the cerebellum makes changes to its internal structure such that it does a better job next time -i.e., in no uncertain terms, it learns. The cerebellum receives 40 times more information than it outputs, by a count of the number of axons. This gives us a sense of the scale of the compression ratio between the high dimensional input and low dimensional output. Thus, we constrain our attention to planning in a low-dimensional space, without necessarily reconstructing the high-dimensional one. We apply this insight for reducing the complexity of tasks such that planning in high dimensionality space can be done by classical AI methods in low dimensionality space. Our contributions are thus twofold: provide a link between efference theory and classical planning with a simple model and introduce a search method for applying the model to reduced state-space search. We validate our approach experimentally on visual data associated with categorical actions that connect the images, for example taking an object and rotating it. We create a simple manipulation task using the NORB dataset , where the agent is presented with a starting viewpoint of an object and the task is to produce a sequence of actions such that the agent ends up with the target viewpoint of the object. 
As the NORB data set can be embedded on a cylinder (Schüler et al., 2018) or a sphere , we can visualize the actions as traversing the embedded manifold. We essentially aim at approximating a Markov decision process' state transition function. Thus, similarities abound in the reinforcement learning literature: many agents perform explicit latentspace planning computations as part of learning and executing policies. train a reinforcement learning (RL) agent to simultaneously predict rewards as well as future latent states. Our work is distinct from these as we are not assuming a reward signal and are not constrained to an RL setting during training. Similar to our training setup, predict future frames in ATARI environments conditioned on actions. The predicted frames are used for learning the dynamics of the environment, e.g. for improving exploration by informing agents of which actions are more likely to in unseen states. Our work differs as we are maneuvering within the latent space and not the full input space. Vision, memory, and controller modules are combined for learning a model of the world before learning a decision model in Ha and Schmidhuber's World Models . A predictive model is trained in an unsupervised manner, allowing the agent to learn policies entirely within its learned latent space representation of the environment. Instead of training the representation from scratch, we apply the machinery of transfer learning by using pre-trained networks. This allows the method to be applied to a generic representation rather than a specifically trained representation. Using the output of pre-trained networks as targets instead of the original pixels is not new. The case where the output of a larger model is the target for a smaller model is known as knowledge distillation (Buciluǎ et al., 2006). This is used for compressing a model ensemble into a single model. Vondrick et al. learn to make high-level semantic predictions of future frames in video data. Given a frame at a current time, a neural network is tasked with predicting the representation of a future frame, e.g. by AlexNet. Our approach extends this general idea by admitting an action as the input to our predictor network as well. Causal InfoGAN is a method based on generative adversarial networks (GANs) , inspired by InfoGAN in particular , for learning plannable representations. A GAN is trained for encoding start and target states and plans a trajectory in the representation space as well as reconstructing intermediate states in the plans. Our method differ from this by training a simple forward model and forgoing reconstruction, which is unnecessary for planning. To motivate our approach, we will briefly describe the concept of efference copies from neuroscience. As we act, our central nervous system (CNS) sends a signal, or an efference, to our peripheral nervous system (PNS). An additional copy of the efference is created, which is sent to an internal forward model . This model makes a prediction of the sensory outcome and learns by minimizing the errors between its prediction and the actual sensory inputs that are experienced after the action is performed. The sensory inputs that reach the CNS from the sensory receptors in the PNS are called afference. They can be exafference, signals that are caused by the environment, or reafference, signals that are caused by ourself acting. By creating an efference copy and training the forward model, we can tell apart exafference from reafference. 
This is how we can be tickled when grass straws are brushed against our feet (exafference), but we can walk barefeet over a field of grass without feeling tickled (reafference). We assume that the motor system and sensory system are fixed and not necessarily trainable. The motor system could be the transition function of a Markov decision process. The sensory system is assumed to be a visual feature extractor -in our experiments, a Laplacian Eigenmap or an intermediate layer of a pre-trained convolutional neural network (CNN) is used. Pre-trained CNNs can provide powerful, generic descriptors while eliminating the computational load associated with predicting the pixel values of the full images . ). A motor command, or efference, is issued by the CNS. A copy of the efference is made and is sent to an internal forward model, which predicts the sensory of the efference. The original efference is sent to the motor system to act. The consequence of the action in the world is observed by the sensory system, and a reafference is created and sent to the CNS. The forward model is then updated to minimize the discrepancy between predicted and actual reafference. Suppose we have a feature map φ and a training set of N data points (, where S t is the state at time step t and a t is an action ing in a state S t+1 . We train the predictor f, parameterized by θ, by minimizing the mean squared error loss over f 's parameters: is our set of training data. In our experiments, we construct f as a two-stream, fully connected, neural network (Fig. 2). Using this predictor we are able to carry out efficient planning in the latent space defined by φ. By planning we mean that there is a start state S start and a goal state S goal and we are tasked with finding a path between them. Assuming deterministic dynamics, we output the expected representation after performing the action. This allows us to formulate planning as a classical AI pathfinding or graph traversal problem. The environment is turned to a graph by considering each state as a node and each action as an edge. We used a modified best-first search algorithm with the trained EfferenceNets for our experiments (Algorithm 1). Each state is associated with a node as well as a score: the distance between its representation and the representation of the goal state. The edges between nodes are the available actions at each state. The output of the search is the sequence of actions that corresponds to the path connecting the start node to the proposed solution node, i.e. the node whose representation is the one closest to the goal. To make the algorithm faster, we only consider paths that do not take us to a state that has already been evaluated, even if there might be a difference in the predictions from going this roundabout way. That is, if a permutation of the actions in the next path to be considered is already in an evaluated path, it will be skipped. This is akin to transposition tables used to speed up search in game trees. This might yield paths with redundancies which can be amended with path-simplifying routines (e.g. take one step forward instead of one step left, one forward then one right). Selecting the optimal action in each step of a temporally-extended Markov decision process in a task effectively is a hard problem that is solved by different fields in different ways. For example, in Algorithm 1 Find a plan to reach S = S j, where j = argmin i≤k ||Ri − φ(S goal)|| and R = (r 0, . . ., r k) are the representations of explored states. 
Input: S start, S goal, max. trials m, action set A, feature map φ and EfferenceNet f Output: A plan of actions (a 0, . . ., a n) reaching # add j to the set of checked indices for k ← 0 to m do j = argmin i≤k,i / ∈D ||ri − φ(S goal)|| # take a new state, most similar to the goal for all a ∈ A do r k+1 ← f (rj, a) # try every action from current state R ← R ∪ {r k+1} # save the estimated representation p k+1 ← pj ∪ a P ← P ∪ {p k+1} # store path from goal state to current state # add j to the set of checked indices end for return pj the area of reinforcement learning, it is addressed by estimating the accumulated reward signal for a given behavioral strategy and adapting the strategy to optimize this estimate. This requires either a large number of actively generated samples or exact knowledge of the environment (in dynamic programming) to find a strategy that behaves optimally. In this paper, we choose a different approach and utilize a one-step prediction model to allow decisions to be made based on the predicted outcome of a number of one-step lookaheads started from an initial state. The actions are chosen so that each step greedily maximizes progress towards a known target. This method, sometimes called hillclimber or best-first search, belongs to the family of informed search algorithms . To be more efficient than random or exhaustive search, these kinds of algorithms rely on heuristics to provide sufficient evidence for a good -albeit not necessarily optimal -decision at every time step to reach the goal. Here we use the Euclidean distance in representation space: An action is preferred if its predicted is closest to the goal. The usefulness of this heuristics depends on how well and how coherently the Euclidean distance encodes the actual distance to the goal state in terms of the number of actions. Our experiments show that an easily attainable general purpose representation, such as a pre-trained VGG16 , can already provide sufficient guidance to apply such this heuristic effectively. One might, however, ask what a particularly suited representation might look like when attainability is ignored. It would need to take the topological structure of the underlying data manifold into account, such that the Euclidean distance becomes a good proxy for the geodesic distance. One class of methods that fulfill this are spectral embeddings, such as Laplacian Eigenmaps (LEMs) . Since they do not readily allow for out-of-sample embedding, they will only be applied in an in-sample fashion to serve as a control experiment in Section 5.2.1. In the experiments, we show that our method can be combined with simple, informed graph search algorithms to plan trajectories for manipulation tasks (Fig. 3, top row). We use the NORB dataset, which contains images of different objects each under 9 different elevations, 36 azimuths, and 6 lighting conditions. We derive from the dataset an object manipulation environment. The actions correspond to turning a turntable back and forth by 20 •, moving the camera up or down by 5 • or changing the illumination intensity. After the EfferenceNet is trained, we apply Algorithm 1 to the viewpoint matching task. The goal is to find a sequence of actions that transforms the start state to the goal state. The two states differ in their azimuth and elevation configuration. Given a feature map φ, we task the EfferenceNet f with predicting φ(S t+1) after the action a was performed in the state S t. 
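With a trained predictor in hand, the search of Algorithm 1 can be sketched as a greedy best-first loop over predicted representations. The snippet below is a simplified re-implementation (it omits the transposition-table check against permuted action sequences); `f`, `phi_start`, `phi_goal` and `actions` are assumed to be provided, with representations as NumPy arrays.

```python
import numpy as np

def plan(phi_start, phi_goal, f, actions, max_trials=50):
    """Greedy best-first search in representation space (cf. Algorithm 1).

    `f(r, a)` predicts the representation after action `a` is applied to a state
    with representation `r`; `phi_start` / `phi_goal` are feature-map outputs of
    the start and goal states.  Returns the action sequence whose predicted
    endpoint is closest to the goal under Euclidean distance.
    """
    reps = [phi_start]                 # explored representations R
    paths = [[]]                       # action path leading to each representation
    checked = set()
    for _ in range(max_trials):
        # pick the unchecked representation that is most similar to the goal
        candidates = [i for i in range(len(reps)) if i not in checked]
        j = min(candidates, key=lambda i: np.linalg.norm(reps[i] - phi_goal))
        checked.add(j)
        for a in actions:              # one-step lookahead with every action
            reps.append(f(reps[j], a))
            paths.append(paths[j] + [a])
    best = min(range(len(reps)), key=lambda i: np.linalg.norm(reps[i] - phi_goal))
    return paths[best]
```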
We train f by minimizing the mean-squared error between f (φ(S t), a) and φ(S t+1). The network (Fig. 2 is built with Keras and optimized with Nadam and converges in two hours on a Tesla P40. It is a two-stream dense neural network. Each stream consists of one dense layer followed by a batch normalization (BatchNorm) layer. The outputs of these streams are then concatenated and passed through 3 dense layers, each one followed by a BatchNorm, and then an output dense layer. Every dense layer, except the last, is followed by a rectified linear unit (ReLU) activation. Figure 2: EfferenceNet architecture. The network takes as input the representation vector φ(S t) of the state S t as determined by the feature map φ, as well as the one-hot encoded action a. It outputs the estimated feature vector of the ing state S t+1 after action a is performed. Our φ is chosen to be a Laplacian Eigenmap for in-sample search and the second-to-last layer output of VGG16 , pre-trained on ImageNet , for out-ofsample search. As the representation made by φ do not change over the course of the training, they can be cached, expediting the training. Of the 10 car toys in the NORB dataset, we randomly chose 9 for our training set and test on the remaining one. Embedding a single toy's confgurations in three dimensions using Laplacian Eigenmaps will in a cylindrical embedding that encodes both, elevation and azimuth angles, as visible in Figure 4. Three dimensions are needed so that the cyclic azimuth can be embedded correctly as sin(θ) and cos(θ). If such a representation is now used to train the EfferenceNet which is subsequently applied in Algorithm 1, one would expect monotonically decreasing distance the closer the prediction comes to the target. Figure 5 shows that this is the case and that this behavior can be very effectively used for a greedy heuristic. While the monotinicity is not always exact due to imperfections in the approximate prediction, Figure 5 still qualitatively illustrates a best-case scenario. The goal and start states are chosen randomly, with the constraint that their distances along each dimension (azimuth and elevation) are uniformly distributed. As the states are searched, a heat map of similarities is produced (Fig. 3). To visualize the performance of the search we plot a histogram (Fig. 6) illustrating the accuracy of the search. The looks less accurate with respect to elevation than azimuth, but this is due to the elevation changes being more fine-grained than the azimuth changes, namely by a factor of 4. The difference between the elevation of the goal and solution viewpoints in Figure 3 left, for example, is hardly perceptible. If one would scale the histograms by angle and not by bins, the drop-off would be comparable. The heat maps of the type shown in Figure 3 can be aggregated to reveal basins of attraction during the search. Each heat map is shifted such that the goal position is at the bottom, middle row (Fig. 7, a). Here it is apparent that the goal and the 180 • flipped (azimuth) version of the goal are attractor states. This is due to the feature map being sensitive to the rough shape of the object, but being unable to distinguish finer details. In (Fig. 7, b) we display an aggregate heat map when the agent can alter the lighting conditions as well. In our work, we focus on learning a transition model. Doing control after learning the model is an established approach, with the mature field of model-based RL dedicated to it. 
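A minimal Keras-style sketch of the two-stream EfferenceNet described above (Fig. 2) is given below; the hidden width and the exact placement of BatchNorm relative to ReLU are assumptions, and only the overall structure — one dense+BatchNorm block per stream, concatenation, three further dense+BatchNorm blocks, and a linear output trained with MSE under Nadam — follows the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_efference_net(feat_dim, n_actions, hidden=512):
    """Two-stream EfferenceNet: (phi(S_t), one-hot action) -> phi(S_{t+1})."""
    state_in = keras.Input(shape=(feat_dim,), name="state_features")
    action_in = keras.Input(shape=(n_actions,), name="action_one_hot")

    def stream(x):
        x = layers.Dense(hidden)(x)
        x = layers.BatchNormalization()(x)
        return layers.ReLU()(x)

    x = layers.Concatenate()([stream(state_in), stream(action_in)])
    for _ in range(3):
        x = layers.Dense(hidden)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    out = layers.Dense(feat_dim)(x)                 # linear output layer
    model = keras.Model([state_in, action_in], out)
    model.compile(optimizer="nadam", loss="mse")    # MSE objective, Nadam optimiser
    return model
```

Training only requires (φ(S_t), a, φ(S_{t+1})) triples and no reward signal, so the same learned transition model can be reused by the planner above for arbitrary new goals.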
This has the advantage of allowing for quick learning of new reward functions, since disentangling reward contingencies from the transition function is helpful when learning for multiple/changing reward functions and allows useful learning when there is no reward available at all. Thus, it might also be useful in a sparse or late reward setting. Another advantage of our approach is that it accommodates evaluations of reward trajectories with arbitrary discounts. Standard RL methods are usually restricted in their optimization problems. Often, there is a choice between optimizing discounted or undiscounted expected returns. Simulation/rollout-based planning methods are not restricted in that sense: If you are able to predict reward trajectories, you can (greedily) optimize arbitrary functions of these -possibly allowing for behavior regularization. For example, the risk-averse portfolio manager can prioritize smooth reward trajectories over volatile ones. We use a pre-trained network because we believe that a flexible algorithm should be based rather on generic, multi-purpose representions and not on very specific representations. This contributes to the flexibility of the system. However, a drawback of using pre-trained networks is that features might be encoded that are irrelevant for the current task. This has the effect that informed search methods, such as best-first search, are not guaranteed to output the accurate solution in the latent space, as there might be distracting pockets of erroneous local minima. Our visualizations reveal gradient towards the goal state as well as a visually similar, far away states. There is variation in the similarities, preventing the planning algorithm from finding the exact goal for every task, sometimes yielding solutions that are the polar-opposites of the goal, w.r.t. the azimuth. Pairing the EfferenceNet with a good but generic feature map allows us to perform an accurate search in the latent space of manipulating unseen objects. This remarkably simple method, inspired by the neurology of the cerebellum, reveals a promising line of future work. We validate our method by on a viewpoint-matching task derived from the NORB dataset. In the case of deterministic environments, EfferenceNets calculate features of the current state and action, which in turn define the next state. This opens up a future direction of research by combining EfferenceNets with successor features . Furthermore, the study of effective feature maps strikes us as an important factor in this line of work to consider. We utilize here Laplacian Eigenmaps and pre-trained deep networks. It is probably possible to improve the performance of the system by end-to-end training but we believe that it is more promising to work on generic multi-purpose representations. Possible further methods include Slow Feature Analysis (SFA) (Schüler et al., 2018). SFA has been previously shown to solve a special case of LEMs while it allows for natural out-of-sample embeddings.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkxloaEYwB
We present a neuroscience-inspired method based on neural networks for latent space search
Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks. Distributed representations have played a pivotal role in the current success of machine learning. In contrast with the symbolic representations of classical AI, distributed representation spaces can encode rich notions of semantic similarity in their distance measures, allowing systems to generalise to novel inputs. Methods to learn these representations have gained significant traction, in particular for modelling words BID30. They have since been successfully applied to many other domains, including images BID15 BID39 and graphs BID25 BID17 BID33.Using unlabelled data to learn effective representations is at the forefront of modern machine learning research. The Natural Language Processing (NLP) community in particular, has invested significant efforts in the construction BID30 BID37 BID10 BID21, evaluation and theoretical analysis BID28 of distributed representations for words. Recently, attention has shifted towards the unsupervised learning of representations for larger pieces of text, such as phrases BID50 BID51, sentences BID22 BID43 BID19 BID7, and entire paragraphs BID27. Some of this work simply sums or averages constituent word vectors to obtain a sentence representation BID32 BID31 BID48 BID7, which is surprisingly effective but naturally cannot leverage any contextual information. Another line of research has relied on a sentence-level distributional hypothesis BID38, originally applied to words BID18, which is an assumption that sentences which occur in similar contexts have a similar meaning. Such models often use an encoder-decoder architecture BID12 to predict the adjacent sentences of any given sentence. Examples of such models include SkipThought, which uses Recurrent Neural Networks (RNNs) for its encoder and decoders, and FastSent BID19, which replaces the RNNs with simpler bagof-words (BOW) versions. Models trained in an unsupervised manner on large text corpora are usually applied to supervised transfer tasks, where the representation for a sentence forms the input to a supervised classification problem, or to unsupervised similarity tasks, where the similarity (typically taken to be the cosine similarity) of two inputs is compared with corresponding human judgements of semantic similarity in order to inform some downstream process, such as information retrieval. Interestingly, some researchers have observed that deep complex models like SkipThought tend to do well on supervised transfer tasks but relatively poorly on unsupervised similarity tasks, whereas for shallow log-linear models like FastSent the opposite is true BID19 BID13. 
It has been highlighted that this should be addressed by analysing the geometry of the representation space BID6 BID42 BID19, however, to the best of our knowledge it has not been systematically attempted 1.In this work we attempt to address the observed performance gap on unsupervised similarity tasks between representations produced by simple models and those produced by deep complex models. Our main contributions are as follows:• We introduce the concept of an optimal representation space, in which the space has a similarity measure that is optimal with respect to the objective function.• We show that models with log-linear decoders are usually evaluated in their optimal space, while recurrent models are not. This effectively explains the performance gap on unsupervised similarity tasks.• We show that, when evaluated in their optimal space, recurrent models close that gap. We also provide a procedure for extracting this optimal space using the decoder hidden states.• We validate our findings with a series of consistent empirical evaluations utilising a single publicly available codebase. We begin by considering a general problem of learning a conditional probability distribution P model (y | x) over the output symbols y ∈ Y given the input symbols x ∈ X. Definition 1. A space H combined with a similarity measure ρ: H × H → R in which semantically close symbols s i, s j ∈ S have representations h i, h j ∈ H that are close in ρ is called a distributed representation space BID16.In general, a distributed representation of a symbol s is obtained via some function h s = f (s; θ f), parametrised by weights θ f. Distributed representations of the input symbols are typically found as the layer activations of a Deep Neural Network (DNN). One can imagine running all possible x ∈ X through a DNN and using the activations h x of the n th layer as vectors in H x: DISPLAYFORM0 The distributed representation space of the output symbols H y can be obtained via some function h y = g(y; θ g) that does not depend on the input symbol x, e.g. a row of the softmax projection matrix that corresponds to the output y. In practice, although H obtained in such a manner with a reasonable vector similarity ρ (such as cosine or Euclidean distance) forms a distributed representation space, there is no a priori reason why an arbitrary choice of a similarity function would be appropriate given H and the model's objective. There is no analytic guarantee, for arbitrarily chosen H and ρ, that small changes in semantic similarity of symbols correspond to small changes in similarity ρ between their vector representations in H and vice versa. This motivates Definition 2. A space H equipped with a similarity measure ρ such that log P model (y | x) ∝ ρ (h y, h x) is called an optimal representation space. In words, if a model has an optimal representation space, the conditional log-probability of an output symbol y given an input symbol x is proportional to the similarity ρ(h y, h x) between their corresponding vector representations h y, h x ∈ H.For example, consider the following standard classification model DISPLAYFORM0 where u y is the y th row of the output projection matrix U.If H x = {DNN(x) | x ∈ X } and H y = {u y | y ∈ Y}, then H = H x ∪ H y equipped with ρ(h 1, h 2) = h 1 · h 2 (the dot product) is an optimal representation space. Note that if the exponents of Equation FORMULA1 contained Euclidean distance, then we would find log P model (y | x) ∝ ||u y − DNN(x)|| 2. 
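As a small numerical check of this definition, the snippet below builds the standard softmax classifier described above with random weights and confirms that ranking classes by log-probability is identical to ranking them by the dot product u_y · DNN(x); all shapes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 8, 5
W = rng.normal(size=(n_classes, d))       # output projection U; row u_y per class

def log_prob(h_x, y):
    """log P(y | x) for the standard softmax classification model."""
    logits = W @ h_x
    return logits[y] - np.log(np.sum(np.exp(logits)))

h_x = rng.normal(size=d)                  # encoder output DNN(x)
# log P(y|x) differs from the dot product u_y . h_x only by a term constant in y,
# so ranking classes by dot-product similarity reproduces the model's own ranking.
by_prob = np.argsort([log_prob(h_x, y) for y in range(n_classes)])
by_dot = np.argsort(W @ h_x)
assert (by_prob == by_dot).all()
```

Under the Euclidean variant mentioned above, the same check would hold with −||u_y − DNN(x)||² in place of the dot product.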
The optimal representation space would then be equipped with Euclidean distance as its optimal distance measure ρ. This easily extends to any other distance measures desired to be induced on the optimal representation space. Let us elaborate on why Definition 2 is a reasonable definition of an optimal space. Let x 1, x 2 ∈ X be the input symbols and y 1, y 2 ∈ Y their corresponding outputs. Using DISPLAYFORM1 to denote that a and b are close under ρ, a reasonable model trained on a subset of (X, Y) will ensure that h x1 ρ ∼ h y1 and h x2 ρ ∼ h y2. If x 1 and x 2 are semantically close and assuming semantically close input symbols have similar outputs, we also have that h x1 ρ ∼ h y2 and h x2 ρ ∼ h y1. Therefore it follows that h x1 ρ ∼ h x2 (and h y1 ρ ∼ h y2). Putting it differently, semantic similarity of input and output symbols translates into closeness of their distributed representations under ρ, in a way that is consistent with the model. Note that any model P model (y | x) parametrised by a continuous function can be approximated by a function in the form of Equation. It follows that any model that produces a probability distribution has an optimal representation space. Also note that the optimal space for the inputs does not necessarily have to come from the final layer before the softmax projection but instead can be constructed from any layer, as we now demonstrate. Let n be the index of the final activation before the softmax projection and let k ∈ {1, . . ., n}. We split the network into three parts: DISPLAYFORM2 where G k contains first k layers, F n contains the remaining n − k layers and U is the softmax projection matrix. Let the space for inputs H x be defined as DISPLAYFORM3 and the space for outputs H y defined as DISPLAYFORM4 where DISPLAYFORM5 is again an optimal representation space. We will show a specific example where this holds in Section 3.3. For the remainder of this paper, we focus on unsupervised models for learning distributed representations of sentences, an area of particular interest in NLP. si consists of words from a pre-defined vocabulary V of size |V |.We transform the corpus into a set of pairs DISPLAYFORM0, where s i ∈ S and c i is a context of s i. The context usually (but not necessarily) contains some number of surrounding sentences of s i, e.g. c i = (s i−1, s i+1).We are interested in modelling the probability of a context c given a sentence s. In general DISPLAYFORM1 One popular way to model P (c | s) for sentence-level data is suggested by the encoder-decoder framework. The encoder E produces a fixed-length vector representation h We first consider encoder-decoder architectures with a log-linear BOW decoder for the context. Let h i = E(s i) be a sentence representation of s i produced by some encoder E. The nature of E is not important for our analysis; for concreteness, the reader can consider a model such as FastSent BID19, where E is a BOW (sum) encoder. In the case of the log-linear BOW decoder, words are conditionally independent of the previously occurring sequence, thus Equation becomes DISPLAYFORM0. where u w ∈ R d is the output word embedding for a word w and h i is the encoder output. 
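The derivation that follows shows that this log-linear decoder's loss depends on the sentence representation h_i only through its dot product with the summed output embeddings of the context words. A small numerical sketch of that reduction (shapes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
V, d = 1000, 64
U = rng.normal(scale=0.1, size=(V, d))       # output word embeddings u_w
h = rng.normal(size=d)                        # encoder output h_i
context_ids = [3, 17, 256]                    # words appearing in the context

log_z = np.log(np.sum(np.exp(U @ h)))         # softmax normaliser log Z(h)

# per-word negative log-likelihood, summed over the context words
nll_per_word = -sum(U[w] @ h - log_z for w in context_ids)

# the same loss written through the context vector c_i = sum of u_w
c = U[context_ids].sum(axis=0)
nll_reduced = -(c @ h) + len(context_ids) * log_z

assert np.isclose(nll_per_word, nll_reduced)  # the loss sees h only via c_i . h
```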
(Biases are omitted from the decoder equation above for brevity.) The objective is to maximise the model probability of contexts c_i given sentences s_i across the corpus D, which corresponds to finding the Maximum Likelihood Estimator (MLE) for the trainable parameters θ: DISPLAYFORM1 By switching to the negative log-likelihood and inserting the above expression, we arrive at the following optimisation problem: DISPLAYFORM2 Noticing that DISPLAYFORM3 we see that the objective in Equation forces the sentence representation h_i to be similar under dot product to its context representation c_i, which is simply the sum of the output embeddings of the context words. Simultaneously, output embeddings of words that do not appear in the context of a sentence are forced to be dissimilar to its representation.

Figure 1: Unrolling a RNN decoder at inference time. The initial hidden state for the decoder is typically the encoder output, either the recurrent cell final state for a RNN encoder, or the sum of the input word embeddings for a BOW encoder. At the first time step, a learned <GO> token is presented as the input. In subsequent time steps, a probability-weighted sum over word vectors is used. The decoder is then unrolled for a fixed number of steps. The hidden states are then concatenated to produce the unrolled decoder embedding. In the models evaluated in Section 4, this process is performed for the RNN corresponding to the previous and next sentences. The sentence representation is then taken as the concatenation across both RNNs.

Using dot ∼ to denote close under dot product, we find that if two sentences s_i and s_j have similar DISPLAYFORM4 Putting it differently, sentences that occur in related contexts are assigned representations that are similar under the dot product. Hence we see that the encoder output equipped with the dot product constitutes an optimal representation space as defined in Section 2.

Another common choice for the context decoder is an RNN decoder DISPLAYFORM0 where h_i = E(s_i) is the encoder output. The specific structure of E is again not important for our analysis. (When E is also an RNN, this is similar to SkipThought.) The time-unrolled states of the decoder are converted to probability distributions over the vocabulary, conditional on the sentence s_i and all the previously occurring words. Equation becomes DISPLAYFORM1 Similarly to Equation FORMULA11, the MLE for the model parameters θ can be found as DISPLAYFORM2 Using ⊕ to denote vector concatenation, we note that DISPLAYFORM3 where the sentence representation h^D_i is the concatenation of the decoder hidden states and c_i is the corresponding concatenation of the output embeddings of the context words.

Figure 2: Performance on the STS tasks depending on the number of unrolled hidden states of the decoders, using dot product as the similarity measure. The top row presents results for the RNN encoder and the bottom row for the BOW encoder. Red: Raw encoder output with BOW decoder. Green: Raw encoder output with RNN decoder. Blue: Unrolled RNN decoder output. Independent of the encoder architecture, unrolling even a single state of the decoder always outperforms the raw encoder output with RNN decoder, and almost always outperforms the raw encoder output with BOW decoder for some number of unrolls. (Axes: number of unroll steps vs. Spearman correlation coefficient.)

Hence we can come to the same conclusion as in the log-linear case, except we have order-sensitive representations as opposed to unordered ones.
As before, h D i is forced to be similar to the context c i under dot product, and is made dissimilar to sequences of u w that do not appear in the context. The "transitivity" argument from Section 3.2 remains intact, except the length of decoder hidden state sequences might differ from sentence to sentence. To avoid this problem, we can formally treat them as infinite-dimensional vectors in 2 with only a finite number of initial components occupied by the sequence and the rest set to zero. Alternatively, we can agree on the maximum sequence length, which in practice can be determined from the training corpus. Regardless, the above space of unrolled concatenated decoder states, equipped with dot product, is the optimal representation space for models with recurrent decoders. Consequently, this space could be a much better candidate for unsupervised similarity tasks. We refer to the method of accessing the decoder states at every time step as unrolling the decoder, illustrated in Figure 1. Note that accessing the decoder output does not require re-architecting or retraining the model, yet gives a potential performance boost on unsupervised similarity tasks almost for free. We will demonstrate the effectiveness of this technique empirically in Section 5. We have seen in Section 2 that the optimal representation space for a given model depends on the choice of decoder architecture. To support this theory, we train several encoder-decoder architectures for sentences with the decoder types analysed in Section 3, and evaluate them on downstream tasks using both their optimal space and the standard space of the encoder output as the sentence representations. Models and training. Each model has an encoder for the current sentence, and decoders for the previous and next sentences. As our analysis is independent of encoder type, we train and evaluate models with BOW and RNN encoders, two common choices in the literature for sentence representation learners BID19. The BOW encoder is the sum of word vectors BID19. The RNN encoder and decoders are Gated Recurrent Units (GRUs) BID12. using dot product as the similarity measure. On each task, the highest performing setup for each encoder type is highlighted in bold and the highest performing setup overall is underlined. All reported values indicate Pearson/Spearman correlation coefficients for the task. RNN encoder: Unrolling the RNN decoders using the concatenation of the decoder hidden states (RNN-concat) dramatically improves the performance across all tasks compared to using the raw encoder output (RNN-RNN), validating the theoretical justification presented in Section 3.3. BOW encoder: Unrolling the RNN decoders improves performance overall, however, the improvement is less drastic than that observed for the RNN encoder, which we discuss further in the main text. Using the notation ENC-DEC, we train RNN-RNN, RNN-BOW, BOW-BOW, and BOW-RNN models. For each encoder-decoder combination, we test several methods of extracting sentence representations to be used in the downstream tasks. First, we use the standard choice of the final output of the encoder as the sentence representation. In addition, for models that have RNN decoders, we unroll between 1 and 10 decoder hidden states. Specifically, when we unroll n decoder hidden states, we take the first n hidden states from each of the decoders and concatenate them in order to get the ing sentence representation. We refer to these representations as *-RNN-concat. 
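A PyTorch-style sketch of the unrolling procedure of Figure 1, for a single decoder, is shown below; it is a schematic re-implementation (a plain GRUCell stands in for the layer-normalised GRU, and all sizes are illustrative), not the exact code used in the experiments.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def unrolled_representation(h0, cell, W_out, E_words, go_token, n_steps=3):
    """Unroll an RNN decoder and concatenate its hidden states (Figure 1).

    h0       : encoder output, used as initial decoder hidden state, shape (H,)
    cell     : an nn.GRUCell mapping (input word vector, hidden) -> next hidden
    W_out    : softmax projection matrix, shape (|V|, H)
    E_words  : input word-embedding matrix, shape (|V|, E)
    go_token : learned <GO> input vector, shape (E,)
    """
    h, x = h0.unsqueeze(0), go_token.unsqueeze(0)
    states = []
    for _ in range(n_steps):
        h = cell(x, h)                                    # next decoder hidden state
        states.append(h.squeeze(0))
        probs = torch.softmax(W_out @ h.squeeze(0), dim=0)
        x = (probs @ E_words).unsqueeze(0)                # probability-weighted word vector
    return torch.cat(states)                              # unrolled decoder embedding

# illustrative shapes only
V, E, H = 1000, 64, 128
rep = unrolled_representation(torch.randn(H), nn.GRUCell(E, H),
                              torch.randn(V, H), torch.randn(V, E),
                              torch.randn(E), n_steps=3)
print(rep.shape)   # torch.Size([384]) = 3 * H, per decoder
```

For the full *-RNN-concat representation, this unrolling is performed for both the previous-sentence and next-sentence decoders and the results are concatenated.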
All models are trained on the Toronto Books Corpus, a dataset of 70 million ordered sentences from over 7,000 books. The sentences are pre-processed such that tokens are lower case and splittable by space. Evaluation tasks. We use the SentEval tool BID13 to benchmark sentence embeddings on both supervised and unsupervised transfer tasks. The supervised tasks in SentEval include paraphrase identification (MSRP) BID14, movie review sentiment (MR) BID36, product review sentiment (CR), BID20 ), subjectivity (SUBJ) BID35, opinion polarity (MPQA) BID46, and question type (TREC) BID45 BID40. In addition, there are two supervised tasks on the SICK dataset, entailment and relatedness (denoted SICK-E and SICK-R) BID29. For the supervised tasks, SentEval trains a logistic regression model with 10-fold cross-validation using the model's embeddings as features. The unsupervised Semantic Textual Similarity (STS) tasks are STS12-16 BID11 BID2 BID1 BID5, which are scored in the same way as SICK-R but without training a new supervised model; in other words, the embeddings are used to directly compute similarity. We use dot product to compute similarity as indicated by our analysis; and discussion using cosine similarity, which is canonical in the literature, are presented in Appendix B. For more details on all tasks and the evaluation strategy, see BID13.Implementation and hyperparameters. Our goal is to study how different decoder types affect the performance of sentence embeddings on various tasks. To this end, we use identical hyperparameters and architecture for each model (except encoder and decoder types), allowing for a fair headto-head comparison. Specifically, for RNN encoders and decoders we use a single layer GRU with layer normalisation BID8. All the weights (including word embeddings) are initialised uniformly over [−0.1, 0.1] and trained with Adam without weight decay or dropout BID24. Sentence length is clipped or zero-padded to 30 tokens and end-of-sentence tokens are used throughout training and evaluation. Following, we use a vocabulary size of 20k with vocabulary expansion, 620-dimensional word embeddings, and 2400 hidden units in all RNNs. Performance of the unrolled models on the STS tasks is presented in Figure 2. We note that unrolling even a single state of the decoder always improves the performance over the raw encoder output with the RNN decoder, and nearly always does so for the BOW decoder for some number of unrolled hidden states. We observe that the performance tends to peak around 2-3 hidden states and fall off afterwards. In principle, one might expect the peak to be around the average sentence length of the corpus. A possible explanation of this behaviour is the "softmax drifting effect". As there is no context available at inference time, we generate the word embedding for the next time step using the softmax output from the previous time step (see Figure 1). Given that for any sentence, there is no single correct context, the probability distribution over the next words in that context will be multi-modal. This will flatten the softmax and produce inputs for the decoder that diverge from the inputs it expects (i.e. word vectors for the vocabulary). Further work is needed to understand this and other possible causes in detail. Performance across unsupervised similarity tasks is presented in Table 1 and performance across supervised transfer tasks is presented in TAB2. For the unrolled architectures, in these tables we report on the one that performs best on the STS tasks. 
When the encoder is an RNN, the supervised transfer results validate our claims in Section 3.3. The results are less conclusive when the encoder is a BOW. We believe this is caused by the simplicity of the BOW encoder forcing its outputs to obey the sentence-level distributional hypothesis irrespective of decoder type, resulting in multiple candidates for the optimal representation space, but this should be investigated with a detailed analysis in future work. In addition, see Appendix A for a comparison with the original SkipThought results from the literature, and Appendix B for results using cosine similarity rather than dot product as the similarity measure in STS tasks, as is the canonical choice. When we look at the performance on supervised transfer in TAB2, combined with the similarity results in Table 1, we see that the notion that models cannot be good at both supervised transfer and unsupervised similarity tasks needs refining; for example, RNN-RNN achieves strong performance on supervised transfer, while RNN-RNN-concat achieves strong performance on unsupervised similarity. In general, our results indicate that a single model may be able to perform well on different downstream tasks, provided that the representation spaces chosen for each task are allowed to differ. Curiously, the unusual combination of a BOW encoder and concatenation of the RNN decoders leads to the best performance on most benchmarks, even slightly exceeding that of some supervised models on some tasks BID13. This architecture may be worth investigating.

In this work, we introduced the concept of an optimal representation space, where semantic similarity directly corresponds to distance in that space, in order to shed light on the performance gap between simple and complex architectures on downstream tasks. In particular, we studied the space of initial hidden states to BOW and RNN decoders (typically the outputs of some encoder) and how that space relates to the training objective of the model. For BOW decoders, the optimal representation space is precisely the initial hidden state of the decoder equipped with dot product, whereas for RNN decoders it is not. Noting that it is precisely these spaces that have been used for BOW and RNN decoders led us to a simple explanation for the observed performance gap between these architectures, namely that the former has been evaluated in its optimal representation space, whereas the latter has not. Furthermore, we showed that any neural network that outputs a probability distribution has an optimal representation space. Since an RNN does produce a probability distribution, we analysed its objective function, which motivated a procedure of unrolling the decoder. This simple method allowed us to extract representations that are provably optimal under dot product, without needing to retrain the model. We then validated our claims by comparing the empirical performance of different architectures across transfer tasks. In general, we observed that unrolling even a single state of the decoder always outperforms the raw encoder output with an RNN decoder, and almost always outperforms the raw encoder output with a BOW decoder for some number of unrolls. This indicates that different vector embeddings can be used for different downstream tasks depending on what type of representation space is most suitable, potentially yielding high performance on a variety of tasks from a single trained model.
Although our analysis of decoder architectures was restricted to BOW and RNN, others such as convolutional BID49 and graph BID25 decoders are more appropriate for many tasks. Similarly, although we focus on Euclidean vector spaces, hyperbolic vector spaces BID34, complex-valued vector spaces BID44 and spinor spaces BID23 all have beneficial modelling properties. In each case, although an optimal representation space should exist, it is not clear whether the intuitive choice of space and similarity measure is the optimal one. However, there should at least exist a mapping from the intuitive choice of space to the optimal space using a transformation provided by the network itself, as we showed with the RNN decoder. Evaluating in this space should further improve the performance of these models. We leave this for future work. Ultimately, a good representation is one that makes a subsequent learning task easier. For unsupervised similarity tasks, this essentially reduces to how well the model separates objects in the chosen representation space, and how appropriately the similarity measure compares objects in that space. Our findings lead us to the following practical advice: i) Use a simple model architecture where the optimal representation space is clear by construction, or ii) use an arbitrarily complex model architecture and analyse the objective function to reveal, for a chosen vector representation, an appropriate similarity metric. We hope that future work will utilise a careful understanding of what similarity means and how it is linked to the objective function, and that our analysis can be applied to help boost the performance of other complex models.

Figure 3: Performance on the STS tasks depending on the number of unrolled hidden states of the decoders, using cosine similarity as the similarity measure (x-axis: number of unroll steps, 1-10; y-axis: Spearman correlation coefficient). The top row presents results for the RNN encoder and the bottom row for the BOW encoder. Red: raw encoder output with BOW decoder. Green: raw encoder output with RNN decoder. Blue: unrolled RNN decoder output. For both RNN and BOW encoders, unrolling the decoder strictly outperforms *-RNN for almost every number of unroll steps, and performs nearly as well as or better than *-BOW.

A COMPARISON WITH SKIPTHOUGHT

Table 3: Performance of the SkipThought model, with and without layer normalisation BID8, compared against the RNN-RNN model used in our experimental setup. On each task, the highest performing model is highlighted in bold. For SICK-R, we report the Pearson correlation, and for STS14 we report the Pearson/Spearman correlation with human-provided scores. For all other tasks, reported values indicate test accuracy. † indicates results taken from BID13. ‡ indicates our results from running SentEval on the model downloaded from BID8's publicly available codebase (https://github.com/ryankiros/layer-norm). We attribute the discrepancies in performance to differences in experimental setup or implementation. However, we expect our unrolling procedure to also boost SkipThought's performance on unsupervised similarity tasks, as we show for RNN-RNN in our fair single-codebase comparisons in the main text.

As discussed in Section 3, the objective function maximises the dot product between the BOW decoder/unrolled RNN-decoder representation and the context.
However, as other researchers in the field and the STS tasks specifically use cosine similarity by default, we present the results using cosine similarity in TAB4 and the results for different numbers of unrolled hidden decoder states in Figure 3. Although the results in TAB4 are consistent with the dot product results in Table 1, the overall performance across STS tasks is noticeably lower when dot product is used instead of cosine similarity to determine semantic similarity. Switching from cosine similarity to dot product means moving from considering only the angle between two vectors to also considering their length. Empirical studies have indicated that the length of a word vector corresponds to how sure the model that produces it is of the word's context. This is related to how often the model has seen the word, and how many different contexts it appears in (for example, the word vectors for "January" and "February" have similar norms, however, the word vector for "May" is noticeably smaller) BID41. A corollary is that longer sentences on average have shorter norms, since they contain more words which, in turn, have appeared in more contexts BID0. During training, the corpus can therefore induce differences in norms in a way that strongly penalises sentences potentially containing multiple contexts, and consequently will disfavour these sentences as similar to other sentences under the dot product. This induces a noise that potentially renders the dot product a less useful metric for the STS tasks than cosine similarity, which is unaffected by this issue.

RNN encoder: Using the raw encoder output (RNN-RNN) achieves the lowest performance across all tasks. Unrolling the RNN decoders dramatically improves the performance across all tasks compared to using the raw encoder RNN output, validating the theoretical justification presented in Section 3.3. BOW encoder: We do not observe the same uplift in performance from unrolling the RNN decoder compared to the raw encoder output. This is consistent with our findings when using dot product (see Table 1).

(Table caption fragment: results using dot product as the similarity measure; on each task, the highest performing setup for each encoder type is highlighted in bold and the highest performing setup overall is underlined.)

A practical downside of the unrolling procedure described in Section 3.3 is that concatenating hidden states of the decoder leads to very high dimensional vectors, which might be undesirable due to memory or other practical constraints. An alternative is to instead average the hidden states, which also corresponds to a representation space in which the training objective optimises the dot product as a measure of similarity between a sentence and its context. We refer to this model choice as *-RNN-mean. Results on similarity and transfer tasks for BOW-RNN-mean and RNN-RNN-mean are presented in TAB7, with results for the other models from Section 5 included for completeness. While the strong performance of RNN-RNN-mean relative to RNN-RNN is consistent with our theory, exploring why it is able to outperform RNN-concat experimentally on STS tasks is left to future work.
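The effect described above is easy to see numerically: under the dot product, a sentence vector with a small norm is penalised relative to one with a large norm even when it points in exactly the same direction as the query, whereas cosine similarity ignores the norms. The vectors in the toy example below are made up purely for illustration.

```python
import numpy as np

def dot_sim(a, b):
    return float(np.dot(a, b))

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query      = np.array([1.0, 1.0, 0.0])
short_norm = np.array([0.4, 0.4, 0.0])   # same direction, small norm (e.g. a long, multi-context sentence)
long_norm  = np.array([1.5, 0.9, 0.8])   # less aligned, but large norm

print("dot   :", dot_sim(query, short_norm), dot_sim(query, long_norm))               # 0.8 vs 2.4 -> norm dominates
print("cosine:", round(cos_sim(query, short_norm), 3), round(cos_sim(query, long_norm), 3))  # 1.0 vs 0.882 -> direction dominates
```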
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byd-EfWCb
By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both supervised transfer and unsupervised similarity tasks.
Cloze tests are widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose the first large-scale human-designed cloze test dataset, CLOTH, in which the questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show that humans outperform dedicated, carefully designed baseline models by a significant margin, even when the model is trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability to comprehend a long-term context as the key bottleneck. In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data. Being a classic language exercise, the cloze test BID26 is an accurate assessment of language proficiency BID7 BID11 BID27 and has been widely employed in language examinations. Under the standard setting, a cloze test requires examinees to fill in the missing word (or sentence) that best fits the surrounding context. To facilitate natural language understanding, automatically generated cloze datasets were introduced to measure the ability of machines in reading comprehension BID8 BID9 BID17. In these datasets, each cloze question typically consists of a context paragraph and a question sentence. By randomly replacing a particular word in the question sentence with a blank symbol, a single test case is created. For instance, the CNN/Daily Mail datasets BID8 take news articles as the context and the summary bullet points as the question sentence. Only named entities are considered when creating the blanks. Similarly, in the Children's Books Test (CBT) BID9, the cloze question is obtained by removing a word in the last sentence of every consecutive 21 sentences, with the first 20 sentences being the context. Different from the CNN/Daily Mail datasets, CBT also provides each question with a candidate answer set, consisting of randomly sampled words with the same part-of-speech tag from the context as that of the ground truth. Thanks to the automatic generation process, these datasets can be very large in size, leading to significant research progress. However, compared to how humans would create cloze questions, the automatic generation process bears some inevitable issues. Firstly, the blanks are chosen uniformly without considering which aspect of the language phenomenon the question will test. Hence, quite a portion of automatically generated questions can be purposeless or even trivial to answer. Another issue involves the ambiguity of the answer. Given a context and a blanked sentence, there can be multiple words that fit almost equally well into the blank. A possible solution is to include a candidate option set, as done by CBT, to get rid of the ambiguity. However, automatically generating the candidate option set can be problematic since it cannot guarantee the ambiguity is removed. More importantly, automatically generated candidates can be totally irrelevant or simply grammatically unsuitable for the blank, resulting, again, in trivial questions.
Probably due to these unsatisfactory issues, it has been shown that neural models can reach performance comparable to humans within a very short time BID3 BID6 BID23. While there has been work trying to incorporate human design into cloze question generation BID30, the MSR Sentence Completion Challenge created by this effort is quite small in size, limiting the possibility of developing powerful neural models on it. Motivated by the aforementioned drawbacks, we propose CLOTH, a large-scale cloze test dataset collected from English exams. Questions in the dataset are designed by middle-school and high-school teachers to prepare Chinese students for entrance exams. To design a cloze test, teachers firstly determine the words that can test students' knowledge of vocabulary, reasoning or grammar; they then replace those words with blanks and provide three candidate options for each blank. If a question does not specifically test grammar usage, all of the candidate options would complete the sentence with correct grammar, leading to highly confusing questions. As a result, human-designed questions are usually harder and are a better assessment of language proficiency. Note that, different from the reading comprehension task, a general cloze test does not focus on testing reasoning abilities but evaluates several aspects of language proficiency including vocabulary, reasoning and grammar. To verify whether human-designed cloze questions are difficult for current models, we train dedicated models as well as the state-of-the-art language model and evaluate their performance on this dataset. We find that the state-of-the-art model lags behind human performance even if the model is trained on a large external corpus. We analyze where the model fails compared to humans. After conducting an error analysis, we assume that the performance gap results from the model's inability to use long-term context. To verify this assumption, we evaluate humans' performance when they are only allowed to see one sentence as the context. Our assumption is confirmed by the matched performances of the model and humans when given only one sentence. In addition, we demonstrate that human-designed data is more informative and more difficult than automatically generated data. Specifically, when the same amount of training data is given, human-designed training data leads to better performance. Additionally, it is much easier for the same model to perform well on automatically generated data. In this section, we introduce the CLOTH dataset that is collected from English examinations, and study the abilities assessed by this dataset. We collected the raw data from three free websites 2 in China that gather exams designed by English teachers. These exams are used to prepare students for college/high school entrance exams. Before cleaning, there are 20,605 passages and 332,755 questions. We perform the following steps to ensure the validity of the data: 1. We remove questions with an inconsistent format, such as questions with more than four options; 2. We filter out all questions whose validity relies on external information such as pictures or tables; 3. Further, we delete duplicated passages; 4. On one of the websites, the answers are stored as images, so we use two OCR software packages, tesseract 3 and ABBYY FineReader 4, to extract the answers from the images and discard a question when the results from the two packages differ. After the cleaning process, we obtain a dataset of 7,131 passages and 99,433 questions.
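For illustration, the four cleaning steps above can be expressed as a simple filter over passages and their questions. The record fields used below (text, options, needs_figure, answer_ocr_a/b) are invented for the example; the paper does not describe the pipeline or data format at this level of detail.

```python
def clean(passages):
    seen_texts = set()
    kept = []
    for p in passages:
        if p["text"] in seen_texts:                      # 3. drop duplicated passages
            continue
        seen_texts.add(p["text"])
        good_questions = [
            q for q in p["questions"]
            if len(q["options"]) == 4                    # 1. consistent four-option format
            and not q["needs_figure"]                    # 2. no reliance on external pictures/tables
            and q["answer_ocr_a"] == q["answer_ocr_b"]   # 4. the two OCR outputs agree
        ]
        if good_questions:
            kept.append({"text": p["text"], "questions": good_questions})
    return kept

passages = [
    {"text": "p1", "questions": [
        {"options": ["A", "B", "C", "D"], "needs_figure": False,
         "answer_ocr_a": "B", "answer_ocr_b": "B"},
        {"options": ["A", "B", "C"], "needs_figure": False,
         "answer_ocr_a": "A", "answer_ocr_b": "A"},
    ]},
    {"text": "p1", "questions": []},   # duplicate passage, dropped
]
print(sum(len(p["questions"]) for p in clean(passages)))  # 1
```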
Since high school questions are more difficult than middle school questions, we divided the dataset into CLOTH-M and CLOTH-H, which stand for the middle school part and the high school part. We split 11% of the data for the test set and 11% for the dev set. The detailed statistics of the whole dataset and the two subsets are presented in TAB1. In order to evaluate students' mastery of a language, teachers usually design tests so that questions cover different aspects of a language. Specifically, they first identify words in the passage that can test different aspects of language proficiency; a sample passage is shown in Table 2.

Table 2: A sample passage from our dataset. The correct answers are highlighted.
Passage: Nancy had just got a job as a secretary in a company. Monday was the first day she went to work, so she was very 1 and arrived early. She 2 the door open and found nobody there. "I am the 3 to arrive." She thought and came to her desk. She was surprised to find a bunch of 4 on it. They were fresh. She 5 them and they were sweet. She looked around for a 6 to put them in. "Somebody has sent me flowers the very first day!" she thought 7. "But who could it be?" she began to 8. The day passed quickly and Nancy did everything with 9 interest. For the following days of the 10, the first thing Nancy did was to change water for the flowers and then set about her work. Then came another Monday. 11 she came near her desk she was overjoyed to see a(n) 12 bunch of flowers there. She quickly put them in the vase, 13 the old ones. The same thing happened again the next Monday. Nancy began to think of ways to find out the 14. On Tuesday afternoon, she was sent to hand in a plan to the 15. She waited for his directives at his secretary's 16. She happened to see on the desk a half-opened notebook, which 17: "In order to keep the secretaries in high spirits, the company has decided that every Monday morning a bunch of fresh flowers should be put on each secretary's desk." Later, she was told that their general manager was a business management psychologist.
Questions: (the candidate options for each blank are not reproduced here.)

To understand the abilities assessed by this dataset, we divide questions into several types and label the proportion of each type of question. We find that the questions can be divided into the following types:
• Grammar: The question is about grammar usage, involving tense, preposition usage, active/passive voices, subjunctive mood and so on.
• Short-term-reasoning: The question is about content words and can be answered based on the information within the same sentence.
• Matching/paraphrasing: The question is answered by copying/paraphrasing a word.
• Long-term-reasoning: The answer must be inferred from synthesizing information distributed across multiple sentences.
We sample 100 passages in the high school category and the middle school category respectively. Each passage in the high school category has 20 questions and each passage in the middle school category has 10 questions. The types of these 3,000 questions are labeled on Amazon Mechanical Turk. We pay $1 and $0.5 per high school passage and middle school passage, respectively. The proportion of different question types is shown in TAB4. We find that the majority of questions are short-term-reasoning questions, in which the examinee needs to utilize grammar knowledge, vocabulary knowledge and simple reasoning to answer the questions. Note that questions in middle school are easier since they have more grammar questions.
Finally, only approximately 22.4% of the data needs long-term information, among which long-term-reasoning questions constitute a large proportion. In this section, we study whether a human-designed cloze test is a challenging problem for state-of-the-art models. We find that a language model trained on a large external corpus could not solve the cloze test. After conducting an error analysis, we hypothesize that the model is not able to deal with long-term dependencies. We verify the hypothesis by evaluating humans' performance when they only see one sentence as the context.

LSTM. To test the performance of RNN-based supervised models, we train a bidirectional LSTM BID10 to predict the missing word given the context, using only labeled data. The implementation details are in Appendix A.1.

Attention Readers. To enable the model to gather information from a longer context, we augment the supervised LSTM model with the attention mechanism BID1, so that the representation at the blank is used as a query to find the relevant context in the document, and a blank-specific representation of the document is used to score each candidate answer. Specifically, we adapt the Stanford Attention Reader BID3 and the position-aware attention model BID29 to the cloze test problem. With the position-aware attention model, the attention scores are based on both the context match and the distance between two words. Both attention models are trained only on the human-designed blanks, just as the LSTM model.

Language model. Language modeling and the cloze test are similar since, in both tasks, a word is predicted based on the context. In a cloze test, the context on both sides may determine the correct answer. Suppose x_i is the missing word and x_1, · · ·, x_{i−1}, x_{i+1}, · · ·, x_n are the context. Although a language model is trained to predict the next word using only the left context, to utilize the surrounding context we can choose the x_i that maximizes the joint probability p(x_1, · · ·, x_n), which essentially maximizes the conditional likelihood p(x_i | x_1, · · ·, x_{i−1}, x_{i+1}, · · ·, x_n). Therefore, a language model can be naturally adapted to the cloze test. In essence, the language model treats each word as a possible blank and learns to predict it. As a result, it receives more supervision than the supervised model trained on human-labeled questions. Additionally, it can be trained on a very large unlabeled corpus. Interested in whether the state-of-the-art language model can solve the cloze test, we first train a neural language model on the training set of our corpus; then we test the language model trained on the One Billion Word Benchmark BID2 (referred to as 1-billion-language-model), which achieves a perplexity of 30.0 BID13. To make the evaluation time tractable, we limit the context length to one sentence or three sentences.

Human performance. We measure the performance of Amazon Turkers on 3,000 sampled questions when the whole passage is given. The comparison is shown in Table 4. Both attention models achieve a similar accuracy to the LSTM. We hypothesize that the attention models' unsatisfactory performance is due to the difficulty of learning to comprehend a longer context when the majority of the training data only requires understanding short-term information. The language model trained on our dataset achieves an accuracy of 0.548 while the supervised model's accuracy is 0.484, indicating that more training data results in better generalization.
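The adaptation of a language model to the cloze test described above amounts to scoring each candidate by the joint probability of the sentence with that candidate filled in, and picking the argmax. The sketch below shows this scoring loop; the "language model" is a made-up unigram log-probability table purely so the example runs, whereas the paper uses LSTM language models.

```python
# Toy candidate-scoring procedure for a cloze blank.
UNIGRAM_LOGP = {"she": -2.0, "smelled": -6.0, "the": -1.5, "flowers": -5.0,
                "rooms": -5.5, "glass": -5.2, "bottles": -5.8, "<unk>": -9.0}

def sentence_logp(tokens):
    # stand-in for log p(x_1, ..., x_n) under a trained language model
    return sum(UNIGRAM_LOGP.get(t, UNIGRAM_LOGP["<unk>"]) for t in tokens)

def answer_cloze(tokens_with_blank, blank_index, candidates):
    scores = {}
    for c in candidates:
        filled = list(tokens_with_blank)
        filled[blank_index] = c              # x_i <- candidate
        scores[c] = sentence_logp(filled)    # proportional to the joint log-probability
    return max(scores, key=scores.get), scores

best, scores = answer_cloze(["she", "smelled", "the", "____"], 3,
                            ["flowers", "rooms", "glass", "bottles"])
print(best, scores)   # "flowers" wins under this toy table
```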
When only one sentence is given as the context, the accuracy of 1-billion-language-model is 0.695, which shows that the amount of data is an essential factor affecting the model's performance. It also indicates that the language model can learn sophisticated language regularities when given enough data. The same conclusion can also be drawn from the state-of-the-art results on six language tasks that resulted from applying language model representations as word vectors BID0. However, if we increase the context length to three sentences, the accuracy of 1-billion-language-model only improves to 0.707. In contrast, humans outperform 1-billion-language-model by a significant margin, which demonstrates that the deliberately designed questions in CLOTH are not completely solved even by state-of-the-art models.

Table 4: Model and human performance on CLOTH. The attention models do not lead to a performance improvement compared to the vanilla LSTM. The language model outperforms the LSTM since it receives more supervision by learning to predict each word. Training on a large external corpus further significantly enhances the accuracy.

In this section, we would like to understand why the state-of-the-art model lags behind human performance. We find that most of the errors made by the large language model involve long-term reasoning. Additionally, in a lot of cases, the dependency is within the context of three sentences. Several errors made by the large language model are shown in Table 5. In the first example, the model does not know that Nancy's finding nobody in the company means that she was the first one to arrive at the company. In the second and third examples, the model fails probably because of the coreference from "they" to "flowers". The dependency in the last case is longer: it depends on the fact that "Nancy" was alone in the company. Based on the case study, we hypothesize that the language model is not able to take long-term information into account, although it achieves a surprisingly good overall performance. Moreover, the 1-billion-language-model is trained on the sentence level, which might also result in the model paying more attention to short-term information. However, we do not have enough computational resources to train a large model on the 1 Billion Word Benchmark to investigate the differences between training on the sentence level and on the paragraph level.

Table 5: Error analysis of 1-billion-language-model with three sentences as the context. The questions are sampled from the sample passage shown in Table 2. The correct answer is in bold text; the incorrectly selected options are in italics. Example questions:
She smelled them and they were sweet. She looked around for a ____ to put them in. (A. vase B. room C. glass D. bottle)
"Somebody has sent me flowers the very first day!" "But who could it be?" she began to ____. The day passed quickly and Nancy did everything with great interest. (A. seek B. wonder C. work D. ask)

One available comparison is to test the model's performance on different types of questions. We find that the model's accuracy is 0.591 on long-term-reasoning questions of CLOTH-H while achieving 0.693 on short-term-reasoning, which partially confirms that long-term reasoning is harder. However, we cannot completely rely on the performance on specific question types, partly due to the small sample size. A more fundamental reason is that the question type labels are subjective and their reliability depends on whether turkers are careful enough.
For example, in the error analysis shown in Table 5, a careless turker could label the second example as short-term-reasoning without noticing that the meaning of "they" relies on a long context span. To objectively verify whether the language model's strengths lie in dealing with short-term information, we obtain the ceiling performance achievable using only short-term information. Showing only one sentence as the context, we ask the turkers to label all options that they deem to be possibly correct given the insufficient information. We also ask them to select a single option based on their best guess. By limiting the context span manually, the ceiling performance with access only to the short context is estimated accurately. The performances of turkers and 1-billion-language-model are shown in TAB8. The performance of 1-billion-language-model using one sentence as the context can almost match the ceiling performance of only using short-term information. Hence we conclude that the language model can almost perfectly solve all short-term cloze questions. However, the performance of the language model is not improved significantly when the needed long-term context is given, indicating that the performance gap is due to its inability to perform long-term reasoning. Assuming the majority of question type labels is reliable, we verify the strengths and weaknesses of models and humans by studying their performance on different question categories. The comparison is shown in Figure 1. The human study on the short-term ceiling performance also reveals that the options are carefully picked: when a turker thinks that a question has multiple answers, 3.41 out of 4 options on average are deemed to be possibly correct, which means that teachers design the options so that three or four options all make sense if one only looks at the local context.

4 COMPARING HUMAN-DESIGNED DATA AND AUTOMATICALLY GENERATED DATA

In this section, we demonstrate that human-designed data is a better test bed than automatically generated data for the general cloze test since it results in a larger gap between the model's performance and human performance. However, the distributional mismatch between the two types of data makes the human-designed data an unsuitable training source for solving automatically generated questions. In addition, we improve the model's performance by finding generated data that resembles human-designed data. At first glance, a cloze test can be created by randomly deleting words and randomly sampling candidate options. In fact, to generate large-scale data, similar generation processes have been introduced and widely used in machine comprehension BID8 BID9 BID17. However, research on cloze test design BID22 shows that tests created by deliberately deleting words are more reliable than tests created by randomly or periodically deleting words. To design an accurate language proficiency assessment, teachers usually select words in order to examine students' proficiency in grammar, vocabulary and reasoning. Moreover, in order to make the question non-trivial, the three incorrect options provided by teachers are usually grammatically correct and relevant to the context. For instance, in the fourth problem of the sample passage shown in Table 2, "grapes", "flowers" and "bananas" all fit the description of being fresh.
We know "flowers" is the correct answer after seeing the sentence "Somebody has sent me flowers the very first day!".Naturally, we hypothesize that the distribution of human-generated data is different from automatically generated data. To verify this assumption, we compare the LSTM model's performance when given different proportion of the two types of data. Specifically, to train a model with α percent of automatically generated data, we randomly replace a percent blanks with blanks at random positions, while keeping the remaining 100 − α percent questions the same. The candidate options for the generated blanks are random words sampled from the unigram distribution. We test the trained model on human-designed data and automatically generated data respectively. Table 7: We train a model on α percent of automatically generated data and 100 − α percent of human-designed data and test it on human-designed data and automatically generated data respectively. The performance is shown in Table 7. We have the following observations: human-designed data leads to a larger gap between the model's performance and the human performance, when given the same model. The model's performance and human's performance on the human-designed data are 0.484 and 0.860 respectively, leading to a gap of 0.376. In comparison, the performance gap on the automatically generated data is at most 0.185 since the model's performance reaches 0.815 when trained on generated data. It shows that the distributions of human-designed data and automatically generated data are quite different. the distributional mismatch between two types of data makes it difficult to transfer a model trained on human-designed data to automatically generated data. Specifically, the model's performance on automatically generated data monotonously increases when given a higher ratio of automatically generated training data. To conclude, human-designed data is a good test base because of the larger gap between performances of the model and the human, although the distributional mismatch problem makes it difficult to be the best training source for out-of-domain cloze test such as automatically generated cloze test.4.2 COMBINING HUMAN-DESIGNED DATA WITH AUTOMATICALLY GENERATED DATA In Section 3.1, we show that language model is able to take advantage of more supervisions since it predicts each word based on the context. In essence, each word can provide an automatically generated question. At the same time, we also show that human-designed data and the automatically generated data are quite different in Section 4.1. In this section, we propose to combine humandesigned data with automatically generated data to achieve better performance. Note that discriminative models can also treat all words in a passage as automatically generated questions, just like a language model (Please see the Appendix A.3 for details). We study two methods of leveraging automatically generated data and human-designed data:Equally averaging Let J h be the average loss for all human-designed questions and J u be the average loss for all automatically generated questions in the passage. A straightforward method is to optimize J h + λJ u so that the model learns to predict words deleted by human and all other words in the passage. We set λ to 1 in our experiments. This model treats each automatically generated questions as equally important. 
Representativeness-based weighted averaging. A possible avenue towards having large-scale in-domain data is to automatically pick out questions which are representative of the in-domain data from among a large number of out-of-domain samples. Hence, we mimic the design behavior of language teachers by training a network to predict the representativeness of each automatically generated question. Note that the candidate option set for an automatically generated question is the whole vocabulary; we leave candidate set prediction for future work. The performance of the representativeness prediction network and an example are shown in Appendix A.4. Let J_i denote the negative log-likelihood loss for the i-th question and let l_i be the predicted representativeness of the i-th question (the definition of l_i is in Appendix A.2). We define the representativeness-weighted loss function as a softmax-weighted average of the per-question losses, where H is the set of all human-generated questions and α is the temperature of the softmax function. When the temperature is +∞, the model degenerates into the equally averaging objective without using the representativeness; when the temperature is 0, only the most representative question is used. We set α to 2 based on the performance on the dev set. We present the results in Table 8. When all other words are treated as equally important, the accuracy is 0.543, similar to the performance of the language model. Representativeness-based weighted averaging leads to an accuracy of 0.565. When combined with human-designed data, the performance can be improved to 0.583.

Table 8: Overall results on CLOTH. "Representativeness" means weighted-averaging the loss of each question using the predicted representativeness; "equal-average" means equally averaging the losses of all questions.
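A compact sketch of the two training objectives described above follows. The equal-averaging objective J_h + λJ_u is taken directly from the text; the representativeness-weighted variant is only one plausible instantiation (a softmax over the predicted scores l_i with temperature α), chosen because it reproduces the two limiting behaviours the paper states: uniform weights as α → ∞ and a single most-representative question as α → 0. The numerical values are made up.

```python
import numpy as np

def combined_loss_equal(J_h, J_u_list, lam=1.0):
    """Equally averaging: J_h + lambda * mean(J_u)."""
    return J_h + lam * float(np.mean(J_u_list))

def combined_loss_weighted(J_h, J_u_list, l_list, alpha=2.0):
    """One plausible representativeness weighting: softmax(l / alpha) over the
    automatically generated questions. As alpha -> infinity the weights become
    uniform (equal averaging); as alpha -> 0 only the most representative
    question keeps weight."""
    l = np.asarray(l_list, dtype=float)
    w = np.exp(l / alpha - np.max(l / alpha))
    w = w / w.sum()
    return J_h + float(np.dot(w, np.asarray(J_u_list, dtype=float)))

J_h = 1.2                       # average loss on the human-designed blanks
J_u = [0.8, 2.5, 1.1, 3.0]      # per-question losses on automatically generated blanks
l   = [0.1, 2.0, 0.3, -1.0]     # predicted representativeness scores (made up)
print(combined_loss_equal(J_h, J_u))
print(combined_loss_weighted(J_h, J_u, l, alpha=2.0))
print(combined_loss_weighted(J_h, J_u, l, alpha=1e6))  # approaches the equal-averaging value
```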
Large-scale automatically generated cloze tests BID8 BID9 BID17 led to significant research advances. However, the generated questions do not consider the language phenomenon to be tested and are relatively easy to solve. Recently proposed reading comprehension datasets are all labeled by humans to ensure their quality BID20 BID12 BID28 BID16. Aiming to evaluate machines under the same conditions that humans are evaluated under, there is a growing interest in obtaining data from examinations. NTCIR QA Lab BID24 contains a set of real-world university entrance exam questions. The Entrance Exams task at the CLEF QA Track BID18 BID21 evaluates machines' reading comprehension ability. The AI2 Elementary School Science Questions dataset 7 provides 5,060 scientific questions used in elementary and middle schools. BID15 proposes the first large-scale machine comprehension dataset obtained from exams; they show that questions designed by teachers have a significantly larger proportion of reasoning questions. Our dataset focuses on evaluating language proficiency, while the focus of reading comprehension is reasoning. In Section 4.2, we employ a simple supervised approach that predicts how likely a word is to be selected by teachers as a cloze question. It has been shown that features such as morphological information and readability are beneficial in cloze test prediction BID25 BID5. We leave investigating more advanced approaches to automatically designing cloze tests to future work.

In this paper, we propose a large-scale cloze test dataset CLOTH that is designed by teachers. With the missing blanks and candidate options carefully created by teachers to test different aspects of language phenomena, CLOTH requires a deep language understanding and better captures the complexity of human language. We find that humans outperform state-of-the-art models by a significant margin, even when the model is trained on a large corpus. After a detailed analysis, we find that the performance gap is due to the model's inability to understand a long context. We also show that, compared to automatically generated questions, human-designed questions are more difficult and lead to a larger margin between human performance and the model's performance. A predicted sample is shown in FIG2. Clearly, words that are too obvious have low scores, such as punctuation marks and simple words like "a" and "the". In contrast, content words whose semantics are directly related to the context have higher scores; e.g., "same", "similar" and "difference" have high scores when the difference between two objects is discussed, and "secrets" has a high score since it is related to the subsequent sentence "does not want to share with others". Our prediction model achieves an F1 score of 36.5 on the test set, which is understandable since there are many plausible questions within a passage.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJJzTyWCZ
A cloze test dataset designed by teachers to assess language proficiency
Recent work suggests that goal-driven training of neural networks can be used to model neural activity in the brain. While response properties of neurons in artificial neural networks bear similarities to those in the brain, the network architectures are often constrained to be different. Here we ask if a neural network can recover both neural representations and, if the architecture is unconstrained and optimized, also the anatomical properties of neural circuits. We demonstrate this in a system where the connectivity and the functional organization have been characterized, namely, the head direction circuit of the rodent and fruit fly. We trained recurrent neural networks (RNNs) to estimate head direction through integration of angular velocity. We found that the two distinct classes of neurons observed in the head direction system, the Ring neurons and the Shifter neurons, emerged naturally in artificial neural networks as a result of training. Furthermore, connectivity analysis and in-silico neurophysiology revealed structural and mechanistic similarities between artificial networks and the head direction system. Overall, our results show that optimization of RNNs in a goal-driven task can recapitulate the structure and function of biological circuits, suggesting that artificial neural networks can be used to study the brain at the level of both neural activity and anatomical organization. Artificial neural networks have been increasingly used to study biological neural circuits. In particular, recent work in vision demonstrated that convolutional neural networks (CNNs) trained to perform visual object classification provide state-of-the-art models that match neural responses along various stages of visual processing (;; Güçlü & van;). Recurrent neural networks (RNNs) trained on cognitive tasks have also been used to account for neural response characteristics in various domains (; ; ; ; ; ; ; ;). While these results provide important insights on how information is processed in neural circuits, it is unclear whether artificial neural networks have converged upon similar architectures as the brain to perform either visual or cognitive tasks. Answering this question requires understanding the functional, structural, and mechanistic properties of artificial neural networks and of the relevant neural circuits. We address these challenges using the brain's internal compass - the head direction system, a system for which substantial amounts of functional and structural data have accumulated over the past few decades in rodents and fruit flies (a; ; ; ; ; ; ; ;). We trained RNNs to perform a simple angular velocity (AV) integration task and asked whether the anatomical and functional features that emerge as a result of stochastic gradient descent bear similarities to biological networks sculpted by long evolutionary time. By leveraging existing knowledge of the biological head direction (HD) systems, we demonstrate that RNNs exhibit striking similarities in both structure and function. Our results suggest that goal-driven training of artificial neural networks provides a framework to study neural systems at the level of both neural activity and anatomical organization.

Figure 1 (partial caption): e) The brain structures in the fly central complex that are crucial for maintaining and updating heading direction, including the protocerebral bridge (PB) and the ellipsoid body (EB). f) The RNN model. All connections within the RNN are randomly initialized. g) After training, the output of the RNN accurately tracks the current head direction.
We trained our networks to estimate the agent's current head direction by integrating angular velocity over time (Fig. 1f). Our network model consists of a set of recurrently connected units (N = 100), which are initialized to be randomly connected, with no self-connections allowed during training. The dynamics of each unit in the network, r_i(t), are governed by the standard continuous-time RNN equation, of the form τ dx_i(t)/dt = −x_i(t) + Σ_j W^rec_ij r_j(t) + Σ_k W^in_ik I_k(t) + b_i + ξ_i(t), for i = 1, ..., N. The firing rate of each unit, r_i(t), is related to its total input x_i(t) through a rectified tanh nonlinearity, r_i(t) = max(0, tanh(x_i(t))). Every unit in the RNN receives input from all other units through the recurrent weight matrix W^rec and also receives external input, I(t), through the weight matrix W^in. These weight matrices are randomly initialized so that no structure is introduced into the network a priori. Each unit has an associated bias, b_i, which is learned, and an associated noise term, ξ_i(t), sampled at every timestep from a Gaussian with zero mean and constant variance. The network was simulated using the Euler method for T = 500 timesteps of duration τ/10 (τ is set to 250 ms throughout the paper). Let θ be the current head direction. Input to the RNN is composed of three terms: two inputs encode the initial head direction in the form of sin(θ_0) and cos(θ_0), and a scalar input encodes both clockwise (CW, negative) and counterclockwise (CCW, positive) angular velocity at every timestep. The RNN is connected to two linear readout neurons, y_1(t) and y_2(t), which are trained to track the current head direction in the form of sin(θ) and cos(θ); the activities of y_1(t) and y_2(t) are given by a linear readout of the recurrent firing rates through an output weight matrix, of the form y_j(t) = Σ_i W^out_ji r_i(t). Velocity at every timestep (assumed to be 25 ms) is sampled from a zero-inflated Gaussian distribution (see Fig. 5). Momentum is incorporated for smooth movement trajectories, consistent with the observed animal behavior in flies and rodents. More specifically, we updated the angular velocity as AV(t) = σX + momentum * AV(t−1), where X is a zero-mean Gaussian random variable with a standard deviation of one. In the Main condition, we set σ = 0.03 radians/timestep and the momentum to 0.8, corresponding to a mean absolute AV of ∼100 deg/s. These parameters are set to roughly match the angular velocity distribution of the rat and fly (; ; ;). In Sec. 4, we manipulate the magnitude of AV by changing σ to see how the trained RNN may solve the integration task differently. We optimized the network parameters to minimize the mean-squared error between the target head direction and the network outputs generated according to the dynamics above, plus a metabolic cost for large firing rates (L2 regularization on r). Parameters were updated with the Hessian-free algorithm. Similar results were also obtained using Adam. We found that the trained network could accurately track the angular velocity (Fig. 1g). We first examined the functional and structural properties of model units in the trained RNN and compared them to the experimental data from the head direction system in rodents and flies. We first plotted the neural activity of each unit as a function of HD and AV (Fig. 2a). This revealed two distinct classes of units based on the strength of their HD and AV tuning (see Appendix Fig. 6a, b, c). Units with essentially zero activity are excluded from further analyses. The first class of neurons exhibited HD tuning with minimal AV tuning (Fig. 2f).
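To make the simulation loop concrete, here is a minimal sketch of the forward pass: an angular-velocity trajectory with momentum is generated, integrated to obtain the target heading, and a randomly initialized (untrained) continuous-time RNN is simulated with the Euler method. The noise scale and weight initialization are not specified in the text and are made up here; training (Hessian-free or Adam on the mean-squared readout error) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, tau, dt = 100, 500, 250.0, 25.0   # units, timesteps, tau (ms), Euler step = tau/10 (ms)
sigma_av, momentum = 0.03, 0.8          # "Main" condition of the angular-velocity process

# random, untrained parameters -- this sketch only illustrates the forward simulation
W_rec = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(W_rec, 0.0)            # no self-connections
W_in = rng.normal(scale=0.1, size=(N, 3))
b = np.zeros(N)

def rate(x):                            # rectified tanh nonlinearity
    return np.maximum(0.0, np.tanh(x))

# angular-velocity trajectory with momentum, integrated to obtain the target heading
av = np.zeros(T)
for t in range(1, T):
    av[t] = sigma_av * rng.normal() + momentum * av[t - 1]
theta = np.cumsum(av)                   # target head direction (radians)

# inputs: initial heading (sin, cos) held on two channels, plus the scalar AV signal
theta0 = theta[0]
inputs = np.stack([np.full(T, np.sin(theta0)), np.full(T, np.cos(theta0)), av], axis=1)

# Euler integration of tau * dx/dt = -x + W_rec r + W_in I + b + noise
x = np.zeros(N)
rates = np.zeros((T, N))
for t in range(T):
    noise = 0.01 * rng.normal(size=N)
    dx = (-x + W_rec @ rate(x) + W_in @ inputs[t] + b + noise) * (dt / tau)
    x = x + dx
    rates[t] = rate(x)

readout_target = np.stack([np.sin(theta), np.cos(theta)], axis=1)
print(rates.shape, readout_target.shape)   # (500, 100) (500, 2)
```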
The second class of neurons were tuned to both HD and AV and can be further subdivided into two populations: one with high firing rates when the animal performs CCW rotation (positive AV), the other favoring CW rotation (negative AV) (a CW-tuned cell is shown in Fig. 2g). Moreover, the preferred head directions of each sub-population of neurons tile the complete angular space (Fig. 2b). Embedding the model units into 3D space using t-SNE reveals a clear ring-like structure, with the three classes of units being separated (Fig. 2c). Neurons with HD tuning but not AV tuning have been widely reported in rodents (a; ;), although the HD*AV tuning profiles of neurons are rarely shown (but see). By re-analyzing the data from , we find that neurons in the anterodorsal thalamic nucleus (ADN) of the rat brain are selectively tuned to HD but not AV (Fig. 2d, also see), with HD*AV tuning profiles similar to what our model predicts. Preliminary evidence suggests that this might also be true for ellipsoid body (EB) ring neurons in the fruit fly HD system. Neurons tuned to both HD and AV have also been reported previously in rodents and fruit flies (; ;), although the joint HD*AV tuning profiles of neurons have only been documented anecdotally with a few cells. In rodents, certain cells are also observed to display both HD and AV tuning (Fig. 2e). In addition, in the fruit fly heading system, neurons on the two sides of the protocerebral bridge (PB) are also tuned to CW and CCW rotation, respectively, and tile the complete angular space, much like what has been observed in our trained network. These observations collectively suggest that neurons that are HD- but not AV-selective in our model can be tentatively mapped to "Ring" units in the EB, and that the two sub-populations of neurons tuned to both HD and AV map to "Shifter" neurons on the left PB and right PB, respectively. We will correspondingly refer to our model neurons as either 'Ring' units or 'CW/CCW Shifters' (further justification of the terminology will be given in Sec. 3.2 & 3.3). We next sought to examine the tuning properties of both Ring units and Shifters of our network in greater detail. First, we observe that for both Ring units and Shifters, the HD tuning curve varies as a function of AV (see the example Ring unit in Fig. 2f and the example Shifter in Fig. 2g). Population summary statistics concerning the amount of tuning shift are shown in Appendix Fig. 7a. The preferred HD tuning is biased towards a more CW angle at CW angular velocities, and vice versa for CCW angular velocities.
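The tuning analysis described here amounts to binning each unit's activity by head direction and angular velocity; the preferred HD within each AV bin then reveals the AV-dependent shift of the tuning curve. A minimal sketch follows; the bin counts and edge handling are assumptions, since the text does not specify them.

```python
import numpy as np

def hd_av_tuning(activity, hd, av, n_hd_bins=36, n_av_bins=5):
    """Average activity of one unit in each (HD, AV) bin.

    activity : (T,) firing rate of the unit over time
    hd       : (T,) head direction in radians, in [-pi, pi)
    av       : (T,) angular velocity
    Returns an (n_av_bins, n_hd_bins) tuning map; bins without samples are NaN.
    """
    hd_bins = np.linspace(-np.pi, np.pi, n_hd_bins + 1)
    av_bins = np.linspace(av.min(), av.max() + 1e-9, n_av_bins + 1)
    tuning = np.full((n_av_bins, n_hd_bins), np.nan)
    for i in range(n_av_bins):
        for j in range(n_hd_bins):
            mask = ((av >= av_bins[i]) & (av < av_bins[i + 1]) &
                    (hd >= hd_bins[j]) & (hd < hd_bins[j + 1]))
            if mask.any():
                tuning[i, j] = activity[mask].mean()
    return tuning, hd_bins

def preferred_hd_per_av(tuning, hd_bins):
    """Preferred HD (peak of the tuning curve) in each AV bin; a shift of this
    peak with AV is the effect illustrated in Fig. 2f,g. Assumes every AV bin
    contains at least one sampled HD bin."""
    centers = 0.5 * (hd_bins[:-1] + hd_bins[1:])
    return np.array([centers[np.nanargmax(row)] for row in tuning])
```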
Consistent with this observation, the HD tuning curves in rodents are also dependent upon AV (see example neurons in Fig. 2h,i) (; ; ; ;). Second, the AV tuning curves for the Shifters exhibit graded response profiles, consistent with the measured AV tuning curves in flies and rodents (see Fig. 1b,d). Across neurons, the angular velocity tuning curves show substantial diversity (see Appendix Fig. 6b), also consistent with experimental reports. In summary, the majority of units in the trained RNN could be mapped onto the biological head direction system, both in general functional architecture and in detailed tuning properties. Our model unifies a diverse set of experimental observations, suggesting that these neural response properties are the consequence of a network solving an angular integration task optimally.

Figure 3: Connectivity of the trained network is structured and exhibits similarities with the connectivity in the fly central complex. a) Pixels represent connections from the units in each column to the units in each row. Excitatory connections are in red, and inhibitory connections are in blue. Units are first sorted by functional classes, and then are further sorted by their preferred HD within each class. The black box highlights recurrent connections to the Ring units from Ring units, from CCW Shifters, and from CW Shifters. b) Ensemble connectivity from each functional cell type to the Ring units as highlighted in a), in relation to the architecture of the PB & EB in the fly central complex. Plots show the average connectivity (shaded area indicates one s.d.) as a function of the difference between the preferred HD of the cell and the Ring unit it is connecting to. Ring units connect strongly to units with similar HD tuning and inhibit units with dissimilar HD tuning. CCW Shifters connect strongly to Ring units with preferred head directions that are slightly CCW-shifted relative to their own, and CW Shifters connect strongly to Ring units with preferred head directions that are slightly CW-shifted relative to their own. Refer to Appendix Fig. 8b for the full set of ensemble connectivity profiles between different classes.

Previous experiments have detailed a subset of connections between EB and PB neurons in the fruit fly. We next analyzed the connectivity of Ring units and Shifters in the trained RNN to ask whether it recapitulates these connectivity patterns, a test which, to our knowledge, has never been performed in any system between artificial and biological neural networks (see Fig. 3). We ordered Ring units, CCW Shifters, and CW Shifters by their preferred head direction tuning and plotted their connection strengths (Fig. 3a). This revealed highly structured connectivity patterns within and between each class of units. We first focused on the connections between individual Ring units and observed a pattern of local excitation and global inhibition.
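The ensemble connectivity profiles in Fig. 3b can be computed by averaging the weights from one functional class to the Ring units as a function of the (circular) difference in preferred head direction. The sketch below illustrates this analysis on made-up weights; the weight-orientation convention and the bin count are assumptions, not details given in the text.

```python
import numpy as np

def ensemble_profile(W, pref_hd, src_idx, dst_idx, n_bins=18):
    """Average connection weight from units in src_idx to units in dst_idx,
    as a function of the circular difference in preferred head direction.

    W[i, j] is taken here as the weight from unit j to unit i (an assumption;
    only the convention matters for interpreting the sign of the offset).
    """
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    sums, counts = np.zeros(n_bins), np.zeros(n_bins)
    for i in dst_idx:
        for j in src_idx:
            if i == j:
                continue
            d = np.angle(np.exp(1j * (pref_hd[j] - pref_hd[i])))  # wrap to (-pi, pi]
            k = min(int(np.searchsorted(edges, d, side="right")) - 1, n_bins - 1)
            sums[k] += W[i, j]
            counts[k] += 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1)

# toy example with made-up classes and weights
rng = np.random.default_rng(1)
N = 30
W = rng.normal(scale=0.1, size=(N, N))
pref_hd = rng.uniform(-np.pi, np.pi, size=N)
ring, ccw_shifters = np.arange(0, 15), np.arange(15, 30)
offsets, profile = ensemble_profile(W, pref_hd, src_idx=ccw_shifters, dst_idx=ring)
print(offsets.shape, profile.shape)  # (18,) (18,)
```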
We first focused on the connections between individual Ring units and observed a pattern of local excitation and global inhibition. Neurons that have similar preferred head directions are connected through positive weights and neurons whose preferred head directions are anti-phase are connected through negative weights (Fig. 3b). This pattern is consistent with the connectivity patterns inferred in recent work based on detailed calcium imaging and optogenetic perturbation experiments , with one caveat that the connectivity pattern inferred in this study is based on the effective connectivity rather than anatomical connectivity. We conjecture that Ring units in the trained RNN serve to maintain a stable activity bump in the absence of inputs (see section 3.3), as proposed in previous theoretical models (; ;). We then analyzed the connectivity between Ring units and Shifters. We found that CW shifters excite Ring units with preferred head directions that are clockwise to its own, and inhibit Ring units with preferred head directions counterclockwise to its own (Fig. 3b). The opposite pattern is observed for CCW shifters. Such asymmetric connections from Shifters to the Ring units are consistent with the connectivity pattern observed between the PB and the EB in the fruit fly central complex (; ;), and also in agreement with previously proposed mechanisms of angular integration (; ; ;) (Fig. 3b). We note that while the connectivity between PB Shifters and EB Ring units are one-to-one (; ;), the connectivity profile in our model is broad, with a single CW Shifter exciting multiple Ring units with preferred HDs that are clockwise to its own, and vice versa for CCW shifters. In summary, the RNN developed several anatomical features that are consistent with structures reported or hypothesized in previous experimental results. A few novel predictions are worth mentioning. First, in our model the connectivity between CW and CCW Shifters exhibits specific recurrent connectivity (Fig. 8). Second, the connections from Shifters to Ring units exhibit not only excitation in the direction of heading motion, but also inhibition that is lagging in the opposite direction. This inhibitory connection has not been observed in experiments yet but may facilitate the rotation of the neural bump in the ring units during turning (; ; ;). In the future, EM reconstructions together with functional imaging and optogenetics should allow direct tests of these predictions. We have segregated neurons into Ring and Shifter populations according to their HD and AV tuning, and have shown that they exhibit different connectivity patterns that are suggestive of different functions. Ring units putatively maintain the current heading direction and shifter units putatively rotate activity on the ring according to the direction of angular velocity. To substantiate these functional properties, we performed a series of perturbation experiments by lesioning specific subsets of connections. We first lesioned connections when there is zero angular velocity input. Normally, the network maintains a stable bump of activity within each class of neurons, i.e., Ring units, CW Shifters, and CCW Shifters (see Fig. 4a,b). We first lesioned connections from Ring units to all units and found that the activity bumps in all three classes disappeared and were replaced by diffuse activity in a large proportion of units. As a consequence, the network could not report an accurate estimate of its current heading direction. Furthermore, when the connections were restored, a bump formed again without any external input (Fig. 4d), suggesting the network can spontaneously generate an activity bump through recurrent connections mediated by Ring units. We then lesioned connections from CW Shifters to all units and found that all three bumps exhibit a CCW rotation, and the read-out units correspondingly reported a CCW rotation of heading direction (Fig. 4e,f). Analogous results were obtained with lesions of CCW Shifters, which resulted in a CW drifting bump of activity (Fig. 4g,h). These results are consistent with the hypothesis that CW and CCW Shifters simultaneously activate the ring, with mutually cancelling signals, even when the heading direction is stationary. When connections are lesioned from both CW and CCW Shifters to all units, we observe that Ring units are still capable of holding a stable HD activity bump (Fig. 4i,j), consistent with the predictions that while CW/CCW shifters are necessary for updating heading during motion, Ring units are responsible for maintaining heading. We next lesioned connections during either constant CW or CCW angular velocity. Normally, the network can integrate AV accurately (Fig. 4k-n). As expected, during CCW rotation, we observe a corresponding rotation of the activity bump in Ring units and in CCW Shifters, but CW Shifters display low levels of activity. The converse is true during CW rotation. We first lesioned connections from CW Shifters to all units, and found that it significantly impaired rotation in the CW direction, and also increased the rotation speed in the CCW direction. Lesioning of CCW Shifters to all units had the opposite effect, significantly impairing rotation in the CCW direction. These results are consistent with the hypothesis that CW/CCW Shifters are responsible for shifting the bump in a CW and CCW direction, respectively, and are consistent with the data in , which shows that inhibition of Shifter units in the PB of the fruit fly heading system impairs the integration of HD. Our lesion experiments further support the segregation of units into modular components that function to separately maintain and update heading during angular motion.
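A minimal sketch of how such lesion experiments can be implemented in a generic rate RNN, by zeroing the outgoing weights of one functional class; the model form and the index arrays (e.g. cw_shifter_idx) are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch: lesioning all outgoing connections of a unit class and re-running the RNN.
import numpy as np

def lesion_outgoing(W_rec, source_idx):
    """Zero all connections *from* the given units to every unit in the network."""
    W = W_rec.copy()
    W[:, source_idx] = 0.0          # column j holds the outgoing weights of unit j
    return W

def run_rnn(W_rec, W_in, b, inputs, r0, nonlin=np.tanh, dt=0.1):
    """Simple discrete-time rate RNN: r_{t+1} = (1-dt)*r_t + dt*f(W_rec r_t + W_in x_t + b)."""
    r = r0.copy()
    rates = []
    for x_t in inputs:
        r = (1 - dt) * r + dt * nonlin(W_rec @ r + W_in @ x_t + b)
        rates.append(r.copy())
    return np.stack(rates)

# e.g. lesioning CW Shifters during a zero-AV trial (names assumed):
# rates = run_rnn(lesion_outgoing(W_rec, cw_shifter_idx), W_in, b, zero_av_inputs, r0)
```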
Optimal computation requires the system to adapt to the statistical structure of the inputs . In order to understand how the statistical properties of the input trajectories affect how a network solves the task, we trained RNNs to integrate inputs generated from low and high AV distributions. When networks are trained with small angular velocities, we observe the presence of more units with strong head direction tuning but minimal angular velocity tuning. Conversely, when networks are trained with large AV inputs, fewer ring units emerge and more units become Shifter-like and exhibit both HD and AV tuning (Fig. 5c,f,i). We sought to quantify the overall AV tuning under each velocity regime by computing the slope of each neuron's AV tuning curve at its preferred HD angle. We found that by increasing the magnitude of AV inputs, more neurons developed strong AV tuning (Fig. 5b,e,h). In summary, with a slowly changing head direction trajectory, it is advantageous to allocate more resources to hold a stable activity bump, and this requires more ring units. In contrast, with quickly changing inputs, the system must rapidly update the activity bump to integrate head direction, requiring more shifter units. This prediction may be relevant for understanding the diversity of the HD systems across different animal species, as different species exhibit different overall head turning behavior depending on the ecological demand (; ; ;). Previous work in the sensory systems has mainly focused on obtaining an optimal representation (; ; ; ; ;) with feedforward models. Several recent studies have probed the importance of recurrent connections in understanding neural computation by training RNNs to perform tasks (e.g., ; ;), but the relation of these trained networks to the anatomy and function of brain circuits has not been mapped. Using the head direction system, we demonstrate that goal-driven optimization of recurrent neural networks can be used to understand the functional, structural and mechanistic properties of neural circuits. While we have mainly used perturbation analysis to reveal the dynamics of the trained RNN, other methods could also be applied to analyze the network. For example, in Appendix Fig. 10, using fixed point analysis , we found evidence consistent with attractor dynamics. Due to the limited amount of experimental data available, comparisons regarding tuning properties and connectivity are largely qualitative. In the future, studies of the relevant brain areas using Neuropixel probes and calcium imaging will provide a more in-depth characterization of the properties of HD circuits, and will facilitate a more quantitative comparison between model and experiment. In the current work, we did not impose any additional structural constraint on the RNNs during training. We have chosen to do so in order to see what structural properties would emerge as a consequence of optimizing the network to solve the task. It is interesting to consider how additional structural constraints affect the representation and computation in the trained RNNs. One possibility would be to have the input or output units only connect to a subset of the RNN units. Another possibility would be to freeze a subset of connections during training. Future work should systematically explore these issues. Recent work suggests it is possible to obtain tuning properties in RNNs with random connections . We found that training was necessary for the joint HD*AV tuning (see Appendix Fig. 9) to emerge. While that work considers a simple binary classification task, our integration task is computationally more complicated.
Stable HD tuning requires the system to keep track of HD by accurate integration of AV, and to stably store these values over time. This computation might be difficult for a random network to perform . Our approach contrasts with previous network models for the HD system, which are based on hand-crafted connectivity (; ; ; ; ; ; ; Kakaria & de ;). Our modeling approach optimizes for task performance through stochastic gradient descent. We found that different input statistics lead to different heading representations in an RNN, suggesting that the optimal architecture of a neural network varies depending on the task demandan insight that would be difficult to obtain using the traditional approach of hand-crafting network solutions. Although we have focused on a simple integration task, this framework should be of general relevance to other neural systems as well, providing a new approach to understand neural computation at multiple levels. Our model may be used as a building block for AI systems to perform general navigation . In order to effectively navigate in complex environments, the agent would need to construct a cognitive map of the surrounding environment and update its own position during motion. A circuit that performs heading integration will likely be combined with another circuit to integrate the magnitude of motion (speed) to perform dead reckoning. Training RNNs to perform more challenging navigation tasks such as these, along with multiple sources of inputs, i.e., vestibular, visual, auditory, will be useful for building robust navigational systems and for improving our understanding of the computational mechanisms of navigation in the brain . Figure 9: Joint HD × AV tuning of the initial, randomly connected network and the final trained network. a) Before training, the 100 units in the network do not have pronounced joint HD × AV tuning. The color scale is different for each unit (blue = minimum activity, yellow = maximum activity) to maximally highlight any potential variation in the untrained network. b) After training, the units are tuned to HD × AV, with the exception of 12 units (shown at the bottom) which are not active and do not influence the network.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HklSeREtPB
Artificial neural networks trained with gradient descent are capable of recapitulating both realistic neural activity and the anatomical organization of a biological circuit.
Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd’s minimal filtering algorithm and network pruning can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations. Second, we prune the weights in the Winograd domain to exploit static weight sparsity. For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x. We also show that moving ReLU to the Winograd domain allows more aggressive pruning. Deep Convolutional Neural Networks (CNNs) have shown significant improvement in many machine learning applications. However, CNNs are compute-limited. Their performance is dominated by the number of multiplies needed to perform the convolutions. Moreover, the computational workload of CNNs continues to grow over time. BID16 proposed a CNN model with less than 2.3 × 10^7 multiplies for handwritten digit classification. Later, BID13 developed AlexNet, an ImageNet-winning CNN with more than 1.1 × 10^9 multiplies. In 2014, ImageNet-winning and runner-up CNNs increased the number of multiplies to 1.4 × 10^9 BID24 ) and 1.6 × 10^10 BID22 respectively. Despite the powerful representational ability of large scale CNNs, their computational workload prohibits deployment on mobile devices. Two research directions have been explored to address the problem. BID14 proposed using Winograd's minimal filtering algorithm BID25 to reduce the number of multiplies needed to perform 3 × 3 kernel convolutions. On the other end, pruning the model BID5 and exploiting the dynamic sparsity of activations due to ReLU also reduces the required multiplies. Unfortunately, the above two directions are not compatible: the Winograd transformation fills in the zeros in both the weights and the activations FIG0 ) -eliminating the gain from exploiting sparsity. Thus, for a pruned network, Winograd's algorithm actually increases the number of multiplies; the loss of sparsity more than offsets the reduced operation count. In this paper, we introduce two modifications to the original Winograd-based convolution algorithm to eliminate this problem. First, we move the ReLU operation to be after the Winograd transform to also make the activations sparse at the point where the multiplies are performed. Second, we prune the weights after (rather than before) they are transformed. Thus, the weights are sparse when the elementwise multiply is performed -reducing the operation count. Together, these two modifications enable the gains of Winograd's algorithm and of exploiting sparsity to be combined. We open-source our code and models at https://github.com/xingyul/Sparse-Winograd-CNN. Linear Algebra property in Convolution: Previous research proposes using the linear algebra property of convolution to reduce the number of multiplies by trading additions for multiplies. BID3 convert convolution into matrix multiplies and utilize the linear algebra property at the sub-matrix block level.
This approach achieves a 47% saving in multiplies. BID14 exploits the element-level linear algebra property of convolution, i.e. Winograd's minimal filtering algorithm BID25. This approach reduces the number of multiplies by 2.25× to 4×, depending on the image patch size used in the algorithm. Winograd's algorithm is also used in a state-of-the-art deep learning library, cuDNN BID2, to improve computation efficiency. Model Compression: Model compression reduces the number of multiplies of CNNs by pruning network parameters BID15 BID8 and exploiting weight sparsity. BID5 proposed learning the sparsity pattern of network weights by eliminating weights whose absolute value is less than an empirical threshold. This approach can prune the convolutional layers of the model to only 30% − 50% of the original size and reduce the number of multiplies required. first proposed pruning and re-training the weights in Winograd domain for conventional Winograd convolution. later showed promising results on large datasets and reported 90% sparsity in the Winograd parameters of AlexNet with less than 0.1% accuracy loss. Dynamic Activation Sparsity: The ReLU non-linearity sets activations whose values are negative to zero, causing dynamic sparsity in activations. Model compression can work in tandem with dynamic activation sparsity and reduce multiplication workload. BID5 showed that exploiting sparsity of both weights and activations can reduce the number of multiplies by 4 − 11×. BID11 further proposed to manually set a small positive ReLU threshold at test time to exploit greater sparsity in activation without losing testing accuracy. Research in novel architectures also led to optimizations for deep learning accelerators to exploit the sparsity in activations. BID6 proposed using a Leading Non-zero Detection unit (LNZD) for their fully-connected layer accelerator to efficiently skip zeros in input activations. BID1 proposed a similar mechanism for a convolution layer accelerator. We first introduce the conventional Winograd convolution and show how sparsity of weights or activations is lost during the dataflow of the algorithm. We then present the novel Winograd-ReLU CNN architecture. It preserves sparsity in both weights and activations before multiplies are performed and significantly reduces the computational workload. The basic block of the conventional Winograd convolution algorithm works on a p×p patch (denoted by d) extracted with stride of (p − 2) × (p − 2) from an H × W input feature map. With "valid" padding, the p×p patch is convolved with a 3×3 kernel (denoted by g) to produce a (p−2)×(p−2) output patch (denoted by S). The output patches are assembled into an output feature map. Input activation patch d and kernel g (spatial-domain activation and weights) are transformed using matrices B and G to be B^T dB and GgG^T (Winograd-domain activation and weights) respectively, both with shape p × p. After the element-wise product in the Winograd domain, the output activation S is obtained using matrix A (equation 1). Matrices B, G and A are p-specific. When p = 4, B and A consist of 1, −1 and 0, so the multiplication with B and A only requires addition. It reduces the number of multiplies from 9(p − 2)^2 to p^2. Lavin BID14 gives details of the algorithm. S = A^T [(GgG^T) ⊙ (B^T dB)] A (1)
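For concreteness, a small NumPy sketch of equation 1 for p = 4, using the standard F(2×2, 3×3) transform matrices (as in Lavin's formulation); it is an illustration rather than the paper's implementation, and it checks the Winograd output against direct correlation:

```python
# Minimal sketch of conventional Winograd convolution, S = A^T [(G g G^T) .* (B^T d B)] A.
import numpy as np

BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=np.float64)
G = np.array([[1,    0,   0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0,    0,   1]], dtype=np.float64)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_2x2_3x3(d, g):
    """d: 4x4 input patch, g: 3x3 kernel -> 2x2 output patch."""
    U = G @ g @ G.T            # Winograd-domain weights, 4x4
    V = BT @ d @ BT.T          # Winograd-domain activations, 4x4
    return AT @ (U * V) @ AT.T # 16 multiplies instead of 36

# Sanity check against direct (cross-)correlation with "valid" padding.
d = np.random.randn(4, 4)
g = np.random.randn(3, 3)
direct = np.array([[np.sum(d[i:i+3, j:j+3] * g) for j in range(2)] for i in range(2)])
assert np.allclose(winograd_2x2_3x3(d, g), direct)
```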
Spatial Baseline Network: When using a "vanilla" pruned network, as introduced by BID5, a ReLU non-linear operation is performed by the previous layer on spatial-domain input d and spatial-domain weight g is pruned. The output activation patch S is obtained from equation 2. This is illustrated in FIG0 (a) for p = 4. Though g and d may both be sparse due to pruning and ReLU respectively, the element-wise multiply is dense due to the G(·)G^T and B^T(·)B transformations filling in the spatial-domain zeros. Sparsity does not reduce the number of multiplies in Winograd's algorithm. S = A^T [(G·Prune(g)·G^T) ⊙ (B^T·ReLU(d)·B)] A (2) Winograd Native Pruned Network: When using the Winograd-domain pruned network introduced by and BID17, the spatial-domain input d is ReLU-ed by the previous layer while the Winograd-domain weight GgG^T is pruned. The output activation patch S is obtained from equation 3. The algorithm when p = 4 is also illustrated in FIG0 (b). Though Winograd-domain weights are sparse due to pruning, Winograd-domain activations are still dense due to the B^T(·)B transform. The sparsity in spatial activations due to ReLU does not reduce the number of multiplies. S = A^T [Prune(GgG^T) ⊙ (B^T·ReLU(d)·B)] A (3) To address the above problems, we introduce the Winograd-ReLU Network. Instead of applying ReLU to the activations in the spatial domain, we apply ReLU to the activations in the Winograd domain, as in equation 4 and FIG0 (c). The ReLU operation zeros all negative transformed activations, reducing the number of multiplies in the Winograd domain. S = A^T [Prune(GgG^T) ⊙ ReLU(B^T dB)] A (4) In the Winograd-ReLU CNN, we eliminate the spatial-domain kernel entirely. Because this ReLU is really associated with the previous layer, we perform this transformed ReLU starting with the second layer. We point out that the proposed new CNN architecture is not mathematically equivalent to the vanilla CNN nor the conventional Winograd CNN. Due to the change of network architecture, the training and pruning should also be changed. Our method operates in three phases: dense training, pruning, and retraining. Dense training: we train a dense p × p kernel directly in the transform domain. The transformed kernel is initialized and trained directly by back-propagation through the inverse transform - eliminating the need to maintain a kernel in the spatial domain or to transform a spatial kernel. Pruning: we prune the transformed kernel by computing the threshold t required to achieve a desired pruning rate r and setting all weights whose absolute value is less than t to zero. In our experiments, we used the same r for all Winograd-ReLU layers. Because sensitivity varies from layer to layer, we expect that better performance could be achieved by varying the pruning rate r_i for each layer i. Re-training: we re-train the model using a "sparsity mask" to force the weights that were pruned to remain zero. The sparsity mask is computed during the pruning step and is kept constant during re-training. The gradient of the network's loss, L, with respect to the input activation and Winograd weights can be derived using the chain rule. Equation 5 shows the calculation of the input activation gradient ∇_d L and the Winograd weight gradient ∇_{GgG^T} L using the loss gradient ∇_S L passed from upstream layers: ∇_{GgG^T} L = (A ∇_S L A^T) ⊙ ReLU(B^T dB), ∇_d L = B [(A ∇_S L A^T) ⊙ Prune(GgG^T) ⊙ 1(B^T dB > 0)] B^T (5) 4 EXPERIMENTS We applied the methodology described above to several different CNNs on different datasets. The original network models are chosen such that the majority of the convolution layers have 3 × 3 kernels. This ensures that the largest portion of layers can be converted to Winograd convolution layers and that ReLU can be put in the Winograd domain. We used image classification datasets of different scales: CIFAR-10, CIFAR-100 BID12 ) and ImageNet 2012 BID21.
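The following is a minimal NumPy sketch of the pieces described above for a single channel pair: the Winograd-ReLU forward pass of equation 4, magnitude pruning of the Winograd-domain kernel to a target density, and a re-training update that re-applies the frozen sparsity mask. All names are illustrative assumptions; this is not the released implementation:

```python
# Minimal sketch of the Winograd-ReLU dataflow with Winograd-domain pruning.
import numpy as np

BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=np.float64)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=np.float64)

def prune_winograd(W, density):
    """Zero all Winograd-domain weights below the magnitude threshold for the target density."""
    thresh = np.quantile(np.abs(W), 1.0 - density)
    mask = np.abs(W) >= thresh
    return W * mask, mask

def winograd_relu_forward(d, W, mask):
    """Equation 4: ReLU on the transformed activations, so both operands of the product are sparse."""
    V = np.maximum(BT @ d @ BT.T, 0.0)       # sparse Winograd-domain activations
    return AT @ ((W * mask) * V) @ AT.T      # sparse Winograd-domain weights

def masked_update(W, mask, grad, lr=0.01):
    """Re-training step: the frozen mask forces pruned weights to stay at zero."""
    return (W - lr * grad) * mask

# Example: a directly trained 4x4 Winograd-domain kernel pruned to 40% density.
W = np.random.randn(4, 4)
W_pruned, mask = prune_winograd(W, density=0.4)
out = winograd_relu_forward(np.random.randn(4, 4), W_pruned, mask)
```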
For network architectures, we chose VGG-nagadomi , ConvPool-CNN-C model BID23 and a variation of ResNet-18 BID9 ) respectively on three datasets. Using the Tensorflow BID0 ) framework, we trained the spatial baseline CNN, corresponding conventional Winograd CNN, and Winograd-ReLU CNN models from scratch. Then the three models are iteratively pruned and re-trained. For a specific dataset, we used the same data augmentation for the training of all models on the dataset. We used VGG-nagadomi on the CIFAR-10 dataset. VGG-nagadomi is a lightweight version of VGGNet BID22. It contains 8 convolution layers with 3×3 kernels. The best reported validation set accuracy it achieves on CIFAR-10 is 93.31% . We trained three models from scratch. The corresponding conventional Winograd CNN model and Winograd-ReLU CNN model can achieve validation set accuracy of 93.30% and 93.43% respectively. The first convolution layer is most sensitive to pruning and we set its density to a constant of 80%. We iteratively pruned and re-trained other convolution layers with density from 80% down to 20%. Figure 2: Test accuracy vs density for the three models in FIG0 on VGG-nagadomi. Figure 2 shows test accuracy as a function of weight density for the three models. The two baseline models can only be pruned to 60% density before accuracy falls significantly (> 0.1%). Our Winograd-ReLU CNN model can be pruned to 40% density before falling to the same accuracy. TAB1 shows the input activation density and compares the workloads for each pruned convolution layer in three models. Pruning two baseline models reduces the convolution layer workload by 5.1× and 3.7× 1 respectively. Pruning the Winograd-ReLU model reduces the convolution layer workload by 13.3×, a 2.6× and 3.6× improvement respectively over the two baselines. The improvement of overall network workload reduction is 2.2× and 3.0× respectively over two baselines.1 All Winograd CNN model workload reduction include the intrinsic 2.25× reduction. We used the ConvPool-CNN-C model on on the CIFAR-100 dataset. ConvPool-CNN-C contains 9 convolution layers, out of which 7 have 3 × 3 kernels. We trained three models from scratch. The spatial baseline CNN model and conventional Winograd CNN model can achieve single model validation accuracy of 69.34% and 69.32% respectively. The corresponding Winograd-ReLU network model can achieve validation set accuracy of 69.75%. We pruned the first convolution layer to a constant density of 80%. We iteratively pruned and re-trained the other layers to densities from 80% down to 20%. Figure 3: Test accuracy vs density for the three models in FIG0 on ConvPool-CNN-C. Figure 3 shows the accuracy as a function of density for spatial baseline and Winograd-ReLU models. The spatial-baseline and Winograd-ReLU models can be pruned to 60% density without significant (> 0.1%) loss of accuracy. In contrast, the conventional Winograd CNN model can only be pruned to 70% density. At a given density, the Winograd-ReLU model has the highest accuracy. TAB3 shows the input activation density and compares the workloads for each pruned convolution layer in three models. Pruning two baseline models reduces the convolution layer workload by 3.5× and 3.2× respectively. Pruning the Winograd-ReLU model reduces the workload by 7.1×, a 2.1× and 2.2× improvement respectively over the two baselines. The improvement of overall network workload reduction is 2.0× and 2.2× respectively over two baselines. 
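To see how these workload reductions compose, a small worked example with illustrative densities (not values taken from the tables), counting multiplies per 2×2 output tile for a single input/output channel pair:

```python
# Illustrative arithmetic for how the savings compose, relative to dense spatial convolution.
direct_dense   = 9 * 4                  # dense spatial convolution: 9 multiplies per output, 4 outputs
winograd_dense = 4 * 4                  # dense Winograd: one multiply per Winograd-domain element (2.25x fewer)
weight_density = 0.40                   # e.g. Winograd-domain kernel pruned to 40% density
act_density    = 0.50                   # Winograd-domain ReLU keeps roughly half of the activations
winograd_relu  = winograd_dense * weight_density * act_density
print(direct_dense / winograd_relu)     # ~11x fewer multiplies than the dense spatial baseline
```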
We used a variation of the full pre-activation version BID10 of ResNet-18 BID9 on the ImageNet 2012 dataset. We used this version because it performs the best among various ResNet versions and its structure suits our Winograd-ReLU approach -its ReLU units are located before convolutions in the residual modules. The variation is different from original ResNet-18 by replacing all 2 × 2-stride 3 × 3 convolution layers with a 2 × 2 max-pooling layer followed by a 1 × 1-stride 3 × 3 convolution layer. This difference ensures most of the convolution layers can be converted to Winograd convolution layers. Another difference is that it doesn't have the last max pooling layer so the last group of residual modules has spatial size of 14 × 14, in order to keep the spatial size even instead of odd. This setting suits Winograd convolution with p = 4 best in that even spatial size is required for even p values. We trained three models from scratch. Figure 4: Top-1 and top-5 validation accuracy vs density for three models on a variation of ResNet-18. Figure 4 shows the accuracy as a function of density for three models. The spatial baseline CNN model and conventional Winograd CNN model can be pruned to 60% and 50% respectively without significant (> 0.1%) loss of top-1 or top-5 accuracy. The Winograd-ReLU model can be pruned much further, to 30%/35% density without significant (> 0.1%) loss of top-1/top-5 accuracy. At these densities, top-1 accuracies are 66.53%, 66.45% and 66.61% for three models respectively, with a dense spatial baseline of 66.67%; top-5 accuracies are 87.29%, 87.30% and 87.35% for three models respectively, with a dense spatial baseline of 87.42%. TAB5 shows the input activation density and compares the workloads for each pruned convolution layer in three models. Pruning the two baseline models reduces the convolution layer workload by 5.1× and 4.5× respectively. Pruning the Winograd-ReLU model reduces the workload by 13.2×, a 2.6× and 2.9× improvement respectively over the two baselines. The improvement of overall network workload reduction is 2.3× and 2.6× respectively over two baselines. In this section, we summarize the experiment and compare the three models in terms of a) weight and activation dimensions and b) the dynamic density of activations. We then visualize the kernels to illustrate the pattern of the proposed Winograd-ReLU model kernel. In a convolutional neural network, a convolution-ReLU pair acts as a classifier on a spatial patch of an input feature. The dimension of the space being classified is the total number of elements passing through the ReLU layer. The decision boundaries of the classifier are determined by the weights. Insufficient non-zero weights or insufficient activations result in too simple a decision boundary and cause accuracy loss. Experimental results have shown that Winograd-ReLU CNN can reach the same accuracy as both vanilla spatial baseline CNN and conventional Winograd CNN without pruning, and that Winograd-ReLU CNN is more robust to aggressive pruning. In this subsection we provide an explanation for the latter observation from the aspect of activation and weight dimensions. We provide a summary on dimensions in Table 4. Table 4: Comparison of ReLU dimension and weight dimension in three types of networks. Assume the convolution-ReLU pair operates on input activation of spatial size of H × W and the number of input and output channels are C and K respectively.
We can see that our Winograd-ReLU architecture has an advantage on the dimensions of weights and activations over the other two models. This means Winograd-ReLU CNNs classify on a higher dimension with more complex decision boundaries, which forms a stronger representational ability in high dimensional image feature space. As is shown in the ImageNet results in the previous section, dynamic activation density of the spatial baseline CNN model varies significantly among layers. Layers at earlier stages typically have higher density in activation than later stages. In the Winograd-ReLU CNN model, the dynamic activation densities vary little among layers and are all close to 50%. An explanation is that the nature of image convolution ensures activations d to be spatially smooth. Thus, due to the structure of matrix B BID14, 15 of 16 elements in the 4 × 4 matrix of Winograd-domain activation patch B^T · d · B have a mean close to zero. This benefits classification within a patch since the ReLU layer is most powerful when half of the activations are positive. We visualize the kernels of the proposed Winograd-ReLU model. We selected the first 6 input and output channels of layer res2a_2a of ResNet-18 at three different pruning densities. Unlike spatial domain kernels, Winograd-ReLU kernels do not show clear physical meanings such as edge or corner detectors. However, we observe that values of the elements (from top-left, 1-based indices) in each kernel are typically distinct in a kernel and are most likely kept during aggressive pruning. A possible reason for this is that the elements of Winograd-domain activation in a 4 × 4 patch are special: interested readers can calculate B^T · d · B symbolically and will realize that those elements are the only ones that are transformed with a linear combination of only adding and no subtraction. In a spatially smooth activation patch, this means those elements are the ones and the only ones with a non-zero mean. We have shown that we can combine the computational savings of sparse weights and activations with the savings of the Winograd transform by making two modifications to conventional CNNs. To make the weights sparse at the point of multiplication, we train and prune the weights in the transform domain. This simple approach does not reduce the workload with respect to spatial pruning, though, so we move the ReLU non-linear operation after the Winograd transform to make the activations sparse at the point of multiplication. Moving ReLU to the Winograd domain also allows the weights to be more aggressively pruned without losing accuracy. With a 2 × 2 output patch (p = 4), the net result is a reduction of 10.4×, 6.8× and 10.8× in computation on three datasets: CIFAR-10, CIFAR-100 and ImageNet. We plan to extend this work in the following directions. First, we expect that even greater savings on computation can be realized by using larger patch sizes (e.g., p = 6), and there may be benefit in exploring different Winograd transformation matrices (B, G and A). Second, we expect that using different pruning rates r_i for each network layer will help maintain accuracy and improve overall workload reduction. Finally, we expect that combining our Winograd-ReLU network with other network simplification techniques, e.g. quantization of weights and/or activations BID4 BID18 BID20, will reduce the energy of computation even further.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJzgZ3JCW
Prune and ReLU in Winograd domain for efficient convolutional neural network
In this paper we present a novel optimization algorithm called Advanced Neuroevolution. The aim for this algorithm is to train deep neural networks, and eventually act as an alternative to Stochastic Gradient Descent (SGD) and its variants as needed. We evaluated our algorithm on the MNIST dataset, as well as on several global optimization problems such as the Ackley function. We find the algorithm performing relatively well for both cases, overtaking other global optimization algorithms such as Particle Swarm Optimization (PSO) and Evolution Strategies (ES). Gradient Descent (GD) and its variations like stochastic gradient descent BID2 are the de facto standard for training deep neural networks (DNNs) for tasks in various domains like Object Detection BID10, Robotic Grasping BID9 and Machine Translation BID1. Most of the field of Deep Learning is centered around algorithms similar to variants of Gradient Descent to find the optimal weights given desired input/output pairs BID7, BID4, BID14. However, there are also some limitations to using gradient-based optimization. For example, the neural network and the loss function have to be differentiable end-to-end. As a consequence, there are a number of problems that can not be directly modeled or solved without some alterations such as Formal Logic and Hard Attention BID11. Note that throughout this paper, we will refer to gradient-based methods collectively as SGD. Similarly, we will refer to Advanced Neuroevolution with the acronym AdvN.For those reasons, we developed a new algorithm which we call Advanced Neuroevolution. It is not a single algorithm, in truth. It is an ensemble of low-level algorithms, layered on top of each other. Those low-level algorithms have different scopes of operations addressing different levels of abstraction in the search process. For example, the perturbation mechanism addresses the introduction of noise into the models, the most basic operation. In contrast, the minimum distance mechanism addresses the global scale properties, i.e. the search regions. The goal is to traverse the search space as efficiently as possible without use of gradients. In the case of neural networks the search space is the weight space, including biases. Indeed, while this algorithm was developed primarily for training of deep neural networks, it can be used for other optimization tasks. In essence, we present the algorithm as an evolutionary optimization algorithm, with a focus on DNNs. There are many global optimization algorithms such as Evolution Strategies BID13, Particle Swarm Optimization BID8 and Simulated Annealing BID23. Each has its merits and limitations. Our aim is not to compete directly with those algorithms but rather to complement them and offer another option with its own merits and limitations. To evaluate the performance of such algorithms we can use well-known benchmark functions such as the Rastrigin or Ackley function. We recognize those functions and test Advanced Neuroevolution against them to assess its performance. In addition, there have been other approaches to using evolutionary optimization techniques to train DNNs, see and BID19 as recent examples. It reflects the awareness within the broader research community about the potential of such algorithms, and the need for alternatives to SGD. We don't see our algorithm replacing SGD, especially in fields where it is already quite successful such as Computer Vision. Our aim is to complement it, by offering another option. 
Furthermore, there is no reason why both can not be used in tandem as part of a grander learning strategy. We noted that there has been a limitation in learning potential due to the employment of SGD. Specifically, SGD is prone to losing the model's top performance throughout training iterations. This is usually due to dynamic elements in the environments, and sometimes can also arise from batch training. To elaborate, at one point in training t i the agent may achieve a certain high score, say 9000. At a later point in time, t i+1, there is no guarantee that the agent will not regress to a lower score when faced with the exact same conditions. While this behavior is a natural part of SGD, and happens frequently during the training progress, the algorithm eventually overcomes these temporary regressions. In Robotics, and associated fields such as Reinforcement Learning, this beahvior can amount to being sample-inefficient. That is, the agent usually takes a relatively large number of samples, possibly in the order of 10 6 to 10 8, to achieve competitive learning BID15. In such cases, a large amount of labeled data is required which is often costly to create. Alternatives like simulation have other potential issues with producing training data which is representative for reality. This is another reason why we wanted to investigate gradient-free approaches. The algorithm we propose always maintains its best performing agent. Indeed it is wholly different from SGD as an optimization step. Advanced Neuroevolution seeks to train neural networks based on the concept of evolving a pool of agents, whereas SGD iteratively optimizes a single agent. One advantage for this approach is that it inherently achieves exploration and exploitation, whereas SGD usually struggles on exploration due to its gradient-following constitution. We see this paper as a starting point, for the whole Robotics community to engage in developing of alternative optimization for DNNs used in Robotics and RL. Training neural networks using optimization algorithms is not a novel idea, see for example BID12. However, we aim to improve upon the shortcomings of other algorithms with the mechanisms we introduce here. Our algorithm should be able to handle sparse-rewards, exploration, exploitation, high-dimensionality, computational efficiency, pool-efficiency and sample-efficiency. The field of evolutionary computation is rich with spectacular implementations and algorithms. For example, BID18 use neuro-evolution as a proxy algorithm to evolve the architectures of the networks, rather than the networks themselves. They also use computer vision is a starter task, though they apply their method on the more challenging sets of CIFAR10 and CIFAR100. The networks themselves are trained using SGD. In contrast, we evolve the weights of the networks themselves rather than the architectures. Similarly, the work in BID3 ) also uses gradients to evolve the networks. The authors in train models of 4M parameters, and cite it as one of the largest models ever trained with evolutionary algorithms. Following this interesting path, we decided to use a model on a similar scale for the MNIST classification task, to see whether our algorithm can solve as large a network or not. We summarize the contributions of the paper as follows. 
1) Introduce an evolutionary optimization algorithm called Advanced Neuroevolution (AdvN).2) Benchmark AdvN algorithm and other optimization algorithms, on solving global optimization functions.3) Test AdvN against functions with special properties such as the sparsely-rewarding Easom, to evaluate its behavior. For example, Easom is a function that has near-zero gradient everywhere except a small region compared to the search space. Gradient-following in this scenario would be an implausible strategy. 4) Test AdvN and SGD on hand-written digit classification dataset MNIST. We impose special conditions to see how the algorithm can adapt to typical training challenges. The algorithm is actually a set of smaller, locally-aware algorithms and functions all operating to achieve a global synergistic behavior. We describe each in turn as follows, and eventually integrate all of them so as to portray the entire algorithm. Please note that we prefer to use nomenclature that is different from traditional evolutionary optimization. It reflects our way of thinking about the problem. Also, using standard evolutionary/genetic nomenclature such as "Mutations", sometimes hinders the mental grasp on the problem because of the association with biological processes. We do not wish to crystallize around or closely subscribe to the mechanisms of Biological evolution. For those reasons, we developed our own notation, which we share in the proceeding sections. Another note is that we employ a fixed-size pool, and the fixed network structure. This is in contrast to other evolutionary algorithms where the network structure may be changed throughout the training process. For the first iteration of the algorithm the pool will consist of a set of random samples chosen chosen according to the weight initialization scheme, e.g. Xavier Normal BID6. The subsequent iterations of the pool will be made up of the following components: Elite, Anchors, Probes and Blends. The Elite is the historically best-performing sample. Anchors are the bestperforming N samples in a generation, where N is the desired number of anchors. These samples are copied over to the new pool as is. From each anchor M probes are spawned, where M is the number of probes desired. Those probes are clones of the anchor, each randomly perturbed. In addition to the anchors and probes, the Elite is copied over as well. The remaining slots are filled with blends. Blends can occur between an anchor and any other sample in the pool. The most basic step in the algorithm is the Perturbation step. This is equivalent to genetic mutation, prinicipally referring to the introduction of noise into the makeup of the system. It is the primary searching mechanism in the algorithm. Two important properties are the magnitude and shape of the noise distribution. We choose uniformly-distributed noise, centered at the origin. We refer to limits of the uniform distribution as Search Radius. It can be thought of as defining a virtual radius around the anchor where the probes will be cast in random directions. The greater the radius the farther away the probes will be casted. This tunes the explorative vs. exploitative properties of the algorithm. The limit of the uniform distribution, ie. Search Radius, is calculated with respect to the integrity value and 2 pre-defined constants. Search radius is calculated as DISPLAYFORM0 where p = (1-integrity), λ and lr are scalar constants. The learning rate, lr, scales the function, controlling the search radius. 
The shifted, scaled hyperbolic tangent function has attractive properties in the range. It has an almost-flat slope near 0 and an almost-flat slope near 1. This allows the algorithm to spend more time searching low-energy regions where the most likely rewards are, ie. exploitation. It also prevents the algorithm from searching exceedingly high-energy configurations, ie. controlling exploration. Similar to the Search Radius, is the number of selections. This variable determines how many weights will be perturbed. It is calculated as DISPLAYFORM1 where p = (1-integrity), and α and β are scalar constants. In the range this function starts with at the origin and increases until it saturates near top. This saturation limits the number of modifications in the network according to the chosen constants. Intuitively, making too many adjustments to a neural network model in one step is usually un-rewarding. In addition, when searching high-energy regions, ie. low integrity, the number of selections can exceed reasonable limits. This creates a lot of wasted cycles where the search is not profitable. The experimenter chooses the constants introduced in equations 1 and 2 before starting the algorithm. Blending mechanism takes randomly-chosen weights from 2 components to yield a blend. The first component is always one of the anchors, picked at random. The second component can be any sample in the pool. This is commonly referred to as Crossover. The first component is cloned, to form the basis. Then a number of weights are randomly picked from the second component. This number is calculated by equation 2, using the same constants. The importance of blending is that it potentially allows us to explore regions in the search space which are otherwise unreachable without effort. Intuitively, blends have the potential to transport samples farther away from the current search region, shortcutting the general mechansim of perturbations. Blends help to extend the actively searched area in the search space based on already known favorable samples. There are two sorts of elites, a generational elite, and a historical elite. The generational elite is the best-performing sample of the current pool. The historical elite is the best performing sample across all generations. In our algorithm the generational elite is one of the Anchors, and the historical elite is simply called the Elite. The integrity concept defines the scope and magnitude of perturbations. The lower the integrity, the greater the magnitude and the scope of perturbations introduced into the model. The magnitude and scope of perturbations follow a non-linear trajectory with the integrity. The governing functions are equations 1 and 2. Generally, neural networks are not updated in bulk. Rather, updates should come as small incremental changes to the network. For those reasons, intuitively, we saturate the magnitude and scope of perturbations and blends in order not to exceed thresholds. Those thresholds are parameters of the algorithm. They control how aggressive the perturbations and blends are allowed to get. Integrity decreases when the current generation of samples does not yield a score that is incrementally better than the previous best score. There is a parameter in the algorithm that defines the required minimum percentage change in the score, we call it Minimum Entropy. For example, in the global optimization scenario, we set the minimum acceptable percentage improvement, ie. Minimum Entropy, to 1%. 
Thus, if the current generation does not improve upon the previous generation's score by at least 1% then the integrity is decreased by a fixed step size. The experimenter determines the appropriate step size. Backtracking curbs the behavior of integrity. It can be thought of as a layer on top of integrity. If integrity keeps decreasing, without improvement in the reward signal, then backtracking interferes, resets integrity, and re-inserts the Elite. Essentially, as integrity decreases we search higher-energy regions of the search space. Eventually, the model becomes "hot" from applying high-magnitude perturbations, and the weights start exploding. To fix this problem, backtracking resets the integrity back to maximum, and inserts the historical elite in place of the generational elite. When the elite is inserted, and the probes are spawned with high integrity, essentially copying the elite with minor modifications, then the entire pool "cools down". This mechanism makes it possible to explore high-energy configurations more safely. It is somewhat reminiscent of simulated annealing. Anchors constitute the main search mechanism. Essentially, anchors constitute the best N-performing agents, in order. Thus for example if there are 5 anchors, those will be the 5 best-performing agents. Anchors are updated each generation, as well as being carried over to the next generation in case no improvement was made. The best anchor is also the generational elite. We spawn probes as clones of the anchors, but with the perturbations applied to them. Intuitively, what this means is that probes are spawned to search the local neighborhoods of the anchors. By having multiple anchors we search multiple regions in tandem. By having multiple probes, we search multiple local neighborhoods within those regions. Distance is a metric that represents how far the anchors can be from each other. It is calculated in terms of magnitudes and positions of difference between the weights of the anchors. Essentially, it's inefficient for the anchors to be close to each other. It would be practically searching the same region. Thus a minimal distance is defined. For that reason we choose the Canberra distance as given below: d(x, y) = Σ_i |x_i − y_i| / (|x_i| + |y_i|), where x, y represent the two anchors. Similar to distance, where we don't want the anchors to collapse to a single region, we also introduce the concept of radial expansion to achieve the same goal. Radial expansion adaptively adjusts the perturbation magnitude and scope. We do this in order to cast the samples far away from each other so that we can maintain a full roster of anchors. For example, if we have a roster of 5 anchors and we're only using 3, then it means that all the other anchors are not far enough according to the distance metric. Thus we lose 2 anchors, and operate only 3 anchors. The remainder of the slots do not necessarily go to waste, they are given to the blending algorithm. However, this is not the ideal situation, since we expect the algorithm to use the number of anchors allocated. Therefore, the mechanism of radial expansion increases the search space by increasing the value of the parameters governing the perturbation magnitude. This in turn forces the probes to be cast farther from the anchors, thereby allowing a greater diversity (and consequently distance) within the pool. We find that this mechanism is significantly involved in the training of networks solving the MNIST task. Collectively, the algorithm operates in two steps.
The first is a conditioning step that takes into account the current and previous states of the algorithm, and decides the integrity to use, who are the anchors and the elite for the upcoming generation. The conditioning step makes use of the Distance and Expansion mechanisms described above. The second step is the execution step, the actual formation of the pool through the perturbation and blending steps. This section defines the implementation details of the algorithm, as well as the benchmarking processes. Across the entire experiment set, we use only a pool size of 50 samples. This remarkably low number emphasizes the objectives we are trying to achieve with this algorithm. The algorithm is written and implemented in PyTorch framework BID16. The networks used are common architectures. On this end, however, we think there is a vast opportunity to develop network architectures that are tailored to this sort of algorithm. Thus far, the architectures used have been tailored for SGD featuring layers such as Dropout. In our experiments, we find that the performance is sensitive to the architectural choice. However, optimizing the architecture is beyond the scope of this work. We use 4 Nvidia Titan V GPUs for data parallelization. That is, the models for copied into each GPU and the inference is run by dividing the dataset into 4 chunks, and then the loss is aggregated. For all our experiments we use a pool size of 50, 4 anchors, and 8 probes per anchor. Our aim was to showcase the algorithm on a number of tasks which are relatively simple. We decided to benchmark our performance against that of typical global optimization algorithms. We use the Inspyre and PyBrain libraries implementions BID20, along with the default parameters. It is typical that any optimization algorithm is sensitive to the choice of parameters. However, tuning those parameters is beyond the scope of this paper. We test on a set of 7 well-defined functions. Each function is tested 5 times, and the average number of generations required to converge is presented. Note that we define convergence as reaching a value within 0.06 of the global optimum. For example, if the global optimum is 0, then when the algorithm reaches 0.06 or less, it terminates. We solve the global optimization problem as a regression problem. The architecture we used is a single hidden layer of size 128. The functions are all 2-dimensional, therefore the network accepts two real numbers as the (x,y) origin and outputs 2 real numbers as optimal (x,y). The value of the function at the predicted (x,y) is the network's cost, and the aim is to minimize that cost. The input coordinates to the network are picked from a uniform distribution for each run, but remains constant throughout that run. The uniform distribution's limits differ for each function. Specifically, they are [-5,5] for Ackley, [-5.2,5 .2] for Rastrigin, [-2,2] for Rosenbrock, [-500,500] for Schwefel, [-15,-5,-3,3] for Bukin, [-20,20] for Easom and [-512,512] for Eggholder. For fair comparison they are the same for all the algorithms in the benchmark. We run the algorithms for a maximum of 5000 generations. If the algorithm has not converged, it is assumed that it is stuck in a local minima which it can't escape. If this is the case, then we calculate this particular run as taking 5000 generations. If all the runs don't converge within 5000 generations, we note this with a "+" sign, e.g. 
5000+.In addition to regular benchmark functions, we wanted to test the algorithm against some special functions with special properties. For example, the Easom function is flat throughout except one regions where it is depressed. A typical gradient-following algorithm such as SGD will struggle to solve this function. It reflects an aspect of the sparse-reward problem in domains such as RL. Those special functions are not used for benchmarking, however, because the benchmarking libraries do not carry their implementations. In this set of experiments, we run the algorithm against a simple CNN to solve the MNIST digit classification problem. The model we trained had 4 convolutional layers of size 32, 64, 128 and 128 respectively with stride 2 and max-pooling in between, and a single fully-connected layer of size 512. The activation used was the Parametric ReLu function in PyTorch. The model has 4.7M parameters. To speed up the computations we switched to using half-precision floats. Interestingly, in our experimentation we found that the algorithm does not need much data at all to train. For that reason we used only a randomly chosen subset of the MNIST dataset, of 2000 images. When we run validation, however, we run against the entire validation set of 10,000 images. This not only tests how well the model generalizes, but also how much it can learn from a limited-dataset. This problem is of relevance to domains with relatively small volume of annonated data, such as Medical Imaging BID0.The test is terminated once the algorithm achieves a loss of 0.15 or lower, or if the number of generations exceeded 5000. We assert this limit in order to prevent runs of diminishing returns. Our goal is not to achieve or exceed the state of the art in MNIST classification. The system would need to be tuned beyond the casual effort, and then the training would take a different, more aggressive approach. Our aim is to showcase how the algorithm performs in an appreciatively high-dimensional search space. As mentioned earlier, the algorithm is tested on a number of common global optimization functions. For comparison we test several other optimization algorithms. Namely, Evolution Strategies BID13 DISPLAYFORM0 and some of them can definitely outperform the vanilla implementation shown here. Despite this, the clearly suggest that AdvN is a strong competitor. Our algorithm performs well and solves all the benchmarking functions in a relatively low number of generations. In addition, the algorithm solves the special functions we give it. The Eggholder function is notorious for being difficult since the global optimium is in a corner, and the function is not convex. Therefore it is quite difficult for an algorithm that doesn't explore enough to ever find the global optimum, but our algorithm does, and with only a pool size of 50 sampels. Similarly, a gradient-following algorithm would be stuck in the Easom function where the gradient is near zero throughout, save for one small region. We see that our algorithm explores well, and converges upon the reward-bearing region. The Bukin function has a narrow ridge where the global minimum lies. If the exploration steps are too large, the algorithm will generally skip over the ridge. Again, we highlight that those were achieved with the same set of parameters across all the functions. The parameters of the algorithm are not changed to solve a particular function. Rather, it has the required agility to deal with the different conditions imposed by the different functions. 
This agility is a desirable feature because, throughout learning, the problem space may impose varying constraints and dynamics. The algorithm needs to be agile enough to handle those conditions as they arise.

A sample run of the training loss against the number of inferences is shown in Figure 4.2. The algorithm takes 2333 generations to converge to a training loss of 0.15, averaged over 5 runs. This corresponds to a validation accuracy of 90%. Similarly, SGD takes 320 generations (i.e., batches of 50) to converge to the same training loss, and also achieves a validation accuracy of 90%. It must be noted that 2333 generations over 2000 images with a pool size of 50 means that our algorithm on average runs inference 233,300,000 times. That is orders of magnitude more than SGD. In our run, SGD takes around 16,000 inferences to converge to a similar training loss. Therefore, we see a clear trade-off in the number of inferences required, in favor of SGD. This may be because the gradient is smooth in this task; it is possible that in other tasks the gradient space is more challenging to navigate, or uninformative. With only 2000 images from the training set, the model was able to achieve a relatively acceptable accuracy. Note that we don't shuffle those 2000 images; the network sees the same images in the same order. It would be interesting to see what role layers such as Dropout would play if we train other networks using AdvN. It is clear that the algorithm can handle the complexity of the 4.7M-parameter network. This suggests that we can use it to train deeper, wider networks on more complex tasks. Furthermore, we see that half-precision is not a hindrance for the algorithm. This bodes well for future iterations since we train on Volta-architecture GPUs, which use tensor cores to execute those operations. It would be interesting to see if we can achieve the same level of performance with even lower-precision floats, should any such become available in the future.

We presented the Advanced Neuroevolution algorithm as an alternative optimization step to SGD for training neural networks. The work is motivated by some limitations we perceived in gradient-based methods, such as the requirement of differentiability and sample-inefficiency. The algorithm is benchmarked against other optimization algorithms on typical optimization problems. It performed satisfactorily, and improved upon all of them. For fairness, we note that the implementations of the other algorithms may not be optimized, and they can arguably perform better. Next, our algorithm is tested on the MNIST digit classification task. It achieved 90% accuracy on the entire validation set using only 2000 images from the training set. In all our experiments, half-precision floats are used in order to decrease computation time. The computations are done on only 4 Titan V GPUs instead of thousands of CPU cores as in other evolutionary-algorithm papers. This makes training of neural networks with evolutionary algorithms more tractable in terms of resource requirements. Finally, while not presented in this work, preliminary tests of our algorithm on RL tasks have been promising. It solves the assigned problems, though it takes longer than other approaches. We aim to improve upon the algorithm and the strategies employed in order to achieve competitive results on RL and robotics tasks.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1g5Gh05KQ
A new algorithm to train deep neural networks. Tested on optimization functions and MNIST.
Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services. Stochasticity is a key component of many modern neural net architectures and training algorithms. The most widely used regularization methods are based on randomly perturbing a network's computations BID29 BID7. Bayesian neural nets can be trained with variational inference by perturbing the weights BID4 BID0. Weight noise was found to aid exploration in reinforcement learning BID20 BID2. Evolution strategies (ES) minimizes a black-box objective by evaluating many weight perturbations in parallel, with impressive performance on robotic control tasks BID25. Some methods perturb a network's activations BID29 BID7, while others perturb its weights BID4 BID0 BID20 BID2 BID25. Stochastic weights are appealing in the context of regularization or exploration because they can be viewed as a form of posterior uncertainty about the parameters. However, compared with stochastic activations, they have a serious drawback: because a network typically has many more weights than units, it is very expensive to compute and store separate weight perturbations for every example in a mini-batch. Therefore, stochastic weight methods are typically done with a single sample per mini-batch. In contrast, activations are easy to sample independently for different training examples within a mini-batch. This allows the training algorithm to see orders of magnitude more perturbations in a given amount of time, and the variance of the stochastic gradients decays as 1/N, where N is the mini-batch size. We believe this is the main reason stochastic activations are far more prevalent than stochastic weights for neural net regularization. In other settings such as Bayesian neural nets and evolution strategies, one is forced to use weight perturbations and live with the ing inefficiency. In order to achieve the ideal 1/N variance reduction, the gradients within a mini-batch need not be independent, but merely uncorrelated. In this paper, we present flipout, an efficient method for decorrelating the gradients between different examples without biasing the gradient estimates. Flipout applies to any perturbation distribution that factorizes by weight and is symmetric around 0-including DropConnect, multiplicative Gaussian perturbations, evolution strategies, and variational Bayesian neural nets-and to many architectures, including fully connected nets, convolutional nets, and RNNs. 
In Section 3, we show that flipout gives unbiased stochastic gradients, and discuss its efficient vectorized implementation which incurs only a factor-of-2 computational overhead compared with shared perturbations. We then analyze the asymptotics of gradient variance with and without flipout, demonstrating strictly reduced variance. In Section 4, we measure the variance reduction effects on a variety of architectures. Empirically, flipout gives the ideal 1/N variance reduction in all architectures we have investigated, just as if the perturbations were done fully independently for each training example. We demonstrate speedups in training time in a large batch regime. We also use flipout to regularize the recurrent connections in LSTMs, and show that it outperforms methods based on dropout. Finally, we use flipout to vectorize evolution strategies BID25, allowing a single GPU to handle the same throughput as 40 CPU cores using existing approaches; this corresponds to a factor-of-4 cost reduction on Amazon Web Services.

We use the term "weight perturbation" to refer to a class of methods which sample the weights of a neural network stochastically at training time. More precisely, let f(x, W) denote the output of a network with weights W on input x. The weights are sampled from a distribution q_θ parameterized by θ. We aim to minimize the expected loss E_{x,y∼D} E_{W∼q_θ}[L(f(x, W), y)], where L is a loss function, and D denotes the data distribution. The distribution q_θ can often be described in terms of perturbations: W = W̄ + ∆W, where W̄ are the mean weights (typically represented explicitly as part of θ) and ∆W is a stochastic perturbation. We now give some specific examples of weight perturbations.

Gaussian perturbations. If the entries ∆W_ij are sampled independently from Gaussian distributions with variance σ²_ij, this corresponds to the distribution W_ij ∼ N(W̄_ij, σ²_ij). Using the reparameterization trick BID9, this can be rewritten as W_ij = W̄_ij + σ_ij ε_ij, where ε_ij ∼ N(0, 1); this representation allows the gradients to be computed using backprop. A variant of this is multiplicative Gaussian perturbation, where the perturbations are scaled according to the weights: W_ij = W̄_ij (1 + σ_ij ε_ij), where again ε_ij ∼ N(0, 1). Multiplicative perturbations can be more effective than additive ones because the information content of the weights is the same regardless of their scale.

DropConnect. DropConnect BID30 is a regularization method inspired by dropout BID29 which randomly zeros out a random subset of the weights. In the case of a 50% drop rate, this can be thought of as a weight perturbation where the mean weights are W̄ = W/2 and each entry ∆W_ij is sampled uniformly from ±W̄_ij.

Variational Bayesian neural nets. Rather than fitting a point estimate of a neural net's weights, one can adopt the Bayesian approach of putting a prior distribution p(W) over the weights and approximating the posterior distribution p(W|D) ∝ p(W)p(D|W), where D denotes the observed data. BID4 observed that one could fit an approximation q_θ(W) ≈ p(W|D) using variational inference; in particular, one could maximize the evidence lower bound (ELBO) with respect to θ:

F(θ) = −KL(q_θ(W) ‖ p(W)) + E_{W∼q_θ}[log p(D|W)].

The negation of the second term can be viewed as the description length of the data, and the negation of the first term can be viewed as the description length of the weights BID6. BID4 observed that if q is chosen to be a factorial Gaussian, sampling from q_θ can be thought of as Gaussian weight perturbation where the variance is adapted to maximize F.
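To make the perturbation schemes above concrete, here is a small PyTorch sketch of a single fully connected layer with additive or multiplicative Gaussian weight perturbations drawn once per mini-batch (the shared-perturbation baseline). The layer sizes and σ initialisation are our own illustrative choices, not the settings used in this paper's experiments.

    import torch

    def perturbed_linear(x, W_bar, log_sigma, multiplicative=True):
        # Reparameterization trick: sample eps ~ N(0, 1) and build the perturbed
        # weights explicitly so that gradients flow into W_bar and log_sigma.
        sigma = log_sigma.exp()
        eps = torch.randn_like(W_bar)
        if multiplicative:
            W = W_bar * (1.0 + sigma * eps)    # W_ij = W_bar_ij (1 + sigma_ij eps_ij)
        else:
            W = W_bar + sigma * eps            # W_ij = W_bar_ij + sigma_ij eps_ij
        return x @ W                           # one perturbation shared by the whole batch

    W_bar = torch.nn.Parameter(0.05 * torch.randn(784, 512))
    log_sigma = torch.nn.Parameter(torch.full((784, 512), -3.0))
    y = perturbed_linear(torch.randn(128, 784), W_bar, log_sigma)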
BID0 later combined this insight of BID4 with the reparameterization trick BID9 to derive unbiased stochastic estimates of the gradient of F.

Evolution strategies. ES BID22 is a family of black-box optimization algorithms which use weight perturbations to search for model parameters. ES was recently proposed as an alternative reinforcement learning algorithm BID26 BID25. In each iteration, ES generates a collection of weight perturbations as candidates and evaluates each according to a fitness function F. The gradient of the parameters can be estimated from the fitness function evaluations. ES is highly parallelizable, because perturbations can be generated and evaluated independently by different workers. Suppose M is the number of workers, W is the model parameter, σ is the standard deviation of the perturbations, α is the learning rate, F is the objective function, and ∆W_m is the Gaussian noise generated at worker m. The ES algorithm tries to maximize E_{∆W}[F(W + σ∆W)]. The gradient of the objective function and the update rule can be given as:

∇_W E_{∆W}[F(W + σ∆W)] ≈ (1/(Mσ)) Σ_{m=1}^{M} F(W + σ∆W_m) ∆W_m,    W ← W + (α/(Mσ)) Σ_{m=1}^{M} F(W + σ∆W_m) ∆W_m.    (1)

In some cases, it's possible to reformulate weight perturbations as activation perturbations, thereby allowing them to be efficiently computed fully independently for different examples in a mini-batch. In particular, BID10 showed that for fully connected networks with no weight sharing, unbiased stochastic gradients could be computed without explicit weight perturbations using the local reparameterization trick (LRT). For example, suppose X is the input mini-batch, W is the weight matrix and B = XW is the matrix of activations. The LRT samples the activations B rather than the weights W. In the case of a Gaussian posterior, the LRT is given by:

b_{m,j} ∼ N(μ_{m,j}, v_{m,j}),  with  μ_{m,j} = Σ_i x_{m,i} W̄_{i,j}  and  v_{m,j} = Σ_i x²_{m,i} σ²_{i,j},    (2)

where b_{m,j} denotes the perturbed activations. While the exact LRT applies only to fully connected networks with no weight sharing, BID10 also introduced variational dropout, a regularization method inspired by the LRT which performs well empirically even for architectures the LRT does not apply to.

Control variates are another general class of strategies for variance reduction, both for black-box optimization BID31 BID21 BID19 and for gradient-based optimization BID24 BID18 BID15. Control variates are complementary to flipout, so one could potentially combine these techniques to achieve a larger variance reduction. We also note that the fastfood transform BID13 is based on similar mathematical techniques. However, whereas fastfood is used to approximately multiply by a large Gaussian matrix, flipout preserves the random matrix's distribution and instead decorrelates the gradients between different samples.

As described above, weight perturbation algorithms suffer from high variance of the gradient estimates because all training examples in a mini-batch share the same perturbation. More precisely, sharing the perturbation induces correlations between the gradients, implying that the variance can't be eliminated by averaging. In this section, we introduce flipout, an efficient way to perturb the weights quasi-independently within a mini-batch. We make two assumptions about the weight distribution q_θ: the perturbations of different weights are independent; and the perturbation distribution is symmetric around zero. These are nontrivial constraints, but they encompass important use cases: independent Gaussian perturbations (e.g. as used in variational BNNs and ES) and DropConnect with drop probability 0.5.
We observe that, under these assumptions, the perturbation distribution is invariant to elementwise multiplication by a random sign matrix (i.e. a matrix whose entries are ±1). In the following, we denote elementwise multiplication by •.

Observation 1. Let q_θ be a perturbation distribution that satisfies the above assumptions, and let ∆Ŵ ∼ q_θ. Let E be a random sign matrix that is independent of ∆Ŵ. Then ∆W = ∆Ŵ • E is identically distributed to ∆Ŵ. Furthermore, the loss gradients computed using ∆W are identically distributed to those computed using ∆Ŵ.

Flipout exploits this fact by using a base perturbation ∆Ŵ shared by all examples in the mini-batch, and multiplies it by a different rank-one sign matrix for each example:

∆W_n = ∆Ŵ • s_n r_n^⊤,    (3)

where the subscript denotes the index within the mini-batch, and r_n and s_n are random vectors whose entries are sampled uniformly from ±1. According to Observation 1, the marginal distribution over gradients computed for individual training examples will be identical to the distribution computed using shared weight perturbations. Consequently, flipout yields an unbiased estimator for the loss gradients. However, by decorrelating the gradients between different training examples, we can achieve much lower variance updates when averaging over a mini-batch.

Vectorization. The advantage of flipout over explicit perturbations is that computations on a mini-batch can be written in terms of matrix multiplications. This enables efficient implementations on GPUs and modern accelerators such as the Tensor Processing Unit (TPU). Let x denote the activations in one layer of a neural net. The next layer's activations are given by:

y_n = φ((W̄ + ∆Ŵ • s_n r_n^⊤)^⊤ x_n) = φ(W̄^⊤ x_n + (∆Ŵ^⊤ (x_n • s_n)) • r_n),

where φ denotes the activation function. To vectorize these computations, we define matrices R and S whose rows correspond to the random sign vectors r_n and s_n for all examples in the mini-batch. The above equation is vectorized as:

Y = φ(X W̄ + ((X • S) ∆Ŵ) • R).    (4)

This defines the forward pass (a minimal sketch of this computation is given at the end of this subsection). Because R and S are sampled independently of W̄ and ∆Ŵ, we can backpropagate through Eqn. 4 to obtain derivatives with respect to W̄, ∆Ŵ, and X.

Computational cost. In general, the most expensive operation in the forward pass is matrix multiplication. Flipout's forward pass requires two matrix multiplications instead of one, and therefore should be roughly twice as expensive as a forward pass with a single shared perturbation when the multiplications are done in sequence. However, note that the two matrix multiplications are independent and can be done in parallel; this incurs the same overhead as the local reparameterization trick BID10. A general rule of thumb for neural nets is that the backward pass requires roughly twice as many FLOPs as the forward pass. This suggests that each update using flipout ought to be about twice as expensive as an update with a single shared perturbation (if the matrix multiplications are done sequentially); this is consistent with our experience.
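To make Eqn. 4 concrete, the sketch below implements a flipout fully connected layer with multiplicative Gaussian perturbations in PyTorch. It is a schematic illustration under our own choices (layer sizes, σ initialisation, omitting the nonlinearity φ), not the authors' implementation.

    import torch

    class FlipoutLinear(torch.nn.Module):
        # Fully connected layer whose weights receive multiplicative Gaussian
        # perturbations, decorrelated across the mini-batch via flipout.
        def __init__(self, d_in, d_out, init_log_sigma=-3.0):
            super().__init__()
            self.W_bar = torch.nn.Parameter(0.05 * torch.randn(d_in, d_out))
            self.log_sigma = torch.nn.Parameter(torch.full((d_in, d_out), init_log_sigma))

        def forward(self, x):
            n, d_in = x.shape
            d_out = self.W_bar.shape[1]
            # Base perturbation shared by the whole mini-batch:
            # dW_hat_ij = W_bar_ij * sigma_ij * eps_ij with eps ~ N(0, 1).
            dW_hat = self.W_bar * self.log_sigma.exp() * torch.randn_like(self.W_bar)
            # Per-example sign vectors: S over the input dim, R over the output dim.
            S = torch.sign(torch.rand(n, d_in, device=x.device) - 0.5)
            R = torch.sign(torch.rand(n, d_out, device=x.device) - 0.5)
            # Eqn. 4: Y = X W_bar + ((X o S) dW_hat) o R, i.e. two matmuls instead of one.
            return x @ self.W_bar + ((x * S) @ dW_hat) * R

    layer = FlipoutLinear(784, 512)
    y = layer(torch.randn(128, 784))   # each example sees a pseudo-independent perturbation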
Evolution strategies. ES is a highly parallelizable algorithm; however, most ES systems are engineered to run on multi-core CPU machines and are not able to take full advantage of GPU parallelism. Flipout enables ES to run more efficiently on a GPU because it allows each worker to evaluate a batch of quasi-independent perturbations rather than only a single perturbation. To apply flipout to ES, we can simply replicate the starting state by the number of flipout perturbations N at each worker. Instead of Eqn. 1, the update rule using M workers becomes:

W ← W + (α/(MNσ)) Σ_{m=1}^{M} Σ_{n=1}^{N} F_{mn} ∆W_{m,n},    (5)

where m indexes workers, n indexes the examples in a worker's batch, ∆W_{m,n} is the n-th flipout perturbation of worker m's base perturbation, and F_{mn} is the reward evaluated with the n-th perturbation at worker m. Hence, each worker is able to evaluate multiple perturbations as a batch, allowing for parallelism on a GPU architecture.

In this section, we analyze the variance of stochastic gradients with and without flipout. We show that flipout is guaranteed to reduce the variance of the gradient estimates compared to using naïve shared perturbations. Let G_x denote one entry of the stochastic gradient of the loss L(f(x, W̄ + ∆W), y) with respect to the weights, under the perturbation ∆W for a single training example x. (Note that G_x is a random variable which depends on both x and ∆W. We analyze a single entry of the gradient so that we can work with scalar-valued variances.) We denote the gradient averaged over a mini-batch as the random variable G_B = (1/N) Σ_{n=1}^{N} G_{x_n}, where B = {x_1, ..., x_N} denotes a mini-batch of size N, and ∆W_n denotes the perturbation for the n-th example. (The randomness comes from both the choice of B and the random perturbations.) For simplicity, we assume that the x_n are sampled i.i.d. from the data distribution. Using the Law of Total Variance, we decompose Var(G_B) into a data term (the variance of the exact mini-batch gradients) and an estimation term (the estimation variance for a fixed mini-batch):

Var(G_B) = Var_B(E[G_B | B]) + E_B[Var(G_B | B)].    (6)

Notice that the data term decays with N while the estimation term may not, due to its dependence on the shared perturbation. But we can break the estimation term into two parts for which we can analyze the dependence on N. To do this, we reformulate the standard shared perturbation scheme as follows: ∆W is generated by first sampling ∆Ŵ and then multiplying it by a rank-one sign matrix as in Eqn. 3, exactly like flipout, except that the sign matrix is shared by the whole mini-batch. According to Observation 1, this yields an identical distribution for ∆W to the standard shared perturbation scheme. Based on this, we obtain the following decomposition:

Theorem 2 (Variance Decomposition Theorem). Define α, β, and γ to be, respectively, the variance of the gradient on an individual training example, the covariance between the gradient estimates of different examples that arises from sharing the sign vectors r and s, and the covariance that arises from sharing the base perturbation ∆Ŵ (the precise expressions are given in Appendix A). Under the assumptions of Observation 1, the variance of the gradients under fully independent, shared, and flipout perturbations can be written in terms of α, β, and γ as follows:

Fully independent perturbations: Var(G_B) = α/N.
Shared perturbations: Var(G_B) = α/N + ((N−1)/N)(β + γ).
Flipout: Var(G_B) = α/N + ((N−1)/N) γ.

Proof. Details of the proof are provided in Appendix A.

We can interpret α, β, and γ as follows. First, α combines the data term from Eqn. 6 with the expected estimation variance for individual data points. This corresponds to the variance of the gradients on individual training examples, so fully independent perturbations yield a total variance of α/N. The other terms, β and γ, reflect the covariance between the estimation errors on different training examples as a result of the shared perturbations. The term β reflects the covariance that results from sampling r and s, so it is eliminated by flipout, which samples these vectors independently. Finally, γ reflects the covariance that results from sampling ∆Ŵ, which flipout does not eliminate. Empirically, for all the neural networks we investigated, we found that α ≫ β ≫ γ. This implies the following behavior for Var(G_B) as a function of N: for small N, the data term α/N dominates, giving a 1/N variance reduction; with shared perturbations, once N is large enough that α/N < β, the variance Var(G_B) levels off to β. However, flipout continues to enjoy a 1/N variance reduction in this regime. (The toy simulation below illustrates this qualitative behaviour.)
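The following Monte Carlo check illustrates the scaling predicted by Theorem 2 on a toy problem of our own choosing (a scalar linear regression model with Gaussian inputs), so it is only a qualitative illustration and not one of the experiments reported in this paper: with a shared perturbation the variance of the averaged gradient plateaus as the batch size grows, while flipout-style sign flipping keeps the 1/N decay.

    import numpy as np

    rng = np.random.default_rng(0)
    d, sigma, reps = 20, 0.5, 2000
    w_true = rng.standard_normal(d)      # teacher weights generating the targets
    w_bar = rng.standard_normal(d)       # current mean weights of the model

    def avg_grad(N, mode):
        # One draw of a mini-batch and its weight perturbations; returns the first
        # coordinate of the averaged gradient of the squared loss wrt w_bar.
        X = rng.standard_normal((N, d))
        t = X @ w_true
        base = sigma * rng.standard_normal(d)            # shared base perturbation
        if mode == "shared":
            dW = np.tile(base, (N, 1))
        elif mode == "flipout":
            r = rng.choice([-1.0, 1.0], size=(N, 1))     # per-example output sign (scalar here)
            s = rng.choice([-1.0, 1.0], size=(N, d))     # per-example input signs
            dW = base * r * s                            # rank-one sign flips of the base
        else:                                            # fully independent perturbations
            dW = sigma * rng.standard_normal((N, d))
        pred = np.einsum("nd,nd->n", X, w_bar + dW)
        g = (pred - t)[:, None] * X                      # per-example gradients wrt w_bar
        return g.mean(axis=0)[0]

    for N in (1, 10, 100, 1000):
        for mode in ("shared", "flipout", "independent"):
            v = np.var([avg_grad(N, mode) for _ in range(reps)])
            print(f"N={N:5d}  {mode:12s}  Var(G_B) = {v:.4f}")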
In principle, flipout's variance should level off at the point where α/N < γ, but in all of our experiments, γ was small enough that this never occurred: flipout's variance was approximately α/N throughout the entire range of N values we explored, just as if the perturbations were sampled fully independently for every training example. We first verified empirically the variance reduction effect of flipout predicted by Theorem 2; we measured the variance of the gradients under different perturbations for a wide variety of neural network architectures and batch sizes. In Section 4.2, we show that flipout applied to Gaussian perturbations and DropConnect is effective at regularizing LSTM networks. In Section 4.3, we demonstrate that flipout converges faster than shared perturbations when training with large minibatches. Finally, in Section 4.4 we present experiments combining Evolution Strategies with flipout in both supervised learning and reinforcement learning tasks. In our experiments, we consider the four architectures shown in TAB1 (details in Appendix B). Since the main effect of flipout is intended to be variance reduction of the gradients, we first estimated the gradient variances of several architectures with mini-batch sizes ranging from 1 to 8196 FIG0 ). We experimented with three perturbation methods: a single shared perturbation per minibatch, the local reparameterization trick (LRT) of BID10, and flipout. For each of the FC, ConVGG, and LSTM architectures, we froze a partially trained network to use for all variance estimates, and we used multiplicative Gaussian perturbations with σ 2 = 1. We computed Monte Carlo estimates of the gradient variance, including both the data and estimation terms in Eqn. 6. Confidence intervals are based on 50 independent runs of the estimator. Details are given in Appendix C.The analysis in Section 3.2 makes strong predictions about the shapes of the curves in FIG0. By Theorem 2, the variance curves for flipout and shared perturbations each have the form a + b/N, where N is the mini-batch size. On a log-log plot, this functional form appears as a linear regime with slope -1, a constant regime, and a smooth phase transition in between. Also, because the distribution of individual gradients is identical with and without flipout, the curves must agree for N = 1. Our plots are consistent with both of these predictions. We observe that for shared perturbations, the phase transition consistently occurs for mini-batch sizes somewhere between 100 and 1000. In contrast, flipout gives the ideal linear variance reduction throughout the range of mini-batch sizes we investigated, i.e., its behavior is indistinguishable from fully independent perturbations. As analyzed by BID10, the LRT gradients are fully independent within a mini-batch, and are therefore guaranteed to achieve the ideal 1/N variance reduction. Furthermore, they reduce the variance below that of explicit weight perturbations, so we would expect them to achieve smaller variance than flipout, as shown in FIG0. However, flipout is applicable to a wider variety of architectures, including convolutional nets and RNNs. We evaluated the regularization effect of flipout on the character-level and word-level language modeling tasks with the Penn Treebank corpus (PTB) BID16. We compared flipout to several other methods for regularizing RNNs: naïve dropout BID32, variational dropout BID3, recurrent dropout BID27, zoneout BID12, and DropConnect BID17. 
BID32 apply dropout only to the feed-forward connections of an RNN (to the input, output, and connections between layers). The other methods regularize the recurrent connections as well: BID27 apply dropout to the cell update vector, with masks sampled either per step or per sequence; BID3 apply dropout to the forward and recurrent connections, with all dropout masks sampled per sequence. BID17 use DropConnect to regularize the hidden-to-hidden weight matrices, with a single DropConnect mask shared between examples in a mini-batch. We denote their model WD (for weight-dropped LSTM).

Character-Level. For our character-level experiments, we used a single-layer LSTM with 1000 hidden units. We trained each model on non-overlapping sequences of 100 characters in batches of size 32, using the AMSGrad variant of Adam with learning rate 0.002. We perform early stopping based on validation performance. Here, we applied flipout to the hidden-to-hidden weight matrix. More hyperparameter details are given in Appendix D. The results, measured in bits-per-character (BPC) for the validation and test sequences of PTB, are shown in Table 2. In the table, shared perturbations and flipout (with Gaussian noise sampling) are denoted by Mult. Gauss and Mult. Gauss + Flipout, respectively. We also compare to RBN (recurrent batchnorm) and H-LSTM+LN (HyperLSTM + LayerNorm) BID5. Mult. Gauss + Flipout outperforms the other methods, and achieves the best reported results for this architecture.

Word-Level. For our word-level experiments, we used a 2-layer LSTM with 650 hidden units per layer and 650-dimensional word embeddings. We trained on sequences of length 35 in batches of size 40, for 100 epochs. We used SGD with initial learning rate 30, and decayed the learning rate by a factor of 4 based on the nonmonotonic criterion introduced by BID17. We used flipout to implement DropConnect, as described in Section 2.1, and call this WD+Flipout. We applied WD+Flipout to the hidden-to-hidden weight matrices for recurrent regularization, and used the same hyperparameters as BID17. We used embedding dropout (setting rows of the embedding matrix to 0) with probability 0.1 for all regularized models except Gal, where we used probability 0.2 as specified in their paper. Table 3 reports perplexity on the PTB word-level validation and test sets; all results in it are from our own experiments. More hyperparameter details are given in Appendix D. We show in Table 3 that WD+Flipout outperforms the other methods with respect to both validation and test perplexity. In Appendix E.4, we show that WD+Flipout yields significant variance reduction for large mini-batches, and that when training with batches of size 8192, it converges faster than WD.

Theorem 2 and FIG0 suggest that the variance reduction effect of flipout is more pronounced in the large mini-batch regime. In this section, we train a Bayesian neural network with mini-batches of size 8192 and show that flipout speeds up training in terms of the number of iterations. We trained the FC and ConvLe networks from Section 4.1 using Bayes by Backprop BID0. Since our primary focus is optimization, we focus on the training loss, shown in FIG2: for FC, we compare flipout with shared perturbations and the LRT; for ConvLe, we compare only to shared perturbations since the LRT does not give an unbiased gradient estimator. We found that flipout converged in about 3 times fewer iterations than shared perturbations for both models, while achieving comparable performance to the LRT for the FC model.
Because flipout is roughly twice as expensive as shared perturbations (see Section 3.1), this corresponds to a 1.5x speedup overall. Curves for the training and test error are given in Appendix E.2.

ES typically runs on multiple CPU cores. The challenge in making ES GPU-friendly is that each sample requires computing a separate weight perturbation, so traditionally each worker can only generate one sample at a time. In Section 3.1, we showed that ES with flipout allows each worker to evaluate a batch of perturbations, which can be done efficiently on a GPU. However, flipout induces correlations between the samples, so we investigated whether these correlations cause a slowdown in training relative to fully independent perturbations (which we term "IdealES"). In this section, we show empirically that flipout ES is just as sample-efficient as IdealES, and consequently one can obtain significantly higher throughput per unit cost using flipout ES on a GPU. The ES gradient defined in Eqn. 1 has high variance, so a large number of samples are generally needed before applying an update. We found that 5,000 samples are needed to achieve stable performance in the supervised learning tasks. Standard ES runs the forward pass 5,000 times with independent weight perturbations, which sees little benefit from using a GPU over a CPU. FlipES allows the same number of samples to be evaluated using a much smaller number of explicit perturbations. Throughout the experiments, we ran flipout with mini-batches of size 40 (i.e. N = 40 in Eqn. 5). We compared IdealES and FlipES with a fully connected network (FC) on the MNIST dataset. FIG2 shows that we incur no loss in performance when using pseudo-independent noise. Next, we compared FlipES and cpuES (using 40 CPU cores) in terms of the per-update time with respect to the model size. The results (in Appendix E.3) show that FlipES scales better because it runs on the GPU. Finally, we compared FlipES and the backpropagation algorithm on both FC and ConvLe. FIG2 shows that FlipES achieves data efficiency comparable with the backpropagation algorithm. IdealES has a much higher computational cost than backpropagation, due to the large number of forward passes. FlipES narrows the computational gap between them. Although ES is more expensive than backpropagation, it can be applied to models which are not fully differentiable, such as models with a discrete loss (e.g., accuracy or BLEU score) or with stochastic units.

We have introduced flipout, an efficient method for decorrelating the weight gradients between different examples in a mini-batch. We showed that flipout is guaranteed to reduce the variance compared with shared perturbations. Empirically, we demonstrated significant variance reduction in the large batch setting for a variety of network architectures, as well as significant speedups in training time. We showed that flipout outperforms dropout-based methods for regularizing LSTMs. Flipout also makes it practical to apply GPUs to evolution strategies, resulting in substantially increased throughput for a given computational cost. We believe flipout will make weight perturbations practical in the large batch setting favored by modern accelerators such as Tensor Processing Units.

In this section, we provide the proof of Theorem 2 (Variance Decomposition Theorem).

Proof. We use the notations from Section 3.2. Let x, x′ denote two training examples from the mini-batch B, and ∆W, ∆W′ denote the weight perturbations they received.
We begin with the decomposition into data and estimation terms (Eqn. 6), which we repeat here for convenience: DISPLAYFORM1 The data term from Eqn. 13 can be simplified: DISPLAYFORM2 We break the estimation term from Eqn. 13 into variance and covariance terms: DISPLAYFORM3 We now separately analyze the cases of fully independent perturbations, shared perturbations, and flipout. Fully independent perturbations. If the perturbations are fully independent, the second term in Eqn. 15 disappears. Hence, combining Eqns. 13, 14, and 15, we are left with DISPLAYFORM4 which is just α/N. Recall that we reformulate the shared perturbations in terms of first sampling ∆W, and then letting ∆W = ∆W • rs, where r and s are random sign vectors shared by the whole batch. Using the Law of Total Variance, we break the second term in Eqn. 15 into a part that comes from sampling ∆W and a part that comes from sampling r and s. DISPLAYFORM0 Since the perturbations are shared, ∆W = ∆W, so this can be simplified slightly to: DISPLAYFORM1 Plugging these two terms into the second term of Eqn. 15 yields Here, we provide details of the network configurations used for our experiments (Section 4).The FC network is a 3-layer fully-connected network with 512-512-10 hidden units. ConvLe is a LeNet-like network BID14 where the first two layers are convolutional with 32 and 64 filters of size, and use ReLU non-linearities. A 2 × 2 max pooling layer follows after each convolutional layer. Dimensionality reduction only takes place in the pooling layer; the stride for pooling is two and padding is used in the convolutional layers to keep the dimension. Two fully-connected layers with 1024 and 10 hidden units are used to produce the classification . ConVGG is based on the VGG16 network BID28. We modified the last fully connected layer to have 10 output dimensions for our experiments on CIFAR-10. We didn't use batch normalization for the variance reduction experiment since it introduces extra stochasticity. The architectures used for the LSTM experiments are described in Section 4.2. The hyperparameters used for the language modelling experiments are provided in Appendix D. Given a network architecture, we compute the empirical stochastic gradient update variance as follows. We start with a moderately pre-trained model, such as a network with 85% training accuracy on MNIST. Without updating the parameters, we obtain the gradients of all the weights by performing a feed-forward pass, that includes sampling ∆W, R, and S, followed by backpropagation. The gradient variance of each weight is computed by repeating this procedure 200 times in the experiments. Let Var lj denote the estimate of the gradient variance of weight j in layer l. We compute the gradient variance as follows: DISPLAYFORM0 where g i lj is the gradient received by weight j in layer l. We estimate the variance of the gradients in layer l by averaging the variances of the weights in that layer,Ṽ = 1 |J| j Var lj. In order to compute a confidence interval on the gradient variance estimate, we repeat the above procedure 50 times, yielding a sequence of average variance estimates, V 1,..., V 50. FIG0, we compute the 90% confidence intervals of the variance estimates with a t-test. For ConVGG, multiple GPUs were needed to run the variance reduction experiment with large mini-batch sizes (such as 4096 and 8192). In such cases, it is computationally efficient to generate independent weight perturbations on different GPUs. 
However, since our aim was to understand the effects of variance reduction independent of implementation, we shared the base perturbation among all GPUs to produce the plot shown in FIG0. We show in Appendix E that flipout yields lower variance even when we sample independent perturbations on different GPUs. For the LSTM variance reduction experiments, we used the two-layer LSTM described in Section 4.2, trained for 3 epochs on the word-level Penn Treebank dataset. FIG0, we split large mini-batches (size 128 and higher) into sub-batches of size 64; we sampled one base perturbation ∆W that was shared among all sub-batches, and we sampled independent R and S matrices for each sub-batch. Long Short-Term Memory networks (LSTMs) are defined by the following equations: DISPLAYFORM0 where i t, f t, and o t are the input, forget, and output gates, respectively, g t is the candidate update, and • denotes elementwise multiplication. Naïve application of dropout on the hidden state of an LSTM is not effective, because it leads to significant memory loss over long sequences. Several approaches have been proposed to regularize the recurrent connections, based on applying dropout to specific terms in the LSTM equations. BID27 propose to drop the cell update vector, with a dropout mask d t sampled either per-step or per-sequence: DISPLAYFORM1 Gal & Ghahramani FORMULA3 BID12 propose to zone out units rather than dropping them; the hidden state and cell values are either stochastically updated or maintain their previous value: DISPLAYFORM2, with zoneout masks d For the word-level models (Table 3), we used gradient clipping threshold 0.25 and the following hyperparameters:• , we used variational dropout with the parameters given in their paper: 0.35 dropout probability on inputs and outputs, 0.2 hidden state dropout, and 0.2 embedding dropout.• For BID27, we used 0.1 embedding dropout, 0.5 dropout on inputs and outputs, and 0.3 dropout on cell updates, with per-step mask sampling.• For BID12, we used 0.1 embedding dropout, 0.5 dropout on inputs and outputs, and cell and hidden state zoneout probabilities of 0.25 and 0.025, respectively. • For WD BID17, we used the parameters given in their paper: 0.1 embedding dropout, 0.4 dropout probability on inputs and outputs, and 0.3 dropout probability on the output between layers (the same masks are used for each step of a sequence). We use 0.5 probability for DropConnect applied to the hidden-to-hidden weight matrices.• For WD+Flipout, we used the same parameters as BID17, given above, but we regularized the hidden-to-hidden weight matrices with the variant of flipout described in Section 2.1, which implements DropConnect with probability 0.5.For the character-level models (Table 2), we used orthogonal initialization for the LSTM weight matrices, gradient clipping threshold 1, and did not use input or output dropout. The input characters were represented as one-hot vectors. We used the following hyperparameters for each model:• For recurrent dropout BID27, we used 0.25 dropout probability on the cell state, and per-step mask sampling.• For Zoneout BID12, we used 0.5 and 0.05 for the cell and hidden state zoneout probabilities, respectively. • For the variational LSTM BID3, we used 0.25 hidden state dropout.• For the flipout and shared perturbation LSTMs, we sampled Gaussian noise with σ = 1 for the hidden-to-hidden weight matrix. As discussed in Appendix B, training on multiple GPUs naturally induces independent noise for each sub-batch. 
FIG5 shows that flipout still achieves lower variance than shared perturbations in such cases. When estimating the variance with mini-batch size 8192, running on four GPUs naturally induces four independent noise samples, one for each sub-batch of size 2048; this yields lower variance than using a single noise sample. Similarly, for mini-batch size 4096, two independent noise samples are generated on separate GPUs.

E.2 LARGE BATCH TRAINING WITH FLIPOUT

FIG6 shows the training and test error for the large mini-batch experiments described in Section 4.3. For both FC and ConvLe networks, we used the Adam optimizer with learning rate 0.003. We downscaled the KL term by a factor of 10 to achieve higher accuracy. While FIG2 shows that flipout converges faster than shared perturbations, FIG6 shows that flipout has the same generalization ability as shared perturbations (the faster convergence doesn't result in overfitting).

The variance reduction offered by flipout allows us to use DropConnect BID30 efficiently in a large mini-batch setting. Here, we use flipout to implement DropConnect as described in Section 2.1, and use it to regularize an LSTM word-level language model. We used the LSTM architecture proposed by BID17, which has 400-dimensional word embeddings and three layers with hidden dimension 1150. Following BID17, we tied the weights of the embedding layer and the decoder layer. BID17 use DropConnect to regularize the hidden-to-hidden weight matrices, with a single mask shared for all examples in a batch. We used flipout to achieve a different DropConnect mask per example. We applied WD+Flipout to both the hidden-to-hidden (h2h) and input-to-hidden (i2h) weight matrices, and compared to the model from BID17, which we call WD (for weight-dropped LSTM), with DropConnect applied to both h2h and i2h. Both models use embedding dropout 0.1, output dropout 0.4, and have DropConnect probability 0.5 for the i2h and h2h weights. Both models were trained using Adam with learning rate 0.001. FIG8 compares the variance of the gradients of the first-layer hidden-to-hidden weights between WD and WD+Flipout, and shows that flipout achieves significant variance reduction for mini-batch sizes larger than 256. FIG9 shows the training curves of both models with batch size 8192. We see that WD+Flipout converges faster than WD, and achieves a lower training perplexity, showcasing the optimization benefits of flipout in large mini-batch settings.
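As a side note to the DropConnect experiments above, the following sketch shows how DropConnect with drop rate 0.5 can be written as a symmetric weight perturbation and combined with flipout's per-example sign flips. It is our own schematic illustration, not the implementation used for the WD+Flipout model.

    import torch

    def dropconnect_flipout_linear(x, W_bar):
        # DropConnect at rate 0.5 as a symmetric perturbation around W_bar / 2:
        # effective weights are W_bar/2 + dW with dW_ij = +/- W_bar_ij / 2,
        # so each entry ends up as either 0 or W_bar_ij.
        n, d_in = x.shape
        d_out = W_bar.shape[1]
        dW_hat = 0.5 * W_bar * torch.sign(torch.rand_like(W_bar) - 0.5)  # shared base perturbation
        S = torch.sign(torch.rand(n, d_in, device=x.device) - 0.5)       # per-example input signs
        R = torch.sign(torch.rand(n, d_out, device=x.device) - 0.5)      # per-example output signs
        return x @ (0.5 * W_bar) + ((x * S) @ dW_hat) * R                # flipout forward pass

    out = dropconnect_flipout_linear(torch.randn(64, 400), torch.randn(400, 1150))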
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJNpifWAb
We introduce flipout, an efficient method for decorrelating the gradients computed by stochastic neural net weights within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example.
Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision. However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN. In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework. An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE. Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation. Deep generative models play a more and more important role in cracking challenges in computer vision as well as in other disciplines, such as high-quality image generation a; ), text-to-speech transformation (van den a;, information retrieval , 3D rendering , and signal-to-image acquisition . Overall, the generative models fall into four categories: autoencoder and its most important variant of Variational AutoEncoder (VAE) , auto-regressive models (van den b; a), Generative Adversarial Network (GAN) , and normalizing flows (NF) (; ;). Here we compare these models through the perspective of data dimensionality reduction and reconstruction. To be formal, let x be a data point in the d x -dimensional observable space R dx and y be its corresponding low-dimensional representation in the feature space R dy. The general formulation of dimensionality reduction is where f (·) is the mapping function and d y d x. The manifold learning aims at requiring f under various constraints on y (Tenenbaum1 et al., 2000;). However, the sparsity of data points in high-dimensional space often leads to model overfitting, thus necessitating research on opposite mapping from y to x, i.e. where g(·) is the opposite mapping function with respect to f (·), to reconstruct the data. In general, the role of g(·) is a regularizer to f (·) or a generator to produce more data. The autoencoder is of mapping x f → y g →x. A common assumption in autoencoder is that the variables in lowdimensional space are usually sampled from a prior distribution P(z; θ) such as uniform or Gaussian. To differentiate from y, we let z represent the low-dimensional vector following the prior distribution. Thus we can write g: R dz → R dx, z → x = g(z), z ∼ P(z; θ). It is crucial to establish such dual maps z = f (x) and x = g(z). In the parlance of probability, the process of x → z = f (x) is called inference, and the other procedure of z → x = g(z) is called sampling or generation. VAE is capable of carrying out inference and generation in one framework by two collaborative functional modules. However, it is known that in many cases VAEs are only able to generate blurry images due to the imprecise variational inference. 
To see this, we write the approximation of the marginal log-likelihood as

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL[q(z|x) ‖ p(z)],

where KL[q(z|x) ‖ p(z)] is the Kullback-Leibler divergence between the posterior probability q(z|x) and the prior p(z). This lower-bound log-likelihood usually produces imprecise inference. Furthermore, posterior collapse frequently occurs when using more sophisticated decoder models . These two issues greatly limit the generation capability of the VAE. On the other hand, GAN is able to achieve photo-realistic generation (a; . However, its critical limitation is the absence of the encoder f(x) for carrying out inference on real images. Effort has been made on learning an encoder for GAN under the framework of VAE; however, the previous two issues of learning VAE still exist. Normalizing flows can perform exact inference and generation with one architecture by virtue of invertible networks . But they require the dimension d_x of the data space to be identical to the dimension d_z of the latent space, thus posing computational issues due to the high complexity of learning deep flows and computing the Jacobian matrices. Inspired by the recent success of GANs (a; and normalizing flows , we develop a new model called Latently Invertible Autoencoder (LIA). LIA utilizes an invertible network to bridge the encoder and the decoder of VAE in a symmetric manner. We summarize its key advantages as follows:

• The symmetric design of the invertible network brings two benefits. The prior distribution can be exactly fitted from an unfolded feature space, thus significantly easing the inference problem. Besides, since the latent space is detached, the autoencoder can be trained without variational optimization, thus there is no approximation here.

• The two-stage adversarial learning decomposes the LIA framework into a Wasserstein GAN (only a prior needed) and a standard autoencoder without stochastic variables. Therefore the training is deterministic, implying that the model will not be affected by posterior collapse when the decoder is more complex or followed by additional losses such as the adversarial loss and the perceptual loss.

• We compare LIA with state-of-the-art generative models on inference and generation/reconstruction. The experimental results on the FFHQ and LSUN datasets show that LIA achieves superior performance on inference and generation.

The neural architecture of LIA is designed such that the data distribution can be progressively unrolled from a complex or manifold-valued one to a given simple prior. We now describe the details as follows. The framework of LIA is based on the classic VAE and the realization of normalizing flow. As shown in Figure 1f, we symmetrically embed an invertible neural network in the latent space of VAE, following the diagram of the mapping process: an encoder f to extract feature vectors y = f(x), an invertible network φ to reshape the feature distribution to match the prior, z = φ(y), and its inverse φ⁻¹ to map latent variables back to feature vectors ỹ = φ⁻¹(z), a decoder g to produce the output x̂ = g(ỹ), a feature extractor Φ to compute the reconstruction measure, and a discriminator c to distinguish the real/fake distributions, where φ = φ₁ ∘ ··· ∘ φ_k denotes the deep composite mapping of a normalizing flow with depth k. LIA first performs nonlinear dimensionality reduction on the input data x and transforms it into the low-dimensional feature space R^{d_y}. The role of f(x) for LIA can be regarded as unfolding the underlying data manifold, as illustrated in Figures 1a and 1b.
Therefore, the Euclidean operations such as linear interpolation and vector arithmetic are reliable in this feature space. Then we establish an invertible mapping φ(y) from the feature y to the latent variable z, as opposed to the VAEs that directly map the original data to latent variables. The feature y can be exactly recovered from z via the invertibility of φ, which is the advantage of using invertible networks. The recovered feature y is then fed into a partial decoder g(y) to generate the corresponding data x̂.

In general, any invertible network is applicable in the LIA framework. We find in practice that a simple invertible network is sufficiently capable of constructing the mapping from the feature space R^{d_y} to the latent space R^{d_z}. Let x = [x_t; x_b] and z = [z_t; z_b] be the forms of the top and bottom fractions of x and z, respectively. Then the invertible network can be built with additive coupling layers of the form

z_t = x_t + τ(x_b),    z_b = x_b,

where τ is the transformation, which can be an arbitrary differentiable function. Alternatively, one can attempt to exploit the more complex invertible networks with affine coupling mappings for more challenging tasks . As conducted in , we set τ as a multi-layer perceptron with the leaky ReLU activation.

To guarantee a precise reconstruction x̂, the conventional way adopted by (variational) autoencoders is to use the distance between x and x̂ or the cross entropy directly between x and x̂. Here, we utilize the perceptual loss, which is proven to be more robust to variations of image details . Let Φ denote the feature extractor, e.g. VGG . Then we can write the reconstruction loss as

L_Φ(x, x̂) = ‖Φ(x) − Φ(x̂)‖²₂.

It suffices to emphasize that the functionality of Φ here is in essence to produce the representations of the input x and the output x̂. The acquisition of Φ is fairly flexible. It can be attained by supervised or unsupervised learning, meaning that Φ can be trained with class labels or without class labels (van den). The norm-based reconstruction constraints usually incur blurry image generation in autoencoder-like architectures. This problem can be handled via adversarial learning . To do so, a discriminator c is employed to balance the loss of the comparison between x and x̂. Using the Wasserstein criterion , we can write the optimization objective as a minimax game between the generator and the discriminator c over the real and generated distributions, where P_x and P_x̂ denote the probability distributions of the real data and the generated data, respectively, and γ is the hyper-parameter of the regularization. The R1 regularizer is formulated in , which is proven more stable for training convergence.

Training deep models usually follows an end-to-end fashion for the whole architecture. To backpropagate gradients through random variables in a VAE, the reparameterization trick is harnessed , i.e. z = µ + σ ∗ ε with ε ∼ N(0, I), where µ is the mean and σ the standard deviation. The regularization coupling the prior and the posterior is the KL divergence used to optimize the parameters of the encoder by backpropagation. For our framework, however, we find that this end-to-end learning strategy cannot lead the algorithm to converge to a satisfactory optimum. To proceed, we propose a scheme of two-stage stochasticity-free training, which decomposes the framework into two parts that can each be well trained end-to-end, as shown in Figure 2. (A schematic sketch of the coupling layers that compose the invertible network φ is given below.)
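Before describing the two training stages in detail, the sketch below illustrates the kind of additive coupling layer described above, with a forward map used during encoding and an exact inverse used during decoding. It is a schematic under our own assumptions (an MLP τ with leaky ReLU, a fixed half-and-half split, a feature-order reversal between layers); it is not the authors' released implementation.

    import torch

    class AdditiveCoupling(torch.nn.Module):
        # One invertible block: z_t = x_t + tau(x_b), z_b = x_b.
        def __init__(self, dim, hidden=512):
            super().__init__()
            self.half = dim // 2
            self.tau = torch.nn.Sequential(
                torch.nn.Linear(dim - self.half, hidden),
                torch.nn.LeakyReLU(0.2),
                torch.nn.Linear(hidden, self.half),
            )

        def forward(self, x):
            x_t, x_b = x[:, :self.half], x[:, self.half:]
            return torch.cat([x_t + self.tau(x_b), x_b], dim=1)

        def inverse(self, z):
            z_t, z_b = z[:, :self.half], z[:, self.half:]
            return torch.cat([z_t - self.tau(z_b), z_b], dim=1)

    class InvertibleNet(torch.nn.Module):
        # Stack of coupling blocks; the feature order is reversed between blocks
        # (a fixed, self-inverse permutation) so every coordinate gets transformed.
        def __init__(self, dim, depth=8):
            super().__init__()
            self.blocks = torch.nn.ModuleList(AdditiveCoupling(dim) for _ in range(depth))

        def forward(self, y):                 # feature y -> latent z
            for b in self.blocks:
                y = b(y).flip(dims=[1])
            return y

        def inverse(self, z):                 # latent z -> feature y
            for b in reversed(self.blocks):
                z = b.inverse(z.flip(dims=[1]))
            return z

    net = InvertibleNet(dim=512)
    y = torch.randn(4, 512)
    print(torch.allclose(net.inverse(net(y)), y, atol=1e-4))   # exact inverse up to float error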
At the first step, the decoder of LIA is trained using adversarial learning together with the invertible network, so as to acquire the ability of high-quality generation. At the second step, the invertible network that connects the feature space and the latent space is detached from the architecture, reducing the framework to a standard autoencoder without variational inference. Thus this two-stage design prevents the posterior collapse issue while allowing the decoder to have a more complex structure as well as an adversarial loss for high-resolution image generation.

ProGAN (a), StyleGAN (b), and BigGAN are capable of generating photo-realistic images from random noise sampled from some prior distribution. It is therefore natural to suppose that such GAN models can recover a precise x̂ if we can find the latent variable z for a given x. Namely, we may train the associated GAN model separately in the LIA framework. To conduct this, we single out a standard GAN model for the first-stage training, as displayed in Figure 2a. Here z is directly sampled from a prior distribution. According to the principle of Wasserstein GAN, the optimization objective takes the adversarial form described above; in what follows, the superscript * denotes that the parameters of the corresponding mappings have already been learned. It is worth noting that the role of the invertible network here is just its transformation invertibility. We do not pose any constraints on the probabilities of z and φ(y), in contrast to normalizing flows. Our strategy of attaching an invertible network in front of the generator can potentially be applied to any GAN models.

In the LIA architecture, the invertible network is embedded in the latent space in a symmetric fashion, in the sense that f(x) = y = φ⁻¹(z). This unique characteristic of the invertible network allows us to detach it from the LIA framework. Thus we attain a conventional autoencoder without stochastic variables, as shown in Figure 2b. In practice, the feature extractor Φ in the perceptual loss is the VGG network up to conv4, pretrained on the ImageNet dataset. After the first-stage decoder training, the parameters of f are learned by minimizing the perceptual reconstruction loss together with the adversarial loss, where β is the hyper-parameter balancing the two terms. The above optimization for the architecture in Figure 2b is widely applied in computer vision. It is the backbone framework of various GANs for diverse image processing tasks. For LIA, however, it is much simpler because we only need to learn the partial encoder f. This simplicity brought by the two-stage training enables the encoder to converge to more precise inference.

Our LIA model is relevant to the works that solve the inference problem for VAEs with adversarial learning, as well as the works that design encoders for GANs. The integration of GAN with VAE can be traced back to the work of VAE/GAN and implicit autoencoders . These methods encounter the difficulty of end-to-end training, because the gradients are prone to becoming unstable after going through the latent space in deep complex architectures . Besides, there is an intriguing attempt at training a VAE in an adversarial manner . These approaches confront a trade-off between the roles of the encoder, which must both perform inference and compare the real/fake distributions. This is difficult to tune. So we prefer the complete GAN with an indispensable discriminator. The closely related works to LIA are the models combining VAE with the inverse autoregressive flow and the latent-flow-based VAE approaches, i.e. VAEs with latent variables conditioned by normalizing flows .
These three models all need to optimize the posterior probability of normalizing flows, which is essentially different from our deterministic optimization in LIA. The stochasticity-free training is directly derived from the symmetric design of the invertible network in the latent space, which is different from and . There are alternative attempts of specifying the generator of GAN with normalizing flow or mapping images into feature space with partially invertible network . These approach suffers from high complexity computation for high dimensions. The approach of two-stage training in is incapable of solving the posterior estimation. It worth noticing that the reconstruction task we focus here is different to the recent work of representation learning which learns features for recognition using adversarial inference; ). Our primary goal is to faithfully reconstruct real images from the latent code. For experimental setup, we instantiate the decoder of LIA with the generator of StyleGAN (b). The difference is that we replace the mapping network (MLP) in StyleGAN with the invertible network. The layer number of the invertible network is 8. The hyper-parameters for the discriminator are γ = 10 (equation) and β = 0.001 (equation). For perceptual loss in equation, we take = conv4_3 from the VGG weight. The generative models we compare are the MSE-based optimization methods (; ;) 3, the adversarially learned inference (ALI), and the adversarial generator-encoder (AGE) network (the necessity of the invertible network, we also train an encoder and a StyleGAN with its original multi-layer perceptron, which is the last column in Figure 3 . The two-stage training scheme is used as LIA does. The generator and discriminator of the StyleGAN is exactly same to that of StyleGAN. For quantitative evaluation metrics, we use Fréchet inception distance (FID), sliced Wasserstein distance (SWD), and mean square error (MSE). These three metrics are commonly used to measure the numerical accuracy of generative algorithms (; a; ; b). We directly use the code released by the authors of ProGAN (a). The prior for z is Gaussian and the dimension is 512. Figure 4: Manipulating reconstructed faces. The algorithm in is applied to manipulate the image reconstructed from the latent code w given by LIA. Each row shows the original image, the reconstruction, glass, pose, gender, smile, and age. All models are first tested on the Flickr-Faces-HQ (FFHQ) database 4 created by the authors of StyleGAN as the benchmark. FFHQ contains 70,000 high-quality face images. We take the first 65,000 faces as the training set and the remaining 5,000 faces as the reconstruction test according to the exact order of the dataset. We do not split the dataset by random sampling for interested readers can precisely reproduce all the reported with our experimental protocol. The exemplar real images of objects and scenes from LSUN database and their reconstructed images by LIA. Three categories are tested, i.e. cat, bedroom, and car. For each group, the first row is the original images, the second row shows the reconstructed images. Figure 3 shows the reconstructed faces of all models. It is clear that LIA significantly outperforms others. The reconstructed faces by ALI and AGE look correct, but the quality is mediocre. The ideas of ALI and AGE are elegant. Their performance may be improved with the new techniques such as progressive growing of neural architecture or style-based one. 
The MSE-based optimization method produces facial parts of quality comparable to LIA when the faces are normal, but it fails when the variations of the faces become large. For example, failures arise from long hair, hats, beards, and large poses. An interesting phenomenon is that the StyleGAN with only an encoder does not succeed in recovering the target faces using the same training strategy as LIA, even though it is capable of generating photo-realistic faces in high quality thanks to the StyleGAN generator. This indicates that the invertible network plays a crucial role in making LIA work. The quantitative results in Table 1 show the consistent superiority of LIA. More reconstruction results are included in Appendix A.5. The reconstruction from LIA facilitates semantic photo editing. Figure 4 shows the manipulation of reconstructed faces. More results can be found in Appendix A.7. The interpolation and style mixing results are also displayed for reference in Appendix A.2. To further evaluate LIA on data with large variations, we use three categories from the large-scale LSUN database, i.e. cat, car, and bedroom. For each category, 0.1 million images are selected by a ranking algorithm from the first 0.5 million images in the dataset. The cat and bedroom images are resized to 128 × 128 and the car images to 128 × 96 for training. We take subsets so that training does not take too long to converge while still maintaining the data complexity. These subsets will be made available for evaluation. Figure 5 shows that the objects reconstructed by LIA faithfully maintain the semantics as well as the appearance of the originals. For example, the cats' whiskers are recovered, indicating that LIA is able to recover very detailed information. More results are attached in Appendix A.6. We can see that LIA significantly improves the reconstruction quality. The improvement mainly comes from the two-stage training of LIA. The decoder trained with adversarial learning guarantees that the generated images are photo-realistic. The encoder deterministically trained with perceptual and adversarial losses ensures that latent feature vectors can be obtained more precisely. This two-stage training is enabled by the design in which the invertible network detaches the encoder from the decoder, thus avoiding the optimization of the posterior probability when learning the encoder. Figure 6 shows the comparison, and Figure 7 clearly illustrates the difference in gradients between these two cases. The gradient volatility for variational inference is high and the associated loss is not effectively reduced, meaning that the gradients during training are noisy and not always informative. This may indicate that the stochasticity in the latent space causes problems for training the encoder via variational inference. In contrast, the encoder's gradients for LIA are rather stable across different layers and the loss decreases monotonically, showing the importance of the stochasticity-free training and the invertible network. A new generative model, named Latently Invertible Autoencoder (LIA), has been proposed for generating image samples from a probability prior and simultaneously inferring an accurate latent code for a given sample. The core idea of LIA is to symmetrically embed an invertible network in an autoencoder. The neural architecture is then trained with adversarial learning as two decomposed modules. With the design of two-stage training, the decoder can be replaced with any GAN generator for high-resolution image generation.
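The two-stage decomposition described above can be summarized in a short sketch. The module names and interfaces below are our own illustration (assuming a PyTorch-style API), not the released implementation: stage one trains the invertible network, generator, and discriminator as a plain GAN, and stage two freezes them and trains only the encoder.

```python
import torch.nn as nn

class LIA(nn.Module):
    """Minimal sketch of the LIA composition (hypothetical module names)."""

    def __init__(self, encoder, phi, generator):
        super().__init__()
        self.encoder = encoder      # f: image x -> feature y
        self.phi = phi              # invertible network: feature y <-> latent z
        self.generator = generator  # g: feature y -> image

    def sample(self, z):
        # Stage-one generation path: z -> y = phi^{-1}(z) -> image.
        y = self.phi.inverse(z)
        return self.generator(y)

    def reconstruct(self, x):
        # Stage-two / test-time path: phi is detached, so x -> y -> image directly.
        y = self.encoder(x)
        return self.generator(y)
```

In stage two, only `encoder` would receive gradients while `generator` stays frozen, which is what removes any posterior optimization from the encoder training.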
The role of the invertible network is to remove any probability optimization and to bridge the prior with unfolded feature vectors. The effectiveness of LIA is validated with reconstruction experiments (inference and generation) on the FFHQ and LSUN datasets. It is still challenging to faithfully recover all the image content, especially when the objects or scenes have unusual parts. For example, LIA fails to recover the hand that appears at the top of the little girl (the second row in Figure 3). Besides, the Bombay cat's necklace (the second row in Figure 5) is missing in the reconstructed image. These features belong to multiple unique parts of the objects or scenes, which are difficult for the generative model to capture. One possible solution is to raise the dimension of the latent variables (e.g. using multiple latent vectors) or to employ an attention mechanism to highlight such unusual structures in the decoder, which is left for future work. Examining interpolation in the latent feature space is an effective way of visualizing the capability of generative models as well as measuring how well the model fits the underlying data distribution. Here we compare the three algorithms. As shown in Figure 9, LIA achieves smoother interpolation while well preserving the facial properties. The interpolation quality of the MSE-based optimization essentially reflects its reconstruction performance, because it has a good generator (StyleGAN). The intermediate interpolations from Glow deviate from real faces. Figure 9: Interpolation by three generative models. The first row shows the results of LIA (ours), the second row those of the MSE-based optimization, and the last row those of Glow. The first and last faces in the Glow row are real face images from FFHQ. We further perform style mixing using a small set of reconstructed faces. Style mixing is conducted using the same approach presented in (b). The difference is that LIA uses real faces thanks to its encoding capability. Figure 10 shows that our algorithm can infer accurate latent codes and generate high-quality mixed faces. Figure 13: Generated faces along the optimization path with the latent code z and the feature w. The path is formed by the optimization of the MSE loss with respect to z and w, respectively. The faces in the first column are generated from the initial values of z and w. The following faces are then sequentially generated along the optimization process until it converges. For each group, the first row shows the results with z and the second row shows the results with w. The disentanglement effect of w is clearly shown, whereas the latent code z suffers from the entanglement of features.
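The interpolation experiment above can be reproduced with a few lines once an encoder and a frozen generator are available. The sketch below is illustrative only; the interface names are assumptions, not the paper's code.

```python
import torch

def interpolate_faces(encoder, generator, x_a, x_b, steps=8):
    """Encode two real faces, linearly interpolate in the feature space w,
    and decode each intermediate code with the frozen generator."""
    with torch.no_grad():
        w_a, w_b = encoder(x_a), encoder(x_b)
        return [generator((1 - a) * w_a + a * w_b)
                for a in torch.linspace(0.0, 1.0, steps)]
```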
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryefE1SYDr
A new model Latently Invertible Autoencoder is proposed to solve the problem of variational inference in VAE using the invertible network and two-stage adversarial training.
Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism. Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs. The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability. To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs). A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller. We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs. We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations. Representing the logic of a computer program with a parametrized model, such as a neural network, is a central challenge in AI with applications including reinforcement learning, robotics, natural language processing, and programming by example. A salient feature of recently-proposed approaches for learning programs BID32 BID6 is their ability to leverage the hierarchical structure of procedure invocations present in well-designed programs. Explicitly exposing this hierarchical structure enables learning neural programs with empirically superior generalization, compared to baseline methods that learn only from elementary computer operations, but requires training data that does not consists only of low-level computer operations but is annotated with the higher-level procedure calls BID32 BID6. tackled the problem of learning hierarchical neural programs from a mixture of annotated training data (hereafter called strong supervision) and unannotated training data where only the elementary operations are given without their call-stack annotations (called weak supervision). In this paper, we propose to learn hierarchical neural programs from a mixture of strongly supervised and weakly supervised data via the Expectation-Gradient method and an explicit program counter, in lieu of a high-dimensional real-valued state of a recurrent neural network. Our approach is inspired by recent work in robot learning and control. In Imitation Learning (IL), an agent learns to behave in its environment using supervisor demonstrations of the intended behavior. However, existing approaches to IL are largely insufficient for addressing algorithmic domains, in which the target policy is program-like in its accurate and structured manipulation of inputs and data structures. An example of such a domain is long-hand addition, where the computer loops over the digits to be added, from least to most significant, calculating the sum and carry. In more complicated examples, the agent must correctly manipulate data structures to compute the right output. Three main challenges set algorithmic domains apart from other IL domains. First, the agent's policy must be highly accurate. 
Algorithmic behavior is characterized by a hard constraint of output correctness, where any suboptimal actions are simply wrong and considered failures. In contrast, many tasks in physical and simulated domains tolerate errors in the agent's actions, as long as some goal region in state-space is eventually reached, or some safety constraints are satisfied. A second challenge is that algorithms often use specific data structures, which may require the algorithmic policies to have a particular structure. A third challenge is that the environment in algorithmic domains, which consists of the program input and the data structures, is almost completely unobservable directly by the agent. They can only be scanned using some limited reading apparatus, such as the read/write heads in a Turing Machine or the registers in a register machine. Recently proposed methods can infer from demonstration data hierarchical control policies, where high-level behaviors are composed of low-level manipulation primitives BID8. In this paper, we take a similar approach to address the challenges of algorithmic domains, by introducing Parametrized Hierarchical Procedures (PHPs), a structured model of algorithmic policies inspired by the options framework BID38, as well as the procedural programming paradigm. A PHP is a sequence of statements, such that each statement branches conditionally on the observation, to either perform an elementary operation, invoke another PHP as a sub-procedure, or terminate and return control to the caller PHP. The index of each statement in the sequence serves as a program counter to accurately remember which statement was last executed and which one is next. The conditional branching in each statement is implemented by a neural network mapping the program counter and the agent's observation into the elementary operation, sub-procedure, or termination to be executed. The PHP model is detailed in Section 4.1.PHPs have the potential to address the challenges of algorithmic domains by strictly maintaining two internal structures: a call stack containing the current branch of caller PHPs, and the current program counter of each PHP in the stack. When a statement invokes a PHP as a sub-procedure, this PHP is pushed into the call stack. When a statement terminates the current PHP, it is popped from the stack, returning control to the calling PHP to execute its next statement (or, in the case of the root PHP, ending the entire episode). The stack also keeps the program counter of each PHP, which starts at 0, and is incremented each time a non-terminating statement is executed. PHPs impose a constraining structure on the learned policies. The call stack arranges the policy into a hierarchical structure, where a higher-level PHP can solve a task by invoking lower-level PHPs that solve sub-tasks. Since call stacks and program counters are widely useful in computer programs, they provide a strong inductive bias towards policy correctness in domains that conform to these constraints, while also being computationally tractable to learn. To support a larger variety of algorithmic domains, PHPs should be extended in future work to more expressive structures, for example allowing procedures to take arguments. We experiment with PHPs in two benchmarks, the NanoCraft domain introduced in, and long-hand addition. 
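To make the execution model concrete before the experiments, the following sketch gives our reading of how a PHP agent steps through its call stack of procedures and program counters; the interfaces (terminate, operation) and all names are ours, not the paper's.

```python
def run_php(root, observe, act, max_steps=1000):
    """Sketch of PHP execution: a call stack of (procedure, program counter) pairs.

    Assumed interface (ours): each procedure h exposes
      h.terminate(tau, obs) -> bool                               # termination statement psi
      h.operation(tau, obs) -> ("call", sub_php) or ("action", a) # operation statement eta
    """
    stack = [(root, 0)]                        # initially only the root, counter 0
    for _ in range(max_steps):
        obs = observe()
        # Pop every procedure on top of the stack whose termination statement fires.
        while stack and stack[-1][0].terminate(stack[-1][1], obs):
            stack.pop()
        if not stack:                          # the root terminated: episode over
            return
        # The first non-terminating procedure advances its counter and selects an operation.
        h, tau = stack.pop()
        stack.append((h, tau + 1))
        kind, arg = h.operation(tau + 1, obs)
        # Descend through sub-procedure invocations until an elementary action is chosen.
        while kind == "call":
            h = arg
            stack.append((h, 0))
            kind, arg = h.operation(0, obs)
        act(arg)                               # execute the elementary action in the environment
```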
We find that our algorithm is able to learn PHPs from a mixture of strongly and weakly supervised demonstrations with better sample complexity than previous algorithms: it achieves better test performance with fewer demonstrations. In this paper we make three main contributions: • We introduce the PHP model and show that it is easier to learn than the NPI model (BID32). • We propose an Expectation-Gradient algorithm for efficiently training PHPs from a mixture of annotated and unannotated demonstrations (strong and weak supervision). • We demonstrate efficient training of multi-level PHPs on NanoCraft and long-hand addition BID32, and achieve improved success rates. 2 RELATED WORK [Table — supervision used by each model: NPI BID32: strong; Recursive NPI BID6: strong (recursive); NPL: mixed; PHP (this work): mixed.] Several neural architectures, including BID18, the Neural GPU BID19, and End-to-End Memory Networks BID37, have been proposed for learning neural programs from input-output examples, with components such as variable-sized memory and novel addressing mechanisms facilitating the training process. In contrast, our work considers the setting where, along with the input-output examples, execution traces are available which describe the steps necessary to solve a given problem. The Neural Programmer-Interpreter (NPI, BID32) learns hierarchical policies from execution traces which not only indicate the low-level actions to perform, but also a structure over them specified by higher-level abstractions. BID6 showed that learning from execution traces with recursive structure enables perfect generalization. Neural Program Lattices (NPL) work within the same setting as the NPI, but can learn from a dataset of execution traces where only a small fraction contains information about the higher-level hierarchy. In demonstrations where the hierarchical structure along the trace is missing, the latent space grows exponentially in the trace length. NPL addresses this challenge via an approximation method that selectively averages latent variables on different computation paths to reduce the complexity of enumerating all paths. In contrast, we compute exact gradients using dynamic programming, by considering a hierarchical structure that has small discrete latent variables in each time step. Other works use neural networks as a tool for outputting programs written in a discrete programming language, rather than having the neural network itself represent a program. BID3 learned to generate programs for solving competition-style problems. BID9 and BID31 generate programs in a domain-specific language for manipulating strings in spreadsheets. Automatic discovery of hierarchical structure has been well studied, and successful approaches include action-sequence compression BID39, identifying important transitional states BID28 BID29 BID36 BID25, learning from demonstrations BID5 BID22 BID7 BID23, considering the set of initial states from which the MDP can be solved BID20 BID21, policy gradients BID26, information-theoretic considerations BID13 BID11 BID17 BID10, active learning BID15, and recently value-function approximation BID2 BID16 BID34. Our approach is inspired by the Discovery of Deep Options (DDO) algorithm. Following the work of BID8, who use Expectation-Maximization (EM) to train an Abstract Hidden Markov Model BID5, DDO parametrizes the model with neural networks, where complete maximization in the M-step is infeasible. Instead, DDO uses Expectation-Gradient (EG) to take a single gradient step using the same forward-backward E-step as in the EM algorithm.
A variant of DDO for continuous action spaces (DDCO) has shown success in simulated and physical robot control. This paper extends DDO by proposing an E-step that can infer a call-stack of procedures and their program counters. Computation can be modeled as a deterministic dynamical system, where the computer is an agent interacting with its environment, which consists of the program input and its data structures. Mathematically, the environment is a Deterministic Partially Observable Markov Decision Process (DET-POMDP BID4), which consists of a state space S, an observation space O, an action space A, the state-dependent observation o_t(s_t), and the state transition s_{t+1} = f(s_t, a_t). The initial state s_0 includes the program input, and is generated by some distribution p_0(s_0). This notation is general enough to model various computation processes. In a Turing Machine, for example, s_t is the machine's configuration, o_t is the vector of tape symbols under the read/write heads, and a_t contains writing and moving instructions for the heads. In partially observable environments, the agent often benefits from maintaining memory m_t of past observations, which reveals temporarily hidden aspects of the current state. The agent has a parametrized stochastic policy π_θ, in some parametric family θ ∈ Θ, where π_θ(m_t, a_t | m_{t−1}, o_t) is the probability of updating the memory state from m_{t−1} to m_t and taking action a_t, when the observation is o_t. The policy can be rolled out to induce the stochastic process (s_{0:T}, o_{0:T}, m_{0:T−1}, a_{0:T−1}), such that upon observing o_T the agent chooses to terminate the process. In a computation device, the memory m_t stands for its internal state, such as the Finite State Machine of a Turing Machine. We can scale computer programs by adding data structures to their internal state, such as a call stack, which we model in the next section as Parametrized Hierarchical Procedures. In Imitation Learning (IL), the learner is provided with direct supervision of the correct actions to take. The setting we use is Behavior Cloning (BC), where the supervisor rolls out its policy to generate a batch of demonstrations before learning begins, and the agent's policy is trained to minimize a loss on its own selection of actions in demonstrated states, with respect to the demonstrated actions. In strong supervision, a demonstration contains not only the sequence of observable variables ξ = (o_{0:T}, a_{0:T−1}), where a_{0:T−1} is the sequence of supervisor actions during the demonstration, but also the sequence of the supervisor's memory states ζ = m_{0:T−1}, which are ordinarily latent. This allows the agent to directly imitate not just the actions, but also the memory updates of the supervisor, for example by maximizing the log-likelihood of the policy given the demonstrations, arg max_θ Σ_t log π_θ(m_t, a_t | m_{t−1}, o_t), the latter being the negative cross-entropy loss with respect to the demonstrations. In weak supervision, on the other hand, only the observable trajectories ξ are given as demonstrations. This makes it difficult to maximize the likelihood P(ξ|θ) = Σ_ζ P(ζ, ξ|θ), due to the large space of possible memory trajectories ζ. We address this difficulty via the Expectation-Gradient algorithm described in Section 4.
Initially, this stack contains only the root procedure and the counter is 0. Upon observing o_t, the agent checks whether the top procedure should terminate, i.e. whether ψ^{τ_n}_{h_n}(o_t) = 1. If the procedure h_n terminates, it is popped from the stack, the next termination condition ψ^{τ_{n−1}}_{h_{n−1}}(o_t) is consulted, and so on. For the first procedure h_i that does not terminate, we select the operation η^{τ_i+1}_{h_i}(o_t), after incrementing the program counter τ_i. If this operation is an invocation of procedure h'_{i+1}, we push (h'_{i+1}, 0) onto the stack, consult its operation statement η^0_{h'_{i+1}}(o_t), and so on. Upon the first procedure h'_{n'} to select an elementary action a_t, we save the new memory state m_t = [(h_1, τ_1), ..., (h_{i−1}, τ_{i−1}), (h_i, τ_i+1), (h'_{i+1}, 0), ..., (h'_{n'}, 0)], and take the action a_t in the environment. DISPLAYFORM1 The call stack and program counters act as memory for the agent, so that it can remember certain hidden aspects of the state that were observed before. In principle, any finite memory structure can be implemented with sufficiently many PHPs, by having a distinct procedure for each memory state. However, PHPs leverage the call stack and program counters to allow exponentially many memory states to be expressed with a relatively small set of PHPs. We impose two practical limitations on the general definition of PHPs. Our training algorithm in Section 4.2 does not support recursive procedures, i.e. cycles in the invocation graph. In addition, for simplicity, we allow each procedure to either invoke other procedures or execute elementary actions, but not both. These two limitations are achieved by layering the procedures in levels, such that only the lowest-level procedures can execute elementary actions, and each higher-level procedure can only invoke procedures in the level directly below it. This does not lose generality, since instead of invoking a procedure or action at a certain level, we can wrap it in a one-level-higher surrogate procedure that invokes it and terminates. A Parametrized Hierarchical Procedure (PHP) is a representation of a hierarchical procedure by a differentiable parametrization. In this paper, we represent each PHP by two multi-layer perceptrons (MLPs) with ReLU activations, one for the PHP's operation statement and one for its termination statement. The input is a concatenation of the observation o and the program counter τ, where τ is provided to the MLPs as a real number. During training, we apply the soft-argmax activation function to the output of each MLP to obtain stochastic statements η^τ_h(·|o_t) and ψ^τ_h(·|o_t). During testing, we replace the soft-argmax with argmax to obtain deterministic statements as above. In weak supervision, only the observable trajectory ξ = (o_{0:T}, a_{0:T−1}) is available in a demonstration, and the sequence of memory states ζ = m_{0:T−1} is latent. This poses a challenge, since the space of possible memory trajectories ζ grows exponentially in the length of the demonstration, which at first seems to prohibit the computation of the log-likelihood gradient ∇_θ log P(ξ|π_θ) needed to maximize the log-likelihood via gradient ascent. We use the Expectation-Gradient (EG) method to overcome this challenge BID33. This method has been previously used in dynamical settings to play Atari games and to control simulated and physical robots.
The EG trick expresses the gradient of the observable log-likelihood as the expected gradient of the full log-likelihood: ∇_θ log P(ξ|θ) = ∇_θ P(ξ|θ) / P(ξ|θ) = Σ_ζ ∇_θ P(ζ, ξ|θ) / P(ξ|θ) = Σ_ζ P(ζ|ξ, θ) ∇_θ log P(ζ, ξ|θ) = E_{ζ∼P(·|ξ,θ)}[∇_θ log P(ζ, ξ|θ)], where the first and third equations follow from two applications of the identity ∇_θ x = x ∇_θ log x. In the E-step of the EG algorithm, we find the posterior distribution of ζ given the observed ξ and the current parameter θ. In the G-step, we use this posterior to calculate and apply the exact gradient of the observable log-likelihood. We start by assuming a shallow hierarchy, where the root PHP calls level-one PHPs that only perform elementary operations. At any time t, the stack contains two PHPs: the root PHP and the PHP it invoked to select the elementary action. The stack also contains the program counters of these two PHPs; however, we ignore the root counter to reduce complexity, and bring it back when we discuss multi-level hierarchies in the next section. Let us denote by η^τ_h(a_t|o_t) and ψ^τ_h(b_t|o_t), respectively, the stochastic operation and termination statements of procedure h ∈ H ∪ {K}, where K is the root PHP. Let (h_t, τ_t) be the top stack frame when action a_t is selected. Then the full likelihood P(ζ, ξ|θ) of the policy given an annotated demonstration is a product of the terms that generate the demonstration, including η^{τ_t}_{h_t}(a_t|o_t) for the generation of each a_t, as well as ψ^{τ_{t−1}}_{h_{t−1}}(1|o_t) η_K(h_t|o_t) whenever h_{t−1} terminates and h_t is pushed with τ_t = 0, and ψ^{τ_{t−1}}_{h_{t−1}}(0|o_t) whenever h_{t−1} does not terminate (i.e. h_t = h_{t−1} and τ_t = τ_{t−1} + 1). Crucially, the form of P(ζ, ξ|θ) as a product implies that ∇_θ log P(ζ, ξ|θ) decomposes into a sum of policy-gradient terms such as ∇_θ log η^{τ_t}_{h_t}(a_t|o_t), and computing its expectation over P(ζ|ξ, θ) only requires the marginal posterior distributions over single-step latent variables v_t(h, τ) = P(h_t = h, τ_t = τ | ξ, θ) and w_t(h, τ) = P(h_t = h, τ_t = τ, τ_{t+1} = τ + 1 | ξ, θ). The marginal posteriors v_t and w_t can be found via a forward-backward algorithm, as described in Appendix A, and used to compute the exact gradient ∇_θ log P(ξ|θ) = Σ_t Σ_{h∈H} ( v_t(h, 0) ∇_θ log η_K(h|o_t) + Σ_{τ=0}^{t} ( v_t(h, τ) ∇_θ log η^τ_h(a_t|o_t) + w_t(h, τ) ∇_θ log ψ^τ_h(0|o_{t+1}) + (v_t(h, τ) − w_t(h, τ)) ∇_θ log ψ^τ_h(1|o_{t+1}) ) ). A naive attempt to generalize the same approach to multi-level PHPs would result in an exponential blow-up of the forward-backward state, which would need to include the entire stack. Instead, we train each level separately, iterating over the PHP hierarchy from the lowest level to the highest. Let us denote by d the number of levels in the hierarchy, with 0 being the root and d−1 the lowest level; we then train level i in the hierarchy after we have trained levels i+1, ..., d−1. Two components are required to allow this separation. First, we need to use our trained levels i+1, ..., d−1 to abstract away from the elementary actions, and generate demonstrations where the level-(i+1) PHPs are treated as the new elementary operations. In this way, we can view level-i PHPs as the new lowest-level PHPs, whose operations are elementary in the demonstrations. This is easy to do in strongly supervised demonstrations, since we have the complete stack, and we only need to truncate the lowest d−i−1 levels. In weakly supervised demonstrations, on the other hand, we need an algorithm for decoding the observable trajectories, and replacing the elementary actions with higher-level operations. We present such an algorithm below.
The second component needed for level-wise training is approximate separation from higher levels that have not been trained yet. When we train level i > 1 via the EG algorithm in the previous section, the "root PHP" would be at level i−1, had it corresponded to any real PHP. In all but the simplest domains, we cannot expect a single PHP to perfectly match the behavior of the i-level PHP hierarchy (levels 0, ..., i−1) that actually selected the level-i PHPs that generated the demonstrations. To facilitate better separation from higher levels, we augment the "root PHP" used for training with an LSTM that approximates the i-level stack memory as η^{LSTM}_K(h_t | o_0, ..., o_t). As mentioned above, abstraction from lower levels is achieved by rewriting weakly supervised demonstrations to show level-(i+1) operations as elementary. After level i+1 is trained, the level-(i+1) PHPs that generated the demonstrations are decoded using the trained parameters. We considered three different decoding algorithms: finding the most likely level-(i+1) PHP at each time step, by taking the maximum over v_t; using a Viterbi-like algorithm to find the most likely latent trajectory of level-(i+1) PHPs; and sampling from the posterior distribution P(ζ|ξ, θ) over latent trajectories. In our current experiments we used latent trajectories sampled from the posterior distribution, given by P(ζ|ξ, θ) = v_0(h_0, τ_0) ∏_{t=0}^{T−2} P(h_t, τ_t, h_{t+1}, τ_{t+1} | ξ, θ) / v_t(h_t, τ_t), where the denominators can be computed via the same forward-backward algorithm used in the previous section to compute v_t and w_t, as detailed in Appendix A. We evaluate our proposed method on the two settings studied in prior work: NanoCraft, which involves an agent interacting in a grid world, and long-hand addition, which was also considered by BID32 and BID6. Task description. The NanoCraft domain involves placing blocks in a two-dimensional grid world. The goal of the task is to control an agent to build a rectangular building of a particular height and width, at a specified location within the grid, by moving around the grid and placing blocks in appropriate cells. The state contains a 6 × 6 grid. In our version, each grid cell can either be empty or contain a block. The state also includes the current location of the agent, as well as the building's desired height, width, and location, expressed as the offset from the agent's initial location at the top left corner. Initially, some of the blocks are already in place and must not be placed again. The state-dependent observation o_t(s_t) reveals whether the grid cell at which the agent is located contains a block or not, and four numbers for the building's specifications. We provide each observation to the MLPs as a 5-dimensional real-valued feature vector. PHPs and elementary actions. The top-level PHP nanocraft executes (moves_r, moves_d, builds_r, builds_d, builds_l, builds_u, return). moves_r calls move_r a number of times equal to the building's horizontal location, and similarly for moves_d w.r.t. move_d and the vertical location; builds_r w.r.t. build_r and the building's width; and so on for builds_d, builds_l, and builds_u. At the lowest level, move_r takes the elementary action MOVE_RIGHT and terminates, and similarly for move_d taking MOVE_DOWN. build_r executes (MOVE_RIGHT, if cell full: return, else: PLACE_BLOCK, return), and similarly for build_d, build_l, and build_u w.r.t. MOVE_DOWN, MOVE_LEFT, and MOVE_UP. Experiment setup.
We trained our model on datasets of 16, 32, and 64 demonstrations, of which some are strongly supervised and the rest weakly supervised. We trained each level for 2000 iterations, iteratively from the lowest level to the highest. The results are averaged over 5 trials with independent datasets. Results. Our results, summarized in FIG2, show that 32 strongly supervised demonstrations are sufficient for achieving perfect performance at the task, and that 16 such demonstrations approach the same success rate when used along with weakly supervised demonstrations, for a total of 16, 32, or 64 demonstrations. An interesting question is whether these performance gains are due to the simplicity of the PHP model itself, the use of exact gradients in its optimization via the EG algorithm, or both. The PHP and NPL/NPI experiments with 64 strongly supervised demonstrations (FIG2, blue curves at the 64 mark) directly compare the PHP model with the NPI model, since both algorithms use exact gradients in this case. The accuracy is 1.0 for PHP and 0.724 for NPL/NPI, suggesting that the gains of PHP are at least in part due to the PHP model inducing an optimization landscape in which a good solution is easier to find. In the experiments with 16 strongly supervised demonstrations out of a total of 64 (blue curves at the 16 mark), the success rate is 0.969 for PHP and 0.502 for NPL. This 70% increase in the gain of PHP over NPL may be evidence that exact gradients are better at training the model than the approximate gradients of NPL, although the choice of an optimization method is conflated here with the choice of a model. Task description. The long-hand addition task was also considered by BID32 and BID6. In this task, our goal is to add two numbers represented in decimal, by starting at the rightmost column (least significant digit) and repeatedly summing each column to write the resulting digit and a carry if needed. The state consists of 4 tapes, as in a Turing Machine, corresponding to the first number, the second number, the carries, and the output. The state also includes the positions of 4 read/write heads, one for each tape. Initially, each of the first two tapes contains the K digits of a number to be added, all other cells contain the empty symbol, and the heads point to the least significant digits. The state-dependent observation o_t(s_t) reveals the value of the digits (or empty symbols) pointed to by the pointers. The four values are provided to the MLPs in one-hot encoding, i.e., the input vector has 11 × 4 dimensions with exactly one 1-valued entry in each of the four groups. PHPs and elementary actions. The top-level PHP add repeatedly calls add1 to add each column of digits. add1 calls write, carry, and lshift in order to compute the sum of the column, write the carry in the next column, and move the pointers to the next column. If the sum for a column is less than 10, then add1 does not call carry. There are two kinds of elementary actions: one which moves a specified pointer in a specified direction (e.g. MOVE CARRY LEFT), and one which writes a specified digit to a specified tape (e.g. WRITE OUT 2). η_write, η_carry, and η_lshift output the probability distribution over possible action and argument combinations as the product of 3 multinomial distributions, with 2, 4, and 10 possibilities respectively. Experiment setup. Following prior work, we trained our model on execution traces for inputs of each length 1 to 10. We used 16 traces for each input length, for a total of 160 traces.
We experimented with providing 1, 2, 3, 5, and 10 strongly supervised traces, with the remainder containing only the elementary actions. For training our model, we performed a search over two hyperparameters: • Weight on loss from strongly supervised traces: When the number of weakly supervised demonstrations overwhelms the number of strongly supervised traces, the model can learn a hierarchy which does not match the supervisor. By appropriately scaling up the loss contribution from the strongly supervised traces, we can ensure that the model learns to follow the hierarchy specified in them. • Use of τ in ψ: The termination condition ψ^τ_h(b_t|o_t) contains a dependence on τ, the number of steps that the current procedure h has executed. However, sometimes the underlying definition for ψ does not contain any dependence on τ: ψ^1_h(b|o) = ψ^2_h(b|o) = ···. In such a case, the MLP for ψ_h may learn a spurious dependency on τ, and generalize poorly to values of τ encountered at test time. Therefore, we searched over whether to use τ for ψ at each level of the hierarchy. Results. Our results are summarized in Table 2. Previous work learns a model which can generalize perfectly to input lengths of 500 but not 1000. [Table 2 — Empirical results for the long-hand addition task (strongly supervised / total traces, and accuracy for input lengths 500 and 1000): NPI BID32: 160/160, <100%, <100%; NPL: 10/160, 100%, <100%; PHP: 3/160, 100%, 100%. All models were trained with 16 traces per input length between 1 and 10, for a total of 160 traces, some of which strongly supervised.] In our experiments with the same sample complexity, EG can train PHPs which generalize to length-1000 inputs with 100% empirical test accuracy. Moreover, we successfully learn models with as few as 3 strongly supervised demonstrations, compared to the 10 used previously. However, we found that when the number of strongly supervised demonstrations was smaller than 10, early stopping of the training of the top-level policy was needed to learn a correct model. To obtain our reported results, we evaluated different snapshots of the model generated during training. In this paper we introduced the Parametrized Hierarchical Procedures (PHP) model for hierarchical representation of neural programs. We proposed an Expectation-Gradient algorithm for training PHPs from a mixture of strongly and weakly supervised demonstrations of an algorithmic behavior, showed how to perform level-wise training of multi-level PHPs, and demonstrated the benefits of our approach on two benchmarks. PHPs alleviate the sample complexity required to train policies with unstructured memory architectures, such as LSTMs, by imposing the structure of a call stack augmented with program counters. This structure may be limiting in that it requires the agent to also rely on observable information that could otherwise be memorized, such as the building specifications in the NanoCraft domain. The benchmarks used so far in the field of neural programming are simple enough and observable enough to be solvable by PHPs; however, we note that more complicated and less observable domains may require more expressive memory structures, such as passing arguments to sub-procedures. Future work will explore such structures, as well as new benchmarks to further challenge the community. Our results suggest that adding weakly supervised demonstrations to the training set can improve performance at the task, but only when the strongly supervised demonstrations already get decent performance.
Weak supervision could attract the optimization process to a different hierarchical structure than intended by the supervisor, and in such cases we found it necessary to limit the number of weakly supervised demonstrations, or weight them less than demonstrations annotated with the intended hierarchy. An open question is whether the attractors strengthened by weak supervision are alternative but usable hierarchical structures, that are as accurate and interpretable as the supervisor's. Future work will explore the quality of solutions obtained by training from only weakly supervised demonstrations. In weak supervision, only the observable trajectory ξ = (o_{0:T}, a_{0:T−1}) is available in a demonstration, and the sequence of memory states ζ = m_{0:T−1} is latent. This poses a challenge, since the space of possible memory trajectories ζ grows exponentially in the length of the demonstration, which at first seems to prohibit the computation of the log-likelihood gradient ∇_θ log P(ξ|π_θ), needed to maximize the log-likelihood via gradient ascent. Our key insight is that the log-likelihood gradient can be computed precisely and efficiently using an instance of the Expectation-Gradient (EG) method BID33, which we detail below: ∇_θ log P(ξ|θ) = ∇_θ P(ξ|θ) / P(ξ|θ) = Σ_ζ ∇_θ P(ζ, ξ|θ) / P(ξ|θ) = Σ_ζ P(ζ|ξ, θ) ∇_θ log P(ζ, ξ|θ) = E_{ζ∼P(·|ξ,θ)}[∇_θ log P(ζ, ξ|θ)], where the first and third equations follow from the identity ∇_θ x = x ∇_θ log x. We start by assuming two-level PHPs, so that at any time t the stack contains two PHPs, the root PHP and the PHP it invoked to select the elementary action. The stack also contains the program counters of these two PHPs, however we ignore the root counter to reduce complexity, and bring it back when we discuss multi-level hierarchies in Section 4.2.3 (and below). Let us denote by η^τ_h(a_t|o_t) and ψ^τ_h(b_t|o_t), respectively, the stochastic operation and termination statements of procedure h ∈ H ∪ {K}, where K is the root PHP. Let (h_t, τ_t) be the top stack frame when action a_t is selected. Then the full likelihood P(ζ, ξ|θ) of the policy given an annotated demonstration is P(ζ, ξ|θ) ∝ η_K(h_0|o_0) δ_{τ_0=0} ∏_{t=1}^{T−1} P(h_t, τ_t | h_{t−1}, τ_{t−1}, o_t) ∏_{t=0}^{T−1} η^{τ_t}_{h_t}(a_t|o_t), where from the right-hand side we omitted the constant causal dynamics factor ∏_t P(o_t | o_{0:t−1}, a_{0:t−1}), and with P(h_t, τ_t | h_{t−1}, τ_{t−1}, o_t) = ψ^{τ_{t−1}}_{h_{t−1}}(1|o_t) η_K(h_t|o_t) if τ_t = 0, and ψ^{τ_{t−1}}_{h_{t−1}}(0|o_t) δ_{h_t = h_{t−1}} if τ_t = τ_{t−1} + 1. This formulation of the likelihood has the extremely useful property that ∇_θ log P(ζ, ξ|θ) decomposes into a sum of gradients. To find the expected gradient, we do not need to represent the entire posterior distribution P(ζ|ξ, θ), which would be intractable. Instead, we only need the marginal posteriors that correspond to the various terms, namely v_t(h, τ) = P(h_t = h, τ_t = τ | ξ, θ) and w_t(h, τ) = P(h_t = h, τ_t = τ, τ_{t+1} = τ + 1 | ξ, θ). With these, the EG trick gives us the gradient of the observable demonstration ∇_θ log P(ξ|θ) = Σ_t Σ_{h∈H} ( v_t(h, 0) ∇_θ log η_K(h|o_t) + Σ_{τ=0}^{t} ( v_t(h, τ) ∇_θ log η^τ_h(a_t|o_t) + w_t(h, τ) ∇_θ log ψ^τ_h(0|o_{t+1}) + (v_t(h, τ) − w_t(h, τ)) ∇_θ log ψ^τ_h(1|o_{t+1}) ) ). To allow the G-step, we take an E-step that calculates the marginal posteriors v and w with a forward-backward pass. We first compute the likelihood of a trajectory prefix φ_t(h, τ) ∝ P(o_{0:t}, a_{0:t}, h_t = h, τ_t = τ), up to the causal dynamics factor, via the forward recursion given by φ_0(h, 0) = η_K(h|o_0), and for 0 ≤ t < T−1: φ_{t+1}(h', 0) = ( Σ_{h∈H, 0≤τ≤t} φ_t(h, τ) η^τ_h(a_t|o_t) ψ^τ_h(1|o_{t+1}) ) η_K(h'|o_{t+1}) and φ_{t+1}(h, τ+1) = φ_t(h, τ) η^τ_h(a_t|o_t) ψ^τ_h(0|o_{t+1}).
We similarly compute the likelihood of a trajectory suffix ω_t(h, τ) ∝ P(a_{t:T−1}, o_{t+1:T} | o_{0:t}, h_t = h, τ_t = τ), via the backward recursion given by ω_{T−1}(h, τ) = η^τ_h(a_{T−1}|o_{T−1}) ψ^τ_h(1|o_T), and for 0 ≤ t < T−1: ω_t(h, τ) = η^τ_h(a_t|o_t) ( ψ^τ_h(1|o_{t+1}) Σ_{h'∈H} η_K(h'|o_{t+1}) ω_{t+1}(h', 0) + ψ^τ_h(0|o_{t+1}) ω_{t+1}(h, τ+1) ). For efficiency considerations, note that this forward-backward graph has (t+1)k nodes in layer t, where k = |H|, but only (t+1)k(k+1) edges to the next layer, rather than the naive (t+1)(t+2)k². We can calculate our target likelihood using any 0 ≤ t < T, by taking P(ξ|θ) = Σ_{h∈H, 0≤τ≤t} P(ξ, h_t = h, τ_t = τ) ∝ Σ_{h∈H, 0≤τ≤t} φ_t(h, τ) ω_t(h, τ), so the most efficient is to use t = 0: P(ξ|θ) = Σ_{h∈H} P(ξ, h_0 = h, τ_0 = 0) ∝ Σ_{h∈H} φ_0(h, 0) ω_0(h, 0). Finally, the marginal posteriors are given by v_t(h, τ) = φ_t(h, τ) ω_t(h, τ) / P(ξ|θ), and w_{T−1}(h, τ) = 0, and for 0 ≤ t < T−1: w_t(h, τ) = φ_t(h, τ) η^τ_h(a_t|o_t) ψ^τ_h(0|o_{t+1}) ω_{t+1}(h, τ+1) / P(ξ|θ). As mentioned in Section 4.2.3, level-wise training of multi-level PHPs requires abstraction from lower levels and separation from higher levels. The former is achieved by rewriting weakly supervised demonstrations to show level-i operations as elementary, for the purpose of training the next-higher level i−1. After level i is trained, the level-i PHPs that generated the demonstrations are decoded using the trained parameters. In our current experiments we used latent trajectories sampled from the posterior distribution, given by P(ζ|ξ, θ) = v_0(h_0, τ_0) ∏_{t=0}^{T−2} P(h_t, τ_t, h_{t+1}, τ_{t+1} | ξ, θ) / v_t(h_t, τ_t), where for each step 0 ≤ t < T−1: P(h_t, τ_t, h_{t+1}, 0 | ξ, θ) = φ_t(h_t, τ_t) η^{τ_t}_{h_t}(a_t|o_t) ψ^{τ_t}_{h_t}(1|o_{t+1}) η_K(h_{t+1}|o_{t+1}) ω_{t+1}(h_{t+1}, 0) / P(ξ|θ), and P(h_t, τ_t, h_{t+1}, τ_{t+1} | ξ, θ) = δ_{h_{t+1} = h_t} w_t(h_t, τ_t) for τ_{t+1} = τ_t + 1.
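The forward-backward recursions above translate directly into dynamic programming. The following sketch is our own rendering of them for the two-level case, with η, ψ, and η_K passed in as callables (an interface we assume for illustration, not the authors' code):

```python
import numpy as np

def e_step_two_level(eta, psi, eta_root, obs, actions):
    """Forward-backward E-step for a two-level PHP (our notation).

    eta[h](tau, a, o)  -> probability of action a under procedure h at counter tau
    psi[h](tau, b, o)  -> probability of termination bit b under procedure h at counter tau
    eta_root(h, o)     -> probability that the root invokes procedure h
    obs: o_0..o_T, actions: a_0..a_{T-1}. Returns marginal posteriors v and w.
    """
    T, K = len(actions), len(eta)
    phi = np.zeros((T, K, T))    # phi[t, h, tau] ~ P(o_0:t, a_0:t, h_t=h, tau_t=tau)
    omega = np.zeros((T, K, T))  # omega[t, h, tau] ~ P(a_t:T-1, o_t+1:T | ..., h_t=h, tau_t=tau)

    # Forward recursion.
    for h in range(K):
        phi[0, h, 0] = eta_root(h, obs[0])
    for t in range(T - 1):
        for h in range(K):
            for tau in range(t + 1):
                step = phi[t, h, tau] * eta[h](tau, actions[t], obs[t])
                for h2 in range(K):  # h terminates, the root invokes h2 at counter 0
                    phi[t + 1, h2, 0] += step * psi[h](tau, 1, obs[t + 1]) * eta_root(h2, obs[t + 1])
                phi[t + 1, h, tau + 1] += step * psi[h](tau, 0, obs[t + 1])  # h continues

    # Backward recursion.
    for h in range(K):
        for tau in range(T):
            omega[T - 1, h, tau] = eta[h](tau, actions[T - 1], obs[T - 1]) * psi[h](tau, 1, obs[T])
    for t in range(T - 2, -1, -1):
        switch = sum(eta_root(h2, obs[t + 1]) * omega[t + 1, h2, 0] for h2 in range(K))
        for h in range(K):
            for tau in range(t + 1):
                omega[t, h, tau] = eta[h](tau, actions[t], obs[t]) * (
                    psi[h](tau, 1, obs[t + 1]) * switch
                    + psi[h](tau, 0, obs[t + 1]) * omega[t + 1, h, tau + 1])

    # Marginal posteriors, normalized by the trajectory likelihood at t = 0.
    lik = phi[0, :, 0] @ omega[0, :, 0]
    v = phi * omega / lik
    w = np.zeros_like(v)
    for t in range(T - 1):
        for h in range(K):
            for tau in range(t + 1):
                w[t, h, tau] = (phi[t, h, tau] * eta[h](tau, actions[t], obs[t])
                                * psi[h](tau, 0, obs[t + 1]) * omega[t + 1, h, tau + 1]) / lik
    return v, w
```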
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJl63fZRb
We introduce the PHP model for hierarchical representation of neural programs, and an algorithm for learning PHPs from a mixture of strong and weak supervision.
Deep Reinforcement Learning has managed to achieve state-of-the-art in learning control policies directly from raw pixels. However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system. Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially. In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better. We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes. In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent. We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one. Concretely, we use Unaligned Generative Adversarial Networks (GANs) to create a mapping function to translate images in the target task to corresponding images in the source task. These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter. We show that learning this mapping is substantially more efficient than re-training. A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, can be seen in \url{https://streamable.com/msgtm} and \url{https://streamable.com/5e2ka}. Transferring knowledge from previous occurrences to new circumstances is a fundamental human capability and is a major challenge for deep learning applications. A plausible requirement for artificial general intelligence is that a network trained on one task can reuse existing knowledge instead of learning from scratch for another task. For instance, consider the task of navigation during different hours of the day. A human that knows how to get from one point to another on daylight will quickly adjust itself to do the same task during night time, while for a machine learning system making a decision based on an input image it might be a harder task. That is because it is easier for us to make analogies between similar situations, especially in the things we see, as opposed to a robot that does not have this ability and its knowledge is based mainly on what it already saw. Deep reinforcement learning has caught the attention of researchers in the past years for its remarkable success in achieving human-level performance in a wide variety of tasks. One of the field's famous achievements was on the Atari 2600 games where an agent was trained to play video games directly from the screen pixels and information received from the game BID20. However, this approach depends on interacting with the environment a substantial number of times during training. Moreover, it struggles to generalize beyond its experience, the training process of a new task has to be performed from scratch even for a related one. Recent works have tried to overcome this inefficiency with different approaches such as, learning universal policies that can generalize between related tasks BID25, as well as other transfer approaches BID7 BID24. In this work, we first focus on the Atari game Breakout, in which the main concept is moving the paddle towards the ball in order to maximize the score of the game. 
We modify the game by introducing visual changes such as adding a rectangle in the middle of the image or diagonals in the background. From a human perspective, it appears that making visual changes that are not significant to the game's dynamics should not influence the score of the game; a player who mastered the original game should be able to trivially adapt to such visual variants. We show that the agent fails to transfer. Furthermore, fine-tuning, the main transfer learning method used today in neural networks, also fails to adapt to the small visual change: the information learned in the source task does not benefit the learning process of the very related target task, and can even decelerate it. The algorithm behaves as if these are entirely new tasks. Our second focus is attempting to transfer agent behavior across different levels of a video game: can an agent trained on the first level of a game use this knowledge and perform adequately on subsequent levels? We explore the Nintendo game Road Fighter, a car racing game where the goal is to finish the track before the time runs out without crashing. The levels all share the same dynamics, but differ from each other visually and in aspects such as road width. Similar to the Breakout results, an agent trained to play the first level fails to correctly adapt its past experience, causing the learned policy to completely fail on the new levels. To address the generalization problem, we propose a zero-shot generalization approach, in which the agent learns to transfer between related tasks by learning to visually map images from the target task back to familiar corresponding images from the source task. Such a mapping is naturally achieved using Generative Adversarial Networks (GANs) BID9, one of the most popular methods for image-to-image translation, used in computer vision tasks such as style transfer BID15, object transfiguration BID31, photo enhancement BID17 and, more recently, video game level generation BID27. In our setup, it is not realistic to assume paired images in both domains, calling for the use of Unaligned GANs BID19 BID15. Using this approach we manage to transfer between similar tasks with no additional learning. Contributions This work presents three main contributions. First, in Section 2, we demonstrate how an agent trained with deep reinforcement learning algorithms fails to adapt to small visual changes, and that the common transfer method of fine-tuning fails as well. Second, in Section 3, we propose to separate the visual mapping from the game dynamics, resulting in a new transfer learning approach for related tasks based on visual input mapping. We evaluate this approach on Breakout and Road Fighter, and present the results compared to different baselines. We show that our visual transfer approach is much more sample efficient than the alternatives. Third, in Section 5, we suggest an evaluation setup for unaligned GAN architectures, based on their achieved performance on concrete down-stream tasks. Many Breakout variations can be constructed that involve the same dynamics. The main idea is to make modifications that are not critical for a human playing the game but are for the algorithm that relies on visual inputs. We demonstrate the difficulty deep reinforcement learning has in generalizing, using 4 types of modifications as presented in FIG0. For all the experiments from this section onward, we use the Asynchronous Advantage Actor-Critic (A3C) algorithm BID21, taking advantage of its being faster than Deep Q-Networks (DQN) BID20.
The A3C learns the policy and the state-value function using parallel actor-learners exploring different policies, for the acceleration and stability of the training. We rescale the image to 80 × 80 and keep the RGB colors for more realistic images. We use 32 actor-learners, a discount rate of 0.99, a learning rate of 0.0001, 20-step returns, and an entropy regularization weight of 0.01. The A3C variation we choose is the LSTM-A3C network. We use the standard high-performance architecture implemented in BID16. The setup mentioned in Section 2.1 successfully trains on Breakout, reaching a score of over 400 points. However, when a network trained on the original game is presented with the game variants, it fails completely, reaching a maximum score of only 3 points. This shows that the network does not necessarily learn the game's concepts and heavily relies on the images it receives. The common approach for transferring knowledge across tasks is fine-tuning. We experiment with common techniques used in deep learning models. In each setting, we have a combination of frozen and fine-tuned layers (Partial/Full) as well as layers that are initialized with the source task's parameters and layers that are initialized with random values (Random). Our settings are inspired by BID29. We train each one of the tasks (before and after the transformation) for 60 million frames, and our evaluation metric is the total reward the agents collect in an episode averaged over the number of episodes, where an episode ends when the game is terminated or when a maximum number of steps is reached. We periodically compute the average during training. We consider the following settings: • From-Scratch: The network is trained from scratch on the target game. • Full-FT: All of the layers are initialized with the weights of the source task and are fine-tuned on the target task. • Random-Output: The convolutional layers and the LSTM layer are initialized with the weights of the source task and are fine-tuned on the target task. The output layers are initialized randomly. • Partial-FT: All of the layers are initialized with the weights of the source task. The first three convolutional layers are kept frozen, and the rest are fine-tuned on the target task. • Partial-Random-FT: The first three convolutional layers are initialized with the weights of the source task and are kept frozen, and the rest are initialized randomly. The results presented in FIG1 show a complete failure of all the fine-tuning approaches to transfer to the target tasks. In the best scenarios the transfer takes just as many epochs as training from scratch, while in other cases starting from a trained network makes it harder for the network to learn the target task. As the graphs show, some of the modifications had more influence than others. For example, FIG1 shows that adding a simple rectangle can be destructive for a trained agent: while training from scratch consistently and reliably achieves scores over 300, the settings starting from a trained agent struggle to pass the 200-point mark within the same number of iterations, and have a very high variance. We noticed that during training the agent learns a strategy to maximize the score with a minimum number of actions. None of the experiments we performed showed better results when the layers in the network were fine-tuned, and some showed negative transfer, which is a clear indication of an overfitting problem.
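As a concrete illustration of the settings listed above, freezing and re-initializing layers takes only a few lines in a typical deep learning framework. The sketch below assumes hypothetical attribute names (model.convs, model.head) and a PyTorch-style API; it is not the authors' code.

```python
import torch.nn as nn

def apply_transfer_setting(model, setting):
    """Apply one of the fine-tuning settings to a pretrained source-task model."""
    if setting == "Partial-FT":
        for layer in model.convs[:3]:            # freeze the first three conv layers
            for p in layer.parameters():
                p.requires_grad = False
    elif setting == "Random-Output":
        for m in model.head.modules():           # re-initialize the output layers
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
    # "Full-FT" keeps all loaded weights trainable; "From-Scratch" skips loading them entirely.
```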
The A3C model learned the detail and noise in the training data to the extent that it negatively impacted the performance of the model on new data. Our results and the conclusions drawn from them are consistent with those shown when a similar approach was used on Pong BID24. In addition to Breakout, we also experimented with transfer between the first and more advanced levels of Road Fighter, where the backgrounds change but the dynamics remain the same. These experiments resulted in 0 points on each of the levels, a complete failure of the agent to re-use the driving techniques learned on the first level on the next ones. An agent capable of performing a task in a source domain is now presented with a new domain. Fine-tuning the agent on the target domain fails to transfer knowledge from the source domain. We propose to separate the visual transfer from the dynamics transfer. To perform well, the agent can try to make analogies from the new domain to the old one: after observing a set of states (images) in the new domain, the agent can learn to map them to similar, familiar states from the source domain, and act according to its source domain policy on the mapped state. More concretely, given a trained policy π(a|s; θ) with trained parameters θ proposing an action a for source domain states s ∈ S, we wish to learn a mapping function G: T → S from target domain states t ∈ T such that interacting with the environment T by applying the policy π(a|G(t); θ) will result in a good distribution of actions for the states in T, as indicated by high overall scores. In other words, we seek a mapping function G that allows us to re-use the same policy π_θ learned for the source environment S when interacting with the target environment T. As both the source and target domain items are images, we heuristically learn the function G by collecting sets of images from S and T and learning to visually map between them using unaligned GANs BID19 BID15. We use the scores obtained from interacting with the environment via π(a|G(t); θ) for the GAN model selection and stopping criteria. In this work, we focus on learning setups that receive only raw image data, without additional domain knowledge about objects or game dynamics. This prohibits us from using supervised, paired GANs BID13 for learning the mapping function G: we cannot collect the needed supervision of corresponding (s, t) pairs. Instead, we use unaligned GANs BID19 BID15, in which the learner observes two sets of images, one from each domain, with the goal of learning to translate images in one domain to images in the other. All major approaches to unaligned image-to-image translation use the cycle-consistency principle. We have two mapping (encoding) functions, G_1: T → S and G_2: S → T, where S = {s_i}_{i=1}^{N} is a set of images collected from the source task and T = {t_j}_{j=1}^{M} is a set of images collected from the target task. The goal is, for any given t ∈ T, to generate an image s̃ = G_1(t) that is indistinguishable from images s ∈ S. The cycle-consistency principle relies on the assumption that the two functions, G_1 and G_2, are inverses of each other. It encourages unsupervised mapping by forcing G_2(G_1(t)) = t and G_1(G_2(s)) = s, where s and t are the input images. The second component of the GAN architecture consists of the discriminators D_1 and D_2, which aim to distinguish between images generated by G_1 and G_2 and the real images from the target and source distributions respectively.
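The cycle-consistency constraint just described can be written down in a few lines. The following is an illustrative sketch only (generator and batch names are ours, and the reconstruction penalty is shown as an L1 term, a common but not the only choice):

```python
import torch.nn.functional as F

def cycle_consistency_loss(G1, G2, s_batch, t_batch):
    """G1 maps target-domain images to the source domain, G2 maps source to target.
    Each round trip should reproduce its input."""
    loss_t = F.l1_loss(G2(G1(t_batch)), t_batch)  # t -> G1(t) -> back to t
    loss_s = F.l1_loss(G1(G2(s_batch)), s_batch)  # s -> G2(s) -> back to s
    return loss_t + loss_s
```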
In the following experiments, we use the UNIT framework BID19, which we found to perform well for the Breakout tasks (in Section 5 we explicitly compare the UNIT and CycleGAN approaches on both the Breakout and Road Fighter transfer tasks). A distinguishing element in the UNIT framework is the shared-latent space assumption, according to which there is a shared latent code z for any pair of corresponding images s and t, from which both images can be recovered. This shared-latent space is represented as the weights of the last few layers of the encoding network and the first few layers of the decoding networks, and is learned by using Variational Autoencoders (VAEs). This sharing strongly ties the images in the source and target domain to each other, encouraging mappings that preserve similarities across domains. In contrast, the CycleGAN architecture does not make the shared-space assumption, and instead the generators are trained independently with two separate networks. For further information on the unaligned GAN architectures, see the original papers. Datasets. The unaligned GAN training dataset requires images from both domains. We collect images from the source domain by running an untrained agent and collecting the observed images, and we do similarly for the target domain. The number of collected images should balance between two objectives: on the one hand, we want to take a small number of images, and on the other hand, it is essential for us to have a diverse dataset. We repeat this procedure for every target task, and create a source-target dataset for each. During training, we further ensure the image pairs are not aligned by randomly picking an image from each set at each iteration. Setup and Analysis. For our experiments we use the same architecture and hyper-parameters proposed in the UNIT paper. We initialize the weights with Xavier initialization BID8, set the batch size to 1 and train the network for a different number of iterations on each task. Some tasks are harder than others: the more changes exist in the frames, the harder it is for the GAN to learn the mapping between the domains. However, our evaluation metric, testing the agent with the generated images, is a clear indication of how hard each task is, and the number of iterations needed is based on the results of this evaluation. Evaluation. We use GAN training to learn a mapping function G. GAN training, and unaligned GAN training in particular, is unstable, and it is challenging to find a good loss-based stopping criterion. A major issue with GANs is the lack of an evaluation metric that works well for all models and architectures, and which can assist in model selection. Different works use different methods that were suitable for their types of data. Our setup suggests a natural evaluation criterion: we run the source agent without any further training while using the model to translate each image of the target task back to the source task, and collect the rewards the agent receives during the game when presented with the translated images. We use the total accumulated reward (the score) the agent collects during the game as our model accuracy. We examine how well the agent does when receiving translated frames generated by the generator trained with GANs. Since we initialized the layers with the values of the trained network, we assume that the success of the agent is dependent on the similarity between the generated frames and the source task's frames.
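A minimal sketch of this evaluation criterion follows. The gym-style environment API and the `policy_source` and `G` objects are placeholders standing in for the trained source policy π(a|s; θ) and the target-to-source generator; the actual evaluation code is not given in the text.

```python
def score_with_translation(env_target, policy_source, G, max_steps=100_000):
    """Run the frozen source policy on target frames translated back to the
    source domain, and return the accumulated game score."""
    total_reward, steps, done = 0.0, 0, False
    obs = env_target.reset()
    while not done and steps < max_steps:
        action = policy_source.act(G(obs))        # act on the translated frame G(t)
        obs, reward, done, _ = env_target.step(action)
        total_reward += reward
        steps += 1
    return total_reward                           # the score serves as the GAN's accuracy
```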
We first test our approach on Breakout, evaluating its ability to remove the changes added in the images. Second, we challenge our method even more on Road Fighter, where the goal is to transfer between different environments. Our goal in Breakout is removing the modifications in each one of the target tasks and transferring from each of them back to the original game. Although the variations share many similarities, some tasks were more challenging than others, e.g., the lines of the Green Lines variation hide parts of the ball in some frames. On the opposite side, the Rectangle variation requires less training since the number of pixels changed in the image is small. Table 1: The score and the number of frames needed to reach it for: the source task (Source), the target task when initialized with the source task's network parameters with no additional training (Target), and the target task when initialized with the source task's network parameters where every frame is translated to a frame from the source task (Target with GANs). Figure 3: Illustration of a frame taken from the target task (left) and its matching frame of the source task generated with GANs (right) for each one of the Breakout variations. (a)-(d) demonstrate successes, while (e) and (f) show failure modes of the unaligned GAN. In (e) the ball in the input image is not generated in the output and in (f) not all bricks are generated, and some of the generated bricks appear smudged. During testing, we encountered problems with image generation that we did not observe during the GAN training. The translation task we attempted to perform was supposedly simple - search for the differences between the domains shared by all images, change them to the way they are in the opposite domain and leave everything else the same. Unfortunately, since the network does not have any prior information about objects in the image, it struggles to generate them even if they were not changed. The most common problem we had was that the generator generated bricks that were supposed to be removed from the game, and in some cases, they were very noisy in the generated image (Fig. 3f). Another problem was the ball generation and more specifically, the location of the generated ball. Since the ball is small and changes its position often, it was hard for the generator, trained with unaligned pairs, to decide if and where to locate it in the generated image (Fig. 3e). These issues and others eventually caused the agent to fail in following the policies it learned on the source task. We found that more training leads to better results for some of the variations, and so the number of iterations needed was different for each variation. In Table 1 we show the results of a test game played by the agent with and without GANs. We stop the training after reaching 300 points, which we consider to be a high score. As the results show, the source game trained from scratch requires tens of millions of images to achieve such a score, compared to the target task trained with GANs that only needs a few hundred thousand - a 100-fold increase in sample efficiency. Moreover, the frames the GAN was trained on were limited to the first games, in which the A3C network was not yet trained, and yet it managed to generalize to more advanced stages of the game. While the Breakout variants work well to demonstrate transfer failure cases, they can be considered as "toy examples".
We proceed to demonstrate the effectiveness of our transfer method on a "real" task: learning to transfer between different levels of the Road Fighter game. Road Fighter contains 4 different levels FIG2, each with a different where some are more difficult than others. The levels mostly differ visually and all have the same rules and require the same driving techniques. Thus, we believe that these techniques should sustain when playing a new level. We start by training an RL agent to play the first level of the game. To maximize the score, the agent has to acquire 3 main capabilities: driving fast, avoiding collision with obstacles, and if a car collision occurs reacting fast to avoid crashing. We use the A2C algorithm, the synchronous version of the Advantage Actor-Critic which performs better on this game than A3C, reaching over 10, 000 game points on level 1.For the GAN training, we collect 100k 84x84 frames from each of levels 2, 3 and 4 by running an untrained agent repeatedly until we have collected sufficient samples (training an RL agent to reach a score of 10,000 points on Level 1 required observing above 100M frames.). Using the collected images we train a mapping function for each task to map the new level (target task) to the first one (source task). We use the same GAN architecture used for Breakout, but initialize the weights with Orthogonal initialization. Compared to Breakout, these tasks introduce new challenges: rather than removing a mostly static element, the GAN has to be able to change the and road size while keeping the cars in the right position on the road. On the other hand, this setup may be closer to the one unaligned GANs are usually applied in. We restrict ourselves to collecting images from the beginning of the game, before the agent had any training. This restricts the phenomena the GAN can observe, and some target tasks' images do not have a clear corresponding situation in the first level, potentially causing unpredictable behaviors. For example, the generator matches the diagonal shaped roads to one of the first rare and unique images of level 1 (Fig. 5e).Our experiments presented in TAB1 demonstrate how an agent trained to master the first level of the game fails to follow the optimal policies on new levels, reaching 0 points. However, with the GAN-based visual analogies the agent is able to apply some of the abilities it gained when training on the first level, most notably driving fast, staying on the road, avoiding some cars, and, most importantly, recovering from car crashes. The ing agent achieves impressive scores on levels 2, 3 and 4 (5350, 5350 and 2050 points, respectively), with no additional RL training and while observing only a fraction of the frames required for training a corresponding RL agent from scratch for these levels. Limitations. While the GAN works well in generating objects it has seen before, such as the agent's car, it does have limitations on objects it has not seen. As a , it ends up generating differently colored cars all in red, or not generating them at all, as shown in Fig. 5a, 5d and 5f. Colorful cars can be "collected" by the agent and are worth 1000 points each. Generating them in red makes the agent avoid them, losing these extra points and achieving overall lower scores even if finishing the track. When cars are not fully generated, the agent is less likely to avoid them, and eventually crashes. Data Efficiency. We measure the number of frames of game-interaction needed for the analogytransfer method. 
We collect 100k frames, and then train the GAN for up to 500k iterations, evaluating it every 10,000 iterations by running the game and observing the score, and pick the best scoring model. This amounts to 100k + 50 * F frames, where F = 3000 is roughly the average number of frames in a game, i.e., about 250k frames of game interaction for each transferred level, an order of magnitude fewer interaction frames than training an RL agent to achieve a comparable score. Figure 5: Left: the original frame. Right: GAN generated. The upper row shows the success cases of the GAN while the lower row shows representative failures: in (d) and (f) the only object generated on the road is the player's car, and in (e) the diagonally shaped road of level 2 is matched to the starting point of level 1. Discussion. The approach successfully transfers to most tasks, achieving the best scores on levels 2 and 3. In general, as the game progresses, more obstacles are presented, making it harder to play. One of our best results is achieved on level 2, where the road is identical to level 1's, reaching the best score after 320k GAN iterations. On level 3, the challenge increases as the road is very narrow, making it harder for a player to avoid crashing. However, the GAN manages to generate the road in the right shape in most frames and position the cars in the matching ratio. Moreover, despite the challenges, due to the agent's ability to avoid crashing when colliding with cars it gets over 5000 points after 450k GAN iterations. In level 4 we get the maximum score after 270k iterations. This level is also the most challenging one to play, which might be the reason for the score being the lowest out of all tasks. The main obstacles are the sudden turns in the road, causing the car to be very close to the sides of the road, and the increasing dangers a player has to avoid. These difficulties make this level much harder than level 1 and might require more training even from a human player. Overall, the agent performs quite well by successfully applying 2 out of 3 main capabilities it gained during training. It is missing the third capability, avoiding collisions and collecting bonus cars, mainly because of bad generation. We believe that these tasks and results demonstrate the success of the analogy transfer method for zero-shot generalization across different levels of a video game. They also suggest the potential to perform well on additional real-world tasks in which visual analogies can be made. Evaluating GAN models and comparing them to each other is a major challenge: working with images and without well-defined objectives, testing the quality of the generator is delegated to human judgment, often using crowdsourcing to evaluate the generated images BID13; BID5. This metric is not stable and can be unreliable due to changing human factors. Others use a linear classifier to evaluate the image representations the GAN learned on supervised datasets, e.g., MNIST and CIFAR-10. These approaches and others may be good enough for a limited group of test images but do not necessarily reflect the performance of a model in the real world. We note that while unaligned GANs are known to achieve impressive results with only few training examples, our seemingly trivial translation cases proved to be quite challenging for the unaligned GAN architectures: producing valid translations that work in the game requires training on a substantial number of images.
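To make the model-selection procedure and the frame accounting above concrete, here is a small sketch. The function names and the evaluation hook are placeholders, and the frame counts simply restate the 100k + 50 * F budget described in the text.

```python
def select_gan_by_score(gan_train_step, play_one_game, total_iters=500_000,
                        eval_every=10_000, frames_per_game=3_000):
    """Train the GAN, evaluate it in-game every 10k iterations, and keep the
    best-scoring checkpoint while tallying interaction frames."""
    best_score, best_iter, eval_frames = float("-inf"), 0, 0
    for it in range(1, total_iters + 1):
        gan_train_step()
        if it % eval_every == 0:
            score = play_one_game()            # source policy acting on translated frames
            eval_frames += frames_per_game     # one game of roughly F = 3000 frames
            if score > best_score:
                best_score, best_iter = score, it
    total_frames = 100_000 + eval_frames       # collected frames + 50 evaluation games
    return best_iter, best_score, total_frames
```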
During training, the GAN sees images from the early stages of the game, where only a few bricks are missing in Breakout and there are no obstacles in Road Fighter. Our task requires it to generalize by translating images where objects are removed, added or in different locations. This requires levels of generalization that may not be reflected in existing test sets. We argue that using simple, well defined yet diverse test cases, situated in the context of a concrete down-stream task like we do here, is an important step forward in the evaluation of GAN models. We propose to evaluate GANs by running a game in which the trained deep RL network that learned policies based on images now receives images from the same domain generated with the GAN, and to equate a successful GAN model with one resulting in a high game score. We use the approach to compare Cycle-GAN and UNIT-GAN BID19. We examine both methods by running the RL agent with each every 1000 GAN training iterations and considering the maximum score after 500k iterations. We present the results in TAB2. The UNIT GAN performs better than Cycle-GAN in most Breakout tasks, while Cycle-GAN outperforms UNIT in most Road Fighter tasks while requiring fewer iterations. The main difference between the two methods is the weight-sharing constraint applied in UNIT, making the domains dependent on each other by sharing and updating the weights of one or several decoder and encoder layers. We hypothesize this constraint is an advantage in tasks where the representations of the images in the different domains are similar. Thus, in Breakout, where most pixels are identical in the source and target images, Cycle-GAN often fails where UNIT succeeds. However, in tasks such as Road Fighter's, where most pixels are different, the agent could benefit from architectures such as Cycle-GAN where the two domains are independent of each other. Transfer Learning (TL) is a machine learning technique used to improve the training speed of a target task with knowledge learned in a source task. Pretraining and fine-tuning were proposed in BID11 and applied to TL in BID0 and BID4. In this procedure, the approach is to train the base network and then copy its first n layers to the first n layers of a target network. One can choose to update the feature layers transferred to the new task with the error backpropagated from its output, or they can be left frozen, meaning that they do not change during training on the new task. Unfortunately, as we have shown, while fine-tuning might have the ability to accelerate the training process in some cases, it can also have a damaging impact in others. Generalization is a key element in training deep learning models with time or data size constraints. Recent discussions on overfitting in deep RL algorithms BID30 encouraged better evaluation (e.g., the OpenAI Retro Contest) and generalization methods. In Atari, there are many similarities between the goals and the mechanics of the games. For this reason, there have been many works attempting to transfer between games or between different variations of the same game; one approach trying to do both is progressive networks BID24. A progressive network is constructed by successively training copies of A3C on each task of interest. In that work, they transferred between different games as well as between different variations of the game Pong. The drawback of this approach is the number of parameters growing quadratically with the number of tasks.
However, even if this growth rate were improved, different tasks may require different adjustments, and the predefined number of layers and fixed network representation prevent such adjustments. Zero-shot generalization is an actively discussed and researched topic. One work transferring between modified versions of the same game using zero-shot transfer is schema networks BID14. Like us, they also chose to demonstrate their method on the game Breakout, using an Object-Oriented Markov Decision Process. In contrast, we do not use the representation of the objects in the game, and we wish to preserve the accomplishments of DQN and transfer using only raw data. Others attempted to achieve robust policies using learned disentangled representations of the image BID10, analogies between sets of instructions BID22, or interactive replay BID3 while training, and to learn general policies by training on multiple tasks in parallel BID6 BID26. Finally, the idea of using GANs for transfer learning and domain adaptation was explored for supervised image classification and robotics applications by several authors BID1 BID12 BID18 BID2. In these methods, there is supervised source domain data available. Our RL-based setup is different: first, our coverage of target-domain data is very limited (we can only observe states which are reachable by the un-adapted or untrained agent). Second, we do not have access to supervised gold labels on the source domain, but only to a learned policy network. Third, interactions with the game environment provide very indirect rewards, so using this reward signal to influence the GAN training would be very inefficient. We thus opt for a different strategy: rather than mapping the source to the target domain and training on the projected signal, which is unrealistic and costly in the RL setup, we instead take a pre-trained source model and train an unaligned GAN to map from the target domain back to the source domain, in order to re-use the source model's knowledge and apply it to the target domain data. We believe this form of using GANs for transfer learning is novel. We demonstrated the lack of generalization by looking at artificially constructed visual variants of a game (Breakout), and different levels of a game (Road Fighter). We further show that transfer learning by fine-tuning fails. The policies learned using model-free RL algorithms on the original game are not directly transferred to the modified games even when the changes are irrelevant to the game's dynamics. We present a new approach for transfer learning between related RL environments using GANs without the need for any additional training of the RL agent, and while requiring orders of magnitude fewer interactions with the environment. We further suggest this setup as a way to evaluate GAN architectures by observing their behavior on concrete tasks, revealing differences between the Cycle-GAN and UNIT-GAN architectures. We believe our approach is applicable to cases involving both direct and less direct mapping between environments, as long as an image-to-image translation exists. While we report success in analogy transfer using unaligned GANs, we also encountered limitations in the generation process that made it difficult for the agent to maximize the results on the Road Fighter tasks. In future work, we plan to explore a tighter integration between the analogy transfer method and the RL training process, to facilitate better performance where dynamic adjustments are needed in addition to the visual mapping.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkxjnjA5KQ
We propose a method of transferring knowledge between related RL tasks using visual mappings, and demonstrate its effectiveness on visual variants of the Atari Breakout game and different levels of Road Fighter, a Nintendo car driving game.
A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization. We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years. We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings. We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning. Finally, we synthesize a ``user's guide'' to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings. Adaptive gradient methods have remained a cornerstone of optimization for deep learning. They revolve around a simple idea: scale the step sizes according to the observed gradients along the execution. It is generally believed that these methods enjoy accelerated optimization, and are more robust to hyperparameter choices. For these reasons, adaptive optimizers have been applied across diverse architectures and domains. However, in recent years, there has been renewed scrutiny on the distinction between adaptive methods and "vanilla" stochastic gradient descent (SGD). Namely, several lines of work have purported that SGD, while often slower to converge, finds solutions that generalize better: for the same optimization error (training error), adaptive gradient methods will produce models with a higher statistical error (holdout validation error). This claim, which can be shown to be true in convex overparameterized examples, has perhaps muddled the consensus between academic research and practitioners pushing the empirical state of the art. For the latter group, adaptive gradient methods have largely endured this criticism, and remain an invaluable instrument in the deep learning toolbox. In this work, we revisit the generalization performance of adaptive gradient methods from an empirical perspective, and examine several often-overlooked factors which can have a significant effect on the optimization trajectory. Addressing these factors, which does not require trying yet another new optimizer, can often account for what appear to be performance gaps between adaptive methods and SGD. Our experiments suggest that adaptive gradient methods do not necessarily incur a generalization penalty: if an experiment indicates as such, there are a number of potential confounding factors and simple fixes. We complete the paper with a discussion of inconsistent evidence for the generalization penalty of adaptive methods, from both experimental and theoretical viewpoints. Our work investigates generalization of adaptive gradient methods, and constructively comments on the following: The brittleness of simple experiments and simple abstractions. We attempt a replication of the experiments from , finding that they have not stood up to unknown hardware and software differences. We show simple theoretical settings where adaptive methods can either fail or succeed dramatically, as compared to SGD. Though each can shed interesting insights, neither abstraction is reflective of the truth. The perils of choosing a large ε. The innocuous initial accumulator value hyperparameter ε destroys adaptivity at parameter scales smaller than √ε. This really matters in large-scale NLP; a foolproof solution is to use our proposed "ε = 0" variant of AdaGrad.
The subtleties in conducting a proper optimizer search. The differences between Adam, AdaGrad, and RMSprop are not fundamental; some, like AdaGrad's lack of momentum, are easily fixable. Upon disentangling these differences, and with enough tuning of the learning rate schedule, we suggest that all three are equally good candidates in optimizer search, and can match or beat SGD. Adaptive regularization was introduced along with the AdaGrad algorithm in parallel in . A flurry of extensions, heuristics and modifications followed, most notably RMSprop and Adam . Today, these papers have been cited tens of thousands of times, and the algorithms they propose appear in every deep learning framework. For an in-depth survey of the theory of adaptive regularization and its roots in online learning, see . Upon a quick perusal of recent literature, there is plenty of evidence that adaptive methods continue to be relevant in the state of the art. Adam in particular remains a staple in recent developments in fields such as NLP (; ;), deep generative modeling (; ;), and deep reinforcement learning . Adaptive methods have seen adoption in extremely large-scale settings, necessitating modifications to reduce memory consumption (; ;). In recent years, there have been various works attempting to quantify the generalization properties of SGD. These varied perspectives include general analyses based on stability and early stopping , a characterization of the implicit bias in special separable cases (a; b), and a more fine-grained analysis for neural networks exploiting their specific structure . More recently, there has been growing interest in understanding the interpolation regime for overparameterized function fitting, where SGD is often the basic object of analysis . Finally, empirical questions on the generalization of adaptive gradient methods were brought to the forefront by , who exhibit empirical and theoretical situations where adaptive methods generalize poorly. Building on this premise, suggest switching from Adam to SGD during training. develop a doctrine of "superconvergence" which eschews adaptive methods. point out some pathological settings where Adam fails to converge, and amend the algorithm accordingly. note some sociological problems leading to misleading research on optimizer selection, providing a benchmarking suite for fairer hyperparameter searches, with mixed preliminary results. We begin by reviewing the stochastic optimization setting, and giving rigorous definitions of the adaptive gradient methods commonly used in practice. We will focus on stochastic optimization tasks of the form min_w F(w) = E_{z∼D}[f(w, z)], where the expectation is over a random variable z whose distribution D is initially unknown; in machine learning, z often represents a pair (x, y) of an example x and its corresponding label y, drawn from an unknown population. A stochastic optimization algorithm is given a sample z_1, ..., z_T ∼ D from the underlying distribution, and produces a point w ∈ R^d whose population loss F(w) is as close as possible to that of the minimizer w* = arg min_w F(w). Often, iterative (first-order) optimization methods maintain a sequence of iterates w_1, ..., w_T and, at each step t, use the stochastic gradient to form the next iterate w_{t+1}. The simplest stochastic optimization method is Stochastic Gradient Descent (SGD), whose update rule at step t takes the form w_{t+1} = w_t − η_t ∇f(w_t, z_t), where η_t > 0 is a step size (or learning rate) parameter, whose scale and time-varying behavior are typically determined via hyperparameter search.
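As a minimal illustration of the SGD update just stated (nothing more than the formula above, in NumPy):

```python
import numpy as np

def sgd_step(w, grad, lr):
    """One SGD step: w_{t+1} = w_t - eta_t * grad."""
    return w - lr * grad

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.ones(3)
for t in range(1, 11):
    w = sgd_step(w, grad=w, lr=0.1)
```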
Adaptive gradient methods comprise a general family of iterative optimization algorithms which attempt to automatically adapt to anisotropic gradient and parameter sizes. Often, an adaptive method will incorporate a different (adaptively computed) step size for each entry of the gradient vector. More specifically, the most common adaptive methods divide each parameter's gradient update by a second-moment-based estimate of the scale of its historical gradients. A concise way to unify this family of adaptive methods is given by the following update equation (starting from an arbitrary initializer w_0): The above update expresses a broad family of methods including SGD, momentum (i.e., Polyak's heavy-ball method), AdaGrad, RMSprop, and Adam. The particular instantiations of the parameters α_k, β_k are summarized below: Table 1: Parameter settings for common optimization algorithms in the unified framework of Equation 1. Here, (·)^2 denotes the entrywise square of a vector or matrix. We omit the ε parameters in the adaptive methods; see the discussion in Section 3.1. In this section, we compile some lesser-known practices in the usage of adaptive methods, which we have found to help consistently across large-scale experiments. We emphasize that this collection is restricted to simple ideas, which do not add extraneous hyperparameters or algorithmic alternatives. The general AdaGrad update, as originally proposed by , includes a parameter ε to allow for convenient inversions. Formally, the update looks like: The inclusion of ε in the original proposal of the above updates seems to have been made for convenience of presenting the theoretical results. However, in practice, ε often turns out to be a parameter that should be tuned depending on the problem instance. The default value of this parameter in standard implementations of the algorithm tends to be quite high; e.g., in TensorFlow it is 0.1. Large values result in AdaGrad reducing to SGD with an implicit 1/√ε learning rate, losing out on all adaptive properties (RMSprop and Adam implementations also have an equivalent epsilon parameter). The effect can be seen in Figure 4, which shows that along many coordinates the accumulated second moments of the gradient are very small, even in the middle of training. At least one work remarks that the ability to choose a large ε in a second-moment-based adaptive method might be a feature rather than a shortcoming; the smooth interpolation with SGD may improve the stability of more exotic optimizers. This does not appear to be the case for diagonal-matrix adaptive methods, in the NLP setting investigated in this paper. Instead, we suggest removing this hyperparameter altogether (a choice we justify empirically in Section 4.2) and performing the AdaGrad update with the pseudoinverse instead of the full inverse. Then, the update is given by the following: where A† denotes the Moore-Penrose pseudoinverse of A, and with the preconditioning matrices updated as before. The above means that if there is a coordinate for which the gradient has been 0 thus far, we make no movement in that coordinate. This fix, which can similarly be applied to the full-matrix version of AdaGrad, does not affect the regret guarantees of AdaGrad. We provide an analysis in Appendix B, verifying as a sanity check that the standard AdaGrad regret bounds continue to hold when ε is completely removed.
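A diagonal-AdaGrad step with ε removed can be sketched as follows; coordinates whose accumulated squared gradient is still zero are simply left untouched, which is the pseudoinverse convention described above. This is an illustrative NumPy sketch, not the paper's implementation.

```python
import numpy as np

def adagrad_eps0_step(w, grad, accumulator, lr):
    """Diagonal AdaGrad update with epsilon = 0 (pseudoinverse semantics)."""
    accumulator += grad ** 2
    seen = accumulator > 0                      # coordinates with any gradient history
    step = np.zeros_like(w)
    step[seen] = grad[seen] / np.sqrt(accumulator[seen])
    return w - lr * step, accumulator
```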
A key distinction between AdaGrad, RMSprop and Adam is as follows: AdaGrad does not include momentum, and there is a per-parameter learning rate which is inversely proportional to the square root of the accumulated squared gradients for that parameter. RMSprop, as described in , uses exponential moving averaging rather than the straightforward accumulation that AdaGrad relies on, and Adam modifies RMSprop to add momentum for the gradients along with a bias-correction factor. Note that the implementation of RMSprop can vary based on the software library; e.g., TensorFlow includes a modification to add momentum, while the Keras API does not. We note that it is straightforward to extend AdaGrad to incorporate heavy-ball momentum, where we start with ḡ_0 = 0 (and from a certain initialization w_0) and iteratively update: The original definition of the Adam optimizer includes a bias correction term, in which the moment estimates are multiplied by the time-varying scalars (1 − β_1^t) and (1 − β_2^t). As mentioned in the original paper, the bias correction can equivalently be written as an update to the learning rate. In the notation of Table 1: As can be seen from Figure 2, for the typical values of β_1 = 0.9 and β_2 = 0.999, the effective multiplier on the learning rate essentially resembles an external warmup applied on top of the learning rate. The general heuristic of including a warmup phase at the beginning of training has gained significant popularity in state-of-the-art empirical works; see, for example, . Applying such a warmup externally on top of Adam results in three hyper-parameters (β_1, β_2, and the amount of warmup) conflating with each other, making hyper-parameter tuning difficult. Instead, we suggest disabling this bias correction altogether and using an explicit warmup schedule in its place. We use such a schedule in all of our experiments for SGD as well as adaptive optimizers, as we find that it helps consistently across language modelling experiments. One motivation for warmup during the initial stages of training is that for adaptive updates, the squared norm of the preconditioned gradient during the initial stage is quite large compared to the scale of the parameters. For the initial steps the preconditioned gradient's squared norm is proportional to the number of coordinates with non-zero gradients, whereas the squared norm of the parameters is proportional to the number of nodes. Therefore adaptive methods are naturally forced to start with a smaller learning rate. The warmup in such a case helps the learning rate to rise while the norm of the gradients falls sharply as training proceeds. Learning rate decay schedules are among the hardest hyperparameters of an optimizer to tune, yet they are crucial. For stochastic-gradient-like algorithms, domain-specific learning rate schedules have been derived over time with a lot of care and effort; examples include ResNet-50 on ImageNet-2012, where the state-of-the-art configuration of SGD+Momentum follows a staircase learning rate schedule (while other types of schedules may be possible). Adaptive algorithms a priori come with the promise of not requiring massive tuning of these schedules, as they have a built-in schedule; the caveat is that AdaGrad variants like Adam or RMSprop do not enjoy a data-dependent decay like AdaGrad's, due to the presence of a non-zero decay factor, and require an external learning rate decay schedule. Even the paper introducing the Adam optimizer includes experiments with a 1/√T decay schedule for convergence.
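The quantities discussed above — the bias correction viewed as a learning-rate multiplier, and the common warmup and decay schedules — can be written as simple functions of the step or epoch. The sketch below uses the standard Adam identity for the multiplier and illustrative (not paper-specified) values for the warmup length and staircase boundaries.

```python
import math

def adam_bias_correction_multiplier(t, beta1=0.9, beta2=0.999):
    """Adam's bias correction folded into the learning rate:
    sqrt(1 - beta2^t) / (1 - beta1^t), which rises towards 1 like a warmup."""
    return math.sqrt(1.0 - beta2 ** t) / (1.0 - beta1 ** t)

def linear_warmup(step, warmup_steps=10_000):
    """Explicit warmup schedule that can replace the bias correction."""
    return min(1.0, step / warmup_steps)

def inv_sqrt_decay(step):
    """The 1/sqrt(T) external decay mentioned for Adam-style optimizers."""
    return 1.0 / math.sqrt(max(step, 1))

def staircase_decay(epoch, boundaries=(30, 60, 80), factor=0.1):
    """Staircase schedule of the kind tuned for SGD+Momentum on ImageNet;
    the boundaries and factor here are placeholders, not the tuned values."""
    return factor ** sum(epoch >= b for b in boundaries)
```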
In our experiments, we found this implicit decay of AdaGrad to be sufficient for achieving superior performance on training a machine translation model, while an external decay rate was necessary for training ResNet-50 on ImageNet-2012 to high accuracy. We study the empirical performance of various optimization methods for training large state-of-the-art deep models, focusing on two domains: natural language processing (machine translation) and image recognition. We study the convergence of various optimization methods when training a Transformer model for machine translation. We used the larger Transformer-Big architecture ; this architecture has 6 layers in the encoder and decoder, with 1024 model dimensions, 8192 hidden dimensions, and 16 attention heads. It was trained on the WMT'14 English to French dataset (henceforth "en→fr") that contains 36.3M sentence pairs. All experiments were carried out on 32 cores of a TPU-v3 Pod and make use of the Lingvo sequence-to-sequence TensorFlow library. We compared several optimization methods for training; the results are reported in Fig. 3. We see that a properly tuned AdaGrad (with ε = 0 and added momentum) outperforms Adam, while SGD with momentum, plain AdaGrad and RMSprop perform much worse on this task. These results illustrate that adaptivity and momentum are both extremely effective in training these models. In Section 3.1, we proposed an "ε = 0" variant of AdaGrad. Here we empirically motivate this modification, by investigating the effect of the parameter ε on the performance of AdaGrad. We train the Transformer model from above on the en→fr dataset using AdaGrad while varying the value of ε. The results are given in Fig. 4. We see a drastic improvement in convergence as we lower the value of ε down to 10^-7 (lower values do not improve convergence further and are thus omitted from the figure). To see where these dramatic improvements come from, we also visualize the histogram of the squared gradient values for the embedding layer of the model at step t = 116200, which indicates that a large fraction of the cumulative gradient entries have extremely small magnitudes. The choice of ε is thus important, and this justifies our prescription of removing the dependency altogether instead of tuning it as a separate hyper-parameter. Next, we trained a ResNet-50 architecture on the ImageNet-2012 dataset. The task is to classify images as belonging to one of the 1000 classes. Our training setup consists of 512 cores of a TPU v3 Pod and makes use of a relatively large batch size of 16386. As a baseline, we considered SGD with momentum with a highly-tuned staircase learning rate schedule, which achieves 75.3% test accuracy after 90 epochs. We compared several optimization methods on this task as seen in Fig. 5: the straightforward application of AdaGrad (with a fixed ε and with heavy-ball momentum) achieves only a paltry 63.94% test accuracy. Noticing that AdaGrad's implicit decay schedule does not decay sufficiently fast, an external decay rate of the form (1 − (current epoch − 50)/50)^2 was added starting at epoch 50. This change was sufficient for AdaGrad to reach a test accuracy of 74.76% -- a drastic jump of more than 10%. As demonstrated, the learning rate schedule is a highly important hyperparameter and requires tuning for each task. E.g., the baseline SGD is highly tuned and follows an elaborate staircase learning rate schedule to reach 75% test accuracy. We attempted to reproduce the experiments from , using the same codebases and identical hyperparameter settings.
Although we were able to replicate some of their findings on these smaller-scale experiments, others appear to be sensitive to hyperparameter tuning, and perhaps to subtle changes in the deep learning software and hardware stack that have occurred during the two years since the publication of that paper. In this section, we summarize these findings. Image classification. On the classic benchmark task of CIFAR-10 classification with a VGG network , we were able to replicate the results perfectly, using the same codebase. We repeated the hyperparameter search reported in the paper, found the same optimal base learning rates for each optimizer, and found the same stratification in performance between non-adaptive methods, Adam & RMSprop, and AdaGrad. Character-level language modeling. Curiously, our replication of the language modeling experiment using the same popular repository was successful in reproducing the optimal hyperparameter settings, but resulted in an opposite conclusion. Here, SGD found the objective with the smallest training loss, but Adam exhibited the best generalization performance. We believe that software version discrepancies (our setup: CUDA 10.1, cuDNN 7.5.1) may account for these small differences. Generative parsing. We turn to the Penn Treebank constituency parsing code accompanying . Using the same architectural and training protocol modifications as specified in , we were able to get the model to converge with each optimizer. However, for two of the settings (SGD and RMSprop), the best reported learning rates exhibited non-convergence (the fainter curves in Figure 6). As in the above experiment, the ranking of optimizers' training and generalization performance differs from that seen in the original report. include a fourth set of experiments, generative parsing of the Penn Treebank, using the code accompanying . Unfortunately, this DyNet implementation, which was last updated in 2016, encountered a fatal memory leak when training with our DyNet 2.1 setup. All relevant plots are given in Figure 6, with color codes selected to match Figures 1 and 2 in . Together, these experiments are further evidence for a widespread reproducibility crisis in deep learning: despite the authors' exceptional transparency in disclosing their optimizer selection and evaluation protocol, these benchmarks have turned out to be brittle for unknown reasons. Along the same lines as the random-seed-tuning experiments of , this suggests that there are further technical complications to the problems of credible optimizer evaluation addressed by , even on well-known supervised learning benchmarks. Figure 6: Top: character-level language modeling with a 2-layer LSTM. The originally reported hyperparameters are the best and all optimizers converge to reasonable solutions, but contradictory results about generalization arise. Bottom: 3-layer LSTM for generative parsing. Training does not converge with all reported learning rates; results about generalization are unclear. In this section we provide two simple examples of stochastic convex problems where it can be seen that, when it comes to generalization, either of AdaGrad and SGD can be significantly better than the other depending on the instance. Our purpose in providing both examples is to stress our point that the issue of understanding the generalization performance of SGD vs. adaptive methods is more nuanced than what simple examples might suggest, and hence such examples should be treated more as qualitative indicators, for the purpose of providing intuition.
Indeed, which algorithm will perform better on a given problem depends on various properties of the precise instance. We provide a brief intuitive review of the construction provided by ; for a precise description, see Section 3.3 of that paper. Consider a setting of overparameterized linear regression, where the true output (i.e., dependent variable) y ∈ {±1} is the first coordinate of the feature vector (independent variable) x. The next two coordinates of x are "dummy" coordinates set to 1; then, the remaining coordinates are arranged in blocks which only appear once per sample, taking the value of y. The key idea is that in this setting, the solution space that AdaGrad explores always lies in the subspace spanned by the sign vector of X^T y. As a result, AdaGrad treats the first three coordinates essentially indistinguishably, putting equal mass on each. It can then be seen that for any new example the AdaGrad solution does not extract the true label information from the first three coordinates and hence gets the prediction wrong, leading to high generalization error; the other distinguishing features belong to the new unique block, which are set to 0 in the AdaGrad solution, as it has not seen them. This example is motivated by the original AdaGrad paper , adapted to the overparameterized setting. Consider a distribution Z supported over {0, 1}^d with equal 1/d mass on vectors with exactly one 1, and 0 mass everywhere else. Let the label distribution be always y = 1. Consider sampling a dataset S of size c · d where c ≤ 1 (corresponding to the overparameterized setting) and consider the hinge loss f_t(x) = [1 − y_t (z_t · x)]_+, where (z_t, y_t) denotes the t-th (example, label) pair and x is the parameter vector. Note that there is an optimal predictor given by the all-ones vector. Running AdaGrad in such a setting, it can be seen that the first time a vector that has not appeared yet is sampled, AdaGrad quickly adapts by setting the coordinate corresponding to the vector to 1 and thereby making 0 error on the example. Therefore after one epoch of AdaGrad (cd steps), the training error reduces to 0 and the average test error becomes roughly (1 − c). On the other hand, for SGD (with an optimal 1/√t decay scheme), after say cd/2 steps the learning rate reduces to at most O(1/√d), and therefore in the next cd/2 steps SGD reduces the error at most by a factor of O(1 − 1/√d), leading to a total test error of at least ∼ (1 − c/2) after a total of cd steps. This is significantly larger than the error achieved by AdaGrad at this stage. Further note that to get down to the same test error as that achieved by AdaGrad, it can be seen that SGD requires at least Ω(√d) times more steps than AdaGrad.
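The second example is easy to simulate. The sketch below is an illustrative NumPy experiment under the stated setup (one-hot examples, label y = 1, hinge loss, a single pass of c·d steps); it is not code from the paper, and the exact numbers will vary with the random seed.

```python
import numpy as np

def population_hinge_loss_after_one_pass(optimizer, d=1000, c=0.5, lr=1.0, seed=0):
    """Run c*d steps of AdaGrad or SGD (1/sqrt(t) decay) on one-hot examples
    with label y = 1 and loss [1 - <z, w>]_+, then return the average hinge
    loss over all d basis vectors (the population loss)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    acc = np.zeros(d)                          # AdaGrad's per-coordinate accumulator
    for t in range(1, int(c * d) + 1):
        i = rng.integers(d)                    # sample the one-hot example e_i
        if w[i] < 1.0:                         # hinge active: subgradient is -e_i
            if optimizer == "adagrad":
                acc[i] += 1.0
                w[i] += lr / np.sqrt(acc[i])   # first visit jumps w_i straight to 1
            else:
                w[i] += lr / np.sqrt(t)        # SGD step shrinks as t grows
    return float(np.mean(np.maximum(0.0, 1.0 - w)))

print("AdaGrad:", population_hinge_loss_after_one_pass("adagrad"))
print("SGD:    ", population_hinge_loss_after_one_pass("sgd"))
```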
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJl6t64tvr
Adaptive gradient methods, when done right, do not incur a generalization penalty.
The ability to generalize quickly from few observations is crucial for intelligent systems. In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered. These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall. We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint. In addition, its memory compression allows it to scale to thousands of unknown labels. Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification. In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning. Consider the following sequential decision problem: at every iteration of an episode we are provided with an image of a digit (e.g. MNIST) and an unknown symbol. Our goal is to output a digit Y = X + S where X is the value of the MNIST digit, and S is a numerical value that is randomly assigned to the unknown symbol at the beginning of each episode. After seeing only a single instance of a symbol an intelligent system should not only be able to infer the value S of the symbol but also to correctly generalize the operation associated with the symbol to any other digit in the remaining iterations of that episode. Despite its simplicity, this task emphasizes three cognitive abilities that a generic learning algorithm should display: 1. the algorithm can learn a behaviour and then flexibly apply it to a range of different tasks using only a few context observations at test time; 2. the algorithm can memorize and quickly recall previous experiences for quick adaptation; and 3. the algorithm can process these recalled memories in a non-trivial manner to carry out tasks that require reasoning. The first point is commonly described as "learning to learn" or meta-learning, and represents a new way of looking at statistical inference BID22 BID2 BID1. Traditional neural networks are trained to approximate arbitrary probability distributions with great accuracy by parametric adaptation via gradient descent BID13 BID23. After training that probability distribution is fixed and neural networks can only generalize well when the testing distribution matches the training distribution BID16. In contrast, meta-learning systems are trained to learn an algorithm that infers a function directly from the observations it receives at test time. This setup is more flexible than the traditional approach and generalizes better to unseen distributions as it incorporates new information even after the training phase is over. It also allows these models to improve their accuracy as they observe more data, unlike models which learn a fixed distribution. The second requirement -being able to memorize and efficiently recall previous experience -is another active area of research. Storing information in a model proves especially challenging as we move beyond small toy-examples to tasks with higher dimensional data or real-world problems. Current methods often work around this by summarizing past experiences in one lower-dimensional representation BID7 BID10 or using memory modules BID6. 
While the former approach can produce good , the representation and therefore the amount of information we can ultimately encode with such models will be of a fixed and thus limited size. Working with neural memory modules, on the other hand, presents its own challenges as learning to store and keep the right experiences is not trivial. In order to successfully carry out the task defined at the beginning of this paper a model should learn to capture information about a flexible and unbounded number of symbols observed in an episode without storing redundant information. Finally, reasoning requires processing recalled experiences in order to apply the information they contain to the current data point being processed. In simple cases such as classification, it is enough to simply recall memories of similar data points and directly infer the current class by combining them using a weighted average or a simple kernel BID26 BID24, which limits the models to performing interpolation. In the example mentioned above, more complex reasoning is necessary for human-level generalisation. In this paper we introduce Approximate Posterior Learning (APL, pronounced like the fruit), a self-contained model and training procedure that address these challenges. APL learns to carry out few-shot approximation of new probability distributions and to store only as few context points as possible in order to carry out the current task. In addition it learns how to process recalled experiences to carry out tasks of varying degrees of complexity. This sequential algorithm was inspired by Bayesian posterior updating BID8 in the sense that the output probability distribution is updated as more data is observed. We demonstrate that APL can deliver accuracy comparable to other state-of-the-art algorithms in standard few-shot classification benchmarks while being more data efficient. We also show it can scale to a significantly larger number of classes while retaining good performance. Finally, we apply APL to the reasoning task introduced as motivation and verify that it can perform the strong generalization we desire. The main contributions of this paper are:• A simple memory controller design which uses a surprise-based signal to write the most predictive items to memory. By not needing to learn what to write, we avoid costly backpropagation through memory which makes the setup easier and faster to train. This design also minimizes how much data is stored, making our method more memory efficient.• An integrated external and working memory architecture which can take advantage of the best of both worlds: scalability and sparse access provided by the working memory; and all-to-all attention and reasoning provided by a relational reasoning module.• A training setup which steers the system towards learning an algorithm which approximates the posterior without backpropagating through the whole sequence of data in an episode. 2.1 ARCHITECTURE Our proposed model is composed of a number of parts: an encoder that generates a representation for the incoming query data; an external memory store which contains previously seen representation/ data pairings with writing managed by a memory controller; and a decoder that ingests the query representation as well as data from the memory store to generate a probability distribution over targets. We describe each of the parts in detail below 1.Encoder The encoder is a function which takes in arbitrary data x t and converts it to a representation e t of (usually) lower dimensionality. 
In all our experiments x_t is an image, and we therefore choose a convolutional network architecture for the encoder. Architectural details of the encoder used for each of the experiments are provided in the appendix. Figure 1: APL model applied to the classification of an Omniglot image. The encoded image is compared to the entries of the memory and the most relevant ones are passed through a decoder that outputs a probability distribution over the labels. The dotted line indicates the parts of the graph that are not updated via back-propagation. Memory store The external memory module is a database containing the stored experiences. Each of the columns corresponds to one of the attributes of the data. In the case of classification, for example, we would store two columns: the embedding e_m and the true label y_m. Each of the rows contains the information for one data point. The memory module is queried by finding the k-nearest neighbors between a query and the data in a given column. The full row data for each of the neighbors is returned for later use. The distance metric used to calculate proximity between the points is an open choice, and here we always use Euclidean distance. Since we are not backpropagating through the memory, how do we ensure that the neighbors returned by the querying mechanism contain task-relevant information? We expect that class-discriminative embeddings produced by the encoder should cluster together in representation space, and therefore should be close in the sense of Euclidean distance. While this is not mathematically necessary, in the following sections we will show that APL as proposed does work and retrieves the correct neighbors, which means that in practice our intuition holds true. We use a simple memory controller which tries to minimize the number of data points written to memory. Let us define the surprise associated with the prediction for a data point with true label y_t as S = −ln(ŷ_t), where ŷ_t is the probability the model assigns to that label. Intuitively, this means that the higher the probability our model assigns to the true class, the less surprised it will be. This suggests a way of storing the minimal number of data points in memory which supports maximal classification accuracy. If a data point is 'surprising', it should be stored in memory; otherwise it can be safely discarded, as the model can already classify it correctly. How should the memory controller decide whether a point is 'surprising'? In this work we choose the simplest possible controller: if the surprise is greater than some hyperparameter σ, then that data point should be stored in memory. For our experiments, we choose σ ∝ ln(N), where N is the number of classes under classification, which means that if the prediction confidence in the correct class is smaller than the probability assigned by a uniform prediction, the value should be written to memory. In the appendix we show that after training, model performance is robust to variations in σ, as surprise becomes highly bimodal: a new data point tends to be either highly surprising (never seen something similar before) or not very surprising. Conveniently, in the case of classification problems the commonly used cross-entropy loss reduces to our measure of surprise, and we therefore use the prediction loss directly as the input to the memory controller. Decoder The decoder takes as input the query representation as well as all the data from the neighbors found in the external memory.
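Before the decoder details, here is a minimal sketch of the surprise-based write rule just described. The function and variable names are illustrative, not taken from the paper's implementation.

```python
import math

def maybe_write_to_memory(memory, embedding, label, loss, num_classes):
    """Store (embedding, label) only if the prediction was surprising.

    `loss` is the cross-entropy of the current prediction, which equals the
    surprise S = -ln(p(true label)); the threshold corresponds to a uniform
    prediction over num_classes labels."""
    sigma = math.log(num_classes)              # sigma ~ ln(N)
    if loss > sigma:
        memory.append((embedding, label))      # one new row: (e_m, y_m)
    return memory
```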
We designed a relational feed-forward module with self attention which takes particular advantage of the external memory architecture. In addition we tested two other established decoder architectures: an unrolled relational working memory core and an unrolled LSTM. As all experiments have a classification loss at the end, all the decoders return a vector with logits for the N classes under consideration. Full details of each architecture are provided in the Appendix.• Relational self-attention feed-forward decoder. The relational feed-forward module (see query, and then does a cross-element comparison with a self-attention module before reducing the activations with an attention vector calculated from neighbor distances.• Relational working memory decoder The relational working memory module (figure 2, right) takes in the concatenated neighbor embeddings and corresponding label embeddings as its initial memory state. The query is fed a number N times as input to the relational memory core to unroll the computation.• LSTM decoder Finally we also test a vanilla LSTM decoder that takes in the query as the initial memory state and is fed each of the concatenated neighbor embeddings and corresponding label embeddings as its input each time step. Since we are looking for a system which can update its beliefs in an online manner we need a training procedure that reflect this behaviour. We train the system over a sequence of episodes that are composed of sequences of pairs (x t, y t). At the start of every episode the mapping x t → y t is shuffled in a deterministic manner (the exact details are task dependent and will be outlined in the experiments section). The data is then presented to the model sequentially in a random order. The model's memory is empty at the beginning of the episode. At each time step, a batch of examples is shown to the model and a prediction is made. We then measure the instantaneous loss L(ŷ t, y t) and perform a gradient update step on the network to minimize the loss on that batch alone. The loss is also fed to the memory controller for the network to decide whether to write to memory. In all the experiments below the task is to classify some quantity, therefore we use cross entropy loss throughout. APL learns a sequential update algorithm, that is, it minimizes the expected loss over an episode consisting of a number of data elements presented sequentially. However we don't need to backpropagate through the sequence to learn the algorithm. Rather, the model's parameters are updated to minimize the cross-entropy loss independently at each time step. Therefore the only pressure to learn a sequential algorithm comes from the fact that episodes are kept small so that the decoder is encouraged to read the information coming from the queried neighbors instead of just learning to fit the current episode's label mapping in its weights after a few steps of gradient descent (which is what happens in the case of MAML BID4). Meta-learning as a research field covers a large number of areas. The concept of'learning to learn' is not tied to a specific task and thus meta-learning algorithms have been successfully applied to a wide range of challenges like RL BID27 BID4, program induction few-shot classification BID12 BID26 and scene understanding.Some meta learning models generate predictions in an autoregressive fashion by predicting the next target point from the entire prior sequence of consecutive observations BID18 BID14. 
Algorithms of this kind have delivered state-of-the-art results in a range of supervised learning tasks such as classification. Nonetheless, their reliance on the full context history, in addition to their autoregressive nature, hinders parallelization and hurts performance and scalability. Another set of methods is based on the nearest neighbours approach BID12 BID26 BID24. These methods use an encoder to find a suitable embedding and then perform a memory lookup based on these representations. The prediction is a weighted average of the returned labels. As shown in BID14, using a pure distance metric to compare neighbors results in worse performance than allowing a network to learn a comparison function. These kinds of methods thus suffer in comparison. Meta Networks BID15 also use an external memory to enable learning from previous examples, combined with a model featuring slow and fast weights to produce the output, enabling them to reach state-of-the-art performance in several benchmarks. Conditional neural processes summarize the data into a fixed representation by averaging over the outputs of an encoder. This representation is fed into a decoder together with a query to produce the output. These methods are more space and compute efficient, but given the fixed and averaged representation they may not scale to very large problems. All of the above methods expect a fixed-size context, thereby making life-long learning over large time horizons difficult. To enable this, an algorithm must learn to only write observations into memory when they provide additional predictive power. Memory augmented neural networks (MANN) BID20 achieve this by learning a controller to write into a differentiable neural dictionary. However, this requires backpropagating through the entire sequence to learn, which makes credit assignment over long time sequences hard and is computationally expensive. The idea of using an external memory module has been explored in BID9 and shown to produce good results. Compared to that work, we introduce a simpler writing mechanism and the idea of a relational decoder to exploit the nearest neighbor structure. The Omniglot dataset contains 1623 characters with 20 examples each. 1200 of the character classes are assigned to the train set while the remaining 423 are part of the test set. The examples are presented to the model sequentially in batches of 16 examples. For each episode, we choose N classes and shuffle their labels. We then run an episode for a certain number of steps, which is decreased as the model's accuracy increases to encourage quick adaptation. This means that the model accuracy and the number of items written to memory are time dependent. We follow an architecture similar to those used in previous work for the image encoder BID26 BID14, which consists of four convolutional blocks with 3x3 convolutions, ReLU and batch normalization. We augment the number of classes in the training set by rotating each symbol 90, 180 and 270 degrees, as in previous work BID20. For this task, we found that all three decoder architectures perform similarly. A detailed comparison and all hyperparameters needed to reproduce this experiment are provided in the Appendix. In Figure 4a we can see the behavior of the algorithm within a single episode. As it sees more examples, its performance increases until it saturates at some point, when additional writes don't help anymore (assuming the exact same data piece won't be seen again, which is the regime we always assume here).
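The Omniglot episode construction described above (sample N classes, shuffle their labels, and stream random batches of 16) might look roughly as follows. The annealed episode length is replaced by a fixed steps argument for simplicity, and class_to_images is an assumed mapping from class id to an array of images; nothing here is taken from the paper's code.

```python
import numpy as np

def sample_omniglot_episode(class_to_images, n_way, batch_size=16, steps=50, rng=None):
    """Sketch of one episode: pick N classes, shuffle their labels, then
    stream random batches; `steps` stands in for the annealed episode length."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(class_to_images), size=n_way, replace=False)
    label_of = {c: l for c, l in zip(classes, rng.permutation(n_way))}
    for _ in range(steps):
        xs, ys = [], []
        for _ in range(batch_size):
            c = rng.choice(classes)
            imgs = class_to_images[c]
            xs.append(imgs[rng.integers(len(imgs))])
            ys.append(label_of[c])
        yield np.stack(xs), np.array(ys)
```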
In the simple case of 5-way Omniglot classification, fewer than 2 examples per class are sufficient to saturate performance. In Figure 4b we demonstrate the evolution of the posterior distribution in 20-way classification for 3 different, fixed inputs. For the first step, where the memory is empty, APL learns to output a uniform distribution (p = 1/N, with N the number of classes under classification). As more examples are added to memory, its distribution is refined until it sees an informative example for that class, at which point its prediction becomes very confident. In Figure 4c we can see that different numbers of examples are written to memory for different classes, which demonstrates one of the advantages of this framework: we only need to store as many examples as each class requires; therefore, if some classes are more easily classified than others, we can use memory optimally. In contrast, other models feed in a fixed context size per class, which means they will either suffer in accuracy or use excessive memory. Figure 4: a) Accuracy and size of memory for 5-way Omniglot. APL stops writing to memory after having 2 examples per class for 5-way classification. b) Examples of the evolution of the posterior distribution for 20-way classification for 3 images. The distribution starts as uniform for the very first step, then starts to change as more items are added to memory. When the correct class is seen, the distribution converges. c) The number of labels stored in memory per class is highly heterogeneous. In this 20-way problem, APL stored 44 items in memory and achieved 98.5% accuracy, which is higher than its homogeneous 5-shot accuracy. d) Accuracy vs. number of items written to memory for 1000-way classification. Classification accuracy when only 2000 examples have been written to memory (on average 2 examples per class) surpasses the accuracy for a fixed context size of 5 examples per class. Despite the previous point, it is also worthwhile comparing model accuracy to existing baselines with a fixed context size. To this end, we pre-populate the memory of a trained model with 1 or 5 examples per class and calculate the accuracy over the test set. We emphasize that the model was not trained to do well in this fixed-context scenario, and yet for 1 and 5-shot classification we obtain performance comparable to state-of-the-art models without having extensively tuned hyperparameters. Furthermore, we tested our model with a much higher number of classes: 423-way classification, where we can use the whole test set, and 1000-way, where the test set is augmented with rotations of the characters as we do in training. Finally, we test how the model fares on a completely new distribution, MNIST. For 1-shot, 10-way MNIST, APL trained on 20-way Omniglot classification obtains 61% accuracy (compared to 72% cited by BID26). APL continues writing examples to memory as it sees surprising observations, which allows it to correct for the distribution shift. After writing 45 examples per class, it reaches an accuracy of 86% (there are 1000 examples per class in the MNIST test set, so it is not simply memorizing). We also applied our model to the full-scale ImageNet dataset. Unlike in the above experiments, there are no held-out classes for testing, as the dataset was not conceived with held-out classes in mind. Instead, we rely on shuffling the labels amongst the 1000 ImageNet classes and using the images from the test set for evaluation.
This means the generalization results are slightly weaker than in the above sections, but they still provide important insights as to the scalability of our method to thousands of classes and its applicability to harder scenarios. As an encoder we use the pretrained Inception-ResNet-v2 BID25 due to computational constraints. For the fixed label case, this network reaches a top-1 accuracy of 80.4%. Training the encoder end-to-end might produce better results, an investigation which we leave to later work. In the 20-way classification challenge, our method reaches 86.7% top-1 accuracy (average accuracy after 50 iterations). Performance remains very high for 100-way (72.9% top-1 accuracy). The model's performance degrades somewhat (52.6% top-1 accuracy) for 1000-way classification, where all the classes are shuffled. This highlights that large-scale meta-learning on real-world datasets remains a challenge, even when all the classes have been observed by the encoder, as in this case. The number analogy task challenges a meta-learning model to use logical reasoning to generalize with fewer than 1 example per possible class (FIG3). At each time step the network is shown two pieces of data, a number X and a symbol S. It is asked to classify the result of X + S based only on the current data and its previous experience. We experiment with two levels of difficulty for this task: in the first, the number values are fixed and correspond to the MNIST digits, while there are 10 different symbols with unknown values in each episode; in the second, both digits and symbols have shuffled values. We sample the symbol values in the range [−10, 10]. When querying the memory, we query k neighbors via the number embeddings and k neighbors via the symbol embeddings. This makes sure that any information relevant to the problem is available to the reasoning module. The rest of the training setup is identical to the Omniglot experiments, including the encoder network for the digits. In the case where the numbers are fixed, a human would only need to see 10 examples, one for each symbol, to be able to correctly generalize to all 100 possible combinations. With one example per symbol, APL reaches 97.6% accuracy on the test set. When both numbers and symbols are shuffled each episode, a logical deduction process must be performed to infer the correct symbols. Our model is able to generalize using 50 examples written to memory (FIG3), which is still fewer than seeing one of all 100 possible combinations. In this complex case, once a symbol's meaning has been figured out, it is no longer necessary to solve the system of equations for that unknown. It would be interesting to explore how a system could additionally store this information in memory for later reuse, a question we leave for later work. FIG3: On the left we fix the decoder (relational self-attention feed-forward module) and vary k. As there are 100 possible combinations of symbols (10 numbers × 10 symbols), the thick dashed line corresponds to the performance of a model capable of perfect 1-shot generalization. We can see that for k = 8 and k = 16 the decoder can infer the symbol and number meanings to do better than direct 1-shot classification. On the right we fix k = 16 and show that the relational self-attention feed-forward module can generalize better from few examples than the other decoder architectures. We introduced a self-contained system which can learn to approximate a probability distribution with as little data as possible and as quickly as it can.
This is achieved by putting together the training setup which encourages adaptation; an external memory which allows the system to recall past events; a writing system to adapt the memory to uncertain situations; and a working memory architecture which can efficiently compare items retrieved from memory to produce new predictions. We showed that the model can:
• Reach state-of-the-art accuracy with a smaller memory footprint than other meta-learning models by efficiently choosing which data points to remember.
• Scale to very large problem sizes thanks to the use of an external memory module with sparse access.
• Perform fewer-than-one-shot generalization thanks to relational reasoning across neighbors.
For all experiments below we use the same training setup. For each training episode we sample elements from N classes and randomly shuffle them to create training batches. For every batch shown to the model, we do one step of gradient descent with the Adam optimizer. We anneal the learning rate from 10^-4 to 10^-5 with exponential decay over 1000 steps (decay rate 0.9). For all experiments, the query data is passed through an embedding network (described below), and this vector is used to query an external memory module. The memory module contains multiple columns of data, one of which will be the key while the others contain data associated with that key. The necessary columns for each experiment are outlined below. The memory size is chosen so that elements will never be overwritten in a single episode (i.e. memory size > batch size × number of iterations). The returned memory contents as well as the query are fed to one of the decoder architectures described in section 6.2. For Omniglot, we encode the images with a convolutional network composed of a single first convolution to map the image to 64 feature channels, followed by 12 convolutional blocks. Each block is made up of a step of Batch Normalization, followed by a ReLU activation and a convolutional layer with kernel size 3. Every three blocks the convolution uses a stride of 2 to downsample the image. All layers have 64 features. Finally, we flatten the activations to a 1D vector and pass it through a Layer Normalization function. For all decoders we use a hidden dimensionality of 512, take their final state and pass it through a linear layer to generate the final logits for classification, using a Cross Entropy loss. The encoder is a pretrained Inception-ResNet-v2 network BID25 with the standard preprocessing as described in the paper. We use the pre-logit activations as the embedding. All decoders use a hidden dimensionality of 1024. After the decoder step we take their final state and pass it through a linear layer to generate the final logits for classification, using a Cross Entropy loss. The encoder for MNIST uses the same convolutional network as described in the Omniglot section. The symbols are one-hot vectors for the first set of experiments. The memory is queried for neighbors both of the digit embeddings as well as of the symbols, and the found neighbors are concatenated and fed to the decoder. The decoder process is identical to Omniglot. However, the classification target is now the one-hot encoding of the result of the computation X + S, where X is the digit value and S is the symbol value. As S ∈ [−5, 5], we add 5 to all values to obtain valid one-hot encodings (which means there are 20 possible values in all).
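A possible PyTorch rendering of the Omniglot encoder described above (initial convolution to 64 channels, 12 BatchNorm-ReLU-conv blocks with a stride of 2 every third block, flatten, LayerNorm). The input resolution (28x28), padding choices, and the resulting LayerNorm size are assumptions for illustration.

```python
import torch.nn as nn

class OmniglotEncoder(nn.Module):
    """Sketch of the convolutional encoder described above (details assumed)."""
    def __init__(self, in_channels=1, width=64, num_blocks=12):
        super().__init__()
        layers = [nn.Conv2d(in_channels, width, kernel_size=3, padding=1)]
        for b in range(num_blocks):
            stride = 2 if (b + 1) % 3 == 0 else 1   # downsample every third block
            layers += [
                nn.BatchNorm2d(width),
                nn.ReLU(),
                nn.Conv2d(width, width, kernel_size=3, stride=stride, padding=1),
            ]
        self.conv = nn.Sequential(*layers)
        # assumes 28x28 inputs -> 2x2 feature maps after four stride-2 steps
        self.norm = nn.LayerNorm(width * 2 * 2)

    def forward(self, x):
        h = self.conv(x).flatten(start_dim=1)   # 1D embedding vector
        return self.norm(h)
```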
This tensor (of shape [batch size, k, sum of all embedding feature sizes]) is fed to what we call a relational self-attentional block: first the tensor is passed through a multihead attention layer (FIG4), which compares all elements to each other and returns a new tensor of the same shape; then a shared nonlinear layer (ReLU, linear, layer norm) processes each element individually. The self-attentional blocks are repeated 8 times in a residual manner (the dimensionality of the tensor never changes). Finally, we pass the distances between the neighbors and the query through a softmax layer to generate an attention vector, which is multiplied with the activations tensor over the first axis (this has the effect of weighting closer memories more). The tensor is then summed over that first axis to obtain the final representation. We use a relational working memory core, as described in the main text (figure 2, right). The memory is initialized with the concatenated vectors {e_1..m, l_1..m, d_1..m}. The query e_t is fed N = 5 times as input to the relational memory core to unroll the computation. The final memory state is passed through a linear layer to obtain the logits. We use a standard LSTM module, with initial state equal to the query embedding, and at each time step we feed in the neighbor embedding concatenated with the embedded label. The LSTM is rolled out for k time steps (i.e. the number of neighbors). Its final output state is taken as input to the logits linear layer. We compared all three decoder architectures on the classification task and found they perform equally well, as shown in the figure below. For the analogy task, we found the relational self-attention feed-forward module to work best, as outlined in the main text. How does the choice of parameter σ affect the performance of APL? Empirically we have verified that, for a large range of σ, memory size and accuracy are largely unchanged. This is due to the feedback loop between the number of items stored and classification accuracy: as more items are stored in memory, more elements are correctly classified and not stored in memory. Therefore, the memory storage mechanism is self-regulating, and the number of elements in memory ends up being largely flat. In FIG7 we show the final memory size and average accuracy for the last 100 data points after showing APL 2000 unique data points for the case of 200-way classification. In this case the 'natural' (uniform-prediction) σ is around 5.2, which seems to be close to optimal for accuracy vs. elements in memory. We can increase the value somewhat, but eventually the model can't write to memory any more and performance drops sharply. On the other side of the curve, for σ = 0, where we write everything, performance is slightly higher but at a roughly 8x memory cost. While we study APL in the few-shot learning setting, the algorithm could also be used in the continual learning setup BID11 BID19 BID28 BID17. We consider an experiment where each task consists of learning 10 new and previously unseen classes. For each task we present the models with 200 unique examples, and report the average accuracy for the last 100 examples seen. Examples are drawn from the test set of classes. In the case of progressive networks, one gradient descent step is taken after each example. For each task, a new logits layer is added on top of a convolutional encoder (same architecture as APL) pretrained on the Omniglot training set. APL is run as described in the main text.
The results are summarized in figure 12: APL can perform as well as or better than a progressive network on this kind of task without needing access to gradient information, as its memory store can provide the requisite information to update its predictions for the new task. Figure 12: Accuracy of APL on a lifelong learning task where each task corresponds to learning 10 new classes in Omniglot. The baseline is a progressive net where the convolutional encoder is pretrained, and for every task a new logits layer is added and trained to classify the new classes. Results are the average accuracy over 5 runs. While not using any gradient information, APL performs as well as or better than progressive networks.
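For reference, the relational self-attention feed-forward decoder described in the appendix above might be sketched in PyTorch as follows. The hidden size, head count, and the sign convention used to turn neighbor distances into attention weights (closer neighbors weighted more) are assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class RelationalFFDecoder(nn.Module):
    """Sketch of the relational self-attention feed-forward decoder."""
    def __init__(self, dim=512, num_classes=20, num_heads=8, num_blocks=8):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_blocks)])
        self.mlp = nn.ModuleList(
            [nn.Sequential(nn.ReLU(), nn.Linear(dim, dim), nn.LayerNorm(dim))
             for _ in range(num_blocks)])
        self.logits = nn.Linear(dim, num_classes)

    def forward(self, rows, distances):
        # rows: [batch, k, dim] concatenated (query, neighbor, label) features
        # distances: [batch, k] Euclidean distances from the query to each neighbor
        h = rows
        for attn, mlp in zip(self.attn, self.mlp):
            a, _ = attn(h, h, h)      # cross-element comparison
            h = h + a                 # residual self-attention
            h = h + mlp(h)            # shared per-element nonlinearity
        weights = torch.softmax(-distances, dim=-1).unsqueeze(-1)  # closer -> larger weight
        pooled = (weights * h).sum(dim=1)
        return self.logits(pooled)
```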
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByeSdsC9Km
We introduce a model which generalizes quickly from few observations by storing surprising information and attending over the most relevant data at each time point.
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches. Recurrent neural nets (RNNs), including GRU nets BID6 and LSTM nets BID12, have been increasingly applied to many problems in natural language processing. Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks BID29 ) (e.g., language modeling BID2 BID20, machine translation BID13, conversational/dialogue modeling BID26, question answering BID11 BID17, and document summarization BID21); and the classification tasks (e.g., part-of-speech tagging BID23, chunking, named entity recognition BID7, sentimental analysis BID28, and document classification BID14 BID25). To solve these problems, models often need to read every token or word of the text from beginning to the end, which is necessary for most seq2seq problems. However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand. For instance, for sentiment analysis it is sufficient to read the first half of a review like "this movie is amazing" or "it is the best I have ever seen," to provide an answer even without reading the rest of the review. In other cases, we may want to skip or skim some text without carefully checking it. For example, sentences such as "it's worth to try" are usually more important than irrelevant text such as "we got here while it's still raining outside" or "I visited on Saturday." On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text. All of these techniques enable us to achieve fast and accurate reading. Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence. In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text. 
To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance. To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder. We evaluate our approach on four different sentiment analysis and document topic classification datasets. By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action BID33, we find that our approach achieves both higher efficiency and better accuracy. Given an input sequence x 1:T with length T, our model aims to predict a single label y for the entire sequence, such as the topic or the sentiment of a document. We develop a technique for skimming, re-reading, early stopping and prediction, with the goal of (i) skipping irrelevant information and reinforcing the important parts, and (ii) to enable fast and accurate text classification. Specifically, the model will read the current token/chunk x it at time step t, encode the data x it and previous information h t−1 into a feature h t, and then decide the next token to read by skimming/skipping or to stop to form a final prediction (see FIG0). Such a model can be fully defined on top of a RNN structure and trained in an end-to-end fashion via back-propagation of a well defined reward signal. Both skimming and re-reading actions can be defined similarly by first choosing a step size k ∈ {0, 1, 2, · · ·, K} and then setting i t+1 = i t + k. When k = 0, the model rereads the current token; when k = 1, the model moves to the next token sequentially; when k > 1, the model skips the next k − 1 tokens. If the current action is to stop or the next token to read is after the last token of the input sequence text, the model will stop and output a label. All of these actions are defined by a policy module Π which takes the recurrent feature h t as input and outputs a stop signal and a label or generates a step size k and moves to the next token x it+1=it+k. The design of the policy module Π plays an critical role in our framework. It should (i) read as much significant text as possible to ensure a confident classification output and (ii) be computationally efficient, e.g., avoiding reading to the end of the text if sufficient information is already obtained and skipping irrelevant or unimportant part of the text. More specifically, for each step, the policy module Π should decide whether the information collected is convincing enough to stop reading and make a prediction. Otherwise it will need to evaluate the importance of the current semantic unit or token just read to decide which token to be read in the next step. 
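The reading loop sketched below illustrates the action semantics just described (k = 0 rereads the current chunk, k = 1 moves on, k > 1 skips chunks, and a stop action triggers classification). The policy object with stop/step/classify heads and the encoder interface are assumptions for illustration, not the paper's implementation.

```python
import torch

def classify_with_skimming(encoder_rnn, policy, chunks):
    """Illustrative inference loop for the skim/reread/stop reading agent.
    `policy.stop(h)` returns a Bernoulli probability, `policy.classify(h)`
    class logits, and `policy.step(h)` logits over step sizes 0..K."""
    h = None                                    # encoder assumed to accept None as initial state
    i = 0
    while i < len(chunks):
        h = encoder_rnn(chunks[i], h)           # encode the current chunk
        if torch.bernoulli(policy.stop(h)).item() == 1:
            return policy.classify(h).argmax(-1)   # confident enough: predict now
        k = torch.distributions.Categorical(logits=policy.step(h)).sample().item()
        i += k   # k == 0 rereads chunk i; during training the FLOP penalty
                 # discourages endless rereading
    return policy.classify(h).argmax(-1)        # forced prediction at the end of the text
```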
By formulating this process as a sequential decision process, at each time step t, the policy module takes the output h t of an encoder, which summarizes the text read before and the current token x it, and outputs a probability distribution π t defined over actions. It is worth noting that to save computation, the actions are determined only by the latent representation h t. At each time step t, a sequence of actions are generated by first sampling a stopping decision in the form of a binary variable s from a Bernoulli distribution π S (·|h t). If s = 1, the model stops and draws a labelŷ from a conditional multinomial distribution specified by a classifier π C (·|h t, s = 1); otherwise, the model draws a step size k ∈ {0, . . ., K} from another conditional multinomial distribution π N (·|h t, s = 0) to jump to the token x it+1=it+k.Thus the probability of a sequence of actions that reads text X i1:it = {x i1, x i2, ..., x it}, stops at time t, and outputs a labelŷ can be written as the joint distribution DISPLAYFORM0 or simply as DISPLAYFORM1 Hereby, k j = i j+1 − i j is the step size sampled at time j which ensures the model moves from token x ij to x ij+1, and h j = RN N (x ij, h j−1) is computed by the RNN module. To encourage fast and accurate text reading, we want to minimize the difference between true label and predicted while ensuring a low computational cost, which is measured by the length of the assessed text. Hence, as the reward for the last output action, we use −L (ŷ, y), where L is a loss function that measures the accuracy between predicted labelŷ and true label y. For other actions we use a negative computational cost −F. Hereby, F is the normalized FLOP count used at each time step which is approximately constant. Note that the FLOP count for the last step, F t, differs, since it also includes the cost of the classification. Overall, the reward signal is defined as: DISPLAYFORM2 where α is a trade-off parameter between accuracy and efficiency. Assume that the entire policy Π θ is parameterized by θ = {θ DISPLAYFORM3 subsumes the parameters for the encoder. Our final goal is to find the optimal θ which maximize the expected return defined by: DISPLAYFORM4 where the first summation is used for integrating all possible sequences with different lengths to ensure the normalization of the distribution Π, and γ ∈ is a discount factor. It is not hard to see that J is infeasible to compute by enumerating all possibilities in the summation and expectation. Fortunately, we can apply the policy gradient algorithm BID32 to optimize this objective by estimating the gradient using Monte Carlo rollout samples, without doing expensive integration or enumeration. The REINFORCE policy gradient of the objective on data (x, y) can be derived as follows: DISPLAYFORM5 Considering that the length of the rollout sequence can differ significantly, the space for policy exploration is very large, thus making the variance of the gradient estimation very high. To remedy this, we also implement the advantage actor-critic algorithm BID16, which couples partial future return with each action and estimates a value function as the baseline for variance reduction. We find this procedure to provide better performance than the vanilla REINFORCE algorithm. 
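A rough sketch of the REINFORCE objective for one reading episode, following the reward structure above (a FLOP penalty at intermediate steps and the negative classification loss at the final step). The extracted text omits the exact formula in which α appears, so scaling the FLOP penalty by α here is one plausible reading; the paper's preferred actor-critic variant would additionally subtract a learned value-function baseline from the returns. All names and default values are placeholders.

```python
import torch
import torch.nn.functional as F

def episode_reinforce_loss(log_probs, pred_logits, true_label, flop_costs,
                           alpha=0.1, gamma=0.99):
    """log_probs: one log-probability per step for the sampled actions
    (stop decision plus step size or label); flop_costs: normalized FLOPs per step."""
    classification_loss = F.cross_entropy(pred_logits.unsqueeze(0),
                                          true_label.unsqueeze(0))
    rewards = [-alpha * f for f in flop_costs[:-1]]
    rewards.append(-classification_loss.detach() - alpha * flop_costs[-1])

    # discounted return-to-go for each time step
    returns, g = [], torch.tensor(0.0)
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)

    return -sum(lp * g for lp, g in zip(log_probs, returns))  # REINFORCE loss
```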
It is worth noting that this policy gradient method eventually is able to backpropagate both classification accuracy and computational cost signals to every module in our model, including the stopping/skipping distributions, the label distribution and even the recurrent encoder, thus providing an end-to-end solution to text classification problems. Overall, our model aims to accelerate text classification while still achieving a high accuracy. The hyperparameter α is used to control the trade-off between accuracy and time cost. If we set α to be a relatively large value, our model will be more boldly to skip tokens, stop reading and output a label. If α is small, our model would like to (re)read more tokens. Actually, the reward for penalizing the computational cost can be seen as a Lagrangian multiplier which is used to constrain the average cost of the computation. Therefore, there is a mapping between α and the amortized computational budget allocated for each sample. Given a budget, we can tune α to provide a model with best classification accuracy with the amortized cost within the budget. This is desirable for many cost-sensitive applications, such as those on mobile devices. In this section, we illustrate our approach using two representative text classification tasks: sentiment analysis and topic classification. To perform a solid demonstration on re-reading and skimming, we conduct experiments on three different syntactic levels. We will first introduce the on the word level before discussing character and sentence level performance. In our experiments, we use the IMDB and Yelp dataset for sentiment analysis, and the AG news and DBpedia for topic classification. To evaluate each classifier, we use predictive accuracy as the performance metric and average per-data floating point operations (FLOPs) as the computational cost metric. We also take the FLOPs of the policy module into account, even though they are much lower than the classifier. The energy cost for the policy module is about 1 to 2 million FLOPs per sentence, which is much smaller than the total FLOPs needed for the recurrent module and the classifier. Hyper-parameters: We use the Adam BID15 optimizer with a learning rate of 0.001 in all experiments. For the recurrent network structure, we use a convolution layer with 128 kernels of size 5 and stack it as input to an LSTM with a hidden size of 128. For π S and π N policy network, we use a three hidden-layer MLP with 128 hidden units per layer. For π C and value network, we use a single-layer MLP with 128 hidden units. For all experiments, the maximal step size K is set to 3. We first evaluate our method on the IMDB movie dataset BID19. We randomly split it into 20,000 training, 5,000 validation and 25,000 test samples. The average length in the dataset is 240 words. We adopt the same preprocessing method as BID33, either padding or truncating each sentence to 400 words. We use a chunk-size of 20 words, i.e., at each step, the classifier reads 20 words. When the action is rereading or skipping, it rereads or skips several chunks of 20 words. 
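The hyperparameters above translate into roughly the following PyTorch modules: a convolutional layer with 128 kernels of size 5 feeding an LSTM with 128 hidden units, plus small MLPs for the policy heads. The word-embedding dimension and padding are assumptions.

```python
import torch.nn as nn

class ChunkEncoder(nn.Module):
    """Sketch of the recurrent encoder: 1-D convolution over word embeddings
    followed by an LSTM; returns h_t summarizing the text read so far."""
    def __init__(self, vocab_size, embed_dim=100, channels=128, kernel=5, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size=kernel, padding=kernel // 2)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, chunk_tokens, state=None):
        x = self.embed(chunk_tokens)                       # [batch, chunk_len, embed_dim]
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)   # [batch, chunk_len, channels]
        out, state = self.lstm(x, state)
        return out[:, -1], state

def policy_mlp(in_dim=128, hidden=128, out_dim=2, layers=3):
    """Helper matching the 3-hidden-layer policy MLPs (pi_S, pi_N) described above."""
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)
```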
To demonstrate the effectiveness of re-reading and skimming, we design three baseline models: The early stopping model, which has only a stopping module to decide when to terminate reading the paragraph, the classifier and policy module are jointly trained on the entire training corpus; The partial reading model, which is a classifier with same architecture trained on the truncated sentences decided by the stopping model (same as the one in the early stopping model. Thus, although the partial reading model has the same computational budget as the early stopping model, the prediction performance may differ; The whole reading model, which tries to use the whole corpus as training data. Figure 2 shows our comparison on the IMDB dataset, where the blue line indicates our proposed model while green and red one denote early-stopping model and partial reading model, respectively. The x-axis denotes the FLOP count (in millions) and the y-axis indicates the accuracy. Here the FLOP count is determined by the choice of the hyper-parameter α. As α increases, we obtain a curve indicating the trade-off between accuracy and energy cost. From this plot, we observe that both blue line and green line outperform the red line significantly. In addition, rereading and skipping further improve the performance of the model with only the early stopping mechanism. This observation implies that training the classifier jointly with the policy model improves both computational efficiency and accuracy. Besides the word-level evaluation, we also conduct experiments on a smaller scale syntactic unit: character-level. In detail, we perform topic classification on two large-scale text datasets BID34: the AG news dataset contains four topics, 120,000 training news, 10,000 validation news, 7,600 testing news. The DBpedia dataset contains 14 topics, 560,000 training entities, 10,000 validation entities and 70,000 testing entities. The are summarized in Figure 3. We observe that our proposed model outperforms the partial reading baseline by a significant margin. Furthermore, we evaluate our proposed model on a larger syntactic level: sentence level. We use Yelp review sentiment analysis dataset for this experiment. The Yelp dataset includes 500,000 training reviews, 20,000 validation reviews and 40,000 testing reviews. To evaluate on the larger semantic unit, we treat each sentence as a token, which will be read sequentially by the RNN encoder. The performance is provided in Figure 4. We observe that our proposed model achieves superior performance while being significantly faster. We summarize the obtained performance improvements in Table 1. On four different datasets and for three different syntactic levels we observe significant speedups when using the proposed techniques for skimming, rereading and early stopping, while maintaining the accuracy. A partial reading model which has the same computational cost achieves that are less accurate, which illustrates the benefits of a flexible model. In addition, our model achieves about 0.5-1 percent accuracy improvement compared to the full-reading model. Finally, we compare our model to a recently published baseline BID33, which only implements the skipping actions with k ∈ {1, 2, ..., K} but without rereading, and simply do early stopping when k = 0. We implemented their algorithm for a fair comparison. Results in Table 2 show that our model is much more efficient than their LSTM-skip model at the same-level of accuracy, which is marginally better than full reading baseline. 
These results demonstrated that our proposed rereading and skimming mechanisms are effective on a variety of text classification tasks, including sentiment analysis and topic classification. They are also effective on different levels of semantics: character-level, word-level or even sentence-level. With the help of our mechanisms, we could achieve both higher accuracy and faster speed. In this section, we conducted an ablation study to demonstrate the effectiveness of each action mechanism in our method: skimming, rereading and early-stopping. Table 2: Summary of our results: We compare our model to BID33, showing the relative FLOPs necessary to achieve the same accuracy on two datasets used in both theirs and this paper. Our experiment was performed on the word-level IMDB dataset, and the result is presented in Figure 5. The blue curve denotes the performance of the model with all actions (skimming, rereading and early-stopping) enabled. The green one denotes the performance of the model with only the early-stopping action. Between these two curves, the red curve represents a model with rereading and early-stopping actions, and the yellow line represents a model with skimming and early-stopping actions. Note that the performance of the green curve is the worst, indicating that the rereading and skimming mechanisms are necessary. Furthermore, the blue curve is better than all the other ones, indicating that combining skimming and rereading together can further improve the performance of the policy model. To obtain a more detailed understanding of our model, we first show the actions taken by our model on a sentiment analysis example (Figure 6), on which the LSTM full-reading model failed to give the right classification. We show the degree of positiveness given by the LSTM model encoded in color, from green representing positiveness to brown representing negativeness. The paragraph starts with a sentence expressing strong positiveness about a dinner, followed by a few sentences giving a confusing description of the dinner. Many trivial or even negative words show up in the explanation. As a result, the output of the full-reading model gradually changes from positive to negative and finally results in a negative signal. Importantly, after our model reads the first two sentences, the policy module decides that it is confident enough to make a decision, yielding the correct answer. Next we illustrate how the rereading and skimming actions are useful to identify important information in the text. As shown in FIG4, our model first reads a key word "stake" and is confident that the document is about money. Then it skims a few irrelevant tokens to read about "buying stake in Biotechnology" in the following two tokens. The phrase "5 percent stake" showed up twice. Our model considers it to be important, so it re-reads this token. At this time, the model basically knows this text is about business with a reasonable confidence. Then it skips to read about "collaboration deal" and stops to make a confident prediction. The idea of improving time efficiency with adaptive computation has been studied extensively throughout the years BID30. For example, the adaptive computation time algorithm on recurrent neural networks proposed to utilize an early stopping action to save computational cost. Spatially adaptive computation time BID9 was proposed for image classification and object detection tasks. Compared to their work, our model is powerful in utilizing the combinatorial complexity of actions. Attention mechanisms applied to text data are also related.
ReasonNet BID27 trains a policy module to determine whether to stop before accessing the full text on question-answering tasks. Similarly, the model of BID8 performs early-stopping on text classification tasks. Compared with these related works, our proposed model's skimming and rereading mechanisms are innovative. In addition, BID5 and BID18 propose to select the relevant sentences which are critical for question answering and sentiment analysis, respectively. Their methods utilize prediction accuracy as the reward signal to train the policy module. However, in our work, the policy module is trained considering both accuracy and computational cost explicitly. Other ways to reduce the inference computational cost for new examples have been considered. BID1 proposes a scheme to selectively activate parts of the network. BID3 presents two schemes to adaptively utilize the network during inference: given each data point, they first select a network and then select some components of that network. One closely related work is BID33. The authors train their policy network end-to-end with reinforcement learning. In contrast to their work, our model implements a human-like rereading mechanism and a separate early-stopping mechanism, leading to further improved efficiency and accuracy. Furthermore, we do not rely on many hyper-parameters and only use a simple reward structure. Finally, we obtain better performance with a reward design which incorporates the negative energy cost explicitly, and we implement a value network to reduce the variance. We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. An end-to-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder. We demonstrate the efficacy of the proposed approach on four different datasets and demonstrate improvements in both accuracy and computational performance. To illustrate that our model's performance is robust to the choice of chunk size, we investigate the model performance with a variety of chunk sizes on the IMDB dataset. The result is shown in FIG5. Here the red curve denotes the performance of the partial reading baseline, and the other three curves denote the performance of our full-action model with chunk sizes 8, 20 and 40, respectively. It is clear that our model outperforms the baselines significantly for different choices of chunk size. In addition, we found that if the chunk size is too small, there are more decision steps inside each sentence, making the policy optimization more difficult. For instance, the performance with chunk size 8 seems worse than with the two larger chunk sizes. We believe this issue may be overcome by applying more advanced policy optimization algorithms such as proximal policy optimization BID24. Here the x-axis and y-axis are the same as in previous figures.
The red curve denotes the partial reading baseline, while the grey, blue, purple curves denote our models with chunk size 8, 20, 40, respectively. We found that our model is robust to different chunk sizes.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryZ8sz-Ab
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks.
Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them. The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task. Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks. We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore. We recast exploration as a problem of State Marginal Matching (SMM), where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task. We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy. Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings. On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods. Reinforcement learning (RL) algorithms must be equipped with exploration mechanisms to effectively solve tasks with limited reward signals. These tasks arise in many real-world applications where providing human supervision is expensive. The inability of current RL algorithms to adequately explore limits their applicability to long-horizon control tasks. A wealth of prior work has studied exploration for RL. While, in theory, the Bayes-optimal exploration strategy is optimal, it is intractable to compute exactly, motivating work on tractable heuristics for exploration. Exploration methods based on random actions have limited ability to cover a wide range of states. More sophisticated techniques, such as intrinsic motivation, accelerate learning in the single-task setting. However, these methods have two limitations. First, they do not explicitly define an objective to quantify "good exploration," but rather argue that exploration arises implicitly through some iterative procedure. Lacking a well-defined optimization objective, it remains challenging to understand what these methods are doing and why they work. Similarly, the lack of a metric to quantify exploration, even if only for evaluation, makes it challenging to compare exploration methods and assess progress in this area. The second limitation is that these methods target the single-task setting. Because these methods aim to converge to the optimal policy for a particular task, it is challenging to repurpose these methods to solve multiple tasks. We address these shortcomings by considering a multi-task setting, where many different reward functions can be provided for the same set of states and dynamics. Rather than exploring from scratch for each task, we aim to learn a single, task-agnostic exploration policy that can be adapted to many possible downstream reward functions, amortizing the cost of learning to explore. This exploration policy can be viewed as a prior on the policy for solving downstream tasks. 
Learning will consist of two phases: during training, we acquire this task-agnostic exploration policy; during testing, we use this exploration policy to quickly explore and maximize the task reward. Learning a single exploration policy is considerably more difficult than doing exploration throughout the course of learning a single task. The latter is done by intrinsic motivation (; ;) and count-based exploration methods , which can effectively explore to find states with high reward, at which point the agent can decrease exploration and increase exploitation of those high-reward states. While these methods perform efficient exploration for learning a single task, the policy at any particular iteration is not a good exploration policy. For example, the final policy at convergence would only visit the high-reward states discovered for the current task. What objective should be optimized to obtain a good exploration policy? We recast exploration as a problem of State Marginal Matching: given a desired state distribution, we learn a mixture of policies for which the state marginal distribution matches this desired distribution. Without any prior information, this objective reduces to maximizing the marginal state entropy H [s], which encourages the policy to visit as many states as possible. The distribution matching objective also provides a convenient mechanism to incorporate prior knowledge about the task, whether in the form of safety constraints that the agent should obey; preferences for some states over other states; reward shaping; or the relative importance of each state dimension for a particular task. We also propose an algorithm to optimize the State Marginal Matching (SMM) objective. First, we reduce the problem of SMM to a two-player, zero-sum game between a policy player and a density player. We find a Nash Equilibrium for this game using fictitious play , a classic procedure from game theory. Our ing algorithm iteratively fits a state density model and then updates the policy to visit states with low density under this model. Our analysis of this approach sheds light on prior work on exploration. In particular, while the policy learned by existing exploration algorithms does not perform distribution matching, the replay buffer does, an observation that potentially explains the success of prior methods. On both simulated and real-world tasks, we demonstrate that our algorithm explores more effectively and adapts more quickly to new tasks than state-of-the-art baselines. Most prior work on exploration has looked at exploration bonuses and intrinsic motivation. One class of exploration methods uses prediction error of some auxiliary task as an exploration bonus, which provides high (intrinsic) reward in states where the predictive model performs poorly (; ; ; ;). Another set of approaches (; ;) directly encourage the agent to visit novel states. While all methods effectively explore during the course of solving a single task (Taïga et al., 2019), the policy obtained at convergence is often not a good exploration policy (see Section 4). In contrast, our method converges to a highly-exploratory policy by maximizing state entropy in the training objective (Eq. 2). Many exploration algorithms can be classified by whether they explore in the space of actions, policy parameters, goals, or states. Common exploration strategies including -greedy and Ornstein-Uhlenbeck noise , as well as standard MaxEnt RL algorithms , explore in the action space. 
Recent work shows that adding noise to the parameters of the policy can result in good exploration. Most closely related to our work are methods that perform exploration in the space of states or goals. In fact, prior work considers the same State Marginal Matching objective that we examine and proposes a similar algorithm. In relation to that work, our main contributions are empirically showing that exploration based on state entropy is competitive with existing state-of-the-art exploration methods, and explaining how existing exploration methods based on prediction error are implicitly maximizing this state-entropy objective. In Appendix C.1, we also discuss how goal-conditioned RL can be viewed as a special case of State Marginal Matching when the goal-sampling distribution is learned jointly with the policy. The problems of exploration and meta-reinforcement learning are tightly coupled. Meta-reinforcement learning algorithms must perform effective exploration if they hope to solve a downstream task. Some prior work has explicitly looked at the problem of learning to explore. Our problem statement is similar to meta-learning, in that we also aim to learn a policy as a prior for solving downstream tasks. However, whereas meta-RL requires a distribution of task reward functions, our method requires only a single target state marginal distribution. Due to the simpler problem assumptions and training procedure, our method may be easier to apply in real-world domains. Related to our approach are standard maximum action entropy algorithms. While these algorithms are referred to as MaxEnt RL, they maximize entropy over actions, not states. These algorithms can be viewed as performing inference on a graphical model where the likelihood of a trajectory is given by its exponentiated reward. While distributions over trajectories induce distributions over states, computing the exact relationship requires integrating over all possible trajectories, an intractable problem for most MDPs. A related but distinct class of relative entropy methods uses a similar entropy-based objective to limit the size of policy updates. Finally, the idea of distribution matching has been employed successfully in imitation learning. Similar to some inverse RL algorithms, our method iterates between learning a policy and learning a reward function, though our reward function is obtained via a density model instead of a discriminator. While inverse RL algorithms assume access to expert trajectories, we instead assume access to the density of the target state marginal distribution. In many realistic settings, such as robotic control with many degrees of freedom, providing fully-specified trajectories may be much more challenging than defining a target state marginal distribution. The latter only requires some aggregate statistics about expert behavior, and does not even need to be realizable by any policy. In summary, our work unifies prior exploration methods as performing approximate distribution matching, and explains how state distribution matching can be performed properly. This perspective provides a clearer picture of exploration, and this observation is useful particularly because many of the underlying ingredients, such as adversarial games and density estimation, have seen recent progress and therefore might be adopted to improve exploration methods.
In this section, we propose the State Marginal Matching problem as a principled objective for learning to explore, and offer an algorithm for optimizing it. We consider a parametric policy π_θ ∈ Π = {π_θ | θ ∈ Θ} that chooses actions a ∈ A in a Markov Decision Process (MDP) M with fixed episode length T, dynamics distribution p(s_{t+1} | s_t, a_t), and initial state distribution p_0(s). The MDP M together with the policy π_θ form an implicit generative model over states. We define the state marginal distribution ρ_π(s) as the probability that the policy visits state s. We emphasize that ρ_π(s) is not a distribution over trajectories, and is not the stationary distribution of the policy after infinitely many steps, but rather the distribution over states visited in a finite-length episode. We also note that any trajectory distribution matching problem can be reduced to a state marginal matching problem by augmenting the current state to include all previous states. We assume that we are given a target distribution p*(s) over states s ∈ S that encodes our belief about the tasks we may be given at test time. For example, a roboticist might assign small values of p*(s) to states that are dangerous, regardless of the desired task. Alternatively, we might also learn p*(s) from data about human preferences. For goal-reaching tasks, we can analytically derive the optimal target distribution (Appendix C). Given p*(s), our goal is to find a parametric policy that is "closest" to this target distribution, where we measure discrepancy using the Kullback-Leibler (KL) divergence:

min_π D_KL(ρ_π(s) ‖ p*(s)) = max_π E_{s ∼ ρ_π(s)}[log p*(s) − log ρ_π(s)]    (1)

This is the same objective as in prior work. Note that we regularize the entropy of the state distribution, not the conditional distribution of actions given states, which results in exploration in the space of states rather than in actions. Moreover, Equation 1 suggests that State Marginal Matching maximizes a pseudo-reward r(s) ≜ log p*(s) − log ρ_π(s), which assigns positive utility to states that the agent visits too infrequently and negative utility to states visited too frequently (see Figure 1). We emphasize that maximizing this pseudo-reward is not an RL problem, because the pseudo-reward itself depends on the policy. For the same reason, optimizing Equation 1 to obtain a single exploration policy is more challenging than standard RL. To break this cyclic dependency, we introduce a parametric state density model q_ψ(s) ∈ Q = {q_ψ | ψ ∈ Ψ} to approximate the policy's state marginal distribution, ρ_π(s). We assume that the class of density models Q is sufficiently expressive to represent every policy:

Assumption 1. For every policy π ∈ Π, there exists q ∈ Q such that D_KL(ρ_π(s) ‖ q(s)) = 0.

Under this assumption, optimizing the policy w.r.t. this approximate distribution q(s) will yield the same solution as Equation 1 (see Appendix A for the proof):

Proposition 3.1. Let policies Π and density models Q satisfying Assumption 1 be given. For any target distribution p*, the following optimization problems are equivalent:

max_π E_{s ∼ ρ_π(s)}[log p*(s) − log ρ_π(s)]    and    max_π min_q E_{s ∼ ρ_π(s)}[log p*(s) − log q(s)]

Solving the new max-min optimization problem is equivalent to finding the Nash equilibrium of a two-player, zero-sum game: a policy player chooses the policy π while the density player chooses the density model q. To avoid confusion, we use actions to refer to controls a ∈ A output by the policy π in the traditional RL problem and strategies to refer to the decisions π ∈ Π of the policy player and decisions q ∈ Q of the density player.
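The pseudo-reward r(s) = log p*(s) − log ρ_π(s) defined above is straightforward to compute once a density model q(s) stands in for the unknown ρ_π(s), as introduced in this section. The sketch below assumes both densities are available as callables returning log-probabilities for a batch of states; the function name is illustrative.

```python
def smm_pseudo_reward(log_p_target, log_q_policy, states):
    """r(s) = log p*(s) - log rho_pi(s), with the fitted density model q(s)
    standing in for rho_pi(s); both arguments are assumed callables that
    return per-state log-densities."""
    return log_p_target(states) - log_q_policy(states)
```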
The Nash existence theorem guarantees that such a stationary point exists for this two-player, zero-sum game. One common approach to saddle point games is to alternate between updating player A w.r.t. player B, and updating player B w.r.t. player A. However, games such as Rock-Paper-Scissors illustrate that such a greedy approach is not guaranteed to converge to a stationary point. A slight variant, fictitious play, does converge to a Nash equilibrium in finite time. At each iteration, each player chooses their best strategy in response to the historical average of the opponent's strategies. In our setting, fictitious play alternates between fitting the density model to the historical average of policies (Equation 4), and updating the policy with RL to minimize the log-density of the state, using a historical average of the density models (Equation 5). Algorithm 1 (input: target distribution p * (s); initialize policy π(a | s), density model q(s), and replay buffer B) summarizes the resulting procedure for optimizing the State Marginal Matching objective (Equation 1): the algorithm iterates between fitting a density model q (m) and training the policy π (m) with an RL objective to optimize the expected return w.r.t. the updated reward function r(s), and it returns the collection of policies from each iteration, which do distribution matching in aggregate. Crucially, the exploration policy is not the last policy, π (m+1), but rather the historical average policy: Definition 3.1. A historical average policy π̄(a | s), parametrized by a collection of policies π 1, · · ·, π m, is a policy that randomly samples one of the policy iterates at the start of each episode and takes actions according to that policy for each step in the episode. A new policy is sampled for the next episode. In practice, we can efficiently implement Equation 4 and avoid storing the policy parameters from every iteration by instead storing sampled states from each iteration. We cannot perform the same trick for Equation 5, and instead resort to approximating the historical average of density models with the most recent iterate. Algorithm 1 looks similar to prior exploration methods based on prediction error, suggesting that we might use SMM to understand how these prior methods work. Exploration methods based on prediction error (; ; ; ;) do not converge to an exploratory policy, even in the absence of extrinsic reward. For example, consider the asymptotic behavior of ICM in a deterministic MDP, such as the Atari games where it was evaluated. At convergence, the predictive model will have zero error in all states, so the exploration bonus is zero - the ICM objective has no effect on the policy at convergence. Similarly, consider the exploration bonus in Pseudocounts: 1/n̂(s), where n̂(s) is the (estimated) number of times that state s has been visited. In the infinite limit, each state has been visited infinitely many times, so the Pseudocount exploration bonus also goes to zero - Pseudocounts has no effect at convergence. Similar reasoning can be applied to other methods based on prediction error. More broadly, we can extend this analysis to stochastic MDPs, where we consider an abstract exploration algorithm that alternates between computing some intrinsic reward and performing RL (to convergence) on that intrinsic reward. Existing prediction-error exploration methods are all special cases of this abstract algorithm.
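For readers who prefer pseudocode, here is a minimal sketch of the fictitious-play loop of Algorithm 1 together with the historical average policy of Definition 3.1 (our own illustration; rollout, fit_density_model, and rl_update are hypothetical stand-ins for the data-collection, density-fitting, and SAC steps):

import random

def state_marginal_matching(env, policy, log_p_star, num_iters):
    # Sketch of Algorithm 1: alternate density fitting and policy optimization.
    state_buffer = []        # states from *all* iterations (the historical average)
    policy_iterates = []
    for m in range(num_iters):
        state_buffer.extend(rollout(env, policy))         # collect states with the current policy
        q = fit_density_model(state_buffer)               # Equation 4: fit q to historical states
        reward_fn = lambda s, q=q: log_p_star(s) - q.log_prob(s)   # pseudo-reward r(s)
        policy = rl_update(policy, env, reward_fn)        # Equation 5: RL step (e.g., SAC)
        policy_iterates.append(policy)
    return HistoricalAveragePolicy(policy_iterates)       # Definition 3.1

class HistoricalAveragePolicy:
    # Sample one stored iterate per episode and follow it for the whole episode.
    def __init__(self, policy_iterates):
        self.policy_iterates = policy_iterates
        self.current = None
    def reset(self):                  # call at the start of every episode
        self.current = random.choice(self.policy_iterates)
    def act(self, state):
        return self.current.act(state)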
For any such abstract exploration algorithm, the RL step at each iteration solves a fully-observed MDP, which always admits a deterministic policy as a solution. Thus, any exploration algorithm in this class cannot converge to a single, exploratory policy. Despite these observations, prior methods do excel at solving hard exploration tasks. We draw an analogy to fictitious play to explain their success. While these methods never acquire an exploratory policy, over the course of training they will eventually visit all states. In other words, the historical average over policies will visit a wide range of states. Since the replay buffer exactly corresponds to this historical average over states, these methods will obtain a replay buffer with a diverse range of experience, possibly explaining why they succeed at solving hard exploration tasks. Moreover, this analysis suggests a surprisingly simple method for obtaining an exploration policy from these prior methods: use a mixture of the policy iterates throughout training. The following section will not only compare SMM against prior exploration methods, but also show that this historical averaging trick can be used to improve existing exploration methods. We used simulated control tasks to determine if SMM learns an exploratory policy, to compare SMM to prior exploration methods, and to study the effect of historical averaging. More details can be found in Appendix D, and code will be released upon publication. We compare to a state-of-the-art off-policy MaxEnt RL algorithm, Soft Actor-Critic (SAC); an inverse RL algorithm, Generative Adversarial Imitation Learning (GAIL); and three exploration methods: • Count-based Exploration (C), which discretizes states and uses the negative log of the empirical (discretized) state distribution as an exploration bonus. • Pseudo-counts (PC), which uses the recoding probability as a bonus. • Intrinsic Curiosity Module (ICM), which uses prediction error as a bonus. We used SAC as the base RL algorithm for all exploration methods (SMM, C, PC, ICM). To implement SMM, we define the target distribution in terms of the extrinsic environment reward (Appendix D.1 gives the exact form for each environment). We use a variational autoencoder (VAE) to model the density q(s) for both SMM and Pseudocounts (PC). For the GAIL baseline, we generated synthetic expert data by sampling expert states from the target distribution p * (s) (see Appendix D.2 for details). Results for all experiments are averaged over 4-5 random seeds. We start with a sanity check: Is exploration in state space (as done by SMM) better than exploration in action space (as done by MaxEnt RL, e.g., SAC)? To study this question, we implemented a 2D Navigation environment, shown in Figure 2a. To evaluate each method, we counted the number of hallways that the agent fully explored (i.e., reached the end) during training. Figure 2b shows the state visitations for the three-hallway environment, illustrating that SAC only explores one hallway, whereas SMM explores all three. Figure 2c also shows that SMM consistently explores 60% of hallways, whereas SAC rarely visits more than 20% of hallways. The remaining simulated experiments used the Manipulation environment, shown in Figure 3a. Our first experiment evaluates whether the exploration policy acquired by SMM allows us to solve downstream tasks more quickly. We defined the target distribution to be uniform over the entire state space (joint + block configuration), with the constraints that we put low probability mass on states where the block has fallen off the table; that actions should be small; and that the arm should be close to the object.
As shown in Figure 3b, SMM adapts substantially more quickly than other exploration methods, achieving a success rate 20% higher than the next best method, and reaching the same level of performance as the next best baseline (ICM) in 4x fewer episodes. SMM without historical averaging attains similar performance as the next best baseline (ICM), suggesting that historical averaging is the key ingredient, while the particular choice of prediction error or VAE is less important. We provide further ablation studies of SMM in Appendix B.2. While historical averaging is necessary to guarantee convergence (§ 3.1), most prior exploration methods do not employ historical averaging, raising the question of whether it is necessary in practice. To answer this question, we compare SMM to three exploration methods. In Figure 3c, we compare the policy obtained at convergence with the historical average of policy iterates over training for each method. We measure how well each method explores by computing the marginal state entropy, which we compute by discretizing the state space. The results show that SMM maximizes state entropy at least as effectively as prior methods, if not better. While this comparison is somewhat unfair, as we measure exploration using the objective that SMM maximizes, none of the methods we compare against propose metrics for exploration that we could use instead. Furthermore, we see that historical averaging is not only beneficial to SMM, but also improves the exploration of prior methods. In our final simulated experiment, we check whether prior knowledge injected via the target distribution is reflected in the policy obtained from State Marginal Matching. Using the same Manipulation environment as above, we modified the target distribution to assign larger probability to states where the block was on the left half of the table than on the right half. In Figure 3d, we measure whether SMM is able to achieve the target distribution by measuring the discrepancy between the block's horizontal coordinate and the target distribution. Compared to the SAC baseline, SMM and the Count baseline are half the distance to the target distribution. No method achieves zero discrepancy, suggesting that future methods could be better at matching state marginals. While almost all research on exploration focuses on simulated domains, attributes of the real world such as partial observability, nonstationarity, and stochasticity may make exploration more challenging. The aim of this section is to see if SMM explores effectively on a real-world robotic control task. We used the D'Claw robotic manipulator, which is a 3-fingered hand positioned vertically above a handle that it can turn. For all experiments on the D'Claw, we used a target distribution that places uniform mass over all object angles in [−180°, 180°]. Figure 4: (a) The D'Claw is a 9-DoF robotic hand that is trained to turn a valve object. (b) Sim2Real: We trained each algorithm in simulation, and then measured how far the trained policy rotated the knob on the hardware robot. We also measured the maximum angle that the agent turned the knob in the clockwise and counter-clockwise directions within one episode. (c) Training on Hardware: We trained SAC and SMM on the real robot for 1e5 environment steps (about 9 hours in real time), and measured the maximum angle turned throughout training. We see that SMM moves the knob more and visits a wider range of states than SAC. All results are averaged over 4-5 seeds.
In a first experiment, we trained SMM and other baselines in simulation, and then evaluated the acquired exploration policy on the real robot using two metrics: the total number of rotations (in either direction), and the maximum radians turned (in both directions). For each method, we computed the average metric across 100 evaluation episodes. We repeated this process for 5 independent training runs. Figure 4b shows that SMM turns the knob more than the baselines, and it turns the knob to a wider range of angles. To test for statistical significance, we used a 1-sided Student's t-test to test the hypothesis that SMM turned the knob more and to a wider range of angles than SAC. The p-values were all less than 0.05: p = 0.046 for number of rotations, p = 0.019 for maximum clockwise angle, and p = 0.001 for maximum counter-clockwise angle. In our second experiment, we investigated whether it was possible to learn an exploration policy directly in the real world, without the need for a simulator. Learning to explore in the real world is quite important, as building faithful simulators of complex systems is challenging. The physical constraints of the real robot make data efficiency paramount, so an effective exploration strategy is essential. In Figure 4c, we plot the range of angles that the policy explores throughout training. Not only does SMM explore a wider range of angles than SAC, but its ability to explore increases throughout training, suggesting that the SMM objective is correlated with real-world metrics of exploration. In summary, the results in this section suggest that exploration techniques may actually be useful in the real world, which may encourage future work to study exploration methods on real-world tasks. In this paper, we introduced a formal objective for exploration. While it is often unclear what existing exploration algorithms will converge to, our State Marginal Matching objective has a clear solution: at convergence, the policy should visit states in proportion to their density under a target distribution. Not only does this objective encourage exploration, it also provides human users with a flexible mechanism to bias exploration towards states they prefer and away from dangerous states. Upon convergence, the resulting policy can thereafter be used as a prior in a multi-task setting, amortizing exploration and enabling faster adaptation to new, potentially sparse, reward functions. The algorithm we proposed looks quite similar to previous exploration methods based on prediction error, suggesting that those methods are also performing some form of distribution matching. However, by deriving our method from first principles, we note that these prior methods omit a crucial historical averaging step, without which the algorithm is not guaranteed to converge. Experiments on both simulated and real-world tasks demonstrated how SMM learns to explore, enabling an agent to efficiently explore in new tasks provided at test time. In future work, we aim to study connections between inverse RL, MaxEnt RL and state marginal matching, all of which perform some form of distribution matching. Empirically, we aim to scale to more complex tasks by parallelizing the training of all mixture components simultaneously. Broadly, we expect the state distribution matching problem formulation to enable the development of more effective and principled RL methods that reason about distributions rather than individual states. Proof of Proposition 3.1.
Note that, for any density model q, E_{s ∼ ρ π (s)} [− log q(s)] = H π [s] + D KL (ρ π (s) ∥ q(s)). By Assumption 1, D KL (ρ π (s) ∥ q(s)) = 0 for some q ∈ Q, so the inner minimization over q recovers the state entropy exactly, and we obtain the desired result. This section introduces an extension of SMM, SM4, that incorporates mixture modelling. Given the challenging problem of exploration in large state spaces, it is natural to wonder whether we can accelerate exploration by automatically decomposing the potentially-multimodal target distribution into a mixture of "easier-to-learn" distributions and learn a corresponding set of policies to do distribution matching for each component. Note that the mixture model we introduce here is orthogonal to the historical averaging step discussed before. Using ρ πz (s) to denote the state distribution of the policy conditioned on the latent variable z ∈ Z, the state marginal distribution of the mixture of policies is ρ π (s) = E_{z ∼ p(z)} [ρ πz (s)], where p(z) is a latent prior. As before, we will minimize the KL divergence between this mixture distribution and the target distribution. Using Bayes' rule to re-write ρ π (s) in terms of conditional probabilities, we obtain the following optimization problem (Equation 8): max_π E_{z ∼ p(z)} E_{s ∼ ρ πz (s)} [log p * (s) − log ρ πz (s) + log p(z | s) − log p(z)]. Intuitively, this says that the agent should go to states (a) with high density under the target state distribution, (b) where this agent has not been before, and (c) where this agent is clearly distinguishable from the other agents. The last term (d) says to explore in the space of mixture components z. This decomposition bears a resemblance to the mutual-information objectives in recent work. Thus, one interpretation of our work is as explaining that mutual information objectives almost perform distribution matching. The caveat is that prior work omits the state entropy term − log ρ πz (s), which provides high reward for visiting novel states, possibly explaining why these previous works have failed to scale to complex tasks. We summarize the resulting procedure in Algorithm 2, which we refer to as SM4 (State Marginal Matching with Mixtures of Mixtures). The algorithm fits a density model q z (m) (s) to approximate the state marginal distribution for each policy π z; learns a discriminator d (m) (z | s) to predict which policy π z will visit state s; and uses RL to update each policy π z to maximize the expected return of its corresponding reward function r z (s) ≜ log p * (s) − log ρ πz (s) + log p(z | s) − log p(z), derived in Equation 8. The only difference from Algorithm 1 is that we learn a discriminator d(z | s), in addition to updating the density models q z (s) and the policies π z (a | s). Jensen's inequality tells us that maximizing the log-density of the learned discriminator maximizes a lower bound on the true conditional density p(z | s). Note that the updates for each z can be conducted in parallel. A distributed implementation would emulate broadcast-collect algorithms, with each worker updating the policy independently, and periodically aggregating to update the discriminator d(z | s). Such a distributed implementation has the appealing property that each compute node would explore a different part of the state space. While there has been some work on multi-agent coordinated exploration and concurrent exploration, it remains a fairly unexplored area (pun intended) and we believe that SMM with Mixtures of Mixtures offers a simple approach to this problem. To understand the relative contribution of each component in the SM4 objective (Equation 8), we compare SM4 to baselines that lack the conditional state entropy term H πz [s] = − log ρ πz (s), the latent conditional action entropy term log p(z | s), or both (i.e., SAC).
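For concreteness, a minimal sketch of the per-component reward r_z(s) used in Algorithm 2 (our own illustration; the callables for the target log-density, the per-component density model, the discriminator, and the latent prior are assumed to be provided):

def sm4_reward(s, z, log_p_star, log_q_z, log_d_z_given_s, log_p_z):
    """r_z(s) = log p*(s) - log q_z(s) + log d(z|s) - log p(z).

    (a) match the target, (b) visit states component z has rarely visited,
    (c) visit states where component z is distinguishable from the others,
    (d) spread probability mass across the mixture components z.
    """
    return log_p_star(s) - log_q_z(z, s) + log_d_z_given_s(z, s) - log_p_z(z)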
In Figure 5a, we plot the training-time performance on the Navigation task with 3 halls of length 50. We see that SM4 relies heavily on both key differences from SAC. In Figure 5b, we study the effect of mixture modelling on test-time exploration in the Manipulation environment. After running SMM/SM4 with a uniform target distribution, we count the number of episodes required to find an (unknown) goal state. We run each method for the same number of environment transitions; a mixture of three policies does not get to take three times more transitions. We find that increasing the number of mixture components increases the agent's success. However, the effect was smaller when using historical averaging. Taken together, this suggests that efficient exploration requires either historical averaging or mixture modelling, but might not need both. Figure 5: (a) On the Navigation task, we compare SM4 (with three mixture components) against ablation baselines that lack conditional state entropy, latent conditional action entropy, or both (i.e., SAC) in the SM4 objective (Equation 8). We see that both terms contribute heavily to the exploration ability of SM4, but the state entropy term is especially critical. (b) We compare SMM/SM4 with different numbers of mixtures, and with vs. without historical averaging. We found that increasing the number of latent mixture components n ∈ {1, 2, 4} accelerates exploration, as does historical averaging. Error bars show std. dev. across 4 random seeds. C CHOOSING p * (s) FOR GOAL-REACHING TASKS In general, the choice of the target distribution p * (s) will depend on the distribution of test-time tasks. In this section, we consider the special case where the test-time tasks correspond to goal-reaching, and derive the optimal target distribution p * (s). We consider the setting where goals g ∼ p g (g) are sampled from some known distribution. Our goal is to minimize the number of episodes required to reach that goal state. We define reaching the goal state as visiting a state that lies within an ε-ball of the goal, where both ε > 0 and the distance metric are known. We start with a simple lemma that shows that the probability that we reach the goal at some state in a trajectory is at least the probability that we reach the goal at a randomly chosen state in that same trajectory. Defining the binary random variable z t ≜ 1(∥s t − g∥ ≤ ε) as the event that the state at time t reaches the goal, we can formally state the claim as follows: Lemma C.1. p(∃ t ∈ [T]: z t = 1) ≥ (1/T) Σ_{t=1}^{T} p(z t = 1). Proof. If the state at a uniformly sampled time step reaches the goal, then certainly some state in the trajectory reaches the goal. Thus, the probability of the latter event must be at least as large as the probability of the former. Next, we look at the expected number of episodes to reach the goal state. Since each episode is independent, the expected hitting time is simply HITTINGTIME(s) = 1 / p(some state in the episode reaches s) ≤ 1 / ((1/T) Σ_t p(z t = 1)), where we have upper-bounded the hitting time using Lemma C.1. Since the goal g is a random variable, we take an expectation over g. We can rewrite the resulting bound using p * (s) to denote the target state marginal distribution, replacing the per-goal success probability with the mass that p * places in the ε-ball around g; the integral ∫ p * (s) 1(∥s − g∥ ≤ ε) ds appearing here is a smoothed version of the target density. We will minimize the resulting quantity F, an upper bound on the expected hitting time. Lemma C.2 (informal). The minimizer of F is the distribution obtained by smoothing the goal density p g over an ε-ball and taking the square root (suitably normalized). Before presenting the proof, we provide a bit of intuition. In the case where ε → 0, the optimal target distribution is p * (s) ∝ √(p g (s)). For non-zero ε, the distribution in Lemma C.2 is equivalent to convolving p g (s) with a box filter before taking the square root.
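As a small worked example of the square-root rule in the ε → 0 case (our own illustration, not from the paper):

import numpy as np

# Two goal regions, one four times as likely as the other.
p_goal = np.array([0.8, 0.2])
p_star = np.sqrt(p_goal)
p_star /= p_star.sum()          # normalize
print(p_star)                   # -> approximately [0.667, 0.333]

The optimal exploration distribution is flatter than the goal distribution itself (a 2:1 ratio instead of 4:1): rare goals receive disproportionately more exploration, because they dominate the expected hitting time.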
In both cases, we see that the optimal policy does distribution matching to some function of the goal distribution. Note that the smoothed density p̃(·) may not sum to one and therefore is not a proper probability distribution. Proof. We form the Lagrangian of the constrained problem (minimizing F subject to the constraint that p * integrates to one) and take its first derivative with respect to p * (s). The second derivative is positive, indicating that the Lagrangian is convex, so all stationary points must be global minima. Setting the first derivative equal to zero, rearranging terms, and renaming the variables of integration, we obtain the desired result. Goal-Conditioned RL (; ;) can be viewed as a special case of State Marginal Matching when the goal-sampling distribution is learned jointly with the policy. In particular, consider State Marginal Matching with a mixture policy (Algorithm 2), where the mixture component z maps bijectively to goal states g. In this case, we learn goal-conditioned policies of the form π(a | s, g). We start by swapping g for z in the SMM objective with Mixtures of Policies (Equation 8). The second term, p(g | s), is an estimate of which goal the agent is trying to reach, similar to objectives in intent inference. The third term, π(s | g), is the distribution over states visited by the policy when attempting to reach goal g. For an optimal goal-conditioned policy in an infinite-horizon MDP, both of these terms are Dirac functions. In this setting, the State Marginal Matching objective simply says to sample goals g ∼ π(g) with probability equal to the density of that goal under the target distribution. Whether goal-conditioned RL is the preferable way to do distribution matching depends on the difficulty of sampling goals and the supervision that will be provided at test time. It is natural to use goal-conditioned RL in settings where it is easy to sample goals, such as when the space of goals is small and finite or otherwise low-dimensional. If a large collection of goals is available a priori, we could use importance sampling to generate goals to train the goal-conditioned policy. However, many real-world settings have high-dimensional goals, which can be challenging to sample. While goal-conditioned RL is likely the right approach when we will be given a test-time task, a latent-conditioned policy may explore better in settings where the goal state is not provided at test-time. We summarize the environment parameters for Navigation (Figures 2, 5a), Manipulation (Figures 3, 5b, 7, 8, 9, 10), and D'Claw (Figure 4) in Table 1. Navigation: Episodes have a maximum time horizon of 100 steps. The environment reward is a function of s xy, the xy-position of the agent. We used a uniform target distribution over the ends of all m halls, so the environment reward at training time is r env (s) = 1/m if the robot is close enough to the end of any of the halls. We used a fixed hall length of 10 in Figures 2b and 2c, and length 50 in Figure 5a. All experiments used m = 3 halls, except in Figure 2c where we varied the number of halls over {3, 5, 7}. Manipulation: We used the simulated Fetch Robotics arm implemented in the MuJoCo simulator. The state vector s ∈ R 28 includes the xyz-coordinates s obj, s robot ∈ R 3 of the block and the robot gripper respectively, as well as their velocities, orientations, and relative position s obj − s robot. At the beginning of each episode, we spawn the object at the center of the table, and the robot gripper above the initial block position. We terminate each episode after 50 environment steps, or if the block falls off the table.
We considered two target state marginal distributions. In Manipulation-Uniform, the target density is given by p * (s) ∝ exp(α 1 r goal (s) + α 2 r robot (s) + α 3 r action (s)), where α 1, α 2, α 3 > 0 are fixed weights, and the rewards r goal (s) := 1 − 1(s obj is on the table surface), r robot (s) := 1(∥s obj − s robot∥ 2 2 < 0.1), and r action (s) := −∥a∥ 2 2 correspond to a uniform distribution of the block position over the table surface (the agent receives +0 reward while the block is on the table), an indicator reward for moving the robot gripper close to the block, and an action penalty, respectively. The environment reward is a weighted sum of the three reward terms: r env (s) ≜ 20 r goal (s) + r robot (s) + 0.1 r action (s). At test-time, we sample a goal block location g ∈ R 3 uniformly on the table surface, and the goal is not observed by the agent. In Manipulation-Half, the target state density places higher probability mass on states where the block is on the left side of the table. This is implemented by replacing r goal (s) with a reward function that gives a slightly higher reward (+0.1) for states where the block is on the left side of the table. D'Claw: The D'Claw robot controls three claws to rotate a valve object. The environment consists of a 9-dimensional action space (three joints per claw) and a 12-dimensional observation space that encodes the joint angles and object orientation. We fixed each episode at 50 timesteps, which is about 5 seconds on the real robot. In the hardware experiments, each algorithm was trained on the same four D'Claw robots to ensure consistency. We defined the target state distribution to place uniform probability mass over all object angles in [−180°, 180°]. It also incorporates reward shaping terms that place lower probability mass on states with high joint velocity and on states with joint positions that deviate far from the initial position. GAIL assumes access to expert demonstrations, which SMM and the other exploration methods do not require. To compare GAIL with the exploration methods on a level footing, we sampled synthetic states from p * (s) to train GAIL, and restricted the GAIL discriminator input to states only (no actions). For D'Claw (Fig. 4), we sampled the valve object angle uniformly in [−180°, 180°]. For Manipulation-Uniform (Fig. 3c), we sampled object positions s object uniformly on the table surface, and tried two different sampling distributions for the gripper position s robot (see Fig. 6). For both environments, all other state dimensions were sampled uniformly in [−10, 10], and we used 1e4 synthetic state samples to train GAIL. Since the state samples from p * (s) may not be reachable from the initial state, the policy may not be able to fool the discriminator. To get around this problem, we also tried training GAIL with the discriminator input restricted to only the state dimensions corresponding to the object position or gripper position (Manipulation), or the object angle (D'Claw). We summarize these GAIL ablation experiments in Fig. 6. In our experiments, we used the best GAIL ablation model to compare against the exploration baselines in Figures 3c and 4. In our SMM implementation, we estimated the density of data x as q(x) ≈ decoder(x | z = encoder(x)). That is, we encoded x to z, reconstructed x̂ from z, and then took the likelihood of the true data x under a unit-variance Gaussian distribution centered at the reconstructed x̂.
The log-likelihood is therefore given by the negative squared error between the data x and the reconstruction x̂, plus a constant that is independent of x: log q(x) = −(1/2)∥x − x̂∥ 2 2 + C. We compare the wall-clock time of each exploration method in Table 2. The computational cost of our method is comparable with prior work. Figure 6: We studied the effect of restricting the GAIL discriminator input to fewer state dimensions. (a) Manipulation: We trained the GAIL discriminator on the entire state vector s; on the object and gripper positions {s object, s robot} only; or on the object position s object only. We also varied the sampling distribution for the gripper position, p * (s robot): we compare using a normal distribution, N (s object, I 3), to sample gripper positions closer to the object, versus a uniform distribution, Uniform[−10, 10], for greater entropy of the sampled gripper positions. We observe that sampling gripper positions closer to the object position improves the entropy of the object position H π [s object], but hurts the entropy of the gripper position. (b) D'Claw: We restricted the discriminator to the entire state vector s, or to the object angle and position s object. Analysis: In both domains, we observe that restricting the discriminator input to fewer state dimensions (e.g., to s object) makes the discriminator less capable of distinguishing between expert and policy states (orange and green curves). On the other hand, training on the entire state vector s causes the discriminator loss to approach 0 (i.e., perfect classification), partly because some of the "expert" states sampled from p * (s) are not reachable from the initial state, and the policy is thus unable to fool the discriminator. We summarize hyperparameter settings in Table 3. All algorithms were trained for 1e5 steps on Navigation, 1e6 steps on Manipulation, 1e6 steps on D'Claw Sim2Real, and 1e5 steps on D'Claw hardware. Loss Hyperparameters. For each exploration method, we tuned the weights of the different loss components. SAC reward scale controls the weight of the action entropy reward relative to the extrinsic reward. Count coeff controls the intrinsic count-based exploration reward w.r.t. the extrinsic reward and SAC action entropy reward. Similarly, Pseudocount coeff controls the intrinsic pseudocount exploration reward. Historical Averaging. In the Manipulation experiments, we tried the following sampling strategies for historical averaging: Uniform: sample policies uniformly across training iterations. Exponential: sample policies, with recent policies sampled exponentially more often than earlier ones. Last: sample the N latest policies uniformly at random. We found that Uniform worked less well, possibly due to the policies at early iterations not being trained enough. We found negligible difference in the state entropy metric between Exponential vs. Last, and between sampling 5 vs. 10 historical policies, and we also note that it is unnecessary to keep checkpoints from every iteration. Network Hyperparameters. For all algorithms, we use a Gaussian policy with two hidden layers with Tanh activation and a final fully-connected layer. The Value function and Q-function are each a feedforward MLP with two hidden layers with ReLU activation and a final fully-connected layer. Each hidden layer is of size 300 (SMM, SAC, ICM, C, PC) or 256 (GAIL). The same network configuration is used for the SMM discriminator, d(z | s), and the GAIL discriminator, but with different input and output sizes.
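Returning to the density model used by SMM and Pseudocounts: a minimal sketch of the reconstruction-based estimate described above (our own illustration; the encoder and decoder callables are assumed given):

import numpy as np

def vae_log_density(x, encoder, decoder):
    """Estimate log q(x) up to an additive constant, using a unit-variance
    Gaussian decoder centered at the reconstruction."""
    z = encoder(x)           # encode the state
    x_hat = decoder(z)       # reconstruct it
    return -0.5 * np.sum((np.asarray(x) - np.asarray(x_hat)) ** 2)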
The SMM density model, q(s), is modeled by a VAE with encoder and decoder networks, each consisting of two hidden layers with ReLU activation. The same VAE network configuration is used for Pseudocount. The replay buffer is filled with 1e4 random actions before training, for training stability. We perform one discriminator update per SAC update. For both Manipulation and D'Claw, we used 1e4 states sampled from p * (s). Other hyperparameter settings, such as batch size for both discriminator and policy updates, are summarized in Table 3. We observed that GAIL training is less stable than the exploration baselines. Thus, for GAIL, we did not take the final iterate (e.g., the policy at convergence) but instead used early termination (e.g., taking the best iterate according to the state entropy metric). We visualize where different methods push the block in the Manipulation environment. More precisely, we visualize the log state marginal log ρ πz (s) over block XY-coordinates s = (x, y) in Figures 7 and 8. In Figure 9, we plot goals sampled at test-time, colored by the number of episodes each method required to push the block to that goal location. Blue dots indicate that the agent found the goal quickly. We observe that SMM has the most blue dots, indicating that it succeeds in exploring a wide range of states at test-time. The environment reward is a weighted sum of three terms: r goal (s) (+0 if the object is on the table, -1 otherwise), r robot (s) (+1 if the robot gripper is close to the block), and r action (s) (an action penalty term), with weights -20, 1, 0.1 respectively (see Appendix D.1). The three exploration methods (ICM, Count, SMM) also optimize an auxiliary exploration loss, which makes the agent more likely to move the block around. Compared to SAC, this causes the exploration methods to get worse returns for r goal (s) and r action (s) (due to the agent moving the block around), but they also quickly learn to maximize the sparse reward r robot (s) (an indicator reward for moving the gripper within a threshold distance to the block).
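As an aside on the Manipulation-Uniform target distribution of Appendix D.1, the following sketch (our own illustration) assembles the unnormalized log target density from the three shaping terms; the on-table term r_goal is passed in precomputed, and the default weights simply reuse the r_env coefficients as placeholders (an assumption, since the paper does not state the alpha values):

import numpy as np

def log_p_star_manipulation(r_goal, s_obj, s_robot, action, alpha=(20.0, 1.0, 0.1)):
    """Unnormalized log p*(s) = a1 * r_goal + a2 * r_robot + a3 * r_action."""
    r_robot = float(np.sum((np.asarray(s_obj) - np.asarray(s_robot)) ** 2) < 0.1)  # gripper near block
    r_action = -float(np.sum(np.asarray(action) ** 2))                             # action penalty
    a1, a2, a3 = alpha
    return a1 * r_goal + a2 * r_robot + a3 * r_action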
Hkla1eHFvS
We view exploration in RL as a problem of matching a marginal distribution over states.
The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models. For sensory perception tasks, neural networks have mostly replaced handcrafted features. Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work. However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not. The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet). Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood. Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant. Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations. Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant. As it turns out, a convolution operation can be defined for almost any group of transformations - not just translations. Figure 1: Hexagonal G-CNN. A p6 group convolution is applied to a single-channel hexagonal image f and filter ψ 1, producing a single p6 output feature map f ⋆ ψ 1 with 6 orientation channels. This feature map is then group-convolved again with a p6 filter ψ 2. The group convolution is implemented as a Filter Transformation (FT) step, followed by a planar hexagonal convolution. As shown here, the filter transform of a planar filter involves only a rotation, whereas the filter transform for a filter on the group p6 involves a rotation and orientation channel cycling.
Note that in general, the orientation channels of p6 feature maps will not be rotated copies of each other, as happens to be the case in this figure. By simply replacing convolutions with group convolutions (wherein filters are not just shifted but transformed by a larger group; see Figure 1), convolutional networks can be made equivariant to and exploit richer groups of symmetries BID0. Furthermore, this technique was shown to be more effective than data augmentation. Although the general theory of such group equivariant convolutional networks (G-CNNs) is applicable to any reasonably well-behaved group of symmetries (including at least all finite, infinite discrete, and continuous compact groups), the group convolution is easiest to implement when all the transformations in the group of interest are also symmetries of the grid of pixels. For this reason, G-CNNs were initially implemented only for the discrete groups p4 and p4m, which include integer translations, rotations by multiples of 90 degrees, and, in the case of p4m, mirror reflections - the symmetries of a square lattice. The main hurdle that stands in the way of a practical implementation of group convolution for a continuous group, such as the roto-translation group SE(2), is the fact that it requires interpolation in order to rotate the filters. Although it is possible to use bilinear interpolation in a neural network BID10, it is somewhat more difficult to implement, computationally expensive, and most importantly, may lead to numerical approximation errors that can accumulate with network depth. This has led us to consider the hexagonal grid, wherein it is possible to rotate a filter by any multiple of 60 degrees, without interpolation. This allows us to define group convolutions for the groups p6 and p6m, which contain integer translations, rotations by multiples of 60 degrees, and mirroring for p6m. To our surprise, we found that even for translational convolution, a hexagonal pixelation appears to have significant advantages over a square pixelation. Specifically, hexagonal pixelation is more efficient for signals that are band limited to a circular area in the Fourier plane BID17, and hexagonal pixelation exhibits improved isotropic properties such as twelve-fold symmetry and six-connectivity, compared to eight-fold symmetry and four-connectivity of square pixels BID15 BID2. Furthermore, we found that using small, approximately round hexagonal filters with 7 parameters works better than square 3 × 3 filters when the number of parameters is kept the same. As hypothesized, group convolution is also more effective on a hexagonal lattice, due to the increase in weight sharing afforded by the higher degree of rotational symmetry. Indeed, the general pattern we find is that the larger the group of symmetries being exploited, the better the accuracy: p6-convolution outperforms p4-convolution, which in turn outperforms ordinary translational convolution. In order to use hexagonal pixelations in convolutional networks, a number of challenges must be addressed. Firstly, images sampled on a square lattice need to be resampled on a hexagonal lattice. This is easily achieved using bilinear interpolation. Secondly, the hexagonal images must be stored in a way that is both memory efficient and allows for a fast implementation of hexagonal convolution. To this end, we review various indexing schemes for the hexagonal lattice, and show that for some of them, we can leverage highly optimized square convolution routines to perform the hexagonal convolution.
Finally, we show how to efficiently implement the filter transformation step of the group convolution on a hexagonal lattice. We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) BID21. Aerial images are one of the many image types where the label function is invariant to rotations: one expects that rotating an aerial image does not change the label. In situations where the number of examples is limited, data efficient learning is important. Our experiments demonstrate that group convolutions systematically improve performance. The method outperforms the baseline model pretrained on ImageNet, as well as comparable architectures with the same number of parameters. Source code of G-HexaConvs is available on Github: https://github.com/ehoogeboom/hexaconv. The remainder of this paper is organized as follows: In Section 2 we summarize the theory of group equivariant networks, Section 3 provides an overview of different coordinate systems on the hexagonal grid, Section 4 discusses the implementation details of the hexagonal G-convolutions, in Section 5 we introduce the experiments and present results, and Section 6 gives an overview of other related work, after which we discuss our findings and conclude. In this section we review the theory of G-CNNs, as presented by BID0. To begin, recall that normal convolutions are translation equivariant. More formally, let L t denote the operator that translates a feature map f: Z 2 → R K by t ∈ Z 2, and let ψ denote a filter. Translation equivariance is then expressed as: [L t f] ⋆ ψ = L t [f ⋆ ψ]. (1) In words: translation followed by convolution equals convolution followed by a translation. If instead we apply a rotation r, we obtain: [L r f] ⋆ ψ = L r [f ⋆ [L r −1 ψ]]. (2) That is, the convolution of a rotated image L r f by a filter ψ equals the rotation of a convolved image f by an inversely rotated filter L r −1 ψ. There is no way to express [L r f] ⋆ ψ in terms of f ⋆ ψ, so convolution is not rotation equivariant. The convolution is computed by shifting a filter over an image. By changing the translation to a transformation from a larger group G, a G-convolution is obtained. Mathematically, the G-convolution for a group G and input space X (e.g. the square or hexagonal lattice) is defined as: [f ⋆ ψ](g) = Σ_{x ∈ X} Σ_k f k (x) ψ k (g −1 x), (3) where k denotes the input channel, f k and ψ k are signals defined on X, and g is a transformation in G. The standard (translational) convolution operation is a special case of the G-convolution for X = G = Z 2, the translation group. In a typical G-CNN, the input is an image, so we have X = Z 2. Figure 2: Four candidate coordinate systems for a hexagonal grid. Notice that the cube coordinate system uses three integer indexes and both the axial and cube coordinate system may have negative indices when using a top left origin. Equation 3 gives a mathematical definition of group convolution, but not an algorithm. To obtain a practical implementation, we use the fact that the groups of interest can be split into a group of translations (Z 2), and a group H of transformations that leaves the origin fixed (e.g. rotations and/or reflections about the origin). The G-Conv can then be implemented as a two-step computation: filter transformation (H) and planar convolution (Z 2). G-CNNs generally use two kinds of group convolutions: one in which the input is a planar image and the output is a feature map on the group G (for the first layer), and one in which the input and output are both feature maps on G.
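As a minimal illustration of the filter transformation step, the following sketch (our own, not the authors' code) implements it for the square-lattice group p4 in the first-layer case, where the transformation is just a stack of 90-degree filter rotations; the hexagonal case described in this paper replaces np.rot90 with a 60-degree rotation on the axial lattice:

import numpy as np

def p4_transform_filterbank(Psi):
    """First-layer p4 filter transformation.

    Psi: learnable filterbank of shape [C, K, S, S].
    Returns a bank of shape [4 * C, K, S, S] containing all four
    90-degree rotations of every filter (one block per rotation)."""
    rotated = [np.rot90(Psi, k=r, axes=(2, 3)) for r in range(4)]
    return np.concatenate(rotated, axis=0)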
We can provide a unified explanation of the filter transformation step by introducing H in and H out. In the first-layer G-Conv, H in = {e} is the trivial group containing only the identity transformation, while H out = H is typically a group of discrete rotations (4 or 6).For the second-layer G-Conv, we have DISPLAYFORM0 The input for the filter transformation step is a learnable filterbank Ψ of shape C × K · |H in | × S × S, where C, K, S denote the number of output channels, input channels, and spatial length, respectively. The output is a filterbank of shape C · |H out | × K · |H in | × S × S, obtained by applying each h ∈ H out to each of the C filters. In practice, this is implemented as an indexing operation Ψ[I] using a precomputed static index array I.The second step of the group convolution is a planar convolution of the input f with the transformed filterbank Ψ [I]. In what follows, we will show how to compute a planar convolution on the hexagonal lattice (Section 3), and how to compute the indexing array I used in the filter transformation step of G-HexaConv (Section 4). The hexagonal grid can be indexed using several coordinate systems (see Figure 2). These systems vary with respect to three important characteristics: memory efficiency, the possibility of reusing square convolution kernels for hexagonal convolution, and the ease of applying rotations and flips. As shown in Figure 3, some coordinate systems cannot be used to represent a rectangular image in a rectangular memory region. In order to store a rectangular image using such a coordinate system, extra memory is required for padding. Moreover, in some coordinate systems, it is not possible to use standard planar convolution routines to perform hexagonal convolutions. Specifically, in the Offset coordinate system, the shape of a hexagonal filter as represented in a rectangular memory array changes depending on whether it is centered on an even or odd row (see Figure 4).Because no coordinate system is ideal in every way, we will define four useful ones, and discuss their merits. Figures 2, 3 and 4 should suffice to convey the big picture, so the reader may skip to Section 4 on a first reading.2 To be precise, the group G is a semidirect product: DISPLAYFORM0 The group G generated by compositions of translations and rotations around the origin, contains rotations around any center.. Standard 2D convolution using both feature map and filter stored according to the coordinate system is equivalent to convolution on the hexagonal lattice. Note that for the offset coordinate system two separate planar convolution are required -one for even and one for odd rows. Perhaps the most natural coordinate system for the hexagonal lattice is based on the lattice structure itself. The lattice contains all points in the plane that can be obtained as an integer linear combination of two basis vectors e 1 and e 2, which are separated by an angle of 60 degrees. The Axial coordinate system simply represents the pixel centered at ue 1 + ve 2 by coordinates (u, v) (see Figure 2a).Both the square and hexagonal lattice are isomorphic to Z 2. The planar convolution only relies on the additive structure of Z 2, and so it is possible to simply apply a convolution kernel developed for rectangular images to a hexagonal image stored in a rectangular buffer using axial coordinates. As shown in Figure 3a, a rectangular area in the hexagonal lattice corresponds to a parallelogram in memory. 
Thus the axial system requires additional space for padding in order to store an image, which is its main disadvantage. When representing an axial filter in memory, the corners of the array need to be zeroed out by a mask (see Figure 4a). The cube coordinate system represents a 2D hexagonal grid inside of a 3D cube (see Figure 5). Although representing grids in three dimensional structures is very memory-inefficient, the cube system is useful because rotations and reflections can be expressed in a very simple way. Furthermore, the conversion between the axial and cube systems is straightforward: x = v, y = −(u + v), z = u. Hence, we only use the Cube system to apply transformations to coordinates, and use other systems for storing images. A counter-clockwise rotation by 60 degrees can be performed by the following formula: DISPLAYFORM0 Similarly, a mirroring operation over the vertical axis through the origin is computed with: Figure 5: The cube coordinate system as a 3D structure. DISPLAYFORM1 The double width system is based on two orthogonal axes. Stepping to the right by 1 unit in the hexagonal lattice, the u-coordinate is incremented by 2 (see Figure 2c). Furthermore, odd rows are offset by one unit in the u direction. Together, this leads to a checkerboard pattern (Figure 3b) that doubles the image and filter size by a factor of two. The good thing about this scheme is that a hexagonal convolution can be implemented as a rectangular convolution applied to checkerboard arrays, with checkerboard filter masking. This works because the checkerboard sparsity pattern is preserved by the square convolution kernel: if the input and filter have this pattern, the output will too. As such, HexaConv is very easy to implement using the double width coordinate system. It is however very inefficient, so we recommend it only for use in preliminary experiments. Like the double width system, the offset coordinate system uses two orthogonal axes. However, in the offset system, a one-unit horizontal step in the hexagonal lattice corresponds to a one-unit increment in the u-coordinate. Hence, rectangular images can be stored efficiently and without padding in the offset coordinate system (see Figure 3c).The downside to offset coordinates is that hexagonal convolutions cannot be expressed as a single 2D convolution (see Figure 4c and 4d), because the shape of the filters is different for even and odd rows. Given access to a convolution kernel that supports strides, it should be possible to implement hexagonal convolution in the offset system using two convolution calls, one for the even and one for the odd row. Ideally, these two calls would write to the same output buffer (using a strided write), but unfortunately most convolution implementations do not support this feature. Hence, the of the two convolution calls has to be copied to a single buffer using strided indexing. We note that a custom HexaConv kernel for the offset system would remove these difficulties. Were such a kernel developed, the offset system would be ideal because of its memory efficiency. The group convolution can be factored into a filter transformation step and a hexagonal convolution step, as was mentioned in Section 2 and visualized in Figure 1. In our implementation, we chose to use the Axial coordinate system for feature maps and filters, so that the hexagonal convolution can be performed by a rectangular convolution kernel. 
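Before turning to the implementation details, the coordinate conversion and the 60-degree rotation from Section 3 can be sketched as follows (our own illustration; whether the signed cyclic permutation below corresponds to the clockwise or counter-clockwise rotation depends on the basis convention, so the direction is an assumption):

def axial_to_cube(u, v):
    # Conversion given in the text: x = v, y = -(u + v), z = u.
    return v, -(u + v), u

def cube_to_axial(x, y, z):
    return z, x            # u = z, v = x

def rotate_60(u, v):
    """Rotate an axial coordinate by 60 degrees about the origin.
    In cube coordinates a 60-degree rotation is the signed cyclic
    permutation (x, y, z) -> (-z, -x, -y)."""
    x, y, z = axial_to_cube(u, v)
    return cube_to_axial(-z, -x, -y)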
In this section, we explain the filter transformation and masking in general; for more details see Appendix A. The general procedure described in Section 2.1 also applies to hexagonal group convolution (p6 and p6m). In summary, a filter transformation is applied to a learnable filter bank Ψ, resulting in an expanded filter bank that can be used to compute the group convolution using (multiple) planar convolutions (see the top of Figure 1 for a visual portrayal of this transformation). In practice, this transformation is implemented as an indexing operation on Ψ using a precomputed index array (see Section 2.1 and Appendix A). Although convolution with filters and feature maps laid out according to the Axial coordinate system is equivalent to convolution on the hexagonal lattice, both the filters and the feature maps contain padding (see Figures 3 and 4), since the planar convolution routines operate on rectangular arrays. As a consequence, non-zero output may be written to the padding area of both the feature maps and the filters. To address this, we explicitly perform a masking operation on feature maps and filters after every convolution operation and parameter update, to ensure that values in the padding area stay strictly equal to zero. We perform experiments on the CIFAR-10 and the AID datasets. Specifically, we compare the accuracy of our G-HexaConvs (p6- and p6m-networks) to that of existing G-networks (p4- and p4m-networks) BID0 and standard networks (Z 2). Moreover, the effect of utilizing a hexagonal lattice is evaluated in experiments using the HexaConv network (hexagonal lattice without group convolutions, or Z 2 Axial). Our experiments show that the use of a hexagonal lattice improves upon the conventional square lattice, both when using planar convolutions and when using p6-convolutions. CIFAR-10 is a standard benchmark that consists of 60000 images of 32 by 32 pixels and 10 different target classes. We compare the performance of several ResNet-based G-networks BID8. Specifically, every G-ResNet consists of 3 stages, with 4 blocks per stage. Each block has two 3 by 3 convolutions, and a skip connection from the input to the output. Spatial pooling is applied to the penultimate layer, which leaves the orientation channels intact and allows the network to maintain orientation equivariance. Moreover, the number of feature maps is scaled such that all G-networks are made up of a similar number of parameters. For hexagonal networks, the input images are resampled to the hexagonal lattice using bilinear interpolation (see FIG3). Since the classification performance of a HexaConv network does not degrade, the quality of these interpolated images suffices. The CIFAR-10 results are presented in TAB0, obtained by taking the average of 10 experiments with different random weight initializations. We note that the HexaConv CNN (Z 2 Axial) outperforms the standard CNN (Z 2). Moreover, we find that p4- and p4m-ResNets are outperformed by our p6- and p6m-ResNets, respectively. We also find a general pattern: using groups with an increasing number of symmetries consistently improves performance. The Aerial Image Dataset (AID) BID21 is a dataset consisting of 10000 satellite images of 400 by 400 pixels and 30 target classes. The labels of aerial images are typically invariant to rotations, i.e., one does not expect labels to change when an aerial image is rotated. For each experiment, we split the data set into random 80% train/20% test sets. This contrasts with the 50%/50% train/test split used by BID21. Since the test sets are smaller, experiments are performed multiple times with randomly selected splits to obtain better estimates.
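As an aside on the masking operation described earlier in this section, the axial-coordinate mask for a hexagonal filter can be sketched as follows (our own illustration; the convention for which two corners fall outside the hexagon is an assumption, and with radius 1 this keeps the 7 parameters of the small hexagonal filters used in the paper):

import numpy as np

def axial_hex_mask(radius):
    """Binary mask selecting the hexagonal neighborhood inside a
    (2*radius+1) x (2*radius+1) axial array; the two corners with
    |du + dv| > radius are zeroed out."""
    size = 2 * radius + 1
    mask = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            du, dv = i - radius, j - radius
            if abs(du + dv) <= radius:
                mask[i, j] = 1.0
    return mask

# Applied to filters (and feature maps) after every convolution and parameter update,
# e.g. filters = filters * axial_hex_mask(1)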
All models are evaluated on identical randomly selected splits, to ensure that the comparison is fair. As a baseline, we take the best performing model from BID21, which uses VGG16 as a feature extractor and an SVM for classification. Because the baseline was trained on 50%/50% splits, we re-evaluate the model trained on the same 80%/20% splits. We again test several G-networks with ResNet architectures. The first convolution layer has stride two, and the ResNets take resized 64 by 64 images as input. The networks are widened to account for the increase in the number of classes. Similar to the CIFAR-10 experiments, the networks still consist of 3 stages, but have two blocks per stage. In contrast with the CIFAR-10 networks, pooling is applied to the spatial dimensions and the orientation channels in the penultimate layer. This allows the network to become orientation invariant. The results for the AID experiment are presented in TAB1. The error decreases from 19.3% on a Z 2 -ResNet to an impressive 8.6% on a p6-ResNet. We found that adding mirror symmetry did not meaningfully affect performance (10.2% error for p4m and 9.3% error for p6m). This suggests that mirror symmetry is not an effective inductive bias for AID. It is worth noting that the baseline model has been pretrained on ImageNet, and all our models were trained from random initialization. These results demonstrate that group convolutions can improve performance drastically, especially when symmetries in the dataset match the selected group. The effect of changing the sampling lattice for image processing from square to hexagonal has been studied over many decades. The isoperimetry of a hexagon, and the uniform connectivity of the lattice, make the hexagonal lattice a more natural way to tile the plane BID16. In certain applications hexagonal lattices have been shown to be superior to square lattices BID17 BID7. Transformational equivariant representations have received significant research interest over the years. Methods for invariant representations in hand-crafted features include pose normalization BID13 BID3 and projections from the plane to the sphere BID11. Although approximate transformational invariance can be achieved through data augmentation BID19, in general much more complex neural networks are required to learn the invariances that are known to the designer a priori. As such, in recent years, various approaches for obtaining equivariant or invariant CNNs - with respect to specific transformations of the data - were introduced. A number of recent works propose to rotate either the filters or the feature maps, followed by channel permutations, to obtain equivariant (or invariant) CNNs BID0 BID4 BID12. Steerable CNNs describe a general framework of equivariant networks with respect to discrete and continuous groups, based on steerable filters, that includes group convolutions as a special case. Harmonic Networks BID20 apply the theory of Steerable CNNs to obtain a CNN that is approximately equivariant with respect to continuous rotations. Deep Symmetry Networks BID6 are approximately equivariant CNNs that leverage sparse high-dimensional feature maps to handle high-dimensional symmetry groups. BID14 obtain rotational equivariance by rotating filters followed by a pooling operation that maintains both the angle of the maximum magnitude and the magnitude itself, resulting in a vector field feature map.
BID18 study equivariance in neural networks through symmetries in the network architecture, and propose two parameter-sharing schemes to achieve equivariance with respect to discrete group actions. Instead of enforcing invariance at the architecture level, Spatial Transformer Networks BID10 explicitly spatially transform feature maps dependent on the feature map itself, resulting in invariant models. Similarly, Polar Transformer Networks BID5 transform the feature maps to a log-polar representation conditional on the feature maps, such that subsequent convolutions correspond to group (SIM) convolutions. Invariant CNNs with respect to spatial transformations have also been obtained by warping the input and filters by a predefined warp. Due to the dependence on global transformations of the input, these methods are limited to global symmetries of the data.

To understand the filter transformation step intuitively, we highly recommend studying Figure 1. Below we give a precise definition that makes explicit the relation between the mathematical model and computational practice. Recall that in our mathematical model, filters and feature maps are considered functions ψ: X → R^K, where X = Z^2 or X = G. In the filter transformation step, we need to compute the transformed filter L_r ψ for each rotation r ∈ H and each filter ψ, thus increasing the number of output channels by a factor |H|. The rotation operator L_r is defined by the equation [L_r ψ](h) = ψ(r^{-1} h). Our goal is to implement this as a single indexing operation Ψ[I], where Ψ is an array storing our filter, and I is a precomputed array of indices. In order to compute I, we need to relate the elements of the group, such as r^{-1} h, to indices.

To simplify the exposition, we assume there is only one input channel and one output channel; K = C = 1. A filter ψ will be stored in an n-dimensional array Ψ, where n = 2 if X = Z^2 and n = 3 if X = G. An n-dimensional array has a set of valid indices I ⊂ Z^n. Thus, we can think of our array as a function that eats an index and returns a scalar, i.e. Ψ: I → R. If in addition we have an invertible indexing function ι: X → I, we can consider the array Ψ as a representation of our function ψ: X → R, by setting ψ(x) = Ψ[ι(x)]. Conversely, we can think of ψ as a representation of Ψ, because Ψ[i] = ψ(ι^{-1}(i)). In other words, ψ = Ψ ∘ ι, so the maps ψ: X → R, ι: X → I, and Ψ: I → R form a commutative triangle.

With this setup in place, we can implement the transformation operator L_r by moving back and forth between I (the space of valid indices) and X (where inversion and composition of elements are defined). Specifically, we can define:

[L_r Ψ][i] = Ψ[ι(r^{-1} ι^{-1}(i))].

That is, we convert our index i to a group element h = ι^{-1}(i). Then, we compose with r^{-1} to get r^{-1} h, which is where we want to evaluate ψ. To do so, we convert r^{-1} h back to an index using ι, and use it to index Ψ. To perform this operation in one indexing step as Ψ[I], we precompute the indexing array I:

I[i] = ι(r^{-1} ι^{-1}(i)).

Finally, we need to choose a representation of our group elements h that allows us to compose them easily, and choose an indexing map ι. An element h ∈ p6 can be written as a rotation-translation pair (m, t). The rotation component can be encoded as an integer 0, ..., 5, and the translation can be encoded in the Axial coordinate frame (Section 3.1) as a pair of integers u, v. To compute r^{-1} h, we use that r^{-1}(m, t) = (r^{-1} m, r^{-1} t).
The rotational composition reduces to addition modulo 6 (which results in the orientation channel cycling behavior pictured in Figure 1), while r^{-1} t can be computed by converting to the Cube coordinate system and using Equation 5 (which spatially rotates the filter offsets).
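To make the indexing scheme concrete, the following is a minimal NumPy sketch of the p6 filter transformation in the Axial layout. It is an illustration rather than the paper's implementation: the helper names (axial_to_cube, rotate60, transform_p6_filter), the rotation direction, and the layout conventions are assumptions made for clarity, and a real implementation would precompute the index array I once and apply it as a single gather Ψ[I].

```python
import numpy as np

def axial_to_cube(u, v):
    # Axial (u, v) -> Cube (x, y, z) with x + y + z = 0 (assumed convention)
    x, z = u, v
    return x, -x - z, z

def cube_to_axial(x, y, z):
    return x, z

def rotate60(u, v, k):
    # Rotate an Axial offset k times by 60 degrees about the origin.
    x, y, z = axial_to_cube(u, v)
    for _ in range(k % 6):
        x, y, z = -z, -x, -y          # one 60-degree rotation in Cube coordinates
    return cube_to_axial(x, y, z)

def transform_p6_filter(psi):
    """Build the |H| = 6 rotated copies of a p6 filter.

    psi: array of shape (6, n, n); axis 0 indexes the input orientation
    channel, the last two axes are Axial (u, v) offsets relative to the
    filter center. Returns an array of shape (6, 6, n, n) where the first
    axis indexes the rotation r, i.e. out[r] = L_r(psi).
    """
    S, n, _ = psi.shape
    c = n // 2                        # index of the filter center
    out = np.zeros((6, S, n, n), dtype=psi.dtype)
    for r in range(6):
        for s in range(S):
            for i in range(n):
                for j in range(n):
                    uu, vv = rotate60(i - c, j - c, -r)   # r^{-1} applied to the offset
                    ii, jj = uu + c, vv + c
                    if 0 <= ii < n and 0 <= jj < n:       # padding positions stay zero (masking)
                        out[r, s, i, j] = psi[(s - r) % 6, ii, jj]
    return out

# usage: a random 5x5 Axial filter with 6 orientation channels
psi = np.random.randn(6, 5, 5)
print(transform_p6_filter(psi).shape)   # (6, 6, 5, 5)
```

Note how the orientation channel index is cycled by (s - r) mod 6 while the spatial offset is rotated through the Cube coordinate system, mirroring the two parts of r^{-1}(m, t) described above.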
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1vuQG-CW
We introduce G-HexaConv, a group equivariant convolutional neural network on hexagonal lattices.
Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are still in need of substantial improvement. Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window. In general, these approaches are time consuming, requiring many classification calculations. In this paper, we offer a fundamentally different approach to the localization of recognized objects in images. Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights. We provide a simple method to interpret classifier weights in the context of individual classified images. This method involves the calculation of the derivative of network generated activation patterns, such as the activation of output class label units, with regard to each input pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition. These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image. We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object. Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique.

Deep Convolutional Neural Networks (CNNs) have been shown to be effective at image classification, accurately performing object recognition even with thousands of object classes when trained on a sufficiently rich data set of labeled images BID14. One advantage of CNNs is their ability to learn complete functional mappings from image pixels to object categories, without any need for the extraction of hand-engineered image features BID21. To facilitate learning through stochastic gradient descent, CNNs are (at least approximately) differentiable with regard to connection weight parameters. Image classification, however, is only one of the problems of computer vision. In the task of image classification, each image has a single label, associated with the class identity of the main object in the image, and the goal is to assign correct labels in a manner that generalizes to novel images. This can be accomplished by training a machine learning classifier, such as a CNN, on a large data set of labeled images BID5. In the object localization task, in comparison, the output for a given image is not a class label but the locations of a specified number of objects in the image, usually encoded as bounding boxes. Evaluation of an object localization system generally requires ground truth bounding boxes to compare to the system's output. The detection task is more difficult than the localization task, as the number of objects is not predetermined BID21. In this paper, we focus on object localization, identifying the position in the image of a recognized object. As is common in the localization literature, position information is output in the form of a bounding box.
Previously developed techniques for accomplishing this task generally involve searching the image for the object, considering many candidate bounding boxes with different sizes and locations, sometimes guided by an auxiliary algorithm for heuristically identifying regions of interest BID21; BID10; BID13. For each candidate location, the sub-image captured by the bounding box is classified for object category, with the final output bounding box either being the specific candidate region classified as the target object with the highest level of certainty or some heuristic combination of neighboring or overlapping candidate regions with high classification certainty. These approaches tend to be time consuming, often requiring deep CNN classification calculations of many candidate regions at multiple scales. Efforts to speed these methods mostly focus on reducing the number of regions considered, typically by using some adjunct heuristic region proposal algorithm BID10; BID17; BID13. Still, the number of considered regions is often reported to be roughly 2,000 per image. While these approaches can be fairly accurate, their slowness limits their usefulness, particularly for online applications.

Figure 1: Examples of sensitivity maps, displaying the sensitivity of network internal representations to individual pixels, providing information about the locations of the main objects in the source images.

A noteworthy alternative approach is to directly train a deep CNN to produce outputs that match ground truth localization bounding boxes, using a large image data set that provides both category and localization information for each image. It appears as if some form of this method was used with AlexNet BID14, though details concerning localization, rather than image classification, are difficult to discern from the published literature. A natural approach would be to cast the learning of bounding boxes as a simple regression problem, with targets being the four coordinates that specify a bounding box (e.g., coordinates of upper-left and lower-right corners, or region center coordinates along with region width and height). It is reasonable to consider sharing early layers of a deep CNN, such as those performing convolution and max pooling, between both an image classification network and an object localization network. Indeed, taking such a multitask learning approach BID2 can allow for both object category and object location training data to shape connection weights throughout the network. Thus, the deep CNN would have "two heads", one for image classification, using a classification cross-entropy loss function, and one for object localization, reducing the L2 norm between ground truth and predicted bounding box coordinates BID14. While this approach can produce a network that quickly outputs location information, extensive training on large data sets containing ground truth bounding box information is necessary to produce good generalization.

In this paper, we introduce an approach to object localization that is both very fast and robust in the face of limited ground truth bounding box training data. This approach is rooted in the assertion that any deep CNN for image classification must contain, implicit in its connection weights, knowledge about the location of recognized objects BID20. The goal, then, is to interpret the flow of activation in an object recognition network when it is performing image classification so as to extract information about object location.
Furthermore, the goal is to do this quickly. Thus, this approach aims to leverage location knowledge that is already latent in extensively trained and tuned image classification networks, without requiring a separate learning process for localization.

Our method makes use of the notion of a sensitivity analysis BID26. We propose estimating the sensitivity of the category outputs, or activation patterns at internal network layers, of an image classification CNN to variance in each input pixel, given a specific input image. The result is a numeric value for each pixel in the input image that captures the degree to which small changes in that pixel (locally, around its current value) give rise to large changes in the output category. Together, these numeric values form a sensitivity map of the image, encoding image regions that are important for the current classification. Our proposed measure of sensitivity is the partial derivative of activity with regard to each pixel value, evaluated for the current image. For a deep CNN that formally embodies a differentiable mapping (at least approximately) from image pixels to output categories, this partial derivative can be quickly calculated. While many tools currently exist for efficiently calculating such derivatives, we provide a simple algorithm that computes these values through a single backward pass through the image classification network, similar to that used to calculate unit error (delta) values in the backpropagation of error learning algorithm BID18. Thus, we can generate a sensitivity map for an image in about the same amount of time as it takes the employed image classification network to produce an output. Some example sensitivity maps are shown in Figure 1.

The idea of using sensitivity information, like that in our sensitivity maps, for a variety of tasks, including localization, has previously appeared in the literature BID24; BID28 BID20. Indeed, some of these past efforts have used more sophisticated measures of sensitivity. In this paper, we show that even our very simple sensitivity measure can produce strong localization performance, and it can do so quickly, without any modifications to the classification network, and even for object categories on which the classification network was not trained. The relationship of the results reported here to previously reported work is discussed further in Section 4.

As previously mentioned, object localization methods typically encode object location as a bounding box. Since our sensitivity maps encode location differently, in terms of pixels, we propose learning a simple linear mapping from sensitivity maps to bounding box coordinates, allowing our method to output a bounding box for each classified image. We suggest that this linear mapping can be robustly learned from a relatively small training set of images with ground truth bounding boxes, since the sensitivity maps form a much simpler input than the original images.
The primary contributions of this paper may be summarized as follows:

• We propose a new general approach to performing object localization, interpreting a previously trained image classification network by performing a sensitivity analysis, identifying pixels to which the category output, or a more general internal representation, is particularly sensitive.

• We demonstrate how a linear function from the resulting sensitivity maps to object location bounding box coordinates may be learned from training images containing ground truth location information.

• We provide a preliminary assessment of our approach, measuring object localization performance on the ImageNet and PASCAL VOC data sets using the VGG16 image classification CNN, showing strong accuracy while maintaining short computation times.

Calculating derivatives of a function of network output with regard to network parameters, such as connection weights, is a standard part of CNN training. It is common for learning in a deep CNN to involve stochastic gradient descent, which involves such derivatives. In that case, the derivatives are of an objective function with regard to connection weight values. In image classification networks, the objective function is designed to have optima where training images are correctly classified. In the case of object localization, a similar objective function could be designed to minimize differences between output bounding box coordinates and provided ground truth bounding box coordinates, for all images in an appropriately labeled training set. For example, given N training images, stored in the matrix X, with the ground truth 4-dimensional bounding box vector for image x_i being y_i, and G(x_i; w) being the CNN output vector for image x_i given connection weights w, an appropriate loss function would be:

ℓ(X, w) = Σ_{i=1}^{N} ||G(x_i; w) − y_i||².

The CNN will produce good estimates of the training image bounding boxes when this loss function is minimized with regard to w. Network weight parameters that minimize this loss, w*, may be sought through stochastic gradient descent, incrementally updating w according to the gradient of ℓ(X, w) with regard to w. A primary drawback of this approach is that it requires a large and representative sample of images with ground truth bounding box information.

Consider that, once weights are found, the gradient of ℓ(X, w*) with regard to X would provide information about the sensitivity of the bounding box loss function with regard to the pixels in the images. This gradient can be calculated as efficiently as the gradient of the loss with regard to the weights, with both depending on the gradient of G(x_i; w) with regard to a subset of its arguments. This means that the gradient of G(x_i; w*) with regard to x_i can be efficiently computed, and that gradient would capture the sensitivity of bounding box coordinates with regard to the specific pixels in image x_i. Note that this gradient can be calculated for images beyond those in the training set. Knowing which pixels in a novel image play an important role in determining the bounding box provides useful information for object localization. Using this calculation to address the object localization task makes little sense, however, as G(x_i; w*) provides an estimate of object location without a need to consider pixel sensitivity. Rather than training a deep CNN to output bounding boxes, requiring extensive labeled data, we propose calculating the same gradient for a different network -- one successfully trained to perform image classification.
If we now see G(x_i; w*) as the output of such an image classification network, its gradient with regard to x_i would provide information about the sensitivity of the assigned category to individual pixels. Pixels with the largest absolute values of this derivative will, around the input x_i, produce the largest changes in the classification decision of the CNN. This can be seen as one measure of how important pixels are for classifying the object in the image. Consider that the object class output is not immediately affected by changes to pixels with a derivative of zero. The calculation of this gradient can be performed as efficiently as a single "backward pass" through the classification network. This is well illustrated by considering the case of a simple layered backpropagation network BID18 in which the "net input" of unit i, η_i, is a weighted sum of the activations of units in the previous layer, and the activation of unit i is g(η_i), where g(·) is the unit activation function. In this case, we can define a sensitivity value for each unit, s_i, as the derivative of the network output with regard to η_i. Using the chain rule of calculus, it is easy to show that the sensitivity of an output unit is g′(η_i), and, for units in earlier layers, the sensitivities are computed as follows:

s_i = g′(η_i) Σ_k w_{ki} s_k,

where k iterates over all units in the immediately downstream layer from unit i and w_{ki} is the connection weight from unit i to unit k. This calculation may be performed, layer by layer, from outputs to inputs, until s_i values for each pixel input unit are available. This demonstrates how efficiently pixel sensitivity values can be calculated for a given classified image. Of course, there are currently a variety of software packages that include tools for calculating gradients. In the evaluation of our approach in Section 3, we report using the tools provided by TensorFlow BID0.

We have proposed using a previously trained image classification network as a source of information about object location, focusing on the gradient of the network output with regard to image pixels. It is interesting to note that it might not be necessary to perform the sensitivity calculation using the full classification network. There is a growing body of research that suggests that, in a well trained image classification CNN, the features that are extracted at the "attention map" layer (i.e., the output of the last convolutional layer) tend to be generally useful for learning a variety of image analysis tasks (Razavian et al.). Inspired by these results, we have investigated the possibility of substituting the gradient of the classifier output with regard to pixels with the gradient of the attention map with regard to pixels. This avoids calculations involving final fully connected layers and any classification softmax layer. Generating image sensitivity maps from the attention map layer is slightly faster than our original proposal, but, more importantly, it is possible that general knowledge about object location might be found in the attention map, and using the attention map as the basis of the sensitivity map might actually generalize beyond the categories on which the image classification CNN was trained. We have not yet done a formal comparison of these two approaches to constructing the sensitivity map, but example results using both approaches are reported in Section 3.
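The layer-by-layer recursion above can be written out directly for a toy fully connected network. The sketch below assumes tanh hidden units and a single linear output unit; it only illustrates the single-backward-pass computation, which in practice we would obtain from an autodiff framework such as TensorFlow.

```python
import numpy as np

def input_sensitivities(x, weights, biases):
    """One forward and one backward pass computing d(output)/d(input) for a
    small MLP with tanh hidden layers and a single linear output unit.
    Implements s_i = g'(eta_i) * sum_k w_ki s_k, layer by layer."""
    # forward pass, caching the net inputs eta of every layer
    pre_acts = []
    h = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        eta = W @ h + b
        pre_acts.append(eta)
        h = eta if l == len(weights) - 1 else np.tanh(eta)   # linear output layer

    # backward pass: sensitivities w.r.t. each unit's net input
    s = np.ones_like(pre_acts[-1])          # d(output)/d(eta_out) = 1 for a linear output
    for l in range(len(weights) - 1, 0, -1):
        s = (weights[l].T @ s) * (1.0 - np.tanh(pre_acts[l - 1]) ** 2)  # tanh'(eta)
    return weights[0].T @ s                 # sensitivities of the input pixels

# usage: a 4-pixel "image", two hidden layers of 8 units, scalar output
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]
print(input_sensitivities(rng.normal(size=4), Ws, bs))
```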
Note that, when using the attention map, we compute the gradient of the aggregated activations of the last convolutional layer with respect to the input pixels. We refer to this aggregate as the Gestalt Total (GT), computed as follows:

GT = Σ_{h=1}^{H} Σ_{w=1}^{W} Σ_{c=1}^{C} A_n(h, w, c),

where A_n is the activation map of the last convolutional layer, and H, W and C are the height, width and number of channels of that layer.

The sensitivity map calculations that have been described so far provide a scalar sensitivity value for each input to the image classification deep CNN. Color images, however, are regularly provided to such networks using multiple inputs per image pixel, often encoding each pixel over three color channels. Thus, the gradient calculation will actually produce three sensitivity values for each pixel. Since we hope to produce a sensitivity map that focuses in a general way on location information, it seems reasonable to aggregate the three sensitivity values into one. Since the direction of the sensitivity relationship with the class output is irrelevant, a good first step is to take the absolute value of each derivative. Given that dependence on even a single color channel suggests that a pixel is important for identifying the object, an argument can be made that a pixel should be labeled with the maximum of the three absolute derivatives. Alternatively, it could be argued that all color channels should be taken into account when producing the sensitivity map, in which case it might be better to average the three absolute derivatives. We have explored both of these aggregation methods, with results appearing in Section 3.

Object localization algorithms typically output the four coordinates of a bounding box to communicate the location of the target object. Such a bounding box is not intrinsic to a sensitivity map, however. Heuristic techniques could be used to identify a rectangular region that captures the majority of the high sensitivity pixels, while avoiding low sensitivity pixels, but we have taken a different approach. We have opted to learn a linear mapping from sensitivity maps to bounding box coordinates, using training images with ground truth location information. It is important to note that learning this mapping is not the same as learning to map from the original images to bounding box coordinates, as has been done in some other object localization systems. Sensitivity maps contain much less information than the original images, so using the sensitivity maps as inputs both reduces the dimensionality of the input to this mapping and makes for a simpler functional relationship between pixels and bounding box coordinates. We expect that this simplification will allow the mapping to bounding box coordinates to be successfully learned using a far smaller set of training images labeled with ground truth object locations. Indeed, we expect that a simple linear mapping could perform well.

Formally, we define the parameters of the linear mapping to the four bounding box coordinates as a 4 × M matrix, Ŵ (where M is the number of pixels in an image), and a 4-dimensional vector of "bias weights", ŵ. Given a sensitivity map, s, the output is (Ŵs + ŵ). Given a training set of N images, the mapping is found by minimizing the following objective function with regard to Ŵ and ŵ:

Σ_{i=1}^{N} Σ_{j=1}^{4} ((Ŵs_i + ŵ)_j − B_{i,j})²,

where s_i is the sensitivity map for the i-th image, and B_{i,j} is the j-th coordinate of the bounding box for the i-th image.
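The two ingredients just described — aggregating per-channel derivatives into one sensitivity value per pixel, and fitting the linear map to bounding box coordinates — can be sketched compactly as follows. The function names are illustrative, and the closed-form least-squares fit stands in for the stochastic gradient descent fit used in Section 3.

```python
import numpy as np

def aggregate_channels(pixel_grads, mode="max"):
    """Collapse per-channel derivatives (H, W, 3) into one sensitivity value
    per pixel, using either the max or the mean of the absolute derivatives."""
    a = np.abs(pixel_grads)
    return a.max(axis=-1) if mode == "max" else a.mean(axis=-1)

def fit_bbox_regressor(sens_maps, boxes):
    """Least-squares fit of the linear map (W_hat, w_hat) from flattened
    sensitivity maps to 4 bounding-box coordinates.
    sens_maps: (N, H, W) array; boxes: (N, 4) array of ground-truth coordinates."""
    N = sens_maps.shape[0]
    S = sens_maps.reshape(N, -1)
    S1 = np.hstack([S, np.ones((N, 1))])           # append a bias column
    theta, *_ = np.linalg.lstsq(S1, boxes, rcond=None)
    W_hat, w_hat = theta[:-1].T, theta[-1]          # (4, M) weights and (4,) bias
    return W_hat, w_hat

def predict_bbox(W_hat, w_hat, sens_map):
    return W_hat @ sens_map.ravel() + w_hat

# usage on random data: 20 maps of 8x8 pixels
rng = np.random.default_rng(0)
maps, boxes = rng.random((20, 8, 8)), rng.random((20, 4))
W_hat, w_hat = fit_bbox_regressor(maps, boxes)
print(predict_bbox(W_hat, w_hat, maps[0]).shape)    # (4,)
```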
This learning process amounts to four independent linear regression problems, which can be solved efficiently. Once learned, mapping from sensitivity maps to bounding box coordinates can be done very quickly. With sensitivity map formation requiring only a single backward pass through the image classification network, the whole process -- from image, to classification, to sensitivity map, to bounding box -- can be performed in little more than twice the time it takes for the network to do object recognition. The code and the sensitivity maps for ImageNet as well as the PASCAL VOC dataset will be made publicly available.

We evaluated our proposed method for object localization on two challenging data sets: the PASCAL VOC 2007 BID8 data set and the ImageNet 2012 BID5 data set. The PASCAL VOC 2007 data set was selected due to its use in the existing object localization literature. The ImageNet data set is one of the largest publicly available data sets. It also contains many images annotated with ground truth bounding boxes. We followed the literature with regard to the evaluation criterion applied to our method, using CorLoc, which has been used for weakly supervised localization. The CorLoc metric is defined as the percentage of images in a data set that are correctly localized based on the PASCAL criterion, in which a given localization is considered correct if and only if the intersection over union (IOU) of the predicted and ground truth bounding boxes is greater than one half:

IOU(β_p, β_gt) = area(β_p ∩ β_gt) / area(β_p ∪ β_gt) > 0.5,

where β_p is the predicted bounding box and β_gt is the ground truth bounding box BID27.

To demonstrate that our approach works with an image classification deep CNN that was in no way specialized for our localization method, we opted to use a publicly available network. We used the VGG16 network, shown in FIG0, fully trained BID23. This network provides ImageNet object classes as output, allowing us to calculate sensitivity maps based on the network classification when examining ImageNet data. For the PASCAL VOC 2007 data set, we used the previously described method of calculating derivatives based on the attention map of VGG16, since there is no consistent class correspondence between the PASCAL VOC 2007 classes and the classes on which VGG16 was trained. To produce sensitivity maps for the PASCAL VOC 2007 data set, we aggregated across color channels by using the maximum absolute derivative across the three inputs for each pixel. For the ImageNet data set, we averaged the absolute derivatives across the three inputs in order to produce pixel sensitivity values. For generating sensitivity maps, we used a pretrained VGG16 network: we used the whole network architecture when experimenting on the ImageNet dataset, and otherwise we removed the last 3 fully connected layers and computed the Gestalt Total from the last convolutional layer. The derivatives in either case are computed using just one backward pass to the original pixels. For learning bounding boxes, we used the aggregated sensitivity maps as input. To learn the mapping from sensitivity maps to bounding box coordinates, we performed linear regression using stochastic gradient descent. Updates were performed in batches of 2,048. The learning rate was initialized to 0.1 and decayed by a factor of 10 every 10,000 iterations. The experiment ran on 1 GPU for 4 days. The full PASCAL VOC 2007 data set includes 12,608 training set images and an equal number of testing set images BID8.
Each image contains an object of 1 of 20 different categories. We applied our object localization method to this full data set. However, we were unable to find published localization performance data for other methods applied to the full data set, which we might use for comparison to our approach. Work reported in Tang et al. BID27 provides performance data on 6 of the classes: aeroplane, bicycle, boat, bus, horse, and motorbike. Performance on these same classes has also been reported by others (Russell et al.; BID4; BID6). TAB1 compares the localization performance of our method with that of other approaches. Note that our method, while being very fast, outperforms the comparison algorithms. Examples of the bounding boxes selected by our method, compared to ground truth, for all 20 classes in the PASCAL VOC 2007 data set are shown in FIG1. Qualitatively, it appears as if our approach is most accurate when there is a single target object with little crowding. However, if the target object is small and in a crowded region of the image, performance is less reliable.

While speed is an important property of our method, as is the reuse of classification training for localization, we compared our approach to data from some slower state-of-the-art deep learning techniques for localization that do not necessarily have these properties. We compared our method to R-CNN BID11 and Poselets BID1. These were chosen due to the ready availability of published localization results for these alternative methods on the PASCAL VOC 2007 data set, with the measure of performance being Average CorLoc (or mean Average Precision, mAP). The comparison results are given in Table ??. Several of the comparison methods display better localization performance than our approach, but it is important to keep in mind that the comparison cases had some important advantages, including taking the time to use a sliding window and access to the class labels on which the network was trained. Recall that our sensitivity maps were produced, in this case, by calculating the sensitivity of the network attention map activity to pixel values. Thus, this comparison illustrates trade-offs between speed, performance, and generalization.

Note that, as one of the reviewers mentioned, it would be worth looking at the results if we just use the sensitivity maps and heuristics to draw bounding boxes around objects. For this experiment, we used a Gaussian smoothing filter to smooth the sensitivity maps, then picked the top 20% of pixels and drew the bounding box around those pixels, as other researchers have done before BID28 BID20. Based on our observations, this could reduce the mean CorLoc by 3% in our best observations. However, this process highly depends on the smoothing parameter σ. The results obtained from different σ values are reported in TAB3.

ImageNet is a large image data set that has been systematically organized by object category BID5. We executed a large scale evaluation of our approach by using all images in ImageNet that are annotated with ground truth localization information. This subset contains 300,916 images involving 478 object classes. We divided this data set into a training set, a test set, and a validation set by sampling without replacement (i.e., the intersection between each pair of the three sets was empty). There were 225,687 images (75%) in the training set, and there were 45,137 images in each of the other two sets.
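Both the PASCAL VOC and the ImageNet evaluations rely on the CorLoc criterion defined earlier in this section. The following is a small illustrative sketch of that metric, assuming boxes are encoded as (x1, y1, x2, y2) corner coordinates; it is not the exact evaluation code used for the reported numbers.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def corloc(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of images whose predicted box overlaps ground truth with IOU > threshold."""
    hits = [iou(p, g) > threshold for p, g in zip(pred_boxes, gt_boxes)]
    return float(np.mean(hits))

# usage
print(corloc([(10, 10, 50, 50)], [(12, 8, 48, 52)]))   # 1.0, since IOU is about 0.83
```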
We compared the performance of our approach with two methods discussed in Tang et al. BID27 for which ImageNet results are explicitly reported: Top Objectiveness Box and CoLocalization. Also, we noted that many images in this data set present the target object in the middle of the image, providing a bias that could be leveraged by learned localization systems. Thus, as a baseline of performance, we calculated the CorLoc performance for a system that blindly offered the same bounding box in the middle of the image, with average size, for every input. The results are shown in TAB4. Once again, note the relatively high accuracy performance of our efficient method. Also note that the baseline was comfortingly low. As might be expected, performance varies with class. Our algorithm appears to do well on some objects, such as balls and dogs. One might suspect that failures arise in the linear mapping from sensitivity maps to bounding box coordinates, but a perusal of the sensitivity maps themselves suggests that the pixel sensitivity values vary in utility across different object categories. Still, our method performs fairly well across the classes. Note that the IOU does not fall below 0.62 for any class. This suggests that, while some individual images may be problematic, the overall performance for each class is quite good. This universally strong class-specific performance is also displayed in TAB4.

Figure 4: Results of the proposed method on different object categories from the ImageNet data set. Each row shows 9 examples in one class. The green boxes are the ground truth, and the red ones are the predicted bounding boxes.

The sensitivity analysis approach gives us the sensitivity of every single pixel in all channels of the RGB images, and since we need locations, we must aggregate across channels. We proposed two methods: an average function and a maximum function. The first approach takes the average across channels, and the second picks the maximum value across channels. We did not notice a significant difference between these two methods in localization performance; the only observation is that sensitivity maps generated with the average function look a bit smoother visually than those generated with the maximum function. The CorLoc for the average and maximum aggregation functions on the ImageNet dataset is 68.7 and 67.9, respectively, and the results for these two aggregation operators on the PASCAL VOC dataset are 39.2 and 40.1, respectively.

Since the speed of our object localization approach highly depends on the hardware and the network architecture, we analyze it in terms of forward and backward passes. Our approach only needs two passes: one forward pass for the classification and one backward pass for localization. If we consider each forward or backward pass as n operations, our approach requires roughly 2n operations plus one inference from the linear model -- that is, one forward pass, one backward pass, and one evaluation of the linear mapping.

We have presented an approach to object localization based on performing a sensitivity analysis of a previously trained image classification deep CNN. Our method is fast enough to be used in online applications, and it demonstrates accuracy that is superior to some methods that are much slower. It is likely that even better accuracy could be had by incorporating sensitivity analysis information into a more sophisticated bounding box estimator. As previously noted, the idea of using sensitivity information has appeared in previously published work.
There are ways in which the results reported in this paper are distinct, however. We have moved beyond visualization of network function using sensitivity (or saliency) BID24 to performing direct comparisons between different methods on the localization task. We have shown that using a fast and simple measure of sensitivity can produce comparable performance to that of much slower methods. Our approach produces good generalization without modifying the classification network, as is done in Class Activation Mapping (CAM) BID28. With our PASCAL VOC 2007 results, we have shown that our approach can successfully be applied to attention maps, even when the image contains objects belonging to a class on which the classification network was not trained, distinguishing it from prior work. In short, we have demonstrated the power of a simple sensitivity measure for performing localization. Note that our approach may be used with image classifiers other than CNNs. The proposed sensitivity analysis can be conducted on any differentiable classifier, though performance will likely depend on classifier specifics. Indeed, at a substantial time cost, even a black box classifier could be approximately analyzed by making small changes to pixels and observing the effects on activation patterns. The proposed approach is quite general. Indeed, we are currently working on applying sensitivity analysis to deep networks trained on other tasks, with the goal of interpreting network performance on the current input in a useful way. Thus, we see a potentially large range of uses for sensitivity analysis in neural network applications.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkzUYjCcFm
Proposing a novel object localization (detection) approach based on interpreting the deep CNN using internal representation and network's thoughts
We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform current state-of-the-art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available at https://github.com/locuslab/trellisnet.
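As a rough illustration of the two structural properties named in the abstract — weight tying across depth and direct injection of the input into deep layers — the following NumPy sketch applies the same causal kernel at every level and re-feeds the raw input sequence at each level. It is a deliberate simplification (plain tanh instead of the gated activation, no output head), not the released TrellisNet implementation.

```python
import numpy as np

def trellis_forward(x, W, depth):
    """Minimal sketch of a trellis-style stack: the SAME causal weights W are
    reused at every depth (weight tying), and the raw input x is re-injected
    at every level. Shapes and the nonlinearity are illustrative only.
    x: (T, d_in); W: (d_hidden, d_in + d_hidden, 2) -- causal kernel of size 2."""
    T, d_in = x.shape
    d_hidden = W.shape[0]
    z = np.zeros((T, d_hidden))
    for _ in range(depth):                          # same W at every depth
        z_new = np.zeros_like(z)
        for t in range(T):
            prev = np.concatenate([x[t - 1], z[t - 1]]) if t > 0 else np.zeros(d_in + d_hidden)
            curr = np.concatenate([x[t], z[t]])     # input injected at every level
            z_new[t] = np.tanh(W[:, :, 0] @ prev + W[:, :, 1] @ curr)
        z = z_new
    return z

# usage: a length-5 sequence of 3-dimensional inputs, 4 hidden units, 6 levels
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
W = rng.normal(size=(4, 3 + 4, 2)) * 0.1
print(trellis_forward(x, W, depth=6).shape)         # (5, 4)
```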
[ 0, 0, 0, 0, 0, 1, 0 ]
HyeVtoRqtQ
Trellis networks are a new sequence modeling architecture that bridges recurrent and convolutional models and sets a new state of the art on word- and character-level language modeling.
We propose an end-to-end framework for training domain specific models (DSMs) to obtain both high accuracy and computational efficiency for object detection tasks. DSMs are trained with distillation and focus on achieving high accuracy in a limited domain (e.g. a fixed view of an intersection). We argue that DSMs can capture essential features well even with a small model size, enabling higher accuracy and efficiency than traditional techniques. In addition, we improve the training efficiency by reducing the dataset size, culling easy-to-classify images from the training set. For the limited domain, we observed that compact DSMs significantly surpass the accuracy of COCO-trained models of the same size. By training on a compact dataset, we show that with an accuracy drop of only 3.6%, the training time can be reduced by 93%.

The framework is based on knowledge distillation BID0 but aims to reduce the accuracy gap between student and teacher models by training the student using a restricted class of domain-specific images. Since such training may be conducted on edge devices, we improve the training efficiency by culling easy-to-classify images, with a small accuracy penalty. This paper's contribution is summarized below.

• We propose an end-to-end framework for training domain specific models (DSMs) to mitigate the tradeoff between object-detection accuracy and computational efficiency. To the best of our knowledge, this is the first successful demonstration of training DSMs for object detection tasks.

• By training resnet18-based Faster-RCNN DSMs, we observed a 19.7% (relative) accuracy improvement over COCO-trained models of the same size.

• Since edge devices will have limited resources, we propose culling the training dataset to significantly reduce the computation resources required for training. Only training data that has high utility in training is added. This filtering allows us to reduce training time by 93% with an accuracy loss of only 3.6%.

Algorithm 1 (excerpt): Detection ← DSM.predict(image); compute L_train(i) from label(i) and pred(i).

Figure 1: Object detection of the test image, before and after domain specific training.

We propose a DSM framework to train compact models with a dataset constructed from domain-specific data. As illustrated in Algorithm 1, our DSM framework consists of preparation of the data and training of the DSM. A large challenge when deploying models in surveillance is preparing the training data, since manually labelling frames in videos is cumbersome. To overcome this, we label the dataset used to train the DSM by using the predictions of a much larger teacher model with higher accuracy and treating these predictions as ground truth labels. Furthermore, we compare the prediction on image x_i made by the teacher to that of the DSM; we determine whether to store x_i and the label Teacher.predict(x_i) in our compiled dataset Ω. After the training set is compiled, it is used to train the DSM. Training an object detection model can take hours even with a GPU and can be challenging for applications requiring frequent retraining. We exploit the fact that when the DSM is pretrained on a large-scale general dataset, it can already provide good predictions for a large chunk of the domain-specific data. This procedure develops a compact dataset Ω that is only composed of data that the DSM finds inconsistent with the prediction made by the teacher model.
Keeping data x_j for which the DSM and teacher detections are consistent is computationally redundant because it does not contribute to gradient signals. We define L_train to quantify the consistency between the teacher and the DSM. The earlier images are used for training and the later 3600 images for testing.

Results. As shown in Table 1, we first train our res18 DSM using the full N_train = 3600 training images for 10 epochs using stochastic gradient descent with a learning rate of 10. We aim to train the models with only the domain-specific data. We show on
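Since the exact definition of L_train is not reproduced above, the sketch below uses one plausible instantiation of the culling rule: an image is added to the compact dataset Ω only when the DSM's detections disagree with the teacher's (low best-IoU for a matched class, or spurious/missing boxes). The names (Detection, consistency_loss, build_compact_dataset) and the threshold are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    box: tuple          # (x1, y1, x2, y2)
    label: int
    score: float

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def consistency_loss(teacher: List[Detection], student: List[Detection]) -> float:
    """A stand-in for L_train: 0 when every teacher box is matched by a student
    box of the same class with high IoU, larger when detections disagree."""
    if not teacher and not student:
        return 0.0
    loss = 0.0
    for t in teacher:
        best = max((iou(t.box, s.box) for s in student if s.label == t.label), default=0.0)
        loss += 1.0 - best
    loss += max(0, len(student) - len(teacher))     # penalize spurious student boxes
    return loss / max(len(teacher), 1)

def build_compact_dataset(images, teacher_model, dsm, threshold=0.3):
    """Keep only images on which the DSM disagrees with the teacher; the
    teacher's detections serve as pseudo ground-truth labels."""
    compact = []
    for img in images:
        label = teacher_model(img)      # list of Detection, treated as ground truth
        pred = dsm(img)                 # list of Detection from the compact DSM
        if consistency_loss(label, pred) > threshold:
            compact.append((img, label))
    return compact

# usage with stand-in models
teacher = lambda img: [Detection((10, 10, 50, 50), label=1, score=0.9)]
dsm = lambda img: []                                 # DSM misses the object
print(len(build_compact_dataset(["frame0"], teacher, dsm)))   # 1: image kept for training
```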
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyzWeVX_jQ
High object-detection accuracy can be obtained by training domain specific compact models and the training can be very short.
We compare model-free reinforcement learning with model-based approaches through the lens of the expressive power of neural networks for policies, $Q$-functions, and dynamics. We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal $Q$-functions and policies are much more complex than the dynamics. We hypothesize many real-world MDPs also have a similar property. For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization. Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak $Q$-function into a stronger policy. Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at test time improves the performance on MuJoCo benchmark tasks.

Model-based deep reinforcement learning (RL) algorithms offer a lot of potential for achieving significantly better sample efficiency than model-free algorithms for continuous control tasks. We can largely categorize model-based deep RL algorithms into two types: 1. model-based policy optimization algorithms, which learn policies or Q-functions, parameterized by neural networks, on the estimated dynamics, using off-the-shelf model-free algorithms or their variants, and 2. model-based planning algorithms, which plan with the estimated dynamics. A deeper theoretical understanding of the pros and cons of model-based and model-free algorithms in the continuous state space case will provide guiding principles for designing and applying new sample-efficient methods.

The prior work on comparisons of model-based and model-free algorithms mostly focuses on their sample efficiency gap, in the case of tabular MDPs, the linear quadratic regulator, and contextual decision processes with sparse reward. In this paper, we theoretically compare model-based RL and model-free RL in the continuous state space through the lens of approximability by neural networks, and then use the insight to design practical algorithms. What is the representation power of neural networks for expressing the Q-function, the policy, and the dynamics? How do the model-based and model-free algorithms utilize the expressivity of neural networks? Our main finding is that even for the case of one-dimensional continuous state space, there can be a massive gap between the approximability of the Q-function and the policy and that of the dynamics: the optimal Q-function and policy can be significantly more complex than the dynamics. We construct environments where the dynamics are simply piecewise linear functions with constant pieces, but the optimal Q-functions and the optimal policy require an exponential (in the horizon) number of linear pieces, or exponentially wide neural networks, to approximate. The approximability gap can also be observed empirically on (semi-)randomly generated piecewise linear dynamics with a decent chance. (See Figure 1 for two examples.) When the approximability gap occurs, any deep RL algorithm with policies parameterized by neural networks will suffer from sub-optimal performance. These algorithms include both model-free algorithms such as DQN and SAC, and model-based policy optimization algorithms such as SLBO and MBPO.
To validate the intuition, we empirically apply these algorithms to the constructed or the randomly generated MDPs. Indeed, they fail to converge to the optimal rewards even with sufficient samples, which suggests that they suffer from the lack of expressivity. However, in such cases, model-based planning algorithms should not suffer from the lack of expressivity, because they only use the learned, parameterized dynamics, which are easy to express. The policy obtained from the planning is the maximizer of the total future reward on the learned dynamics, and can have an exponential (in the horizon) number of pieces even if the dynamics has only a constant number of pieces. In fact, even a partial planner can help improve the expressivity of the policy. If we plan for k steps and then resort to some Q-function for estimating the total reward of the remaining steps, we can obtain a policy with 2^k times more pieces than the Q-function has.

We hypothesize that real-world continuous control tasks also have optimal Q-functions and policies that are more complex than the dynamics. The theoretical analysis of the synthetic dynamics suggests that a model-based few-step planner on top of a parameterized Q-function will outperform the original Q-function because of the additional expressivity introduced by the planning. We empirically verify the intuition on MuJoCo benchmark tasks. We show that applying a model-based planner on top of Q-functions learned from model-based or model-free policy optimization algorithms at test time leads to significant gains over the original Q-function or policy.

In summary, our contributions are:
1. We construct continuous state space MDPs whose Q-functions and policies are proved to be more complex than the dynamics (Sections 4.1 and 4.2).
2. We empirically show that with a decent chance, (semi-)randomly generated piecewise linear MDPs also have complex Q-functions (Section 4.3).
3. We show theoretically and empirically that model-free RL or model-based policy optimization algorithms suffer from the lack of expressivity for the constructed MDPs (Section 4.3), whereas model-based planning solves the problem efficiently (Section 5.2).
4. Inspired by the theory, we propose a simple model-based bootstrapping planner (BOOTS), which can be applied on top of any model-free or model-based Q-learning algorithm at test time. Empirical results show that BOOTS improves the performance on MuJoCo benchmark tasks, and outperforms previous state-of-the-art results on the MuJoCo humanoid environment.

Comparisons with Prior Theoretical Work. Model-based RL has been extensively studied in the tabular case (see the references therein), but much less so in the context of deep neural network approximators and continuous state space. Prior work gives sample complexity and convergence guarantees using the principle of optimism in the face of uncertainty for non-linear dynamics. Below we review several prior results regarding the model-based versus model-free dichotomy in various settings. We note that our work focuses on the angle of expressivity, whereas the work below focuses on sample efficiency.

Tabular MDPs. The extensive study in the tabular MDP setting leaves little gap in the sample complexity of model-based and model-free algorithms, whereas the space complexity seems to be the main difference. The best sample complexity bounds for model-based tabular RL and model-free tabular RL only differ by a poly(H) multiplicative factor (where H is the horizon).

Linear Quadratic Regulator.
Prior work provided sample complexity bounds for model-based LQR. Other work analyzed the sample efficiency of the model-based and model-free problem in the setting of the Linear Quadratic Regulator, and proved an O(d) gap in sample complexity, where d is the dimension of the state space. Unlike the tabular MDP case, the space complexity of model-based and model-free algorithms has little difference. The sample-efficiency gap mostly comes from the fact that dynamics learning has d-dimensional supervision, whereas Q-learning has only one-dimensional supervision.

Contextual Decision Process (with function approximator). An exponential information-theoretic gap between model-based and model-free algorithms has been proven in the factored MDP setting. The definition of model-free algorithms used there requires an exact parameterization: the value-function hypothesis class should be exactly the family of optimal value-functions induced by the MDP family. This limits the application to deep reinforcement learning, where overparameterized neural networks are frequently used. Moreover, a crucial reason for the failure of the model-free algorithms is that the reward is designed to be sparse.

A large family of model-based RL algorithms uses existing model-free algorithms or their variants on the learned dynamics. MBPO, STEVE, and MVE are model-based Q-learning-based policy optimization algorithms, which can be viewed as modern extensions and improvements of the early model-based Q-learning framework, Dyna. SLBO is a model-based policy optimization algorithm using TRPO as the algorithm in the learned environment. Another way to exploit the dynamics is to use it to perform model-based planning. Racanière et al. use the model to generate additional data to do planning implicitly. Other work studies how to combine an ensemble of probabilistic models with planning, followed by work that introduces a policy network to distill knowledge from a planner and provides a prior for the planner. Piché et al. use methods from Sequential Monte Carlo in the context of control as inference. Further approaches train a Q-function and then perform lookahead planning, use random shooting as the planning algorithm, or use the dynamics to improve the performance of model-free algorithms. Other lines of work backprop through a stochastic computation graph with a stochastic gradient to optimize the policy under the learned dynamics, distill a policy from trajectory optimization, train a policy adversarially robust to the worst dynamics in an ensemble, or reformulate the problem as a meta-learning problem and use meta-learning algorithms. Predictron learns a dynamics and value function and then uses them to predict future reward sequences. Another line of work focuses on how to improve the learned dynamics model; many of these methods use an ensemble of models, which has been further extended to an ensemble of probabilistic models.

Markov Decision Process. A Markov Decision Process (MDP) is a tuple (S, A, f, r, γ), where S is the state space, A the action space, f: S × A → ∆(S) the transition dynamics that maps a state-action pair to a probability distribution over the next state, γ the discount factor, and r ∈ R^{S×A} the reward function. Throughout this paper, we will consider deterministic dynamics, which, with slight abuse of notation, will be denoted by f: S × A → S. A deterministic policy π: S → A maps a state to an action. The value function for the policy is defined as V^π(s) = Σ_{t=0}^{∞} γ^t r(s_t, a_t), where s_0 = s, a_t = π(s_t), and s_{t+1} = f(s_t, a_t). An RL agent aims to find a policy π that maximizes the expected total reward defined as η(π) = E_{s_0 ∼ μ}[V^π(s_0)], where μ is the distribution of the initial state.
Let π* be the optimal policy, and V* the optimal value function (that is, the value function for policy π*). The value function V^π for a policy π and the optimal value function V* satisfy the Bellman equation and the Bellman optimality equation, respectively. Let Q^π and Q* denote the state-action value function for policy π and the optimal state-action value function. Then, for a deterministic dynamics f, we have

Q^π(s, a) = r(s, a) + γ V^π(f(s, a)) and Q*(s, a) = r(s, a) + γ V*(f(s, a)).

Denote the Bellman operator for dynamics f by B_f, defined by (B_f[Q])(s, a) = r(s, a) + γ max_{a'} Q(f(s, a), a').

Problem Setting and Notations. In this paper, we focus on continuous state space, discrete action space MDPs with S ⊂ R. We assume the dynamics is deterministic (that is, s_{t+1} = f(s_t, a_t)), and the reward is known to the agent. Let ⌊x⌋ denote the floor of x, that is, the greatest integer less than or equal to x. We use I[·] to denote the indicator function.

We show that there exist MDPs in one-dimensional continuous state space that have simple dynamics but complex Q-functions and policies. Moreover, any polynomial-size neural network function approximator of the Q-function or policy will result in a sub-optimal expected total reward, and learning Q-functions parameterized by neural networks fundamentally requires an exponential number of samples (Section 4.2). Section 4.3 illustrates that the phenomenon that the Q-function is more complex than the dynamics occurs frequently and naturally even with random MDPs, beyond the theoretical construction.

Figure 2: A visualization of (a) the dynamics for actions a = 0, 1, (b) the reward functions r(s, 0) and r(s, 1), and (c) an approximation of the optimal Q-function Q*(s, a), for the MDP defined in Definition 4.1 with effective horizon H = 4.

We can also give a slightly more involved construction with Lipschitz dynamics and very similar properties; please see Appendix C. Recall that we consider the infinite horizon case and 0 < γ < 1 is the discount factor. Let H = (1 − γ)^{-1} be the "effective horizon" -- the rewards after H steps become negligible due to the discount factor. For simplicity, we assume that H > 3 and that it is an integer. (Otherwise we just take H = (1 − γ)^{-1}.) Throughout this section, we assume that the state space is S = [0, 1) and the action space is A = {0, 1}.

Definition 4.1. Given the effective horizon H = (1 − γ)^{-1}, we define an MDP M_H as follows. Let κ = 2^{-H}. The dynamics f is given by piecewise linear functions of s with at most three pieces for each action. The reward function r(s, a) is (approximately) the first bit of the binary representation of s, with an additional small negative drift of −2(γ^{H−1} − γ^H) when a = 1 (see the proof sketch of Theorem 4.2 below). The initial state distribution μ is the uniform distribution over the state space.

The dynamics and the reward function for H = 4 are visualized in Figures 2a and 2b. Note that, by the definition, the transition function for a fixed action a is a piecewise linear function with at most 3 pieces. Our construction can be modified so that the dynamics is Lipschitz and the same results hold (see Appendix C). Attentive readers may also realize that the dynamics can be written succinctly as f(s, 0) = 2s mod 1 and f(s, 1) = (2s + κ) mod 1, which are key properties that we use in the proof of Theorem 4.2 below. (The mod function is defined as x mod 1 ≜ x − ⌊x⌋; more generally, for a positive real k, x mod k ≜ x − k⌊x/k⌋.)

Optimal Q-function Q* and the optimal policy π*. Even though the dynamics of the MDP constructed in Definition 4.1 has only a constant number of pieces, the Q-function and policy are very complex: the policy is a piecewise linear function with exponentially many pieces, and the optimal Q-function Q* and the optimal value function V* are actually fractals that are not continuous anywhere. These are formalized in the theorem below.

Theorem 4.2.
For s ∈, let s (k) denotes the k-th bit of s in the binary representation. 3 The optimal policy π for the MDP defined in Definition 4.1 has 2 H+1 number of pieces. In particular, 2 The mod function is defined as: x mod 1 x − x. More generally, for positive real k, we define x mod k x − k x/k. 3 Or more precisely, we define s And the optimal value function is a fractal with the expression: The close-form expression of Q can be computed by Q (s, a) = r(s, a) + V (f (s, a)), which is also a fractal. We approximate the optimal Q-function by truncating the infinite sum to 2H terms, and visualize it in Figure 2c. We discuss the main intuitions behind the construction in the following proof sketch of the Theorem. A rigorous proof of Theorem 4.2) is deferred to Appendix B.1. Proof Sketch. The key observation is that the dynamics f essentially shift the binary representation of the states with some addition. We can verify that the dynamics satisfies f (s, 0) = 2s mod 1 and f (s, 1) = 2s + κ mod 1 where κ = 2 −H. In other words, suppose s = 0.s s · · · is the binary representation of s, and let left-shift(s) = 0.s Moreover, the reward function is approximately equal to the first bit of the binary representation (Here the small negative drift of reward for action a = 1, −2(γ H−1 − γ H), is only mostly designed for the convenience of the proof, and casual readers can ignore it for simplicity.) Ignoring carries, the policy pretty much can only affect the H-th bit of the next state s = f (s, a): the H-th bit of s is either equal to (H + 1)-th bit of s when action is 0, or equal its flip when action is 1. Because the bits will eventually be shifted left and the reward is higher if the first bit of a future state is 1, towards getting higher future reward, the policy should aim to create more 1's. Therefore, the optimal policy should choose action 0 if the (H + 1)-th bit of s is already 1, and otherwise choose to flip the (H + 1)-th bit by taking action 1. A more delicate calculation that addresses the carries properly would lead us to the form of the optimal policy (Equation.) Computing the total reward by executing the optimal policy will lead us to the form of the optimal value function (equation.) (This step does require some elementary but sophisticated algebraic manipulation.) With the form of the V, a shortcut to a formal, rigorous proof would be to verify that it satisfies the Bellman equation, and verify π is consistent with it. We follow this route in the formal proof of Theorem 4.2) in Appendix B.1. A priori, the complexity of Q or π does not rule out the possibility that there exists an approximation of them that do an equally good job in terms of maximizing the rewards. However, we show that in this section, indeed, there is no neural network approximation of Q or π with a polynomial width. We prove this by showing any piecewise linear function with a sub-exponential number of pieces cannot approximate either Q or π with a near-optimal total reward. Theorem 4.3. Let M H be the MDP constructed in Definition 4.1. Suppose a piecewise linear policy π has a near optimal reward in the sense that η(π) ≥ 0.92 · η(π), then it has to have at least Ω (exp(cH)/H) pieces for some universal constant c > 0. As a corollary, no constant depth neural networks with polynomial width (in H) can approximate the optimal policy with near optimal rewards. Consider a policy π induced by a value function Q, that is, π(s) = arg max a∈A Q(s, a). 
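As a quick numerical illustration of Definition 4.1 and Theorem 4.2, the sketch below implements the succinct form of the dynamics, evaluates π* by extracting the (H+1)-th bit, and counts how often π* switches action on a fine grid; the grid resolution and all helper names are our own choices, not the paper's code.

```python
import numpy as np

def dynamics(s, a, H):
    """f(s, 0) = 2s mod 1; f(s, 1) = (2s + kappa) mod 1 with kappa = 2^{-H}.
    Doubling left-shifts the binary expansion of s; adding kappa (ignoring
    carries) flips the H-th bit of the next state."""
    kappa = 2.0 ** (-H)
    return (2.0 * s + (kappa if a == 1 else 0.0)) % 1.0

def kth_bit(s, k):
    """k-th bit of the binary expansion of s in [0, 1): s = 0.s^(1) s^(2) ..."""
    return int(np.floor(s * 2.0 ** k)) % 2

def optimal_policy(s, H):
    """pi*(s) = I[s^(H+1) = 0]: flip the (H+1)-th bit if it is 0, else keep it."""
    return 1 if kth_bit(s, H + 1) == 0 else 0

def count_policy_pieces(H, grid_size=2 ** 16):
    """Count action switches of pi* on a fine grid; grows like 2^(H+1), matching Theorem 4.2."""
    grid = (np.arange(grid_size) + 0.5) / grid_size
    acts = np.array([optimal_policy(s, H) for s in grid])
    return int(np.sum(acts[1:] != acts[:-1])) + 1

# e.g. count_policy_pieces(4) is about 2^5 = 32, count_policy_pieces(8) about 2^9 = 512
```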
When a policy π is induced by a value function Q in this way and there are two actions, the number of pieces of the policy is bounded by twice the number of pieces of Q. This observation and the theorem above imply the following inapproximability result for Q.

Corollary 4.4. In the setting of Theorem 4.3, let π be the policy induced by some Q. If π is near-optimal in the sense that η(π) ≥ 0.92 · η(π*), then Q has at least Ω(exp(cH)/H) pieces for some universal constant c > 0.

The intuition behind the proof of Theorem 4.3 is as follows. Recall that the optimal policy has the form π*(s) = I[s^(H+1) = 0]. One can expect that any policy π with polynomially many pieces behaves suboptimally in most of the states, which leads to the suboptimality of π. The detailed proof of Theorem 4.3 is deferred to Appendix B.2. Beyond the expressivity lower bound, we also provide an exponential sample-complexity lower bound for Q-learning algorithms parameterized with neural networks (see Appendix B.4).

In this section, we show that the phenomenon of the Q-function being more complex than the dynamics not only occurs in the crafted cases of the previous subsection, but also occurs more robustly, with a decent probability, for (semi-)randomly generated MDPs. (Mathematically, this says that the family of MDPs with such a property is not a degenerate, measure-zero set.) It is challenging, and perhaps requires deep mathematics, to characterize the fractal structure of Q-functions for random dynamics, which is beyond the scope of this paper. Instead, we take an empirical approach here. We generate random piecewise linear and Lipschitz dynamics, compute their finite-horizon Q-functions exactly, and then visualize the Q-functions or count the number of pieces in them. We also run the DQN algorithm with a finite-size neural network to learn the Q-function. We set the horizon to H = 10 for simplicity and computational feasibility. The state and action spaces are [0, 1] and {0, 1}, respectively.

We design two methods to generate random or semi-random piecewise linear dynamics with at most four pieces. First, we have a uniformly random method, called RAND, where we independently generate two piecewise linear functions for f(s, 0) and f(s, 1) by generating random positions for the kinks, generating random outputs at the kinks, and connecting the kinks by line segments (see Appendix D.1 for a detailed description). In the second method, called SEMI-RAND, we introduce a bit more structure in the generation process, towards increasing the chance of seeing the phenomenon. The functions f(s, 0) and f(s, 1) have 3 pieces with shared kinks, and we design the generating process of the outputs at the kinks so that the functions fluctuate more. The reward for both methods is r(s, a) = s, ∀a ∈ A (see Appendix D.1 for a detailed description). Figure 1 illustrates the dynamics of the MDPs generated by SEMI-RAND. More details of the empirical settings can be found in Appendix D.1.

The optimal policy and Q* can have a large number of pieces. Because the state space is one-dimensional and the horizon is 10, we can compute the exact Q-functions by recursively applying the Bellman operator, and count the number of pieces. We found that 8.6% of the 1000 MDPs independently generated by the RAND method have policies with more than 100 pieces, much larger than the number of pieces in the dynamics (which is 4). Using the SEMI-RAND method, 68.7% of the MDPs have policies with more than 10^3 pieces. In Appendix D.1, we plot the histogram of the number of pieces of the Q-functions.
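The piece counts above are obtained by exact dynamic programming over piecewise linear functions. As a simpler illustrative proxy (our own approximation, not the paper's exact procedure), one can run the finite-horizon Bellman backup on a dense grid and count approximate kinks of the resulting Q-function.

```python
import numpy as np

def q_on_grid(f, r, H, n_grid=20001, gamma=1.0, actions=(0, 1)):
    """Finite-horizon optimal Q via backward induction on a uniform grid over [0, 1].
    Nearest-grid-point lookup of V at the next state discretizes the exact
    piecewise-linear computation."""
    grid = np.linspace(0.0, 1.0, n_grid)
    V = np.zeros(n_grid)
    for _ in range(H):
        Q = np.empty((n_grid, len(actions)))
        for j, a in enumerate(actions):
            nxt = np.array([f(s, a) for s in grid])
            idx = np.clip(np.rint(nxt * (n_grid - 1)).astype(int), 0, n_grid - 1)
            Q[:, j] = np.array([r(s, a) for s in grid]) + gamma * V[idx]
        V = Q.max(axis=1)
    return grid, Q

def approx_num_pieces(values, tol=1e-5):
    """Heuristic piece count on a grid: kinks show up as spikes in the second difference."""
    return int(np.sum(np.abs(np.diff(values, 2)) > tol)) + 1
```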
Figure 1 visualizes the Q-functions and dynamics of two MDPs generated by the RAND and SEMI-RAND methods. These results suggest that the phenomenon of the Q-function being more complex than the dynamics is not degenerate and occurs with non-zero measure. For more empirical results, see Appendix D.2.

Model-based policy optimization methods also suffer from a lack of expressivity. As an implication of our theory in the previous section, when the Q-function or the policy is too complex to be approximated by a reasonably sized neural network, both model-free algorithms and model-based policy optimization algorithms suffer from the lack of expressivity and, as a consequence, obtain sub-optimal rewards. We verify this claim on the randomly generated MDPs discussed in Section 4.3 by running DQN, SLBO, and MBPO with various architecture sizes. For ease of exposition, we use the MDP visualized in the bottom half of Figure 1. The optimal policy for this specific MDP has 765 pieces, the optimal Q-function has about 4 × 10^4 pieces, and we can compute the optimal total reward.

First, we apply DQN to this environment, using a two-layer neural network with various widths to parameterize the Q-function (a minimal code sketch of this parameterization is given below). The training curves are shown in Figure 3 (left). Model-free algorithms cannot find a near-optimal policy even with 2^14 hidden neurons and 1M trajectories, which suggests that there is a fundamental approximation issue. This is consistent with prior observations that enlarging the Q-network improves the performance of the DQN algorithm at convergence. Second, we apply SLBO and MBPO in the same environment. Because the policy network and the Q-function in SLBO and MBPO cannot approximate the optimal policy and value function, they fail to achieve near-optimal rewards, as shown in Figure 3 (left).

Figure 3: (Left) Performance of model-free and model-based policy optimization algorithms on the MDP from Figure 1; the number after each acronym is the width of the neural network used to parameterize Q. Even with sufficiently large neural networks and sufficiently many steps, these algorithms suffer from poor approximability and cannot achieve the optimal reward. (Right) Performance of BOOTS + DQN with various planning steps; a near-optimal reward is achieved even with k = 3, indicating that bootstrapping with the learned dynamics significantly improves the expressivity of the policy.

Algorithm 1 Model-based Bootstrapping Planner (BOOTS) + RL Algorithm X
1: training: run Algorithm X; store all samples in the set R, store the learned Q-function Q, and store the learned dynamics f̂ if it is available in Algorithm X.
2: testing: if f̂ is not available, learn f̂ from the data in R. Given: query oracles for the functions Q and f̂.
3: at each test state, output the action of the bootstrapped policy π^boots_{k,Q,f̂} (defined below), computed with a zero-th order optimization algorithm (which only requires oracle queries of function values), such as the cross-entropy method or random shooting.

Our theory and experiments in Sections 4.2 and 4.3 demonstrate that when the Q-function or the policy is complex, model-free and model-based policy optimization algorithms suffer from a lack of expressivity. This intuition suggests that model-based planning algorithms will not suffer from the lack of expressivity, because the final policy is not represented by a neural network. For the construction in Section 4.1, we can actually prove that even a few-step planner can bootstrap the expressivity of the Q-function (formalized in Theorem 5.1 below).
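The Q-network parameterization used in the width sweep above can be sketched as follows (a minimal PyTorch sketch; the listed widths are illustrative, and the remaining training details follow Appendix D.3).

```python
import torch.nn as nn

class QNet(nn.Module):
    """Two-layer (one hidden layer) Q-network: scalar state in, one Q-value per action out."""
    def __init__(self, width, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width),
            nn.ReLU(),
            nn.Linear(width, n_actions),   # outputs [Q(s, 0), Q(s, 1)]
        )

    def forward(self, s):                  # s: float tensor of shape (batch, 1)
        return self.net(s)

# Width sweep as in Figure 3 (left), e.g. up to 2**14 hidden units:
q_nets = {w: QNet(w) for w in (2**6, 2**10, 2**14)}
```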
Inspired the theoretical , we apply a simple k-step model-based bootstrapping planner on top of existing Q-functions (trained from either model-based or model-free approach) in the test time, on either the one-dimensional MDPs considered in Section 4 or the continuous control benchmark tasks in MuJoCo. The bootstrapping planner is reminiscent of MCTS using in AlphaGo (; . However, here, we use the learned dynamics and deal with the continuous state space. The test policies for MBSAC and SAC are the deterministic policy that takes the mean of the output of the policy network, because the deterministic policy performs better than the stochastic policy in the test time. Given a function Q that is potentially not expressive enough to approximate the optimal Q-function, we can apply the Bellman operator with a learned dynamicsf for k times to get a bootstrapped version of Q: where s 0 = s, a 0 = a and s h+1 =f (s h, a h). Given the bootstrapped version, we can derive a greedy policy w.r.t it: Algorithm 1, called BOOTS summarizes how to apply the planner on top of any RL algorithm with a Q-function (straightforwardly). For the MDPs constructed in Section 4.1, we can prove that representing the optimal Q-function by B k f [Q] requires fewer pieces in Q than representing the optimal Q-function by Q directly. Theorem 5.1. Consider the MDP M H defined in Definition 4.1. There exists a constant-piece piecewise linear dynamicsf and a 2 H−k+1 -piece piecewise linear function Q, such that the bootstrapped policy π boots k,Q,f (s) achieves the optimal total rewards. By contrast, recall that in Theorem 4.3, we show that approximating the optimal Q-function directly with a piecewise linear function requires ≈ 2 H piecewise. Thus we have a multiplicative factor of 2 k gain in the expressivity by using the bootstrapped policy. Here the exponential gain is only magnificent enough when k is close to H because the gap of approximability is huge. However, in more realistic settings -the randomly-generated MDPs and the MuJoCo environment -the bootstrapping planner improvs the performance significantly as shown in the next subsection. BOOTS on random piecewise linear MDPs. We implement BOOTS (Algorithm 1) with various steps of planning and with the learned dynamics. 4. The planner is an exponential-time planner which enumerates all the possible future sequence of actions. We also implement bootstrapping with partial planner with varying planning horizon. As shown in Figure 3, BOOTS + DQN not only has the best sample-efficiency, but also achieves the optimal reward. In the meantime, even a partial planner helps to improve both the sample-efficiency and performance. More details of this experiment are deferred to Appendix D.3. BOOTS on MuJoCo environments. We work with the OpenAI Gym environments based on the Mujoco simulator with maximum horizon 1000 and discount factor 1. We apply BOOTS on top of three algorithms: (a) SAC We use k = 4 steps of planning unless explicitly mentioned otherwise in the ablation study (Section A.2). In Figure 4, we compare BOOTS+SAC with SAC, and BOOTS + MBSAC with MBSAC on Gym Ant and Humanoid environments, and demonstrate that BOOTS can be used on top of existing strong baselines. We found that BOOTS has little help for other simpler environments, and we suspect that those environments have much less complex Q-functions so that our theory and intuitions do not necessarily apply. (See Section A.2 for more ablation study.) 
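For the one-dimensional MDPs, the exponential-time planner mentioned above can be written directly: enumerate every length-k action sequence under the learned dynamics f̂, accumulate discounted rewards, and bootstrap the tail with Q. The following is our own sketch of that procedure (Q, f̂, and r are passed in as callables; the names are illustrative).

```python
from itertools import product

def bootstrapped_q(Q, f_hat, r, s, a, k, gamma, actions=(0, 1)):
    """k-step bootstrapped value B^k_{f_hat}[Q](s, a): best discounted k-step
    return under f_hat starting with action a, plus gamma^k * max_a' Q(s_k, a')."""
    best = float("-inf")
    for tail in product(actions, repeat=k - 1):
        s_h, ret, disc = s, 0.0, 1.0
        for a_h in (a,) + tail:
            ret += disc * r(s_h, a_h)
            s_h = f_hat(s_h, a_h)
            disc *= gamma
        ret += disc * max(Q(s_h, a_last) for a_last in actions)
        best = max(best, ret)
    return best

def boots_action(Q, f_hat, r, s, k, gamma, actions=(0, 1)):
    """Greedy action of the bootstrapped policy (exhaustive search, O(|A|^k))."""
    return max(actions, key=lambda a: bootstrapped_q(Q, f_hat, r, s, a, k, gamma, actions))
```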
In Figure 5, we compare BOOTS+MBSAC and BOOTS+MBPO with other MBPO, SAC, and STEVE 5 on the humanoid environment. We see a strong performance surpassing the previous state-of-the-art MBPO. Our study suggests that there exists a significant representation power gap of neural networks between for expressing Q-function, the policy, and the dynamics in both constructed examples and empirical benchmarking environments. We show that our model-based bootstrapping planner BOOTS helps to overcome the approximation issue and improves the performance in synthetic settings and in the difficult MuJoCo environments. We raise some interesting open questions. • Can we theoretically generalize our to high-dimensional state space, or continuous actions space? Can we theoretically analyze the number of pieces of the optimal Q-function of a stochastic dynamics? • In this paper, we measure the complexity by the size of the neural networks. It's conceivable that for real-life problems, the complexity of a neural network can be better measured by its weights norm. Could we build a more realistic theory with another measure of complexity? • The BOOTS planner comes with a cost of longer test time. How do we efficiently plan in high-dimensional dynamics with a long planning horizon? • The dynamics can also be more complex (perhaps in another sense) than the Q-function in certain cases. How do we efficiently identify the complexity of the optimal Q-function, policy, and the dynamics, and how do we deploy the best algorithms for problems with different characteristics? , the stochasticity in the dynamics can play a similar role as the model ensemble. Our algorithm is a few times faster than MBPO in wall-clock time. It performs similarlty to MBPO on Humanoid, but a bit worse than MBPO in other environments. In MBSAC, we use SAC to optimize the policy π β and the Q-function Q ϕ. We choose SAC due to its sample-efficiency, simplicity and off-policy nature. We mix the real data from the environment and the virtual data which are always fresh and are generated by our learned dynamics modelf θ. Algorithm 2 MBSAC 1: Parameterize the policy π β, dynamicsf θ, and the Q-function Q ϕ by neural networks. Initialize replay buffer B with n init steps of interactions with the environments by a random policy, and pretrain the dynamics on the data in the replay buffer. 2: t ← 0, and sample s 0 from the initial state distribution. 3: for n iter iterations do Perform action a t ∼ π β (·|s t) in the environment, obtain s as the next state from the environment. s t+1 ← s, and add the transition (s t, a t, s t+1, r t) to B. t ← t + 1. If t = T or the trajectory is done, reset to t = 0 and sample s 0 from the initial state distribution. for n policy iterations do for n model iterations do Optimizef θ with a mini-batch of data from B by one step of Adam. Sample n real data B real and n start data B start from B. Perform q steps of virtual rollouts usingf θ and policy π β starting from states in B start; obtain B virtual. Update π β and Q ϕ using the mini-batch of data in B real ∪ B virtual by SAC. For Ant, we modify the environment by adding the x and y axis to the observation space to make it possible to compute the reward from observations and actions. For Humanoid, we add the position of center of mass. We don't have any other modifications. All environments have maximum horizon 1000. For the policy network, we use an MLP with ReLU activation function and two hidden layers, each of which contains 256 hidden units. 
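A sketch of the policy network described above (two hidden layers of 256 ReLU units). The Gaussian head with tanh squashing is the standard SAC parameterization and is our assumption here; at test time only the mean action is used, as stated earlier.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """MLP policy: two hidden layers of 256 ReLU units, Gaussian head (assumed SAC-style)."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.mean = nn.Linear(256, action_dim)
        self.log_std = nn.Linear(256, action_dim)

    def forward(self, s):
        h = self.body(s)
        return self.mean(h), self.log_std(h)

    def test_action(self, s):
        # Deterministic test-time policy: take the (squashed) mean of the output.
        mean, _ = self.forward(s)
        return torch.tanh(mean)
```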
For the dynamics model, we use a network with 2 Fixup blocks , with convolution layers replaced by a fully connected layer. We found out that with similar number of parameters, fixup blocks leads to a more accurate model in terms of validation loss. Each fixup block has 500 hidden units. We follow the model training algorithm in in which non-squared 2 loss is used instead of the standard MSE loss. Planning with oracle dynamics and more environments. We found that BOOTS has smaller improvements on top of MBSAC and SAC for the environment Cheetah and Walker. To diagnose the issue, we also plan with an oracle dynamics (the true dynamics). This tells us whether the lack of improvement comes from inaccurate learned dynamics. The are presented in two ways in Figure 6 and Figure 7. In Figure 6, we plot the mean rewards and the standard deviation of various methods across the randomness of multiple seeds. However, the randomness from the seeds somewhat obscures the gains of BOOTS on each individual run. Therefore, for completeness, we also plot the relative gain of BOOTS on top of MBSAC and SAC, and the standard deviation of the gains in Figure 7. From Figure 7 we can see planning with the oracle dynamics improves the performance in most of the cases (but with various amount of improvements). However, the learned dynamics sometimes not always can give an improvement similar to the oracle dynamics. This suggests the learned dynamics is not perfect, but oftentimes can lead to good planning. This suggests the expressivity of the Q-functions varies depending on the particular environment. How and when to learn and use a learned dynamics for planning is a very interesting future open question. The effect of planning horizon. We experimented with different planning horizons in Figure 8. By planning with a longer horizon, we can earn slightly higher total rewards for both MBSAC and SAC. Planning horizon k = 16, however, does not work well. We suspect that it's caused by the compounding effect of the errors in the dynamics. In this section we provide the proofs omitted in Section 4. Proof of Theorem 4.2. Since the solution to Bellman optimal equations is unique, we only need to verify that V and π defined in equation satisfy the following, Recall that s (i) is the i-th bit in the binary representation of s, that is,, which ensures the H-bit of the next state is 1, we haveŝ For simplicity, define ε = 2(γ Now, we verify Eq. by plugging in the proposed solution (namely, Eq.). As a , which verifies Eq.. In the following we verify Eq.. Consider any a = π (s). Lets = f (s, a) for shorthand. Note thats (i) = s (i+1) for i > H. As a , For the case where s (H+1) = 0, we have π (s) = 1. For a = 0,s where the last inequality holds when γ H − ε > 0, or equivalently, γ > 2/3. For the case where s (H+1) = 1, we have π (s) = 0. For a = 1, we have s, where we define the max of an empty set is 0. The dynamics f (s, 1) implies thats Therefore, In both cases, we have V − γV (s) > r(s, a) for a = π (s), which proves Eq.. For a fixed parameter H, let z(π) be the number of pieces in π. For a policy π, define the state distribution when acting policy π at step h as µ π h. In order to prove Theorem 4.3, we show that if 1/2 − 2Hz(π)/2 H < 0.3, then η(π) < 0.92η(π). The proof is based on the advantage decomposition lemma. Lemma B.1 (Advantage Decomposition Lemma Corollary B.2. For any policy π, we have Intuitively speaking, since π = I[s (H+1) = 0], the a policy π with polynomial pieces behaves suboptimally in most of the states. 
Lemma B.3 shows that the single-step suboptimality gap V (s)− Q (s, π(s)) is large for a constant portion of the states. On the other hand, Lemma B.4 proves that the state distribution µ π h is near uniform, which means that suboptimal states can not be avoided. Combining with Corollary B.2, the suboptimal gap of policy π is large. Let ν h (k) = inf s∈ k µ π h (s), then by advantage decomposition lemma (namely, Corollary B.2), we have By Lemma B.4 and union bound, we get For the sake of contradiction, we assume z(π) = o (exp(cH)/H), then for large enough H we have, 49 for all h ≤ 10H. Consequently, for H > 500, we have Now, since η(π) ≤ 1/(1 − γ), we have η(π) < 0.92η(π). Therefore for near-optimal policy π, z(π) = Ω (exp(cH)/H). In this section, we present the proofs of two lemmas used in Section B.1 Proof of Lemma B.3. Note that for any k ∈ K, s (H) = 1, ∀s ∈ k. Now fix a parameter k ∈ K. Suppose π(s) = a i for s ∈ k. Then for any s such that s (H+1) + i = 1, we have For H > 500, we have γ H − ε > 0.366. Therefore, Proof of Lemma B.4. Now let us fix a parameter H and policy π. For every h, we prove by induction that there exists a function ξ h (s), such that For the base case h = 1, we define as the left and right endpoints of be the set of 2 solutions of equation where 0 ≤ x < 1, and we define y k ) can reach states in interval k by a single transition. We define a set I k = {i : That is, the intervals where policy π acts unanimously. Consequently, for i ∈ I k, the set {s : an interval of length 2 −H−1, and has the form for some integer w Now, the density ξ h+1 (s) for s ∈ k is defined as, The intuition of the construction is that, we discard those density that cause non-uniform behavior (that is, the density in intervals [x When the number of pieces of π is small, we can keep most of the density. Now, statement (b) is naturally satisfied by definition of ξ h+1. We verify statement (a) and (c) below. For any set B ⊆ k, let (T π) −1 (B) = {s ∈ S : f (s, π(s)) ∈ B} be the inverse of Markov transition T π. Then we have, where | · | is the shorthand for standard Lebesgue measure. By definition, we have which verifies statement (a). For statement (c), recall that S = is the state space. Note that T π preserve the overall density. That is (T π ξ h) (S) = ξ h (S). We only need to prove that and statement (c) follows by induction. By definition of ξ h+1 (s) and the induction hypothesis that ξ h (s) ≤ 1, we have On the other hand, for any s ∈ S, the set {k k)} has cardinality 2, which means that one intermittent point of π can correspond to at most 2 intervals that are not in I k for some k. Thus, we have which proves statement (c). Recall that corollary 4.4 says that in order to find a near-optimal policy by a Q-learning algorithm, an exponentially large Q-network is required. In this subsection, we show that even if an exponentially large Q-network is applied for Q learning, still we need to collect an exponentially large number of samples, ruling out the possibility of efficiently solving the constructed MDPs with Q-learning algorithms. Towards proving the sample complexity lower bound, we consider a stronger family of Q-learning algorithm, Q-learning with Oracle (Algorithm 3). We assume that the algorithm has access to a Q-ORACLE, which returns the optimal Q-function upon querying any pair (s, a) during the training process. 
Q-learning with Oracle is conceptually a stronger computation model than the vanilla Q-learning algorithm, because it can directly fit the Q functions with supervised learning, without relying on the rollouts or the previous Q function to estimate the target Q value. Theorem B.5 proves a sample complexity lower bound for Q-learning algorithm on the constructed example. Require: A hypothesis space Q of Q-function parameterization. 1: Sample s 0 ∼ µ from the initial state distribution µ 2: for i = 1, 2, · · ·, n do 3: Decide whether to restart the trajectory by setting s i ∼ µ based on historical information 4: Query Q-ORACLE to get the function Q (s i, ·). Apply any action a i (according to any rule) and sample s i+1 ∼ f (s i, a i). 6: Learn the Q-function that fit all the data the best: Return the greedy policy according to Q. Theorem B.5 (Informal Version of Theorem B.7). Suppose Q is an infinitely-wide two-layer neural networks, and R(Q) is 1 norm of the parameters and serves as a tiebreaker. Then, any instantiation of the Q-LEARNING WITH ORACLE algorithm requires exponentially many samples to find a policy π such that η(π) > 0.99η(π). Formal proof of Theorem B.5 is given in Appendix B.5. The proof of Theorem B.5 is to exploit the sparsity of the solution found by minimal-norm tie-breaker. It can be proven that there are at most O(n) non-zero neurons in the minimal-norm solution, where n is the number of data points. The proof is completed by combining with Theorem 4.3. A two-layer ReLU neural net Q(s, ·) with input s is of the following form, where d is the number of hidden neurons. w i,a, c a, k i, b i are parameters of this neural net, where c i,a, b i are bias terms. [x] + is a shorthand for ReLU activation I[x > 0]x. Now we define the norm of a neural net. Definition B.6 (Norm of a Neural Net). The norm of a two-layer ReLU neural net is defined as, Recall that the Q-learning with oracle algorithm finds the solution by the following supervised learning problem, Then, we present the formal version of theorem B.5. Theorem B.7. Let Q be the minimal 1 norm solution to Eq., and π the greedy policy according to Q. When n = o(exp(cH)/H), we have η(π) < 0.99η(π). The proof of Theorem B.5 is by characterizing the minimal-norm solution, namely the sparsity of the minimal-norm solution as stated in the next lemma. Lemma B.8. The minimal-norm solution to Eq. has at most 32n + 1 non-zero neurons. That is, |{i : k i = 0}| ≤ 32n + 1. We first present the proof of Theorem B.7, followed by the proof of Theorem B.8. Proof of Theorem B.7. Recall that the policy is given by π(s) = arg max a∈A Q(s, a). For a Qfunction with 32n + 2 pieces, the greedy policy according to Q(s, a) has at most 64n + 4 pieces. Combining with Theorem 4.3, in order to find a policy π such that η(π) > 0.99η(π), n needs to be exponentially large (in effective horizon H). Proof of Lemma B.8 is based on merging neurons. Let, and c = (c 1, c 2). In vector form, neural net defined in Eq. can be written as, First we show that neurons with the same x i can be merged together. Lemma B.9. Consider the following two neurons, with k 1 > 0, k 2 > 0. If x 1 = x 2, then we can replace them with one single neuron of the form k [x − x 1] + w without changing the output of the network. Furthermore, if w 1 = 0, w 2 = 0, the norm strictly decreases after replacement. Proof. We set k = |k 1 w 1 + k 2 w 2 | 1, and w = (k 1 w 1 + k 2 w 2)/k, where |w| 1 represents the 1-norm of vector w. 
Then, for all s ∈ R, The norm of the new neuron is |k | + |w | 1. By calculation we have, Note that the inequality (a) is strictly less when |k 1 w 1 | 1 = 0 and |k 2 w 2 | 1 = 0. Next we consider merging two neurons with different intercepts between two data points. Without loss of generality, assume the data points are listed in ascending order. That is, s i ≤ s i+1. Lemma B.10. Consider two neurons with k 1 > 0, k 2 > 0. If s i ≤ x 0 < x 0 + δ ≤ s i+1 for some 1 ≤ i ≤ n, then the two neurons can replaced by a set of three neurons, such that for s ≤ s i or s ≥ s i+1, the output of the network is unchanged. Furthermore, if δ ≤ (s i+1 − s i)/16 and |w 1 | 1 = 0, |w 2 | 1 = 0, the norm decreases strictly. Proof. For simplicity, define ∆ = s i+1 − s i. We set Note that for s ≤ s i, all of the neurons are inactive. For s ≥ s i+1, all of the neurons are active, and which means that the output of the network is unchanged. Now consider the norm of the two networks. Without loss of generality, assume |k 1 w 1 | 1 > |k 2 w 2 | 1. The original network has norm |k 1 | + |w 1 | 1 + |k 2 | + |w 2 | 1. And the new network has norm where the inequality (a) is a of Lemma E.1, and is strictly less when |w 1 | 1 = 0, |w 2 | 1 = 0. Similarly, two neurons with k 1 < 0 and k 2 < 0 can be merged together. Now we are ready to prove Lemma B.8. As hinted by previous lemmas, we show that between two data points, there are at most 34 non-zero neurons in the minimal norm solution. Proof of Lemma B.8. Consider the solution to Eq.. Without loss of generality, assume that s i ≤ s i+1. In the minimal norm solution, it is obvious that |w i | 1 = 0 if and only if k i = 0. Therefore we only consider those neurons with k i = 0, denoted by index 1 ≤ i ≤ d. Next we prove that in the minimal norm solution, |B t | ≤ 15. For the sake of contradiction, suppse |B t | > 15. Then there exists i, j such that,, and k i > 0, k j > 0. By Lemma B.10, we can obtain a neural net with smaller norm by merging neurons i, j together without violating Eq., which leads to contradiction. By Lemma B.9, |B t | ≤ 15 implies that there are at most 15 non-zero neurons with s t < −b i /k i < s t+1 and k i > 0. For the same reason, there are at most 15 non-zero neurons with On the other hand, there are at most 2 non-zero neurons with s t = −b i /k i for all t ≤ n, and there are at most 1 non-zero neurons with −b i /k i < s 1. Therefore, we have d ≤ 32n + 1. B.6 PROOF OF THEOREM 5.1 In this section we present the full proof of Theorem 5.1. Proof. First we define the true trajectory estimator the true optimal action sequence and the true optimal trajectory It follows from the definition of optimal policy that, a j = π (s j). Consequently we have Define the set G = {s : We claim that the following function satisfies the statement of Theorem 5.1 Since s k ∈ G, and s k ∈ G for s k generated by non-optimal action sequence, we have where the second inequality comes from the optimality of action sequence a h . As a consequence, for any In this section, we present an extension to our construction such that the dynamics is Lipschitz. The action space is A = {0, 1, 2, 3, 4}. We define CLIP(x) = max{min{x, 1}, 0}. Definition C.1. Given effective horizon H = (1 − γ) −1, we define an MDP M H as follows. Let κ = 2 −H. The dynamics is defined as Reward function is given by The intuition behind the extension is that, we perform the mod operation manually. The following theorem is an analog to Theorem 4.2. Theorem C.2. 
The optimal policy π for M H is defined by, And the corresponding optimal value function is, We can obtain a similar upper bound on the performance of policies with polynomial pieces. Theorem C.3. Let M H be the MDP constructed in Definition C.1. Suppose a piecewise linear policy π has a near optimal reward in the sense that η(π) ≥ 0.99 · η(π), then it has to have at least Ω (exp(cH)/H) pieces for some universal constant c > 0. The proof is very similar to that for Theorem 4.3. One of the difference here is to consider the case where f (s, a) = 0 or f (s, a) = 1 separately. Attentive readers may notice that the dynamics where f (s, a) = 0 or f (s, a) = 1 may destroy the "near uniform" behavior of state distribution µ π h (see Lemma B.4). Here we show that such destroy comes with high cost. Formally speaking, if the clip is triggered in an interval, then the averaged single-step suboptimality gap is 0.1/(1 − γ). for large enough H. Proof. Without loss of generality, we consider the case where f (s, π(s)) = 0. The proof for f (s, π(s)) = 1 is essentially the same. By elementary manipulation, we have RAND method. As stated in Section 4.3, the RAND method generates kinks {x i} and the corresponding values {x i} randomly. In this method, the generated MDPs are with less structure. The details are shown as follows. • State space S =. • Action space A = {0, 1}. • Number of pieces is fixed to 3. The positions of the kinks are generated by, x i ∼ U for i = 1, 2 and x 0 = 0, x 1 = 1. The values are generated by x i ∼ U. • The reward function is given by r(s, a) = s, ∀s ∈ S, a ∈ A. • The horizon is fixed as H = 10. • Initial state distribution is U. Figure 1 visualizes one of the RAND-generated MDPs with complex Q-functions. SEMI-RAND method. In this method, we add some structures to the dynamics, ing in a more significant probability that the optimal policy is complex. We generate dynamics with fix and shared kinks, generate the output at the kinks to make the functions fluctuating. The details are shown as follows. • State space S =. • Action space A = {0, 1}. • Number of pieces is fixed to 3. The positions of the kinks are generated by, x i = i/3, ∀0 ≤ i ≤ 3. And the values are generated by x i ∼ 0.65 × I[i mod 2 = 0] + 0.35 × U. • The reward function is r(s, a) = s for all a ∈ A. • The horizon is fixed as H = 10. • Initial state distribution is U. We randomly generate 10 3 1-dimensional MDPs whose dynamics has constant number of pieces. The histogram of number of pieces in optimal policy π is plotted. As shown in Figure 9, even for horizon H = 10, the optimal policy tends to have much more pieces than the dynamics. • The Q-network is a fully connected neural net with one hidden-layer. The width of the hidden-layer is varying. • The optimizer is SGD with learning rate 0.001 and momentum 0.9. • The size of replay buffer is 10 4. • Target-net update frequency is 50. • Batch size in policy optimization is 128. • The behavior policy is greedy policy according to the current Q-network with -greedy. exponentially decays from 0.9 to 0.01. Specifically, = 0.01 + 0.89 exp(−t/200) at the t-th episode. Implementation details of MBPO algorithm For the model-learning step, we use 2 loss to train our model, and we use Soft Actor-Critic (SAC) in the policy optimization step. The parameters are set as, • number of hidden neurons in model-net: 32, • number of hidden neurons in value-net: 512, • optimizer for model-learning: Adam with learning rate 0.001. 
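A sketch of the SEMI-RAND generator described in Appendix D.1 above (our own reading: kink positions and values follow the stated formulas, consecutive kinks are connected by line segments, and the kink values are sampled independently for the two actions, which the text does not specify).

```python
import numpy as np

def semi_rand_mdp(rng=None):
    """SEMI-RAND MDP: piecewise linear f(s, 0), f(s, 1) with shared kinks at
    s = 0, 1/3, 2/3, 1; kink values 0.65 * I[i even] + 0.35 * Uniform(0, 1); r(s, a) = s."""
    rng = rng or np.random.default_rng()
    xs = np.array([0.0, 1 / 3, 2 / 3, 1.0])          # shared kink positions
    ys = {a: np.array([0.65 * (i % 2 == 0) + 0.35 * rng.uniform()
                       for i in range(len(xs))])
          for a in (0, 1)}                            # fluctuating kink values, per action

    def f(s, a):
        return float(np.interp(s, xs, ys[a]))         # connect kinks by line segments

    def r(s, a):
        return s                                      # reward r(s, a) = s

    return f, r
```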
• temperature: τ = 0.01, • the model rollout steps: M = 5, • the length of the rollout: k = 5, • number of policy optimization step: G = 5. Other hyper-parameters are kept the same as DQN algorithm. Implementation details of TRPO algorithm For the model-learning step, we use 2 loss to train our model. Instead of TRPO , we use PPO as policy optimizer. The parameters are set as, • number of hidden neurons in model-net: 32, • number of hidden neurons in policy-net: 512, • number of hidden neurons in value-net: 512, • optimizer: Adam with learning rate 0.001, • number of policy optimization step: 5. • The behavior policy is -greedy policy according to the current policy network. exponential decays from 0.9 to 0.01. Specifically, = 0.01 + 0.89 exp(−t/20000) at the t-th episode. Implementation details of Model-based Planning algorithm The perfect model-based planning algorithm iterates between learning the dynamics from sampled trajectories, and planning with the learned dynamics (with an exponential time algorithm which enumerates all the possible future sequence of actions). The parameters are set as, • number of hidden neurons in model-net: 32, • optimizer for model-learning: Adam with learning rate 0.001. Implementation details of bootstrapping The training time behavior of the algorithm is exactly like DQN algorithm, except that the number of hidden neurons in the Q-net is set to 64. Other parameters are set as, • number of hidden neurons in model-net: 32, • optimizer for model-learning: Adam with learning rate 0.001. • planning horizon varies. In this section, we present the technical lemmas used in this paper. Lemma E.1. For A, B, C, D ≥ 0 and AC ≥ BD, we have Furthermore, when BD > 0, the inequality is strict. Proof. Note that A + B + And when BD > 0, the inequality is strict.
We compare deep model-based and model-free RL algorithms by studying the approximability of $Q$-functions, policies, and dynamics by neural networks.