Fields: paper_id, summaries, abstractText, authors, references, sections, year, title
SP:dd1ac7776d55534c5458d43d1fe39af30386343d
[ "This paper introduces a novel conditional generative model for high dimensional data with multimodal output distributions. The proposed method, called modal uncertainty estimation (MUE), is a conditional VAE but with discrete latent representations. This discrete latent space allows the model to better handle multimodal outputs and provide confidence scores for the different modes predicted by the model. These capabilities are applied to the task of segmenting lesions in medical scans." ]
Many important problems in the real world don’t have unique solutions. It is thus important for machine learning models to be capable of proposing different plausible solutions with meaningful probability measures. In this work we propose a novel deep learning based framework, named modal uncertainty estimation (MUE), to learn the one-to-many mappings between inputs and outputs, together with faithful uncertainty estimation. Motivated by the multi-modal posterior collapse problem in current conditional generative models, MUE uses a set of discrete latent variables, each representing a latent mode hypothesis that explains one type of input-output relationship, to generate the one-to-many mappings. Benefiting from the discrete nature of the latent representations, MUE can effectively estimate, for any input, the conditional probability distribution over the outputs. Moreover, MUE is efficient during training since the discrete latent space and its uncertainty estimation are jointly learned. We also develop the theoretical background of MUE and extensively validate it on both synthetic and realistic tasks. MUE demonstrates (1) significantly more accurate uncertainty estimation than the current state-of-the-art, and (2) its informativeness for practical use.
[]
[ { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken elbo", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Samuel G Armato III", "Geoffrey McLennan", "Luc Bidaut", "Michael F McNitt-Gray", "Charles R Meyer", "Anthony P Reeves", "Binsheng Zhao", "Denise R Aberle", "Claudia I Henschke", "Eric A Hoffman" ], "title": "The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans", "venue": "Medical physics,", "year": 2011 }, { "authors": [ "Samuel G Armato III", "Geoffrey McLennan", "Luc Bidaut", "Michael F McNitt-Gray", "Charles R Meyer", "Anthony P Reeves", "Binsheng Zhao", "Denise R Aberle", "Claudia I Henschke", "Eric A Hoffman" ], "title": "URL https://wiki.cancerimagingarchive.net/ x/rgAe", "venue": "Data from lidc-idri,", "year": 2015 }, { "authors": [ "Marc G Bellemare", "Ivo Danihelka", "Will Dabney", "Shakir Mohamed", "Balaji Lakshminarayanan", "Stephan Hoyer", "Rémi Munos" ], "title": "The cramer distance as a solution to biased wasserstein gradients", "venue": "arXiv preprint arXiv:1705.10743,", "year": 2017 }, { "authors": [ "Kenneth Clark", "Bruce Vendt", "Kirk Smith", "John Freymann", "Justin Kirby", "Paul Koppel", "Stephen Moore", "Stanley Phillips", "David Maffitt", "Michael Pringle" ], "title": "The cancer imaging archive (tcia): maintaining and operating a public information repository", "venue": "Journal of digital imaging,", "year": 2013 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "University of Cambridge,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew M Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Eddy Ilg", "Ozgun Cicek", "Silvio Galesso", "Aaron Klein", "Osama Makansi", "Frank Hutter", "Thomas Brox" ], "title": "Uncertainty estimates and multi-hypotheses networks for optical flow", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei" ], "title": "A Efros. 
Image-to-image translation with conditional adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Kendall", "Vijay Badrinarayanan", "Roberto Cipolla" ], "title": "Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding", "venue": "arXiv preprint arXiv:1511.02680,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Simon Kohl", "Bernardino Romera-Paredes", "Clemens Meyer", "Jeffrey De Fauw", "Joseph R Ledsam", "Klaus Maier-Hein", "SM Ali Eslami", "Danilo Jimenez Rezende", "Olaf Ronneberger" ], "title": "A probabilistic u-net for segmentation of ambiguous images", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." 
], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Andriy Mnih", "Karol Gregor" ], "title": "Neural variational inference and learning in belief networks", "venue": "arXiv preprint arXiv:1402.0030,", "year": 2014 }, { "authors": [ "Andriy Mnih", "Danilo J Rezende" ], "title": "Variational inference for monte carlo objectives", "venue": "arXiv preprint arXiv:1602.06725,", "year": 2016 }, { "authors": [ "Eric Nalisnick", "Lars Hertel", "Padhraic Smyth" ], "title": "Approximate inference for deep latent gaussian mixtures", "venue": "In NIPS Workshop on Bayesian Deep Learning,", "year": 2016 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Ben Poole", "Oriol Vinyals" ], "title": "Preventing posterior collapse with delta-vaes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christian Rupprecht", "Iro Laina", "Robert DiPietro", "Maximilian Baust", "Federico Tombari", "Nassir Navab", "Gregory D Hager" ], "title": "Learning in an uncertain world: Representing ambiguity through multiple hypotheses", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Tim Salimans", "Han Zhang", "Alec Radford", "Dimitris Metaxas" ], "title": "Improving gans using optimal transport", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Akash Srivastava", "Lazar Valkov", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in gans using implicit variational learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Gábor J Székely", "Maria L Rizzo" ], "title": "Energy statistics: A class of statistics based on distances", "venue": "Journal of statistical planning and inference,", "year": 2013 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Jakub Tomczak", "Max Welling" ], "title": "Vae with a vampprior", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tiancheng Zhao", "Kyusong Lee", "Maxine Eskenazi" ], "title": "Unsupervised discrete sentence representation learning for 
interpretable neural dialog generation", "venue": "arXiv preprint arXiv:1804.08069,", "year": 2018 }, { "authors": [ "Chuanxia Zheng", "Tat-Jen Cham", "Jianfei Cai" ], "title": "Pluralistic image completion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Richard Zhang", "Deepak Pathak", "Trevor Darrell", "Alexei A Efros", "Oliver Wang", "Eli Shechtman" ], "title": "Toward multimodal image-to-image translation", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Making predictions in the real world has to face with various uncertainties. One of the arguably most common uncertainties is due to partial or corrupted observations, as such it is often insufficient for making a unique and deterministic prediction. For example, when inspecting where a single CT scan of a patient contains lesion, without more information it is possible for radiologists to reach different conclusions, as a result of the different hypotheses they have about the image. In such an ambiguous scenario, the question is thus, given the observable, which one(s) out of the many possibilities would be more reasonable than others? Mathematically, this is a one-to-many mapping problem and can be formulated as follows. Suppose the observed information is x ∈ X in the input space, we are asked to estimate the conditional distribution p(y|x) for y ∈ Y in the prediction space, based on the training sample pairs (x,y).\nThere are immediate challenges that prevent p(y|x) being estimated directly in practical situations. First of all, both X and Y , e.g.as spaces of images, can be embedded in very high dimensional spaces with very complex structures. Secondly, only the unorganized pairs (x,y), not the one-tomany mappings x 7→ {yi}i, are explicitly available. Fortunately, recent advances in conditional generative models based on Variational Auto-Encoder (VAE) framework from Kingma & Welling (2014) shed light on how to tackle our problem. By modelling through latent variables c = c(x), one aims to explain the underlying mechanism of how y is assigned to x. And hopefully, variation of c will result in variation in the output ŷ(x, c), which will approximate the true one-to-many mappings distributionally.\nMany current conditional generative models, including cVAE in Sohn et al. (2015), BiCycleGAN in Zhu et al. (2017b), Probabilistic U-Net in Kohl et al. (2018), etc., are developed upon the VAE framework, with Gaussian distribution with diagonal covariance as the de facto parametrization of the latent variables. However, in the following we will show that such a parametrization put a dilemma between model training and actual inference, as a form of what is known as the posterior collapse problem in the VAE literature Alemi et al. (2018); Razavi et al. (2018). This issue is particularly easy to understand in our setting, where we assume there are multiple y’s for a given x.\nLet us recall that one key ingredient of the VAE framework is to minimize the KL-divergence between the latent prior distribution p(c|x) and the latent variational approximation pφ(c|x,y) of the posterior. Here φ denotes the model parameters of the “recognition model” in VAE. It does not matter if the prior is fixed p(c|x) = p(c) Kingma & Welling (2014) or learned p(c|x) = pθ(c|x) Sohn et al. (2015), as long as both prior and variational posterior are parameterized by Gaussians. Now suppose for a particular x, there there are two modes y1,y2 for the corresponding predictions. Since the minimization is performed on the entire training set, p(c|x) is forced to approximate a posterior mixture p(c|x,y(·)) of two Gaussians from mode y1 and y2. In the situation when the minimization is successful, meaning the KL divergence is small, the mixture of the variational posteriors must be close to a Gaussian, i.e.posterior collapsed as in Fig.1(b), and hence the multi-modal information is lost. 
Put in contrapositive form: if multi-modal information is to be conveyed by the variational posterior, then the minimization will not be successful, meaning a higher KL divergence. This may partly explain why training a conditional VAE can be a delicate matter. The situation is schematically illustrated in Figure 1 in one dimension. Note that the case in Figure 1(a) is usually preferable; however, the density values of the prior used during testing cannot reflect the uncertainty level of the outputs. We demonstrate this quantitatively in Section 4 and Fig.2.
One direction for solving the above problem is to modify the strength of the KL-divergence or the variational lower bound while keeping the Gaussian parametrization; this has been explored extensively in the literature, as in Higgins et al. (2017); Alemi et al. (2018); Rezende & Viola (2018). However, besides requiring extensive parameter tuning, these approaches are not tailored to the multi-modal posterior collapse problem described above, and thus do not solve the inaccurate uncertainty estimation problem. Mixtures or compositions of Gaussian priors have also been proposed in Nalisnick et al. (2016); Tomczak & Welling (2018), but the number of Gaussians in the mixture is usually fixed a priori. Turning such a model into a conditional generative model further complicates the matter, since the number of components in the mixture should depend on the input. We therefore adopt another direction, which is to use a latent distribution parameterization other than Gaussians, one that can naturally exhibit multiple modes. The simplest choice is to constrain the latent space to be a finite set, as proposed in van den Oord et al. (2017), so that we can learn the conditional distribution as a categorical distribution.
We argue that the discrete latent space approach is particularly beneficial in our setting. First, unlike unconditional or weakly conditional generative modelling tasks where diversity is the main consideration, making accurate predictions based on partial information often leads to a significantly restricted output space. Second, there is no longer noise injection during training, so the decoder can utilize the information from the latent variable more effectively. This makes the model less prone to ignoring the latent variable completely, in contrast to many conditional generation methods using noise inputs. Third, the density values learned on the latent space are more interpretable, since the learned prior can approximate the variational posterior better. In our case, the latent variables can now represent latent mode hypotheses for making the corresponding most likely predictions. We call our approach modal uncertainty estimation (MUE).
The main contributions of this work are: (1) we solve the MUE problem using a conditional VAE and justify the use of a discrete latent space from the perspective of the multi-modal posterior collapse problem; (2) our uncertainty estimation improves significantly over the existing state-of-the-art; (3) in contrast to models using noise inputs, which require sampling at the testing stage, our model can directly produce results ordered by their latent mode hypothesis probabilities, and is thus more informative and convenient for practical use.
The rest of the paper is organized as follows. In Section 2 we review works related to ours and stress the key differences. In Section 3 we lay out our general framework and model details. 
Section 4 presents a series of experiments on both synthetic and real datasets. The paper is concluded in Section 5." }, { "heading": "2 RELATED WORK", "text": "Conditional generative models aim to capture the conditional distribution of the data and generate samples according to some given information. Thanks to recent advances in deep learning, especially generative adversarial networks (GANs) Goodfellow et al. (2014) and variational auto-encoders (VAEs) Kingma & Welling (2014), conditional generative models have been effectively applied to various computer vision and graphics tasks such as image synthesis, style transfer, and image in-painting. Early works in this direction focused on learning uni-modal mappings, as in Isola et al. (2017) and Zhu et al. (2017a). They are called uni-modal because the mapping is between fixed categories, namely a one-to-one mapping. There are no latent codes to sample from, so the generation is deterministic. In these works, images of a specific category are translated to another category while keeping the desired semantic content. These methods achieve this goal through a meta-supervision technique known as the adversarial loss, as in the GAN framework, where one only needs to supply weak supervision for whether the generated image belongs to a certain category or not. The adversarial loss is known for producing a sharp visual appearance, but it alone cannot guarantee faithful distribution approximation, and issues known as mode collapse and mode dropping often occur for complicated data distributions Srivastava et al. (2017). In Isola et al. (2017) it is noted that an additional noise input to the conditional model in fact fails to increase variability in the output. How to ensure good approximation of the output distribution for GANs is still an active area of research. The above frameworks are therefore not well suited to approximating the distribution of one-to-many mappings.
Many works have extended to the setting of one-to-many mappings by learning disentangled representations, e.g., of “content” and “style”, and consequently some form of auto-encoding has to be used. Conditional generation can then be accomplished by sampling and decoding the corresponding latent codes. This includes the approaches of Zhu et al. (2017b); Huang et al. (2018) for multi-modal image-to-image translation, Zheng et al. (2019) for image in-painting, and many others. Since the main objectives of these works are the visual quality and diversity of the outputs, they are usually not evaluated in terms of the approximation quality of the output distribution. One notable exception is the Probabilistic U-Net proposed in Kohl et al. (2018), which is based on the conditional VAE framework Sohn et al. (2015) and is close in spirit to ours. The Probabilistic U-Net has shown superior performance in calibrated uncertainty estimation over various other methods, including the ensemble method of Lakshminarayanan et al. (2017), the multi-head approaches of Rupprecht et al. (2017); Ilg et al. (2018), the drop-out approach of Kendall et al. (2015), and the Image2Image VAE of Zhu et al. (2017b). However, as discussed in Section 1, the Probabilistic U-Net cannot solve the multi-modal posterior collapse problem since it uses a Gaussian latent parameterization. Therefore, when the conditional distribution varies across input data, its performance is expected to degrade. 
Furthermore, the latent prior density it learns has no interpretation, and thus cannot be used to rank its predictions. To perform uncertainty estimation with the Probabilistic U-Net, one must perform extensive sampling and clustering.
Our framework improves significantly upon the Probabilistic U-Net by introducing a discrete latent space. With this latent parameterization we can directly output uncertainty estimates, and we can rank our predictions easily. The discrete latent space was proposed in the vq-VAE framework of van den Oord et al. (2017). Such a latent space removes the noise sampling, which enables the latent variable to be utilized more effectively by the decoder and produces outputs with better visual quality; our use of a discrete latent space, by contrast, is motivated by the multi-modal posterior collapse problem. The major technical difference from our framework is that an image in the vq-VAE framework is encoded by a collection of codes arranged in spatial order. As such, the joint distribution of the codes cannot be obtained directly, and has to be estimated or sampled using, e.g., an auto-regressive model over the spatial dimensions, such as PixelCNN Van den Oord et al. (2016).
In contrast, we learn disentangled representations, and only the information necessary to produce different outputs goes into the discrete latent space. In particular, we model each mode of y given x by a single latent code, so our model enjoys much simpler sampling.
Besides vq-VAE van den Oord et al. (2017), the use of discrete latent variables in neural networks has been explored in various previous works, including the early works of Mnih & Gregor (2014) and Mnih & Rezende (2016), which use single- or multiple-sample objectives with variance reduction techniques to aid training. Others have explored continuous approximations to discrete distributions, known as the Concrete Maddison et al. (2016) or Gumbel-Softmax Jang et al. (2016) distributions. As noted in van den Oord et al. (2017), these approaches have in general fallen short of their continuous counterparts. Worth mentioning is a recently proposed neural dialogue generation method Zhao et al. (2018) that uses the Gumbel-Softmax approximation and treats dialogue generation as a one-to-many mapping problem. Our method diverges from theirs in its modelling assumptions. Zhao et al. (2018) design the learned discrete representation of an utterance to be “context free”, in contrast to our assumption that the latent hypothesis for an input should depend on the input itself. Taking medical image segmentation as an example: if we encoded the hypotheses from the segmentation alone, as in Zhao et al. (2018), there would likely be either just two modes (benign vs. malignant) or a huge number of modes if the shape of the segmentation were taken into account. Moreover, such codes would carry no information about what kind of actual biological tissue is present, which, on the other hand, can be judged from the scan image. In our case, we deliberately separate the recognition task learning, e.g., segmenting the image, from the hypothesis learning, so that together they can approximate the variation of the outputs given the input.
Finally, we briefly summarize the differences between MUE and existing uncertainty estimation methodologies in deep learning. Many existing works Gal & Ghahramani (2016); Gal (2016); Kendall et al. 
(2015); Kendall & Gal (2017) focus on model uncertainty, trying to capture the calibrated level of confidence of the model prediction by using stochastic regularization techniques. Such uncertainty is of major interest for model predictions on unseen data and long-tail rare cases, or when the model is trained on limited data. Ours, by contrast, is more about learning from conflicting or ambiguous training data, and estimating the calibrated uncertainty of the input-output relationship in the dataset. Interestingly, Kohl et al. (2018) experimented with Dropout as a comparison to the conditional VAE framework in the MUE setting, but found that it achieved only inferior performance. In general, since MUE is independent of model uncertainty, our framework can be used jointly with existing techniques for prediction confidence estimation." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 GENERAL FRAMEWORK", "text": "Let (x, y) denote a data-label pair. We model the generation of y conditioned on x using the conditional VAE framework as in Sohn et al. (2015). First, a latent variable c is generated from some prior distribution pθ(c|x) parametrized by a neural network. Then the label y is generated from some conditional distribution pθ(y|c, x). We use θ to represent the collection of model parameters used at testing time. The major distinction of our approach is that we assume c takes values in a finite set C, thought of as a code book for the latent mode hypotheses of our multi-modal data distribution. Our goal is to learn the optimal parameters θ* and the code book C, so that the possibly multiple latent modes corresponding to x can be identified, and label predictions ŷ can be faithfully generated from x. The latter means the marginal likelihood pθ(y|x) should be maximized. The variational inference approach, as in Kingma & Welling (2014), starts by introducing a posterior encoding model qφ(c|x, y) with parameters φ, which is used only during training. Since the label information is given, we assume the posterior encoding model is deterministic, meaning there is no “modal uncertainty” for the posterior encoding model; the posterior distribution is a delta distribution for each data-label pair (x, y). In any case, the marginal log-likelihood of y can now be written as
$$\log p_\theta(y|x) = \mathbb{E}_{q_\phi(c|x,y)}\left[\log \frac{q_\phi(c|x,y)}{p_\theta(c|x,y)}\right] + \mathbb{E}_{q_\phi(c|x,y)}\left[\log \frac{p_\theta(c,y|x)}{q_\phi(c|x,y)}\right] \qquad (1)$$
Since the first term on the RHS of (1) is a KL divergence and hence non-negative, we have the variational lower bound
$$\log p_\theta(y|x) \;\geq\; \mathbb{E}_{q_\phi(c|x,y)}\left[\log \frac{p_\theta(c,y|x)}{q_\phi(c|x,y)}\right] = -\mathbb{E}_{q_\phi(c|x,y)}\left[\log \frac{q_\phi(c|x,y)}{p_\theta(c|x)}\right] + \mathbb{E}_{q_\phi(c|x,y)}\left[\log p_\theta(y|c,x)\right] \qquad (2)$$
We further lower bound Equation (2) by observing that the entropy term $-\mathbb{E}_{q}[\log q]$ is non-negative, and is constant when qφ(c|x, y) is deterministic. This yields a sum of a negative cross entropy and a conditional likelihood:
$$\log p_\theta(y|x) \;\geq\; \mathbb{E}_{q_\phi(c|x,y)}\left[\log p_\theta(c|x)\right] + \mathbb{E}_{q_\phi(c|x,y)}\left[\log p_\theta(y|c,x)\right] \qquad (3)$$
For our optimization problem, we maximize the lower bound (3). Since c takes values in the finite code book C, the probability distribution pθ(c|x) can be estimated via multi-class classification, and the cross entropy term can be estimated efficiently using stochastic approximation.
It is important to note that, since we assume qφ(c|x, y) is deterministic, we do not regularize it by pulling it toward the prior distribution, in contrast to previous conditional VAE frameworks. This means that the probability values of the posterior are not influenced by the probability values of the prior distribution. 
Instead, we let the prior encoding model be trained by the posterior encoding model, as a classification task whose ground truth is the class index obtained from the posterior encoder and the code book C. The lack of prior regularization is also a feature of the vq-VAE approach of van den Oord et al. (2017) for unconditional generation, and Razavi et al. (2018) argue that restricting the latent space C to a finite set is itself a structural prior constraint for the VAE framework. Note that the discrete latent space C should be considered just a finite set of indices, without any other structure between the indices. Below we discuss how to realize it in Rn so that the actual representation can be useful to the decoder.
First of all, because C is a finite set, the objective (3) is not fully differentiable. We tackle this problem using a simple gradient approximation method and an extra regularization loss, following the approach of van den Oord et al. (2017); Razavi et al. (2019). In detail, denote the prior encoding network by Eθ, the posterior encoding network by Eφ, and the decoder by Dθ. Since we assume a delta distribution for the posterior encoding model qφ(c|x, y), we let the posterior encoder produce a deterministic output for the given input-output pair (x, y); in other words, no sampling is performed by the posterior encoder. Suppose the output of the posterior encoding network is e = Eφ(x, y). Its nearest neighbor c in C in ℓ2 distance,
$$c = \arg\min_{c' \in C} \|c' - e\|_2,$$
becomes the input to the decoder network, and we simply copy the gradient of c to that of e so that the posterior encoder can obtain gradient information from the label prediction error. To make sure the gradient approximation is accurate, we need to encourage the posterior encoder’s outputs to approximate values in C as closely as possible. To achieve this we use an ℓ2 penalty of the form β‖e − sg[c]‖² with parameter β > 0, where sg is the stop-gradient operation. The code c is updated using an exponential moving average of the corresponding posterior encoder outputs. In this notation, our loss function to be minimized for a single input pair (x, y) is
$$\mathcal{L}(\theta, \phi) = \mathrm{CE}(E_\theta(x), \mathrm{id}_c) + \mathrm{Recon}(D_\theta(c, x), y) + \beta\,\|E_\phi(x, y) - \mathrm{sg}[c]\|^2 \qquad (4)$$
where CE denotes the cross entropy loss, Eθ(x) is a probability vector of length |C|, and idc is the code index corresponding to the input pair (x, y). Recon denotes the label reconstruction loss in lieu of the negative log-likelihood.
During training, we learn the prior encoding model pθ(c|x), the posterior encoding model qφ(c|x, y), the decoding model pθ(y|c, x), and the code book C in an end-to-end fashion. The posterior encoder plus decoder learn a good representation of the latent code, and the prior encoder learns faithful uncertainty estimation from the stochastic training. At inference time, we use the learned prior encoding model to output a conditional distribution over C given x, where each code corresponds to a decoded label prediction with an associated probability." }, { "heading": "3.2 MODEL DESIGN AND TRAINING", "text": "Our proposed framework in Section 3.1 is general and can be applied to a wide range of network architecture designs; a minimal sketch of the quantization step just described follows below. 
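The following PyTorch sketch is our own (not the authors’ released code; the class name and the EMA decay are illustrative, while n_codes = 512 and the 128-dimensional codes follow the values stated in the paper). It implements the nearest-neighbor lookup, the straight-through gradient copy, the EMA code-book update, and the commitment term of Eq. (4):

```python
import torch
import torch.nn.functional as F

class CodeBook(torch.nn.Module):
    """Discrete latent lookup of Section 3.1 (hypothetical implementation)."""

    def __init__(self, n_codes=512, dim=128, decay=0.99):
        super().__init__()
        self.register_buffer("codes", torch.randn(n_codes, dim))
        self.register_buffer("ema_sum", self.codes.clone())
        self.register_buffer("cluster_size", torch.zeros(n_codes))
        self.decay = decay

    def forward(self, e):
        # e: (batch, dim) outputs of the posterior encoder E_phi(x, y).
        # Nearest neighbour in l2 distance: c = argmin_{c' in C} ||c' - e||_2.
        idx = torch.cdist(e, self.codes).argmin(dim=1)   # code indices id_c
        c = self.codes[idx]
        if self.training:
            with torch.no_grad():
                # Exponential moving average update of the selected codes.
                one_hot = F.one_hot(idx, self.codes.size(0)).type_as(e)
                self.cluster_size.mul_(self.decay).add_(one_hot.sum(0), alpha=1 - self.decay)
                self.ema_sum.mul_(self.decay).add_(one_hot.t() @ e, alpha=1 - self.decay)
                self.codes.copy_(self.ema_sum / self.cluster_size.clamp(min=1e-5).unsqueeze(1))
        # Straight-through estimator: the decoder sees c, gradients are copied to e.
        c_st = e + (c - e).detach()
        # Commitment term ||e - sg[c]||^2 from Eq. (4); scaled by beta outside.
        commit = F.mse_loss(e, c.detach())
        return c_st, idx, commit
```

In a full model, c_st would be fed to the decoder Dθ, the commitment term scaled by β = 0.25, and the returned indices used as classification targets for the prior encoder’s cross-entropy loss.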
Nonetheless, it is worth discussing how the latent code c should be learned and utilized, so that the posterior encoding model really learns the mode hypothesis of the given data-label pair rather than simply memorizing the data, which would cause an overwhelming number of codes and over-fitting. One important principle is that the latent code should not be learned or used in a spatially dependent manner, especially in pixel-level recognition tasks such as segmentation. This also ensures that the prior encoding network learns to solve the majority of the recognition problem, while the latent code supplies only the additional but necessary information to reconstruct distinct outputs from the same (or similar) input. For this purpose we adopt a simple approach: the code output by the posterior encoding network is obtained by global average pooling of its last layer; to incorporate the code into an intermediate feature of the decoding network with spatial dimensions (h, w), we simply make h × w copies of the code, reshape them to spatial size (h, w), and concatenate the result to the corresponding feature.
In the experiments in Section 4 we consider applications in computer vision and thus use convolutional neural networks (CNNs) for the prior and posterior encoders, as well as for the decoder, which together resemble the U-Net architecture. Specifically, each encoding network consists of a sequence of downsampling residual blocks, and the decoder of a sequence of upsampling residual blocks, where the decoder also has skip connections to the prior encoder, receiving its feature at each resolution level. The latent code is incorporated into the decoder at a single level L, which depends on the task.
Suppose the latent code is c-dimensional. We initialize the code book C as a size (nC, c) i.i.d. random normal matrix, where each of the nC vectors of dimension c represents an individual code. The statistics of the normal distribution are computed from the outputs of the posterior encoding network on the training data at initialization. We have found this beneficial, since it allows the entire model to start at a lower-loss position on the optimization landscape. We have also found that the number of codes utilized during training follows an interesting pattern: at the very first stage only very few codes are used; the number then gradually grows to a maximum before it slowly declines and becomes stable when the reconstruction loss plateaus. We therefore allow the network to warm up in the first η epochs by training without the cross-entropy loss, since during this stage the number of utilized codes is unstable. This does not impair the learning of the posterior encoder, since it receives no gradient information from the prior. We have found nC = 512 to be more than sufficient for all of our tasks, and the actual number of codes utilized after training is usually a fraction of it. Because of these observations, we did not try to explicitly enforce different codes to have different outputs: for one, the final set of codes is usually compact, and for another, we would like to allow different codes to have similar outputs, corresponding to the situation where different hypotheses lead to similar predictions. We expect a connection with the information bottleneck theory Tishby & Zaslavsky (2015) and leave this direction for future work. We will release our open source implementation to promote future research." 
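A minimal sketch of the code-incorporation step described above (our own; the function name is hypothetical, and PyTorch tensors are assumed):

```python
import torch

def incorporate_code(feature, code):
    # feature: (B, C, h, w) decoder feature at level L; code: (B, c) latent code.
    b, c = code.shape
    h, w = feature.shape[2], feature.shape[3]
    # Make h*w copies of the code, reshaped to spatial size (h, w) ...
    code_map = code.view(b, c, 1, 1).expand(b, c, h, w)
    # ... and concatenate along the channel dimension.
    return torch.cat([feature, code_map], dim=1)  # (B, C + c, h, w)
```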
}, { "heading": "4 EXPERIMENTS", "text": "To rigorously access our method’s ability to approximate the distribution of one-to-many mappings, in Section 4.1 we first conduct a synthetic experiment with known ground truth conditional distributions. In Section 4.2 we then demonstrate our method’s performance on the realistic and challenging task of lesion segmentation on possibly ambiguous lesion scan on the LIDC-IDRI benchmark. In both experiments we compare with the state-of-the-art method Probabilistic U-Net Kohl et al. (2018) of the same model complexity as ours. More details on the experimental setting and additional results can be found in Appendix A." }, { "heading": "4.1 QUANTITATIVE ANALYSIS ON SYNTHETIC TASKS", "text": "MNIST guess game To test the ability for multi-modal prediction quantitatively, we design a simple guessing game using the MNIST dataset LeCun & Cortes (2010) as follows. We are shown a collection of images and only one of them is held by the opponent. The image being held is not fixed and follows certain probability distribution. We need to develop a generative model to understand the mechanism of which image is being held based on the previously\nseen examples. In details, the input x will be an image that consists of four random digits, and belongs to one of the four categories: (A) (1, 2, 3, 4); (B) (3, 4, 5, 6); (C) (5, 6, 7, 8); (D) (7, 8, 9, 0). The number represents the label of the image sampled. The output y will be an image of the same size but only one of the input digit is present. Specifically, for (A) the probability distribution is (0.25, 0.25, 0.25, 0.25); for (B) (0.1, 0.4, 0.1, 0.4); for (C) (0.3, 0.5, 0.1, 0.1); for (D) (0.1, 0.1, 0.1, 0.7). Note that the distribution of the output, conditioned on each category’s input, consists of four modes, and is designed to be different for each category. We require the model to be trained solely based on the observed random training pairs (x,y), and thus no other information like digit categories should be used. The model would therefore need to learn to discriminate each category and assign the correct outputs with corresponding probabilities.\nThus for instance, an input image of Category (A) will be the combination of four random samples from Digit 1 to 4 in that order, and the output can be the same digit 1 in the input with probability 0.25, or it can be the same digit 2 with probability 0.25, and so forth. The images in the first row in Fig.3 illustrate an input image, where we also annotate the ground truth probability on the upper-left corner.\nWe trained our model on samples from the training dataset of MNIST and tested it on samples from the testing dataset, during both stages the random combination is conducted on the fly. The model here for demonstrating the results used a total of 11 codes after training. Please refer to Appendix A.1 for training specifics and more results.\nFrom the second to fifth row in Fig.3 we show the results of different models. Ours in Fig.3(a) are the top-4 predictions with explicit probability estimates annotated on the upper-left in each row. For example, the second row has probability 0.2634, which is\nvery close to the ground truth 0.25. In contrast, Probabilistic U-Net cannot rank its outputs and hence four random samples are drawn. 
Consequently, one has little control when generating samples, and nonsensical outputs such as those in the fourth row of Fig.3(b) are likely.
Our method also performs much better quantitatively, as shown in Fig.2 with results on 1000 random testing samples. We classify both models’ outputs into the ground truth modes and aggregate the corresponding probabilities. As Fig.2(a) shows, our method successfully discovers the distributional properties of the one-to-many mappings and provides accurate uncertainty estimates. In contrast, due to the Gaussian latent parametrization, neither the individual density of each input nor their averages provide useful information, as shown by the left axis of Fig.2(b). On the right axis of Fig.2(b) we also count the mode frequencies of the Probabilistic U-Net for each category. However, even calculated over the entire testing dataset, the distribution approximation is still far from accurate compared to ours. Note that our method can directly output an accurate uncertainty estimate for each individual input. This clearly demonstrates our method’s superiority and practical value." }, { "heading": "4.2 REAL APPLICATIONS", "text": "Lesion segmentation of possibly ambiguous lung CT scans We use the LIDC-IDRI dataset provided by Armato III et al. (2015; 2011); Clark et al. (2013), which contains 1018 lung CT scans from 1010 patients. Each scan has lesion segmentations by four (out of a total of twelve) expert graders. The identities of the graders for each scan are not provided in the dataset. Samples from the testing set can be found in the first row of Fig.4. As can be seen, the graders often disagree about whether the scan contains lesion tissue. We hypothesize that the disagreement is due to the different assumptions the experts hold about the scan. For example, judging from the scan’s appearance, one of the graders might have believed, based on his or her experience, that the suspicious tissue is in fact normal tissue, and thus gave an empty segmentation. There are also other possible underlying assumptions that lead graders to different segmentation shapes.
Our task is to identify such ambiguous scenarios by proposing distinct segmentation results from the corresponding latent hypotheses, together with their associated probabilities, helping clinicians easily identify possible mis-identifications and call for further examination of the patients.
Our network architecture for this task is a scaled-up version of the model used in the MNIST guessing task. At training time, we randomly sample a CT scan x from the training set, and we randomly sample one of its four segmentations as y. The model used to report the results has a total of 31 codes. Training specifics and more results can be found in Appendix A.3.
Some sample testing results predicted by our model to have high uncertainty are illustrated in Fig.4. The first row shows the input and its four segmentations, and the last two rows our top-8 predictions, where the probability associated with each latent code is annotated in the upper-left corner. Our method captures, with notable probability scores, the uncertainty contained in the segmentation labels, as well as other types of segmentations that seem plausible in the absence of further information.
Since no ground truth distribution is available for the LIDC-IDRI dataset, quantitative evaluation has to be conducted differently from the MNIST guessing task. We follow the practice of Kohl et al. 
(2018) and adopt the generalized energy distance metric D²GED found in Bellemare et al. (2017); Salimans et al. (2018); Székely & Rizzo (2013), a statistical quantity that measures the discrepancy between two subsets of a metric space. Please refer to Appendix A.2 for the details of computing this metric for segmentations based on Intersection over Union (IoU). The lower the value of D²GED, the closer the two subsets are. We report results on the entire testing dataset in Fig.5(a). For our model, the mean D²GED over all testing data is 0.3354 and the standard deviation is 0.2947. Our performance is thus competitive with that of the Probabilistic U-Net, whose mean is 0.3470 with standard deviation 0.3139. Moreover, our model gives quantitative uncertainty estimates directly for each input scan, unlike the Probabilistic U-Net, which needs to perform sampling and clustering using a metric such as IoU to obtain an uncertainty estimate.
Finally, we visualize segmentation results for some frequently used codes in Fig.5(b). The code used is annotated at the bottom. Segmentations from codes of negligible probability (e.g., less than 10⁻⁴) are left blank. For example, the fourth column for code #102 may correspond to a latent hypothesis that leads to the conclusion of no lesion, and the scan in the third row is not compatible with that particular latent hypothesis. It would be interesting future work to explore the semantics of the latent codes if more information about the patient and the scan were given." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "We have proposed MUE, a novel framework for learning one-to-many mappings with calibrated uncertainty estimation. As an effective solution to the multi-modal posterior collapse problem, the discrete latent representations are learned to explain the corresponding types of input-output relationships in one-to-many mappings. This also allows us to perform uncertainty estimation of the model predictions effectively and efficiently. We have extensively validated our method’s performance and usefulness on both synthetic and realistic tasks, demonstrating superior performance over state-of-the-art methods." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "As described in the main text, each of our encoding networks consists of a sequence of downsampling residual blocks, and the decoder of a sequence of upsampling residual blocks. The decoder also has skip connections to the prior encoder, receiving its feature at each resolution level. A residual block consists of three convolution layers. For the shortcut connection, the input is added to the output if they have the same channel size; otherwise a 1 × 1 convolution is applied before the addition. Bilinear downsampling or upsampling is applied before the residual blocks if the spatial size changes. We fix the ℓ2 penalization weight β = 0.25 and the number of initial candidate codes nC = 512, and use the Adam optimizer Kingma & Ba (2015) with its default settings for all of our experiments. Hyper-parameters specific to each experiment are detailed in the following subsections.
A.1 MNIST GUESSING GAME
We use 6 layers in the prior and the posterior encoding networks, with output channel dimensions [16, 32, 64, 128] + [128, 128]. This notation means that the first 4 levels are used as in the U-Net, feeding the decoder, and the last 2 levels are used for latent code learning. The posterior encoder further applies a 1×1 convolution and global average pooling to obtain the code. 
The code is of dimension 128. The prior encoder uses a linear layer to learn the distribution over C. Our decoder has output channel dimensions [64, 32, 16, 1]. We incorporate the code at the bottom level, namely the 1st layer of the decoder. For the Probabilistic U-Net, since the architecture is different, we used a structure of similar capacity, with the parameter num_filter = [16, 32, 64, 128] in its released version, and we find that the hyperparameters suggested in Kohl et al. (2018) for the LIDC task work well in this case. For both networks, we use the binary cross entropy loss, a batch size of 256, and the learning rate schedule [1e−4, 5e−5, 1e−5, 5e−6] at [0, 30k, 90k, 120k] iterations.
Some additional results from our model and the Probabilistic U-Net are shown in Fig.6 and Fig.7.
A.2 GENERALIZED ENERGY DISTANCE METRIC FOR SEGMENTATIONS
Denote by Yx, Sx ⊂ Y the set of segmentation labels and the set of model predictions corresponding to the scan x, respectively. Y is equipped with the metric
$$d(y, s) = 1 - \mathrm{IoU}(y, s),$$
where IoU(·, ·) is the intersection-over-union operator, suitable for evaluating the similarity between segmentations. The D²GED statistic in our case is defined as
$$D^2_{\mathrm{GED}}(\mathcal{Y}_x, \mathcal{S}_x) = 2\sum_{y \in \mathcal{Y}_x}\sum_{s \in \mathcal{S}_x} p_s\, p_y\, d(y, s) \;-\; \sum_{y \in \mathcal{Y}_x}\sum_{y' \in \mathcal{Y}_x} p_y\, p_{y'}\, d(y, y') \;-\; \sum_{s \in \mathcal{S}_x}\sum_{s' \in \mathcal{S}_x} p_s\, p_{s'}\, d(s, s'),$$
where ps is our model’s probability prediction for the output s and py is the ground truth probability. When the ground truth is not available, as for LIDC-IDRI, we use py = 1/|Yx|, where |Yx| denotes the cardinality of Yx. For our model, we choose Sx to be the top-N predictions. To be rigorous we normalize the sum of their probabilities to 1 (though this in fact has a negligible effect, since N is chosen so that the probabilities almost always sum to 1). In the case of the Probabilistic U-Net, we use N random output samples and ps is replaced by 1/|Sx| = 1/N.
A.3 LIDC-IDRI SEGMENTATION
We use 6 layers in the prior and the posterior encoding networks, with output channel dimensions [32, 64, 128, 192] + [256, 512]. The code is of dimension 128. Our decoder has output channel dimensions [128, 64, 32, 1]. We incorporate the code at the bottom level, namely the 1st layer of the decoder. For the Probabilistic U-Net, since the architecture is different, we used a structure of similar capacity, with the parameter num_filter = [32, 64, 128, 192] in its released version, and follow the hyperparameters suggested in Kohl et al. (2018). For both networks, we use the binary cross entropy loss, a batch size of 256, and the learning rate schedule [1e−4, 5e−5, 1e−5, 5e−6] at [0, 30k, 90k, 120k] iterations.
Some additional results from our model and the Probabilistic U-Net are shown in Fig.8, 9 and Fig.10, 11, respectively." } ]
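As a reading aid for Appendix A.2 above, here is a small sketch of computing D²GED from IoU for binary masks (our own code; function and variable names are illustrative, and treating two empty masks as identical is our assumption):

```python
import numpy as np

def iou_dist(a, b):
    # d(y, s) = 1 - IoU(y, s) for boolean masks; two empty masks count as identical.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - inter / union

def d2_ged(labels, p_y, preds, p_s, d=iou_dist):
    # 2*E[d(y, s)] - E[d(y, y')] - E[d(s, s')] under the given probabilities.
    cross  = sum(py * ps * d(y, s)   for y, py in zip(labels, p_y) for s,  ps  in zip(preds,  p_s))
    within = sum(py * py2 * d(y, y2) for y, py in zip(labels, p_y) for y2, py2 in zip(labels, p_y))
    sample = sum(ps * ps2 * d(s, s2) for s, ps in zip(preds,  p_s) for s2, ps2 in zip(preds,  p_s))
    return 2 * cross - within - sample
```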
2020
null
SP:07471c50632db15eedbbc63f360a391140c1e094
[ "The paper empirically analyzes the evaluation framework of the current OOD detection systems for the image recognition task, specifically the evaluation described in [1] using Max-softmax and calibrated confidence. They motivate the paper by the necessity of having better evaluation for OOD detection to be reflective of real world scenarios. The addressed problem is interesting and valuable for the field as many of the defined OOD datasets, and evaluation metrics may not cover many real-world scenarios. They specifically addressed three scenarios, inputs that i) are irrelevant to the task ii) are from novel classes and iii) are from another domain (domain shift), which for the first 2 scenarios, they only evaluate them as unseen classes and not distinguish between them. Based on my understanding of the paper, they compare 5 OOD detection methods from the literature, suggest a few test datasets/scenarios and conclude using cosine similarity is consistently favorable for evaluation, and the choice of using confidence-based methods in case of domain shift detection scenarios.\t\t\t\t\t" ]
We reconsider the evaluation of OOD detection methods for image recognition. Although many studies have been conducted so far to build better OOD detection methods, most of them follow Hendrycks and Gimpel’s work for the method of experimental evaluation. While a unified evaluation method is necessary for a fair comparison, there is a question of whether its choice of tasks and datasets reflects real-world applications, and whether the evaluation results generalize to other OOD detection application scenarios. In this paper, we experimentally evaluate the performance of representative OOD detection methods for three scenarios, i.e., irrelevant input detection, novel class detection, and domain shift detection, on various datasets and classification tasks. The results show that differences in scenarios and datasets alter the relative performance among the methods. Our results can also serve as a guide for practitioners in selecting OOD detection methods.
[]
[ { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101 – mining discriminative components with random forests", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Hady Elsahar", "Matthias Gallé" ], "title": "To annotate or not? predicting performance drop under domain shift", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "João Gama", "Indrė Žliobaitė", "Albert Bifet", "Mykola Pechenizkiy", "Abdelhamid Bouchachia" ], "title": "A survey on concept drift adaptation", "venue": "ACM computing surveys (CSUR),", "year": 2014 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Thomas Mosgaard Giselsson", "Rasmus Nyholm Jørgensen", "Peter Kryger Jensen", "Mads Dyrmann", "Henrik Skov Midtiby" ], "title": "A public image database for benchmark of plant seedling classification algorithms", "venue": null, "year": 2017 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Ross Girshick", "Piotr Dollár" ], "title": "Rethinking imagenet pre-training", "venue": "In Proceedings of the International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yen-Chang Hsu", "Yilin Shen", "Hongxia Jin", "Zsolt Kira" ], "title": "Generalized odin: Detecting out-ofdistribution image without 
learning from out-of-distribution data", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Eyke Hüllermeier", "Willem Waegeman" ], "title": "Aleatoric and epistemic uncertainty in machine learning: A tutorial introduction", "venue": null, "year": 1910 }, { "authors": [ "Aditya Khosla", "Nityananda Jayadevaprakash", "Bangpeng Yao", "Li Fei-Fei" ], "title": "Novel dataset for finegrained image categorization: Stanford dogs", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop on Fine-Grained Visual Categorization,", "year": 2011 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3d object representations for fine-grained categorization", "venue": "In Proceedings of the International Workshop on 3D Representation and Recognition,", "year": 2013 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Improving confidence estimates for unfamiliar examples", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "David Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Omkar M. Parkhi", "Andrea Vedaldi", "Andrew Zisserman", "C.V. Jawahar" ], "title": "Cats and dogs", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In Proceedings of the International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Kate Saenko", "Brian Kulis", "Mario Fritz", "Trevor Darrell" ], "title": "Adapting visual category models to new domains", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2010 }, { "authors": [ "Chandramouli S Sastry", "Sageev Oore" ], "title": "Detecting out-of-distribution examples with gram matrices", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alireza Shafaei", "Mark Schmidt", "James J. 
Little" ], "title": "A less biased evaluation of out-of-distribution sample detectors", "venue": "In Proceedings of the British Machine Vision Conference,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Engkarat Techapanurak", "Suganuma Masanori", "Takayuki Okatani" ], "title": "Hyperparameter-free out-ofdistribution detection using softmax of scaled cosine similarity", "venue": null, "year": 1905 }, { "authors": [ "Marco Toldo", "Andrea Maracani", "Umberto Michieli", "Pietro Zanuttigh" ], "title": "Unsupervised domain adaptation in semantic segmentation: a review", "venue": null, "year": 2005 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Apoorv Vyas", "Nataraj Jammalamadaka", "Xia Zhu", "Dipankar Das", "Bharat Kaul", "Theodore L Willke" ], "title": "Out-of-distribution detection using an ensemble of self supervised leave-out classifiers", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Peter Welinder", "Steve Branson", "Takeshi Mita", "Catherine Wah", "Florain Schroff", "Serge Belongie", "Pietro Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical report, California Institute of Technology,", "year": 2010 }, { "authors": [ "Qing Yu", "Kiyoharu Aizawa" ], "title": "Unsupervised out-of-distribution detection by maximum classifier discrepancy", "venue": "In Proceedings of the International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Sehun Yu", "Dongha Lee", "Hwanjo Yu" ], "title": "Convolutional neural networks with compression complexity pooling for out-of-distribution image detection", "venue": "In Proceedings of the International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yang Zhang", "Philip David", "Boqing Gong" ], "title": "Curriculum domain adaptation for semantic segmentation of urban scenes", "venue": "In Proceedings of the International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ev Zisselman", "Aviv Tamar" ], "title": "Deep residual flow for out of distribution detection", "venue": "In Proceedings of the Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yang Zou", "Zhiding Yu", "Xiaofeng Liu", "BVK Kumar", "Jinsong Wang" ], "title": "Confidence regularized self-training", "venue": "In Proceedings of the International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hsu" ], "title": "backbone part and a higher learning rate of 0.1 to the fully-connected layer; the weight decay for the fully-connected layer is set to 0, following Techapanurak et al", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "We reconsider the evaluation of OOD detection methods for image recognition. Although many studies have been conducted so far to build better OOD detection methods, most of them follow Hendrycks and Gimpel’s work for the method of experimental evaluation. While the unified evaluation method is necessary for a fair comparison, there is a question of if its choice of tasks and datasets reflect real-world applications and if the evaluation results can generalize to other OOD detection application scenarios. In this paper, we experimentally evaluate the performance of representative OOD detection methods for three scenarios, i.e., irrelevant input detection, novel class detection, and domain shift detection, on various datasets and classification tasks. The results show that differences in scenarios and datasets alter the relative performance among the methods. Our results can also be used as a guide for practitioners for the selection of OOD detection methods." }, { "heading": "1 INTRODUCTION", "text": "Despite their high performance on various visual recognition tasks, convolutional neural networks (CNNs) often show unpredictable behaviors against out-of-distribution (OOD) inputs, i.e., those sampled from a different distribution from the training data. For instance, CNNs often classify irrelevant images to one of the known classes with high confidence. A visual recognition system should desirably be equipped with an ability to detect such OOD inputs upon its real-world deployment.\nThere are many studies of OOD detection that are based on diverse motivations and purposes. However, as far as the recent studies targeted at visual recognition are concerned, most of them follow the work of Hendrycks & Gimpel (2017), which provides a formal problem statement of OOD detection and an experimental procedure to evaluate the performance of methods. Employing this procedure, the recent studies focus mainly on increasing detection accuracy, where the performance is measured using the same datasets.\nOn the one hand, the employment of the experimental procedure has arguably bought about the rapid progress of research in a short period. On the other hand, little attention has been paid to how well the employed procedure models real-world problems and applications. They are diverse in purposes and domains, which obviously cannot be covered by the single problem setting with a narrow range of datasets.\nIn this study, to address this issue, we consider multiple, more realistic scenarios of the application of OOD detection, and then experimentally compare the representative methods. To be specific, we consider the three scenarios: detection of irrelevant inputs, detection of novel class inputs, and detection of domain shift. The first two scenarios differ in the closeness between ID samples and OOD samples.\nUnlike the first two, domain shift detection is not precisely OOD detection. Nonetheless, it is the same as the other two in that what we want is to judge if the model can make a meaningful inference for a novel input. In other words, we can generalize OOD detection to the problem of judging this. Then, the above three scenarios are naturally fallen into the same group of problems, and it becomes natural to consider applying OOD detection methods to the third scenario. It is noteworthy that domain shift detection has been poorly studied in the community. 
Despite many demands from practitioners, there is no established method for it in the context of deep learning for image classification.
Based on the above generalization of OOD detection, we propose a meta-approach in which any OOD detection method can be used as its component.
For each of these three scenarios, we compare the following methods: the confidence-based baseline (Hendrycks & Gimpel, 2017), MC dropout (Gal & Ghahramani, 2016), ODIN (Liang et al., 2017), cosine similarity (Techapanurak et al., 2019; Hsu et al., 2020), and the Mahalanobis detector (Lee et al., 2018). Domain shift detection is studied in (Elsahar & Gallé, 2019) for natural language processing tasks, where proxy-A distance (PAD) is reported to perform the best; thus we test it in our experiments.
As for choosing the compared methods, we follow the argument shared by many recent studies (Shafaei et al., 2019; Techapanurak et al., 2019; Yu & Aizawa, 2019; Yu et al., 2020; Hsu et al., 2020) that OOD detection methods should not assume the availability of explicit OOD samples at training time. Although this may sound obvious considering the nature of OOD, some of the recent methods (e.g., Liang et al. (2017); Lee et al. (2018)) use a certain amount of OOD samples as validation data to determine their hyperparameters. Recent studies (Shafaei et al., 2019; Techapanurak et al., 2019) show that these methods do perform poorly when encountering OOD inputs sampled from a different distribution from the assumed one at test time. Thus, for ODIN and the Mahalanobis detector, we employ their variants (Hsu et al., 2020; Lee et al., 2018) that can work without OOD samples. The other compared methods do not need OOD samples.
The contributions of this study are summarized as follows. i) Listing three problems that practitioners frequently encounter, we evaluate the existing OOD detection methods on each of them. ii) We show a practical approach to domain shift detection that is applicable to CNNs for image classification. iii) We present an experimental evaluation of representative OOD detection methods on these problems, revealing each method's effectiveness and ineffectiveness in each scenario." }, { "heading": "2 PROBLEMS AND METHODS", "text": "" }, { "heading": "2.1 PRACTICAL SCENARIOS OF OOD DETECTION", "text": "We consider image recognition tasks in which a CNN classifies a single image x into one of C known classes. The CNN is trained using pairs of x and its label, and x is sampled according to x ∼ p(x). At test time, it will encounter an unseen input x, which is usually from p(x) but is sometimes from p′(x), a different, unknown distribution. In this study, we consider the following three scenarios.
Detecting Irrelevant Inputs The new input x does not belong to any of the known classes and is of no concern. Suppose we want to build a smartphone app that recognizes dog breeds. We train a CNN on a dataset containing various dog images, enabling it to perform the task with reasonable accuracy. We then point the smartphone at a sofa and shoot its image, feeding it to our classifier. It could classify the image as a Bull Terrier with high confidence. Naturally, we want to avoid this by detecting the irrelevance of x. Most studies of OOD detection assume this scenario for evaluation.
Detecting Novel Classes The input x belongs to a novel class, which differs from any of the C known classes, and furthermore, we want our CNN to learn to classify it later, e.g., after additional training.
For instance, suppose we are building a system that recognizes insects in the wild, with an ambition to make it cover all the insects on the earth. Further, suppose an image of one of the endangered (and thus rare) insects is inputted to the system while operating it. If we can detect it as a novel class, we would be able to update the system in several ways. The problem is the same as the first scenario in that we want to detect whether x ∼ p(x) or not. The difference is that x is more similar to samples of the learned classes, or equivalently, p′(x) is closer to p(x), arguably making the detection more difficult. Note that in this study, we don't consider distinguishing whether x is an irrelevant input or a novel class input, for the sake of simplicity. We leave it for a future study.
Detecting Domain Shift The input x belongs to one of the C known classes, but its underlying distribution is p′(x), not p(x). We are especially interested in the case where a distributional shift p(x) → p′(x) occurs either suddenly or gradually while running a system for the long term. Our CNN may or may not generalize beyond this shift to p′(x). Thus, we want to detect if it does not. If we can do this, we would take some actions, such as re-training the network with new training data (Elsahar & Gallé, 2019). We consider the case where no information is available other than the incoming inputs x's.
A good example is a surveillance system using a camera deployed outdoors. Let us assume the images' quality deteriorates after some time since its deployment, for instance, due to the camera's aging. Then, the latest images will follow a different distribution from that of the training data. Unlike the above two cases where we have to decide for a single input, we can use multiple inputs; we should, especially when the quality of input images deteriorates gradually over time.
The problem here has three differences from the above two scenarios. First, the input is a valid sample belonging to a known class, neither an irrelevant sample nor a novel class sample. Second, we are basically interested in the accuracy of our CNN on the latest input(s) and not in whether x ∼ p(x) or p′(x). Third, as mentioned above, we can use multiple inputs {xi} (i = 1, . . . , n) for the judgment.
Additional remarks on this scenario. Assuming a temporal sequence of inputs, the distributional shift is also called concept drift (Gama et al., 2014). It includes several different subproblems, and the one considered here is called virtual concept drift in its terminology. Mathematically, concept drift occurs when p(x, y) changes with time. It is called virtual when p(x) changes while p(y|x) does not change. Intuitively, this is the case where the classes (i.e., the concept) remain the same but p(x) changes, requiring the classifier to deal with inputs drawn from p′(x). Then, we are usually interested in predicting if x lies in a region of the data space for which our classifier is well trained and can correctly classify it. If not, we might want to retrain our classifier using additional data or invoke unsupervised domain adaptation methods (Ganin & Lempitsky, 2015; Tzeng et al., 2017)." }, { "heading": "2.2 COMPARED METHODS", "text": "We select five representative OOD detection methods that do not use real OOD samples to be encountered at test time.
Baseline: Max-softmax Hendrycks & Gimpel (2017) showed that the maximum of the softmax outputs, or confidence, can be used to detect OOD inputs.
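A minimal PyTorch sketch of this confidence score (our illustration, not code from the original work; `model` is an assumed classifier, and a temperature tuned on ID validation data turns the same function into the calibrated variant introduced next):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def max_softmax_score(model, x, temperature=1.0):
    """ID score of the Baseline: the maximum softmax probability.
    With temperature = 1 this is Hendrycks & Gimpel's score; a tuned
    temperature gives the calibrated variant (Calib.)."""
    logits = model(x)                       # (batch, num_classes)
    probs = F.softmax(logits / temperature, dim=1)
    return probs.max(dim=1).values          # higher = more likely ID
```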
We use it as the score of an input being in-distribution (ID). We will refer to this method as Baseline. It is well known that the confidence can be calibrated using temperature to better represent classification accuracy (Guo et al., 2017; Li & Hoiem, 2020). We also evaluate this calibrated confidence, which will be referred to as Calib.
MC Dropout The confidence (i.e., the max-softmax) is also thought of as a measure of uncertainty of prediction, but it captures only aleatoric uncertainty (Hüllermeier & Waegeman, 2019). Bayesian neural networks (BNNs) can also take epistemic uncertainty into account, which is theoretically more relevant to OOD detection. MC (Monte-Carlo) dropout (Gal & Ghahramani, 2016) is an approximation of BNNs that is computationally more efficient than an ensemble of networks (Lakshminarayanan et al., 2017). To be specific, using dropout (Srivastava et al., 2014) at test time provides multiple prediction samples, from which the average of their max-softmax values is calculated and used as the ID score.
Cosine Similarity It was recently shown in Techapanurak et al. (2019); Hsu et al. (2020) that using scaled cosine similarities at the last layer of a CNN, similar to the angular softmax for metric learning, enables accurate OOD detection. To be specific, the method first computes cosine similarities between the feature vector of the final layer and class centers (or equivalently, normalized weight vectors for the classes). They are multiplied by a scale and then normalized by softmax to obtain class scores. The scale, which is the inverse temperature, is predicted from the same feature vector. These computations are performed by a single layer replacing the last layer of a standard CNN. The maximum of the cosine similarities (without the scale) gives the ID score. The method is free of hyperparameters for OOD detection. We will refer to it as Cosine.
ODIN (with OOD-sample Free Extension) ODIN was proposed by Liang et al. (2017) to improve Baseline by perturbing an input x → x + ε · sgn(δx) in the direction δx that maximally increases the max-softmax, and also by temperature scaling. Thus, there are two hyperparameters, the perturbation size ε and the temperature T. In Liang et al. (2017), they are chosen by assuming the availability of explicit OOD samples. Recently, Hsu et al. (2020) proposed to select ε ← argmaxε ∑ yκ(x + ε · sgn(δx)), where yκ is the max-softmax and the summation is taken over the ID samples in the validation set. As for the temperature, they set T = 1000. The ID score is given by yκ(x + ε · sgn(δx)). To distinguish it from the original ODIN, we refer to this variant as ODIN∗.
Mahalanobis Detector The above three methods are based on the confidence. Another approach is to formulate the problem as unsupervised anomaly detection. Lee et al. (2018) proposed to model the distribution of an intermediate layer's activations by a Gaussian distribution for each class but with a shared covariance matrix among the classes. Given an input, the Mahalanobis distance concerning the predicted class is calculated at each layer. A score for OOD is given by the weighted sum of those calculated at different layers. The weights are predicted by logistic regression, which is determined by assuming the availability of OOD samples. To be free from this assumption, another method is suggested that generates adversarial examples from ID samples and regards them as OOD samples. It is also reported in (Hsu et al., 2020) that setting all the weights to one works reasonably well.
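As an illustration of this OOD-sample-free variant with unit layer weights, the per-layer Mahalanobis score can be sketched as follows; we assume features from one intermediate layer have already been extracted and spatially pooled, and all names are ours:

```python
import numpy as np

def fit_gaussians(feats, labels, num_classes):
    """Fit per-class means and one covariance shared by all classes to
    ID features (N, D) of a single layer, as in Lee et al. (2018)."""
    means = np.stack([feats[labels == c].mean(0) for c in range(num_classes)])
    centered = feats - means[labels]        # subtract each sample's class mean
    precision = np.linalg.pinv(centered.T @ centered / len(feats))
    return means, precision

def mahalanobis_ood_score(feat, means, precision):
    """OOD score at one layer: squared Mahalanobis distance to the
    closest class mean. With unit weights, the final score is simply
    the sum of these per-layer scores."""
    diffs = means - feat                    # (num_classes, D)
    d2 = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return d2.min()
```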
We evaluate the last two methods that do not need OOD samples. Although the original method optionally uses input perturbation similar to ODIN, we do not use it because our experiments show that its improvement is very small despite its high computational cost.
Effects of Fine-tuning a Pre-trained Network It is well known that fine-tuning a pre-trained network on a downstream task improves its prediction accuracy, especially when only a small amount of training data is available. It was pointed out in (He et al., 2019) that the improvement is small when there is sufficient training data. Hendrycks et al. (2019) then show that even in that case, using a pre-trained network helps increase the overall robustness of the inference. This includes improved OOD detection performance, in addition to robustness to adversarial attacks, better calibration of confidence, and robustness to covariate shift. However, their experimental validation is performed only on a single configuration with a few datasets. It remains unclear if the improvement generalizes to a broader range of purposes and settings that may differ in image size, the number of training samples, and ID/OOD combinations." }, { "heading": "3 EXPERIMENTAL RESULTS", "text": "We use Resnet-50 (He et al., 2016) as the base network. We use it as is for Baseline, ODIN∗, and Mahalanobis, which share the same network with the same weights; this network will be referred to as Standard. We apply dropout to the last fully-connected layer with p = 0.5 and draw ten samples for MC dropout. We modify the last layer and the loss function for Cosine, following Techapanurak et al. (2019). We use the ImageNet pre-trained model provided by the Torchvision library. We employ AUROC to evaluate OOD detection performance in the first two scenarios, following previous studies.
3.1 DETECTION OF IRRELEVANT INPUTS
We employ the following five tasks and datasets: dog breed recognition (120 classes and 10,222 images; Khosla et al. (2011)), plant seedling classification (12 classes and 5,544 images; Giselsson et al. (2017)), Food-101 (101 classes and 101,000 images; Bossard et al. (2014)), CUB-200 (200 classes and 11,788 images; Welinder et al. (2010)), and Stanford Cars (196 classes and 16,185 images; Krause et al. (2013)). These datasets will be referred to as Dog, Plant, Food, Bird, and Cars. They are diverse in terms of image contents, the number of classes, difficulty of tasks (e.g., fine-grained/coarse-grained), etc. Choosing one of the five as ID and training a network on it, we regard each of the other four as OOD, measuring the OOD detection performance of each method on the 5 × 4 ID-OOD combinations. We train each network three times to measure the average and standard deviation for each configuration. Table 1 shows the accuracy on the five datasets/tasks for the three networks (i.e., Standard, MC dropout, and Cosine), trained from scratch and fine-tuned from a pre-trained model, respectively. It is seen that there is a large gap between training from scratch and fine-tuning a pre-trained model for the datasets with fewer training samples.
Figure 2 shows the average AUROC of the compared OOD detection methods for each ID dataset over the four OOD datasets and three trials for each. The error bars indicate the minimum and maximum of AUROC. The full results for each of the twenty ID-OOD pairs are reported in Tables 5 and 6 in Appendix A.
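For reference, the AUROC values reported in these tables and figures can be computed by pooling the ID and OOD test scores of a method, e.g., with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores, ood_scores):
    """AUROC of separating ID from OOD inputs given per-sample ID scores
    (higher = more in-distribution) on the ID and OOD test sets."""
    id_scores, ood_scores = np.asarray(id_scores), np.asarray(ood_scores)
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```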
The upper row of Fig. 2 shows the results with the networks trained from scratch. It is seen that the ranking of the compared methods is mostly similar across different ID datasets. For the five datasets, Cosine is consistently among the top group; Mahalanobis would be ranked next, since its performance is mediocre for Dog and Food. For the tasks with low classification accuracy, Dog, Bird, and Cars, as shown in Table 1, the OOD detection accuracy also tends to be low; however, there is no tendency in the ranking of the OOD detection methods depending on the ID classification accuracy.
The lower row of Fig. 2 shows the results with the fine-tuned networks. It is first observed for any dataset and method that the OOD detection accuracy is significantly higher than with the networks trained from scratch. This reinforces the argument made by Hendrycks et al. (2019) that the use of pre-trained networks improves OOD detection performance. Furthermore, the performance increase is much larger in several cases than reported in their experiments, which use CIFAR-10/100 and Tiny ImageNet (Deng et al., 2009). The detection accuracy is pushed to a near-maximum in each case. Thus, there is only a little difference among the methods; Cosine and Mahalanobis(sum) show slightly better performance for some datasets." }, { "heading": "3.2 DETECTION OF NOVEL CLASSES", "text": "We conducted two experiments with different datasets. The first experiment uses the Oxford-IIIT Pet dataset (Parkhi et al., 2012), consisting of 25 dog breeds and 12 cat breeds. We use only the dog breeds and split them into 20 and 5 breeds. We then train each network on the first 20 dog breeds using the standard train/test splits per class. The remaining five breeds (i.e., Scottish Terrier, Shiba Inu, Staffordshire Bull Terrier, Wheaten Terrier, Yorkshire Terrier) are treated as OOD. It should be noted that the ImageNet dataset contains 118 dog breeds, some of which overlap with them. We intentionally leave this overlap to simulate a similar situation that could occur in practice. In the second experiment, we use the Food-101 dataset. We remove eight classes contained in the ImageNet dataset (Apple Pie, Breakfast Burrito, Chocolate Mousse, Guacamole, Hamburger, Hot Dog, Ice Cream, Pizza). We split the remaining 93 classes into 46 and 47 classes, called Food-A and -B, respectively. Each network is trained on Food-A. We split Food-A into 800/100/100 samples per class to form train/val/test sets. Treating Food-B as OOD, we evaluate the methods' performance.
Table 2 shows the methods' performance in detecting OOD samples (i.e., novel samples). In the table we separate the Mahalanobis detector from the others; the latter are all based on the confidence or its variants, whereas Mahalanobis is not. The ranking of the methods is similar between the two experiments. Cosine attains the top performance for both training methods. While this is similar to the results of irrelevant sample detection (Fig. 2), the gap to the second-best group (Baseline, Calib., and MC dropout) is much larger here; this is especially significant for training from scratch. Another difference is that neither variant of Mahalanobis performs well; they are even worse than Baseline. This is likely attributable to the similarity between ID and OOD samples here. The classification accuracy of the original tasks, Dog and Food-A, is given in Table 7 in Appendix B."
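Since Cosine is the strongest method in both scenarios so far, we also illustrate how the scaled-cosine final layer of Sec. 2.2 could be realized; the exact form of the scale predictor below is our assumption for the sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineLastLayer(nn.Module):
    """Drop-in replacement for the final fully-connected layer: logits are
    scaled cosine similarities between the feature and per-class weight
    vectors; the scale (inverse temperature) is predicted from the same
    feature. The maximum cosine (without the scale) is the ID score."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, in_dim))
        self.scale = nn.Sequential(nn.Linear(in_dim, 1), nn.Softplus())

    def forward(self, feat):
        cos = F.normalize(feat, dim=1) @ F.normalize(self.weight, dim=1).t()
        logits = self.scale(feat) * cos     # trained with cross-entropy
        return logits, cos.max(dim=1).values
```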
}, { "heading": "3.3 DETECTION OF DOMAIN SHIFT", "text": "" }, { "heading": "3.3.1 PROBLEM FORMULATION", "text": "Given a network trained on a dataset Ds, we wish to estimate its classification error on a different dataset Dt. In practice, a meta-system monitoring the network estimates the classification error on each of the incoming datasets D(1)t ,D (2) t , · · · , which are chosen from the incoming data stream. It issues an alert if the predicted error for the latest D(T )t is higher than the pre-fixed target. We use an OOD score S for this purpose. To be specific, given Dt = {xi}i=1,...,n, we calculate an average of the score S = ∑n i Si/n, where Si is the OOD score for xi; note that an OOD score is simply given by a negative ID score. We want to use S to predict the classification error err = ∑n i=1 1(yi = ti)/n, where y and t are a prediction and the true label, respectively. Following Elsahar & Gallé (2019), we train a regressor f to do this, as err ∼ f(S). We assume multiple labeled datasets Do’s are available, each of which do not share inputs with Ds or Dt. Choosing a two-layer MLP for f , we train it on Do’s plus Ds. As they have labels, we can get the pair of err and S for each of them. Note that Dt does not have labels. It is reported in Elsahar & Gallé (2019) that Proxy-A Distance (PAD) (Ben-David et al., 2007) performs well on several NLP tasks. Thus, we also test this method (rigorously, the one called PAD∗ in their paper) for comparisons. It first trains a binary classifier using portions of Ds and Dt to distinguish the two. Then, the classifier’s accuracy is evaluated on the held-out samples of Ds and Dt, which is used as a metric of the distance between their underlying distributions. Intuitively, the classification is easy when their distance is large, and vice versa. We train f using 1− (mean absolute error) for S as in the previous work." }, { "heading": "3.3.2 DOMAIN SHIFT BY IMAGE CORRUPTION", "text": "We first consider the case when the shift is caused by the deterioration of image quality. An example is a surveillance camera deployed in an outdoor environment. Its images are initially of high quality, but later their quality deteriorates gradually or suddenly due to some reason, e.g., dirt on the lens, failure of focus adjustment, seasonal/climate changes, etc. We want to detect it if it affects classifi-\ncation accuracy. To simulate multiple types of image deterioration, we employ the method and code for generating image corruption developed by Hendrycks & Dietterich (2019). It can generate 19 types of image corruptions, each of which has five levels of severity.\nWe consider two classification datasets/tasks, Food-A (i.e., 46 selected classes from Food-101 as explained in Sec. 3.2) and ImageNet (the original 1,000 object classification). For Food-A, we first train each network on the training split, consisting only of the original images. We divide the test split into three sets, 1,533, 1,533, and 1,534 images, respectively. The first one is used for Ds as is (i.e., without corruption). We apply the image corruption method to the second and third sets. To be specific, splitting the 19 corruption types into 6 and 13, we apply the 6 corruptions to the second set to makeDo’s, and the 13 corruptions to the last to makeDt’s. As each corruption has five severity levels, there are 30(= 6× 5) Do’s and 65(= 13× 5) Dt’s. 
The 30 Do's are used for training f (precisely, 20 are used for training and 10 for validation), and the 65 Dt's are used for evaluating f.
For ImageNet, we choose 5,000, 2,000, and 5,000 images from the validation split without overlap. We use them to make Ds, Do's, and Dt's, respectively. As with Food-A, we apply the 6 and 13 types of corruption to the second and third sets, making 30 Do's and 65 Dt's, respectively. For the evaluation of f, we calculate the mean absolute error (MAE) and root mean squared error (RMSE) of the predicted err over the 65 Dt's. We repeat this 20 times with different splits of the image corruptions (19 → 6 + 13), reporting their mean and standard deviation. Table 3 shows the results for Food-A and ImageNet. (The accuracies of the original classification tasks of Food-A and ImageNet are reported in Table 7 and Table 10 in Appendices B and C.) It is seen for both datasets that Cosine achieves top-level accuracy irrespective of the training methods. For Food-A, using a pre-trained network boosts the performance of the confidence-based methods (i.e., from Baseline to ODIN∗), with the result that MC dropout performs best; Cosine attains almost the same accuracy. On the other hand, Mahalanobis and PAD do not perform well regardless of the datasets and training methods. This clearly demonstrates the difference between detecting the distributional shift p(x) → p′(x) and detecting the deterioration of classification accuracy. We show scatter plots of S vs. err in Figs. 4 and 5 in Appendix C, which provide a similar, or even clearer, observation." }, { "heading": "3.3.3 OFFICE-31", "text": "To study another type of domain shift, we employ the Office-31 dataset (Saenko et al., 2010), which is popular in the study of domain adaptation. The dataset consists of three subsets, Amazon, DSLR, and Webcam, which share the same 31 classes and are collected from different domains. We train our CNNs on Amazon and evaluate the compared methods in terms of the accuracy of predicting classification errors for samples in DSLR and Webcam. The classification accuracy of the CNNs on Amazon is provided in Table 11 in Appendix D.
To obtain Do's for training f, we employ the same image corruption methods as in Sec. 3.3.2; we apply them to Amazon samples to create virtual domain-shifted samples. The effectiveness of modeling the true shifted data, i.e., DSLR and Webcam, with these samples is unknown and needs to be experimentally validated. If this works, it will be practically useful. Specifically, we split the test split of Amazon containing 754 images evenly into two sets. We use one for Ds and the other for creating Do's. We apply all the types of corruption, yielding 95 (= 19 × 5) Do's. We then split them into those generated by four corruptions and those generated by the rest; the latter is used for training f, and the former is used for validation. We iterate this 20 times with different random splits of the corruption types, reporting the average over 20 × 3 trials, as there are three CNN models trained from different initial weights.
To evaluate each method (i.e., f based on an OOD score), we split DSLR and Webcam into subsets containing 50 samples, yielding 18 Dt's in total. We apply f to each of them, reporting the average error of predicting the classification errors.
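For comparison, the PAD∗ baseline of Sec. 3.3.1 admits a similarly short sketch; the choice of a logistic-regression domain classifier over pre-extracted features is our assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pad_star_score(feats_s, feats_t, seed=0):
    """Train a binary classifier to separate Ds from Dt features and
    evaluate it on held-out halves; 1 - (mean absolute error) is the
    score S fed to the regressor f."""
    X = np.concatenate([feats_s, feats_t])
    y = np.concatenate([np.zeros(len(feats_s)), np.ones(len(feats_t))])
    idx = np.random.default_rng(seed).permutation(len(X))
    half = len(X) // 2
    clf = LogisticRegression(max_iter=1000).fit(X[idx[:half]], y[idx[:half]])
    mae = np.abs(clf.predict_proba(X[idx[half:]])[:, 1] - y[idx[half:]]).mean()
    return 1.0 - mae
```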
Table 4 shows the results. It is observed that Cosine works well with both training methods. The two variants of Mahalanobis show good performance when using a pre-trained model, but this may be better considered a coincidence, as explained below. Figure 3 shows the scatter plots of OOD score vs. classification error for each method. The green dots indicate Do's, corrupted Amazon images used for training f, and the blue ones indicate Dt's, subsets from DSLR and Webcam containing 50 samples each. For a method whose green dots distribute with a narrower spread, the regressor f will yield more accurate results. Thus, it is seen from Fig. 3 that both Mahalanobis variants tend to have a large spread, meaning that they could perform poorly depending on the incoming domain-shifted data. Cosine and MC dropout have a narrower spread, confirming their performance in Table 4. Other results for DSLR and Webcam subsets with different numbers of samples are provided in Appendix D." }, { "heading": "3.4 ANALYSES OF THE RESULTS", "text": "We can summarize our findings as follows. i) Using a pre-trained network has shown improvements in all the scenarios, confirming the report of Hendrycks et al. (2019). ii) The detector using cosine similarity consistently works well throughout the three scenarios. The method will be the first choice if it is acceptable to modify the network's final layer. iii) The Mahalanobis detector, a SOTA method, works well only for irrelevant input detection. This does not contradict previous reports, since they employ only this very scenario. The method fits a Gaussian distribution to the ID samples belonging to each class and uses the same covariance matrix for all the classes. This strategy might work well in easy cases when incoming OOD samples are mapped distantly from the Gaussian distributions. However, such a simple modeling method will not work in more challenging cases. For instance, incoming OOD samples could be mapped near the ID distributions, as in novel class detection. In such cases, the ID sample distribution needs to be modeled very precisely, for which the assumption of Gaussian distributions with a single covariance matrix is inadequate. iv) Domain shift detection requires detecting classification accuracy deterioration, not detecting a distributional shift of the inputs, contrary to its name. This theoretically favors the confidence-based methods; they (particularly MC dropout) indeed work well when used with a pre-trained network. However, the Mahalanobis detector is more like an anomaly detection method, although its similarity with a softmax classifier is suggested in (Lee et al., 2018). An input sample for which the network can make a correct classification can still be detected as an 'anomaly' by the Mahalanobis detector." }, { "heading": "4 RELATED WORK", "text": "Many studies of OOD detection have been conducted so far, most of which are proposals of new methods; those not mentioned above include (Vyas et al., 2018; Yu & Aizawa, 2019; Sastry & Oore, 2020; Zisselman & Tamar, 2020; Yu et al., 2020). An experimental evaluation similar to our study, but on the estimation of the uncertainty of prediction, is provided in (Ovadia et al., 2019).
In (Hsu et al., 2020), the authors present a scheme for conceptually classifying domain shifts along two axes, semantic shift and non-semantic shift. Semantic shift (S) represents OOD samples coming from the distribution of an unseen class, and non-semantic shift (NS) represents OOD samples coming from an unseen domain.
Through experiments using the DomainNet dataset (Peng et al., 2019), they conclude that OOD detection is more difficult in the order of S > NS > S+NS.
In this study, we classify the problems into three types from an application perspective. One might view this as somewhat arbitrary and vague. Unfortunately, Hsu et al.'s scheme does not provide help here. For instance, according to their scheme, novel class detection is S, and domain shift is NS. However, it is unclear whether irrelevant input detection should be classified as S or S+NS. Moreover, their conclusion (i.e., S > NS > S+NS) does not hold for our results; the difficulty depends on the closeness between classes and between domains. After all, we think that only applications can determine what constitutes domains and what constitutes classes. Further discussion will be left for a future study.
As mentioned earlier, the detection of domain shift in the context of deep learning has not been well studied in the community. The authors are not aware of a study for image classification and find only a few (e.g., Elsahar & Gallé (2019)) even when looking at other fields. On the other hand, there are a large number of studies of domain adaptation (DA); (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Zhang et al., 2017; Toldo et al., 2020; Zou et al., 2019) to name a few. DA aims to make a model that has learned a task on the dataset of a particular domain adapt to work on data from a different domain. Researchers have studied several problem settings, e.g., closed-set, partial, open-set, and boundless DA (Toldo et al., 2020). However, these studies all assume that the source and target domains are already known; no study considers the case where the domain of incoming inputs is unidentified. Thus, they do not provide a hint of how to detect domain shift." }, { "heading": "5 SUMMARY AND CONCLUSION", "text": "In this paper, we first classified OOD detection into three scenarios from an application perspective, i.e., irrelevant input detection, novel class detection, and domain shift detection. We have presented a meta-approach that applies any OOD detection method to domain shift detection, which has been poorly studied in the community. We have experimentally evaluated various OOD detection methods on these scenarios. The results show the effectiveness of the above approach to domain shift detection, as well as several findings such as which method works in which scenario." }, { "heading": "A DETECTION OF IRRELEVANT INPUTS", "text": "" }, { "heading": "A.1 ADDITIONAL RESULTS", "text": "In our experiment for irrelevant input detection, using five datasets, we consider every pair of them, one for ID and the other for OOD. In the main paper, we reported only the average detection accuracy over the four such pairs for each ID dataset. We report here the results for all the ID-OOD pairs. Tables 5 and 6 show the performance of the compared methods for training from scratch and for fine-tuning of a pre-trained network." }, { "heading": "B DETECTION OF NOVEL CLASSES", "text": "" }, { "heading": "B.1 CLASSIFICATION ACCURACY OF THE BASE TASKS", "text": "In our experiments for novel class detection, we employ two datasets, Dog and Food-A. Table 7 shows the classification accuracy for each of them. It is seen that for Dog, using a pre-trained model boosts the accuracy. There is a tendency similar to that seen in Table 1: Cosine outperforms the others in training from scratch.
For Food-A, using a pre-trained model shows only a modest improvement due to the availability of a sufficient number of samples." }, { "heading": "B.2 ADDITIONAL RESULTS", "text": "In one of the experiments explained in Sec. 3.2, we use only the dog classes from the Oxford-IIIT Pet dataset. We show here additional results obtained when using the cat classes. Choosing nine of the 12 cat breeds contained in the dataset, we train the networks on the classification of these nine breeds and test novel class detection using the remaining three breed classes. In another experiment, we use Food-A for ID and Food-B for OOD. We report here the results for the reverse configuration. Table 8 shows the classification accuracy of the new tasks. Table 9 shows the performance of the compared methods on novel class detection. A similar observation to the experiments of Sec. 3.2 can be made." }, { "heading": "C DETECTION OF DOMAIN SHIFT (IMAGE CORRUPTION)", "text": "" }, { "heading": "C.1 CLASSIFICATION ACCURACY ON IMAGENET", "text": "Table 10 shows the accuracy of the three networks used by the compared OOD detection methods for the 1,000-class classification of the ImageNet dataset. We use center-cropping at test time. The Cosine network shows lower classification accuracy here." }, { "heading": "C.2 SCATTER PLOTS OF OOD SCORE VS. CLASSIFICATION ERROR", "text": "In Sec. 3.3.2, we showed experimental results of domain shift detection using Food-A. Given a set Dt of samples, each of the compared methods calculates an OOD score S for it, from which the average classification error err over the samples from Dt is predicted. Figure 4 shows scatter plots of the relation between the OOD score S and the true classification error for a number of datasets (i.e., Dt's). We have 95 (= 19 × 5) such datasets, each containing images undergoing one of the combinations of 19 image corruptions and 5 severity levels. A method with a narrower spread of dots should provide a more accurate estimation. These scatter plots clearly depict which method works well and which does not, which agrees well with Table 3. The same holds true for the plots for ImageNet shown in Fig. 5." }, { "heading": "D DETECTION OF DOMAIN SHIFT (OFFICE-31)", "text": "" }, { "heading": "D.1 CLASSIFICATION ACCURACY OF THE BASE TASKS", "text": "Table 11 shows the classification accuracy of the three networks used by the compared methods for the different domain datasets of Office-31. These networks are trained only on Amazon." }, { "heading": "D.2 ADDITIONAL RESULTS", "text": "As with the experiments on image corruption, we evaluate how accurately the compared methods can predict the classification error on incoming datasets, Dt's. Table 4 and Fig. 3 show the error of the predicted classification accuracy and the scatter plots of the OOD score and the true classification accuracy, where Dt's are created by splitting DSLR and Webcam into sets containing 50 samples. We show here additional results obtained for Dt's created differently. Table 12 and Fig. 6 show the prediction errors and the scatter plots for Dt's containing 30 samples. Table 13 and Fig. 7 show those for Dt's of 100 samples. Table 14 and Fig. 8 show those for using the entire DSLR and Webcam for Dt's; thus there are only two Dt's. The standard deviations are computed over 20 × 3 trials (20 for random splitting of the corruption types for train/val and 3 for network models trained from random initial weights), as explained in Sec. 3.3.3."
}, { "heading": "E EFFECTIVENESS OF ENSEMBLES", "text": "An ensemble of multiple models is known to performs better than MC-dropout we considered in the main experiments for estimation of uncertainty etc. It is also known to be better approximation to Bayesian networks. Thus, we experimentally evaluate ensembles. We consider an ensemble of five models and train each model in two ways, i.e., “from-scratch” and “fine-tuning.” We randomly initialize all the weights of each model for the former. We initialize the last layer randomly and other layers with the pre-trained model’s weights for the latter. We evaluate ensembles for Baseline and Cosine. Tables 15, 16, 17, and 18 show the results for the three scenarios. In the tables, “(con.)” means confidence is used as an ID score, or equivalently, negative confidence is used as an OOD score. “(en.)” means the entropy is used as an OOD score.\nWe can observe the following from the tables:\n• An ensemble of models performs better than a single model. This is always true for Baseline. The same is true for Cosine except for domain shift detection. (The reason is not clear.)\n• An ensemble of Baseline models still performs lower than a single Cosine model for most cases. It sometimes shows better performance for fine-tuned models, but the margin is small.\n• Using entropy as OOD score tends to show slightly better performance than using confidence.\nWe conclude that Cosine’s superiority remains true even when we take ensembles into consideration." }, { "heading": "F ADDITIONAL DETAILS OF EXPERIMENTAL SETTINGS", "text": "" }, { "heading": "F.1 TRAINING OF THE NETWORKS", "text": "As is mentioned in the main paper, we employ Resnet-50 in all the experiments. For the optimization, we use SGD with the momentum set to 0.9 and the weight decay set to 10−4. The learning rate starts at 0.1, and then is divided by 10 depending on the performance of the validation dataset.\nTo fine-tune a pre-trained network, we use the learning rate of 0.001 for the standard network and that with MC dropout. For the network used with Cosine, we use the learning rate of 0.001 to the backbone part and a higher learning rate of 0.1 to the fully-connected layer; the weight decay for the fully-connected layer is set to 0, following Techapanurak et al. (2019) and Hsu et al. (2020)." }, { "heading": "F.2 DATASETS", "text": "Table 19 shows the specification of the datasets used in our experiments. Note that we modify some of the dataset and use them in several experiments. In the experiments of domain shift detection, we employed image corruption to simulate/model domain shift. The example of the corrupted images are shown in Fig. 9." } ]
2020
null
SP:74ef7a70748db738244d9e402bbc4a9b43002896
[ "The submission concerns an application of group convolutions (Cohen & Welling, 2016) to the image synthesis setting, where images are produced by the generator of a GAN. The two GAN components are augmented mainly by a straightforward replacement of \"regular\" convolutions by group convolutions, in addition to some other training tricks of the trade (gradient penalty, spectral normalization). Experiments indicate somewhat lower FID scores on both synthetic and real settings. The method is seen as useful especially for the low data regime case." ]
Recent improvements in generative adversarial visual synthesis incorporate real and fake image transformation in a self-supervised setting, leading to increased stability and perceptual fidelity. However, these approaches typically involve image augmentations via additional regularizers in the GAN objective and thus spend valuable network capacity towards approximating transformation equivariance instead of their desired task. In this work, we explicitly incorporate inductive symmetry priors into the network architectures via group-equivariant convolutional networks. Group-convolutions have higher expressive power with fewer samples and lead to better gradient feedback between generator and discriminator. We show that group-equivariance integrates seamlessly with recent techniques for GAN training across regularizers, architectures, and loss functions. We demonstrate the utility of our methods for conditional synthesis by improving generation in the limited data regime across symmetric imaging datasets and even find benefits for natural images with preferred orientation.
[ { "affiliations": [], "name": "Neel Dey" }, { "affiliations": [], "name": "Antong Chen" } ]
[ { "authors": [ "Tomás Angles", "Stéphane Mallat" ], "title": "Generative networks as inverse problems with scattering transforms", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alberto Bietti", "Julien Mairal" ], "title": "Group invariance, stability to deformations, and complexity of deep convolutional representations", "venue": "The Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Mikolaj Binkowski", "Dougal J. Sutherland", "Michael Arbel", "Arthur Gretton" ], "title": "Demystifying MMD GANs", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jiřı́ Borovec", "Arrate Munoz-Barrutia", "Jan Kybic" ], "title": "Benchmarking of image registration methods for differently stained histological slides", "venue": "In 2018 25th IEEE International Conference on Image Processing (ICIP),", "year": 2018 }, { "authors": [ "Jiřı́ Borovec", "Jan Kybic", "Ignacio Arganda-Carreras", "Dmitry V Sorokin", "Gloria Bueno", "Alexander V Khvostikov", "Spyridon Bakas", "I Eric", "Chao Chang", "Stefan Heldmann" ], "title": "Anhir: automatic nonrigid histological image registration challenge", "venue": "IEEE Transactions on Medical Imaging,", "year": 2020 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101 – mining discriminative components with random forests", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "J. Bruna", "S. Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Ting Chen", "Xiaohua Zhai", "Marvin Ritter", "Mario Lucic", "Neil Houlsby" ], "title": "Self-supervised gans via auxiliary rotation loss", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Benjamin Chidester", "That-Vinh Ton", "Minh-Triet Tran", "Jian Ma", "Minh N Do" ], "title": "Enhanced rotationequivariant u-net for nuclear segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Casey Chu", "Kentaro Minami", "Kenji Fukumizu" ], "title": "Smoothness and stability in gans", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Francesco Ciompi", "Yiping Jiao", "Jeroen van der Laak" ], "title": "Lymphocyte assessment hackathon (lysto), October 2019", "venue": "URL https://doi.org/10.5281/zenodo.3513571", "year": 2019 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Taco S Cohen", "Mario Geiger", "Maurice Weiler" ], "title": "A general theory of equivariant cnns on homogeneous spaces", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sander Dieleman", "Jeffrey De Fauw", "Koray Kavukcuoglu" ], "title": "Exploiting cyclic symmetry in convolutional neural networks", "venue": "arXiv preprint arXiv:1602.02660,", "year": 2016 }, { "authors": [ "Adji B Dieng", "Francisco JR Ruiz", "David M Blei", "Michalis K 
Titsias" ], "title": "Prescribed generative adversarial networks", "venue": "arXiv preprint arXiv:1910.04302,", "year": 2019 }, { "authors": [ "Carlos Esteves" ], "title": "Theoretical aspects of group equivariant neural networks", "venue": "arXiv preprint arXiv:2004.05154,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International Conference on Artificial Neural Networks,", "year": 2011 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Ayush Jaiswal", "Wael AbdAlmageed", "Yue Wu", "Premkumar Natarajan" ], "title": "Capsulegan: Generative adversarial capsule network", "venue": "Computer Vision – ECCV 2018 Workshops,", "year": 2018 }, { "authors": [ "Alexia Jolicoeur-Martineau" ], "title": "The relativistic discriminator: a key element missing from standard gan", "venue": "arXiv preprint arXiv:1807.00734,", "year": 2018 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Miika Aittala", "Janne Hellsten", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Training generative adversarial networks with limited data, 2020a", "venue": null, "year": 2020 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020b", "year": 2020 }, { "authors": [ "Junho Kim", "Minjae Kim", "Hyeonwoo Kang", "Kwang Hee Lee" ], "title": "U-gat-it: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A 
method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Tuomas Kynkäänniemi", "Tero Karras", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Improved precision and recall metric for assessing generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Maxime W. Lafarge", "Erik J. Bekkers", "Josien P.W. Pluim", "Remco Duits", "Mitko Veta" ], "title": "Rototranslation equivariant convolutional networks: Application to histopathology image analysis, 2020a", "venue": null, "year": 2020 }, { "authors": [ "Maxime W. Lafarge", "Josien P.W. Pluim", "Mitko Veta" ], "title": "Orientation-disentangled unsupervised representation learning for computational pathology, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Aaron Courville", "James Bergstra", "Yoshua Bengio" ], "title": "An empirical evaluation of deep architectures on problems with many factors of variation", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Stéphane Mallat" ], "title": "Group invariant scattering", "venue": "Communications on Pure and Applied Mathematics,", "year": 2012 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "arXiv preprint arXiv:1801.04406,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cgans with projection discriminator", "venue": "arXiv preprint arXiv:1802.05637,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Sangwoo Mo", "Minsu Cho", "Jinwoo Shin" ], "title": "Freeze the discriminator: a simple baseline for finetuning gans, 2020", "venue": null, "year": 2020 }, { "authors": [ "Atsuhiro Noguchi", "Tatsuya Harada" ], "title": "Image generation from small datasets via batch statistics adaptation", "venue": "arXiv preprint arXiv:1904.01774,", "year": 2019 }, { "authors": [ "Augustus Odena" ], "title": "Open questions about generative adversarial networks. Distill, 2019", "venue": "doi: 10. 23915/distill.00018", "year": 2019 }, { "authors": [ "E. Oyallon", "S. Zagoruyko", "G. Huang", "N. Komodakis", "S. Lacoste-Julien", "M. Blaschko", "E. 
Belilovsky" ], "title": "Scattering networks for hybrid representation learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "David W Romero", "Mark Hoogendoorn" ], "title": "Co-attentive equivariant neural networks: Focusing equivariance on transformations co-occurring in data", "venue": "arXiv preprint arXiv:1911.07849,", "year": 2019 }, { "authors": [ "David W. Romero", "Erik J. Bekkers", "Jakub M. Tomczak", "Mark Hoogendoorn" ], "title": "Attentive group equivariant convolutional networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Doris Schattschneider" ], "title": "The plane symmetry groups: their recognition and notation", "venue": "The American Mathematical Monthly,", "year": 1978 }, { "authors": [ "Laurent Sifre", "Stephane Mallat" ], "title": "Rotation, scaling and deformation invariant scattering for texture discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2013 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Samarth Sinha", "Han Zhang", "Anirudh Goyal", "Yoshua Bengio", "Hugo Larochelle", "Augustus Odena" ], "title": "Small-gan: Speeding up gan training using core-sets", "venue": null, "year": 1910 }, { "authors": [ "Samarth Sinha", "Anirudh Goyal", "Colin Raffel", "Augustus Odena" ], "title": "Top-k training of gans: Improving generators by making critics less critical", "venue": "arXiv preprint arXiv:2002.06224,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Hoang Thanh-Tung", "Truyen Tran" ], "title": "On catastrophic forgetting in generative adversarial networks", "venue": "arXiv preprint arXiv:1807.04015,", "year": 2018 }, { "authors": [ "Dustin Tran", "Rajesh Ranganath", "David Blei" ], "title": "Hierarchical implicit models and likelihood-free variational inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yash Upadhyay", "Paul Schrater" ], "title": "Generative adversarial network architectures for image synthesis using capsule networks", "venue": "arXiv preprint arXiv:1806.03796,", "year": 2018 }, { "authors": [ "Bastiaan S Veeling", "Jasper Linmans", "Jim Winkens", "Taco Cohen", "Max Welling" ], "title": "Rotation equivariant cnns for digital pathology", "venue": "In International Conference on Medical image computing and computer-assisted 
intervention,", "year": 2018 }, { "authors": [ "Yaxing Wang", "Chenshen Wu", "Luis Herranz", "Joost van de Weijer", "Abel Gonzalez-Garcia", "Bogdan Raducanu" ], "title": "Transferring gans: generating images from limited data", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Maurice Weiler", "Gabriele Cesa" ], "title": "General e (2)-equivariant steerable cnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tom White" ], "title": "Sampling generative networks", "venue": "arXiv preprint arXiv:1609.04468,", "year": 2016 }, { "authors": [ "Marysia Winkels", "Taco S Cohen" ], "title": "Pulmonary nodule detection in ct scans with equivariant cnns", "venue": "Medical image analysis,", "year": 2019 }, { "authors": [ "Yan Wu", "Jeff Donahue", "David Balduzzi", "Karen Simonyan", "Timothy Lillicrap" ], "title": "Logan: Latent optimisation for generative adversarial networks", "venue": null, "year": 1912 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08318,", "year": 2018 }, { "authors": [ "Han Zhang", "Zizhao Zhang", "Augustus Odena", "Honglak Lee" ], "title": "Consistency regularization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Miaoyun Zhao", "Yulai Cong", "Lawrence Carin" ], "title": "On leveraging pretrained gans for limited-data generation", "venue": "arXiv preprint arXiv:2002.11810,", "year": 2020 }, { "authors": [ "Shengyu Zhao", "Zhijian Liu", "Ji Lin", "Jun-Yan Zhu", "Song Han" ], "title": "Differentiable augmentation for data-efficient gan training, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Zhengli Zhao", "Sameer Singh", "Honglak Lee", "Zizhao Zhang", "Augustus Odena", "Han Zhang" ], "title": "Improved consistency regularization for gans, 2020c", "venue": null, "year": 2020 }, { "authors": [ "Zhiming Zhou", "Jiadong Liang", "Yuxuan Song", "Lantao Yu", "Hongwei Wang", "Weinan Zhang", "Yong Yu", "Zhihua Zhang" ], "title": "Lipschitz generative adversarial nets", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhao" ], "title": "Architectures are presented in Tables 9 and 10", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative visual modeling is an area of active research, time and again finding diverse and creative applications. A prevailing approach is the generative adversarial network (GAN), wherein density estimation is implicitly approximated by a min-max game between two neural networks (Goodfellow et al., 2014). Recent GANs are capable of high-quality natural image synthesis and scale dramatically with increases in data and compute (Brock et al., 2018). However, GANs are prone to instability due to the difficulty of achieving a local equilibrium between the two networks. Frequent failures include one or both networks diverging, or the generator capturing only a few modes of the empirical distribution. Several proposed remedies include modifying training objectives (Arjovsky et al., 2017; Jolicoeur-Martineau, 2018), hierarchical methods (Karras et al., 2017), instance selection (Sinha et al., 2019; 2020), latent optimization (Wu et al., 2019), and strongly regularizing one or both networks (Gulrajani et al., 2017; Miyato et al., 2018; Dieng et al., 2019), among others. In practice, one or all of the above techniques are ultimately adapted to specific use cases.

Further, limits on data quantity empirically exacerbate training instability, most often due to discriminator overfitting. Recent work on GANs for small sample sizes can be roughly divided into transfer learning approaches (Wang et al., 2018; Noguchi & Harada, 2019; Mo et al., 2020; Zhao et al., 2020a) and methods which transform/augment the available training data and provide the discriminator with auxiliary tasks. For example, Chen et al. (2019) propose a multi-task discriminator which additionally predicts the degree by which an input image has been rotated, whereas Zhang et al. (2020); Zhao et al. (2020c) incorporate consistency regularization, where the discriminator is penalized towards similar activations for transformed/augmented real and fake images. However, with consistency regularization and augmentation, network capacity is spent learning equivariance to transformation as opposed to the desired task, and equivariance is not guaranteed.

In this work, we consider the problem of training tabula rasa on limited data which possess global and even local symmetries. We begin by noting that GANs ubiquitously use convolutional layers which exploit the approximate translation invariance and equivariance of image labels and distributions, respectively. Equivariance to geometric transformations is key to understanding image representations (Bietti & Mairal, 2019). Unfortunately, other symmetries (e.g., rotations and reflections) inherent to modalities such as astronomy and medical imaging, where galaxies and cells can be in arbitrary orientations, are not accounted for by standard convolutional layers. To this end, Cohen & Welling (2016) proposed a group-theoretic generalization of convolutional layers (group-convolutions) which, in addition to translation, exploit other inherent symmetries and increase the expressive capacity of a network, thereby increasing its sample efficiency significantly in detection (Winkels & Cohen, 2019), classification (Veeling et al., 2018), and segmentation (Chidester et al., 2019). 
Importantly, equivariant networks outperform standard CNNs trained with augmentations from the corresponding group (Veeling et al., 2018, Table 1), (Lafarge et al., 2020a, Fig. 7). See Cohen et al. (2019); Esteves (2020) for a formal treatment of equivariant CNNs.

Equivariant features may also be constructed via scattering networks consisting of non-trainable wavelet filters, enabling equivariance to diverse symmetries (Mallat, 2012; Bruna & Mallat, 2013; Sifre & Mallat, 2013). Generative scattering networks include Angles & Mallat (2018), where a standard convolutional decoder is optimized to reconstruct images from an embedding generated by a fixed scattering network, and Oyallon et al. (2019), who show preliminary results using a standard convolutional GAN to generate scattering coefficients. We note that while both approaches are promising, they currently yield suboptimal synthesis results not comparable to modern GANs. Capsule networks (Hinton et al., 2011; Sabour et al., 2017) are also equivariant, and emerging work has shown that using a capsule network for the GAN discriminator (Jaiswal et al., 2019; Upadhyay & Schrater, 2018) improves synthesis on toy datasets. However, capsule GANs and generative scattering approaches require complex training strategies and restrictive architectural choices not compatible with recent insights in GAN training, and have not yet been shown to scale to real-world datasets.

In this work, we improve the generative modeling of images with transformation-invariant labels by using an inductive bias of symmetry. We replace all convolutions with group-convolutions, thereby admitting a higher degree of weight sharing, which enables increased visual fidelity, especially with limited-sample datasets. To our knowledge, we are the first to use group-equivariant layers in the GAN context and to use symmetry-driven considerations in both generator and discriminator architectures. Our contributions are as follows:

1. We introduce symmetry priors via group-equivariance to generative adversarial networks. 2. We show that recent insights in improving GAN training are fully compatible with group-equivariance with careful reformulations. 3. We improve class-conditional image synthesis across a diversity of datasets, architectures, loss functions, and regularizations. These improvements are consistent for both symmetric images and even natural images with preferred orientation." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 PRELIMINARIES", "text": "Groups and group-convolutions. A group is a set with an endowed binary function satisfying the properties of closure, associativity, identity, and invertibility. A two-dimensional symmetry group is the set of all transformations under which a geometric object is invariant, with an endowed operation of composition. Given a group G and a map Φ : X → Y between two G-sets X and Y, Φ is said to be equivariant if and only if Φ(g · x) = g · Φ(x), ∀x ∈ X, ∀g ∈ G. Colloquially, an equivariant map implies that transforming an input and applying the map yields the same result as applying the map and then transforming the output. Analogously, invariance requires that Φ(g · x) = Φ(x), ∀x ∈ X, ∀g ∈ G. 
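This definition can be checked numerically for the p4 case used below. The following is a minimal, hypothetical sketch (PyTorch assumed; not the paper's released implementation) of a first-layer p4 "lifting" convolution together with an empirical test of Φ(g · x) = g · Φ(x) for a quarter-turn g.

```python
import torch
import torch.nn.functional as F

def lift_p4(f, psi):
    """First-layer p4 convolution: correlate the input with all four
    90-degree rotations of each base filter.
    f: (N, C_in, H, W) image; psi: (C_out, C_in, k, k) base filters.
    Returns feature maps on the group: (N, C_out, 4, H', W')."""
    rots = [torch.rot90(psi, r, dims=(-2, -1)) for r in range(4)]
    w = torch.cat(rots, dim=0)                      # (4*C_out, C_in, k, k)
    y = F.conv2d(f, w)
    n, _, h, wd = y.shape
    return y.view(n, 4, psi.shape[0], h, wd).transpose(1, 2)

# Empirical equivariance check: rotating the input rotates each feature
# map and cyclically shifts the orientation axis; the shift direction
# depends on the rotation convention, so we search for it.
torch.manual_seed(0)
f, psi = torch.randn(1, 3, 9, 9), torch.randn(8, 3, 3, 3)
y_rot_in = lift_p4(torch.rot90(f, 1, dims=(-2, -1)), psi)
y_rot_out = torch.rot90(lift_p4(f, psi), 1, dims=(-2, -1))
shifts = [s for s in range(4)
          if torch.allclose(y_rot_in, y_rot_out.roll(s, dims=2), atol=1e-4)]
assert shifts, "equivariance violated"
```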
In deep networks, equivariance to a planar symmetry group can be achieved by either transforming filters (Cohen & Welling, 2016) or feature maps (Dieleman et al., 2016).

Our work utilizes the plane symmetry groups p4 (all compositions of 90-degree rotations and translations) and p4m (all compositions of 90-degree rotations, reflections, and translations) (Schattschneider, 1978). These groups can be parameterized neatly following Cohen & Welling (2016),

$$g(r, u, v) = \begin{bmatrix} \cos(\frac{r\pi}{2}) & -\sin(\frac{r\pi}{2}) & u \\ \sin(\frac{r\pi}{2}) & \cos(\frac{r\pi}{2}) & v \\ 0 & 0 & 1 \end{bmatrix}; \qquad g'(m, r, u, v) = \begin{bmatrix} (-1)^m \cos(\frac{r\pi}{2}) & (-1)^{m+1} \sin(\frac{r\pi}{2}) & u \\ \sin(\frac{r\pi}{2}) & \cos(\frac{r\pi}{2}) & v \\ 0 & 0 & 1 \end{bmatrix}$$

where g(r, u, v) parameterizes p4, g′(m, r, u, v) parameterizes p4m, 0 ≤ r < 4 (the number of 90-degree rotations), m ∈ {0, 1} (the number of reflections), and (u, v) ∈ Z² (integer translations). The group operation is matrix multiplication for both groups. The matrix g(r, u, v) rotates and translates a point (expressed as a homogeneous coordinate vector) in pixel space via left-multiplication. Analogous intuition follows for g′(m, r, u, v).

We now briefly define G-equivariant convolutions. We note that formally these are correlations and not convolutions, and that the literature uses the terms interchangeably. A G-convolution between a vector-valued K-channel image f : Z² → R^K and filter ψ : Z² → R^K, with f = (f_1, f_2, …, f_K) and ψ = (ψ_1, ψ_2, …, ψ_K), can be expressed as $[f \star \psi](g) = \sum_{y \in \mathbb{Z}^2} \sum_{k=1}^{K} f_k(y)\, \psi_k(g^{-1}y)$. For standard reference, if one considers G to be the translation group on Z², we have g⁻¹y = y − g and recover the standard convolution. After the first layer of a G-CNN, we see that (f ⋆ ψ) is a function on G, necessitating that filter banks also be functions on G. Subsequent G-convolutional layers are therefore defined as $[f \star \psi](g) = \sum_{h \in G} \sum_{k=1}^{K} f_k(h)\, \psi_k(g^{-1}h)$. Finally, for tasks where the output is an image, it is necessary to bring the domain of feature maps from G back to Z². We can pool the feature map for each filter over the set of transformations, corresponding to average or max pooling over the group of rotations (or roto-reflections, as appropriate).

GAN optimization and stability. As we focus on the limited data setting where training instability is exacerbated, we briefly describe the two major stabilizing methods used in all experiments here. We regularize the discriminator by using a zero-centered gradient penalty (GP) on the real data as proposed by Mescheder et al. (2018), of the form $R_1 := \frac{\gamma}{2}\, \mathbb{E}_{x \sim P_{\text{real}}}\left[\lVert \nabla D(x) \rVert_2^2\right]$, where γ is the regularization weight, x is sampled from the real distribution P_real, and D is the discriminator. This GP has been shown to cause convergence (in toy cases), alleviate catastrophic forgetting (Thanh-Tung & Tran, 2018), and strongly stabilize GAN training. However, empirical work has found that this GP achieves stability at the cost of worsening GAN evaluation scores (Brock et al., 2018).

A widely used technique for GAN stabilization is spectral normalization (Miyato et al., 2018), which constrains the discriminator to be 1-Lipschitz, thereby improving gradient feedback to the generator (Zhou et al., 2019; Chu et al., 2020). With spectral normalization, each layer is rescaled as $W_{SN} = W / \sigma(W)$, where W is the weight matrix for a given layer and σ(W) is its spectral norm. In practice, σ(W) is estimated via a power iteration method, as opposed to computing the full singular value decomposition during each training iteration. 
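A sketch of this power-iteration estimator is below (a hypothetical helper, not the paper's code; PyTorch exposes the same logic as torch.nn.utils.spectral_norm). Carrying u across training steps is what makes a single iteration per step sufficient in practice.

```python
import torch

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    """Rescale W by an estimate of its largest singular value sigma(W).
    W: layer weight (conv kernels are flattened to 2-D);
    u: persistent left-singular-vector estimate carried across steps."""
    W2d = W.reshape(W.shape[0], -1)
    v = None
    for _ in range(n_iters):
        v = torch.nn.functional.normalize(W2d.t() @ u, dim=0, eps=eps)
        u = torch.nn.functional.normalize(W2d @ v, dim=0, eps=eps)
    sigma = torch.dot(u, W2d @ v)    # power-iteration estimate of sigma(W)
    return W / sigma, u.detach()
```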
Finally, applying spectral normalization to both generator and discriminator empirically improves training significantly (Zhang et al., 2018)." }, { "heading": "2.2 GROUP EQUIVARIANT GENERATIVE ADVERSARIAL NETWORKS", "text": "Here, we outline how to induce a symmetry prior into the GAN framework. Implementations are available at https://github.com/neel-dey/equivariant-gans. The literature has developed several techniques for normalization and conditioning of the individual networks, along with unique architectural choices - we extend these developments to the equivariant setting. We start by replacing all convolutional layers with group-convolutional layers where filters and feature maps are functions on a symmetry group G. Batch normalization moments (Ioffe & Szegedy, 2015) are calculated per group-feature map as opposed to spatial feature maps. Pointwise nonlinearities preserve equivariance for the groups considered here. Pre-activation residual blocks common to modern GANs are used freely as the sum of equivariant feature maps on G is also equivariant.\nGenerator. The generator is illustrated at a high-level in Figure 2. We use a fully connected layer to linearly project and reshape the concatenated noise vector z ∼ N (0, I) and class embedding c into spatial feature maps on Z2. We then use spectrally-normalized group-convolutions, interspersed with pointwise-nonlinearities, and nearest-neighbours upsampling to increase spatial extent. We use upsampling followed by group-convolutions instead of transposed group-convolutions to reduce checkerboard artefacts (Odena et al., 2016). We further use a novel group-equivariant classconditional batch normalization layer (described below) to normalize and class-condition image generation while also projecting the latent vector z to each level of the group-convolutional hierarchy. We finally max-pool over the set of transformations to obtain the generated image x.\nDiscriminator. The group-equivariant discriminator receives an input x, which it maps to a scalar indicating whether it is real or fake. We do this via spectrally normalized group-convolutions, pointwise-nonlinearities, and spatial-pooling layers to decrease spatial extent. After the final groupconvolutional layer, we pool over the group and use global average pooling to obtain an invariant representation at the output. Finally, we condition the discriminator output via the projection method proposed by Miyato & Koyama (2018). Importantly, the equivariance of group-convolutions depends on the convolutional stride. Strided convolutions were commonly used for downsampling in early GANs (Radford et al., 2015). However, stride values must be adjusted to the dataset to preserve equivariance, which makes comparisons to equivalent non-equivariant GAN architectures difficult. We therefore use pooling layers over the plane (commonly used in recent GANs) to downsample in all settings to preserve equivariance and enable a fair comparison.\nSpectral Normalization. As the singular values of a matrix are invariant under compositions of 90- degree rotations, transpositions, and reflections - spectral normalization on a group-weight matrix preserves equivariance and we use it freely.\nClass-conditional Batch Normalization. 
Conditional batch normalization (Perez et al., 2018) replaces the scale and shift of features with an affine transformation learned from the class label (and optionally from the latent vector as well (Brock et al., 2018)) via linear dense layers, and is widely used in generative networks. We propose a group-equivariance-preserving conditional normalization by learning the affine transformation parameters per group-feature map, rather than per spatial feature. As we use fewer group-filters than equivalent non-equivariant GANs, we use fewer dense parameters to learn conditional scales and shifts." }, { "heading": "3 EXPERIMENTS", "text": "Common setups. In each subsection, we list specific experimental design choices, with full details available in App. C. For each comparison, the number of group-filters in each layer is divided by the square root of the cardinality of the symmetry set, ensuring a number of parameters similar to the standard CNNs for a fair comparison. We skew towards stable training over absolute performance so that models can be compared under identical settings, obviating the extensive checkpointing typically required for BigGAN-like models. Optimization is performed via Adam (Kingma & Ba, 2014) with β1 = 0.0 and β2 = 0.9, as in Zhang et al. (2018); Brock et al. (2018). Unless otherwise noted, all discriminators are updated twice per generator update and employ unequal learning rates for the generator and discriminator following Heusel et al. (2017). We use an exponential moving average (α = 0.9999) of generator weights across iterations when sampling images, as in Brock et al. (2018). All initializations use the same random seed, except for RotMNIST where we average over 3 random seeds. An overview of the small datasets considered here is presented in Table 1.

Evaluation methodologies. GANs are commonly evaluated by embedding the real and generated images into the feature space of an ImageNet pre-trained network, where similarity scores are computed. The Fréchet Inception Distance (FID) (Heusel et al., 2017) jointly captures sample fidelity and diversity and is presented for all experiments. To further evaluate both aspects explicitly, we present the improved precision and recall scores (Kynkäänniemi et al., 2019) for ablations on real-world datasets. As the medical imaging datasets (ANHIR and LYSTO) are not represented in ImageNet, we finetune Inception-v3 (Szegedy et al., 2016) prior to feature extraction for FID calculation, as in Huang et al. (2018). For RotMNIST, we use features derived from the final pooling layer of the p4-CNN defined in Cohen & Welling (2016) to replace Inception-featurization. An analogous approach was taken in Binkowski et al. (2018) in their experiments on the canonical MNIST dataset. Natural image datasets (Food-101 and CIFAR-10) are evaluated with the official Tensorflow Inception-v3 weights. Importantly, we perform ablation studies on all datasets to evaluate group-equivariance in either or both networks.

We note that the FID estimator is strongly biased (Binkowski et al., 2018) and work around this limitation by always generating the same number of samples as the validation set, as recommended in Binkowski et al. (2018). An alternative Kernel Inception Distance (KID) with negligible bias has been proposed (Binkowski et al., 2018), yet large-scale evaluation (Kurach et al., 2019) finds that KID correlates strongly with FID. We thus focus on FID in our experiments in the main text." 
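For reference, once real and generated activations are extracted with the feature networks above, the Fréchet distance has a closed form under the Gaussian fit. A sketch follows (assuming NumPy/SciPy; the feature extraction itself is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID between two (n_samples, n_features) activation arrays:
    ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    s_r = np.cov(feats_real, rowvar=False)
    s_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s_r @ s_f)
    if np.iscomplexobj(covmean):   # numerical noise can yield complex parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(s_r + s_f - 2.0 * covmean)
```

The estimator's bias noted above is why the number of generated samples is always matched to the validation set size.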
}, { "heading": "3.1 SYNTHETIC EXPERIMENTS: ROTATED MNIST", "text": "Rotated MNIST (Larochelle et al., 2007) provides random rotations of the MNIST dataset and is a common benchmark for equivariant CNNs, which we use to measure sensitivity to dataset size, loss function, and equivariance in either network to motivate choices for real-world experiments. We experiment with four different proportions of training data: 10%, 33%, 66%, and 100%. Additionally, the non-saturating loss (Goodfellow et al., 2014) (NSGAN), the Wasserstein loss (Arjovsky et al., 2017) (WGAN), and the relativistic average loss (Jolicoeur-Martineau, 2018) (RaGAN) are tested. For the equivariant setting, all convolutions are replaced with p4-convolutions. p4m is precluded as some digits do not possess mirror symmetry. All settings were trained for 20,000 generator iterations with a batch size of 64. Implementation details are available in Appendix C.2.1.

Results. Fréchet distance of synthesized samples to the validation set is calculated at every thousand generator iterations. As shown in Table 2, we find that under nearly every configuration of loss and data availability considered, using p4-convolutions in either network improves both the mean and minimum Fréchet distance. As data availability increases, the best-case minimum and mean FID scores improve. With {33%, 66%, 100%} of the data, most improvements come from using a p4-discriminator, with the further usage of a p4-generator only helping in a few cases. At 10% data, having an equivariant generator is more impactful than an equivariant discriminator. These trends are further evident from App. A Fig. 6, where we see that GANs with p4-discriminators converge faster than non-equivariant counterparts. The NSGAN-GP and RaGAN-GP losses perform similarly, with WGAN-GP underperforming initially and ultimately achieving comparable results. Qualitatively, the equivariant model learns better representations as shown in Figure 3(a). Holding the class-label constant and interpolating between samples, we find that the standard GAN changes the shape of the digit in order to rotate it, whereas the equivariant model learns rotation in the latent space. Holding the latent constant and interpolating between classes shows that our model learns an intuitive interpolation between digits, whereas the standard GAN transforms the image immediately." }, { "heading": "3.2 REAL-WORLD EXPERIMENTS", "text": "Datasets. p4 and p4m-equivariant networks are most useful when datasets possess global roto(-reflective) symmetry, yet have also been shown to benefit generic image representation due to local symmetries (Cohen & Welling, 2016; Romero et al., 2020). To this end, we experiment with two types of real-world datasets as detailed in Table 1: (1) sets with roto(-reflective) symmetry, such that the image label is invariant under transformation; (2) natural images with preferred orientation (e.g., the boat class of images in CIFAR-10 cannot be upside-down). Briefly, they are:

ANHIR provides high-resolution pathology slides stained with 5 different dyes to highlight different cellular properties (Borovec et al., 2020; 2018). We extract 128 × 128 foreground patches from images of different scales, as described in App. C.1.2. We use the staining dye as conditioning.

LYSTO is a multi-organ pathology benchmark for the counting of immunohistochemistry stained lymphocytes (Ciompi et al., 2019). We re-purpose it here for conditional synthesis at a higher resolution of 256 × 256. 
As classification labels are not provided, we use the organ source as class labels. The use of organ sources as classes is validated in App. C.1.1. The high image resolution, in addition to the limited sample size of 20,000, makes LYSTO a challenging dataset for GANs.

CIFAR-10 is a natural image vision benchmark of both small resolution and sample size (Krizhevsky et al., 2009). Previous work (Weiler & Cesa, 2019; Romero et al., 2020) finds that equivariant networks improve classification accuracy on CIFAR-10, and we include it here as a GAN benchmark.

Food-101 is a small natural image dataset of 101 categories of food taken in various challenging settings of over/under exposure, label noise, etc. (Bossard et al., 2014). Further, datasets with a high number of classes are known to be challenging for GANs (Odena, 2019). Importantly, even though the objects in this dataset have a preferred pose due to common camera orientations, we speculate that roto-equivariance may be beneficial here as food photography commonly takes an en face or mildly oblique view. We resize the training set to 64 × 64 resolution for our experiments.

Baseline architecture. To produce a strong non-equivariant baseline, we face several design choices. State-of-the-art GANs follow either BigGAN (Brock et al., 2018) or StyleGAN2 (Karras et al., 2020b) in design. As StyleGAN2 has not yet been demonstrated to scale to conditional generation with a large number of classes (to our knowledge), we follow a BigGAN-like construction despite the stability of StyleGAN2. For our small datasets, we make the following modifications: (1) we use fewer channels; (2) we do not use orthogonal regularization; (3) we do not use hierarchical latent projection, as we find in early testing that projecting the entire latent to each normalization layer achieves similar results; (4) we do not use attention, as equivariant attention is an area of active research (Romero & Hoogendoorn, 2019; Romero et al., 2020) but currently has prohibitively high memory requirements and may not yet scale to GANs. Further details are available in App. C.2.

We then modify either generator (G) and/or discriminator (D) as in Section 2.2 to obtain the corresponding equivariant settings. We note that a discriminator invariant to roto-reflections would assign the same amount of realism to an upright natural image versus a rotated/reflected copy of the same image, allowing the generator to synthesize images at arbitrary orientations. Therefore, for CIFAR-10 and Food-101 we pool over rotations before the last residual block to enable the discriminator to detect when generated images are not in their canonical pose while maintaining most of the benefits of equivariance as studied in Weiler & Cesa (2019). We use p4m-equivariance for ANHIR and LYSTO and p4-equivariance for CIFAR-10 and Food-101 to reduce training time.

Comparisons. A natural comparison would be against standard GANs using augmentations drawn from the same group our model is equivariant to. However, augmentation on the real images alone would lead to the augmentations "leaking" into the generated images, e.g., vertical flip augmentation may lead to generated images being upside-down. Zhao et al. (2020c) propose balanced consistency regularization (bCR) for augmentations of both real and generated samples to alleviate this issue, and we thus use it as a comparison.
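For concreteness, a sketch of the bCR penalty as used in this comparison follows (a hypothetical helper; reflections are handled analogously, and the λ values shown are those reported in App. C for ANHIR and LYSTO):

```python
import torch

def bcr_penalty(d, x_real, x_fake, lam_real=0.1, lam_fake=0.05):
    """Balanced consistency regularization with quarter-turn rotations.

    d maps images to discriminator logits; the penalty is added to the
    discriminator loss only. The rotation amount is sampled per batch
    here for simplicity (per-sample sampling is a simple extension)."""
    k = int(torch.randint(1, 4, ()))  # 1..3 quarter turns
    t_real = torch.rot90(x_real, k, dims=(-2, -1))
    t_fake = torch.rot90(x_fake, k, dims=(-2, -1))
    pen_real = ((d(x_real) - d(t_real)) ** 2).mean()
    pen_fake = ((d(x_fake) - d(t_fake)) ** 2).mean()
    return lam_real * pen_real + lam_fake * pen_fake
```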
We restrict the augmentations used in bCR to 90-degree rotations, or 90-degree rotations and reflections as appropriate, to enable a fair comparison against equivariant GANs. Using additional augmentations would help all methods across the board. We further compare against the auxiliary rotations (AR) GAN (Chen et al., 2019), where real and fake images are augmented with 90-degree rotations and the discriminator is tasked with predicting their orientation. We do not use AR for ANHIR and LYSTO as they have no canonical orientation. For completeness, we also evaluate standard augmentation (reals only) for all datasets.

Results. Quantitative FID results of ablations and comparisons against baselines are presented in Table 3. Equivariant networks (G-CNNs) outperform methods which use standard CNNs with or without augmentation across all datasets. For ANHIR and LYSTO, we find that p4m-equivariance in either network improves FID evaluation, with the best results coming from modifying both networks.

However, for the upright datasets CIFAR-10 and Food-101, we find that having a p4-equivariant discriminator alone helps more than having both networks be p4-equivariant. We speculate that this effect is in part attributable to their orientation bias. With bCR and AR GANs, we find that standard CNNs improve significantly, yet are still outperformed by equivariant nets using no augmentation. We include a mixture of equivariant GANs and bCR for completeness and find that for ANHIR and Food-101 they have an additive effect, whereas they do not for LYSTO and CIFAR-10, indicating dataset sensitivity. Of note, we found that bCR with its suggested hyperparameters led to immediate training collapse on ANHIR, LYSTO, and CIFAR-10, which was fixed by decreasing the strength of the regularization substantially. This may be due to the original work using several different types of augmentation and not just roto-reflections. Standard augmentation (i.e., augmenting training images alone) led to augmentation leakage for CIFAR-10 and Food-101.

Qualitatively, as class differences in ANHIR should be stain (color) based, we visualize inter-class interpolations between synthesized samples in Figure 3(b). We find that our model better preserves structure while translating between stains, whereas the non-equivariant GAN struggles to do so. In our ablation study in terms of precision and recall in Figure 5, using p4m-equivariance in G and D achieves consistently higher recall for ANHIR and LYSTO. For Food-101, we find that G-CNN in G and D achieves higher precision, whereas CNN in G and G-CNN in D achieves higher recall. For CIFAR-10 precision and recall, we find no discernible differences between the two settings with lowest FID. Interestingly, for CIFAR-10, adding p4-equivariance to G but not D worsens FID but noticeably improves precision. These observations are consistent with our FID findings, as FID tends to correlate better with recall (Karras et al., 2020a). Finally, we plot FID vs. generator updates in Figure 5, finding that the proposed framework converges faster than the baseline as a function of training iterations (for all datasets except ANHIR). Convergence plots for all datasets and all methods compared can be found in App. A Figure 7, showing similar trends." }, { "heading": "4 DISCUSSION", "text": "Future work. 
We present improved conditional image synthesis using equivariant networks, opening several potential future research directions: (1) As efficient implementations of equivariant attention develop, we will incorporate them to model long-range dependency; (2) Equivariance to continuous groups may yield further increased data efficiency and more powerful representations. However, doing so may require non-trivial modifications to current GAN architectures, as memory limitations could bottleneck continuous group-equivariant GANs at relevant image sizes. Further, adding more discretizations beyond 4 rotations on a continuous group such as SE(2) may show diminishing returns (Lafarge et al., 2020a, Fig. 7); (3) In parallel to our work, Karras et al. (2020a) propose a differentiable augmentation scheme for limited-data GANs that selects which transformations to apply and learns the frequency of augmentation for generic images, with similar work presented in Zhao et al. (2020b). Our approach is fully complementary to these methods when employing transformations outside the considered group and will be integrated into future work; (4) Contemporaneously, Lafarge et al. (2020b) propose equivariant variational autoencoders allowing for control over generated orientations via structured latent spaces, which may be used for equivariant GANs as well; (5) The groups considered here do not capture all variabilities present in natural images, such as small diffeomorphic warps. Scattering networks may provide an elegant framework to construct GANs equivariant to a wider range of symmetries and enable higher data-efficiency.

Conclusion. We present a flexible framework for incorporating symmetry priors within GANs. In doing so, we improve the visual fidelity of GANs in the limited-data regime when trained on symmetric images, extending even to natural images. Our experiments confirm this by improving on conventional GANs across a variety of datasets, ranging from medical imaging modalities to real-world images of food. Modifying either generator or discriminator generally leads to improvements in synthesis, with the latter typically having more impact. To our knowledge, our work is the first to show clear benefits of equivariant learning over standard GAN training on high-resolution conditional image generation beyond toy datasets. While this work is empirical, we believe that it strongly motivates future theoretical analysis of the interplay between GANs and equivariance. Finally, improved results over augmentation-based strategies are presented, demonstrating the benefits of explicit transformation equivariance over equivariance-approximating regularizations.

Acknowledgements. Neel Dey thanks Mengwei Ren, Axel Elaldi, Jorge Ono, and Guido Gerig." }, { "heading": "A SUPPLEMENTARY RESULTS", "text": "" }, { "heading": "B IMAGE-TO-IMAGE TRANSLATION", "text": "To show the generic utility of equivariance in generative adversarial network tasks, we present a pilot study employing p4-equivariance in supervised image-to-image translation to learn mappings between visual domains. Using the popular Pix2Pix model of Isola et al. (2017) as a baseline, we replace both networks with p4-equivariant models. 
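Past the first (lifting) layer, "replacing both networks with p4-equivariant models" means using group-to-group convolutions. A hedged sketch is below, consistent with the lifting layer sketched in Section 2.1; the direction of the orientation-axis roll depends on rotation conventions and should be verified with the same empirical equivariance test.

```python
import torch
import torch.nn.functional as F

def p4_conv(f, psi):
    """p4 -> p4 group convolution.
    f: (N, C_in, 4, H, W) feature maps on the group;
    psi: (C_out, C_in, 4, k, k) filters, themselves functions on p4.
    For output orientation r, the filter is rotated spatially by r quarter
    turns and its orientation axis is cyclically shifted by r."""
    n, ci, _, h, w = f.shape
    co, k = psi.shape[0], psi.shape[-1]
    banks = []
    for r in range(4):
        wr = torch.rot90(psi, r, dims=(-2, -1)).roll(r, dims=2)
        banks.append(wr.reshape(co, ci * 4, k, k))
    weight = torch.cat(banks, dim=0)                 # (4*C_out, 4*C_in, k, k)
    y = F.conv2d(f.reshape(n, ci * 4, h, w), weight, padding=k // 2)
    return y.view(n, 4, co, h, w).transpose(1, 2)
```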
For completeness, we also evaluate whether employing p4-equivariance in just the discriminator achieves comparable results to modifying both networks, as in the natural image datasets in the main text.

We use the 256×256 Maps dataset first introduced in (Isola et al., 2017), consisting of 1096 training and 1098 validation images of pairs of Google maps images and their corresponding satellite/aerial view images. As FID has a highly biased estimator, its use for evaluating generation with only 1098 validation samples is contraindicated (Binkowski et al., 2018). We instead use the Kernel Inception Distance (KID) proposed by Binkowski et al. (2018), which exhibits low bias for small sample sizes and is adopted in recent image translation studies (Kim et al., 2020). Briefly, as in FID, KID embeds real and fake images into the feature-space of an appropriately chosen network and computes the squared maximum-mean discrepancy (with a polynomial kernel) between their embeddings. Lower values of KID are better. We use the official Tensorflow implementation and weights1.

For baseline Pix2Pix, we use pre-trained weights provided by the authors2. Interestingly, we find that their architectures can be optimized for improved performance by replacing transposed convolutions with resize-convolutions, reducing the number of parameters by swapping 4×4 convolutional kernels for 3×3 kernels, and removing dropout. For equivariant models, we replace convolutions with p4-convolutions in this optimized architecture and halve the number of filters to keep the number of parameters similar across settings. Architectures are given in Tables 15 and 16. We leave all other experimental details identical to Isola et al. (2017) for all models, such as training for 200 epochs with random crops under a cross-entropy GAN loss.

1https://github.com/tensorflow/gan/blob/master/tensorflow_gan/python/eval/inception_metrics.py
2https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

Quantitative results are presented in Table 4, which shows that p4-equivariance in either setting improves over both the original and the optimized baselines by a wide margin, with the best results coming from p4-equivariance in the discriminator alone. Qualitative results are presented in Figure 12, showing improved translation fidelity and further supporting our hypothesis that equivariant networks benefit GAN tasks generically." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "" }, { "heading": "C.1 DATA PREPARATION", "text": "" }, { "heading": "C.1.1 LYSTO CLASS CONDITIONING", "text": "To validate the assumption of the organ source being a discriminative feature, a suitable test would be to train a classifier to distinguish between sources. We partition the original training set with a 60/40 train/test split. The original testing set is not used as it has no publicly available organ source information. The dataset has 3 classes - colon, breast, and prostate. Holding out 20% of the newly constructed training set for validation, we fine-tune ImageNet-pretrained VGG16 (Simonyan & Zisserman, 2014) and achieve 98% organ classification test accuracy, thus validating our assumption." }, { "heading": "C.1.2 ANHIR PATCH EXTRACTION", "text": "To extract patches for image synthesis, we choose the lung-lesion images from the larger ANHIR dataset, as these images are provided at different scales and possess diverse staining. The images were cropped to the nearest multiples of 128, and 128 × 128 patches were then extracted. 
Foreground/background masking was performed via K-means clustering, followed by morphological dilation. The images were then gridded into 128×128 patches, i.e., there was no overlap between patches. If a patch contained less than 10% foreground pixels, it was excluded from consideration." }, { "heading": "C.2 ADDITIONAL IMPLEMENTATION DETAILS", "text": "The following subsections list dataset-specific training strategies. Unless noted, all layers use orthogonal initializations. Batch normalization momentum is set to 0.1, and LeakyReLU slopes are set to 0.2 (if used). Spectral normalization is used everywhere except for the dense layer which learns the class embedding, as specified in the BigGAN PyTorch GitHub repository3.

For ablation studies, as GANs consist of two networks (the generator and discriminator), we replace group-equivariant layers (convolutional, normalization, and pooling) with the corresponding standard layers in either generator or discriminator to evaluate which network benefits the most from equivariant learning. When we remove equivariant layers from both networks, we recover our baseline comparison. All settings use roughly the same number of parameters, with a very small difference in parameter count arising from the p4 (or p4m) class-conditional batch normalization layers requiring fewer affine scale and shift parameters than their corresponding standard normalization layers. Tangentially, we note that the equivariant networks require higher amounts of computation time. For example, for a fixed number of training iterations on ANHIR, p4m-equivariant GANs currently require approximately four times the amount of computation time.

To identify a common shared stable hyperparameter configuration for all ablations of our method on real datasets, a grid search was performed for the ANHIR dataset over learning rates for generator and discriminator (η_g, η_d): ({10^{-4}, 4×10^{-4}}, {5×10^{-5}, 2×10^{-4}}), gradient penalty strengths (γ = {0.01, 0.1, 1.0, 10.0}), and binary choices as to whether to use batch normalization in the discriminator or not, whether to use average-pooling or max-pooling to reduce spatial extent in the discriminator, and whether to use a Gaussian latent space or a Bernoulli latent space. We use the identified hyperparameter configuration as an initial starting point for all datasets, modifying it as appropriate, as described below.

For ANHIR, LYSTO, and Food-101 we use the relativistic average adversarial loss (Jolicoeur-Martineau, 2018) for its stability, and for CIFAR-10 we use the Hinge loss (Lim & Ye, 2017; Tran et al., 2017) to remain consistent with the literature for that dataset. For our implementation of auxiliary rotations GAN (Chen et al., 2019), we use the suggested regularization weights. For balanced consistency regularization (bCR) (Zhao et al., 2020c), we find that dataset-specific tuning of the regularization strength was required.

3https://github.com/ajbrock/BigGAN-PyTorch" }, { "heading": "C.2.1 ROTMNIST", "text": "Given the low resolution of Rotated MNIST, we take a straightforward approach to synthesis without residual connections. In the generator, we sample from a 64D Gaussian latent space, concatenate class embeddings, and linearly project as described in Section 2.2. Four spectrally-normalized convolutional layers are then used, with class-conditional batch normalization employed after every convolution except for the first and last layer. 
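A sketch of the group-equivariant class-conditional batch normalization used in these generators follows (a hypothetical module; treating the group axis as the depth dimension of BatchNorm3d yields per-group-feature-map moments, and the (1 + γ) parameterization is a common convention assumed here rather than taken from the paper):

```python
import torch
import torch.nn as nn

class GroupCCBN(nn.Module):
    """Class-conditional BN over (N, C, |G|, H, W) group feature maps.

    Moments and the learned affine parameters are shared across the group
    axis (one scale/shift per group-feature map), which preserves
    equivariance; gamma/beta are predicted from the conditioning vector
    (class embedding, optionally concatenated with the latent) by dense
    layers, using fewer parameters than per-spatial-feature conditioning."""
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm3d(num_channels, affine=False, momentum=0.1)
        self.gamma = nn.Linear(cond_dim, num_channels)
        self.beta = nn.Linear(cond_dim, num_channels)

    def forward(self, x, cond):     # x: (N, C, |G|, H, W)
        h = self.bn(x)              # per-channel stats over N, |G|, H, W
        g = self.gamma(cond).view(-1, x.shape[1], 1, 1, 1)
        b = self.beta(cond).view(-1, x.shape[1], 1, 1, 1)
        return (1 + g) * h + b
```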
The discriminator uses three spectrally normalized convolutional layers, with leaky ReLU non-linearities. Average pooling is used to reduce the spatial extent of the feature maps, with global average pooling and conditional projection used at the end of the sequence. For NSGAN and RaGAN, we use the R1 GP, conservatively setting γ = 0.1. For WGAN, we use the GP defined in Gulrajani et al. (2017) to ensure the 1-Lipschitz constraint, with the recommended weight of 10.0. Learning rates were set to ηG = 0.0001 and ηD = 0.0004, respectively. For the p4-equivariant models, max-pooling over rotations is used after the last group-convolutional layer in both generator and discriminator to get planar feature maps. Architectures are presented in Tables 5 and 6." }, { "heading": "C.2.2 ANHIR", "text": "We sample from a 128D Gaussian latent space with a batch size of 32. The generator consists of 6 pre-activation residual blocks followed by a final convolutional layer to obtain a 3-channel output. We use class-conditional batch normalization after every convolution, except at the final layer. The discriminator uses 5 pre-activation residual blocks, followed by global average pooling and conditional projection. In the equivariant settings, we use residual blocks with p4m-convolutions for roto-reflective symmetries. We train with the relativistic average loss and use the R1 GP with γ = 0.1. Learning rates are set to ηG = 0.0001 and ηD = 0.0004. All models were trained for approximately 60,000 generator iterations. bCR weights for comparison were set to λreal = 0.1 and λfake = 0.05 for roto-reflective augmentations, with higher values collapsing training. Architectures are presented in Tables 7 and 8." }, { "heading": "C.2.3 LYSTO", "text": "Implementation for LYSTO is similar to that of App. C.2.2, with some key differences due to the greater difficulty of training. Due to memory constraints, we use a batch size of 16. We increase the number of residual blocks to 6 in both generator and discriminator and halve the number of filters. The equivariant settings used the p4m roto-reflective symmetries. We initially experienced low sample diversity across a variety of hyperparameter settings. Contrary to recent literature, we find that using batch normalization in the discriminator, in addition to spectral normalization, greatly improves training for this dataset. Further, halving the learning rates for both networks to ηG = 0.00005 and ηD = 0.0002 and increasing the strength of the gradient penalty to 1.0 were necessary for ensuring training stability. As in App. C.2.2, all models were trained for approximately 60,000 generator iterations, and bCR weights were set to λreal = 0.1 and λfake = 0.05 for roto-reflective augmentations. As test set labels are not publicly available for LYSTO, we evaluate FID, Precision, and Recall to the training set itself, as done in a subset of experiments within Jolicoeur-Martineau (2018) and Zhao et al. (2020b). 
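The R1 penalty recurs throughout this appendix with dataset-specific weights (γ = 0.1 for ANHIR, 1.0 for LYSTO). A sketch of its computation (hypothetical helper, following the formula in Section 2.1):

```python
import torch

def r1_penalty(d, x_real, gamma=0.1):
    """Zero-centered gradient penalty on real samples:
    R1 = (gamma / 2) * E[ ||grad_x D(x)||_2^2 ]."""
    x = x_real.detach().requires_grad_(True)
    out = d(x).sum()
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    return 0.5 * gamma * grad.reshape(grad.shape[0], -1).pow(2).sum(1).mean()
```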
Architectures are presented in Tables 9 and 10." }, { "heading": "C.2.4 CIFAR-10", "text": "For CIFAR-10, we make the following changes to our training parameters to be in accordance with prior art for BigGAN-like designs for this dataset: (1) layer weights are now initialized from N(0, 0.02); (2) average pooling is used in the discriminator instead of max pooling; (3) learning rates ηG and ηD are now equal and set to 0.0002; (4) the discriminator is updated four times per generator update; (5) architectures are modified as in Tables 11 and 12; (6) we use the Hinge loss instead of the relativistic average loss. We use a batch size of 64. Karras et al. (2020a) suggest an R1 GP weight of γ = 0.01 for CIFAR-10, which we use here. We train all CIFAR-10 GANs for 100K generator iterations. bCR weights were set to λreal = 0.1 and λfake = 0.1 for 90-degree rotation augmentations.

For the p4-equivariant discriminators, we move the pooling over the group to before the last residual block, as stated in the main text. Alternatively, we experimented with using a single additional standard convolutional layer with 32 filters after the p4-residual blocks as a lightweight alternative to making an entire residual block non-equivariant, but this worsened FID evaluation. Interestingly, we find that substituting Global Sum Pooling for Global Average Pooling in the CIFAR-10 discriminators led to an improvement of ∼5–8 in terms of FID across the board. This architectural change to the ResNet-based GANs from Gulrajani et al. (2017) was originally made in Miyato et al. (2018), but to our knowledge has not been noted in the literature previously." }, { "heading": "C.2.5 FOOD-101", "text": "Compared to the residual synthesis models in App. C.2.2 and C.2.3, we make several changes. We sample from a 64D latent Gaussian to lower the number of dense parameters and substantially increase the width of the residual blocks to account for the high number of image classes. We find that an 8× increase in the number of channels for the initial projection from the latent vector and class embedding improves training significantly. We use 4 residual blocks each in both generator and discriminator. For the equivariant setting, we use only p4 rotational symmetries to reduce training time. Importantly, we increase the batch size to 64 and the R1 GP weight to γ = 1.0, both of which improve the evaluation of all experimental settings. We train all GANs for ∼45K generator iterations. The suggested bCR weights of λreal = 10.0 and λfake = 10.0 from Zhao et al. (2020c) were used here for 90-degree rotation augmentations. However, when bCR with default parameters was combined with p4-equivariance in G and D, augmentations started to ‘leak’ into the generated images (e.g., G generating upside-down plates), necessitating lower weights of λreal = 0.5 and λfake = 0.5." }, { "heading": "C.3 ARCHITECTURES", "text": "Architectures for the Rotated MNIST experiments are given in Tables 5 and 6, ANHIR in Tables 7 and 8, and LYSTO in Tables 9 and 10. The residual blocks used in the ANHIR, LYSTO, CIFAR-10, and Food-101 experiments are given in Figure 13. SN refers to spectral normalization, and (z2−p4), (p4−p4), (z2−p4m), (p4m−p4m) refer to the type of convolution used." } ]
2021
GROUP EQUIVARIANT GENERATIVE ADVERSARIAL NETWORKS
SP:f7611cb09eeb69912df93a040cf1ea98f59fd309
[ "This work proposes the approach of integrating priors into a DNN in the form of linguistic sub-models that capture characteristics of OG. The authors use the example of the PAN-12 dataset for sexual predators to exploit information about linguistic behaviour across the grooming phases. The work then goes on to highlight the augmentations that are done on baseline DNN models to include these CL characteristics. The authors then go on to show the impact of these augmentations on classification performance on the PAN-12 dataset." ]
Online grooming (OG) of children is a pervasive issue in an increasingly interconnected world. We explore various complementary methods to incorporate Corpus Linguistics (CL) knowledge into accurate and interpretable Deep Learning (DL) models. They provide an implicit text normalisation that adapts embedding spaces to the groomers’ usage of language, and they focus the DNN’s attention onto the expressions of OG strategies. We apply these integrations to two architecture types and improve on the state-of-the-art on a new OG corpus.
[]
[ { "authors": [ "Paul Baker", "Rachelle Vessey", "Tony McEnery" ], "title": "The language of violent jihad", "venue": null, "year": 2021 }, { "authors": [ "Andrew Brindle" ], "title": "The language of hate: A corpus linguistic analysis of white supremacist", "venue": null, "year": 2016 }, { "authors": [ "Emily Chiang", "Tim Grant" ], "title": "Deceptive identity performance: Offender moves and multiple identities in online child abuse conversations", "venue": "Applied Linguistics,", "year": 2019 }, { "authors": [ "Mohammad Mahdi Derakhshani", "Saeed Masoudnia", "Amir Hossein Shaker", "Omid Mersa", "Mohammad Amin Sadeghi", "Mohammad Rastegari", "Babak N. Araabi" ], "title": "Assisted excitation of activations: A learning technique to improve object detectors", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Kuzman Ganchev", "Jennifer Gillenwater", "Ben Taskar" ], "title": "Posterior regularization for structured latent variable models", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "José María Gómez Hidalgo", "Andrés Alfonso Caurcel Díaz" ], "title": "Combining predation heuristics and chat-like features in sexual predator identification", "venue": "CLEF,", "year": 2012 }, { "authors": [ "Zhiting Hu", "Xuezhe Ma", "Zhengzhong Liu", "Eduard Hovy", "Eric Xing" ], "title": "Harnessing deep neural networks with logic rules", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Ruslan Salakhutdinov", "Xiaodan Liang", "Lianhui Qin", "Haoye Dong", "Eric Xing" ], "title": "Deep generative models with learnable knowledge constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Giacomo Inches", "Fabio Crestani" ], "title": "Overview of the International Sexual Predator Identification Competition at PAN-2012", "venue": "In CLEF,", "year": 2012 }, { "authors": [ "Jeremy Kawahara", "Colin J. Brown", "Steven P. Miller", "Brian G. Booth", "Vann Chau", "Ruth E. Grunau", "Jill G. Zwicker", "Ghassan Hamarneh" ], "title": "BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment", "venue": "
NeuroImage,", "year": 2017 }, { "authors": [ "Dan Liu", "Ching Yee Suen", "Olga Ormandjieva" ], "title": "A novel way of identifying cyber predators", "venue": "In arXiv preprint arXiv:1712.03903,", "year": 2017 }, { "authors": [ "Nuria Lorenzo-Dus", "Lella Nouri" ], "title": "The discourse of the US alt-right online – A case study of the Traditionalist Worker Party blog", "venue": "Critical Discourse Studies,", "year": 2020 }, { "authors": [ "Nuria Lorenzo-Dus", "Cristina Izura", "Rocío Pérez-Tattam" ], "title": "Understanding grooming discourse in computer-mediated environments", "venue": "Discourse, Context and Media,", "year": 2016 }, { "authors": [ "Minh Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Guoqin Ma" ], "title": "Tweets classification with BERT in the field of disaster management", "venue": null, "year": 2019 }, { "authors": [ "Courtney Mansfield", "Ming Sun", "Yuzong Liu", "Ankur Gandhe", "Björn Hoffmeister" ], "title": "Neural text normalization with subword units", "venue": "In North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Katerina Margatina", "Christos Baziotis", "Alexandros Potamianos" ], "title": "Attention-based conditioning methods for external knowledge integration", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Nikhil Muralidhar", "Mohammad Raihanul Islam", "Manish Marwah" ], "title": "Incorporating prior domain knowledge into deep neural networks", "venue": "IEEE Big Data,", "year": 2018 }, { "authors": [ "Minh Nguyen", "Thien Huu Nguyen" ], "title": "Who is killed by police: Introducing supervised attention for hierarchical LSTMs", "venue": "In International Conference on Computational Linguistics,", "year": 2018 }, { "authors": [ "Lella Nouri", "Nuria Lorenzo-Dus" ], "title": "Investigating Reclaim Australia and Britain First's use of social media: Developing a new model of imagined political communities online", "venue": "Journal for Deradicalization,", "year": 2019 }, { "authors": [ "Adeline Paiement", "Lili Tao", "Massimo Camplani", "Sion Hannuna", "Dima Damen", "Majid Mirmehdi" ], "title": "Online quality assessment of human movement from skeleton data", "venue": "In British Machine Vision Conference,", "year": 2014 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "GloVe: Global vectors for word representation", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Ioannis Saridakis", "Effie Mouka" ], "title": "A corpus study of outgrouping in Greek radical right computer-mediated discourses", "venue": "Journal of Language Aggression and Conflict,", "year": 2020 }, { "authors": [ "Daniela Schneevogt", "Emily Chiang", "Tim Grant" ], "title": "Do Perverted Justice chat logs contain examples of overt persuasion and sexual extortion?
A research note responding to Chiang and Grant (2017, 2018)", "venue": "Language and Law = Linguagem e Direito,", "year": 2018 }, { "authors": [ "John Sinclair" ], "title": "Corpus, concordance, collocation", "venue": null, "year": 1991 }, { "authors": [ "Ekta Sood", "Simon Tannert", "Diego Frassinelli", "Andreas Bulling", "Ngoc Thang Vu" ], "title": "Interpreting attention models with human visual attention in machine reading comprehension", "venue": "In ACL SIGNLL Conference on Computational Natural Language Learning (CoNLL),", "year": 2020 }, { "authors": [ "Anna Vartapetiance", "Lee Gillam" ], "title": "“Our Little Secret”: Pinpointing potential predators", "venue": "Security Informatics,", "year": 2014 }, { "authors": [ "Esa Villatoro-Tello", "Antonio Juárez-González", "Hugo Jair Escalante", "Manuel Montes-y-Gómez", "Luis Villaseñor-Pineda" ], "title": "A two-step approach for effective detection of misbehaving users in chats – Notebook for PAN at CLEF 2012", "venue": "In CLEF,", "year": 2012 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "XLNet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zi Yin", "Yuanyuan Shen" ], "title": "On the dimensionality of word embedding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Online grooming (OG) is a communicative process of entrapment in which an adult lures a minor into taking part in sexual activities online and, at times, offline (Lorenzo-Dus et al., 2016; Chiang & Grant, 2019). Our aim is to detect instances of OG. This is achieved through binary classification of whole conversations into OG (positive class) or neutral (negative class). This classification requires the ability to capture subtleties in the language used by groomers. Corpus Linguistic (CL) analysis provides a detailed characterisation of language in large textual datasets (McEnery & Wilson, 2003; Sinclair, 1991). We argue that, when integrated into ML models, the products of CL analysis may allow a better capture of language subtleties, while simplifying and guiding the learning task. We consider two types of CL products and explore strategies for their integration into several stages of DNNs. Moreover, we show that CL knowledge may help law enforcement in interpreting the ML decision process, towards the production of evidence for potential prosecution.

Our text heavily uses slang and sms-style writing, as do many real-world Natural Language Processing (NLP) tasks on chat logs. Text normalisation methods were proposed to reduce variance in word choice and/or spelling and simplify learning, e.g. (Mansfield et al., 2019) for sms-style writing. However, they do not account for the final analysis goal and may discard some informative variance, e.g. the use of certain forms of slang possibly indicative of a user category. CL analysis provides the preferred usage of spelling variants or synonyms. We propose to use this domain knowledge to selectively normalise chat logs while preserving the informative variance for the classification task.

As demonstrated by the CL analysis in (Lorenzo-Dus et al., 2016), the theme and immediate purpose of groomer messages may vary throughout the conversation, in order to achieve the overarching goal of entrapping the victims. Groomers use a series of inter-connected "sub-goals", referred to as OG processes here, namely gaining the child's trust, planning activities, building a relationship, isolating them emotionally and physically from their support network, checking their level of compliance, introducing sexual content, and trying to secure a meeting off-line. The language used within these processes is not always sexually explicit, which makes their detection more challenging. However, CL analysis additionally flags some contexts associated with the OG processes, in the form of word collocations (i.e. words that occur within the same window of 7 words) that tend to occur more frequently in, and therefore can be associated with, OG processes. We propose to exploit the relations between the OG processes and their overarching goal of OG to improve the final OG classification. We use the CL-identified context windows to guide the learning of our DNN.

Our main contributions are: 1) We explore different strategies for integrating CL knowledge into DNNs. They are applied to two architecture types and demonstrated on OG detection, but may generalise to other NLP applications that involve digital language and/or complex conversational strategies. 2) The principle and several implementations of selectively normalising text through modifying a word embedding in support of classification. 3) The decomposition of conversation
Our DNN implicitly models the relations between these sub-goals and the conversation's overarching final goal. 4) A new attention mechanism for LSTM based on the direct stimulation of its input gates, with two proposed implementations. 5) A state-of-the-art (SoTA) and interpretable OG detector. 6) A new corpus for OG detection, to be publicly released on demand, which extends PAN2012 with more conversations and with products of CL analysis." }, { "heading": "2 RELATED WORK", "text": "Villatoro-Tello et al. (2012) detected OG chat logs using a DNN to classify binary bag-of-words. This simple approach highlights the importance of commonly used words amongst groomers, which we exploit for selective text normalisation. This is emphasised in (Vartapetiance & Gillam, 2014; Hidalgo & Díaz, 2012), where a set of phrases are derived from the important features of a Naïve Bayes classifier to describe common behaviours among groomers. Liu et al. (2017) obtained the current OG detection SoTA using a word embedding for the semantics of important words and an LSTM.
Integrating domain knowledge into DNNs is often done with additional losses that assist with sparse and low quality data. (Muralidhar et al., 2018) penalise a DNN's output violating logical rules w.r.t. the input features. (Hu et al., 2018) use the posterior regularisation framework of (Ganchev et al., 2010) to encode domain constraints for generative models. A teacher-student architecture in (Hu et al., 2016) incorporates first-order logic rules to create an additional loss for the student network. Other works integrated prior knowledge in the design of the DNN architecture. In BrainNetCNN (Kawahara et al., 2017), the convolutions of a convolutional neural network (CNN) are defined based on the graph data's locality to account for the brain's connectivity. The training procedure may also integrate priors without modifying the DNN's architecture. Derakhshani et al. (2019) use assisted excitation of CNN neurons in the images' areas of interest, thus providing both localisation and semantic information to the DNN. An attention mechanism was used in a supervised way to focus a DNN on important words in (Nguyen & Nguyen, 2018). We experiment with these various approaches and adapt them to our domain knowledge and DNN architectures.
Linguistic knowledge was integrated into learnt word embeddings in the past. Knowledge in the form of lexicons, which carry a manual categorisation and/or ranking of words, is combined with a learnt word embedding in (Margatina et al., 2019). Three strategies are proposed, namely concatenating the lexicon and embedding features, and using the lexicon features to conditionally select or transform the word embeddings. In our study, we are concerned with a different type of linguistic knowledge. However, our modification of word embedding (Section 4.1) may also exploit this lexicon knowledge." }, { "heading": "3 AUGMENTED PAN2012 DATASET", "text": "PAN2012 (Inches & Crestani, 2012) is a standard corpus for OG detection. It was gathered from Omegle (one-to-one conversations), IRC (technical discussions in groups), and the Perverted Justice (PJ) website (http://perverted-justice.com; chat logs from convicted groomers interacting with trained adult decoys), with 396 groomers and 5700 / 216,121 OG / non-OG conversations. Some non-OG chat logs contain sexual wording, making the OG classification more challenging. Conversations are truncated to 150 messages each, which limits both CL and ML analyses.
To resolve this limitation, we augment the corpus with full OG conversations and the addition of new groomers from PJ, totalling 623 groomers in 6204 OG conversations (the negative conversations are kept the same, as they could not be augmented to fuller conversations due to lack of access to the original data). Final OG / non-OG conversations total an average (std) of 215 (689) / 13 (23) messages and 1010 (3231) / 94 (489) words, respectively. Statistics on the dataset content are in the sup. materials. PJ data is freely available online and was widely used in previous social science and NLP studies, thus its use does not raise any particular ethical concern. For a debate on its usability see (Chiang & Grant, 2019; Schneevogt et al., 2018).
Our dataset also includes the results of a CL analysis of the new corpus using the method described in (Lorenzo-Dus et al., 2016), which involves a heavy use of manual analysis by CL experts. As part of data preparation for CL analysis, word variants are identified, which are either spelling variations (mistakes or intentional, e.g. ‘loool’→‘lol’), or the same semantic meaning behind two terms (e.g. ‘not comfy’→‘uncomfortable’). These variants are not specific to OG, but rather reflect digital language, and are therefore valid for other real-world chat logs. The CL analysis also identified the variants that are most used among groomers. The CL products in our dataset include: 1) the set of variants, both general and groomer-preferred, 2) a set of frequent 3-word collocates (not necessarily direct neighbours, but located within a window of 7 words) that are used among many different users, and 3) a manual annotation of 2100 samples of OG processes (there are 7 types of OG processes, as identified in (Lorenzo-Dus et al., 2016), listed in the introduction and detailed in the sup. materials) that could be associated with 3-word collocates and the context windows that these latter define. These CL products are sensitive data that might be used to help groomers refine their strategies, therefore they will only be shared on request. They are used in Sections 4-5 to train a DNN model, but this model does not require CL analysis to be performed at testing phase, as it takes raw text only as input." }, { "heading": "4 METHODOLOGY", "text": "Overarching vision and general applicability – We integrate two CL priors into DNNs: the word variants and the identification of OG processes. Word variants provide knowledge of shared semantic meaning, which allows reducing variance in the text. The knowledge of groomers' preferred variants brings an implicit and selective text normalisation that supports the classification task. It is achieved through a reduction of distances between non-discriminative variants in a word embedding. This selective normalisation is applicable to other classification tasks from real-world chat logs, provided an updated selection of the preferred and discriminative variants. As highlighted in Section 3, the variants reflect digital language and are relevant to different analyses of chat conversations. The selection of discriminative variants is done easily and automatically following a procedure described in Section 4.1, using empirical occurrences in positive and negative conversations.
This knowledge integration is also applicable to all DNNs that use a word embedding to capture word semantics.
The use of OG processes aids in differentiating between casual conversations involving sexual language and OG conversations with complex strategies and sub-goals (i.e. OG processes). The language associated with OG processes, reflected by the 3-word collocates and the context windows that they define, may be more informative in making this distinction than the simple sexual wording traditionally used. We propose 3 strategies to integrate this knowledge, namely the definition of sub-tasks and two stimulations of DNN attention. They all guide the learning by providing focus on contexts of interest (a valuable complement to attention mechanisms, as demonstrated in our experiments), and by implicitly modelling the relation between sub- and final goals. This CL knowledge integration principle is generally applicable to the analysis of complex conversations, provided an appropriate CL identification of the conversation's sub-goals and of their associated language through context windows. This identification of sub-goals has been the focus of many social science studies. For example, a large body of work has identified strategies for persuasion and manipulation in extreme ideology groups, e.g. (Brindle, 2016; Nouri & Lorenzo-Dus, 2019; Lorenzo-Dus & Nouri, 2020; Saridakis & Mouka, 2020) for radical right hate speech and (Baker et al., 2021) for jihadi radicalisation. This established baseline of knowledge may be integrated into DNNs in multidisciplinary works. The identification of frequent 3-word collocates is automated, as described in (Lorenzo-Dus et al., 2016). The association of their occurrences to the identified sub-goals is the only task that may require additional manual work. Our stimulation of DNN attention may also be more generally used to focus a DNN's attention on a priori known important elements of a training set.
Base models – We demonstrate the general applicability of our CL integration strategies by applying them to two DNN architecture types representative of the two NLP standards of recurrent and transformer models. The recurrent DNN of Liu et al. (2017) is the current SoTA for OG classification. It comprises a language model that builds two embeddings (word and sentence), and an OG classifier with two LSTMs and a linear projection. Our base model #1 is a modified version (Fig. 1 left) with the word embedding provided as input to the OG classifier in place of the sentence embedding. This word embedding will be more directly impacted by our CL integration, and it increases explainability, as will be seen next. It may be replaced by similar embeddings, and we also present results using the pre-trained GloVe (Pennington et al., 2014). Further, to compensate for the loss of sentence structure modelling previously provided by the sentence embedding, and to account for the longer sequences of inputs into the classifier, we add an unsupervised attention mechanism (Luong et al., 2015) into the classifier. Following the method in (Luong et al., 2015), the hidden states of the last LSTM for all words of the conversation are provided to the attention mechanism, which outputs a conversation embedding of the same size as the LSTMs' hidden state, namely 256.
XLNet (Yang et al., 2019) is a popular transformer model, a SoTA for many NLP tasks, and therefore a strong baseline for this study.
It iteratively refines word embeddings, starting from an initial embedding that captures word semantics similarly to that of Liu et al. (2017), and attaining richer word representations that account for word relationships within a sentence using a positional embedding and self-attention layers. The refined contextualised word embeddings are classified by linear projection. In our application, this projection fails to handle our class imbalance and always outputs the same class, with F-score at 0.392. Providing the contextualised word embeddings to a two-layer LSTM, whose last hidden state is used as a conversation embedding to be classified by the linear projection, solves this issue (the reason for this behaviour remains to be investigated) and forms base model #2 (Fig. 1 right). The combination of a transformer model with LSTM is not new, see for example (Ma, 2019), and has the advantage of allowing the use of our LSTM-based knowledge integration strategies (see ‘Stimulating LSTM input gates’ in Section 4.2).
Input to the models – The analysis is performed on whole conversations, and the final OG / non-OG classification is obtained for the whole conversation, rather than per message. Messages are separated by the [SEP] token, so that inter-text representations can be modelled. Messages from both users are included with no distinction. For base model #2, the [CLS] token is added at the beginning of conversations following the XLNet standard. Conversations longer than 2,000 words are truncated to retain their end part (12% / 8e-5% of OG / non-OG conversations). All base and CL-augmented DNNs take raw text as input only. The only text preparation prior to the DNN is tokenisation of named entities. We do not apply explicit text normalisation such as (Mansfield et al., 2019) as part of text preparation, since the methodological premise of the paper is the design of a hybrid approach where an ML model incorporates its own text normalisation informed by CL knowledge." }, { "heading": "4.1 IMPLICIT AND SELECTIVE TEXT NORMALISATION BASED ON WORD VARIANTS", "text": "The natural stage of a DNN at which to integrate knowledge on word variants is the word embedding that captures word semantics (i.e. before any LSTM or self-attention layer in the base models). The mean occurrence frequency of variants in the OG corpus is significantly larger, by two orders of magnitude, than that of all words (see sup. materials). Therefore, using these common words to modify the word embedding may have a strong impact on classification. We propose 3 strategies to modify the embedding based on our set of $N$ pairs of variants $\{(v_1^i, v_1^j), \ldots, (v_N^i, v_N^j)\}$, using the principle that words with the same semantics should be moved closer to each other in the embedding space.
Although variants have the same intended meaning, some may be discriminative of groomers' language. Hence, it may be useful for OG classification to keep them apart in the word embedding space. The significance of word $w$ for classification is determined based on empirical occurrences in OG and non-OG conversations within the training set: $\delta p(w) = |p(w \mid y_{pos}) - p(w \mid y_{neg})|$. If $\delta p(v_k^i)$ or $\delta p(v_k^j)$ are high (i.e. within the highest 5 percentiles over all words), we do not use the pair for modification of the embedding. We considered increasing the separation between these variants, but we found that this modifies the embedding too much and reduces its semantic representation power. Out of 4590 pairs of variants, we retain 2955 for modification of the word embedding; a minimal sketch of this selection rule is given below.
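As a concrete illustration of this selection rule, the following is a minimal Python sketch. It is not the authors' released code: the corpus format (one token list per conversation), the `pairs` argument and the percentile cut-off helper are assumptions made for illustration only.

```python
import numpy as np
from collections import Counter

def filter_variant_pairs(pairs, og_convs, neg_convs, cut_percentile=95):
    """Keep only the variant pairs (v_i, v_j) that are NOT discriminative of
    OG, using delta_p(w) = |p(w|y_pos) - p(w|y_neg)| with p estimated as the
    relative word frequency within each class."""
    og_counts, neg_counts = Counter(), Counter()
    for conv in og_convs:      # each conversation is a list of tokens
        og_counts.update(conv)
    for conv in neg_convs:
        neg_counts.update(conv)
    n_og = sum(og_counts.values())
    n_neg = sum(neg_counts.values())

    vocab = set(og_counts) | set(neg_counts)
    delta_p = {w: abs(og_counts[w] / n_og - neg_counts[w] / n_neg)
               for w in vocab}
    threshold = np.percentile(list(delta_p.values()), cut_percentile)

    # Discard a pair if either variant falls within the highest percentiles.
    return [(vi, vj) for vi, vj in pairs
            if delta_p.get(vi, 0.0) < threshold
            and delta_p.get(vj, 0.0) < threshold]
```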
In effect, this selective modification applies an implicit and selective text normalisation which supports the OG classification.
We experiment with 3 implementations that may apply to different usage scenarios, such as training a new language space (supervised word embedding modification), or modifying an existing one before training a new classifier (manifold-based) or before fine-tuning an existing classifier (elastic pulling).
Supervised word embedding modification – A regularisation term is added with weight $\lambda$ to the language modelling loss $L_{Emb}$ to minimise the L2 distance $D$ between the selected word variants' embeddings: $\tilde{L}_{Emb} = (1-\lambda) L_{Emb} + \lambda \left[ \frac{1}{N} \sum_{k=1}^{N} D(v_k^i, v_k^j) \right]$.
Manifold learning – We perform a global transformation of the existing word embedding by building a new space through manifold learning from an edited pairwise distance matrix, with $\tilde{D}(v_k^i, v_k^j) = \lambda D(v_k^i, v_k^j)$, where $\lambda \in [0, 1]$ modulates the strength of distance reduction between selected variants. We use Robust Diffusion Map (Paiement et al., 2014), but other manifold learning methods could be explored. This implementation requires re-training subsequent modules from scratch, as words' new embeddings may be very different from the initial ones. Note that we make an unusual use of manifold learning, for word embedding modification rather than for dimensionality reduction. It is possible to reduce dimensionality, which may help to combat overfitting for classification, as discussed in (Yin & Shen, 2018). However, the dimensionality of the word embedding is unchanged in our experiments.
Elastic pulling – Our third implementation modifies the existing word embedding ‘in place’ through local movements that pull together the representations of selected variants. This mostly preserves all words' original representations (i.e. coordinates in the embedding space), thus limiting the amount of change needed for the classifier to simple fine-tuning. Two variants' representations $v_k^i$ and $v_k^j$ with coordinates $x_k^i$ and $x_k^j$ are pulled towards their centre $\hat{x}_k = \frac{x_k^i + x_k^j}{2}$ by the amount $\delta x_k^i = \hat{x}_k - x_k^i$, modulated by $\lambda \in [0, 1]$: $\tilde{x}_k^i = x_k^i + \lambda \delta x_k^i$. We propagate the pull operation to neighbouring word representations, with the strength of pull decreasing with distance (i.e. modulated by a radial basis function (RBF) $\phi_k^i$ centred on $x_k^i$), so as to preserve the pairwise relationships between variants and their neighbours: $\tilde{x} = x + \lambda \phi_k^i(x) \delta x_k^i$. We use an inverse multiquadric $\phi_k^i(x) = (\|x - x_k^i\|^2 + \gamma^2)^{-\beta/2}$, with global support so that all words can be considered for propagating each pulling operation, without the need for a costly identification of those word representations that are located within the pulling's neighbourhood. $\beta$ and $\gamma$ tune the RBF's decay rate, i.e. the locality of the propagated pull. We found that the method is not very sensitive to these values as long as the pull's reach is sufficient, within a radius of the order of magnitude of $\delta x_k^i$, and we set them empirically to 1.0 and 3.0 respectively." }, { "heading": "4.2 INTEGRATING KNOWLEDGE ON OG PROCESSES", "text": "Our annotated samples of OG processes are associated with 3-word collocates, which are used to identify contexts of interest. We define a continuous representation of the presence of the 7 OG processes using 7 Gaussian Mixture Models (GMM) with components centred on the 3-word collocates and their std being the span of each collocate (max. 7 words, as mentioned in Section 3).
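The following is a minimal sketch of how such a salience curve could be built from the annotated collocate occurrences. It is an illustration, not the authors' implementation: the `(centre, span)` occurrence format and the final rescaling to [0, 1] are assumptions.

```python
import numpy as np

def salience_curve(T, occurrences):
    """Continuous salience G(t) over a conversation of T tokens, built as a
    mixture of Gaussian components centred on the annotated 3-word collocate
    occurrences, with the collocate span (at most 7 tokens) as the std."""
    t = np.arange(T, dtype=float)
    G = np.zeros(T)
    for centre, span in occurrences:
        std = max(float(span), 1.0)
        G += np.exp(-0.5 * ((t - centre) / std) ** 2)
    # Rescale to [0, 1] so that G(t) can supervise attention energies.
    return G / max(G.max(), 1e-8)
```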
We propose 3 uses to focus the attention of the DNN on parts of the conversations that implement the OG processes and on the associated language, and to implicitly model the relations between OG processes and OG.
Auxiliary OG process detection – A second output branch is added to the DNN after the LSTMs, with a fully-connected layer and MSE loss, to estimate the pseudo occurrence probability of the 7 OG processes provided by their GMMs, at each word location. For base model #1, we experimented with adding an attention block as in the main branch, but found that this didn't help with the OG process detection, probably because this task is more local and doesn't need to consider as large a context as the classification of the whole conversation. The new branch also serves as additional regularisation to prevent overfitting given the class imbalance between (non-)OG chat logs. Further, it allows for an OG process-based interpretation of what the DNN considers as relevant clues for OG classification.
Stimulating attention – Both the unsupervised attention of Luong et al. (2015) in base model #1, and the self-attention layers of base model #2, compute an attention energy $e_t$ for the word at position $t$. It may be stimulated during training to guide the DNN's attention on occurrences of OG processes (no annotation of OG processes, i.e. no GMM, is required at testing time). We propose two strategies that are not mutually exclusive and may be combined: a) through supervision by the sum $G$ of GMMs, used as the ground-truth distribution of the salient locations and attention energies: $L_{attention} = \frac{1}{T} \sum_{t=1}^{T} (e_t - G(t))^2$, with $T$ the length of messages from both users. This is similar to (Nguyen & Nguyen, 2018), but with $G$ highlighting higher-level OG processes rather than single important words. b) through direct excitation of the attention energies, inspired by (Derakhshani et al., 2019), which excited CNN activations to speed up localisation learning in images. We propose two possible implementations: (A) $\tilde{e}_t = e_t + G(t)\, e_t$, and (B) $\tilde{e}_t = e_t + G(t)$.
Stimulating LSTM input gates – An alternative (or complement) to stimulating an attention mechanism is to stimulate LSTM cells directly, during training, in locations containing OG processes indicated by $G$. This is a new way to stimulate attention and to encourage the LSTM to recognise and focus on the contexts of OG processes. We propose two implementations that are not mutually exclusive and may be combined: a) through supervision, by minimising the loss between the average input gates' activations $i_t$ and the combined GMMs: $L_{stimulation} = \frac{1}{T} \sum_{t=1}^{T} (i_t - G(t))^2$; b) through excitation of activations, following the same idea as for exciting attention. The input gate activation $i_t^d$ of each LSTM cell $d$ is augmented during OG processes, indicated by a peak of $G$, through: $\tilde{i}_t^d = i_t^d + \frac{G(t)}{D} \sum_{d=1}^{D} h_t^d$. We average over the hidden states $h_t^d$ of the $D$ LSTM cells in the layer, by analogy to (Derakhshani et al., 2019), which averaged over all channels of a CNN's activation map. A sketch of the excitation variant is given below." },
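To make the input-gate excitation of variant b) concrete, the following is a minimal PyTorch-style sketch of one LSTM step with the stimulation applied. It is an assumption-laden illustration rather than the authors' code: the gates are recomputed manually because `torch.nn.LSTMCell` does not expose them, and the previous hidden state is used as a proxy for $h_t$ in the averaging term.

```python
import torch

def lstm_step_with_excitation(x_t, h_prev, c_prev, cell, G_t):
    """One LSTM step with input-gate excitation (variant b):
    i_t^d <- i_t^d + (G(t)/D) * sum_d h_t^d. `cell` is a torch.nn.LSTMCell
    whose weights are reused manually; h_{t-1} stands in for h_t here."""
    D = h_prev.size(-1)
    gates = (x_t @ cell.weight_ih.t() + cell.bias_ih
             + h_prev @ cell.weight_hh.t() + cell.bias_hh)
    i, f, g, o = gates.chunk(4, dim=-1)  # PyTorch gate order: i, f, g, o
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    # Excitation: boost all input gates where an OG process is indicated.
    i = i + (G_t / D) * h_prev.sum(dim=-1, keepdim=True)
    c_t = f * c_prev + i * g
    h_t = o * torch.tanh(c_t)
    return h_t, c_t
```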
{ "heading": "5 EXPERIMENTS", "text": "Both original and modified base model #1 (including its word embedding) are trained from random weights on our dataset. Experiments with the GloVe embedding use GloVe's pre-trained weights from Common Crawl 840B (Pennington et al., 2014). The XLNet part of base model #2 is pre-trained on BookCorpus and English Wikipedia, see (Yang et al., 2019). Detailed training procedures are provided in the sup. materials. For base model #2, the selective text normalisation is not tested due to lack of time for fine-tuning XLNet. The attention is stimulated in the final self-attention layer only, and future experiments may test other locations.
We divide the corpus into 30% of users for training, 70% for testing, and 30% of training for validation, using a similar ratio to Inches & Crestani (2012). This division based on users ensures that the model may not recognise the specific language of a groomer, but focuses on trends in OG language. OG classification is evaluated by: precision, recall, area under the precision-recall curve (AUPR), F1 score, and the F0.5 score used in (Inches & Crestani, 2012) to weight the precision metric higher. The effects of selective text normalisation are further measured by the proportion of distance reduction between selected variants, $\Delta D = \frac{1}{N} \sum_k \frac{|D(v_k^i, v_k^j)^{new} - D(v_k^i, v_k^j)^{old}|}{D(v_k^i, v_k^j)^{old}}$, and the average resulting distance $\bar{D} = \frac{1}{N} \sum_k D(v_k^i, v_k^j)$. Accuracy is provided in the sup. materials." }, { "heading": "5.1 EVALUATIONS OF THE INDIVIDUAL CL-KNOWLEDGE INTEGRATION STRATEGIES", "text": "We evaluate the individual effects of the different CL augmentations in Table 1, in comparison to non-augmented models. We also try combining the two supervised and excitation-based methods for stimulating attention and LSTM input gates. For a fairer comparison of the selective text normalisations, their modulation parameters $\lambda$ are approximately adjusted to provide a loosely similar $\Delta D$, as reported in the sup. materials. Base model #1 (before augmentations) obtains similarly good results using both word embeddings, even though GloVe encompasses more words in a larger embedding, with a resulting larger $\bar{D}$. Base model #1 generally responds well to all CL-integrations (with some implementations of the selective text normalisation performing better than others, as discussed next). The selective text normalisation is less effective on GloVe, maybe due to a more drastic reduction of its larger initial distances. A grid search on $\lambda$ may be performed in the future to investigate this behaviour. Base model #2 outperforms base model #1, which confirms XLNet's status as one of the NLP SoTA. It also responds well to augmentations based on knowledge of OG processes, with all metrics consistently improved. Thus, the selective text normalisation for XLNet's embedding for word semantics (i.e. before self-attention layers) remains an interesting strategy to evaluate in future experiments.
Among the 3 proposed implementations of selective text normalisation, only the pulling version provided an improvement, while the other two hindered OG classification in spite of a smaller reduction of distances between selected variants. For the supervised approach, this may be explained by the new loss term conflicting with the original word embedding loss. For manifold learning, although the algorithm preserves pairwise distances by design (as verified in sup. materials), this does not seem enough to fully preserve the semantic representation power of the word embedding.
On the other hand, the more gentle elastic pulling could preserve the original semantic representation of the word embedding while introducing an implicit normalisation of the selected word variants that supports OG classification.
It is worth noting, for base model #1, that $\bar{D}$, the average distance between the representations of two selected variants, is 3.72, higher than the average distance between all other pairs of words, which is 2.86.
Therefore, base model #1, even though fully trained on OG classification, was not able to discover on its own the knowledge that some variants have the same semantic meaning while not being discriminative for the OG classification task, and could therefore have the same or similar representations. This, together with the improved results from modifying the word embedding, demonstrates the usefulness of integrating this knowledge into the model.
All integrations of knowledge on OG processes improved the performance of the models. This demonstrates that focusing the DNNs' attention on the language associated with OG processes does help capture subtleties of grooming language. In addition, when exploring the attention energies of (non-augmented) base model #1, we observe that the contexts that the model learnt to focus on are not related to our labelled instances of OG processes: the average (std) attention energy for these instances is 0.0009 (0.0002), lower than the energy across all conversations at 0.0016 (0.0128). A similar observation is made for base model #2, where tokens' energies are obtained from the last self-attention layer, similarly to (Sood et al., 2020), by retaining the max pairwise energy for each token (row) and normalising by the sum of retained energies. This is done for each attention head, before averaging across heads. The resulting average (std) energy for our instances of OG processes is 0.110 (0.072), slightly lower than the energy across all conversations at 0.120 (0.088). Thus, neither model was able to discover on its own the sub-goals that the CL analysis of Lorenzo-Dus et al. (2016) identified, and their associated language. This knowledge is therefore an added value for the models, as also demonstrated by the improved results. The 3 strategies seem roughly equally helpful at focusing the DNN's attention and capturing the subtleties of grooming language, and future work will explore their combinations. Improvements are more consistent for AUPR and precision (and consequently F0.5), thanks to fewer false positives. This reduction in false positives may be due to an easier distinction of OG conversations from neutral but sexually-oriented ones.
For both stimulation strategies (attention mechanism and LSTM input gates), combining the supervision and excitation approaches provides better results than using them individually. This suggests that these two processes support each other during optimisation. Indeed, improved DNN attention (expressed in $e_t$ and $i_t$) from excitation may assist with the supervised attention task. In addition, improved attention from supervision may also reinforce the excitation and allow it to work at its best." }, { "heading": "5.2 OG CLASSIFICATION PERFORMANCE", "text": "The prior integration methods are combined into fully augmented models #1 and #2. All algorithms for selective text normalisation have the same aim, thus we only retain the best-performing elastic pulling. As suggested by the previous discussion on the supervision-excitation symbiosis, the different strategies and their implementations for integrating knowledge on OG processes may be complementary. Thus, we use all 3 strategies, combining supervision and excitation for both stimulation strategies, and choosing excitation B over A due to its better results. For augmented model #2, only the augmentations tested individually in Table 1 are used.
Comparative results on OG classification are provided in Table 2, against baselines and SoTA NLP models.
Although base model #1 does slightly worse than (Liu et al., 2017), its augmented version outperforms it by a clear margin. XLNet of base model #2 is the best-performing of the non-CL-augmented models. Its augmentation by the combined integration of CL knowledge on word variants and OG processes significantly improves its performance and produces the new SoTA.
For both base models, the combined augmentations (Table 2) add up to improvements that are superior to those of individual augmentations (Table 1). This is particularly true for augmented model #1, which accumulates all 4 augmentations, while augmented model #2 is limited to 3. Its SoTA performance may be further improved in the future through adding the selective text normalisation. In the sup. materials, we further explore how the individual augmentations add up through their progressive additions to a simple LSTM model. We observe that their respective benefits are complementary.
In order to verify that the improved results do come from a better understanding of language provided by CL knowledge, rather than merely from additional regularisation, we also compare against L1 and L2 regularised versions of both base models. Although regularisation does improve the results, the performance gains from integrating CL knowledge are significantly superior for both models.
Visualisation – Since the augmented models make use of OG process recognition to capture the language associated with grooming, their auxiliary OG process detection may be used to highlight, at word level, those parts of the conversation that the model associates with OG processes. These are visualised in Fig. 2, where the estimated likelihood of the Compliance testing process is indicated in shades of red. The DNN focused on questions about the personal situation and on invitations to talk over the phone as indicators of compliance testing happening, in line with our general understanding of this OG process. While these elements of discussion may seem neutral enough and may not be captured by generic OG classifiers, the DNN's understanding of OG processes and of their relation to OG made it increase the OG classification score at each detection of an OG process. In future work, a similar visualisation could be performed using the attention energies $e_t$ and LSTM input gate activations $i_t$, to assess the better capturing of subtle language clues provided by the two other strategies for integrating knowledge on OG processes, and their effect on OG classification." }, { "heading": "6 CONCLUSION", "text": "We have explored the integration of CL knowledge in a hybrid (data- and knowledge-driven) DNN. We considered two types of CL knowledge, namely: 1) variants of semantically equivalent words that are, or are not, discriminative of OG, and which we use to perform a selective text normalisation in support of classification. Existing normalisation methods would apply to the full text with no such distinction, thus failing to provide this support. 2) The identification of some OG processes and their associated language, which we use to focus the attention of the DNN on subtle language clues. We compared several integration approaches, including a new method for stimulating an LSTM's attention directly through its input gates, without the need for an external attention mechanism.
For our final augmented model, we selected the gentle pulling method for selective text normalisation, as well as a combination of auxiliary tasks, and supervision and excitation for stimulated attention and stimulated LSTM, whose benefits add up to produce the SoTA. We demonstrated the general applicability of our approaches on two architecture types, recurrent and transformer neural networks, and two word embeddings of different complexities. Our results show performance improvements over base and SoTA models for both architectures, while allowing for a CL-based interpretation of the classification decision through visualisation of predicted OG processes and DNN attention.
While we have demonstrated the applicability of these methodologies for CL knowledge on OG, we see the potential for other domains that utilise similar representations or model architectures (see discussion in Section 4). The selective text normalisation that we propose is more generally applicable to other classification tasks on chat conversations, and its proposed implementations are usable on other DNNs that use a word embedding to capture word semantics. The decomposition of conversations into sub-goals may be obtained from CL studies on other applications. Some of the proposed strategies may also allow the integration of other (non-CL) domain knowledge. It may be generally useful to estimate auxiliary quantities that are known to be relevant to the task and that may usefully constrain the DNN's attention and learnt features. Our two other methods for focusing the DNN's attention (i.e. stimulated attention and stimulated LSTM input gates) may be generally used to give more weight to important elements of the training data.
Perspectives for OG prevention – The proposed OG classification method has been designed based on requirements from specialised law enforcement to assist in the investigation of large quantities of chat logs. Its intended usage is to facilitate triage by law enforcement of digital materials that could be seized from suspected offenders after enough evidence allowed launching the procedure. Flagged conversations are to be investigated more thoroughly by a trained human operator following law enforcement's strict robustness and security protocols to ensure integrity. Within this usage scenario, there is, therefore, no risk of innocent people being automatically prosecuted. The aim of our work is not to address the possible biases in the human decision, which are addressed by law enforcement's protocols. However, the proposed visualisation, which helps focus on key aspects of the conversation, together with the reduced workload and associated lowered time pressure, may allow a more thorough and fairer investigation of the flagged conversations. Mitigation measures should be put in place, but these are outside the scope of this work." } ]
2020
null
SP:98871703cab28ed757a6ea54eea0407621624d62
[ "The paper presents a method to combine graph convolutional neural networks (GCNs) with generative adversarial networks (GANs). The authors focus on the problem of semi-supervised learning on graphs and propose an end-to-end framework in which the generative model is followed by direct convolutions on the graph nodes. Experiments are conducted on standard benchmark datasets and the proposed method, GraphCGAN is compared against several state-of-the-art approaches." ]
Graph convolutional networks (GCN) have achieved superior performance in graph-based semi-supervised learning (SSL) tasks. Generative adversarial networks (GAN) also show the ability to increase performance in SSL. However, there is still no good way to combine GAN and GCN in graph-based SSL tasks. In this work, we present GraphCGAN, a novel framework to incorporate adversarial learning with convolution-based graph neural networks, to operate on graph-structured data. In GraphCGAN, we show that the generator can generate the topology structure and attributes/features of fake nodes jointly and boost the performance of the convolution-based graph neural network classifier. In a number of experiments on benchmark datasets, we show that the proposed GraphCGAN outperforms the reference methods by a significant margin.
[]
[ { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "Journal of machine learning research,", "year": 2006 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Fan Yang", "William W Cohen", "Russ R Salakhutdinov" ], "title": "Good semisupervised learning that requires a bad gan", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "Molgan: An implicit generative model for small molecular graphs", "venue": "arXiv preprint arXiv:1805.11973,", "year": 2018 }, { "authors": [ "Ming Ding", "Jie Tang", "Jie Zhang" ], "title": "Semi-supervised learning on graphs with generative adversarial nets", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Tom Fletcher" ], "title": "Semi-supervised learning with gans: Manifold invariance with improved inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Guohao Li", "Matthias Muller", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Can gcns go as deep as cnns", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Wenyuan Li", "Zichen Wang", "Jiayun Li", "Jennifer Polson", "William Speier", "Corey W Arnold" ], "title": "Semisupervised learning based on generative adversarial network: a comparison between good gan and bad gan approach", "venue": "In CVPR Workshops,", "year": 2019 }, { "authors": [ "Qing Lu", "Lise Getoor" ], "title": "Link-based classification", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Zaiqiao Meng", "Shangsong Liang", "Jinyuan Fang", "Teng Xiao" ], "title": "Semi-supervisedly co-embedding attributed networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Augustus Odena" ], "title": "Semi-supervised learning with generative adversarial networks", "venue": "arXiv preprint arXiv:1606.01583,", "year": 2016 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", 
"Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Laurens Van Der Maaten" ], "title": "Accelerating t-sne using tree-based algorithms", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "arXiv preprint arXiv:1809.10341,", "year": 2018 }, { "authors": [ "Hongwei Wang", "Jia Wang", "Jialin Wang", "Miao Zhao", "Weinan Zhang", "Fuzheng Zhang", "Xing Xie", "Minyi Guo" ], "title": "Graphgan: Graph representation learning with generative adversarial nets", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jason Weston", "Frédéric Ratle", "Hossein Mobahi", "Ronan Collobert" ], "title": "Deep learning via semisupervised embedding", "venue": "In Neural Networks: Tricks of the Trade,", "year": 2012 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": null, "year": 1901 }, { "authors": [ "Zhu Xiaojin", "Ghahramani Zoubin" ], "title": "Learning from labeled and unlabeled data with label propagation", "venue": null, "year": 2002 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Zhilin Yang", "William W Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "arXiv preprint arXiv:1603.08861,", "year": 2016 }, { "authors": [ "Wayne W Zachary" ], "title": "An information flow model for conflict and fission in small groups", "venue": "Journal of anthropological research,", "year": 1977 }, { "authors": [ "Jiani Zhang", "Xingjian Shi", "Junyuan Xie", "Hao Ma", "Irwin King", "Dit-Yan Yeung" ], "title": "Gaan: Gated attention networks for learning on large and spatiotemporal graphs", "venue": "arXiv preprint arXiv:1803.07294,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph-based semi-supervised learning (SSL) aims to classify nodes in graph, where only small amounts of nodes are labeled due to the expensive and time-consuming label collection process. To solve such task, various graph neural networks (GNNs) have been proposed using the idea of convolutional neural networks (CNN) to implicitly propagate the information of labeled nodes to unlabeled nodes through the linkage between nodes (Kipf & Welling, 2016; Veličković et al., 2017; Hamilton et al., 2017). These convolution-based graph neural networks have achieved superior performance on multiple benchmark datasets in graph-based SSL tasks (Wu et al., 2019).\nRecently, generative adversarial networks (GANs) (Goodfellow et al., 2014) have been shown a power in improving the performance of image-based SSL problems (Odena, 2016; Salimans et al., 2016; Li et al., 2019b). In semi-GAN (Salimans et al., 2016), authors converted the M -class classification task into solving (M + 1)-class problem where the synthetic (M + 1)th class is generated by the GAN’s generator. Later on, Dai et al. provided a theoretical insight that the generated data are able to boost the performance of classifier under certain assumptions. Our work is motivated by the the semi-GAN.\nGraphSGAN (Ding et al., 2018) first investigated the adversarial learning over graph, where the graph is embedding into an embedding space and synthetic data are generated in the corresponding space. The multi-layer perceptron (MLP) is trained as the classifier on the embedding vectors. However, to our knowledge, there is still no existed method to combine the adversarial learning to convolution-based GNNs on graph-based SSL task. In this work, we explore the potential of incorporating the convolution-based GNN and GAN. The challenges of constructing a general framework have three folds: first, the attributed graph data are non-Euclidean whose distribution contains information of graph topology structure as well as the attributes of nodes. Hence, it is not trivial to construct generator to model the distribution. Second, even the generator can model the graph’s distribution, the generator should be trained properly to boost the performance of the classifier. A poor-quality generator would introduce noise to the existed graph and affect the classifier. Third, many variants of GCN have been proposed continuously. The framework should be built with flexibility to adapt to different convolution-based GNNs.\nWe construct a novel approach called GraphCGAN to deal with above challenges. First, to model the distribution of graph, the generator is built sequentially from two sub-generators: one models\nthe attribute information (node’s attribute) and another one models the graph topology structure (adjacency relation of node). Details can be found in Section 3.1. Second, in GraphCGAN, the generator is trained based on the feature matching technique (Salimans et al., 2016) which minimizes the distance between generated nodes and real nodes in the constructed feature space. This technique showed a good performance in SSL tasks in practice. The details for construction of loss functions can be found in Section 3.3. For GCN, the attributes of nodes are aggregated convolutionally by multiple layers. The representation of the last layer is usually considered as the prediction for the labels. For variants of GCN, the main differences exist in the strategy of layer aggregation (Hamilton et al., 2017). 
In our framework, we choose the second to the last layer of the convolution-based GNN as the feature matching function. Therefore, our framework is easily extended to variants of GCN. More discussion can be found in Section 3.2." }, { "heading": "2 PRELIMINARY", "text": "We first introduce the notation for graphs. Let $G = (V, E)$ denote a graph, where $V$ is the set of nodes with $|V| = n$ and $E \subset V \times V$ is a set of edges with $|E| = m$. The adjacency matrix $A \in \mathbb{R}^{|V| \times |V|}$ is defined by $A_{ij} = 1$ if nodes $v_i$ and $v_j$ are connected by an edge, otherwise $A_{ij} = 0$. Suppose each node $v_i$ has a $d$-dimensional feature $x_i \in \mathbb{R}^d$ and a single-valued label $y_i \in \{1, 2, \ldots, M\}$. In the semi-supervised learning setting, there is a disjoint partition of the nodes, $V = V^L \cup V^U$, such that, for $v_i \in V^L$, the corresponding label is known, and for $v_j \in V^U$ the corresponding label is unknown. The distributions of nodes in the labeled set $V^L$ and the unlabeled set $V^U$ are denoted as $p_{V^L}$ and $p_{V^U}$, respectively. The semi-supervised learning task is to learn the labels for the unlabeled set, $\{y_j \mid v_j \in V^U\}$, given the adjacency matrix $A$, the feature matrix $X = [x_i]_{v_i \in V}$ and the labels for the labeled set, $\{y_i \mid v_i \in V^L\}$." }, { "heading": "2.1 CONVOLUTION BASED GRAPH NEURAL NETWORK CLASSIFIER", "text": "Based on Laplacian smoothing, the convolution-based GNN models propagate the information of node features across the nodes' neighbors in each layer. Specifically, in GCN, the layer-wise propagation rule can be defined as follows:
$$H^{(l+1)} = \sigma(D^{-1} A H^{(l)} W^{(l)} + b^{(l)}), \quad l = 0, 1, \ldots, L-1, \quad (1)$$
where $W^{(l)}$ and $b^{(l)}$ are the layer-specific trainable weight matrix and bias, respectively, and $\sigma(\cdot)$ is an activation function. $D$ is the diagonal degree matrix with $D_{ii} = \sum_j A_{ij}$. Hence, $D^{-1}A$ represents a normalization of the adjacency matrix $A$. The initial layer $H^{(0)}$ is the feature matrix $X$. The final layer $H^{(L)}$, followed by a softmax layer, can be viewed as the prediction of the one-hot representation of the true label $y$.
Recently, many variants of the GCN layer-wise propagation rule have been proposed, including the graph attention network and cluster GCN (Veličković et al., 2017; Chiang et al., 2019), which achieved state-of-the-art performances on many benchmark datasets." }, { "heading": "2.2 GENERATIVE ADVERSARIAL NETWORK BASED SEMI-SUPERVISED LEARNING", "text": "In semi-GAN, the classifier $C$ and generator $G$ play a non-cooperative game, where the classifier aims to classify the unlabeled data as well as distinguish the generated data from real data, and the generator attempts to match the features of real data and those of generated data. Therefore, the objective function for the classifier can be divided into two parts (Salimans et al., 2016). The first part is the supervised loss function $L_{sup} = \mathbb{E}_{v,y \sim p_{V^L}} \log P_C(y \mid v, y \le M)$, which is the log probability of the node label given the real nodes. The second part is the unsupervised loss function
$$L_{un\text{-}sup} = \mathbb{E}_{v \sim p_{V^U}} \log P_C(y \le M \mid v) + \mathbb{E}_{v \sim p_{V^G}} \log P_C(y = M+1 \mid v),$$
which is the sum of the log probability of the first $M$ classes for real nodes and the log probability of the $(M+1)$-th class for generated nodes $V^G$. The classifier $C$ can be trained by maximizing the objective function
$$L_C = L_{sup} + L_{un\text{-}sup}. \quad (2)$$
For the objective function of the generator, Salimans et al. (2016) found that minimizing the feature matching loss in Equation 3 achieved superior performance in practice:
$$L_G = \|\mathbb{E}_{v \sim p_{V^U}}(f(v)) - \mathbb{E}_{z \sim p_z(z)}(f(G(z)))\|_2^2, \quad (3)$$
where the feature matching function $f(\cdot)$ maps the input into a feature space and $z \sim p_z(z)$ is drawn from a given distribution, such as the uniform distribution.
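As an illustration of the unsupervised part of the classifier objective (Equation 2), a minimal PyTorch sketch is given below. It assumes that the classifier outputs $(M+1)$ logits per node, with the last column reserved for the fake class (cf. Equation 8 in Section 3.2); this is a sketch, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def classifier_unsup_loss(logits_real, logits_fake):
    """Unsupervised part of the semi-GAN classifier objective (Equation 2).
    Real nodes should fall into one of the first M classes, generated nodes
    into the (M+1)-th class; the logits have M+1 columns, the last one
    being the fake class."""
    log_p_real = F.log_softmax(logits_real, dim=1)
    log_p_fake = F.log_softmax(logits_fake, dim=1)
    # log P_C(y <= M | v): log of the total probability over real classes.
    real_term = torch.logsumexp(log_p_real[:, :-1], dim=1).mean()
    # log P_C(y = M+1 | v) for generated nodes.
    fake_term = log_p_fake[:, -1].mean()
    return -(real_term + fake_term)  # minimize the negative to maximize L_C
```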
Furthermore, Dai et al. (2017) provided a theoretical justification that a complementary generator $G$ is able to boost the performance of the classifier $C$ in SSL tasks." }, { "heading": "3 FRAMEWORK OF GRAPHCGAN", "text": "To combine the aforementioned Laplacian smoothing on graphs and semi-GAN on SSL, we develop the GraphCGAN model, using generated nodes to boost the performance of convolution-based GNN models." }, { "heading": "3.1 CONSTRUCTION OF GENERATOR FOR GRAPHCGAN", "text": "The generator $G$ generates a fake node $v_0$ by generating a feature vector $x_0 \in \mathbb{R}^d$ and an adjacency relation $a_0 \in \mathbb{R}^n$ jointly, where $a_{0,i} = 1$ if the fake node is connected to real node $v_i$, otherwise $a_{0,i} = 0$. Therefore, the distribution of the generated node, $p_G(v_0)$, can be expressed by the joint distribution of the corresponding feature and adjacency relation, $p_G(x_0, a_0)$. From the conditional distribution formula, the joint distribution can be written as $p_G(x_0, a_0) = p_{G_1}(x_0)\, p_{G_2}(a_0 \mid x_0)$. We use sub-generators $G_1$ and $G_2$ to generate the fake feature $x_0$ and $a_0 \mid x_0$, respectively. In practice, $a_0 \mid x_0$ can be modeled by $G_2(z; x_0) = G_2(z; G_1(z))$, where the adjacency relation $a_0$ is constructed by sub-generator $G_2$ given the input $x_0$. The distribution of the generated node can be denoted by
$$p_G(v_0) = p_G(x_0, a_0) = p_G(x_0)\, p(a_0 \mid x_0) = p(G_1(z))\, p(G_2(z; G_1(z))) =: p(G(z)). \quad (4)$$
If $B$ nodes $(v_{0,1}, v_{0,2}, \ldots, v_{0,B})$ are generated, the generated feature matrix is denoted as $X_0 = (x_{0,1}^T, x_{0,2}^T, \ldots, x_{0,B}^T)^T$ and the generated adjacency matrix has the form $A_0 = (a_{0,1}^T, a_{0,2}^T, \ldots, a_{0,B}^T)^T$. Hence, the combined adjacency matrix can be denoted as
$$\tilde{A} = \begin{bmatrix} A & A_0^T \\ A_0 & I_B \end{bmatrix} \in \mathbb{R}^{(n+B) \times (n+B)}, \quad (5)$$
and the combined feature matrix is
$$\tilde{X} = \begin{bmatrix} X \\ X_0 \end{bmatrix} \in \mathbb{R}^{(n+B) \times d}. \quad (6)$$
The diagonal degree matrix $\tilde{D} \in \mathbb{R}^{(n+B) \times (n+B)}$ can be denoted as $\begin{bmatrix} D_* & 0 \\ 0 & D_B \end{bmatrix}$, where $D_* \in \mathbb{R}^{n \times n}$ with $D_{*,ii} = \sum_j A_{ij} + \sum_b A_{0,bi}$, and $D_B \in \mathbb{R}^{B \times B}$ with $D_{B,bb} = \sum_j A_{0,bj} + 1$." }, { "heading": "3.2 ANALYSIS OF CLASSIFIER FOR GRAPHCGAN", "text": "In GraphCGAN, we adopt a convolution-based GNN, such as GCN, GraphSage (Hamilton et al., 2017) or GAT (Veličković et al., 2017), as the classifier. The classifier is applied to the enlarged graph $\tilde{G} = [\tilde{X}, \tilde{A}]$ to obtain the prediction $\tilde{y}$ for the nodes $V \cup V^G$.
Specifically, considering the layer-wise propagation of GCN (Equation 1) as the classifier in GraphCGAN, the propagation rule can be denoted as
$$\tilde{H}^{(l+1)} = \sigma(\tilde{D}^{-1} \tilde{A} \tilde{H}^{(l)} W^{(l)} + \tilde{b}^{(l)}) = \sigma\left( \begin{bmatrix} D_*^{-1} & 0 \\ 0 & D_B^{-1} \end{bmatrix} \begin{bmatrix} A & A_0^T \\ A_0 & I_B \end{bmatrix} \begin{bmatrix} H_*^{(l)} \\ H_0^{(l)} \end{bmatrix} W^{(l)} + \begin{bmatrix} b^{(l)} \\ b_B^{(l)} \end{bmatrix} \right) = \sigma\left( \begin{bmatrix} D_*^{-1} A H_*^{(l)} + D_*^{-1} A_0^T H_0^{(l)} \\ D_B^{-1} A_0 H_*^{(l)} + D_B^{-1} H_0^{(l)} \end{bmatrix} W^{(l)} + \begin{bmatrix} b^{(l)} \\ b_B^{(l)} \end{bmatrix} \right) = \sigma\left( \begin{bmatrix} D_*^{-1} A H_*^{(l)} W^{(l)} + b_*^{(l)} \\ (D_B^{-1} A_0 H_*^{(l)} + D_B^{-1} H_0^{(l)}) W^{(l)} + b_B^{(l)} \end{bmatrix} \right) =: \begin{bmatrix} H_*^{(l+1)} \\ H_0^{(l+1)} \end{bmatrix}, \quad (7)$$
where the first layer is chosen as the enlarged feature matrix, $\tilde{H}^{(0)} = \tilde{X}$. The weight matrix $W^{(l)}$ is the same as in Equation 1. The bias vector $\tilde{b}^{(l)}$ has dimension $(n+B)$ and is denoted as $[b^{(l)T}, b_B^{(l)T}]^T$. We denote $b_*^{(l)} = D_*^{-1} A_0^T H_0^{(l)} W^{(l)} + b^{(l)}$ to make the format clear. From Equation 7, the layer propagation of the real nodes (first $n$ rows) follows the same format as the GCN layer propagation in Equation 1. As a special case, for the zero generator $A_0 = 0$ or $X_0 = 0$, the performance of the classifier on $V \cup V^G$ would be the same as that of the original classifier on $V$.
For the last layer $\tilde{H}^{(L)} \in \mathbb{R}^{(n+B) \times M}$, we adopt the strategy in Salimans et al. (2016) to obtain the $(M+1)$-class label $\tilde{y}$ by
$$\tilde{y} = \mathrm{softmax}(\tilde{H}^{(L)} \,\|\, 0_{(n+B) \times 1}), \quad (8)$$
where $\|$ denotes concatenation and $0_{(n+B) \times 1} \in \mathbb{R}^{(n+B) \times 1}$ is a zero matrix.
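A minimal sketch of the graph-enlargement step (Equations 5-6) and of the $(M+1)$-class logit construction (Equation 8) is shown below, using dense NumPy arrays for clarity; actual implementations would use sparse matrices.

```python
import numpy as np

def combine_graph(A, X, A0, X0):
    """Attach B generated nodes to the real graph: combined adjacency matrix
    of Equation 5 and combined feature matrix of Equation 6."""
    B = A0.shape[0]
    A_tilde = np.block([[A, A0.T],
                        [A0, np.eye(B)]])  # fake nodes carry self-loops I_B
    X_tilde = np.vstack([X, X0])
    return A_tilde, X_tilde

def m_plus_one_logits(H_L):
    """Append a fixed zero logit as the (M+1)-th 'fake' class before the
    softmax, as in Equation 8."""
    return np.hstack([H_L, np.zeros((H_L.shape[0], 1))])
```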
The loss function for the classifier in GraphCGAN follows the same format as Equation 2." }, { "heading": "3.3 LOSS FUNCTIONS", "text": "Let us denote $g(\cdot, \cdot; \theta_C)$ as the map from the feature vector and adjacency vector to the space of the second to the last layer in the convolution-based GNN, with trainable parameters $\theta_C$. Specifically, in the case of GCN, for node $v_i$ with feature vector $x_i$ and adjacency vector $a_i$,
$$g(x_i, a_i; \theta_C) = \tilde{H}_i^{(L-1)}, \quad (9)$$
where $\tilde{H}_i^{(L-1)}$ denotes the $i$-th row of $\tilde{H}^{(L-1)}$ and $\theta_C = [W^{(l)}; \tilde{b}^{(l)}]_{l=0,1,\ldots,L-2}$.
According to Equation 4, the loss function of the generator $G$ can be decomposed into two parts: the loss functions of the sub-generators $G_1$ and $G_2$ separately. To construct $G_1$, the feature matching function $f$ in Equation 3 should depend solely on the feature vector. Therefore, we mask the adjacency matrix $\tilde{A}$ as the identity matrix $\tilde{I} \in \mathbb{R}^{(n+B) \times (n+B)}$ in the layer propagation. Formally, the feature matching loss function of $G_1$ is constructed as
$$L_{G_1} = \|\mathbb{E}_{x_i}(g(x_i, I_i; \theta_C)) - \mathbb{E}_{z \sim p_z(z)}(g(G_1(z), 0; \theta_C))\|_2^2,$$
where $I_i$ denotes the $i$-th row of the identity matrix $I \in \mathbb{R}^{n \times n}$ and $0$ is the zero vector. After $x_0 = G_1(z)$ is built, the feature matching loss function of $G_2$ can be constructed similarly from
$$L_{G_2} = \|\mathbb{E}_{a_i}(g(x_i, a_i; \theta_C)) - \mathbb{E}_{z \sim p_z(z)}(g(x_0, G_2(z); \theta_C))\|_2^2.$$
Therefore, the loss function for $G$ can be written as
$$L_G = L_{G_1} + L_{G_2}. \quad (10)$$
Furthermore, when multiple fake nodes are generated, Salimans et al. (2016) showed that adding a pull-away term to the loss function can increase the entropy of the generator, which led to better performance in practice. The pull-away losses for the sub-generators $G_1$ and $G_2$ can be denoted as
$$L_{G_1}^{pt} = \frac{1}{B(B-1)} \sum_{i=1}^{B} \sum_{j \ne i} \frac{g(G_1(z_i), 0; \theta_C)^T g(G_1(z_j), 0; \theta_C)}{\|g(G_1(z_i), 0; \theta_C)\| \, \|g(G_1(z_j), 0; \theta_C)\|}$$
and
$$L_{G_2}^{pt} = \frac{1}{B(B-1)} \sum_{i=1}^{B} \sum_{j \ne i} \frac{g(x_{0,i}, G_2(z_i); \theta_C)^T g(x_{0,j}, G_2(z_j); \theta_C)}{\|g(x_{0,i}, G_2(z_i); \theta_C)\| \, \|g(x_{0,j}, G_2(z_j); \theta_C)\|}.$$
The loss function for $G$ with the pull-away term can be written as
$$L_G^* = L_G + L_{G_1}^{pt} + L_{G_2}^{pt}. \quad (11)$$
Besides, Dai et al. (2017) constructed the complementary loss by
$$L_{G_1}^c = \mathbb{E}_{x \sim p_{G_1}} \log(p(x))\, \mathbb{I}(p(x) > \epsilon), \quad L_{G_2}^c = \mathbb{E}_{a \sim p_{G_2}} \log(p(a))\, \mathbb{I}(p(a) > \epsilon),$$
which could also increase performance. Therefore, the loss function for $G$ with the complementary loss can be written as
$$L_G^{**} = L_G^* + L_{G_1}^c + L_{G_2}^c. \quad (12)$$
The procedure is formally presented in Algorithm 1.
Algorithm 1: GraphCGAN Algorithm
Input: Adjacency matrix $A$, node features $X$, initialized fake nodes $V^G = [A_0, X_0]$, and hyper-parameters including the dimension of the noise vector $d_{noise}$, the number of classifier steps $K_D$, the number of fake nodes $B$, and an early-stop error.
Output: Prediction $\tilde{Y}$
1. while not early stop do
2.   Combine the fake nodes $V^G$ with the graph and obtain $\tilde{A}$ and $\tilde{X}$ from Equation 5 and Equation 6;
3.   Classifier:
4.   $iter_D = 0$
5.   while $iter_D < K_D$ do
6.     Use the convolution-based GNN as the classifier $C$, and extract the map to the intermediate layer $g(\cdot, \cdot)$ as in Equation 9;
7.     Train $C$ by minimizing $L_C$ (Equation 2) on the combined graph, obtain the predicted result $\tilde{Y}$; $iter_D = iter_D + 1$;
8.   Generator:
9.   Generate a random noise matrix $Z \sim U(0, I) \in \mathbb{R}^{B \times d_{noise}}$;
10.  Train the generator $G = [G_1; G_2]$ by minimizing Equation 10, Equation 11 or Equation 12;
11.  Obtain $X_0 = G_1(Z)$ and $A_0 = G_2(Z; G_1(Z))$." },
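For illustration, a minimal PyTorch sketch of the feature-matching loss (used in $L_{G_1}$ and $L_{G_2}$) and of the pull-away term is given below; it operates on precomputed intermediate features $g(\cdot, \cdot; \theta_C)$ and is a sketch of the equations above, not the authors' code.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(g_real, g_fake):
    """Feature-matching loss used for L_G1 and L_G2: squared L2 distance
    between the mean intermediate-layer features of real and fake nodes."""
    return (g_real.mean(dim=0) - g_fake.mean(dim=0)).pow(2).sum()

def pull_away_loss(g_fake):
    """Pull-away term over the B generated nodes (Equation 11), i.e. the
    average pairwise cosine similarity of their features, which encourages
    diverse samples when minimized."""
    B = g_fake.size(0)
    g = F.normalize(g_fake, dim=1)
    cos = g @ g.t()
    off_diag = cos - torch.diag(torch.diag(cos))
    return off_diag.sum() / (B * (B - 1))
```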
{ "heading": "4 RELATED WORK", "text": "" }, { "heading": "4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING", "text": "The challenge for graph-based SSL is to leverage unlabeled data to improve classification performance. There are three categories of graph-based semi-supervised learning methods. The first one is the Laplacian regularization-based methods (Xiaojin & Zoubin, 2002; Lu & Getoor, 2003; Belkin et al., 2006). The second type is the embedding-based methods, including DeepWalk (Perozzi et al., 2014), SemiEmb (Weston et al., 2012), and Planetoid (Yang et al., 2016). The third type is the convolution-based graph neural networks, such as GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017), ClusterGCN (Chiang et al., 2019) and DeepGCN (Li et al., 2019a). Such methods address semi-supervised learning in an end-to-end manner. Convolution-based methods perform the graph convolution by taking the weighted average of a node's neighborhood information. In many graph semi-supervised learning tasks, the convolution-based methods achieved state-of-the-art performance (Wu et al., 2019)." }, { "heading": "4.2 GNN LEARNING WITH GAN", "text": "GAN is widely used in obtaining generative graph models. GraphGAN (Wang et al., 2018) proposed a framework for the graph embedding task. Specifically, GraphGAN can generate the link relations for a center node. However, GraphGAN cannot be applied to attributed graphs.
MolGAN (De Cao & Kipf, 2018) proposed a framework for generating the attributed graphs of molecules by generating the adjacency matrix and feature matrix independently. After that, MolGAN used the score of the generated molecule as a reward function to choose a reasonable combination of attributes and topology structure via an auxiliary reinforcement learning model. In comparison, GraphCGAN can generate the attributes and adjacency matrix of the attributed graph jointly, which can capture the correlation between the attributes and the topology relation.
DGI (Veličković et al., 2018) proposed a general approach for learning node representations within graph-structured data in an unsupervised manner. For the generator, in DGI, the fake nodes are created from a pre-specified corruption function applied to the original nodes. In contrast, our GraphCGAN generates the fake nodes from a dynamic generator during the GAN training process. For the classifier, DGI uses GCN only; our GraphCGAN, however, is flexible and adaptive to other convolution-based GNN models." }, { "heading": "4.3 GAN WITH SEMI-SUPERVISED LEARNING", "text": "SGAN (Odena, 2016) first introduced adversarial learning to semi-supervised learning on image classification tasks. GAN-FM (Salimans et al., 2016) stabilized the training process of SGAN by introducing feature-matching and minibatch techniques. In Kumar et al. (2017), the authors discuss the effects of adding fake samples and claim that a moderate amount of fake samples can improve performance in image classification tasks.
GraphSGAN (Ding et al., 2018) proposed a framework combining a graph Laplacian regularization-based classifier with GAN to solve graph-based semi-supervised learning tasks. In GraphSGAN, fake samples are generated in the feature space of a hidden layer, hence it cannot be applied to convolution-based classifiers. In contrast, our model generates fake nodes directly and is adaptive to convolution-based classifiers." }, { "heading": "5 EXPERIMENTS", "text": "In this section, our primary goal is to show that adversarial learning can boost the performance of convolution-based GNNs in graph-based SSL under our framework. We evaluate GraphCGAN on established graph-based benchmark tasks against baseline convolution-based GNN models and some other related methods. We first introduce the dataset, experiment setup and results.
We then study the properties of the nodes generated by our model during the training process. An ablation study is also provided in this section. The code GraphCGAN-ICLR.zip is provided as the supplementary file." }, { "heading": "5.1 DATASETS", "text": "Three standard citation network benchmark datasets, Cora, Citeseer and Pubmed (Sen et al., 2008), are analyzed. We closely follow the setting of Kipf & Welling (2016) and Veličković et al. (2017), which allows only 20 nodes per class to be used for training. The predictive power of the trained models is evaluated on 1000 test nodes, and 500 additional nodes are used for validation purposes." }, { "heading": "5.2 EXPERIMENT SETUP AND RESULTS", "text": "Two widely used convolution-based GNNs, GCN and GAT, are considered as classifiers in GraphCGAN. To show that the generated nodes can help improve the performance of these methods, we adopt the same model settings as in the original papers (Kipf & Welling, 2016; Veličković et al., 2017). Specifically, for the classifier in GraphCGAN-GCN, the number of layers L is 2, the dimension of the hidden layer is 16, the dropout rate is 0.5, and the activation function in the hidden layer is ReLU. For GraphCGAN-GAT, the number of layers L is 2, the dimension of the hidden layer is 8, the number of attention heads is 8, the dropout rate is 0.6, and the activation function in the hidden layer is sigmoid. The hyper-parameter for the weight of the L2 regularization is 5e-4. For the generator, we use the loss function in Equation 12 (an ablation study for the generator loss function can be found in Table 2, Appendix A). On Cora and Citeseer, we generate B = 64 fake nodes; on Pubmed, the number of fake nodes is B = 256. An ablation study on the number of fake nodes is provided in Figure 1.

The results are presented in Table 1; the best and second-best results are marked in bold. We particularly note that GraphCGAN-GCN and GraphCGAN-GAT outperform GCN and GAT, respectively, by a significant margin. More specifically, we improve upon GCN by margins of 0.9%, 2.3% and 0.9% on Cora, Citeseer and Pubmed, respectively. In addition, GraphCGAN-GAT improves upon GAT by margins of 1.0%, 0.7% and 1.7%, suggesting that the fake-node strategy in our GraphCGAN model can boost the performance of the reference convolution-based GNN model. Note that GraphCGAN can easily be extended to other convolution-based GNN models." }, { "heading": "5.3 VISUALIZATION OF GAN PROCESS", "text": "In this subsection, we investigate the distribution of the generated nodes during the training process. We consider three datasets that illustrate the generated nodes from different perspectives. The Karate club graph (Zachary, 1977) contains 34 nodes without features; the feature matrix $\tilde{X}$ is set to the identity matrix during the training process. Therefore, the plot of the fake nodes (first row in Figure 2) shows the distribution of $G_2(z; I)$. We find that, after training, the fake nodes mainly connect to the boundary nodes,1 which is the preferred behavior as discussed in GraphSGAN (Ding et al., 2018). The MNIST dataset (LeCun et al., 1998) contains images of handwritten digits. We can consider it as a graph with image features by constructing an identity adjacency matrix $\tilde{A} = \tilde{I}$. Therefore, the plot of the fake features (second row in Figure 2) shows the distribution of $G_1(z)$, which resembles the digit eight.

1Boundary nodes are nodes connected to different clusters.
Last, we generated B = 256 nodes for the Cora dataset, which are plotted in two dimensions using t-SNE (Van Der Maaten, 2014) on the feature space of $g(\cdot,\cdot;\theta_C)$, as shown in the third row of Figure 2; this plot can be considered the distribution of $G(z)$. We find that the generated nodes form a complementary part to the existing nodes." }, { "heading": "6 CONCLUSION", "text": "We propose GraphCGAN, a novel framework to improve convolution-based GNNs using GANs. In GraphCGAN, we design a generator for attributed graphs that produces the adjacency matrix and the features jointly. We also provide new insight into semi-supervised learning with convolutional graph neural networks under a GAN structure. A flexible algorithm is proposed that can easily be extended to other sophisticated convolution-based classifier architectures in GraphCGAN, such as GAAN (Zhang et al., 2018) and GIN (Xu et al., 2018).

One potential future direction is to extend GraphCGAN to other relevant tasks, including community detection, co-embedding of attributed networks (Meng et al., 2019), and even graph classification. Extending the model to incorporate edge features by generating fake edges would allow us to tackle a larger set of problems. Finally, the stability of the GAN training process remains to be studied." }, { "heading": "A ABLATION STUDY FOR LOSS FUNCTION OF GENERATOR", "text": "" } ]
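To make the generator objective of Section 3.3 concrete, the following is a minimal PyTorch-style sketch of the feature-matching losses ($\mathcal{L}_{G_1}$, $\mathcal{L}_{G_2}$, Equation 10) and the pull-away term (Equation 11), as used in the loop of Algorithm 1. This is an illustration under our own assumptions rather than the authors' released code: the inputs are assumed to be batches of intermediate representations already produced by the map $g(\cdot,\cdot;\theta_C)$, and all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(g_real, g_fake):
    # || E[g(real)] - E[g(fake)] ||_2^2, with expectations replaced by
    # batch means, as in the losses L_{G1} and L_{G2} above.
    return (g_real.mean(dim=0) - g_fake.mean(dim=0)).pow(2).sum()

def pull_away_loss(g_fake):
    # Mean pairwise cosine similarity between distinct fake-node
    # representations (Equation 11); B is the number of fake nodes.
    B = g_fake.size(0)
    normed = F.normalize(g_fake, dim=1)
    cos = normed @ normed.t()                     # B x B cosine similarities
    off_diag = cos - torch.diag(torch.diag(cos))  # zero out the diagonal
    return off_diag.sum() / (B * (B - 1))

def generator_loss(gx_real, gx_fake, ga_real, ga_fake):
    # L_G  (Eq. 10): feature matching for G1 (features) and G2 (adjacency).
    # L_G* (Eq. 11): adds the pull-away terms for both sub-generators.
    loss_fm = (feature_matching_loss(gx_real, gx_fake)
               + feature_matching_loss(ga_real, ga_fake))
    loss_pt = pull_away_loss(gx_fake) + pull_away_loss(ga_fake)
    return loss_fm + loss_pt
```

Note that Salimans et al. (2016) square the cosine similarities in their pull-away term; either variant can be plugged into $\mathcal{L}^{*}_{G}$ in the same way.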
2,020
null
SP:1996387f48b0d87ffe78a2c08a08faeb618c2213
[ "This paper proposes a novel approach to learn an embedding of continuous time values and use an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. In particular, it proposes an mTAN network to leverage the mTAN module in an encoder-decoder framework for both unsupervised and supervised Learning. The main contribution of this paper is the introduction of Multi-Time Attention Networks to learns a time representation and learns to attend to observations at different time points by computing a similarity weighting by the learning time embedding. Empirical studies are performed to show the superiority of the proposed model mTANs over several baseline approaches on the tasks unsupervised and supervised learning. " ]
Irregular sampling occurs in many time series modeling applications where it presents a significant challenge to standard deep learning models. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. In this paper, we propose a new deep learning framework for this setting that we call Multi-Time Attention Networks. Multi-Time Attention Networks learn an embedding of continuous time values and use an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. We investigate the performance of this framework on interpolation and classification tasks using multiple datasets. Our results show that the proposed approach performs as well or better than a range of baseline and recently proposed models while offering significantly faster training times than current state-of-the-art methods.1
[ { "affiliations": [], "name": "Satya Narayan Shukla" }, { "affiliations": [], "name": "Benjamin M. Marlin" } ]
[ { "authors": [ "Xiangrui Cai", "Jinyang Gao", "Kee Yuan Ngiam", "Beng Chin Ooi", "Ying Zhang", "Xiaojie Yuan" ], "title": "Medical concept embedding with time-aware attention", "venue": null, "year": 2018 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific Reports,", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Junyoung Chung", "Çağlar Gülçehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv e-prints,", "year": 2014 }, { "authors": [ "J.S. Clark", "O.N. Bjørnstad" ], "title": "Population time series: process variability, observation errors, missing values, lags, and hidden states", "venue": "Ecology, 85(11):3140–3150,", "year": 2004 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "Gru-ode-bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Joseph Futoma", "Sanjay Hariharan", "Katherine A. Heller" ], "title": "Learning to detect sepsis with a multitask gaussian process RNN classifier", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Max Horn", "Michael Moor", "Christian Bock", "Bastian Rieck", "Karsten Borgwardt" ], "title": "Set functions for time series", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Lu Shen", "H Lehman Li-wei", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G Mark" ], "title": "Mimic-iii, a freely accessible critical care database", "venue": "Scientific Data,", "year": 2016 }, { "authors": [ "S. Kazemi", "R. Goel", "Sepehr Eghbali", "Janahan Ramanan", "Jaspreet Sahota", "Sanjay Thakur", "S. Wu", "C. Smyth", "P. Poupart", "Marcus A. Brubaker" ], "title": "Time2vec: Learning a vector representation of time", "venue": null, "year": 1907 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Steven Cheng-Xian Li", "Benjamin M Marlin" ], "title": "A scalable end-to-end gaussian process adapter for irregularly sampled time series classification", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Steven Cheng-Xian Li", "Benjmain M. 
Marlin" ], "title": "Classification of sparse and irregularly sampled time series with mixtures of expected Gaussian kernels and random features", "venue": "In 31st Conference on Uncertainty in Artificial Intelligence,", "year": 2015 }, { "authors": [ "Zachary C Lipton", "David Kale", "Randall Wetzel" ], "title": "Directly modeling missing data in sequences with rnns: Improved classification of clinical time series", "venue": "In Machine Learning for Healthcare Conference,", "year": 2016 }, { "authors": [ "Benjamin M. Marlin", "David C. Kale", "Robinder G. Khemani", "Randall C. Wetzel" ], "title": "Unsupervised pattern discovery in electronic health care data using probabilistic clustering models", "venue": "In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium,", "year": 2012 }, { "authors": [ "Michael C. Mozer", "Denis Kazakov", "Robert V. Lindsey" ], "title": "Discrete event, continuous time rnns", "venue": "CoRR, abs/1710.04110,", "year": 2017 }, { "authors": [ "Daniel Neil", "Michael Pfeiffer", "Shih-Chii Liu" ], "title": "Phased lstm: Accelerating recurrent network training for long or event-based sequences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Xueping Peng", "Guodong Long", "Tao Shen", "Sen Wang", "J. Jiang", "Michael Blumenstein" ], "title": "Temporal self-attention network for medical concept embedding", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2019 }, { "authors": [ "Trang Pham", "Truyen Tran", "Dinh Phung", "Svetha Venkatesh" ], "title": "Predicting healthcare trajectories from medical records: A deep learning approach", "venue": "Journal of Biomedical Informatics,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Yulia Rubanova", "Ricky T.Q. Chen", "David K Duvenaud" ], "title": "Latent ordinary differential equations for irregularly-sampled time series", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "T. Ruf" ], "title": "The lomb-scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series", "venue": "Biological Rhythm Research,", "year": 1999 }, { "authors": [ "Jeffrey D Scargle" ], "title": "Studies in astronomical time series analysis. ii-statistical aspects of spectral analysis of unevenly spaced data", "venue": "The Astrophysical Journal,", "year": 1982 }, { "authors": [ "M. Schulz", "K. Stattegger" ], "title": "Spectrum: Spectral analysis of unevenly spaced paleoclimatic time series", "venue": "Computers & Geosciences,", "year": 1997 }, { "authors": [ "Satya Narayan Shukla", "Benjamin M Marlin" ], "title": "Interpolation Prediction Networks for Irregularly Sampled Time Series", "venue": null, "year": 2019 }, { "authors": [ "Ikaro Silva", "George Moody", "Daniel Scott", "Leo Celi", "Roger Mark" ], "title": "Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge", "venue": "Computing in cardiology,", "year": 2012 }, { "authors": [ "Huan Song", "Deepta Rajan", "Jayaraman J. 
Thiagarajan", "Andreas Spanias" ], "title": "Attend and diagnose: Clinical time series analysis using attention models", "venue": "In 32nd AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Qingxiong Tan", "Mang Ye", "Baoyao Yang", "Siqi Liu", "Andy Jinhua Ma", "Terry Cheuk-Fung Yip", "Grace Lai-Hung Wong", "Pongchi Yuen" ], "title": "Data-gru: Dual-attention time-aware gated recurrent unit for irregular multivariate time series", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "undefinedukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Da Xu", "Chuanwei Ruan", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "title": "Self-attention with functional time representation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Pranjul Yadav", "Michael Steinbach", "Vipin Kumar", "Gyorgy Simon" ], "title": "Mining electronic health records (ehrs): A survey", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "J. Yoon", "W.R. Zame", "M. van der Schaar" ], "title": "Estimating missing data in temporal data streams using multi-directional recurrent neural networks", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2019 }, { "authors": [ "Jinsung Yoon", "William R Zame", "Mihaela Van Der Schaar" ], "title": "Deep sensing: Active sensing using multi-directional recurrent neural networks", "venue": null, "year": 2018 }, { "authors": [ "Yuan Zhang", "Xi Yang", "Julie Ivy", "Min Chi" ], "title": "Attain: Attention-based time-aware lstm networks for disease progression modeling", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Irregularly sampled time series occur in application domains including healthcare, climate science, ecology, astronomy, biology and others. It is well understood that irregular sampling poses a significant challenge to machine learning models, which typically assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018). While recurrent neural networks (RNNs) have been widely used to model such data because of their ability to handle variable length sequences, basic RNNs assume regular spacing between observation times as well as alignment of the time points where observations occur for different variables (i.e., fully-observed vectors). In practice, both of these assumptions can fail to hold for real-world sparse and irregularly observed time series. To respond to these challenges, there has been significant progress over the last decade on building and adapting machine learning models that can better capture the structure of irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018; Shukla & Marlin, 2019; Rubanova et al., 2019).\nIn this work, we introduce a new model for multivariate, sparse and irregularly sampled time series that we refer to as Multi-Time Attention networks or mTANs. mTANs are fundamentally continuous-time, interpolation-based models. Their primary innovations are the inclusion of a learned continuous-time embedding mechanism coupled with a time attention mechanism that replaces the use of a fixed similarity kernel when forming representation from continuous time inputs. This gives mTANs more representational flexibility than previous interpolation-based models (Shukla & Marlin, 2019).\nOur approach re-represents an irregularly sampled time series at a fixed set of reference points. The proposed time attention mechanism uses reference time points as queries and the observed time points as keys. We propose an encoder-decoder framework for end-to-end learning using an mTAN module to interface with given multivariate, sparse and irregularly sampled time series inputs. The encoder takes the irregularly sampled time series as input and produces a fixed-length latent representation over a set of reference points, while the decoder uses the latent representations to produce reconstructions conditioned on the set of observed time points. Learning uses established methods for variational autoencoders (Rezende et al., 2014; Kingma & Welling, 2014).\n1Implementation available at : https://github.com/reml-lab/mTAN\nThe main contributions of the mTAN model framework are: (1) It provides a flexible approach to modeling multivariate, sparse and irregularly sampled time series data (including irregularly sampled time series of partially observed vectors) by leveraging a time attention mechanism to learn temporal similarity from data instead of using fixed kernels. (2) It uses a temporally distributed latent representation to better capture local structure in time series data. (3) It provides interpolation and classification performance that is as good as current state-of-the-art methods or better, while providing significantly reduced training times." }, { "heading": "2 RELATED WORK", "text": "An irregularly sampled time series is a time series with irregular time intervals between observations. In the multivariate setting, there can also be a lack of alignment across different variables within the same multivariate time series. 
Finally, when gaps between observation times are large, the time series is also considered to be sparse. Such data occur in electronic health records (Marlin et al., 2012; Yadav et al., 2018), climate science (Schulz & Stattegger, 1997), ecology (Clark & Bjørnstad, 2004), biology (Ruf, 1999), and astronomy (Scargle, 1982). It is well understood that such data cause significant issues for standard supervised machine learning models that typically assume fully observed, fixed-size feature representations (Marlin et al., 2012).\nA basic approach to dealing with irregular sampling is fixed temporal discretization. For example, Marlin et al. (2012) and Lipton et al. (2016) discretize continuous-time observations into hour-long bins. This has the advantage of simplicity, but requires ad-hoc handling of bins with more than one observation and results in missing data when bins are empty.\nThe alternative to temporal discretization is to construct models with the ability to directly use an irregularly sampled time series as input. Che et al. (2018) present several methods based on gated recurrent unit networks (GRUs, Chung et al. (2014)), including an approach that takes as input a sequence consisting of observed values, missing data indicators, and time intervals since the last observation. Pham et al. (2017) proposed to capture time irregularity by modifying the forget gate of an LSTM (Hochreiter & Schmidhuber, 1997), while Neil et al. (2016) introduced a new time gate that regulates access to the hidden and cell state of the LSTM. While these approaches allow the network to handle event-based sequences with irregularly spaced vector-valued observations, they do not support learning directly from vectors that are partially observed, which commonly occurs in the multivariate setting because of lack of alignment of observation times across different variables.\nAnother line of work has looked at using observations from the future as well as from the past for interpolation. Yoon et al. (2019) and Yoon et al. (2018) presented an approach based on the multi-directional RNN (M-RNN) that can leverage observations from the relative past and future of a given time point. Shukla & Marlin (2019) proposed the interpolation-prediction network framework, consisting of several semi-parametric RBF interpolation layers that interpolate multivariate, sparse, and irregularly sampled input time series against a set of reference time points while taking into account all observed data in a time series. Horn et al. (2020) proposed a set function-based approach for classifying time-series with irregularly sampled and unaligned observation.\nChen et al. (2018) proposed a variational auto-encoder model (Kingma & Welling, 2014; Rezende et al., 2014) for continuous time data based on the use of a neural network decoder combined with a latent ordinary differential equation (ODE) model. They model time series data via a latent continuous-time function that is defined via a neural network representation of its gradient field. Building on this, Rubanova et al. (2019) proposed a latent ODE model that uses an ODE-RNN model as the encoder. ODE-RNNs use neural ODEs to model the hidden state dynamics and an RNN to update the hidden state in the presence of a new observation. De Brouwer et al. (2019) proposed GRU-ODE-Bayes, a continuous-time version of the Gated Recurrent Unit (Chung et al., 2014). 
Instead of the encoder-decoder architecture, where the ODE is decoupled from the input processing, GRU-ODE-Bayes provides a tighter integration by interleaving the ODE and the input processing steps.

Several recent approaches have also used attention mechanisms to model irregularly sampled time series (Song et al., 2018; Tan et al., 2020; Zhang et al., 2019) as well as medical concepts (Peng et al., 2019; Cai et al., 2018). Most of these approaches are similar to Vaswani et al. (2017) in that they replace the positional encoding with an encoding of time and model sequences using self-attention. However, instead of adding the time encoding to the input representation as in Vaswani et al. (2017), they concatenate it with the input representation. These methods use a fixed time encoding similar to the positional encoding of Vaswani et al. (2017). Xu et al. (2019) learn a functional time representation and concatenate it with the input event embedding to model time-event interactions.

Like Xu et al. (2019) and Kazemi et al. (2019), our proposed method learns a time representation. However, instead of concatenating it with the input embedding, our model learns to attend to observations at different time points by computing a similarity weighting using only the time embedding. Our proposed model uses the time embedding as both the queries and keys in the attention formulation. It learns an interpolation over the query time points by attending to the observed values at key time points. Our proposed method is thus similar to kernel-based interpolation, but learning the time-attention-based similarity kernel gives our model more flexibility compared to methods like that of Shukla & Marlin (2019) that use similarity kernels with fixed functional forms. Another important difference relative to many of these previous methods is that our proposed approach attends only to the observed data dimensions at each time point and hence does not require a separate imputation step to handle vector-valued observations with an arbitrary collection of dimensions missing at any given time point." }, { "heading": "3 THE MULTI-TIME ATTENTION MODULE", "text": "In this section, we present the proposed Multi-Time Attention Module (mTAN). The role of this module is to re-represent a sparse and irregularly sampled time series in a fixed-dimensional space. This module uses multiple continuous-time embeddings and attention-based interpolation. We begin by presenting notation followed by the time embedding and attention components.

Notation: In the case of a supervised learning task, we let $\mathcal{D} = \{(\mathbf{s}_n, y_n) \mid n = 1, \ldots, N\}$ represent a data set containing $N$ data cases. An individual data case consists of a single target value $y_n$ (discrete for classification), as well as a $D$-dimensional, sparse and irregularly sampled multivariate time series $\mathbf{s}_n$. Different dimensions $d$ of the multivariate time series can have observations at different times, as well as different total numbers of observations $L_{dn}$. Thus, we represent time series $d$ for data case $n$ as a tuple $\mathbf{s}_{dn} = (\mathbf{t}_{dn}, \mathbf{x}_{dn})$, where $\mathbf{t}_{dn} = [t_{1dn}, \ldots, t_{L_{dn}dn}]$ is the list of time points at which observations are defined and $\mathbf{x}_{dn} = [x_{1dn}, \ldots, x_{L_{dn}dn}]$ is the corresponding list of observed values. In the case of an unsupervised task such as interpolation, each data case consists of a multivariate time series $\mathbf{s}_n$ only. We drop the data case index $n$ for brevity when the context is clear.

Time Embedding: The time attention module is based on embedding continuous time points into a vector space.
We generalize the notion of a positional encoding used in transformer-based models to continuous time. Time attention networks simultaneously leverage $H$ embedding functions $\phi_h(t)$, each outputting a representation of size $d_r$. Dimension $i$ of embedding $h$ is defined as follows:

$$\phi_h(t)[i] = \begin{cases} \omega_{0h} \cdot t + \alpha_{0h}, & \text{if } i = 0 \\ \sin(\omega_{ih} \cdot t + \alpha_{ih}), & \text{if } 0 < i < d_r \end{cases} \quad (1)$$

where the $\omega_{ih}$'s and $\alpha_{ih}$'s are learnable parameters. The periodic terms can capture periodicity in time series data. In this case, $\omega_{ih}$ and $\alpha_{ih}$ represent the frequency and phase of the sine function. The linear term, on the other hand, can capture non-periodic patterns dependent on the progression of time. For a given difference $\Delta$, $\phi_h(t+\Delta)$ can be represented as a linear function of $\phi_h(t)$.

Learning the periodic time embedding functions is equivalent to using a one-layer fully connected network with a sine function non-linearity to map the time values into a higher dimensional space. By contrast, the positional encoding used in transformer models is defined only for discrete positions. We note that our time embedding functions subsume positional encodings when evaluated at discrete positions.

Multi-Time Attention: The time embedding component described above takes a continuous time point and embeds it into $H$ different $d_r$-dimensional spaces. In this section, we describe how we leverage time embeddings to produce a continuous-time embedding module for sparse and irregularly sampled time series. This multi-time attention embedding module $\text{mTAN}(t, \mathbf{s})$ takes as input a query time point $t$ and a set of keys and values in the form of a $D$-dimensional multivariate sparse and irregularly sampled time series $\mathbf{s}$ (as defined in the notation section above), and returns a $J$-dimensional embedding at time $t$. This process leverages a continuous-time attention mechanism applied to the $H$ time embeddings. The complete computation is described below.

$$\text{mTAN}(t, \mathbf{s})[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \hat{x}_{hd}(t, \mathbf{s}) \cdot U_{hdj} \quad (2)$$

$$\hat{x}_{hd}(t, \mathbf{s}) = \sum_{i=1}^{L_d} \kappa_h(t, t_{id})\, x_{id} \quad (3)$$

$$\kappa_h(t, t_{id}) = \frac{\exp\left( \phi_h(t)\, \mathbf{w} \mathbf{v}^{\mathsf{T}} \phi_h(t_{id})^{\mathsf{T}} / \sqrt{d_k} \right)}{\sum_{i'=1}^{L_d} \exp\left( \phi_h(t)\, \mathbf{w} \mathbf{v}^{\mathsf{T}} \phi_h(t_{i'd})^{\mathsf{T}} / \sqrt{d_k} \right)} \quad (4)$$

As shown in Equation 2, dimension $j$ of the mTAN embedding $\text{mTAN}(t, \mathbf{s})[j]$ is given by a linear combination of intermediate univariate continuous-time functions $\hat{x}_{hd}(t, \mathbf{s})$. There is one such function defined for each input data dimension $d$ and each time embedding $h$. The parameters $U_{hdj}$ are learnable linear combination weights.

As shown in Equation 3, the structure of the intermediate continuous-time function $\hat{x}_{hd}(t, \mathbf{s})$ is essentially a kernel smoother applied to the $d$th dimension of the time series. However, the interpolation weights $\kappa_h(t, t_{id})$ are defined based on a time attention mechanism that leverages time embeddings, as shown in Equation 4. As we can see, the same time embedding function $\phi_h(t)$ is applied for all data dimensions. The form of the attention mechanism is a softmax function over the observed time points $t_{id}$ for dimension $d$. The activation within the softmax is a scaled inner product between the time embedding $\phi_h(t)$ of the query time point $t$ and the time embedding $\phi_h(t_{id})$ of the observed time point, the key. The parameters $\mathbf{w}$ and $\mathbf{v}$ are each $d_r \times d_k$ matrices where $d_k \le d_r$. We use a scaling factor $1/\sqrt{d_k}$ to normalize the dot product to counteract the growth in the dot product magnitude with increasing dimension $d_k$.

Learning the time embeddings provides our model with flexibility to learn complex temporal kernel functions $\kappa_h(t, t')$.
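To make Equations 1-4 concrete, the following is a minimal, illustrative PyTorch sketch of the time embedding and of the attention weights for a single embedding $h$ and a single data dimension $d$. The tensor shapes and names (TimeEmbedding, time_attention_weights, mask) are our own assumptions; the authors' reference implementation is the one linked in the footnote to Section 1.

```python
import math
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """phi_h(t): one linear dimension plus d_r - 1 learned sinusoids (Eq. 1)."""
    def __init__(self, d_r):
        super().__init__()
        self.linear = nn.Linear(1, 1)          # omega_0h * t + alpha_0h
        self.periodic = nn.Linear(1, d_r - 1)  # sin(omega_ih * t + alpha_ih)

    def forward(self, t):                      # t: (..., 1) continuous time values
        return torch.cat([self.linear(t), torch.sin(self.periodic(t))], dim=-1)

def time_attention_weights(phi_query, phi_key, w, v, mask=None):
    """kappa_h(t, t_id) of Eq. 4 for one embedding h and one dimension d.

    phi_query: (T, d_r) embeddings of query times; phi_key: (L, d_r) embeddings
    of observed times; w, v: (d_r, d_k) projections; mask: (L,) boolean flags.
    """
    d_k = w.size(-1)
    scores = (phi_query @ w) @ (phi_key @ v).t() / math.sqrt(d_k)  # (T, L)
    if mask is not None:                       # ignore unobserved time points
        scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1)       # each row sums to one over keys

# Eq. 3 is then x_hat = time_attention_weights(...) @ x_d for observed values
# x_d of shape (L,), and Eq. 2 mixes the H * D interpolants with a learned U.
```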
The use of multiple simultaneous time embeddings $\phi_h(t)$ and a final linear combination across time embedding dimensions and data dimensions means that the final output representation function $\text{mTAN}(t, \mathbf{s})$ is extremely flexible. Different input dimensions can leverage different time embeddings via learned sparsity patterns in the parameter tensor $U$. Information from different data dimensions can also be mixed together to create compact reduced-dimensional representations. We note that all of the required computations can be parallelized using masking variables to deal with unobserved dimensions, allowing for efficient implementation on a GPU.

Discretization: Since the mTAN module defines a continuous function of $t$ given $\mathbf{s}$, it cannot be directly incorporated into neural network architectures that expect inputs in the form of fixed-dimensional vectors or discrete sequences. However, the mTAN module can easily be adapted to produce such an output representation by materializing its output at a set of reference time points $\mathbf{r} = [r_1, \ldots, r_K]$. In some cases, we may have a fixed set of such points. In other cases, the set of reference time points may need to depend on $\mathbf{s}$ itself. In particular, we define the auxiliary function $\rho(\mathbf{s})$ to return the set of time points at which there is an observation on any dimension of $\mathbf{s}$.

Given a collection of reference time points $\mathbf{r}$, we define the discretized mTAN module $\text{mTAND}(\mathbf{r}, \mathbf{s})$ as $\text{mTAND}(\mathbf{r}, \mathbf{s})[i] = \text{mTAN}(r_i, \mathbf{s})$. This module takes as input the set of reference time points $\mathbf{r}$ and the time series $\mathbf{s}$ and outputs a sequence of mTAN embeddings of length $|\mathbf{r}|$, each of dimension $J$. The architecture of the mTAND module is shown in Figure 1. The mTAND module can be used to interface sparse and irregularly sampled multivariate time series data with any deep neural network layer type, including fully-connected, recurrent, and convolutional layers. In the next section, we describe the construction of a temporal encoder-decoder architecture leveraging the mTAND module, which can be applied to both classification and interpolation tasks." }, { "heading": "4 ENCODER-DECODER FRAMEWORK", "text": "As described in the last section, we leverage the discretized mTAND module in an encoder-decoder framework as the primary model in this paper, which we refer to as an mTAN network. We develop the encoder-decoder framework within the variational autoencoder (VAE) framework in this section. The architecture for the model framework is shown in Figure 2.

Model Architecture: As we are modeling time series data, we begin by defining a sequence of latent states $z_i$. Each of these latent states is IID-distributed according to a standard multivariate normal distribution $p(z_i)$. We define the set of latent states $\mathbf{z} = [z_1, \ldots, z_K]$ at $K$ reference time points.

We define a three-stage decoder. First, the latent states are processed through an RNN decoder module to induce temporal dependencies, resulting in a first set of deterministic latent variables $\mathbf{h}^{dec}_{RNN} = [\mathbf{h}^{dec}_{1,RNN}, \ldots, \mathbf{h}^{dec}_{K,RNN}]$. Second, the output of the RNN decoder stage $\mathbf{h}^{dec}_{RNN}$ at the $K$ reference time points is provided to the mTAND module along with a set of $T$ query time points $\mathbf{t}$. The mTAND module outputs a sequence of embeddings $\mathbf{h}^{dec}_{TAN} = [\mathbf{h}^{dec}_{1,TAN}, \ldots, \mathbf{h}^{dec}_{T,TAN}]$ of length $|\mathbf{t}|$. Third, the mTAN embeddings are independently decoded using a fully connected decoder $f^{dec}(\cdot)$ and the result is used to parameterize an output distribution.
In this work, we use a diagonal covariance Gaussian distribution with mean given by the final decoded representation and a fixed variance $\sigma^2$. The final generated time series is given by $\hat{\mathbf{s}} = (\mathbf{t}, \mathbf{x})$ with all data dimensions observed. The full generative process is shown below. We let $p_\theta(\mathbf{x}|\mathbf{z}, \mathbf{t})$ define the probability distribution over the values of the time series $\mathbf{x}$ given the time points $\mathbf{t}$ and the latent variables $\mathbf{z}$. $\theta$ represents the parameters of all components of the decoder.

$$z_k \sim p(z_k) \quad (5)$$
$$\mathbf{h}^{dec}_{RNN} = \text{RNN}^{dec}(\mathbf{z}) \quad (6)$$
$$\mathbf{h}^{dec}_{TAN} = \text{mTAND}^{dec}(\mathbf{t}, \mathbf{h}^{dec}_{RNN}) \quad (7)$$
$$x_{id} \sim \mathcal{N}(x_{id};\, f^{dec}(\mathbf{h}^{dec}_{i,TAN})[d],\, \sigma^2 I) \quad (8)$$

For an encoder, we simply invert the structure of the generative process. We begin by mapping the input time series $\mathbf{s}$ through the mTAND module along with a collection of $K$ reference time points $\mathbf{r}$. We apply an RNN encoder to the mTAND module's output $\mathbf{h}^{enc}_{TAN}$ to encode longer-range temporal structure. Finally, we construct a distribution over latent variables at each reference time point using a diagonal Gaussian distribution with mean and variance output by fully connected layers applied to the RNN outputs $\mathbf{h}^{enc}_{RNN}$. The complete encoder architecture is described below. We define $q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s})$ to be the distribution over the latent variables induced by the input time series $\mathbf{s}$ and the reference time points $\mathbf{r}$. $\gamma$ represents all of the parameters in all of the encoder components.

$$\mathbf{h}^{enc}_{TAN} = \text{mTAND}^{enc}(\mathbf{r}, \mathbf{s}) \quad (9)$$
$$\mathbf{h}^{enc}_{RNN} = \text{RNN}^{enc}(\mathbf{h}^{enc}_{TAN}) \quad (10)$$
$$z_k \sim q_\gamma(z_k|\boldsymbol{\mu}_k, \boldsymbol{\sigma}^2_k), \quad \boldsymbol{\mu}_k = f^{enc}_{\mu}(\mathbf{h}^{enc}_{k,RNN}), \quad \boldsymbol{\sigma}^2_k = \exp(f^{enc}_{\sigma}(\mathbf{h}^{enc}_{k,RNN})) \quad (11)$$

Unsupervised Learning: To learn the parameters of our encoder-decoder model given a data set of sparse and irregularly sampled time series, we follow a slightly modified VAE training approach and maximize a normalized variational lower bound on the log marginal likelihood based on the evidence lower bound or ELBO. The learning objective is defined below, where $p_\theta(x_{jdn}|\mathbf{z}, \mathbf{t}_n)$ and $q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)$ are defined in the previous section.

$$\mathcal{L}_{NVAE}(\theta, \gamma) = \sum_{n=1}^{N} \frac{1}{\sum_{d} L_{dn}} \Big( \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)}[\log p_\theta(\mathbf{x}_n|\mathbf{z}, \mathbf{t}_n)] - D_{KL}(q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)\,\|\,p(\mathbf{z})) \Big) \quad (12)$$

$$D_{KL}(q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)\,\|\,p(\mathbf{z})) = \sum_{i=1}^{K} D_{KL}(q_\gamma(z_i|\mathbf{r}, \mathbf{s}_n)\,\|\,p(z_i)) \quad (13)$$

$$\log p_\theta(\mathbf{x}_n|\mathbf{z}, \mathbf{t}_n) = \sum_{d=1}^{D} \sum_{j=1}^{L_{dn}} \log p_\theta(x_{jdn}|\mathbf{z}, t_{jdn}) \quad (14)$$

Since irregularly sampled time series can have different numbers of observations across different dimensions as well as across different data cases, it can be helpful to normalize the terms in the standard ELBO objective to avoid the model focusing more on sequences that are longer at the expense of sequences that are shorter. The objective above normalizes the contribution of each data case by the total number of observations it contains. The fact that all data dimensions are not observed at all time points is accounted for in Equation 14. In practice, we use $k$ samples from the variational distribution $q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)$ to compute the learning objective.

Supervised Learning: We can also augment the encoder-decoder model with a supervised learning component that leverages the latent states as a feature extractor. We define this component to be of the form $p_\delta(y_n|\mathbf{z})$, where $\delta$ are the model parameters. This leads to an augmented learning objective as shown in Equation 15, where the $\lambda$ term trades off the supervised and unsupervised terms.

$$\mathcal{L}_{supervised}(\theta, \gamma, \delta) = \mathcal{L}_{NVAE}(\theta, \gamma) + \lambda\, \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s}_n)} \log p_\delta(y_n|\mathbf{z}) \quad (15)$$

In this work, we focus on classification as an illustrative supervised learning problem. For the classification model $p_\delta(y_n|\mathbf{z})$, we use a GRU followed by a 2-layer fully connected network.
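As an illustration of the normalized objective in Equations 12-14, the following hedged sketch computes a one-sample ELBO estimate for a single series, assuming a 0/1 observation mask and precomputed decoder means; the variable names and masking convention are our assumptions rather than the paper's exact implementation.

```python
import math
import torch

def normalized_elbo(x, obs_mask, recon_mean, q_mu, q_logvar, sigma=0.1):
    """One-sample ELBO for one series, normalized by sum_d L_dn (Eq. 12).

    x, recon_mean, obs_mask: (T, D) values / decoder means / 0-1 observed flags;
    q_mu, q_logvar: (K, latent_dim) posterior parameters at reference points.
    """
    # log p_theta(x | z, t): Gaussian log-density over observed entries only (Eq. 14)
    log_px = (-0.5 * ((x - recon_mean) / sigma) ** 2
              - 0.5 * math.log(2 * math.pi * sigma ** 2))
    log_px = (log_px * obs_mask).sum()

    # KL(q || p) against the standard normal prior, summed over the K latents (Eq. 13)
    kl = -0.5 * (1 + q_logvar - q_mu ** 2 - q_logvar.exp()).sum()

    n_obs = obs_mask.sum().clamp(min=1.0)  # sum_d L_dn, the Eq. 12 normalizer
    return (log_px - kl) / n_obs
```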
We use a small number of samples to approximate the required intractable expectations during both learning and prediction. Predictions are computed by marginalizing over the latent variable as shown below.

$$y^{*} = \arg\max_{y \in \mathcal{Y}} \; \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, \mathbf{s})}[\log p_\delta(y|\mathbf{z})] \quad (16)$$" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present interpolation and classification experiments using a range of models and three real-world data sets (PhysioNet Challenge 2012, MIMIC-III, and a Human Activity dataset). Additional illustrative results on synthetic data can be found in Appendix A.2.

Datasets: The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of multivariate time series data with 37 variables extracted from intensive care unit (ICU) records. Each record contains sparse and irregularly spaced measurements from the first 48 hours after admission to the ICU. We follow the procedures of Rubanova et al. (2019) and round the observation times to the nearest minute. This leads to 2880 possible measurement times per time series. The data set includes 4000 labeled instances and 4000 unlabeled instances. We use all 8000 instances for interpolation experiments and the 4000 labeled instances for classification experiments. We focus on predicting in-hospital mortality; 13.8% of examples are in the positive class.

The MIMIC-III data set (Johnson et al., 2016) is a multivariate time series dataset consisting of sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center from 2001 to 2012. Following the procedures of Shukla & Marlin (2019), we extract 53,211 records each containing 12 physiological variables. We focus on predicting in-hospital mortality using the first 48 hours of data; 8.1% of the instances have positive labels.

The human activity dataset consists of 3D positions of the waist, chest and ankles collected from five individuals performing various activities including walking, sitting, lying, standing, etc. We follow the data preprocessing steps of Rubanova et al. (2019) and construct a dataset of 6,554 sequences with 12 channels and 50 time points. We focus on classifying each time point in the sequence into one of eleven types of activities.

Experimental Protocols: We conduct interpolation experiments using the 8000 data cases in the PhysioNet data set. We randomly divide the data set into a training set containing 80% of the instances and a test set containing the remaining 20% of instances. We use 20% of the training data for validation. In the interpolation task, we condition on a subset of available points and predict the values at the rest of the time points. We perform interpolation experiments with a varying percentage of observed points, ranging from 50% to 90% of the available points. At test time, the values of the observed points are conditioned on and each model is used to infer the values at the rest of the available time points in the test instance. We repeat each experiment five times using different random seeds to initialize the model parameters. We assess performance using mean squared error (MSE).

We use the labeled data in all three data sets to conduct classification experiments. The PhysioNet and MIMIC-III problems are whole time series classification problems. Note that for the human activity dataset, we classify each time point in the time series. We treat this as a smoothing problem and condition on all available observations when producing the classification at each time point (similar to labeling in a CRF).
We use bidirectional RNNs as the RNN-based baselines for the human activity dataset. We randomly divide each data set into a training set containing 80% of the time series and a test set containing the remaining 20% of instances. We use 20% of the training set for validation. We repeat each experiment five times using different random seeds to initialize the model parameters. Due to class imbalance in the PhysioNet and MIMIC-III data sets, we assess classification performance using area under the ROC curve (the AUC score). For the Human Activity dataset, we evaluate models using accuracy.

For both interpolation and prediction, we select hyper-parameters on the held-out validation set using grid search and then apply the best trained model to the test set. The hyper-parameter ranges searched for each model/dataset/task are fully described in Appendix A.4.

Models: The model we focus on is the encoder-decoder architecture based on the discretized multi-time attention module (mTAND-Full). In the classification experiments, the hidden state at the last observed point is passed to a two-layer binary classification module for all models. For each data set, the structure of this classifier is the same for all models. For the proposed model, the sequence of latent states is first passed through a GRU and then the final hidden state is passed through the same classification module. For the classification task only, we consider an ablation of the full model that uses the proposed mTAND encoder, which consists of our mTAND module followed by a GRU to extract a final hidden state, which is then passed to the classification module (mTAND-Enc). We compare to several deep learning models that expand on recurrent networks to accommodate irregular sampling. We also compare to several encoder-decoder approaches. The full list of model variants is briefly described below. We use a Gated Recurrent Unit (GRU) (Chung et al., 2014) module as the recurrent network throughout. Architecture details can be found in Appendix A.3.

• RNN-Impute: Missing observations are replaced with a weighted average of the last observed measurement within that time series and the global mean of the variable across training examples (Che et al., 2018).

• RNN-∆t: The input is concatenated with a masking variable and a time interval ∆t indicating how long the particular variable has been missing.

• RNN-Decay: An RNN with exponential decay on hidden states (Mozer et al., 2017; Che et al., 2018).

• GRU-D: Combines hidden state decay with input decay (Che et al., 2018).

• Phased-LSTM: Captures time irregularity via a time gate that regulates access to the hidden and cell state of the LSTM (Neil et al., 2016), with forward filling to handle partially observed vectors.

• IP-Nets: Interpolation-prediction networks, which use several semi-parametric RBF interpolation layers followed by a GRU (Shukla & Marlin, 2019).

• SeFT: Uses a set-function-based approach where all the observations are modeled individually before pooling them together using an attention-based approach (Horn et al., 2020).

• RNN-VAE: A VAE-based model where the encoder and decoder are standard RNN models.
• ODE-RNN: Uses neural ODEs to model hidden state dynamics and an RNN to update the hidden state in the presence of a new observation (Rubanova et al., 2019).

• L-ODE-RNN: A latent ODE where the encoder is an RNN and the decoder is a neural ODE (Chen et al., 2018).

• L-ODE-ODE: A latent ODE where the encoder is an ODE-RNN and the decoder is a neural ODE (Rubanova et al., 2019).

PhysioNet Experiments: Table 1 compares the performance of all methods on the interpolation task, where we observe 50%-90% of the values in the test instances. As we can see, the proposed method (mTAND-Full) consistently and substantially outperforms all of the previous approaches across all of the settings of observed time points. We note that in this experiment, different columns correspond to different settings (for example, in the case of 70%, we condition on 70% of the data and predict the remaining 30%) and hence the results across columns are not comparable.

Table 2 compares predictive performance on the PhysioNet mortality prediction task. The full Multi-Time Attention network model (mTAND-Full) and the classifier based only on the Multi-Time Attention network encoder (mTAND-Enc) achieve significantly improved performance relative to the current state-of-the-art methods (ODE-RNN and L-ODE-ODE) and other baseline methods.

We also report the time per epoch in minutes for all the methods. We note that the ODE-based models require substantially more run time than other methods due to the required use of an ODE solver (Chen et al., 2018; Rubanova et al., 2019). These methods also require taking the union of all observation time points in a batch, which further slows down the training process. As we can see, the proposed full Multi-Time Attention network (mTAND-Full) is over 85 times faster than ODE-RNN and over 100 times faster than L-ODE-ODE, the best-performing ODE-based models.

MIMIC-III Experiments: Table 2 compares the predictive performance of the models on the mortality prediction task on MIMIC-III. The Multi-Time Attention network-based encoder-decoder framework (mTAND-Full) achieves better performance than the recent IP-Net and SeFT models as well as all of the RNN baseline models. While ODE-RNN and L-ODE-ODE both have slightly better mean AUC than mTAND-Full, the differences are not statistically significant. Further, as shown on the PhysioNet classification problem, mTAND-Full is more than an order of magnitude faster than the ODE-based methods.

Human Activity Experiments: Table 2 shows that the mTAND-based classifiers achieve significantly better performance than the baseline models on this prediction task, followed by the ODE-based models and IP-Nets.

Additional Experiments: In Appendix A.2, we demonstrate the effectiveness of learning temporally distributed latent representations with mTANs on a synthetic dataset. We show that mTANs are able to capture local structure in the time series better than latent ODE-based methods that encode to a single time point. This property of mTANs helps to improve the interpolation performance in terms of mean squared error.

We also perform ablation experiments in Appendix A.1 to show the performance gain achieved by learning the similarity kernels and time embeddings. In particular, we show that learning the time embedding improves classification performance compared to using fixed positional encodings.
We also demonstrate the effectiveness of learning the similarity kernel by comparing to an approach that uses fixed RBF kernels. Appendix A.1 shows that learning the similarity kernel using the mTAND module performs as well as or better than using a fixed RBF kernel." }, { "heading": "6 DISCUSSION AND CONCLUSIONS", "text": "In this paper, we have presented the Multi-Time Attention (mTAN) module for learning from sparse and irregularly sampled data, along with a VAE-based encoder-decoder model leveraging this module. Our results show that the resulting model performs as well or better than a range of baseline and state-of-the-art models on both the interpolation and classification tasks, while offering training times that are one to two orders of magnitude faster than previous state-of-the-art methods. While in this work we have focused on a VAE-based encoder-decoder architecture, the proposed mTAN module can be used to provide an interface between sparse and irregularly sampled time series and many different types of deep neural network architectures, including GAN-based models. Composing the mTAN module with convolutional networks instead of recurrent architectures may also provide further computational enhancements due to improved parallelism." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Research reported in this paper was partially supported by the National Institutes of Health under award numbers 5U01CA229445 and 1P41EB028242." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ABLATION STUDY", "text": "In this section, we perform ablation experiments to show the performance gain achieved by learning the similarity kernel and the time embedding. Table 3 shows the ablation results obtained by substituting a fixed positional encoding (Vaswani et al., 2017) in place of the learnable time embedding defined in Equation 1 in the mTAND-Full model, on the PhysioNet and MIMIC-III datasets for the classification task. We report the average AUC score over 5 runs. As we can see from Table 3, learning the time embedding improves the AUC score by 1% compared to using fixed positional encodings.

Since mTANs are fundamentally continuous-time, interpolation-based models, we also perform an ablation study comparing mTANs with IP-Nets (Shukla & Marlin, 2019). IP-Nets use several semi-parametric RBF interpolation layers, followed by a GRU, to model irregularly sampled time series. In this framework, we replace the RBF kernel with a learnable similarity kernel using the mTAND module; the corresponding model is mTAND-Enc. Table 4 compares the performance of the two methods on the classification task on the PhysioNet, MIMIC-III and Human Activity datasets. We report the average AUC score over 5 runs. Table 4 shows that learning the similarity kernel using the mTAND module performs as well as or better than using a fixed RBF kernel." }, { "heading": "A.2 SYNTHETIC INTERPOLATION EXPERIMENTS", "text": "To demonstrate the capabilities of our model on the interpolation task, we generate a synthetic dataset consisting of 1000 trajectories, each with 100 time points sampled over t ∈ [0, 1]. We fix 10 reference points and use an RBF kernel with a fixed bandwidth of 100 for constructing local interpolations at 100 time points over [0, 1]. The values at the reference points are drawn from a standard normal distribution.

We randomly sample 20 observations from each trajectory to simulate a sparse and irregularly sampled multivariate time series. We use 80% of the data for training and 20% for testing.
At test time, the encoder conditions on 20 irregularly sampled time points and the decoder generates interpolations at all 100 time points. Figure 3 illustrates the interpolation results on the test set for the Multi-Time Attention network and the latent ODE model with ODE encoder (Rubanova et al., 2019). For both models, we draw 100 samples from the approximate posterior distribution. As we can see from Figure 3, the ODE interpolations are much smoother and have not been able to capture the local structure as well as mTANs.

Table 5 compares the proposed model with the best-performing baseline, the latent ODE with ODE encoder (L-ODE-ODE), on the reconstruction and interpolation tasks. For both tasks, we condition on the 20 irregularly sampled time points and reconstruct the input points (reconstruction) or the whole set of 100 time points (interpolation). We report the mean squared error on the test set." }, { "heading": "A.3 ARCHITECTURE DETAILS", "text": "Multi-Time Attention Network (mTAND-Full): In our proposed encoder-decoder framework (Figure 2), we use a bidirectional GRU as the recurrent model in both the encoder and decoder. In the encoder, we use a 2-layer fully connected network with 50 hidden units and ReLU activations to map the RNN hidden state at each reference point to a mean and variance. Similarly, in the decoder, the mTAN embeddings are independently decoded using a 2-layer fully connected network with 50 hidden units and ReLU activations, and the result is used to parameterize the output distribution. For classification tasks, we use a separate GRU layer on top of the latent states, followed by a 2-layer fully connected layer with 300 units and ReLU activations to output the class probabilities.

Multi-Time Attention Encoder (mTAND-Enc): As we show in the experiments, the proposed mTAN module can be used standalone for classification tasks. The mTAND-Enc consists of the multi-time attention module followed by a GRU to extract the final hidden state, which is then passed to a 2-layer fully connected layer to output the class probabilities.

Loss Function: For computing the evidence lower bound (ELBO) during training, we use the negative log-likelihood with fixed variance as the reconstruction loss. For all datasets, we use a fixed variance of 0.01. For computing the ELBO, we use 5 samples for the interpolation task and 1 sample for classification tasks. We use the cross-entropy loss for classification. For the classification tasks, we tune the λ parameter in the supervised learning loss function (Equation 15). We achieved the best performance using λ = 100 and λ = 5 for PhysioNet and MIMIC-III, respectively. For the human activity dataset, we achieved the best results without using the regularizer or ELBO component. We found that KL annealing with coefficient 0.99 improved the performance on the interpolation and classification tasks on PhysioNet." }, { "heading": "A.4 HYPERPARAMETERS", "text": "Baselines: For PhysioNet and the Human Activity dataset, we use the reported hyperparameters for the RNN baselines as well as the ODE models from Rubanova et al. (2019). For the MIMIC-III dataset, we independently tune the hyperparameters of the baseline models on the validation set. We search over the number of GRU hidden units, the latent dimension, and the number of hidden units in the fully connected network for the ODE function in the recognition and generative models over the range {20, 32, 64, 128, 256}. For the ODEs, we also searched the number of layers in the fully connected network over the range {1, 2, 3}. mTAN: We learn time embeddings of size 128. The number of embeddings is H ∈ {1, 2, 4}.
The linear projection matrices used for projecting the time embedding are each of size $d_k \times d_k/h$, where $d_k$ is the embedding size. We search the latent dimension and the GRU encoder hidden size over the range {32, 64, 128}. We keep the GRU decoder hidden size at 50. For the classification tasks, we use 128 reference points. For the interpolation task, we search the number of reference points over the range {8, 16, 32, 64, 128}. We use the Adam optimizer for training the models. For classification, experiments are run for 300 iterations with a learning rate of 0.0001, while for the interpolation task, experiments are run for 500 iterations with a learning rate of 0.001. The best hyperparameters are reported in the code.

A.5 VISUALIZING ATTENTION WEIGHTS

In this section, we visualize the attention weights learned by our proposed model. We experiment using the synthetic dataset (described in A.2), which consists of univariate time series. Figure 4 shows the attention weights learned by the encoder mTAND module. The input shown in the figure is the irregularly sampled time points, and the edges show how the output at the reference points attends to the values at the input time points. The final output can be computed by substituting the attention weights into Equation 3." }, { "heading": "A.6 TRAINING DETAILS", "text": "" }, { "heading": "A.6.1 DATA GENERATION AND PREPROCESSING", "text": "All the datasets used in the experiments are publicly available and can be downloaded using the following links: PhysioNet: https://physionet.org/content/challenge-2012/ MIMIC-III: https://mimic.physionet.org/

Human Activity: https://archive.ics.uci.edu/ml/datasets/Localization+Data+for+Person+Activity.

We rescale each feature to be between 0 and 1 for the PhysioNet and MIMIC-III datasets. We also rescale the time to be in [0, 1] for all datasets. In the case of the MIMIC-III dataset, for time series missing entirely, we follow the preprocessing steps of Shukla & Marlin (2019) and assign the starting point (time t=0) value of the time series to the global mean for that variable." }, { "heading": "A.6.2 SOURCE CODE", "text": "The code for reproducing the results in this paper is available at https://github.com/reml-lab/mTAN." }, { "heading": "A.6.3 COMPUTING INFRASTRUCTURE", "text": "All experiments were run on an Nvidia Titan X GPU." } ]
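As a closing illustration of the mTAND-Enc architecture described in Appendix A.3 (mTAND module, then a GRU, then a 2-layer fully connected head with 300 units), the following is a hedged PyTorch-style sketch. MTANDModule and the forward signature are assumptions standing in for the released implementation linked above; the module is assumed to implement Equations 1-4 and return one embedding per reference point.

```python
import torch.nn as nn

class MTANDEncClassifier(nn.Module):
    """mTAND-Enc: mTAND re-representation -> GRU -> 2-layer FC head."""
    def __init__(self, mtand_module, embed_dim, hidden_dim, n_classes):
        super().__init__()
        self.mtand = mtand_module              # maps (r, s) -> (batch, K, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(             # 2-layer classification head (A.3)
            nn.Linear(hidden_dim, 300), nn.ReLU(), nn.Linear(300, n_classes))

    def forward(self, ref_times, values, times, mask):
        # Re-represent the irregular series at the K fixed reference points,
        # summarize with the GRU's final hidden state, then classify.
        h = self.mtand(ref_times, values, times, mask)  # (batch, K, embed_dim)
        _, last = self.gru(h)                           # (1, batch, hidden_dim)
        return self.head(last.squeeze(0))               # class logits
```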
2,021
MULTI-TIME ATTENTION NETWORKS FOR IRREGULARLY SAMPLED TIME SERIES
SP:3b9ce25cba7d3b62e4927a76feccea0106d9b338
[ "The paper proposes to re-think the fashion of using label information in the VAE framework. The authors propose to disentangle information about the label (or, more generally, the context) in a \"hard-coded\" manner, namely, by using a separate set of variables for the label (context). The paper is written in a lucid manner, and the presented results are sound." ]
We present a principled approach to incorporating labels in variational autoencoders (VAEs) that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs—capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the characteristic capturing VAE (CCVAE), a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
[ { "affiliations": [], "name": "Tom Joy" }, { "affiliations": [], "name": "Sebastian M. Schmon" }, { "affiliations": [], "name": "Philip H. S. Torr" }, { "affiliations": [], "name": "N. Siddharth" }, { "affiliations": [], "name": "Tom Rainforth" } ]
[ { "authors": [ "Tameem Adel", "Zoubin Ghahramani", "Adrian Weller" ], "title": "Discovering interpretable representations for both deep generative and discriminative models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Samuel K. Ainsworth", "Nicholas J. Foti", "Adrian K.C. Lee", "Emily B. Fox" ], "title": "Interpretable VAEs for nonlinear group factor analysis", "venue": null, "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "Rodney A Brooks" ], "title": "Intelligence without representation", "venue": "Artificial intelligence,", "year": 1991 }, { "authors": [ "Emilien Dupont" ], "title": "Learning disentangled joint continuous and discrete representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "PhD thesis, University of Cambridge,", "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Geoffrey E Hinton", "Richard S Zemel" ], "title": "Autoencoders, minimum description length and helmholtz free energy. 
", "venue": "In Advances in Neural Information Processing Systems", "year": 1994 }, { "authors": [ "Maximilian Ilse", "Jakub M Tomczak", "Christos Louizos", "Max Welling" ], "title": "Diva: Domain invariant variational autoencoders", "venue": "arXiv preprint arXiv:1905.10427,", "year": 2019 }, { "authors": [ "Jeremy Irvin", "Pranav Rajpurkar", "Michael Ko", "Yifan Yu", "Silviana Ciurea-Ilcus", "Chris Chute", "Henrik Marklund", "Behzad Haghgoo", "Robyn Ball", "Katie Shpanskaya" ], "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Yang Li", "Quan Pan", "Suhang Wang", "Haiyun Peng", "Tao Yang", "Erik Cambria" ], "title": "Disentangled variational auto-encoder for semi-supervised learning", "venue": "Information Sciences,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Lars Maaløe", "Casper Kaae Sønderby", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Auxiliary deep generative models", "venue": "arXiv preprint arXiv:1602.05473,", "year": 2016 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Ole Winther" ], "title": "Semi-supervised generation with cluster-aware generative models", "venue": "arXiv preprint arXiv:1704.00637,", "year": 2017 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "Biva: A very deep hierarchy of latent variables for generative modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jiayuan Mao", "Chuang Gan", "Pushmeet Kohli", "Joshua B Tenenbaum", "Jiajun Wu" ], "title": "The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision", "venue": null, "year": 2019 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N Siddharth", "Yee Whye Teh" ], "title": "Disentangling disentanglement in variational autoencoders", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jonas Mueller", "David Gifford", "Tommi Jaakkola" ], "title": "Sequence to better sequence: continuous revision of combinatorial structures", "venue": "In Proceedings of the 34th International Conference on Machine Learning
", "year": 2017 }, { "authors": [ "Rajesh Ranganath", "Dustin Tran", "David Blei" ], "title": "Hierarchical variational models", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Yuge Shi", "N. Siddharth", "Brooks Paige", "Philip H.S. Torr" ], "title": "Variational mixture-of-experts autoencoders for multi-modal deep generative models", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "N. Siddharth", "T Brooks Paige", "Jan-Willem Van de Meent", "Alban Desmaison", "Noah Goodman", "Pushmeet Kohli", "Frank Wood", "Philip Torr" ], "title": "Learning disentangled representations with semi-supervised deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lewis Smith", "Yarin Gal" ], "title": "Understanding measures of uncertainty for adversarial example detection", "venue": "arXiv preprint arXiv:1803.08533,", "year": 2018 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ], "title": "Joint multimodal learning with deep generative models", "venue": "In International Conference on Learning Representations Workshop,", "year": 2017 }, { "authors": [ "Joshua B Tenenbaum" ], "title": "Mapping a manifold of perceptual observations", "venue": "In Advances in neural information processing systems,", "year": 1998 }, { "authors": [ "Joshua B Tenenbaum", "William T Freeman" ], "title": "Separating style and content with bilinear models", "venue": "Neural computation,", "year": 2000 }, { "authors": [ "Ramakrishna Vedantam", "Ian Fischer", "Jonathan Huang", "Kevin Murphy" ], "title": "Generative models of visually grounded imagination", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Martin J Wainwright", "Michael I Jordan" ], "title": "Graphical models, exponential families, and variational inference", "venue": "Foundations and Trends® in Machine Learning,", "year": 2008 }, { "authors": [ "Mike Wu", "Noah Goodman" ], "title": "Multimodal generative models for scalable weakly-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Dna-gan: Learning disentangled representations from multi-attribute images", "venue": "arXiv preprint arXiv:1711.05415,", "year": 2017 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Elegant: Exchanging latent encodings with gan for transferring multiple face attributes", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning hierarchical features from deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning the characteristic factors of perceptual observations has long been desired for effective machine intelligence (Brooks, 1991; Bengio et al., 2013; Hinton & Salakhutdinov, 2006; Tenenbaum, 1998). In particular, the ability to learn meaningful factors—capturing human-understandable characteristics from data—has been of interest from the perspective of human-like learning (Tenenbaum & Freeman, 2000; Lake et al., 2015) and improving decision making and generalization across tasks (Bengio et al., 2013; Tenenbaum & Freeman, 2000).\nAt its heart, learning meaningful representations of data allows one to not only make predictions, but critically also to manipulate factors of a datapoint. For example, we might want to manipulate the age of a person in an image. Such manipulations allow for the expression of causal effects between the meaning of factors and their corresponding realizations in the data. They can be categorized into conditional generation—the ability to construct whole exemplar data instances with characteristics dictated by constraining relevant factors—and intervention—the ability to manipulate just particular factors for a given data point, and subsequently affect only the associated characteristics.\nA particularly flexible framework within which to explore the learning of meaningful representations are variational autoencoders (VAEs), a class of deep generative models where representations of data are captured in the underlying latent variables. A variety of methods have been proposed for inducing meaningful factors in this framework (Kim & Mnih, 2018; Mathieu et al., 2019; Mao et al., 2019; Kingma et al., 2014; Siddharth et al., 2017; Vedantam et al., 2018), and it has been argued that the most effective generally exploit available labels to (partially) supervise the training process (Locatello et al., 2019). Such approaches aim to associate certain factors of the representation (or equivalently factors of the generative model) with the labels, such that the former encapsulate the latter—providing a mechanism for manipulation via targeted adjustments of relevant factors.\n∗work done while at Oxford †equal contribution\nPrior approaches have looked to achieve this by directly associating certain latent variables with labels (Kingma et al., 2014; Siddharth et al., 2017; Maaløe et al., 2016). Originally motivated by the desiderata of semi–supervised classification, each label is given a corresponding latent variable of the same type (e.g. categorical), whose value is fixed to that of the label when the label is observed and imputed by the encoder when it is not.\nThough natural, we argue that this assumption is not just unnecessary but actively harmful from a representation-learning perspective, particularly in the context of performing manipulations. To allow manipulations, we want to learn latent factors that capture the characteristic information associated with a label, which is typically much richer than just the label value itself. For example, there are\nvarious visual characteristics of people’s faces associated with the label “young,” but simply knowing the label is insufficient to reconstruct these characteristics for any particular instance. 
Learning a meaningful representation that captures these characteristics, and isolates them from others, requires encoding more than just the label value itself, as illustrated in Figure 1.\nThe key idea of our work is to use labels to help capture and isolate this related characteristic information in a VAE’s representation. We do this by exploiting the interplay between the labels and inputs to capture more information than the labels alone convey; information that will be lost (or at least entangled) if we directly encode the label itself. Specifically, we introduce the characteristic capturing VAE (CCVAE) framework, which employs a novel VAE formulation that captures label characteristics explicitly in the latent space. For each label, we introduce a set of characteristic latents that are induced into capturing the characteristic information associated with that label. By coupling this with a principled variational objective and carefully structuring the characteristic-latent and label variables, we show that CCVAEs successfully capture meaningful representations, enabling better performance on manipulation tasks, while matching previous approaches for prediction tasks. In particular, they permit certain manipulation tasks that cannot be performed with conventional approaches, such as manipulating characteristics without changing the labels themselves and producing multiple distinct samples consistent with the desired intervention. We summarize our contributions as follows:\ni) showing how labels can be used to capture and isolate rich characteristic information;\nii) formulating CCVAEs, a novel model class and objective for supervised and semi-supervised learning in VAEs that allows this information to be captured effectively;\niii) demonstrating CCVAEs’ ability to successfully learn meaningful representations in practice." }, { "heading": "2 BACKGROUND", "text": "VAEs (Kingma & Welling, 2013; Rezende et al., 2014) are a powerful and flexible class of models that combine the unsupervised representation-learning capabilities of deep autoencoders (Hinton & Zemel, 1994) with generative latent-variable models—a popular tool to capture factored low-dimensional representations of higher-dimensional observations. In contrast to deep autoencoders, generative models capture representations of data not as distinct values corresponding to observations, but rather as distributions of values. A generative model defines a joint distribution over observed data x and latent variables z as pθ(x, z) = p(z)pθ(x | z). Given a model, learning representations of data can be viewed as performing inference—learning the posterior distribution pθ(z | x) that constructs the distribution of latent values for a given observation.\nVAEs employ amortized variational inference (VI) (Wainwright & Jordan, 2008; Kingma & Welling, 2013) using the encoder and decoder of an autoencoder to transform this setup by i) taking the model likelihood pθ(x | z) to be parameterized by a neural network using the decoder, and ii) constructing an amortized variational approximation qφ(z | x) to the (intractable) posterior pθ(z | x) using the encoder. The variational approximation of the posterior enables effective estimation of the objective—maximizing the marginal likelihood—through importance sampling. 
The objective is obtained by invoking Jensen’s inequality to derive the evidence lower bound (ELBO) of the model, which is given as:\nlog pθ(x) = log E_{qφ(z|x)}[ pθ(z,x) / qφ(z | x) ] ≥ E_{qφ(z|x)}[ log ( pθ(z,x) / qφ(z | x) ) ] ≡ L(x; φ, θ). (1)\nGiven observations D = {x1, . . . , xN} taken to be realizations of random variables generated from an unknown distribution pD(x), the overall objective is (1/N) ∑_n L(xn; θ, φ). Hierarchical VAEs (Sønderby et al., 2016) impose a hierarchy of latent variables, improving the flexibility of the approximate posterior; however, we do not consider these models in this work.\nSemi-supervised VAEs (SSVAEs) (Kingma et al., 2014; Maaløe et al., 2016; Siddharth et al., 2017) consider the setting where a subset of data S ⊂ D is assumed to also have corresponding labels y. Denoting the (unlabeled) data as U = D\\S, the log-marginal likelihood is decomposed as\nlog p(D) = ∑_{(x,y)∈S} log pθ(x,y) + ∑_{x∈U} log pθ(x),\nwhere the individual log-likelihoods are lower bounded by their ELBOs. Standard practice is then to treat y as a latent variable to marginalize over whenever the label is not provided. More specifically, most approaches consider splitting the latent space into z = {zy, z\\y} and then directly fixing zy = y whenever the label is provided, such that each dimension of zy explicitly represents a predicted value of a label, with this value known exactly only for the labeled datapoints. Much of the original motivation for this (Kingma et al., 2014) was based around performing semi–supervised classification of the labels, with the encoder being used to impute the values of zy for the unlabeled datapoints. However, the framework is also regularly used as a basis for learning meaningful representations and performing manipulations, exploiting the presence of the decoder to generate new datapoints after intervening on the labels via changes to zy. Our focus lies on the latter, for which we show this standard formulation leads to serious pathologies. Our primary goal is not to improve the fidelity of generations, but instead to demonstrate how label information can be used to structure the latent space such that it encapsulates and disentangles the characteristics associated with the labels." }, { "heading": "3 RETHINKING SUPERVISION", "text": "As we explained in the last section, the de facto assumption for most approaches to supervision in VAEs is that the labels correspond to a partially observed augmentation of the latent space, zy. However, this can cause a number of issues if we want the latent space to encapsulate not just the labels themselves, but also the characteristics associated with these labels. For example, we may want to encapsulate the youthful characteristics of a face, not just the fact that it is a “young” face. At an abstract level, such an approach fails to capture the relationship between the inputs and labels: it fails to isolate characteristic information associated with each label from the other information required to reconstruct data. More specifically, it fails to deal with the following issues.\nFirstly, the information in a datapoint associated with a label is richer than stored by the (typically categorical) label itself. That is not to say such information is absent when we impose zy = y, but here it is entangled with the other latent variables z\\y, which simultaneously contain the associated information for all the labels. 
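Before continuing with these issues, a concrete reference point for Eq. (1): a minimal single-sample Monte Carlo estimate of the ELBO, assuming a diagonal-Gaussian encoder and a Bernoulli likelihood over flattened inputs (both illustrative choices; `encoder` and `decoder` are hypothetical networks, not the paper's exact architecture).

```python
import torch
from torch.distributions import Normal, Bernoulli, kl_divergence

def elbo(x, encoder, decoder):
    # q_phi(z | x): diagonal Gaussian produced by the (hypothetical) encoder network
    mu, log_sigma = encoder(x)
    q = Normal(mu, log_sigma.exp())
    z = q.rsample()                        # reparameterized sample
    # log p_theta(x | z) under a Bernoulli likelihood (illustrative choice, x flattened)
    log_px_z = Bernoulli(logits=decoder(z)).log_prob(x).sum(-1)
    # KL(q_phi(z | x) || p(z)) with a standard normal prior, in closed form
    kl = kl_divergence(q, Normal(torch.zeros_like(mu), torch.ones_like(mu))).sum(-1)
    return log_px_z - kl                   # one-sample estimate of L(x; phi, theta)
```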
Moreover, when y is categorical, it can be difficult to ensure that the VAE actually uses zy, rather than just capturing information relevant to reconstruction in the higher-capacity, continuous z\\y. Overcoming this is challenging and generally requires additional heuristics and hyper-parameters.\nSecondly, we may wish to manipulate characteristics without fully changing the categorical label itself. For example, making a CelebA image depict more or less ‘smiling’ without fully changing its “smile” label. Here we do not know how to manipulate the latents to achieve this desired effect: we can only do the binary operation of changing the relevant variable in zy. Also, we often wish to keep a level of diversity when carrying out conditional generation and, in particular, interventions. For example, if we want to add a smile, there is no single correct answer for how the smile would look, but taking zy = \"smile\" only allows for a single point estimate for the change.\nFinally, taking the labels to be explicit latent variables can cause a mismatch between the VAE prior p(z) and the pushforward distribution of the data to the latent space q(z) = E_{pD(x)}[qφ(z | x)]. During training, latents are effectively generated according to q(z), but once learned, p(z) is used to make generations; variations between the two effectively correspond to a train-test mismatch. As there is a ground truth data distribution over the labels (which are typically not independent), taking the latents as the labels themselves implies that there will be a ground truth q(zy). However, as this is not generally known a priori, we will inevitably end up with a mismatch.\nWhat do we want from supervision? Given these issues, it is natural to ask whether having latents directly correspond to labels is actually necessary. To answer this, we need to think about exactly what it is we are hoping to achieve through the supervision itself. Along with uses of VAEs more generally, the three most prevalent tasks are: a) Classification, predicting the labels of inputs where these are not known a priori; b) Conditional Generation, generating new examples conditioned on those examples conforming to certain desired labels; and c) Intervention, manipulating certain desired characteristics of a data point before reconstructing it.\nInspecting these tasks, we see that for classification we need a classifier from z to y, for conditional generation we need a mechanism for sampling z given y, and for interventions we need to know how to manipulate z to bring about a desired change. None of these require us to have the labels directly correspond to latent variables. Moreover, as we previously explained, this assumption can be actively harmful, such as restricting the range of interventions that can be performed." }, { "heading": "4 CHARACTERISTIC CAPTURING VARIATIONAL AUTOENCODERS", "text": "To correct the issues discussed in the last section, we suggest eschewing the treatment of labels as direct components of the latent space and instead employing them to condition latent variables which are designed to capture the characteristics. To this end, we similarly split the latent space into two components, z = {zc, z\\c}, but where zc, the characteristic latent, is now designed to capture the characteristics associated with labels, rather than directly encode the labels themselves. 
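A minimal sketch of the latent split z = {zc, z\c} just introduced. The slicing granularity (a fixed number of dimensions per label) and the network shapes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    """Illustrative: encoder whose output is partitioned into z_c (one slice
    per label) and z_rest (everything not associated with any label)."""
    def __init__(self, x_dim, n_labels, z_rest_dim, dims_per_label=1):
        super().__init__()
        self.dims_per_label = dims_per_label
        self.n_char = n_labels * dims_per_label
        z_dim = self.n_char + z_rest_dim
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))

    def forward(self, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        z = mu + log_sigma.exp() * torch.randn_like(mu)   # reparameterized sample
        z_c, z_rest = z[..., :self.n_char], z[..., self.n_char:]
        # per-label characteristic slices z_c^i, each of size dims_per_label
        z_c_slices = z_c.split(self.dims_per_label, dim=-1)
        return z_c_slices, z_rest
```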
In this breakdown, z\\c is intended only to capture information not directly associated with any of the labels, unlike z\\y, which was still tasked with capturing the characteristic information.\nFor the purposes of exposition, and purely to demonstrate how one might apply this schema, we first consider a standard VAE with a latent space z = {zc, z\\c}. The latent representation of the VAE will implicitly contain the characteristic information required to perform classification; however, the structure of the latent space will be arranged to optimize for reconstruction, and characteristic information may be entangled between zc and z\\c. If we were now to jointly learn a classifier—from zc to y—with the VAE, we would arrive at the following objective:\nJ = ∑_{x∈U} LVAE(x) + ∑_{(x,y)∈S} ( LVAE(x) + α E_{qφ(z|x)}[log qϕ(y | zc)] ), (2)\nwhere α is a hyperparameter. There will be pressure on the encoder to place characteristic information in zc, which can be interpreted as a stochastic layer containing the information needed for classification and reconstruction1. The classifier thus acts as a tool allowing y to influence the structure of z; it is this high-level concept, i.e. using y to structure z, that we utilize in this work.\nHowever, in general, the characteristics of different labels will be entangled within zc. Though it will contain the required information, the latents will typically be uninterpretable, and it is unclear how we could perform conditional generation or interventions. To disentangle the characteristics of different labels, we further partition the latent space, such that the classification of a particular label yi only has access to particular latents zic, and thus log qϕ(y | zc) = ∑_i log qϕi(yi | zic). This has the critical effect of forcing the characteristic information needed to classify yi to be stored only in the corresponding zic, providing a means to encapsulate such information for each label separately. We further see that it addresses many of the prior issues: there are no measure-theoretic issues as zic is not discrete, diversity in interventions is achieved by sampling different zic for a given label, zic can be manipulated while remaining within class decision boundaries, and a mismatch between p(zc) and q(zc) does not manifest as there is no ground truth for q(zc).\nHow to conditionally generate or intervene when training with (2) is not immediately obvious though. However, the classifier implicitly contains the requisite information to do this via inference in an implied Bayesian model. For example, conditional generation needs samples from p(zc) that classify to the desired labels, e.g. through rejection sampling. See Appendix A for further details." }, { "heading": "4.1 THE CHARACTERISTIC CAPTURING VAE", "text": "One way to address the need for inference is to introduce a conditional generative model pψ(zc | y), simultaneously learned alongside the classifier introduced in (2), along with a prior p(y). This approach, which we term the CCVAE, allows the required sampling for conditional generations and interventions directly.\n1Though, for convenience, we implicitly assume here, and through the rest of the paper, that the labels are categorical such that the mapping zc → y is a classifier, we note that the ideas apply equally well if some labels are actually continuous, such that this mapping is now a probabilistic regression.
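For concreteness, a sketch of the auxiliary-classifier objective in Eq. (2), assuming per-label classifier `heads` acting on the corresponding characteristic slices; the `elbo_terms` helper is hypothetical and stands in for a standard VAE ELBO computation that also returns the latent slices.

```python
import torch.nn.functional as F

def objective_eq2(x, y, labelled, encoder, decoder, heads, alpha=1.0):
    # Sketch of Eq. (2): a standard ELBO plus, for labelled points, an
    # alpha-weighted per-label classifier on the characteristic slices z_c^i.
    z_c_slices, z_rest, elbo = encoder.elbo_terms(x, decoder)  # hypothetical helper
    loss = -elbo
    if labelled:
        # log q_phi(y | z_c) = sum_i log q_phi_i(y^i | z_c^i), binary labels assumed
        log_qy = sum(
            -F.binary_cross_entropy_with_logits(
                head(z_i).squeeze(-1), y[:, i], reduction="none")
            for i, (head, z_i) in enumerate(zip(heads, z_c_slices)))
        loss = loss - alpha * log_qy
    return loss.mean()
```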
Further, by persisting with the latent partitioning above, we can introduce a factorized set of generative models p(zc | y) = ∏_i p(zic | yi), enabling easy generation and manipulation of each zic individually. CCVAE ensures that labels remain a part of the model for unlabeled datapoints, which transpires to be important for effective learning in practice.\nTo address the issue of learning, we perform variational inference, treating y as a partially observed auxiliary variable. The final graphical model is illustrated in Figure 2. The CCVAE can be seen as a way of combining top-down and bottom-up information to obtain a structured latent representation. However, it is important to highlight that CCVAE does not contain a hierarchy of latent variables. Unlike a hierarchical VAE, reconstruction is performed only from z ∼ qφ(z | x) without going through the “deeper” y, as doing so would lead to a loss of information due to the bottleneck of y. By enforcing each label variable to link to different characteristic-latent dimensions, we are able to isolate the generative factors corresponding to different label characteristics." }, { "heading": "4.2 MODEL OBJECTIVE", "text": "We now construct an objective function that encapsulates the model described above, by deriving a lower bound on the full model log-likelihood, which factors over the supervised and unsupervised subsets as discussed in § 2. The supervised objective can be defined as\nlog pθ,ψ(x,y) ≥ E_{qϕ,φ(z|x,y)}[ log ( pθ(x | z)pψ(z | y)p(y) / qϕ,φ(z | x,y) ) ] ≡ LCCVAE(x,y), (3)\nwith pψ(z | y) = p(z\\c)pψ(zc | y). Here, we avoid directly modeling qϕ,φ(z | x,y); instead, we leverage the conditional independence x ⊥ y | z, along with Bayes rule, to give\nqϕ,φ(z | x,y) = qϕ(y | zc)qφ(z | x) / qϕ,φ(y | x), where qϕ,φ(y | x) = ∫ qϕ(y | zc)qφ(z | x) dz.\nUsing this equivalence in (3) yields (see Appendix B.1 for a derivation and numerical details)\nLCCVAE(x,y) = E_{qφ(z|x)}[ ( qϕ(y | zc) / qϕ,φ(y | x) ) log ( pθ(x | z)pψ(z | y) / (qϕ(y | zc)qφ(z | x)) ) ] + log qϕ,φ(y | x) + log p(y). (4)\nNote that a classifier term log qϕ,φ(y | x) falls out naturally from the derivation, unlike in previous models (e.g. Kingma et al. (2014); Siddharth et al. (2017)). Not placing the labels directly in the latent space is crucial for this feature. When defining latents to directly correspond to labels, observing both x and y detaches the mapping qϕ,φ(y | x) between them, resulting in the parameters (ϕ, φ) not being learned—motivating the addition of an explicit (weighted) classifier. Here, however, observing both x and y does not detach any mapping, since they are always connected via an unobserved random variable zc, and hence no additional terms are needed. From an implementation perspective, this classifier strength can be increased; we experimented with this, but found that adjusting the strength had little effect on the overall classification accuracies. We consider this insensitivity to be a significant strength of this approach, as the model is able to apply enough pressure to the latent space to obtain high classification accuracies without having to hand-tune parameter values. The gradient norm of the classifier parameters suffers from high variance during training; we find that not reparameterizing through zc in qϕ(y | zc) reduces this effect and aids training (see Appendix C.3.1 for details).\nFor the datapoints without labels, we can again perform variational inference, treating the labels as random variables. 
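Before turning to the unlabeled case, a single-sample sketch of the supervised bound in Eq. (4). The `model` interface (encode/classify/decode/prior/log_py) is an assumed container for the CCVAE components, x is assumed flattened, and qϕ,φ(y | x) is estimated with K Monte Carlo samples.

```python
import math
import torch

def supervised_bound(x, y, model, K=64):
    # Single-sample sketch of Eq. (4); `model` is an assumed container
    # exposing the CCVAE components as torch Distributions.
    qz = model.encode(x)                                  # q_phi(z | x)
    z = qz.rsample()
    log_qy_zc = model.classify(z).log_prob(y).sum(-1)     # log q_varphi(y | z_c)
    # K-sample Monte Carlo estimate of log q_{varphi,phi}(y | x)
    zK = qz.rsample((K,))
    log_qy_x = (torch.logsumexp(model.classify(zK).log_prob(y).sum(-1), dim=0)
                - math.log(K))
    w = (log_qy_zc - log_qy_x).exp()                      # weight q(y | z_c) / q(y | x)
    log_ratio = (model.decode(z).log_prob(x).sum(-1)      # log p_theta(x | z)
                 + model.prior(y).log_prob(z).sum(-1)     # log p_psi(z | y)
                 - log_qy_zc
                 - qz.log_prob(z).sum(-1))                # log q_phi(z | x)
    return (w * log_ratio + log_qy_x + model.log_py(y)).mean()
```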
Specifically, the unsupervised objective, LCCVAE(x), derives as the standard (unsupervised) ELBO. However, it requires marginalising over labels as p(z) = p(zc)p(z\\c) = p(z\\c) ∑_y p(zc | y)p(y). This can be computed exactly, but doing so can be prohibitively expensive if the number of possible label combinations is large. In such cases, we apply Jensen’s inequality a second time to the expectation over y (see Appendix B.2) to produce a looser, but cheaper to calculate, ELBO given as\nLCCVAE(x) = E_{qφ(z|x)qϕ(y|zc)}[ log ( pθ(x | z)pψ(z | y)p(y) / (qϕ(y | zc)qφ(z | x)) ) ]. (5)\nCombining (4) and (5), we get the following lower bound on the log probability of the data:\nlog p(D) ≥ ∑_{(x,y)∈S} LCCVAE(x,y) + ∑_{x∈U} LCCVAE(x), (6)\nwhich, unlike prior approaches, faithfully captures the variational free energy of the model. As shown in § 6, this enables a range of new capabilities and behaviors to encapsulate label characteristics." }, { "heading": "5 RELATED WORK", "text": "The seminal work of Kingma et al. (2014) was the first to consider supervision in the VAE setting, introducing the M2 model for semi–supervised classification, which was also the first approach to place labels directly in the latent space. The related approach of Maaløe et al. (2016) augments the encoding distribution with an additional, unobserved latent variable, enabling better semi-supervised classification accuracies. Siddharth et al. (2017) extended the above work to automatically derive the regularised objective for models with arbitrary (pre-defined) latent dependency structures. The approach of placing labels directly in the latent space was also adopted in Li et al. (2019). Regarding the disparity between continuous and discrete latent variables in typical semi-supervised VAEs, Dupont (2018) provides an approach to enable effective unsupervised learning in this setting.\nFrom a purely modeling perspective, there also exists prior work on VAEs involving hierarchies of latent variables, exploring richer higher-order inference and issues with redundancy among latent variables, both in unsupervised (Ranganath et al., 2016; Zhao et al., 2017) and semi-supervised (Maaløe et al., 2017; 2019) settings. In the unsupervised case, these hierarchical variables do not have a direct interpretation, but exist merely to improve the flexibility of the encoder. The semi-supervised approaches extend the basic M2 model to hierarchical VAEs by incorporating the labels as an additional latent (see Appendix F in Maaløe et al., 2019, for example), and hence must incorporate additional regularisers in the form of classifiers, as in the case of M2. Moreover, by virtue of the typical dependencies assumed between labels and latents, it is difficult to disentangle the characteristics just associated with the label from the characteristics associated with the rest of the data—something we capture using our simpler split latents (zc, z\\c).\nFrom a more conceptual standpoint, Mueller et al. (2017) introduces interventions (called revisions) on VAEs for text data, regressing to auxiliary sentiment scores as a means of influencing the latent variables. This formulation is similar to (2) in spirit, although in practice they employ a range of additional factoring and regularizations particular to their domain of interest, in addition to training models in stages, involving different objective terms. 
Nonetheless, they share our desire to enforce meaningfulness in the latent representations through auxiliary supervision.\nAnother related approach involves explicitly treating labels as another data modality (Vedantam et al., 2018; Suzuki et al., 2017; Wu & Goodman, 2018; Shi et al., 2019). This work is motivated by the need to learn latent representations that jointly encode data from different modalities. Looking back to (3), by refactoring p(z | y)p(y) as p(y | z)p(z), and taking q(z | x,y) = G(q(z | x), q(z | y)), one derives multi-modal VAEs, where G can construct a product (Wu & Goodman, 2018) or mixture (Shi et al., 2019) of experts. Of these, the MVAE (Wu & Goodman, 2018) is more closely related to our setup here, as it explicitly targets cases where alternate data modalities are labels. However, it differs in that the latent representations are not structured explicitly to map to distinct classifiers, and it does not explore the question of explicitly capturing the label characteristics. The JLVM model of Adel et al. (2018) is similar to the MVAE, but is motivated from an interpretability perspective—with labels providing ‘side-channel’ information to constrain latents. They adopt a flexible normalising-flow posterior from data x, along with a multi-component objective that is additionally regularised with the information bottleneck between data x, latent z, and label y.\nDIVA (Ilse et al., 2019) introduces a similar graphical model to ours, but is motivated to learn a generalized classifier for different domains. The objective is formed of a classifier which is regularized by a variational term, requiring additional hyper-parameters and preventing the ability to disentangle the representations. In Appendix C.4 we propose some modifications to DIVA that allow it to be applied in our problem domain.\nIn terms of interpretability, the work of Ainsworth et al. (2018) is closely related to ours, but they focus primarily on group data and do not introduce labels. Here the authors employ sparsity in the multiple linear transforms for each decoder (one for each group) to encourage certain latent dimensions to encapsulate certain factors in the sample, thus introducing interpretability into the model. Tangentially to VAEs, similar objectives of structuring the latent space using GANs also exist (Xiao et al., 2017; 2018), although they focus purely on interventions and cannot perform conditional generation, classification, or likelihood estimation." }, { "heading": "6 EXPERIMENTS", "text": "Following our reasoning in § 3, we now showcase the efficacy of CCVAE for the three broad aims of (a) intervention, (b) conditional generation and (c) classification for a variety of supervision rates, denoted by f. Specifically, we demonstrate that CCVAE is able to: encapsulate characteristics for each label in an isolated manner; introduce diversity in the conditional generations; permit a finer control on interventions; and match traditional metrics of baseline models. Furthermore, we demonstrate that no existing method is able to perform all of the above,2 highlighting its sophistication over existing methods. We compare against: M2 (Kingma et al., 2014); MVAE (Wu & Goodman, 2018); and our modified version of DIVA (Ilse et al., 2019). 
See Appendix C.4 for details.\nTo demonstrate the capture of label characteristics, we consider the multi-label setting and utilise the Chexpert (Irvin et al., 2019) and CelebA (Liu et al., 2015) datasets.3 For CelebA, we restrict ourselves to the 18 labels which are distinguishable in reconstructions; see Appendix C.1 for details. We use the architectures from Higgins et al. (2016) for the encoder and decoder. The label-predictive distribution qϕ(y | zc) is defined as Ber(y | πϕ(zc)) with a diagonal transformation πϕ(·) enforcing qϕ(y | zc) = ∏_i qϕi(yi | zci). The conditional prior pψ(zc | y) is then defined as N(zc | µψ(y), diag(σ2ψ(y))) with appropriate factorization, and has its parameters also derived through MLPs. See Appendix C.3 for further details." }, { "heading": "6.1 INTERVENTIONS", "text": "If CCVAE encapsulates characteristics of a label in a single latent (or small set of latents), then it should be able to smoothly manipulate these characteristics without severely affecting others. This allows for finer control during interventions, which is not possible when the latent variables directly correspond to labels. To demonstrate this, we traverse two dimensions of the latent space and display the reconstructions in Figure 3. These examples indicate that CCVAE is indeed able to smoothly manipulate characteristics. For example, in b) we are able to induce varying skin tones rather than have this be a binary intervention on paleskin, unlike DIVA in a). In c), the zic associated with the necktie label has also managed to encapsulate information about whether someone is wearing a shirt or is bare-necked. No such traversals are possible for M2 and it is not clear how one would do them for MVAE; additional results, including traversals for DIVA, are given in Appendix D.2." }, { "heading": "6.2 DIVERSITY OF GENERATIONS", "text": "Label characteristics naturally encapsulate diversity (e.g. there are many ways to smile) which should be present in the learned representations. By virtue of the structured mappings between labels and characteristic latents, and since zc is parameterized by continuous distributions, CCVAE is able to capture diversity in representations, allowing exploration for an attribute (e.g. smile) while preserving other characteristics. This is not possible with labels directly defined as latents, as only discrete choices can be made—diversity can only be introduced here by sampling from the unlabeled latent space—which necessarily affects all other characteristics. To demonstrate this, we reconstruct multiple times with z = {zc ∼ pψ(zc | y), z\\c} for a fixed z\\c. We provide qualitative results in Figure 4.\n2DIVA can perform the same tasks as CCVAE but only with the modifications we ourselves suggest, and still not to a comparable quality.\n3CCVAE is well-suited to multi-label problems, but also works on multi-class problems. See Appendix D.6 for results and analyses on MNIST and FashionMNIST.\nIf several samples are taken from zc ∼ pψ(zc | y) when intervening on only a single characteristic, the resulting variations in pixel values should be focused around the locations relevant to that characteristic, e.g. pixel variations should be focused around the neck when intervening on necktie. To demonstrate this, we perform single interventions on each class, and take multiple samples of zc ∼ pψ(zc | y). 
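A sketch of this resampling procedure and of the per-pixel variance map reported next; the `model` attribute names (encode/decode/cond_prior/n_char) are assumptions.

```python
import torch

@torch.no_grad()
def pixel_variance_under_intervention(x, y_new, model, n_samples=32):
    # Sketch: hold z_rest fixed, repeatedly resample z_c ~ p_psi(z_c | y_new),
    # and measure where the reconstructions vary. Interface names are assumed.
    z = model.encode(x).mean                      # posterior mean for the input
    z_rest = z[..., model.n_char:]
    recons = []
    for _ in range(n_samples):
        z_c = model.cond_prior(y_new).sample()    # z_c ~ p_psi(z_c | y_new)
        recons.append(model.decode(torch.cat([z_c, z_rest], dim=-1)).mean)
    recons = torch.stack(recons)                  # (n_samples, B, C, H, W)
    return recons.var(dim=0)                      # per-pixel variance map
```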
We then display the variance of each pixel in the reconstruction in green in Figure 5, where it can be seen that, generally, there is only variance in the spatial locations expected. Interestingly, for the class smile (2nd from right), there is variance in the jaw line, suggesting that the model is able to capture more subtle components of variation than just the mouth." }, { "heading": "6.3 CLASSIFICATION", "text": "To demonstrate that reparameterizing the labels in the latent space does not hinder classification accuracy, we inspect the predictive ability of CCVAE across a range of supervision rates, given in Table 1. It can be observed that CCVAE generally obtains prediction accuracies slightly superior to other models. We emphasize here that CCVAE’s primary purpose is not to achieve better classification accuracies; we are simply checking that it does not harm them, which it most clearly does not." }, { "heading": "6.4 DISENTANGLEMENT OF LABELED AND UNLABELED LATENTS", "text": "If a model can correctly disentangle the label characteristics from other generative factors, then manipulating z\\c should not change the label characteristics of the reconstruction. To demonstrate this, we perform “characteristic swaps,” where we first obtain z = {zc, z\\c} for a given image, then swap in the characteristics zc to another image before reconstructing. This should apply the exact characteristics, not just the label, to the scene/background of the other image (cf. Figure 6).\nComparing CCVAE to our baselines in Figure 7, we see that CCVAE is able to transfer the exact characteristics to a greater extent than other models. Particular attention is drawn to the preservation of labeled characteristics in each row, where CCVAE is able to preserve characteristics, like the precise skin tone and hair color of the pictures on the left. We see that M2 is only able to preserve the label and not the exact characteristic, while MVAE performs very poorly, effectively ignoring the attributes entirely. Our modified DIVA variant performs reasonably well, but less reliably and at the cost of reconstruction fidelity compared to CCVAE.\nAn ideal characteristic swap should not change the probability assigned by a pre-trained classifier between the original image and a swapped one. We employ this as a quantitative measure, reporting the average difference in log probabilities for multiple swaps in Table 2. CCVAE is able to preserve the characteristics to a greater extent than other models. DIVA’s performance is largely due to its heavier weighting on the classifier, which adversely affects reconstructions, as seen earlier." }, { "heading": "7 DISCUSSION", "text": "We have presented a novel mechanism for faithfully capturing label characteristics in VAEs, the characteristic capturing VAE (CCVAE), which captures label characteristics explicitly in the latent space while eschewing direct correspondences between label values and latents. This has allowed us to encapsulate and disentangle the characteristics associated with labels, rather than just the label values. We are able to do so without affecting the ability to perform the tasks one typically does in the (semi-)supervised setting—namely classification, conditional generation, and intervention. 
In particular, we have shown that, not only does this lead to more effective conventional label-switch interventions, it also allows for more fine-grained interventions to be performed, such as producing diverse sets of samples consistent with an intervened label value, or performing characteristic swaps between datapoints that retain relevant features." }, { "heading": "8 ACKNOWLEDGMENTS", "text": "TJ, PHST, and NS were supported by the ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1 and EPSRC/MURI grant EP/N019474/1. Toshiba Research Europe also supports TJ. TJ would also like to thank Dr. M. Stoddart. PHST would also like to acknowledge the Royal Academy of Engineering and FiveAI.\nSMS was partially supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/K503113/1.\nTR’s research leading to these results has received funding from a Christ Church Oxford Junior Research Fellowship and from Tencent AI Labs." }, { "heading": "A CONDITIONAL GENERATION AND INTERVENTION FOR EQUATION (2)", "text": "For the model trained using (2) as the objective to be usable, we must consider whether it can carry out the classification, conditional generation, and intervention tasks outlined previously. Of these, classification is straightforward, but it is less apparent how the others could be performed. The key here is to realize that the classifier itself implicitly contains the information required to perform these tasks.\nConsider first conditional generation, and note that we still have access to the prior p(z) as per a standard VAE. One simple way of performing conditional generation would be to conduct rejection sampling, where we draw samples ẑ ∼ p(z) and then accept these if and only if they lead to the classifier predicting the desired labels up to a desired level of confidence, i.e. qϕ(y | ẑc) > λ, where 0 < λ < 1 is some chosen confidence threshold. Though such an approach is likely to be highly inefficient for any general p(z) due to the curse of dimensionality, in the standard setting where each dimension of z is independent, this rejection sampling can be performed separately for each zic, making it relatively efficient. More generally, we have that conditional generation becomes an inference problem where we wish to draw samples from\np(z | {qϕ(y | zc) > λ}) ∝ p(z) I(qϕ(y | zc) > λ).\nInterventions can also be performed in an analogous manner. Namely, for a conventional intervention where we change one or more labels, we can simply resample the zic associated with those labels, thereby sampling new characteristics to match the new labels. Further, unlike prior approaches, we can perform alternative interventions too. For example, we might attempt to find the closest zic to the original that leads to the class label changing; this can be done in a manner akin to how adversarial attacks are performed. Alternatively, we might look to manipulate the zic without actually changing the class itself to see what other characteristics are consistent with the labels.\nTo summarize, (2) yields an objective which provides a way of learning semi-supervised VAEs that avoids the pitfalls of directly fixing the latents to correspond to labels. It still allows us to perform all the tasks usually associated with semi-supervised VAEs, and in fact allows a more general form of interventions to be performed. However, this comes at the cost of requiring inference to perform conditional generation or interventions. 
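A minimal sketch of the per-dimension rejection sampler described above, for a single binary label with a standard normal prior on zic; `classifier_head` is a hypothetical per-label head mapping a one-dimensional latent to a logit.

```python
import torch

@torch.no_grad()
def rejection_sample_zc(classifier_head, y_i, lam=0.9, max_tries=1000):
    """Sketch: draw z_c^i ~ N(0, 1) until the classifier head assigns the
    desired label value y_i probability above the threshold lam."""
    for _ in range(max_tries):
        z_i = torch.randn(1)                       # prior p(z_c^i) = N(0, 1)
        p = torch.sigmoid(classifier_head(z_i))    # q(y^i = 1 | z_c^i)
        p = p if y_i == 1 else 1 - p
        if p.item() > lam:
            return z_i
    raise RuntimeError("no accepted sample; lower lam or increase max_tries")
```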
Further, as the label variables y are absent when the labels are unobserved, there may be empirical complications with forcing all the denotational information to be encoded into the appropriate characteristic latent zic. In particular, we still have a hyperparameter α that must be carefully tuned to ensure the appropriate balance between classification and reconstruction." }, { "heading": "B MODEL FORMULATION", "text": "" }, { "heading": "B.1 VARIATIONAL LOWER BOUND", "text": "In this section we provide the mathematical details of our objective function. We show how to derive it as a lower bound on the marginal model likelihood and show how we estimate the model components.\nThe variational lower bound for the generative model in Figure 2 is given as\nLCCVAE = ∑_{x∈U} LCCVAE(x) + ∑_{(x,y)∈S} LCCVAE(x,y),\nLCCVAE(x,y) = E_{qφ(z|x)}[ ( qϕ(y | zc) / qϕ,φ(y | x) ) log ( pθ(x | z)pψ(z | y) / (qϕ(y | zc)qφ(z | x)) ) ] + log qϕ,φ(y | x) + log p(y),\nLCCVAE(x) = E_{qφ(z|x)qϕ(y|zc)}[ log ( pθ(x | z)pψ(zc | y)p(y) / (qϕ(y | zc)qφ(z | x)) ) ].\nThe overall likelihood in the semi-supervised case is given as\npθ(D) = ∏_{(x,y)∈S} pθ(x,y) ∏_{x∈U} pθ(x).\nTo derive a lower bound for the overall objective, we need to obtain lower bounds on log pθ(x) and log pθ(x,y). When the labels are unobserved, the latent state will consist of z and y. Using the factorization according to the graph in Figure 2 yields\nlog pθ(x) ≥ E_{qφ(z|x)qϕ(y|zc)}[ log ( pθ(x | z)pψ(z | y)p(y) / (qϕ(y | zc)qφ(z | x)) ) ],\nwhere pψ(z | y) = p(z\\c)pψ(zc | y). For supervised data points we consider a lower bound on the likelihood pθ(x,y),\nlog pθ(x,y) ≥ ∫ log ( pθ(x | z)pψ(z | y)p(y) / qϕ,φ(z | x,y) ) qϕ,φ(z | x,y) dz.\nIn order to make sense of the term qϕ,φ(z | x,y), which is usually different from qφ(z | x), we consider the inference model\nqϕ,φ(z | x,y) = qϕ(y | zc)qφ(z | x) / qϕ,φ(y | x), where qϕ,φ(y | x) = ∫ qϕ(y | zc)qφ(z | x) dz.\nReturning to the lower bound on log pθ(x,y), we obtain\nlog pθ(x,y) ≥ ∫ log ( pθ(x | z)pψ(z | y)p(y) / q(z | x,y) ) q(z | x,y) dz\n= ∫ log ( pθ(x | z)pψ(z | y)p(y)qϕ,φ(y | x) / (qϕ(y | zc)qφ(z | x)) ) ( qϕ(y | zc)qφ(z | x) / qϕ,φ(y | x) ) dz\n= E_{qφ(z|x)}[ ( qϕ(y | zc) / qϕ,φ(y | x) ) log ( pθ(x | z)pψ(z | y) / (qϕ(y | zc)qφ(z | x)) ) ] + log qϕ,φ(y | x) + log p(y),\nwhere qϕ(y | zc)/qϕ,φ(y | x) denotes the Radon-Nikodym derivative of qϕ,φ(z | x,y) with respect to qφ(z | x)." }, { "heading": "B.2 ALTERNATIVE DERIVATION OF UNSUPERVISED BOUND", "text": "The bound for the unsupervised case can alternatively be derived by applying Jensen’s inequality twice. First, use the standard (unsupervised) ELBO\nlog pθ(x) ≥ E_{qφ(z|x)}[ log ( pθ(x | z)p(z) / qφ(z | x) ) ].\nNow, since calculating p(z) = p(zc)p(z\\c) = p(z\\c) ∑_y p(zc | y)p(y) can be expensive, we can apply Jensen’s inequality a second time, to the expectation over y, to obtain\nlog p(zc) ≥ E_{qϕ(y|zc)}[ log ( pψ(zc | y)p(y) / qϕ(y | zc) ) ].\nSubstituting this bound into the unsupervised ELBO again yields our bound\nlog pθ(x) ≥ E_{qφ(z|x)qϕ(y|zc)}[ log ( pθ(x | z)pψ(z | y)p(y) / (qφ(z | x)qϕ(y | zc)) ) ]. (7)" }, { "heading": "C IMPLEMENTATION", "text": "" }, { "heading": "C.1 CELEBA", "text": "We chose to use only a subset of the labels present in CelebA, since not all attributes are visually distinguishable in the reconstructions (e.g. earrings). As such we limited ourselves to the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young. 
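A sketch of this label subsetting (with the 64 × 64 resize mentioned in the next sentence), assuming torchvision's CelebA attribute names match the labels above.

```python
import torch
from torchvision import datasets, transforms

KEPT = ["Arched_Eyebrows", "Bags_Under_Eyes", "Bangs", "Black_Hair", "Blond_Hair",
        "Brown_Hair", "Bushy_Eyebrows", "Chubby", "Eyeglasses", "Heavy_Makeup",
        "Male", "No_Beard", "Pale_Skin", "Receding_Hairline", "Smiling",
        "Wavy_Hair", "Wearing_Necktie", "Young"]

celeba = datasets.CelebA(
    root="data", split="train", target_type="attr", download=True,
    transform=transforms.Compose([transforms.Resize((64, 64)),
                                  transforms.ToTensor()]))
# indices of the 18 kept attributes within the 40 CelebA attributes
idx = torch.tensor([celeba.attr_names.index(a) for a in KEPT])
x, attrs = celeba[0]
y = attrs[idx].float()          # 18-dimensional binary label vector
```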
No images were omitted or cropped; the only modifications were keeping the aforementioned labels and resizing the images to be 64 × 64 in dimension." }, { "heading": "C.2 CHEXPERT", "text": "The Chexpert dataset comprises chest X-rays taken from a variety of patients. We down-sampled each image to be 64 × 64 and used the same networks as in the CelebA experiments. The five main attributes for Chexpert are: cardiomegaly, edema, consolidation, atelectasis, pleural effusion, which, for non-medical experts, can be interpreted as: enlargement of the heart; fluid in the alveoli; fluid in the lungs; collapsed lung; fluid in the corners of the lungs." }, { "heading": "C.3 IMPLEMENTATION DETAILS", "text": "For our experiments we define the generative and inference networks as follows. The approximate posterior is represented as qφ(z | x) = N(zc, z\\c | µφ(x), diag(σ2φ(x))), with µφ(x) and diag(σ2φ(x)) given by the architecture from Higgins et al. (2016). The generative model pθ(x | z) is represented by a Laplace distribution, again parametrized using the architecture from Higgins et al. (2016). The label-predictive distribution qϕ(y | zc) is represented as Ber(y | πϕ(zc)), with πϕ(zc) being a diagonal transformation forcing the factorisation qϕ(y | zc) = ∏_i qϕi(yi | zci). The conditional prior is given as pψ(zc | y) = N(zc | µψ(y), diag(σ2ψ(y))), with the appropriate factorisation, where the parameters are represented by an MLP. Finally, the prior placed on the portion of the latent space reserved for unlabelled latent variables is p(z\\c) = N(z\\c | 0, I). For the latent space, zc ∈ R^mc and z\\c ∈ R^m\\c, where m = mc + m\\c, with mc = 18 and m\\c = 27 for CelebA. The architectures are given in Table 3.\nOptimization We trained the models on a GeForce GTX Titan GPU. Training consumed ∼2GB of memory for CelebA and Chexpert, taking around 2 hours to complete 100 epochs. Both models were optimized using Adam with a learning rate of 2 × 10−4." }, { "heading": "C.3.1 HIGH VARIANCE OF CLASSIFIER GRADIENTS", "text": "The gradients of the classifier parameters ϕ suffer from high variance during training. We find that not reparameterizing zc for qϕ(y | zc) reduces this issue:\nLCCVAE(x,y) = E_{qφ(z|x)}[ ( qϕ(y | z̄c) / qϕ,φ(y | x) ) log ( pθ(x | z)pψ(z | y) / (qϕ(y | z̄c)qφ(z | x)) ) ] + log qϕ,φ(y | x) + log p(y), (8)\nwhere z̄c indicates that we do not reparameterize the sample. This significantly reduces the variance of the magnitude of the gradient norm ∇ϕ, allowing the classifier to learn appropriate weights and structure the latent space. This can be seen in Figure 8, where we plot the gradient norm of ϕ when we do reparameterize zc (blue) and when we do not (orange). Clearly, not reparameterizing leads to a lower variance in the gradient norm of the classifier, which aids learning. To a certain extent these gradients can be viewed as redundant, as there are already gradients to update the predictive distribution due to the log qϕ,φ(y | x) term anyway.\nFigure 8: Gradient norms of classifier." }, { "heading": "C.4 MODIFIED DIVA", "text": "The primary goal of DIVA is domain-invariant classification, and not to obtain representations of individual characteristics as we do here. The objective is essentially a classifier which is regularized by a variational objective. However, to achieve domain generalization, the authors aim to disentangle the domain, class and other generative factors. 
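Before continuing with the DIVA modification, a brief sketch of two of the components above: the MLP conditional prior pψ(zc | y) and the non-reparameterized classifier input from C.3.1, realized here with a simple detach (an illustrative choice for blocking the reparameterization gradient, not necessarily the exact mechanism used).

```python
import torch.nn as nn
from torch.distributions import Normal, Bernoulli

class CondPrior(nn.Module):
    # p_psi(z_c | y): an MLP mapping labels to a diagonal Gaussian (sketch)
    def __init__(self, n_labels, zc_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_labels, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * zc_dim))

    def forward(self, y):
        mu, log_sigma = self.net(y).chunk(2, dim=-1)
        return Normal(mu, log_sigma.exp())

def classify(pi, z_c):
    # q_varphi(y | z_c) with a diagonal transformation pi; detaching z_c stops
    # the high-variance reparameterization gradient discussed in C.3.1
    return Bernoulli(logits=pi(z_c.detach()))
```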
This motivation leads to a graphical model that is similar in spirit to ours (Figure 9), in that the latent variables are used to predict labels, and in the introduction of an inductive bias to partition the latent space. As such, DIVA can be modified to suit our problem of encapsulating characteristics. The first modification we need to consider is the removal of zd, as we are not considering multi-domain problems. Secondly, we introduce the factorization present in CCVAE, namely qϕ(y | zc) = ∏_i qϕi(yi | zci). With these two modifications an alternative objective can now be constructed, with the supervised part given as\nLS_DIVA(x,y) = E_{qφ(z|x)} log pθ(x | z) − βKL(qφ(z\\c | x) || p(z\\c)) − βKL(qφ(zc | x) || pψ(zc | y)),\nand the unsupervised part as\nLU_DIVA(x) = E_{qφ(z|x)} log pθ(x | z) − βKL(qφ(z\\c | x) || p(z\\c)) + βE_{qφ(zc|x)qϕ(y|zc)}[log pψ(zc | y) − log qφ(zc | x)] + βE_{qφ(zc|x)qϕ(y|zc)}[log p(y) − log qϕ(y | zc)],\nwhere y has to be imputed. The final objective for DIVA is then given as\nlog pθ(D) ≥ ∑_{(x,y)∈S} LS_DIVA(x,y) + ∑_{x∈U} [ LU_DIVA(x) + αE_{qφ(zc|x)} log qϕ(y | zc) ].\nIt is interesting to note the differences to the objective of CCVAE: namely, there is no emergence of a natural classifier in the supervised case, and y has to be imputed in the unsupervised case instead of relying on variational inference as in CCVAE. Clearly such differences have a significant impact on performance, as demonstrated by the main results of this paper." }, { "heading": "D ADDITIONAL RESULTS", "text": "" }, { "heading": "D.1 SINGLE INTERVENTIONS", "text": "Here we demonstrate single interventions where we change the binary value for the desired attributes. To quantitatively evaluate the single interventions, we intervene on a single label and report the changes in log-probabilities assigned by a pre-trained classifier. If the single intervention only affects the characteristics of the chosen label, then there should be no change in other classes and only a change on the chosen label. Intervening on all possible labels yields a confusion matrix, with the optimal results being a diagonal matrix with zero off-diagonal elements. We also report the condition number for the confusion matrices, given in the titles.\nIt is interesting to note that the interventions for CCVAE are subtle; this is due to the latent zic ∼ p(zic | yi), which will be centered around the mean. More striking interventions can be achieved by traversing along zic." }, { "heading": "D.2 LATENT TRAVERSALS", "text": "Here we provide more latent traversals for CCVAE in Figure 18 and for DIVA in Figure 19. CCVAE is able to smoothly alter characteristics, indicating that it is able to encapsulate characteristics in a single dimension, unlike DIVA, which is unable to alter the characteristics effectively, suggesting it cannot encapsulate the characteristics." }, { "heading": "D.3 GENERATION", "text": "We provide results for the fidelity of image generation on CelebA. To do this we use the FID metric (Heusel et al., 2017); we omitted results for Chexpert as the inception model used in FID has not been trained on the typical features associated with X-rays. The results are given in Table 4; interestingly, for low supervision rates MVAE obtains the best performance, but for higher supervision rates M2 outperforms MVAE. We posit that this is due to MVAE having little structure imposed on the latent space; as such, the product of experts (PoE) can structure the representation purely for reconstruction without considering the labels, something which is not possible as the supervision rate is increased. 
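For reference, the Fréchet distance computation underlying the FID numbers in Table 4, sketched on pre-extracted Inception features (the feature extraction itself is elided):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID-style Frechet distance between two sets of (pre-extracted)
    Inception features, shape (n_samples, feat_dim)."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))
```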
CCVAE obtains results competitive with M2. It is important to note that generative fidelity is not the focus of this work, as we focus purely on how to structure the latent space using labels. It is then no surprise that the generations are poor, since structuring the latent space is potentially at odds with the reconstruction term in the loss." }, { "heading": "D.4 CONDITIONAL GENERATION", "text": "To assess conditional generation, we first train an independent classifier for both datasets. We then conditionally generate samples given labels and evaluate them using this pre-trained classifier. Results are provided in Table 5. CCVAE and M2 are comparable in generative ability, but DIVA and MVAE perform poorly, with accuracies indicative of random guessing." }, { "heading": "D.5 DIVERSITY OF CONDITIONAL GENERATIONS", "text": "We also report more examples for diversity, as in Figure 5, in Figure 20." }, { "heading": "D.6 MULTI-CLASS SETTING", "text": "Here we provide results for the multi-class setting of MNIST and FashionMNIST. The multi-class setting is somewhat tangential to our work, but we include it for completeness. For CCVAE, we have some flexibility over the size of the latent space. Trying to encapsulate representations for each label is not well suited to this setting, as it is not clear how one could alter the representation of an image being a 6 whilst preserving the representation of it being an 8. In fact, there is really only one label in this setting, but it takes multiple values. With this in mind, we can now make an explicit choice about how the latent space will be structured: we can set zc ∈ R or zc ∈ R^N, or conversely store all of the representation in zc, i.e. z_{\c} = ∅. Furthermore, we do not need to enforce the factorization qϕ(y | zc) = ∏_i q(y_i | z_{c_i}); it can instead be parameterized by a function F : R^N → R^M, where M is the number of possible classes.

Classification We provide the classification results in Table 6.

Conditional Generation We provide classification accuracies for a pre-trained classifier using conditionally generated samples as input and the condition as the label. We also report the mutual information to give an indication of how out-of-distribution the samples are. In order to estimate the uncertainty, we transform a fixed pre-trained classifier into a Bayesian predictive classifier that integrates over the posterior distribution of the parameters ω as p(y | x, D) = ∫ p(y | x, ω) p(ω | D) dω. The utility of classifier uncertainties for out-of-distribution detection has previously been explored by Smith & Gal (2018), where dropout is also used at test time to estimate the mutual information (MI) between the predicted label y and the parameters ω (Gal, 2016; Smith & Gal, 2018) as

I(y, ω | x, D) = H[p(y | x, D)] − E_{p(ω|D)}[ H[p(y | x, ω)] ].

However, the Monte Carlo (MC) dropout approach has the disadvantage of requiring ensembling over multiple instances of the classifier for a robust estimate, and repeated forward passes through the classifier to estimate MI. To mitigate this, we instead employ a sparse variational GP (with 200 inducing points) as a replacement for the last linear layer of the classifier, fitting just the GP to the data and labels while holding the rest of the classifier fixed. This, in our experience, provides a more robust and cheaper alternative to MC-dropout for estimating MI. Results are provided in Table 7.
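For concreteness, the MC-dropout estimate of the mutual information above (the simpler baseline that the sparse variational GP replaces) can be sketched as follows. This is a minimal illustration, not the code used for the results in Table 7; `predict_proba` and all other names are assumptions.

```python
import numpy as np

def mc_mutual_information(predict_proba, x, n_samples=50, eps=1e-12):
    """Monte Carlo estimate of I(y, w | x, D) = H[p(y|x,D)] - E_w H[p(y|x,w)].

    predict_proba: a stochastic classifier, e.g. a network with dropout kept
    active at test time; each call returns one draw p(y | x, w) as a vector
    over classes. Averaging the draws approximates the predictive p(y | x, D).
    """
    draws = np.stack([predict_proba(x) for _ in range(n_samples)])     # (S, C)
    mean_p = draws.mean(axis=0)                                        # p(y | x, D)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum()           # H[p(y | x, D)]
    mean_entropy = -(draws * np.log(draws + eps)).sum(axis=1).mean()   # E_w H[p(y | x, w)]
    return entropy_of_mean - mean_entropy
```

Low MI indicates that the sampled classifiers agree on an input, so conditionally generated samples with high MI can be flagged as out-of-distribution.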
Latent Traversals We can also perform latent traversals for the multi-class setting. Here, we perform linear interpolation on the polytope whose corners are obtained from the network µψ(y) for four different classes. We provide the reconstructions in Figure 21.

Diversity in Conditional Generations Here we show how we can introduce diversity in the conditional generations whilst keeping attributes such as pen stroke and orientation constant. Inspecting the M2 results in Figures 22 and 23, where we have to sample from z to introduce diversity, indicates that we are unable to introduce diversity without affecting other attributes.

Interventions We can also perform interventions on individual classes, as shown in Figure 24." } ]
2021
CAPTURING LABEL CHARACTERISTICS IN VAEs
SP:c80e745edb60717dcaa312fb3c01723bdb72f81d
[ "This paper presents a new algorithm called Regioned Episodic Reinforcement Learning (RERL), which combines ideas from episodic memory, with automatic sub-goal creation or “goal-oriented” RL. The method works by dividing the state space into regions, where a different goal identifies each region. Then, using an episodic memory technique, the agent is able to learn about new experiences in a sample efficient way. This allows the agent to explore effectively, and learn a good policy quickly in problems where there are sparse rewards. The paper provides some theoretical justification for the new algorithm, and provides some empirical results that demonstrate its effectiveness. " ]
Goal-oriented reinforcement learning algorithms are often good at exploration, not exploitation, while episodic algorithms excel at exploitation, not exploration. As a result, neither of these approaches alone can lead to a sample-efficient algorithm in complex environments with high-dimensional state spaces and delayed rewards. Motivated by these observations and shortcomings, in this paper, we introduce Regioned Episodic Reinforcement Learning (RERL), which combines the strengths of episodic and goal-oriented learning and leads to a more sample-efficient and effective algorithm. RERL achieves this by decomposing the space into several sub-space regions and constructing regions that lead to more effective exploration and high-value trajectories. Extensive experiments on various benchmark tasks show that RERL outperforms existing methods in terms of sample efficiency and final rewards.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Kavosh Asadi", "Dipendra Misra", "Michael L Littman" ], "title": "Lipschitz continuity in model-based reinforcement learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Adrià Puigdomènech Badia", "Bilal Piot", "Steven Kapturowski", "Pablo Sprechmann", "Alex Vitvitskyi", "Daniel Guo", "Charles Blundell" ], "title": "Agent57: Outperforming the atari human benchmark", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Akhil Bagaria", "George Konidaris" ], "title": "Option discovery using deep skill chaining", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In ICML,", "year": 2009 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Charles Blundell", "Benigno Uria", "Alexander Pritzel", "Yazhe Li", "Avraham Ruderman", "Joel Z Leibo", "Jack Rae", "Daan Wierstra", "Demis Hassabis" ], "title": "Model-free episodic control", "venue": null, "year": 2016 }, { "authors": [ "Chris Drummond" ], "title": "Accelerating reinforcement learning by composing solutions of automatically identified subtasks", "venue": "Journal of Artificial Intelligence Research,", "year": 2002 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Benjamin Eysenbach", "Xinyang Geng", "Sergey Levine", "Ruslan Salakhutdinov" ], "title": "Rewriting history with inverse rl: Hindsight inference for policy improvement", "venue": "arXiv preprint arXiv:2002.11089,", "year": 2020 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Roy Fox", "Sanjay Krishnan", "Ion Stoica", "Ken Goldberg" ], "title": "Multi-level discovery of deep options", "venue": "In arXiv,", "year": 2017 }, { "authors": [ "Kevin Frans", "Jonathan Ho", "Xi Chen", "Pieter Abbeel", "John Schulman" ], "title": "Meta learning shared hierarchies", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Itzhak Gilboa", "David Schmeidler" ], "title": "Case-based decision theory", "venue": "The quarterly Journal of economics,", "year": 1995 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Guy Hacohen", "Daphna Weinshall" ], "title": "On the power of curriculum learning in training deep networks", "venue": "arXiv preprint arXiv:1904.03626,", "year": 2019 }, { "authors": [ 
"Steven Hansen", "Alexander Pritzel", "Pablo Sprechmann", "André Barreto", "Charles Blundell" ], "title": "Fast deep reinforcement learning using online adjustments from the past", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Yuu Jinnai", "Jee Won Park", "Marlos C Machado", "George Konidaris" ], "title": "Exploration in reinforcement learning with deep covering options", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "George Konidaris", "Andrew Barto" ], "title": "Skill discovery in continuous reinforcement learning domains using skill chaining", "venue": "NeurIPS,", "year": 2009 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Su Young Lee", "Choi Sungik", "Sae-Young Chung" ], "title": "Sample-efficient deep reinforcement learning via episodic backward update", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Máté Lengyel", "Peter Dayan" ], "title": "Hippocampal contributions to control: the third way", "venue": "In NeurIPS,", "year": 2008 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Zichuan Lin", "Tianqi Zhao", "Guangwen Yang", "Lintao Zhang" ], "title": "Episodic memory deep q-networks", "venue": "arXiv preprint arXiv:1805.07603,", "year": 2018 }, { "authors": [ "Marlos C Machado", "Clemens Rosenbaum", "Xiaoxiao Guo", "Miao Liu", "Gerald Tesauro", "Murray Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv preprint arXiv:1710.11089,", "year": 2017 }, { "authors": [ "Joseph R Manns", "Ramona O Hopkins", "Larry R Squire" ], "title": "Semantic memory and the human hippocampus", "venue": null, "year": 2003 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "On the effectiveness of least squares generative adversarial networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "David Marr", "David Willshaw", "Bruce McNaughton" ], "title": "Simple memory: a theory for archicortex", "venue": "In From the Retina to the Neocortex", "year": 1991 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aaron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Sujoy Paul", "Jeroen Vanbaar", "Amit Roy-Chowdhury" ], "title": "Learning from trajectories via subgoal 
discovery", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": null, "year": 2018 }, { "authors": [ "Vitchyr Pong", "Shixiang Gu", "Murtaza Dalal", "Sergey Levine" ], "title": "Temporal difference models: Modelfree deep rl for model-based control", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Alexander Pritzel", "Benigno Uria", "Sriram Srinivasan", "Adria Puigdomenech", "Oriol Vinyals", "Demis Hassabis", "Daan Wierstra", "Charles Blundell" ], "title": "Neural episodic control", "venue": null, "year": 2017 }, { "authors": [ "Zhizhou Ren", "Kefan Dong", "Yuan Zhou", "Qiang Liu", "Jian Peng" ], "title": "Exploration via hindsight goal generation", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Matthew J Salganik", "Peter Sheridan Dodds", "Duncan J Watts" ], "title": "Experimental study of inequality and unpredictability in an artificial cultural", "venue": "market. science,", "year": 2006 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In ICML,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In ICML,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Wenling Shang", "Alex Trott", "Stephan Zheng", "Caiming Xiong", "Richard Socher" ], "title": "Learning world graphs to accelerate hierarchical reinforcement learning", "venue": "In ICML Workshop,", "year": 2019 }, { "authors": [ "Özgür Şimşek", "Alicia P Wolfe", "Andrew G Barto" ], "title": "Identifying useful subgoals in reinforcement learning by local graph partitioning", "venue": "In ICML,", "year": 2005 }, { "authors": [ "Robert J Sutherland", "Jerry W Rudy" ], "title": "Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia", "venue": "Psychobiology,", "year": 1989 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Alexander Trott", "Stephan Zheng", "Caiming Xiong", "Richard Socher" ], "title": "Keeping your distance: Solving sparse reward tasks using self-balancing shaped rewards", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "The laplacian in rl: Learning representations with efficient approximations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Guangxiang Zhu", "Zichuan Lin", "Guangwen Yang", "Chongjie Zhang" ], "title": "Episodic reinforcement learning with associative memory", "venue": "In ICLR,", "year": 2019 } ]
[ { "heading": null, "text": "Goal-oriented reinforcement learning algorithms are often good at exploration, not exploitation, while episodic algorithms excel at exploitation, not exploration. As a result, neither of these approaches alone can lead to a sample-efficient algorithm in complex environments with high dimensional state space and delayed rewards. Motivated by these observations and shortcomings, in this paper, we introduce Regioned Episodic Reinforcement Learning (RERL) that combines the episodic and goal-oriented learning strengths and leads to a more sample efficient and effective algorithm. RERL achieves this by decomposing the space into several sub-space regions and constructing regions that lead to more effective exploration and high values trajectories. Extensive experiments on various benchmark tasks show that RERL outperforms existing methods in terms of sample efficiency and final rewards." }, { "heading": "1 INTRODUCTION", "text": "Despite its notable success, the application of reinforcement learning (RL) still suffers from sample efficiency in real-world applications. To achieve human-level performance, episodic RL (Pritzel et al., 2017; Lee et al., 2019) is proposed to construct episodic memory, enabling the agent to assimilate new experiences and act upon them rapidly. While episodic algorithms work well for tasks where it is easy to collect valuable trajectories and easy to design dense reward functions, both of these requirements become roadblocks when applying to complex environments with sparse reward. Goal-oriented RL (Andrychowicz et al., 2017; Paul et al., 2019) decomposes the task into several goal-conditioned tasks, where the intrinsic reward is defined as the success probability of reaching each goal by the current policy and the ability to guide the agent to finally reach the target state. These methods intend to explore more unique trajectories and use all trajectories in the training procedure, which may involve unrelated ones and result in inefficient exploitation. In this paper, we propose a novel framework that can combine the strengths of episodic and goal-oriented algorithms and thus can efficiently explore and rapidly exploit high-value trajectories.\nThe inefficient learning of deep RL has several plausible explanations. In this work, we focus on addressing these challenges: (C1) Environments with a sparse reward signal can be difficult to learn, as there may be very few instances where the reward is non-zero. Goal-oriented RL can mitigate this issue by building intrinsic reward signals (Ren et al., 2019), but suffer from the difficulty of generating appropriate goals from high-dimensional space. (C2) Training goal-oriented RL models using all historical trajectories rather than selected ones would involve unrelated trajectories in training. The training process of goal generation algorithms could be unstable and inefficient (Kumar et al., 2019), as data distribution shifts when the goal changes. It can be fairly efficient if updates happen only with highly related trajectories. (C3) Redundant exploration is another issue that limits the performance as it is inefficient for the agent to explore the same areas repeatedly (Ostrovski et al., 2017). 
Instead, it would be much more sensible for the agent to learn to divide the task into several sub-tasks, avoiding redundant exploration.

In this paper, we propose Regioned Episodic Reinforcement Learning (RERL), which tackles the limitations of deep RL listed above and demonstrates dramatic improvements in a wide range of environments. Our work is, in part, inspired by studies in psychology and cognitive neuroscience (Lengyel & Dayan, 2008; Manns et al., 2003), which discover that when we observe an event, we scan through the memory storing this kind of event and seek experiences related to it. Our agent regionalizes the historical trajectories into several region-based memories∗. At each timestep, the region controller evaluates each region and selects one for further exploration and exploitation. Each memory binds a specific goal to a series of goal-oriented trajectories and uses a value-based look-up to retrieve highly related, high-quality trajectories when updating the value function. We adopt hindsight (i.e., the goal state is always generated from visited states in the memory) and diversity (i.e., the goal state should be distant from previous goal states in other memories) constraints in goal generation, for goal reachability and agent exploration respectively. This architecture conveys several benefits: (1) We can automatically construct region-based memories through goal-oriented exploration, where trajectories guided by the same goal share one memory (see Section 3.1). (2) Within each memory, we alleviate the high-dimensionality issue (C1) by enforcing that the goal space is a set of visited states (see Section 3.2). (3) In order to improve efficiency in exploitation (C2), our architecture stabilizes training using trajectories within the memory instead of randomly selected transitions (see Section 3.3 for details). (4) Our algorithm takes previous goals in other memories into account when generating a goal in the current memory. Specifically, we propose the diversity constraint to encourage the agent to explore unknown states (see Section 3.2), which aims at improving exploration efficiency (C3). The contributions of this paper are as follows: (1) We introduce RERL, a novel framework that combines the strengths of episodic RL and goal-oriented RL for efficient exploration and exploitation. (2) We propose hindsight and diversity constraints in goal generation, which allow the agent to construct and update the regioned memories automatically. (3) We evaluate RERL in challenging robotic environments and show that our method can naturally handle sparse-reward environments without any additional prior knowledge or manually modified reward functions. RERL can be closely incorporated with various policy networks such as deep deterministic policy gradient (DDPG (Lillicrap et al., 2015)) and proximal policy optimization (PPO (Schulman et al., 2017)). Further, ablation studies demonstrate that our exploration strategy is robust across a wide set of hyper-parameters." }, { "heading": "2 PRELIMINARIES", "text": "In RL (Sutton & Barto, 2018), the goal of an agent is to maximize its expected cumulative reward by interacting with a given environment. The RL problem can be formulated as a Markov Decision Process (MDP) defined by a tuple (S, A, P, r, γ), where S is the state space, A is the action space, P : S × A → ∆(S) is the state transition probability distribution, r : S × A → [0, 1] is the reward function, and γ ∈ [0, 1) is the discount factor for future rewards.
Our objective is to find a stochastic policy π : S × A → [0, 1) that maximizes the expected cumulative reward Rt = Σ_{k=0}^{T} γ^k r_{t+k} within the MDP, where T is the episode length. In the finite-horizon setting, the state-action value function Q^π(s, a) = E[Rt | st = s, at = a] is the expected return for executing action a in state s and following π afterward. The value function can be defined as

V^π(s) := E[ Σ_{k=0}^{T} γ^k r(s_{t+k}, a_{t+k}) | st = s, π ], ∀s ∈ S, (1)

where T is the episode length and the goal of the agent is to maximize the expected return of each state st. Deep Q Network (DQN, (Mnih et al., 2015)) utilizes an off-policy learning strategy, which samples (st, at, rt, st+1) tuples from a replay buffer for training. It is a typical parametric RL method and suffers from sample inefficiency due to slow gradient-based updates. The key idea of episodic RL is to store good past experiences in a tabular, non-parametric memory and to rapidly latch onto past successful policies when encountering similar states, instead of waiting for many optimization steps. However, in environments with sparse rewards, there may be very few instances where the reward is non-zero, making it difficult for an agent to find good past experiences. Goal-oriented RL has been proposed to address this issue. In the goal-conditioned setting that we use here, the policy and the reward are also conditioned on a goal g ∈ G (Schaul et al., 2015). The distance function d (used to define goal completion and to generate a sparse reward upon the completion of a goal) may be exposed as a shaped intrinsic reward without any additional domain knowledge: r(st, at | g) = 1 if d(φ(·|st+1), g) ≤ δ, and r(st, at | g) = −d(φ(·|st+1), g) otherwise, where φ : S → G is a known and tractable mapping†. While we expect the cooperation of the goal generation and the distance function to lead the agent to the final state (global optimum) by itself, in practice we need to account for local optima arising from the state-space structure or transition dynamics (Trott et al., 2019). Once we can generate an appropriate goal g and anti-goal ḡ, we are able to redefine the intrinsic reward function as:

r(st, at | g, ḡ) := 1 if d(φ(·|st+1), g) ≤ δ, and r(st, at | g, ḡ) := min[0, −d(φ(·|st+1), g) + d(φ(·|st+1), ḡ)] otherwise, (2)

where st+1 ∼ P(·|st, at) denotes the next state; φ : S → G is the extended joint generation for both the goal and anti-goal; ḡ ∈ G is the anti-goal and acts as a state that the agent should avoid, which prevents the policy from getting stuck at a local optimum and enables the agent to learn to reach the goal location quickly (Trott et al., 2019); and δ is a given threshold indicating whether the goal is considered to be reached (Plappert et al., 2018).

Algorithm 1 Framework of RERL
1: repeat
2: Select a region together with its region-based memory.
3: Generate goals for exploration with goal-oriented RL.
4: Interact with the environment.
5: Store historical trajectories into the memory.
6: Update value estimates for exploitation with episodic RL.
7: until the Q function converges.

∗The common idea our method shares with neuroscience is utilizing highly related information to promote learning efficiency. The difference is that memories are regioned according to the generated goals in this paper, and fictions in cognitive neuroscience.
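For illustration, the shaped reward of Eq. (2) can be written in a few lines. This is a minimal sketch assuming a Euclidean distance d and an identity mapping φ; the function and parameter names here are ours, not the paper's.

```python
import numpy as np

def intrinsic_reward(next_state, goal, anti_goal, phi=lambda s: s, delta=0.05):
    """Eq. (2): +1 once the goal is reached within threshold delta; otherwise a
    non-positive shaping term that pulls towards the goal g and pushes away
    from the anti-goal g_bar. Euclidean d and identity phi are assumptions."""
    achieved = phi(np.asarray(next_state, dtype=float))
    d_goal = np.linalg.norm(achieved - np.asarray(goal, dtype=float))
    if d_goal <= delta:
        return 1.0
    d_anti = np.linalg.norm(achieved - np.asarray(anti_goal, dtype=float))
    return min(0.0, -d_goal + d_anti)
```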
To make use of r(st, at | g, ḡ) in practice, we require a method to dynamically estimate the local optima that frustrate learning, without relying on domain expertise or hand-picked estimates.

The idea of the universal value function (Schaul et al., 2015) is to use a universal function approximator to represent a large number of value functions. In the goal-oriented scenario, the value function conditioned on any given goal g and anti-goal ḡ can be defined as

V^π(s, g, ḡ) := E_{at∼π(·|st,g,ḡ), st+1∼P(·|st,at)} [ Σ_{t=1}^{T} γ^t r(st, at | g, ḡ) | st = s ]. (3)

Let X := {x | x = (s, g, ḡ)} denote the joint set over the state and goal spaces. Specifically, we define x∗ ∈ X over the initial state s0 ∈ S, initial goal g∗ ∈ G, and initial anti-goal ḡ∗ ∈ G. At the start of every goal-oriented task (Plappert et al., 2018), an initial–terminal state pair is drawn from the task distribution. In this paper, we regard the terminal state as the original goal g∗ and set the original anti-goal ḡ∗ to the initial state to encourage the agent to explore at the beginning. In this setting, the agent tries to find a policy π that maximizes the expected discounted cumulative reward V^π(x∗). From the comparison of Eqs. (1) and (3), one can see that the critical point for goal-oriented RL is to generate appropriate goals. However, as stated in (Ren et al., 2019), in goal-oriented RL the value function V^π(x) is optimized with respect to a shifting goal-conditioned task distribution, which makes learning unstable. This issue requires RL algorithms to rapidly obtain value estimates under the current goal-conditioned tasks, which is the strength of episodic RL. For convenience, we replace all (s, g, ḡ) tuples with x in the following context." }, { "heading": "3 REGIONED EPISODIC REINFORCEMENT LEARNING", "text": "

†The definition of φ depends on the definitions of the state and goal, and varies across environments (Ren et al., 2019). For example, the goal only indicates the designated position of the destination in the Ant Maze environment (see Figure 4(b)); thus, in this case the mapping is defined as a mapping from a system state to the position of the destination.

The basic idea behind this paper is to 'divide and conquer'‡ the exploration and exploitation problems in RL. First, we adopt goal-oriented RL to 'divide' the state space into several regions, where each region is identified by a specific goal. We then utilize episodic RL to 'conquer': we store and learn from highly related, high-quality trajectories in region-based memories. The overall framework is called Regioned Episodic Reinforcement Learning (RERL).

We provide the overall algorithm in Algorithm 1 and an illustration in Figure 1; RERL combines the strengths of goal-oriented RL and episodic RL to perform efficient exploration (illustrated as the orange part of Figure 2) and exploitation (illustrated as the blue part of Figure 2). In the following, we first introduce the definition of regions together with region-based memories in Section 3.1. To automatically obtain regions during exploration, we generate appropriate goals that guide the agent under hindsight and diversity constraints in Section 3.2. Since the goal of any RL agent is to learn a policy that maximizes the expected return, we present the value-estimation update based on region-based memory in Section 3.3."
}, { "heading": "3.1 CONSTRUCT REGION-BASED MEMORY", "text": "Following (Florensa et al., 2017), many previous goal-oriented RL works (Ren et al., 2019; Asadi et al., 2018) adopt Assumption 1 to guarantees continuous goal-space representation.\nAssumption 1. A value function V π(x) has Lipschitz continuity over goal g and anti-goal ḡ, which can be formulated as\n|V π(x)− V π(x′)| ≤ L · d(x, x′), (4)\nwhere L denotes the Lipschitz constant. Considering that this Lipschitz continuity may not hold for every x ∈ X , we partition the joint set X into several subsets. If d(x, x′) is not too large within each sub-set, generally speaking, it is reasonable to claim that the bound Eq. (4) holds for most scenarios. In this paper, we define these sub-spaces as regions. We formulate the definition as follows:\nDefinition 1. Considering that X satisfies X = ⋃Ni=1 Xi and Xi⋂Xj = ∅, ∀i, j = 1, 2, . . . , N and i 6= j, we define each subset Xi as a region, where N is the number of regions.\nRegion Controller\nRegion 1\n...\n...\nKey\nRegion 2\n...\nValue Key Key\nKey\nValue Value\nValue\nMemory\nKey Key Key\nKey\nValue Value Value\nValue\nMemory\nFigure 3: An illustration for region-based memory.\nAn ideal partition strategy should divide state space into several parts, and each part leads to one meaningful goal (e.g., in the case of exploring a large house, ideal partition strategy should divide the house into separated rooms). RL algorithms explore each partition while ignoring other state space, thus significantly reducing exploration complexity. However, one should note that it is impractical to find the perfect partition strategy without any task-specific manual engineering. One possible solution to automatically generate these regions is to bind each region with a series of goals. In other words, we can design a region-based goal generation where at each timestep, we pick up one region and update the goal within the region. This architecture conveys several advantages: (1) It allows the agent to solve a complex environment through ’divide-and-\nconquer’. (2) Goal generation is modified within a sub-space, which can improve the stability.\nIn order to achieve this, we construct region-based memories based on historical trajectories. Specifically, for each region-based memory Mn,we have a simple memory module Mn(x) = (K(x), V π(x)), where x ∈ Xn, K(x) = (φ(·|s), g, ḡ) is the key of the memory and V π(x) is the value of the memory. As shown in Figure 8, each memory binds a specific region. At each episode, the region controller selects the region-based memory containing the highest value state for further exploration and exploitation. The motivation behind this is very intuitive that the agent always focuses on the region with the highest potential. However, directly adopting this greedy operation may lead to the phenomena of rich-get-richer (Salganik et al., 2006). Instead, we adopt\n‡Different from traditional divide-and-conquer algorithms, we ‘divide and conquer’ the problem with only one round of problem division instead of using a recursive way.\nBoltzmann softmax to select one region Xn. We use Xn to denote a division of joint set X , which is conceptual and not accessible. In the practice, we use the trajectories stored inMn instead.\nSelected-Mn = exp(maxm∈Mn Vm/ι)∑N i=1 exp(maxm∈Mi Vm/ι) , (5)\nwhere ι denotes the temperature hyper-parameter to control the exploration rate, Vm is the value of the sampled memorym, andN is the number of regions. 
In practice, we set the initial temperature at 1.0, then gradually reduce the temperature to 0.01 to limit region level exploration. After selecting a region Xn, the agent will focus on performing efficient exploration and exploitation upon the historical experience in its associated memory Mn. We here prove that the value optimization problem in a region-based setting is a relaxed lower bound for the original one through Proposition 1.\nProposition 1. Given the joint set X and several region-based sets (i.e., sub-sets) Xn, where n = 1, 2, . . . , N and N is the number of regions, we have\n∀π, max x∈X V π(x) ≥ max x∈{x1,x2...,xN} V π(x), where xn = arg max xn∈Xn V π(xn). (6)\nProof. The proof of Proposition 1 is provided in Appendix C.1." }, { "heading": "3.2 EXPLORE WITH GOAL-ORIENTED REINFORCEMENT LEARNING", "text": "In this section, we aim to find appropriate goals for exploration. In this paper, we analyze that appropriate goals should have the following three properties, namely (1) high value (close to terminal state), (2) reachability (appropriate for current policy), and (3) exploratory potential (explore unvisited states). To this end, we search for high-value states, according to Eq. (3), under hindsight and diversity constraints. Based on Assumption 1, we can easily derive that\n∀xn ∈ Xn, x′n ∈ X ′n, V π(xn) ≥ V π(x′n)− L · d(xn, x′n). (7)\nJointly considering Eqs. (9) and (7), optimizing cumulative rewards in Eq. (3) can be relaxed into the following surrogate problem:\nmax π,x∈{x1,x2,...,xN} V π(x), where xn = arg max xn∈Xn {V π(xn)−L ·d(xn, x∗)}, n = 1, 2, 3, . . . , N, (8)\nNote that this new objective function is intuitive. Instead of directly optimizing with x∗, which is likely to be hard, we hope to find a collection of surrogate sets x ∈ X , which benefit the exploration, ease the optimization, and are close to or converge towards x∗. However, as stated in (Ren et al., 2019), the joint optimization of π and x is non-trivial due to high-dimensional observation and shifting distribution during optimization. In order to find appropriate states for goal generation and make the system stable, we then introduce two constraints, namely hindsight constraint for reachability and diversity constraint for exploratory potential.\nHindsight Constraint. In order to guarantee goal reachability and improve learning stability, we adopt the idea of hindsight goals (Andrychowicz et al., 2017), which means G ⊆ S. We first enforce X on a finite set of Z particles that can only be from those already achieved states from trajectories {τ} in the current memoryMk, which means that the support of X should be base onMn. DeepQ Network (DQN, (Mnih et al., 2015)) parameterizes the action-value function by deep neural networks Qθ(s, a) using Q-learning (Watkins & Dayan, 1992) to learn which action is the best to take at the timestep t. According to Eq. (8), one can see that we are aiming to find high-value states with similar goal-conditioned tasks. Based on the components of region-based memories, we rank and select top-Z trajectories {τz}Zz=1, where τz = {szt } corresponding to goal-oriented task xz , to maximize ∑Z z=1 w(xz, τz), where w(xz, τz) is defined as\nw(xz, τz) := α d(xz, x ∗) + min\nst∈τz\n( ‖φ(·|st)− g∗‖ − 1\nL V π(xz)\n) , (9)\nwhere the first term is to measure the goal-conditioned task similarity with the key in the memory, and the second term is to select high-value states, and α is the hyperparameter to balance these two terms.\nDiversity Constraint. 
Diversity Constraint. In order to encourage the agent to explore unvisited states and to avoid overlap among the regions, we adopt a diversity constraint in goal generation. We can then re-formulate Eq. (9) as

w(xz, τz) := α / d(xz, x∗) + min_{st∈τz} [ ‖φ(·|st) − g∗‖ − (1/L) V^π(xz) − (1/β)(1/N) Σ_{j∈−n} ‖φ(·|st) − gj‖ ], (10)

where β adjusts the weight of the diversity constraint, and −n denotes the set of indices except n. The motivation behind this is intuitive: since goals in goal-oriented RL indicate the direction of exploration, the generated goal is expected to differ from the historical goals in other regions. The formulation of our goal generation can then be derived from Eq. (10) as

g = φ( · | argmin_{st∈τz} ( ‖φ(·|st) − g∗‖ − (1/L) V^π(xz) − (1/β)(1/N) Σ_{j∈−n} ‖φ(·|st) − gj‖ ) ), (11)

where τz is obtained by maximizing Σ_{z=1}^{Z} w(xz, τz), with w(xz, τz) defined according to Eq. (10). For the anti-goal generation, we directly assign the visited state with the average value in the region as the anti-goal. The original motivation for the anti-goal setting is to avoid local optima, and it can further be described as a reward-shaping technique (Trott et al., 2019). An illustrated example of the goal generation is shown in Appendix B.1." }, { "heading": "3.3 EXPLOIT WITH EPISODIC REINFORCEMENT LEARNING", "text": "Similar to previous episodic RL algorithms (Lin et al., 2018; Zhu et al., 2019), we adopt region-based memories to maintain the historically highest value V^π(xt) for each joint state-goal distribution and action pair. When encountering a new state, the agent looks up and updates the corresponding memory according to the following equation:

V^π(xt) ← max(V^π(xt), Rt) if Mn(xt) ∈ Mn, and V^π(xt) ← Rt otherwise. (12)

When the goal changes (g → g′), the agent is required to conduct goal relabeling, similar to (Andrychowicz et al., 2017). That is, the agent first updates the key (K(x) → K(x′)), then re-calculates the reward (Rt → R′t) and updates the value according to Eq. (12). Note that RERL enables the agent to rapidly assimilate new experiences by looking up the region-based memory, improving sample efficiency. Furthermore, slowly changing the goal-conditioned tasks guarantees stability by restricting goal updates to within each region. Based on the up-to-date region-based memories, our algorithm can be adapted to various RL training algorithms. We give a proof of convergence in Appendix C.2: our algorithm converges to a unique optimal point when using Q-learning for value learning.

Overall Algorithm. We provide the overall algorithm as Algorithm 2 in Appendix A. We also provide some other views, including curriculum learning and maximum-entropy reinforcement learning, to better understand how RERL works; please refer to Appendix B.2 and B.3 for details. A sketch of the memory update is given below.
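Building on the RegionMemory sketch from Section 3.1, the episodic update of Eq. (12) and the goal-relabeling step can be sketched as follows. The reward recomputation reuses the intrinsic_reward sketch from Section 2, and the rounding-based keys are a simplifying assumption, not the authors' implementation.

```python
import numpy as np

def episodic_update(memory, key, ret):
    """Eq. (12): keep the historically highest return for a stored key,
    otherwise initialise the entry with the new return."""
    if key in memory.table:
        memory.table[key] = max(memory.table[key], ret)
    else:
        memory.table[key] = ret

def relabel(memory, transitions, new_goal, new_anti_goal, gamma=0.99):
    """After a goal change g -> g', rewrite the keys, recompute rewards under
    the new goal pair via Eq. (2), and refresh the stored values."""
    ret = 0.0
    for s, a, s_next in reversed(transitions):        # accumulate discounted return
        r = intrinsic_reward(s_next, new_goal, new_anti_goal)   # Eq. (2) sketch
        ret = r + gamma * ret
        key = (tuple(np.round(s_next, 2)),
               tuple(np.round(new_goal, 2)),
               tuple(np.round(new_anti_goal, 2)))
        episodic_update(memory, key, ret)
```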
" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we perform an experimental evaluation of the proposed method of learning from trajectories and compare it with other state-of-the-art methods. We also perform an ablation study over different settings of our framework. We provide experimental results to answer the following questions:

1. Can our RERL approach obtain better convergence in various environments?
2. Can our goal generation enhance the RL method to achieve asymptotic performance with higher efficiency?
3. Can our RERL tackle a complex multi-path goal distribution?
4. Can our RERL scale to higher-dimensional goal spaces?
5. Do our generated goals really encourage exploration?

To answer the first two questions, we demonstrate our method in two challenging robotic locomotion tasks (see Figure 5(a)(b)). To answer the third question, we train an ant agent to reach any position within a multi-path maze (see Figure 5(c)). To answer the fourth question, we investigate how our method performs as the dimension of the goal space of an environment increases (see Figure 5(d) for the 3D case). To answer the final question, we conduct a visualization study (see Figure 6) of the generated goals. Specifically, we conduct extensive comparisons with existing approaches:

• HER: Andrychowicz et al. (2017) introduced Hindsight Experience Replay, which constructs imaginary goals in a simple heuristic way to tackle the sparse-reward issue.
• HGG: Ren et al. (2019) proposed Hindsight Goal Generation, incorporated with DDPG (Lillicrap et al., 2015), which generates valuable hindsight goals to guide the agent.
• AutoGG: Florensa et al. (2017) leveraged Least-Squares GAN (Mao et al., 2018) to mimic the set of Goals of Intermediate Difficulty as an automatic goal generator.
• SR: Trott et al. (2019) proposed a novel framework named Sibling Rivalry, accompanied by PPO (Schulman et al., 2017), for learning from sibling trajectories with a self-balancing reward.
• POINT: Jinnai et al. (2019) proposed to extend covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration.
• EMDQN: Lin et al. (2018) leverages episodic memory to supervise an agent during training.

Note that RERL can be closely incorporated with policy networks such as A2C (Mnih et al., 2016), DDPG (Lillicrap et al., 2015), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), Soft AC (Haarnoja et al., 2018), etc. Detailed descriptions of the experimental settings and implementation details can be found in Appendix D.1 and D.3. In this paper, we implement HGG+DDPG and SR+PPO, as originally proposed.

Ant Locomotion. We test RERL in two challenging environments in which a complex robotic agent navigates either free space (Free Ant, Figure 4(a)) or a U-shaped maze (Maze Ant, Figure 4(b)). Duan et al. (2016) described the task of trying to reach the other end of the U-turn, and they show that standard RL methods are unable to solve it. We further extend the task to evaluate whether the agent is able to reach any given position (ε-balls depicted in red) within the maze for Maze Ant, or within the target square for Free Ant. As shown in Figure 5(a)(b), the performance of our approach exceeds that of the baselines above.

Multi-Path Point-Mass Maze. We show that RERL is efficient at tracking clearly multi-path distributions of goals. To this end, we introduce a new maze environment with multiple paths, as illustrated in Figure 4(c). As in the experiment above, our task is to learn a policy that can reach any feasible center of mass (x, y) corresponding to ε-balls in state space, like the one depicted in red. As shown in Figure 5(c), our approach obtains better performance even in this multi-path environment, where the goal distribution is naturally more complex than in the previous environments (see Appendix E.2 for a demonstration).

N-dimensional Point-Mass Maze. We use an N-dimensional point mass to demonstrate the performance of our method as the state-space dimension increases.
As shown in Figure 5(d), our approach outperforms strong baselines in the high-dimensional experiment.

Atari Game Pong. We evaluate RERL on an Atari game, where several episodic RL algorithms (Badia et al., 2020; Lin et al., 2018) have achieved good performance. In the previous goal-oriented environments, such as the maze, both the state and the goal have physical meaning (e.g., a location in the maze); it is therefore easy to define the distance between two states as the physical distance in the maze. However, in the Atari environment, both the state and the goal are images, so the distance has no physical meaning, which implies that directly adopting the goal-oriented setting will result in poor performance. In order to verify this analysis, we first use only the extrinsic reward from the environment (denoted as RERL+PPO+NOGoal), and then gradually (i.e., 5%, 20%, 50%) add the intrinsic reward to the reward function (denoted as RERL+PPO+Goal5, RERL+PPO+Goal20, and RERL+PPO+Goal50, respectively). We present the results in Figure 5(i). The results show that the goal-oriented setting of RERL is not suitable for environments like Atari games. Moreover, the Atari environment has less sparse rewards than the Ant Maze environment; therefore, a simple episodic RL algorithm such as EMDQN can obtain better performance than RERL.

More experimental results for the environments above can be found in Appendix E.1.
Figure 6: Each grid cell in the U-maze is colored according to the expected return (success rate) when fixing its center as the target state (panels (a)-(d): generated goals and anti-goals, together with start and target states, at iterations 50 and 150; color bar: normalized episode average return from 0.0 to 1.0).

Visualization Study on Generated Goals. In order to investigate whether the generated goals, serving as the curriculum in curriculum learning, truly guide the agent to the target state and are at an appropriate difficulty level, we show the distribution of generated goals at different training stages. The results in Figure 6 show that the generated goals approach the target as training proceeds and remain at an appropriate success-rate level, where the hindsight constraint helps the agent aim at feasible positions while our diversity constraint encourages the agent to approach the target state. More visualization results can be found in Appendix E.2.

Comparison with Explicit Curriculum Learning. Since our method can be seen as an explicit curriculum learning scheme for exploration (Appendix B.2), we also compare our method with another recently proposed automatic curriculum learning method for goal-oriented RL (see Appendix E.3 for the detailed experiment). The result in Figure 5(e) indicates that RERL substantially outperforms this explicit curriculum learning approach, even with GOID.

Impact of Goal Generation. To further investigate the performance gain of RERL, we design an ablation study on goal generation. We incorporate the goal generator into various RL algorithms and evaluate their performance in the Maze Ant locomotion environment. Results in Figure 5(f) illustrate that RERL significantly helps the RL methods obtain effective and stable performance.

Impact of Hyper-parameter Selection. We also study the effect of hyper-parameter selection, i.e., the Lipschitz constant L, the number of regions N, the number of trajectories Z, the diversity weight α, and the hindsight weight β. We conduct the experiments on the Maze Ant locomotion environment and report the results in Figure 5(g) and (h). Refer to Appendix E.4 for detailed information." }, { "heading": "5 RELATED WORK", "text": "Goal-oriented Reinforcement Learning. Goal-oriented RL allows an agent to learn a goal-conditioned policy, which takes the current state and a goal state as input and predicts a sequence of actions to reach the goal (Florensa et al., 2018; Paul et al., 2019). Recent attempts (Andrychowicz et al., 2017; Ren et al., 2019; Pong et al., 2018) combine off-policy RL algorithms with goal relabeling to efficiently generate appropriate goals from visited states and to guarantee goal reachability using a hindsight constraint. In this paper, we divide the state space into several regions; within each region, our algorithm learns a goal-conditioned policy to reach the generated goal. While most previous methods (Wu et al., 2018; Drummond, 2002) perform goal-oriented RL on top of the explored trajectories, our method utilizes a diversity constraint in the goal-generation procedure to enhance exploration and automatically structure the region-based memory.

Episodic Reinforcement Learning. Episodic RL is motivated by cognitive studies of episodic memory (Sutherland & Rudy, 1989; Marr et al., 1991; Lengyel & Dayan, 2008) in human decision making (Gilboa & Schmeidler, 1995). Recent works have investigated integrating episodic memory with deep Q-networks (DQNs) in non-parametric (Blundell et al., 2016) and parametric (Pritzel et al., 2017) ways.
In order to fully utilize the episodic memory, value propagation methods (Hansen et al., 2018; Zhu et al., 2019) have been proposed to obtain trajectory-centric value estimates. A common theme in recent work is finding similar historical trajectories to estimate the value function. While most methods use look-up operations (Pritzel et al., 2017) or graph structures (Zhu et al., 2019), our algorithm explicitly groups trajectories sharing the same goal into one region.

Hierarchical Reinforcement Learning. Hierarchical RL learns a set of primitive tasks that together help an agent learn the complex task. There are mainly two lines of work. One class of algorithms (Shang et al., 2019; Nachum et al., 2018; Bacon et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017) jointly learns a low-level policy together with a high-level policy, where the low-level policy interacts directly with the environment to achieve each task, while the high-level policy instructs the low-level policy via high-level actions or goals to sequence these tasks into the complex task. The other class of methods (Drummond, 2002; Fox et al., 2017; Şimşek et al., 2005) focuses on discovering sub-tasks or sub-goals that are easy to reach in a short time and can guide the agent to the terminal state. Recently, several option-discovery approaches (Jinnai et al., 2019; Machado et al., 2017; Bagaria & Konidaris, 2019; Konidaris & Barto, 2009) have been proposed to find a set of options that reduce the cover time of the environment. Different from these works, the idea of our work is similar to curriculum learning, where the sub-tasks get harder during the training procedure. As stated by (Nachum et al., 2018), jointly learning high-level and low-level policies can be unstable; we sidestep the problem by constraining goal generation to be within a specific region under a hindsight constraint." }, { "heading": "6 CONCLUSION", "text": "In this paper, we present a framework that incorporates episodic RL with goal-oriented RL to improve the efficiency of exploration and exploitation. RERL does not require any additional reward engineering or domain expertise. For future work, it would be interesting to incorporate our approach with representation learning, to obtain a better representation of the environment, and with imitation learning, to enhance learning efficiency through expert knowledge." }, { "heading": "A ALGORITHM", "text": "Algorithm 2 Regioned Episodic Reinforcement Learning (RERL)
1: Initialize π; initialize g∗ as the terminal state and ḡ∗ as the initial state
2: Initialize region-based memories {Mn}_{n=1}^{N} by random sampling
3: for episode = 1, 2, . . . , E do
4: Select region n according to Eq. (5)
5: Collect Z trajectories {τz}_{z=1}^{Z} for each Mn that maximize Σ_{z=1}^{Z} w(xz, τz) according to Eq. (10)
6: Construct the intermediate goal g according to Eq. (11), and the anti-goal ḡ for each Mn from the state s with the average value
7: for t = 1, 2, . . . , T do
8: at ← π(a | s, g, ḡ)
9: st+1 ∼ P(· | st, at)
10: rt ← r(s, g, ḡ) according to Eq. (2)
11: Mn ← Mn ∪ {Mn(st, g, ḡ)}
12: Update Mn according to Eq. (12)
13: Sample a minibatch b from Mn
14: Update the policy π on minibatch b using DDPG or PPO
15: end for
16: end for

The overall description of our algorithm is given in Algorithm 2. In the initialization procedure, we set the terminal state as the initial goal and the initial state as the initial anti-goal, and sample trajectories into each memory. At each episode e, the agent selects the region that is most promising to lead to the terminal state in line 4.
The overall description of our algorithm is shown in Algorithm 2. In the initialization procedure, we set the terminal state as the initial goal and the initial state as the initial anti-goal, and sample trajectories into each memory. At each episode e, the agent selects the region that is most promising to lead to the terminal state in line 4. We construct a goal based on the historical trajectories in line 5, and take previous goals in other memories into consideration during goal generation in line 6. From line 8 to line 12, the agent interacts with the environment and updates the memory. Our work focuses on how to build an efficient exploration and exploitation mechanism that is naturally complementary to policy networks such as deep deterministic policy gradient (DDPG (Lillicrap et al., 2015)) and proximal policy optimization (PPO (Schulman et al., 2017)) in line 14." }, { "heading": "B DISCUSSIONS", "text": "B.1 EXAMPLE FOR GOAL GENERATION
Previous works (Florensa et al., 2018; Vezhnevets et al., 2017) adopt a goal generator to construct immediate intrinsic rewards according to the previous states. However, they often struggle to balance the efficiency of exploration and exploitation with stability in training. In the first episode, the agent explores two trajectories in different directions, with the one closest to the target state, τ1a, labeled as the goal g1 and the farthest one, τ1b, as the anti-goal ḡ1. In the second episode, the agent evaluates the highest value of states in the regions and selects one according to Eq. (5). The agent explores under the guidance of g1 (illustrated as the sun icon in the blue region) and ḡ1 (illustrated as the moon icon in the blue region). If the agent selects region 2, following a similar procedure, it will explore the region guided by g2 (illustrated as the sun icon in the green region) and ḡ2 (illustrated as the moon icon in the green region). Note that the goal g directs the exploration. Hence, in the goal generation, we take the historical goals in the other regions into consideration via the diversity constraint. However, for the anti-goal generation, there is no need to consider other regions' data, as described in Section 3." }, { "heading": "B.2 RELATIONSHIP TO CURRICULUM LEARNING", "text": "In order to better understand why our method works in complex environments and outperforms traditional methods, we further investigate the relationship between our algorithm (Eq. (3)) and the empirical utility maximization formulation proposed in (Hacohen & Weinshall, 2019). We provide a theoretical analysis showing that, under some assumptions, optimizing our objective function is similar to optimizing a curriculum algorithm under additional constraints.
Following Section 2, we formulate the reinforcement learning problem as a Markov Decision Process (MDP) given by a tuple (S, A, P, r, γ), where S is the state space, A is the action space, P : S × A → ∆(S) is the state transition probability distribution, r : S × A → [0, 1] is the reward function, and γ ∈ [0, 1) is the discount factor for future rewards. The utility function is defined as the expected sum of the immediate and long-term utility U^π(s) under the policy π : S × A → [0, 1) and discount factor γ ∈ [0, 1), which can be formulated as:
$$U^{\pi}(s) := \mathbb{E}_{s_0=s,\; a_t\sim\pi(\cdot|s_t),\; s_{t+1}\sim\mathcal{P}(\cdot|s_t,a_t)}\left[\sum_{t=0}^{T}\gamma^t r(s_t, a_t)\right], \quad (1)$$
where T is the episode length. We can use U^π(s) to represent the long-term reward (i.e., the episode reward). To formulate the short-term reward, we similarly define U^π(s_t) := γ^t r(s_t, a_t). In a manner similar to the Empirical Risk Minimization (ERM) framework, we choose to maximize the average utility, which is defined as follows:
$$\pi^* = \arg\max_{\pi} U(\pi), \quad \text{where } U(\pi) := \mathbb{E}[U^{\pi}] = \frac{1}{T}\sum_{t=1}^{T} U^{\pi}(s_t). \quad (2)$$
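To make Eqs. (1)-(2) concrete, here is a toy computation of the per-step utility U^π(s_t) = γ^t r(s_t, a_t) and the average utility U(π); the reward sequence below is made up purely for illustration.

```python
# Illustration of Eqs. (1)-(2): per-step discounted utility and average utility.

def per_step_utility(rewards, gamma=0.99):
    return [gamma ** t * r for t, r in enumerate(rewards)]

def average_utility(rewards, gamma=0.99):
    u = per_step_utility(rewards, gamma)
    return sum(u) / len(u)          # U(pi) = (1/T) * sum_t U^pi(s_t)

rewards = [0.0, 0.0, 1.0, 1.0]      # hypothetical episode rewards
print(per_step_utility(rewards))    # [0.0, 0.0, 0.9801, 0.970299]
print(average_utility(rewards))     # ~0.488
```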
Hindsight Constraint. We define the scoring function, i.e., the pacing function in curriculum learning (Bengio et al., 2009), as φ : S → G × G, which is a known and tractable mapping. φ effectively provides a Bayesian prior g ∈ G for data sampling, namely exploration, where g denotes a goal and G denotes the goal space. Based on the analysis above, we can formulate Eq. (1) as
$$U_g(\pi) = \mathbb{E}_g[U^{\pi}] = \frac{1}{T}\sum_{t=1}^{T} U^{\pi}(s_t)\,\varphi(\cdot|s_t), \quad (3)$$
where φ(·|s_t) denotes the induced prior probability conditioned on s_t. In order to guarantee convergence, φ(·|s_t) should always be a non-increasing function of the difficulty level of s_t. In our algorithm, we define the goal space G as a set of visited states in the state space S (i.e., the hindsight constraint in Section 3), which guarantees that each goal/anti-goal is sampled from previous states. This proves the following result:
Proposition 1. The difference between the expected utility function with and without prior g (i.e., U_g(π) and U(π)) is the covariance between the utility function U^π(s) and the goal generation φ(·|s).
Proof. The proof of Proposition 1 can be found in Appendix C.3.
Diversity Constraint. However, it should be noted that the goal g here is sampled from previous states, which guarantees the reachability of the goal but also limits potential exploration. To address this issue, we adopt a diversity measure H_region(π) to encourage exploration across different regions (the diversity constraint in Section 3 is a simple implementation). Combining the aforementioned hindsight and diversity constraints, we define our objective as
$$\pi^* = \arg\max_{\pi} U_g(\pi), \quad \text{under hindsight and diversity constraints}, \quad (4)$$
which can easily be shown to be equivalent to Eq. (6).
Proposition 2. The modified optimization landscape induced by curriculum learning has the same global optimum π∗ as the original problem.
Proof. The proof of Proposition 2 can be found in Appendix C.4.
According to this analysis, we can conclude that our algorithm can be regarded as a novel curriculum learning approach in a goal-oriented setting, which can be proved to have the same global optimum as the original problem. In Section 4, we conduct experiments showing that the goals are generated at different difficulty levels, acting as the curriculum that guides the agent in curriculum learning." }, { "heading": "B.3 RELATIONSHIP WITH MAXIMUM ENTROPY RL", "text": "In this section, we consider multi-goal RL as goal-oriented policy learning (Schaul et al., 2015; Plappert et al., 2018). We further discuss the motivation behind the two constraints, namely the hindsight and diversity constraints, and the relationship between our work and inverse maximum entropy reinforcement learning.
Preliminaries. We begin with some notation and previous motivations in maximum entropy reinforcement learning (Eysenbach et al., 2020). The likelihood of a trajectory τ := {s_t}_{t=1}^{T} under policy π can be formulated as L(s) = P(s_0) · ∏_t P(s_{t+1}|s_t, a_t) π(a_t|s_t). In goal-oriented RL, we can re-write it as
$$\mathcal{L}(s, g) = \mathcal{P}(s_0)\cdot\prod_t \mathcal{P}(s_{t+1}|s_t, a_t)\,\pi(a_t|s_t, g), \quad (5)$$
where the initial state is sampled as s_0 ∼ P(s_0) and subsequent states are governed by a dynamics distribution s_{t+1} ∼ P(s_{t+1}|s_t, a_t). As we discuss in Appendix B.2, goal-oriented RL can be regarded as regular RL with prior knowledge g generated by the mapping function φ based on s. Hence, the target joint distribution over goals and states is
$$\mathcal{L}_{target}(s, g) = \frac{\varphi(\cdot|s)}{Z(g)}\cdot\mathcal{P}(s_0)\prod_t \mathcal{P}(s_{t+1}|s_t, a_t)\, e^{r(s_t, g_t, \bar{g}_t)}, \quad (6)$$
where L_target(s, g) is the joint distribution over states s ∈ S and goals g ∈ G, and Z(g) is the normalization factor.
Diversity Constraint. We can express the multi-goal RL objective as the reverse KL divergence between the joint state-goal distributions:
$$\max_{\pi} -\mathcal{H}(s, g) = \max_{\pi} -D_{KL}\big(\mathcal{L}(s, g)\,\|\,\mathcal{L}_{target}(s, g)\big), \quad (7)$$
where the joint distribution of the likelihood L and the prior information g of a trajectory τ is defined as L(s, g) := L(s|g) · φ(·|s). Then, we can rewrite Eq. (7) as maximizing the expected (entropy-regularized) reward of a goal-conditioned policy L(s|g):
$$\mathbb{E}_{g\sim\varphi(\cdot|s),\; s\sim\mathcal{L}(\cdot|g)}\left[\left(\sum_{t=1}^{T} r(s_t, a_t|g) - \log\pi(s_t, a_t|g)\right) - \log Z(g)\right]. \quad (8)$$
Hindsight Constraint. Since the distribution over goals g is fixed, we can ignore the log Z(g) term for optimization. A less common but more intriguing choice is to factor L(s, g) = φ(·|s) · B(s), where B(s) is represented non-parametrically as a distribution over previously-observed states. Therefore, φ(·|s) is formulated as a hindsight relabeling distribution. In our implementation, we sample goals from previous states in the region-based memory to represent B(s)." }, { "heading": "C PROOFS", "text": "" }, { "heading": "C.1 PROOF OF PROPOSITION 3", "text": "Proposition 3. Given the joint set X and several region-based sets (i.e., subsets) X_n, where n = 1, 2, . . . , N and N is the number of regions, we have
$$\forall\pi,\quad \max_{x\in\mathcal{X}} V(x) \ge \max_{x\in\{x_1, x_2, \dots, x_N\}} V(x), \quad \text{where } x_n = \arg\max_{x_n\in\mathcal{X}_n} V(x_n). \quad (9)$$
In this section, we provide the proof of Proposition 3. The motivation of Proposition 3 is to find a relaxed lower bound of V(x), x ∈ X, based on the definition of the region.
Proof. By Eq. (3), for all π we have
$$\max_{x\in\mathcal{X}} V(x) = \max_{x\in\mathcal{X}}\; \mathbb{E}_{s\in\mathcal{S};\; g,\bar{g}\in\mathcal{G};\;\mathcal{P}} \left[\sum_{t=1}^{T}\gamma^t r(s_t, a_t|g, \bar{g})\right]$$
$$\ge \max_{x\in\{x_1, \dots, x_N\}} \left\{ \max_{x_1\in\mathcal{X}_1}\mathbb{E}_{s\in\mathcal{S}_1;\; g,\bar{g}\in\mathcal{G}_1;\;\mathcal{P}}\left[\sum_{t=1}^{T}\gamma^t r(s_t, a_t|g, \bar{g})\right], \dots, \max_{x_N\in\mathcal{X}_N}\mathbb{E}_{s\in\mathcal{S}_N;\; g,\bar{g}\in\mathcal{G}_N;\;\mathcal{P}}\left[\sum_{t=1}^{T}\gamma^t r(s_t, a_t|g, \bar{g})\right] \right\}$$
$$\ge \max_{x\in\{x_1, \dots, x_N\}} V(x), \quad \text{where } x_n = \arg\max_{x_n\in\mathcal{X}_n} V(x_n),\; n = 1, 2, \dots, N. \quad (10)$$
The intuition behind the proposition is easy to understand: since we have partitioned the joint set X into several region-based subsets {X_n}_{n=1}^{N}, we effectively prevent the agent from switching among regions, removing such switching trajectories from the original candidate trajectory family." }, { "heading": "C.2 PROOF OF PROPOSITION 4", "text": "Proposition 4. Denote the Bellman backup operator in Q-learning with goals as B : R^{|S|×|A|×|G|} → R^{|S|×|A|×|G|} and a mapping Q : S × A × G → R^{|S|×|A|×|G|} with |S| < ∞ and |A| < ∞. Repeated applications of the operator B to our goal-oriented state-action value estimate Q̂ converge to a unique optimal value Q̂∗.
Proof. The proof of Proposition 4 is done in two main steps. The first step is to show that our goal g ∈ G converges to the terminal state. In the second step, we prove that, given goal g, our goal-oriented approach converges to a unique optimal value Q∗. In other words, we need to prove that g → g∗ in the first step and Q → Q∗ in the second step.
Step I. Our algorithm aims to find high-value previous states for goal generation. At the beginning of the task, the terminal state will be regarded as the final goal since it has the highest value. Hence, the terminal state, once it has been visited, will be assigned as the goal. Assume that the agent can conduct plenty of exploration. Then, we can say that the generated goal g will keep approaching the terminal state g∗.
Step II.
Note that the proof of convergence for our goal-oriented RL is quite similar to that of Q-learning (Bellman, 1966; Bertsekas et al., 1995; Sutton & Barto, 2018). The main difference between our approach and Q-learning is that the Q-value Q(s, a, g, ḡ) is also conditioned on the goal g and anti-goal ḡ. As introduced in Section 3, the anti-goal ḡ works like a reward shaping technique, which is proposed to avoid local optima (Trott et al., 2019). Hence, we omit ḡ in the following proof. We provide the detailed proof as follows:
From Step I, we obtain a goal g ∈ G approaching the terminal state. Based on that, our estimated goal-conditioned action-value function Q̂ can be defined as
$$\mathcal{B}\hat{Q}(s, a, g) = R(s, a, g) + \gamma\cdot\max_{a'\in\mathcal{A}}\sum_{s'\in\mathcal{S}} P(s'|s, a)\cdot\hat{Q}(s', a', g). \quad (11)$$
For any action-value function estimates Q̂1, Q̂2, we have
$$|\mathcal{B}\hat{Q}_1(s, a, g) - \mathcal{B}\hat{Q}_2(s, a, g)| = \gamma\cdot\left|\max_{a'\in\mathcal{A}}\sum_{s'\in\mathcal{S}} P(s'|s, a)\,\hat{Q}_1(s', a', g) - \max_{a'\in\mathcal{A}}\sum_{s'\in\mathcal{S}} P(s'|s, a)\,\hat{Q}_2(s', a', g)\right|$$
$$\le \gamma\cdot\max_{a'\in\mathcal{A}}\left|\sum_{s'\in\mathcal{S}} P(s'|s, a)\,\hat{Q}_1(s', a', g) - \sum_{s'\in\mathcal{S}} P(s'|s, a)\,\hat{Q}_2(s', a', g)\right|$$
$$\le \gamma\cdot\max_{a'\in\mathcal{A}}\sum_{s'\in\mathcal{S}} P(s'|s, a)\cdot|\hat{Q}_1(s', a', g) - \hat{Q}_2(s', a', g)|$$
$$\le \gamma\cdot\max_{s\in\mathcal{S},\, a\in\mathcal{A}}|\hat{Q}_1(s, a, g) - \hat{Q}_2(s, a, g)|. \quad (12)$$
Combining Steps I and II, we can conclude that our goal-conditioned estimated state-action value Q̂ converges to a unique optimal value Q∗ leading to the terminal state g∗." }, { "heading": "C.3 PROOF OF PROPOSITION 1", "text": "In this section, we provide the proof of Proposition 1. From Eq. (3), U_g(π) is a function of π that is determined by the correlation between U^π(s) and φ(g) (i.e., φ(·|s)). We can rewrite Eq. (3) as
$$U_g(\pi) = \frac{1}{T}\left\{\sum_{t=1}^{T} (U^{\pi}(s_t) - \mathbb{E}[U^{\pi}])(\varphi(g_t) - \mathbb{E}[\varphi]) + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\} = \frac{1}{T}\left\{\mathrm{Cov}[U^{\pi}, \varphi] + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\} = \frac{1}{T}\left\{U(\pi) + \mathrm{Cov}[U^{\pi}, \varphi]\right\}. \quad (13)$$
The detailed derivation can be found in Appendix C.6. We can see that curriculum learning changes the landscape of the optimization function over the policy π from U(π) to U_g(π). Intuitively, the above equation also suggests that if the induced goal g, which defines a latent variable over the goal space G, is positively correlated with the optimal utility U^{π∗}(s), and more so than with any other U^π(s), then the gradients in the direction of the optimal policy π in the new optimization landscape may be overall steeper.
Hence, it is necessary to design task-related goals. However, it is infeasible to obtain appropriate goals through handcrafted design and manual generation. In this paper, we introduce the hindsight and diversity constraints to help the agent learn from achieved task-related information (previous states) and unknown task-related information (unexplored states), respectively.
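As a quick sanity check on the covariance identity behind Eq. (13) (namely that the average of U^π(s_t)φ(g_t) equals Cov[U^π, φ] + E[U^π]E[φ], up to the 1/T convention used in the text), the following toy script verifies the algebra numerically with random values; it is illustrative only and says nothing about the RL claim itself.

```python
import random

# Numerical check of the covariance decomposition used in Eq. (13).
T = 1000
U   = [random.random() for _ in range(T)]    # stand-in for U^pi(s_t)
phi = [random.random() for _ in range(T)]    # stand-in for phi(g_t)

lhs   = sum(u * p for u, p in zip(U, phi)) / T       # (1/T) * sum_t U(s_t) * phi(g_t)
E_U   = sum(U) / T
E_phi = sum(phi) / T
cov   = sum((u - E_U) * (p - E_phi) for u, p in zip(U, phi)) / T

print(abs(lhs - (cov + E_U * E_phi)) < 1e-12)        # True
```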
" }, { "heading": "C.4 PROOF OF PROPOSITION 2", "text": "In this section, we provide the proof of Proposition 2. In order to prove that the modified optimization function over the policy space has the property that the global maximum at π∗ is more pronounced, we derive the objective function based on Proposition 1. We assume that the optimal policy π∗ maximizes the covariance between φ(g) (i.e., φ(·|s)) and the utility U^π(s), namely
$$\arg\max_{\pi} U(\pi) = \arg\max_{\pi} \mathrm{Cov}[U^{\pi}, \varphi] = \pi^*. \quad (14)$$
The proof of this assumption can be found in Appendix C.3. We introduce Lemma 1 here, the proof of which can be found in Appendix C.5.
Lemma 1. (Florensa et al. (2017)) For any curriculum satisfying Eq. (14):
1. π∗ = arg max_π U_g(π) = arg max_π U(π)
2. U_g(π∗) − U_g(π) ≥ U(π∗) − U(π), ∀π
Lemma 1 makes two claims. The first states that the problem of maximizing the covariance between φ(g) and the utility U^π(s) shares the same optimal solution as the original problem. The second states that, in the modified optimization landscape with goal g, the global maximum is more pronounced than in the original landscape without g." }, { "heading": "C.5 PROOF OF LEMMA 1", "text": "In this section, we provide the proof of Lemma 1. Claim 1 in Lemma 1 can be derived directly from Eq. (14), while for claim 2 we have
Proof.
$$U_g(\pi^*) - U_g(\pi) = U_g(\pi^*) - U(\pi) - \mathrm{Cov}[U^{\pi}, g] \ge U_g(\pi^*) - U(\pi) - \mathrm{Cov}[U^{\pi^*}, g] = U(\pi^*) - U(\pi). \quad (15)$$
" }, { "heading": "C.6 DETAILED DERIVATION OF EQ. (13)", "text": "In this section, we provide the detailed derivation of Eq. (13). We begin from the formulation of U_g(π) in Eq. (13) and recover the form in Eq. (3)." }, { "heading": "Proof.", "text": "
$$U_g(\pi) = \frac{1}{T}\left\{\sum_{t=1}^{T} (U^{\pi}(s_t) - \mathbb{E}[U^{\pi}])(\varphi(g_t) - \mathbb{E}[\varphi]) + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\}$$
$$= \frac{1}{T}\left\{\sum_{t=1}^{T} U^{\pi}(s_t)\varphi(g_t) - \sum_{t=1}^{T} U^{\pi}(s_t)\mathbb{E}[\varphi] - \sum_{t=1}^{T} \varphi(g_t)\mathbb{E}[U^{\pi}] + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi] + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\}$$
$$= \frac{1}{T}\left\{\sum_{t=1}^{T} U^{\pi}(s_t)\varphi(g_t) - T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi] - \sum_{t=1}^{T} \varphi(g_t)\mathbb{E}[U^{\pi}] + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi] + T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\}$$
$$= \frac{1}{T}\sum_{t=1}^{T} U^{\pi}(s_t)\varphi(g_t) + \frac{1}{T}\left\{T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi] - \sum_{t=1}^{T} \varphi(g_t)\cdot\mathbb{E}[U^{\pi}]\right\} \quad (16)$$
Since E[φ] := (1/T) Σ_{t=1}^{T} φ(g_t), we have
$$U_g(\pi) = \frac{1}{T}\sum_{t=1}^{T} U^{\pi}(s_t)\varphi(g_t) + \frac{1}{T}\left\{T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi] - T\cdot\mathbb{E}[U^{\pi}]\mathbb{E}[\varphi]\right\} = \frac{1}{T}\sum_{t=1}^{T} U^{\pi}(s_t)\varphi(g_t). \quad (17)$$
" }, { "heading": "D EXPERIMENT", "text": "D.1 MODIFIED ENVIRONMENTS
Ant Locomotion. In this part, we introduce two environments based on Ant Locomotion, namely Free Ant and Ant Maze. The ant is a quadruped with 8 actuated joints, 2 for each leg. The environment is implemented in MuJoCo. Besides the coordinates of the center of mass, the joint angles and joint velocities are also contained in the observation of the agent. Given the high number of degrees of freedom, navigation in this complex task requires motor coordination. More details can be found in Duan et al. (2016); the only difference is that in our goal-oriented version of Ant, we extend the observation with the goals. The reward is still a sparse indicator function, being 1 only when the center of mass (x, y) of the Ant is within ε = 0.5 of the goal position, corresponding to ε-balls in state space. For the Free Ant experiments, the objective is to reach any position in the square [−5, 5]². Therefore the goal space is 2-dimensional, the state space is 41-dimensional, and the action space is 8-dimensional. As for the Ant Maze environment, the agent is constrained to move within the maze environment, a U-maze in this case, and the size of all blocks in the maze is 8 × 8; the maze consists of a total of 18 blocks.
Multi-Path Point Maze. All experimental settings are similar to the Ant Maze environment. We replace the Ant agent with a Point-Mass and change the maze into a multi-path one. The action of the Point-Mass is a 2-dimensional velocity vector.
N-dimensional Point-Mass Maze. In the N-dimensional Point-Mass maze experiment, the agent can only move within a small subset of the state space. In the two-dimensional case, the set of feasible states corresponds to the [−5, 5] × [−1, 1] rectangle, making up 20% of the full space. For N > 2, the feasible space is the Cartesian product of this 2D strip with [−ε, ε]^{N−2}, where ε = 0.3.
In this higher-dimensional environment, our agent receives a reward of 1 when it moves within ε_N = 0.3√N/√2 of the goal state, to account for the increase in average L2 distance between points in higher dimensions. In these experiments, the full state space of the N-dimensional Point-Mass is the hypercube [−5, 5]^N.
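A minimal sketch of the sparse goal-reaching reward described above follows: the agent receives 1 only inside an ε-ball around the goal, with ε_N = 0.3√N/√2 in the N-dimensional point-mass maze. The function and variable names are our own, not from the released code.

```python
import math

# Sparse eps-ball goal-reaching reward (toy stand-in for the environments above).
def sparse_reward(state, goal, eps):
    d = math.dist(state, goal)          # L2 distance (Python 3.8+)
    return 1.0 if d <= eps else 0.0

def eps_for_dim(n):
    return 0.3 * math.sqrt(n) / math.sqrt(2)

print(sparse_reward((0.0, 0.0), (0.2, 0.2), eps=0.5))   # 1.0 (d ~= 0.283)
print(sparse_reward((0.0, 0.0), (1.0, 1.0), eps=0.5))   # 0.0 (d ~= 1.414)
print(round(eps_for_dim(3), 3))                          # 0.367
```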
" }, { "heading": "D.2 EVALUATION DETAILS", "text": "We run HGG (Ren et al., 2019) with DDPG (Lillicrap et al., 2015) and SR (Trott et al., 2019) with PPO (Schulman et al., 2017), as these models were originally proposed. All curves presented in this paper are plotted from 12 runs with random task initializations and seeds. Following the regular procedure in goal-oriented RL, an episode is considered successful if and only if the agent obtains a reward of 1 according to Eq. (2), where δ stays the same for all approaches. However, in practice, we implement the reward as r(s_t, a_t|g, ḡ) = min[0, −d(φ(g_{t+1}|s_{t+1}), g) + d(φ(ḡ_{t+1}|s_{t+1}), ḡ)] to accelerate the training process.
D.3 IMPLEMENTATION DETAILS
Almost all hyper-parameters for DDPG (Lillicrap et al., 2015), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and Soft-AC (Haarnoja et al., 2018) are kept the same as in the benchmark results. Specifically, our hyper-parameters are as follows: number of MPI workers: 1; buffer size: 10⁴ trajectories; number of regions N: 5 (at the agent level); batch size: 256; number of trajectories Z: 50; Lipschitz constant L: 5; learning rate: 10⁻⁵ (at the network level); discount factor: 0.99; interpolation factor in Polyak averaging (where applicable): 0.995; scale of additive Gaussian noise: 0.2; probability of HER (Andrychowicz et al., 2017) experience replay: 0.8.
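For concreteness, here is a minimal sketch of the shaped reward used in practice in D.2, r(s_t, a_t|g, ḡ) = min[0, −d(φ(s_{t+1}), g) + d(φ(s_{t+1}), ḡ)]: the agent is pulled toward the goal g and pushed away from the anti-goal ḡ. Here φ is taken to be the identity mapping from states to goal space, which is an assumption made for illustration.

```python
import math

# Shaped goal / anti-goal reward (toy stand-in for the formula in D.2).
def shaped_reward(next_state, goal, anti_goal, phi=lambda s: s):
    p = phi(next_state)
    return min(0.0, -math.dist(p, goal) + math.dist(p, anti_goal))

# Closer to the goal than to the anti-goal -> reward saturates at 0.
print(shaped_reward((1.0, 1.0), goal=(1.5, 1.0), anti_goal=(-2.0, 0.0)))   # 0.0
# Closer to the anti-goal -> negative reward.
print(shaped_reward((-1.0, 0.0), goal=(4.0, 0.0), anti_goal=(-2.0, 0.0)))  # -4.0
```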
" }, { "heading": "E RESULTS", "text": "" }, { "heading": "E.1 ADDITIONAL EVALUATION ON STANDARD TASKS", "text": "In this section, we provide additional results comparing RERL with various baselines.
In order to answer the first two questions, we demonstrate our method in two challenging robotic locomotion tasks, where the goals are the (x, y) position of the center of mass of a dynamically complex quadruped agent. In the first example the agent has no constraints, and in the second one the agent is inside a U-maze (see Section 4 for details). Results in Figure 9(a)(b) demonstrate that the performance of our approach exceeds that of the strong baselines mentioned in Section 4. To answer the third question, we train an ant agent to reach any position within a multi-path maze. As shown in Figure 9(c), our approach obtains better performance even in the multi-path environment, where the goal distribution is naturally more complex than in the previous environments. To answer the fourth question, we investigate how our method performs as the dimension of the goal space grows within the feasible region, e.g., 2D and 3D. As shown in Figure 9(d), our approach outperforms strong baselines in both low- and high-dimensional environments." }, { "heading": "E.2 ADDITIONAL RESULTS ON VISUALIZATION OF GENERATED GOALS", "text": "To answer the final question, we conduct a visualization study on generated goals to investigate whether goals can encourage the agent toward the target state, and whether anti-goals can prevent the agent from falling into local optima. The visualization of goals also reflects the effect of the diversity and hindsight constraints through the exploration and reachability of the generated goals.
Results in Figure 10 show that the hindsight constraint helps the agent aim at feasible positions while our diversity constraint encourages the agent to approach the target state. Specifically, from markers 1 and 2 in the figure, one can note that the agent is pulled by its goal and pushed by its anti-goal and by goals from the other regions. Hence, even when a region leads in a wrong direction, the diversity constraint can still encourage exploration.
As illustrated in Figure 11, the generated goals approach the target state as training proceeds, and stay at an appropriate success-rate level, which accords with the curriculum in curriculum learning (see Appendix B.2 for details).
Results shown in Figures 13 and 12 are similar to those in Figures 11 and 10, respectively, which confirms the analysis above." }, { "heading": "E.3 EXPERIMENT ON THE COMPARISON WITH EXPLICIT CURRICULUM LEARNING", "text": "In (Florensa et al., 2017), GOID is defined as the goal set GOID(π) = {g : α ≤ f(π, g) ≤ 1 − α}, where f(π, g) represents the average success rate in a small region enclosing goal g. In order to construct the GOID set, we follow this definition and sample generated goals from GOID(π) via rejection sampling." }, { "heading": "E.4 ADDITIONAL RESULTS ON ABLATION STUDY", "text": "In this section, we run a set of ablation tests on several hyper-parameters used in RERL. The selection of the Lipschitz constant L is task-dependent since it is highly related to the scale of the value function and the goal distance. For the robotics tasks tested in this paper (i.e., Ant Maze Locomotion), as shown in Figure 14(a), we find that the performance of RERL is reasonable as long as L is not too small. Similar to L, the selection of the number of regions N is also theoretically task-specific. We test a few choices on Ant Maze Locomotion and find a range of N that works well. As Figure 14(b) illustrates, RERL performs reasonably as long as N is not too large. As for the number of trajectories Z, we plot the curves for different Z in Figure 14(c) and find that for simple tasks the choice of Z is not critical. Parameters α and β together define the trade-off between the value function, diversity, and hindsight constraints. Results in Figure 14(d)(e) show that the choice of α and β is indeed robust." } ]
2020
REGIONED EPISODIC REINFORCEMENT LEARNING
SP:4b9cb72dcc70459c938b5ba8aaec2ea8fa253e1b
[ "The manuscript proposes SAME, a model based on GNN and meta-learning for learning multi-task node embeddings. Unlike multi-task learning setting, SAME aims at learning to quickly adapt to multiple tasks. Two model variants iSAME and eSAME are proposed base on different settings in inner/outer loop of parameter update. Experiments on several datasets demonstrate the good performance of SAME. " ]
Graph Neural Networks (GNNs) have become the state-of-the-art method for many applications on graph structured data. GNNs are a framework for graph representation learning, where a model learns to generate low dimensional node embeddings that encapsulate structural and feature-related information. GNNs are usually trained in an end-to-end fashion, leading to highly specialized node embeddings. While this approach achieves great results in the single-task setting, generating node embeddings that can be used to perform multiple tasks (with performance comparable to single-task models) is an open problem. We propose a novel representation learning strategy, based on meta-learning, capable of producing multi-task node embeddings. Our method avoids the difficulties arising when learning to perform multiple tasks concurrently by, instead, learning to quickly (i.e., with a few steps of gradient descent) adapt to multiple tasks singularly. We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than both single-task and multi-task end-to-end models. Our method is model-agnostic and task-agnostic and can hence be applied to a wide variety of multi-task domains.
[]
[ { "authors": [ "Ferran Alet", "Erica Weng", "Tomas Lozano-Perez", "L. Kaelbling" ], "title": "Neural relational inference with fast modular meta-learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Pedro Avelar", "Henrique Lemos", "Marcelo Prates", "Luis Lamb" ], "title": "Multitask learning on graph neural networks: Learning multiple graph centrality measures with a unified network", "venue": "In ICANN Workshop and Special Sessions", "year": 2019 }, { "authors": [ "Eytan Bakshy", "Lili Dworkin", "Brian Karrer", "Konstantin Kashin", "Benjamin Letham", "Ashwin Murthy", "Shaun Singh" ], "title": "Ae: A domain-agnostic platform for adaptive experimentation", "venue": "In NeurIPS Systems for ML Workshop,", "year": 2018 }, { "authors": [ "Avishek Joey Bose", "Ankit Jain", "Piero Molino", "William L Hamilton" ], "title": "Meta-graph: Few shot link prediction via meta learning", "venue": null, "year": 2019 }, { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: Going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Ines Chami", "Sami Abu-El-Haija", "Bryan Perozzi", "Christopher Ré", "K. Murphy" ], "title": "Machine learning on graphs: A model and comprehensive taxonomy", "venue": null, "year": 2020 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": null, "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Tristan Deleu", "Tobias Würfl", "Mandana Samiei", "Joseph Paul Cohen", "Yoshua Bengio" ], "title": "Torchmeta: A Meta-Learning library for PyTorch", "venue": null, "year": 2019 }, { "authors": [ "Paul D. Dobson", "Andrew J. Doig" ], "title": "Distinguishing enzyme structures from non-enzymes without alignments", "venue": "Journal of Molecular Biology,", "year": 2003 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Victor Garcia", "Joan Bruna" ], "title": "Few-shot learning with graph neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": null, "year": 2017 }, { "authors": [ "William L Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "IEEE Data Engineering Bulletin,", "year": 2017 }, { "authors": [ "Lu Haonan", "Seth H. 
Huang", "Tian Ye", "Guo Xiuyan" ], "title": "Graph star net for generalized multi-task learning", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Chester Holtz", "Onur Atan", "Ryan Carey", "Tushit Jain" ], "title": "Multi-task learning on graphs with node and graph level labels", "venue": "In NeurIPS Workshop on Graph Representation Learning,", "year": 2019 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": null, "year": 2020 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": null, "year": 2018 }, { "authors": [ "Jongmin Kim", "Taesup Kim", "S. Kim", "C. Yoo" ], "title": "Edge-labeling graph neural network for few-shot learning", "venue": null, "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Diya Li", "Heng Ji" ], "title": "Syntax-aware multi-task graph convolutional networks for biomedical relation extraction", "venue": "In LOUHI,", "year": 2019 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Learning to propagate for graph meta-learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Pengfei Liu", "J. Fu", "Y. Dong", "Xipeng Qiu", "J. Cheung" ], "title": "Learning multi-task communication with message passing for sequence learning", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Kaushalya Madhawa", "Tsuyoshi Murata" ], "title": "Active learning on graphs via meta learning", "venue": "In ICML Workshop on Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Kevis-Kokitsi Maninis", "Ilija Radosavovic", "Iasonas Kokkinos" ], "title": "Attentive single-tasking of multiple tasks", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Floriane Montanari", "Lara Kuhnke", "Antonius Ter Laak", "Djork-Arné Clevert" ], "title": "Modeling physicochemical ADMET endpoints with multitask graph convolutional networks. Molecules, 2019", "venue": null, "year": 2019 }, { "authors": [ "Christopher Morris", "Nils M. Kriege", "Franka Bause", "Kristian Kersting", "Petra Mutzel", "Marion Neumann" ], "title": "Tudataset: A collection of benchmark datasets for learning with graphs", "venue": "In ICML Workshop on Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Cuong Q. 
Nguyen", "Constantine Kreatsoulas", "Branson Kim M" ], "title": "Meta-learning gnn initializations for low-resource molecular property prediction", "venue": "In ICML Workshop on Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg", "Jake Vanderplas", "Alexandre Passos", "David Cournapeau", "Matthieu Brucher", "Matthieu Perrot", "Édouard Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Ida Schomburg", "Antje Chang", "Christian Ebeling", "Marion Gremse", "Christian Heldt", "Gregor Huhn", "Dietmar Schomburg" ], "title": "Brenda, the enzyme database: updates and major new developments", "venue": "Nucleic acids research,", "year": 2004 }, { "authors": [ "Trevor Standley", "Amir R. Zamir", "Dawn Chen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Qiuling Suo", "Jingyuan Chou", "Weida Zhong", "Aidong Zhang" ], "title": "Tadanet: Task-adaptive network for graph-enriched meta-learning", "venue": "In ACM SIGKDD,", "year": 2020 }, { "authors": [ "Jeffrey J. Sutherland", "Lee A. O’Brien", "Donald F. Weaver" ], "title": "Spline-fitting with a genetic algorithm: A method for developing classification structure-activity relationships", "venue": "Journal of Chemical Information and Computer Sciences,", "year": 2003 }, { "authors": [ "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Dengxin Dai", "Luc Van Gool" ], "title": "Revisiting multi-task learning in the deep learning era", "venue": null, "year": 2020 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph Attention Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Shanfeng Wang", "Qixiang Wang", "Maoguo Gong" ], "title": "Multi-task learning based network embedding", "venue": "Frontiers in Neuroscience,", "year": 2020 }, { "authors": [ "Xie", "Maoguo Gong", "Yuan Gao", "A.K. Qin", "Xiaolong Fan" ], "title": "A multi-task representation", "venue": "Learning Systems,", "year": 2020 }, { "authors": [ "Chawla", "Zhenhui Li" ], "title": "Graph few-shot learning via knowledge transfer", "venue": "In AAAI,", "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nOriginal Embeddings Transferred Embeddings\nNC GC->NC LP->NC0.50\n0.55\n0.60\n0.65\n0.70\n0.75\n0.80\n0.85\n0.90\nAc cu\nra cy\n- 13.21% - 14.52%\nNode Classification\n(b) GC NC->GC LP->GC0.20\n0.25\n0.30\n0.35\n0.40\n0.45\n0.50\n0.55\n0.60\nAc cu\nra cy\n- 21.29%\n- 10.82%\nGraph Classification\n(c) LP NC->LP GC->LP0.55\n0.60\n0.65\n0.70\n0.75\n0.80\nRO C\nAU C - 5.89%\n- 4.43%\nLink Prediction\n(d)\nFigure 1: Performance drop when transferring node embeddings between tasks on (a) Node Classification (NC), (b) Graph Classification (GC), and (c) Link Prediction (LP) on the ENZYMES dataset. On the horizontal axis, “x ->y” indicates that the embeddings obtained from a model trained on task x are used to train a network for task y.\nGraph Neural Networks (GNNs) are deep learning models that operate on graph structured data, and have become one of the main topics of the deep learning research community. Part of their success is given by great empirical performance on many graph-related tasks. Three tasks in particular, with many practical applications, have received the most attention: graph classification, node classification, and link prediction. GNNs are centered around the concept of node representation learning, and typically follow the same architectural pattern with an encoderdecoder structure (Hamilton et al., 2017; Chami et al., 2020; Wu et al., 2020). The encoder produces node embeddings (low-dimensional vec-\ntors capturing relevant structural and feature-related information about each node), while the decoder uses the embeddings to carry out the desired downstream task. The model is then trained in an end-to-end manner, giving rise to highly specialized node embeddings. While this can lead to state-of-the-art performance, it also affects the generalization and reusability of the embeddings. In fact, taking the encoder from a GNN trained on a given task and using its node embeddings to train a decoder for a different task leads to substantial performance loss, as shown in Figure 1.\nThe low transferability of node embeddings requires the use of one specialized encoder and one specialized decoder for each considered task. However, many practical machine learning applications operate in resource-constrained environments where being able to share part of the model architecture between tasks is of great importance. Furthermore, the training signal from multiple related tasks can lead to higher generalization. Nevertheless, making sure tasks do not negatively interfere with each other is not trivial (Standley et al., 2020). The problem of learning models that can perform multiple tasks is known as Multi-Task Learning (MTL), and is an open area of research, attracting many researchers in the deep learning community (Vandenhende et al., 2020).\nMTL on graphs has not received much attention, and no single model capable of performing the three most common graph-related tasks has yet been proposed. In fact, we notice that training a multi-head model with the classical procedure, i.e. by performing multiple tasks concurrently on each graph, and updating the parameters with some form of gradient descent to minimize the sum of the single-task losses, can lead to a performance loss with respect to single-task models. 
Thus, we propose a novel optimization-based meta-learning (Finn et al., 2017) procedure with a focus on representation learning that can generate node embeddings that generalize across tasks.
Our proposed meta-learning procedure produces task-generalizing node embeddings by aiming not at a setting of the parameters that can perform multiple tasks concurrently (like a classical method would), nor at a setting that allows fast multi-task adaptation (like traditional meta-learning), but at a setting that can easily be adapted to perform each of the tasks singularly. In fact, our meta-learning procedure aims at a setting of the parameters where a few steps of gradient descent on a given task can lead to good performance on that task, hence removing the burden of directly learning to solve multiple tasks concurrently.
We summarize our contributions as follows:
• We propose a novel method for learning representations that can generalize to multiple tasks. We apply it to the challenging setting of graph MTL, and show that a GNN trained with our method produces higher quality node embeddings with respect to classical end-to-end training procedures. Our method is based on meta-learning and is model-agnostic and task-agnostic, which makes it easily applicable to a wide range of multi-task domains.
• To the best of our knowledge, we are the first to propose a GNN model generating a single set of node embeddings that can be used to perform the three most common graph-related tasks (graph classification, node classification, and link prediction). In particular, our embeddings lead to comparable or higher performance with respect to single-task models even when used as input to a simple linear classifier.
• We show that the episodic training strategy at the base of our proposed meta-learning procedure leads to better node embeddings even for models trained on a single task. This unexpected finding provides interesting directions that we believe can be useful to the whole deep representation learning community." }, { "heading": "2 RELATED WORK", "text": "GNNs, MTL, and meta-learning are very active areas of research. We highlight works that are at the intersections of these subjects, and point the interested reader to comprehensive reviews of each field. To the best of our knowledge there is no work using meta-learning for graph MTL, or proposing a GNN performing graph classification, node classification, and link prediction concurrently.
Graph Neural Networks. GNNs have a long history (Scarselli et al., 2009), but in the past few years the field has grown exponentially; we refer the reader to Chami et al. (2020); Wu et al. (2020) for a thorough review of the field. The first popular GNN approaches were based on filters in the graph spectral domain (Bronstein et al., 2017), and presented many challenges, including high computational complexity. Defferrard et al. (2016) introduced ChebNet, which uses Chebyshev polynomials to produce localized and efficient filters in the graph spectral domain. Graph Convolutional Networks (Kipf & Welling, 2017) then introduced a localized first-order approximation of spectral graph convolutions, which was later extended to include attention mechanisms (Veličković et al., 2018). Recently, Xu et al. (2019) provided theoretical results on the expressivity of GNNs.
Multi-Task Learning. Works at the intersection of MTL and GNNs have mostly focused on multi-head architectures.
These models are all composed of a series of GNN layers followed by multiple heads that perform the desired downstream tasks. In this category, Montanari et al. (2019) propose a model for the prediction of physico-chemical properties. Holtz et al. (2019) and Xie et al. (2020) propose multi-task models for concurrently performing node and graph classification. Finally, Avelar et al. (2019) introduce a multi-head GNN for learning multiple graph centrality measures, and Li & Ji (2019) propose an MTL method for the extraction of multiple biomedical relations. The work by Haonan et al. (2019) introduces a model that can be trained for several tasks singularly; hence, unlike the previously mentioned approaches and our proposed method, it cannot perform multiple tasks concurrently. There are also some works that use GNNs as a tool for MTL: Liu et al. (2019b) use GNNs to allow communication between tasks, while Zhang et al. (2018) use GNNs to estimate the test error of an MTL model. We further mention the work by Wang et al. (2020), which considers the task of generating "general" node embeddings; however, their method is not based on GNNs, does not consider node attributes (unlike our method), and is not focused on the three most common graph-related tasks, which we consider. For an exhaustive review of deep MTL techniques we refer the reader to Vandenhende et al. (2020).
Meta-Learning. Meta-learning consists in learning to learn. Many methods have been proposed (see the review by Hospedales et al. (2020)), especially in the area of few-shot learning. Garcia & Bruna (2018) frame the few-shot learning problem with a partially observed graphical model and use GNNs as an inference algorithm. Liu et al. (2019a) use GNNs to propagate messages between class prototypes and improve existing few-shot learning methods, while Suo et al. (2020) use GNNs to introduce domain knowledge in the form of graphs. There are also several works that use meta-learning to train GNNs in few-shot learning scenarios with applications to node classification (Zhou et al., 2019; Yao et al., 2020), edge labelling (Kim et al., 2019), link prediction (Alet et al., 2019; Bose et al., 2019), and graph regression (Nguyen et al., 2020). Finally, other combinations of meta-learning and GNNs involve adversarial attacks on GNN models (Zügner & Günnemann, 2019) and active learning (Madhawa & Murata, 2020)." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 GRAPH NEURAL NETWORKS", "text": "Many popular state-of-the-art GNN models follow the message-passing paradigm (Gilmer et al., 2017). Let us represent a graph G = (A, X) with an adjacency matrix A ∈ {0, 1}^{n×n} and a node feature matrix X ∈ R^{n×d}, where the v-th row X_v represents the d-dimensional feature vector of node v. Let H^{(ℓ)} ∈ R^{n×d'} be the matrix containing the node representations at layer ℓ. A message-passing layer updates the representation of every node v as follows:
$$\mathrm{msg}_v^{(\ell)} = \mathrm{AGGREGATE}\big(\{H_u^{(\ell)}\;\forall u \in \mathcal{N}_v\}\big), \qquad H_v^{(\ell+1)} = \mathrm{UPDATE}\big(H_v^{(\ell)}, \mathrm{msg}_v^{(\ell)}\big),$$
where H^{(0)} = X, N_v is the set of neighbours of node v, AGGREGATE is a permutation invariant function, and UPDATE is usually a neural network. After L message-passing layers, the final node embeddings H^{(L)} are used to perform a given task, and the network is trained end-to-end.
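To illustrate the AGGREGATE/UPDATE scheme above, here is a minimal mean-aggregation message-passing layer in PyTorch. This is a generic sketch, not the exact GCN layer used in the paper.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Mean-aggregation message passing: msg_v = mean of neighbour embeddings,
    UPDATE = a small neural network on [own embedding, aggregated message]."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.update = nn.Linear(2 * in_dim, out_dim)

    def forward(self, A, H):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)
        msg = (A @ H) / deg                        # AGGREGATE: mean over neighbours
        return torch.relu(self.update(torch.cat([H, msg], dim=1)))  # UPDATE

A = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # toy 3-node graph
X = torch.randn(3, 8)                                         # node features
layer = MessagePassingLayer(8, 16)
print(layer(A, X).shape)   # torch.Size([3, 16])
```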
" }, { "heading": "3.2 MODEL-AGNOSTIC META-LEARNING AND ANIL", "text": "MAML (Model-Agnostic Meta-Learning) is an optimization-based meta-learning strategy proposed by Finn et al. (2017). Let fθ be a deep learning model, where θ represents its parameters. Let p(E) be a distribution over episodes [1], where an episode Ei ∼ p(E) is defined as a tuple containing a loss function, a support set, and a target set: Ei = (L_{Ei}(·), S_{Ei}, T_{Ei}), where support and target sets are simply sets of labelled examples. MAML's goal is to find a value of θ that can quickly, i.e., in a few steps of gradient descent, be adapted to new episodes. This is done with a nested-loop optimization procedure: an inner loop adapts the parameters to the support set of an episode by performing some steps of gradient descent, and an outer loop updates the initial parameters aiming at a setting that allows fast adaptation.
[1] The meta-learning literature usually derives episodes from tasks (i.e., tuples containing a dataset and a loss function). We focus on episodes to avoid using the term task for both an MTL task and a meta-learning task.
Formally, by defining θ′_i(t) as the parameters after t adaptation steps on the support set of episode Ei, we can express the computations in the inner loop as
$$\theta_i'(t) = \theta_i'(t-1) - \alpha \nabla_{\theta_i'(t-1)} \mathcal{L}_{E_i}\big(f_{\theta_i'(t-1)}, \mathcal{S}_{E_i}\big), \quad \text{with } \theta_i'(0) = \theta,$$
where L_{Ei}(f_{θ′_i(t−1)}, S_{Ei}) indicates the loss over the support set S_{Ei} of the model with parameters θ′_i(t − 1), and α is the learning rate. The meta-objective that the outer loop tries to minimize is defined as L_meta = Σ_{Ei∼p(E)} L_{Ei}(f_{θ′_i(t)}, T_{Ei}), which leads to the following parameter update [2]:
$$\theta = \theta - \beta \nabla_\theta \mathcal{L}_{meta} = \theta - \beta \nabla_\theta \sum_{E_i \sim p(E)} \mathcal{L}_{E_i}\big(f_{\theta_i'(t)}, \mathcal{T}_{E_i}\big).$$
[2] We limit ourselves to one step of gradient descent for clarity, but any optimization strategy could be used.
Raghu et al. (2020) showed that feature reuse is the dominant factor in MAML: in the adaptation loop, only the last layer(s) in the network are updated, while the first layer(s) remain almost unchanged. The authors then propose ANIL (Almost No Inner Loop), where they split the parameters into two sets: one that is used for adaptation in the inner loop, and one that is only updated in the outer loop. This simplification leads to computational improvements while maintaining performance.
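The following is a minimal sketch of the MAML inner/outer loop just described, written with `torch.func` for a generic model; it is not the paper's training code, and `episodes` is assumed to be a list of ((x_s, y_s), (x_t, y_t)) support/target pairs.

```python
import torch
import torch.nn as nn

def maml_outer_step(model, episodes, loss_fn, alpha=0.01, beta=0.001, inner_steps=1):
    meta_opt = torch.optim.SGD(model.parameters(), lr=beta)
    meta_loss = 0.0
    for (x_s, y_s), (x_t, y_t) in episodes:
        # Inner loop: adapt a functional copy of the parameters on the support set.
        fast = dict(model.named_parameters())
        for _ in range(inner_steps):
            loss = loss_fn(torch.func.functional_call(model, fast, x_s), y_s)
            grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
            fast = {name: p - alpha * g for (name, p), g in zip(fast.items(), grads)}
        # Outer objective: loss of the adapted parameters on the target set.
        meta_loss = meta_loss + loss_fn(torch.func.functional_call(model, fast, x_t), y_t)
    meta_opt.zero_grad()
    meta_loss.backward()   # second-order gradients flow back to the initial theta
    meta_opt.step()

# Toy usage with random episodes.
model = nn.Linear(4, 2)
episodes = [((torch.randn(8, 4), torch.randint(0, 2, (8,))),
             (torch.randn(8, 4), torch.randint(0, 2, (8,)))) for _ in range(3)]
maml_outer_step(model, episodes, nn.CrossEntropyLoss())
```

For ANIL, the inner loop would update only the last-layer (head) entries of `fast`, leaving the remaining parameters fixed until the outer update.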
This procedure differs from classical training methods (which aim at solving multiple tasks concurrently), and from traditional meta-learning approaches (which aim at parameters that allow fast multi-task adaptation, inheriting the problems of classical methods)3.\n2We limit ourself to one step of gradient descent for clarity, but any optimization strategy could be used. 3We provide a more detailed discussion on the differences with traditional methods in Appendix C.\nBased on (ii) and (iii), we develop a novel meta-learning procedure where the inner loop adapts to multiple tasks singularly, each time with the goal of single-task generalization. Using an encoderdecoder architecture, (i) suggests that this procedure leads to an encoder that learns features that are reusable across episodes. Furthermore, in each episode, the learner is adapting to multiple tasks, hence the encoder is learning features that are general across multiple tasks.\nIntuition. Training multi-task models is very challenging (Standley et al., 2020), as some losses may dominate over others, or gradients for different tasks may point in opposite directions. Some methods have been proposed to counteract this issues (Kendall et al., 2018; Chen et al., 2018), but they are not always effective and it is not clear how one should choose which method to apply (Vandenhende et al., 2020). We design a meta-learning procedure where the learner does not have to find a configuration of the parameters that concurrently performs all tasks, but it has to find a configuration that can easily be adapted to perform each of the tasks singularly. By then leveraging the implicit/explicit robust representation learning that happens with MAML and ANIL, we can extract an encoder capable of generating node representations that generalize across tasks.\nIn the rest of this section, we formally present our novel meta-learning procedure for multi-task graph representation learning. There are three aspects that we need to define: (1) Episode Design: how is a an episode composed, (2) Model Architecture Design: what is the architecture of our model, (3) Meta-Training Design: how, and which, parameters are adapted/updated." }, { "heading": "4.1 EPISODE DESIGN", "text": "In our case, an episode becomes a multi-task episode (Figure 2 (a)). To formally introduce the concept, let us consider the case where the tasks are graph classification (GC), node classification (NC), and link prediction (LP). We define a multi-task episode E(m)i ∼ p(E(m)) as a tuple\nE(m)i = (L (m) Ei ,S (m) Ei , T (m) Ei ) L(m)Ei = λ (GC)L(GC)Ei + λ (NC)L(NC)Ei + λ (LP )L(LP)Ei S(m)Ei = {S (GC) Ei ,S (NC) Ei ,S (LP) Ei }, T (m) Ei = {T (GC) Ei , T (NC) Ei , T (LP) Ei }\nwhere λ(·) are balancing coefficients. The meta-objective of our method then becomes: L(m)meta = ∑\nE(m)i ∼p(E(m))\nλ(GC)L(GC)Ei + λ (NC)L(NC)Ei + λ (LP )L(LP)Ei .\nSupport and target sets are set up to resemble a training and a validation set. This way the outer loop’s objective becomes to maximize the performance on a validation set, given a training set, hence pushing towards generalization. In more detail, given a batch of graphs, we divide it in equally sized splits (one per task), and we create support and target sets as follows:\nGraph Classification: S(GC)Ei and T (GC) Ei contain labeled graphs, obtained with a random split.\nNode Classification: S(NC)Ei and T (NC) Ei are composed of the same graphs, with different labelled\nnodes. 
We mimic the common semi-supervised setting (Kipf & Welling, 2017), where feature vectors are available for all nodes and only a small subset of nodes is labelled.
Link Prediction: S^{(LP)}_{Ei} and T^{(LP)}_{Ei} are composed of the same graphs, with different query edges. In every graph we randomly remove some edges, which are used as positive examples together with non-removed edges, and we randomly sample pairs of non-adjacent nodes as negative examples.
The full algorithm for the creation of multi-task episodes is provided in Appendix A; a minimal sketch is shown below.
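The following is a minimal sketch of multi-task episode creation. The graph data structure (dicts with `edges`, `node_labels`, and `graph_label` keys) and all function names are assumptions made for illustration, and negative sampling of non-adjacent node pairs for link prediction is omitted for brevity.

```python
import random

def make_episode(graphs, node_label_frac=0.1, edge_drop_frac=0.2):
    random.shuffle(graphs)
    third = len(graphs) // 3
    gc, nc, lp = graphs[:third], graphs[third:2 * third], graphs[2 * third:]

    # Graph classification: random split of labelled graphs.
    gc_support, gc_target = gc[: len(gc) // 2], gc[len(gc) // 2:]

    # Node classification: same graphs, disjoint labelled-node subsets.
    nc_support, nc_target = [], []
    for g in nc:
        nodes = list(g["node_labels"])
        random.shuffle(nodes)
        k = max(1, int(node_label_frac * len(nodes)))
        nc_support.append((g, nodes[:k]))
        nc_target.append((g, nodes[k:2 * k]))

    # Link prediction: removed edges serve as positive query examples.
    lp_support, lp_target = [], []
    for g in lp:
        edges = list(g["edges"])
        random.shuffle(edges)
        k = max(1, int(edge_drop_frac * len(edges)))
        lp_support.append((g, edges[:k]))       # queries for the support set
        lp_target.append((g, edges[k:2 * k]))   # distinct queries for the target set

    return {"GC": (gc_support, gc_target),
            "NC": (nc_support, nc_target),
            "LP": (lp_support, lp_target)}

# Toy usage.
graphs = [{"edges": [(0, 1), (1, 2), (0, 2)],
           "node_labels": {0: "A", 1: "B", 2: "A"},
           "graph_label": i % 2} for i in range(6)]
print(sorted(make_episode(graphs)))   # ['GC', 'LP', 'NC']
```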
" }, { "heading": "4.2 MODEL ARCHITECTURE DESIGN", "text": "We use an encoder-decoder model with a multi-head architecture. The backbone (which represents the encoder) is composed of 3 GCN (Kipf & Welling, 2017) layers with ReLU non-linearities and residual connections (He et al., 2016). The decoder is composed of three heads. The node classification head is a single-layer neural network with a Softmax activation that is shared across nodes and maps node embeddings to class predictions. In the graph classification head, first a single-layer neural network (shared across nodes) performs a linear transformation (followed by a ReLU activation) of the node embeddings. The transformed node embeddings are then averaged, and a final single-layer neural network with a Softmax activation outputs the class predictions. The link prediction head is composed of a single-layer neural network with a ReLU non-linearity that transforms node embeddings, and another single-layer neural network that takes as input the concatenation of two node embeddings and outputs the probability of a link between them." }, { "heading": "4.3 META-TRAINING DESIGN", "text": "We first present our meta-learning training procedure, and then describe which parameters are adapted/updated in the inner and outer loops.
Meta-Learning Training Procedure. To avoid the problems arising from training a model that performs multiple tasks concurrently, we design a meta-learning procedure where the inner loop adaptation and the meta-objective computation involve a single task at a time. Only the parameter update performed to minimize the meta-objective involves multiple tasks, but, crucially, it does not aim at a setting of parameters that can solve, or quickly adapt to, multiple tasks concurrently, but rather at a setting that allows multiple fast single-task adaptations.
Algorithm 1: Proposed Meta-Learning Procedure
Input: Model fθ; Episodes E = {E1, .., En}
init(θ)
for Ei in E do
  o_loss ← 0
  for t in (GC, NC, LP) do
    θ′(t) ← θ
    θ′(t) ← ADAPT(fθ′(t), S^{(t)}_{Ei}, L^{(t)}_{Ei})
    o_loss ← o_loss + TEST(fθ′(t), T^{(t)}_{Ei}, L^{(t)}_{Ei})
  end
  θ ← UPDATE(θ, o_loss, θ′(GC), θ′(NC), θ′(LP))
end
The pseudocode of our procedure is in Algorithm 1. ADAPT performs a few steps of gradient descent on a task-specific loss function and support set, TEST computes the value of the meta-objective component on a task-specific loss function and target set, and UPDATE optimizes the parameters by minimizing the meta-objective. Notice how the multiple heads of the decoder in our model are never used concurrently.
Parameter Update in Inner/Outer Loop. Let us partition the parameters of our model into four sets: θ = [θGCN, θNC, θGC, θLP], representing the parameters of the backbone (θGCN), node classification head (θNC), graph classification head (θGC), and link prediction head (θLP). We name our proposed meta-learning strategy SAME (Single-Task Adaptation for Multi-Task Embeddings), and present two variants (Figure 2 (b)-(c)):
Implicit SAME (iSAME): all the parameters θ are used for adaptation. This strategy makes use of the implicit feature-reuse factor of MAML, leading to parameters θGCN that are general across multi-task episodes.
Explicit SAME (eSAME): only the head parameters θNC, θGC, θLP are used for adaptation. Contrary to the previous variant, this strategy explicitly aims at learning parameters θGCN that are general across multi-task episodes by only updating them in the outer loop.
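Below is a minimal, self-contained sketch of Algorithm 1 with the eSAME parameter split: only the task head is adapted in the inner loop, while the shared backbone is updated in the outer loop only. The toy data and single-linear-layer encoder/heads are stand-ins for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Linear(8, 16)                      # stand-in for the GCN encoder
heads = {t: nn.Linear(16, 2) for t in ("GC", "NC", "LP")}
meta_opt = torch.optim.Adam(
    list(backbone.parameters()) + [p for h in heads.values() for p in h.parameters()],
    lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fake_set(n=8):                               # toy (inputs, labels) split
    return (torch.randn(n, 8), torch.randint(0, 2, (n,)))

episode = {t: (fake_set(), fake_set()) for t in ("GC", "NC", "LP")}

outer_loss = 0.0
for t in ("GC", "NC", "LP"):
    (xs, ys), (xt, yt) = episode[t]
    # ADAPT: one gradient step on the head parameters only (eSAME).
    w = dict(heads[t].named_parameters())
    loss = loss_fn(torch.func.functional_call(heads[t], w, backbone(xs)), ys)
    grads = torch.autograd.grad(loss, list(w.values()), create_graph=True)
    w = {name: p - 0.01 * g for (name, p), g in zip(w.items(), grads)}
    # TEST: adapted head on the target set; gradients reach the backbone here.
    outer_loss = outer_loss + loss_fn(
        torch.func.functional_call(heads[t], w, backbone(xt)), yt)

meta_opt.zero_grad()
outer_loss.backward()                            # UPDATE (outer loop)
meta_opt.step()
```

For iSAME, the inner-loop dictionary `w` would additionally include the backbone parameters, so that the whole model is adapted per task.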
The idea is that the initial training on all tasks should lead the model towards the extraction of features that it would otherwise not consider (by only seeing 2 tasks), and the fine-tuning process should then allow the model to use these features to target the specific tasks of interest. Results are shown in Table 2 (we omit standard deviations due to space limitations). We notice that the embeddings produced by our procedures in a multi-task setting, followed by a linear classifier, achieve comparable performance to end-to-end single-task models. In fact, the linear classifier is never outperformed by more than 3%, and in 50% of the cases it actually achieves higher performance. We further notice that the fine-tuning baseline severely struggles, and is almost always outperformed by both single-task models and our proposed methods. These results indicate that the episodic meta-learning procedure adopted by SAME is extracting features that are otherwise not accessible with standard training techniques.\nQ3: We train a multi-task model, and we then train a new simple network (with the same architecture as the heads described in Section 4.2), which we refer to as the classifier, on the embeddings to perform a task that was not seen during training. We compare the performance of the classifier on the embeddings learned by a model trained in a classical manner, and with our proposed procedure. Intuitively, this test gives us a way to analyse whether the embeddings learned by our proposed approaches contain “more information” than embeddings learned in a classical manner. Results on the ENZYMES dataset are shown in Figure 3, where we notice that embeddings learned by our proposed approaches lead to at least 10% higher performance. We observe an analogous trend on the other datasets, and report all results in Appendix D.\nQ4: We train the same multi-task model, both in the classical supervised manner and with our proposed approaches, on all multi-task combinations. For our approaches, we then train a linear classifier on top of the node embeddings. We further consider the fine-tuning baseline introduced in Q2. We use the multi-task performance (∆m) metric (Maninis et al., 2019), defined as the average per-task drop with respect to the single-task baseline: ∆m = (1/T) Σ_{i=1}^{T} (M_{m,i} − M_{b,i}) / M_{b,i}, where M_{m,i} is the value of a task's metric for the multi-task model, and M_{b,i} is the value for the baseline.\nResults are shown in Table 4. We first notice that usually multi-task models achieve lower performance than specialized single-task ones. We then highlight that linear classifiers trained on the embeddings produced by our procedures are not only comparable, but in many cases significantly superior to classically trained multi-task models. In fact, a multi-task model trained in a classical manner is highly sensitive to the tasks that are being learned (e.g. GC and LP negatively interfere with each other in every dataset), while our methods seem much less sensitive: the former has a worst-case average drop in performance of 29%, while our method has a worst-case average drop of less than 3%. Finally, we also notice that the fine-tuning baseline generally performs worse than classically trained models, confirming that transferring knowledge in multi-task settings is not easy, and more advanced techniques, like our proposed method SAME, are needed." }, { "heading": "6 CONCLUSIONS", "text": "In this work we propose a novel representation learning strategy for multi-task settings.
Our method overcomes the problems that arise when learning to solve multiple tasks concurrently by optimizing for a parameter setting that can quickly, i.e. with few steps of gradient descent, be adapted for high single-task performance on multiple tasks. We apply our method to graph representation learning, and find that our training procedure leads to higher quality node embeddings, both in the multi-task setting and in the single-task setting. In fact, we show that a linear classifier trained on the embeddings produced by our method has comparable or better performance than classical end-to-end supervised models. Furthermore, we find that the embeddings learned with our proposed procedures lead to higher performance on downstream tasks that were not seen during training. We believe this work can be highly useful to the whole deep representation learning community, as our method is model-agnostic and task-agnostic, and can therefore be applied to a wide variety of multi-task domains." }, { "heading": "A EPISODE DESIGN ALGORITHM", "text": "Algorithm 2 contains the procedure for the creation of the episodes for our meta-learning procedures. The algorithm takes as input a batch of graphs (with graph labels, node labels, and node features) and the loss function balancing weights, and outputs a multi-task episode. We assume that each graph has a set of attributes that can be accessed with a dot-notation (like in most object-oriented programming languages).\nNotice how the episodes are created so that only one task is performed on each graph. This is important as, in the inner loop of our meta-learning procedure, the learner adapts and tests the adapted parameters on one task at a time. The outer loop then updates the parameters, optimizing for a representation that leads to fast single-task adaptation. This procedure bypasses the problem of learning parameters that directly solve multiple tasks, which can be very challenging.\nAnother important aspect to notice is that the support and target sets are designed as if they were the training and validation splits for training a single-task model with the classical procedure. In this way, the meta-objective becomes training a model that generalizes well." }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS", "text": "In this section we provide additional information on the implementation of the models used in our experimental section. We implement our models using PyTorch (Paszke et al., 2019), PyTorch Geometric (Fey & Lenssen, 2019) and Torchmeta (Deleu et al., 2019). For all models the number and structure of the layers is as described in Section 4.2 of the paper, where we use 256-dimensional node embeddings at every layer.\nAt every cross-validation fold the datasets are split into 70% for training, 10% for validation, and 20% for testing. For each model we perform 100 iterations of hyperparameter optimization over the same search space (for shared parameters) using Ax (Bakshy et al., 2018).\nWe tried some sophisticated methods to balance the contribution of loss functions during multi-task training, like GradNorm (Chen et al., 2018) and Uncertainty Weights (Kendall et al., 2018), but we saw that usually they do not positively impact performance. Furthermore, in the few cases where they increase performance, they work for both classically trained models and for models trained with our proposed procedures.
We then set the balancing weights to λ^(GC) = λ^(NC) = λ^(LP) = 1 to provide better comparisons between the training strategies.\nAlgorithm 2: Episode Design Algorithm\nInput: Batch of n randomly sampled graphs B = {G_1, .., G_n}; loss weights λ^(GC), λ^(NC), λ^(LP) ∈ [0, 1]\nOutput: Episode E_i = (L^(m)_Ei, S^(m)_Ei, T^(m)_Ei)\nB^(GC), B^(NC), B^(LP) ← equally divide the graphs in B into three sets\n/* Graph Classification */\nS^(GC)_Ei, T^(GC)_Ei ← randomly divide B^(GC) with a 60/40 split\n/* Node Classification */\nfor G_i in B^(NC) do\nnum_labelled_nodes ← G_i.num_nodes × 0.3\nN ← divide nodes per class, then iteratively randomly sample one node per class without replacement and add it to N until |N| = num_labelled_nodes\nG'_i ← copy(G_i)\nG_i.labelled_nodes ← N; G'_i.labelled_nodes ← G_i.nodes \ N\nS^(NC)_Ei.add(G_i); T^(NC)_Ei.add(G'_i)\nend\n/* Link Prediction */\nfor G_i in B^(LP) do\nE^(N)_i ← randomly pick negative samples (edges that are not in the graph; possibly in the same number as the number of edges in the graph)\nE^(1,N)_i, E^(2,N)_i ← divide E^(N)_i with an 80/20 split\nE^(P)_i ← randomly remove 20% of the edges in G_i\nG'^(1)_i ← G_i with E^(P)_i removed\nG'^(2)_i ← copy(G'^(1)_i)\nG'^(1)_i.positive_edges ← G'^(1)_i.edges; G'^(2)_i.positive_edges ← E^(P)_i\nG'^(1)_i.negative_edges ← E^(1,N)_i; G'^(2)_i.negative_edges ← E^(2,N)_i\nS^(LP)_Ei.add(G'^(1)_i); T^(LP)_Ei.add(G'^(2)_i)\nend\nS^(m)_Ei ← {S^(GC)_Ei, S^(NC)_Ei, S^(LP)_Ei}\nT^(m)_Ei ← {T^(GC)_Ei, T^(NC)_Ei, T^(LP)_Ei}\nL^(GC)_Ti ← Cross-Entropy(·); L^(NC)_Ti ← Cross-Entropy(·); L^(LP)_Ti ← Binary Cross-Entropy(·)\nL^(m)_Ei = λ^(GC) L^(GC)_Ti + λ^(NC) L^(NC)_Ti + λ^(LP) L^(LP)_Ti\nReturn E_i = (L^(m)_Ei, S^(m)_Ei, T^(m)_Ei)\nLinear Model. The linear model trained on the embeddings produced by our proposed method is a standard linear SVM. In particular, we use the implementation available in Scikit-learn (Pedregosa et al., 2011) with default hyperparameters. For graph classification, we take the mean of the node embeddings as input. For link prediction, we take the concatenation of the embeddings of two nodes. For node classification, we keep the embeddings unaltered (a code sketch of this probe is given at the end of this appendix).\nDeep Learning Baselines. We train the single-task models for 1000 epochs, and the multi-task models for 5000 epochs, with early stopping on the validation set (for multi-task models we use the sum of the task validation losses or accuracies as metrics for early stopping). Optimization is done using Adam (Kingma & Ba, 2015). For node classification and link prediction we found that normalizing the node embeddings to unit norm in between GCN layers helps performance.\nOur Meta-Learning Procedure. We train the single-task models for 5000 epochs, and the multi-task models for 15000 epochs, with early stopping on the validation set (for multi-task models we use the sum of the task validation losses or accuracies as metrics for early stopping). Early stopping is very important in this case, as it is the only way to check if the meta-learned model is overfitting the training data. The inner loop adaptation consists of 1 step of gradient descent. Optimization in the outer loop is done using Adam (Kingma & Ba, 2015). We found that normalizing the node embeddings to unit norm in between GCN layers helps performance.
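As referenced above, here is a minimal sketch of the linear probe on frozen embeddings, assuming a hypothetical container z where z[g] is the (num_nodes, 256) node-embedding matrix of graph g. Per the description above, graph classification mean-pools node embeddings, link prediction concatenates two node embeddings, and the SVM uses Scikit-learn defaults.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import LinearSVC

def probe_graph_classification(z, y_graph, train_g, test_g):
    # Mean-pool node embeddings to obtain one feature vector per graph.
    Xtr = np.stack([z[g].mean(axis=0) for g in train_g])
    Xte = np.stack([z[g].mean(axis=0) for g in test_g])
    clf = LinearSVC().fit(Xtr, [y_graph[g] for g in train_g])
    return clf.score(Xte, [y_graph[g] for g in test_g])       # accuracy

def probe_link_prediction(z, train_edges, test_edges):
    # Each query edge is (graph, u, v, label); features are concatenations.
    feat = lambda e: np.concatenate([z[e[0]][e[1]], z[e[0]][e[2]]])
    Xtr = np.stack([feat(e) for e in train_edges])
    Xte = np.stack([feat(e) for e in test_edges])
    clf = LinearSVC().fit(Xtr, [e[3] for e in train_edges])
    scores = clf.decision_function(Xte)
    return roc_auc_score([e[3] for e in test_edges], scores)  # ROC AUC
```
"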
}, { "heading": "C COMPARISON WITH TRADITIONAL TRAINING APPORACHES", "text": "Our proposed meta-learning approach is significantly different from the classical training strategy (Algorithm 3), and the traditional meta-learning approaches (Algorithm 4).\nThe classical training approach for multi-task models takes as input a batch of graphs, which is simply a set of graphs, where on each graph the model has to execute all the tasks. Based on the cumulative loss on all tasks\nL = λ(GC)L(GC) + λ(NC)L(NC) + λ(LP )L(LP)\nfor all the graphs in the batch, the parameters are updated with some form of gradient descent, and the procedure is repeated for each batch.\nThe traditional meta-learning approach takes as input an episode, like our approach, but for every graph in the episode all the tasks are performed. The support set and target set are single sets of graphs, where every task can be performed on all graphs. The support set is used to obtain the adapted parameters θ′, which have the goal of concurrently solving all tasks on all graphs in the target set. The loss functions, both for the inner loop and for the outer loop, are the same as the one used by the classical training approach. The outer loop then updates the parameters aiming at a setting that can easily, i.e. with a few steps of gradient descent, be adapted to perform multiple tasks concurrently given a support set.\nAlgorithm 3: Classical Training Input: Model fθ; Batches B = {B1, ..,Bn} init(θ) for Bi in B do\nloss← concurrently perform all tasks on all graphs in Bi θ ← UPDATE(θ,loss)\nend\nAlgorithm 4: Traditional Meta-Learning Input: Model fθ; Episodes E = {E1, .., En} init(θ) for Ei in E do\ni loss← concurrently perform all tasks on all support set graphs θ′ ← ADAPT(θ,i loss) o loss← concurrently perform all\ntasks on all target set graphs using parameters θ′ θ ← UPDATE(θ, θ′,o loss)\nend" }, { "heading": "D FULL RESULTS FOR THE GENERALIZATION OF NODE EMBEDDINGS", "text": "Table 4 contains results for a neural network, trained on the embeddings generated by a multi-task model, to perform a task that was not seen during the training of the multi-task model. Accuracy (%) is used for node classification (NC) and graph classification (GC); ROC AUC (%) is used for link prediction (LP). The embeddings produced by our meta-learning methods lead to higher performance (up to 35%), showing that our procedures lead to the extraction of more informative node embeddings with respect to the classical end-to-end training procedure." } ]
2020
null
SP:a7713950962f783173dbcf3ecd14289782380561
[ "The paper presents a semi-supervised model to predict the vitality of beehives. The inputs of the model are data from sensors (audio on one hand and environmental on the other hand such as temperature, humidity ...). The objective is to predict simultaneously 3 values of interest: the frames state of beehives, the potential diseases and their severity. The architecure is composed of two modules, the first one is an auto-encoder in charge of embedding the audio spectrogram in a low latent dimensional space and the second one a MLP to predict the outputs from the latent spectrogram and the environmental data. The paper presents results of the proposed architecture on a small dataset, an ablation study to show the benefits of the auto-encoder module and the role of the environmental data and a latent space analysis to understand the ability of the model to capture relevant audio information linked to the diseases. " ]
Honey bees are critical to our ecosystem and food security as a pollinator, contributing 35% of our global agriculture yield (Klein et al., 2007). In spite of their importance, beekeeping is exclusively dependent on human labor and experience-derived heuristics, while requiring frequent human checkups to ensure the colony is healthy, which can disrupt the colony. Increasingly, pollinator populations are declining due to threats from climate change, pests, and environmental toxicity, making their management even more critical than ever before in order to ensure sustained global food security. To start addressing this pressing challenge, we developed an integrated hardware sensing system for beehive monitoring through audio and environment measurements, and a hierarchical semi-supervised deep learning model, composed of an audio modeling module and a predictor, to model the strength of beehives. The model is trained jointly on audio reconstruction and prediction losses based on human inspections, in order to model both low-level audio features and circadian temporal dynamics. We show that this model performs well despite limited labels, and can learn an audio embedding that is useful for characterizing different sound profiles of beehives. This is the first instance, to our knowledge, of applying audio-based deep learning to model beehives and population size in an observational setting across a large number of hives.
[ { "affiliations": [], "name": "BEEHIVE STRENGTHS" } ]
[ { "authors": [ "David J Anderson", "Pietro Perona" ], "title": "Toward a science of computational ethology", "venue": null, "year": 2014 }, { "authors": [ "Ada D. Eban-Rothschild", "Guy Bloch" ], "title": "Differences in the sleep architecture of forager and young honeybees (apis mellifera)", "venue": "Journal of Experimental Biology,", "year": 2008 }, { "authors": [ "S. Ferrari", "M. Silva", "M. Guarino", "D. Berckmans" ], "title": "Monitoring of swarming sounds in bee hives for early detection of the swarming period", "venue": "Computers and Electronics in Agriculture,", "year": 2008 }, { "authors": [ "D. Howard", "Olga Duran", "Gordon Hunter", "Krzysztof Stebel" ], "title": "Signal processing the acoustics of honeybees (apis mellifera) to identify the ”queenless” state in hives", "venue": "Proceedings of the Institute of Acoustics, 35:290–297,", "year": 2013 }, { "authors": [ "Aren Jansen", "Daniel PW Ellis", "Shawn Hershey", "R Channing Moore", "Manoj Plakal", "Ashok C Popat", "Rif A Saurous" ], "title": "Coincidence, categorization, and consolidation: Learning to recognize sounds with minimal supervision", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Nadia Kazlauskas", "M. Klappenbach", "A. Depino", "F. Locatelli" ], "title": "Sickness behavior in honey bees", "venue": "Frontiers in Physiology,", "year": 2016 }, { "authors": [ "Alexandra Klein", "Bernard Vaissière", "Jim Cane", "Ingolf Steffan-Dewenter", "Saul Cunningham", "Claire Kremen", "Teja Tscharntke" ], "title": "Importance of pollinators in changing landscapes for world crops. Proceedings", "venue": "Biological sciences / The Royal Society, 274:303–13,", "year": 2007 }, { "authors": [ "Grzegorz Krzywoszyja", "Ryszard Rybski", "Grzegorz Andrzejewski" ], "title": "Bee swarm detection based on comparison of estimated distributions samples of sound", "venue": "IEEE Transactions on Instrumentation and Measurement,", "year": 2018 }, { "authors": [ "Kangkang Lu", "Chuan-Sheng Foo", "Kah Kuan Teh", "Huy Dat Tran", "Vijay Ramaseshan Chandrasekhar" ], "title": "Semi-Supervised Audio Classification with Consistency-Based Regularization", "venue": "In Proc. Interspeech", "year": 2019 }, { "authors": [ "Margarita López-Uribe", "Michael Simone-Finstrom" ], "title": "Special issue: Honey bee research in the us: Current state and solutions to beekeeping problems", "venue": "Insects, 10:22,", "year": 2019 }, { "authors": [ "Amro Qandour", "Iftekhar Ahmad", "Daryoush Habibi", "Mark Leppard" ], "title": "Remote beehive monitoring using acoustic signals", "venue": null, "year": 2014 }, { "authors": [ "Michael Ramsey", "M. Bencsik", "Michael Newton" ], "title": "Extensive vibrational characterisation and long-term monitoring of honeybee dorso-ventral abdominal vibration signals", "venue": "Scientific Reports,", "year": 2018 }, { "authors": [ "Michael-Thomas Ramsey", "Martin Bencsik", "Michael Newton", "Maritza Reyes", "Maryline Pioz", "Didier Crauser", "Noa Simon-Delso", "Yves Le Conte" ], "title": "The prediction of swarming in honeybee colonies using vibrational spectra", "venue": "Scientific Reports, 10,", "year": 2020 }, { "authors": [ "Antonio Robles-Guerrero", "Tonatiuh Saucedo-Anaya", "Efrén González-Ramérez", "Carlos Eric Galván-Tejada" ], "title": "Frequency analysis of honey bee buzz for automatic recognition of health status: A preliminary study", "venue": "Res. Comput. Sci.,", "year": 2017 }, { "authors": [ "Stephen Russell", "Andrew B. 
Barron", "David Harris" ], "title": "Dynamic modelling of honey bee (apis mellifera) colony growth and failure", "venue": "Ecological Modelling,", "year": 2013 }, { "authors": [ "F. Ruttner" ], "title": "Biogeography and Taxonomy of Honeybees", "venue": "URL https://books.google.com/books?id=9j4gAQAAMAAJ", "year": 1988 }, { "authors": [ "Shôichi F Sakagami", "Hiromi Fukuda" ], "title": "Life tables for worker honeybees", "venue": "Population Ecology,", "year": 1968 }, { "authors": [ "Edward E Southwick" ], "title": "The honey bee cluster as a homeothermic superorganism", "venue": "Comparative Biochemistry and Physiology Part A: Physiology,", "year": 1983 }, { "authors": [ "Anton Stabentheiner", "Helmut Kovac", "Robert Brodschneider" ], "title": "Honeybee colony thermoregulation–regulatory mechanisms and contribution of individuals in dependence on age, location and thermal stress", "venue": "PLoS one,", "year": 2010 }, { "authors": [ "J.P. van der Sluijs", "N. Vaage" ], "title": "Pollinators and global food security: the need for holistic global stewardship", "venue": "Food Ethics,", "year": 2016 }, { "authors": [ "Milagra Weiss", "Patrick Maes", "William Fitz", "Lucy Snyder", "Tim Sheehan", "Brendon Mott", "Kirk Anderson" ], "title": "Internal hive temperature as a means of monitoring honey bee colony health in a migratory beekeeping operation before and during winter", "venue": "Apidologie, 48,", "year": 2017 }, { "authors": [ "Mark L. Winston" ], "title": "The Biology of the Honey Bee", "venue": "ISBN 9780674074095", "year": 1987 } ]
[ { "heading": null, "text": "Honey bees are critical to our ecosystem and food security as a pollinator, contributing 35% of our global agriculture yield (Klein et al., 2007). In spite of their importance, beekeeping is exclusively dependent on human labor and experiencederived heuristics, while requiring frequent human checkups to ensure the colony is healthy, which can disrupt the colony. Increasingly, pollinator populations are declining due to threats from climate change, pests, environmental toxicity, making their management even more critical than ever before in order to ensure sustained global food security. To start addressing this pressing challenge, we developed an integrated hardware sensing system for beehive monitoring through audio and environment measurements, and a hierarchical semi-supervised deep learning model, composed of an audio modeling module and a predictor, to model the strength of beehives. The model is trained jointly on audio reconstruction and prediction losses based on human inspections, in order to model both low-level audio features and circadian temporal dynamics. We show that this model performs well despite limited labels, and can learn an audio embedding that is useful for characterizing different sound profiles of beehives. This is the first instance to our knowledge of applying audio-based deep learning to model beehives and population size in an observational setting across a large number of hives." }, { "heading": "1 INTRODUCTION", "text": "Pollinators are one of the most fundamental parts of crop production worldwide (Klein et al., 2007). Without honey bee pollinators, there would be a substantial decrease in both the diversity and yield of our crops, which includes most common produce (van der Sluijs & Vaage, 2016). As a model organism, bees are also often studied through controlled behavioral experiments, as they exhibit complex responses to many environmental factors, many of which are yet to be fully understood. A colony of bees coordinate its efforts to maintain the overall health, with different types of bees tasked for various purposes. One of the signature modality of characterizing bee behavior is through the buzzing frequencies emitted through the vibration of the wings, which can correlate with various properties of the surroundings, including temperature, potentially allowing for a descriptive ’image’ of the hive in terms of strength (Howard et al., 2013; Ruttner, 1988).\nHowever, despite what is known about honey bees behavior and their importance in agriculture and natural diversity, there remains a substantial gap between controlled academic studies and the field practices carried out (López-Uribe & Simone-Finstrom, 2019). In particular, beekeepers use their long-tenured experience to derive heuristics for maintaining colonies, which necessitates frequent visual inspections of each frame of every box, many of which making up a single hive. During each inspection, beekeepers visually examine each frame and note any deformities, changes in colony size, amount of stored food, and amount of brood maintained by the bees. This process is labor intensive, limiting the number of hives that can be managed effectively. As growing risk factors make human inspection more difficult at scale, computational methods are needed in tracking changing hive dynamics on a faster timescale and allowing for scalable management. 
With modern sensing hardware that can record data for months and scalable modeling with state-of-the-art tools in machine learning, we can potentially start tackling some of the challenges facing the management of our pollinators, a key player in ensuring food security for the future." }, { "heading": "2 BACKGROUND AND RELATED WORKS", "text": "Our work falls broadly in applied machine learning within computational ethology, where automated data collection methods and machine learning models are developed to monitor and characterize biological species in natural or controlled settings (Anderson & Perona, 2014). In the context of honey bees, while there has been substantial work characterizing bee behavior through controlled audio, image, and video data collection with classical signal processing methods, there has not been a large-scale effort studying how current techniques in deep learning can be applied at scale to the remote monitoring of beehives in the field.\nPart of the challenge lies in data collection. Visual sensing within beehives is nearly impossible given the current design of boxes used to house bees. These boxes are heavily confined, with narrow spaces between many stacked frames for bees to hatch, rear brood, and store food. This makes it difficult to position cameras to capture complete data without a redesign of existing boxes. Environment sensors, however, can capture information localized to a larger region, such as temperature and humidity. Sound, likewise, can travel across many stacked boxes, which are typically made from wood and have good acoustics. Previous works have explored the possibility of characterizing colony status with audio in highly stereotyped events, such as extremely diseased vs healthy beehives (Robles-Guerrero et al., 2017) or swarming (Krzywoszyja et al., 2018; Ramsey et al., 2020), where the old Queen leaves with a large portion of the original colony. However, we have not seen work that attempts to characterize more sensitive measurements, such as the population of beehives, based on audio. We were inspired by these works and the latest advances in hardware sensing and deep learning to collect audio data in a longitudinal setting across many months for a large number of managed hives, and attempt to characterize some of the standard hive inspection items through machine learning.\nWhile audio makes it possible to capture a more complete picture of the inside of a hive, there are still challenges related to data semantics in the context of annotations. Image and video data can be readily processed and labeled post-collection if the objects of interest are recognizable. However, with honey bees, the sound properties captured by microphones are extremely difficult to discriminate, even to experts, due to the fact that the sound is not semantically meaningful, and microphone sensitivity deviations across sensors make it difficult to compare data across different hives. Thus, it is not possible to retrospectively assign labels to data, making human inspections during data collection the only source of annotations.
As beekeepers cannot inspect a hive frequently, due to the large number of hives managed and the potential disturbance caused to the hive, the task becomes few-shot learning.\nIn low-shot learning for audio, various works have highlighted the usefulness of using semi-supervised or unsupervised objectives and/or learning an embedding of audio data, mostly for the purpose of sound classification or speech recognition (Jansen et al., 2020; Lu et al., 2019). These models typically capture semantic differences between different sound sources. We were inspired by the audio classification work with semi-supervised or contrastive-learning objectives to build an architecture that could model our audio and learn an embedding without relying only on task-specific supervision. Unlike previous audio datasets used in prior works, longitudinal data is unlikely to discretize into distinct groups, due to the slower, continuously shifting dynamics across time on the course of weeks. Therefore, we make the assumption that, unlike current audio datasets which contain audio from distinct classes that can be clustered into sub-types, our data more likely occupies a smooth latent space, due to the slow progression in time of changing properties, such as the transition between healthy and low-severity disease, or changes in the size of the population, as bee colonies increase by only around one frame per week during periods of colony growth (Russell et al., 2013; Sakagami & Fukuda, 1968)." }, { "heading": "3 METHODS", "text": "Hive Setup Each hive is composed of multiple 10-frame standard Langstroth boxes stacked on top of one another, with the internal sensor located in the center frame of the bottom-most box, and the external sensor on the outside side wall of the box. This sensor placement is based on prior knowledge that bees tend to collect near the bottom box first prior to moving up the tower (Winston, 1987). Due to difficulties in obtaining data that would span the spectrum of different colony sizes without intervention in a timely manner, we set up hives of varying sizes in the beginning to capture a range of populations. This allowed our dataset to span a number of frame sizes, from 1 to 23 for bee frames, and 0 to 11 for brood frames. Aside from these manipulations, all other observed effects, such as the progression of disease states, arise from natural causes free from human intervention.\n3.1 DATA COLLECTION\nSensor Data Given prior works that showed the possibility of characterizing honey bee colonies through audio, we developed battery-powered sensorbars that can be fitted to a single frame of a standard Langstroth bee box. Each sensor is designed for longitudinal data collection over the span of many weeks on a single charge. The sensor records sub-sampled data every 15 minutes, at all hours of the day. Each multi-modal data sample consists of a one-minute audio sample and point estimates of the temperature, humidity, and pressure, for both inside and outside the box (Fig. 1). For the purpose of the daily-snapshot model described in this work, we use data from all days with 96 samples collected. In sum, we have collected ∼1000 total days of data, across 26 hives, with up to 180 corresponding human inspection entries. These inspection entries captured information related to hive status, which for our purpose are frames of bees, frames of brood, disease status, and disease severity.\nInspections We used data from one inspector for all collected data used in this work in order to increase annotation consistency.
The inspector performed observations in each hive roughly once per week, during which they visually examined each individual frame in all boxes for that hive. The hives are placed 2 meters apart from one another such that cross contamination of audio is unlikely, given the sensor is isolated to within each stack of boxes. For frame labels, the inspector visually examines each frame to determine if that frame is at least 60% covered, given which it would be added to the total frame count. We prevent overcrowding on each frame by introducing empty frames whenever necessary, such that each frame is covered at most up to 90%, as is common practice. This allows us to obtain a lower bound on the error range of our inspections at around ±20%. During the same inspection, the inspector also checks for the presence of any diseases and their severity. Severity is scored as none, low, moderate, or severe, where low corresponds to a single observation of diseased bees, moderate for several observations of disease, and severe for prevalent signs of disease." }, { "heading": "4 GENERATIVE-PREDICTION NETWORK", "text": "Given the difficulty of collecting ground truths due to the nature of our data, we set out to develop a semi-supervised model and leverage our large number of audio samples. Additionally, behavior from bees leaving and returning to beehives means that data from one full-day circadian cycle must be used for predictions in order to model same-day variations. Therefore, we developed a model trained on hierarchical objectives to allow for modeling both low-level audio features on a minute-long basis, as well as any complex temporal dynamics within a given day. We do not consider longer time horizons for this work, as the focus is on modeling a snapshot of the hive's current state, not where it will be in the future. Given prior works characterizing beehive sound profiles in lab settings, we know that local audio features are critical, as audio strength along certain known frequencies correlates with different behaviors and types of bees, which could potentially allow for discerning population sizes and disease statuses." }, { "heading": "4.1 ARCHITECTURE", "text": "The model is composed of two components, hierarchical in both structure and purpose: an audio embedding module, and a temporal prediction module (Fig. 2). The embedding module learns a very low-dimensional representation of each one-minute-long audio sample, while sharing its parameters across all 96 samples across each day. Each encoder-decoder is a convolutional variational autoencoder. This embedding module outputs a concatenated audio latent space, which is 96 × d_z, representing all samples from the beehive across one day. The embedding module is trained via variational inference on maximizing the log likelihood of the reconstruction, which is a 56 × 56 downsampled mel spectrogram, same as the input. The embedding module is pre-trained to optimize each sample separately, and not to capture temporal dynamics explicitly. It is used to learn feature filters that are less dependent on the prediction loss downstream, which can bias the model due to the limited data that has assigned inspection entries.\nThe predictor is a shallow feed-forward network, designed to prevent overfitting and model simple daily temporal dynamics. It is trained only on data with matching inspection labels.
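Before turning to the predictor's inputs, below is a minimal sketch of the per-sample embedding module described above: a convolutional VAE over 56 × 56 mel spectrograms with a d_z-dimensional latent, trained on the negative ELBO. The layer sizes and the binary cross-entropy reconstruction term are assumptions; the text only specifies the input size, the latent dimensionality (d_z = 2 in the appendix), and spectrograms normalized to [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVAE(nn.Module):
    def __init__(self, d_z=2):
        super().__init__()
        self.enc = nn.Sequential(                       # 1x56x56 -> 32x14x14
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        self.mu = nn.Linear(32 * 14 * 14, d_z)
        self.logvar = nn.Linear(32 * 14 * 14, d_z)
        self.dec = nn.Sequential(                       # d_z -> 1x56x56
            nn.Linear(d_z, 32 * 14 * 14), nn.ReLU(),
            nn.Unflatten(1, (32, 14, 14)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):                               # x: (batch, 1, 56, 56)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```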
The predictor takes in all concatenated latent variables from the 96 audio samples of each day, along with the corresponding 96 samples of environmental data, which include temperature, humidity, and pressure for inside and outside the box. The sensor input is less important for predicting the momentary population and disease status than for normalizing the observed audio, given that activity is known to vary with respect to temperature and humidity. The predictor is then connected to multiple heads, jointly trained on multi-task objectives. The parameter sharing used in the predictor is also designed to reduce overfitting and to capture similar representations, as the predicted tasks are likely similar in nature (disease and population).\nObjectives The embedding module is trained jointly on audio sample reconstruction via the evidence lower bound (Eq. 1) as well as a global prediction loss across a given day, backpropagated through the latent variables. The generator is pre-trained for ∼8000 iterations in order to learn stable latent representations before prediction gradients are propagated. The predictor is trained via multi-task prediction losses. This training then continues until all losses have converged and stabilized. The multi-task objective is composed of the Huber loss (Eq. 2) for the frames and disease severity regressions and categorical cross-entropy for disease classification.\nlog p(x) ≥ L(x) = E_{z∼q(z|x)}[log p(x|z)] − D_KL[q(z|x) || p(z)] (1)\nL(y, f(x)) = (1/2)[y − f(x)]^2 if |y − f(x)| ≤ δ, and δ(|y − f(x)| − δ/2) otherwise. (2)" }, { "heading": "4.2 EVALUATION", "text": "Due to the nature of our annotation collection, deviations can be expected in our ground truths for both diseases and frames. In particular, inspections often occur on days with incomplete data samples. In order to reduce uncertainty around our inspection entries, we curated our training and validation sets by excluding inspections that were recorded more than two days away from the nearest day with a complete set of sensor data, due to hardware or battery issues causing some days to record incomplete data. This led us to an inspection-paired dataset of 38 samples across 26 hives, spanning 79 days. Recognizing the limited sample size, we carry out 10-fold validation with all models evaluated in our model comparisons. In addition, to reduce the possibility of cross contamination between the training and test set due to sensor similarities, we do not train and validate on the same sensor / hive, even if the datapoints are months apart. This is done despite our sensors not being cross-calibrated, as we wanted to see whether our predictions are able to fully generalize across completely different hives without the need for fine-tuned sensors, which can be costly to implement.\nTo account for the variance around the ground truths of frames, we compute cumulative density functions for the percentage differences between all frame predictions and inspections, in order to examine the fraction of predictions that fall within the lower bound of our ground truth error range, which is ∼ ±20% of the assigned label (Fig. 4). We compute validation scores for each partitioned validation set, averaged across all 10 groups and for each training iteration, in order to gather a more complete understanding of how each model's training landscape evolves, and also to assess how easily each model overfits (Fig. 9).
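A sketch of the hive-disjoint 10-fold scheme described above, using scikit-learn's GroupKFold so that no hive (and hence no microphone) appears on both the training and validation side of a fold; the sample and hive-id arrays below are hypothetical stand-ins:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

samples = np.arange(38)          # inspection-paired day samples
hive_ids = np.arange(38) % 26    # hive/sensor identifier per sample

for fold, (tr, va) in enumerate(
        GroupKFold(n_splits=10).split(samples, groups=hive_ids)):
    # No hive contributes data to both sides of a fold.
    assert set(hive_ids[tr]).isdisjoint(hive_ids[va])
    # ... train on samples[tr], validate on samples[va], average over folds ...
```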
For all evaluation results, we show mean losses / scores across 10 separate runs, each composed of training a model 10 times on a 10-fold validation scheme." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 MODEL COMPARISONS", "text": "We compared several versions of the model with ablations in order to understand the effect of each module or objective on prediction performance (Table 1). First, we compared GPN without the sound-embedding module, and trained it on environment features and hand-crafted audio features, which is the norm in prior literature. These handcrafted audio features include fast Fourier transformed (fft) audio and mean audio amplitude, where the fft is sampled across 15 bins between 0 and 2667 Hz, which is just above what prior literature has shown bee-related frequencies to be under (Howard et al., 2013; Qandour et al., 2014). We also trained a version of GPN purely on audio without any environment data, which we expect is needed to remove the confounding effects of the external environment on bee behavior.\nMLP trained on sensor data and fft features. We found that the fully supervised MLP trained on handcrafted features worked well, given a carefully selected bin size, performing slightly worse than GPN trained on spectrograms (Fig. 3). We think this is evidence that the GPN is learning filters across the frequency domain rather than the temporal domain, as frequencies likely matter more than temporal structure within each minute-long sample, given the sound lacks any fast temporal variations.\nGPN trained on all vs only labeled data. As we collect 96 audio samples per day across many months, many of these datapoints are not associated with matching annotations for task predictions. However, given that our model can be trained purely on a generative objective, we compared models trained on all of our data from days with a complete set of 96 samples, vs only on the data with assigned inspection entries. By including unlabeled data, we are able to increase our valid dataset from 38 to 856. We found that this model performed better for all tasks, likely attributable to learning more generalizable kernels in the encoders (Fig. 3)." }, { "heading": "5.2 EFFECTS OF ENVIRONMENT MODALITIES ON PERFORMANCE", "text": "As this is the first work we have seen using combined non-visual multi-modal data for predicting beehive populations, we wanted to understand which modalities are most important for prediction performance. Thus, we tested the effects of sensor modality exclusion on performance. We trained multiple GPNs, removing each environmental modality from the sensor data used for predictor training, and observed the increases in validation loss or decreases in accuracy (Table 2). We found that, when compared to a GPN trained on all sensor modalities in Table 1, removing temperature had the greatest effect on performance, while humidity and pressure had less effect. This is possibly because thermoregulation is an important behavioral trait that is highly correlated with a healthy hive. Since air pressure is unlikely to capture this property, humidity and temperature are likely the most important, with temperature being sufficient. We do believe that these modalities are important for prediction if hives are placed in radically different geographic locations with varying altitudes and seasonal effects, which is beyond the scope of our dataset."
}, { "heading": "5.3 LEARNED EMBEDDING AND AUDIO PROFILES", "text": "Due to the nature of the purely observational data resulting in limited samples of diseased states, we examined the generative model’s embeddings and outputs from the decoder to evaluate the generative model’s ability to model diseased states. In particular, we wanted to understand if it has learned to model variations in audio spectrograms, and if a purely unsupervised model trained on\nspectrograms alone or with prediction signal would show inherent differences between diseased and healthy states.\nWe sampled audio spectrograms from the test set and examined how well our embedding model has learned to reconstruct the spectrograms, and in general, we found that the embedding model has learned to model the frequency bands we see in our data (Fig. 5) as well as the magnitude of each band. We also sampled from our latent variables and examined the differences in learned sound profiles extracted from the decoder, and found that the two dimensions correspond to differences across the frequency spectrum (Fig. 6 a).\nWe also projected audio data from the dataset into the embedding space across each day, representing 96 audio samples × 2 latent dimensions. For visualization, we then compute PCAs of each full-day latent representation. The first two PCA components captured significant portion of the variance of the embedding space, with PC-1 and PC-2 representing respective 74.17% and 10.61% of the variance. We color-mapped this PCA space with disease severity: healthy, low disease, medium disease, and high disease, with the goal of observing how these diseased states are organized in the latent space. We found that within each disease class there was relatively consistent organization. The audio samples that are classified as healthy occupied a more distinct region within this space, and the samples classified as diseased separated out primarily along PC-1. The low disease severity class was the most discriminated within the space, followed by the medium disease severity, then high disease severity. We were interested in what features of the frequency spectra were learned within the embedding model, thus extracted spectrograms across 24 hour window for each projected point and computed the mean across each disease severity class. We compared these frequency spectra to the corresponding latent variables. Within the healthy audio samples, we observed changes in magnitude and broadening of spectral peaks across the day, which could be associated\nwith honey bee circadian rhythms, and similar changes are observed in the corresponding latent variables (Eban-Rothschild & Bloch, 2008; Ramsey et al., 2018). The low disease severity class, which shows the most separation in PCA space, has the most distinct spectrogram signatures. In particular, within the low-disease severity we see the peak at 645 Hz broadening and increasing in magnitude that was consistent across 24 hours. As the disease class progressed from low to medium and high severity, we observed reduction in magnitude and any circadian changes become less apparent. These characteristics appear to be consistent with studies comparing healthy and disease auditory and behavioral modifications within bee hives, and the embedding model is capturing some of the differences in acoustic profiles between the disease severity classes (Kazlauskas et al., 2016; Robles-Guerrero et al., 2017)." 
}, { "heading": "6 CONCLUSION", "text": "We have shown for the first time the potential of using custom deep learning audio models for predicting strengths of beehives based on multi-modal data composed of audio spectrograms and environment sensing data, despite collecting data across 26 sensors in 26 hives without cross-sensor calibration and ground truth uncertainties for downstream prediction tasks. We believe that properly calibrated microphones with improved signal, better sensor placement, and better managed inspection labels would further improve the predictions, reducing the need for frequent human inspections, a major limitation of current beekeeping practices. We have also shown the usefulness of semisupervised learning as a framework for modeling beehive audio, of which is easy to collect large amounts of samples but difficult to assign inspections, making annotation collection only possible during data collection. The learned latent space allows us to characterize different sound profiles and how they evolve over time, as shown with the progression in disease severity of pollinators in this work. Future efforts may explore how similar frameworks can be used to study the dynamics of beehives across longer time horizons combined with geographic data, and explore its use in forecasting, which can be used to improve labor and time management of commercial beekeeping." }, { "heading": "A APPENDIX", "text": "A.1 DATA VALIDATION\nWe examine the quality of collected sensor data in several ways. For environmental sensor data, which composes of temperature, humidity, and pressure, we developed a dashboard to examine all datapoints to ensure smoothness across time and check all values to ensure they are within plausible range. For audio data, we looked for features that could characterize mean activity across day. Bees have rigid circadian rhythms and are active during the day. By looking at a simple metric such as audio amplitude, we were able to see that the peak amplitude occurred on average around midnight, when bees are mostly in the hive. This is congruent with established literature, which also suggests that bees often vibrate their wings to modulate the hive temperature, which we have verified through the inside temperature sensed for hives with bees compared to without bees (Weiss et al., 2017; Southwick, 1983; Stabentheiner et al., 2010). Boxes with bees have consistent inside temperatures due to thermoregulation by healthy colonies, whereas an empty box we tested captures the same internal and external temperatures.\nWe also computed Pearson correlation coefficients between pairwise features to verify the plausibility of the linear relationships between our data features (Fig. 8), and trained linear models to predict each feature based on other features. We additionally examined conditional probabilities in order to verify each relationships in greater detail. In summary, we find that there is some sensitivity differences across microphone, other sensor modalities were most likely self and cross-consistent. We also inspect audio spectrograms and remove data samples with failed audio captures.\nA.2 DATA PREPROCESSING\nEach 56-second audio sample is converted into a wav file and preprocessed with the python package Librosa to obtain a full sized mel spectrogram which is 128×1680 with a maximum frequency set at 8192 Hz, half of the sampling rate of the data at 16,384. 
This spectrogram is then downsampled through mean pooling to 61 by 56, with 61 representing the frequency dimension, and 56 representing second-long time bins. Given the spectrogram captures frequencies well beyond the bee sounds established in prior literature (Ramsey et al., 2020; Howard et al., 2013; Ferrari et al., 2008), we crop this spectrogram at 56 dimensions, yielding a 56 by 56 downsampled spectrogram, which is fed into the embedding module after normalizing to between 0 and 1. We did not use some common transformations such as Mel-frequency cepstral coefficients (MFCCs), as they enforce speech-dominant priors that do not apply to our data and would likely result in bias or data loss. We also normalize each environmental sensor feature separately, as well as the prediction ground truths, to between 0 and 1 for more stable training.\nA.3 DATASET PREPARATION\nIt can be difficult to assign data to each inspection, given that inspections often coincide with gaps in data collection. We worked with our inspector to understand the nature of each inspection entry for each hive. For each inspection entry, we match that entry to the closest date we have a full 96 samples for. We realized after doing this that a number of inspections could not be matched to any sensor data, and unfortunately had to discard those labels from the dataset. In order to be stringent about our label assignment, we only pair labels to data if the difference in time is under 2 days. This led to a dataset of 40 days with labels.\nA.4 HYPERPARAMETERS\nTraining Adam was used as the optimizer for all objectives. We found that training with 4 objectives was relatively stable, given sufficient pretraining. We used a learning rate of 3e-5 for all objectives except for disease classification, which we found required a slightly larger learning rate of 1e-4 to converge. The multiple objectives seem to have regularized the model from overfitting, as evident in the validation curves, with the exception of diseases, likely because there is significant class imbalance and an insufficient number of diseased examples, due to the nature of the dataset. The number of pretraining iterations was determined empirically based on the decrease in validation loss. This pretraining is useful in order to prevent the network from overfitting to the prediction loss from the very beginning and failing to learn to model the audio spectrogram.\nFigure 9: Tracking validation metrics for GPN across training iterations, averaged across 10-folds. Generative model pretraining occupies the bulk of the training iterations, after which the validation loss quickly plateaus. (Figure panels: reconstruction negative log likelihood, frames Huber loss, disease type accuracy, and disease severity Huber loss, each plotted against training iteration.)\nArchitecture We found that the predictor could overfit to the training set very easily, and thus decided on using a single layer which attaches to multiple heads to allow for parameter sharing.
We swept over different numbers of latent variables for the generative module, and found that two latent variables worked best when considering prediction performance as well as reconstruction quality and interpretability of the latent space.\nA.5 AUDIO FEATURES CAPTURED BY THE EMBEDDING MODEL\nMagnitude After projecting audio data from the dataset into the embedding space across each day, and computing PCAs of each full-day latent representation, we found the first two PCA components captured a significant portion of the variance of the embedding space, with PC-1 and PC-2 representing 74.17% and 10.61% of the variance, respectively. To understand what specific audio features were captured by the embedding model, we swept in ascending order across PC-1 and created a 24-hour average frequency spectrum for each sample. We observed the broadening of a peak around 675 Hz and an increase in magnitude across all spectra. We plotted integrated magnitudes for frequency spectra from 0 to 8192 Hz against order on PC-1, and found a high correlation between the magnitude within the defined region and position on PC-1, demonstrating that the embedding model captured variation in magnitude across this frequency range." } ]
2020
null
SP:47cbc46d73c5ad9d50744a7ff9fd6797eff273c4
[ "The paper considers the sequential recommendation problem. The proposed method essentially combines the following two ideas: (i) two-stage learning: using conventional CF to pretrain user/item embeddings, and feed them (fixed, unlearned) into the 2nd stage learning. (ii) two-time-scale: using 2 RNNs to model active users and inactive users respectively." ]
We propose a surprisingly simple but effective two-time-scale (2TS) model for learning user representations for recommendation. In our approach, we partition users into two sets, active users with many observed interactions and inactive or new users with few observed interactions, and we use two RNNs to model them separately. Furthermore, we design a two-stage training method for our model, where, in the first stage, we learn transductive embeddings for users and items, and then, in the second stage, we learn the two RNNs leveraging the transductive embeddings trained in the first stage. Through the lens of online learning and stochastic optimization, we provide a theoretical analysis that motivates the design of our 2TS model. The 2TS model achieves a nice bias-variance trade-off while being computationally efficient. On large-scale datasets, our 2TS model is able to achieve significantly better recommendations than the previous state-of-the-art, while being much more computationally efficient.
[]
[ { "authors": [ "Rianne van den Berg", "Thomas N Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion", "venue": "arXiv preprint arXiv:1706.02263,", "year": 2017 }, { "authors": [ "Avishek Joey Bose", "Ankit Jain", "Piero Molino", "William L Hamilton" ], "title": "Meta-graph: Few shot link prediction via meta learning", "venue": null, "year": 1912 }, { "authors": [ "O. Celma" ], "title": "Music Recommendation and Discovery in the Long Tail", "venue": null, "year": 2010 }, { "authors": [ "Xinshi Chen", "Shuang Li", "Hui Li", "Shaohua Jiang", "Yuan Qi", "Le Song" ], "title": "Generative adversarial user model for reinforcement learning based recommendation system", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hanjun Dai", "Yichen Wang", "Rakshit Trivedi", "Le Song" ], "title": "Deep coevolutionary network: Embedding user and item features for recommendation", "venue": "arXiv preprint arXiv:1609.03675,", "year": 2016 }, { "authors": [ "Robin Devooght", "Hugues Bersini" ], "title": "Long and short-term recommendations with recurrent neural networks", "venue": "In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization,", "year": 2017 }, { "authors": [ "Mehrdad Farajtabar", "Yichen Wang", "Manuel Gomez-Rodriguez", "Shuang Li", "Hongyuan Zha", "Le Song" ], "title": "Coevolve: A joint point process model for information diffusion and network evolution", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Palash Goyal", "Sujit Rokka Chhetri", "Arquimedes Canedo" ], "title": "dyngraph2vec: Capturing network dynamics using dynamic graph representation learning", "venue": "Knowledge-Based Systems,", "year": 2020 }, { "authors": [ "F Maxwell Harper", "Joseph A Konstan" ], "title": "The movielens datasets: History and context", "venue": "Acm transactions on interactive intelligent systems (tiis),", "year": 2015 }, { "authors": [ "Balázs Hidasi", "Alexandros Karatzoglou", "Linas Baltrunas", "Domonkos Tikk" ], "title": "Session-based recommendations with recurrent neural networks", "venue": "arXiv preprint arXiv:1511.06939,", "year": 2015 }, { "authors": [ "Dietmar Jannach", "Malte Ludewig" ], "title": "When recurrent neural networks meet the neighborhood for session-based recommendation", "venue": "In Proceedings of the Eleventh ACM Conference on Recommender Systems,", "year": 2017 }, { "authors": [ "Wang-Cheng Kang", "Julian McAuley" ], "title": "Self-attentive sequential recommendation", "venue": "IEEE International Conference on Data Mining (ICDM), pp. 
197–206", "year": 2018 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "Srijan Kumar", "Xikun Zhang", "Jure Leskovec" ], "title": "Predicting dynamic embedding trajectory in temporal interaction networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Hoyeop Lee", "Jinbae Im", "Seongwon Jang", "Hyunsouk Cho", "Sehee Chung" ], "title": "Melu: Meta-learned user preference estimator for cold-start recommendation", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Federico Monti", "Michael Bronstein", "Xavier Bresson" ], "title": "Geometric matrix completion with recurrent multi-graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Arkadi Nemirovski", "Anatoli Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on optimization,", "year": 2009 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Qi Pi", "Weijie Bian", "Guorui Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Practice on long sequential user behavior modeling for click-through rate prediction", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Aravind Sankar", "Yanhong Wu", "Liang Gou", "Wei Zhang", "Hao Yang" ], "title": "Dysat: Deep neural representation learning on dynamic graphs via self-attention networks", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020 }, { "authors": [ "Fei Sun", "Jun Liu", "Jian Wu", "Changhua Pei", "Xiao Lin", "Wenwu Ou", "Peng Jiang" ], "title": "Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Jiaxi Tang", "Francois Belletti", "Sagar Jain", "Minmin Chen", "Alex Beutel", "Can Xu", "Ed H. 
Chi" ], "title": "Towards neural mixture recommender for long range dependent user sequences", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Manasi Vartak", "Arvind Thiagarajan", "Conrado Miranda", "Jeshua Bratman", "Hugo Larochelle" ], "title": "A meta-learning perspective on cold-start recommendations for items", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Qitian Wu", "Hengrui Zhang", "Hongyuan Zha" ], "title": "Inductive relational matrix completion", "venue": "arXiv preprint arXiv:2007.04833,", "year": 2020 }, { "authors": [ "Shuang-Hong Yang", "Bo Long", "Alexander J Smola", "Hongyuan Zha", "Zhaohui Zheng" ], "title": "Collaborative competitive filtering: learning recommender using context of user choice", "venue": "In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval,", "year": 2011 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Yixin Chen" ], "title": "Inductive matrix completion based on graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Guorui Zhou", "Xiaoqiang Zhu", "Chenru Song", "Ying Fan", "Han Zhu", "Xiao Ma", "Yanghui Yan", "Junqi Jin", "Han Li", "Kun Gai" ], "title": "Deep interest network for click-through rate prediction", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Guorui Zhou", "Na Mou", "Ying Fan", "Qi Pi", "Weijie Bian", "Chang Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Deep interest evolution network for click-through rate prediction", "venue": "In Proceedings of the AAAI conference on artificial intelligence,", "year": 2019 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nA hypothetical user’s interaction with recommendation systems gives us diminishing returns in terms of its information value in understanding the user. For an active user who has lots of historical interactions, she is typically well understood by the recommender, and each new interaction gives relatively little new information. In contrast, for an inactive or new user, every additional interaction will provide interesting information for understanding this user. Therefore, the representations for active and inactive users should be updated differently when a new interaction occurs. Figure 1 illustrates such information diminishing phenomenon, where the amount of change in user embedding from φt to φt+1 due to an additional interaction is decaying. One can select a particular threshold t∗ for the number of interactions, above which the users can be categorized to active users, and below which inactive users. Roughly active users’ embeddings evolve slowly as a function of the number of interactions, while inactive users’ embeddings evolve fast. Hence a two-time-scale embedding evolution.\nApart from the time-scale difference in temporal dynamics, the simultaneous presence of active and inactive users also presents other modeling and computational challenges. On the one hand, active\nusers lead to long sequences of interactions and high degree nodes in the user-item interaction graph. Existing sequence models, such as RNN models, have some limitations when dealing with long-range sequences, due to the difficulty in gradient propagation. Moreover, graph neural network-based models become computationally inefficient due to the intensive message passing operations through high-degree nodes introduced by active users. On the other hand, predicting preferences of inactive or\nnew users (also known as the cold-start problem) is a challenging few-shot learning problem, where a decision needs to be made given only a few number of observations. To address various challenges imposed by the presence of two types of users, we leverage the different dynamics of these users and propose (i) a two-time-scale (2TS) model and (ii) a two-stage training algorithm.\n2TS model. Based on the number of observed interactions, we partition the users into two sets: active and inactive users. Our 2TS model (Fig. 1) update the embeddings of active users and inactive users by two RNNs with independent parameters, in order to respect the two-time-scale nature. Moreover, the initial embeddings of inactive users are represented by a common embedding ψ, which is shared across all inactive users. Therefore, the overall model for inactive users is inductive, in the sense that the learned model can be applied to unseen users. In contrast, the initial embedding of each active user is a user-specific embedding φu, which is also called transductive embedding. Such embeddings are very expressive, which can better express users with a long history.\nTwo-stage training. In stage 1, we first learn transductive user embeddings φu and transductive item embeddings xi using a classical collaborative filtering method. Then we fix these embeddings, and in stage 2, we will learn the parameters of the two RNNs and a common initialization ψ for inactive users. It is notable that the transductive embeddings for inactive users are abandoned in stage 2. Only those for active users are finally used in the 2TS model. 
Besides, for active users, we do not use all interaction data to learn the RNN since their transductive embeddings have already encoded the information of their history. We only use a small number of last clicked items to learn the adaptation for active users, which improves the efficiency of the training process.\nThe proposed 2TS model and the two-stage training algorithm lead to a few advantages:\n• Bias-variance trade-off. The differential use of transductive and inductive embeddings for the two RNN models allows 2TS to achieve a good overall bias-variance trade-off. We theoretically analyze such trade-off in Section 2 through the lens of learning-to-learn paradigm for designing online learning (or adaptation) algorithms. Our theory shows that there exists an optimal threshold to split users to achieve the best overall excessive risk. • Encode long-range sequence. The transductive embeddings φu for active users are user-specific vectors, so they can memorize the user’s long-range history during the training, without suffering from the difficulty of gradient propagation. The RNN on top of these transductive embeddings is only used for adaptation to recently engaged new items. • Computational efficiency. The efficiency of our method on large-scale problems mainly comes from two designs in the algorithm. First, stage 1 learns the transductive embeddings of active users and items, which contain a large number of parameters. However, it is fast since it does not involve any deep neural components and the loss is simply a convex function. Second, stage 2 only learns the RNNs which contain a small number of parameters, and the RNN for active users is only trained on a few last engaged items, which cuts off the long sequences. Experimentally, our method reveals to be much more efficient than the baselines on large-scale datasets.\nWe summarize the contributions of this paper as follows:\n• To explain the intuition and motivation of the 2TS model, we provide theoretical analysis on a simplified setting, which rigorously argues the need for differential use of transductive and inductive embeddings for active and inactive users (Section 2). • Motivated by the analysis, we design the 2TS model and a two-stage training method, for practical use (Section 3). The proposed method is applied to two large-scale benchmark datasets and compared comprehensively to various baseline models, spanning a diverse set of categories, which shows that our method is advantageous in terms of both accuracy and efficiency (Section 5)." }, { "heading": "2 THEORETICAL MOTIVATION: WHY TWO-TIME-SCALE MODELS?", "text": "We will first present the motivation for designing the 2TS model, through the lens of online learning and stochastic optimization. Our analysis quantitatively reveals that (i) the embeddings for active and inactive users evolve in different time scales, and (ii) two different online learning algorithms for active and inactive users respectively can lead to a better overall estimation of user embeddings.\nOur analysis will be carried out in a learning-to-learn setting, where online learning algorithms need to be designed to tackle a family of tasks for estimating the embedding vector of a user. Though this idealized setting can not cover all aspects of real-world recommendation problems, it leads to\nclear insights on the 2TS behavior of active and inactive users and the benefits of using two different online algorithms for these two respective user groups. 
These insights also motivate our practical implementation of the 2TS model using deep learning in Section 3." }, { "heading": "2.1 SETTING: LEARNING-TO-LEARN", "text": "Our setting consists of three components: the estimation task for an individual user, the distribution of tasks for a family of users, and the online algorithms which we want to design.

Individual Task. We associate each user $\mu$ with a ground truth embedding $\phi^*_\mu \in \mathbb{R}^d$, which can be thought of as a vector representation of the user's preference. This embedding is defined by a distribution $\mu$ over the user's clicks over items $(x, y)$, where $x \in \mathcal{X}$ represents the item embedding, which we assume is bounded, i.e., $\|x\|_2 \le B_x$, and $y \in \{0, 1\}$ indicates whether the item is clicked. They follow the user-specific distribution $(x, y) \sim \mu$. More specifically, the ground truth user embedding is defined as the minimizer of the expected risk according to a regularized logistic loss
$$\phi^*_\mu := \arg\min_{\phi \in \Phi} R_\mu(\phi), \quad \text{where } R_\mu(\phi) := \mathbb{E}_{(x,y)\sim\mu}\,\ell(\phi, x, y),$$
$$\ell(\phi, x, y) := -y\,x^\top\phi + \log\left(1 + \exp(x^\top\phi)\right) + \tfrac{c}{2}\|\phi\|_2^2, \quad (1)$$
where $c > 0$ is some regularization constant and $\Phi := \{\phi \in \mathbb{R}^d : \|\phi\|_2 \le B_\phi\}$. Typically, we do not have access to the distribution $\mu$, but only a sampled set of $T$ observations $z_{[T]} := \{(x_1, y_1), \cdots, (x_T, y_T)\} \sim \mu^T$, which can be used as training samples to obtain an initial estimate $\phi(z_{[T]})$ of the user embedding. We assume $\phi(z_{[T]})$ is obtained by applying stochastic gradient descent (SGD) to the loss $\ell$ over $z_{[T]}$ in this section, but this training stage is not limited to the SGD algorithm. It is expected that with more samples the estimate will be closer to the ground truth $\phi^*_\mu$. The estimate $\phi(z_{[T]})$ models the offline training stage of a recommendation system.

A Distribution of Tasks. We consider a distribution of users, by assuming that the user-specific distribution $\mu$ is sampled from a meta-distribution $\mu \sim p_u$. Furthermore, the number of observed interactions for a user, denoted by $T \sim p_T^\alpha$, is a random variable that follows a power law distribution with density $p(T) \propto (T+1)^{-\alpha}$. The power law distribution models the fact that there will be lots of users with very few interactions, and very few users with many interactions.¹ A key assumption of our model is that the variance of the ground truth user embeddings is small. That is,
$$\mathrm{Var}_m = \mathbb{E}_{\mu\sim p_u}\|\phi^*_\mu - m\|_2^2 \le r, \quad \text{where } m = \mathbb{E}_{\mu\sim p_u}\,\phi^*_\mu. \quad (2)$$
The assumption of small variance is critical in the sense that it allows us to aggregate information from inactive users to obtain better estimates of their user embeddings.

Online Algorithm Design Problem. Our goal is to design online adaptation algorithms for this distribution of users, such that the overall excessive risk of the online adaptation across all users is small. This models the online sequential recommendation stage where new user-item click information is incorporated after system deployment. Note that we are not restricted to designing a single online learning algorithm for all users. In fact, we will show later that 2 online learning algorithms can actually lead to a better risk bound than 1 online learning algorithm.

In this algorithm design problem, each user $\mu$ corresponds to an online learning task, where items arrive sequentially. Starting from an initial embedding $\phi^1_\mu$, an online algorithm updates the user embedding whenever it observes a new user-item interaction $(x_t, y_t) \sim \mu$:
$$\phi^{t+1}_\mu \leftarrow \mathrm{Update}(\phi^t_\mu, x_t, y_t), \quad (3)$$
and then it applies $\phi^{t+1}_\mu$ to the next item $x_{t+1}$, and suffers the loss $\ell(\phi^{t+1}_\mu, x_{t+1}, y_{t+1})$ in Eq. 1.
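To make the offline estimation stage concrete, the following minimal NumPy sketch implements the regularized logistic loss of Eq. 1 and a projected-SGD estimate of $\phi(z_{[T]})$, using the initialization at 0 and step size $1/c$ assumed in Appendix A. The function names, default constants, and data layout are our own illustrative choices, not the authors' code.

```python
import numpy as np

def loss_grad(phi, x, y, c):
    """Gradient of the regularized logistic loss in Eq. 1:
    l(phi, x, y) = -y x^T phi + log(1 + exp(x^T phi)) + (c/2)||phi||_2^2."""
    sigma = 1.0 / (1.0 + np.exp(-x @ phi))   # predicted click probability
    return (sigma - y) * x + c * phi

def project(phi, B_phi):
    """Project onto the ball Phi = {phi : ||phi||_2 <= B_phi}."""
    norm = np.linalg.norm(phi)
    return phi if norm <= B_phi else phi * (B_phi / norm)

def offline_estimate(z_T, c=0.1, B_phi=10.0, d=64):
    """Estimate phi(z_[T]) by projected SGD over the T offline samples,
    starting from 0 with step size 1/c (the choice assumed in Appendix A)."""
    phi = np.zeros(d)
    for x, y in z_T:
        phi = project(phi - (1.0 / c) * loss_grad(phi, x, y, c), B_phi)
    return phi
```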
The excessive risk of this online algorithm after encountering $N$ sequential samples is
$$\frac{1}{N}\sum_{t=1}^{N} \ell(\phi^t_\mu, x_t, y_t) - \ell(\phi^*_\mu, x_t, y_t).$$

The design problem involves (1) the initialization of the algorithm and (2) the update step of the algorithm. One obvious choice is to use the user-specific embedding $\phi(z_{[T]})$ estimated in the offline training phase as the initial embedding $\phi^1_\mu$ for the online algorithm. However, is this estimate the best choice for the initial embedding $\phi^1_\mu = \phi(z_{[T]})$? Is there a better choice? How should the initialization depend on $T$? We answer these questions in the next subsection.

¹This assumption is sensible since Fig. 5 shows that $T$ approximately follows a power law in real-world datasets." }, { "heading": "2.2 ANALYSIS AND ITS IMPLICATIONS", "text": "We focus on designing online gradient descent (OGD) algorithms for Eq. 3, where the update step is $\phi^{t+1}_\mu \leftarrow \mathrm{Proj}_\Phi\left[\phi^t_\mu - \gamma \cdot \partial\ell(\phi^t_\mu, x_t, y_t)\right]$. In our theoretical analysis, we focus on designing only the initial embedding for the algorithms and the learning rate $\gamma$ to gain insights, and defer the design of further components of the algorithms to the later practical neural implementation.

A key aspect of our analysis is that we consider a threshold $t^* \in \mathbb{N}$ that divides the users into two groups: active users with offline training sample size $T \ge t^*$ before online adaptation, and inactive users with $T < t^*$. Under such a split of the users into two groups, we design 2 online learning algorithms, one for the active user group and the other for the inactive user group. Furthermore,

• Different initialization. The initial embeddings $\phi^1_\mu$ for the two online learning algorithms are designed as follows: active users use their learned embeddings $\phi(z_{[T]})$ as the initialization, while inactive users all use a common embedding as the initialization. Mathematically,
$$\text{Active user: } \phi^1_\mu = \phi(z_{[T]}) \text{ if } T \ge t^*, \qquad \text{Inactive user: } \phi^1_\mu = m \text{ if } T < t^*, \quad (4)$$
where $m$ represents the common initialization. We will later show in Theorem 2.1 that the optimal choice of the common initialization is the mean $m = \mathbb{E}_\mu\,\phi^*_\mu$, and we can also analytically find the optimal threshold $t^*$. The intuition is that, for an inactive user, if we start from the estimate $\phi(z_{[T]})$ and adapt online to the new items, the algorithm will not perform well due to the large discrepancy between $\phi(z_{[T]})$ and the ground truth $\phi^*_\mu$. However, since $\phi^*_\mu$ is distributed around the mean $m$, we can aggregate the offline observations from inactive users to estimate $m$. It may then be better to use the estimated $m$ as the shared initial user embedding for the inactive users. In contrast, for active users, $\phi(z_{[T]})$ can be a better initialization than $m$ since it is already very close to $\phi^*_\mu$.
• Different learning rate. The learning rates $\gamma$ are designed to be different for the two online learning algorithms. As shown later in Theorem 2.1, the optimal learning rate for active users is smaller than that for inactive users, and this leads to a better overall excessive risk. The intuition is that, for an inactive user, each additional observation provides a lot of information about this user, and we want to employ a larger learning rate to fully ingest that information. In contrast, for an active user, each additional observation provides a diminishing amount of information, and we can use a smaller learning rate.

In the theorem below, we show precisely that different initialization and learning rates for the active and inactive users can lead to a smaller expected excessive risk; a minimal sketch of the two-group scheme follows before the formal statement.
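Reusing `loss_grad` and `project` from the sketch above, the following snippet summarizes the two-group adaptation just described: Eq. 4 initialization plus projected OGD steps with a per-group step size, together with the threshold rule that the theorem below derives. The interface and names are our own illustrative choices.

```python
import numpy as np

def optimal_threshold(Q_bar, var_m):
    """Optimal split point t** = floor(Q_bar / Var_m) - 1 (Eq. 7 below)."""
    return int(np.floor(Q_bar / var_m)) - 1

def online_adapt(T, phi_offline, m, stream, c, B_phi, t_star,
                 gamma_act, gamma_in):
    """Two-group online adaptation: Eq. 4 initialization plus projected OGD.
    Theory suggests gamma_act < gamma_in, i.e., active users evolve slower."""
    active = T >= t_star
    phi = phi_offline.copy() if active else m.copy()   # Eq. 4
    gamma = gamma_act if active else gamma_in
    for x, y in stream:                                # one OGD step per click
        phi = project(phi - gamma * loss_grad(phi, x, y, c), B_phi)
    return phi
```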
Furthermore, there is a theoretically optimal choice of the threshold $t^*$ to achieve this. A more detailed statement of the theorem and its proof are provided in Appendix A.

Theorem 2.1. Assume the setting and notation in Section 2.1. Assume $\phi^t_\mu$ is updated by OGD with step size $\gamma$. With the initialization strategy in Eq. 4, the expected excessive risk is bounded by
$$\mathbb{E}\,\frac{1}{N}\sum_{t=1}^{N} \ell(\phi^t_\mu, x_t, y_t) - \ell(\phi^*_\mu, x_t, y_t) \le (2\gamma N)^{-1}\left(\beta(t^*) + \gamma^2 B^2 N\right), \quad (5)$$
where $\beta(t^*) = \frac{\zeta(\alpha+1,\,t^*+1)}{\zeta(\alpha,1)}\bar{Q} + \left(1 - \frac{\zeta(\alpha,\,t^*+1)}{\zeta(\alpha,1)}\right)\mathrm{Var}_m$, $B = B_x + cB_\phi$, $\zeta$ is the Hurwitz zeta function, and $\bar{Q}$ is a constant larger than $\mathrm{Var}_m$. Optimizing the step size $\gamma = B^{-1}\sqrt{\beta(t^*)/N}$, we have
$$\mathbb{E}\,\frac{1}{N}\sum_{t=1}^{N} \ell(\phi^t_\mu, x_t, y_t) - \ell(\phi^*_\mu, x_t, y_t) \le B\sqrt{\beta(t^*)/N}. \quad (6)$$
Moreover, the optimal choice of the threshold $t^*$ is
$$t^{**} = \arg\min_{t^* \ge 0}\,\beta(t^*) = \left\lfloor \bar{Q}/\mathrm{Var}_m \right\rfloor - 1. \quad (7)$$

Implication 1: A differential design for the initial embeddings is better. If all users are initialized by their user-specific embeddings $\phi(z_{[T]})$, this corresponds to the case $t^* = 0$. However, it is a less optimal choice than $t^* = t^{**}$, and the gap between them can be quantitatively characterized by $\beta(0) - \beta(t^{**}) = \sum_{t=1}^{t^{**}}\left(\frac{\bar{Q}}{t} - \mathrm{Var}_m\right)t^{-\alpha}$. Since the excessive risk is proportional to $\sqrt{\beta(t^*)}$, the gap shows the advantage of the differential initialization for active and inactive users in Eq. 4. Furthermore, if $\mathrm{Var}_m$ is smaller, the gap and the advantage of differential initialization are larger.

Implication 2: The optimal step sizes for active and inactive users are different. In the above theorem, the optimal step size $\gamma$ for the online algorithm is optimized over all users. However, if we consider using two different step sizes for the group of active users and the group of inactive users, the optimal step sizes for these two groups will be (see derivations in Appendix A.2)
$$\gamma_{\mathrm{act}} = \frac{1}{B}\sqrt{\frac{\zeta(\alpha+1,\,t^*+1)\,\bar{Q}}{\zeta(\alpha,\,t^*+1)\,N}} \ \text{ for active users, and } \ \gamma_{\mathrm{in}} = \frac{1}{B}\sqrt{\frac{\mathrm{Var}_m}{N}} \ \text{ for inactive users.}$$
It is easy to verify that $\gamma_{\mathrm{act}} < \gamma_{\mathrm{in}}$, which suggests that embeddings of active users should evolve with a smaller step size than those of inactive users. This is consistent with our intuition for the 2TS model." }, { "heading": "3 METHODOLOGY FOR PRACTICE", "text": "In this section, we describe our concrete methodology, including (i) the parameterization of the two-time-scale (2TS) model and (ii) a two-stage training algorithm for learning this model efficiently and effectively. The design is motivated and guided by the analysis in Section 2, though more advanced techniques are incorporated in order to achieve state-of-the-art performance.

Instead of following the OGD algorithm in Section 2, we use more flexible models for the online update dynamics of user embeddings, as in the meta-learning framework (Ravi & Larochelle, 2016). In particular, we use two different RNNs, which we call the two-time-scale (2TS) model, to model the dynamics of active and inactive users respectively; they correspond to the update steps in Eq. 3.

To learn the 2TS model, we first perform a generalized collaborative filtering method to obtain transductive user and item embeddings, which is similar to the training phase in Section 2, except that item embeddings also need to be learned from data and a more advanced loss function is adopted. The learned transductive embeddings are then fixed in stage 2. Transductive embeddings for active users are used as the initial embeddings during the training of the RNNs, while for inactive users a common initialization is learned together with the RNNs.
}, { "heading": "3.1 NEURAL TWO-TIME-SCALE (2TS) MODEL", "text": "Consider a set of users u ∈ U := {1, 2, · · · , U} and items i ∈ I := {1, · · · , I} in a system. In our model, users are divided into two sets by the number of observed interactions T in the training set. With a threshold t∗ ∈ N, we consider users with more than t∗ observed interactions as active users Uact, and those with less than t∗ interactions as inactive users Uin, which include new users. Our two-time-scale model consists of two RNNs, one for active users and the other for inactive users. They can be thought of as two different online adaptation operators for active and inactive users respectively. Furthermore, these two RNNs will also differ in the way in whether they have user-specific parameters and how they are initialized. Assume for the moment, we have already learned an initial set of transductive user embeddings { φu ∈ Rd : u ∈ U } and item embeddings{\nxi ∈ Rd : i ∈ I } , which will be explained in later two-stage training methods.\nThe RNN for inactive or new users is purely inductive in the sense that it does not contain user-specific parameters. We will not use user-specific transductive embeddings φu in this RNN. Instead, the RNN will start with a learned common initialization ψ ∈ Rd, and then be updated by RNN-based adaptation operators every time a new interaction is observed. More precisely, for an inactive user who clicks on items (i1, · · · , iT ) sequentially, the embedding will be updated sequentially as:\ninactive users u ∈ Uin: φ0u = ψ; φt+1u = RNNcellΘ1(φtu,xit). (8)\nFor active users, we will view their transductive embeddings as the memories for their long-term history. Different from inactive users, the initial embeddings in the RNN for active users will be set to be the user-specific transductive embeddings φu. Then the RNN-based adaptation operator will update this initial transductive user embeddings if there are more interacted items:\nactive users u ∈ Uact: φ0u = φu; φt+1u = RNNcellΘ2(φtu,xit). (9)\nThe inactive and active user models in Eq. 8 and Eq. 9 constitute our two-time-scale model (2TS model). The model consists of several sets of parameters, including the transductive embeddings for active users { φu ∈ Rd : u ∈ Uact } and items { xi ∈ Rd : i ∈ I } , the two sets of RNN parameters Θ1 and Θ2 respectively, and the additional common initialization ψ for the inactive user RNN. These parameters will be learned differently using a two-stage method described in the next section." }, { "heading": "3.2 TWO-STAGE TRAINING", "text": "Let D be the training dataset that records the interactions between users and items as an ordered sequence where the t-th interaction between some user u and item i is denoted as et := (ut, it). Our training method for the two-time-scale model consists of two stages. In stage 1, we will learn the transductive embeddings for all users and items from observed interactions D. In stage 2, we will learn the parameters of the two RNNs. In summary, the blue-colored transductive embeddings for active users {φu : u ∈ Uact} and items {xi : i ∈ I} are learned in stage 1 and then fixed in stage 2. The red-colored components that include the common initialization ψ for inactive users and the RNN parameters Θ1 and Θ2 are learned in stage 2. More specifically,\nStage 1. 
We will first ignore the sequential ordering of the interactions, and learn the transductive user and item embeddings by optimizing the cross-entropy loss (also called the softmax loss):
$$\mathcal{L}(\{\phi_u\}, \{x_i\}) = \frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}} \log\Big[\sum_{j\in O(u,i)} \exp(\phi_u^\top x_j)\Big] - \phi_u^\top x_i, \quad (10)$$
where the set $O(u,i)$ is the set of items displayed to user $u$ when he/she clicks on item $i$. Typically this information is not given, so we can randomly sample $p$ items as the non-clicked items, and use these $p$ items together with the clicked item $i$ to constitute the offer set $O(u,i)$. Stage 1 training is similar to collaborative competitive filtering (CCF) in Yang et al. (2011). It is efficient since the objective is a simple function. Besides, since the objective is convex in either $\phi_u$ or $x_i$, it is easier to obtain the global optimum and the results are affected less by the initialization.

Since active users have many interactions, their transductive embeddings are learned well and lead to good predictions on held-out data. However, for inactive users with fewer observed interactions, the learned transductive embeddings could be overfitted. Thus, in the next stage of training the RNNs, we re-use the transductive embeddings for active users but discard those for inactive users. Furthermore, the learned item embeddings $\{x_i\}$ are also used in the next stage of training.

Stage 2. We can divide the training set according to active and inactive users as $\mathcal{D}_{\mathrm{act}} = \{(u_t, i_t) : (u_t, i_t) \in \mathcal{D} \wedge u_t \in \mathcal{U}_{\mathrm{act}}\}$ and $\mathcal{D}_{\mathrm{in}} = \{(u_t, i_t) : (u_t, i_t) \in \mathcal{D} \wedge u_t \in \mathcal{U}_{\mathrm{in}}\}$. Then we train the parameters of the two RNNs using the following loss functions:
$$\mathcal{L}_{\mathrm{in}}(\Theta_1, \psi) := \frac{1}{|\mathcal{D}_{\mathrm{in}}|}\sum_{(u_t,i_t)\in\mathcal{D}_{\mathrm{in}}} \log\Big[\sum_{j\in O(u_t,i_t)} \exp(\phi_{u_t}^\top x_j)\Big] - \phi_{u_t}^\top x_{i_t}, \quad (11)$$
$$\mathcal{L}_{\mathrm{act}}(\Theta_2) := \frac{1}{|\mathcal{D}^K_{\mathrm{act}}|}\sum_{(u_t,i_t)\in\mathcal{D}^K_{\mathrm{act}}} \log\Big[\sum_{j\in O(u_t,i_t)} \exp(\phi_{u_t}^\top x_j)\Big] - \phi_{u_t}^\top x_{i_t}, \quad (12)$$
where the user embeddings $\{\phi_{u_t}\}$ in Eq. 11 and Eq. 12 are updated sequentially using the corresponding RNNs in Eq. 8 and Eq. 9, respectively. Furthermore, $\mathcal{D}^K_{\mathrm{act}} \subset \mathcal{D}_{\mathrm{act}} \subset \mathcal{D}$ contains the last $K$ interactions of each active user observed in the training set. That is, for an active user who has more than $t^*$ interactions, we only use the last $K$ ($K \le t^*$) interactions of the user to train the RNN. First, this allows us to avoid the direct use of an RNN for long sequence modeling, which is inefficient. Second, the transductive embeddings for active users have already encoded most information about these users, and only a small online adaptation is needed to boost the performance further.

Overall, we find that encoding the history into the transductive embedding and learning only the $K$-step adaptation can largely reduce the computational cost. This reflects another benefit of treating active and inactive users separately using the threshold $t^*$." }, { "heading": "3.3 IMPLEMENTATION DETAILS", "text": "We present two implementation details that are essential for the performance of our model in the experiments but less relevant to the main ideas of this paper.

2TS-Plus. We propose a variant of our 2TS model called 2TS-Plus. The only difference from 2TS is that we replace the user embeddings $\phi_{u_t}$ in 2TS by $\hat{\phi}_{u_t} \leftarrow W^\top[\phi_{u_t}^\top, x_{i_{t-1}}^\top]^\top$, where $x_{i_{t-1}}$ is the item embedding of user $u_t$'s most recently clicked item $i_{t-1}$, and $W$ is a learnable weight matrix. In summary, 2TS-Plus explicitly incorporates the information of the 'last clicked item' to compute the user embeddings. Our experiments show that this can consistently improve performance.
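The two RNN updates (Eqs. 8-9) and the sampled-softmax objectives (Eqs. 10-12) can be put together in a short PyTorch sketch. This is only a minimal illustration under our own naming and batching assumptions (e.g., per-group batches of equal-length sequences), not the authors' implementation:

```python
import torch
import torch.nn as nn

class TwoTimeScaleModel(nn.Module):
    """Minimal sketch of the 2TS model: one GRU cell adapts inactive users
    from a shared start psi (Eq. 8); the other adapts active users from
    their fixed, stage-1 transductive embeddings (Eq. 9)."""
    def __init__(self, phi_active, item_emb, d=64):
        super().__init__()
        # Stage-1 outputs, frozen during stage 2.
        self.phi_active = nn.Embedding.from_pretrained(phi_active, freeze=True)
        self.item_emb = nn.Embedding.from_pretrained(item_emb, freeze=True)
        self.psi = nn.Parameter(torch.zeros(d))   # shared inactive start
        self.rnn_in = nn.GRUCell(d, d)            # Theta_1, fast time scale
        self.rnn_act = nn.GRUCell(d, d)           # Theta_2, slow time scale

    def user_states(self, user_ids, item_seq, active):
        """Roll a batch of clicked-item sequences through the matching RNN;
        phi at step t scores the click on item i_t, then gets updated."""
        phi = self.phi_active(user_ids) if active else \
              self.psi.expand(len(user_ids), -1)
        rnn = self.rnn_act if active else self.rnn_in
        states = []
        for t in range(item_seq.size(1)):          # item_seq: (batch, T)
            states.append(phi)
            phi = rnn(self.item_emb(item_seq[:, t]), phi)
        return torch.stack(states, dim=1)           # (batch, T, d)

def softmax_loss(phi, clicked, offered, item_emb):
    """Sampled-softmax loss of Eqs. 10-12: logsumexp over the offer set
    minus the logit of the clicked item."""
    logits = torch.einsum('bd,bkd->bk', phi, item_emb(offered))
    return (torch.logsumexp(logits, dim=-1)
            - (phi * item_emb(clicked)).sum(-1)).mean()
```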
In fact, we incorporate the 'last clicked item' because we found that the baseline JODIE (Kumar et al., 2019) performs particularly well on lastfm-1K. We therefore looked into their implementation and found that it is the 'last clicked item' that helps with the performance.

Features. In many datasets, item features $\{f_i\}_{i\in\mathcal{I}}$ are provided. In the experiments, we concatenate the transductive item embeddings $x_i$ with the feature-based embedding $g_\varphi(f_i)$, where $g_\varphi$ is a simple network with parameters $\varphi$. The concatenation $[x_i^\top, g_\varphi(f_i)^\top]^\top$ is used as the item embedding, and both the transductive embeddings $x_i$ and the parameters $\varphi$ are learned in stage 1." }, { "heading": "4 RELATED WORK", "text": "Our stage 1 training is most related to methods based on matrix factorization (MF) (Koren et al., 2009). Many methods for matrix completion (MC) are collaborative filtering (CF)-based (Koren et al., 2009). Neural models for recommendation broadly fall into the following three categories.

Graph-based models. Monti et al. (2017) first studied recommender systems with graph neural networks (GNN). Berg et al. (2017) proposed graph convolutional matrix completion (GCMC), which applies a GNN on the user-item bipartite graph to learn user and item embeddings. It is a transductive model, as it learns user/item-specific embeddings. PinSage (Ying et al., 2018) and IGMC (Zhang & Chen, 2020) are recently proposed inductive GNN-based recommender systems. Despite showing promising results, existing GNN-based recommender systems usually have poor scalability, due to their expensive neighborhood sampling procedure for performing graph convolution.

Dynamic graph models. Based on the idea that both users and items evolve over time via temporal interactions, Dai et al. (2016); Farajtabar et al. (2017); Kumar et al. (2019); Goyal et al. (2020); Sankar et al. (2020) incorporate the temporal evolution of the graph into the model design. However, most existing dynamic graph approaches cannot scale to large interaction graphs. JODIE (Kumar et al., 2019) proposed a batching method to make the training process more efficient.

Deep sequence models. RNN-based (Hidasi et al., 2015; Jannach & Ludewig, 2017) and LSTM-based (Devooght & Bersini, 2017; Chen et al., 2019) deep models are widely used in sequential recommendation. Other methods based on attention models (Zhou et al., 2018; 2019) have also been explored. However, these models still have difficulties in leveraging the information contained in states located far into the past, due to gradient propagation issues (Pascanu et al., 2013). Several recent advances have been proposed to deal with long-range sequences (Tang et al., 2019; Pi et al., 2019).

Cold-start problem. Traditional approaches to address cold-start problems include content filtering, hybrid models, etc. Vartak et al. (2017) proposed a meta-learning perspective on the item cold-start problem, where recommending new items to users is formulated as learning a learning algorithm. Lee et al. (2019) proposed to use meta-learning to estimate a new user's preferences with a few consumed items. Bose et al. (2019) proposed Meta-Graph to perform few-shot link prediction across multiple graphs. Wu et al. (2020) proposed to compute embeddings for new users using the embeddings of active users, via an attention-based model.
They share the same idea of splitting the users into two sets, but their main target is the matrix factorization problem, and they consider neither the temporal evolution nor the two-time-scale difference." }, { "heading": "5 EXPERIMENTS", "text": "To evaluate the performance of the 2TS model, we conduct experiments on three public datasets, two of which are the largest recommendation benchmark datasets and closer to the industrial scenario, making our results more solid and convincing. By comparing to a diverse set of state-of-the-art (SOTA) methods, we demonstrate the scalability and accuracy of our 2TS model. See detailed configurations in Appendix C.

Table 2: Dataset Statistics
Dataset     #users     #items      #interactions
lastfm-1K   1,000      1,000       1,293,103
ML-25M      162,538    59,048      24,999,849
Taobao      987,975    4,111,798   96,678,667

Dataset. We consider 3 public datasets: the Taobao dataset (Pi et al., 2019), MovieLens-25M (ML-25M) (Harper & Konstan, 2015), and lastfm-1K (Celma, 2010). Dataset statistics are summarized in Table 2. Taobao and ML-25M are large-scale; in particular, Taobao contains about 100 million interactions. The results on these two datasets are much more convincing. All datasets provide user-item interaction data where, for each interaction, the user ID, item ID, timestamp, and item feature are given. For ML-25M, we ignore the ratings and simply use it as interaction data. For each dataset, we sort the interactions by timestamp, and use the first 70% as the training set, the following 10% as the validation set, and the last 20% as the test set. All reported results are estimated on the test set.

Table 1: Overall performance. The * symbol indicates that the implementations released by the authors or the proposed methods are not scalable and will not be able to run on the indicated datasets without our modifications to their methods. The ] symbol indicates that the released implementations have been modified by us to better adapt to the dataset and evaluation metric, so that their results in this table should be expected to be better than the original version. The concrete modifications we made are described in Appendix B.

method                 Taobao            ML-25M            lastfm-1K
                       MRR    Rec@10     MRR    Rec@10     MRR    Rec@10
CF-based
  CCF]                 0.402  0.621      0.193  0.392      0.051  0.116
GNN-based models
  GCMC]                0.303  0.542      0.171  0.378      0.081  0.129
  GCMC-SAGE]           0.149  0.366      0.109  0.264      0.168  0.208
  GCMC-GAT]            0.230  0.494      0.185  0.404      0.064  0.118
Deep sequence models
  SumPooling           0.415  0.664      0.302  0.591      0.071  0.180
  GRU4REC]             0.546  0.777      0.364  0.657      0.152  0.344
  DIEN]                0.605  0.834      0.356  0.638      0.100  0.213
  MIMN]                0.607  0.828      0.363  0.653      0.115  0.261
  SASRec]              0.488  0.702      0.360  0.653      0.147  0.323
  Bert4Rec             0.280* 0.480*     0.397  0.652      0.216  0.369
Dynamic graph models
  dynAERNN             -oom-  -oom-      0.249  0.509      0.021  0.038
  JODIE]               0.454* 0.680*     0.354* 0.634*     0.176  0.325
Our method
  2TS                  0.669  0.844      0.404  0.693      0.151  0.332
  2TS-Plus             0.680  0.844      0.409  0.691      0.203  0.369
Improvement over best baseline
  2TS                  10.2%  1.2%       1.8%   5.5%       -      -
  2TS-Plus             12.0%  1.2%       3.0%   5.2%       -      -

Baselines. We compare 2TS with 12 models spanning 4 categories. (i) CF-based methods: Collaborative competitive filtering (CCF) (Yang et al., 2011) is an advanced CF model which takes into account the context of the user's choice. It is a simple yet very effective method. (ii) GNN-based models: GCMC (Berg et al., 2017) is a SOTA graph-based architecture for recommendation. GCMC-SAGE is its stochastic variant, which is more efficient. GCMC-GAT is another variant based on graph attention networks (Veličković et al., 2017). (iii) Deep sequence models: SumPooling is a simple yet effective baseline that is widely used in industry.
GRU4REC (Hidasi et al., 2015) is a representative of RNN-based models. DIEN (Zhou et al., 2019) is an advanced attention-based sequence model. MIMN (Pi et al., 2019) is a memory-enhanced RNN architecture designed to better capture long sequential user behaviors. It is a strong baseline, and its authors come from the team that released the Taobao dataset. SASRec (Kang & McAuley, 2018) is a 2-layer transformer decoder-like model. Also built on top of the transformer architecture, Bert4Rec (Sun et al., 2019) further introduced a BERT-type Cloze task with bidirectional attention. (iv) Dynamic graph models: JODIE (Kumar et al., 2019) and dynAERNN (Goyal et al., 2020) are two dynamic temporal graph approaches which learn dynamic graph node embeddings via temporal modeling.

Evaluation Metric. We measure the performance of different models in terms of the mean reciprocal rank (MRR), the average of the reciprocal rank, and recall@10, the fraction of interactions in which the ground truth item is ranked in the top 10. For both metrics, higher values are better. For every interaction, the ranking of the ground truth item is calculated with respect to 500 items, where the other 499 negative items are randomly sampled from the set of all items.

Overall Performance (Table 1). We summarize the overall performance in terms of MRR and Rec@10 in Table 1. On the large-scale datasets Taobao and ML-25M, our models 2TS and 2TS-Plus achieve consistent and significant improvements over all baselines; in terms of MRR, improvements of 12.0% and 3.0% are achieved by 2TS-Plus, respectively. Note that the lastfm-1K dataset is much smaller than Taobao and ML-25M. We run experiments on lastfm-1K just to show that the models have very different behaviors on datasets of different scales. It is interesting to see that, for example, GCMC-SAGE, which performs the worst on both large-scale datasets, achieves the 3rd best performance on lastfm-1K. The real advantages of our models are for large-scale problems where candidate items are sparse.

[Figure 2: test-set performance over the course of training, with one panel for ML-25M and one for Taobao.]

Scalability & Efficiency (Fig. 2). To compare the training efficiency, we evaluate different models' intermediate checkpoint performances on the test set. Fig. 2 shows that the performance of 2TS increases fast. 2TS's first-stage transductive training is efficient and provides an effective initialization for the second-stage inductive training. Besides, though not revealed by the figures, the per-epoch training time of 2TS is much smaller than that of the baselines.

Performance for different users. Fig. 3 shows the MRR performance averaged over users with different numbers of observed interactions (the more interactions, the more active the user is). The dashed line indicates the threshold that we use to split users into inactive and active groups. 2TS and 2TS-Plus lead to consistent improvements over the entire range of users. On ML-25M, the improvement on inactive users is more pronounced, while on Taobao the improvement on active ones is more pronounced.

Two-time-scale behavior (Fig. 4). Recall that 2TS has two RNNs: one updates the embeddings of inactive users and the other updates those of active users. Given the learned RNNs, on the test set, we compute the change of user embeddings in 2-norm as the user interacts with more items (Fig. 4). The behavior aligns with our intuition: embeddings of inactive users change faster and those of active users are more static.

Ablation Study (Table 3).
To show the effectiveness of the two-stage training, we compare 2TS to 2TSSingleStage, which has exactly the same architecture as 2TS but all parameters are trained together in a single stage. 2TS-SingleStage performs worse, which means the transductive embeddings are learned very well in Stage 1, benefit from the convexity and easi-\nness of optimization. Besides, the training of 2TS-SingleStage is a lot slower. Furthermore, to show the effectiveness of the 2TS model design, we compare it to 2-GRU, which applies two different RNNs to inactive and active users, but there are no transductive user embeddings for active users." }, { "heading": "6 CONCLUSION", "text": "We proposed to learn two-time-scale representation (2TS) for large recommendation systems to explicitly address the learning efficiency for different user dynamics. We also proposed a two-stage training schema, to leverage the transductive embedding as the inductive model initialization for active users. We evaluated 2TS on large scale recommendation ranking tasks, and showed that 2TS is able to outperform several class of state-of-the-art methods including deep sequence models, graph neural network, and dynamic graphs. Our results show that, designing different representation to capture diverse interaction dynamics are desirable. Future work includes separating and updating user dynamics in an algorithmic way." }, { "heading": "A THEORETICAL MOTIVATION: WHY TWO-TIME-SCALE MODELS?", "text": "Here we will provide more details for Section 2. First, we summarize the setting in Section 2.1 formally and mathematically:\nSetting (A).\n• Each item is represented by a vector x in a given bounded space X := {x ∈ Rd : ‖x‖2 ≤ Bx}. • Each user is represented by a distribution µ over the spaceX×{0, 1}. Each sample z := (x, y) ∼ µ\nfrom this distribution is an item x and a binary label y indicating whether this item is clicked by the user. Further, we assume that a user µ is drawn from a meta-distribution µ ∼ pu. • We parameterize the user µ through a logistic regression model so that a groundtruth user embedding is defined as the minimizer to the expected risk:\nφ∗µ := arg minφ∈ΦRµ(φ), whereRµ(φ) := E(x,y)∼µ`(φ,x, y), and `(φ,x, y) := −yx>φ+ log(1 + exp(x>φ)) + c2‖φ‖ 2 2,\nwhere c > 0 is a regularization constant. Assume the parameter space is also bounded Φ := {φ ∈ Rd : ‖φ‖2 ≤ Bφ}. • Training samples. Each user is associated with T observed interactions, denoted by z[T ] := {(x1, y1), · · · , (xT , yT )} ∼ µT . Assume T ≥ 0 is a random variable that follows a power law distribution with density p(T ) ∝ (T + 1)−α, and denote T ∼ pαT . T is independent of µ. Given the observations z[T ] for a user µ, an estimate of the user embedding φ(z[T ]) can be computed using learning algorithms such as stochastic gradient descent (SGD). In this paper, we assume φ(z[T ]) is estimated by SGD with initialization φ0 = 0 and step size θ = 1c on the loss `. • Test scenario. Each user µ can be viewed as an online learning task, where items arrive sequentially. Starting from an initial embedding φ1µ, an online algorithm updates the user embedding whenever it observes a new data point (xt, yt) ∼ µ:\nφt+1µ ← Update(φtµ,xt, yt),\nthen applies the most updated embedding to φt+1µ the next item xt+1, and suffers from the loss `(φt+1µ ,xt+1, yt+1). 
The excessive risk of this online algorithm applying to N test samples is\n1 N ∑N t=1 `(φ t µ,xt, yt)− `(φ∗µ,xt, yt).\nBefore the proof of the main theorem, we first introduce the result of stochastic analysis by (Nemirovski et al., 2009), which can be adapted to derive the following proposition that tells the error of the estimation φ(z[T ]) from the training samples.\nProposition A.1. Assume Setting (A). Given the T observations z[T ] ∼ µT , if we estimate the user embedding using projected stochastic gradient descent (SGD) with initialization 0, step size 1c , and operated on `2-regularized logistic regression loss 1T ∑T i=1−yix>i φ+log(1+exp(x>i φ))+ c 2‖φ‖ 2 2, then the expected squared error of the estimation φ(z[T ]) can be bounded by\nEz[T ]∼µT ‖φ(z[T ])− φ ∗ µ‖22 ≤\nQ(µ) T + 1 , where Q(µ) = max{ (Bx + cBφ)\n2\nc2 , ‖φ∗µ‖22}.\nProof. Consider the loss `(φ,x, y) = −yx>φ+ log(1 + exp(x>φ)) + c2‖φ‖ 2 2. It is easy to verify that ` is c-strongly convex, and that the second moment of the gradient is bounded by\nE‖∂φ`‖22 = E‖( exp(x>φ)\n1 + exp(x>φ) − y)x+ cφ‖22 ≤ E (‖x‖2 + c‖φ‖2) 2 ≤ (Bx + cBφ)2.\nThen applying the result by (Nemirovski et al., 2009) (more specifically, Equation (2.9) in the paper), we can obtain the above error bound.\nA.1 PROOF OF THEOREM 2.1.\nWe first restate Theorem 2.1 as the following theorem, which provides more details. Then we present the proof.\nTheorem A.1 (Detailed version of Theorem 2.1). Assume Setting (A). Assume the online algorithm for updating the embeddings φtµ in Eq. 3 is online gradient descent (OGD) with step size γ, i.e., φt+1µ = ProjΦ [ φtµ − γ∂`(φtµ,xt, yt) ] . Let z[T ] denote the observed training samples and z[N ] denote the sequence of test samples for the online learning task. With the initialization strategy in Eq. 4, the expected excessive risk is bounded by\nE(µ,T )∼(pu×pαT )Ez[T ]∼µTEz[N]∼µN 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n≤ β(t ∗) + γ2(Bx + cBφ) 2N\n2γN , (13)\nwhere\nβ(t∗) = ζ(α+ 1, t∗ + 1)\nζ(α, 1) Q̄+\n( 1− ζ(α, t ∗ + 1)\nζ(α, 1)\n) Varm\nQ̄ = Eµ∼puQ(µ) = Eµ∼pu max{ (Bx + cBφ)\n2\nc2 , ‖φ∗µ‖22},\nand ζ(a, b) = ∞∑ i=0 (i+ b)−a is the Hurwitz zeta function .\nChoosing the optimal step size γ = 1Bx+cBφ √ β(t∗) N , the upper bound is\nE 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt) ≤ 1 Bx + cBφ\n√ β(t∗)\nN . (14)\nBesides, the optimal choice of the threshold t∗ is\nt∗∗ = arg min t∗≥0 β(t∗) =\n⌊ Q̄\nVarm\n⌋ − 1. (15)\nProof. Denote the N test samples by z[N ] ∼ µN . Following the analysis and results in (Nemirovski et al., 2009), one can show that\nEz[N]∼µN ‖φ t+1 µ − φ∗µ‖22 ≤Ez[N]∼µN ‖φ t µ − φ∗µ‖22 − 2γEz[N]∼µN [ (φtµ − φ∗µ)>∂`(φtµ,xt, yt) ] + γ2(Bx + cBφ) 2.\nThe above inequality corresponds to equation (2.6) in (Nemirovski et al., 2009). 
Then, since ` is convex in φ, then\nEz[N]∼µN [ (φtµ − φ∗µ)>∂`(φtµ,xt, yt) ] ≥ Ez[N]∼µN [ `(φtµ,xt, yt)− `(φ∗µ,xt, yt) ] .\nDenote et = Ez[N]∼µN ‖φ t µ − φ∗µ‖22 and combine the above two inequalities, we have 2γEz[N]∼µN [ `(φtµ,xt, yt)− `(φ∗µ,xt, yt) ] ≤ et − et+1 + γ2(Bx + cBφ)2.\nSumming over t, we have\nEz[N]∼µN 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt) ≤ ‖φ1µ − φ∗µ‖22 2γN + γ(Bx + cBφ) 2 2\nTaking expectation over the training samples z[T ], T , and µ, we have\nE(µ,T )∼(pu×pαT )Ez[T ]∼µTEz[N]∼µN 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n≤ E(µ,T )∼(pu×pαT )Ez[T ]∼µT ‖φ(z[T ])− φ∗µ‖22 2γN 1 [T ≥ t∗]\n+ E(µ,T )∼(pu×pαT ) ‖m− φ∗µ‖22 2γN 1 [T < t∗]\n+ γ(Bx + cBφ)\n2\n2 .\nRegarding the first term on the right hand side, by Proposition A.1,\nE(µ,T )∼(pu×pαT )Ez[T ]∼µT ‖φ(z[T ])− φ ∗ µ‖221 [T ≥ t∗]\n≤ E(µ,T )∼(pu×pαT ) Q(µ)1 [T ≥ t∗] T + 1 = Eµ∼puQ(µ)ET∼pαT 1 [T ≥ t∗] T + 1\n= Q̄ET∼pαT 1 [T ≥ t∗] T + 1 = Q̄ ∞∑ T=t∗ 1 T + 1 (T + 1)−α ζ(α, 1) = Q̄ ζ(α+ 1, t∗ + 1)\nζ(α, 1) .\nRegarding the second term,\nE(µ,T )∼(pu×pαT )‖m− φ ∗ µ‖221 [T < t∗]\n= VarmET∼pαT 1 [T < t ∗] = Varm\n( 1− ζ(α, t ∗ + 1)\nζ(α, 1)\n) .\nCombining the above inequalities, we can obtain the first bound in Eq. 13, and Eq. 14 follows. Now we need to find the optimal choice of the threshold t∗. Note that\nβ(t∗) = ET∼pαT\n[ Q̄\nT + 1 1 [T ≥ t∗] + Varm1 [T < t∗] ] ≥ ET∼pαT min { Q̄ T + 1 ,Varm } .\nSince Q̄T+1 monotonely decreases as T increases, it is easy to see that\nmin\n{ Q̄\nT + 1 ,Varm\n} = { Varm, when T < Q̄Varm − 1 Q̄ T+1 , when T,≥ Q̄ Varm − 1.\nTherefore, the lower bound of β(t∗) achieves at\nt∗∗ =\n⌊ Q̄\nVarm\n⌋ − 1,\nand it is the optimizer.\nA.2 DETAILS FOR IMPLICATION 2 IN SECTION 2\nRegarding Implication 2 in Section 2, here we provide more derivation steps for the optimal step sizes for the two group of users.\nThe expected excessive risk can be written as the sum of the excessive risks of the two groups of users:\nE\n[ 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n]\n= E\n[( 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n) 1[T < t∗] ] for inactive users\n+ E\n[( 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n) 1[T ≥ t∗] ] for active users\nWith similar derivation steps as the proof for Theorem A, we can see that for inactive users, the bound is\nE\n[( 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n) 1[T < t∗] ]\n≤ E [(\nVarm 2γN + γ(Bx + cBφ)\n2\n2\n) 1[T < t∗] ] = ( Varm 2γN + γ(Bx + cBφ) 2 2 ) Pr[T < t∗].\nOptimizing the step size γ gives γin = 1Bx+cBφ √ Varm N .\nSimilarly, for active users, the bound is\nE\n[( 1\nN N∑ t=1 `(φtµ,xt, yt)− `(φ∗µ,xt, yt)\n) 1[T ≥ t∗] ]\n≤ E [(\nQ̄ (T + 1)2γN + γ(Bx + cBφ)\n2\n2\n) 1[T ≥ t∗] ] = ( Q̄ζ(α+ 1, t∗ + 1)\n2γN + γ(Bx + cBφ)\n2ζ(α, t∗ + 1)\n2\n) /ζ(α, t∗ + 1).\nOptimizing the step size γ gives γact = 1Bx+cBφ √ ζ(α+1,t∗+1)Q̄ ζ(α,t∗+1)N ." }, { "heading": "B BASELINE SPECIFICATION", "text": "Some of the compared baseline methods are not directly scalable to large scale interaction graphs, or originally designed for ranking task. 
To make the baselines runnable on large and sparse graphs and comparable to our proposed method, we have made a few adaptations.

• CCF: Since item features are provided in the datasets, to allow CCF to make use of the features, we modify it in the same way as we incorporate features into our 2TS model (see Section 3.3).

• JODIE: we made the following adaptations: 1) replaced the static one-hot representation with 64-dim embeddings because the number of nodes is large; 2) represented the category/tag categorical feature via a learnable embedding table; 3) used a triplet loss with random negative examples rather than the original MSE loss, which empirically shows improvements.

• dynAERNN: This method is originally designed for discrete graph snapshots, while our focused tasks are continuous interaction graphs. We manually transformed the interaction graph into 10 snapshots, with equal edge count increments. For the downstream ranking task, we followed the evaluation method used in dySat (Sankar et al., 2020): after the node embeddings are trained, we train a logistic regression classifier to predict a link between a pair of user/item node embeddings using the Hadamard operator. The logits on the test set are used for ranking. For the Taobao dataset, we were not able to get results for dynAERNN given the memory constraint.

• GRU4Rec & DIEN & MIMN: We changed the data batching from per-user based to per-example based, to better adapt to time-ordered interaction data and to make full use of the training data. Besides, the implementation of GRU4Rec follows the implementation by the authors of the MIMN paper, which includes some modifications compared to the original version of GRU4Rec released by its authors, and should be expected to perform better.

• Bert4Rec: For the ML-25M and lastfm-1K datasets, we followed the hyper-parameter setting for the ML-20M dataset, according to the implementation released by the authors, and changed the dimension to 128. For the Taobao dataset, to fit the model into 16GB of GPU memory, we changed the embedding dimension to 32 and the batch size to 8.

• GCMC & GCMC-SAGE & GCMC-GAT: We changed the loss function from a softmax over different ratings to a softmax over [true item, 10 randomly selected items], to better adapt to the ranking task.

Besides, we clarify that in the baseline models Bert4Rec, SASRec, and dynAERNN, item features are not incorporated, since it is not straightforward to include features into their original model designs, and it is unclear whether adding features would help their setup or not." }, { "heading": "C CONFIGURATION OF 2TS AND 2TS-PLUS", "text": "We present some important hyperparameters chosen for 2TS and 2TS-Plus in the experiments.

• Threshold $t^* \in \mathbb{N}$: Users with more than $t^*$ interactions observed in the training set are considered active users, and those with less than or equal to $t^*$ observed interactions are considered inactive users. For the results reported in Table 1, the choices of the thresholds are given in Table 4.

• Last $K$ interactions for active users: In stage 2 training, for each active user, we only use the last $K$ interactions to train the RNN. The number $K$ is chosen to be either 10 or 20, and the two choices give similar performance.

• Embedding dimension: Both user embeddings and item embeddings are set to dimension 64. If the items have features, we use another 64 dimensions for the feature embedding.

• Learning rate: the learning rate for training 2TS and 2TS-Plus is searched over 1e-3 and 1e-4 only. Most of the time 1e-4 works better. The optimizer is Adam." } ]
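For reference, the hyperparameters in Appendix C can be collected into a single configuration; the dictionary below is a hypothetical summary under our own key names, and the threshold value is a placeholder since the per-dataset choices are given in the paper's Table 4 (not shown here).

```python
# Hypothetical 2TS configuration mirroring Appendix C; the threshold value
# below is a placeholder, as the per-dataset choices live in Table 4.
config = {
    "threshold_t_star": 20,   # placeholder; users with > t* interactions are active
    "last_k_active": 10,      # K in {10, 20}; last-K clicks train the active RNN
    "user_dim": 64,
    "item_dim": 64,
    "feature_dim": 64,        # extra dimensions when item features exist
    "learning_rate": 1e-4,    # searched over {1e-3, 1e-4}
    "optimizer": "Adam",
}
```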
2020
LEARNING TWO-TIME-SCALE REPRESENTATIONS FOR LARGE SCALE RECOMMENDATIONS
SP:2749a34e8528dfd4fcc733f9b9f175fcacbcb223
[ "The paper proposes to replace the weighted average of the values in standard self-attention with the average of values sampled in a way that the expectation is close to the result of self-attention. In particular, the authors associate with each query-key pair, a Bernoulli random variable with expected value close to the exponential of the dot-product. Sampling these variables and averaging the values per query is formulated in an efficient way using locality-sensitive hashing." ]
Transformer-based models have come to dominate the landscape in a wide range of natural language processing (NLP) applications. The heart of the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and consequently, depends quadratically on the input sequence length. It is known that training such models on longer sequences is quite expensive, and often, prohibitively so. We show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity to linear. We bypass the quadratic cost by considering selfattention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (based on feasibility of deployment on GPU architectures). We evaluate our proposed algorithm on the GLUE benchmark with standard 512 sequence length and our method achieves comparable or even slightly better performance than a standard pretrained Transformer. To evaluate whether our method can indeed handle longer sequences, we conduct experiments on long sequence (4096) language model pretraining and achieve consistent results as standard self-attention, while observing sizable inference speed-ups and memory savings.
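To make the sampling scheme described in the abstract concrete, the following NumPy sketch estimates each query's attention output by averaging the values whose keys collide with the query over repeated SimHash rounds. For unit-norm vectors, a tau-bit SimHash collision occurs with probability $(1-\theta/\pi)^{\tau}$ for angle $\theta$ between the pair, a quantity monotone in the query-key dot product, so the collision indicators play the role of the Bernoulli variables. All names, the self-normalized averaging, and the per-bucket loop are our own illustrative choices and ignore the GPU-oriented modifications the paper develops.

```python
import numpy as np

def lsh_bernoulli_attention(Q, K, V, n_hashes=32, tau=8, seed=0):
    """Monte-Carlo estimate of attention outputs: the Bernoulli variable for a
    query-key pair succeeds iff the pair collides under a tau-bit SimHash."""
    rng = np.random.default_rng(seed)
    n, d = Q.shape
    out = np.zeros_like(V, dtype=float)
    counts = np.zeros(len(Q))
    weights = 1 << np.arange(tau)                    # bits -> integer hash code
    for _ in range(n_hashes):
        planes = rng.standard_normal((d, tau))       # random hyperplanes
        hq = ((Q @ planes) > 0).astype(int) @ weights
        hk = ((K @ planes) > 0).astype(int) @ weights
        for b in np.unique(hq):                      # group colliding pairs
            q_idx = np.where(hq == b)[0]
            k_idx = np.where(hk == b)[0]
            if len(k_idx) > 0:
                out[q_idx] += V[k_idx].sum(axis=0)   # sampled value sum
                counts[q_idx] += len(k_idx)
    return out / np.maximum(counts, 1)[:, None]      # self-normalized average
```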
[]
[ { "authors": [ "Alexandr Andoni", "Piotr Indyk", "Thijs Laarhoven", "Ilya Razenshteyn", "Ludwig Schmidt" ], "title": "Practical and optimal lsh for angular distance", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Iz Beltagy", "Matthew E Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv preprint arXiv:2004.05150,", "year": 2020 }, { "authors": [ "Moses Charikar", "Paris Siminelakis" ], "title": "Hashing-based-estimators for kernel density in high dimensions", "venue": "IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2017 }, { "authors": [ "Moses S Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing,", "year": 2002 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "ArXiv, abs/1904.10509,", "year": 2019 }, { "authors": [ "Krzysztof Choromanski", "Valerii Likhosherstov", "David Dohan", "Xingyou Song", "J. Davis", "Tamás Sarlós", "D. Belanger", "Lucy J. Colwell", "Adrian Weller" ], "title": "Masked language modeling for proteins via linearly scalable long-context", "venue": "transformers. ArXiv,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the International Workshop on Paraphrasing,", "year": 2005 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "Bill Dolan" ], "title": "The third PASCAL recognizing textual entailment challenge", "venue": "In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing,", "year": 2007 }, { "authors": [ "Angelos Katharopoulos", "Apoorv Vyas", "Nikolaos Pappas", "François Fleuret" ], "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "venue": "arXiv preprint arXiv:2006.16236,", "year": 2020 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "ArXiv, abs/2001.04451,", "year": 2020 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": null, "year": 1909 }, { "authors": [ "Omer Levy", "Yoav Goldberg", "Ido Dagan" ], "title": "Improving distributional similarity with lessons learned from word embeddings", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Behnam Neyshabur", "Nathan Srebro" ], "title": "On symmetric and asymmetric lshs for inner product search", "venue": "In International Conference on Machine Learning,", "year": 1926 }, { "authors": [ "William H Press", "Saul A Teukolsky", "William T Vetterling", 
"Brian P Flannery" ], "title": "Numerical recipes 3rd edition: The art of scientific computing", "venue": "Cambridge university press,", "year": 2007 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of EMNLP,", "year": 2013 }, { "authors": [ "Ryan Spring", "Anshumali Shrivastava" ], "title": "A new unbiased and efficient class of lsh-based samplers and estimators for partition function computation in log-linear models", "venue": "arXiv preprint arXiv:1703.05160,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Trieu H Trinh", "Quoc V Le" ], "title": "A simple method for commonsense reasoning", "venue": "arXiv preprint arXiv:1806.02847,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In the Proceedings of ICLR", "year": 2019 }, { "authors": [ "Sinong Wang", "Belinda Z. Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": "ArXiv, abs/2006.04768,", "year": 2020 }, { "authors": [ "Johannes Welbl", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Constructing datasets for multi-hop reading comprehension across documents", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. 
Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Proceedings of NAACL-HLT,", "year": 2018 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Kumar Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontanon", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang" ], "title": "Big bird: Transformers for longer sequences", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Rowan Zellers", "Ari Holtzman", "Hannah Rashkin", "Yonatan Bisk", "Ali Farhadi", "Franziska Roesner", "Yejin Choi" ], "title": "Defending against neural fake news", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "N Rij" ], "title": "the output vector are partition tom τ -dimensional binary hash code. The time complexity for random project isO(nmτd). To efficiently approximate random projection, we follow the construction used in Andoni et al", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The Transformer model (Vaswani et al., 2017) is incredibly effective across a diverse set of natural language processing (NLP) applications including machine translation (Vaswani et al., 2017), language inference (Devlin et al., 2018) and paraphrasing (Raffel et al., 2019). Transformer-based models such as BERT (Devlin et al., 2018) are pretrained in an unsupervised manner and later finetuned on different downstream tasks, often providing state-of-the-art performance on standard benchmarks. While such models have strong empirical performance, their high computational and memory requirements remain quite high. Consequently, in the NLP setting, most current models have certain constraints on the sequence length, e.g., BERT and other transformer-based language models (Yang et al., 2019; Liu et al., 2019) limit the sentence length to be at most 512.\nThe Multi-Head Self-Attention is central to Transformer based models and provides a flexible global receptive field to exchange information among input tokens. While self-attention provides immense benefits, it is also a key bottleneck in training with long sequences. In particular, the output of self-attention is a combination of all tokens where coefficients are determined by the similarities among tokens. While this is empirically beneficial, it involves a sizable resource footprint. For sequence length n, this leads to a O(n2) complexity in both time and memory to compute pairwise similarities among all input tokens. This quadratic cost is a roadblock in attaining potential benefits that may be realizable in various applications by capturing long term context dependencies. As we will discuss in more detail later, the foregoing issue is a major thrust of several recent and ongoing efforts focused on mitigating the sizable resource requirements of such models.\nOur work is inspired by ideas of importance sampling via hashing-based sampling techniques (Spring & Shrivastava, 2017; Charikar & Siminelakis, 2017). We proposed a Bernoulli based sampling to approximate self-attention, scaling linearly with the input sequence length. We achieve this by viewing self-attention as a sum of individual tokens associated with Bernoulli random variables\nwhose success probability is determined by the similarities among tokens. In principle, we can sample all Bernoulli random variables at once with a single hash (although in practice, this number may be a small constant to lower the approximation variance). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of hashing-based importance sampling (based on feasibility of deployment on GPU architectures). The resulting strategy (You Only Sample Almost Once, YOSO-Attention) is far more amenable to an efficient and backpropagation friendly implementation, and has a favorable empirical performance profile on natural language modeling tasks. We evaluate our proposed algorithm on the GLUE benchmark (Wang et al., 2019) with 512 sequence length as well as on long sequence language model pretraining where we see promising results with speed-ups and memory savings." }, { "heading": "2 BACKGROUND: SELF-ATTENTION", "text": "Self-Attention. 
Self-attention is a scaled dot-product attention mechanism for capturing token dependencies in the input sequence, defined as

$$A(Q,K,V) = \mathrm{softmax}\Big(\underbrace{\tfrac{(QW_Q)(KW_K)^T}{\sqrt{d_h}}}_{P}\Big)VW_V = D_P\exp(P)\,VW_V \qquad (1)$$

where $Q, K, V \in \mathbb{R}^{n\times d}$ are embedding matrices from the input sequence, called queries, keys and values respectively. Here, n is the input sequence length, d is the embedding dimension of each token, $W_Q, W_K, W_V \in \mathbb{R}^{d\times d_h}$ are learned parameter matrices, $d_h$ is the dimension of the hidden embedding, and $D_P$ is an $n\times n$ diagonal matrix which normalizes each row of the $\exp(P)$ matrix so that the row entries sum to 1. For simplicity, we overload the notation for $Q, K, V$ to denote $QW_Q, KW_K, VW_V$ in our presentation.

Multi-Head Self-Attention. Multi-head self-attention in Transformers runs the scaled dot-product attention multiple times, and the attention outputs are concatenated to help the model capture information from multiple representation subspaces (Vaswani et al., 2017). Multi-head self-attention can be formally written as

$$\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}\big(A_1(Q,K,V), \cdots, A_h(Q,K,V)\big)W \qquad (2)$$

where h is the number of heads and $A_i$, $i = 1, \ldots, h$, are heads with different parameter matrices.

Self-Attention Bottleneck. A key bottleneck in self-attention is computing the softmax matrix, $\mathrm{softmax}(P)$, which requires calculating all pairwise input token similarities. To reduce this cost, we seek to approximate the softmax matrix by viewing self-attention for each query as an expectation over a softmax distribution and computing the approximated self-attention with an efficient sampling mechanism. In the following sections, we first review LSH-based importance sampling and then propose Bernoulli sampling with LSH to estimate self-attention efficiently." }, { "heading": "3 IMPORTANCE SAMPLING VIA LOCALITY SENSITIVE HASHING", "text": "Importance sampling (Press et al., 2007) approximates properties of a target distribution by a weighted average of random draws from another distribution. It is known (Press et al., 2007) that importance sampling can be applied directly to the softmax distribution by drawing samples from a uniform distribution, which avoids the harder problem of sampling from the softmax distribution itself. But this leads to a high-variance estimate, since the softmax distribution is usually concentrated in a small region. When this idea is used for softmax matrix approximation in self-attention in particular, the variance tends to grow with the input sequence length. Before proceeding, we summarize an interesting importance sampling method for low-variance estimators, namely importance sampling via LSH (Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017).

LSH-based Importance Sampling. Consider the case when the angular distance between a key and a query is small. In this case, the similarity (between the key and the query), as well as the softmax probability, will be large. When viewed through the lens of nearest neighbor retrieval, this coincides with a large collision probability for high-similarity key-query pairs, assuming that the neighbor retrieval is implemented via LSH. 
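To make this alignment between similarity and collision concrete, the following sketch (our illustration, not code from the paper; all names are ours) empirically checks the classical sign-random-projection fact underlying it: for unit vectors q and k at angle θ, a single random-hyperplane sign hash collides with probability 1 − θ/π, so high-cosine-similarity pairs collide most often.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
theta = 0.3 * np.pi          # angle between query and key

# Build unit vectors q, k with angle exactly theta.
q = rng.normal(size=d)
q /= np.linalg.norm(q)
p = rng.normal(size=d)
p -= (p @ q) * q             # component orthogonal to q
p /= np.linalg.norm(p)
k = np.cos(theta) * q + np.sin(theta) * p

# One random hyperplane per hash; q and k collide iff their signs agree.
m = 200_000
R = rng.normal(size=(m, d))
collide = np.sign(R @ q) == np.sign(R @ k)

print(collide.mean())        # empirical collision rate
print(1 - theta / np.pi)     # theoretical collision probability 1 - theta/pi
```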
Motivated by the link between the softmax probability p and the LSH collision probability q, Spring & Shrivastava (2017) and Charikar & Siminelakis (2017) suggest using LSH as an efficient sampler for low-variance softmax estimators.

(a) Spring & Shrivastava (2017) propose approximating softmax by sampling, for each query, a set S of neighboring keys formed by the union of colliding keys across m hash tables. The estimator is computed as $\frac{1}{|S|}\sum_{i\in S}\frac{p(q,k_i)}{q(q,k_i)}v_i$, where q is a query vector, $k_i, v_i$ are the key and value vectors in the sampled set S, and $p(\cdot,\cdot)$ and $q(\cdot,\cdot)$ are the softmax probability and the collision probability of the given pairs. The procedure is equivalent to performing importance sampling without replacement, which introduces a dependency among the samples. Deduplication (avoiding double counting) requires memory to store the keys in each hash table and runtime to deduplicate keys for each query. If the sizes of the hash buckets are skewed, the GPU memory needed depends on the bucket sizes, and the runtime depends on the size of S.

(b) Charikar & Siminelakis (2017) propose a hash-based estimator that simulates a proposal distribution for importance sampling via LSH, which can easily be applied in the context of softmax. For each hash table, a key is uniformly selected from the bucket to which the query is hashed, simulating a draw from a proposal distribution. The estimate is computed as $\frac{1}{m}\sum_{i=1}^{m}\frac{p(q,k_i)\,|H_i(q)|}{q(q,k_i)}v_i$, where $|H_i(q)|$ denotes the size of the bucket in the i-th hash table to which q is hashed. This simulates m samples drawn with replacement from the proposal distribution. However, the probability of a key being sampled depends not only on (a) its angular distance to the query but also on (b) the number of keys in its hash bucket, leading to a sampling dependency among all keys. Further, using this for self-attention creates a dependence between the sparsity of the softmax matrix and the number of hashes used. Specifically, the number of tokens that each query can attend to is bounded by the number of hashes: the procedure samples at most one key per hash table and so adds at most one additional nonzero to the softmax matrix.

Remark 1. While LSH-based importance sampling exploits the agreement between a high probability $p(\cdot,\cdot)$ and a high collision probability $q(\cdot,\cdot)$, the alignment is not perfect. Samples from the proposal distribution must be reweighted to compensate for the difference. Further, for different queries, the likelihood ratios between the softmax distribution and the proposal distribution w.r.t. a single key differ, so the reweighting has to be done at query time. Although maintaining hash tables that store keys is not a major problem in general, the high memory cost of the hash tables and the computation time for reweighting hurt efficiency when applied to self-attention." },
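As a toy illustration of approach (b) and of the reweighting discussed in Remark 1 (a sketch under our own simplifications, not the authors' implementation; all variable names are assumed), the snippet below builds m single-hash tables, samples one key uniformly from the query's bucket in each, and reweights by $p(q,k_i)|H_i(q)|/q(q,k_i)$, where the collision probability of a τ-bit sign hash is $q(q,k_i) = (1 - \arccos(q^Tk_i)/\pi)^\tau$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dv, tau, m = 256, 32, 4, 8, 512   # keys, key dim, value dim, hash bits, tables

K = rng.normal(size=(n, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
V = rng.normal(size=(n, dv))
q = rng.normal(size=d); q /= np.linalg.norm(q)

p = np.exp(K @ q); p /= p.sum()          # softmax weights (the target distribution)
exact = p @ V                            # exact attention output for this query

def codes(X, planes):
    # tau-bit sign hash -> integer bucket id
    return (X @ planes.T > 0).astype(np.int64) @ (1 << np.arange(planes.shape[0]))

est = np.zeros(dv)
for _ in range(m):
    planes = rng.normal(size=(tau, d))
    bucket = np.flatnonzero(codes(K, planes) == codes(q[None, :], planes)[0])
    if bucket.size == 0:
        continue                         # empty bucket contributes zero
    i = rng.choice(bucket)               # one key, uniform within q's bucket
    q_col = (1 - np.arccos(np.clip(K[i] @ q, -1, 1)) / np.pi) ** tau
    est += p[i] * bucket.size / q_col * V[i]   # reweight: p * |H| / q
est /= m

print(exact)
print(est)    # noisy but unbiased estimate of the softmax-weighted average
```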
{ "heading": "4 YOSO-ATTENTION", "text": "We start from LSH-based importance sampling and seek to address some of the aforementioned issues when it is deployed for approximating self-attention. Instead of using LSH to simulate sampling from a proposal distribution over tokens, we view attention as a sum of tokens associated with Bernoulli random variables. This modification relates more closely to LSH itself than to LSH-based importance sampling: the probability of a query colliding with a key does not depend on the other keys. This avoids the sampling-dependency issue in LSH-based importance sampling and opens the door to a strategy more amenable to GPUs.

Remark 2. We assume that the input keys and queries of self-attention are unit length, to unify the dot-product similarity in self-attention with the cosine similarity in LSH. This is straightforward using Neyshabur & Srebro (2015): a temperature variable τ is used to bound the squared ℓ2 norm of all queries and keys and to construct new unit-length keys and queries that preserve the pairwise similarities. We can then work with the softmax matrix in the angular distance metric and derive our algorithm.

Self-Attention via Bernoulli Sampling. We aim to approximate self-attention, which uses a softmax matrix to capture the context dependency among tokens via their pairwise similarities. Suppose we can represent this context dependency directly using a collision probability $q(\cdot,\cdot)$; then the challenges discussed for importance sampling are resolved. The coincidence of the softmax probability $p(\cdot,\cdot)$ and the LSH collision probability $q(\cdot,\cdot)$ makes $q(\cdot,\cdot)$ a sensible starting point for approximating self-attention. Specifically, to model dependency based on similarity, the collision probability aligns well with the exponential function in softmax over the domain of interest [−1, 1], as shown in Figure 1: both functions have positive zeroth-, first- and second-order derivatives. Note that (a) a positive zeroth-order derivative indicates that the dependency is positive, (b) a positive first-order derivative ensures that the dependency is monotonic in the similarity, and (c) a positive second-order derivative means that low similarity corresponds to almost no dependency. This leads us to hypothesize that collision-based self-attention may be as effective as softmax-based self-attention. It can be formulated as

$$\sum_{i=1}^{n} \mathcal{B}_i(q, k_i)\, v_i \qquad (3)$$

where $\mathcal{B}_i(q,k_i)$ is a Bernoulli random variable whose success probability is given by the collision probability of q with the key $k_i$; hence, it is determined by the similarity between q and $k_i$. In a single hash, each $\mathcal{B}_i(q,k_i)$ generates a realization that determines whether the corresponding token will be part of the attention output. Conceptually, when sampling from the softmax distribution, only one token is sampled as the attention output. In contrast, Bernoulli sampling determines whether each individual token is part of the attention output. In principle, to determine the context dependency among tokens, you only need to sample once (YOSO), using a single hash to generate realizations of all Bernoulli random variables $\mathcal{B}_i(q,k_i)$, $i = 1, \ldots, n$. Specifically, when the keys are hashed into a hash table using a single hash, the realization of $\mathcal{B}_i(q,k_i)$ for a query q is 1 if q collides with $k_i$, and 0 otherwise. To our knowledge, using the LSH collision probability to replace the softmax dependencies in self-attention has not been studied before.

YOSO-Attention. By replacing the softmax dependencies with Bernoulli random variables and using LSH as an efficient sampler to estimate the success probabilities, we obtain an efficient self-attention (YOSO-Attention) that approximates softmax-based self-attention:

$$\mathrm{YOSO}(Q,K,V) = B(Q,K)V; \qquad \mathbb{E}[\mathrm{YOSO}(Q,K,V)] = \Big(1 - \frac{\arccos(QK^T)}{\pi}\Big)^{\tau} V \qquad (4)$$

where $B(Q,K)$ is the Bernoulli sampling matrix using m hashes,

$$B(Q,K)_{i,j} = \frac{1}{m}\sum_{k=1}^{m}\mathbb{1}_{f_k(Q_{i,:}) = f_k(K_{j,:})}, \qquad \text{where } f_k,\ k = 1,\ldots,m \text{ are hash functions.} \qquad (5)$$
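A direct numpy sketch of Eqs. 3–5 (our illustration, with all names assumed; intentionally O(n²) for clarity — the memory-efficient bucket-based version described in the next section avoids this cost) estimates $B(Q,K)V$ by averaging collision indicators over m τ-bit sign hashes and compares it with the closed-form expectation of Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, dv, tau, m = 128, 32, 4, 8, 256

Q = rng.normal(size=(n, d)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
K = rng.normal(size=(n, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
V = rng.normal(size=(n, dv))

# Expectation (Eq. 4): attention weights are tau-fold collision probabilities.
W = (1 - np.arccos(np.clip(Q @ K.T, -1.0, 1.0)) / np.pi) ** tau
expected = W @ V

# Monte-Carlo estimate (Eq. 5): average collision indicators over m sign hashes.
pows = 1 << np.arange(tau)
B = np.zeros((n, n))
for _ in range(m):
    planes = rng.normal(size=(tau, d))
    cq = (Q @ planes.T > 0).astype(np.int64) @ pows   # query hash codes
    ck = (K @ planes.T > 0).astype(np.int64) @ pows   # key hash codes
    B += cq[:, None] == ck[None, :]                   # collision indicators
B /= m
estimate = B @ V

print(np.abs(estimate - expected).max())  # shrinks as m grows
```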
Normalizing Attention. In standard self-attention, each row of the softmax matrix is normalized so that the dependencies sum to 1. Above, we discussed how the pairwise query-key dependencies can be estimated using Bernoulli sampling; we now present how to normalize the dependencies as in standard self-attention. One option is to first estimate the dependencies and then normalize them by the sum of the estimated dependencies, $B(Q,K)\mathbf{1}$, where $\mathbf{1}$ is the all-ones vector; $B(Q,K)\mathbf{1}$ can be computed via Eq. 4 by plugging $\mathbf{1}$ in for V. To make the estimation of self-attention more efficient, we instead adopt an ℓ2 normalization of the attention output, similar to Levy et al. (2015), who use ℓ2 normalization for word embeddings. Under ℓ2 normalization, the attention outputs are invariant to the scaling $B(Q,K)\mathbf{1}$. Therefore, we have

$$\text{N-YOSO}(Q,K,V) = \ell_2\big(B(Q,K)V\big) \qquad (6)$$

Empirically, the ℓ2 normalization does not affect the performance of our method, as expected; see Figure 3.

LSH-based Bernoulli Sampling. We now discuss how to implement Bernoulli sampling to approximate self-attention. While a standard LSH procedure could be used, maintaining hash tables that store keys is inefficient on a GPU: the GPU memory required for the hash tables cannot be predetermined, and the workload might be skewed due to skewed bucket sizes. To tackle this, we propose LSH-based Bernoulli sampling, which saves only the sum of the values corresponding to the hashed keys instead of storing a collection of hashed keys.

An overview of our algorithm is shown in Figure 2. To compute $Y = B(Q,K)V$, the procedure proceeds as follows. For each $k \in [1,\ldots,m]$, we sample a hash function $f_k$ and create a hash table $H^k \in \mathbb{R}^{2^\tau \times d}$ representing $2^\tau$ d-dimensional buckets. For each key $K_{j,:}$, we add the value $V_{j,:}$ to the bucket whose index is the hash code $f_k(K_{j,:})$, denoted $H^k_{f_k(K_{j,:})}$:

$$H^k_{f_k(K_{j,:})} \leftarrow H^k_{f_k(K_{j,:})} + V_{j,:} \qquad (7)$$

Note that the size of $H^k$ is $O(2^\tau d)$ and is independent of which buckets the keys are hashed to. Once all keys are processed for $k \in [1,\ldots,m]$, for each query $Q_{i,:}$ we maintain an output vector $Y_{i,:}$ initialized to 0. We then locate the bucket in $H^k$ using $f_k(Q_{i,:})$ for $k \in [1,\ldots,m]$ and add the corresponding bucket contents to the output vector $Y_{i,:}$:

$$Y_{i,:} \leftarrow Y_{i,:} + H^k_{f_k(Q_{i,:})} \qquad (8)$$

Therefore, each final output $Y_{i,:}$ can be computed as

$$Y_{i,:} = \frac{1}{m}\sum_{k=1}^{m}\sum_{j=1}^{n}\mathbb{1}_{f_k(Q_{i,:})=f_k(K_{j,:})}V_{j,:} = \sum_{j=1}^{n} B(Q,K)_{i,j}V_{j,:} \qquad (9)$$

Remark 3. The memory and time complexities of this algorithm are $O(m2^\tau d)$ and $O(nmd)$, respectively. In addition, both time and memory are independent of the sizes of the hash buckets. Further, we can improve the memory complexity to $O(m2^\tau)$ by reusing the hash table and processing a few dimensions at a time, without increasing the time complexity. The constant τ is small, as it controls the decay rate of the attention weight with respect to the angular distance between query and key, and it can be chosen as a function of $\log_2(n)$. In our experiments, τ is set to $\log_2(n)$.

Speed-up. While not essential, we find a fast random projection for computing the LSH hash codes to be beneficial, since this step takes a large portion of the overall runtime. As suggested by Andoni et al. (2015), we use an approximated random projection to reduce the time complexity to $O(nm\tau\log_2(d))$, allowing fast computation of the hash codes.
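Before turning to the backward pass, here is a compact numpy sketch of the forward bucket-accumulation procedure just described (Eqs. 7–9). It is our illustration of the idea rather than the paper's GPU implementation; the scatter/gather structure is what makes the cost $O(nmd)$, independent of bucket sizes.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, dv, tau, m = 512, 32, 4, 9, 128

Q = rng.normal(size=(n, d)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
K = rng.normal(size=(n, d)); K /= np.linalg.norm(K, axis=1, keepdims=True)
V = rng.normal(size=(n, dv))

pows = 1 << np.arange(tau)
Y = np.zeros((n, dv))
for _ in range(m):
    planes = rng.normal(size=(tau, d))
    ck = (K @ planes.T > 0).astype(np.int64) @ pows   # key bucket ids
    cq = (Q @ planes.T > 0).astype(np.int64) @ pows   # query bucket ids
    # Scatter (Eq. 7): accumulate value vectors into 2^tau buckets.
    H = np.zeros((2 ** tau, dv))
    np.add.at(H, ck, V)
    # Gather (Eq. 8): each query reads the contents of its own bucket.
    Y += H[cq]
Y /= m   # Eq. 9: Y is an unbiased estimate of B(Q, K) V

# Sanity check against the estimator's expectation (Eq. 4).
W = (1 - np.arccos(np.clip(Q @ K.T, -1.0, 1.0)) / np.pi) ** tau
print(np.abs(Y - W @ V).max())   # residual sampling error only
```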
Backpropagation through YOSO-Attention. For training, we also need the backward propagation steps for YOSO-Attention; we discuss this last component here, as it enables end-to-end and efficient training.

For backpropagation, the gradient of the loss L w.r.t. V can be estimated similarly to Eq. 4:

$$\nabla_V L = \Big(\big(1 - \tfrac{\arccos(QK^T)}{\pi}\big)^{\tau}\Big)^{T}(\nabla_{\mathrm{YOSO}}L) \approx B(K,Q)(\nabla_{\mathrm{YOSO}}L) \qquad (10)$$

The gradients of L w.r.t. Q and K are analogous, so we only provide the expression for Q:

$$\nabla_Q L = \Big[\big((\nabla_{\mathrm{YOSO}}L)V^T\big) \odot \Big(\tau\big(1-\tfrac{\arccos(QK^T)}{\pi}\big)^{\tau-1}\Big) \oslash \Big(\pi\sqrt{1-(QK^T)^2}\Big)\Big] K \qquad (11)$$

where $\oslash$ and $\odot$ denote element-wise division and multiplication. The problem with the true gradient is that it goes to infinity as the alignment score between a query and a key approaches 1, which might lead to divergence. To avoid this numerical issue, we use a lower bound on the actual derivative of the collision probability, $\big[\big((\nabla_{\mathrm{YOSO}}L)V^T\big)\odot\frac{\tau}{2}\big(1-\frac{\arccos(QK^T)}{\pi}\big)^{\tau}\big]K$ (see Figure 1), which can be efficiently estimated via a variation of LSH-based Bernoulli sampling. Specifically, the approximation can be decomposed into a sum of d LSH-based Bernoulli sampling estimates:

$$(\widehat{\nabla_Q L})_{i,:} = \sum_{l=1}^{d}(\nabla_{\mathrm{YOSO}}L)_{i,l}\sum_{j=1}^{n} B(Q,K)_{i,j}\Big(V_{j,l}\,\frac{\tau}{2}K_{j,:}\Big) \qquad (12)$$

Therefore, following LSH-based Bernoulli sampling, the memory complexity is $O(m2^\tau d^2)$ and the time complexity is $O(nmd^2)$. The $d^2$ term in memory can be eliminated by reusing the same hash tables $d^2$ times without increasing the runtime, improving the memory complexity to $O(m2^\tau)$. The overall complexity of our method and a comparison to standard self-attention are summarized in Table 1. Further, to address the quadratic dependence on d, we discuss in the Appendix a scheme that estimates the same quantity in time linear in d." }, { "heading": "5 RELATED WORKS", "text": "There are a number of efforts describing ways to reduce the quadratic cost of self-attention w.r.t. the input sequence length. Among these works, Linformer (Wang et al., 2020) argues that low-rank attention might be sufficient and adds linear projections (along the sequence) to fixed-size keys and values. There are also other low-rank approximation ideas (Katharopoulos et al., 2020; Choromanski et al., 2020) that use separable functions on queries and keys to replace softmax self-attention. By assuming the self-attention rank to be independent of the input sequence length, these methods achieve O(n) time and memory complexity. Another direction is to exploit the sparsity of the softmax matrix and focus on certain sparsity patterns by only computing softmax dependencies within those patterns, including Sparse Transformer (Child et al., 2019), Longformer (Beltagy et al., 2020), Big Bird (Zaheer et al., 2020) and Reformer (Kitaev et al., 2020). Note that, instead of using LSH as a tool for approximate nearest-neighbor search to dynamically determine the sparsity pattern as in Reformer, our YOSO-Attention takes advantage of the connection between query-key similarity and the LSH collision probability to model the dependency among tokens." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we provide the empirical results for the proposed approach. To evaluate our proposed method, we follow the BERT language model pretraining procedure (Devlin et al., 2018) and evaluate the performance of our method on both intrinsic tasks and multiple downstream tasks in the GLUE benchmark, as well as runtime and memory relative to standard self-attention. Previously, we assumed that queries and keys are unit length and described the construction that makes this work. 
In the experiments, we found that simply applying an ℓ2 normalization to queries and keys and using the temperature τ as a hyperparameter does not degrade the performance of the model and is more efficient to compute, so we use this simpler version in the experiments.

BERT Pretraining. Following Devlin et al. (2018), the model is pretrained on BookCorpus (Zhu et al., 2015) and English Wikipedia. To evaluate the capacity of the model for capturing sentence-level information, instead of using Next-Sentence-Prediction (NSP) as the sentence-level loss as in the original BERT, we adopt Sentence-Ordering-Prediction (SOP), proposed in ALBERT (Lan et al., 2019), as a more difficult task than NSP. All models are trained with Mask-Language-Modeling (MLM) and SOP objectives. We used the same pretraining hyperparameters as Devlin et al. (2018). However, due to computational resource limits, all models are trained for 500K steps. The batch size is set so that around $2^{17}$ tokens are processed per step (batch size 256 for sequence length 512, and batch size 32 for sequence length 4096).

Number of Hashes during Pretraining. Since the estimation variance decreases as the number of hashes increases, to evaluate the trade-off between efficiency and performance in YOSO, we test four hash settings: 16 hashes, 32 hashes, 64 hashes, and the expectation of collision to simulate infinitely many hashes. We plot the MLM validation perplexity and SOP validation loss curves of the 512-length models pretrained with softmax self-attention and with YOSO-Attention in the right plots of Figure 3. The curves of our method using the expectation match and slightly exceed softmax self-attention, indicating that our method is indeed as capable as self-attention. As the number of hashes increases, the performance of our method is expected to approach the curve using the expectation, as the approximation becomes more accurate. For both MLM and SOP, we confirm that our method is as effective as softmax self-attention.

Number of Hashes during Validation. YOSO-Attention is a stochastic model. To make inference deterministic, as in dropout (Srivastava et al., 2014), we take the expectation as our output. However, directly computing the expectation involves an O(n²) cost, so we experiment with the effect of different hash settings at validation and simulate the expectation as the number of hashes increases. We plot the MLM perplexity and SOP loss of the same pretrained models using different numbers of hashes for validation in the center plots of Figure 3. We observe that as the number of hashes increases, the MLM perplexity and SOP loss generally decrease for all pretraining hash settings.

Pretraining on Longer Sequences. To examine whether our method can scale linearly with sequence length, we continue to pretrain BERT-base models from the corresponding 500K-step checkpoints of the 512-length models and add additional positional embeddings as suggested in Beltagy et al. (2020). We observe that, compared to sequence length 512, the small performance gap between YOSO-Attention and softmax self-attention does not increase, as suggested in the left plots of Figure 3, providing evidence that the number of hashes can be chosen independently of the sequence length.

GLUE Tasks. In addition to intrinsic tasks, we examined the effectiveness of our method on diverse downstream tasks and ask how our method compares with standard attention even after finetuning. 
We finetuned all pretrained BERT-base models on the MRPC (Dolan & Brockett, 2005), RTE (Giampiccolo et al., 2007), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2016), QQP (Chen et al., 2018), and MNLI (Williams et al., 2018) tasks in the GLUE benchmark and report their corresponding dev metrics. For the large datasets, including QNLI, QQP, and MNLI, we could not do a hyperparameter search due to extensive resource needs, so we used a batch size of 32 and learning rate 3e-5 and finetuned our models for 4 epochs. For MRPC, RTE, and SST-2, we follow BERT finetuning and do a hyperparameter search with candidate batch sizes {8, 16, 32} and learning rates {2e-5, 3e-5, 4e-5, 5e-5}, selecting the best dev-set result. Results are listed in Table 2. We observe that YOSO's performance on downstream tasks is comparable with standard attention, and even slightly better in some hash settings. Further, the downstream performance of YOSO generally increases as more hashes are used, providing an adjustable trade-off between efficiency and accuracy.

Longer Sequence Task. To further evaluate YOSO on long-sequence tasks, we extended the positional embeddings of a trained YOSO-64 model and used it as an initialization to train a 4096-length YOSO-128 model with batch size 64 and learning rate 5e-5 on BookCorpus (Zhu et al., 2015), English Wikipedia, one third of Stories (Trinh & Le, 2018), and one third of Realnews (Zellers et al., 2019) for 100K steps, similar to Longformer pretraining (Beltagy et al., 2020). Then, we finetuned our model on WikiHop (Welbl et al., 2018). Due to computational resource limits, we only tested a small set of hyperparameters (batch size = 32, learning rate ∈ {1e-5, 2e-5, 4e-5}, number of epochs = 10). The dev accuracy is 73.7 for YOSO-128-E, which is comparable to the 73.8 of Longformer-512 (see caption in Table 3) without hyperparameter search, but slightly worse than the 75.0 that Longformer-512 achieves with hyperparameter search.

Comparisons to Baselines. Apart from comparing YOSO to standard self-attention, we also evaluated its competitiveness with other efficient attention methods. To keep the financial costs of these experiments reasonable, instead of training all methods from scratch, we used RoBERTa-base's pretrained weights as the starting point and trained each model with batch size 512 and learning rate 5e-5 on BookCorpus (Zhu et al., 2015) and English Wikipedia for 95K steps. Then, we finetuned the models on SST-2, QQP, and MNLI. These results are shown in Table 3. We observe that our performance is competitive with the other baselines while the memory consumption of YOSO is much lower (2.6×, 1.9×, and 2.1× memory savings compared to Reformer, Longformer, and Linformer, respectively; see Backward-Cache in Table 4). This has potential ramifications for training such models with more moderate, and much less expensive, hardware resources. Further, notice that YOSO is potentially applicable to a wider range of applications, especially where the input sequence represents an unordered set of high-dimensional points (where spatial locality of the input sequence may not hold).

Estimation Error. To assess the effectiveness of our algorithm, using Q, K from the trained model, we generated attention matrices with our algorithm using different numbers of hashes and compared them against standard self-attention. In Figure 4, visually, we see that our method produces similar attention patterns as standard self-attention. 
The estimation of the attention matrix becomes more accurate as the number of hashes increases. Further, each output of YOSO-Attention is a weighted sum of random variables as shown in Eq. 3, so one may suspect that as the sequence length increases, the variance of the YOSO-Attention output might also increase. We did not observe this behavior, which may be partly due to the hyperparameter τ = O(log(n)) that controls the decay rate of the LSH collision probability as the similarity changes. We can also ask whether the estimation error of YOSO-Attention for a fixed number of hashes increases with sequence length. We use Q, K, V generated by the pretrained model and estimate the error between N-YOSO(Q,K,V) and E[N-YOSO(Q,K,V)]. As the left plot of Figure 5 suggests, the relative error of our method stays almost constant as the sequence length increases from 128 to 4096. This indicates that using sampling to estimate the attention weights in YOSO-Attention scales with sequence length and preserves the same estimation quality without increasing the number of hashes.

[Figure 5 caption: (a) The relative error is defined as $\mathbb{E}\big[\|\mathbb{E}[\text{N-YOSO}(Q,K,V)] - \text{N-YOSO}(Q,K,V)\|_\infty \,/\, \|\mathbb{E}[\text{N-YOSO}(Q,K,V)]\|_\infty\big]$. It is estimated by computing E[N-YOSO(Q,K,V)] from the collision probability, estimating N-YOSO(Q,K,V) multiple times, and taking the mean relative error of the estimates as an estimate of the outer expectation. (b) The runtime per token is estimated by running N-YOSO(Q,K,V) multiple times, measuring the total time elapsed, and dividing by the number of iterations and the sequence length.]

Runtime and Memory. We measure the runtime of our method as the sequence length increases. To show the trend more precisely, we measured the runtime per token, as shown in Figure 5 (right). There is a slight increase in runtime per token as the sequence length increases, but note that the x-axis of the plot is log scale, so the increase is small. When the sequence length increases by 32×, the runtime per token only increases by 30%, which is explained by our choice of hyperparameter τ = O(log(n)). Beyond this plot, we report the training and testing efficiency of our method and of three other efficient attention methods against standard self-attention. The results were measured using Q, K, V of a specified sequence length generated by a trained model and fed into a BERT-base Multi-Head Attention module multiple times. The experiments were performed on a single NVIDIA 2080TI. From Table 4, we can see that while for a standard 512-length sequence our method has a similar runtime to self-attention, as the sequence length increases, the speed-up and memory savings become significant. While our method offers similar runtime savings to other efficient attention methods, the memory consumption for training (i.e., Backward-Cache) of our method is much lower than that of all other methods in almost all settings." }, { "heading": "7 CONCLUSION", "text": "We presented a transformer-based model, YOSO-Attention, that scales linearly in the number of input tokens. This allows the model to be applicable to a wide range of long-document NLP tasks. Via a randomized sampling based scheme, our model approximates self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash. 
With specific modifications of LSH, YOSO-Attention can be efficiently deployed within a deep learning framework, and we expect that various aspects of this idea and our implementation will find use in other novel settings and applications (e.g., in vision)." }, { "heading": "A APPENDIX", "text": "In the Appendix, we provide some details of our method that were left out of the main text.

Backpropagation Derivation. When using the expectation of LSH collision as attention weights, the attention of one query q to keys $k_i$ and associated values $v_i$ for all $i \in \{1, \ldots, n\}$ is defined as

$$y = \sum_{i=1}^{n}\Big(1 - \frac{\arccos(q^T k_i)}{\pi}\Big)^{\tau} v_i \qquad (13)$$

We want to compute the gradient of the loss w.r.t. q, denoted $\nabla_q L$, given the gradient of the loss w.r.t. y, denoted $\nabla_y L$. We start by computing the p-th entry of $\nabla_q L$:

$$\frac{\partial L}{\partial q_p} = \sum_{j=1}^{d}\frac{\partial L}{\partial y_j}\frac{\partial y_j}{\partial q_p} = \sum_{j=1}^{d}\frac{\partial L}{\partial y_j}\,\frac{\partial}{\partial q_p}\Big[\sum_{i=1}^{n}\Big(1-\frac{\arccos(q^T k_i)}{\pi}\Big)^{\tau} v_{ij}\Big] \qquad (14)$$

Then, using $\frac{d}{dx}\big(1-\frac{\arccos(x)}{\pi}\big)^{\tau} = \frac{\tau(1-\frac{\arccos(x)}{\pi})^{\tau-1}}{\pi\sqrt{1-x^2}}$ and plugging this into Eq. 14,

$$\frac{\partial L}{\partial q_p} = \sum_{j=1}^{d}\frac{\partial L}{\partial y_j}\sum_{i=1}^{n}\frac{\tau\big(1-\frac{\arccos(q^T k_i)}{\pi}\big)^{\tau-1}}{\pi\sqrt{1-(q^T k_i)^2}}\,k_{ip}\,v_{ij} \qquad (15)$$

After swapping the order of the two summations, Eq. 15 becomes

$$\frac{\partial L}{\partial q_p} = \sum_{i=1}^{n}(\nabla_y L)^T v_i\,\frac{\tau\big(1-\frac{\arccos(q^T k_i)}{\pi}\big)^{\tau-1}}{\pi\sqrt{1-(q^T k_i)^2}}\,k_{ip} \qquad (16)$$

Note that only $k_{ip}$ differs across the entries of $\nabla_q L$, so we can write

$$\nabla_q L = \sum_{i=1}^{n}(\nabla_y L)^T v_i\,\frac{\tau\big(1-\frac{\arccos(q^T k_i)}{\pi}\big)^{\tau-1}}{\pi\sqrt{1-(q^T k_i)^2}}\,k_i \qquad (17)$$

Equation 11 is the matrix form of the above:

$$\nabla_Q L = \Big[\big((\nabla_{\mathrm{YOSO}}L)V^T\big) \odot \Big(\tau\big(1-\tfrac{\arccos(QK^T)}{\pi}\big)^{\tau-1}\Big) \oslash \Big(\pi\sqrt{1-(QK^T)^2}\Big)\Big] K \qquad (18)$$

Note that $\pi\sqrt{1-(QK^T)^2}$ approaches 0 as the alignment score between a query and a key approaches 1, so we use the fact that $\frac{1}{2}\big(1-\frac{\arccos(x)}{\pi}\big) \le \frac{1}{\pi\sqrt{1-x^2}}$ for $x \in [-1, 1]$ and define a lower bound to replace the actual gradient:

$$\nabla_Q L \approx \Big[\big((\nabla_{\mathrm{YOSO}}L)V^T\big) \odot \frac{\tau}{2}\big(1-\tfrac{\arccos(QK^T)}{\pi}\big)^{\tau}\Big] K \qquad (19)$$

Approximating Random Projection in LSH. In the main text, we discussed how to estimate self-attention using Bernoulli sampling via LSH. The first step of using LSH is computing the hash code using random projection. To compute hash codes for a vector x, we proceed as follows:

$$F: \mathbb{R}^d \to \{0,1\}^{m\tau}, \qquad F(x) = \mathrm{sign}(Rx) \qquad (20)$$

where $R \in \mathbb{R}^{(m\tau)\times d}$ with $R_{ij} \sim \mathcal{N}(0,1)$, and the output vector is partitioned into m τ-dimensional binary hash codes. The time complexity of this random projection is $O(nm\tau d)$. To efficiently approximate random projection, we follow the construction used in Andoni et al. (2015). The mτ-dimensional output is divided into $\frac{m\tau}{d}$ d-dimensional vectors, and the hash codes are estimated by

$$F(x) = \mathrm{concat}\big(\mathrm{sign}(HD_3^1HD_2^1HD_1^1x),\ \ldots,\ \mathrm{sign}(HD_3^{\frac{m\tau}{d}}HD_2^{\frac{m\tau}{d}}HD_1^{\frac{m\tau}{d}}x)\big) \qquad (21)$$

where the $D_i^j$ are diagonal matrices with entries uniformly sampled from {−1, +1} and H is the Hadamard matrix. This approximation reduces the time complexity to $O(nm\tau\log_2(d))$.
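As an illustration of Eq. 21 (a minimal sketch with our own helper names; practical implementations use batched, vectorized transforms), one block of d sign bits can be produced with three diagonal sign flips interleaved with fast Walsh–Hadamard transforms, costing O(d log₂ d) instead of O(d²) per block:

```python
import numpy as np

def fwht(x):
    # Iterative fast Walsh-Hadamard transform; len(x) must be a power of 2.
    x = x.copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

rng = np.random.default_rng(4)
d = 64                               # power of 2
x = rng.normal(size=d)

# One block of sign bits: sign(H D3 H D2 H D1 x), with D_i random +/-1 diagonals.
D1, D2, D3 = (rng.choice([-1.0, 1.0], size=d) for _ in range(3))
y = fwht(D3 * fwht(D2 * fwht(D1 * x)))
bits = (y > 0).astype(int)
print(bits)                          # reusable as tau-bit hash codes
```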
Alternative Procedure for Approximating Backpropagation. In the main text, we provided a procedure, shown in Eq. 12, which uses LSH-based Bernoulli sampling d times as a subroutine. The complexity of this procedure is linear w.r.t. the sequence length n, which is desirable, but the runtime can be large if d is relatively large. We therefore provide a second procedure, which is linear with respect to d. The gradient of L w.r.t. the i-th row of Q is written as

$$(\widehat{\nabla_Q L})_{i,:} = \sum_{j=1}^{n}(\nabla_{\mathrm{YOSO}}L)_{i,:}^{T}V_{j,:}\;B(Q,K)_{i,j}\,\frac{\tau}{2}K_{j,:} \qquad (22)$$

Note that if $B(Q,K)_{i,j}$ is zero, the corresponding summation term does not need to be computed. The alternative procedure counts the number of successes in the m samples at each entry $B(Q,K)_{i,j}$ and only computes the summation term when $B(Q,K)_{i,j}$ is nonzero; thus the runtime is $O(\mathrm{nnz}(B(Q,K))(m+d))$ (counting the number of successes plus computing the nonzero terms). In the worst case, $\mathrm{nnz}(B(Q,K)) = n^2$, and it would be as expensive as dense matrix multiplication in complexity, and even worse in practice due to the large memory latency resulting from indirect memory access. However, in practice, $B(Q,K)$ is generally sparse if τ is set properly. Further, the first procedure guarantees a linear complexity scaling of our method for extremely long sequences. As an improvement, we can dynamically select one of these two methods based on runtime; then the time complexity is $O(\min(nmd^2, \mathrm{nnz}(B(Q,K))(m+d)))$." } ]
2020
YOU ONLY SAMPLE (ALMOST) ONCE: LINEAR COST SELF-ATTENTION VIA BERNOULLI SAMPLING
SP:f17c1ecc9bb74a6c267c54a8863d0fcd336f4fdf
[ "The paper proposes the discrete Gaussian based differentially private federated learning algorithm to achieve both differential privacy and communication efficiency in federated learning. In particular, it adds discrete Gaussian noise into client updates and uses secure aggregation to prevent the server from observing the individual updates. The algorithm satisfies RDP and has lower communication cost compared to the previous method cpSGD." ]
In this paper, we propose discrete Gaussian based differentially private federated learning (D2P-FED), a unified scheme to achieve both differential privacy (DP) and communication efficiency in federated learning (FL). In particular, compared with the only prior work taking care of both aspects, D2P-FED provides a stronger privacy guarantee, better composability and smaller communication cost. The key idea is to apply discrete Gaussian noise to the private data transmission. We provide a complete analysis of the privacy guarantee, communication cost and convergence rate of D2P-FED. We evaluated D2P-FED on INFIMNIST and CIFAR10. The results show that D2P-FED outperforms the state-of-the-art by 4.7% to 13.0% in terms of model accuracy while saving one third of the communication cost. The results might be surprising at first glance but are reasonable because the quantization level k in D2P-FED is independent of q. As long as q is large enough, the probability that the noise exceeds q is small and thus has negligible impact on the model accuracy.
[]
[ { "authors": [ "Naman Agarwal", "Ananda Theertha Suresh", "Felix Xinnan X Yu", "Sanjiv Kumar", "Brendan McMahan" ], "title": "cpsgd: Communication-efficient and differentially-private distributed sgd", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yossi Arjevani", "Ohad Shamir" ], "title": "Communication complexity of distributed convex learning and optimization", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Maria Florina Balcan", "Avrim Blum", "Shai Fine", "Yishay Mansour" ], "title": "Distributed learning, communication complexity and privacy", "venue": "In Conference on Learning Theory, pp", "year": 2012 }, { "authors": [ "Borja Balle", "Gilles Barthe", "Marco Gaboardi" ], "title": "Privacy amplification by subsampling: Tight analyses via couplings and divergences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Leon Bottou" ], "title": "The infinite mnist dataset", "venue": null, "year": 2007 }, { "authors": [ "Clément Canonne", "Gautam Kamath", "Thomas Steinke" ], "title": "The discrete gaussian for differential privacy", "venue": "arXiv preprint arXiv:2004.00010,", "year": 2020 }, { "authors": [ "Jiecao Chen", "He Sun", "David Woodruff", "Qin Zhang" ], "title": "Communication-optimal distributed clustering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Cynthia Dwork", "Guy N Rothblum", "Salil Vadhan" ], "title": "Boosting and differential privacy", "venue": "IEEE 51st Annual Symposium on Foundations of Computer Science,", "year": 2010 }, { "authors": [ "Robin C Geyer", "Tassilo Klein", "Moin Nabi" ], "title": "Differentially private federated learning: A client level perspective", "venue": "arXiv preprint arXiv:1712.07557,", "year": 2017 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Bargav Jayaraman", "Lingxiao Wang", "David Evans", "Quanquan Gu" ], "title": "Distributed learning without distress: Privacy-preserving empirical risk minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "H Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "arXiv preprint arXiv:1710.06963,", "year": 2017 }, { "authors": [ "Ilya Mironov" ], "title": "Rényi differential privacy", "venue": "IEEE 30th Computer Security Foundations Symposium (CSF),", "year": 2017 }, { "authors": [ "Milad Nasr", "Reza Shokri", "Amir Houmansadr" ], "title": "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership inference attacks against machine learning models", 
"venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Ananda Theertha Suresh", "X Yu Felix", "Sanjiv Kumar", "H Brendan McMahan" ], "title": "Distributed mean estimation with limited communication", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Stacey Truex", "Nathalie Baracaldo", "Ali Anwar", "Thomas Steinke", "Heiko Ludwig", "Rui Zhang", "Yi Zhou" ], "title": "A hybrid approach to privacy-preserving federated learning", "venue": "In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security,", "year": 2019 }, { "authors": [ "John N Tsitsiklis", "Zhi-Quan Luo" ], "title": "Communication complexity of convex optimization", "venue": "Journal of Complexity,", "year": 1987 }, { "authors": [ "Hongyi Wang", "Mikhail Yurochkin", "Yuekai Sun", "Dimitris Papailiopoulos", "Yasaman Khazaeni" ], "title": "Federated learning with matched averaging", "venue": "arXiv preprint arXiv:2002.06440,", "year": 2020 }, { "authors": [ "Chulin Xie", "Keli Huang", "Pin-Yu Chen", "Bo Li" ], "title": "Dba: Distributed backdoor attacks against federated learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Samuel Yeom", "Irene Giacomelli", "Matt Fredrikson", "Somesh Jha" ], "title": "Privacy risk in machine learning: Analyzing the connection to overfitting", "venue": "IEEE 31st Computer Security Foundations Symposium (CSF),", "year": 2018 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Nghia Hoang" ], "title": "Statistical model aggregation via parameter matching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Trong Nghia Hoang", "Yasaman Khazaeni" ], "title": "Bayesian nonparametric federated learning of neural networks", "venue": "arXiv preprint arXiv:1905.12022,", "year": 2019 }, { "authors": [ "Yuchen Zhang", "John Duchi", "Michael I Jordan", "Martin J Wainwright" ], "title": "Information-theoretic lower bounds for distributed statistical estimation with communication constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) is a popular machine learning paradigm that allows a central server to train models over decentralized data sources. In federated learning, each client performs training locally on their data source and only updates the model change to the server, which then updates the global model based on the aggregated local updates. Since the data stays locally, FL can provide better privacy protection than traditional centralized learning. However, FL is facing two main challenges: (1) FL lacks a rigorous privacy guarantee (e.g., differential privacy (DP)) and indeed, it has been shown to be vulnerable to various inference attacks (Nasr et al., 2019; Pustozerova & Mayer; Xie et al., 2019); (2) FL incurs considerable communication costs. In many potential applications of FL such as mobile devices, these two challenges are present simultaneously.\nHowever, privacy and communication-efficiency have mostly been studied independently in the past. As regards privacy, existing work has applied a gold-standard privacy notion – differential privacy (DP) – to FL, which ensures that the server could hardly determine the participation of each client by observing their updates (Geyer et al., 2017). To achieve DP, each client needs to inject noise to their local updates and as a side effect, the performance of the trained model would inevitably degrade. To improve model utility, secure multiparty computation (SMC) has been used in tandem with DP to reduce noise (Jayaraman et al., 2018; Truex et al., 2019). The key idea is to prevent the server from observing the individual updates, make only the aggregate accessible, and thus transform from local DP to central DP. However, SMC introduces extra communication overhead to each client. There has been extensive research on improving communication efficiency of FL while ignoring the privacy aspect (Tsitsiklis & Luo, 1987; Balcan et al., 2012; Zhang et al., 2013; Arjevani & Shamir, 2015; Chen et al., 2016). However, these communication reduction methods either have incompatible implementations with the existing DP mechanisms or would break the DP guarantees when combined with SMC.\nThe only existing work that tries to reconcile DP and communication efficiency in FL is cpSGD (Agarwal et al., 2018). The authors leveraged the Binomial mechanism, which adds Binomial noise into local updates to ensure differential privacy. The discrete nature of Binomial noise allows it to be transmitted efficiently. However, cpSGD faces several limitations when applied to real-world applications. Firstly, with Binomial noise, the output of a learning algorithm would have different supports on different input datasets; as a result, Binomial noise can only guarantee approx-\nimate DP where the participation of the client can be completely exposed with nonzero probability. Also, there lacks a tight composition for DP with Binomial noise and the resulting privacy budget skyrockets in a multi-round FL protocol. Hence, the Binomial mechanism cannot produce a useful model with a reasonable privacy budget on complex tasks. Last but not least, the Binomial mechanism involves several mutually constrained hyper-parameters and the privacy formula is extremely complicated, which makes hyper-parameter tuning a difficult task.\nIn this paper, we propose the discrete Gaussian based differential private federated learning (D2PFED), an alternative technique to reduce communication costs while maintaining differential privacy in FL. 
Our key idea is to leverage the discrete Gaussian mechanism in FL, which adds discrete Gaussian noise to the client updates. We show that the discrete Gaussian mechanism satisfies Rényi DP, which provides better composability. We employ secure aggregation along with the discrete Gaussian mechanism to lower the noise, and we exhibit the privacy guarantee for this hybrid privacy protection approach. To save communication cost, we integrate stochastic quantization and random rotation into the protocol. We then cast FL as a general distributed mean estimation problem and provide an analysis of the utility of the overall protocol. Our theoretical analysis sheds light on the superiority of D2P-FED over cpSGD. Our experiments show that D2P-FED leads to state-of-the-art performance in terms of managing the trade-off among privacy, utility, and communication." }, { "heading": "2 RELATED WORK", "text": "It is well studied how to improve the communication cost in traditional distributed learning settings (Tsitsiklis & Luo (1987); Balcan et al. (2012); Zhang et al. (2013); Arjevani & Shamir (2015); Chen et al. (2016)). However, most of these approaches either require communication between the workers or are designed for specific learning tasks, so they cannot be applied directly to general-purpose FL. The most relevant work is Suresh et al. (2017), which proposes using stochastic quantization to save communication cost and random rotation to lower the mean squared error of the estimated mean. We follow their approach to improve the communication efficiency and model utility of D2P-FED. Nevertheless, our work differs from theirs in that we also study how to ensure DP for rotated and quantized data transmission, and we prove a convergence result for the learning algorithm with both the communication-cost reduction and privacy protection steps in place.

On the other hand, differentially private FL has undergone rapid development over the past few years (Geyer et al. (2017); McMahan et al. (2017); Jayaraman et al. (2018)). However, these methods mainly focus on improving utility under a small privacy budget and ignore the issue of communication cost. In particular, we adopt a hybrid approach similar to Truex et al. (2019), which combines SMC with DP to reduce the noise. SMC ensures that the central server can only see the aggregated update, not the individual updates from the clients; as a result, the noise added by each client can be reduced by a factor of the number of clients participating in a round. Our work differs from theirs in that we inject discrete Gaussian noise into the local updates instead of continuous Gaussian noise. This allows us to use secure aggregation (Bonawitz et al., 2017), which is much cheaper than the threshold homomorphic encryption used by Truex et al. (2019). We further study the interaction between the discrete Gaussian noise and secure aggregation, as well as their effects on learning convergence.

We identify cpSGD (Agarwal et al., 2018) as the work most comparable to D2P-FED. Just like D2P-FED, cpSGD aims to improve both the communication cost and the utility under a rigorous privacy guarantee. However, cpSGD suffers from the three main defects discussed in Section 1. This paper proposes to use the discrete Gaussian mechanism to mitigate these issues." }, { "heading": "3 BACKGROUND AND NOTATION", "text": "In this section, we provide an overview of FL and DP and establish our notation. We use bold lower-case letters (e.g. 
a, b, c) to denote vectors, and bold upper-case letters (e.g. A, B, C) for matrices. We denote $1 \cdots n$ by [n].

FL Overview. In an FL system, there are one server and n clients $C_i$, $i \in [n]$. The server holds a global model of dimension d. Each client holds (IID or non-IID) samples drawn from some unknown distribution $\mathcal{D}$. The goal is to learn the global model $w \in \mathbb{R}^d$ that minimizes some loss function $\mathcal{L}(w, \mathcal{D})$. To achieve this, the system runs a T-round FL protocol. The server initializes the global model with $w_0$. In round $t \in [T]$, the server randomly sub-samples $\gamma n$ clients from [n] with sub-sampling rate γ and broadcasts the global model $w_{t-1}$ to the chosen clients. Each chosen client $C_i$ then runs a local optimizer (e.g., SGD, Adam, or RMSprop), computes the difference between the locally optimized model $w_t^{(i)}$ and the global model $w_{t-1}$, $g_t^{(i)} = w_t^{(i)} - w_{t-1}$, and uploads $g_t^{(i)}$ to the server. The server takes the average of the differences and updates the global model: $w_t = w_{t-1} + \frac{1}{\gamma n}\sum_i g_t^{(i)}$.

Communication in FL. The clients in FL are often edge devices, where the upload bandwidth is fairly limited; therefore, communication efficiency is of utmost importance to FL. Let π denote a communication protocol. We denote the per-round communication cost by $\mathcal{C}(\pi, g^{[n]})$. To lower the communication cost, the difference vectors are typically compressed before being sent to the server. The compression degrades model performance, and we measure the performance loss via the mean squared error. Specifically, letting $\bar{g} = \frac{1}{n}\sum_{i=1}^{n} g^{(i)}$ denote the actual mean of the difference vectors and $\tilde{g}$ denote the server's estimate of the mean under some protocol such as D2P-FED, we measure the performance loss by $\mathcal{E}(\pi, g^{[n]}) = \mathbb{E}[\|\tilde{g} - \bar{g}\|^2]$, i.e., the mean squared error between the estimated and the actual mean. This mean squared error is directly related to the convergence rate of FL (Agarwal et al., 2018).

Threat Model & Differential Privacy. We assume that the server is honest-but-curious. Namely, the server will follow the protocol honestly under law enforcement or reputation pressure, but is curious to learn about the client-side data from the legitimate client-side messages. In the FL context, the server wants to extract information about the client-side data by studying the local updates it receives, without deviating from the protocol.

The above attack, widely known as the inference attack (Shokri et al., 2017; Yeom et al., 2018; Nasr et al., 2019), can be effectively mitigated using a canonical privacy notion, namely differential privacy (DP). Intuitively, DP, in the context of ML, ensures that the trained model is nearly the same regardless of the participation of any individual client.

Definition 1 ((ε, δ)-DP). A randomized algorithm $f : \mathcal{D} \to \mathcal{R}$ is (ε, δ)-differentially private if for every pair of neighboring datasets D and D′ that differ in only one datapoint, and every possible (measurable) output set E, the following inequality holds: $P[f(D) \in E] \le e^{\varepsilon} P[f(D') \in E] + \delta$.

(ε, δ)-DP has been used as the privacy notion in most existing works on privacy-preserving FL. However, in this paper, we consider a generalization of DP, Rényi differential privacy (RDP), which is strictly stronger than (ε, δ)-DP for δ > 0 and allows a tighter analysis when composing multiple mechanisms. This second point is particularly appealing, as FL mostly comprises multiple rounds, yet the existing works suffer from skyrocketing privacy budgets in multi-round learning.

Definition 2 ((α, ε)-RDP). 
For two probability distributions P and Q with the same support, the Rényi divergence of order α > 1 is defined by $D_\alpha(P\|Q) \triangleq \frac{1}{\alpha-1}\log \mathbb{E}_{x\sim Q}\big(\frac{P(x)}{Q(x)}\big)^{\alpha}$. A randomized mechanism $f : \mathcal{D} \to \mathcal{R}$ is (α, ε)-RDP if for any neighboring datasets $D, D' \in \mathcal{D}$ it holds that $D_\alpha(f(D)\|f(D')) \le \varepsilon$.

The intuition behind RDP is the same as for other variants of differential privacy: "similar inputs should yield similar output distributions," where the similarity is measured by the Rényi divergence under RDP. RDP can also be converted to (ε, δ)-DP using the following transformation.

Lemma 1 (RDP-DP conversion (Mironov (2017))). If M obeys (α, ε)-RDP, then M obeys $(\varepsilon + \log(1/\delta)/(\alpha-1), \delta)$-DP for all 0 < δ < 1.

RDP enjoys an operationally convenient and quantitatively accurate way of tracking cumulative privacy loss when composing multiple mechanisms (Lemma 2) or being combined with subsampling (Wang et al., 2018). As a result, RDP is particularly suitable for the context of ML.

Lemma 2 (Adaptive composition of RDP (Mironov (2017))). If a (randomized) mechanism $M_1$ obeys $(\alpha, \varepsilon_1)$-RDP and $M_2$ obeys $(\alpha, \varepsilon_2)$-RDP, then their composition obeys $(\alpha, \varepsilon_1 + \varepsilon_2)$-RDP." }, { "heading": "4 DISCRETE GAUSSIAN MECHANISM", "text": "In this section, we present the discrete Gaussian mechanism and establish its privacy guarantee. We first introduce the discrete Gaussian distribution.

Definition 3 (Discrete Gaussian Distribution). A discrete Gaussian is a probability distribution on a discrete additive subgroup L (for instance, a multiple of $\mathbb{Z}$) parameterized by σ. For a discrete Gaussian distribution $N_L(\sigma)$ and $x \in L$, the probability mass at x is proportional to $e^{-x^2/(2\sigma^2)}$.

The discrete Gaussian mechanism works by adding noise drawn from the discrete Gaussian distribution. Canonne et al. (2020) proved concentrated DP for the discrete Gaussian mechanism. However, concentrated DP lacks tight privacy-amplification and composition theorems. To address this, we turn to RDP and provide the first RDP analysis of the discrete Gaussian mechanism. The proof is deferred to Appendix A due to space limitations.

Theorem 1 (RDP for the discrete Gaussian mechanism). If f has sensitivity 1 and $\mathrm{range}(f) \subseteq L$, then the discrete Gaussian mechanism $f(\cdot) + N_L(\sigma)$ satisfies $(\alpha, \alpha/(2\sigma^2))$-RDP.

Under RDP, the discrete Gaussian exhibits a tight privacy-amplification bound under sub-sampling (Wang et al., 2018). This suits FL well, since a subset of clients is sub-sampled to upload updates in each round.

Corollary 1 (Privacy amplification for the discrete Gaussian mechanism (Wang et al., 2018)). If a discrete Gaussian mechanism is $(\alpha, \frac{\alpha}{2\sigma^2})$-RDP, then, augmented with subsampling (without replacement), the privacy guarantee is amplified to (1) $(\alpha, O(\frac{\alpha\gamma^2}{\sigma^2}))$ in the high-privacy regime; or (2) $(\alpha, O(\alpha\gamma^2 e^{1/\sigma^2}))$ in the low-privacy regime.

Besides, RDP enables the discrete Gaussian mechanism to be composed tightly via the analytical moments accountant (Wang et al., 2018), which saves a large amount of privacy budget in a multi-round FL protocol. The analytical moments accountant is a data structure that tracks the cumulant generating function of the composed mechanisms symbolically. Since it has no closed-form solution, we instead introduce the canonical composition of RDP (Mironov, 2017) below, for ease of discussion in Section 6.3.

Corollary 2 (Composition for the discrete Gaussian mechanism (Wang et al., 2018)). If a discrete Gaussian mechanism is $(\alpha, \frac{\alpha}{2\sigma^2})$-RDP, then the sequential composition of T such mechanisms yields an $(\alpha, \frac{T\alpha}{2\sigma^2})$-RDP guarantee. If we convert the RDP guarantees back to (ε, δ)-DP, the growth of ε under the same δ is asymptotically $O(\sqrt{T})$.

Note that both the privacy amplification and the composition are given in asymptotic form for clarity of presentation. For tight bounds, we refer the reader to Theorem 27 and Section 3.3 in Wang et al. (2018).
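To make the accounting concrete, the sketch below (our illustration; the names and the truncated sampler are assumptions — exact discrete Gaussian samplers exist, e.g., in Canonne et al. (2020)) draws discrete Gaussian noise on $\mathbb{Z}$ and converts the composed RDP guarantee of Theorem 1 / Corollary 2 into an (ε, δ)-DP bound via Lemma 1, optimizing over α:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_discrete_gaussian(sigma, size):
    # Illustrative sampler: normalize e^{-x^2/(2 sigma^2)} on a generously
    # truncated integer support; the tail mass beyond 12*sigma is negligible.
    t = int(np.ceil(12 * sigma))
    xs = np.arange(-t, t + 1)
    p = np.exp(-xs.astype(float) ** 2 / (2 * sigma ** 2))
    p /= p.sum()
    return rng.choice(xs, size=size, p=p)

sigma, T, delta = 4.0, 100, 1e-5
print(sample_discrete_gaussian(sigma, size=10))

# Per release (sensitivity 1): (alpha, alpha/(2 sigma^2))-RDP  [Theorem 1].
# T-fold composition: (alpha, T*alpha/(2 sigma^2))-RDP          [Corollary 2].
# Convert to (eps, delta)-DP with Lemma 1 and optimize over alpha.
alphas = np.linspace(1.01, 200.0, 2000)
eps = T * alphas / (2 * sigma ** 2) + np.log(1 / delta) / (alphas - 1)
print(eps.min())   # grows as O(sqrt(T)) for fixed sigma and delta
```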
Note that both the privacy-amplification and the composition results are given in asymptotic form for clarity of presentation. For the tight bounds, we refer the reader to Theorem 27 and Section 3.3 in Wang et al. (2018).

5 D2P-FED: ALGORITHM AND PRIVACY ANALYSIS

In this section, we formally present D2P-FED and provide a rigorous privacy analysis." }, { "heading": "5.1 ALGORITHM", "text": "Algorithm 1 provides the pseudocode for D2P-FED. It follows the general FL pipeline, which iteratively performs the following steps: (1) the server broadcasts the global model to a subset of clients; (2) the selected clients train the global model on their local data and upload the resulting model differences; and (3) the server aggregates the model differences uploaded by the clients and updates the global model. On top of this general FL pipeline, D2P-FED introduces the following additional client-side steps to improve communication efficiency and privacy.

Stochastic Quantization & Random Rotation (lines 11-14). To lower the communication cost, the clients stochastically quantize the values in the update vectors to a discrete domain. Compared with real-number encoding, which costs 32 or 64 bits per dimension, a quantized value requires only $\log_2(k)$ bits per dimension, where $k$ is the number of quantization levels (see lines 12-14 in Algorithm 1 and McMahan et al. (2016) for a detailed explanation). On the other hand, quantization lowers the fidelity of the update vector and thus introduces some error into the estimate of the mean of the gradients. To lower the estimation error, the clients apply a random rotation to the updates before quantization, as proposed by McMahan et al. (2016). The details are discussed in Section 6.1.

Discrete Gaussian Mechanism (line 15). We apply the discrete Gaussian mechanism to ensure DP. To determine the noise magnitude to be added, we need to bound the $\ell_2$-sensitivity (defined in Section 5.2) of the gradient aggregate. Without quantization and random rotation, one could clip the individual gradient updates so that the $\ell_2$-sensitivity is simply the clipping threshold. However, the inclusion of the compression steps makes the analysis of the $\ell_2$-sensitivity more involved. We provide the analysis of the sensitivity and the RDP guarantees for the entire algorithm in Section 5.2. Each client samples its noise from the discrete Gaussian distribution. In contrast to prior work where each client adds independent noise, we require the clients to share the same random seed, generate the same noise, and each add an average share of that noise in each round. This is because the sum of multiple independent discrete Gaussians is no longer a discrete Gaussian. Note that the same random seed is also required for communication reduction in secure aggregation, so we can conveniently reuse it here without introducing further overhead (see Bonawitz et al. (2017) for details). The noise magnitude is set so that the aggregate noise from all clients provides the global DP guarantee.
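The shared-seed trick described above can be checked numerically. The following toy illustration (hypothetical sizes, with the same stand-in sampler as before) shows that with a common seed, the $\gamma n$ per-client shares $\nu/(\gamma n)$ sum back to exactly one lattice-valued draw of $N_\mathbb{Z}(\sigma)$, whereas independently drawn shares do not.

```python
import numpy as np

def discrete_gaussian(sigma, size, rng, tail=12):
    """Toy truncated-window sampler for N_Z(sigma) (illustrative stand-in)."""
    B = int(np.ceil(tail * sigma)) + 1
    xs = np.arange(-B, B + 1)
    p = np.exp(-xs.astype(float) ** 2 / (2 * sigma ** 2))
    return rng.choice(xs, size=size, p=p / p.sum())

m, sigma, d, seed = 10, 4.0, 6, 1234      # m = gamma * n chosen clients

# Shared seed: every client regenerates the SAME nu and contributes nu / m,
# so the aggregate is exactly one N_Z(sigma) draw on the lattice.
shared = sum(discrete_gaussian(sigma, d, np.random.default_rng(seed)) / m
             for _ in range(m))
print(shared, np.allclose(shared, np.round(shared)))   # integral aggregate

# Independent seeds: the aggregate is generally not lattice-valued and is not
# distributed as a single discrete Gaussian.
indep = sum(discrete_gaussian(sigma, d, np.random.default_rng(seed + i)) / m
            for i in range(m))
print(indep)
```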
Algorithm 1: D2P-FED Protocol.

Input: support lattice $L = \frac{2g^{\max}}{k-1}\cdot\mathbb{Z}$; noise scale $\sigma$; rotation matrix $R$; random seed $s$; quantization level $k$; and $\phi_q(x) = \frac{2g^{\max}}{k-1}\big(\big(\frac{k-1}{2g^{\max}}x + \frac{q-1}{2}\big) \bmod q - \frac{q-1}{2}\big)$, with $q$ odd.

1:  for $t \in [T]$ do
2:    Server:
3:      Sub-sample a subset of clients $S \subset [n]$ with $|S| = \gamma n$, and broadcast $w_{t-1}$ and $g^{\max}$ to $S$
4:    Client:
5:      foreach client $i \in S$ do: send $u_{ij}$ to $j$, where $u_{ij} \sim \mathrm{Unif}(L_q^d)$ and $L_q := \{x \in L : |x| \le \frac{q-1}{2}\}$
6:    Client:
7:      foreach client $i \in S$ do
8:        Train the model $w_t^{(i)}$ with $w_{t-1}$ as initialization
9:        $g_t^{(i)} = w_t^{(i)} - w_{t-1}$  /* compute the difference */
10:       $g_{t,\mathrm{clipped}}^{(i)} = g_t^{(i)} / \max(1, \|g_t^{(i)}\|_2 / D)$  /* clip the difference */
11:       $g_{t,\mathrm{rotated}}^{(i)} = R \times g_{t,\mathrm{clipped}}^{(i)}$  /* random rotation */
12:       let $b[r] := -g^{\max} + \frac{2r\,g^{\max}}{k-1}$ for every $r \in [0, k)$  /* quantize */
13:       for $j \in [d]$, with $r$ such that $b[r] \le g_{t,\mathrm{rotated}}^{(i)}[j] \le b[r+1]$, do
14:         $\tilde{g}_{t,\mathrm{quantized}}^{(i)}[j] = b[r+1]$ w.p. $\frac{g_{t,\mathrm{rotated}}^{(i)}[j] - b[r]}{b[r+1] - b[r]}$, and $b[r]$ otherwise
15:       $\tilde{g}_{t,\mathrm{dp}}^{(i)} = \tilde{g}_{t,\mathrm{quantized}}^{(i)} + \frac{\nu_i}{\gamma n}$, where $\nu_i \overset{s}{\sim} N_L^d(\sigma)$  /* discrete Gaussian */
16:       $\tilde{g}_t^{(i)} = \phi_q\big(\phi_q(\tilde{g}_{t,\mathrm{dp}}^{(i)}) + \sum_{j\neq i, j\in S} u_{ij} - \sum_{j\neq i, j\in S} u_{ji}\big)$  /* mask */
17:       Send $\tilde{g}_t^{(i)}$ to the server
18:    Server:
19:      $\tilde{g}_t = \frac{1}{\gamma n}\sum_{i\in S} \tilde{g}_t^{(i)}$  /* aggregate */
20:      $w_t = w_{t-1} + \tilde{g}_t$

Secure aggregation (lines 5, 16, 19). To reduce the noise magnitude needed to ensure DP, we hide the clients' individual updates from the central server and only allow it to see the aggregated update, via the technique of secure aggregation. If the individual updates were visible to the central server, each of them would have to be protected with the same privacy guarantee as the averaged update; in that case, the required noise scales up with $O(\gamma n)$. On the other hand, if the central server can only access the aggregated update, then the required noise is $O(1)$. Hence, secure aggregation of local updates leads to a significant noise reduction. However, there is a challenge in integrating secure aggregation with the discrete Gaussian mechanism: discrete Gaussian variables have infinite support and are thus incompatible with secure aggregation, which operates on a finite field. In Section 5.2, we address this challenge by mapping the noised vector to a cyclic additive group before applying secure aggregation, and we show that the RDP guarantees are preserved under this mapping.
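To make the client-side steps concrete, below is a minimal NumPy sketch of lines 10-16 for a single client. This is an illustration under our own naming, not the authors' implementation: the discrete Gaussian sampler is the truncated-window stand-in from before, the rotation is a dense random orthogonal matrix rather than a fast structured transform, and the pairwise masks $u_{ij}$ are omitted since they cancel in the aggregate.

```python
import numpy as np

def discrete_gaussian(sigma, size, rng, tail=12):
    """Toy sampler for N_Z(sigma) on a wide truncated window (stand-in only)."""
    B = int(np.ceil(tail * sigma)) + 1
    xs = np.arange(-B, B + 1)
    p = np.exp(-xs.astype(float) ** 2 / (2 * sigma ** 2))
    return rng.choice(xs, size=size, p=p / p.sum())

def client_update(g, D, g_max, k, q, sigma, m, R, shared_seed):
    """Lines 10-16 of Algorithm 1 for one client; m = gamma * n chosen clients."""
    g = g / max(1.0, np.linalg.norm(g) / D)                 # line 10: L2 clip
    g = R @ g                                               # line 11: rotate
    levels = np.linspace(-g_max, g_max, k)                  # line 12: b[0..k-1]
    step = levels[1] - levels[0]
    x = np.clip(g, -g_max, g_max)
    r = np.minimum(np.floor((x - levels[0]) / step).astype(int), k - 2)
    up = np.random.random(len(g)) < (x - levels[r]) / step  # lines 13-14: unbiased
    g_quant = levels[r + up]                                #   stochastic rounding
    rng = np.random.default_rng(shared_seed)                # same seed on all clients
    nu = step * discrete_gaussian(sigma, len(g), rng)       # line 15: lattice noise,
    g_dp = g_quant + nu / m                                 #   average share nu/m
    wrap = lambda v: step * (np.mod(v / step + (q - 1) / 2, q) - (q - 1) / 2)
    return wrap(g_dp)                                       # line 16 (masks omitted)

rng = np.random.default_rng(0)
d = 32
R, _ = np.linalg.qr(rng.normal(size=(d, d)))
g = rng.normal(size=d) * 0.1
out = client_update(g, D=1.0, g_max=0.5, k=33, q=257, sigma=4.0, m=10,
                    R=R, shared_seed=7)
print(out[:5])
```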
" }, { "heading": "5.2 PRIVACY ANALYSIS", "text": "Before applying the discrete Gaussian mechanism to D2P-FED, we need to determine how to calibrate the added noise. In differential privacy, the calibration is guided by the sensitivity of the function, defined below. Definition 4 ($\ell_2$-sensitivity). Given a function $f: \mathcal{D} \to \mathcal{R}$ and two neighboring datasets $D$ and $D'$, the $\ell_2$-sensitivity of $f$ is defined as $\Delta f = \max_{D, D'} \|f(D) - f(D')\|_2$.

In DP for deep learning, the traditional way to bound the $\ell_2$-sensitivity is to clip the update vector. However, quantization further influences the sensitivity after clipping. We give the post-quantization sensitivity below; the proof is deferred to Appendix B due to space limitations. Theorem 2. If we clip the $\ell_2$ norm of $g$ to $D$ and quantize it to $k = \sqrt{d} + 1$ levels, then the $\ell_2$-sensitivity of the difference is $4D$.

Given Theorem 1 and Theorem 2, we obtain the RDP bound for D2P-FED. Corollary 3 (RDP for D2P-FED). Given the clipping bound $D$ and the noise scale $\sigma$, D2P-FED satisfies $(\alpha, \frac{8\alpha D^2}{\sigma^2})$-RDP.

Remark 1: Comparison with cpSGD. It may seem unclear how to interpret the above bound when compared with Theorem 1 in cpSGD (Agarwal et al., 2018). The claim that D2P-FED has a better privacy guarantee than cpSGD is mainly justified by the following three aspects: (1) D2P-FED follows RDP, which is a strictly stronger privacy notion than the $(\epsilon, \delta)$-DP to which cpSGD is intrinsically limited; (2) D2P-FED enjoys a tighter composition than cpSGD, which is of critical significance in an FL protocol with potentially thousands of rounds; (3) our experimental results in Figure 1a also empirically show that D2P-FED composes more tightly than cpSGD: the total privacy budget of D2P-FED grows much more slowly than that of cpSGD as training proceeds.

Remark 2: Privacy Effect of Secure Aggregation. Corollary 3 is built on the assumption that the central server only has access to the summed updates and not the individual ones. If the central server had access to individual updates, the noise would have to scale up by a factor of $\gamma n$ to maintain the same privacy guarantee, which would severely hurt model accuracy. To enforce this assumption, we leverage a cryptographic technique, secure aggregation (Bonawitz et al., 2017), which guarantees that the central server can only see the aggregated result. The basic intuition is to mask the inputs with random values that cancel out in pairs. However, since the discrete Gaussian has infinite support, we cannot directly apply random masks to it. To reconcile secure aggregation with the discrete Gaussian, we project the involved values into a quotient group after shifting and then apply the random masks, as shown in line 16 of Algorithm 1. By the post-processing theorem of RDP (Mironov (2017)), the result still satisfies rigorous Rényi differential privacy, as proved in Appendix C. Note that we consider a simplified version of the full secure aggregation protocol (Bonawitz et al. (2017)) in Algorithm 1 and omit many interesting details, such as the generation of the random masks and how to deal with dropouts. We deem this sufficient to clarify the idea behind the reconciliation; the complete secure aggregation protocol can be reconciled using exactly the same trick. Theorem 3 (Informal). The distributed discrete Gaussian mechanism with secure aggregation obeys the same RDP guarantee as the vanilla global discrete Gaussian mechanism with the same parameters.
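To see why the double application of $\phi_q$ in line 16 reconciles the infinite-support noise with finite-field masking, here is a toy three-client check (hypothetical parameters, values already on the lattice) that the pairwise masks cancel and the server recovers $\phi_q$ of the true sum, in the spirit of Lemma 3 and Theorem 3.

```python
import numpy as np

q, g_max, k = 101, 1.0, 11                 # q odd; lattice step = 2 g_max / (k-1)
step = 2 * g_max / (k - 1)
phi_q = lambda x: step * (np.mod(x / step + (q - 1) / 2, q) - (q - 1) / 2)

rng = np.random.default_rng(1)
vals = step * rng.integers(-3, 4, size=(3, 4))        # 3 clients' lattice vectors
masks = step * rng.integers(-50, 50, size=(3, 3, 4))  # u_ij (from i to j)

def masked_upload(i):
    # line 16: phi_q(phi_q(g_dp) + sum_j u_ij - sum_j u_ji)
    others = [j for j in range(3) if j != i]
    return phi_q(phi_q(vals[i]) + sum(masks[i, j] for j in others)
                 - sum(masks[j, i] for j in others))

server_view = phi_q(sum(masked_upload(i) for i in range(3)))
assert np.allclose(server_view, phi_q(vals.sum(axis=0)))  # masks cancel exactly
print(server_view)
```

Each individual upload looks uniformly random to the server, yet the wrapped sum equals $\phi_q$ of the noised aggregate, provided the true sum does not overflow the group (the event bounded in Theorem 4).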
" }, { "heading": "6 COMMUNICATION PROTOCOL & UTILITY ANALYSIS", "text": "In this section, we present our communication protocol in detail and discuss the communication cost and the estimation error of D2P-FED in direct comparison with cpSGD. The drastic improvement of D2P-FED mainly comes from the tight composition of the discrete Gaussian mechanism compared with the binomial mechanism in cpSGD." }, { "heading": "6.1 COMMUNICATION PROTOCOL", "text": "As the first step, we leverage the stochastic $k$-level quantization proposed by McMahan et al. (2016) to lower the communication cost, as described in lines 12-14 of Algorithm 1. If we denote vanilla stochastic $k$-level quantization by $\pi_k$, then the per-round communication cost is reduced to $C(\pi_k, g^{[n]}) = n \cdot (d\lceil\log_2 k\rceil + \tilde{O}(1))$. However, stochastic quantization sacrifices some accuracy for communication efficiency. Concretely, $\mathcal{E}(\pi_k, g^{[n]}) = O(\frac{d}{n} \cdot \frac{1}{n}\sum_{i=1}^n \|g^{(i)}\|_2^2)$. Since the number of parameters $d$ ranges from tens of thousands to hundreds of thousands in federated learning, this estimation error of the mean is too large. Thus, to reduce the estimation error, we randomly rotate the difference vectors (McMahan et al., 2016) as the second step. The key intuition is that the MSE of stochastic uniform quantization is $O(\frac{d}{n}(g^{\max})^2)$. With random rotation, we can limit $g^{\max}$ to $\sqrt{\frac{\log d}{d}}$ w.h.p., so the MSE improves to $O(\frac{\log d}{n})$. Agarwal et al. (2018) also leverage random rotation to reduce the MSE. However, in their setting, random rotation intrinsically harms their privacy guarantee, because the $\ell_\infty$-sensitivity might increase under rotation. A natural advantage of the discrete Gaussian is that its privacy guarantee depends only on the $\ell_2$-sensitivity, which is invariant under rotation; thus, random rotation does not harm our privacy guarantee at all. We omit the details here and refer the interested reader to McMahan et al. (2016). We denote the protocol using $k$-level quantization and random rotation by $\pi_k^{(rot)}$. The cost $C(\pi_k^{(rot)}, g^{[n]})$ remains the same, while the MSE is reduced to $\mathcal{E}(\pi_k^{(rot)}, g^{[n]}) = O(\frac{\log d}{n} \cdot \frac{1}{n}\sum_{i=1}^n \|g^{(i)}\|_2^2)$.
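The rotation argument above can be illustrated numerically (our own quick experiment, not from the paper): a vector whose mass concentrates on one coordinate forces a large $g^{\max}$, while after a random rotation the largest coordinate shrinks toward $\sqrt{\log d / d}$ times the norm, so the same $k$ levels yield a much smaller quantization MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 17

def quantization_mse(g, g_max, trials=100):
    """Empirical MSE of unbiased k-level quantization onto [-g_max, g_max]."""
    levels = np.linspace(-g_max, g_max, k)
    step = levels[1] - levels[0]
    x = np.clip(g, -g_max, g_max)
    r = np.minimum(np.floor((x - levels[0]) / step).astype(int), k - 2)
    p_up = (x - levels[r]) / step
    err = [np.sum((levels[r + (rng.random(d) < p_up)] - g) ** 2)
           for _ in range(trials)]
    return np.mean(err)

g = rng.normal(size=d) * 0.02
g[0] = 1.0                                   # one dominant coordinate
Q, _ = np.linalg.qr(rng.normal(size=(d, d))) # stand-in for a structured rotation
g_rot = Q @ g

print(np.abs(g).max(), np.abs(g_rot).max())  # ~1.0 vs roughly sqrt(log d / d)*||g||
print(quantization_mse(g, np.abs(g).max()),
      quantization_mse(g_rot, np.abs(g_rot).max()))
```

In practice a randomized Hadamard transform is used instead of a dense orthogonal matrix, which keeps the rotation at $O(d \log d)$ time.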
6.2 CONVERGENCE RATE OF D2P-FED

In this section, we relate the convergence rate to the mean squared error using Corollary 4 and analyze the mean squared error of mean estimation in D2P-FED. Note that we assume each client executes one iteration in each round, so $g$ equals the gradient or belongs to the sub-gradients.

Corollary 4 (Ghadimi & Lan (2013)). Let $F(w) = L(w, \mathcal{D})$ for some given distribution $\mathcal{D}$. Let $F(w)$ be $L$-smooth and $\forall w, \|\nabla F(w)\| \le \rho$. Let $w_0$ satisfy $F(w_0) - F(w^*) \le \rho_F$. Then after $T$ rounds,

$\mathbb{E}_{t\sim \mathrm{Unif}([T])}[\|\nabla F(w_t)\|_2^2] \le \frac{2\rho_F L}{T} + \frac{2\sqrt{2}\,\lambda\sqrt{L\rho_F}}{\sqrt{T}} + \rho B$,

where $\lambda^2 = \max_{1\le t\le T} 2\mathbb{E}[\|g(w_t) - \nabla F(w_t)\|_2^2] + 2\max_{1\le t\le T}\mathbb{E}_q[\|g(w_t) - \tilde{g}(w_t)\|_2^2]$, and $B = \max_{1\le t\le T}\|g(w_t) - \tilde{g}(w_t)\|$.

As Corollary 4 indicates, for a given gradient bound, the convergence bound grows with the mean squared error, i.e., a smaller MSE yields faster convergence. Thus, we analyze D2P-FED's MSE and obtain the following theorem. The proof is deferred to Appendix D due to space limitations. Theorem 4. If we choose $\sigma \ge 1/\sqrt{2\pi}$, the mean squared error is

$\mathcal{E}(\pi^{(rot)}_{k,q,N_L(\sigma^2)}, g^{[n]}) \le \Big(1 - \frac{1}{1+3e^{-2\pi^2\sigma^2}}(1-\Phi(nq))\Big)\cdot\frac{4d(g^{\max})^2}{n(k-1)^2}\Big(\frac{1}{4} + \frac{\sigma^2}{\gamma^2 n^2}\Big) + (1-\Phi(n(q-k-1)))\cdot q^2$,

where $\Phi$ is the cumulative distribution function (CDF) of the standard normal distribution.

Remark 1: Choice of $g^{\max}$. As indicated by Theorem 4, the dominant term in the MSE is proportional to the square of $g^{\max}$. A natural choice of $g^{\max}$ is the clipping bound $D$. If we want to match the MSE guarantee of cpSGD, $O(\frac{\sigma^2\log(d)}{n(k-1)^2})$, we need to inherit their choice of $g^{\max} = O(D\sqrt{\frac{\log(d)}{d}})$, which can be achieved by clipping the $\ell_\infty$ norm of the gradient after random rotation. For instance, according to Lemma 8 in Agarwal et al. (2018), we can choose $g^{\max} = \frac{2\sqrt{\log(\frac{2nd}{\delta})}\,D}{\sqrt{d}}$. In that case, the probability that the maximum of $g$ exceeds $g^{\max}$ is at most $\delta$, and it follows that the probability that the $\ell_\infty$-clipping actually changes the update is bounded by $\delta$. Hence, the RHS of the MSE bound in Theorem 4 becomes $(1 - \frac{1}{1+3e^{-2\pi^2\sigma^2}}(1-\Phi(nq)) - \delta)\cdot\frac{4d(g^{\max})^2}{n(k-1)^2}(\frac{1}{4}+\frac{\sigma^2}{\gamma^2 n^2}) + (1-\Phi(n(q-k-1)) + \delta)\cdot q^2$, which is of the same order as the original bound when $\delta$ is small.

Remark 2: Comparison with cpSGD. Since the MSE bound is of the same order as cpSGD's (even the constants are close!), a natural question is: "What is the advantage of D2P-FED over cpSGD in terms of MSE?" Indeed, the advantage stems from a smaller standard deviation of the noise. Given a fixed privacy budget, due to the tighter composition of sub-sampled RDP (Wang et al., 2018), each round of D2P-FED can be allocated more privacy budget and thus a smaller noise scale. Plugging a smaller $\sigma$ into Theorem 4 gives a better MSE and thus a better convergence rate, as cpSGD obeys a similar convergence-rate bound. Moreover, according to Figure 1 in Agarwal et al. (2018), even at the same noise scale, Gaussian noise provides a stronger privacy guarantee than binomial noise. Since discrete Gaussian noise obeys the same RDP bound as Gaussian noise, we believe the discrete Gaussian maps noise of the same scale to a lower privacy cost.

6.3 COMMUNICATION COST OF D2P-FED

First, we state the straightforward per-round communication cost, which is exactly the same as cpSGD's. Theorem 5. The per-round communication cost of D2P-FED is $C(\pi^{(rot)}_{k,q,N_L(\sigma^2)}, g^{[n]}) = n \cdot (d\log(nq+1) + \tilde{O}(1))$.

Now let us compare the number of rounds in cpSGD and D2P-FED qualitatively. Note that during the following discussion, we usually omit $\delta$ for ease of exposition and assume that $\delta$ is fixed. For cpSGD, the tightest known bound is given by the combination of standard privacy amplification (Balle et al., 2018) and advanced composition (Dwork et al., 2010). Concretely, if a mechanism costs privacy budget $\epsilon$, then after composition with sub-sampling the privacy budget is reduced to $O(\gamma\epsilon)$, where $\gamma$ is the sub-sampling rate. If the mechanism is composed sequentially $T$ times, the privacy budget grows to $O(\sqrt{T\log(1/\delta)}\,\epsilon)$. Thus, the total privacy budget of cpSGD is $O(\gamma\sqrt{T\log(1/\delta)}\,\epsilon)$. On the other hand, D2P-FED provides a total privacy budget of $O(\gamma\sqrt{T}\,\epsilon)$, saving a factor of $\sqrt{\log(1/\delta)}$. Since $\delta$ is typically very small, the saving is quite significant. If the privacy budgets of the two protocols are the same, then D2P-FED can use noise with an $O(\sqrt{\log(1/\delta)})$-times smaller scale than cpSGD in each round. This leads to an $O(\sqrt{\log(1/\delta)})$-times faster convergence. Then, for a given gradient bound, D2P-FED can reach it with $O(\log(1/\delta))$-times fewer rounds and thus save an $O(\log(1/\delta))$ factor in communication cost.

Both D2P-FED and cpSGD intrinsically require secure aggregation to establish their privacy guarantee. Agarwal et al. (2018) did not discuss this issue explicitly. As pointed out in Bonawitz et al. (2017), once combined with secure aggregation, each field has to expand by at least a factor of $\gamma n$ (the number of chosen clients) to prevent overflow of the sum.
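The round-count comparison above can be made concrete with the two asymptotic budget formulas (constants suppressed; purely illustrative):

```python
import numpy as np

def eps_cpsgd(T, eps0, gamma, delta):
    """Amplification + advanced composition: O(gamma sqrt(T log(1/delta)) eps0)."""
    return gamma * np.sqrt(T * np.log(1 / delta)) * eps0

def eps_d2pfed(T, eps0, gamma):
    """Tight RDP composition: O(gamma sqrt(T) eps0)."""
    return gamma * np.sqrt(T) * eps0

gamma, eps0, delta = 0.01, 0.5, 1e-5
for T in (100, 1000, 10000):
    r = eps_cpsgd(T, eps0, gamma, delta) / eps_d2pfed(T, eps0, gamma)
    print(T, round(r, 2))   # constant ratio ~ sqrt(log(1/delta)), independent of T
```

With $\delta = 10^{-5}$ the ratio is about $\sqrt{\ln(10^5)} \approx 3.4$, which is the constant-factor saving the text attributes to the RDP analysis.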
" }, { "heading": "7 EVALUATION", "text": "We would like to answer the following three questions via empirical evaluation: (1) How does D2P-FED perform in multi-round federated learning compared with cpSGD, under either the same privacy guarantee or the same communication cost? (2) How do different choices of hyper-parameters affect the performance of D2P-FED? (3) Does D2P-FED work under heterogeneous data distributions? Due to space limitations, we present our main results for (1) in this section and defer the results for (2) and (3) to Appendix E." }, { "heading": "7.1 EXPERIMENT SETUP", "text": "To answer the above questions, we evaluated D2P-FED and cpSGD on INFIMNIST (Bottou (2007)) and CIFAR10 (Krizhevsky et al. (2009)). We sampled 10M hand-written digits from INFIMNIST and randomly split the data among 100K clients. In each round, 100 clients are randomly chosen to upload their difference vectors to train a three-layer MLP. For CIFAR10, we select 10 out of 2000 clients in each round to train a two-layer convolutional network. All RDP bounds are converted to $(\epsilon, \delta)$-DP for ease of comparison, and the total $\delta$ is set to $10^{-5}$ for all experiments." }, { "heading": "7.2 MODEL ACCURACY VS. PRIVACY BUDGET", "text": "To answer the first question, we studied model accuracy under the same privacy budget, as shown in Figure 1a. Compared with cpSGD, D2P-FED achieves 4.7% higher model accuracy on INFIMNIST and 13.0% higher model accuracy on CIFAR10 after convergence. As expected, D2P-FED composes far more tightly under sub-sampling, as its curves are much sharper than those of cpSGD. Consequently, D2P-FED also converges at a smaller privacy budget than cpSGD. Although in Figure 1a cpSGD has better accuracy in the high-privacy region, this is not necessarily the case in general; it depends on the scale of the discrete Gaussian noise, as studied in Section E.1. Note that the results for cpSGD in Figure 1 differ from those in Figure 2 of the original paper (Agarwal et al., 2018). The reason is that in the original paper, clients are not sub-sampled in each round but are assigned to exactly one round beforehand, to avoid the composition that cpSGD cannot handle well. However, that scheme is far from practical in the real world due to the dynamic nature of the clients." }, { "heading": "7.3 MODEL ACCURACY VS. COMMUNICATION COST", "text": "To answer the second question, we also studied model accuracy under the same communication cost. As shown in Figure 1b, D2P-FED consistently achieves better model accuracy under the same communication cost on both INFIMNIST and CIFAR10. The main reason is that the tight composition property allows D2P-FED to use a smaller per-feature communication cost while still achieving better accuracy. As a concrete instance, D2P-FED with a 50% compression rate can achieve better accuracy than cpSGD with a 25% compression rate; cpSGD with a 50% compression rate either leads to an unacceptable privacy budget or does not converge." }, { "heading": "8 CONCLUSION", "text": "In this work, we developed D2P-FED to achieve both differential privacy and communication efficiency in the context of federated learning. By applying the discrete Gaussian mechanism to the private data transmission, D2P-FED provides a stronger privacy guarantee, better composability and a smaller communication cost than the only prior work, cpSGD, both theoretically and empirically." }, { "heading": "A PROOF FOR THEOREM 1", "text": "We consider the Rényi divergence between two discrete Gaussian distributions that differ in their means.

Proof.

$D_\alpha(N_L(0, \sigma^2)\,\|\,N_L(\mu, \sigma^2))$
$\overset{(1)}{=} \frac{1}{\alpha-1}\log \frac{1}{\sum_L \exp(-(x-\mu)^2/(2\sigma^2))} \sum_L \exp(-\alpha x^2/(2\sigma^2)) \cdot \exp(-(1-\alpha)(x-\mu)^2/(2\sigma^2))$
$= \frac{1}{\alpha-1}\log \frac{1}{\sum_L \exp(-(x-\mu)^2/(2\sigma^2))} \sum_L \exp\big((-x^2 + 2(1-\alpha)\mu x - (1-\alpha)\mu^2)/(2\sigma^2)\big)$
$\overset{(2)}{\le} \frac{1}{\alpha-1}\log\big\{\exp((\alpha^2-\alpha)\mu^2/(2\sigma^2))\big\}$
$= \alpha\mu^2/(2\sigma^2).$

(1) Because $\mathrm{range}(f) \subseteq L$, we only consider $\mu \in L$; thus the normalizers of $N_L(0, \sigma^2)$ and $N_L(\mu, \sigma^2)$ cancel out, as $\sum_L \exp(-(x-\mu)^2/(2\sigma^2))$ takes the same value for every $\mu \in L$ by periodicity. (2) Completing the square gives $\sum_L \exp((-x^2 + 2(1-\alpha)\mu x - (1-\alpha)\mu^2)/(2\sigma^2)) = \exp((\alpha^2-\alpha)\mu^2/(2\sigma^2)) \sum_L \exp(-(x-(1-\alpha)\mu)^2/(2\sigma^2))$, and $\frac{\sum_L \exp(-(x-(1-\alpha)\mu)^2/(2\sigma^2))}{\sum_L \exp(-(x-\mu)^2/(2\sigma^2))} = \frac{\vartheta((1-\alpha)\pi\mu, e^{-\pi^2})}{\vartheta(0, e^{-\pi^2})} \le 1$, where $\vartheta$ is the Jacobi theta function (Wikipedia).
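As a numerical sanity check on Theorem 1 (our own experiment, not part of the paper), one can evaluate the Rényi divergence between two shifted discrete Gaussians over a wide truncated window and compare it with $\alpha\mu^2/(2\sigma^2)$:

```python
import numpy as np

def renyi_dg(mu, sigma, alpha, B=200):
    """D_alpha(N_Z(0,sigma^2) || N_Z(mu,sigma^2)), computed in log-space; the
    shared normalizer cancels exactly as in step (1) of the proof (mu integer)."""
    xs = np.arange(-B, B + 1).astype(float)
    lp = -xs ** 2 / (2 * sigma ** 2)
    lq = -(xs - mu) ** 2 / (2 * sigma ** 2)
    lZ = np.log(np.exp(lp - lp.max()).sum()) + lp.max()   # common normalizer
    t = alpha * (lp - lZ) + (1 - alpha) * (lq - lZ)
    m = t.max()
    return (m + np.log(np.exp(t - m).sum())) / (alpha - 1)

sigma = 4.0
for mu in (1, 2, 5):
    for alpha in (2, 8, 32):
        print(mu, alpha, round(renyi_dg(mu, sigma, alpha), 5),
              "<=", round(alpha * mu ** 2 / (2 * sigma ** 2), 5))
```

The computed divergences sit essentially at the bound, confirming that the theta-function ratio in step (2) is very close to 1 for moderate $\sigma$.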
" }, { "heading": "B PROOF FOR THEOREM 2", "text": "Proof. The $\ell_2$-sensitivity of the clipped difference is naturally bounded by $2D$. The rotation does not change the sensitivity. The $k$-level quantization might expand the space, as shown in Figure 2: an upper bound on the radius of the red circle is $D + \sqrt{d}\,\frac{D}{k-1}$. When we take $k = \sqrt{d} + 1$, this reduces to $2D$. Thus, the upper bound on the sensitivity is $4D$." }, { "heading": "C PROOF FOR THEOREM 3", "text": "We first prove the following lemma.

Lemma 3. $\sum \phi_q(x) = \phi_q(\sum x)$.

Proof.
$\sum \phi_q(x) = \sum \frac{2g^{\max}}{k-1}\Big(\Big(\frac{k-1}{2g^{\max}}x + \frac{q-1}{2}\Big) \bmod q - \frac{q-1}{2}\Big)$
$= \frac{2g^{\max}}{k-1}\Big(\Big(\frac{k-1}{2g^{\max}}\sum x + \frac{q-1}{2}\Big) \bmod q - \frac{q-1}{2}\Big)$
$= \phi_q\Big(\sum x\Big),$
where the second equality holds after the final reduction by $\phi_q$, i.e., as an identity in the quotient group.

Then we prove Theorem 3 as follows.

Proof. Given Lemma 3,
$\tilde{g}_t = \frac{1}{\gamma n}\sum_{i\in S} \tilde{g}_t^{(i)}$
$= \frac{1}{\gamma n}\sum_{i\in S} \phi_q\Big(\phi_q(\tilde{g}_{t,dp}^{(i)}) + \sum_{j\neq i, j\in S} u_{ij} - \sum_{j\neq i, j\in S} u_{ji}\Big)$
$= \frac{1}{\gamma n}\,\phi_q\Big(\sum_{i\in S}\Big(\phi_q(\tilde{g}_{t,dp}^{(i)}) + \sum_{j\neq i, j\in S} u_{ij} - \sum_{j\neq i, j\in S} u_{ji}\Big)\Big)$
$= \frac{1}{\gamma n}\,\phi_q\Big(\sum_{i\in S} \tilde{g}_{t,dp}^{(i)}\Big),$
where the last step uses the fact that the pairwise masks cancel out in the sum.

The expression $\sum_{i\in S} \tilde{g}_{t,dp}^{(i)}$ forms a centralized discrete Gaussian mechanism, and according to the post-processing theorem, the same RDP guarantee still holds." }, { "heading": "D PROOF FOR THEOREM 4", "text": "Proof Sketch. Before starting the proof, we introduce two lemmas.

Lemma 4 (Proposition 19 from Canonne et al. (2020)). For all $\sigma \in \mathbb{R}$ with $\sigma > 0$, $\mathbb{V}[N_\mathbb{Z}(0, \sigma^2)] \le \sigma^2\big(1 - \frac{4\pi^2\sigma^2}{e^{4\pi^2\sigma^2} - 1}\big) < \sigma^2$. Moreover, if $\sigma^2 \le 1/3$, then $\mathbb{V}[N_\mathbb{Z}(0, \sigma^2)] \le 3 \cdot e^{-\frac{1}{2\sigma^2}}$.

Lemma 5 (Proposition 23 from Canonne et al. (2020)). For all $m \in \mathbb{Z}$ with $m \ge 1$ and all $\sigma \in \mathbb{R}$ with $\sigma > 0$, $P_{X\sim N_\mathbb{Z}(0,\sigma^2)}[X \ge m] \le P_{X\sim N(0,\sigma^2)}[X \ge m-1]$. Moreover, if $\sigma \ge 1/\sqrt{2\pi}$, we have $P_{X\sim N_\mathbb{Z}(0,\sigma^2)}[X \ge m] \ge \frac{1}{1+3e^{-2\pi^2\sigma^2}}\, P_{X\sim N(0,\sigma^2)}[X \ge m]$.

Now we start our proof. The MSE can be rewritten as
$\mathbb{E}[\|\hat{\bar{X}} - \bar{X}\|_2^2] = \frac{1}{n^2}\sum_{j=1}^{d}\sum_{i=1}^{n}\mathbb{E}[(\hat{\bar{X}}_i(j) - X_i(j))^2].$
For each term $\star = \mathbb{E}[(\hat{\bar{X}}_i(j) - X_i(j))^2]$, we need to consider two cases. If no overflow happens, then by Lemma 4,
$\star_{\neg o} \le \mathbb{E}\Big[\Big(\frac{2X^{\max}}{k-1}\Big)^2\big(\mathbb{V}(\mathrm{Ber}(p_i(j))) + \mathbb{V}(N_L(\sigma))\big)\Big] \le \frac{4(X^{\max})^2}{(k-1)^2}\Big(\frac{1}{4} + \frac{\sigma^2}{\gamma^2 n^2}\Big).$
If overflow happens, we trivially have $\star_o \le k^2$. Thus,
$\star = P[\neg o]\cdot\star_{\neg o} + P[o]\cdot\star_o \le P_{X\sim N_L(\sigma^2)}[X \le q]\cdot\star_{\neg o} + P_{X\sim N_L(\sigma^2)}[X \ge q-k]\cdot\star_o. \quad (1)$
With Lemma 5, we can bound the two probabilities and obtain the final MSE result in the theorem." }, { "heading": "E OTHER EVALUATION", "text": "E.1 INFLUENCE OF NOISE SCALE

The hyper-parameter of the most vital interest in D2P-FED is the scale of the noise. To understand its effect, we evaluated D2P-FED on INFIMNIST with three different choices of noise scale, as shown in Figure 3. Unsurprisingly, the higher the noise scale, the smaller the privacy budget and the lower the model accuracy. This also supports the claim in Section 7.2 that D2P-FED can perform relatively well in the high-privacy region, at the cost of model accuracy.

E.2 INFLUENCE OF HETEROGENEOUS DATA DISTRIBUTION

It is well known that data is sometimes heterogeneously distributed among the clients of a federated learning system. To better understand D2P-FED's behavior under heterogeneous data distributions, we simulated heterogeneity by distributing the INFIMNIST data according to class labels and evaluated D2P-FED on these clients. The results are shown in Figure 4: under the heterogeneous data distribution, the model accuracy drops by more than 10%. This complies with previous empirical results, and a line of research has focused on addressing this issue (Yurochkin et al., 2019a;b; Wang et al., 2020). Although orthogonal to this paper, how to integrate these works with D2P-FED is an interesting open problem.

E.3 INFLUENCE OF GROUP SIZE q

We also ran D2P-FED with multiple choices of the discrete group size $q$. We observe that once the noise scale $\sigma$ is fixed, the performance is relatively robust to $q$." } ]
2020
null
SP:20a4cfac4c8e66208f4a4bd6b2ceeb3c8cabac3a
[ "This paper proposes a reliable multi-view classification mechanism equipped with uncertainty, called Trusted Multi-View Classification. The goal is to dynamically assess the quality of different views for different samples to provide reliable uncertainty estimation. The idea is clear and well-motivated. The authors perform empirical studies on diverse datasets to conclude that the proposed algorithm is effective, robust and reliable. " ]
Multi-view classification (MVC) generally focuses on improving classification accuracy by using information from different views, typically integrating them into a unified comprehensive representation for downstream tasks. However, it is also crucial to dynamically assess the quality of a view for different samples in order to provide reliable uncertainty estimations, which indicate whether predictions can be trusted. To this end, we propose a novel multi-view classification method, termed trusted multi-view classification, which provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The algorithm jointly utilizes multiple views to promote both classification reliability and robustness by integrating evidence from each view. To achieve this, the Dirichlet distribution is used to model the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness for out-of-distribution samples. Extensive experimental results validate the effectiveness of the proposed model in accuracy, reliability and robustness.
[ { "affiliations": [], "name": "Zongbo Han" }, { "affiliations": [], "name": "Changqing Zhang" }, { "affiliations": [], "name": "Huazhu Fu" }, { "affiliations": [], "name": "Joey Tianyi Zhou" } ]
[ { "authors": [ "Shotaro Akaho" ], "title": "A kernel method for canonical correlation analysis", "venue": "arXiv preprint cs/0609071,", "year": 2006 }, { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep canonical correlation analysis", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "José M Bernardo", "Adrian FM Smith" ], "title": "Bayesian theory, volume 405", "venue": null, "year": 2009 }, { "authors": [ "Yunlong Bian", "Chuang Gan", "Xiao Liu", "Fu Li", "Xiang Long", "Yandong Li", "Heng Qi", "Jie Zhou", "Shilei Wen", "Yuanqing Lin" ], "title": "Revisiting the effectiveness of off-the-shelf temporal modeling approaches for large-scale video classification", "venue": "arXiv preprint arXiv:1708.03805,", "year": 2017 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural network", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "AP Dempster" ], "title": "Upper and lower probabilities induced by a multivalued mapping", "venue": "The Annals of Mathematical Statistics,", "year": 1967 }, { "authors": [ "Arthur P Dempster" ], "title": "A generalization of bayesian inference", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1968 }, { "authors": [ "John S Denker", "Yann LeCun" ], "title": "Transforming neural-net output levels to probability distributions", "venue": "In Advances in Neural Information Processing Systems,", "year": 1991 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Li Fei-Fei", "Pietro Perona" ], "title": "A bayesian hierarchical model for learning natural scene categories", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories", "venue": "In 2004 Conference on Computer Vision and Pattern Recognition workshop,", "year": 2004 }, { "authors": [ "Bela A Frigyik", "Amol Kapila", "Maya R Gupta" ], "title": "Introduction to the dirichlet distribution and related processes. 
Department of Electrical Engineering, University of Washington, UWEETR-2010-0006", "venue": null, "year": 2010 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Bayesian convolutional neural networks with bernoulli approximate variational inference", "venue": "arXiv preprint arXiv:1506.02158,", "year": 2015 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "David J Hand", "Robert J Till" ], "title": "A simple generalisation of the area under the roc curve for multiple class classification problems", "venue": "Machine learning,", "year": 2001 }, { "authors": [ "Kaveh Hassani", "Amir Hosein Khasahmadi" ], "title": "Contrastive multi-view representation learning on graphs", "venue": "arXiv preprint arXiv:2006.05582,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Jay Heo", "Hae Beom Lee", "Saehoon Kim", "Juho Lee", "Kwang Joon Kim", "Eunho Yang", "Sung Ju Hwang" ], "title": "Uncertainty-aware attention for reliable interpretation and prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Harold Hotelling" ], "title": "Relations between two sets of variates", "venue": "In Breakthroughs in statistics,", "year": 1992 }, { "authors": [ "Audun Jøsang" ], "title": "Subjective Logic: A formalism for reasoning under uncertainty", "venue": null, "year": 2018 }, { "authors": [ "Audun Jøsang", "Robin Hankin" ], "title": "Interpretation and fusion of hyper opinions in subjective logic", "venue": "In 2012 15th International Conference on Information Fusion,", "year": 2012 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Douwe Kiela", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Efficient large-scale multi-modal classification", "venue": "arXiv preprint arXiv:1802.02892,", "year": 2018 }, { "authors": [ "Douwe Kiela", "Suvrat Bhooshan", "Hamed Firooz", "Davide Testuggine" ], "title": "Supervised multimodal bitransformers for classifying images and text", "venue": "arXiv preprint arXiv:1909.02950,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estíbaliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "Hmdb: a large video database for human motion recognition", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, {
"authors": [ "David JC MacKay" ], "title": "Bayesian methods for adaptive models", "venue": "PhD thesis, California Institute of Technology,", "year": 1992 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Jooyoung Moon", "Jihyo Kim", "Younghak Shin", "Sangheum Hwang" ], "title": "Confidence-aware learning for deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Richard J Perrin", "Anne M Fagan", "David M Holtzman" ], "title": "Multimodal techniques for diagnosis and prognosis of alzheimer’s disease", "venue": null, "year": 2009 }, { "authors": [ "Rajesh Ranganath", "Sean Gerrish", "David Blei" ], "title": "Black box variational inference", "venue": "In Artificial Intelligence and Statistics,", "year": 2014 }, { "authors": [ "Murat Sensoy", "Lance Kaplan", "Melih Kandemir" ], "title": "Evidential deep learning to quantify classification uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kari Sentz", "Scott Ferson" ], "title": "Combination of evidence in Dempster-Shafer theory, volume 4015", "venue": null, "year": 2002 }, { "authors": [ "Glenn Shafer" ], "title": "A mathematical theory of evidence, volume 42", "venue": "Princeton university press,", "year": 1976 }, { "authors": [ "Nathan Silberman", "Derek Hoiem", "Pushmeet Kohli", "Rob Fergus" ], "title": "Indoor segmentation and support inference from rgbd images", "venue": "In European Conference on Computer vision,", "year": 2012 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Jing Sui", "Shile Qi", "Theo GM van Erp", "Juan Bustillo", "Rongtao Jiang", "Dongdong Lin", "Jessica A Turner", "Eswar Damaraju", "Andrew R Mayer", "Yue Cui" ], "title": "Multimodal neuromarkers in schizophrenia via cognition-guided mri fusion", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Joost van Amersfoort", "Lewis Smith", "Yee Whye Teh", "Yarin Gal" ], "title": "Uncertainty estimation using a single deep deterministic neural network", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Chong Wang" ], "title": "Variational bayesian approach to canonical correlation analysis", "venue": "IEEE Transactions on Neural Networks,", "year": 2007 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Weiran Wang", "Xinchen Yan", "Honglak Lee", "Karen Livescu" ], "title": "Deep variational canonical correlation analysis", "venue": "arXiv preprint arXiv:1610.03454,", "year": 2016 }, { "authors": [ "Weiyao Wang", "Du Tran", "Matt Feiszli" ], "title": "What makes training multi-modal classification networks hard", "venue": "In Proceedings 
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xin Wang", "Devinder Kumar", "Nicolas Thome", "Matthieu Cord", "Frederic Precioso" ], "title": "Recipe recognition with large multimodal food dataset", "venue": "In 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW),", "year": 2015 }, { "authors": [ "Changqing Zhang", "Zongbo Han", "Huazhu Fu", "Joey Tianyi Zhou", "Qinghua Hu" ], "title": "Cpm-nets: Cross partial multi-view networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Heng Zhang", "Vishal M Patel", "Rama Chellappa" ], "title": "Hierarchical multimodal metric learning for multimodal classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multi-view data, typically associated with multiple modalities or multiple types of features, often exists in real-world scenarios. State-of-the-art multi-view learning methods achieve tremendous success across a wide range of real-world applications. However, this success typically relies on complex models (Wang et al., 2015a; Tian et al., 2019; Bachman et al., 2019; Zhang et al., 2019; Hassani & Khasahmadi, 2020), which tend to integrate multi-view information with deep neural networks. Although these models can provide accurate classification results, they are usually vulnerable to yield unreliable predictions, particularly when presented with views that are not well-represented (e.g., information from abnormal sensors). Consequently, their deployment in safety-critical applications (e.g., computer-aided diagnosis or autonomous driving) is limited. This has inspired us to introduce a new paradigm for multi-view classification to produce trusted decisions.\nFor multi-view learning, traditional algorithms generally assume an equal value for different views or assign/learn a fixed weight for each view. The underlying assumption is that the qualities or importance of these views are basically stable for all samples. In practice, the quality of a view often varies for different samples which the designed models should be aware of for adaption. For example, in multi-modal medical diagnosis (Perrin et al., 2009; Sui et al., 2018), a magnetic resonance (MR) image may be sufficient for one subject, while a positron emission tomography (PET) image may be required for another. Therefore, the decision should be well explained according to multi-view inputs. Typically, we not only need to know the classification result, but should also be able to answer\n∗Corresponding author: Changqing Zhang\n“How confident is the decision?” and “Why is the confidence so high/low for the decision?”. To this end, the model should provide in accurate uncertainty for the prediction of each sample, and even individual view of each sample.\nUncertainty-based algorithms can be roughly divided into two main categories, i.e., Bayesian and non-Bayesian approaches. Traditional Bayesian approaches estimate uncertainty by inferring a posterior distribution over the parameters (MacKay, 1992a; Bernardo & Smith, 2009; Neal, 2012). A variety of Bayesian methods have been developed, including Laplace approximation (MacKay, 1992b), Markov Chain Monte Carlo (MCMC) (Neal, 2012) and variational techniques (Graves, 2011; Ranganath et al., 2014; Blundell et al., 2015). However, compared with ordinary neural networks, due to the doubling of model parameters and difficulty in convergence, these methods are computationally expensive. Recent algorithm (Gal & Ghahramani, 2016) estimates the uncertainty by introducing dropout (Srivastava et al., 2014) in the testing phase, thereby reducing the computational cost. Several non-Bayesian algorithms have been proposed, including deep ensemble (Lakshminarayanan et al., 2017), evidential deep learning (Sensoy et al., 2018) and deterministic uncertainty estimate (van Amersfoort et al., 2020). Unfortunately, all of these methods focus on estimating the uncertainty on single-view data, despite the fact that fusing multiple views through uncertainty can improve performance and reliability.\nIn this paper, we propose a new multi-view classification algorithm aiming to elegantly integrate multiview information for trusted decision making (shown in Fig. 1(a)). 
Our model combines different views at an evidence level instead of at a feature or output level as done previously, which produces a stable and reasonable uncertainty estimation and thus promotes both classification reliability and robustness. The Dirichlet distribution is used to model the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. In summary, the specific contributions of this paper are:

(1) We propose a novel multi-view classification model aiming to provide trusted and interpretable (according to the uncertainty of each view) decisions in an effective and efficient way (without any additional computations or neural network changes), which introduces a new paradigm in multi-view classification. (2) The proposed model is a unified framework for promising sample-adaptive multi-view integration, which integrates multi-view information at an evidence level with the Dempster-Shafer theory in an optimizable (learnable) way. (3) The uncertainty of each view is accurately estimated, enabling our model to improve classification reliability and robustness. (4) We conduct extensive experiments which validate the superior accuracy, robustness, and reliability of our model, thanks to the promising uncertainty estimation and multi-view integration strategy." }, { "heading": "2 RELATED WORK", "text": "Uncertainty-based Learning. Deep neural networks have achieved great success in various tasks. However, since most deep models are essentially deterministic functions, the uncertainty of the model cannot be obtained. Bayesian neural networks (BNNs) (Denker & LeCun, 1991; MacKay, 1992b; Neal, 2012) endow deep models with uncertainty by replacing the deterministic weight parameters with distributions. Since BNNs struggle to perform inference and usually come with prohibitive computational costs, a more scalable and practical approach, MC-dropout (Gal & Ghahramani, 2016), was proposed. In this model, inference is completed by performing dropout sampling over the weights during training and testing. Ensemble-based methods (Lakshminarayanan et al., 2017) train and integrate multiple deep networks and also achieve promising performance. Instead of indirectly modeling uncertainty through network weights, the algorithm in (Sensoy et al., 2018) introduces subjective logic theory to directly model uncertainty without ensembling or Monte Carlo sampling. Building upon RBF networks, the distance between test samples and prototypes can be used as a proxy for deterministic uncertainty (van Amersfoort et al., 2020). Benefiting from the learned weights of different tasks with homoscedastic uncertainty learning, (Kendall et al., 2018) achieves impressive performance in multi-task learning.

Multi-View Learning. Learning on data with multiple views has proven effective in a variety of tasks. CCA-based multi-view models (Hotelling, 1992; Akaho, 2006; Wang, 2007; Andrew et al.,
Given two sets of beliefs (blue and green blocks), we recombine the compatible parts of the two sets (brown blocks) and ignore the mutually exclusive parts (white blocks) of the two sets to obtain the combined beliefs.\n2013; Wang et al., 2015a; 2016) are representative ones that have been widely used in multi-view representation learning. These models essentially seek a common representation by maximizing the correlation between different views. Considering common and exclusive information, hierarchical multi-modal metric learning (HM3L) (Zhang et al., 2017) explicitly learns shared multi-view and view-specific metrics, while AE2-Nets (Zhang et al., 2019) implicitly learn a complete (view-specific and shared multi-view) representation for classification. Recently, the methods (Tian et al., 2019; Bachman et al., 2019; Chen et al., 2020; Hassani & Khasahmadi, 2020) based on contrastive learning have also achieved good performance. Due to its effectiveness, multi-view learning has been widely used in various applications (Kiela et al., 2018; Bian et al., 2017; Kiela et al., 2019; Wang et al., 2020).\nDempster-Shafer Evidence Theory (DST). DST, which is a theory on belief functions, was first proposed by Dempster (Dempster, 1967) and is a generalization of the Bayesian theory to subjective probabilities (Dempster, 1968). Later, it was developed into a general framework to model epistemic uncertainty (Shafer, 1976). In contrast to Bayesian neural networks, which indirectly obtain uncertainty through multiple stochastic samplings from weight parameters, DST directly models uncertainty. DST allows beliefs from different sources to be combined with various fusion operators to obtain a new belief that considers all available evidence (Sentz et al., 2002; Jøsang & Hankin, 2012). When faced with beliefs from different sources, Dempster’s rule of combination tries to fuse their shared parts, and ignores conflicting beliefs through normalization factors. A more specific implementation will be discussed later." }, { "heading": "3 TRUSTED MULTI-VIEW CLASSIFICATION", "text": "It has been shown that using a softmax output as confidence for predictions often leads to high confidence values, even for erroneous predictions since the largest softmax output is used for the final prediction (Moon et al., 2020; van Amersfoort et al., 2020). Therefore, we introduce an evidencebased uncertainty estimation technique which can provide more accurate uncertainty and allow us to flexibly integrate multiple views for trusted decision making." }, { "heading": "3.1 UNCERTAINTY AND THE THEORY OF EVIDENCE", "text": "In this subsection, we elaborate on evidential deep learning to quantify the classification uncertainty for each of multiple views, which simultaneously models the probability of each class and overall uncertainty of the current prediction. In the context of multi-class classification, Subjective logic (SL) (Jøsang, 2018) associates the parameters of the Dirichlet distribution (Definition A.1 in the Appendix)\nwith the belief distribution, where the Dirichlet distribution can be considered as the conjugate prior of the categorical distribution (Bishop, 2006).\nAccordingly, we need to determine the concentration parameters, which are closely related to the uncertainty. 
We elaborate on the Subjective logic (Jøsang, 2018), which defines a theoretical framework for obtaining the probabilities (belief masses) of different classes and overall uncertainty (uncertainty mass) of the multi-classification problem based on the evidence collected from data. Note that evidence refers to the metrics collected from the input to support the classification (step 1 in Fig. 1(a)) and is closely related to the concentration parameters of Dirichlet distribution. Specifically, for the K classification problems, subjective logic tries to assign a belief mass to each class label and an overall uncertainty mass to the whole frame based on the evidence. Accordingly, for the vth view, the K + 1 mass values are all non-negative and their sum is one:\nuv + K∑ k=1 bvk = 1, (1)\nwhere uv ≥ 0 and bvk ≥ 0 indicate the overall uncertainty and the probability for the kth class, respectively.\nFor the vth view, subjective logic connects the evidence ev = [ev1, · · · , evK ] to the parameters of the Dirichlet distribution αv = [αv1, · · · , αvK ] (step 2 in Fig. 1(a)). Specifically, the parameter αvk of the Dirichlet distribution is induced from evk, i.e., α v k = e v k + 1. Then, the belief mass b v k and the uncertainty uv (step 3 in Fig. 1(a)) are computed as\nbvk = evk Sv = αvk − 1 Sv and uv = K Sv , (2)\nwhere Sv = ∑K i=1 (e v i + 1) = ∑K i=1 α v i is the Dirichlet strength. Eq. 2 actually describes the phenomenon where the more evidence observed for the kth category, the greater the probability assigned to the kth class. Correspondingly, the less total evidence observed, the greater the total uncertainty. The belief assignment can be considered as a subjective opinion. Given an opinion, the mean of the corresponding Dirichlet distribution p̂v for the class probability p̂vk is computed as p̂vk = αvk Sv (Frigyik et al., 2010).\nDifferences from traditional deep-neural-network classifiers. Firstly, the output of traditional neural network classifiers can be considered as a point on a simplex, while Dirichlet distribution parametrizes the density of each such probability assignment on a simplex. Therefore, with the Dirichlet distribution, SL models the second-order probability and uncertainty of the output. Secondly, the softmax function is widely used in the last layer of traditional neural network classifiers. However, using the softmax output as the confidence often leads to over-confidence. In our model, the introduced SL can avoid this problem by adding overall uncertainty mass. Existing methods (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017) usually require additional computations during inference to output uncertainty. Since the uncertainty is obtained during the inference stage, it is difficult to seamlessly train a model with high accuracy, robustness and reasonable uncertainty in a unified framework. Accordingly, the limitations underlying existing algorithms (e.g., inability to directly obtain uncertainty) also limits their extension to trusted multi-view classification.\nFor clarity, we provide typical examples under a triple classification task to illustrate the above formulation. Let us assume that e = 〈40, 1, 1〉 and accordingly we have α = 〈41, 2, 2〉. The\ncorresponding Dirichlet distribution, shown in Fig. 2(a), yields a sharp distribution centered on the top of the standard 2-simplex. This indicates that sufficient evidence has been observed to ensure accurate classification. 
In contrast, let us assume that we have the evidence e = 〈0.0001, 0.0001, 0.0001〉, which is little evidence for classification. Accordingly, we obtain the Dirichlet distribution parameter α = 〈1.0001, 1.0001, 1.0001〉 and the uncertainty mass u ≈ 1. As shown in Fig. 2(b), in this case, the evidence induces quite a flat distribution over the simplex. Finally, when e = 〈5, 5, 5〉, there is also a high uncertainty, as shown in Fig. 2(c), even though the overall uncertainty is reduced compared to the second case. As shown in Fig. 2(d), we can convert a Dirichlet distribution into a standard 3-simplex (a regular tetrahedron with vertices (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) in R4) based on the subjective logic theory (Eq. 1 and Eq. 2), where the point (M) in the simplex corresponding to { {bk}3k=1, u } indicates an opinion. Accordingly, the expectation value p̂ of the Dirichlet distribution is the projection ofM on the bottom." }, { "heading": "3.2 DEMPSTER’S RULE OF COMBINATION FOR MULTI-VIEW CLASSIFICATION", "text": "Having introduced evidence and uncertainty for the single-view case, we now focus on their adaptation to data with multiple views. The Dempster–Shafer theory of evidence allows evidence from different sources to be combined arriving at a degree of belief (represented by a mathematical object called the belief function) that takes into account all the available evidence (see Definition 3.1). Specifically, we need to combine V independent sets of probability mass assignments {Mv}V1 , whereMv ={ {bvk}Kk=1, uv } , to obtain a joint massM = { {bk}Kk=1, u } (step 4 in Fig. 1(a)).\nDefinition 3.1 (Dempster’s combination rule for two independent sets of masses) The combination (called the joint mass) M = { {bk}Kk=1, u } is calculated from the two sets of masses M1 ={\n{b1k}Kk=1, u1 } andM2 = { {b2k}Kk=1, u2 } in the following manner:\nM =M1 ⊕M2. (3) The more specific calculation rule can be formulated as follows:\nbk = 1\n1− C (b1kb 2 k + b 1 ku 2 + b2ku 1), u =\n1\n1− C u1u2, (4) where C = ∑ i 6=j b 1 i b 2 j is a measure of the amount of conflict between the two mass sets (the white blocks in Fig. 1(b)), and the scale factor 11−C is used for normalization.\nThe joint opinionM is formed based on the fusion of opinionsM1 andM2. The joint belief mass of class k (bk) and overall uncertainty (u) correspond to the brown blocks in Fig. 1(b). Intuitively, the combination rule ensures: (1) when both views are of high uncertainty (large u1 and u2), the final prediction must be of low confidence (small bk); (2) when both views are of low uncertainty (small u1 and u2), the final prediction may be of high confidence (large bk); (3) when only one view is of low uncertainty (only u1 or u2 is large), the final prediction only depends on the confident view.\nThen, given data with V different views, we can obtain the above-mentioned mass for each view. Afterwards, we can combine the beliefs from different views with Dempster’s rule of combination. Specifically, we fuse the belief mass and uncertainty mass between different views with the following rule: M =M1 ⊕M2 ⊕ · · ·MV . (5) After obtaining the joint mass M = { {bk}Kk=1, u } , according to Eq. 2, the corresponding joint evidence from multiple views and the parameters of the Dirichlet distribution are induced as\nS = K\nu , ek = bk × S and αk = ek + 1. 
Advantages of using subjective logic compared with softmax. Compared with the softmax output, using subjective uncertainty is more suitable for the fusion of multiple decisions. Subjective logic provides an additional mass function ($u$) that allows the model to explicitly represent a lack of evidence. In our model, subjective logic provides the degree of overall uncertainty of each view, which is important for trusted classification and, to some extent, for interpretability." }, { "heading": "3.3 LEARNING TO FORM OPINIONS", "text": "In this section, we discuss how to train neural networks to obtain evidence for each view, which can then be used to obtain the corresponding masses $\{\mathcal{M}^v\}_{v=1}^V$ and $\mathcal{M}$. Neural networks can capture the evidence from the input to induce a classification opinion (Kiela et al., 2018), and a conventional neural-network-based classifier can be naturally transformed into an evidence-based classifier with minor changes. Specifically, the softmax layer of a conventional neural-network-based classifier is replaced with an activation function layer (e.g., ReLU) to ensure that the network outputs non-negative values, which are taken as the evidence vector $\mathbf{e}$. Accordingly, the parameters of the Dirichlet distribution can be obtained.

For conventional neural-network-based classifiers, the cross-entropy loss is usually employed:

$L_{ce} = -\sum_{j=1}^{K} y_{ij}\log(p_{ij})$,  (7)

where $p_{ij}$ is the predicted probability of the $i$-th sample for class $j$. For our model, given the evidence of the $i$-th sample obtained through the evidence network, we can get the parameter $\boldsymbol{\alpha}_i$ (i.e., $\boldsymbol{\alpha}_i = \mathbf{e}_i + 1$) of the Dirichlet distribution and form the multinomial opinion $D(\mathbf{p}_i|\boldsymbol{\alpha}_i)$, where $\mathbf{p}_i$ is the class assignment probability on a simplex. After a simple modification of Eq. 7, we obtain the adjusted cross-entropy loss:

$L_{ace}(\boldsymbol{\alpha}_i) = \int \Big[\sum_{j=1}^{K} -y_{ij}\log(p_{ij})\Big]\frac{1}{B(\boldsymbol{\alpha}_i)}\prod_{j=1}^{K} p_{ij}^{\alpha_{ij}-1}\,d\mathbf{p}_i = \sum_{j=1}^{K} y_{ij}\big(\psi(S_i) - \psi(\alpha_{ij})\big)$,  (8)

where $\psi(\cdot)$ is the digamma function. Eq. 8 is the integral of the cross-entropy loss over the simplex-supported Dirichlet distribution determined by $\boldsymbol{\alpha}_i$. The above loss function ensures that the correct label of each sample generates more evidence than the other classes; however, it cannot guarantee that less evidence will be generated for incorrect labels. That is to say, in our model we expect the evidence for incorrect labels to shrink to 0. To this end, the following KL divergence term is introduced:

$\mathrm{KL}[D(\mathbf{p}_i|\tilde{\boldsymbol{\alpha}}_i)\,\|\,D(\mathbf{p}_i|\mathbf{1})] = \log\Big(\frac{\Gamma(\sum_{k=1}^{K}\tilde{\alpha}_{ik})}{\Gamma(K)\prod_{k=1}^{K}\Gamma(\tilde{\alpha}_{ik})}\Big) + \sum_{k=1}^{K}(\tilde{\alpha}_{ik}-1)\Big[\psi(\tilde{\alpha}_{ik}) - \psi\Big(\sum_{j=1}^{K}\tilde{\alpha}_{ij}\Big)\Big]$,  (9)

where $\tilde{\boldsymbol{\alpha}}_i = \mathbf{y}_i + (1-\mathbf{y}_i)\odot\boldsymbol{\alpha}_i$ is the adjusted parameter of the Dirichlet distribution, which avoids penalizing the evidence of the ground-truth class to 0, and $\Gamma(\cdot)$ is the gamma function. Therefore, given the parameter $\boldsymbol{\alpha}_i$ of the Dirichlet distribution for each sample $i$, the sample-specific loss is

$L(\boldsymbol{\alpha}_i) = L_{ace}(\boldsymbol{\alpha}_i) + \lambda_t\,\mathrm{KL}[D(\mathbf{p}_i|\tilde{\boldsymbol{\alpha}}_i)\,\|\,D(\mathbf{p}_i|\mathbf{1})]$,  (10)

where $\lambda_t > 0$ is the balance factor. In practice, we gradually increase the value of $\lambda_t$ so as to prevent the network from paying too much attention to the KL divergence in the initial stage of training, which may result in insufficient exploration of the parameter space and cause the network to output a flat uniform distribution.
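A minimal PyTorch sketch of Eqs. (8)-(10) follows (an illustration under our own naming, not the released implementation); the annealing schedule for $\lambda_t$ shown in the comment is an assumption.

```python
import torch

def ace_loss(alpha, y):
    """Adjusted cross-entropy, Eq. (8): sum_j y_ij (psi(S_i) - psi(alpha_ij)).
    alpha: (N, K) Dirichlet parameters (evidence + 1); y: (N, K) one-hot labels."""
    S = alpha.sum(dim=1, keepdim=True)
    return (y * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1)

def kl_to_uniform(alpha_tilde):
    """KL(D(p|alpha_tilde) || D(p|1)), Eq. (9), against the uniform Dirichlet."""
    K = alpha_tilde.shape[1]
    S = alpha_tilde.sum(dim=1, keepdim=True)
    log_norm = (torch.lgamma(S.squeeze(1)) - torch.lgamma(torch.tensor(float(K)))
                - torch.lgamma(alpha_tilde).sum(dim=1))
    digamma_term = ((alpha_tilde - 1) * (torch.digamma(alpha_tilde)
                    - torch.digamma(S))).sum(dim=1)
    return log_norm + digamma_term

def sample_loss(alpha, y, lambda_t):
    """Eq. (10), with the ground-truth evidence masked out of the KL term:
    alpha_tilde = y + (1 - y) * alpha."""
    alpha_tilde = y + (1.0 - y) * alpha
    return (ace_loss(alpha, y) + lambda_t * kl_to_uniform(alpha_tilde)).mean()

# toy check: evidence from a non-negative head; lambda_t would be annealed 0 -> 1
evidence = torch.relu(torch.randn(4, 3)) * 5
alpha = evidence + 1.0
y = torch.nn.functional.one_hot(torch.tensor([0, 1, 2, 0]), 3).float()
print(sample_loss(alpha, y, lambda_t=0.5))
```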
In practice, we can gradually increase the value of λ_t so as to prevent the network from paying too much attention to the KL divergence in the initial stage of training, which may result in a lack of good exploration of the parameter space and cause the network to output a flat uniform distribution.

To ensure that all views can simultaneously form reasonable opinions and thus improve the overall opinion, we use a multi-task strategy with the following overall loss function:

L_overall = ∑_{i=1}^N [ L(α_i) + ∑_{v=1}^V L(α_i^v) ]. (11)

The optimization process for the proposed model is summarized in Algorithm 1 (in the Appendix)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "In this section, we conduct experiments on six real-world datasets: Handwritten¹, CUB (Wah et al., 2011), Caltech101 (Fei-Fei et al., 2004), PIE², Scene15 (Fei-Fei & Perona, 2005) and HMDB (Kuehne et al., 2011).

¹https://archive.ics.uci.edu/ml/datasets/Multiple+Features ²http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html

We first compare our algorithm with single-view classifiers to validate the effectiveness of our algorithm in utilizing multiple views. Then, we apply existing classifiers to multi-view features and conduct experiments under different levels of noise to investigate their ability to identify multi-view OOD samples. Details of these datasets and the experimental settings can be found in the appendix.

Compared methods. We compare the proposed method with the following models: (a) MCDO (Monte Carlo dropout) (Gal & Ghahramani, 2015) casts dropout network training as approximate inference in a Bayesian neural network; (b) DE (deep ensemble) (Lakshminarayanan et al., 2017) is a simple, non-Bayesian method which involves training multiple deep models; (c) UA (uncertainty-aware attention) (Heo et al., 2018) generates attention weights following a Gaussian distribution with a learned mean and variance, which allows heteroscedastic uncertainty to be captured and yields a more accurate calibration of prediction uncertainty; (d) EDL (evidential deep learning) (Sensoy et al., 2018) designs a predictive distribution for classification by placing a Dirichlet distribution on the class probabilities." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Comparison with uncertainty-based algorithms using the best view. We first compare our algorithm with current uncertainty-based classification methods. The detailed experimental results are shown in Table 1. Since most existing uncertainty-based classification methods use single-view data, we report the results of each method with the best-performing view in terms of both accuracy and AUROC (Hand & Till, 2001) to comprehensively compare our method with others. As shown in Table 1, our model outperforms the other methods on all datasets. Taking the results on PIE and Scene15 as examples, our method improves the accuracy by about 7.6% and 14.8% over the second-best models (EDL/MCDO), respectively. Although our model is clearly more effective than single-view uncertainty-based models, it is natural to further ask: what happens if all algorithms utilize multiple views?

Comparison with uncertainty-based algorithms using multiple views. To further validate the effectiveness of our model in integrating various views, we concatenate the original features of multiple views for all comparison methods.
We add Gaussian noise with different levels of standard deviation (σ) to half of the views. The comparison results are shown in Fig. 4. It can be observed that when the data is free of noise, our method achieves competitive results. After introducing noise to the data, the accuracy of all the comparison methods decreases significantly. Fortunately, benefiting from the uncertainty-based fusion, the proposed method is aware of the view-specific noise and thus achieves impressive results on all datasets. Therefore, the effectiveness on both clean and noisy multi-view data is well validated. However, it is more convincing to explicitly investigate the performance in uncertainty estimation.

Uncertainty estimation. To evaluate the uncertainty estimation, we visualize the distribution of in-/out-of-distribution samples in terms of uncertainty. We consider the original samples as in-distribution data, while the samples with Gaussian noise are viewed as out-of-distribution data. Specifically, we add Gaussian noise with a fixed standard deviation (σ = 10) to 50% of the test samples. The experimental results are shown in Fig. 5. According to the results, the following observations are drawn: (1) Datasets with higher classification accuracy (e.g., Handwritten) are usually associated with lower uncertainty for the in-distribution samples. (2) In contrast, datasets with lower accuracy are usually associated with higher uncertainty for the in-distribution samples. (3) Much higher uncertainties are usually estimated for out-of-distribution samples on all datasets. These observations imply that our model estimates uncertainty reasonably, since the estimated uncertainty facilitates distinguishing in-distribution from out-of-distribution samples. Fig. 3 shows that our algorithm provides much more accurate predictions as the prediction uncertainty decreases. This implies that trusted decisions are supported by the output (classification and its corresponding uncertainty) of our model." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose a novel trusted multi-view classification (TMC) algorithm which, based on the Dempster-Shafer evidence theory, can produce trusted classification decisions on multi-view data. Our algorithm focuses on decision-making by fusing the uncertainty of multiple views, which is essential for making trusted decisions. The TMC model can accurately identify the views which are risky for decision making, and exploits informative views in the final decision. Furthermore, our model produces the uncertainty of the current decision while making the final classification, providing interpretability. The empirical results validate the effectiveness of the proposed algorithm in classification accuracy and out-of-distribution identification." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by the National Natural Science Foundation of China (No. 61976151, No. 61732011), and the Natural Science Foundation of Tianjin of China (No. 19JCYBJC15200)." } ]
2021
TRUSTED MULTI-VIEW CLASSIFICATION
SP:697a56b8f9152e50ee683f5a1b59bc272b01c4db
[ "This work proposes a method to robustly (<.5 adversarial workers) aggregate model updates using two non-colluding servers. The proposed method scales well with the number of workers and is compatible with local DP and different robust aggregation protocols. Especially the scalability is a big improvement compared to previous methods. The authors discuss related work that relies on public key infrastructure and requires pairwise secrets between clients. One big advantage of the proposed protocol is that there is no communication between the workers." ]
Increasingly, machine learning systems are being deployed to edge servers and devices (e.g. mobile phones) and trained in a collaborative manner. Such distributed/federated/decentralized training raises a number of concerns about the robustness, privacy, and security of the procedure. While extensive work has been done on tackling robustness, privacy, or security individually, their combination has rarely been studied. In this paper, we propose a secure two-server protocol that offers both input privacy and Byzantine-robustness. In addition, this protocol is communication-efficient, fault-tolerant, and enjoys local differential privacy.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Mehrdad Aliasgari", "Marina Blanton", "Yihua Zhang", "Aaron Steele" ], "title": "Secure computation on floating point numbers", "venue": "In NDSS,", "year": 2013 }, { "authors": [ "Dan Alistarh", "Zeyuan Allen-Zhu", "Jerry Li" ], "title": "Byzantine stochastic gradient descent", "venue": "In NeurIPS Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eugene Bagdasaryan", "Andreas Veit", "Yiqing Hua", "Deborah Estrin", "Vitaly Shmatikov" ], "title": "How to backdoor federated learning", "venue": "arXiv 1807.00459v3,", "year": 2020 }, { "authors": [ "Moran Baruch", "Gilad Baruch", "Yoav Goldberg" ], "title": "A little is enough: Circumventing defenses for distributed learning", "venue": "arXiv preprint arXiv:1902.06156,", "year": 2019 }, { "authors": [ "Donald Beaver" ], "title": "Efficient multiparty protocols using circuit randomization", "venue": "In Annual International Cryptology Conference,", "year": 1991 }, { "authors": [ "Peva Blanchard", "El Mahdi El Mhamdi", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent", "venue": "In NeurIPS - Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Manuel Blum", "Silvio Micali" ], "title": "How to generate cryptographically strong sequences of pseudorandom bits", "venue": "SIAM journal on Computing,", "year": 1984 }, { "authors": [ "Keith Bonawitz", "Vladimir Ivanov", "Ben Kreuter", "Antonio Marcedone", "H Brendan McMahan", "Sarvar Patel", "Daniel Ramage", "Aaron Segal", "Karn Seth" ], "title": "Practical secure aggregation for privacypreserving machine learning", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konecny", "Stefano Mazzocchi", "H Brendan McMahan" ], "title": "Towards federated learning at scale: System design", "venue": "In SysML - Proceedings of the 2nd SysML Conference,", "year": 2019 }, { "authors": [ "Melissa Chase", "Ran Gilad-Bachrach", "Kim Laine", "Kristin E Lauter", "Peter Rindal" ], "title": "Private collaborative neural network learning", "venue": "IACR Cryptology ePrint Archive,", "year": 2017 }, { "authors": [ "Valerie Chen", "Valerio Pastro", "Mariana Raykova" ], "title": "Secure computation for machine learning with spdz", "venue": "arXiv preprint arXiv:1901.00329,", "year": 2019 }, { "authors": [ "Edward Chou", "Josh Beal", "Daniel Levy", "Serena Yeung", "Albert Haque", "Li Fei-Fei" ], "title": "Faster cryptonets: Leveraging sparsity for real-world encrypted inference", "venue": "arXiv preprint arXiv:1811.09953,", "year": 2018 }, { "authors": [ "Henry Corrigan-Gibbs", "Dan Boneh" ], "title": "Prio: Private, robust, and scalable computation of aggregate statistics", "venue": "In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI", "year": 2017 }, { "authors": [ "Alexandre Evfimievski", "Johannes Gehrke", "Ramakrishnan Srikant" ], "title": "Limiting privacy breaches in privacy preserving data mining", "venue": "In Proceedings of the 
twenty-second ACM SIGMOD-SIGACTSIGART symposium on Principles of database systems,", "year": 2003 }, { "authors": [ "Avishek Ghosh", "Justin Hong", "Dong Yin", "Kannan Ramchandran" ], "title": "Robust federated learning in a heterogeneous environment", "venue": "arXiv preprint arXiv:1906.06629,", "year": 2019 }, { "authors": [ "Ran Gilad-Bachrach", "Nathan Dowlin", "Kim Laine", "Kristin Lauter", "Michael Naehrig", "John Wernsing" ], "title": "Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In ICML - Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Lie He", "Sai Praneeth Karimireddy", "Martin Jaggi" ], "title": "Byzantine-robust learning on heterogeneous datasets via resampling", "venue": "arXiv preprint arXiv:2006.09365,", "year": 2020 }, { "authors": [ "Ehsan Hesamifard", "Hassan Takabi", "Mehdi Ghasemi" ], "title": "CryptoDL: Deep Neural Networks over Encrypted Data", "venue": "arXiv preprint arXiv:1711.05189,", "year": 2017 }, { "authors": [ "Chiraag Juvekar", "Vinod Vaikuntanathan", "Anantha Chandrakasan" ], "title": "GAZELLE: A low latency framework for secure neural network inference", "venue": "In 27th USENIX Security Symposium (USENIX Security", "year": 2018 }, { "authors": [ "Shiva Prasad Kasiviswanathan", "Homin K Lee", "Kobbi Nissim", "Sofya Raskhodnikova", "Adam Smith" ], "title": "What can we learn privately", "venue": "SIAM Journal on Computing,", "year": 2011 }, { "authors": [ "Marcel Keller", "Emmanuela Orsini", "Peter Scholl" ], "title": "Mascot: faster malicious arithmetic secure computation with oblivious transfer", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Marcel Keller", "Valerio Pastro", "Dragos Rotaru" ], "title": "Overdrive: making SPDZ great again", "venue": "In Annual International Conference on the Theory and Applications of Cryptographic Techniques,", "year": 2018 }, { "authors": [ "Liping Li", "Wei Xu", "Tianyi Chen", "Georgios B Giannakis", "Qing Ling" ], "title": "RSA: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Jian Liu", "Mika Juuti", "Yao Lu", "Nadarajah Asokan" ], "title": "Oblivious neural network predictions via minionn transformations", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Kalikinkar Mandal", "Guang Gong", "Chuyi Liu" ], "title": "Nike-based fast privacy-preserving highdimensional data aggregation for mobile devices", "venue": "Technical report, CACR Technical Report, CACR 2018-10,", "year": 2018 }, { "authors": [ "H. 
Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "El Mahdi El Mhamdi", "Rachid Guerraoui", "Sébastien Rouault" ], "title": "The hidden vulnerability of distributed learning in byzantium", "venue": "arXiv preprint arXiv:1802.07927,", "year": 2018 }, { "authors": [ "Payman Mohassel", "Yupeng Zhang" ], "title": "SecureML: A system for scalable privacy-preserving machine learning", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Luis Muñoz-González", "Battista Biggio", "Ambra Demontis", "Andrea Paudice", "Vasin Wongrassamee", "Emil C. Lupu", "Fabio Roli" ], "title": "Towards poisoning of deep learning algorithms with back-gradient optimization", "venue": "arXiv preprint arXiv:1708.08689,", "year": 2017 }, { "authors": [ "Luis Muñoz-González", "Kenneth T. Co", "Emil C. Lupu" ], "title": "Byzantine-robust federated machine learning through adaptive model averaging", "venue": "arXiv preprint arXiv:1909.05125,", "year": 2019 }, { "authors": [ "Arvind Neelakantan", "Luke Vilnis", "Quoc V Le", "Lukasz Kaiser", "Karol Kurach", "Ilya Sutskever", "James Martens" ], "title": "Adding gradient noise improves learning for very deep networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Krishna Pillutla", "Sham M. Kakade", "Zaid Harchaoui" ], "title": "Robust Aggregation for Federated Learning", "venue": "arXiv preprint arXiv:1912.13445,", "year": 2019 }, { "authors": [ "Daniel Ramage", "Stefano Mazzocchi" ], "title": "Federated analytics: Collaborative data science without data collection", "venue": "https://ai.googleblog.com/2020/05/ federated-analytics-collaborative-data.html, May", "year": 2020 }, { "authors": [ "M. Sadegh Riazi", "Mohammad Samragh", "Hao Chen", "Kim Laine", "Kristin Lauter", "Farinaz Koushanfar" ], "title": "Xonn: Xnor-based oblivious deep neural network inference", "venue": null, "year": 1902 }, { "authors": [ "Bita Darvish Rouhani", "M. Sadegh Riazi", "Farinaz Koushanfar" ], "title": "DeepSecure: Scalable ProvablySecure Deep Learning", "venue": "arXiv preprint arXiv:1705.08963,", "year": 2017 }, { "authors": [ "Theo Ryffel", "Edouard Dufour-Sans", "Romain Gay", "Francis Bach", "David Pointcheval" ], "title": "Partially encrypted machine learning using functional encryption", "venue": null, "year": 1905 }, { "authors": [ "Reza Shokri", "Vitaly Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security,", "year": 2015 }, { "authors": [ "Nigel P. 
Smart", "Titouan Tanguy" ], "title": "TaaS: Commodity MPC via Triples-as-a-Service", "venue": "Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop,", "year": 2019 }, { "authors": [ "Sameer Wagh", "Divya Gupta", "Nishanth Chandran" ], "title": "SecureNN: 3-party secure computation for neural network training", "venue": "Proceedings on Privacy Enhancing Technologies,", "year": 2019 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Phocas: dimensional byzantine-resilient stochastic gradient descent", "venue": "arXiv preprint arXiv:1805.09682,", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", "venue": "In ICML 2019 - 35th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation", "venue": "arXiv preprint arXiv:1903.03936,", "year": 2019 }, { "authors": [ "Andrew C Yao" ], "title": "Theory and application of trapdoor functions", "venue": "In 23rd Annual Symposium on Foundations of Computer Science (SFCS", "year": 1982 }, { "authors": [ "Dong Yin", "Yudong Chen", "Kannan Ramchandran", "Peter Bartlett" ], "title": "Byzantine-robust distributed learning: Towards optimal statistical rates", "venue": "arXiv preprint arXiv:1803.01498,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed fast growth of successful machine learning applications based on data collected from decentralized user devices. Unfortunately, however, currently most of the important machine learning models on a societal level do not have their utility, control, and privacy aligned with the data ownership of the participants. This issue can be partially attributed to a fundamental conflict between the two leading paradigms of traditional centralized training of models on one hand, and decentralized/collaborative training schemes on the other hand. While centralized training violates the privacy rights of participating users, existing alternative training schemes are typically not robust. Malicious participants can sabotage the training system by feeding it wrong data intentionally, known as data poisoning. In this paper, we tackle this problem and propose a novel distributed training framework which offers both privacy and robustness.\nWhen applied to datasets containing personal data, the use of privacy-preserving techniques is currently required under regulations such as the General Data Protection Regulation (GDPR) or Health Insurance Portability and Accountability Act (HIPAA). The idea of training models on decentralized datasets and incrementally aggregating model updates via a central server motivates the federated learning paradigm (McMahan et al., 2016). However, the averaging in federated learning, when viewed as a multi-party computation (MPC), does not preserve the input privacy because the server observes the models directly. The input privacy requires each party learns nothing more than the output of computation which in this paradigm means the aggregated model updates. To solve this problem, secure aggregation rules as proposed in (Bonawitz et al., 2017) achieve guaranteed input privacy. Such secure aggregation rules have found wider industry adoption recently e.g. by Google on Android phones (Bonawitz et al., 2019; Ramage & Mazzocchi, 2020) where input privacy guarantees can offer e.g. efficiency and exactness benefits compared to differential privacy (both can also be combined).\nThe concept of Byzantine robustness has received considerable attention in the past few years for practical applications, as a way to make the training process robust to malicious actors. A Byzantine participant or worker can behave arbitrarily malicious, e.g. sending arbitrary updates to the server. This poses great challenge to the most widely used aggregation rules, e.g. simple average, since a single Byzantine worker can compromise the results of aggregation. A number of Byzantine-robust aggregation rules have been proposed recently (Blanchard et al., 2017; Muñoz-González et al., 2017; Alistarh et al., 2018; Mhamdi et al., 2018; Yin et al., 2018; Muñoz-González et al., 2019) and can be used as a building block for our proposed technique.\nAchieving both input privacy and Byzantine robustness however remained elusive so far, with Bagdasaryan et al. (2020) stating that robust rules “...are incompatible with secure aggregation”. We here prove that this is not the case. Closest to our approach is (Pillutla et al., 2019) which tolerates data poisoning but does not offer Byzantine robustness. Prio (Corrigan-Gibbs & Boneh, 2017) is a private and robust aggregation system relying on secret-shared non-interactive proofs (SNIP). While\ntheir setting is similar to ours, the robustness they offer is limited to check the range of the input. 
Besides, the encoding for SNIP has to be affine-aggregatable and is expensive for clients to compute.

In this paper, we propose a secure aggregation framework with the help of two non-colluding honest-but-curious servers. This framework also tolerates server-worker collusion. In addition, we combine robustness and privacy at the cost of leaking only worker similarity information, which is marginal for high-dimensional neural networks. Note that our focus is not to develop new defenses against state-of-the-art attacks, e.g. (Baruch et al., 2019; Xie et al., 2019b). Instead, we focus on making arbitrary current and future distance-based robust aggregation rules (e.g. Krum by Mhamdi et al. (2018), RFA by Pillutla et al. (2019)) compatible with secure aggregation.

Main contributions. We propose a novel distributed training framework which is
• Privacy-preserving: our method keeps the input data of each user secure against any other user, and against our honest-but-curious servers.
• Byzantine robust: our method offers Byzantine robustness and allows incorporating existing robust aggregation rules, e.g. (Blanchard et al., 2017; Alistarh et al., 2018). The results are exact, i.e. identical to those of the non-private robust methods.
• Fault tolerant and easy to use: our method natively supports workers dropping out or newly joining the training process. It is also easy to implement and for users to understand.
• Efficient and scalable: the computation and communication overhead of our method is negligible (less than a factor of 2) compared to non-private methods. Scalability in terms of cost, including setup and communication, is linear in the number of workers." }, { "heading": "2 PROBLEM SETUP, PRIVACY, AND ROBUSTNESS", "text": "We consider the distributed setup of n user devices, which we call workers, with the help of two additional servers. Each worker i has its own private part of the training dataset. The workers want to collaboratively train a public model benefitting from the joint training data of all participants.

In every training step, each worker computes its own private model update (e.g. a gradient based on its own data) denoted by the vector x_i. The aggregation protocol aims to compute the sum z = ∑_{i=1}^n x_i (or a robust version of this aggregation), which is then used to update a public model. While the result z is public in all cases, the protocol must keep each x_i private from any adversary or other workers.

Security model. We consider honest-but-curious servers which do not collude with each other but may collude with malicious workers. An honest-but-curious server follows the protocol but may try to inspect all messages. We also assume that all communication channels are secure. We guarantee the strong notion of input privacy, which means the servers and workers know nothing more about each other than what can be inferred from the public output of the aggregation z.

Byzantine robustness model. We allow the standard Byzantine worker model, which assumes that workers can send arbitrary adversarial messages trying to compromise the process. We assume that a fraction of up to α (< 0.5) of the workers is Byzantine, i.e. they are malicious and do not follow the protocol.

Additive secret sharing. Secret sharing is a way to split any secret into multiple parts such that no part leaks the secret. Formally, suppose a scalar a is a secret and the secret holder shares it with k parties through secret-shared values 〈a〉.
In this paper, we only consider additive secret-sharing, where 〈a〉 denotes the set {a_p}_{p=1}^k satisfying a = ∑_{p=1}^k a_p, with a_p held by party p. Crucially, it must not be possible to reconstruct a from any individual a_p. For vectors like x, their secret-shared values 〈x〉 are simply the component-wise scalar secret-shared values.

Two-server setting. We assume there are two non-colluding servers: the model server (S1) and the worker server (S2). S1 holds the output of each aggregation and thus also the machine learning model, which is public to all workers. S2 holds intermediate values to perform Byzantine aggregation. Another key assumption is that the servers have no incentive to collude with workers, perhaps enforced via a huge penalty if exposed. It is realistic to assume that the communication link between the two servers S1 and S2 is faster than the individual links to the workers. To perform robust aggregation, the servers need access to a sufficient number of Beaver's triples. These are data-independent values required to implement secure multiplication in MPC on both servers, and can be precomputed beforehand. For completeness, the classic algorithm for multiplication is given in Appendix B.1.

Byzantine-robust aggregation oracles. Most existing robust aggregation algorithms rely on distance measures to identify potential adversarial behavior (Blanchard et al., 2017; Yin et al., 2018; Mhamdi et al., 2018; Li et al., 2019; Ghosh et al., 2019). All such distance-based aggregation rules can be directly incorporated into our proposed scheme, making them secure. While many of the aforementioned papers assume that the workers have i.i.d. datasets, our protocol is oblivious to the distribution of the data across the workers. In particular, our protocol also works with schemes such as (Li et al., 2019; Ghosh et al., 2019; He et al., 2020) designed for non-i.i.d. data." }, { "heading": "3 SECURE AGGREGATION PROTOCOL: TWO-SERVER MODEL", "text": "Each worker first splits its private vector x_i into two additive secret shares, and transmits those to each corresponding server, ensuring that neither server can reconstruct the original vector on its own. The two servers then execute our secure aggregation protocol. On the level of servers, the protocol is a two-party computation (2PC). In the case of non-robust aggregation, the servers simply add all shares (we present this case in detail in Algorithm 1). In the robust case, which is of our main interest here, the two servers exactly emulate an existing Byzantine-robust aggregation rule, at the cost of revealing only the distances between worker gradients on the server (the robust algorithm is presented in Algorithm 2). Finally, the resulting aggregated output vector z is sent back to all workers and applied as the update to the public machine learning model." }, { "heading": "3.1 NON-ROBUST SECURE AGGREGATION", "text": "In each round, Algorithm 1 consists of two stages (a code sketch follows this list):

• WorkerSecretSharing (Figure 1a): each worker i randomly splits its private input x_i into two additive secret shares x_i = x_i^{(1)} + x_i^{(2)}. This can be done e.g. by sampling a large noise value ξ_i and then using (x_i ± ξ_i)/2 as the shares. Worker i sends x_i^{(1)} to S1 and x_i^{(2)} to S2. We write 〈x_i〉 for the two secret-shared values distributed over the two servers.
• AggregationAndUpdate (Figure 1c): Given binary weights {p_i}_{i=1}^n, each server locally computes its share of 〈∑_{i=1}^n p_i x_i〉. Then S2 sends its share 〈∑_{i=1}^n p_i x_i〉^{(2)} to S1 so that S1 can compute z = ∑_{i=1}^n p_i x_i. S1 updates the public model with z.
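As a concrete illustration of these two stages, here is a minimal NumPy sketch of Algorithm 1 with the communication abstracted away; the noise bound B and all names are our own assumptions, not part of the protocol specification.

import numpy as np

def worker_secret_share(x, B=2**20):
    """WorkerSecretSharing: split x into additive shares x = x1 + x2."""
    xi = np.random.uniform(-B, B, size=x.shape)   # masking noise xi_i
    return (x + xi) / 2.0, (x - xi) / 2.0         # share for S1, share for S2

def aggregate_and_update(shares_s1, shares_s2, p):
    """AggregationAndUpdate: each server sums its weighted shares locally;
    S2 sends its partial sum to S1, which reveals z = sum_i p_i x_i."""
    partial_s1 = sum(pi * s for pi, s in zip(p, shares_s1))  # computed on S1
    partial_s2 = sum(pi * s for pi, s in zip(p, shares_s2))  # on S2, sent to S1
    return partial_s1 + partial_s2

# toy run: 5 workers, all weights equal to 1 (non-robust case)
xs = [np.random.randn(3) for _ in range(5)]
s1, s2 = zip(*(worker_secret_share(x) for x in xs))
z = aggregate_and_update(list(s1), list(s2), [1] * 5)
assert np.allclose(z, np.sum(xs, axis=0))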
Our secure aggregation protocol is extremely simple and, as we will discuss later, has very low communication overhead, does not require heavy cryptographic primitives, gives strong input privacy, is compatible with differential privacy, and is robust to worker dropouts and failures. We believe this makes our protocol especially attractive for federated learning applications.

We now argue about correctness and privacy. It is clear that the output z of the above protocol satisfies z = ∑_{i=1}^n p_i x_i, ensuring that all workers compute the right update. Now we argue about the privacy guarantees. We track the values stored by each of the servers and workers:

• S1: its own secret shares {x_i^{(1)}}_{i=1}^n and the sum of the other shares 〈∑_{i=1}^n p_i x_i〉^{(2)}.
• S2: its own secret shares {x_i^{(2)}}_{i=1}^n.
• Worker i: x_i and z = ∑_{i=1}^n p_i x_i.

Clearly, the workers have no information other than the aggregate z and their own data. S2 only has secret shares, which on their own leak no information about any data. Hence, perhaps surprisingly, S2 does not learn anything in this process. S1 has its own secret shares and also the sum of the other shares. If n = 1, then z = x_i and hence S1 is allowed to learn everything. If n > 1, then S1 cannot recover information about any individual secret share x_i^{(2)} from the sum. Thus, S1 learns z and nothing else." }, { "heading": "3.2 ROBUST SECURE AGGREGATION", "text": "We now describe how Algorithm 2 replaces the simple aggregation with any distance-based robust aggregation rule Aggr, e.g. Multi-Krum (Blanchard et al., 2017). The key idea is to use two-party MPC to securely compute multiplication.

• WorkerSecretSharing (Figure 1a): As before, each worker i secret-shares 〈x_i〉 distributed over the two servers S1 and S2.
• RobustWeightSelection (Figure 1b): After collecting all secret-shared values {〈x_i〉}_i, the servers compute the pairwise differences {〈x_i − x_j〉}_{i<j} locally. S2 then reveals (to itself exclusively) in plain text all of the pairwise Euclidean distances between workers, {‖x_i − x_j‖^2}_{i<j}, with the help of precomputed Beaver's triples and Algorithm 3. The distances are kept private from S1 and the workers. S2 then feeds these distances to the distance-based robust aggregation rule Aggr, returning (on S2) a binary weight vector p = {p_i}_{i=1}^n ∈ {0,1}^n, representing the indices of the robust subset selected by Aggr.
• AggregationAndUpdate (Figure 1c): Given the weight vector p from the previous step, we would like S1 to compute ∑_{i=1}^n p_i x_i. To do so, S2 secret-shares the values {〈p_i〉} with S1 instead of sending them in plain text, since they may be private. Then, S1 reveals to itself, but not to S2, in plain text the value of z = ∑_{i=1}^n p_i x_i using secret-shared multiplication, and updates the public model.
• WorkerPullModel (Figure 1d): Workers pull the latest public model from S1 and update it locally.

The key difference between the robust and the non-robust aggregation scheme is the weight selection phase, where S2 computes all pairwise distances and uses them to run a robust aggregation rule in a black-box manner. S2 computes these distances i) without leaking any information to S1, and ii) without itself learning anything other than the pairwise distances (and in particular none of the actual values of x_i). To perform such a computation, S1 and S2 use precomputed Beaver's triples (Algorithm 3 in the Appendix), which can be made available in a scalable way (Smart & Tanguy, 2019). A sketch of this selection step follows."
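The following NumPy sketch illustrates RobustWeightSelection. Here secure_sq_norm stands in for the Beaver-triple multiplication of Algorithm 3 (it would run jointly on S1 and S2 and reveal its result only to S2), and aggr is any distance-based rule such as Multi-Krum; both callables and all names are illustrative assumptions.

def robust_weight_selection(shares_s1, shares_s2, secure_sq_norm, aggr):
    """Compute pairwise distances on shares; only S2 sees them in the clear."""
    n = len(shares_s1)
    dists = {}
    for i in range(n):
        for j in range(i + 1, n):
            d1 = shares_s1[i] - shares_s1[j]   # local share arithmetic on S1
            d2 = shares_s2[i] - shares_s2[j]   # local share arithmetic on S2
            # ||x_i - x_j||^2, revealed to S2 only (via Beaver's triples)
            dists[(i, j)] = secure_sq_norm(d1, d2)
    return aggr(dists)   # binary weights p in {0,1}^n, held by S2

In a real deployment secure_sq_norm never reconstructs x_i − x_j in one place; the call here only marks where the secure multiplication protocol happens.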
}, { "heading": "3.3 SALIENT FEATURES", "text": "Overall, our protocols are very resource-light and straightforward from the perspective of the workers. Further, since we use Byzantine-robust aggregation, our protocols are provably fault-tolerant even if a large fraction of workers misbehave. This further lowers the requirements of a worker. We eleborate the features as follows.\nCommunication overhead. In applications, individual uplink speed from worker and servers is typically the main bottleneck, as it is typically much slower than downlink, and the bandwidth between servers can be very large. For our protocols, the time spent on the uplink is within a factor of 2 of the non-secure variants. Besides, our protocol only requires one round of communication, which is an advantage over interactive proofs.\nFault tolerance. The workers in Algorithm 1 and Algorithm 2 are completely stateless across multiple rounds and there is no offline phase required. This means that workers can start participating in the protocols simply by pulling the latest public model. Further, our protocols are unaffected if some workers drop out in the middle of a round. Unlike in (Bonawitz et al., 2017), there is no entanglement between workers and we don’t have unbounded recovery issues.\nCompatibility with local differential privacy. One byproduct of our protocol can be used to convert differentially private mechanisms, such as (Abadi et al., 2016) which only guarantees the privacy of the aggregated model, into the stronger locally differentially private mechanisms which guarantee user-level privacy.\nAlgorithm 1 Two-Server Secure Aggregation (Non-robust variant) Setup: n workers (non-Byzantine) with private vectors xi. Two non-colluding servers S1 and S2. Workers: (WorkerSecretSharing)\n1. split private xi into additive secret shares 〈xi〉 = {x(1)i ,x (2) i } (such that xi = x (1) i +x (2) i ) 2. send x(1)i to S1 and x (2) i to S2\nServers: 1. ∀ i, S1 collects x(1)i and S2 collects x (2) i\n2. (AggregationAndUpdate): (a) On S1 and S2, compute 〈 ∑n i=1 xi〉 locally\n(b) S2 sends its share of 〈 ∑n\ni=1 xi〉 to S1 (c) S1 reveals z = ∑n i=1 xi to everyone\nAlgorithm 2 Two-Server Secure Robust Aggregation (Distance-Based) Setup: n workers, αn of which are Byzantine. Two non-colluding servers S1 and S2. Workers: (WorkerSecretSharing)\n1. split private xi into additive secret shares 〈xi〉 = {x(1)i ,x (2) i } (such that xi = x (1) i +x (2) i ) 2. send x(1)i to S1 and x (2) i to S2\nServers: 1. ∀ i, S1 collects gradient x(1)i and S2 collects x (2) i\n2. (RobustWeightSelection): (a) For each pair (xi, xj) compute their Euclidean distance (i < j):\n• On S1 and S2, compute 〈xi − xj〉 = 〈xi〉 − 〈xj〉 locally • Use precomputed Beaver’s triples (see Algorithm 3) to compute the\ndistance ‖xi − xj‖2 (b) S2 perform robust aggregation rule p =Aggr({‖xi − xj‖2}i<j) (c) S2 secret-shares 〈p〉 with S1\n3. (AggregationAndUpdate): (a) On S1 and S2, use MPC multiplication to compute 〈 ∑n i=1 pixi〉 locally\n(b) S2 sends its share of 〈 ∑n\ni=1 pixi〉(2) to S1 (c) S1 reveals z = ∑n i=1 pixi to all workers.\nWorkers: 1. (WorkerPullModel): Collect z and update model locally\nOther Byzantine-robust oracles. We can also use some robust-aggregation rules which are not based on pair-wise distances such as Byzantine SGD (Alistarh et al., 2018). Since the basic structures are very similar to Algorithm 2, we put Algorithm 8 in the appendix.\nSecurity. The security of Algorithm 1 is straightforward as we previously discussed. 
The security of Algorithm 2 again relies on the separation of information between S1 and S2, with neither the workers nor S1 learning anything other than the aggregate z. We will next formally prove that this is true even in the presence of malicious workers.

Remark 1. Our proposed scheme leverages classic 2-party secret-sharing for addition and multiplication. These building blocks, however, were originally proposed for integers and quantized values, not real values. For floating point operations as used in machine learning, one can use the secure counterparts (Aliasgari et al., 2013) of the two operations. This is facilitated by deep learning training being robust to limited-precision training (Gupta et al., 2015) and additional noise (Neelakantan et al., 2016), with current models routinely trained in 16-bit precision. In contrast to (Bonawitz et al., 2017), which relies on advanced cryptographic primitives such as Diffie-Hellman key agreement that must remain exact and discrete, our protocols only use much simpler secure arithmetic operations (only addition and multiplication), which are tolerant to rounding errors. For the privacy implications of secret sharing when using floating point, which go beyond the scope of our work, we refer the reader to the information-theoretic analysis of Aliasgari et al. (2013)." }, { "heading": "4 THEORETICAL GUARANTEES", "text": "" }, { "heading": "4.1 EXACTNESS", "text": "In the following lemma we show that Algorithm 2 gives exactly the same result as the non-privacy-preserving version.

Lemma 2 (Exactness of Algorithm 2). The resulting z in Algorithm 2 is identical to the output of the non-privacy-preserving version of the used robust aggregation rule.

Proof. After secret-sharing x_i as 〈x_i〉 on the two servers, Algorithm 2 computes the local differences {〈x_i − x_j〉}_{i<j}. Using shared-value multiplication via Beaver's triples, S2 obtains the list of true Euclidean distances {‖x_i − x_j‖^2}_{i<j}. The result is fed to a distance-based robust aggregation rule oracle, all solely on S2. Therefore, the resulting indices {p_i}_i as used in z := ∑_{i=1}^n p_i x_i are identical to those of the non-privacy-preserving robust aggregation.

With the exactness of the protocol established, we next focus on the privacy guarantee." }, { "heading": "4.2 PRIVACY", "text": "We prove a probabilistic (information-theoretic) notion of privacy, which gives the strongest guarantee possible. Formally, we will show that the distribution of the secret does not change even after being conditioned on all observations made by all participants, i.e. each worker i, S1 and S2. This implies that the observations carry absolutely no information about the secret. Our results rely on the existence of simple additive secret-sharing protocols as discussed in the Appendix.

Each worker i only receives the final aggregate z at the end of the protocol and is not involved in any other manner. Hence no information can be leaked to them. We will now examine S1. The proofs below rely on Beaver's triples, which we summarize in the following lemma.

Lemma 3 (Beaver's triples). Suppose we secret-share 〈x〉 and 〈y〉 between S1 and S2 and want to compute xy on S2. There exists a protocol which enables such a computation using precomputed shares BV = (〈a〉, 〈b〉, 〈c〉) such that S1 does not learn anything and S2 only learns xy.

Due to the page limit, we defer the details about Beaver's triples and multiplying secret shares, as well as the proofs of the next two theorems, to the Appendix.
Theorem I (Privacy for S1). Let z = ∑_{i=1}^n p_i x_i, where {p_i}_{i=1}^n is the output of the Byzantine oracle or a vector of 1s (non-robust case). Let BV_{ij} = 〈a_{ij}, b_{ij}, c_{ij}〉 and BV_i^p = 〈a_i^p, b_i^p, c_i^p〉 be the Beaver's triples used in the multiplications. Let 〈·〉^{(1)} be the share of the secret-shared value 〈·〉 on S1. Then for all workers i,

P(x_i = x_i | {〈x_i〉^{(1)}, 〈p_i〉^{(1)}}_{i=1}^n, {BV_{ij}^{(1)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}, {〈‖x_i − x_j‖^2〉^{(1)}}_{i<j}, {BV_i^{p(1)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n, z) = P(x_i = x_i | z).

Note that the conditioned values are what S1 observes throughout the algorithm; {BV_{ij}^{(1)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j} and {BV_i^{p(1)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n are intermediate values of the shared-value multiplications.

For S2, the theorem to prove is slightly different, because S2 does not know the output of the aggregation z. In fact, S2 is more similar to an independent system which knows little about the underlying task, model weights, etc. We show that while S2 observes many intermediate values, it can learn no more than what can be inferred from the model distances.

Theorem II (Privacy for S2). Let {p_i}_{i=1}^n be the output of the Byzantine oracle or a vector of 1s (non-robust case). Let BV_{ij} = 〈a_{ij}, b_{ij}, c_{ij}〉 and BV_i^p = 〈a_i^p, b_i^p, c_i^p〉 be the Beaver's triples used in the multiplications. Let 〈·〉^{(2)} be the share of the secret-shared value 〈·〉 on S2. Then for all workers i,

P(x_i = x_i | {〈x_i〉^{(2)}, 〈p_i〉^{(2)}, p_i}_{i=1}^n, {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}, {〈‖x_i − x_j‖^2〉^{(2)}, ‖x_i − x_j‖^2}_{i<j}, {BV_i^{p(2)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n) = P(x_i = x_i | {‖x_i − x_j‖^2}_{i<j}). (1)

Note that the conditioned values are what S2 observes throughout the algorithm; {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j} and {BV_i^{p(2)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n are intermediate values of the shared-value multiplications.

The model distances indeed only leak similarity among the workers. Such similarity, however, does not tell S2 anything about the parameters themselves; in (Mhamdi et al., 2018), the leeway attack exploits distance-based rules precisely because they cannot distinguish two gradients that differ by evenly distributed noise from two gradients that differ strongly in a single parameter. This means the leaked information has low impact on privacy.

It is also worth noting that curious workers can only inspect others' values by learning from the public model/update. This is because in our scheme workers do not interact directly, and there is only one round of communication between servers and workers. So the only message a worker receives is the public model update." }, { "heading": "4.3 COMBINING WITH DIFFERENTIAL PRIVACY", "text": "While input privacy is our main goal, our approach is naturally compatible with other, orthogonal notions of privacy. Global differential privacy (DP) (Shokri & Shmatikov, 2015; Abadi et al., 2016; Chase et al., 2017) is mainly concerned with the privacy of the aggregated model, and whether it leaks information about the training data. On the other hand, local differential privacy (LDP) (Evfimievski et al., 2003; Kasiviswanathan et al., 2011) is a stronger notion which is also concerned with the training process itself. It requires that every communication transmitted by the worker does not leak information about their data. In general, it is hard to learn deep learning models satisfying LDP using iterate perturbation (which is the standard mechanism for DP) (Bonawitz et al., 2017).

Our non-robust protocol is naturally compatible with local differential privacy.
Consider the usual iterative optimization algorithm which in each round t performs

w_t ← w_{t−1} − η(x_t + ν_t), where x_t = (1/n) ∑_{i=1}^n x_{t,i}. (2)

Here x_t is the aggregate update, w_t are the model parameters, and ν_t is the noise added for DP (Abadi et al., 2016).

Theorem III (from DP to LDP). Suppose that the noise ν_t in (2) is sufficient to ensure that the set of model parameters {w_t}_{t∈[T]} satisfies (ε, δ)-DP for ε ≥ 1. Then, running (2) while using Alg. 1 to compute (x_t + ν_t) by securely aggregating {x_{1,t} + nν_t, x_{2,t}, . . . , x_{n,t}} satisfies (ε, δ)-LDP.

Unlike existing approaches, we do not face a tension between differential privacy, which relies on real-valued vectors, and cryptographic tools, which operate solely on discrete/quantized objects. This is because our protocols do not rely on cryptographic primitives like Diffie-Hellman key agreement, in contrast to e.g. (Bonawitz et al., 2017). In particular, the vectors x_i can be full-precision (real-valued) at the cost of a marginal rounding error, which can be tolerated by robust aggregation rules and stochastic gradient descent algorithms. Thus, our secure aggregation protocol can be integrated with a mechanism which has global DP properties, e.g. (Abadi et al., 2016), yielding local DP guarantees for the resulting mechanism." }, { "heading": "5 EMPIRICAL ANALYSIS OF OVERHEAD", "text": "We present an illustrative simulation on a local machine (i7-8565U) to demonstrate the overhead of our scheme. We use PyTorch with MPI to train a neural network of 1.2 million parameters on the MNIST dataset. We compare the following three settings: simple aggregation with 1 server, secure aggregation with 2 servers, and robust secure aggregation with 2 servers (with Krum (Blanchard et al., 2017)). The number of workers is always 5.

Figure 2 shows the time spent on all parts of training for one aggregation step. T_grad is the time spent on batch gradient computation; T_w2s refers to the time spent on uploading and downloading gradients; T_s2s is the time spent on communication between the servers. Note that the server-to-server communication could be further reduced by employing more efficient aggregation rules. Since the simulation is run on a local machine, the time spent on communication is underestimated. In the right-hand-side figure, we adjust the times by assuming that the worker-to-server link has a bandwidth of 100 Mbps and the server-to-server link one of 1 Gbps. Even in this scenario, we can see that the overhead of private aggregation is small. Furthermore, the additional overhead of the robustness module is moderate compared to standard training, even for realistic deep learning settings. For comparison, a zero-knowledge-proof-based approach needs to spend 0.03 seconds to encode a submission of 100 integers (Corrigan-Gibbs & Boneh, 2017)." }, { "heading": "6 LITERATURE REVIEW", "text": "Secure Aggregation. In the standard distributed setting with one server, Bonawitz et al. (2017) proposes a secure aggregation rule which is also fault-tolerant. They generate a shared secret key for each pair of users. The secret keys are used to construct masks on the input gradients so that the masks cancel each other after aggregation. To achieve fault tolerance, they employ Shamir's secret sharing. To deal with active adversaries, they use a public key infrastructure (PKI) as well as a second mask applied to the input.
A follow-up work (Mandal et al., 2018) minimizes the pairwise communication by outsourcing the key generation to two non-colluding cryptographic secret providers. However, both protocols are still not scalable, because each worker needs to compute a shared secret key and a noise mask for every other client. When recovering from failures, all live clients are notified and send their masks to the server, which introduces significant communication overhead. In contrast, workers in our scheme are freed from coordinating with other workers, which leads to a more scalable system.

Byzantine-Robust Aggregation/SGD. Blanchard et al. (2017) first proposes Krum and Multi-Krum for training machine learning models in the presence of Byzantine workers. Mhamdi et al. (2018) proposes a general enhancement recipe termed Bulyan. Alistarh et al. (2018) proves a robust SGD training scheme with optimal sample complexity and number of SGD computations. Muñoz-González et al. (2019) uses HMMs to detect and exclude Byzantine workers in federated learning. Yin et al. (2018) proposes median- and trimmed-mean-based robust algorithms which achieve optimal statistical performance. Robust learning on non-i.i.d. datasets has only been addressed recently (Li et al., 2019; Ghosh et al., 2019; He et al., 2020). Further, Xie et al. (2018) generalizes Byzantine attacks to manipulate the data transfer between workers and server, and Xie et al. (2019a) extends this to tolerate an arbitrary number of Byzantine workers.

Pillutla et al. (2019) proposes a robust aggregation rule, RFA, which is also privacy-preserving. However, it is only robust to data poisoning attacks, as it requires workers to compute aggregation weights according to the protocol. Corrigan-Gibbs & Boneh (2017) proposes a private and robust aggregation system based on secret-shared non-interactive proofs (SNIP). Despite the similarities between our setups, the generation of a SNIP proof on the client is expensive and grows with the dimension. Besides, this work offers limited robustness, as it only validates the range of the data.

Inference As A Service. An orthogonal line of work is inference as a service, or oblivious inference, where a user encrypts its own data and uploads it to the server for inference. The works (Gilad-Bachrach et al., 2016; Rouhani et al., 2017; Hesamifard et al., 2017; Liu et al., 2017; Mohassel & Zhang, 2017; Chou et al., 2018; Juvekar et al., 2018; Riazi et al., 2019) fall into the general category of 2-party computation (2PC). A number of issues have to be taken into account: the non-linear activations have to be replaced with MPC-friendly activations, and floating-point numbers have to be represented as integers. Ryffel et al. (2019) uses functional encryption on polynomial networks. Gilad-Bachrach et al. (2016) also have to adapt activations to polynomial activations and max pooling to scaled mean pooling.

Server-Aided MPC. One common setting for training machine learning models with MPC is the server-aided case (Mohassel & Zhang, 2017; Chen et al., 2019). In previous works, both the model weights and the data are stored as shared values, which in turn makes the inference process computationally very costly. Another issue is that only a limited number of operations (function evaluations) are supported on shared values. Therefore, approximating non-linear activation functions again introduces significant overhead. In our paper, the computation of gradients is local to the workers; only the resulting gradients are sent to the servers.
Thus no adaptations of the workers' neural network architectures for MPC are required." }, { "heading": "7 CONCLUSION", "text": "In this paper, we propose a novel secure and Byzantine-robust aggregation framework. To our knowledge, this is the first work to address these two key properties jointly. Our algorithm is simple and fault-tolerant and scales well with the number of workers. In addition, our framework works with any existing distance-based robust rule. Besides, the communication overhead of our algorithm is roughly bounded by a factor of 2, and the computation overhead, as shown in Algorithm 3, is marginal and can even be precomputed prior to training." }, { "heading": "A PROOFS", "text": "Theorem I (Privacy for S1). Let z = ∑_{i=1}^n p_i x_i, where {p_i}_{i=1}^n is the output of the Byzantine oracle or a vector of 1s (non-robust case). Let BV_{ij} = 〈a_{ij}, b_{ij}, c_{ij}〉 and BV_i^p = 〈a_i^p, b_i^p, c_i^p〉 be the Beaver's triples used in the multiplications. Let 〈·〉^{(1)} be the share of the secret-shared value 〈·〉 on S1. Then for all workers i,

P(x_i = x_i | {〈x_i〉^{(1)}, 〈p_i〉^{(1)}}_{i=1}^n, {BV_{ij}^{(1)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}, {〈‖x_i − x_j‖^2〉^{(1)}}_{i<j}, {BV_i^{p(1)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n, z) = P(x_i = x_i | z).

Proof. First, we use the independence of the Beaver's triples to simplify the conditioned terms.

• The Beaver's triples are data-independent. Since 〈a_i^p〉^{(2)} and 〈b_i^p〉^{(2)} only appear in {p_i − a_i^p, p_i − b_i^p}_i and are independent of all other variables, we can remove {p_i − a_i^p, p_i − b_i^p}_i from the conditioned terms.
• For the same reason, {BV_i^{p(1)}}_{i=1}^n are independent of all other variables and can be removed.
• The secret shares of the aggregation weights are 〈p_i〉^{(1)} := (p_i + η_i)/2 and 〈p_i〉^{(2)} := (p_i − η_i)/2, where η_i is random noise. Then {〈p_i〉^{(1)}}_i are independent of all other variables and can thus be removed.

Now the left-hand side (LHS) can be simplified as

LHS = P(x_i = x_i | {〈x_i〉^{(1)}}_{i=1}^n, {BV_{ij}^{(1)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}, 〈‖x_i − x_j‖^2〉^{(1)}}_{i<j}, z). (3)

There are further independence properties:

• The secret shares of the input 〈x_i〉 can be seen as generated by random noise ξ_i. Thus 〈x_i〉^{(1)} := (ξ_i + x_i)/2 and 〈x_i〉^{(2)} := (−ξ_i + x_i)/2 are independent of other quantities such as x_i. Besides, for all j ≠ i, 〈x_i〉^{(·)} and 〈x_j〉^{(·)} are independent.
• The Beaver's triples {BV_{ij}^{(1)}}_{i<j} and {BV_{ij}^{(2)}}_{i<j} are clearly independent. Since they are generated before the existence of the data, they are always independent of {x_j^{(·)}}_j.

Next, according to Beaver's multiplication (Algorithm 3),

〈‖x_i − x_j‖^2〉^{(1)} = c_{ij}^{(1)} + (x_i − x_j − a_{ij}) b_{ij}^{(1)} + (x_i − x_j − b_{ij}) a_{ij}^{(1)},

so we can remove this term from the condition:

LHS = P(x_i = x_i | {〈x_i〉^{(1)}}_{i=1}^n, z, {BV_{ij}^{(1)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}). (4)

By the independence between 〈x_i〉^{(·)} and BV_{ij}^{(·)}, we can further simplify the conditioned terms:

LHS = P(x_i = x_i | {〈x_i〉^{(1)}}_{i=1}^n, z, {BV_{ij}^{(1)}, 〈x_i − x_j − a_{ij}〉^{(2)}, 〈x_i − x_j − b_{ij}〉^{(2)}}_{i<j}). (5)

Since BV_{ij}^{(1)} and BV_{ij}^{(2)} are always independent of all other variables, we know that

LHS = P(x_i = x_i | {〈x_i〉^{(1)}}_{i=1}^n, z). (6)

For worker i and all j ≠ i, 〈x_i〉^{(·)} and 〈x_j〉^{(1)} are independent, hence

LHS = P(x_i = x_i | z).

Theorem II (Privacy for S2). Let {p_i}_{i=1}^n be the output of the Byzantine oracle or a vector of 1s (non-robust case).
Let BV_{ij} = 〈a_{ij}, b_{ij}, c_{ij}〉 and BV_i^p = 〈a_i^p, b_i^p, c_i^p〉 be the Beaver's triples used in the multiplications. Let 〈·〉^{(2)} be the share of the secret-shared value 〈·〉 on S2. Then for all workers i,

P(x_i = x_i | {〈x_i〉^{(2)}, 〈p_i〉^{(2)}, p_i}_{i=1}^n, {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}, {〈‖x_i − x_j‖^2〉^{(2)}, ‖x_i − x_j‖^2}_{i<j}, {BV_i^{p(2)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n) = P(x_i = x_i | {‖x_i − x_j‖^2}_{i<j}). (1)

Note that the conditioned values are what S2 observes throughout the algorithm; {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j} and {BV_i^{p(2)}, p_i − a_i^p, p_i − b_i^p}_{i=1}^n are intermediate values of the shared-value multiplications.

Proof. Similar to the proof of Theorem I, we first conclude:

• {p_i − a_i^p, p_i − b_i^p}_i and {BV_i^{p(2)}}_{i=1}^n can be dropped, because they are data-independent and no other terms depend on them.
• {〈p_i〉^{(2)}}_{i=1}^n is independent of the others, so it can be dropped.
• {p_i}_{i=1}^n can be inferred from {‖x_i − x_j‖^2}_{ij}, so it can also be dropped.
• By the definition of {〈‖x_i − x_j‖^2〉^{(2)}}_{ij}, it can be represented by {x_i^{(2)}}_i and {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}}_{i<j}.

Now the left-hand side (LHS) can be simplified as

LHS = P(x_i = x_i | {〈x_i〉^{(2)}}_{i=1}^n, {BV_{ij}^{(2)}, x_i − x_j − a_{ij}, x_i − x_j − b_{ij}, ‖x_i − x_j‖^2}_{i<j}). (7)

Because x_i is independent of {〈x_i〉^{(2)}}_{i=1}^n as well as of data-independent terms like {BV_{ij}^{(2)}, a_{ij}^{(1)}, b_{ij}^{(1)}}_{i<j}, we have

LHS = P(x_i = x_i | {‖x_i − x_j‖^2}_{i<j}).

Theorem III (from DP to LDP). Suppose that the noise ν_t in (2) is sufficient to ensure that the set of model parameters {w_t}_{t∈[T]} satisfies (ε, δ)-DP for ε ≥ 1. Then, running (2) while using Alg. 1 to compute (x_t + ν_t) by securely aggregating {x_{1,t} + nν_t, x_{2,t}, . . . , x_{n,t}} satisfies (ε, δ)-LDP.

Proof. Suppose that worker i ∈ [n] computes its gradient x_i based on data d_i ∈ D. For the sake of simplicity, let us assume that the aggregate model satisfies ε-DP; the proof is identical for the more relaxed notion of (ε, δ)-DP for ε ≥ 1. This implies that for any j ∈ [n] and d_j, d̃_j ∈ D,

P[ (1/n)(∑_{i=1}^n x_i(d_i)) + ν = y ] / P[ (1/n)(∑_{i≠j} x_i(d_i)) + (1/n) x_j(d̃_j) + ν = y ] ≤ e^ε, ∀y. (8)

Now, we examine the communication received by each server and measure how much information is revealed about any given worker j ∈ [n]. The values stored and seen are:

• S1: the secret share (x_1 + nν)^{(1)}, {x_i(d_i)^{(1)}}_{i=2}^n, and the sum of the other shares (x_1 + nν)^{(2)} + ∑_{i=2}^n x_i(d_i)^{(2)} = ((∑_{i=1}^n x_i(d_i)) + nν)^{(2)}.
• S2: the secret share (x_1 + nν)^{(2)}, {x_i(d_i)^{(2)}}_{i=2}^n.
• Worker i: z = (∑_{i=1}^n x_i(d_i)) + nν.

The equality above holds because our secret shares are linear. Now, the values seen by any worker satisfy ε-LDP directly by (8). For the servers, note that by the definition of our secret shares, we have, for any worker j,

x_j(d_j)^{(1)} is independent of x_j(d_j)
⇒ P[x_j(d_j)^{(1)} = y] = P[x_j(d_j)^{(1)} = ỹ], ∀y, ỹ
⇒ P[x_j(d_j)^{(1)} = y] = P[x_j(d̃_j)^{(1)} = y], ∀d_j, d̃_j ∈ D.

A similar statement holds for the second share. This proves that the values computed/seen by the workers or servers satisfy ε-LDP." }, { "heading": "B NOTES ON SECURITY", "text": "B.1 BEAVER'S MPC PROTOCOL

In this section, we briefly introduce Beaver (1991)'s classic implementations of addition 〈x + y〉 and multiplication 〈xy〉 given additive secret-shared values 〈x〉 and 〈y〉, where each party i holds x_i and y_i. The algorithm for multiplication is given in Algorithm 3.

Algorithm 3 Beaver (1991)'s MPC Protocol
input: 〈x〉; 〈y〉; Beaver's triple (〈a〉, 〈b〉, 〈c〉) s.t. c = ab
output: 〈z〉 s.t. z = xy
for each party i do
  locally compute x_i − a_i and y_i − b_i, and broadcast them to all parties
  collect all shares and reveal x − a = ∑_i (x_i − a_i), y − b = ∑_i (y_i − b_i)
  compute z_i := c_i + (x − a) b_i + (y − b) a_i
end for
The first party updates z_1 := z_1 + (x − a)(y − b)

Addition. The secret-shared value of the sum, 〈x + y〉, is obtained by simply having each party i locally compute x_i + y_i.

Multiplication. Assume we already have three secret-shared values called a triple, 〈a〉, 〈b〉, and 〈c〉, such that c = ab. Then note that if each party broadcasts x_i − a_i and y_i − b_i, then each party i can compute x − a and y − b (so these values are publicly known), and hence compute

z_i := c_i + (x − a) b_i + (y − b) a_i.

Additionally, one party (chosen arbitrarily) adds the public value (x − a)(y − b) to its share, so that, summing all the shares up, the parties get

∑_i z_i = c + (x − a) b + (y − b) a + (x − a)(y − b) = xy,

and so they have a secret sharing 〈z〉 of xy.
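A runnable toy version of Algorithm 3 for two parties over the reals (ignoring the modular wrap-around of Appendix B.2) might look as follows; the share containers and all names are our own assumptions.

import random

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared <x>, <y> using a triple with c = a * b.
    Each *_sh is a list [share of party 1, share of party 2]."""
    x_minus_a = sum(x_sh) - sum(a_sh)   # public after both parties broadcast
    y_minus_b = sum(y_sh) - sum(b_sh)
    z_sh = [c_sh[p] + x_minus_a * b_sh[p] + y_minus_b * a_sh[p] for p in (0, 1)]
    z_sh[0] += x_minus_a * y_minus_b    # one party adds the public correction
    return z_sh                          # sum(z_sh) == x * y

def share(v):
    """Toy additive sharing of a scalar."""
    r = random.uniform(-1.0, 1.0)
    return [v - r, r]

x, y, a, b = 3.0, 4.0, 0.7, -1.3
z = beaver_mul(share(x), share(y), share(a), share(b), share(a * b))
assert abs(sum(z) - x * y) < 1e-9

Expanding sum(z_sh) gives c + (x − a)b + (y − b)a + (x − a)(y − b) = xy, exactly the derivation above.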
Algorithm 3 Beaver (1991)'s MPC Protocol
input: 〈x〉; 〈y〉; Beaver's triple (〈a〉, 〈b〉, 〈c〉) s.t. c = ab
output: 〈z〉 s.t. z = xy
for all parties i do
  locally compute x_i − a_i and y_i − b_i and then broadcast them to all parties
  collect all shares and reveal x − a = Σ_i(x_i − a_i), y − b = Σ_i(y_i − b_i)
  compute z_i := c_i + (x − a)b_i + (y − b)a_i
end for
The first party updates z_1 := z_1 + (x − a)(y − b)

Addition. The secret-shared form of the sum, 〈x + y〉, is obtained by simply having each party i locally compute x_i + y_i.

Multiplication. Assume we already have three secret-shared values, called a triple: 〈a〉, 〈b〉, and 〈c〉 such that c = ab. Then note that if each party broadcasts x_i − a_i and y_i − b_i, each party i can compute x − a and y − b (so these values are publicly known), and hence compute

z_i := c_i + (x − a)b_i + (y − b)a_i

Additionally, one party (chosen arbitrarily) adds the public value (x − a)(y − b) to its share so that, summing all the shares up, the parties get

Σ_i z_i = c + (x − a)b + (y − b)a + (x − a)(y − b) = xy

and so they have a secret sharing 〈z〉 of xy.

The generation of Beaver's triples. There are many different implementations of the offline phase of the MPC multiplication, for example semi-homomorphic encryption based implementations (Keller et al., 2018) or oblivious transfer-based implementations (Keller et al., 2016). Since their security and performance have been demonstrated, we may assume the Beaver's triples are ready for use at the initial step of our protocol.
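As a sanity check on the arithmetic above, here is a minimal plaintext Python sketch of Beaver's multiplication over two-party additive shares. It is illustrative only: shares live in Z_Q for an arbitrary modulus Q (our assumption; the paper's sharing of bounded real vectors is described in B.2), and the triple comes from a trusted dealer standing in for the offline phase.

```python
import random

Q = 2**61 - 1  # modulus for the additive shares (illustrative choice)

def share(v):
    """Split v into two additive shares v = s1 + s2 (mod Q)."""
    s1 = random.randrange(Q)
    return s1, (v - s1) % Q

def beaver_multiply(x_shares, y_shares, triple_shares):
    """Multiply two secret-shared values using one Beaver triple.

    Each party only ever sees its own shares plus the public
    openings (x - a) and (y - b), which reveal nothing about x, y.
    """
    (a1, a2), (b1, b2), (c1, c2) = triple_shares
    x1, x2 = x_shares
    y1, y2 = y_shares
    # Each party broadcasts its share of x - a and y - b; the sums are public.
    e = (x1 - a1 + x2 - a2) % Q  # e = x - a
    f = (y1 - b1 + y2 - b2) % Q  # f = y - b
    # Local computation of the output shares.
    z1 = (c1 + e * b1 + f * a1 + e * f) % Q  # party 1 adds the public e*f term
    z2 = (c2 + e * b2 + f * a2) % Q
    return z1, z2

# Trusted-dealer triple generation (stands in for the offline phase).
a, b = random.randrange(Q), random.randrange(Q)
triple = (share(a), share(b), share(a * b % Q))

x, y = 1234, 5678
z1, z2 = beaver_multiply(share(x), share(y), triple)
assert (z1 + z2) % Q == x * y
```

Expanding z1 + z2 reproduces c + (x − a)b + (y − b)a + (x − a)(y − b) = xy, matching the derivation above.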
B.2 NOTES ON OBTAINING A SECRET SHARE

Suppose that we want to secret share a bounded real vector x ∈ (−B, B]^d for some B ≥ 0. Then we sample a random vector ξ uniformly from (−B, B]^d. This is easily done by sampling each coordinate independently from (−B, B]. The secret shares then become (ξ, x − ξ). Since ξ is drawn from a uniform distribution over (−B, B]^d, the distribution of x − ξ conditioned on x is still uniform over (−B, B]^d and (importantly) independent of x. All arithmetic operations are then carried out modulo [−B, B], i.e., B + 1 ≡ −B + 1 and −B − 1 ≡ B − 1. This simple scheme ensures information-theoretic input privacy for continuous vectors.

The scheme described above requires access to true randomness, i.e., the ability to sample uniformly from (−B, B]. We make this assumption to simplify the proofs and the presentation. We note that differential privacy techniques such as (Abadi et al., 2016) also assume access to a similar source of true randomness. In practice, however, this would be replaced with a pseudo-random generator (PRG) (Blum & Micali, 1984; Yao, 1982).

B.3 COMPUTATIONAL INDISTINGUISHABILITY

Let {X_n}, {Y_n} be sequences of distributions indexed by a security parameter n (like the length of the input). {X_n} and {Y_n} are computationally indistinguishable if for every polynomial-time A and polynomially-bounded ε, and sufficiently large n,

|Pr[A(X_n) = 1] − Pr[A(Y_n) = 1]| ≤ ε(n)   (9)

If a pseudorandom generator, instead of true randomness, is used in Appendix B.2, then the shares are indistinguishable from a uniform distribution over a field of the same length. Thus in Theorem I and Theorem II, the secret shares can be replaced by an independent random variable of uniform distribution with negligible change in probability.

B.4 NOTES ON THE SECURITY OF S2

Theorem II proves that S2 does not learn anything besides the pairwise distances between the various models. While this does leak some information about the models, S2 cannot use this information to reconstruct any x_i. This is because the pairwise distances are invariant to translations, rotations, and shuffling of the coordinates of {x_i}. This remains true even if S2 additionally learns the global model." }, { "heading": "C DATA OWNERSHIP DIAGRAM", "text": "In Figure 3, we show a diagram of data ownership to illustrate the data transmitted among workers and servers. Note that the Beaver's triples are already local to each server, so no extra communication is needed." }, { "heading": "D THREE SERVER MODEL", "text": "In this section, we introduce a robust algorithm with an information-theoretic privacy guarantee, at the cost of more communication between servers. We avoid exposing pairwise distances to S2 by adding to the system an additional non-colluding server, the crypto provider (Wagh et al., 2019). A crypto provider does not receive shares of gradients, but only assists the other servers in the multiparty computation. Now our pipeline for one aggregation becomes: 1) the workers secret-share their gradients into 2 parts; 2) the workers send their shares to S1 and S2 respectively; 3) S1, S2, and the crypto provider compute the robust aggregation rule using crypto primitives; 4) the servers reveal the output of the aggregation and send it back to the workers.

Wagh et al. (2019) use the crypto provider to construct efficient protocols for the training and inference of neural networks. In their setup, workers secret-share their samples to the servers, and then the servers securely compute a neural network. In contrast, we consider the federated learning setup where workers compute the gradients and servers perform a multiparty computation of a (robust) aggregation function. The aggregated neural network is public to all. As the (robust) aggregation function is much simpler than a neural network, our setup is more computationally efficient. Note that we can directly plug our secure robust aggregation rule into their pipeline and ensure both robustness and privacy preservation in their setting.

The crypto provider enables the servers to compute a variety of functions on secret-shared values:

• MATMUL: given 〈x〉 and 〈y〉, return 〈x^⊤ y〉. The crypto provider generates and distributes the Beaver's triples for the multiplications.
• PRIVATECOMPARE: given 〈x〉 and a number r, reveal a bit (x > r) to S1 and S2; see Algorithm 5. This can be directly used to compare 〈x〉 and 〈y〉 by comparing 〈x − y〉 and 0.
• SELECTSHARE: given 〈x〉, 〈y〉, and α ∈ {0, 1}, return 〈(1 − α)x + αy〉; see Algorithm 6. This function can easily be extended to select one of more quantities.

The combination of PRIVATECOMPARE and SELECTSHARE enables sorting scalar numbers, like distances. Thus we can use these primitives to compute Krum on secret-shared values. For other aggregation rules like RFA (Pillutla et al., 2019), we need other primitives like division. We refer to Wagh et al. (2019) for more primitives such as division, max pooling, and ReLU. We leave the details of the three aforementioned primitives to Appendix D.1.

Three-server MultiKrum. In Algorithm 4 we present a three-server MULTIKRUM algorithm. First, S1, S2, and S3 compute the pairwise distances {〈d_ij〉}_ij but, unlike Algorithm 2, do not reveal them. For each i, we use PRIVATECOMPARE and SELECTSHARE to sort {〈d_ij〉}_j by magnitude. Then we compute 〈score_i〉 using SELECTSHARE. Similarly, we sort {〈score_i〉}_i and get a selection vector α for the workers with the lowest scores. Finally, we open 〈α · X〉 and reveal Σ_{i∈I} α_i x_i to everyone.
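For reference, the following is a minimal plaintext sketch of the Multi-Krum rule that Algorithm 4 evaluates on secret shares. It is a sketch under assumptions: the score over the n − f − 2 closest squared distances follows Blanchard et al. (2017), and the selection count m is the caller's choice; the secure version performs the same steps share-wise.

```python
import numpy as np

def multi_krum(xs, f, m):
    """Plaintext Multi-Krum: average the m gradients with lowest Krum scores.

    xs: (n, d) array of worker gradients; f: max number of Byzantine
    workers; m: number of gradients to keep (requires n - f - 2 >= 1).
    """
    n = len(xs)
    # Pairwise squared distances (the quantity computed on shares in Alg. 4).
    d2 = ((xs[:, None, :] - xs[None, :, :]) ** 2).sum(-1)
    scores = np.empty(n)
    for i in range(n):
        others = np.delete(d2[i], i)  # drop the zero self-distance
        # Sum of the n - f - 2 smallest distances to the other workers.
        scores[i] = np.sort(others)[: n - f - 2].sum()
    selected = np.argsort(scores)[:m]  # the selection vector alpha in Alg. 4
    return xs[selected].mean(axis=0), selected
```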
We remark that sorting the {〈d_ij〉}_j does not leak anything about their absolute or relative magnitude. This is because: 1) S3 picks i, j via permutations π1, π2 which are unknown to S1 and S2; 2) S3 encodes i, j into a selection vector α_ij and secret-shares it to S1 and S2; 3) S1 and S2 only observe secret-shared selection vectors, which are computationally indistinguishable from random strings. Thus S1 and S2 learn nothing more than the outcome of MultiKrum. On the other hand, PRIVATECOMPARE guarantees that the crypto provider S3 does not learn the results of the comparisons. So S3 also knows nothing more than the output of MultiKrum. Thus Algorithm 4 enjoys information-theoretic security.

D.1 THREE SERVER MODEL IN SECURENN

Changes in the notations. Algorithm 5 and Algorithm 6 from SecureNN use different notations. For example, they use 〈w〉^p_j to represent share j of w in the ring Z_p. Moreover, Algorithm 5 secret-shares each bit of a number x of length ℓ, written {〈x[i]〉^p}_{i∈[ℓ]}. The symbol ⊕ denotes the XOR sum.

Algorithm 4 Three-Server MULTIKRUM
Input: S1 and S2 hold {〈x_i〉^(0)}_i and {〈x_i〉^(1)}_i respectively; f, m
Output: Σ_{i∈I} α_i x_i, where I is the set selected by MULTIKRUM

On S1, S2, and S3:
For i in 1 . . . n do
  For j ≠ i in 1 . . . n do
    Compute 〈x_i − x_j〉 locally on S1 and S2
    Call F_MATMUL({S1, S2}, S3) with (〈x_i − x_j〉, 〈x_i − x_j〉) and get 〈d_ij〉 = 〈‖x_i − x_j‖²〉
  End for
End for
Let d = [d_ij]_{i<j} be the vector of distances and X = [x_1; . . . ; x_n]

On S3:
Let π1 and π2 be two random permutation functions.
For i in π1(1 . . . n) do
  For j ≠ i in π2(1 . . . n) do
    Let α_ij be the selection vector of d whose entry for d_ij is 1 and all others are 0.
    Compute 〈α_ij〉, send 〈α_ij〉^(0) to S1 and 〈α_ij〉^(1) to S2.
    Call Algorithm 6 with input (〈α_ij〉, {〈d_ij〉}_{i<j}) and get 〈d′_ij〉 (with d_ij = 〈d′_ij〉^(0) + 〈d′_ij〉^(1))
  End for
  Sort {〈d′_ij〉}_j using Algorithm 5 to compute 〈score_i〉 = Σ_{i→j} 〈d′_ij〉
End for
Sort {〈score_i〉}_i using Algorithm 5 and record the m indices I with the lowest scores.
Let α be a selection vector of length n whose entries are 1 for all i ∈ I and 0 otherwise.
Compute 〈α〉, send 〈α〉^(0) to S1 and 〈α〉^(1) to S2.
Compute 〈α · X〉 using F_MATMUL.

On S1 and S2 (let k = 0 for S1 and k = 1 for S2):
For ĩ in 1 . . . n do
  For j̃ in 1 . . . (n − 1) do
    Receive 〈α_**〉^(k) from S3
    Call Algorithm 6 with input (〈α_**〉, {〈d_ij〉}_{i<j}) and get 〈d′_{*j̃}〉.
  End for
  Sort {〈d′_{*j̃}〉}_{j̃} using Algorithm 5 to compute 〈score_ĩ〉 = Σ_{ĩ→j̃} 〈d′_{*j̃}〉
End for
Sort {〈score_ĩ〉}_ĩ using Algorithm 5. Receive 〈α〉^(k). Compute 〈α · X〉 using F_MATMUL.
S1 and S2: open 〈α · X〉 to reveal Σ_{i∈I} α_i x_i

Algorithm 5 PRIVATECOMPARE Π_PC({S1, S2}, S3) (Wagh et al., 2019, Algo. 3)
Input: S1 and S2 hold {〈x[i]〉^p_0}_{i∈[ℓ]} and {〈x[i]〉^p_1}_{i∈[ℓ]}, respectively, a common input r (an ℓ-bit integer), and a common random bit β. The superscript p is a small prime number like 67.
Output: S3 gets the bit β ⊕ (x > r)
Common randomness: S1 and S2 hold ℓ common random values s_i ∈ Z*_p for all i ∈ [ℓ] and a random permutation π over ℓ elements. S1 and S2 additionally hold ℓ common random values u_i ∈ Z*_p.
On each server S_{j+1}, j ∈ {0, 1}:
Let t = r + 1 mod 2^ℓ
for i = ℓ, ℓ − 1, . . . , 1 do
  if β = 0 then
    〈w_i〉^p_j = 〈x[i]〉^p_j + j·r[i] − 2·r[i]·〈x[i]〉^p_j
    〈c_i〉^p_j = j·r[i] − 〈x[i]〉^p_j + j + Σ^ℓ_{k=i+1} 〈w_k〉^p_j
  else if β = 1 AND r ≠ 2^ℓ − 1 then
    〈w_i〉^p_j = 〈x[i]〉^p_j + j·t[i] − 2·t[i]·〈x[i]〉^p_j
    〈c_i〉^p_j = −j·t[i] + 〈x[i]〉^p_j + 1 − j + Σ^ℓ_{k=i+1} 〈w_k〉^p_j
  else
    If i ≠ 1, 〈c_i〉^p_j = (1 − j)(u_i + 1) − j·u_i, else 〈c_i〉^p_j = (−1)^j · u_i.
  end if
end for
Send {〈d_i〉^p_j}_i = π({s_i·〈c_i〉^p_j}_i) to S3

On server S3:
For all i ∈ [ℓ], S3 computes d_i = Reconst^p(〈d_i〉^p_0, 〈d_i〉^p_1) and sets β′ = 1 iff ∃ i ∈ [ℓ] such that d_i = 0.
S3 outputs β′

Algorithm 6 SELECTSHARE Π_SS({S1, S2}, S3) (Wagh et al., 2019, Algo. 2)
Input: S1 and S2 hold (〈α〉^L_0, 〈x〉^L_0, 〈y〉^L_0) and (〈α〉^L_1, 〈x〉^L_1, 〈y〉^L_1), respectively.
Output: S1 and S2 get 〈z〉^L_0 and 〈z〉^L_1 respectively, where z = (1 − α)x + αy.
Common randomness: S1 and S2 hold shares of 0 over Z_L, denoted u_0 and u_1.
For j ∈ {0, 1}, S_{j+1} computes 〈w〉^L_j = 〈y〉^L_j − 〈x〉^L_j.
S1, S2, S3 call F_MATMUL({S1, S2}, S3) with S_{j+1}, j ∈ {0, 1}, having input (〈α〉^L_j, 〈w〉^L_j), and S1, S2 learn 〈c〉^L_0 and 〈c〉^L_1, respectively.
For j ∈ {0, 1}, S_{j+1} outputs 〈z〉^L_j = 〈x〉^L_j + 〈c〉^L_j + u_j" }, { "heading": "E EXAMPLE: TWO-SERVER PROTOCOL WITH BYZANTINESGD ORACLE", "text": "We can replace MultiKrum with ByzantineSGD (Alistarh et al., 2018). To fit it into our protocol, we make some minor modifications but still guarantee that the output is the same. The core part of (Alistarh et al., 2018) is listed in Algorithm 7.

Algorithm 7 ByzantineSGD (Alistarh et al., 2018)
input: I the set of good workers; {A_i}_{i∈[m]}; {‖B_i − B_j‖}_{i<j}; {‖∇_{k,i} − ∇_{k,j}‖}_{i<j} (i, j ∈ [m]); thresholds T_A, T_B > 0
output: subset of good workers S
A_med := median{A_1, . . . , A_m};
B_med ← B_i where i ∈ [m] is any machine s.t. |{j ∈ [m] : ‖B_j − B_i‖ ≤ T_B}| > m/2;
∇_med ← ∇_{k,i} where i ∈ [m] is any machine s.t. |{j ∈ [m] : ‖∇_{k,j} − ∇_{k,i}‖ ≤ 2ν}| > m/2;
S ← {i ∈ I : |A_i − A_med| ≤ T_A ∧ ‖B_i − B_med‖ ≤ T_B ∧ ‖∇_{k,i} − ∇_med‖ ≤ 4ν};
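To make Algorithm 7 concrete, a minimal plaintext sketch of its median-based filtering follows (numpy; the tie-breaking for B_med and ∇_med, and the fallback when no majority center exists, are simplifying assumptions of this sketch).

```python
import numpy as np

def byzantine_sgd_select(A, B, grads, T_A, T_B, nu):
    """Plaintext sketch of the filtering in Algorithm 7.

    A: (m,) scalar statistics; B: (m, d) accumulated vectors;
    grads: (m, d) current gradients. Returns indices kept as 'good'.
    """
    m = len(A)
    A_med = np.median(A)

    def majority_center(vs, thresh):
        # Any point within `thresh` of more than half of the points.
        for i in range(m):
            if np.sum(np.linalg.norm(vs - vs[i], axis=1) <= thresh) > m / 2:
                return vs[i]
        return vs[0]  # fallback; in the good regime a center always exists

    B_med = majority_center(B, T_B)
    g_med = majority_center(grads, 2 * nu)
    return [i for i in range(m)
            if abs(A[i] - A_med) <= T_A
            and np.linalg.norm(B[i] - B_med) <= T_B
            and np.linalg.norm(grads[i] - g_med) <= 4 * nu]
```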
The main algorithm can be summarized in Algorithm 8; the red lines highlight the changes. Different from Multi-Krum (Blanchard et al., 2017), Alistarh et al. (2018) use state in their algorithm. As a result, the servers need to keep track of this state.

Algorithm 8 Two-Server Secure ByzantineSGD
Setup:
• n workers, at most α percent of which are Byzantine.
• Two non-colluding servers S1 and S2.
• ByzantineSGD oracle: returns an index set S,
  – with thresholds T_A and T_B,
  – and oracle state A^old_i, 〈B^old_i〉 for each worker i.
Workers:
1. (WorkerSecretSharing):
  (a) randomly split the private x_i into additive secret shares 〈x_i〉 = {x_i^(1), x_i^(2)} (such that x_i = x_i^(1) + x_i^(2));
  (b) send x_i^(1) to S1 and x_i^(2) to S2.
Servers:
1. For all i, S1 collects the gradient share x_i^(1) and S2 collects x_i^(2).
  (a) Use Beaver's triples to compute A_i := 〈〈x_i〉, 〈w − w_0〉〉_inner + A^old_i;
  (b) 〈B_i〉 := 〈x_i〉 + 〈B^old_i〉.
2. (RobustSubsetSelection):
  (a) For each pair (i, j) of gradients (i < j), compute their distance:
    • on S1 and S2, compute 〈B_i − B_j〉 = 〈B_i〉 − 〈B_j〉 locally;
    • use a precomputed Beaver's triple and Algorithm 3 to compute the distance ‖B_i − B_j‖²;
    • on S1 and S2, compute 〈x_i − x_j〉 = 〈x_i〉 − 〈x_j〉 locally;
    • use a precomputed Beaver's triple and Algorithm 3 to compute the distance ‖x_i − x_j‖²_2.
  (b) S2 performs ByzantineSGD: S = ByzantineSGD({A_i}_i, {‖B_i − B_j‖}_{i<j}, {‖x_i − x_j‖}_{i<j}, T_A, T_B); if |S| < 2, exit; convert S to a weight vector p of length n.
  (c) S2 secret-shares 〈p〉 with S1.
3. (AggregationAndUpdate):
  (a) on S1 and S2, use MPC multiplication to compute 〈Σ_{i=1}^n p_i x_i〉 locally;
  (b) S2 sends its share 〈Σ_{i=1}^n p_i x_i〉^(2) to S1;
  (c) S1 reveals z = Σ_{i=1}^n p_i x_i to all workers;
  (d) S2 updates A^old_i ← A_i, 〈B^old_i〉 ← 〈B_i〉.
Workers:
1. (WorkerPullModel): collect z and update the model w ← w + z locally." }, { "heading": "F ADDITIONAL EXPERIMENTS", "text": "We benchmark the performance of our two-server protocol against the one-server protocol on the Google Kubernetes Engine. We create a cluster of 8 nodes (machine-type=e2-standard-2), where the 2 servers are deployed on different nodes and the workers are deployed evenly onto the remaining 6 nodes. We run experiments with 5, 10, 20, and 50 workers, with a large model of 25.6 million parameters (similar to ResNet-56) and a small model of 1.2 million parameters. We only record the time spent on communication and aggregation (Krum). We run each experiment three times and take the average. The results are shown in Figure 4.

Scaling with dimensions. In Figure 4a, we compute the ratio of time spent on the large model over the small model. We can see that the ratio for the two-server model is very close to the ideal ratio, which suggests that it scales linearly with the dimension. This is expected because Krum scales linearly with the dimension. For aggregation rules based on high-dimensional robust mean estimation, we can remove the dependence on d. We leave it as future work to incorporate more efficient robust aggregation functions.

Scaling with the number of workers. In Figure 4b, we can see that the time spent on both the one-server and two-server models grows as O(n²). However, we note that this complexity comes from the aggregation rule we use, namely Krum, not from our core protocol. For other aggregation rules like ByzantineSGD (Alistarh et al., 2018), the complexity of the aggregation rule is O(n) and we can observe better scaling behavior. We leave it as future work to incorporate and benchmark more efficient robust aggregation rules.

Setups. Note that in our experiments, the worker-to-server and server-to-server communication have the same bandwidth of 1 Gb/s. In a realistic application, the link between servers can be InfiniBand, and the bandwidth between worker and server is typically smaller. Thus, this protocol will be more efficient than we have observed here." } ]
2020
null
SP:9caede157f5546829e12c95bd290a760c1aa2dce
[ "Basically, it seems that the proposed method is interesting and meaningful. The scheduling problem in this paper is based on the analogy to a server having a water pitcher, and the deep reinforcement learning approach for the scheduling problem has been designed. However, the scheduling problem in wireless networks is a very famous issue. Of course, applying DRF to it is quite interesting. However, the authors need to describe the conventional well-known scheduling algorithms and compare them with the proposed scheme (now, the current paper only focuses on applying the DRF to the scheduling and evaluating its performance in aspects of an optimization problem.). Further, typically, in scheduling problems, efficiency (total data rate) and fairness are the key factors and it is needed to describe the relationship between these conventional performance metrics and the satisfaction probability. " ]
In this paper, we investigate the problem of scheduling and resource allocation over a time varying set of clients with heterogeneous demands. This problem appears when service providers need to serve traffic generated by users with different classes of requirements. We thus have to allocate bandwidth resources over time to efficiently satisfy these demands within a limited time horizon. This is a highly intricate problem, and solutions may involve tools stemming from diverse fields like combinatorics and optimization. Recent work has successfully proposed Deep Reinforcement Learning (DRL) solutions, although not yet for heterogeneous user traffic. We propose a deep deterministic policy gradient algorithm combining state-of-the-art techniques, namely Distributional RL and Deep Sets, to train a model for heterogeneous traffic scheduling. We test on a diverse set of scenarios with different time dependence dynamics, users' requirements, and available resources, demonstrating consistent results. We evaluate the algorithm in a wireless communication setting and show significant gains against state-of-the-art conventional algorithms from combinatorics and optimization (e.g. Knapsack, Integer Linear Programming, Frank-Wolfe).
[]
[ { "authors": [ "Marc G. Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In International Conference on Machine Learning, ICML, Syndey, Australia,", "year": 2017 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Journal of mathematics and mechanics,", "year": 1957 }, { "authors": [ "Arthur Charpentier", "Romuald Elie", "Carl Remlinger" ], "title": "Reinforcement learning in economics and finance", "venue": "arXiv preprint arXiv:2003.10014,", "year": 2020 }, { "authors": [ "Sandeep Chinchali", "Pan Hu", "Tianshu Chu", "Manu Sharma", "Manu Bansal", "Rakesh Misra", "Marco Pavone", "Sachin Katti" ], "title": "Cellular network traffic scheduling with deep reinforcement learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Will Dabney", "Georg Ostrovski", "David Silver", "Rémi Munos" ], "title": "Implicit quantile networks for distributional reinforcement learning", "venue": "arXiv preprint arXiv:1806.06923,", "year": 2018 }, { "authors": [ "Will Dabney", "Mark Rowland", "Marc G Bellemare", "Rémi Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, USA,", "year": 2018 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Matteo Hessel", "Ian Osband", "Alex Graves", "Volodymyr Mnih", "Remi Munos", "Demis Hassabis" ], "title": "Noisy networks for exploration", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marguerite Frank", "Philip Wolfe" ], "title": "An algorithm for quadratic programming", "venue": "Naval Research Logistics Quarterly,", "year": 1956 }, { "authors": [ "C. Stratton" ], "title": "Jaquette. Markov decision processes with a new optimality criterion: Discrete time", "venue": "The Annals of Statistics,", "year": 1973 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L Littman", "Anthony R Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artificial intelligence,", "year": 1998 }, { "authors": [ "Jens Kober", "J. Bagnell", "Jan Peters" ], "title": "Reinforcement learning in robotics: A survey", "venue": "The International Journal of Robotics Research, 32:1238–1274,", "year": 2013 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John E Mitchell" ], "title": "Branch-and-cut algorithms for combinatorial optimization problems", "venue": "Handbook of applied optimization,", "year": 2002 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Oshri Naparstek", "Kobi Cohen" ], "title": "Deep multi-user reinforcement learning for distributed dynamic spectrum access", "venue": "IEEE Transactions on Wireless Communications,", "year": 2018 }, { "authors": [ "Yasar Sinan Nasir", "Dongning Guo" ], "title": "Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks", "venue": "IEEE Journal on Selected Areas in Communications,", "year": 2019 }, { "authors": [ "Mohammadreza Nazari", "Afshin Oroojlooy", "Lawrence Snyder", "Martin Takác" ], "title": "Reinforcement learning for solving the vehicle routing problem", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Albert H. Nuttall" ], "title": "Some integrals involving the q m function", "venue": "IEEE Transactions on Information Theory, 21(1):95–96,", "year": 1975 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "Proceedings of the 31st International Conference on Machine Learning, PMLR, 32(1):387–395,", "year": 2014 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Christopher C. Tan", "Norman C. Beaulieu" ], "title": "On first-order markov modeling for the rayleigh fading channel", "venue": "IEEE Transactions on Communications,", "year": 2000 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Manzil Zaheer", "Satwik Kottur", "Siamak Ravanbakhsh", "Barnabas Poczos", "Russ R Salakhutdinov", "Alexander J Smola" ], "title": "URL http://papers.nips.cc/paper/ 6931-deep-sets.pdf", "venue": "Deep sets", "year": 2017 }, { "authors": [ "Karl Johan Åström" ], "title": "Optimal control of markov processes with incomplete state information", "venue": "Journal of Mathematical Analysis and Applications,", "year": 1965 } ]
[ { "heading": null, "text": "In this paper, we investigate the problem of scheduling and resource allocation over a time varying set of clients with heterogeneous demands. This problem appears when service providers need to serve traffic generated by users with different classes of requirements. We thus have to allocate bandwidth resources over time to efficiently satisfy these demands within a limited time horizon. This is a highly intricate problem and solutions may involve tools stemming from diverse fields like combinatorics and optimization. Recent work has successfully proposed Deep Reinforcement Learning (DRL) solutions, although not yet for heterogeneous user traffic. We propose a deep deterministic policy gradient algorithm combining state of the art techniques, namely Distributional RL and Deep Sets, to train a model for heterogeneous traffic scheduling. We test on diverse number scenarios with different time dependence dynamics, users’ requirements, and resources available, demonstrating consistent results. We evaluate the algorithm on a wireless communication setting and show significant gains against state-of-theart conventional algorithms from combinatorics and optimization (e.g. Knapsack, Integer Linear Programming, Frank-Wolfe)." }, { "heading": "1 INTRODUCTION", "text": "User scheduling (i.e., which user to be served when) and associated resource allocation (i.e., which and how many resources should be assigned to scheduled users) are two long-standing fundamental problems in communications, which have recently attracted vivid attention in the context of next generation communication systems (5G and beyond). The main reason is the heterogeneity in users’ traffic and the diverse Quality of Service (QoS) requirements required by the users. The goal of this paper is to design a scheduler and resource assigner, which takes as inputs the specific constraints of the traffic/service class each user belongs in order to maximize the number of satisfied users.\nThis problem is hard to solve since we have at least two main technical challenges: (i) except for some special cases, there is no simple closed-form expression for the problem and a fortiori for its solution; (ii) the problem solving algorithm has to be scalable with the number of users. Current solutions rely on combinatorial approaches or suboptimal solutions, which seem to work satisfactorily in specific scenarios, failing though to perform well when the number of active users is large. This motivates the quest for alternative solutions; we propose to resort to Deep Reinforcement Learning (DRL) to tackle this problem.\nIn the context of DRL, we propose to combine together several ingredients in order to solve the aforementioned challgening problem. In particular, we leverage on the theory of Deep Sets to design permutation equivariant and invariant models, which solves the scalability issue, i.e., the number of users can be increased without having to increase the number of parameters. We also stabilize the learning process by adding in a new way the distributional dimension marrying it with Dueling Networks to ”center the losses”.\nFinally, we compare the proposed DRL-based algorithm with conventional solutions based on combinatorial or suboptimal optimization approaches. Our experiments and simulation results clearly show that our DRL method significanlty outperforms conventional state-of-the-art algorithms." 
}, { "heading": "2 RELATED WORK", "text": "The scheduling problem is a well known problem appearing in various fields and as technologies progress and more people want to take advantage of the new services, how to schedule them in an efficient way becomes more intricate. This is exactly the case in wireless communication systems. Researchers are resorting to new methods, such as deep reinforcement learning, which have shown impressive results Mnih et al. (2015); Silver et al. (2016). For example in (Chinchali et al., 2018) they perform scheduling on a cellular level using Deep Reinforcement learning (DRL). Also ideas using DRL in a distributed way to perform dynamic power allocation has appeared in (Naparstek & Cohen, 2018; Nasir & Guo, 2019). Nevertheless, to the best of our knowledge, the problem of scheduling on traffic of users with heterogeneous performance requirements has not been appropriately addressed. To solve this hard problem, one can resort to distributional Reinforcement Learning researched in Jaquette (1973) and followed by (Dabney et al., 2018a;b) in order to have richer representations of the environment and obtain better solutions. Also techniques like noisy network for better explorations (Fortunato et al., 2018) or architectures like dueling networks (Wang et al., 2016) have greatly improved stability of the trained models. Finally ideas (Zaheer et al., 2017) managed to simplify and improve neural network models when permutation invariance properties apply. We combine those ideas with a deep deterministic policy gradient method (Lillicrap et al., 2016) to reach a very efficient scheduling algorithm." }, { "heading": "3 THE SCHEDULING AND RESOURCE ALLOCATION PROBLEM", "text": "" }, { "heading": "3.1 THE PROBLEM", "text": "The problem we consider here involves a set of randomly arriving users that communicate wirelessly with a base station (service provider); users require that their traffic is served according to the quality of service (QoS) requirements imposed by the service class they belong to. We consider the case where users belong to different service classes with heterogeneous requirements. Each class specifies the amount of data to be delivered, the maximum tolerable latency, and the “importance/priority” of the user. A centralized scheduler (at the base station) at each time step takes as input this time varying set of users belonging to different service classes and has to decide how to allocate its limited resources per time step in order to maximize the long-term ”importance” weighted sum of satisfied users. A user is considered to be satisfied whenever it successfully received its data within the maximum tolerable latency specified by its service class.\nThe hard problem of scheduling and resource allocation - which is combinatorial by nature - is exacerbated by the wireless communication, which in turn brings additional uncertainty due to time-varying random connection quality. The scheduler that assigns resources does not exclude the possibility of a bad connection (low channel quality) which renders data transmission unsuccessful. In order to mitigate that effect, some protocols make use of channel state information (CSI) to the transmitter, i.e., the base station/scheduler knows in advance the channel quality and adapts the allocated resources to the instantaneous channel conditions. 
We consider here two extreme cases of channel knowledge: (i) full-CSI, in which perfect (instantaneous, error-free) CSI is provided to the scheduler, enabling accurate estimation of the exact resources each user needs; and (ii) no-CSI, in which the scheduler is agnostic to the channel quality. In case of unsuccessful/erroneous data reception, we employ a simple retransmission protocol (HARQ-type I). A widely used way to model the channel dynamics is to make the wireless channel quality depend on the distance of the user from the base station and evolve in a Markovian way from the channel realization of the previous time step. The mathematical description of the traffic generator model and the channel dynamics is provided in Appendix A.

To better understand the problem, we draw the following analogy. Imagine a server having a water pitcher that is full at every time step and has to distribute it across a set of people. Every person has a glass and leaves satisfied only if their glass is filled (or overfilled) at some time instant prior to a certain maximum waiting time. As mentioned before, in this work we consider a retransmission protocol (HARQ-type I), which in our analogy means that the server cannot fill a glass over multiple trials; if a glass is not filled completely at a time step, then it will be emptied and the server has to retry. The wireless communication setting brings the additional complication that the sizes of the glasses are not actually fixed but fluctuate (due to the randomness of the connection quality of each user). In the full-CSI case, the server knows at every time step the size of the glasses and therefore the exact amount of resources required. On the other hand, with no-CSI, the server can only roughly estimate the size, mainly using the amount of data requested and the distance between user and base station.

The problem can be modeled as a Markov Decision Process (MDP) (Bellman, 1957) (S, A, R, P, γ)¹, where S is the state space of the environment (described in detail in Appendix A.2), and A is the action space (in our case the set of all feasible allocations). After action a_t ∈ A at state s_t ∈ S, a reward r_t ∼ R(·|s_t, a_t) is obtained and the next state follows the probability s_{t+1} ∼ P(·|s_t, a_t). The discount factor is γ ∈ [0, 1). Under a fixed policy π : S → A, the return is a random variable defined as Z^π_t = Σ_{i=0}^∞ γ^i r_{t+i}, representing the discounted sum of rewards when a trajectory of states is taken following the policy π. An agent (scheduler) ideally aims to find the optimal policy π* maximizing the mean reward E_π[Z^π]. Being more rigorous, only in the full-CSI case are the states s_t fully observed, whereas for no-CSI the channel qualities are unknown to the agent and the observation o_t ⊂ s_t is only part of the state, leading to a Partially Observable MDP (POMDP) (Åström, 1965). One way to reduce a POMDP to an MDP is to substitute the states with the "belief" (Kaelbling et al., 1998) over the value of s_t. Another way is to use the complete history {o_0, a_0, o_1, a_1, · · · , a_{t−1}, o_{t−1}}, which fortunately works in our case since only the most recent part is relevant, i.e., the one representing whether and how many resources have been previously allocated to the currently active users."
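As a small illustration of the return defined above, the discounted sum of rewards of a sampled trajectory can be computed backwards in a few lines (plain Python; the reward sequence below is a stand-in).

```python
def discounted_return(rewards, gamma=0.99):
    """Z_t = sum_i gamma**i * r_{t+i}, computed backwards for stability."""
    z = 0.0
    for r in reversed(rewards):
        z = r + gamma * z
    return z

print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1 + 0.9*0 + 0.81*2 = 2.62
```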
}, { "heading": "3.2 THE DEEP REINFORCEMENT LEARNING APPROACH", "text": "Deep reinforcement learning (DRL) has shown impressive results in many problems modeled as MDP, but mainly in cases where the environment is close to deterministic due to game rules (Atari, Chess, Go (Mnih et al., 2015; Silver et al., 2017; 2016)) or physical laws (robotics and physics tasks (Kober et al., 2013; Lillicrap et al., 2015)). A very relevant question to ask is whether we can develop a DRL algorithm coping successfully with environments exhibiting high variance randomness as in our case due to the channel dynamics and heterogeneous traffic. Existing applications of DRL to problems with similar properties in other fields are encouraging (trading, pricing, vehicle routing(Nazari et al., 2018; Charpentier et al., 2020))." }, { "heading": "3.2.1 POLICY NETWORK", "text": "Our objective is to a scheduler that can handle a large number of users K, say K = 100, in which case the action space becomes infeasibly large for a traditional Deep Q-learning Network approach. For that, we propose to employ a deep deterministic policy gradient (DDPG) method (Lillicrap et al., 2016), with which we aim at training a policy πθ : S → A modelled as a Neural Network (NN) with parameters θ. Moreover, our method should work in both full-CSI and no-CSI cases with minor, if any, modification. With full-CSI the exact amount of required resources (bandwidth) per user is known, so the (discrete) action is just to select the subset of user to satisfy, while for noCSI, it is continuous since on top of selecting the users, the scheduler has to decide on the portion of resources each user takes. For no-CSI those portion are exactly the output of πθ but for fullCSI we do a continuous relaxation2 and the output provides the value (related to importance) per resources; that way, a user ranking is obtained, which allows the scheduler to proceed sequentially: the scheduler serves/satisfies as many of the most “valuable” (highest rank) users as possible subject to available resources. This discrepancy in the output process is the only difference in the model between full-CSI and no-CSI.\nSetting Zπ(st, at) = rt + Zπt+1 with rt ∼ R(·|st, at) being the return if at t the action at is taken followed by policy π and Qπ(st, at) = E[Zπ(st, at)] be the expected return conditioned on the action at st is at then the objective of the agent to maximize is J(θ) = Est0∼pt0 [Q\nπθst0 , πθ(st0))] with pt0 being the probability of the initial state st0 at time t0. The gradient can be written (Silver et al., 2014):\n∇θJ(θ) = Est0∼pt0 ,s∼ρπθst0 [∇θπθ(s)∇aQ πθ (s, a)|a = πθ(s)] (1)\n1The only discrepancy is that the scheduler aims to ideally maximize the sum of rewards, i.e., for γ = 1, and not the discounted one.\n2The continuous relaxation is also mandatory for a DDPG approach to work so that the gradients can pass from the value network.\nwith ρπθst0 the discounted state (improper) distribution defined as ρ πθ st0 (s) = ∑∞ i=0 γ i P(st+i = s|st0 , πθ). In practice ρπθst0 is approximated by the (proper) distribution % πθ st0 (s) := ∑∞ i=0P(st+i = s|st0 , πθ). To compute the gradient, the function Qπθ (s, a) is needed which is approximated by another NN Qψ(s, a), named value network, described in the next subsection.\nWe now explain the architecture of the model πθ. The policy falls in a category of permutation equivariant functions meaning that permuting the users should only result in permuting likewise the resource allocation. 
We now explain the architecture of the model π_θ. The policy falls in the category of permutation equivariant functions, meaning that permuting the users should only result in permuting the resource allocation likewise. In (Zaheer et al., 2017), necessary and sufficient conditions for permutation equivariance in neural networks are shown; we adopt their model with minor changes. At first, the characteristics x_i ∈ R^{N_x}, i ∈ {1, · · · , K}, of each (active) user are processed individually by the same function φ_user : R^{N_x} → R^{H_x}, modeled as a two-layer fully connected network. Then all those per-user features are aggregated with the permutation equivariant f_σ : R^{K×H} → R^{K×H′} of H/H′ input/output channels:

f_σ(x) = σ(xΛ + 1 1^⊤ x Γ),  1 = [1, · · · , 1]^⊤ ∈ R^K,  Λ, Γ ∈ R^{H×H′}

with σ(·) an element-wise non-linear function. We stack two of those: one f_relu : R^{K×H_x} → R^{K×H′_x} with σ() being relu(x) = max(0, x), and a second f_linear : R^{K×H′_x} → R^{K×1} without any non-linearity σ(). On top of preserving the desirable permutation equivariance property, this structure also brings a significant reduction in parameters, since an increase in the number of users does not necessitate additional parameters and thus a bigger network prone to overfitting traps.

Before the final non-linearity, which is a smooth approximation of ReLU, namely softplus(x) = log(1 + e^x), guaranteeing that the output is positive, there is a critical normalization step x → (x − E[x]) / ‖x‖_2, with ‖·‖_2 being the ℓ_2 norm. To better understand the criticality of that step, consider the full-CSI case, where the output denotes how valuable each user is. Without the normalization step, the value network perceives that the higher the value assigned to a user, the more probable it is to get resources, be satisfied, and collect reward, leading to a pointless attempt at increasing every user's value. However, by subtracting the mean value, whenever the value of a user increases, the value of the rest decreases, hence giving the sense that the total resources are limited. In the case of no-CSI, there is an additional benefit. Here there is an extra final operation, x → x / ‖x‖_1 (see Figure 1), so as to signify portions (of the total bandwidth) adding up to 1. Having done the normalization step previously (dividing by ‖x‖_2) helps keep the denominator ‖x‖_1 stable.

A final note regards exploration. The output has to satisfy properties (like positivity and/or summing to 1) which make the common approach of adding noise to the actions cumbersome. An easy way out is through noisy networks (Fortunato et al., 2018), which introduce noise to the weights of a layer, resulting in changed decisions of the policy network. The original approach considers the variance of the noise to be learnable; we keep it constant though, since it provides better results. The noise is added to φ_user's parameters, resulting in altered output features per user and therefore different allocations."
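A minimal PyTorch sketch of the permutation-equivariant layer f_σ(x) = σ(xΛ + 1 1^⊤ x Γ) follows; the layer sizes are placeholders, and the two-layer stack mirrors f_relu followed by f_linear.

```python
import torch
import torch.nn as nn

class PermEquivariant(nn.Module):
    """f_sigma(x) = sigma(x @ Lambda + 1 1^T x @ Gamma) for x of shape (K, H).

    The term 1 1^T x broadcasts the feature sum over the K users, so
    permuting the rows of x permutes the rows of the output identically.
    """
    def __init__(self, h_in, h_out, sigma=nn.ReLU()):
        super().__init__()
        self.Lambda = nn.Linear(h_in, h_out, bias=False)
        self.Gamma = nn.Linear(h_in, h_out, bias=False)
        self.sigma = sigma

    def forward(self, x):                       # x: (K, H_in)
        pooled = x.sum(dim=0, keepdim=True)     # 1^T x, shape (1, H_in)
        return self.sigma(self.Lambda(x) + self.Gamma(pooled.expand_as(x)))

# Stacking the two layers used in the text: relu, then linear to one output.
f = nn.Sequential(PermEquivariant(32, 64),
                  PermEquivariant(64, 1, sigma=nn.Identity()))
```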
}, { "heading": "3.2.2 VALUE NETWORK", "text": "As mentioned previously, Q^{π_θ}(s, a) is used for computing the gradient (Eq. 1); however, as it is intractable to compute, a neural network, called the value network, is employed to approximate it. The common approach is through the Bellman operator

T^π Q(s_t, a_t) = E[R(s_t, a_t)] + γ E_{s_{t+1}∼P(s_t, a_t)}[Q(s_{t+1}, π(s_{t+1}))]

to minimize the temporal-difference error, i.e., the difference between before and after applying the Bellman operator. This leads to the minimization of the loss

L_2(ψ) = E_{s_{t0}∼p_{t0}, s∼ρ^{π_θ}_{s_{t0}}} [(Q_ψ(s, a) − T^{π_{θ′}} Q_{ψ′}(s, a))²]

where (π_{θ′}, Q_{ψ′}) correspond to two separate networks, called the target policy and target value networks respectively, used for stabilizing the learning; they are periodically (or gradually) updated as copies of the current actor and value networks.

Another approach is the following: instead of only approximating the expected value of the return, we approximate its distribution, as in (Barth-Maron et al., 2018). Algorithmically, it is impossible to represent the full space of probability distributions with a finite number of parameters, so the value neural network Z^{π_θ}_ψ : S × A → R^{N_Q} must approximate the actual Z^{π_θ} with a discrete representation. Among other variations (Bellemare et al., 2017; Dabney et al., 2018a), one can choose the representation to be a uniform (discrete) probability distribution supported at {(Z^{π_θ}_ψ)_i, i ∈ {1, · · · , N_Q}}, where (Z^{π_θ}_ψ)_i is the i-th element of the output. More rigorously, the distribution that the value neural network represents is (1/N_Q) Σ_{i=1}^{N_Q} δ_{(Z^{π_θ}_ψ)_i}, where δ_x is a Dirac delta function at x (Dabney et al., 2018b). Minimizing the 1-Wasserstein distance between this distribution and the actual one of Z^{π_θ} can be done by minimizing the quantile regression loss

L_1(ψ) = Σ_{i=1}^{N_Q} E_{s_{t0}∼p_{t0}, s∼ρ^{π_θ}_{s_{t0}}, Z∼T^{π_{θ′}} Z^{π_{θ′}}_{ψ′}(s_t, a_t)} [f_i(Z − (Z^{π_θ}_ψ)_i)]

where f_i(x) = x((2i−1)/(2N_Q) − 1_{x<0}), with 1 the indicator function; the distributional Bellman operator is T^π Z^π(s_t, a_t) =_D R(s_t, a_t) + γ Z^π(s_{t+1}, π(s_{t+1})), s_{t+1} ∼ P(s_t, a_t), and Z^{π_{θ′}}_{ψ′} is the target value network (defined in the same way as before).

An important observation is that even though we approximate the distribution of Z^{π_θ}(s, a), what we need in the end is only its expected value, approximated as Q^{π_θ}(s, a) ≈ (1/N_Q) Σ_{i=1}^{N_Q} (Z^{π_θ}_ψ)_i. Therefore, a natural question that arises here is why to use Z^{π_θ}_ψ instead of a simpler Q_ψ(s, a) that approximates the needed expected value straight away. If instead of a scheduler and its users we consider a teacher and its students, things become evident: even though the objective of the teacher is to increase the mean "knowledge" of its students, using the distribution of the capacity/knowledge of the students enables, for example, deciding whether to distribute its attention uniformly among the students or to focus more on a fraction of them.

Even though intuitively we expected to observe gains by using the distribution, that was not the case at first. The main problem was that the distribution of Z^{π_θ} was far away from 0, making it very difficult for the network to approximate it well. One way to solve this could have been through scaling the rewards (i.e., dividing the rewards by the standard deviation of a rolling discounted sum of rewards). Instead, we came up with a new proposal: we propose to center the distribution through a dueling architecture (Wang et al., 2016). As shown in Figure 1, just before the output there is the dueling architecture with:

(Z^{π_θ}_ψ)_i = Z^{π_θ,Mean}_ψ + (Z^{π_θ,Shape}_ψ)_i − (1/N_Q) Σ_{j=1}^{N_Q} (Z^{π_θ,Shape}_ψ)_j,  ∀i ∈ {1, · · · , N_Q}

which effectively pushes Z^{π_θ,Mean}_ψ to approximate the Q^{π_θ} used for training the policy. To further encourage the decomposition into the shape and the center of the distribution, we add a loss term L_shape = ‖Z^{π_θ,Shape}_ψ‖_2, centering Z^{π_θ,Shape}_ψ around zero.
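A compact PyTorch sketch of the quantile regression loss L_1(ψ) follows (a minimal rendition matching f_i above; batching and the construction of the target samples are kept schematic).

```python
import torch

def quantile_regression_loss(z_pred, z_target):
    """z_pred: (B, N_Q) predicted quantile locations (Z_psi)_i.
    z_target: (B, N_Q) samples of T^pi' Z_psi'(s, a) (treated as constants).
    Implements sum_i E[ f_i(Z - (Z_psi)_i) ] with tau_i = (2i - 1) / (2 N_Q).
    """
    n_q = z_pred.shape[1]
    tau = (2 * torch.arange(n_q, device=z_pred.device, dtype=z_pred.dtype) + 1) / (2 * n_q)
    # Pairwise differences: u[b, i, j] = z_target[b, j] - z_pred[b, i]
    u = z_target.detach().unsqueeze(1) - z_pred.unsqueeze(2)
    loss = u * (tau.view(1, -1, 1) - (u < 0).to(u.dtype))  # f_i applied elementwise
    return loss.mean(dim=2).sum(dim=1).mean()
```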
In Figure 2 we provide additional elements to support the choice of distributional reinforcement learning. We use the traffic model described in Table 1a, with two classes of users with different requirements. In Figure 2a (showing the mean over five experiments) we see that all approaches finally converge to approximately the same value; nevertheless, the combination of distributional with dueling is faster. Figures 2b and 2c focus on two (out of the five) experiments, where the advantage of the distributional approach is evident. This approach is able to detect the existence of two different classes with different requirements, thus gradually improving the satisfaction rate of both. On the other hand, trying only to learn the expected value leads to a training where one class is improved at the expense of the other.

A final remark is that the architecture should be designed in a way that preserves permutation invariance. If we associate every user's characteristics with the resources given by the agent, i.e., the action corresponding to it, then permuting the users and accordingly the allocation should not influence the assessment of the success of the agent. To build such an architecture, we adopt the one of the Policy Network, using ideas from (Zaheer et al., 2017).

[Figure 2 shows three panels of satisfaction probability versus millions of samples: (a) Distr&Duel, Expected, and Distr averaged over five experiments; (b) per-class curves of the Expected variant for experiments 1 and 2; (c) per-class curves of Distr&Duel for the same experiments.]

Figure 2: Comparison between the distributional and the traditional (non-distributional) approach. We conducted five experiments (with different seeds) for no-CSI using the traffic model of Table 1a and a maximum number of users K = 75. In the first figure we depict the average over those five experiments; in the other figures we consider two specific experiments (named exp. 1 and exp. 2)." }, { "heading": "3.3 BENCHMARKS USING CONVENTIONAL APPROACHES", "text": "" }, { "heading": "3.3.1 FULL-CSI", "text": "For convenience of explanation, we once again use the analogy introduced in Section 3.1. Having full-CSI means that the server knows, for all people (active users) at each time, the sizes of their glasses and their importance. If we try to solve the problem myopically, ignoring the impact on future steps, we have a reformulated knapsack problem, with the sizes of the glasses being the weights of the objects, whose values are the importances of their "holders". The size of the server's pitcher is the capacity of the knapsack, which we try to fill with objects so as to maximize the sum of their values. We refer to this benchmark simply as the myopic Knapsack. More details are given in Appendix B.2.1.

Accounting for the impact on the future is not trivial (discussed in Appendix B.2.2). One way is to assume that the scheduler acts as an oracle and knows in advance, for the future T − 1 time steps, which users will appear and what their channel qualities will be. In that case, the problem can be written as an Integer Linear Program and solved by a standard Branch and Cut approach. We thereby obtain an upper bound, which we call the oracle ILP. More details are given in Appendix B.2.2."
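For reference, the per-time-step myopic benchmark is a standard 0/1 knapsack; a textbook dynamic-programming sketch follows (integer weights assumed, i.e., per-user resource demands are quantized, which is our simplification).

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: maximize total value subject to total weight <= capacity.

    weights: per-user resource demands (integers); values: user importances.
    Returns the best achievable value. O(n * capacity) time.
    """
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # descending: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# e.g. three users demanding 3/4/5 resource units with importances 4/5/6
print(knapsack([3, 4, 5], [4, 5, 6], capacity=8))  # -> 10 (users 1 and 3)
```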
}, { "heading": "3.3.2 NO-CSI", "text": "Without CSI, one cannot do much, so one may want at least to learn (usually with autoregression) the channel model. With a DRL approach, the dynamics of the environment do not change throughout training (and testing), and the agent learns how to react well under those conditions. Therefore, even though it acts in a model-free manner, in a way it learns the statistics of the problem. For a fair comparison, we consider a benchmark that knows the statistics of the channel and traffic dynamics. Knowing the distributions of the problem, one arrives at an optimization problem with multiple local optima. The Frank-Wolfe algorithm guarantees reaching a local optimum, so we run this method N_init times and pick the best local optimum found. More details are given in Appendix B.1." }, { "heading": "4 EXPERIMENTS", "text": "We consider two cases for the traffic, described in Table 1. The first one has two classes: one requiring a low amount of data but with a stringent latency constraint (of just two time slots), and another one with the opposite. Every class has the same importance, which is the main difference from the second scenario, where a certain fraction of the users are of highest priority. An important remark is that when users are all of equal importance, the objective of the scheduler coincides with indiscriminately increasing the satisfaction probability of every user. Finally, the Prob. column gives the probability that a user of the class appears at a time slot (note that these probabilities do not add up to one, since it is possible that no user appears).

The channel dynamics are described through a parameter ρ. For ρ = 0 the dynamics behave in an i.i.d. manner, increasing the unpredictability of the future but also the chances of recovering from a bad quality connection. We consider users appearing at distances from the base station varying from 0.05 km to 1 km. We keep the energy per symbol (/bandwidth) equal to 1 µJ. For more information on the architecture of the networks and the setting, the interested reader is referred to Appendix C. Hereafter we refer to the DRL approach as the "Deep Scheduler".

In Figures 3a and 3b we demonstrate the high performance of the Deep Scheduler for the full-CSI case with a maximum number of K = 100 users. It consistently manages to outperform the myopic Knapsack approach, which is myopically optimal. We emphasize that solving a knapsack problem is non-polynomial. We also show that the performance of the Deep Scheduler is in fact close to optimal, since it is close to that of the "oracle ILP", an upper bound obtained by a scheduler that knows the future in advance.

In Figures 3c and 3d we focus on the no-CSI case with K = 60 users. In that case, we know that the Frank-Wolfe (FW) method reaches a suboptimal solution. We repeat the algorithm for many initializations (for each time step) to get the best possible solution among the suboptimal ones. Note that this process is unfortunately very slow (see Appendix B.1) and gets slower as K increases. So even though the method shows considerable improvements with higher N_init, we were obliged to stop at N_init = 20. Moreover, as K increases, unfortunately so does the number of local optima, and with it the number of solutions with poor performance. This is why we see the Deep Scheduler substantially outperform FW even for a moderate K = 60.

Finally, we include Figure 3e, which showcases that the Deep Scheduler consistently keeps its high performance even when the traffic model becomes more complicated, so as to represent a real-world scenario more accurately."
}, { "heading": "5 CONCLUSION", "text": "The problem of scheduling and resource allocation for a time varying set of clients with heterogeneous traffic and QoS requirements in wireless networks has been investigated here. Leveraging on deep reinforcement learning, we have proposed a deep deterministic policy gradient algorithm, which builds upon distributional reinforcement learning and Deep Sets. Experimental evaluation of our proposed method in scenarios with different traffic and wireless channel dynamics, shows significant gains against state-of-the-art conventional combinatorial optimization methods." } ]
null
null
SP:858bb0278078b780b1fe163c7a7a084fd142f186
[ "The paper introduces a general framework dubbed Generalized Data Transformations (GDT) for self supervised learning. The framework is used to perform video-audio self supervised learning and analyze what kind of transformations the representations should be invariant to or on the contrary variant to thanks to a contrastive loss. The author demonstrate the effectiveness of the proposed approach by showing that the resulting learned video representations achieve very good performance on the HMDB51 and UCF101 downstream task. " ]
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations, such as image distortions. In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities, and time. We introduce Generalized Data Transformations (GDTs) as a way to capture this interplay. GDTs reduce most previous self-supervised approaches to a choice of data transformations, even when this was not the case in the original formulations. They also allow us to choose whether the representation should be invariant or distinctive w.r.t. each effect and tell which combinations are valid, thus allowing us to explore the space of combinations systematically. We show in this manner that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art by a large margin, and even surpassing supervised pretraining. We demonstrate results on a variety of downstream video and audio classification and retrieval tasks, on datasets such as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve new state-of-the-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101.
[]
[ { "authors": [ "Jean-Baptiste Alayrac", "Adrià Recasens", "Rosalia Schneider", "Relja Arandjelović", "Jason Ramapuram", "Jeffrey De Fauw", "Lucas Smaira", "Sander Dieleman", "Andrew Zisserman" ], "title": "Self-supervised multimodal versatile networks", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Humam Alwassel", "Bruno Korbar", "Dhruv Mahajan", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran" ], "title": "Self-supervised learning by cross-modal audio-video clustering", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Yuki M Asano", "Mandela Patrick", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Labelling unlabelled videos from scratch with multi-modal self-supervision", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Yuki M Asano", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yusuf Aytar", "Carl Vondrick", "Antonio Torralba" ], "title": "Soundnet: Learning sound representations from unlabeled video", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Sagie Benaim", "Ariel Ephrat", "Oran Lang", "Inbar Mosseri", "William T. Freeman", "Michael Rubinstein", "Michal Irani", "Tali Dekel" ], "title": "Speednet: Learning the speediness in videos", "venue": null, "year": 2020 }, { "authors": [ "Uta Buchler", "Biagio Brattoli", "Bjorn Ommer" ], "title": "Improving spatiotemporal self-supervision by deep reinforcement learning", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Julien Mairal", "Armand Joulin" ], "title": "Unsupervised pre-training of image features on non-curated data", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Honglie Chen", "Weidi Xie", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Vggsound: A large-scale audiovisual dataset", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Hyeon Cho", "Taehoon Kim", "Hyung Jin Chang", "Wonjun Hwang" ], "title": "Self-supervised spatiotemporal representation learning using variable playback speed prediction", "venue": "arXiv preprint arXiv:2003.02692,", "year": 2020 }, { "authors": [ "Joon Son Chung", "Andrew Zisserman" ], "title": "Out of time: automated lip sync in the wild", "venue": "In Workshop on Multi-view Lip-reading,", "year": 2016 }, { "authors": [ "Virginia R. 
de Sa" ], "title": "Learning classification with unlabeled data", "venue": "In NeurIPS,", "year": 1994 }, { "authors": [ "Ali Diba", "Vivek Sharma", "Luc Van Gool", "Rainer Stiefelhagen" ], "title": "Dynamonet: Dynamic action and motion network", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Basura Fernando", "Hakan Bilen", "Efstratios Gavves", "Stephen Gould" ], "title": "Self-supervised video representation learning with odd-one-out networks", "venue": "In Proc. CVPR,", "year": 2017 }, { "authors": [ "Chuang Gan", "Boqing Gong", "Kun Liu", "Hao Su", "Leonidas J Guibas" ], "title": "Geometry guided convolutional neural networks for self-supervised video representation learning", "venue": null, "year": 2019 }, { "authors": [ "Jort F. Gemmeke", "Daniel P.W. Ellis", "Dylan Freedman", "Aren Jansen", "Wade Lawrence", "R. Channing Moore", "Manoj Plakal", "Marvin Ritter" ], "title": "Audio set: An ontology and human-labeled dataset for audio", "venue": null, "year": 2017 }, { "authors": [ "Deepti Ghadiyaram", "Du Tran", "Dhruv Mahajan" ], "title": "Large-scale weakly-supervised pre-training for video action recognition", "venue": null, "year": 2019 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": null, "year": 2018 }, { "authors": [ "Spyros Gidaris", "Andrei Bursuc", "Nikos Komodakis", "Patrick Pérez", "Matthieu Cord" ], "title": "Learning representations by predicting bags of visual words", "venue": null, "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In ICCV Workshops,", "year": 2019 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Self-supervised co-training for video representation learning", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Olivier J Hénaff", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual 
information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Di Hu", "Feiping Nie", "Xuelong Li" ], "title": "Deep multimodal clustering for unsupervised audiovisual learning", "venue": null, "year": 2019 }, { "authors": [ "Xu Ji", "João F. Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation, 2018", "venue": null, "year": 2018 }, { "authors": [ "Longlong Jing", "Yingli Tian" ], "title": "Self-supervised spatiotemporal feature learning by video geometric transformations", "venue": "arXiv preprint arXiv:1811.11387,", "year": 2018 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "H. Kuehne", "H. Jhuang", "E. Garrote", "T. Poggio", "T. 
Serre" ], "title": "HMDB: a large video database for human motion recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estíbaliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "HMDB: a large video database for human motion recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Hsin-Ying Lee", "Jia-Bin Huang", "Maneesh Singh", "Ming-Hsuan Yang" ], "title": "Unsupervised representation learning by sorting sequences", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Junnan Li", "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven CH Hoi" ], "title": "Prototypical contrastive learning of unsupervised representations", "venue": "arXiv preprint arXiv:2005.04966,", "year": 2020 }, { "authors": [ "Tianhao Li", "Limin Wang" ], "title": "Learning spatiotemporal features via video and text pair discrimination", "venue": "arXiv preprint arXiv:2001.05691,", "year": 2020 }, { "authors": [ "Yang Liu", "Samuel Albanie", "Arsha Nagrani", "Andrew Zisserman" ], "title": "Use what you have: Video retrieval using representations from collaborative experts", "venue": null, "year": 2019 }, { "authors": [ "Dezhao Luo", "Chang Liu", "Yu Zhou", "Dongbao Yang", "Can Ma", "Qixiang Ye", "Weiping Wang" ], "title": "Video cloze procedure for self-supervised spatio-temporal learning", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Zelun Luo", "Boya Peng", "De-An Huang", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Unsupervised learning of long-term motion dynamics for videos", "venue": null, "year": 2017 }, { "authors": [ "Antoine Miech", "Jean-Baptiste Alayrac", "Lucas Smaira", "Ivan Laptev", "Josef Sivic", "Andrew Zisserman" ], "title": "End-to-end learning of visual representations from uncurated instructional", "venue": "videos. 
arXiv.cs,", "year": 2019 }, { "authors": [ "Antoine Miech", "Jean-Baptiste Alayrac", "Lucas Smaira", "Ivan Laptev", "Josef Sivic", "Andrew Zisserman" ], "title": "End-to-end learning of visual representations from uncurated instructional videos", "venue": null, "year": 2020 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Ishan Misra", "C Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: unsupervised learning using temporal order verification", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Pedro Morgado", "Nuno Vasconcelos", "Ishan Misra" ], "title": "Audio-visual instance discrimination with cross-modal agreement", "venue": "arXiv preprint arXiv:2004.12943,", "year": 2020 }, { "authors": [ "Arsha Nagrani", "Chen Sun", "David Ross", "Rahul Sukthankar", "Cordelia Schmid", "Andrew Zisserman" ], "title": "Speech2action: Cross-modal supervision for action recognition", "venue": null, "year": 2020 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Hamed Pirsiavash", "Paolo Favaro" ], "title": "Representation learning by learning to count", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Andrew Owens", "Jiajun Wu", "Josh H McDermott", "William T Freeman", "Antonio Torralba" ], "title": "Ambient sound provides supervision for visual learning", "venue": null, "year": 2016 }, { "authors": [ "Daniel S. Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D. Cubuk", "Quoc V. Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": null, "year": 2016 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Karol J. Piczak" ], "title": "Esc: Dataset for environmental sound classification", "venue": "In ACM Multimedia,", "year": 2015 }, { "authors": [ "AJ Piergiovanni", "Anelia Angelova", "Michael S. Ryoo" ], "title": "Evolving losses for unsupervised video representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Hardik B. 
Sailor", "Dharmesh M Agrawal", "Hemant A Patil" ], "title": "Unsupervised filterbank learning using convolutional restricted boltzmann machine for environmental sound classification", "venue": null, "year": 2017 }, { "authors": [ "Nawid Sayed", "Biagio Brattoli", "Björn Ommer" ], "title": "Cross and learn: Cross-modal self-supervision", "venue": "German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "UCF101: A dataset of 101 human action classes from videos in the wild", "venue": "In CRCV-TR-12-01,", "year": 2012 }, { "authors": [ "D. Stowell", "D. Giannoulis", "E. Benetos", "M. Lagrange", "M.D. Plumbley" ], "title": "Detection and classification of acoustic scenes and events", "venue": "IEEE Transactions on Multimedia,", "year": 2015 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Contrastive bidirectional transformer for temporal representation learning", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": null, "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Linchao Bao", "Shengfeng He", "Yunhui Liu", "Wei Liu" ], "title": "Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics", "venue": null, "year": 2019 }, { "authors": [ "Donglai Wei", "Joseph J Lim", "Andrew Zisserman", "William T Freeman" ], "title": "Learning and using the arrow of time", "venue": null, "year": 2018 }, { "authors": [ "Jason Wei", "Kai Zou" ], "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X. 
Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": null, "year": 2018 }, { "authors": [ "Fanyi Xiao", "Yong Jae Lee", "Kristen Grauman", "Jitendra Malik", "Christoph Feichtenhofer" ], "title": "Audiovisual slowfast networks for video recognition", "venue": "arXiv preprint arXiv:2001.08740,", "year": 2020 }, { "authors": [ "Dejing Xu", "Jun Xiao", "Zhou Zhao", "Jian Shao", "Di Xie", "Yueting Zhuang" ], "title": "Self-supervised spatiotemporal learning via video clip order prediction", "venue": null, "year": 2019 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A. Efros" ], "title": "Colorful image colorization", "venue": "In Proc. ECCV,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "venue": null, "year": 2017 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Wei-Chiu Ma", "Antonio Torralba" ], "title": "The sound of motions", "venue": "In ICCV,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent works such as PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Tian et al., 2019) have shown that it is possible to pre-train state-of-the-art image representations without the use of any manually-provided labels. Furthermore, many of these approaches use variants of noise contrastive learning (Gutmann & Hyvärinen, 2010). Their idea is to learn a representation that is invariant to transformations that leave the meaning of an image unchanged (e.g. geometric distortion or cropping) and distinctive to changes that are likely to alter its meaning (e.g. replacing an image with another chosen at random).\nAn analysis of such works shows that a dominant factor for performance is the choice of the transformations applied to the data. So far, authors have explored ad-hoc combinations of several transformations (e.g. random scale changes, crops, or contrast changes). Videos further allow to leverage the time dimension and multiple modalities. For example, Arandjelovic & Zisserman (2017); Owens et al. (2016) learn representations by matching visual and audio streams, as a proxy for objects that have a coherent appearance and sound. Their formulation is similar to noise contrastive ones, but does not quite follow the pattern of expressing the loss in terms of data transformations. Others (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) depart further from standard contrastive schemes by learning representations that can tell whether visual and audio streams are in sync or not; the difference here is that the representation is encouraged to be distinctive rather than invariant to a time shift.\nOverall, it seems that finding an optimal noise contrastive formulation for videos will require combining several transformations while accounting for time and multiple modalities, and understanding how invariance and distinctiveness should relate to the transformations. However, the ad-hoc nature of these choices in previous contributions make a systematic exploration of this space rather difficult.\nIn this paper, we propose a solution to this problem by introducing the Generalized Data Transformations (GDT; fig. 1) framework. GDTs reduce most previous methods, contrastive or not, to a noise contrastive formulation that is expressed in terms of data transformations only, making it\nsimpler to systematically explore the space of possible combinations. This is true in particular for multi-modal data, where separating different modalities can also be seen as a transformation of an input video. The formalism also shows which combinations of different transformations are valid and how to enumerate them. It also clarifies how invariance and distinctiveness to different effects can be incorporated in the formulation and when doing so leads to a valid learning objective. These two aspects allows the search space of potentially optimal transformations to be significantly constrained, making it amenable to grid-search or more sophisticated methods such as Bayesian optimisation.\nBy using GDTs, we make several findings. First, we find that using our framework, most previous pretext representation learning tasks can be formulated in a noise-contrastive manner, unifying previously distinct domains. 
Second, we show that just learning representations that are invariant to more and more transformations is not optimal, at least when it comes to video data; instead, balancing invariance to certain factors with distinctiveness to others performs best. Third, we find that investigating what to be variant to can lead to large gains in downstream performance, for both visual and audio tasks.

With this, we are able to set the new state of the art in audio-visual representation learning, with both small and large video pretraining datasets, on a variety of visual and audio downstream tasks. In particular, we achieve 95.2% and 72.8% on the standardized UCF-101 and HMDB-51 action recognition benchmarks." }, { "heading": "2 RELATED WORK", "text": "Self-supervised learning from images and videos. A variety of pretext tasks have been proposed to learn representations from unlabelled images. Some tasks leverage the spatial context in images (Doersch et al., 2015; Noroozi & Favaro, 2016) to train CNNs, while others create pseudo classification labels via artificial rotations (Gidaris et al., 2018) or clustering features (Asano et al., 2020b; Caron et al., 2018; 2019; Gidaris et al., 2020; Ji et al., 2018). Colorization (Zhang et al., 2016; 2017), inpainting (Pathak et al., 2016), solving jigsaw puzzles (Noroozi et al., 2017), as well as the contrastive methods detailed below, have been proposed for self-supervised image representation learning. Some of the tasks that use the space dimension of images have been extended to the space-time dimensions of videos by crafting equivalent tasks. These include jigsaw puzzles (Kim et al., 2019), and predicting rotations (Jing & Tian, 2018) or future frames (Han et al., 2019). Other tasks leverage the temporal dimension of videos to learn representations by predicting shuffled frames (Misra et al., 2016), the direction of time (Wei et al., 2018), motion (Wang et al., 2019), clip and sequence order (Lee et al., 2017; Xu et al., 2019), and playback speed (Benaim et al., 2020; Cho et al., 2020; Fernando et al., 2017). These pretext tasks can be framed as GDTs.

Multi-modal learning. Videos, unlike images, are a rich source of a variety of modalities such as speech, audio, and optical flow, and their correlation can be used as a supervisory signal. This idea has been around since as early as 1993 (de Sa, 1994). Only recently, however, has multi-modal learning been used to successfully learn effective representations by leveraging the natural correspondence (Alwassel et al., 2020; Arandjelovic & Zisserman, 2017; Asano et al., 2020a; Aytar et al., 2016; Morgado et al., 2020; Owens et al., 2016) and synchronization (Chung & Zisserman, 2016; Korbar et al., 2018; Owens & Efros, 2018) between the audio and visual streams. A number of recent papers have leveraged speech as a weak supervisory signal to train video representations (Li & Wang, 2020; Miech et al., 2020; Nagrani et al., 2020; Sun et al., 2019a;b); recently, Alayrac et al. (2020) used speech, audio and video together. Other works incorporate optical flow and other modalities (Han et al., 2020; Liu et al., 2019; Piergiovanni et al., 2020; Zhao et al., 2019) to learn representations. In (Tian et al., 2019), representations are learned from different views (such as different color channels, or modalities) to induce invariances. In contrast, our work analyses multi-modal transformations and examines their utility when used as an invariant or variant learning signal.

Noise Contrastive Loss.
Noise contrastive losses (Gutmann & Hyvärinen, 2010; Hadsell et al., 2006) measure the similarity between sample pairs in a representational space and are at the core of several recent works on unsupervised feature learning. They have been shown to yield good performance for learning image (Chen et al., 2020b; He et al., 2019; Hénaff et al., 2019; Hjelm et al., 2019; Li et al., 2020; Misra & van der Maaten, 2020; Oord et al., 2018; Tian et al., 2019; 2020; Wu et al., 2018) and video (Han et al., 2019; Li & Wang, 2020; Miech et al., 2020; Morgado et al., 2020; Sohn, 2016; Sun et al., 2019a) representations, and circumvent the need to explicitly specify what information needs to be discarded via a designed task.

We leverage the noise contrastive loss as a learning framework to encourage the network to learn desired invariance and distinctiveness to data transformations. The GDT framework can be used to combine and extend many of these cues, contrastive or not, in a single noise contrastive formulation." }, { "heading": "3 METHOD", "text": "A data representation is a function f : X → RD mapping data points x to vectors f(x). Representations are useful because they help to solve tasks such as image classification. Based on the nature of the data and the task, we often know a priori some of the invariances that the representation should possess (for example, rotating an image usually does not change its class). We can capture those by means of the contrast function1 c(x1, x2) = δf(x1)=f(x2), where c(x1, x2) = 1 means that f is invariant to substituting x2 for x1, while c(x1, x2) = 0 means that f is distinctive to this change. Any partial knowledge of the contrast c can be used as a cue to learn f, but c is not arbitrary: in order for c to be valid, the expression c(x1, x2) = 1 must be an equivalence relation on X, i.e. be reflexive c(x, x) = 1, symmetric c(x1, x2) = c(x2, x1), and transitive c(x1, x2) = c(x2, x3) = 1 ⇒ c(x1, x3) = 1. This is justified in Appendix A.1 and will be important in establishing which particular learning formulations are valid and which are not.

We introduce next our Generalized Data Transformations (GDTs) framework by generalizing two typical formulations: the first is analogous to ‘standard’ methods such as MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b), and the second tackles multi-modal data.

Standard contrastive formulation. Recall that the goal is to learn a function f that is compatible with a known contrast c, in the sense explained above. In order to learn f, we require positive (c(x1, x2) = 1) and negative (c(x1, x2) = 0) example pairs (x1, x2). We generate positive pairs by sampling x1 from a data source and then setting x2 = g(x1) as a random transformation of the first sample, where g ∈ G is called a data augmentation (e.g. image rotation). We also generate negative pairs by sampling x1 and x2 independently.

It is convenient to express these concepts via transformations only. To this end, let D = (x1, . . . , xN) ∈ XN be a collection of N i.i.d. training data samples. A Generalized Data Transformation (GDT) T : XN → Z is a mapping that acts on the set of training samples D to produce a new sample z = TD. Note that the GDT is applied to the entire training set, so that sampling itself can be seen as a transformation. In the simplest case, Z = X and a GDT T = (i, g) extracts the sample corresponding to a certain index i and applies an augmentation g : X → X to it, i.e. TD = g(xi). (A minimal sketch of this abstraction is given below.)
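The following Python sketch makes the GDT abstraction concrete: a transformation acts on the whole training set, so the sampling step is itself part of the transformation. The class name and fields are our own, purely for illustration; the paper does not prescribe an implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass(frozen=True)
class GDT:
    """A simple GDT T = (i, g): select training sample i, then apply augmentation g."""
    i: int                       # index of the training sample to extract
    g: Callable[[Any], Any]      # data augmentation g : X -> X

    def __call__(self, data: Sequence[Any]) -> Any:
        # TD = g(x_i): sampling is just the first step of the transformation
        return self.g(data[self.i])
```

Viewing sampling as part of T is what later lets the framework express "not distinctive to the sample, only to the applied transformation" cases uniformly.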
1We use the symbol δ to denote the Kronecker delta.

Usually, we want the function f to be distinctive to the choice of sample but invariant to its augmentation. This is captured by setting the contrast c(T, T′)2 to c((i, g), (i′, g′)) = δi=i′. Given a batch T = {T1, . . . , TK} of K GDTs, we then optimize a pairwise-weighted version of the noise-contrastive loss (Chen et al., 2020b; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Tian et al., 2019; Wu et al., 2018), the GDT-NCE loss:

$$L(f; \mathcal{T}) = -\sum_{T, T' \in \mathcal{T}} c(T, T')\, w(T, T') \log\left( \frac{\exp\left(\langle f(TD), f(T'D)\rangle / \rho\right)}{\sum_{T'' \in \mathcal{T}} w(T, T'') \exp\left(\langle f(TD), f(T''D)\rangle / \rho\right)} \right). \quad (1)$$

Here, the scalar ρ is a temperature parameter and the weights w(T, T′) are set to δT≠T′ in order to discount contrasting identical transformations, which would result in a weak learning signal. Minimizing eq. (1) pulls together vectors f(TD) and f(T′D) if c(T, T′) = 1 and pushes them apart if c(T, T′) = 0, similar to a margin loss, but with a better handling of hard negatives (Chen et al., 2020b; Khosla et al., 2020; Tian et al., 2019).3 When using a single modality, T = T′ and positive pairs are computed from two differently augmented versions.

Multi-modal contrastive formulation. We now further extend GDTs to handle multi-modal data. In this case, several papers (Arandjelovic & Zisserman, 2017; Aytar et al., 2016; Korbar et al., 2018; Owens et al., 2016; Wei et al., 2018) have suggested learning from the correlation between modalities, albeit usually not in a noise-contrastive manner. In order to encode this with a GDT, we introduce modality projection transformations m ∈ M. For example, a video x = (v, a) has a visual component v and an audio component a, and we have two projections M = {ma, mv} extracting respectively the visual mv(x) = v and audio ma(x) = a signals. We can plug this directly into eq. (1) by considering GDTs T = (i, m) and setting TD = m(xi), learning a representation f which is distinctive to the choice of input video, but invariant to the choice of modality.4

General case. Existing noise contrastive formulations learn representations that are invariant to an ad-hoc selection of transformations. We show here how to use GDTs to systematically build new valid combinations of transformations while choosing whether to encode invariance or distinctiveness to each factor. Together with the fact that all components, including data sampling and modality projection, are interpreted as transformations, this results in a powerful approach to explore a vast space of possible formulations systematically, especially for the case of video data with its several dimensions.

In order to do so, note that to write the contrastive loss eq. (1), we only require: the contrast c(T, T′), the weight w(T, T′), and a way of sampling the transformations T in the batch. Assuming that each generalized transformation T = tM ◦ · · · ◦ t1 is a sequence of M transformations tm, we start by defining the contrast c for individual factors as:

$$c(t_m, t'_m) = \begin{cases} 1, & \text{if we hypothesize invariance,} \\ \delta_{t_m = t'_m}, & \text{if we hypothesize distinctiveness.} \end{cases} \quad (2)$$

The overall contrast is then $c(T, T') = \prod_{m=1}^{M} c(t_m, t'_m)$. In this way, each contrast c(tm, t′m) is an equivalence relation and so is c(T, T′) (see Appendix A.1), making it valid in the sense discussed above. We also assume that w(T, T′) = 1 unless otherwise stated. Before turning to how batches of transformations are sampled, the sketch below shows one way the GDT-NCE loss of eq. (1) might be computed in practice.
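The following is a minimal, hedged PyTorch sketch of eq. (1). It assumes the K embeddings f(TD) have already been computed and stacked into a matrix, that the contrast and weight matrices c and w are given, and that each row of w has at least one nonzero entry; the function name and the normalisation by the number of positive pairs are our own choices, not prescribed by the paper.

```python
import torch

def gdt_nce_loss(z: torch.Tensor, c: torch.Tensor, w: torch.Tensor,
                 rho: float = 0.07) -> torch.Tensor:
    """Pairwise-weighted GDT-NCE loss, as in eq. (1).

    z: (K, D) L2-normalised embeddings f(T_k D), one row per GDT in the batch.
    c: (K, K) contrast matrix, c[i, j] = 1 iff (T_i, T_j) is a positive pair.
    w: (K, K) weight matrix; w[i, j] = 0 ignores comparison (i, j) (e.g. i == j).
    """
    sim = z @ z.t() / rho                               # <f(TD), f(T'D)> / rho
    sim = sim - sim.max(dim=1, keepdim=True).values     # shift for numerical stability
    denom = (w * sim.exp()).sum(dim=1, keepdim=True)    # sum over T'' in the batch
    log_prob = sim - denom.log()                        # the log term of eq. (1)
    pos = c * w                                         # positive, non-ignored pairs
    return -(pos * log_prob).sum() / pos.sum().clamp(min=1.0)
```

Compared to a standard InfoNCE implementation, the only changes are the explicit c and w matrices; this is what lets the same code express invariance, distinctiveness, or ignoring, for each factor.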
Next, we require a way of sampling transformations T in the batch. Note that each batch must contain transformations that can be meaningfully contrasted, forming a mix of invariant and distinctive pairs, so they cannot be sampled independently at random. Furthermore, based on the definition above, a single ‘distinctive’ factor in eq. (2) such that tm ≠ t′m implies that c(T, T′) = 0. Thus, the batch must contain several transformations that have equal distinctive factors in order to generate a useful learning signal.

A simple way to satisfy these constraints is to use a hierarchical sampling scheme (fig. 1). First, we sample K1 instances of transformation t1; then, for each sample t1, we sample K2 instances of transformation t2, and so on, obtaining a batch of $K = \prod_{m=1}^{M} K_m$ transformations T. In this manner, the batch contains exactly KM × · · · × Km+1 transformations that share the same first m factors (t1 = t′1, . . . , tm = t′m). While other schemes are possible, in Appendix A.2 we show that this is sufficient to express a large variety of self-supervised learning cues that have been proposed in the literature. In the rest of the manuscript, however, we focus on audio-visual data.

2Note that, differently from the previous section, we have now defined c on transformations T rather than on samples x directly. In Appendix A.1, we show that this is acceptable provided that c(T, T′) = 1 also defines an equivalence relation.

3We can think of eq. (1) as a softmax cross-entropy loss for a classification problem where the classes are the equivalence classes T/c of transformations.

4For this, as f must accept either a visual or audio signal as input, we consider a pair of representations f = (fv, fa), one for each modality." }, { "heading": "3.1 EXPLORING CONTRASTIVE AUDIO-VISUAL SELF-SUPERVISION", "text": "Within multi-modal settings, video representation learning on audio-visual data is particularly well suited for exploring the GDT framework. Especially compared to still images, the space of transformations is much larger in videos due to the additional time dimension and modality. It is therefore an ideal domain to explore how GDTs can be used to limit and explore the space of possible transformations, and their quality as a learning signal when used as variances or invariances. In order to apply our framework to audio-visual data, we start by specifying how transformations are sampled, using the hierarchical scheme introduced above (see also Figure 1). We consider in particular GDTs of the type T = (i, τ, m, g), combining the following transformations. The first component i selects a video in the dataset. We sample Ki ≥ 2 indices/videos and assume distinctiveness, so that c(i, i′) = δi=i′. The second component τ contrasts different temporal shifts. We sample Kτ = 2 different values of a delay τ uniformly at random, extracting a 1s clip xiτ starting at time τ. For this contrast, we will test both the distinctiveness and invariance hypotheses. The third component m contrasts modalities, projecting the video xiτ to either its visual or audio component m(xiτ). We assume invariance c(m, m′) = 1 and always sample two such transformations mv and ma to extract both modalities, so Km = 2. The fourth and final component g applies a spatial and aural augmentation TD = g(m(xiτ)), also normalizing the data. We assume invariance c(g, g′) = 1 and pick Kg = 1. The transformation g comprises a pair of augmentations (gv, ga), where gv(v) extracts a fixed-size tensor by resizing a random spatial crop of the input video v to a fixed resolution, and ga(a) extracts a spectrogram representation of the audio signal, followed by SpecAugment (Park et al., 2019) with frequency and time masking. These choices lead to K = KiKτKmKg = 4Ki transformations T in the batch T. A sketch of this hierarchical batch construction, and of the resulting c and w matrices, is given below.
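The following sketch, under our own naming and with hypothetical dataset/augmentation helpers, illustrates the hierarchical sampling of GDTs T = (i, τ, m, g) just described, together with the construction of the contrast matrix c via the per-factor rule of eq. (2) and the weight matrix w that ignores self-pairs and within-modality comparisons (Sec. 3.1).

```python
import random

def augment(x, modality):
    """Identity stand-in for the augmentation g (crop/spectrogram in the paper)."""
    return x

def sample_gdt_batch(dataset, num_videos, num_shifts=2):
    """Hierarchical sampling: K_i videos -> K_tau shifts -> both modalities -> one g.

    `dataset.duration` and `dataset.load_clip` are hypothetical helpers; each
    returned dict records the factors (i, tau, m) needed to build c and w.
    """
    batch = []
    for _ in range(num_videos):                          # K_i videos, distinctive
        i = random.randrange(len(dataset))
        for _ in range(num_shifts):                      # K_tau = 2 temporal shifts
            tau = random.uniform(0.0, dataset.duration(i) - 1.0)
            clip = dataset.load_clip(i, start=tau, length=1.0)
            for m in ("visual", "audio"):                # K_m = 2, invariant
                x = augment(clip[m], modality=m)         # K_g = 1 augmentation
                batch.append({"i": i, "tau": tau, "m": m, "x": x})
    return batch

def build_contrast_and_weights(batch, distinctive=("i",)):
    """Per eq. (2): c multiplies per-factor contrasts; w ignores self and
    within-modality pairs. Pass distinctive=("i", "tau") to also be variant
    to time shift, as in the (DS, TS)-variant models of Sec. 4.1."""
    K = len(batch)
    c = [[1] * K for _ in range(K)]
    w = [[1] * K for _ in range(K)]
    for a in range(K):
        for b in range(K):
            if any(batch[a][f] != batch[b][f] for f in distinctive):
                c[a][b] = 0                              # one differing distinctive factor
            if a == b or batch[a]["m"] == batch[b]["m"]:
                w[a][b] = 0                              # ignore these comparisons
    return c, w
```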
Testing invariance and distinctiveness hypotheses. The transformations given above combine cues that were partly explored in prior work, contrastive and non-contrastive. For example, Korbar et al. (2018) (not noise-contrastive) learn to detect temporal shifts across modalities. With our formulation, we can test whether distinctiveness or invariance to shifts is preferable, simply by setting c(τ, τ′) = 1 or c(τ, τ′) = δτ=τ′ (this is illustrated in fig. 1). We can also set w(τ, τ′) = 0 for τ ≠ τ′ to ignore comparisons that involve different temporal shifts. We also test distinctiveness and invariance to time reversal (Wei et al., 2018), which has not previously been explored cross-modally, or contrastively. This is given by a transformation r ∈ R = {r0, r1}, where r0 is the identity and r1 flips the time dimension of its input tensor. We chose these transformations, time reversal and time shift, because videos, unlike images, have a temporal dimension, and we hypothesize that these signals are very discriminative for representation learning.

Ignoring comparisons. Another degree of freedom is the choice of weighting function w(T, T′). Empirically, we found that cross-modal supervision is a much stronger signal than within-modality supervision, so if T and T′ slice the same modality, we set w(T, T′) = 0 (see Appendix for ablation).

Understanding combinations. Finally, one may ask what the effect is of combining several different transformations in learning the representation f. A first answer is the rule given in eq. (2) to combine individual contrasts c(tm, t′m) in a consistent manner. Because of this rule, to a first approximation, f possesses the union of the invariances and distinctivenesses of the individual factors. To obtain a more accurate answer, however, one should also account for the details of the batch sampling scheme and of the choice of weighting function w. This can be done by consulting the diagrams given in fig. 1 by: (1) choosing a pair of transformations Ti and Tj, (2) checking the value in the table (where 1 stands for invariance, 0 for distinctiveness and · for ignoring), and (3) looking up the composition of Ti and Tj in the tree to find out the sub-transformations that differ between them as the source of invariance/distinctiveness." }
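Before turning to the experiments, the sketch below shows how a single training step might look, reusing the `sample_gdt_batch`, `build_contrast_and_weights` and `gdt_nce_loss` sketches above. The encoder dictionary, optimizer and batch sizes are stand-ins of our own, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def gdt_training_step(encoders, optimizer, dataset, num_videos=192,
                      distinctive=("i", "tau")):
    """One GDT-NCE step: sample a batch of GDTs, encode each modality with its
    own network (f_v or f_a), and minimise eq. (1). Builds on the sketches above."""
    batch = sample_gdt_batch(dataset, num_videos)        # K = 4 * num_videos samples
    c, w = build_contrast_and_weights(batch, distinctive)
    z = torch.stack([encoders[s["m"]](s["x"]) for s in batch])
    z = F.normalize(z, dim=1)                            # L2-normalised embeddings
    loss = gdt_nce_loss(z, torch.tensor(c, dtype=z.dtype),
                        torch.tensor(w, dtype=z.dtype))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Changing the `distinctive` tuple is the single knob that switches between the invariance/distinctiveness hypotheses ablated in Sec. 4.1.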
, { "heading": "4 EXPERIMENTS", "text": "We compare self-supervised methods on pretraining audio-visual representations. Quality is assessed based on how well the pretrained representation transfers to other (supervised) downstream tasks. We first study the model in order to determine the best learning transformations and setup. Then, we use the latter to train for longer and compare to the state of the art.

Self-supervised pretraining. For pretraining, we consider the standard audio-visual pretraining datasets, Kinetics-400 (Kay et al., 2017) and AudioSet (Gemmeke et al., 2017), and additionally the recently released VGG-Sound dataset (Chen et al., 2020a). Finally, we also explore how our algorithm scales to even larger, less-curated datasets, and train on IG65M (Ghadiyaram et al., 2019) as done in XDC (Alwassel et al., 2020).

Our method learns a pair of representations f = (fv, fa) for visual and audio information respectively, and we refer to Appendix A.6 for architectural details.

Downstream tasks. To assess the visual representation fv, we consider the standard action recognition benchmark datasets UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011b). We test the performance of our pretrained models on the tasks of finetuning the pretrained representation, conducting few-shot learning, and video action retrieval. To assess the audio representation fa, we train a linear classifier on frozen features for the common ESC-50 (Piczak, 2015) and DCASE2014 (Stowell et al., 2015) benchmarks, and finetune for VGG-Sound (Chen et al., 2020a). The full details are given in the Appendix." }, { "heading": "4.1 ANALYSIS OF GENERALIZED TRANSFORMATIONS", "text": "In this section, we conduct an extensive study of each parameter of the GDT transformation studied here, T = (i, τ, m, g), and evaluate the performance by finetuning our network on the UCF-101 and HMDB-51 action recognition benchmarks.

Sample distinctiveness and invariances. First, we experiment with extending SimCLR to video data, as shown in Table 1(a)-(d). This is an important base case, as it is the standard approach followed by all recent self-supervised methods (Chen et al., 2020b; He et al., 2019; Wu et al., 2018).

For this, consider GDTs of the type T = (i, τ, m, g) described above and set Ki = 768 (the largest we can fit in our setup), Km = 1 (only visual modality), Kg = 1, and only pick a single time shift Kτ = 1. We also set all transformation components to invariance (c(tm, t′m) = 1), except the first, which does sample selection. Comparing row (a) to (b-d), we find that adding invariances to time-shift (TS) and time-reversal (TR) consistently degrades the performance compared to the baseline in (a).

GDT variances and invariances. Our framework allows fine-grained and expressive control of which invariances and distinctivenesses are learned. To demonstrate this flexibility, we first experiment with having a single audio-visual (AV) invariance transformation, in this case data-sampling (DS), i.e. T = (i, τ, m, g). We immediately find an improvement in finetuning and retrieval performance compared to the SimCLR baselines, due to the added audio-visual invariance. Second, we also find that adding invariances to TR and TS does not yield consistent benefits, showing that invariance to these transformations is not a useful signal for learning.

In rows (i-l), we explore the effect of being variant to two transformations, which is unique to our method. We find that: (1) explicitly encoding variance improves representation performance for the TS and TR transformations (58.0 and 58.2 vs 56.9); (2) ignoring (·) the other transformation, as opposed to forcefully being invariant to it, works better (58.2 vs 57.0 and 58.0 vs 57.5). Finally, row (m), the (DS, TR, TS)-variance case, yields the best performance when finetuned and improves upon the initial SimCLR baseline by more than 12% in accuracy and more than 15% in retrieval @5 performance. Compared to row (l), we find that using three variances rather than two gives a boost in finetuning performance (58.2 vs 60.0), but a slight decrease in retrieval performance (50.2 vs 47.8).
We hypothesize that this decrease in retrieval might be due to the 3-variance model becoming more tailored to the pretraining dataset: while it remains generalizable (which the finetuning evaluation tests), its frozen features have a slightly larger domain gap with respect to the downstream dataset.

Intuition. While we only analyse a subset of possible transformations for video data, we nevertheless find consistent signals: while both time-reversal and time-shift could function as meaningful invariance transformations that provide the model with more difficult positives a priori, we find that using them instead to enforce variances consistently works better. One explanation for this might be that there is useful signal in being distinctive to these transformations. E.g., for time-reversal, opening a door carries different semantics from closing one, and for time-shift, the model might profit from being able to differentiate between an athlete running vs an athlete landing in a sandpit, which could both be in the same video. These findings are noteworthy, as they contradict results from the image self-supervised learning domain, where learning pretext-invariance can lead to more transferable representations (Misra & van der Maaten, 2020). This is likely due to the fact that time shift and reversal are useful signals that both require learning strong video representations to pick up on. If invariance is instead learned against these, the “free” information that we have by construction is discarded and performance degrades. Instead, GDT allows one to leverage these strong signals for learning robust representations." }, { "heading": "4.2 COMPARISON TO THE STATE OF THE ART", "text": "Given one of our best learning setups from Sec. 4.1 (row (l)), we train for longer and compare our feature representations to the state of the art on common visual and aural downstream benchmarks." }, { "heading": "Downstream visual benchmarks.", "text": "For video retrieval we report recall at 1, 5 and 20 retrieved samples for split-1 of the HMDB-51 and UCF-101 datasets in table 2 (the results for recall at 10 and 50 are provided in the Appendix). Using our model trained on Kinetics-400, GDT significantly beats all other self-supervised methods, by a margin of over 35% for both datasets.

For few-shot classification, as shown in table 2, we significantly beat the RotNet3D baseline on UCF-101, by more than 10% on average for each shot, with our Kinetics-400 pretrained model.

For video action recognition, we finetune our GDT pretrained network for UCF-101 and HMDB-51 video classification, and compare against state-of-the-art self-supervised methods in table 4. When constrained to pretraining on the Kinetics datasets, we find that our GDT pretrained model achieves very good results, similar to Morgado et al. (2020) (developed concurrently to our own work).
Along with XDC, we beat the Kinetics supervised pretraining baseline using the same architecture and finetuning protocol.\nFor audio classification we find that we achieve state-of-theart performance among all self-supervised methods on both DCASE2014 (DC) and ESC-50 (ESC), and also surpass supervised performance on VGG-Sound with 54.8% mAP and 97.5% AUC (see Tab. 5)." }, { "heading": "5 CONCLUSION", "text": "We introduced the framework of Generalized Data Transformations (GDTs), which allows one to capture, in a single noise-contrastive objective, cues used in several prior contrastive and non-contrastive learning formulations, as well as easily incorporate new ones. The framework shows how new meaningful combinations of transformations can be obtained, encoding valuable invariance and distinctiveness that we want our representations to learn. Following this methodology, we achieved state-of-the-art results for self-supervised pretraining on standard downstream video action recognition benchmarks, even surpassing supervised pretraining. Overall, our method significantly increases the expressiveness of contrastive learning for self-supervision, making it a flexible tool for many multi-modal settings, where a large pool of transformations exist and an optimal combination is sought." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 THEORY", "text": "Full knowledge of the contrast function c only specifies the level sets of the representation f .\nLemma 1. The contrast c(x1, x2) = δf(x1)=f(x2) defines f = ι◦ f̂ up to an injection ι : X/f → Y , where X/f is the quotient space and f̂ : X → X/f is the projection on the quotient.\nProof. This is a well known fact in elementary algebra. Recall that the quotient X/f is just the collection of subsets X ⊂ X where f(x) is constant. It is easy to see that this is a partition of X . Hence, we can define the map f̂ : X 7→ f(x) where x is any element of X (this is consistent since f(x) has, by definition, only one value over X). Furthermore, if ι : x 7→ X = {x ∈ X : f(x′) = f(x)} is the projection of x to its equivalence class X , we have f(x) = f̂(ι(x)).\nLemma 2. c(x1, x2) = 1 is an equivalence relation if, and only if, there exists a function f such that c(x1, x2) = δf(x1)=f(x2).\nProof. If c(x1, x2) = 1 defines an equivalence relation on X , then such a function is given by the projection on the quotient f̂ : X → X/c = Y . On the other hand, setting c(x1, x2) = δf(x1)=f(x2) = 1 for any given function f is obviously reflexive, symmetric and transitive because the equality f(x1) = f(x2) is.\nThe following lemma suggests that defining a contrast c(T, T ′) on transformations instead of data samples is usually acceptable. Lemma 3. If c(T, T ′) = 1 defines an equivalence relation on GDTs, and if TD = TD′ ⇒ T = T ′ (i.e. different transformations output different samples), then setting c(TD, T ′D) = c(T, T ′) defines part of an admissible sample contrast function.\nProof. If x = TD, x′ = T ′D are obtained from some transformations T and T ′, then these must be unique by assumption. Thus, setting c(x, x′) = c(T, T ′) is well posed. Reflectivity, symmetry and transitivity are then inherited from the latter. Lemma 4. Let c(tm, t′m) = 1 be reflexive, symmetric and transitive. Their product c(T, T ′) =∏M m=1 c(tm, t ′ m) = has then the same properties.\nProof. The reflexive and symmetric properties are obviously inherited. For the transitive property, note that c(T, T ′) = 1 if, and only if, ∀m : c(tm, t′m) = 1. 
" }, { "heading": "A.2 GENERALITY OF GDT", "text": "Here, we show that our GDT formulation can encapsulate and unify other self-supervised works in the literature. We break this down into two parts:

Mapping contrastive methods to GDT contrastive. Recently, a number of papers have presented contrastive formulations for image representation learning, such as NPID (Wu et al., 2018), PIRL (Misra & van der Maaten, 2020), MoCo (He et al., 2019) and SimCLR (Chen et al., 2020b). These methods are all essentially built on what we have introduced as the “data-sampling transformation” T = (i, g), which samples an image with index i and applies augmentation g. For NPID, MoCo and SimCLR, the main objective is solely to be distinctive to the image index, hence K = KiKg = B (i.e. the batch size B) for NPID, due to the use of a memory bank, and K = KiKg = 2B for SimCLR and MoCo. For PIRL, one additional transformation to be invariant to is added. For example, in the case of rotation, PIRL encodes sample-distinctiveness for the non-rotated inputs (K = KiKg = B in the memory bank), while the rotated examples are used for constructing both invariance to the original inputs and sample distinctiveness.

Non-contrastive to GDT contrastive reduction. In non-contrastive self-supervised formulations, one trains Φ(x) = y to regress y from x, where y is some “pretext” task label. These labels can be obtained from the data, e.g. arrow of time (Wei et al., 2018), rotation (Gidaris et al., 2018; Jing & Tian, 2018), shuffled frames (Misra et al., 2016), jigsaw configurations (Kim et al., 2019; Noroozi et al., 2017), or playback speed (Benaim et al., 2020; Cho et al., 2020).

We can reduce these pretext tasks to GDTs in two ways. The first ‘trivial’ reduction amounts to interpreting the supervision y as an additional pseudo-modality. Consider for example RotNet; in this case, the label y should record the amount of rotation applied to the input image. We can achieve this effect by starting from data z = (x, 0), where x is an image and 0 a rotation angle. We then sample a transformation tr (rotation) and define its action as tr(z) = (tr(x), tr(0)), where tr(0) = r is simply the rotation angle applied and tr(x) the rotated image. We consider modality slicing transformations mx(z) = x and mr(z) = r. To form a batch, we sample GDTs of the type T = (i, tr, m), where i is sampled at random; for each i, tr is exhaustively sampled in a set of four rotations (0, 90, 180, 270 degrees); and, for each rotation tr, m is also exhaustively sampled, for a total of KiKrKm = 8Ki transformations in the batch. We define c(T, T′) = c((i, tr, m), (i′, tr′, m′)) = δr=r′ (note that we do not learn to distinguish different images; GDTs allow us to express this case naturally as well). We define w(T, T′) = δi=i′ δm≠m′, so that images are treated independently in the loss and we always compare a pseudo-modality (rotated image) with the other (label). Finally, the network fr(r) = er ∈ {0, 1}^4 operating on the label pseudo-modality trivially encodes the latter as a 1-hot vector. Then we see that the noise-contrastive loss reduces to

$$\sum_{i} \sum_{r} \log \frac{\exp \langle f(t_r(x_i)), e_r \rangle}{\sum_{r'} \exp \langle f(t_r(x_i)), e_{r'} \rangle} \quad (3)$$

which is nearly exactly the same as a softmax cross-entropy loss for predicting the rotation class applied to an image. (The sketch after this paragraph spells out this reduction in code.)
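To make the reduction of eq. (3) concrete, here is a small PyTorch sketch of our own construction (not code from the paper): with the label pseudo-modality encoded as one-hot vectors e_r, the inner products ⟨f(t_r(x)), e_{r′}⟩ are just the entries of a 4-way logit vector, and eq. (3) becomes a standard cross-entropy over rotations.

```python
import torch
import torch.nn.functional as F

def rotnet_as_gdt_nce(f, images):
    """Eq. (3): RotNet recovered as a GDT noise-contrastive loss.

    f: network mapping an image batch to 4-D embeddings f(t_r(x));
       <f(t_r(x)), e_{r'}> then equals the r'-th logit.
    images: iterable of (C, H, W) tensors.
    """
    losses = []
    for x in images:
        # t_r: exhaustively sample the four rotations (0, 90, 180, 270 degrees)
        rotated = torch.stack([torch.rot90(x, k, dims=(-2, -1)) for k in range(4)])
        logits = f(rotated)                      # shape (4, 4)
        targets = torch.arange(4)                # the applied rotation r
        # cross-entropy equals the negated log-softmax terms of eq. (3)
        losses.append(F.cross_entropy(logits, targets))
    return torch.stack(losses).mean()
```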
There are other reductions as well, which capture the spirit if not the letter of a training signal. For instance, in RotNet, we may ask if two images are rotated by the same amount. This is an interesting example, as we do not wish to be distinctive to which image sample is taken, only to which rotation is applied. This can also be captured as a GDT, because the sampling process itself is a transformation. In this case, the set of negatives will be the images rotated by a different amount, while the positive examples will be images rotated by the same amount.

Thus, even pretext-task-originating transformations that have not been explored yet can be put into our framework and, as we show in this paper, be naturally combined with other transformations, leading to even stronger representations." }, { "heading": "A.2.1 POTENTIAL APPLICATION TO TEXT-VIDEO LEARNING", "text": "While we focus on audio-visual representation learning due to the multitude of potentially interesting learning signals, it is also possible to apply our framework to other multi-modal settings, such as video-text. Instead of a ResNet-9 as audio encoder, a text encoder such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) with an MLP, or a transformer (Vaswani et al., 2017), can be used for encoding the textual inputs, and we can train with a cross-modal NCE loss as done currently for audio-visual representation learning in our GDT framework. While the visual transformations can be kept as described in the paper, we can use transformations for text, such as sentence shuffling (Wei & Zou, 2019) or random word swaps (Wei & Zou, 2019). Moreover, unlike prior works in the literature (Alayrac et al., 2020; Li & Wang, 2020; Miech et al., 2019), which mostly focused on model and loss improvements for video-text learning, our framework would allow us to investigate whether it is more desirable to encode invariance or distinctiveness to these text transformations for effective video-text representation learning." }, { "heading": "A.3 MODALITY ABLATION", "text": "In Table A.1, we provide the results of running our baseline model (sample-distinctiveness only) within-modally instead of across modalities, and find a sharp drop in performance." }, { "heading": "A.4 DATASET DETAILS", "text": "The Kinetics-400 dataset (Kay et al., 2017) is a human action video dataset consisting of 240k training videos, with each video representing one of 400 action classes. After filtering out videos without audio, we are left with 230k training videos, which we use for pretraining our model.

VGGSound (Chen et al., 2020a) is a recently released audio-visual dataset consisting of 200k short video clips of audio sounds, extracted from videos uploaded to YouTube. We use the training split after filtering (170k) for pretraining our model.

AudioSet (Gemmeke et al., 2017) is a large-scale audio-visual dataset of 2.1M videos spanning 632 audio event classes. We use the training split (1.8M) for pretraining our model.

IG65M (Ghadiyaram et al., 2019) is a large-scale weakly supervised dataset collected from a social media website, consisting of 65M videos of human action events. We use all the videos in the dataset for pretraining.

HMDB-51 (Kuehne et al., 2011a) consists of 7K video clips spanning 51 different human activities.
HMDB-51 has three train/test splits of size 5k/2k respectively.

UCF-101 (Soomro et al., 2012) contains 13K videos from 101 human action classes, and has three train/test splits of size 11k/2k respectively.

ESC-50 (Piczak, 2015) is an environmental sound classification dataset which has 2K sound clips of 50 different audio classes. ESC-50 has 5 train/test splits of size 1.6k/400 respectively.

DCASE2014 (Stowell et al., 2015) is an acoustic scenes and event classification dataset which has 100 training and 100 testing sound clips spanning 10 different audio classes." }, { "heading": "A.5 PREPROCESSING DETAILS", "text": "The video inputs are 30 consecutive frames from a randomly chosen starting point in the video. These frames are resized such that the shorter side is between 128 and 160, and a center crop of size 112 is extracted, with no color-jittering applied. A random horizontal flip is then applied with probability 0.5, and then the inputs’ channels are z-normalized using mean and standard deviation statistics calculated across each dataset.

One second of audio is processed as a 1 × 257 × 99 image, by taking the log-mel bank features with 257 filters and 199 time-frames, after random volume jittering between 90% and 110% is applied to the raw waveform, similar to (Arandjelovic & Zisserman, 2017). The spectrogram is then Z-normalized, as in (Korbar et al., 2018). SpecAugment is then used to apply random frequency masking to the spectrogram, with maximal blocking width 3, sampled once. Similarly, time masking is applied with maximum width 6, sampled once. (A hedged sketch of this audio pipeline is given below.)" }
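As an illustration of the audio preprocessing above, the following torchaudio sketch is our own approximation: the FFT size, hop length and sampling rate are assumptions we chose so that the shapes roughly work out, not values given in the paper.

```python
import torch
import torchaudio

def preprocess_audio(waveform: torch.Tensor, sample_rate: int = 24000) -> torch.Tensor:
    """Approximate audio pipeline of Sec. A.5: volume jitter -> log-mel ->
    Z-normalisation -> SpecAugment-style frequency/time masking."""
    # random volume jittering between 90% and 110%
    waveform = waveform * torch.empty(1).uniform_(0.9, 1.1)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=512, n_mels=257)(waveform)
    spec = torchaudio.transforms.AmplitudeToDB()(mel)       # log-mel features
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)       # Z-normalise
    spec = torchaudio.transforms.FrequencyMasking(freq_mask_param=3)(spec)
    spec = torchaudio.transforms.TimeMasking(time_mask_param=6)(spec)
    return spec
```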
}, { "heading": "A.9 FULL VIDEO ACTION RECOGNITION TABLE", "text": "" }, { "heading": "A.10 EVALUATION DETAILS", "text": "All evaluation code is provided in the Supplementary Material.\nVideo During training, we take 10 random clips of length 32 frames from each video. For video clip augmentations, we follow a standard protocol as in (Korbar et al., 2018). During evaluation, we uniformly sample 10 clips from each video, average softmax scores, and predict the class having the highest mean softmax score. We then measure the mean video top-1 accuracy across all videos and all official folds. During training, we use SGD with initial learning rate 0.0025, which we gradually warm up to 2 · 10−2 in the first 2 epochs. The weight decay is set to 5 · 10−3 and momentum to 0.9. We use a mini-batch size of 32 and train for 12 epochs with the learning rate multiplied by 5 · 10−2 at 6 and 10 epochs. We compare our GDT pretrained model with both self-supervised methods, and supervised pretraining, and report average top-1 accuracies on UCF101 and HMDB-51 action recognition task across three folds in table A.3.\nFew-shot classification We follow the protocol in (Jing & Tian, 2018) and evaluate our our GDT pretrained network using few-shot classification on the UCF-101 dataset, and additionally on HMDB-51. We randomly sample n videos per class from the train set, average the encoder’s global average pooling features from ten clips per training sample and measure classification accuracy performance on the validation set using a k-nearest neighbor classifier, with k set to 1.\nRetrieval We follow the standard protocol as outlined in (Xu et al., 2019). We use the split 1 of UCF101, and additionally HMDB-51. We uniformly sample 10 clips per video, and average the max-pooled features after the last residual block for each clip per video. We use these averaged features from the validation set to query the videos in the training set. The cosine distance of representations between the query clip and all clips in the training set are computed. When the class of a test clip appears in the classes of k nearest training clips, it is considered to be correctly predicted. We report accuracies for k = 1, 5, 10, 20, 50 and compare with other self-supervised methods on UCF101 and HMDB-51 in table A.2.\nAudio We extract 10 equally spaced 2-second sub-clips from each full audio sample of ESC50 (Piczak, 2015) and 60 1-second sub-clips from each full sample of DCASE2014 (Stowell et al., 2015). We save the activations that result from the audio encoder to quickly train the linear classifiers. We use activations after the last convolutional layer of the ResNet-9 and apply a max pooling with kernelsize (1,3) and stride of (1,2) without padding to the output. For both datasets, we then optimize a L2 regularized linear layer with batch size 512 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 1 · 10−4, weight-decay set to 5 · 10−4 and the default parameters. The classification score for each audio sample is computed by averaging the sub-clip scores in the sample, and then predicting the class with the highest score. The mean top-1 accuracy is then taken across all audio clips and averaged across all official folds. For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but follow a much shorter training schedule as our model is pretrained. 
For VGG-Sound (Chen et al., 2020a), we follow their evaluation metrics but use a much shorter training schedule, since our model is pretrained. We optimize the network with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate $1 \cdot 10^{-4}$ for the pretrained backbone and $1 \cdot 10^{-3}$ for the randomly initialized new linear layer, weight decay set to $1 \cdot 10^{-5}$, and otherwise default parameters. We drop the learning rate at epochs 10 and 20 and train for 30 epochs, which takes less than 10h on a single Nvidia GTX 1080 Titan GPU." } ]
2020
null
SP:3e9c01477200929c84f6725472107beab75a573e
[ "The paper builds upon prior work that shows that overparameterized networks learned by ERM can have poor worst-case performance over pre-defined groups. Specifically, the paper demonstrates that this result is not necessarily due to overparameterized models learning poor representations for rare subgroups, but rather to mis-calibration in the classification layer, which can be addressed with two simple correction techniques: thresholding and re-training the classification layer. They show improvements over ERM in worst-case subgroup error. " ]
Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples, while still generalising to unseen test samples. However, several recent works have revealed that such models’ good average performance does not always translate to good worst-case performance: in particular, they may perform poorly on subgroups that are under-represented in the training set. In this paper, we show that in certain settings, overparameterised models’ performance on under-represented subgroups may be improved via post-hoc processing. Specifically, such models’ bias can be restricted to their classification layers, and manifest as structured prediction shifts for rare subgroups. We detail two post-hoc correction techniques to mitigate this bias, which operate purely on the outputs of standard model training. We empirically verify that with such post-hoc correction, overparameterisation can improve average and worst-case performance.
[ { "affiliations": [], "name": "Aditya Krishna Menon" }, { "affiliations": [], "name": "Ankit Singh Rawat" }, { "affiliations": [], "name": "Sanjiv Kumar" } ]
[ { "authors": [ "Alekh Agarwal", "Alina Beygelzimer", "Miroslav Dudik", "John Langford", "Hanna Wallach" ], "title": "A reductions approach to fair classification", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hyojin Bahng", "Sanghyuk Chun", "Sangdoo Yun", "Jaegul Choo", "Seong Joon Oh" ], "title": "Learning de-biased representations with biased representations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Tolga Bolukbasi", "Kai-Wei Chang", "James Zou", "Venkatesh Saligrama", "Adam Kalai" ], "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Mateusz Buda", "Atsuto Maki", "Maciej A. Mazurowski" ], "title": "A systematic study of the class imbalance problem in convolutional neural networks", "venue": null, "year": 2017 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "Conference on Fairness, Accountability, and Transparency,", "year": 2018 }, { "authors": [ "Jonathon Byrd", "Zachary Chase Lipton" ], "title": "What is the effect of importance weighting in deep learning", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Toon Calders", "Sicco Verwer" ], "title": "Three Naive Bayes approaches for discrimination-free classification", "venue": "Data Mining and Knowledge Discovery,", "year": 2010 }, { "authors": [ "Evgenii Chzhen", "Christophe Denis", "Mohamed Hebiri", "Luca Oneto", "Massimiliano Pontil" ], "title": "Leveraging labeled and unlabeled data for consistent fair binary classification", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Guillem Collell", "Drazen Prelec", "Kaustubh R. Patil" ], "title": "Reviving threshold-moving: a simple plug-in bagging ensemble for binary and multiclass imbalanced data", "venue": null, "year": 2016 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Innovations in Theoretical Computer Science Conference (ITCS),", "year": 2012 }, { "authors": [ "Karan Goel", "Albert Gu", "Yixuan Li", "Christopher Ré" ], "title": "Model patching: Closing the subgroup performance gap with data augmentation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Hila Gonen", "Yoav Goldberg" ], "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them, 2019", "venue": null, "year": 2019 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. 
Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nathan Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Tatsunori Hashimoto", "Megha Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness without demographics in repeated loss minimization", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Haibo He", "Edwardo A. Garcia" ], "title": "Learning from imbalanced data", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2009 }, { "authors": [ "Ray Jiang", "Aldo Pacchiano", "Tom Stepleton", "Heinrich Jiang", "Silvia Chiappa" ], "title": "Wasserstein fair classification", "venue": "Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In Eighth International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Byungju Kim", "Hyunwoo Kim", "Kyungsu Kim", "Sungjin Kim", "Junmo Kim" ], "title": "Learning not to learn: Training deep neural networks with biased data", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yi Li", "Nuno Vasconcelos" ], "title": "REPAIR: removing representation bias by dataset resampling", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zachary Lipton", "Julian McAuley", "Alexandra Chouldechova" ], "title": "Does mitigating ML’s impact disparity require treatment disparity", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X. 
Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aditya Krishna Menon", "Sadeep Jayasumana", "Ankit Singh Rawat", "Himanshu Jain", "Andreas Veit", "Sanjiv Kumar" ], "title": "Long-tail learning via logit adjustment, 2020", "venue": null, "year": 2020 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Junhyun Nam", "Hyuntak Cha", "Sungsoo Ahn", "Jaeho Lee", "Jinwoo Shin" ], "title": "Learning from failure: Training debiased classifier from biased classifier, 2020", "venue": null, "year": 2020 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "The role of over-parametrization in generalization of neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "S. Sagawa", "P.W. Koh", "T.B. Hashimoto", "P. Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "S. Sagawa", "A. Raghunathan", "P.W. Koh", "P. Liang" ], "title": "An investigation of why overparameterization exacerbates spurious correlations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "N. Sohoni", "J. Dunnmon", "G. Angus", "A. Gu", "C. Ré" ], "title": "No subclass left behind: Fine-grained robustness in coarse-grained classification problems", "venue": "In To appear in Conference on Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Grant Van Horn", "Pietro Perona" ], "title": "The devil is in the tails: Fine-grained classification in the wild", "venue": "arXiv preprint arXiv:1709.01450,", "year": 2017 }, { "authors": [ "Dennis Wei", "Karthikeyan Natesan Ramamurthy", "Flavio Calmon" ], "title": "Optimized score transformation for fair classification", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Junfeng Wen", "Chun-Nam Yu", "Russell Greiner" ], "title": "Robust learning under uncertain test distributions: Relating covariate shift to model misspecification", "venue": "In Proceedings of the 31st International Conference on International Conference on Machine Learning, ICML’14, pp. II–631–II–639. JMLR.org,", "year": 2014 }, { "authors": [ "Han-Jia Ye", "Hong-You Chen", "De-Chuan Zhan", "Wei-Lun Chao" ], "title": "Identifying and compensating for feature deviation in imbalanced deep learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez Rodriguez", "Krishna P. 
Gummadi" ], "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment", "venue": "In Proceedings of the 26th International Conference on World Wide Web, WWW ’17,", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Junjie Zhang", "Lingqiao Liu", "Peng Wang", "Chunhua Shen" ], "title": "To balance or not to balance: A simple-yet-effective approach for learning with long-tailed distributions, 2019", "venue": null, "year": 2019 }, { "authors": [ "Marvin Zhang", "Henrik Marklund", "Abhishek Gupta", "Sergey Levine", "Chelsea Finn" ], "title": "Adaptive risk minimization: A meta-learning approach for tackling group shift, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhi-Hua Zhou", "Xu-Ying Liu" ], "title": "Training cost-sensitive neural networks with methods addressing the class imbalance problem", "venue": "IEEE Transactions on Knowledge and Data Engineering (TKDE),", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples, while still generalising to unseen test samples (Zhang et al., 2017; Neyshabur et al., 2019; Nakkiran et al., 2020). However, several recent works have revealed that overparameterised models' good average performance does not translate to good worst-case performance (Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2020a;b). In particular, the test performance of such models may be poor on certain subgroups that are under-represented in the training data. Worse still, such degradation can be exacerbated as model complexity increases. This indicates the unsuitability of such models in ensuring fairness across subgroups, a topical concern given the growing societal uses of machine learning (Dwork et al., 2012; Hardt et al., 2016; Buolamwini & Gebru, 2018).\nWhy does overparameterisation induce such unfavourable bias, and how can one correct for it? Sagawa et al. (2020a) demonstrated how such models may fit to spurious correlations that explain under-represented samples, which can generalise poorly. Sagawa et al. (2020b) further posited that overparameterised models have an inductive bias towards memorising labels for as few samples as possible, which are invariably those from under-represented subgroups. To mitigate such bias, existing approaches include subsampling majority subgroups (Sagawa et al., 2020b), and modifying the training objective (Sagawa et al., 2020a; Nam et al., 2020; Zhang et al., 2020; Goel et al., 2020). This suggests two important points regarding overparameterised models' performance:\n(a) with standard training, increasing model complexity exacerbates degradation on rare subgroups; (b) controlling this degradation may require alternate training objectives or procedures.\nIn this paper, we establish that while overparameterised models are biased against under-represented examples, in certain settings, such bias may be easily corrected via post-hoc processing of the model outputs. Specifically, such models' bias can be largely restricted to their classification layers, and manifest as structured shifts in predictions for rare subgroups. We thus show how two simple techniques applied to the model outputs — classifier retraining based on the learned representations, and correction of the classification threshold — can help overparameterised models improve worst-subgroup performance over underparameterised counterparts. Consequently, even with standard training, overparameterised models can learn sufficient information to model rare subgroups.\nTo make the above concrete, Figure 1 plots a histogram of model predictions for a synthetic dataset from Sagawa et al. (2020b) (cf. §2). The data comprises four subgroups generated from combinations (y, a(x)) of labels y ∈ {±1} and a feature a(x) ∈ {±1}. Most samples (x, y) have y = a(x), and so these comprise two dominant subgroups within the positive and negative samples. We train an overparameterised linear model, yielding logits $f_{\pm 1}(x)$. We then plot the decision scores $f_{+1}(x) - f_{-1}(x)$, which are expected to be $> 0$ iff y = +1. Strikingly, there is a distinct separation amongst the subgroup scores: e.g., samples with y = +1, a(x) = −1 have systematically lower scores than those with y = +1, a(x) = +1. Consequently, the model incurs a significant error rate on rare subgroups. 
The structured nature of the separation suggests post-hoc shifting the scores to align the distributions; this markedly improves performance on the rare subgroups (Figure 1b).\nScope and contributions. The primary aim of this work is furthering the understanding of the behaviour of overparameterised models, rather than proposing new techniques. Indeed, the post-hoc correction techniques we employ have been well-studied in the related problem setting of long-tail learning or learning under class imbalance (He & Garcia, 2009; Buda et al., 2017; Van Horn & Perona, 2017). Several works have demonstrated that the representations learned by standard networks contain sufficient information to distinguish between dominant and rare labels (Liu et al., 2019; Zhang et al., 2019; Kang et al., 2020; Menon et al., 2020). Similar techniques are also common in the fairness literature (Hardt et al., 2016; Chzhen et al., 2019). However, it is not a priori clear whether such techniques are effective for overparameterised models, whose ability to perfectly fit the training labels can thwart otherwise effective approaches (Sagawa et al., 2020a).\nExisting techniques for improving the worst-subgroup error of overparameterised models involve altering the inputs to the model (Sagawa et al., 2020b), or the training objective (Sagawa et al., 2020a). By contrast, the techniques we study alter the outputs of a standard network, trained to minimise the softmax cross-entropy on the entire data. Our findings illustrate that such models do not necessarily require bespoke training modifications to perform well on rare subgroups: even with standard training, overparameterised models can (in certain settings) learn useful information about rare subgroups.\nIn summary, our contributions are:\n(i) we demonstrate that, in certain settings, overparameterised models' poor performance on under-represented subgroups is the result of a structured bias in the classification layer (cf. §3);\n(ii) we show that two simple post-hoc correction procedures (cf. §4) can mitigate the above bias, and thus significantly reduce their worst-subgroup error (cf. §5)." }, { "heading": "2 BACKGROUND AND SETTING", "text": "Suppose we have a labelled training sample $S = \{(x_i, y_i)\}_{i=1}^{n} \in (\mathcal{X} \times \mathcal{Y})^n$, for instance space $\mathcal{X} \subset \mathbb{R}^d$ and label space $\mathcal{Y}$. One typically assumes $S$ is an i.i.d. draw from some unknown distribution $\mathrm{P}(x, y)$. Further, suppose each $(x, y)$ has an associated group membership $g(x, y) \in \mathcal{G}$, with $G := |\mathcal{G}|$. This induces $G$ data subgroups, with a prior $\mathrm{P}(g)$ and conditional distributions $\mathrm{P}(x, y \mid g)$. Following Sagawa et al. (2020a;b), we consider groups $g(x, y) = (y, a(x))$, where $a(x) \in \mathbb{R}$ is some attribute within $x$. We assume $a(x)$ is fully specified during train and test time; while not always realistic, such an assumption has precedent in the fairness literature (Lipton et al., 2018).\nThe standard goal in classification is to learn a classifier $h : \mathcal{X} \to \mathcal{Y}$ that minimises the average error\n$$L_{\mathrm{avg}}(h) := \mathbb{E}_{g}\, \mathbb{E}_{x, y \mid g}\left[\ell_{01}(y, h(x))\right],$$\nwhere $\ell_{01}(y, h(x)) = \llbracket y \neq h(x) \rrbracket$ is the 0-1 loss. Typically, one constructs $h(x) = \operatorname{argmax}_y f_y(x)$, where $f(x) \in \mathbb{R}^{\mathcal{Y}}$ comprises real-valued logits, as learned by empirical risk minimisation (ERM): $\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \ell(y_i, f(x_i))$. Here, $\ell$ is a surrogate loss such as the softmax cross-entropy, and $\mathcal{F}$ is a function class, such as neural networks with a fixed architecture. A network is overparameterised if it can perfectly fit the training labels, and thus drive the training error to zero. Remarkably — and in apparent contrast to orthodox statistical wisdom — this does not come at the expense of generalisation on test samples (Zhang et al., 2017; Belkin et al., 2019; Nakkiran et al., 2020).\nThis apparent power comes at a price, however. Let us define the worst-subgroup error as\n$$L_{\max}(h) := \max_{g \in \mathcal{G}} \mathbb{E}_{x, y \mid g}\left[\ell_{01}(y, h(x))\right], \qquad (1)$$\ni.e., the worst-case error over all data subgroups.
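To fix ideas, a minimal numpy sketch of how these two quantities can be estimated from a finite sample; the array-based interface is an assumption for illustration:

```python
import numpy as np

def subgroup_errors(y_true, y_pred, groups):
    """Empirical per-group 0-1 errors, plus the average error L_avg and the
    worst-subgroup error L_max of equation (1). All inputs are 1-D arrays,
    with `groups` holding g(x, y) for each example."""
    errs = {}
    for g in np.unique(groups):
        mask = groups == g
        errs[g] = float(np.mean(y_true[mask] != y_pred[mask]))  # E_{x,y|g}[l_01]
    priors = {g: float(np.mean(groups == g)) for g in errs}
    l_avg = sum(priors[g] * errs[g] for g in errs)  # equals np.mean(y_true != y_pred)
    l_max = max(errs.values())                      # worst-case error over subgroups
    return errs, l_avg, l_max
```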
Prior work (Sagawa et al., 2020a;b) established that for overparameterised models, the worst-subgroup training error can go to zero (since the model can fit all samples), but the worst-subgroup test error can devolve to that of random guessing (since the model can fit spurious correlations for rare subgroups). Further, the degree of degradation can increase with the model complexity. This indicates that the naïve use of overparameterised models may be at odds with ensuring fairness across data subgroups, a core concern in modern applications of machine learning (Calders & Verwer, 2010; Dwork et al., 2012; Hardt et al., 2016; Zafar et al., 2017).\nThere are several potential strategies to cope with this. One is to perform distributionally robust optimisation (Hashimoto et al., 2018; Mohri et al., 2019; Sagawa et al., 2020a), and minimise:\n$$L_{\mathrm{DRO}}(h) := \max_{g \in \mathcal{G}} \left[ \mathbb{E}_{x, y \mid g}\left[\ell(y, f(x))\right] + \Omega_g(f) \right],$$\nwhere $\Omega_g$ is some per-group regulariser. In settings where $\mathrm{P}(g)$ is non-uniform, Sagawa et al. (2020a) proposed to set $\Omega_g(f) \equiv \frac{1}{\sqrt{n_g}}$, where $n_g$ is the number of training samples with group $g$. Alternatively, one can reweight samples to upweight the contribution of rarer groups and minimise:\n$$L_{\mathrm{RW}}(h) := \sum_{g \in \mathcal{G}} w_g \cdot \mathbb{E}_{x, y \mid g}\left[\ell(y, f(x))\right], \qquad (2)$$\nwhere, e.g., $w_g = \mathrm{P}(g)$ leads to the standard average error, while $w_g = 1$ implicitly upweights rare subgroups. While intuitive, Sagawa et al. (2020b) established that such an approach is also subject to poor worst-subgroup performance, owing to a broader issue with using importance weighting in conjunction with neural networks (Wen et al., 2014; Byrd & Lipton, 2019). Sagawa et al. (2020b) established that one can achieve good performance by instead subsampling dominant groups, an operation equivalent in expectation to minimising $L_{\mathrm{RW}}(h)$ with $w_g = 1$. Recent developments in the mitigation of worst-subgroup errors include Nam et al. (2020); Zhang et al. (2020); Goel et al. (2020).\nIn the sequel, we shall make extensive use of three datasets from Sagawa et al. (2020a;b), each of which involves binary labels y ∈ Y and a binary attribute a(x) ∈ A:\n(i) synth, a synthetic dataset where $\mathcal{X} \subset \mathbb{R}^{200}$, Y = {±1}, and A = {±1}.\n(ii) waterbirds, a dataset of bird images with Y = {land bird, water bird} corresponding to the bird type, and A = {land background, water background} corresponding to the background.\n(iii) celebA, a dataset of celebrity images with Y = {blond, dark} corresponding to individuals' hair colour, and A = {male, female}.\nFor each dataset, we construct four subgroups g(x, y) = (y, a(x)), with two such subgroups being under-represented. On synth and waterbirds, these correspond to subgroups with $y \neq a(x)$, while on celebA, these correspond to the subgroups {(blond, male)} and {(dark, female)}. Owing to the rarity of certain subgroups, it is intuitively easy for an overparameterised network to learn to predict a(x) rather than y, and memorise spurious patterns to predict the rare subgroups.\nTo train overparameterised models, we follow the setup of Sagawa et al. (2020a;b), which we briefly summarise. 
For celebA and waterbirds, we use a ResNet-50, which can attain perfect training accuracy. For synth, we train a weakly regularised ($\lambda = 10^{-16}$) logistic regression model on a fixed representation $\Phi$ constructed as follows: for fixed $m$, we construct $\Phi(x) = \mathrm{ReLU}(Vx)$, where $V \in \mathbb{R}^{m \times 200}$ is a random Gaussian matrix with normalised rows. Overparameterised models consistently demonstrate a significant gap between the average and worst-subgroup error: e.g. (see Figure 4), on synth, the model achieves 91% average accuracy, but 36% worst-subgroup accuracy." }, { "heading": "3 THE DOMINANT SUBGROUP BIAS OF OVERPARAMETERISED MODELS", "text": "We study the nature of overparameterised models' poor performance on rare subgroups more closely. We make two observations: first, this under-performance is largely owing to a bias in the classification layer. Second, this bias manifests in the form of a distribution shift in the model scores for rare subgroups. This shall subsequently motivate post-hoc correction procedures." }, { "heading": "3.1 CAN THE LEARNED REPRESENTATIONS DISTINGUISH BETWEEN SUBGROUPS?", "text": "Neural models make predictions based on logits $f_y(x) = w_y^{\top} \Phi(x)$, for classification weights $w_y \in \mathbb{R}^K$ and representations $\Phi(x) \in \mathbb{R}^K$. Suppose such a model performs poorly on a subgroup $(\bar{y}, \bar{a})$. This implies that for a sample $(x, \bar{y})$ in this subgroup, $f_{\bar{y}}(x) < f_{y'}(x)$ for some competing $y'$. Why do overparameterised models underperform on rare subgroups? The factorised nature of the logits suggests this arises from issues with the representations, the classification weights, or both. To determine which of these is likely, we begin by inspecting the representations. Recall that we train a ResNet-50 on celebA and waterbirds, which produces $K = 2048$-dimensional instance embeddings. We may thus embed test instances from each of the four data subgroups, and study their geometric structure. Intuitively, if instances from rare and dominant subgroups with the same label have limited similarity, the representations for rare samples are insufficiently rich.\nThe high dimensionality of the embeddings prohibits an exact inspection, but as a rough surrogate, we employ a two-dimensional tSNE (Maaten & Hinton, 2008) visualisation. As tSNE attempts to preserve neighbourhood information amongst samples, we hope to evince the relative geometries of samples from the various subgroups. Figure 2 reveals that for celebA and waterbirds, samples from the rare subgroups tend to be closely clustered with those belonging to the same class.\nThe above suggests that the representations learned by the models contain information to (at least partly) help distinguish samples from rare versus dominant subgroups.¹ This suggests the poor worst-subgroup performance of such models may result from issues with the classification layer." }, { "heading": "3.2 HOW DOES BIAS MANIFEST IN THE CLASSIFICATION LAYER?", "text": "To study the potential issues in the classification layer, we continue our strategy of visualising the model outputs. Since the datasets we consider involve binary labels, we may simply study the distribution of the decision scores $f_{+1}(x) - f_{-1}(x)$. Since our predicted label is the highest scoring logit, we desire this score to be $> 0$ iff the sample has a positive label. As in the previous section, we may break down these scores for each of the subgroups induced by label $y$ and attribute $a(x)$.\nWe earlier illustrated this distribution for synth in Figure 1; Figure 3 further provides distributions for celebA and waterbirds. On both datasets, within each label, there is a shift in the score distributions for one or both of the rare subgroups. This is in keeping with the poor performance of the model on these subgroups: e.g., on waterbirds, rare samples in the positive class systematically have negative decision scores, implying the model incurs a high false-negative rate on these samples.
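A minimal sketch of how such per-subgroup score histograms can be produced, assuming arrays of per-class logits, labels, and attribute values; the plotting details are illustrative only:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_subgroup_scores(logits_pos, logits_neg, y, a):
    """Histogram of decision scores f_{+1}(x) - f_{-1}(x), broken down by the
    four subgroups induced by label y and attribute a (both in {-1, +1})."""
    scores = logits_pos - logits_neg
    for label in (-1, 1):
        for attr in (-1, 1):
            mask = (y == label) & (a == attr)
            plt.hist(scores[mask], bins=50, alpha=0.5, density=True,
                     label=f"y={label}, a={attr}")
    plt.axvline(0.0, linestyle="--")  # the default classification threshold
    plt.xlabel("decision score"); plt.ylabel("density"); plt.legend()
    plt.show()
```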
In light of the above findings, we may revisit ablations from prior work that tease apart the factors causing poor worst-subgroup performance. For example, Sagawa et al. (2020b) showed that increasing model complexity on synth can degrade worst-subgroup performance beyond a certain critical point, and that this can be mitigated with a combination of strong regularisation and subsampling. The same conclusions largely hold (see Appendix D) for the varying decision scores amongst subgroups: e.g., the difference in rare and dominant subgroup scores is exacerbated as we increase model complexity." }, { "heading": "3.3 DISCUSSION AND IMPLICATIONS", "text": "The systematic under-prediction of scores for rare subgroups can be seen as a particular manifestation of neural networks producing uncalibrated probability estimates (Guo et al., 2017): from Figure 3, the model will systematically under- or over-estimate the probability of rare samples being positive.\nWe emphasise here that our illustrations above are for test samples not observed during training. Training samples exhibit qualitatively different trends, reflective of overparameterised models' ability to perfectly fit them: e.g., the decision scores for all samples are consistently on the correct side of the decision boundary (see Appendix D). The fact that the scores on unseen samples exhibit a distinction amongst subgroups suggests the network encodes an implicit bias against such samples.\nAt the same time, this bias largely manifests as a translation of the scores. This suggests a simple post-hoc correction of the scores may suffice to improve performance; e.g., bumping up scores for samples with a(x) = land background in waterbirds can make the error on the rare subgroup (water bird, land background) more equitable. It remains now to more carefully describe such post-hoc procedures, and study their performance.\n¹We crucially rely on a classification layer to help distinguish between samples from different subgroups; by themselves, however, the embeddings can be systematically biased (Bolukbasi et al., 2016; Gonen & Goldberg, 2019)." }, { "heading": "4 CORRECTING THE SUBGROUP BIAS OF OVERPARAMETERISED MODELS", "text": "Drawing from the literature on long-tail learning (Zhang et al., 2017; Kang et al., 2020; Ye et al., 2020) and fairness (Hardt et al., 2016; Chzhen et al., 2019), we now detail two post-hoc correction techniques to mitigate overparameterised models' bias against rare subgroups." }, { "heading": "4.1 CLASSIFIER RETRAINING", "text": "Given that §3.1 demonstrates that learned representations $\Phi(x)$ appear meaningful across subgroups, a natural thought is to fit a linear classifier on top of them; i.e., we treat $\{(\Phi(x_i), y_i)\}_{i=1}^{n}$ as a new training set for a linear model. Overparameterisation introduces a challenge, however: since the original network can find a classifier with the lowest possible (i.e., zero) training error, simple modifications to the loss (e.g., reweighting samples) will result in learning the same classifier.\nFortunately, there are several options for finding a classifier distinct from that of the original network: e.g., one can subsample elements from the majority subgroups, per Sagawa et al. (2020b). We emphasise an important difference between employing such techniques in standard training, and in classifier retraining: the latter uses representations learned from a standard network, as opposed to changing the network objective itself. The success of the latter shall thus demonstrate that the standard network representations are rich enough to reasonably distinguish between subgroups."
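A minimal sketch of this classifier retraining (CRT) step with majority-subgroup subsampling; the embedding extraction is assumed to have been done beforehand, and the solver settings are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_classifier(embeddings, labels, groups, seed=0):
    """Fit a linear classifier on frozen embeddings Phi(x), after subsampling
    every subgroup down to the size of the rarest one (per Sagawa et al., 2020b)."""
    rng = np.random.default_rng(seed)
    n_min = min(int(np.sum(groups == g)) for g in np.unique(groups))
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in np.unique(groups)])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[keep], labels[keep])
    return clf
```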
}, { "heading": "4.2 THRESHOLD CORRECTION", "text": "The illustrations in Figure 3 suggest a simple approach to improving classification performance: rather than using an identical classification threshold for all samples, one can employ per-subgroup thresholds. In detail, observe that in the case of binary labels, we predict $h(x) = +1 \iff f_{+1}(x) - f_{-1}(x) > 0$. Instead, we can predict\n$$h(x) = +1 \iff f_{+1}(x) - f_{-1}(x) > t_{a(x)},$$\nwhere $\{t_a \in \mathbb{R} : a \in \mathcal{A}\}$ are per-attribute thresholds. Equivalently, this translates the scores for all samples with a given attribute $a(x)$ so that the decision boundary is at 0. This can compensate for the distribution shifts observed in Figures 1 and 3: intuitively, by enforcing a lower threshold for samples with $a(x) = -1$, we can account for the fact that most of these samples obtain a low model score. It remains to specify how to choose the thresholds $t_a$. One simple option is to perform a parameter sweep, and employ the thresholds that minimise the worst-subgroup error on a holdout set. This is feasible in settings where $|\mathcal{A}|$ and $|\mathcal{Y}|$ are small, and can directly target the performance measure of interest. For more complex problems, one may suitably parameterise the thresholds, or attempt to learn them so as to minimise a suitable objective (see discussion below)."
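A minimal sketch of such a threshold sweep for the binary case, assuming holdout decision scores, labels in {-1, +1}, and a binary attribute; the grid is an arbitrary illustrative choice:

```python
import numpy as np

def fit_thresholds(scores, y_true, attrs, grid=np.linspace(-20, 20, 401)):
    """Pick per-attribute thresholds {t_a} minimising worst-subgroup error on a
    holdout set: predict +1 iff score > t_{a(x)}. Since t_a only affects the
    two subgroups sharing attribute a, each threshold can be tuned separately."""
    best = {}
    for a in np.unique(attrs):
        mask = attrs == a
        worst_err = []
        for t in grid:
            pred = np.where(scores[mask] > t, 1, -1)
            errs = [np.mean(pred[y_true[mask] == y] != y) for y in (-1, 1)]
            worst_err.append(max(errs))  # worst error over the two subgroups
        best[a] = float(grid[np.argmin(worst_err)])
    return best  # e.g. {-1: t_minus, +1: t_plus}
```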
}, { "heading": "4.3 DISCUSSION AND RELATED WORK", "text": "The long-tail learning literature has demonstrated the value of both classifier retraining (Zhang et al., 2017; Kang et al., 2020) and threshold correction (Zhou & Liu, 2006; Collell et al., 2016; Menon et al., 2020). Similarly, in the fairness literature, post-processing of classifier outputs based on per-subgroup thresholds is a well-established technique (Hardt et al., 2016). Our aim here is to investigate the effectiveness of such techniques in the overparameterised setting. Given such models can perfectly fit training labels, it is less clear whether the representations learned from standard training are sufficiently useful to learn a good classifier, and whether their outputs can be easily corrected post-hoc; e.g., a model that merely memorised certain training labels could not be meaningfully corrected to perform well on test samples.\nTo extend threshold correction to multi-class settings, one could tie the thresholds to the frequencies $\mathrm{P}((y, a(x)))$ of various subgroups, rather than tune them. This is akin to class prior or logit correction techniques from long-tail learning (Collell et al., 2016; Menon et al., 2020). In the fairness literature, relevant techniques include Hardt et al. (2016), who propose an objective to select optimal thresholds for ensuring a particular notion of fairness; and recent techniques that learn a post-processing of model scores (Chzhen et al., 2019; Jiang et al., 2020; Wei et al., 2020) to de-bias predictions. Exploring such techniques in the overparameterised setting is an interesting direction for future work.\nThe contemporaneous work of Sohoni et al. (2020) also considers the viability of improving worst-subgroup performance using the learned representations. Specifically, they propose to cluster these embeddings, and train a new distributionally robust classifier to predict cluster assignments. While very much in the spirit of our classifier retraining proposal, their study does not consider the efficacy of such a procedure under varying model complexity, nor correction techniques that operate purely on the classification outputs." }, { "heading": "5 EXPERIMENTS: HOW EFFECTIVE IS POST-HOC CORRECTION?", "text": "We now show that the above post-hoc correction techniques can significantly improve overparameterised models' performance on rare subgroups. This indicates that even when trained in the usual manner, such models can learn useful information about rare subgroups." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We follow the same basic setup as Sagawa et al. (2020a;b). We instantiate overparameterised models on each dataset: a ResNet-50 on celebA and waterbirds; a linear logistic regression on synth, using fixed features as described in §2; and a linear logistic regression on waterbirds, using the embeddings from a ResNet-18 pre-trained on ImageNet. See Appendix A for details of experimental hyper-parameters. We measure both the average and worst-subgroup errors on both the train and test set, repeating each experiment 5 times.\nWe apply post-hoc correction to these learned models, via classifier retraining (CRT) on the learned representations, using a linear logistic regression model with subsampling of the dominant subgroups per Sagawa et al. (2020b); and threshold correction (THR) on the decision scores, using a holdout set to estimate thresholds $\{t_a : a \in \{\pm 1\}\}$ that minimise the worst-subgroup error. For waterbirds, we use the holdout set from Sagawa et al. (2020a); for celebA, we use the standard holdout set; and for synth, we construct a holdout set using 20% of the training samples.\nAs a reference, we report the results of standard minimisation on a balanced subsample (SAM) of the training set, where following Sagawa et al. (2020b) we down-sample each subgroup to have $n_{\min}$ examples, where $n_{\min}$ is the number of examples in the smallest subgroup. We also report the results of the regularised distributionally robust optimisation (DRO) procedure of Sagawa et al. (2020a). This involves strongly regularising the model, and modifying the training objective to target the worst-subgroup error. Our aim is not to improve on the performance of this method, which directly trains to minimise the worst-subgroup error; rather, our goal is to understand and quantify how much useful information standard model training learns about rare subgroups." }, { "heading": "5.2 RESULTS AND DISCUSSION", "text": "Table 1 summarises the test set results on all datasets. We highlight several key points.\nERM has poor worst-subgroup error. In keeping with Sagawa et al. (2020a;b), standard ERM performs poorly in terms of worst-subgroup error. 
This may be strongly mitigated by modifying the training objective per DRO, or by modifying the training sample per SAM. (Each of these incurs only a mild penalty in terms of the average error.) This confirms the finding of prior work that modification of the training procedure can achieve a suitable trade-off between average- and worst-subgroup error.\nPost-hoc correction improves worst-subgroup error. Encouragingly, post-hoc techniques consistently and significantly reduce the worst-subgroup error of ERM. For example, on celebA, THR reduces the worst-subgroup error from 56.11% to 12.96%, with only a mild increase in the average error. Between the post-hoc techniques, we generally find THR to yield the best performance. We reiterate that this technique does not modify the training procedure directly, but simply post-processes the learned ERM model output. This confirms the analysis of the previous section, which indicates that (on the considered datasets) the outputs of standard ERM can by themselves contain sufficient information to overcome poor worst-subgroup error.\nPost-hoc correction is comparable to training modification. Post-hoc techniques generally compare favourably with DRO and SAM in terms of the trade-off between average- and worst-subgroup error. While DRO is notably superior on synth, this gap appears to be in part due to the challenge of tuning hyperparameters given holdout data with limited subgroup representation. For example, using a larger holdout set size of n = 2400 samples improves the performance of THR on synth to 24%.\nThe superior overall performance of DRO is in keeping with findings about the efficacy of training modification for fairness (Agarwal et al., 2018). Nonetheless, the generally competitive performance of CRT and THR suggests these techniques can extract non-trivial gains from overparameterised models.\nOverparameterisation with post-hoc correction can improve worst-subgroup error. We now confirm that increasing model complexity can result in improved average and worst-subgroup performance, provided the ERM outputs are suitably corrected. As the synth and waterbirds datasets with fixed features allow for modifying the feature dimensionality $m$ — which controls the degree of overparameterisation — we perform an experiment where we vary $m$ between $10^1$ and $10^4$. Figure 4 shows the average and worst-subgroup error on both the train and test set, over 5 independent trials. The overall and worst-case training error approaches 0% as $m$ increases (i.e., the model becomes overparameterised). Further, the average test error appears reasonable, approaching 10%; however, the worst-case error devolves as $m$ increases, exceeding 60%.\nThe plots also show the results of THR as $m$ is varied. Here, both average and worst-case error steadily improve: indeed, overparameterisation aids worst-subgroup performance. Further, the gains in the worst-subgroup test error are dramatic compared to ERM. Thus, while ERM is susceptible to a trade-off between overparameterisation and worst-group accuracy, this tension can be mitigated with simple mechanisms to encourage equitable performance. This general trend is largely robust to distributional properties such as the fraction of majority samples in synth, though significantly increasing this expectedly degrades the best achievable performance; see Appendix D.\nIllustration of classification thresholds. Figure 5 studies the effect of tuning the classification thresholds $t_a$ on each of the per-subgroup errors for synth ($m = 10^4$). 
Here, we plot the errors for each of the four subgroups as we sweep over the thresholds. The case $t_a = 0$ corresponds to the baseline. Modifying $t_a$ trades off performance on the rare subgroups compared to the dominant ones. One may then pick optimal thresholds for $a = \pm 1$, which correspond to cross-over points of the error curves for each value of $a$: for example, when $t_{+1} \approx 10$, the errors on the subgroups $y = +1, a = +1$ and $y = -1, a = +1$ are equitable, and thus the maximum of the two errors is minimised. Choosing such thresholds aligns the score distributions for rare and dominant subgroups (right)." }, { "heading": "6 DISCUSSION AND FUTURE WORK", "text": "Post-hoc correction relies on knowledge of the data subgroups at train and test time. An important practical challenge is extending this to settings with unknown subgroups. One natural strategy is to attempt to unearth these subgroups via clustering of the model outputs, but careful study is needed to inform design choices (e.g., choosing the number of clusters). For a contemporaneous study of a similar technique, see Sohoni et al. (2020). Exploring the viability of post-hoc correction in overparameterised settings with multi-class labels, and multiple subgroups — e.g., through adaptations of techniques noted in §4.3 — is also of interest. More broadly, the study of learning bias-free representations has received significant interest (Bahng et al., 2020; Kim et al., 2019; Li & Vasconcelos, 2019; Arjovsky et al., 2020; Nam et al., 2020). Exploring the efficacy of such approaches in overparameterised settings would likewise be valuable." }, { "heading": "A EXPERIMENTAL SETUP", "text": "For the logistic regression experiments, we train the models using the LogisticRegression class in sklearn with $C = 1/(n \cdot \lambda)$, where $n$ is the number of training examples and $\lambda = 10^{-16}$ is a minimal regularisation strength. This employs the liblinear solver under the hood, and thus finds an approximate minimiser for the convex empirical risk objective on the entire training sample.\nFor the ResNet-50 experiments, we initialise the model using a ResNet-50 pre-trained on ImageNet. We train the models using SGD with a momentum value of 0.9. We use a batch size of 128, weight decay $10^{-4}$, and a learning rate decayed according to a cosine schedule. We train with a base learning rate of $10^{-4}$ for 1000 epochs² on waterbirds, and a base learning rate of $10^{-2}$ for 50 epochs on celebA. We also employ data augmentation in the form of random cropping and flipping, which offers a consistent boost in rare subgroup performance." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "We present additional experimental results that explore a few key hypotheses:\n• how sensitive are the results to inexact subgroup specification?\n• how does the precise choice of target label and spurious attribute affect results?\n• how generalisable are the results to multi-class settings?\nB.1 INEXACT SUBGROUP SPECIFICATION\nThe post-hoc modification techniques in the body crucially rely on knowledge of the precise subgroup specification of each example. This is unrealistic in practice, where the subgroups may be latent or inexactly specified. Following Sagawa et al. (2020a)[Appendix B], we simulate a setting of inexact specification of the subgroups on celebA. Here, we use as spurious attributes A′ = WearingLipstick × Eyeglasses × Smiling × DoubleChin × OvalFace, comprising 32 distinct values. We then learn using the subgroups Y × A′ as input, and then measure performance with respect to the original subgroups Y × A. 
As noted in the body, the threshold adjustment technique (THR) is challenging to apply as-is in this setting, as it requires setting 32 distinct thresholds. As suggested in §4.3, we thus apply a simple heuristic of tying the thresholds to the subgroup frequencies, i.e., $t_{a(x)} = \log \mathrm{P}(y = +1 \mid a(x)) - \log \mathrm{P}(y = -1 \mid a(x))$. This has the effect of implicitly requiring higher model confidences to classify examples into a dominant subgroup. This technique is seen to have a worst-subgroup error of 16.67%, which is a modest increase compared to the 12.10% obtained when using the exact subgroups Y × A. This illustrates that one can still make useful predictions given imperfect subgroup information.\nB.2 CHOICE OF TARGET LABEL AND SPURIOUS ATTRIBUTE\nThe results in the body involve data with one or more rare subgroups. Is this rarity the primary factor that influences performance, or does the definition of the subgroups itself matter? To test this, we consider a variant of the celebA dataset in the body where the target label and sensitive attribute are swapped. In this variant of the dataset, we have Y = {male, female} and A = {blond, dark}. This exactly preserves the subgroup definitions and their rarity, but fundamentally changes the target label and feature used in training.\nInterestingly, this simple modification dramatically improves the performance of the baseline: on the rarest subgroup, the error is 13.67%, which is a significant reduction over the 56.94% for the original dataset. This indicates that the precise choice of subgroup definition can play a non-trivial role in final performance. Intuitively, performance can be hampered when the target variable is spuriously correlated with many features in the training set.\n²This large number of epochs, coupled with the small learning rate, endows some stability in the worst-subgroup performance. One may however obtain qualitatively similar results with far fewer epochs and a larger learning rate.\nNonetheless, even with this improved model, we find that threshold adjustment (THR) can further improve performance to 9.11%. The average subgroup errors of both techniques are similar, being 1.29% and 1.28% respectively.\nB.3 MULTI-CLASS SETTINGS\nThe results in the body involve problems with binary labels. To assess the effect of working with multiclass labels, we employ a modified version of MNIST based on Goel et al. (2020). Here, one mixes the standard MNIST dataset with samples from a corrupted version of MNIST comprising zig-zag images. The zig-zag images are made to be strongly correlated with the digit parity, so that most odd digits are zig-zagged. We then consider subgroups defined by Y × A, where A = {normal, zig-zag}. Note that we consider Y = {0, 1, . . . , 9} to illustrate performance in a multi-class setting, unlike Goel et al. (2020) who consider Y = {0, 1} to be the digit parity. We train a LeNet-5 for 100 epochs using a learning rate of 0.0001, momentum 0.9, weight decay 0.05, and batch size 100. Here, ERM achieves a worst-subgroup accuracy of 67.11%. Classifier retraining (CRT) based on subsampling all non-minority samples improves this to 74.52%. Similarly, threshold adjustment (THR) based on the heuristic of tying the thresholds to the subgroup frequencies (as described in the previous section) achieves 78.95%. This illustrates the potential for post-hoc techniques to also be useful in scenarios other than binary classification.
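A minimal sketch of this frequency-tied heuristic in the multi-class case; the exact estimator is not specified above, so the smoothing and indexing details below are assumptions:

```python
import numpy as np

def frequency_tied_predict(logits, attrs, y_train, a_train, eps=1e-12):
    """Multi-class variant of the frequency-tied heuristic: shift each class
    logit by -log P(y | a) estimated on the training set, so higher confidence
    is needed to predict a class that is dominant for attribute a. For binary
    labels this reduces to the threshold t_a = log P(+1|a) - log P(-1|a)."""
    classes = np.unique(y_train)
    attributes = np.unique(a_train)
    log_prior = np.zeros((len(attributes), len(classes)))
    for i, a in enumerate(attributes):
        mask = a_train == a
        for j, y in enumerate(classes):
            log_prior[i, j] = np.log(np.mean(y_train[mask] == y) + eps)
    attr_idx = np.searchsorted(attributes, attrs)   # row of log-priors per example
    adjusted = logits - log_prior[attr_idx]         # (n, num_classes)
    return classes[np.argmax(adjusted, axis=1)]
```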
C VISUALISATION OF EMBEDDINGS UNDER GROUP-BASED DRO\nFigure 6 shows a tSNE visualisation of the embeddings learned by a model trained to minimise the group-based DRO objective of Sagawa et al. (2020a). Similar to the results of ERM, there is generally a notable separation of samples from the four subgroups." }, { "heading": "D ADDITIONAL EXPERIMENTAL ABLATIONS", "text": "We present additional experimental results, highlighting several key points:\n• the separation of scores between rare and dominant subgroups is consistent across all datasets considered in the paper; however, the score distributions are markedly different on train and test sets, owing to models being overly confident on training samples.\n• increasing model complexity systematically exacerbates the distribution shift in decision scores.\n• early stopping has limited effect on the score distributions; even after a single epoch of training, there may be a distinction between rare and dominant subgroups' scores.\n• increased $\ell_2$ regularisation strength has a favourable effect on the score distributions, encouraging samples from both rare and dominant subgroups to be correctly classified.\n• subsampling (per Sagawa et al. (2020a)) has a positive effect on the score distributions, making them almost perfectly align across subgroups.\n• increasing the fraction of majority samples has a deleterious effect on overall performance; however, even at extreme levels of imbalance, the score distribution for rare samples may be shifted to correct for bias.\nD.1 HISTOGRAM OF TRAIN AND TEST SCORES\nFigure 7 plots histograms of test scores for all datasets considered in this paper. We consistently find that there is a separation between the scores for rare and dominant subgroups.\nWe see similar behaviour on training scores in Figure 8. However, note the vastly different scale, owing to the model being more confident in its predictions for these samples. In general, while there are differences in the distributions for the rare and dominant subgroup scores, nearly all such scores are on the correct side of the decision boundary. This is expected, since overparameterised models perfectly fit the training data, and thus correctly classify all samples. The ability of these models to nonetheless produce meaningful results on test samples is owing to their inductive bias.\nD.2 IMPACT OF MODEL COMPLEXITY ON SCORES\nFigure 9 shows model scores on test samples on synth as the number of projection features $m$ is varied. We see that as the model complexity increases, there is a steady increase in the separation of decision scores between the rare and dominant subgroups for a label. This is in keeping with overparameterisation exacerbating worst-subgroup error: as the decision scores have more pronounced separation, using a default classification threshold will lead to significantly worse performance.\nD.3 IMPACT OF EARLY STOPPING ON SCORES\nFigures 10 and 11 show the evolution of model scores on test samples on the celebA and waterbirds datasets. Here, the distinction between the scores amongst subgroups of the positive class is visible even after early stopping. 
With increased training epochs, there is a systematic shift of the negative scores, as the network becomes increasingly confident on them.\nD.4 IMPACT OF REGULARISATION ON SCORES\nFigures 12 and 13 show how model scores on test samples vary as we modify the strength of regularisation. Increasing the strength is seen to favourably impact the scores on the negative class, for both the dominant and rare subgroups. This provides another perspective on why regularisation can be somewhat effective at improving worst-subgroup performance.\nD.5 IMPACT OF SUBSAMPLING ON SCORES\nFigure 14 shows histograms of model scores on test samples on the synthetic dataset, with and without subsampling per Sagawa et al. (2020a). Subsampling is seen to make the scores equitable across the subgroups, which provides another perspective on why this technique can effectively mitigate a bias against rare subgroups.\nD.6 IMPACT OF FRACTION OF RARE SUBGROUPS ON SCORES\nThe synth dataset involves a parameter $p_{\mathrm{dom}}$ in its construction, which controls the relative number of samples belonging to the dominant class. By default, following Sagawa et al. (2020b), we use $p_{\mathrm{dom}} = 0.90$. Figure 15 shows how the trade-off is affected by changing $p_{\mathrm{dom}}$. As $p_{\mathrm{dom}}$ increases, as expected, it is more challenging to minimise the worst-subgroup error at large $m$. Figure 16 further shows how the test scores for each subgroup are affected by the choice of $p_{\mathrm{dom}}$. As $p_{\mathrm{dom}}$ increases, the rare subgroup scores are seen to significantly diverge from the dominant one." } ]
2021
null
SP:a0c493b218741a8b49a12458bf78c88dc3aa596a
[ "This paper proposes some techniques to improve the accuracy of binary networks without adding much computational overhead. To improve model capacity, the authors propose a mixture-of-experts convolution with a winner-takes-all gating mechanism. To deal with the limited representation power of binary activations, the paper proposes utilizing group convolutions. The performance is further improved by careful selection of hyperparameters and improved training techniques." ]
Network binarization is a promising hardware-aware direction for creating efficient deep models. Despite its memory and computational advantages, reducing the accuracy gap between binary models and their real-valued counterparts remains an unsolved challenging research problem. To this end, we make the following 3 contributions: (a) To increase model capacity, we propose Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features. (b) To increase representation capacity, we propose to address the inherent information bottleneck in binary networks by introducing an efficient width expansion mechanism which keeps the binary operations within the same budget. (c) To improve network design, we propose a principled binary network search mechanism that unveils a set of network topologies of favorable properties. Overall, our method improves upon prior work, with no increase in computational cost, by ∼ 6%, reaching a groundbreaking ∼ 71% on ImageNet classification. Code will be made available here.
[ { "affiliations": [], "name": "Adrian Bulat" }, { "affiliations": [], "name": "Brais Martinez" }, { "affiliations": [], "name": "Georgios Tzimiropoulos" } ]
[ { "authors": [ "Milad Alizadeh", "Javier Fernández-Marqués", "Nicholas D Lane", "Yarin Gal" ], "title": "An empirical study of binary neural networks", "venue": "optimisation. In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster models", "venue": "arXiv preprint arXiv:1511.06297,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "XNOR-Net++: Improved binary neural networks", "venue": "In British Machine Vision Conference,", "year": 2019 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos", "Jean Kossaifi", "Maja Pantic" ], "title": "Improved training of binary networks for human pose estimation and image recognition", "venue": null, "year": 1904 }, { "authors": [ "Adrian Bulat", "Brais Martinez", "Georgios Tzimiropoulos" ], "title": "BATS: Binary ArchitecTure search", "venue": "European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Zhaowei Cai", "Xiaodong He", "Jian Sun", "Nuno Vasconcelos" ], "title": "Deep learning with low precision by half-wave gaussian quantization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Jianlong Chang", "Xinbang Zhang", "Yiwen Guo", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Differentiable architecture search with ensemble Gumbel-Softmax", "venue": null, "year": 1905 }, { "authors": [ "Zhourong Chen", "Yang Li", "Samy Bengio", "Si Si" ], "title": "You look twice: GaterNet for dynamic filter selection in CNNs", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "BinaryConnect: Training deep neural networks with binary weights during propagations", "venue": "In Advances on Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Matthieu Courbariaux", "Itay Hubara", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1", "venue": "arXiv preprint arXiv:1602.02830,", "year": 2016 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Ruizhou Ding", "Ting-Wu Chin", "Zeye Liu", "Diana Marculescu" ], "title": "Regularizing activation distribution for training binarized deep networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Julian Faraone", "Nicholas Fraser", "Michaela Blott", "Philip HW Leong" ], "title": "SYQ: Learning symmetric quantization for efficient deep neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", 
"year": 2018 }, { "authors": [ "Ruihao Gong", "Xianglong Liu", "Shenghu Jiang", "Tianxiang Li", "Peng Hu", "Jiazhen Lin", "Fengwei Yu", "Junjie Yan" ], "title": "Differentiable soft quantization: Bridging full-precision and low-bit neural networks", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Sam Gross", "Marc’Aurelio Ranzato", "Arthur Szlam" ], "title": "Hard mixtures of experts for large scale weakly supervised vision", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Emil Julius Gumbel" ], "title": "Statistical theory of extreme values and some practical applications: a series of lectures, volume 33", "venue": "US Government Printing Office,", "year": 1948 }, { "authors": [ "Pengsheng Guo", "Chen-Yu Lee", "Daniel Ulbricht" ], "title": "Learning to branch for multi-task learning", "venue": "arXiv preprint arXiv:2006.01895,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", "venue": "In IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Koen Helwegen", "James Widdicombe", "Lukas Geiger", "Zechun Liu", "Kwang-Ting Cheng", "Roeland Nusselder" ], "title": "Latent weights do not exist: Rethinking binarized neural network optimization", "venue": "Advances on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Paras Jain", "Ajay Jain", "Aniruddha Nrusimha", "Amir Gholami", "Pieter Abbeel", "Kurt Keutzer", "Ion Stoica", "Joseph E Gonzalez" ], "title": "Checkmate: Breaking the memory wall with optimal tensor rematerialization", "venue": null, "year": 1910 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with Gumbel-Softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Sangil Jung", "Changyong Son", "Seohyung Lee", "Jinwoo Son", "Jae-Joon Han", "Youngjun Kwak", "Sung Ju Hwang", "Changkyu Choi" ], "title": "Learning to quantize deep networks by optimizing quantization intervals with task loss", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dahyun Kim", "Kunal Pratap Singh", "Jonghyun Choi" ], "title": "Learning architectures for binary networks", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Xiaofan Lin", "Cong Zhao", "Wei Pan" ], "title": "Towards accurate binary convolutional neural network", "venue": "In Advances on Neural Information Processing Systems,", "year": 2017 }, { 
"authors": [ "Chunlei Liu", "Wenrui Ding", "Xin Xia", "Baochang Zhang", "Jiaxin Gu", "Jianzhuang Liu", "Rongrong Ji", "David Doermann" ], "title": "Circulant binary convolutional networks: Enhancing the performance of 1-bit DCNNs with circulant back propagation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Zechun Liu", "Baoyuan Wu", "Wenhan Luo", "Xin Yang", "Wei Liu", "Kwang-Ting Cheng" ], "title": "Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Brais Martinez", "Jing Yang", "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "Training binary neural networks with real-to-binary convolutions", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "PyTorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Hai Phan", "Yihui He", "Marios Savvides", "Zhiqiang Shen" ], "title": "Mobinet: A mobile binary network for image classification", "venue": "In IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Haotong Qin", "Ruihao Gong", "Xianglong Liu", "Mingzhu Shen", "Ziran Wei", "Fengwei Yu", "Jingkuan Song" ], "title": "Forward and backward information retention for accurate binary neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "XNOR-Net: ImageNet classification using binary convolutional neural networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Mingzhu Shen", "Kai Han", "Chunjing Xu", "Yunhe Wang" ], "title": "Searching for accurate binary neural architectures", "venue": "In IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "EfficientNet: Rethinking model scaling for convolutional neural networks", "venue": "International Conference on 
Machine Learning,", "year": 2019 }, { "authors": [ "Ravi Teja Mullapudi", "William R Mark", "Noam Shazeer", "Kayvon Fatahalian" ], "title": "HydraNets: Specialized dynamic architectures for efficient inference", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "SkipNet: Learning dynamic routing in convolutional networks", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Ziwei Wang", "Jiwen Lu", "Chenxin Tao", "Jie Zhou", "Qi Tian" ], "title": "Learning channel-wise interactions for binary convolutional neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zuxuan Wu", "Tushar Nagarajan", "Abhishek Kumar", "Steven Rennie", "Larry S Davis", "Kristen Grauman", "Rogerio Feris" ], "title": "Blockdrop: Dynamic inference paths in residual networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhe Xu", "Ray CC Cheung" ], "title": "Accurate and compact convolutional neural networks with trained binarization", "venue": "British Machine Vision Conference,", "year": 2019 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "CondConv: Conditionally parameterized convolutions for efficient inference", "venue": "In Advances on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": null, "year": 2016 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Shilin Zhu", "Xin Dong", "Hao Su" ], "title": "Binary ensemble neural network: More bits per network or more networks per bit", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bohan Zhuang", "Chunhua Shen", "Mingkui Tan", "Lingqiao Liu", "Ian Reid" ], "title": "Structured binary neural networks for accurate image classification and semantic segmentation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Martinez" ], "title": "2020) argues that, for the binarization stage, a weaker augmentation, compared to real-valued networks, should be used on large datasets such as ImageNet", "venue": null, "year": 2020 }, { "authors": [ "Zhang" ], "title": "Cin to Cin/n (here n = 4) and then back to Cout", "venue": "in He et al", "year": 2016 }, { "authors": [ "Martinez" ], "title": "For example, using mixup on top of random scaling and 
cropping improves the results by 0.4", "venue": null, "year": 2020 }, { "authors": [ "Liu" ], "title": "double-skip connection mechanism adds a skip (i.e. an identity) connection around each binary convolutional layer. This is in contrast with a typical ResNet block (He et al., 2016) where the skip connection is applied at a block level. The main idea behind it is to preserve a real-valued signal alongside the binary one, improving overall the network’s capacity", "venue": null, "year": 2016 }, { "authors": [ "Bulat" ], "title": "2019) proposes to use a PReLU (He et al., 2015) activation instead. Thanks to its negative slope, it can better preserve the full spectrum of values produced by a binary convolution. B.4 2-STAGE BNN TRAINING Binary neural networks", "venue": null, "year": 2015 }, { "authors": [ "Martinez" ], "title": "induced by the quantization process (Rastegari et al., 2016", "venue": null, "year": 2020 }, { "authors": [ "Martinez" ], "title": "modulating the output of the binary convolution. We note that this process is used only on top of our best models, and is marked in the tables using an ”*”. Finally, we note that the gap between the real-valued and binary models is∼ 3.5−4% (depending on the configuration)", "venue": "In comparison,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "A promising, hardware-aware, direction for designing efficient deep learning models case is that of network binarization, in which filter and activation values are restricted to two states only: ±1 (Rastegari et al., 2016; Courbariaux et al., 2016). This comes with two important advantages: (a) it compresses the weights by a factor of 32× via bit-packing, and (b) it replaces the computationally expensive multiply-add with bit-wise xnor and popcount operations, offering, in practice, a speed-up of ∼ 58× on a CPU (Rastegari et al., 2016). Despite this, how to reduce the accuracy gap between a binary model and its real-valued counterpart remains an open problem and it is currently the major impediment for their wide-scale adoption.\nIn this work, we propose to approach this challenging problem from 3 key perspectives: 1. Model capacity: To increase model capacity, we firstly introduce the first application of Conditional Computing (Bengio et al., 2013; 2015; Yang et al., 2019) to the case of a binary networks, which we call Expert Binary Convolution. For each convolutional layer, rather than learning a weight tensor that is expected to generalize well across the entire input space, we learn a set of N experts each of which is tuned to specialize to portions of it. During inference, a very light-weight gating function dynamically selects a single expert for each input sample and uses it to process the input features. Learning to select a single, tuned to the input data, expert is a key property of our method which renders it suitable for the case of binary networks, and contrasts our approach to previous works in conditional computing (Yang et al., 2019). 2. Representation capacity: There is an inherent information bottleneck in binary networks as only 2 states are used to characterize each feature, which hinders the learning of highly accurate models. To this end, for the first time, we highlight the question of depth vs width in binary networks and propose a surprisingly unexplored efficient mechanism for increasing the effective width of the network by preserving the original computational budget. We show that our approach leads\nto noticeable gains in accuracy without increasing computation. 3. Network design: Finally, and inspired by similar work in real-valued networks (Tan & Le, 2019), we propose a principled approach to search for optimal directions for scaling-up binary networks. Main results: Without increasing the computational budget of previous works, our method improves upon the state-of-the-art (Martinez et al., 2020) by∼ 6%, reaching a groundbreaking∼ 71% on ImageNet classification." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 NETWORK BINARIZATION", "text": "Since the seminal works of Courbariaux et al. (2015; 2016) which showed that training fully binary models (both weights and activations) is possible, and Rastegari et al. (2016) which reported the very first binary model of high accuracy, there has been a great research effort to develop binary models that are competitive in terms of accuracy when compared to their real-valued counterparts, see for example (Lin et al., 2017; Liu et al., 2018; Alizadeh et al., 2018; Bulat et al., 2019; Bulat & Tzimiropoulos, 2019; Ding et al., 2019; Wang et al., 2019; Zhuang et al., 2019; Zhu et al., 2019; Kim et al., 2020; Bulat et al., 2020; Martinez et al., 2020). 
Notably, many of these improvements, including real-valued down-sampling layers (Liu et al., 2018), double skip connections (Liu et al., 2018), learning the scale factors (Bulat & Tzimiropoulos, 2019), PReLUs (Bulat et al., 2019) and two-stage optimization (Bulat et al., 2019), have been put together to build a strong baseline in Martinez et al. (2020) which, further boosted by a sophisticated distillation and data-driven channel rescaling mechanism, yielded an accuracy of ∼65% on ImageNet. This method, along with the recent binary NAS of Bulat et al. (2020) reporting an accuracy of ∼66%, represents, to our knowledge, the state-of-the-art in binary networks. Our method further improves upon these works, achieving an accuracy of ∼71% on ImageNet, crucially without increasing the computational complexity. To achieve this, we propose, to our knowledge for the first time, to explore ideas from Conditional Computing (Bengio et al., 2013; 2015) and learn data-specific binary expert weights which are dynamically selected during inference, conditioned on the input data. Secondly, we are the first to identify width as an important factor for increasing the representation capacity of binary networks, and introduce a surprisingly simple yet effective mechanism to enhance it without increasing complexity. Finally, although binary architecture design via NAS (Liu et al., 2018; Real et al., 2019) has been recently explored in (Kim et al., 2020; Bulat et al., 2020), we propose to approach it from a different perspective that is more related to Tan & Le (2019), which was developed for real-valued networks." }, { "heading": "2.2 CONDITIONAL COMPUTATION", "text": "Conditional computation is a very general data processing framework which refers to using different models, or different parts of a model, conditioned on the input data. Wang et al. (2018) and Wu et al. (2018) propose to completely bypass certain parts of the network during inference using skip connections, by training a policy network via reinforcement learning. Gross et al. (2017) proposes to train large models by using a mixture of experts trained independently on different partitions of the data. While speeding up training, this approach is not end-to-end trainable nor tuned towards improving the model accuracy. Shazeer et al. (2017) trains thousands of experts that are combined using a noisy top-k expert selection, while Teja Mullapudi et al. (2018) introduces HydraNets, in which a routing function selects and combines a subset of different operations. The latter is more closely related to online network search. Chen et al. (2019) uses a separate network to dynamically select a variable set of filters, while Dai et al. (2017) learns a dynamically computed offset. More closely related to the proposed EBConv is Conditional Convolution, where Yang et al. (2019) propose to learn a mixture of experts, i.e. a set of filters that are linearly combined using a routing function. In contrast, our approach learns to select a single expert at a time. This is critical for binary networks for two reasons: (1) The linear combination of a binary set of weights is non-binary and, hence, a second binarization is required, giving rise to training instability and increased memory consumption. In Section 5, we compare with such a model and show that our approach works significantly better.
(2) The additional computation to multiply and sum the weights, while negligible for real-valued networks, can lead to a noticeable computational increase for binary ones. Finally, we note that our single expert selection mechanism is akin to the Gumbel-max trick (Gumbel, 1948) and the Gumbel-Softmax estimator (Jang et al., 2016; Maddison et al., 2016), previously used in various forms for NAS (Chang et al., 2019), multi-task learning (Guo et al., 2020) and variational auto-encoders (Jang et al., 2016). To our knowledge, the proposed EBConv is the very first adaptation of conditional computing to binary neural networks." }, { "heading": "3 BACKGROUND ON BINARY NETWORKS", "text": "Following Rastegari et al. (2016); Bulat & Tzimiropoulos (2019), a binary convolution is defined as: $\mathrm{BConv}(x, \theta) = (\mathrm{sign}(x) \circledast \mathrm{sign}(\theta)) \odot \alpha$, (1) where $x$ is the input, $\theta$ the weights, $\circledast$ denotes the binary convolutional operation, $\odot$ the Hadamard product, and $\alpha \in \mathbb{R}^{C}$ is learned via back-propagation, as in Bulat & Tzimiropoulos (2019). The binarization is performed in two stages (Bulat et al., 2019; Martinez et al., 2020). During Stage I, we train a network with binary activations and real-valued weights. Note that the accuracy of Stage I models is highly representative of that of the final fully binary model (see Table 4). During Stage II, we initialize from Stage I to train a network with both weights and activations binary. When reporting results, if no stage is specified, the model (weights and activations) is fully binary. We set as baseline the Strong Baseline model (denoted as SBaseline) from Martinez et al. (2020), on top of which we implemented the proposed method. We denote as Real-to-Bin their full model." },
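A minimal PyTorch sketch of Eq. (1), assuming the straight-through estimator (STE) for the sign function that the method section refers to; the class names, initialisation and padding convention here are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    # Binarize with sign in the forward pass; pass gradients straight through
    # (clipped to |x| <= 1) in the backward pass, as in prior binary-network work.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

class BConv(nn.Module):
    # Eq. (1): (sign(x) convolved with sign(theta)), scaled channel-wise by alpha.
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(c_out, c_in, k, k))
        self.alpha = nn.Parameter(torch.ones(c_out))  # learned via back-propagation

    def forward(self, x):
        y = F.conv2d(SignSTE.apply(x), SignSTE.apply(self.weight),
                     padding=self.weight.shape[-1] // 2)
        return y * self.alpha.view(1, -1, 1, 1)
```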
{ "heading": "4 METHOD", "text": "" }, { "heading": "4.1 EXPERT BINARY CONVOLUTION", "text": "Assume a binary convolutional layer with input $x \in \mathbb{R}^{C_{in} \times W \times H}$ and weight tensor $\theta \in \mathbb{R}^{C_{in} \times C_{out} \times k_H \times k_W}$. In contrast to a normal convolution that applies the same weights to all input features, we propose to learn a set of expert weights (or simply experts) $\{\theta_0, \theta_1, \ldots, \theta_{N-1}\}$, $\theta_i \in \mathbb{R}^{C_{in} \times C_{out} \times k_H \times k_W}$, alongside a selector gating function which, given input $x$, selects only a single expert to be applied to it. The proposed EBConv layer is depicted in Fig. 1a. To learn the experts, let us first stack them in a matrix $\Theta \in \mathbb{R}^{N \times C_{in}C_{out}k_Hk_W}$. We propose to learn the following function: $\mathrm{EBConv}(x, \theta) = \mathrm{BConv}\big(x, (\varphi(\psi(x))^{T}\Theta)_{r}\big)$, (2) where $\varphi(\cdot)$ is a gating function (returning an $N$-dimensional vector, as explained below) that implements the expert selection mechanism using as input $\psi(x)$, an aggregation function of the input tensor $x$, and $(\cdot)_{r}$ simply reshapes its argument to a tensor of appropriate dimensions. Gating function $\varphi$: A crucial component of the proposed approach is the gating function that implements the expert selection mechanism. An obvious solution would be to use a Winners-Take-All (WTA) function; however, this is not differentiable. A candidate that comes to mind to solve this problem is the softargmax with temperature $\tau$: as $\tau \to 0$, the entry corresponding to the max will tend to 1 while the rest tend to 0. However, as $\tau \to 0$, the derivative of the softargmax converges to the Dirac function $\delta$, which provides poor gradients and hence hinders the training process. This could be mitigated if a high $\tau$ were used; however, this would require hard thresholding at test time which, for the case of binary networks, and given that the models are trained using Eq. 2, leads to large errors. To mitigate the above, and distancing ourselves from the reinforcement learning techniques often deployed when discrete decisions need to be made, we propose, for the forward pass, to use a WTA function for defining $\varphi(\cdot)$, as follows: $\varphi(z)_i = 1$ if $i = \operatorname{argmax}(z)$, and $0$ otherwise. (3) Note that we define $\varphi$ as $\varphi : \mathbb{R}^{N} \to \mathbb{R}^{N}$, i.e. as a function that returns an $N$-dimensional vector which is used to multiply (element-wise) $\Theta$ in Eq. 2. This is crucial as, during training, we wish to back-propagate gradients to the non-selected experts. To this end, we propose, for the backward pass, to use the Softmax function for approximating the gradients of $\varphi(\cdot)$: $\frac{\partial \varphi}{\partial z} := \frac{\partial}{\partial z}\,\mathrm{Softmax}(z)$. (4) Overall, our proposal, WTA for the forward pass and Softmax for the backward pass, effectively addresses the mismatch between training and testing while, at the same time, allowing meaningful gradients to flow to all experts during training. In Section A.3.3 of the appendix, we also explore the impact of adding a temperature to the softmax, showing how its value affects the training process. Note that back-propagating gradients to the non-selected experts applies to the gating function only; the binary activations and weights continue to use the STE introduced in (Courbariaux et al., 2016; Rastegari et al., 2016). Aggregation function $\psi$: The purpose of this function is to give a summary of the input feature tensor which will be used to select the expert. To avoid overfitting and to keep the computational cost low, we opt for a simple and fast linear function: $\psi(x) = \big[\, \bar{x}^{[0]}\ \bar{x}^{[1]}\ \cdots\ \bar{x}^{[C-1]} \,\big]\,\omega$, (5) where $\bar{x}^{[i]} = \frac{1}{HW}\sum_{h,w} x^{[i]}_{h,w}$ is the spatial average of the $i$-th channel and $\omega \in \mathbb{R}^{C \times N}$ a learnable projection matrix. Note that no other non-linearity was used, as the WTA function is already non-linear. Data-specific experts: One property of EBConv implied by the proposed design is that the experts should specialize on portions of the data, since a single expert is chosen per convolutional layer for each data sample. Fig. 1b confirms this experimentally via a t-SNE embedding visualisation of the features before the classifier, along with the corresponding expert that was activated, for each sample of the ImageNet validation set. Optimization policy: As in Bulat et al. (2019), we adopt a two-stage training policy where firstly the input features are binarized while learning real-valued weights, and then both input and weights are binarized. Note that the aggregation function $\psi$ is kept real across all steps since its computational cost is insignificant. Furthermore, due to the discrete decision-making process early on, training can be unstable. Therefore, to stabilize training we firstly train one expert, and then use it to initialize the training of all $N$ experts. This ensures that, early on in the process, any decision made by the gating function is a good decision. Overall, our optimization policy can be summarized as follows: 1. Train one expert, parametrized by $\theta_0$, using real weights and binary activations. 2. Replicate $\theta_0$ to all $\theta_i$, $i \in \{1, \ldots, N-1\}$, to initialize the matrix $\Theta$. 3. Train the model initialized in step 2 using real weights and binary activations. 4. Train the model obtained from step 3 using binary weights and activations." },
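Putting Eqs. (2)-(5) together, the layer can be sketched in PyTorch as below (our own illustration: the weight binarization and scaling of Eq. (1) are omitted for brevity, and the per-sample single-expert convolution is emulated with a grouped convolution over the flattened batch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WTAGate(torch.autograd.Function):
    # Eqs. (3)/(4): winner-takes-all forward, softmax Jacobian backward.
    @staticmethod
    def forward(ctx, z):
        ctx.save_for_backward(z)
        one_hot = torch.zeros_like(z)
        one_hot.scatter_(1, z.argmax(dim=1, keepdim=True), 1.0)
        return one_hot

    @staticmethod
    def backward(ctx, grad_out):
        z, = ctx.saved_tensors
        s = F.softmax(z, dim=1)
        # Vector-Jacobian product of softmax, so all experts receive gradient.
        return s * (grad_out - (grad_out * s).sum(dim=1, keepdim=True))

class EBConv(nn.Module):
    # Eqs. (2) and (5); sign/STE weight binarization and scale factors omitted.
    def __init__(self, c_in, c_out, k, n_experts):
        super().__init__()
        self.experts = nn.Parameter(0.01 * torch.randn(n_experts, c_out, c_in, k, k))
        self.omega = nn.Linear(c_in, n_experts, bias=False)  # psi's projection
        self.pad = k // 2

    def forward(self, x):
        z = self.omega(x.mean(dim=(2, 3)))   # psi: spatial channel means -> R^N
        gate = WTAGate.apply(z)              # phi: one-hot expert selection
        w = (gate @ self.experts.flatten(1)).view(x.size(0), *self.experts.shape[1:])
        # One expert per sample: run each batch element with its own weights
        # via a grouped convolution over the flattened batch dimension.
        b = x.size(0)
        y = F.conv2d(x.reshape(1, -1, *x.shape[2:]), w.flatten(0, 1),
                     groups=b, padding=self.pad)
        return y.view(b, -1, *y.shape[2:])

out = EBConv(32, 64, 3, n_experts=4)(torch.randn(8, 32, 16, 16))  # (8, 64, 16, 16)
```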
{ "heading": "4.2 ENHANCING BINARY INFORMATION FLOW", "text": "While the previous section addressed the issue of model capacity, in this section we address the problem of the representation capacity of the binary activations. This issue arises due to the fact that only 2 states are used to characterize each feature, resulting in an information bottleneck which hinders the learning of highly accurate binary networks. To our knowledge, there is little prior work which explicitly tries to solve this problem (Liu et al., 2018). Our solution is surprisingly simple yet effective: the only parameters one can adjust in order to increase the representational power of binary features are the resolution and the width (i.e. the number of channels). The former is largely conditioned on the resolution of the data, and is as such problem dependent. Hence, we propose the latter, i.e. to increase the network width. For example, a width expansion of $k = 2$ increases the number of unique configurations of a $32 \times 7 \times 7$ binary feature tensor from $2^{32 \times 7 \times 7} = 2^{1568}$ to $2^{3136}$. However, increasing the network width directly causes a quadratic increase in complexity with respect to $k$. Hence, in order to keep the number of binary operations (BOPs) constant, we propose to use grouped convolutions with a group size $G$ proportional to the width expansion, i.e. $G = k^2$. Note that we do not simply propose using grouped convolutions within binary networks, as in (Phan et al., 2020; Bulat et al., 2020). We propose width expansion to address the inherent information bottleneck within binary networks, and use grouped convolutions as a mechanism for increasing the capacity while keeping the computational budget fixed. Moreover, we note that since we are using grouped convolutions, features across groups need to be somehow combined throughout the network. This can be achieved at no extra cost through the 1×1 convolutions used for downsampling at the end of each stage where a change of spatial resolution occurs. In Section 4.3, we further propose a more effective way to achieve this, based on binary 1×1 convolutions, which however adds some extra complexity. Moreover, in Section 4.3, we will further propose to search for the optimal group size depending on the layer location. As Table 2 clearly shows, models trained with a width multiplier higher than 1 offer consistent accuracy gains, notably without increasing complexity. Importantly, these gains also add up with the ones obtained by using the proposed EBConv. This is not surprising, as width expansion improves representation capacity while the experts increase model capacity." },
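The claim that a group size of $G = k^2$ keeps the binary-operation count constant under a width multiplier $k$ is easy to verify numerically (a toy check; the layer sizes are illustrative):

```python
# BOPs of a k x k binary conv: (C_in / G) * C_out * k*k * H*W multiply-accumulates.
def bops(c_in, c_out, k=3, groups=1, h=7, w=7):
    return (c_in // groups) * c_out * k * k * h * w

base = bops(256, 256)               # width multiplier 1, no groups
wide = bops(512, 512, groups=4)     # width multiplier k=2, groups G = k^2 = 4
assert base == wide                 # same binary-op budget, twice the channels
```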
{ "heading": "4.3 DESIGNING BINARY NETWORKS", "text": "In general, there is little work on network design for binary networks. Recently, a few binary NAS techniques have been proposed (Kim et al., 2020; Shen et al., 2019; Bulat et al., 2020). Despite reporting good performance, these methods have the limitations typical of NAS methods, for example, having to search for an optimal cell using a predefined network architecture, or having to hand-pick the search space. Herein, and inspired by Tan & Le (2019), we propose a mixed semi-automated approach that draws from the advantages of both automatic and manual network design techniques. Specifically, setting the standard ResNet-18 (He et al., 2016) network as a starting point, we focus on searching for optimal binary network structures, gradually exploring a set of different directions (width, depth, groups, layer arrangement). Effect of block arrangement: Starting from a ResNet-based topology, we denote a network with $N_i$, $i \in \{0, 1, 2, 3\}$, blocks at each resolution as $N_0N_1N_2N_3$, with each block having two convolutional layers. We first investigate whether re-arranging the blocks, mainly by using a network which is heavier at later stages, can have an impact on accuracy. Note that since the number of features is doubled between stages, this re-arrangement preserves the same complexity. Table 3 shows the results. As can be observed, the accuracy remains largely unchanged while the layers are re-distributed. Depth vs width: In Section 4.2, we proposed an efficient width expansion mechanism which is found to increase the accuracy of binary networks without increasing complexity. Herein, we evaluate the effect of increasing depth by adding more blocks. Fig. 2a shows the results of depth expansion. Each constellation represents a different architecture of which we vary only the number of blocks, i.e. the depth. As we may clearly see, the returns of increasing depth diminish as complexity rapidly increases, resulting in very heavy models. Note that previous work for the case of real-valued networks (Zagoruyko & Komodakis, 2016) has shown that wide models can perform as well as deep ones. Our results show that, for a fixed computation budget, the proposed wide binary models with grouped convolutions actually outperform the deep ones by a large margin. Effect of aggregation over groups: Our efficient width expansion mechanism of Section 4.2 uses a very weak way of aggregating the information across different groups. A better way is to explicitly use a 1×1 binary convolutional layer (with no groups) after each block. The effect of adding this layer is shown in Fig. 3. Clearly, aggregation across groups via 1×1 convolutions offers significant accuracy gains, while adding a reasonable amount of complexity. Effect of groups: In Section 4.2, we proposed grouped convolutions as a mechanism for keeping the computations under control as we increase the network width. Herein, we go one step further and explore the effect of different group sizes and their placement across the network. This, in turn, allows us to vary, with a high degree of granularity, the computational budget at various points in the network while preserving the width and, as such, the information flow. To describe the space of network structures explored, we use the following naming convention: we denote a network with $N_i$, $i \in \{0, 1, 2, 3\}$, blocks at each resolution, a corresponding width expansion $E$ (the same $E$ is used for all blocks) and group size $G_i$ for each convolution in these blocks as: $N_0N_1N_2N_3$-$E$-$G_0{:}G_1{:}G_2{:}G_3$ (a small parser for this notation is sketched below). As the results from Fig. 2b and Table 4 show, increasing the number of groups (especially for the last 2 stages) results in significantly more efficient models which maintain high accuracy (with only a small decrease) compared to much larger models having the same network structure but fewer groups. Our results suggest that group sizes of 16 or 32 for the last 2 stages provide the best trade-off. Network search strategy: In summary, the network search space used in this work consists of the following degrees of freedom: a) rearranging the blocks, b) defining the depth of the model, c) defining the width at each stage, and finally d) selecting the optimal number of groups for each stage. In order to search for the optimal configuration, we gradually search in each direction separately while keeping all the others fixed. Then, we identify a set of promising search directions which we then combine to train new candidates. We repeat this step one more time and then, from the final population of candidates, we select the best models, shown in Table 4. This procedure results in models that outperform recently proposed binary NAS methods (Bulat et al., 2020; Kim et al., 2020) by more than 5% while also being more computationally efficient." },
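As a concrete illustration of the naming convention, a small helper (ours, not from the paper; the example configuration string is made up and assumes single-digit block counts) can turn such a specification into per-stage settings:

```python
def parse_config(name):
    # Parse 'N0N1N2N3-E-G0:G1:G2:G3' into blocks per stage, width multiplier,
    # and groups per stage (single-digit block counts assumed).
    blocks, expansion, groups = name.split("-")
    return {
        "blocks": [int(n) for n in blocks],
        "width_multiplier": int(expansion),
        "groups": [int(g) for g in groups.split(":")],
    }

print(parse_config("2262-2-1:1:16:32"))
# {'blocks': [2, 2, 6, 2], 'width_multiplier': 2, 'groups': [1, 1, 16, 32]}
```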
{ "heading": "5 COMPARISON WITH STATE-OF-THE-ART", "text": "We compared our method against the current state-of-the-art in binary networks on the ImageNet dataset (Deng et al., 2009). Additional comparisons, including on CIFAR-100 (Krizhevsky et al., 2009), can be found in the supplementary material, Section A.2. Training: The training procedure largely follows that of Martinez et al. (2020). In particular, we trained our networks using the Adam optimizer (Kingma & Ba, 2014) for 75 epochs, using a learning rate of $10^{-3}$ that is decreased by a factor of 10 at epochs 40, 55 and 65. During Stage I, we set the weight decay to $10^{-5}$, and to 0 during Stage II. Furthermore, following Martinez et al. (2020), during the first 10 epochs we apply a learning rate warm-up (Goyal et al., 2017). The images are augmented following the common strategy used in prior work (He et al., 2016), by randomly scaling and cropping the images to a resolution of 224×224px. In addition to this, to avoid overfitting of the given expert filters, we used mixup (Zhang et al., 2017) with $\alpha = 0.2$. For testing, we followed the standard procedure of scaling the images to a resolution of 256px first and then center-cropping them. All models were trained on 4 V100 GPUs and implemented using PyTorch (Paszke et al., 2019).
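The learning-rate schedule just described can be sketched as follows (the linear warm-up shape is our assumption; the paper only states that a warm-up is applied during the first 10 epochs):

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the binary network
optim = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)  # Stage I
# (the weight decay would be set to 0 for Stage II)

def lr_at(epoch, base=1e-3, warmup=10, milestones=(40, 55, 65)):
    if epoch < warmup:                    # warm-up shape is our assumption
        return base * (epoch + 1) / warmup
    return base * 0.1 ** sum(epoch >= m for m in milestones)

for epoch in range(75):
    for group in optim.param_groups:
        group["lr"] = lr_at(epoch)
    # ... one epoch of training with mixup (alpha = 0.2) ...
```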
Comparison against state-of-the-art: Table 5 shows our results on ImageNet. When compared against methods with similar capacity (Rastegari et al., 2016; Courbariaux et al., 2015; 2016; Bulat & Tzimiropoulos, 2019; Martinez et al., 2020) (bottom section of the table), our method improves on top of the currently best performing method of Martinez et al. (2020) by almost 6% in terms of top-1 accuracy. Furthermore, our approach surpasses the accuracy of significantly larger and slower networks (upper section) by a wide margin. Finally, we compared our method against two very recent works that use NAS to search for binary networks. As the results from Table 5 (middle section) show, our method outperforms them, again by a large margin, while being significantly more efficient. In terms of computational requirements, our method maintains the same overall budget, having an equal or slightly lower number of FLOPs and BOPs (see Table 5). Although our method does increase the model size, by 2× for a model that uses 4 experts, the run-time memory largely remains the same. For additional details see Section A.5 in the supplementary material. Comparison against CondConv: As mentioned in Section 4.1, a direct application of CondConv (Yang et al., 2019) to the case of binary networks is problematic due to the so-called “double binarization” problem, i.e. binarization of the weights and then of their linear combination is required. Herein, we verify this experimentally: when training a fully binarized network using CondConv, we noticed a high degree of instability, especially during the initial phases of training. For example, at the end of epoch 1, the accuracy of the binarized CondConv model is 1%, versus 20% for the one using EBConv. The final accuracy of a binarized CondConv on ImageNet was 61.2%, versus 63.8% for EBConv. Additionally, as mentioned earlier, our proposed EBConv method uses fewer FLOPs (no multiplications are required to combine the experts) and noticeably less memory at run-time (see Section A.5 in the appendix)." }, { "heading": "6 CONCLUSION", "text": "We proposed a three-fold approach for improving the accuracy of binary networks. Firstly, we improved model capacity at negligible cost. To this end, we proposed EBConv, the very first binary conditional computing layer, which consists of data-specific expert binary filters and a very lightweight mechanism for selecting a single expert at a time. Secondly, we increased representation capacity by addressing the inherent information bottleneck in binary networks. For this purpose, we introduced an efficient width expansion mechanism which keeps the overall number of binary operations within the same budget. Thirdly, we improved network design by proposing a principled binary network growth mechanism that unveils a set of network topologies with favorable properties. Overall, our method improves upon prior work by ∼6%, with no increase in computational cost, reaching a groundbreaking ∼71% on ImageNet classification." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILED NETWORK DEFINITIONS FOR FIG. 2", "text": "" }, { "heading": "A.2 ADDITIONAL COMPARISONS", "text": "Herein we present an extended comparison with both binary and low-bit quantization methods on ImageNet. As the results from Table 7 show, our method significantly surpasses both the binary and the more computationally expensive low-bit quantization networks. Similar results can be observed on the CIFAR-100 (Krizhevsky et al., 2009) dataset, where our approach sets a new state-of-the-art result." }, { "heading": "A.3 ABLATION STUDIES", "text": "" }, { "heading": "A.3.1 REAL-VALUED DOWNSAMPLING DECOMPOSITION", "text": "The efficient width expansion mechanism of Section 4.2 keeps the amount of BOPs constant for binary convolutions. However, width expansion also affects the real-valued downsampling (linear) layers. To keep the number of FLOPs constant as the width expands, for such a layer too, we propose to decompose it into two smaller ones so that the connection between them is reduced by a factor $r = k^2$, i.e. instead of using $[\mathrm{Conv}(C_{in}, C_{out})]$, we propose to use $[\mathrm{Conv}(C_{in}, \frac{C_{in}}{r}) \to \mathrm{Conv}(\frac{C_{in}}{r}, C_{out})]$. Herein, we explore a few variants by adding non-linearities between them (a sketch of one such variant is given after this section). Our results, reported in Table 9, show that the non-linear versions are more expressive and bridge the gap caused by the decrease in the layer's size. The proposed adaptation and the original one are depicted in Fig. 4. A.3.2 DATA AUGMENTATION Network binarization is considered to be an extreme case of regularization (Courbariaux et al., 2015). However, recent work suggests that data augmentation remains an important, necessary aspect of successfully training accurate binary networks (Martinez et al., 2020). Due to their lower representational power, Martinez et al. (2020) argue that, for the binarization stage, a weaker augmentation, compared to real-valued networks, should be used on large datasets such as ImageNet. As opposed to this, we found that more aggressive augmentation, similar to the one used for real-valued networks in He et al. (2016), or mixup (Zhang et al., 2017), leads to consistently better results. For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin (Martinez et al., 2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations)." },
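Referring back to Section A.3.1, the decomposed downsampling layer with a non-linearity in between might look as follows (a sketch: the stride placement and the BatchNorm/PReLU choice are our assumptions; Table 9 compares several such variants):

```python
import torch.nn as nn

def downsample(c_in, c_out, r):
    # Decomposed real-valued downsampling: Conv(C_in, C_in/r) -> Conv(C_in/r, C_out),
    # with a non-linearity in between (one of the variants explored in Table 9).
    mid = c_in // r
    return nn.Sequential(
        nn.Conv2d(c_in, mid, kernel_size=1, stride=2, bias=False),
        nn.BatchNorm2d(mid),
        nn.PReLU(mid),
        nn.Conv2d(mid, c_out, kernel_size=1, bias=False),
        nn.BatchNorm2d(c_out),
    )
```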
{ "heading": "A.3.3 EFFECT OF TEMPERATURE", "text": "One important component that influences the training efficiency of the gating mechanism is the softmax temperature $\tau$. As mentioned earlier, lower temperatures will produce spikier gradients, while higher ones will induce the opposite. We explore the effect of various temperatures in Table 10. It can be seen that our results are stable over a wide range $\tau \in [0.02, 5]$. Moreover, to validate the importance of using Eq. 4 for computing the gradients for back-propagation, we ran an experiment in which we replaced it with the gradient of a sigmoid. Unsurprisingly, Stage I accuracy drops from 65.5% to 62.7%. This further highlights that the proposed form of the gating function is a key enabler for training higher-performing models using EBConv." }, { "heading": "A.4 NETWORK ARCHITECTURE NAMING CONVENTION", "text": "This section clarifies the naming convention used in our paper: we define a network using the notation $N_0N_1N_2N_3$-$E$-$G_0{:}G_1{:}G_2{:}G_3$. Here $E$ is the expansion rate, defined as a multiplier with respect to a vanilla ResNet. For example, a network whose first block has 128 output channels has an expansion rate of 2. $N_i$ and $G_i$, $i \in \{0, 1, 2, 3\}$, represent the number of convolutional blocks and, respectively, the number of groups used by all convolutions at each stage. Note that a ResNet has 4 stages. We graphically show the correspondence between this notation and the network structure in Fig. 5." }, { "heading": "A.5 MEMORY USAGE ANALYSIS", "text": "Model storage size: Current network binarization methods keep the first and the last layer real-valued (Rastegari et al., 2016; Liu et al., 2018; Bulat & Tzimiropoulos, 2019). As such, for a ResNet-18 binary model trained on ImageNet, predicting 1000 classes, more than 2MB of the total space is taken by these parameters. As a result, our 4-expert model takes only 2× more space on a device. This is still noticeably less than binary models that attempt to increase their accuracy by increasing their model size (Lin et al., 2017) or by using an ensemble of binary networks (Zhu et al., 2019). Full results are shown in Table 11. Run-time memory: In a typical deep network, the memory consumed by the activations far outweighs that of the parameters (Jain et al., 2019). As such, even a ≈4× increase in the number of binary parameters (for the case of 4 experts) results in a small difference due to the above effect. Furthermore, since only a single expert is active for a given input, this effect is further reduced. This is confirmed by our measurements reported below. As a simple test bed for the latter, we leverage the built-in memory profiler from PyTorch: we measure and report the memory consumption for a convolutional layer with 512 input and output channels and a kernel size of 3×3. We set the input tensor to be of size 1×512×16×16. As can be seen, since a single expert is active for a given image, our EBConv layer has a minimal impact on memory usage. Below we show the profiler output, with the operations sorted in descending order based on memory.
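A table of this kind can be produced along the following lines with the torch.profiler API (a sketch; the paper does not state the exact profiler calls it used):

```python
import torch
from torch.profiler import profile, ProfilerActivity

conv = torch.nn.Conv2d(512, 512, kernel_size=3, padding=1)
x = torch.randn(1, 512, 16, 16)

with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    conv(x)

# Sort operations by CPU memory and keep the top 10 contributors.
print(prof.key_averages().table(sort_by="cpu_memory_usage", row_limit=10))
```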
For brevity, we show only the top 10 contributors.\nMemory profiling output for a normal convolutional layer, in descending order, based on memory:\n---------------------- --------------- --------------- Name CPU Mem Self CPU Mem ---------------------- --------------- --------------- conv2d 512.00 Kb 0 b convolution 512.00 Kb 0 b _convolution 512.00 Kb 0 b\nmkldnn_convolution 512.00 Kb 0 b empty 512.00 Kb 512.00 Kb size 0 b 0 b contiguous 0 b 0 b as_strided_ 0 b 0 b ---------------------- --------------- ---------------\nMemory profiling output for EBConv (ours), in descending order, based on memory:\n----------------------- --------------- --------------- Name CPU Mem Self CPU Mem ----------------------- --------------- --------------- empty 514.02 Kb 514.02 Kb conv2d 512.00 Kb 0 b convolution 512.00 Kb 0 b _convolution 512.00 Kb 0 b mkldnn_convolution 512.00 Kb 0 b adaptive_avg_pool2d 2.00 Kb 0 b mean 2.00 Kb 0 b sum_out 2.00 Kb 0 b addmm 16 b 16 b softmax 16 b 0 b _softmax 16 b 0 b ---------------------- --------------- ---------------\nMemory profiling output for CondConv (Yang et al., 2019), in descending order, based on memory:\n----------------------- --------------- --------------- Name CPU Mem Self CPU Mem ----------------------- --------------- --------------- matmul 9.00 Mb 0 b mm 9.00 Mb 0 b resize_ 9.00 Mb 9.00 Mb empty 514.02 Kb 514.02 Kb conv2d 512.00 Kb 0 b convolution 512.00 Kb 0 b _convolution 512.00 Kb 0 b mkldnn_convolution 512.00 Kb 0 b adaptive_avg_pool2d 2.00 Kb 0 b mean 2.00 Kb 0 b\nFurthermore, as the profiler outputs show, for the case of CondConv (Yang et al., 2019), the additional multiplication operations required to combine the experts together significantly increase the run-time memory consumption, dominating it, in fact, for low batch sizes – a typical scenario for models deployed on mobile devices. This further showcases the efficiency of the proposed method. We note that the numbers of BOPs and FLOPs of our binary model will remain constant as the batch size increases because the number of operations itself does not change (with the exception of the linear increase induced by the number of samples within the batch). Additionally, for batch sizes larger than 1, there will be a small cost incurred for the actual reading (fetching) of the weights from the memory. However, this cost is insignificant. Finally, we note that, in most cases, when binary networks are deployed on edge devices, a batch size of 1 is expected.\nA.6 IMPROVED TRAINING SCHEME WITH STRONGER TEACHER\nA key improvement proposed by Martinez et al. (2020) is the real-to-binary attention transfer and knowledge distillation mechanism. Therein, the authors suggest that using a stronger teacher does not improve the accuracy further, hence they use a real-valued ResNet-18 model as a teacher. Here, we speculate that the increase in representational capacity offered by the proposed model could benefit in fact from a stronger teacher. To validate this hypothesis, we train two real-valued teacher models of different capacity (controlled by depth): one scoring 72.5% Top-1 accuracy on ImageNet\nand a larger one scoring 76.0%. As the results from Table 12 show, our model can exploit the knowledge contained in a stronger teacher network, improving the overall performance by 1.2%. Throughout the paper, we mark the results obtained using the stronger teacher with ‡. 
We note that for training we largely preserve the gradual distillation approach described in (Martinez et al., 2020): In particular, at Step I, we train a full precision model with a structure that matches that of our binary network. At Step II, we use the previous model as a teacher and train a student with binary activations and real-valued weights. At the end of this step, we also perform our weight expansion strategy, propagating the trained weights across all experts following the optimization procedure described in Section 4.1. Finally, we use the model produced at the previous step as a teacher, training a fully binary network." }, { "heading": "B SUMMARY OF PRIOR WORK COMPONENTS USED", "text": "Herein we detail some of the methodological improvements proposed in prior works and also adopted for our strong baseline. We note, that most of these improvements are put together to create the strong baseline introduced in (Martinez et al., 2020) which is also the starting point of our work." }, { "heading": "B.1 PER-CHANNEL SCALING FACTORS", "text": "In order to minimize the reconstruction error between the full precision and binary convolution, in Rastegari et al. (2016), channel-wise real-valued scaling factors are used to modulate the output of the binary convolutions. In Rastegari et al. (2016), the authors proposed to calculate their values using an analytical solution that attempts to minimize the quantization error. The subsequent work of Bulat & Tzimiropoulos (2019) advocates for scaling factors learned via back-propagation by minimizing the task loss. In this work, we adopted the latter, learning one scaling factor per channel via back-propagation." }, { "heading": "B.2 DOUBLE-SKIP CONNECTIONS", "text": "Originally proposed by Liu et al. (2018), the double-skip connection mechanism adds a skip (i.e. an identity) connection around each binary convolutional layer. This is in contrast with a typical ResNet block (He et al., 2016) where the skip connection is applied at a block level. The main idea behind it is to preserve a real-valued signal alongside the binary one, improving overall the network’s capacity. We also note that a network with skip connections around all binary layers will also preserve a full precision data path that can improve both the gradients and the information flow." }, { "heading": "B.3 PRELU ACTIVATIONS", "text": "Rastegari et al. (2016) showed that, despite the non-linear nature of binary networks, ReLU nonlinearities added after the binary convolutions can further improve the model’s accuracy. However, a ReLU completely eliminates negative values which in Bulat et al. (2019) is found to cause training instabilities. To alleviate this, Bulat et al. (2019) proposes to use a PReLU (He et al., 2015) activation instead. Thanks to its negative slope, it can better preserve the full spectrum of values produced by a binary convolution.\nB.4 2-STAGE BNN TRAINING\nBinary neural networks are notably harder to optimize in comparison with their full precision counterparts (Rastegari et al., 2016; Courbariaux et al., 2015; 2016). Since most of the performance\ndegradation comes from binarizing the signal itself (i.e. activations), (Bulat et al., 2019) proposes a two-staged optimization strategy where the network is gradually binarized. During Stage I, a network with full precision weights and binary activations is trained. Then, in Stage II, a fully binary network is trained by initializing the model from the previous stage. 
As detailed in Section 5, the training scheduler in both stages is identical, with the exception of the weight decay, which for Stage II, is set to 0 Martinez et al. (2020)." }, { "heading": "B.5 REAL-TO-BINARY KNOWLEDGE DISTILLATION", "text": "A reasonable objective for training highly accurate binary networks is that the features learned by a binary network should closely match those of a full precision one up to an approximation error induced by the quantization process (Rastegari et al., 2016; Martinez et al., 2020). In order to explicitly enforce this, Martinez et al. (2020) proposes to add after each block an `2 loss between attention maps calculated from the binary and full precision activations. The full precision guiding signal typically comes from an identically structured pretrained real-valued model. To further enhance the efficacy of this process, Martinez et al. (2020) introduces a trainable data-driven scaling factor for modulating the output of the binary convolution. We note that this process is used only on top of our best models, and is marked in the tables using an ”*”.\nFinally, we note that the gap between the real-valued and binary models is∼ 3.5−4% (depending on the configuration). In comparison, the next best method, Martinez et al. (2020) has a gap of ∼ 5%. This shows that while the proposed structure is tuned for binary networks, it will also perform well for the case of full precision networks. This is perhaps not too surprising since a model easy to binarize should be also easy to train in full precision, the opposite however is not always true." }, { "heading": "C OVERALL BINARY NETWORKS STRUCTURE", "text": "Herein, we would like to add a few general notes about how a Binary Network is typically constructed. Following (Rastegari et al., 2016) and Courbariaux et al. (2016) most works binarize all convolutional layers except for the first and last ones (i.e. the classifier) alongside the batch normalization layers and the per-channel scaling factors (Rastegari et al., 2016; Lin et al., 2017; Liu et al., 2018; Liu et al., 2019; Zhu et al., 2019; Bulat & Tzimiropoulos, 2019; Qin et al., 2020; Wang et al., 2019; Martinez et al., 2020). Rastegari et al. (2016) notes that the first layer is not binarized because of the low number of channels (i.e. 3), the speed-up offered by binarization is not high. Furthermore, because the input to the network is typically real-valued, it is more natural to process it initially using real-valued operations. Similarly, the last layer sees smaller speedups in practice when binarized (Rastegari et al., 2016; Courbariaux et al., 2016) and often, depending on the task requires outputting continuous values instead of discrete ones. Finally, the batch normalization layers are kept real too since they significantly improve the training stability, and also implicitly adjust the quantization point.\nWe note that, as shown in Table 5, the vast majority of the operations are binary, with only a small proportion of them remaining real valued." } ]
2021
HIGH-CAPACITY EXPERT BINARY NETWORKS
SP:b8f49fdda704b0206febd3c09d1f475047919099
[ ".** The authors describe how to apply a log signature to temporal datasets. This operation reduces dimensionality along the time axis at the price of adding some dimensionality to the spatial dimension. Then they train a neural controlled differential equation (Neural CDE) on the transformed dataset and show that their model learns more quickly and achieves better test generalization. They report results on two real-world datasets (EigenWorms and the TSR vitals dataset)." ]
Neural Controlled Differential Equations (Neural CDEs) are the continuous-time analogue of an RNN, just as Neural ODEs are analogous to ResNets. However, just like RNNs, training Neural CDEs can be difficult for long time series. Here, we propose to apply a technique drawn from stochastic analysis, namely the log-ODE method. Instead of using the original input sequence, our procedure summarises the information over local time intervals via the log-signature map, and uses the resulting shorter stream of log-signatures as the new input. This represents a length/channel trade-off. In doing so we demonstrate efficacy on problems of length up to 17k observations and observe significant training speed-ups, improvements in model performance, and reduced memory requirements compared to the existing algorithm.
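The preprocessing described in this abstract can be sketched with the Signatory package (the window length and truncation depth below are illustrative choices of ours, not values from the paper):

```python
import torch
import signatory

x = torch.randn(32, 1000, 4)                 # (batch, length, channels)
# Summarise windows of 50 observations by depth-2 log-signatures,
# shortening the stream from 1000 steps to 20 (the length/channel trade-off).
windows = x.unfold(1, 50, 50).permute(0, 1, 3, 2)   # (batch, 20, 50, channels)
logsigs = torch.stack(
    [signatory.logsignature(windows[:, i].contiguous(), depth=2)
     for i in range(windows.size(1))],
    dim=1,
)
print(logsigs.shape)  # (32, 20, 10): the shorter, wider stream fed to the Neural CDE
```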
[]
[ { "authors": [ "Anthony Bagnall", "James Lines", "Aaron Bostrom", "James Large", "Eamonn Keogh" ], "title": "The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances", "venue": "Data Mining and Knowledge Discovery,", "year": 2017 }, { "authors": [ "Shaojie Bai", "J Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Patric Bonnier", "Patrick Kidger", "Imanol Perez Arribas", "Cristopher Salvi", "Terry Lyons" ], "title": "Deep Signature Transforms", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Youness Boutaib", "Lajos Gergely Gyurkó", "Terry Lyons", "Danyu Yang" ], "title": "Dimension-free Euler estimates of rough differential equations", "venue": "Revue Roumaine de Mathmatiques Pures et Appliques,", "year": 2014 }, { "authors": [ "Vı́ctor Campos", "Brendan Jou", "Xavier Giró-i Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks", "venue": "arXiv preprint arXiv:1708.06834,", "year": 2017 }, { "authors": [ "Shiyu Chang", "Yang Zhang", "Wei Han", "Mo Yu", "Xiaoxiao Guo", "Wei Tan", "Xiaodong Cui", "Michael Witbrock", "Mark A Hasegawa-Johnson", "Thomas S Huang" ], "title": "Dilated recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ricky T.Q. Chen" ], "title": "torchdiffeq, 2018. https://github.com/rtqichen/ torchdiffeq", "venue": null, "year": 2018 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "Gru-ode-bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Wim De Mulder", "Steven Bethard", "Marie-Francine Moens" ], "title": "A survey on the application of recurrent neural networks to statistical language modeling", "venue": "Computer Speech & Language,", "year": 2015 }, { "authors": [ "Joscha Diehl", "Terry Lyons", "Rosa Preiß", "Jeremy Reizenstein" ], "title": "Areas of areas generate the shuffle algebra", "venue": "arXiv preprint arXiv:2002.02338,", "year": 2020 }, { "authors": [ "Guy Flint", "Terry Lyons" ], "title": "Pathwise approximation of SDEs by coupling piecewise abelian rough paths", "venue": "arXiv preprint arXiv:1505.01298,", "year": 2015 }, { "authors": [ "James Foster", "Harald Oberhauser", "Terry Lyons" ], "title": "An optimal polynomial approximation of Brownian motion", "venue": "SIAM Journal on Numerical Analysis,", "year": 2020 }, { "authors": [ "Peter K Friz", "Nicolas B Victoir" ], "title": "Multidimensional stochastic processes as rough paths: theory and applications, volume 120", "venue": null, "year": 2010 }, { "authors": [ "Jessica G Gaines", "Terry J Lyons" ], "title": "Variable step size control in the numerical solution of stochastic differential equations", "venue": "SIAM Journal on Applied Mathematics,", "year": 1997 }, { "authors": [ "Alex Graves" ], "title": "Supervised sequence labelling", "venue": null, "year": 2012 }, { "authors": [ "Albert Gu", "Tri Dao", "Stefano Ermon", 
"Atri Rudra", "Christopher Re" ], "title": "HiPPO: Recurrent Memory with Optimal Polynomial Projections", "venue": null, "year": 2008 }, { "authors": [ "Lajos Gergely Gyurkó" ], "title": "Numerical methods for approximating solutions to rough differential equations", "venue": "DPhil thesis, University of Oxford,", "year": 2008 }, { "authors": [ "Lajos Gergely Gyurkó", "Terry Lyons" ], "title": "Rough paths based numerical algorithms in computational finance", "venue": "Mathematics in Finance: UIMP-RSME Lluis A. Santaló Summer School,", "year": 2008 }, { "authors": [ "Ben Hambly", "Terry Lyons" ], "title": "Uniqueness for the signature of a path of bounded variation and the reduced path group", "venue": "Annals of Mathematics,", "year": 2010 }, { "authors": [ "Arend Janssen" ], "title": "Order book models, signatures and numerical approximations of rough differential equations", "venue": "PhD thesis, University of Oxford,", "year": 2011 }, { "authors": [ "Li Jing", "Caglar Gulcehre", "John Peurifoy", "Yichen Shen", "Max Tegmark", "Marin Soljacic", "Yoshua Bengio" ], "title": "Gated orthogonal recurrent units: On learning to forget", "venue": "Neural computation,", "year": 2019 }, { "authors": [ "Patrick Kidger", "Terry Lyons" ], "title": "Universal Approximation with Deep Narrow Networks", "venue": "COLT 2020,", "year": 2020 }, { "authors": [ "Patrick Kidger", "Terry Lyons" ], "title": "Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU. arXiv:2001.00706, 2020b. URL https: //github.com/patrick-kidger/signatory", "venue": null, "year": 2020 }, { "authors": [ "Patrick Kidger", "James Morrill", "James Foster", "Terry Lyons" ], "title": "Neural controlled differential equations for irregular time series", "venue": "arXiv preprint arXiv:2005.08926,", "year": 2020 }, { "authors": [ "Mathias Lechner", "Ramin Hasani" ], "title": "Learning long-term dependencies in irregularly-sampled time series", "venue": "arXiv preprint arXiv:2006.04418,", "year": 2020 }, { "authors": [ "Shiyang Li", "Xiaoyong Jin", "Yao Xuan", "Xiyou Zhou", "Wenhu Chen", "Yu-Xiang Wang", "Xifeng Yan" ], "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shuai Li", "Wanqing Li", "Chris Cook", "Ce Zhu", "Yanbo Gao" ], "title": "Independently recurrent neural network (indrnn): Building a longer and deeper rnn", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shujian Liao", "Terry Lyons", "Weixin Yang", "Hao Ni" ], "title": "Learning stochastic differential equations using RNN with log signature features", "venue": null, "year": 1908 }, { "authors": [ "Terry Lyons" ], "title": "Rough paths, signatures and the modelling of functions on streams", "venue": "Proceedings of the International Congress of Mathematicians,", "year": 2014 }, { "authors": [ "Terry Lyons", "Caruana Michael", "Lévy Thierry" ], "title": "Differential equations driven by rough paths", "venue": "In École d’été de probabilités de Saint-Flour XXXIV-2004, edited by J. 
Picard in Volume 1908 of Lecture Notes in Mathematics, Berlin, Springer,", "year": 2007 }, { "authors": [ "Allan Pinkus" ], "title": "Approximation theory of the MLP model in neural networks", "venue": "Acta Numer.,", "year": 1999 }, { "authors": [ "Jeremy Reizenstein" ], "title": "Calculation of Iterated-Integral Signatures and Log Signatures", "venue": "arXiv preprint arXiv:1712.02757,", "year": 2017 }, { "authors": [ "Raymond A. Ryan" ], "title": "Introduction to Tensor Products of Banach Spaces", "venue": null, "year": 2002 }, { "authors": [ "Vsevolod Sourkov" ], "title": "Igloo: Slicing the features space to represent sequences", "venue": "arXiv preprint arXiv:1807.03402,", "year": 2018 }, { "authors": [ "Chang Wei Tan", "Anthony Bagnall", "Christoph Bergmeir", "Eamonn Keogh", "Francois Petitjean", "Geoffrey I. Webb" ], "title": "Monash university, uea, ucr time series regression archive, 2020", "venue": "http: //timeseriesregression.org/", "year": 2020 }, { "authors": [ "Scott Wisdom", "Thomas Powers", "John Hershey", "Jonathan Le Roux", "Les Atlas" ], "title": "Full-capacity unitary recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Eugene Wong", "Moshe Zakai" ], "title": "On the Convergence of Ordinary Integrals to Stochastic Integrals", "venue": "Annals of Mathematical Statistics,", "year": 1965 }, { "authors": [ "Lyons" ], "title": "f̂(z)LogSigs,t(X) is either globally bounded and Lipschitz continuous or linear. Hence both the Taylor and log-ODE methods are well defined. Remark A.10 It is well known that the log-signature of a path X lies in a certain free Lie algebra (this is detailed in section", "venue": null, "year": 2007 }, { "authors": [ "Lyons" ], "title": "α-Höl is the standard α-Hölder norm with α", "venue": "T ]→ R", "year": 2007 }, { "authors": [ "Note that this is a small inconsistency between this work", "the original model proposed in Kidger" ], "title": "Here, we applied the tanh function as the final hidden layer nonlinearity, whilst in the original paper the tanh nonlinearity is applied after the final linear map", "venue": "Both methods are used to constrain the rate of change of the hidden state; we do not know of a reason to prefer one over the other.", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural controlled differential equations (Neural CDEs) (Kidger et al., 2020) are the continuous-time analogue to a recurrent neural network (RNN), and provide a natural method for modelling temporal dynamics with neural networks.\nNeural CDEs are similar to neural ordinary differential equations (Neural ODEs), as popularised by Chen et al. (2018). A Neural ODE is determined by its initial condition, without a direct way to modify the trajectory given subsequent observations. In contrast the vector field of a Neural CDE depends upon the time-varying data, so that the trajectory of the system is driven by a sequence of observations." }, { "heading": "1.1 CONTROLLED DIFFERENTIAL EQUATIONS", "text": "We begin by stating the definition of a CDE.\nLet a, b ∈ R with a < b, and let v, w ∈ N. Let ξ ∈ Rw. LetX : [a, b]→ Rv be a continuous function of bounded variation (which is for example implied by it being Lipschitz), and let f : Rw → Rw×v be continuous.\nThen we may define Z : [a, b]→ Rw as the unique solution of the controlled differential equation\nZa = ξ, Zt = Za + ∫ t a f(Zs)dXs for t ∈ (a, b], (1)\nThe notation “f(Zs)dXs” denotes a matrix-vector product, and if X is differentiable then∫ t a f(Zs)dXs = ∫ t a f(Zs) dX ds (s)ds.\nIf in equation (1), dXs was replaced with ds, then the equation would just be an ODE. Using dXs causes the solution to depend continuously on the evolution ofX . We say that the solution is “driven by the control, X”." }, { "heading": "1.2 NEURAL CONTROLLED DIFFERENTIAL EQUATIONS", "text": "We recall the definition of a Neural CDE as introduced in Kidger et al. (2020).\nConsider a time series x as a collection of points xi ∈ Rv−1 with corresponding time-stamps ti ∈ R such that x = ((t0, x0), (t1, x1), ..., (tn, xn)), and t0 < ... < tn.\nLetX : [t0, tn]→ Rv be some interpolation of the data such thatXti = (ti, xi). Kidger et al. (2020) use natural cubic splines. Here we will actually end up finding piecewise linear interpolation to be a more convenient choice. (We avoid issues with adaptive solvers as discussed in Kidger et al. (2020, Appendix A) simply by using fixed solvers.)\nLet ξθ : Rv → Rw and fθ : Rw → Rw×v be neural networks. Let `θ : Rw → Rq be linear, for some output dimension q ∈ N. Here θ is used to denote dependence on learnable parameters. We define Z as the hidden state and Y as the output of a neural controlled differential equation driven by X if\nZt0 = ξθ(t0, x0), with Zt = Zt0 + ∫ t t0 fθ(Zs)dXs and Yt = `θ(Zt) for t ∈ (t0, tn]. (2)\nThat is – just like an RNN – we have evolving hidden state Z, which we take a linear map from to produce an output. This formulation is a universal approximator (Kidger et al., 2020, Appendix B). The output may be either the time-evolving Yt or just the final Ytn . This is then fed into a loss function (L2, cross entropy, . . . ) and trained via stochastic gradient descent in the usual way.\nThe question remains how to compute the integral of equation (2). Kidger et al. (2020) let\ngθ,X(Z, s) = fθ(Z) dX\nds (s), (3)\nwhere the right hand side denotes a matrix multiplication, and then note that the integral can be written as\nZt = Zt0 + ∫ t t0 gθ,X(Zs, s)ds. 
(4)\nThis reduces the CDE to an ODE, so that existing tools for Neural ODEs may be used to evaluate this, and to backpropagate.\nBy moving from the discrete-time formulation of an RNN to the continuous-time formulation of a Neural CDE, then every kind of time series data is put on the same footing, whether it is regularly or irregularly sampled, whether or not it has missing values, and whether or not the input sequences are of consistent length.\nBesides this, the continuous-time or differential equation formulation may be useful in applications where such models are explicitly desired, as when modelling physics." }, { "heading": "1.3 CONTRIBUTIONS", "text": "Neural CDEs, as with RNNs, begin to break down for long time series. Training loss/accuracy worsens, and training time becomes prohibitive due to the sheer number of forward operations within each training epoch.\nHere, we apply the log-ODE method, which is a numerical method from stochastic analysis and rough path theory. It is a method for converting a CDE to an ODE, which may in turn be solved via standard ODE solvers. Thus this acts as a drop-in replacement for the original procedure that uses the derivative of the control path.\nIn particular, we find that this method is particularly beneficial for long time series (and incidentally does not require differentiability of the control path). With this method both training time and model performance of Neural CDEs are improved, and memory requirements are reduced.\nThe resulting scheme has two very neat interpretations. In terms of numerical differential equation solvers, this corresponds to taking integration steps larger than the discretisation of the data, whilst incorporating substep information through additional terms1. In terms of machine learning, this corresponds to binning the data prior to running a Neural CDE, with bin statistics carefully chosen to extract precisely the information most relevant to solving a CDE.\n1For the reader familiar with numerical methods for SDEs, this is akin to the additional correction term in Milstein’s method as compared to Euler-Maruyama." }, { "heading": "2 THEORY", "text": "We begin with motivating theory, though we note that this section is not essential for using the method. Readers more interested in practical applications should feel free to skip to section 3." }, { "heading": "2.1 SIGNATURES AND LOG-SIGNATURES", "text": "The signature transform is a map from paths to a vector of real values, specifying a collection of statistics about the path. It is a central component of the theory of controlled differential equations, since these statistics describe how the data interacts with dynamical systems. The log-signature is then formed by representing the same information in a compressed format.\nWe begin by providing a formal definition of the signature, and a description of the log-signature. We will then give some intuition, first into the geometry of the first few terms of the (log-)signature, and then by providing a short example of how these terms appear when solving CDEs.\nSignature transform Let x = (x1, ..., xn), where xi ∈ Rv . Let T > 0 and 0 = t1 < t2 < ... < tn−1 < tn = T be arbitrary. Let X = (X1, ..., Xd) : [0, T ] → Rd be the unique continuous function such that X(ti) = xi and is affine on the intervals between (essentially just a linear interpolation of the data). Letting2\nSi1,...ika,b (X) =\n∫ ... 
∫ 0<t1<...<tk<T k∏ j=1 dXij dt (tj)dtj , (5)\nthen the depth-N signature transform of X is given by SigNa,b(X) = ({ S(X)(i) }d i=1 , { S(x)(i,j) }d i,j=1 , . . . , { S(x)(i1,...,iN ) }d i1,...,iN=1 ) . (6)\nThis definition is independent of the choice of T and ti (Bonnier et al., 2019, Proposition A.7).\nWe see that the signature is a collection of integrals, with each integral defining a real value. It is a graded sequence of statistics that characterise the input time series. In particular, (Hambly & Lyons, 2010) show that under mild conditions, Sig∞(X) completely determines X up to translation (provided time is included in a channel in X).\nLog-signature transform However, the signature transform has some redundancy: a little algebra shows that for example S1,2a,b (X) + S 2,1 a,b (X) = S 1 a,b(X)S 2 a,b(X), so that for instance we already know S2,1a,b (X) provided we know the other three quantities.\n2This is a slightly simplified definition, and the signature is often instead defined using the notation of stochastic calculus; see Definition A.2.\nThe log-signature transform is then essentially obtained by computing the signature transform, and throwing out redundant terms, to obtain some (nonunique) minimal collection.\nStarting from the depth-N signature transform and removing some fixed set of redundancies produces the depth-N log-signature transform.3 We denote this LogSigNa,b, which is a map from Lipschitz continuous paths [a, b] → Rv into Rβ(v,N), where β(v,N) denotes the dimension of the log-signature. The precise procedure is a little involved; both this and a formula for β(v,N) can be found in Appendix A.\nGeometric intuition In figure 2 we provide a geometric intuition for the first two levels of the log-signature (which have particularly natural interpretations).\n(Log-)Signatures and CDEs (Log-)signatures are intrinsically linked to solutions of CDEs. Let Df denote the Jacobian of a function f . Now expand equation (1) by linearising the vector field f and neglecting higher order terms:\nZt ≈ Za + ∫ t a ( f(Za) +Df (Za)(Zs − Za) )dX dt (s)ds\n= Za + ∫ t a ( f(Za) +Df (Za) ∫ s a f(Zu) dX dt (u) du ) dX dt (s) ds\n≈ Za + f(Za) ∫ t a dX dt (s) ds+Df (Za)f(Za) ∫ t a ∫ s a dX dt (u) du dX dt (s)ds\n= Za + f(Za) { S(X)(i)}di=1 +Df (Za)f(Za) { S(X)(i,j) }d i,j=1 . (7)\nThis gives a Taylor expansion of the solution, and moreover the coefficients involve the terms in the signature. Higher order Taylor expansions results in corrections using higher order signature terms. We refer the reader to section 7.1 of Friz & Victoir (2010) for further details." }, { "heading": "2.2 THE LOG-ODE METHOD", "text": "Recall for X : [a, b]→ Rv that LogSigNa,b(X) ∈ Rβ(v,N). The log-ODE method states\nZb ≈ Ẑb where Ẑu = Ẑa + ∫ u a f̂(Ẑs) LogSigNa,b(X) b− a ds, and Ẑa = Za. (8)\nwhere Z is as defined in equation (2), and the relationship between f̂ to f is given in Appendix A.\nThat is, the solution of the CDE may be approximated by the solution to an ODE. In practice, we go further and pick some points ri such that a = r0 < r1 < · · · < rm = b. We split up the CDE of\n3Similar terminology such as “step-N log-signature” is also used in the literature.\nequation (1) into an integral over [r0, r1], an integral over [r1, r2], and so on, and apply the log-ODE method to each interval separately.\nSee Appendix A for more details and Appendix B for a proof of convergence.\nAlso see Janssen (2011); Lyons (2014); Boutaib et al. (2014) for other discussions of the log-ODE method. 
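To make the preceding definitions concrete, the depth-N log-signatures over subintervals that the log-ODE method consumes can be computed with the Signatory library (Kidger & Lyons, 2020b), which is also used later in this paper. The following is a minimal sketch rather than the authors' experimental code: the batch size, series length, truncation depth and window size are illustrative assumptions.

```python
import torch
import signatory

batch, length, channels = 32, 17984, 6   # toy sizes; channels includes time
depth = 3                                # truncation depth N
step = 512                               # points per interval [r_i, r_{i+1}]

x = torch.randn(batch, length, channels) # placeholder data, shaped
                                         # (batch, stream, channels)

# beta(v, N): the number of channels in a depth-N log-signature of a
# v-channel path.
logsig_dim = signatory.logsignature_channels(channels, depth)

pieces = []
for start in range(0, length - 1, step):
    end = min(start + step, length - 1)
    # Include both endpoints so consecutive windows share the point r_i.
    window = x[:, start:end + 1, :]
    pieces.append(signatory.logsignature(window, depth))

# One log-signature per interval: the slowly-varying driving information
# used in Section 3.
logsigs = torch.stack(pieces, dim=1)     # (batch, num_intervals, logsig_dim)
assert logsigs.size(-1) == logsig_dim
```

Computed once up front, this sequence of log-signatures can be treated as a preprocessing step, as discussed in Section 3.3.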
See Gaines & Lyons (1997); Gyurkó & Lyons (2008); Flint & Lyons (2015); Foster et al. (2020) for applications of the log-ODE method to stochastic differential equations (SDEs)." }, { "heading": "3 METHOD", "text": "We move on to discussing the application of the log-ODE method to Neural CDEs.\nRecall that we observe some time series x = ((t0, x0), (t1, x1), ..., (tn, xn)), and have constructed a piecewise linear interpolation X : [t0, tn]→ Rv such that Xti = (ti, xi). We now pick points ri such that t0 = r0 < r1 < · · · < rm = tn. In principle these can be variably spaced but in practice we will typically space them equally far apart. The total number of points m should be much smaller than n.\nIn section 2 the log-signature transform was introduced. To recap, for X : [t0, tn] → Rv and t0 ≤ ri < ri+1 ≤ tn the depth-N log-signature of X over the interval [ri, ri+1] is some collection of statistics LogSigNri,ri+1(X) ∈ R β(v,N).\nIn particular, these statistics are precisely those most relevant for solving the CDE equation (1)." }, { "heading": "3.1 UPDATING THE NEURAL CDE HIDDEN STATE EQUATION VIA THE LOG-ODE METHOD", "text": "Recall how the Neural CDE formulation of equation (2) was solved via equations (3), (4). For the log-ODE approach we replace (3) with the piecewise\nĝθ,X(Z, s) = f̂θ(Z) LogSigNri,ri+1(X)\nri+1 − ri for s ∈ [ri, ri+1), (9)\nwhere f̂θ : Rw → Rw×β(v,N) is an arbitrary neural network, and the right hand side denotes a matrix-vector product between f̂θ and the log-signature. Equation (4) then becomes\nZt = Zt0 + ∫ t t0 ĝθ,X(Zs, s)ds. (10)\nThis may now be solved as a (neural) ODE using standard ODE solvers." }, { "heading": "3.2 RELATIONSHIP TO THE ORIGINAL METHOD", "text": "Suppose we happened to choose ri = ti and ri+1 = ti+1. Then the log-signature term is\nLogSigNti,ti+1(X)\nti+1 − ti (11)\nThe depth 1 the log-signature is just the increment of the path over the interval, and so this becomes\n∆X[ti,ti+1]\nti+1 − ti =\ndX linear\ndt (ti) for s ∈ [ti, ti+1), (12)\nthat is to say the same as obtained via the original method if using linear interpolation." }, { "heading": "3.3 DISCUSSION", "text": "Ease of Implementation This method is straightforward to implement using pre-existing tools. There are standard libraries available for computing the log-signature transform; we use Signatory (Kidger & Lyons, 2020b). Then, as equation (10) is an ODE, it may be solved directly using tools such as torchdiffeq (Chen, 2018).\nAs an alternative, we note that the formulation in equation (11) can be written in precisely the same form as equation (3), with the driving path taken to be piecewise linear in log-signature space. Computation of the log-signatures can therefore be considered as a preprocessing step, producing a sequence of logsignatures. From this we may construct a path in log-signature space, and apply existing tools for neural CDEs. This idea is summarised in figure 1. We make this approach available in the [redacted] open source project.\nStructure of f̂ The description here aligns with the log-ODE scheme described in equation (8). There is one discrepancy: we do not attempt to model the specific structure of f̂ . This is in principle possible, but is computationally expensive. Instead, we model f̂ as a neural network directly. 
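To make this concrete, below is a minimal sketch of f̂θ as a plain feedforward network, together with the piecewise vector field ĝθ,X of equation (9), solved as in equation (10). The two hidden layers of width 64 are illustrative assumptions; placing tanh as the final hidden nonlinearity follows the description in Appendix C, and the solver call uses torchdiffeq as cited above.

```python
import torch
import torchdiffeq

class LogODEVectorField(torch.nn.Module):
    def __init__(self, hidden_dim, logsig_dim, logsigs, interval_length):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.logsig_dim = logsig_dim
        self.logsigs = logsigs            # (batch, num_intervals, logsig_dim)
        self.interval_length = interval_length
        self.mlp = torch.nn.Sequential(   # f_hat_theta: R^w -> R^{w x beta(v,N)}
            torch.nn.Linear(hidden_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 64),
            torch.nn.Tanh(),              # final hidden nonlinearity (Appendix C)
            torch.nn.Linear(64, hidden_dim * logsig_dim),
        )

    def forward(self, t, z):
        # Locate the interval [r_i, r_{i+1}) containing t (assumes integration
        # starts at t = 0 and the r_i are equally spaced).
        i = min(int(t.item() / self.interval_length), self.logsigs.size(1) - 1)
        matrix = self.mlp(z).view(-1, self.hidden_dim, self.logsig_dim)
        logsig = self.logsigs[:, i] / self.interval_length
        # Matrix-vector product f_hat(z) LogSig / (r_{i+1} - r_i), as in (9).
        return torch.einsum('bij,bj->bi', matrix, logsig)

# Usage sketch: integrate the hidden state with a fixed-step solver.
# func = LogODEVectorField(32, logsig_dim, logsigs, interval_length)
# z0 = torch.zeros(batch, 32)
# times = torch.tensor([0.0, logsigs.size(1) * interval_length])
# zT = torchdiffeq.odeint(func, z0, times, method='rk4',
#                         options={'step_size': interval_length})[-1]
```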
This need not necessarily exhibit the requisite structure, but as neural networks are universal approximators (Pinkus, 1999; Kidger & Lyons, 2020a) then this approach is at least as general from a modelling perspective.\nLossy Representation The log-signature transform can be thought of as a lossy representation for time series. This is made rigorous in Diehl et al. (2020), where it is shown that the log-signature can be obtained by iterating an “area” operation between paths. For CDEs, these geometric features precisely encode the interaction between the data and the system.\nLength/Channel Trade-Off The sequence of log-signatures is now of length m, which was chosen to be much smaller than n. As such, it is much more slowly varying over the interval [t0, tn] than the original data, which was of length n. The differential equation it drives is better behaved, and so larger integration steps may be used in the numerical solver. This is the source of the speed-ups of this method; we observe typical speed-ups by a factor of about 100.\nEach element is a log-signature of size β(v,N) ≥ v; the additional channels are higher-order corrections to compensate for the larger integration steps.\nGenerality of the Log-ODE Method If depth N = 1 and steps ri = ti are used, then the above formulation exactly reduces onto the original Neural CDE formulation using linear interpolation. Thus the log-ODE method in fact generalises the original approach.\nApplications In principle the log-ODE method may be applied to solve any Neural CDE. In practice, the reduction in length (from n to m), coupled with the loss of information (from using the log-signature as a summary statistic) makes this particularly useful for long time series.\nMemory Efficiency Long sequences need large amounts of memory to perform backpropagationthrough-time (BPTT). As with the original Neural CDEs, the log-ODE approach supports memoryefficient backpropagation via the adjoint equations, alleviating this issue. See Kidger et al. (2020).\nThe Depth and Step Hyperparameters To solve a Neural CDE accurately via the log-ODE method, we should be prepared to take the depth N suitably large, or the intervals ri+1− ri suitably small. Accomplishing this would realistically require that they are taken very large or very small, respectively. Instead, we treat these as hyperparameters. This makes use of the log-ODE method a modelling choice rather than an implementation detail.\nIncreasing step size will lead to faster (but less informative) training by reducing the number of operations in the forward pass. Increasing depth will lead to slower (but more informative) training, as more information about each local interval is used in each update." }, { "heading": "4 EXPERIMENTS", "text": "We investigate solving a Neural CDE with and without the log-ODE method on four real-world problems. Every problem was chosen for its long length. The lengths are in fact sufficiently long that adjoint-based backpropagation (Chen et al., 2018) was needed to avoid running out of memory at any reasonable batch size. Every problem is regularly sampled, so we take ti = i.\nWe will denote a Neural CDE model with log-ODE method, using depth N and step s, as NCDEsN . Taking N = 1 (and any s) corresponds to not using the log-ODE method, with the data subsampled at rate 1/s, as per section 3.3. Thus we use NCDE11 as our benchmark: no subsampling, no log-ODE method.\nIn principle we could compare against RNN variants. 
In practice we do not, for simple practical reasons: RNN-based models do not fit in the memory of the GPU resources we have available. Memory-efficient adjoint backpropagation is one of the main advantages of using differential equation models in the first place. (As per the first paragraph of this section.)

Each model is run three times and we report the mean and standard deviation of the test metrics, along with the mean training times and memory usages.

For each task, the hyperparameters were selected by performing a grid search on the NCDE^s_1 model, where s was chosen so that the length of the sequence was 500 steps. This was found to create a reasonable balance between training time and sequence length. (Doing hyperoptimisation on the baseline NCDE^1_1 model would have been more difficult due to the larger training times.)

Precise details of the experiments can be found in Appendices C and D." }, { "heading": "4.1 CLASSIFYING EIGENWORMS", "text": "Our first example uses the EigenWorms dataset from the UEA archive of Bagnall et al. (2017). This consists of time series of length 17 984 and 6 channels (including time), corresponding to the movement of a roundworm. The goal is to classify each worm as either wild-type or one of four mutant-type classes.

See Table 1. We see that the straightforward NCDE^1_1 model takes roughly a day to train. Using the log-ODE method (NCDE_2, NCDE_3) speeds this up to take roughly minutes. Doing so additionally improves model performance dramatically, and reduces memory usage. Naive subsampling approaches (NCDE^8_1, NCDE^32_1, NCDE^128_1) only achieve speed-ups without performance improvements; this can be seen in the NCDE_1 column, which corresponds to naive subsampling for a step size greater than 1.

We notice that the NCDE_3 model has faster training times than the depth 2 model (and sometimes faster than depth 1) over each step size. This is because we imposed a stopping criterion if the loss failed to decrease after 60 epochs, meaning that NCDE_3 converged in fewer epochs (though the time per epoch is still larger).

See also Figure 3, in which we summarise results for a larger range of step sizes." }, { "heading": "4.2 ESTIMATING VITAL SIGNS FROM PPG AND ECG DATA", "text": "Next we consider the problem of estimating vital signs from PPG and ECG data. This comes from the TSR archive (Tan et al., 2020) using data from the Beth Israel Deaconess Medical Centre (BIDMC). We consider three separate tasks, in which we aim to predict a person's respiratory rate (RR), their heart rate (HR), and their oxygen saturation (SpO2). This data is sampled at 125 Hz, with each series having a length of 4 000. There are 7 949 training samples, and 3 channels (including time).

We train a model on each of the three vital-sign prediction tasks. The metric used to evaluate performance is the L2 loss. The results over a range of step sizes are presented in Table 2. We also provide heatmaps in Figure 4 for each dataset containing the loss values (normalised to [0, 1]) for each task. The full results over all step sizes may be found in Appendix D.

We find that the depth 3 model is the top performer for every task at any step size. What's more, it does so with a significantly reduced training time. We attribute the improved performance to the log-ODE model being better able to learn long-term dependencies due to the reduced sequence length. Note that the performance of the NCDE^s_2 and NCDE^s_3 models actually improves as the step size is increased. 
This is in contrast to NCDE^s_1, which sees a degradation in performance." }, { "heading": "5 LIMITATIONS OF THE LOG-ODE METHOD", "text": "Number of hyperparameters Two new hyperparameters – truncation depth and step size – with substantial effects on training time and memory usage must now also be tuned.

Number of input channels The log-ODE method is most feasible for low numbers of input channels, as the number of log-signature channels β(v,N) grows exponentially in v." }, { "heading": "6 RELATED WORK", "text": "There has been some work on long time series for classic RNN (GRU/LSTM) models.

Wisdom et al. (2016); Jing et al. (2019) show that unitary or orthogonal RNNs can mitigate the vanishing/exploding gradients problem. However, they are expensive to train due to the need to compute a matrix inversion at each training step. Chang et al. (2017) introduce dilated RNNs with skip connections between RNN states, which help improve training speed and the learning of long-term dependencies. Campos et al. (2017) introduce the 'Skip-RNN' model, which extends the RNN by adding an additional learnt component that skips state updates. Li et al. (2018) introduce the 'IndRNN' model, with particular structure tailored to learning long time series.

One important comparison is to hierarchical subsampling as in Graves (2012); De Mulder et al. (2015). There the data is split into windows, an RNN is run over each window, and then an additional RNN is run over the first RNN's outputs; we may describe this as an RNN/RNN pair. Liao et al. (2019) then perform the equivalent operation with a log-signature/RNN pair. In this context, our use of the log-ODE method may then be described as a log-signature/NCDE pair.

In comparison to Liao et al. (2019), this means moving from an inspired choice of pre-processing to an actual implementation of the log-ODE method. In doing so the differential equation structure is preserved. Moreover this takes advantage of the synergy between log-signatures (which extract statistics on how data drives differential equations), and the controlled differential equation it then drives. Broadly speaking these connections are natural: at least within the signature/CDE/rough path community, it is a well-known but poorly-published fact that (log-)signatures, RNNs and (Neural) CDEs are all related; see for example Kidger et al. (2020) for a little exposition on this.

CNNs and Transformers have been shown to offer improvements over RNNs for modelling long-term dependencies (Bai et al., 2018; Li et al., 2019). However, both can be expensive in their own right; Transformers are famously O(L^2) in the length L of the time series. Whilst several approaches have been introduced to reduce this cost, for example Li et al. (2019) reduce it to O(L(log L)^2), this can still be difficult with long series. Extensions specifically to long sequences do exist (Sourkov, 2018), but these typically focus on language modelling rather than multivariate time series data.

De Brouwer et al. (2019); Lechner & Hasani (2020) amongst others consider continuous-time analogues of GRUs and LSTMs, going some way to improving the learning of long-term dependencies. Voelker et al. (2019); Gu et al. (2020) consider links with ODEs and approximation theory, to improve the long-term memory capacity of RNNs." }, { "heading": "7 CONCLUSION", "text": "We demonstrate how to effectively apply Neural CDEs to long (17k) time series, via the log-ODE method. 
The model may still be solved via ODE methods and thus retains adjoint backpropagation and continuous dynamics. In doing so we see significant training speed-ups, improvements in model performance, and reduced memory requirements." }, { "heading": "A AN INTRODUCTION TO THE LOG-ODE METHOD FOR CONTROLLED DIFFERENTIAL EQUATIONS", "text": "The log-ODE method is an effective method for approximating the controlled differential equation:\ndYt = f(Yt) dXt, (13)\nY0 = ξ,\nwhereX : [0, T ]→ Rd has finite length, ξ ∈ Rn and f : Rn → L(Rd,Rn) is a function with certain smoothness assumptions so that the CDE (13) is well posed. Throughout these appendices, L(U, V ) denotes the space of linear maps between the vector spaces U and V . In rough path theory, the function f is referred to as the “vector field” of (13) and usually assumed to have Lip(γ) regularity (see definition 10.2 in Friz & Victoir (2010)). In this section, we assume one of the below conditions on the vector field:\n1. f is bounded and has N bounded derivatives. 2. f is linear.\nIn order to define the log-ODE method, we will first consider the tensor algebra and path signature. Definition A.1 We say that T ( Rd )\n:= R ⊕ Rd ⊕ (Rd)⊗2 ⊕ · · · is the tensor algebra of Rd and T (( Rd )) := { a = ( a0, a1, · · · ) : ak ∈ ( Rd )⊗k ∀k ≥ 0} is the set of formal series of tensors of Rd.\nMoreover, T ( Rd ) and T (( Rd ))\ncan be endowed with the operations of addition and multiplication. Given a = (a0, a1, · · · ) and b = (b0, b1, · · · ), we have\na+ b = ( a0 + b0, a1 + b1, · · · ) , (14)\na⊗ b = ( c0, c1, c2, · · · ) , (15)\nwhere for n ≥ 0, the n-th term cn ∈ ( Rd )⊗n can be written using the usual tensor product as\ncn := n∑ k=0 ak ⊗ bn−k.\nThe operation ⊗ given by (15) is often referred to as the “tensor product”.\nDefinition A.2 The signature of a finite length path X : [0, T ] → Rd over the interval [s, t] is defined as the following collection of iterated (Riemann-Stieltjes) integrals:\nSs,t ( X ) := ( 1 , X (1) s,t , X (2) s,t , X (3) s,t , · · · ) ∈ T (( Rd )) , (16)\nwhere for n ≥ 1, X\n(n) s,t :=\n∫ · · · ∫\ns<u1<···<un<t\ndXu1 ⊗ · · · ⊗ dXun ∈ ( Rd )⊗n .\nSimilarly, we can define the depth-N (or truncated) signature of the path X on [s, t] as\nSNs,t ( X ) := ( 1 , ∫ s<u1<t dXu , · · · , ∫ · · · ∫\ns<u1<···<uN<t\ndXu1 ⊗ · · · ⊗ dXuN\n) ∈ TN ( Rd ) , (17)\nwhere TN ( Rd ) := R⊕ Rd ⊕ (Rd)⊗2 ⊕ · · · ⊕ (Rd)⊗N denotes the truncated tensor algebra.\nThe (truncated) signature provides a natural feature set that describes the effects a path X has on systems that can be modelled by (13). That said, defining the log-ODE method actually requires the so-called “log-signature” which efficiently encodes the same integral information as the signature. The log-signature is obtained from the path’s signature by removing certain algebraic redundancies, such as ∫ t\n0 ∫ s 0 dXiudX j s + ∫ t 0 ∫ s 0 dXjudX i s = X i tX j t ,\nfor i, j ∈ {1, · · · , d}, which follows by the integration-by-parts formula. To this end, we will define the logarithm map on the depth-N truncated tensor algebra TN ( Rd ) := R⊕ Rd ⊕ · · · ⊕ (Rd)⊗N . 
Definition A.3 (The logarithm of a formal series) For a = (a0, a1, · · · ) ∈ T (( Rd ))\nwith a0 > 0, define log(a) to be the element of T (( Rd )) given by the following series:\nlog(a) := log(a0) + ∞∑ n=1 (−1)n n ( 1− a a0 )⊗n , (18)\nwhere 1 = (1, 0, · · · ) is the unit element of T (( Rd ))\nand log(a0) is viewed as log(a0)1.\nDefinition A.4 (The logarithm of a truncated series) For a = (a0, a1, · · · , aN ) ∈ T (( Rd ))\nwith a0 > 0, define log N (a) to be the element of TN ( Rd ) defined from the logarithm map (18) as\nlogN (a) := PN ( log(ã) ) , (19)\nwhere ã := (a0, a1, · · · , aN , 0, · · · ) ∈ T (( Rd ))\nand PN denotes the standard projection map from T (( Rd )) onto TN ( Rd ) .\nDefinition A.5 The log-signature of a finite length path X : [0, T ] → Rd over the interval [s, t] is defined as LogSigs,t(X) := log(Ss,t(X)), where Ss,t(X) denotes the path signature of X given by Definition A.2. Likewise, the depth-N (or truncated) log-signature of X is defined for each N ≥ 1 as LogSigNs,t(X) := log N (SNs,t(X)).\nThe log-signature is a map from X : [0, T ] → Rd → Rβ(d,N). The exact form of β(d,N) is given by\nβ(d,N) = N∑ k=1 1 k ∑ i|k µ ( k i ) di\nwith µ the Möbius function. We note that the order of this remains an open question.\nThe final ingredient we use to define the log-ODE method are the derivatives of the vector field f . It is worth noting that these derivatives also naturally appear in the Taylor expansion of (13).\nDefinition A.6 (Vector field derivatives) We define f◦k : Rn → L((Rd)⊗k,Rn) recursively by\nf◦(0)(y) := y,\nf◦(1)(y) := f(y), f◦(k+1)(y) := D ( f◦k ) (y)f(y),\nfor y ∈ Rn, where D ( f◦k ) denotes the Fréchet derivative of f◦k.\nUsing these definitions, we can describe two closely related numerical methods for the CDE (13).\nDefinition A.7 (The Taylor method) Given the CDE (13), we can use the path signature of X to approximate the solution Y on an interval [s, t] via its truncated Taylor expansion. That is, we use\nTaylor(Ys, f, S N s,t(X)) := N∑ k=0 f◦k(Ys)πk ( SNs,t(X) ) , (20)\nas an approximation for Yt where each πk : TN (Rd)→ (Rd)⊗k is the projection map onto ( Rd )⊗k .\nDefinition A.8 (The Log-ODE method) Using the Taylor method (20), we can define the function f̂ : Rn → L(TN (Rd),Rn) by f̂(z) := Taylor(z, f, ·). By applying f̂ to the truncated log-signature of the path X over an interval [s, t], we can define the following ODE on [0, 1]\ndz du = f̂(z)LogSigNs,t(X), (21)\nz(0) = Ys.\nThen the log-ODE approximation of Yt (given Ys and LogSigNs,t(X)) is defined as\nLogODE(Ys, f,LogSig N s,t(X)) := z(1). (22)\nRemark A.9 Our assumptions of f ensure that z 7→ f̂(z)LogSigNs,t(X) is either globally bounded and Lipschitz continuous or linear. Hence both the Taylor and log-ODE methods are well defined.\nRemark A.10 It is well known that the log-signature of a path X lies in a certain free Lie algebra (this is detailed in section 2.2.4 of Lyons et al. (2007)). Furthermore, it is also a theorem that the Lie bracket of two vector fields is itself a vector field which doesn’t depend on choices of basis. By expressing LogSigNs,t(X) using a basis of the free Lie algebra, it can be shown that only the vector field f and its (iterated) Lie brackets are required to construct the log-ODE vector field f̂(z)LogSigNs,t(X). In particular, this leads to our construction of the log-ODE (8) using the Lyndon basis of the free Lie algebra (see Reizenstein (2017) for a precise description of the Lyndon basis). We direct the reader to Lyons (2014) and Boutaib et al. 
(2014) for further details on this Lie theory.\nTo illustrate the log-ODE method, we give two examples:\nExample A.11 (The “increment-only” log-ODE method) When N = 1, the ODE (21) becomes\ndz du = f(z)Xs,t,\nz(0) = Ys.\nTherefore we see that this “increment-only” log-ODE method is equivalent to driving the original CDE (13) by a piecewise linear approximation of the control path X . This is a classical approach for stochastic differential equations (i.e. when Xt = (t,Wt) with W denoting a Brownian motion) and is an example of a Wong-Zakai approximation (see Wong & Zakai (1965) for further details).\nExample A.12 (An application for SDE simulation) Consider the following affine SDE,\ndYt = a(b− yt) dt+ σyt ◦ dWt, (23) y(0) = y0 ∈ R≥0 ,\nwhere a, b ≥ 0 are the mean reversion parameters, σ ≥ 0 is the volatility andW denotes a standard real-valued Brownian motion. The ◦ means that this SDE is understood in the Stratonovich sense. The SDE (23) is known in the literature as Inhomogeneous Geometric Brownian Motion (or IGBM). Using the control path X = {(t,Wt)}t≥0 and setting N = 3, the log-ODE (21) becomes\ndz du = a(b− zu)h+ σzuWs,t − abσAs,t + abσ2L(1)s,t + a2bσL (2) s,t ,\nz(0) = Ys.\nwhere h := t− s denotes the step size and the random variables As,t, L(1)s,t , L (2) s,t are given by\nAs,t := ∫ t s Ws,r dr − 1 2 hWs,t,\nL (1) s,t := ∫ t s ∫ r s Ws,v ◦ dWv dr − 1 2 Ws,tAs,t − 1 6 hW 2s,t,\nL (2) s,t := ∫ t s ∫ r s Ws,v dv dr − 1 2 hAs,t − 1 6 h2Ws,t.\nIn Foster et al. (2020), the depth-3 log-signature of X = {(t,Wt)}t≥0 was approximated so that the above log-ODE method became practical and this numerical scheme exhibited state-of-the-art convergence rates. For example, the approximation error produced by 25 steps of the high order log-ODE method was similar to the error of the “increment only” log-ODE method with 1000 steps." }, { "heading": "B CONVERGENCE OF THE LOG-ODE METHOD FOR ROUGH DIFFERENTIAL EQUATIONS", "text": "In this section, we shall present “rough path” error estimates for the log-ODE method. In addition, we will discuss the case when the vector fields governing the rough differential equation are linear. We begin by stating the main result of Boutaib et al. (2014) which quantifies the approximation error of the log-ODE method in terms of the regularity of the systems vector field f and control path X . Since this section uses a number of technical definitions from rough path theory, we recommend Lyons et al. (2007) as an introduction to the subject.\nFor T > 0, we will use the notation4T := {(s, t) ∈ [0, T ]2 : s < t} to denote a rescaled 2-simplex.\nTheorem B.1 (Lemma 15 in Boutaib et al. 
(2014)) Consider the rough differential equation dYt = f(Yt) dXt, (24) Y0 = ξ,\nwhere we make the following assumptions:\n• X is a geometric p-rough path in Rd, that is X : 4T → T bpc(Rd) is a continuous path in the tensor algebra T bpc(Rd) := R⊕ Rd ⊕ ( Rd )⊗2 ⊕ · · · ⊕ (Rd)⊗bpc with increments\nXs,t = ( 1, X (1) s,t , X (2) s,t , · · · , X (bpc) s,t ) , (25)\nX (k) s,t := πk ( Xs,t ) ,\nwhere πk : T bpc ( Rd ) → ( Rd )⊗k is the projection map onto ( Rd )⊗k\n, such that there exists a sequence of continuous finite variation paths xn : [0, T ] → Rd whose truncated signatures converge to X in the p-variation metric:\ndp ( Sbpc(xn), X ) → 0, (26)\nas n→∞, where the p-variation between two continuous paths Z1 and Z2 in T bpc(Rd) is\ndp ( Z1, Z2 ) := max\n1≤k≤bpc sup D ( ∑ ti∈D ∥∥∥πk(Z1ti,ti+1)− πk(Z2ti,ti+1)∥∥∥ pk) kp , (27)\nwhere the supremum is taken over all partitions D of [0, T ] and the norms ‖ · ‖ must satisfy (up to some constant) ‖a⊗ b‖ ≤ ‖a‖‖b‖, for a ∈ (Rd)⊗n and b ∈ (Rd)⊗m. For example, we can take ‖ · ‖ to be the projective or injective tensor norms (see Propositions 2.1 and 3.1 in Ryan (2002)).\n• The solution Y and its initial value ξ both take their values in Rn.\n• The collection of vector fields {f1, · · · , fd} on Rn are denoted by f : Rn → L(Rn,Rd), where L(Rn,Rd) is the space of linear maps from Rn to Rd. We will assume that f has Lip(γ) regularity with γ > p. That is, f it is bounded with bγc bounded derivatives, the last being Hölder continuous with exponent (γ − bγc). Hence the following norm is finite:\n‖f‖Lip(γ) := max 0≤k≤bγc ∥∥Dkf∥∥∞ ∨ ∥∥Dbγcf∥∥(γ−bγc)−Höl , (28) where Dkf is the k-th (Fréchet) derivative of f and ‖ · ‖α-Höl is the standard α-Hölder norm with α ∈ (0, 1).\n• The RDE (24) is defined in the Lyon’s sense. Therefore by the Universal Limit Theorem (see Theorem 5.3 in Lyons et al. (2007)), there exists a unique solution Y : [0, T ]→ Rn.\nWe define the log-ODE for approximating the solution Y over an interval [s, t] ⊂ [0, T ] as follows:\n1. Compute the depth-bγc log-signature of the control path X over [s, t]. That is, we obtain LogSig bγc s,t (X) := logbγc ( S bγc s,t (X) ) ∈ T bγc(Rd), where logbγc(·) is defined by projecting\nthe standard tensor logarithm map onto {a ∈ T bγc(Rd) : π0(a) > 0}.\n2. Construct the following (well-posed) ODE on the interval [0, 1],\ndzs,t\ndu = F\n( zs,t ) , (29)\nzs,t0 = Ys,\nwhere the vector field F : Rn → Rn is defined from the log-signature as\nF (z) := bγc∑ k=1 f◦k(z)πk ( LogSig bγc s,t (X) ) . (30)\nRecall that f◦k : Rn → L((Rd)⊗k,Rn) was defined previously in Definition A.6.\nThen we can approximate Yt using the u = 1 solution of (29). Moreover, there exists a universal constant Cp,γ depending only on p and γ such that∥∥Yt − zs,t1 ∥∥ ≤ Cp,γ‖f‖γLip(γ)‖X‖γp-var;[s,t], (31) where ‖ · ‖p-var;[s,t] is the p-variation norm defined for paths in T bpc(Rd) by\n‖X‖p-var;[s,t] := max 1≤k≤bpc sup D ( ∑ ti∈D ∥∥Xkti,ti+1∥∥ pk) kp , (32) with the supremum taken over all partitions D of [s, t].\nRemark B.2 If the vector fields {f1, · · · , fd} are linear, then it immediately follows that F is linear.\nAlthough the above theorem requires some sophisticated theory, it has a simple conclusion - namely that log-ODEs can approximate controlled differential equations. That said, the estimate (31) does not directly apply when the vector fields {fi} are linear as they would be unbounded. 
Fortunately, it is well known that linear RDEs are well posed and the growth of their solutions can be estimated.\nTheorem B.3 (Theorem 10.57 in Friz & Victoir (2010)) Consider the linear RDE on [0, T ] dYt = f(Yt) dXt,\nY0 = ξ,\nwhere X is a geometric p-rough path in Rd, ξ ∈ Rn and the vector fields {fi}1≤i≤d take the form fi(y) = Aiy + B where {Ai} and {Bi} are n × n matrices. Let K denote an upper bound on maxi(‖Ai‖ + ‖Bi‖). Then a unique solution Y : [0, T ] → Rn exists. Moreover, it is bounded and there exists a constant Cp depending only on p such that\n‖Yt − Ys‖ ≤ Cp ( 1 + ‖ξ‖ ) K‖X‖p-var;[s,t] exp ( CpK p‖X‖pp-var;[s,t] ) , (33)\nfor all 0 ≤ s ≤ t ≤ T .\nWhen the vector fields of the RDE (24) are linear, then the log-ODE (29) also becomes linear. Therefore, the log-ODE solution exists and is explicitly given as the exponential of the matrix F .\nTheorem B.4 Consider the same linear RDE on [0, T ] as in Theorem B.3, dYt = f(Yt) dXt,\nY0 = ξ.\nThen the log-ODE vector field F given by (30) is linear and the solution of the associated ODE (29) exists and satisfies\n‖zs,tu ‖ ≤ ‖Ys‖ exp ( bγc∑ m=1 Km ∥∥∥πm(LogSigbγcs,t (X))∥∥∥), (34)\nfor u ∈ [0, 1] and all 0 ≤ s ≤ t ≤ T .\nProof B.5 Since F is a linear vector field on Rn, we can view it as an n × n matrix and so for u ∈ [0, 1],\nzs,tu = exp(uF )z s,t 0 ,\nwhere exp denotes the matrix exponential. The result now follows by the standard estimate ‖ exp(F )‖ ≤ exp(‖F‖).\nRemark B.6 Due to the boundedness of linear RDEs (33) and log-ODEs (34), the arguments that established Theorem B.1 will hold in the linear setting as ‖f‖Lip(γ) would be finite when defined on the domains that the solutions Y and z lie in.\nGiven the local error estimate (31) for the log-ODE method, we can now consider the approximation error that is exhibited by a log-ODE numerical solution to the RDE (24). Thankfully, the analysis required to derive such global error estimates was developed by Greg Gyurkó in his PhD thesis. Thus the following result is a straightforward application of Theorem 3.2.1 from Gyurkó (2008).\nTheorem B.7 Let X , f and Y satisfy the assumptions given by Theorem B.1 and suppose that {0 = t0 < t1 < · · · < tN = T} is a partition of [0, T ] with max k ‖X‖p-var;[tk,tk+1] sufficiently small. We can construct a numerical solution {Y logk }0≤k≤N of (24) by setting Y log 0 := Y0 and for each k ∈ {0, 1, · · · , N − 1}, defining Y logk+1 to be the solution at u = 1 of the following ODE:\ndztk,tk+1\ndu := F\n( ztk,tk+1 ) , (35)\nz tk,tk+1 0 := Y log k ,\nwhere the vector field F is constructed from the log-signature of X over the interval [tk, tk+1] according to (30). Then there exists a constant C depending only on p, γ and ‖f‖Lip(γ) such that∥∥Ytk − Y logk ∥∥ ≤ C k−1∑\ni=0\n‖X‖γp-var;[ti,ti+1], (36)\nfor 0 ≤ k ≤ N .\nRemark B.8 The above error estimate also holds when the vector field f is linear (by Remark B.6)).\nSince bγc is the truncation depth of the log-signatures used to construct each log-ODE vector field, we see that high convergence rates can be achieved through using more terms in each log-signature. It is also unsurprising that the error estimate (36) increases with the “roughness” of the control path. So just as in our experiments, we see that the performance of the log-ODE method can be improved by choosing an appropriate step size and depth of log-signature." 
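As a numerical illustration of the scheme in Theorem B.7, the sketch below applies the depth-1 ("increment-only") log-ODE method of Example A.11 to a toy linear RDE. In the linear setting each log-ODE step reduces to a matrix exponential, as in the proof of Theorem B.4. The control path, matrices and partition sizes are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A1, A2 = 0.5 * rng.standard_normal((2, 3, 3))   # linear fields f_i(y) = A_i y
X = lambda t: np.array([np.cos(t), np.sin(t)])  # smooth 2-channel control
y0 = np.array([1.0, 0.0, 0.0])
T = 5.0

def log_ode_depth1(n_steps):
    """One matrix exponential per interval: Y_{k+1} = exp(A1 dX1 + A2 dX2) Y_k."""
    ts = np.linspace(0.0, T, n_steps + 1)
    y = y0
    for s, t in zip(ts[:-1], ts[1:]):
        dX = X(t) - X(s)            # the depth-1 log-signature: the increment
        y = expm(A1 * dX[0] + A2 * dX[1]) @ y
    return y

reference = log_ode_depth1(20_000)  # fine partition, used as ground truth
for n in (10, 40, 160, 640):
    err = np.linalg.norm(log_ode_depth1(n) - reference)
    print(f"{n:4d} steps: error {err:.2e}")   # errors shrink as the mesh refines
```

Using depth 2 would additionally pick up the Lévy area of the control over each step, entering through the Lie bracket of the vector fields (Remark A.10).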
}, { "heading": "C EXPERIMENTAL DETAILS", "text": "Code The code to reproduce the experiments is available at [redacted; see supplementary material]\nData splits Each dataset was split into a training, validation, and testing dataset with relative sizes 70%/15%/15%.\nNormalisation The training splits of each dataset were normalised to zero mean and unit variance. The statistics from the training set were then used to normalise the validation and testing datasets.\nArchitecture We give a graphical description of the architecture used for updating the Neural CDE hidden state in figure 6. The input is first run through a multilayer perceptron with n layers of size h, with with n, h being hyperparameters. ReLU nonlinearities are used at each layer except the final one, where we instead use a tanh nonlinearity. The goal of this is to help prevent term blow-up over the long sequences.\nNote that this is a small inconsistency between this work and the original model proposed in Kidger et al. (2020). Here, we applied the tanh function as the final hidden layer nonlinearity, whilst in the original paper the tanh nonlinearity is applied after the final linear map. Both methods are used to constrain the rate of change of the hidden state; we do not know of a reason to prefer one over the other.\nNote that the final linear layer in the multilayer perceptron is reshaped to produce a matrix-valued output, of shape v × p. (As f̂θ is matrix-valued.) A matrix-vector multiplication with the logsignature then produces the vector field for the ODE solver.\nODE Solver All problems used the ‘rk4’ solver as implemented by torchdiffeq (Chen, 2018) version 0.0.1.\nComputing infrastructure All EigenWorms experiments were run on a computer equipped with three GeForce RTX 2080 Ti’s. All BIDMC experiments were run on a computed with two GeForce RTX 2080 Ti’s and two Quadro GP100’s.\nOptimiser All experiments used the Adam optimiser. The learning rate was initialised at 0.032 divided by batch size. The batch size used was 1024 for EigenWorms and 512 for the BIDMC problems. If the validation loss failed to decrease after 15 epochs the learning rate was reduced by a factor of 10. If the validation loss did not decrease after 60 epochs, training was terminated and the model was rolled back to the point at which it achieved the lowest loss on the validation set.\nHyperparameter selection Hyperparameters were selected to optimise the score of the NCDE1 model on the validation set. For each dataset the search was performed with a step size that meant the total number of hidden state updates was equal to 500, as this represented a good balance between length and speed that allowed us to complete the search in a reasonable time-frame. In particular, this was short enough that we could train using the non-adjoint training method which helped to speed this section up. The hyperparameters that were considered were:\n• Hidden dimension: [16, 32, 64] - The dimension of the hidden state Zt. • Number of layers: [2, 3, 4] - The number of hidden state layers. • Hidden hidden multiplier: [1, 2, 3] - Multiplication factor for the hidden hidden state, this\nbeing the ‘Hidden layer k’ in figure 6. The dimension of each of these ‘hidden hidden’ layers with be this value multiplied by ‘Hidden dimension’.\nWe ran each of these 27 total combinations for every dataset and the parameters that corresponded were used as the parameters when training over the full depth and step grid. 
The full results from the hyperparameter search are listed in tables (3, 4) with bolded values to show which values were eventually selected." }, { "heading": "D EXPERIMENTAL RESULTS", "text": "Here we include the full breakdown of all experimental results. Tables 5 and 6 include all results from the EigenWorms and BIDMC datasets respectively." } ]
2020
null
SP:e3e7028a84d8a272b7714e91bc08e67af40152c1
[ "The paper addresses the problem of preprocessing the data in a way that the predictions of a learning task will be counterfactually fair. The counterfactual fairness definition is borrowed from that of (Kusner et al., 2017). The authors propose ortogonaliza tion and marginal distribution mapping so as to achieve counterfactual fairness. They test their proposed approach on synthetic and real data." ]
Machine learning has become more important in real-life decision-making but people are concerned about the ethical problems it may bring when used improperly. Recent work brings the discussion of machine learning fairness into the causal framework and elaborates on the concept of Counterfactual Fairness. In this paper, we develop the Fair Learning through dAta Preprocessing (FLAP) algorithm to learn counterfactually fair decisions from biased training data and formalize the conditions where different data preprocessing procedures should be used to guarantee counterfactual fairness. We also show that Counterfactual Fairness is equivalent to the conditional independence of the decisions and the sensitive attributes given the processed non-sensitive attributes, which enables us to detect discrimination in the original decision using the processed data. The performance of our algorithm is illustrated using simulated data and real-world applications.
[]
[ { "authors": [ "Ifeoma Ajunwa", "Carlos E Scheidegger", "Suresh Venkatasubramanian" ], "title": "Hiring by algorithm: predicting and preventing disparate impact. Presented at the Yale Law School Information Society Project conference Unlocking the Black Box: The Promise and Limits of Algorithmic Accountability in the Professions, 2016", "venue": null, "year": 2016 }, { "authors": [ "Julia Angwin", "Jeff Larson" ], "title": "Bias in criminal risk scores is mathematically inevitable, researchers say", "venue": "Propublica,", "year": 2016 }, { "authors": [ "Tim Brennan", "William Dieterich", "Beate Ehret" ], "title": "Evaluating the predictive validity of the compas risk and needs assessment system", "venue": "Criminal Justice and Behavior,", "year": 2009 }, { "authors": [ "Alexandra Chouldechova" ], "title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", "venue": "Big data,", "year": 2017 }, { "authors": [ "Simon DeDeo" ], "title": "Wrong side of the tracks: Big data and protected categories", "venue": "arXiv preprint arXiv:1412.4643,", "year": 2014 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Proceedings of the 3rd innovations in theoretical computer science conference,", "year": 2012 }, { "authors": [ "Elizabeth Dwoskin" ], "title": "How social bias creeps into web technology", "venue": "The Wall Street Journal,", "year": 2015 }, { "authors": [ "Andreas Fuster", "Paul Goldsmith-Pinkham", "Tarun Ramadorai", "Ansgar Walther" ], "title": "Predictably unequal? the effects of machine learning on credit markets. 2018", "venue": "Available at SSRN: https://ssrn.com/abstract=3072038 or http://dx.doi.org/10", "year": 2038 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aria Khademi", "Sanghack Lee", "David Foley", "Vasant Honavar" ], "title": "Fairness in algorithmic decision making: An excursion through the lens of causality", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Niki Kilbertus", "Mateo Rojas Carulla", "Giambattista Parascandolo", "Moritz Hardt", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Avoiding discrimination through causal reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Matt J Kusner", "Joshua Loftus", "Chris Russell", "Ricardo Silva" ], "title": "Counterfactual fairness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Razieh Nabi", "Ilya Shpitser" ], "title": "Fair inference on outcomes", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 1931 }, { "authors": [ "Judea Pearl" ], "title": "Causal inference in statistics: An overview", "venue": "Statistics surveys,", "year": 2009 }, { "authors": [ "Lyn C Thomas" ], "title": "Consumer credit models: pricing, profit and portfolios", "venue": "OUP Oxford,", "year": 2009 }, { "authors": [ "Xueqin Wang", "Wenliang Pan", "Wenhao Hu", "Yuan Tian", "Heping Zhang" ], "title": "Conditional distance correlation", "venue": "Journal of the American Statistical Association,", "year": 2015 }, { "authors": [ "Yixin Wang", "Dhanya Sridhar", "David M Blei" ], "title": "Equal opportunity and affirmative action via counterfactual 
predictions", "venue": "arXiv preprint arXiv:1905.10870,", "year": 2019 }, { "authors": [ "Austin Waters", "Risto Miikkulainen" ], "title": "Grade: Machine learning support for graduate admissions", "venue": "AI Magazine,", "year": 2014 }, { "authors": [ "Yongkai Wu", "Lu Zhang", "Xintao Wu" ], "title": "Counterfactual fairness: Unidentification, bound and algorithm", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yongkai Wu", "Lu Zhang", "Xintao Wu", "Hanghang Tong" ], "title": "Pc-fairness: A unified framework for measuring causality-based fairness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Samuel Yeom", "Michael Carl Tschantz" ], "title": "Discriminative but not discriminatory: A comparison of fairness definitions under different worldviews", "venue": "arXiv preprint arXiv:1808.08619,", "year": 2018 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Junzhe Zhang", "Elias Bareinboim" ], "title": "Equality of opportunity in classification: A causal approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Junzhe Zhang", "Elias Bareinboim" ], "title": "Fairness in decision-making—the causal explanation formula", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Lu Zhang", "Yongkai Wu", "Xintao Wu" ], "title": "A causal framework for discovering and removing direct and indirect discrimination", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rapid popularization of machine learning methods and the growing availability of personal data have enabled decision-makers from various fields such as graduate admission (Waters & Miikkulainen, 2014), hiring (Ajunwa et al., 2016), credit scoring (Thomas, 2009), and criminal justice (Brennan et al., 2009) to make data-driven decisions efficiently. However, the community and the authorities have also raised concern that these automatically learned decisions may inherit the historical bias and discrimination from the training data and would cause serious ethical problems when used in practice (Nature Editorial, 2016; Angwin & Larson, 2016; Dwoskin, 2015; Executive Office of the President et al., 2016).\nConsider a training dataset D consisting of sensitive attributes S such as gender and race, nonsensitive attributes A and decisions Y . If the historical decisions Y are not fair across the sensitive groups, a powerful machine learning algorithm will capture this pattern of bias and yield learned decisions Ŷ that mimic the preference of the historical decision-maker, and it is often the case that the more discriminative an algorithm is, the more discriminatory it might be.\nWhile researchers agree that methods should be developed to learn fair decisions, opinions vary on the quantitative definition of fairness. In general, researchers use either the observational or counterfactual approaches to formalize the concept of fairness. The observational approaches often describe fairness with metrics of the observable data and predicted decisions (Hardt et al., 2016; Chouldechova, 2017; Yeom & Tschantz, 2018). For example, Demographic Parity (DP) or Group Fairness (Zemel et al., 2013; Khademi et al., 2019) considers the learned decision Ŷ to be fair if it has the same distribution for different sensitive groups, i.e., P (Ŷ |S = s) = P (Ŷ |S = s′). The Individual Fairness (IF) definition (Dwork et al., 2012) views fairness as treating similar individuals similarly, which means the distance between Ŷ (si, ai) and Ŷ (sj , aj) should be small if individuals i and j are similar.\nThe other branch of fairness and/or discrimination definitions are built upon the causal framework of Pearl (2009a), such as direct/indirect discrimination (Zhang et al., 2017; Nabi & Shpitser, 2018), path-specific effect (Wu et al., 2019b), counterfactual error rate (Zhang & Bareinboim, 2018a) and counterfactual fairness (Kusner et al., 2017; Wang et al., 2019; Wu et al., 2019a). These definitions often involve the notion of counterfactuals, which means what the attributes or decision would be\nif an individual were in a different sensitive group. With the help of the potential outcome concept, the measuring of fairness is no longer restricted to the observable quantities (Kilbertus et al., 2017; Zhang & Bareinboim, 2018b). For example, the Equal Opportunity (EO) definition Wang et al. (2019) has the same idea as IF but it can directly compare the actual and counterfactual decisions of the same individual instead of the actual decisions of two similar individuals. The Counterfactual Fairness (CF) definition (Kusner et al., 2017) or equivalently, the Affirmative Action (AA) definition (Wang et al., 2019) goes one step further than EO and derives the counterfactual decisions from the counterfactual non-sensitive attributes. We adopt CF as our definition of fairness and it is formally described in Section 2. 
We believe causal reasoning is the key to fair decisions as DeDeo (2014) pointed out that even the most successful algorithms would fail to make fair judgments due to the lack of causal reasoning ability.\nFor the observational definitions, fair decisions can be learned by solving optimization problems, either adding the fairness condition as a constraint (Dwork et al., 2012) or directly optimize the fairness metric as an object (Zemel et al., 2013). When using the counterfactual definitions, however, an approximation of the causal model or the counterfactuals is often needed since the counterfactuals are unobservable. In the FairLearning algorithm proposed by Kusner et al. (2017), the unobserved parts of the graphical causal model are sampled using the Markov chain Monte Carlo method. Then they use only the non-descendants of S to learn the decision, which ensures CF but will have a low prediction accuracy. In Wang et al. (2019), the counterfactual of A had S been s′ is imputed as the sum of the counterfactual group mean E(A|S = s′) and the residuals from the original group A− E(A|S = s). As we discuss later, this approach would only work when a strong assumption of the relationship between A and S is satisfied." }, { "heading": "1.1 CONTRIBUTIONS", "text": "We develop the Fair Learning through dAta Preprocessing (FLAP) algorithm to learn counterfactually fair decisions from biased training data. While current literature is vague about the assumptions needed for their algorithms to achieve fairness, we formalize the weak and strong conditions where different data preprocessing procedures should be used to guarantee CF and prove the results under the causal framework of Pearl (2009a). We show that our algorithm can predict fairer decisions with similar accuracy when compared with other counterfactual fair learning algorithms using three simulated datasets and three real-world applications, including the loan approval data from a fintech company, the adult income data, and the COMPAS recidivism data.\nOn the other hand, the processed data also enable us to detect discrimination in the original decision. We prove that CF is equivalent to the conditional independence of the decisions and the sensitive attributes given the processed non-sensitive attributes under certain conditions. Therefore any wellestablished conditional independence tests can be used to test CF with the processed data. To our knowledge, it is the first time that a formal statistical test for CF is proposed. We illustrate the idea using the Conditional Distance Correlation test (Wang et al., 2015) in our simulation and test the fairness of the decisions in the loan approval data using a parametric test." }, { "heading": "2 CAUSAL MODEL AND COUNTERFACTUAL FAIRNESS", "text": "For the discussion below, we consider the sensitive attributes S ∈ S to be categorical, which is a reasonable restriction for the commonly discussed sensitive information such as race and gender. The non-sensitive attributes A ∈ A ⊆ Rd, and the decision Y is binary as admit or not in graduate admission, hire or not in the hiring process, approve or not in loan assessment.\nTo bring the discussion of fairness into the framework of causal inference, we begin by constructing the Structural Causal Model (SCM) for the data. 
As described in Pearl (2009b), an SCM M consists of a set of exogenous variables U, a set of endogenous variables V, and F, a set of functions that assign a value to each endogenous variable given its parents in V and the exogenous variables U. In our case (Figure 1), we consider V = {S, A, Y, Ŷ}, where {S, A, Y} are the observed data and Ŷ is the prediction of Y made based on S and A. The only exogenous variable affecting Ŷ is a Uniform(0, 1) random variable U_Ŷ, so that we can conveniently express the value of Ŷ with a structural equation. We assume that U_S, U_A, and U_Y, the exogenous variables that affect S, A, and Y respectively, are independent of each other. The structural equations on the right side of Figure 1 are described with the functions in F, one for each component in V. Here we express f_Ŷ as an indicator function so that Ŷ is a Bernoulli random variable that takes value one with probability p(S, A). In general, p(s, a) could be any function that maps S × A to [0, 1], but we are more interested in functions that result in a fair decision, the details of which are discussed in Section 3. It can be seen that the subset of exogenous variables {U_S, U_A, U_Y} characterizes everything we should know about a unit. Any two units with the same realization of these variables will have the same behavior and outcome, irrespective of the other differences in their identities.

Here we give a simplified loan approval model as a running example to help understand the SCM under consideration. Example 1. A bank asks each loan applicant for her/his race S and annual income A to decide whether to approve the application (Y = 1) or not (Y = 0). There are two races in the population of applicants: S = 1 represents the advantageous group, and S = 0 the disadvantageous one. Letting U_S ∼ Uniform(0, 1), we generate S = 1{U_S < 0.7}. The annual income is log-normally distributed for each race group, and its scale and location parameters may depend on race:

$A = c_1 \exp\{c_2 + \lambda_a S + c_3 \sigma_a^S U_A\},$

where U_A is a standard normal random variable, c_1, c_3 > 0 and c_2 are constants that affect the median and spread of the population income, λ_a decides the difference in mean log income between the two race groups, and σ_a > 0 determines the ratio of the standard deviations of the log incomes. The decision by the bank can be simulated from a logistic model:

$Y = 1\{U_Y < \mathrm{expit}(\beta_0 + \beta_a A + \beta_s S)\},$

where U_Y ∼ Uniform(0, 1) and expit(u) = (1 + e^{-u})^{-1}.

In this example, β_s characterizes the direct effect of the sensitive attribute on the decision: when β_s > 0, applications from the advantageous group are more likely to be approved by the bank when holding the income fixed. On the other hand, λ_a partly describes the indirect effect, because when both λ_a and β_a are positive, the advantageous group will have a higher income than the other group on average and thus be favored by the bank even if β_s = 0. It is worth noting that, apart from the difference in the mean, differences in higher moments could also cause unfairness indirectly, as alluded to in Fuster et al. (2018). In general, if there are any differences in the distribution of A across the categories of S, a decision based on A might be unfair. However, the indirect effect caused by differences in the higher moments of A can be case dependent and thus harder to interpret. A simulation sketch of this data-generating process is given below. 
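For concreteness, the data-generating process of Example 1 can be simulated directly from its structural equations. The sketch below uses the parameter values from Appendix B (c_1 = 0.01, c_2 = 4, c_3 = 0.2, β_0 = -1, β_a = 2, β_s = 1, λ_a = 0.5); the value σ_a = 2.0 is an arbitrary illustration:

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def simulate_example1(n, c1, c2, c3, lam_a, sigma_a, beta0, beta_a, beta_s, seed=0):
    """Draw (S, A, Y) from the structural equations of Example 1."""
    rng = np.random.default_rng(seed)
    u_s, u_y = rng.uniform(size=n), rng.uniform(size=n)
    u_a = rng.standard_normal(n)
    s = (u_s < 0.7).astype(int)                              # S = 1{U_S < 0.7}
    a = c1 * np.exp(c2 + lam_a * s + c3 * sigma_a**s * u_a)  # log-normal income
    y = (u_y < expit(beta0 + beta_a * a + beta_s * s)).astype(int)
    return s, a, y

# Appendix B values, with sigma_a > 1 so that the advantaged group's log income
# is more dispersed -- the higher-moment source of indirect unfairness above.
s, a, y = simulate_example1(10_000, c1=0.01, c2=4, c3=0.2, lam_a=0.5,
                            sigma_a=2.0, beta0=-1, beta_a=2, beta_s=1)
print(y[s == 1].mean(), y[s == 0].mean())  # approval rates by race group
```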
In our case, σa > 1 will lead to a higher average income and hence higher approval probability on average for the advantageous group since the income distribution is right-skewed.\nWith the SCM in hand, we are ready to define the causal quantity we are interested in. Since most sensitive attributes, such as gender and race, cannot be altered in experiments, we will look into the counterfactuals, namely, what the results Y would be had S been different from the observed facts. This quantity is expressed as Ys(U) had S been s for a random unit with exogenous variables U sampled from the population. Define Ms to be the modified SCM from M (Figure 1) with the equation for S replaced with S = s. Then for any realization U = u, the unit level counterfactuals Ys(u) can be calculated from Ms. Similarly, we can define Ŷs(U) and Ŷs(u) as the counterfactual predicted decision and its realization. The counterfactual fairness can then be defined on both the decision and the prediction based on the counterfactual result. Here we denote Y as a placeholder for either Y or Ŷ . Definition 1. Counterfactual Fairness. Given a new pair of attributes (s∗, a∗), a (predicted) decision Y is counterfactually fair if for any s′ ∈ S,\nYs′(U)|{S = s∗, A = a∗} d = Ys∗(U)|{S = s∗, A = a∗}.\nIn other words, the conditional distribution of the counterfactual result should not depend on the sensitive attributes. It should be noted that there are two stages in evaluating the conditional counterfactuals. The first is updating the conditional distribution ofU . Take the decision Y from Example 1, if s∗ = 0, then US |{S = s∗, A = a∗} is from Uniform(0.7, 1) and UA|{S = s∗, A = a∗} is a constant (log(a∗/c1)− c2)/c3, but UY |{S = s∗, A = a∗} is still a Uniform(0, 1) random variable since UY is independent of S and A from the SCM. The next stage is deriving the conditional distribution of the counterfactuals from the structural equations of Ms and the conditional distribution of U . Continuing with our example, Y1(U)|{S = 0, A = a∗} would be equal in distribution to\nfY (1, fA(1, UA), UY )|{S = 0, A = a∗} d =fY (1, fA(1, (log(a\n∗/c1)− c2)/c3), UY ) d =1{UY < expit(β0 + βac1(a∗/c1)σa exp{λa + (1− σa)c2}+ βs)}\nand Y0(U)|{S = 0, A = a∗} d = 1{UY < expit(β0 + βaa∗)}. Thus the bank’s decision Y would be counterfactually fair if σa = 1, λa = 0 and βs = 0." }, { "heading": "3 PREPROCESSING, LEARNING, AND TESTING", "text": "Define a preprocessing procedure PD(s, a) : S × A → A′ to be a function that maps attributes (s, a) to the processed attributes a′ given the training dataD. Here we consider two such procedures. Denote Pn(S = s) as the empirical p.m.f. of S and En(A|S = s) as the empirical conditional mean of A given S learned from data D. Definition 2 (Orthogonalization). An orthogonalization procedurePDO is a preprocessing procedure such that\nPDO (s∗, a∗) = ∑ s â(s)Pn(S = s),\nwhere â(s) = a∗ − En(A|S = s∗) + En(A|S = s),∀s ∈ S.\nIt is easy to see thatPDO (s∗, a∗) = a∗−En(A|S = s∗)+En(A) is a one-to-one function of a∗ for any fixed s∗. Denote F̂js(x) = Pn(Aj ≤ x|S = s) as the empirical marginal cumulative distribution function (CDF) of the jth element of the non-sensitive attributes given the sensitive attribute S = s. Define its inverse as\nF̂−1js (z) = inf{x : Pn(Aj ≤ x|S = s) ≥ z}. (3.1)\nDefinition 3 (Marginal Distribution Mapping). 
A marginal distribution mapping P^D_M is a preprocessing procedure such that

$P^D_M(s^*, a^*) = \sum_s \hat a(s)\,\mathbb{P}_n(S = s),$

where the jth element of $\hat a(s)$ is $[\hat a(s)]_j = \hat F_{js}^{-1}(\hat F_{js^*}([a^*]_j))$ for j = 1, ..., d.

Let P, P_O, and P_M denote the population-level preprocessing procedures corresponding to P^D, P^D_O, and P^D_M, respectively. It is obvious that $P_O(s^*, a^*) = a^* - E(A|S = s^*) + E(A)$ is still a one-to-one function of a* for any fixed s*, and the jth element of $P_M(s^*, a^*)$ is

$[P_M(s^*, a^*)]_j = \sum_s F_{js}^{-1}(F_{js^*}([a^*]_j))\,\mathbb{P}(S = s),$

where F_{js} is the marginal CDF of the jth element of A given S = s, and F_{js}^{-1} is defined similarly to (3.1) but with P_n replaced by P. It can be seen that if A_j is a discrete variable, then F_{js}^{-1}(F_{js*}(x)) is strictly increasing for s = s*; and if A_j is a continuous variable, then F_{js}^{-1}(F_{js*}(x)) may not be strictly increasing when F_{js*}(x) is constant on some interval of x. Therefore P_M(s*, a*) is a one-to-one function of a* for any fixed s* only when the marginal CDF of each continuous element of A given S = s* is strictly increasing." }, { "heading": "3.1 FAIR LEARNING ALGORITHM", "text": "Besides preprocessing procedures, we also have different choices of learners. A Fairness-Through-Unawareness (FTU) predictor f_FTU(a) only uses the non-sensitive attributes A to predict the conditional mean of Y. A Machine Learning (ML) predictor f_ML(s, a) uses both the sensitive and non-sensitive attributes to predict E(Y|S, A). An Averaged Machine Learning (AML) predictor is $f_{AML}(a) = \sum_s f_{ML}(s, a)\,\mathbb{P}_n(S = s)$. Note that we still need to train the ML predictor to obtain the AML predictor, but it only needs the non-sensitive attributes as its input when making a prediction, since the sensitive attributes are averaged out. Algorithm 1 can use any learner f ∈ {f : A → [0, 1]} to learn the decisions from the processed data, and we consider the FTU and AML learners in our numerical studies.

Algorithm 1: Fair Learning through dAta Preprocessing (FLAP)
Input: Training data D, preprocessing procedure P^D, learner f, test attributes (s, a).
1. for (s_i, a_i, y_i) in D do
2.   a'_i = P^D(s_i, a_i)
3. end
4. Create the processed data D' = {(s_i, a'_i, y_i)}_{i=1}^n
5. Learn the predictor f from D'
6. Calculate a' = P^D(s, a)
7. Draw Ŷ from Bernoulli(f(a'))
Output: Ŷ

Apart from the structural assumptions made in Figure 1, extra conditions on the structural equation f_A(s, u_A) must be satisfied for the preprocessing method to work.

Condition 1 (Strong non-sensitive). The partial derivative $\partial f_A(s, u_A)/\partial u_A$ does not involve s.

Condition 2 (Weak non-sensitive). The sign of $\partial f_{A_j}(s, u_A)/\partial u_A$ does not change with s, for all u_A and all j = 1, ..., d.

These two conditions describe the relationship between the sensitive and non-sensitive attributes. Condition 2 is weaker than Condition 1. For example, an additive model f_A(s, u_A) = β_0 + β_1 s + β_2 u_A satisfies both conditions, while an interaction model f_A(s, u_A) = β_0 + β_1 s + β_2 u_A + β_3 s u_A does not satisfy Condition 1 but will satisfy Condition 2 if β_2 + β_3 s is greater than (or less than, or equal to) zero for all s. In our running example, $\partial f_A(s, u_A)/\partial u_A = c_1 c_3 \sigma_a^s \exp\{c_2 + \lambda_a s + c_3 \sigma_a^s u_A\} > 0$ for s = 0, 1, so it meets Condition 2 but not Condition 1. We prove in the following theorem that these conditions, together with the SCM, are sufficient for Algorithm 1 to generate counterfactually fair decisions.

Theorem 1. Let Ŷ be the output from Algorithm 1, i.e., $1\{U_{\hat Y} < f(P^D(s, a))\}$.

1. If the procedure P^D_O is adopted, Ŷ is counterfactually fair under Condition 1.

2.
If the procedure PDM is adopted, Ŷ is counterfactually fair under Condition 2.\nWe prove Theorem 1 in Appendix A. The intuition is that the FLAP algorithm learns the decision from processed data only, and the processed data contain no sensitive information since the preprocessing procedure can remove A’s dependence on S under the non-sensitive condition." }, { "heading": "3.2 TEST FOR COUNTERFACTUAL FAIRNESS", "text": "Data preprocessing not only allows us to learn a counterfactually fair decision but also enables us to test if the decisions made in the original data are fair. When Condition 1 holds, we can use the data processed by the orthogonalization procedure to test fairness. When the strong condition does not hold but Condition 2 is satisfied, we need an extra condition to utilize the marginal distribution mapping procedure for fairness testing.\nCondition 3. The conditional marginal CDF Fjs(x) is strictly increasing for all such j that Aj is continuous and all s ∈ S.\nIn other words, each non-sensitive attributes Aj should be either a discrete random variable or a continuous one with non-zero density on R. This condition ensures that PM (s∗, a∗) is a one-to-one function as discussed earlier. With these conditions, we can establish the equivalence between CF and the conditional independence of decision and sensitive information given the processed nonsensitive information. Theorem 2. Consider the original decision Y :\n1. Under Condition 1, Y is counterfactually fair if and only if Y⊥S|PO(S,A).\n2. Under Conditions 2 and 3, Y is counterfactually fair if and only if Y⊥S|PM (S,A).\nIts proof is in Appendix A. Theorem 2 allows us to test CF using any well-established conditional independence test. In practice, given a decision dataset D = (si, ai, yi)ni=1, we can obtain the empirical processed non-sensitive attributes PD(si, ai) and test if Y⊥S|PD(S,A). If the p-value of the test is small enough for us to reject the conditional independence hypothesis, then the original decision is probably biased and algorithms such as FLAP should be used to learn fair decisions." }, { "heading": "4 NUMERICAL STUDIES", "text": "In this section, we compare the decisions made by different algorithms in terms of fairness and accuracy using simulated and real data, and also investigate the empirical performance of the fairness test using simulated data with small sample sizes. We consider three cases for generating the simulation data. The first one is Example 1 and the second one is a multivariate extension of it where we introduce one more sensitive group and include the education years of the loan applicants as another non-sensitive attribute and let their annual income depend on it. The third example is a replica of the admission example constructed by Wang et al. (2019). The details of these examples and the parameters chosen in the simulation are presented in Appendix B.\nAs discussed before, Condition 2 is satisfied in Example 1 but Condition 1 is not. Moreover, both Examples 2 and 3 do not satisfy either condition in general due to the cutoff in the value of their non-sensitive attributes, and hence neither of the proposed preprocessing methods can achieve CF in theory. However, the weaker Condition 2 will hold in Example 2 when the mean education years of the three sensitive groups are the same, in which case the marginal distribution mapping method should work." }, { "heading": "4.1 FAIRNESS EVALUATION", "text": "We compare our FLAP algorithm with\n1. 
ML: the machine learning method using both sensitive and non-sensitive attributes without preprocessing, which is a logistic regression of Y on S and A;

2. FTU: the Fairness-Through-Unawareness method, which fits a logistic model of Y on the non-sensitive attributes A alone without preprocessing;

3. FL: the FairLearning algorithm proposed by Kusner et al. (2017);

4. AA: the Affirmative Action algorithm proposed by Wang et al. (2019).

All these methods can output a predicted score p given the training data D and test attributes (s, a), denoted p(s, a; D), and draw the random decision Ŷ from Bernoulli(p(s, a; D)). For the ML method, p(s, a; D) = f_ML(s, a); for the FTU method, it is f_FTU(a). We denote the predicted scores of the FairLearning and AA algorithms as f_FL(s, a; D) and f_AA(s, a; D), respectively. For our FLAP method, we use the marginal distribution mapping procedure and try both the AML and FTU learners described in Section 3, naming the methods FLAP-1 and FLAP-2. Their predicted scores are f_AML(P^D_M(s, a)) and f_FTU(P^D_M(s, a)), respectively. We mainly use the Mean Absolute Error (MAE) of the predicted score on the test set to measure the prediction performance, and also include the area under the ROC curve and the average precision of the prediction in Appendix B for completeness. All these metrics show similar results about the prediction performance. The metric for measuring counterfactual fairness (CF-metric) is defined as

$\max_{r,t \in \mathcal{S}} \frac{1}{N_{\mathrm{test}}} \sum_{i=1}^{N_{\mathrm{test}}} \left| p(r, \hat a^D_M(r, s_i, a_i); D) - p(t, \hat a^D_M(t, s_i, a_i); D) \right|,$

where $\hat a^D_M(s, s^*, a^*)$ is defined as $\hat a(s)$ in Definition 3; a computational sketch of this metric is given below. Note that the CF-metric should be zero when decisions are CF under Condition 2. In real-world applications where the condition cannot be verified, a CF decision is not guaranteed to have a zero CF-metric, although we expect the CF-metric to be lower for fairer decisions in general. This definition differs from the AA-metric proposed by Wang et al. (2019) in two ways. First, it allows us to consider more than two sensitive groups by taking the maximum of the pairwise differences of predicted scores, but it reduces to the AA-metric for two sensitive groups. Second, we use the marginal distribution mapping method to compute the counterfactual non-sensitive attributes $\hat a^D_M(s, s^*, a^*)$ had the unit been in a different sensitive group s. This ensures that all the derived counterfactual attributes are within the range of observed attribute values. In comparison, Wang et al. (2019) use the orthogonalization method to compute the counterfactual attributes, and thus a female student with a test score of 0.98 would have a counterfactual score of 1.48 had she been male, if the male mean test score is 0.5 higher than the female mean. This out-of-range counterfactual score is unreasonable and problematic when used as the input of the score prediction function p.

For Example 1, we hold the other parameters fixed while increasing σ_a from 1 to 2.8 to see how the difference in the variation of the non-sensitive attribute between sensitive groups affects fairness. As expected, the AA algorithm, which essentially uses the orthogonalization method, cannot achieve CF since Condition 1 is not met. However, both FLAP algorithms' CF-metrics are zero when using the marginal distribution mapping preprocessing (Figure 2a).

Wang et al. (2019) showed that the AA algorithm can achieve a zero AA-metric in Example 3, but it does not satisfy either of the non-sensitive conditions for achieving CF. 
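As referenced above, here is a sketch of how the CF-metric can be computed. The callables `p` and `a_hat` are hypothetical stand-ins for a fitted score function p(·, ·; D) and the counterfactual attribute map â^D_M of Definition 3:

```python
import itertools
import numpy as np

def cf_metric(p, a_hat, groups, s_test, a_test):
    """CF-metric: max over group pairs (r, t) of the mean absolute difference
    between scores computed from the counterfactual attributes a_hat(r, s_i, a_i)
    and a_hat(t, s_i, a_i). |x - y| is symmetric, so unordered pairs suffice."""
    worst = 0.0
    for r, t in itertools.combinations(groups, 2):
        diffs = [abs(p(r, a_hat(r, s, a)) - p(t, a_hat(t, s, a)))
                 for s, a in zip(s_test, a_test)]
        worst = max(worst, float(np.mean(diffs)))
    return worst
```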
As shown in Figure 2b, all algorithms we consider cannot achieve CF, but the FLAP algorithms still have the lowest CF-metric. The results of Example 2 are shown in Appendix B and there is no significant difference between the MAE of the AA and FLAP algorithms in all examples. In general, we expect fairer predictions to have higher MAEs since they correct the discriminatory bias of the original decisions." }, { "heading": "4.2 FAIRNESS TEST", "text": "The Conditional Distance Correlation (CDC) test (Wang et al., 2015) is a well-established nonparametric test for conditional independence. We use it here to illustrate the performance of the fairness test with the three simulated examples. For each example, we use different combinations of parameters to obtain simulated datasets with different fairness levels, which are measured by the CFmetric. A CDC test with a significance level of 0.05 is then conducted to test if Y⊥S|PD(S,A) for each dataset. The simulation-test process is repeated 1000 times for each combination of parameters to estimate the power of the test, namely the probability of rejecting the null hypothesis that the decisions are counterfactually fair. The results are summarized in Figure 3.\nWhen the decisions are generated fair, which are shown as the points with CF-metrics equal to zero, the type I error rate is around 0.05 for all examples. The power of the test grows as we make the decisions more unfair, or increase the sample size." }, { "heading": "5 REAL DATA ANALYSIS", "text": "We apply our methods to a loan application dataset from a fintech company, the adult income dataset from UCI Machine Learning Repository1 and the COMPAS recidivism data from ProPublica2 (Angwin & Larson, 2016). Here we present our analysis of the loan application data, and the results from the other two publicly available datasets are shown in Appendix C since the conclusions are similar.\nIn the loan application case, the fintech lender aims to provide short-term credit to young salaried professionals by using their mobile and social footprints to determine their creditworthiness even when a credit history may not be available. To get a loan, a customer has to download the lending app, submit all the requisite details and documentation, and give permission to the lender to gather additional information from her/his smartphone, such as the number of apps, number of calls, and SMSs, and number of contacts and social connections. We obtained data from the lending firm for all loans granted from February 2016 to November 2018. The decisions Y are whether or not the lender approves the loan applications. The attributes are applicants’ gender, age, salary, and other information collected from their smartphones. Both gender and age are regarded as sensitive information here and we find that the decisions are made in favor of the senior and female applicants. Since we can only deal with categorical sensitive attributes, we divide the applicants into two age groups by the lower quartile of the age distribution and create a categorical variable S ∈ {0, 1, 2, 3} to denote the group of the applicants: female younger than 28; male younger than 28; female older than 28; and male older than 28. The effective sample size after removing missing values is 203,656.\nNon-parametric conditional independence tests will not be efficient for this real case due to the large sample size. 
Therefore we test the conditional independence of Y and S given PDM (S,A) by fitting a simple logistic model for Y with S and PDM (S,A) as the explanatory variables and testing if the coefficient of S is significantly different from zero. The p-value of the F-test is almost zero and indicates that the decisions are unfair for applicants in different groups. When other attributes are fixed to their means, the predicted approval probabilities of the four groups from the logistic model are 0.924 (young female), 0.899 (young male), 0.948 (senior female), and 0.946 (senior male), also indicating that the decisions are most in favor of the senior and female applicants.\nWe then separate the data into a training set of 193,656 samples and a test set of 10,000 samples. The training dataset is used to learn the decisions with different algorithms and the test dataset is used to evaluate the CF-metric and MAE. The results are summarized in Table 1. Our FLAP algorithms have lower CF-metrics compared with other algorithms and their MAEs are only greater than the ML method. Among the FLAP algorithms, the two using the marginal distribution mapping preprocessing procedure have better CF-metric and similar MAE. The FLAP algorithm using the FTU learner (FLAP-2) performs slightly better than the one using the AML learner (FLAP-1). Note that in real-world applications, fairer decisions may not have higher MAEs as expected in the simulation studies because we do not have access to all the variables possessed by the original decision-maker. When the original decisions depend on additional information, the FLAP and other\n1https://archive.ics.uci.edu/ml/machine-learning-databases/adult/ 2https://github.com/propublica/compas-analysis\nfair learning methods may yield predictions closer to or further away from the original decisions, and thus leading to lower or higher MAEs." }, { "heading": "6 DISCUSSION", "text": "We propose two data preprocessing procedures and the FLAP algorithm to make counterfactually fair decisions. The algorithm is general enough so that any learning methods from logistic regression to neural networks can be used, and counterfactual fairness is guaranteed regardless of the learning methods. The orthogonalization procedure is faster and ensures counterfactually fair decisions when the strong non-sensitive condition is met. The marginal distribution mapping procedure is more complex but guarantees fairness under the weaker non-sensitive condition.\nWe also prove the equivalence between counterfactual fairness and the conditional independence of decisions and sensitive attributes given the processed non-sensitive attributes under the non-sensitive assumptions. We illustrate that the CDC test is reliable for testing counterfactual fairness when the sample size is small. When the size gets bigger, however, we need a more efficient testing method for the fairness test.\nIt is well understood but still worth noting that causal inference comes with strong assumptions, such as the SCM and the non-sensitive conditions in our case. Moreover, these assumptions are often unverifiable in general, although we may test some of them when only considering a restricted class of models. As the saying goes, “all models are wrong, but some are useful”. The FLAP method may require unverifiable assumptions in practice, but we make it general enough and easy to follow in the hope that this would encourage decision-makers to address the fairness issue with its help." 
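To complement the discussion, the parametric check used for the loan data in Section 5 can be sketched as follows. The paper tests whether the coefficients of S are significantly different from zero in a logistic model of Y on S and P^D_M(S, A); the sketch below uses a likelihood-ratio test as a close analogue of the reported F-test, with hypothetical arrays `y`, `s`, and `a_prime` standing in for the decisions, sensitive attributes, and processed attributes:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def parametric_cf_test(y, s, a_prime):
    """Test H0: Y is independent of S given the processed attributes P(S, A).

    Fits nested logistic models of Y, with and without one-hot-encoded S,
    and compares them with a likelihood-ratio test. A small p-value is
    evidence against counterfactual fairness of the original decisions.
    """
    s_dummies = pd.get_dummies(pd.Series(s), prefix="S", drop_first=True).astype(float)
    X0 = sm.add_constant(np.asarray(a_prime, dtype=float))
    X1 = np.column_stack([X0, s_dummies.to_numpy()])
    fit0 = sm.Logit(np.asarray(y), X0).fit(disp=0)
    fit1 = sm.Logit(np.asarray(y), X1).fit(disp=0)
    lr_stat = 2.0 * (fit1.llf - fit0.llf)
    return chi2.sf(lr_stat, df=s_dummies.shape[1])
```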
}, { "heading": "A PROOF OF MAIN RESULTS", "text": "A.1 PROOF OF THEOREM 1\nProof. We prove the theorem for a general class of learners {f : A → [0, 1]} that only take the non-sensitive attribute a as the input. Clearly, both fFTU or fAML belong to this class. We follow the Abduction-Action-Prediction steps in Theorem 7.1.7 Pearl (2009b) to evaluate the conditional expectation of Ŷs′(U) given the evidence S = s∗, A = a∗,\nE(Ŷs′(U)|S = s∗, A = a∗) = ∫ f(PD(s′, fA(s′, u)))pUA|S,A(u|S = s ∗, A = a∗)du,\nwhere pUA|S,A(u|s∗, a∗) denotes the conditional density of UA given S = s∗ and A = a∗. If PD(s′, fA(s′, u)) does not depend on s′, so will E(Ŷs′(U)|S = s∗, A = a∗) and we will have\nE(Ŷs′(U)|S = s∗, A = a∗) = E(Ŷs∗(U)|S = s∗, A = a∗). Note that PD(s′, fA(s′, u)) = ∑ s â(s)Pn(S = s) for both the preprocessing procedures we are considering. Therefore, it suffices to show that â(s) does not depend on s′.\nFirst, consider the Orthogonalization procedure PDO where â(s) = fA(s\n′, u)− En(A|S = s′) + En(A|S = s) = fA(s\n′, u)− E(A|S = s′) + En(A|S = s)− (En − E)(A|S = s′). Note that A|{S = s′} = fA(s′, UA) and the first order Taylor expansion of it gives\nE(A|S = s′) = E ( fA(s ′, u) + ∂\n∂u fA(s, u) ∣∣∣∣ s=s′,u=u′ (UA − u) )\n= fA(s ′, u) +\n∂\n∂u fA(s, u) ∣∣∣∣ s=s′,u=u′ E(UA − u)\nfor some u′ between u and UA. By Condition 1\nâ(s) = ∂\n∂u fA(s, u) ∣∣∣∣ s=s∗,u=u′ E(u− UA) + En(A|S = s) + oP(n)\nand thus it does not depend on s′.\nSecond, consider the Marginal Distribution Mapping procedure PDM . Let fAj (s, u) = eTj fA(s, u) where ej is a d-dimensional vector with the jth element being one and all other elements being zeros. The jth element of â(s) is [â(s)]j = F̂−1js (F̂js′(fAj (s\n′, u))) for j = 1, · · · , d. Again, the first order Taylor expansion of fAj (s ′, UA) gives\nF̂js′(fAj (s ′, u)) = Pn(Aj ≤ fAj (s′, u)|S = s′)\n= P(fAj (s′, UA) ≤ fAj (s′, u)) + (Pn − P)(Aj ≤ fAj (s′, u)|S = s′)\n= P ( fAj (s ′, u) + ∂\n∂u fAj (s, u) ∣∣∣∣ s=s′,u=u′ (UA − u) < fAj (s′, u) ) + oP(n)\nfor some u′ between u and UA. Under Condition 2,\nF̂js′(fAj (s ′, u)) = P\n( sign ( ∂\n∂u fAj (s, u) ∣∣∣∣ s=s∗,u=u′ ) (UA − u) < 0 ) + oP(n)\ndoes not depend on s′ and hence a(s) is a function of s and u alone.\nA.2 PROOF OF THEOREM 2\nProof. The steps of proving the two statements of Theorem 2 are similar. To remove redundancy, we use the notation P whenever the argument is true for both the preprocessing procedures PO and PM .\nFirst, we show that Y is counterfactually fair if Y⊥S|P(S,A). The posterior mean of the counterfactual Ys′(U) given S = s∗ and A = a∗ can be evaluated in two steps: first find the conditional distribution of U = {UA, US , UY }, and then calculate the conditional expectation of the counterfactuals from the SCM. Since the effect of US is blocked by setting S = s′ and UY is independent of S and A, only the distribution of UA will be affected by the given information and effect the counterfactuals Ys′(U).\nE(Ys′(U)|S = s∗, A = a∗) = ∫ E(fY (s′, fA(s′, u), UY )pUA|S,A(u|S = s ∗, A = a∗)du. (A.1)\nUnder the SCM, E(fY (s′, fA(s′, u), UY ) is the same as the expectation of the observed decision Y given the attributes S = s′, A = fA(s′, u). Therefore (A.1) is equal to∫\nE(Y |S = s′, A = fA(s′, u))pUA|S,A(u|S = s ∗, A = a∗)du (A.2)\n= ∫ E(Y |S = s′,P(S,A) = P(s′, fA(s′, u)))pUA|S,A(u|S = s ∗, A = a∗)du (A.3)\n= ∫ E(Y |P(S,A) = P(s′, fA(s′, u)))pUA|S,A(u|S = s ∗, A = a∗)du (A.4)\n= ∫ E(Y |P(S,A) = P(s∗, fA(s∗, u)))pUA|S,A(u|S = s ∗, A = a∗)du (A.5)\n=E(Ys∗(U)|S = s∗, A = a∗). 
(A.6) Equation (A.3) replaces the condition A = fA(s′, u) with P(S,A) = P(s′, fA(s′, u)) because PO(S,A) is a one-to-one function of A given S and PM (S,A) is also a one-to-one function of A given S under Condition 3. Equation (A.4) is due to the conditional independence of Y and S, and (A.5) uses the result that P(s′, fA(s′, u)) = P(s∗, fA(s∗, u)), which can be shown following the proof of Theorem 1. Repeat the steps (A.1) to (A.5) and we shall get the same result for E(Ys′(U)|S = s∗, A = a∗). Note that both Ys′(U) and Ys∗(U) are binary random variables, therefore the equivalence in expectation implies that\nYs′(U)|{S = s∗, A = a∗} d = Ys∗(U)|{S = s∗, A = a∗}.\nThe above result holds for any s′, s∗ ∈ S, so the definition of counterfactual fairness is satisfied. Next we show that Y⊥S|P(S,A) if Y is counterfactually fair. The counterfactual fairness of Y implies\nE[fY (s′, fA(s′, UA), UY )|S = s∗, A = a∗] =E[fY (s∗, fA(s∗, UA), UY )|S = s∗, A = a∗].\n(A.7)\nLet a′ = P(s∗, a∗), then\n(UA, UY )|{S = s∗, A = a∗} d = (UA, UY )|{S = s∗,P(s∗, A) = a′} (A.8)\nsince P(s∗, a∗) is a one-to-one function of a∗ for each s∗. Using the Bayesian formula, the posterior density of UA is\npUA|S,P(u|S = s ∗,P(s∗, A) = a′) (A.9)\n= pUA(u)P(S = s∗)pP|S,UA(a′|S = s∗, UA = u) P(S = s∗) ∫ pUA(u)pP|S,UA(a ′|S = s∗, UA = u)du , (A.10)\nwhere pUA|S,P(u|s∗, a′) denotes the conditional density of UA given S = s∗ and P(s∗, A) = a′, pUA(u) denotes the prior density of UA, and pP|S,UA(a\n′|S = s∗, UA = u) denotes the conditional density of P(s∗, A) given S = s∗ and UA = u. As a density function, (A.10) is proportional to its kernel pUA(u)pP|S,UA(a\n′|S = s∗, UA = u), which equals pUA(u)pP|S,UA(a′|S = s′, UA = u) because P(S,A) does not depend on S when UA is given as shown in the proof of Theorem 1. Repeating the steps and we can show that the posterior density of UA given S = s′,P(s′, A) = a′ is also proportional to pUA(u)pP|S,UA(a\n′|S = s′, UA = u). Together with the assumption in the SCM that UY is independent of S,P(S,A), we have\n(UA, UY )|{S = s′,P(s′, A) = a′} d = (UA, UY )|{S = s∗,P(s∗, A) = a′}. (A.11)\nThe intuition here is that if the processed non-sensitive data are equal, then they provide the same information about UA regardless of the sensitive information in the original data. Substituting the conditions {S = s∗, A = a∗} in (A.7) with the equivalent conditions in (A.8) and (A.11) gives\nE[fY (s′, fA(s′, UA), UY )|S = s′,P(s′, A) = a′] =E[fY (s∗, fA(s∗, UA), UY )|S = s∗,P(s∗, A) = a′].\n(A.12)\nUnder the SCM and structural equations defined in Figure 1, (A.12) implies\nE[Y |S = s′,P(S,A) = a′] = E[Y |S = s∗,P(S,A) = a′]. (A.13) Since (A.13) holds for any s′, s∗ ∈ S, it yields that\nE[Y |S,P(S,A)] = E[Y |P(S,A)] and hence Y⊥S|P(S,A) for binary Y ." }, { "heading": "B DETAILS OF SIMULATION EXAMPLES", "text": "Example 2. The bank now collects the race S, education year E and annual income A information from loan applicants. There are three possible race groups S = {0, 1, 2} and S = 1{US > 0.76} + 1{US > 0.92}, meaning that a random applicant could be from the majority race group (0) with probability 0.76, or from the minority group 1 or 2 with probability 0.16 or 0.08. Let UE be a standard normal random variable and µE = λe0 + 1{S = 1}λe1 + 1{S = 2}λe2, the education year is E = max{0, µE + 0.4µEUE}. Let µA = log(λa0 + 1{S = 1}λa1 + 1{S = 2}λa2), the annual income is\nA = exp{µA + 0.4µEUE + 0.1UA}. 
The decision of the bank is modeled as\nY = 1{UY < expit(β0 + 1{S = 1}β1 + 1{S = 2}β2 + βaA+ βeE)}.\nHere λe0, λe1, and λe2 decide the mean education year of the three race groups. λa0, λa1, and λa2 decide the median annual income. The annual income and the education year are positively correlated through UE . β1 and β2 characterize the direct effect of the race information while the λ’s indicate the indirect effect together with βe and βa. In this example, neither of Conditions 1 and 2 holds if βe and λe1 and/or λe2 are not zero due to the maximum operator in fE . Even if λe1 = λe2 = 0, only the weaker Condition 2 will hold due to the same reason for Example 1.\nExample 3 is a replica of the admission example constructed by Wang et al. (2019). Example 3. The admission committee of a university collects the gender S and test score T information from applicants. The gender is simulated from S = 1{US < 0.5}, where S = 1 for male and S = 0 for female. Let UT ∼ Uniform(0, 1) and we generate the test score as\nT = min{max{0, λS + UT }, 1}. The decision of the committee is\nY = 1{UY < expit(β0 + βtT + βsS)}.\nIt is worth noting that both Examples 2 and 3 do not satisfy either of Conditions 1 and 2 due to the cutoff in the value of their non-sensitive attributes education years and test score. Take Example 3, there will be a positive probability (λ to be exact) of seeing male students with test score 1 if λ > 0. Check that\n∂\n∂uT fT (s, uT ) = { 1, 0 < uT < 1− λs 0, 1− λs < uT < 1\nand we can see that its sign does change with s for any fixed uT . Therefore, neither of the proposed preprocessing methods can achieve CF in theory.\nIn Figures 2a and 4c, we choose c1 = 0.01, c2 = 4, c3 = 0.2 and fix β0 = −1, βa = 2, βs = 1, λa = 0.5 while increase σa from 1 to 2.8 to see how the difference in the variation of non-sensitive\n(0.0 , 0.0 ) (0.0 , 0.2 ) (0.0 , 0.4 ) (-0. 2, 0 .0) (-0. 2, 0 .2) (-0. 2, 0 .4) (-0. 4, 0 .0) (-0. 4, 0 .2) (-0. 4, 0 .4)\nDifference due to ( i1, i2)\n0.00\n0.05\n0.10\n0.15\n0.20\n0.25\nCF -m\net ric\n(0.0 , 0.0 ) (0.0 , 0.2 ) (0.0 , 0.4 ) (-0. 2, 0 .0) (-0. 2, 0 .2) (-0. 2, 0 .4) (-0. 4, 0 .0) (-0. 4, 0 .2) (-0. 4, 0 .4)\nDifference due to ( i1, i2)\n0.31\n0.32\n0.33\n0.34\n0.35\n0.36\nM AE\n(0.0 , 0.0 ) (0.0 , 0.2 ) (0.0 , 0.4 ) (-0. 2, 0 .0) (-0. 2, 0 .2) (-0. 2, 0 .4) (-0. 4, 0 .0) (-0. 4, 0 .2) (-0. 4, 0 .4)\nDifference due to ( i1, i2)\n0.68\n0.70\n0.72\n0.74\nRO C\nAU C\n(0.0 , 0.0 ) (0.0 , 0.2 ) (0.0 , 0.4 ) (-0. 2, 0 .0) (-0. 2, 0 .2) (-0. 2, 0 .4) (-0. 4, 0 .0) (-0. 4, 0 .2) (-0. 4, 0 .4)\nDifference due to ( i1, i2)\n0.86\n0.88\n0.90\nAv er\nag e\npr ec\nisi on ML\nFTU FL AA FLAP-1 FLAP-2\nattribute between sensitive groups affects fairness. In Figures 2b and 4d, we set β0 = −1, βt = 2, βs = 1 and increase λ from 0 to 0.8 to see how the mean difference of test scores affects fairness.\nIn Figure 4a and 4b, we choose β0 = −1, β1 = β2 = 0, βa = 1, βe = 2, λe0 = 1.07, λi0 = 0.58. In Figure 4a, we change (λi1, λi2) while fix λe1 = 0 and λe2 = 0 to see how the mean difference of income affect fairness. The results are telling the same story as Figure 2a: since only the weaker non-sensitive condition is met, the AA-algorithm cannot achieve CF but the FLAP algorithms with marginal distribution mapping procedure can.\nIn Figure 4b, we change (λe1, λe2) while fix λi1 = 0 and λi2 = 0 to see how the mean difference of education affect fairness. 
The results are similar to those of Figure 2b where all algorithms we consider cannot achieve CF but the FLAP algorithms still have the lowest CF-metric." }, { "heading": "C DETAILS OF REAL DATA ANALYSIS", "text": "We use the adult income data to predict whether an individual’s income is higher than $50K with information including sex, race, age, workclass, education, occupation, marital-status, capital gain and loss. Sex and race are regarded as sensitive attributes. The training set has 32,561 samples and the test set has 16281 samples. The comparison of the FLAP and other methods are shown in Table 2.\nThe COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) recidivism data contains the demographic data such as sex, age, race, and record data such as priors count, juvenile felonies count, and juvenile misdemeanors count of over 10,000 criminal defendants in Broward County, Florida. The task is to predict whether they will re-offend in two years. According to ProPublica, “Black defendants were often predicted to be at a higher risk of recidivism than they actually were.” Here we treat sex and race as sensitive attributes and try to predict recidivism in a counterfactually fair manner. We only use the data for Caucasian, Hispanic, and African-American individuals due to the small sample sizes of other races. The remaining data are divided into a training set of 5,090 samples and a test set of 1697 samples. The results are shown in Table 3\nSimilar to the results we have shown for the loan application data, the FLAP methods using the marginal distribution mapping preprocessing procedure have lower CF-metric and are thus considered fairer than other fair learning algorithms, and their MAEs are comparable to other methods." } ]
2020
null
SP:4f59251101a0aad11518673e5571dceb4fcff65e
[ "The paper presents a study of machine learning methods for calibrating a radio telescope from sensor data on, e.g., atmospheric conditions. The authors consider tree- and neighbourhood-based methods for predicting gain amplitudes and phases for seven antennas. The results show that the methods perform quite well in terms of RMSE and explained variance." ]
Calibration is the most critical data processing step needed for generating images of high dynamic range (CASA cookbook, 2009). With the ever-increasing data volumes produced by modern radio telescopes (Aniyan & Thorat, 2017), astronomers are overwhelmed by the amount of data that needs to be manually processed and analyzed using limited computational resources (Yatawatta, 2020). Therefore, intelligent and automated systems are required to overcome these challenges. Traditionally, astronomers use a package such as Common Astronomy Software Applications (CASA) to compute the gain solutions based on regular observations of a known calibrator source (Thompson et al., 2017; Abebe, 2015; Grobler et al., 2016; CASA cookbook, 2009). The traditional approach to calibration is iterative and time-consuming (Jajarmizadeh et al., 2017); hence our proposal of machine learning techniques. The applications of machine learning have created an opportunity to deal with complex problems currently encountered in radio astronomy data processing (Aniyan & Thorat, 2017). In this work, we propose the use of supervised machine learning models for first generation calibration (1GC), using the KAT-7 telescope environmental and pointing sensor data recorded during observations. Applying machine learning to 1GC, as opposed to calculating the gain solutions in CASA, has shown evidence of reducing computation, as well as accurately predicting the 1GC gain solutions and antenna behaviour. These methods are computationally less expensive; however, they have not fully learned to generalise in predicting accurate 1GC solutions from the environmental and pointing sensors. We use ensemble multi-output regression models based on the random forest, decision tree, extremely randomized trees and K-nearest neighbour algorithms. The average prediction error obtained during the testing of our models on the test data is 0.01 < RMSE < 0.09 for the gain amplitude per antenna, and 0.2 rad < RMSE < 0.5 rad for the gain phase. This shows that the instrumental parameters used to train our models correlate more strongly with the gain amplitude than with the phase.
[]
[ { "authors": [ "Ermias Abebe" ], "title": "A study of potential calibrators using the kat-7 radio telescope", "venue": null, "year": 2015 }, { "authors": [ "AK Aniyan", "Kshitij Thorat" ], "title": "Classifying radio galaxies with the convolutional neural network", "venue": "The Astrophysical Journal Supplement Series,", "year": 2017 }, { "authors": [ "Nicholas M Ball", "Robert J Brunner" ], "title": "Data mining and machine learning in astronomy", "venue": "International Journal of Modern Physics D,", "year": 2010 }, { "authors": [ "Hanen Borchani", "Gherardo Varando", "Concha Bielza", "Pedro Larrañaga" ], "title": "A survey on multi-output regression. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "venue": null, "year": 2015 }, { "authors": [ "AR Foley", "T Alberts", "RP Armstrong", "A Barta", "EF Bauermeister", "H Bester", "S Blose", "RS Booth", "DH Botha", "SJ Buchner" ], "title": "Engineering and science highlights of the kat-7 radio telescope", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 2016 }, { "authors": [ "TL Grobler", "AJ Stewart", "SJ Wijnholds", "JS Kenyon", "OM Smirnov" ], "title": "Calibration artefacts in radio interferometry–iii. phase-only calibration and primary beam correction", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 2016 }, { "authors": [ "Milad Jajarmizadeh", "Lariyah Mohd Sidek", "Sobri Harun", "Mohsen Salarpour" ], "title": "Optimal calibration and uncertainty analysis of swat for an arid climate", "venue": "Air, Soil and Water Research,", "year": 2017 }, { "authors": [ "Jan E Noordam", "Oleg M Smirnov" ], "title": "The meqtrees software system and its use for third-generation calibration of radio interferometers", "venue": "Astronomy & Astrophysics,", "year": 2010 }, { "authors": [ "A Richard Thompson", "James M Moran", "George W Swenson Jr." ], "title": "Interferometry and synthesis in radio astronomy", "venue": null, "year": 2017 }, { "authors": [ "Greg B Taylor", "Chris Luke Carilli", "Richard A Perley" ], "title": "Synthesis imaging in radio astronomy", "venue": "ii. ASPC,", "year": 1999 }, { "authors": [ "A Richard Thompson", "James M Moran", "George W Swenson Jr." ], "title": "Interferometry and synthesis in radio astronomy", "venue": "In isra,", "year": 2001 }, { "authors": [ "Anthony Richard Thompson", "James M Moran", "George Warner Swenson" ], "title": "Interferometry and synthesis in radio", "venue": null, "year": 2017 }, { "authors": [ "Sarod Yatawatta" ], "title": "Stochastic calibration of radio interferometers", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern-day astronomy is at an unprecedented stage, with a deluge of data from different telescopes. In contrast to conventional methods, today astronomical discoveries are data-driven. The upcoming Square Kilometer Array (SKA) is expected to produce terabytes of data every hour (The SKA telescope). With this exponential growth of data, challenges for data calibration, reduction, and analysis also increase (Aniyan & Thorat, 2017), making it difficult for astronomers to manually process and analyse the data (Yatawatta, 2020). Therefore, intelligent and automated systems are required to overcome these challenges. One of the main issues in radio astronomy is determining the quality of observational data. Astronomical signals are very weak by the time they reach the Earth’s surface. They are easily corrupted by atmospheric interferences, incorrect observational parameters (e.g. telescope locations or telescope pointing parameters), malfunctioning signal receivers, interference from terrestrial man-made radio sources and tracking inaccuracies (Taylor et al., 1999). Therefore, it is required to do proper corrections to the observational data before processing the data. Radio astronomers spend a considerable amount of time performing a series of preprocessing steps called calibration, which involves the determination of a set of parameters to correct the received data. These generally include instrumental as well as astronomical parameters. The general strategy for\ndoing these corrections makes use of a calibrator source. Calibrator sources are well suited for determining astronomical parameters for data corrections because they have known characteristics such as the brightness, shape, and frequency spectrum (Taylor et al., 1999). This process of calibration is iterative and time-consuming. During scientific observations, different external parameters such as atmospheric pressure, temperature wind conditions, and relative humidity are collected through thousands of sensors attached to the telescopes and its adjoining instrumentation. The data coming from different sensors may provide information about the external conditions that may have corrupted the observed data. This piece of information is not always included in the conventional calibration steps. We propose to use machine learning methods to predict the calibration solutions, looking at pointing and environmental sensor data. This is mainly motivated by the fact that calibration steps make corrections to data that has been corrupted by environmental parameters. In this study, we make use of data from the Karoo Array Telescope (KAT-7), an array consisting of seven telescopes, which is a precursor to the MeerKAT radio telescope The SKA telescope. We look at eight types of sensor data recorded during observations, with a calibrator source PKS1613-586 to generate the training and testing dataset. The overall generated dataset contains sensor data per telescope and calibration solutions for the signal received by each telescope in horizontal polarization (H-pol) and vertical polarization (V-pol). These calibrator solutions are calculated using the astronomy software called Common Astronomy Software Applications." }, { "heading": "2 CALIBRATION", "text": "In radio astronomy, ideally one might think that after obtaining the observed visibilities the next step would be to directly retrieve the actual visibilities of the target source and perform imaging. 
However, the measured visibilities $V^{\mathrm{obs}}$ are different from the actual visibilities $V^{\mathrm{true}}$, and this is due to instrumental and environmental effects (Richard Thompson et al., 2017). Examples of these effects on the signal measured by a radio interferometer include antenna gains (slowly and fast time-varying instrumental parts), atmospheric effects, pointing errors (tracking inaccuracies) and incorrect observation parameters (antenna pointing parameters). Signal effects are classified into two types: direction-independent effects (affecting the signal from all directions equally) and direction-dependent effects (which vary based on the sky position of the signal) (Taylor et al., 1999). These effects can be corrected by estimating the errors associated with the measured visibilities, thereby recovering the true visibilities. This process is called calibration. In its simplest form, calibration minimizes the error between the observed and predicted (model) visibilities by estimating the correct complex instrumental gain response (Grobler et al., 2016). Suppose that for the baseline pair (i, j), the observed visibility is $V^{\mathrm{obs}}_{i,j}(t)$ and the true visibility is $V^{\mathrm{true}}_{i,j}(t)$ at observation time t. The basic calibration formula is written as

$V^{\mathrm{obs}}_{i,j}(t) = G_{i,j}(t)\,V^{\mathrm{true}}_{i,j}(t) + \epsilon_{i,j}(t), \qquad (1)$

where $G_{i,j}(t)$ denotes the complex antenna gains for baseline (i, j) resulting from unwanted effects, which may vary with time (Thompson et al., 2001). The extra term $\epsilon_{i,j}(t)$ is stochastic complex noise (Taylor et al., 1999). Most of the corruptions in the data occur before the signal is correlated, and the response associated with antenna i does not depend on the response of antenna j. Note that the sources that are the subject of astronomical investigation will be referred to as "target sources" to distinguish them from calibrator sources (Thompson et al., 2001)." }, { "heading": "3 KAT-7 TELESCOPE", "text": "The KAT-7 is a seven-dish interferometer that was built as an engineering prototype for techniques and technologies in preparation for the 64-dish Karoo Array Telescope (MeerKAT) (Foley et al., 2016). These instruments are located in the Northern Cape Karoo desert region and are operated remotely from Cape Town. The construction of KAT-7 began in 2008 with the writing of the telescope requirements specification and was completed in 2010. It was then operated in engineering (commissioning) mode until its shut-down in 2016 (Foley et al., 2016)." }, { "heading": "3.1 SENSOR DATA", "text": "During science observations, different external parameters such as atmospheric pressure, temperature, wind conditions and relative humidity are also collected through thousands of sensors attached
The overall generated dataset contains sensor data per telescope and calibration solutions for correcting the signal received by each telescope in horizontal polarization (h-pol) and vertical polarization (v-pol). These calibrator solutions are calculated using one of the traditional astronomy software called CASA which is used for data calibration and imaging in radio astronomy." }, { "heading": "3.2 PREPARATION OF TRAINING DATA", "text": "The objective of this study is to find correlations between calibration solutions and sensor information on the telescope. Therefore, the main dataset for the study is the time-based sensor information of each antenna. The process of data collection encompasses all of the steps required to obtain the desired data in digital format. Methods of data collection include acquiring and archiving new observations, querying existing databases according to the science problem at hand, and performing as necessary any cross-matching or data combining (Ball & Brunner, 2010). In every observation, the collected data are stored by the data capturing system in the Hierarchical Data Format (HDF5), which is a set of file formats designed to store and organize large amounts of data. The HDF5 file consists of two parts meta-data and observed visibilities. In meta-data one finds static information of the data set, including observer, dump rate and all the available subarrays and spectral windows in the data set selection criteria (antennas, channel frequencies, targets, scan information) and sensor data of interest as a function of time. The data observed by the radio telescope are in the form of complex numbers referred to as visibilities. Each source observed contains its own visibilities as a function of time along with sensor data, which keep a record of the telescope’s activity and behaviour as these are observed.\nIn preparation for the training and testing dataset, we look at environmental sensors and instrumental sensors recorded during observations with a flux calibrator and a phase calibrator source PKS1613586 in Figure 1. The chosen sensors of interest from each observation are: air temperature, wind speed, wind direction, air pressure, relative humidity, actual refraction elevation, actual refraction azimuth, actual scan elevation, actual scan azimuth, actual pointing elevation and actual pointing azimuth." }, { "heading": "4 PROPOSED METHOD", "text": "Different calibration techniques have been developed with the enhancement of the dynamics of the modern radio astronomy instruments to address these challenges raised by the new instruments, providing precise calibration performance. These techniques are loosely classified into first generation calibration (1GC), second generation calibration (2GC) and third generation calibration (3GC) (Noordam & Smirnov, 2010). In this study, we concentrate on generating 1GC calibration with the help of machine learning techniques. Our aim is to provide a machine learning model that predicts calibration solutions from sensor data from the telescope. This approach would help to speed up the calibration processes and decrease the time period of the calibrator monitoring, thus improving the time duration duration for tracking the target source observed as shown in 2.\nSeveral different approaches are employed in machine learning regression. These approaches learn the relationship between the input and output by fitting a model directly from the data. 
In this study, we consider tree-based approaches (decision tree, random forest, extremely randomized trees) and a neighbourhood search approach (K-nearest neighbours) to tackle our problem. We call our approach the ZCal model; it is a multi-output regression. We formulate our regression estimation problem as follows. Suppose we have a feature matrix of sensor data

$X_t = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{d1} & x_{d2} & \cdots & x_{dn} \end{pmatrix} = (x_{i,j}) \in \mathbb{R}^{d \times n}, \quad i \in \{1, \dots, d\},\ j \in \{1, \dots, n\},$

and corresponding complex target variables to learn and predict on,

$Y_t = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ y_{d1} & y_{d2} & \cdots & y_{dm} \end{pmatrix} = (y_{k,l}) \in \mathbb{C}^{d \times m}, \quad k \in \{1, \dots, d\},\ l \in \{1, \dots, m\},$

where each column represents a vector of length d containing the calibration solutions as a function of time t per observation, represented as a complex variable $Ae^{i\phi} = A(\cos\phi + i\sin\phi)$ for each polarization H and V (Thompson et al., 2017).

Because different physical causes affect the received signal, we choose to treat the antenna phases and amplitudes separately by splitting the complex variable into gain amplitude solutions $|Ae^{i\phi}|$ and gain phase solutions $\phi$. We construct a learning machine $M : X_t \to Y_t$ which, when given a validation set of sensor examples $X_t^*$, minimises some measure of discrepancy between its prediction $M(X_t^*) \approx \hat Y_t$ and the value of $Y_t$, where M represents the predictor. We measure the discrepancy using four statistical measures commonly used in regression (Borchani et al., 2015): the coefficient of determination, explained variance, root mean squared error (RMSE) and root mean absolute error (RMAE).

The aim of this regression exercise is to predict multiple target variables $\hat Y_t$; hence it is referred to as multi-output regression. The learned model is then used to predict the multi-output values $\hat Y_{t+1}$ of all target variables for new, unlabelled instances $X_{t+1}$. It has been shown that multi-output regression methods model multi-output datasets effectively and produce better predictive results (Borchani et al., 2015). This approach considers not only the underlying relationships between the features and the corresponding targets but also the relationships between the targets themselves, thereby producing simpler models with better computational efficiency (Borchani et al., 2015). Borchani et al. (2015) discuss several applications of multi-output regression, including challenges such as missing data, i.e., when some features or target variables are not observed." }, { "heading": "5 RESULTS AND DISCUSSION", "text": "From Figure 3, we observe that the learning algorithms have learned this behaviour pattern with an rms error of ≤ 0.5 rad at test time, resulting in the prediction of zero phase gain solutions for
In Figure 3, though the models were expected to perform differently because of their different parameter settings, one notices that the random forest, K-nearest neighbor and extremely randomised trees methods are very close to one another in rms error on antennas 1H, 2H, 5H, 6H and 7H, whereas there are large rms error bars for each model on antennas 3H and 4H. Such behaviour gives us an idea of the instability of these two antennas. This is valuable information that will contribute towards flagging data corrupted by unstable antennas.

These models managed to learn the most critical part, i.e., the sinusoidal variation of the gain amplitude solutions over time, as shown in Figure 5. However, we observe that the machine learning amplitude predictions fall below the CASA solutions by a factor of 33.33%.

In Figure 5, the model predicts a drop in amplitude as a function of time in the middle of the observation, from the 0.12 level of CASA down to 0.09, whereas the true amplitude has not dropped. From this unique behaviour, we conclude that it is either a prediction failure, as the models have been trained on a limited number of observations and therefore fail on different observation settings, or behaviour that CASA did not detect, since CASA does not include sensor data when calculating its calibration solutions." }, { "heading": "6 CONCLUSION", "text": "We have shown that the application of machine learning techniques to telescope sensor data opens a new avenue in the development of calibration methods. We used the telescope sensor data to learn the variability of the complex gain calibration solutions for the calibrator PKS1613-586 as a function of time. The implementation of the ZCal algorithm is based on regression machine learning algorithms, used to predict the calibration solutions and study each antenna's behaviour. Using the 1GC calibration solutions obtained with CASA, we constructed a matrix of training samples nL and testing samples nT to train the machine learning algorithms (decision tree, random forest, extremely randomised trees, and K-nearest neighbor) to discern the patterns that relate complex gain solutions to external parameters. Since gain solutions are complex, we implemented ZCal to learn phase and amplitude separately. Each learning algorithm ran on the learning sample N times and its error was estimated on the test sample. We presented a statistical framework to measure the accuracy of each multi-output regression model, and our results are encouraging, with an rms error of approximately 0.5 rad or less when testing our models on the testing data for gain amplitude and phase. Comparing the performance of these algorithms, the random forest, extremely randomized trees and K-nearest neighbours were shown to be the best for our purpose. We observed that the environmental and pointing sensor readings were more strongly correlated with the amplitude than with the phase. Consequently, the ability to predict gain-phase was overall poor; gain-amplitude prediction was accurate in some cases (capturing non-trivial behaviour such as oscillations), and completely failed in others. The purpose of the study was to show that machine learning techniques can discover such connections "blindly", without access to physical intuition. The accurate prediction of gain-amplitudes, in some cases, suggests that this is indeed feasible.
It is not clear what caused the failed predictions, although we can always speculate on physical differences between observations that our sensors were not sensitive to. We can therefore expect that with access to a larger array of sensors, the ZCal approach will be able to make better gain predictions." } ]
2020
ZCAL: CALIBRATING RADIO INTERFEROMETRIC DATA WITH MACHINE LEARNING
SP:62750e67412021ffe9ef18e104833255aa6ed606
[ "The paper develops a hierarchical reinforcement learning algorithm and analyzes its behaviour in four robotic manipulation and navigation tasks. The approach is based on a two-level hierarchy, *scheduler* at the top and *worker* at the bottom. This is similar to other approaches in the literature and the algorithm uses many ideas and elements from existing algorithms. However, these ideas and elements are combined in a novel and well-justified manner. The result is an algorithm that yields good results in a range of problems. The experiments are well done. The paper is generally organised well and written clearly. Relevant literature is reviewed well. " ]
We propose a hierarchical reinforcement learning method, HIDIO, that can learn task-agnostic options in a self-supervised manner while jointly learning to utilize them to solve sparse-reward tasks. Unlike current hierarchical RL approaches that tend to formulate goal-reaching low-level tasks or pre-define ad hoc lower-level policies, HIDIO encourages lower-level option learning that is independent of the task at hand, requiring few assumptions or little knowledge about the task structure. These options are learned through an intrinsic entropy minimization objective conditioned on the option sub-trajectories. The learned options are diverse and task-agnostic. In experiments on sparse-reward robotic manipulation and navigation tasks, HIDIO achieves higher success rates with greater sample efficiency than regular RL baselines and two state-of-the-art hierarchical RL methods. Code available at https://www.github.com/jesbu1/hidio.
[ { "affiliations": [], "name": "Jesse Zhang" }, { "affiliations": [], "name": "Haonan Yu" }, { "affiliations": [], "name": "Wei Xu" } ]
[ { "authors": [ "Joshua Achiam", "Harrison Edwards", "Dario Amodei", "Pieter Abbeel" ], "title": "Variational option discovery", "venue": "algorithms. arXiv,", "year": 2018 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Akhil Bagaria", "George Konidaris" ], "title": "Option discovery using deep skill chaining", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "D. Barber", "F. Agakov" ], "title": "The im algorithm: A variational approach to information maximization", "venue": "In NeurIPS,", "year": 2003 }, { "authors": [ "Emma Brunskill", "Lihong Li" ], "title": "Pac-inspired option discovery in lifelong reinforcement learning", "venue": "Proceedings of Machine Learning Research,", "year": 2014 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Caleb Chuck", "Supawit Chockchowwat", "Scott Niekum" ], "title": "Hypothesis-driven skill discovery for hierarchical deep reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Alfredo V. Clemente", "Arjun Chandra" ], "title": "Efficient parallel methods for deep reinforcement learning", "venue": "CoRR, abs/1705.04862,", "year": 2017 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In NeurIPS, pp", "year": 1993 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "William Fedus", "Prajit Ramachandran", "Rishabh Agarwal", "Yoshua Bengio", "Hugo Larochelle", "Mark Rowland", "Will Dabney" ], "title": "Revisiting fundamentals of experience replay", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Mohammad Ghavamzadeh", "Sridhar Mahadevan" ], "title": "Hierarchical policy gradient algorithms", "venue": "ICML, 2003", "year": 2003 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv, abs/1611.07507,", "year": 2016 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In ICML, 2018", "year": 2018 }, { "authors": [ "Karol Hausman", "Jost Tobias Springenberg", "Ziyu Wang", "Nicolas Heess", "Martin Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Allan Jabri", "Kyle Hsu", "Ben Eysenbach", "Abhishek Gupta", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised curricula for visual meta-reinforcement learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Arushi Jain", "Khimya Khetarpal", "Doina Precup" ], "title": "Safe option-critic: Learning safety in the optioncritic 
architecture. arXiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Martin Klissarov", "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "Learnings options end-to-end for continuous action", "venue": "tasks. arXiv,", "year": 2017 }, { "authors": [ "Hoang M. Le", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudı́k", "Yisong Yue", "Hal Daumé III" ], "title": "Hierarchical imitation and reinforcement learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Youngwoon Lee", "Shao-Hua Sun", "Sriram Somasundaram", "Edward Hu", "Joseph J. Lim" ], "title": "Composing complex skills by learning transition policies with proximity reward induction", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Youngwoon Lee", "Jingyun Yang", "Joseph J. Lim" ], "title": "Learning to coordinate manipulation skills via skill behavior diversification", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Andrew Levy", "Robert Platt Jr.", "Kate Saenko" ], "title": "Learning multi-level hierarchies with hindsight", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Alexander C. Li", "Carlos Florensa", "Ignasi Clavera", "Pieter Abbeel" ], "title": "Sub-policy adaptation for hierarchical reinforcement learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Zhuoru Li", "Akshay Narayan", "Tze-Yun Leong" ], "title": "An efficient approach to model-based hierarchical reinforcement learning", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet" ], "title": "Learning latent plans from play", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Neville Mehta", "Soumya Ray", "Prasad Tadepalli", "Thomas Dietterich" ], "title": "Automatic discovery and transfer of maxq hierarchies", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Josh Merel", "Arun Ahuja", "Vu Pham", "Saran Tunyasuvunakool", "Siqi Liu", "Dhruva Tirumala", "Nicolas Heess", "Greg Wayne" ], "title": "Hierarchical visuomotor control of humanoids", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Jacob Rafati", "David C Noelle" ], "title": "Learning representations in model-free hierarchical reinforcement learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Pravesh Ranchod", "Benjamin Rosman", "George Konidaris" ], "title": "Nonparametric bayesian reward segmentation for skill discovery using inverse reinforcement learning", "venue": "In IROS,", "year": 2015 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom van de Wiele", "Vlad Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playing solving sparse reward tasks from scratch", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Matthew Riemer", "Miao Liu", "Gerald Tesauro" ], "title": "Learning abstract options", "venue": "In NeurIPS,", "year": 
2018 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv, abs/1907.01657,", "year": 1907 }, { "authors": [ "Arjun Sharma", "Mohit Sharma", "Nicholas Rhinehart", "Kris M. Kitani" ], "title": "Directed-info gail: Learning hierarchical policies from unsegmented demonstrations using directed information", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Sungryull Sohn", "Junhyuk Oh", "Honglak Lee" ], "title": "Hierarchical reinforcement learning for zero-shot generalization with subtask dependencies", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Richard S. Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "Saket Tiwari", "Philip S. Thomas" ], "title": "Natural option critic", "venue": "In AAAI,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imagine a wheeled robot learning to kick a soccer ball into a goal with sparse reward supervision. In order to succeed, it must discover how to first navigate in its environment, then touch the ball, and finally kick it into the goal, only receiving a positive reward at the end for completing the task. This is a naturally difficult problem for traditional reinforcement learning (RL) to solve, unless the task has been manually decomposed into temporally extended stages where each stage constitutes a much easier subtask. In this paper we ask, how do we learn to decompose the task automatically and utilize the decomposition to solve sparse reward problems?\nDeep RL has made great strides solving a variety of tasks recently, with hierarchical RL (hRL) demonstrating promise in solving such sparse reward tasks (Sharma et al., 2019b; Le et al., 2018; Merel et al., 2019; Ranchod et al., 2015). In hRL, the task is decomposed into a hierarchy of subtasks, where policies at the top of the hierarchy call upon policies below to perform actions to solve their respective subtasks. This abstracts away actions for the policies at the top levels of the hierarchy. hRL makes exploration easier by potentially reducing the number of steps the agent needs to take to explore its state space. Moreover, at higher levels of the hierarchy, temporal abstraction results in more aggressive, multi-step value bootstrapping when temporal-difference (TD) learning is employed. These benefits are critical in sparse reward tasks as they allow an agent to more easily discover reward signals and assign credit.\nMany existing hRL methods make assumptions about the task structure (e.g., fetching an object involves three stages: moving towards the object, picking it up, and combing back), and/or the skills needed to solve the task (e.g., pre-programmed motor skills) (Florensa et al., 2016; Riedmiller et al., 2018; Lee et al., 2019; Hausman et al., 2018; Lee et al., 2020; Sohn et al., 2018; Ghavamzadeh & Mahadevan, 2003; Nachum et al., 2018). Thus these methods may require manually designing the correct task decomposition, explicitly formulating the option space, or programming pre-defined options for higher level policies to compose. Instead, we seek to formulate a general method that can learn these abstractions from scratch, for any task, with little manual design in the task domain.\nThe main contribution of this paper is HIDIO (HIerarchical RL by Discovering Intrinsic Options), a hierarchical method that discovers task-agnostic intrinsic options in a self-supervised manner while ∗Denotes equal contribution. Email to jessez@usc.edu, {haonan.yu,wei.xu}@horizon.ai †Work done as an intern at Horizon Robotics.\nlearning to schedule them to accomplish environment tasks. The latent option representation is uncovered as the option-conditioned policy is trained, both according to the same self-supervised worker objective. The scheduling of options is simultaneously learned by maximizing environment reward collected by the option-conditioned policy. HIDIO can be easily applied to new sparsereward tasks by simply re-discovering options. We propose and empirically evaluate various instantiations of the option discovery process, comparing the resulting options with respect to their final task performance. We demonstrate that HIDIO is able to efficiently learn and discover diverse options to be utilized for higher task reward with superior sample efficiency compared to other hierarchical methods." 
}, { "heading": "2 PRELIMINARIES", "text": "We consider the reinforcement learning (RL) problem in a Markov Decision Process (MDP). Let s ∈ RS be the agent state. We use the terms “state” and “observation” interchangeably to denote the environment input to the agent. A state can be fully or partially observed. Without loss of generality, we assume a continuous action space a ∈ RA for the agent. Let πθ(a|s) be the policy distribution with learnable parameters θ, and P(st+1|st,at) the transition probability that measures how likely the environment transitions to st+1 given that the agent samples an action by at ∼ πθ(·|st). After the transition to st+1, the agent receives a deterministic scalar reward r(st,at, st+1).\nThe objective of RL is to maximize the sum of discounted rewards with respect to θ:\nE πθ,P [ ∞∑ t=0 γtr(st,at, st+1) ] (1)\nwhere γ ∈ [0, 1] is a discount factor. We will omit P in the expectation for notational simplicity. In the options framework (Sutton et al., 1999), the agent can switch between different options during an episode, where an option is translated to a sequence of actions by an option-conditioned policy with a termination condition. A set of options defined over an MDP induces a hierarchy that models temporal abstraction. For a typical two-level hierarchy, a higher-level policy produces options, and the policy at the lower level outputs environment actions conditioned on the proposed options. The expectation in Eq. 1 is taken over policies at both levels." }, { "heading": "3 HIERARCHICAL RL BY DISCOVERING INTRINSIC OPTIONS", "text": "We now introduce our hierarchical method for solving sparse reward tasks. We assume little prior knowledge about the task structure, except that it can be learned through a hierarchy of two levels. The higher-level policy (the scheduler πθ), is trained to maximize environment reward, while the lower-level policy (the worker πφ) is trained in a self-supervised manner to efficiently discover options that are utilized by πθ to accomplish tasks. Importantly, by self-supervision the worker gets access to dense intrinsic rewards regardless of the sparsity of the extrinsic rewards.\nWithout loss of generality, we assume that each episode has a length of T and the scheduler outputs an option every K steps. The scheduled option u ∈ [−1, 1]D (where D is a pre-defined dimensionality), is a latent representation that will be\nlearned from scratch given the environment task. Modulated by u, the worker executes K steps before the scheduler outputs the next option. Let the time horizon of the scheduler be H = d TK e. Formally, we define\nScheduler policy: uh ∼ πθ(·|sh,0), 0 ≤ h < H Worker policy: ah,k ∼ πφ(·|sh,k,uh), 0 ≤ k < K Environment dynamics: sh,k+1 ∼ P(·|sh,k,ah,k), 0 ≤ h < H, 0 ≤ k < K\n(2)\nwhere we denote sh,k and ah,k as the k-th state and action respectively, within the h-th option window of length K. Note that given this sampling process, we have sh,K ≡ sh+1,0, namely, the last state of the current option uh is the initial state of the next option uh+1. The overall framework of our method is illustrated in Figure 1." }, { "heading": "3.1 LEARNING THE SCHEDULER", "text": "Every time the scheduler issues an option uh, it receives an reward Rh computed by accumulating environment rewards over the next K steps. 
The scheduler's objective is:

$$\max_\theta \; \mathbb{E}_{\pi_\theta}\Big[\sum_{h=0}^{H-1} \beta^h R_h\Big], \quad \text{where } \beta = \gamma^K \text{ and } R_h = \mathbb{E}_{\pi_\phi}\Big[\sum_{k=0}^{K-1} \gamma^k\, r(s_{h,k}, a_{h,k}, s_{h,k+1})\Big] \qquad (3)$$

This scheduler objective itself is not a new concept, as similar ones have been adopted by other hRL methods (Vezhnevets et al., 2017; Nachum et al., 2018; Riedmiller et al., 2018). One significant difference between our option and that of prior work is that our option $u$ is simply a latent variable; there is no explicit constraint on what semantics $u$ could represent. In contrast, existing methods usually require their options to reside in a subspace of the state space, to be grounded to the environment, or to have known structures, so that the scheduler can compute rewards and termination conditions for the worker. Note that our latent options can be easily re-trained given a new task." }, { "heading": "3.2 LEARNING THE WORKER", "text": "The main focus of this paper is to investigate how to effectively learn the worker policy in a self-supervised manner. Our motivation is that it might be unnecessary to make an option dictate the worker to reach some "goal space" (Vezhnevets et al., 2017; Nachum et al., 2018). As long as the option can be translated to a short sequence of primitive actions, it does not need to be grounded with concrete meanings such as goal reaching. Below we will treat the option as a latent variable that modulates the worker, and propose to learn its latent representation in a hierarchical setting from the environment task." }, { "heading": "3.2.1 WORKER OBJECTIVE", "text": "We first define a new meta MDP on top of the original task MDP so that for any $h$, $k$, and $t$:

1) $\bar{s}_{h,k} := (s_{h,0}, \dots, s_{h,k})$, 2) $\bar{a}_{h,k} := (a_{h,0}, \dots, a_{h,k})$, 3) $r(\bar{s}_{h,k}, \bar{a}_{h,k}, \bar{s}_{h,k+1}) := r(s_{h,k}, a_{h,k}, s_{h,k+1})$, and 4) $\mathcal{P}(\bar{s}_{h,k+1}|\bar{s}_{h,k}, \bar{a}_{h,k}) := \mathcal{P}(s_{h,k+1}|s_{h,k}, a_{h,k})$. This new MDP equips the worker with historical state and action information since the time $(h, 0)$ when an option $h$ was scheduled. Specifically, each state $\bar{s}_{h,k}$ or action $\bar{a}_{h,k}$ encodes the history from the beginning $(h, 0)$ up to $(h, k)$ within the option. In the following, we will call pairs $\{\bar{a}_{h,k}, \bar{s}_{h,k+1}\}$ option sub-trajectories. The worker policy now takes option sub-trajectories as inputs: $a_{h,k} \sim \pi_\phi(\cdot\,|\,\bar{s}_{h,k}, \bar{a}_{h,k-1}, u_h)$, $0 \le k < K$, whereas the scheduler policy still operates in the original MDP.

Denote $\sum_{h,k} \equiv \sum_{h=0}^{H-1}\sum_{k=0}^{K-1}$ for simplicity. The worker objective, defined on this new MDP, is to minimize the entropy of the option $u_h$ conditioned on the option sub-trajectory $\{\bar{a}_{h,k}, \bar{s}_{h,k+1}\}$:

$$\max_\phi \; \mathbb{E}_{\pi_\theta, \pi_\phi} \sum_{h,k} \underbrace{\log p(u_h \,|\, \bar{a}_{h,k}, \bar{s}_{h,k+1})}_{\text{negative conditional option entropy}} \; - \; \beta \underbrace{\log \pi_\phi(a_{h,k} \,|\, \bar{s}_{h,k}, \bar{a}_{h,k-1}, u_h)}_{\text{worker policy entropy}} \qquad (4)$$

where the expectation is over the current $\pi_\theta$ and $\pi_\phi$ but the maximization is only with respect to $\phi$. Intuitively, the first term suggests that the worker is optimized to confidently identify an option given a sub-trajectory. However, it alone will not guarantee the diversity of options because potentially even very similar sub-trajectories can be classified into different options if the classification model has a high capacity, in which case we say that the resulting sub-trajectory space has a very high "resolution". As a result, the conditional entropy alone might not be able to generate useful options to be exploited by the scheduler for task solving, because the coverage of the sub-trajectory space is poor. To combat this degenerate solution, we add a second term which maximizes the entropy of the worker policy.
Intuitively, while the worker generates identifiable sub-trajectories corresponding to a given option, it should act as randomly as possible to separate sub-trajectories of different options, lowering the "resolution" of the sub-trajectory space to encourage its coverage.

Because directly estimating the posterior $p(u_h|\bar{a}_{h,k}, \bar{s}_{h,k+1})$ is intractable, we approximate it with a parameterized posterior $\log q_\psi(u_h|\bar{a}_{h,k}, \bar{s}_{h,k+1})$ to obtain a lower bound (Barber & Agakov, 2003), where $q_\psi$ is a discriminator to be learned. Then we can maximize this lower bound instead:

$$\max_{\phi,\psi} \; \mathbb{E}_{\pi_\theta, \pi_\phi} \sum_{h,k} \log q_\psi(u_h \,|\, \bar{a}_{h,k}, \bar{s}_{h,k+1}) - \beta \log \pi_\phi(a_{h,k} \,|\, \bar{s}_{h,k}, \bar{a}_{h,k-1}, u_h). \qquad (5)$$

The discriminator $q_\psi$ is trained by maximizing likelihoods of options given sampled sub-trajectories. The worker $\pi_\phi$ is trained via max-entropy RL (Soft Actor-Critic (SAC) (Haarnoja et al., 2018)) with the intrinsic reward $r^{lo}_{h,k+1} := \log q_\psi(\cdot) - \beta \log \pi_\phi(\cdot)$. $\beta$ is fixed to 0.01 in our experiments.

Note that there are at least four differences between Eq. 5 and the common option discovery objective in either VIC (Gregor et al., 2016) or DIAYN (Eysenbach et al., 2019):

1. Both VIC and DIAYN assume that a sampled option will last through an entire episode, and the option is always sampled at the beginning of an episode. Thus their option trajectories "radiate" from the initial state set. In contrast, our worker policy learns options that initialize every $K$ steps within an episode, and they can have more diverse semantics depending on the various states $s_{h,0}$ visited by the agent. This is especially helpful for some tasks where new options need to be discovered after the agent reaches unseen areas in later stages of training. 2. Actions taken by the worker policy under the current option will have consequences on the next option. This is because the final state $s_{h,K}$ of the current option is defined to be the initial state $s_{h+1,0}$ of the next option. So in general, the worker policy is trained not only to discover diverse options across the current $K$ steps, but also to make the discovery easier in future steps. In other words, the worker policy needs to solve the credit assignment problem across options, under the expectation of the scheduler policy. 3. To enable the worker policy to learn from a discriminator that predicts based on option sub-trajectories $\{\bar{a}_{h,k}, \bar{s}_{h,k+1}\}$ instead of solely on individual states $s_{h,k}$, we have constructed a new meta MDP where each state $\bar{s}_{h,k}$ encodes history from the beginning $(h, 0)$ up to $(h, k)$ within an option $h$. This new meta MDP is critical, because otherwise one simply cannot learn a worker policy from a reward function that is defined by multiple time steps (sub-trajectories), since the learning problem is no longer Markovian. 4. Lastly, thanks to the new MDP, we are able to explore various possible instantiations of the discriminator (see Section 3.3). As observed in the experiments, individual states are actually not the optimal features for identifying options.

These differences constitute the major novelty of our worker objective.
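To make the self-supervised signals in Eq. 5 concrete, here is a minimal PyTorch sketch. The module sizes and interfaces are illustrative assumptions (not the paper's released code), and the discriminator's log-probability anticipates the squared-L2 form introduced in Section 3.3; $\beta = 0.01$ as fixed above.

```python
# Minimal sketch of the discriminator and the worker's intrinsic reward (Eq. 5).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, feat_dim, option_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, option_dim))

    def log_prob(self, feat, u):
        # log q_psi(u | sub-trajectory) = -|| f_psi(sub-trajectory) - u ||_2^2
        return -((self.mlp(feat) - u) ** 2).sum(dim=-1)

def worker_intrinsic_reward(disc, feat, u, worker_log_pi, beta=0.01):
    # r^lo = log q_psi(u | a-bar, s-bar) - beta * log pi_phi(a | s-bar, a-bar, u)
    return disc.log_prob(feat, u) - beta * worker_log_pi
```

Training the discriminator then amounts to maximizing `disc.log_prob(feat, u)` over sampled (sub-trajectory, option) pairs, while the worker treats `worker_intrinsic_reward` as its per-step reward inside SAC.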
" }, { "heading": "3.2.2 SHORTSIGHTED WORKER", "text": "It is challenging for the worker to accurately predict values over a long horizon, since its rewards are densely computed by a complex nonlinear function $q_\psi$. Also, each option only lasts at most $K$ steps. Thus we set the discount $\eta$ for the worker in two shortsighted ways:

1. Hard: setting $\eta = 0$ at every $K$-th step and $\eta = 1$ otherwise. Basically this truncates the temporal correlation (gradients) between adjacent options. Its benefit might be faster and easier value learning because the value is bootstrapped over at most $K$ steps ($K \ll T$). 2. Soft: $\eta = 1 - \frac{1}{K}$, which considers rewards of roughly $K$ steps ahead. The worker policy still needs to take into account the identification of future option sub-trajectories, but their importance quickly decays.

We will evaluate both versions and compare their performance in Section 4.1." }, { "heading": "3.3 INSTANTIATING THE DISCRIMINATOR", "text": "We explore various ways of instantiating the discriminator $q_\psi$ in order to compute useful intrinsic rewards for the worker. Previous work has utilized individual states (Eysenbach et al., 2019; Jabri et al., 2019) or full observation trajectories (Warde-Farley et al., 2019; Sharma et al., 2019a; Achiam et al., 2018) for option discrimination. Thanks to the newly defined meta MDP, our discriminator is able to take option sub-trajectories instead of current individual states for prediction. In this paper, we investigate six sub-trajectory feature extractors $f_\psi(\bar{a}_{h,k}, \bar{s}_{h,k+1})$:

| Name | Formulation | Explanation |
|---|---|---|
| State | $\text{MLP}(s_{h,k+1})$ | Next state alone |
| Action | $\text{MLP}([s_{h,0}, a_{h,k}])$ | Action in context |
| StateDiff | $\text{MLP}(s_{h,k+1} - s_{h,k})$ | Difference between state pairs |
| StateAction | $\text{MLP}([a_{h,k}, s_{h,k+1}])$ | Action and next state |
| StateConcat | $\text{MLP}([\bar{s}_{h,k+1}])$ | Concatenation of states |
| ActionConcat | $\text{MLP}([s_{h,0}, \bar{a}_{h,k}])$ | Concatenation of actions |

where the operator $[\cdot]$ denotes concatenation and MLP denotes a multilayer perceptron1. Our State feature extractor is most similar to DIAYN (Eysenbach et al., 2019), and StateConcat is similar to (Warde-Farley et al., 2019; Sharma et al., 2019a; Achiam et al., 2018). However, we note that unlike these works, the distribution of our option sub-trajectories is also determined by the scheduler in the context of hRL. The other four feature extractors have not been evaluated before. With the extracted feature, the log-probability of predicting an option is simply computed as the negative squared L2 norm: $\log q_\psi(u_h|\bar{a}_{h,k}, \bar{s}_{h,k+1}) = -\|f_\psi(\bar{a}_{h,k}, \bar{s}_{h,k+1}) - u_h\|_2^2$, by which we implicitly assume the discriminator's output distribution to be a $\mathcal{N}(0, I_D)$ multivariate Gaussian.
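The six feature extractors above can be sketched as variants of a single module operating on a padded sub-trajectory; the shapes and layer sizes below are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the six sub-trajectory feature extractors f_psi.
import torch
import torch.nn as nn

class SubTrajFeature(nn.Module):
    def __init__(self, kind, state_dim, action_dim, K, option_dim, hidden=64):
        super().__init__()
        in_dims = {
            "State": state_dim,                          # s_{h,k+1}
            "Action": state_dim + action_dim,            # [s_{h,0}, a_{h,k}]
            "StateDiff": state_dim,                      # s_{h,k+1} - s_{h,k}
            "StateAction": action_dim + state_dim,       # [a_{h,k}, s_{h,k+1}]
            "StateConcat": state_dim * (K + 1),          # [s-bar_{h,k+1}] (padded)
            "ActionConcat": state_dim + action_dim * K,  # [s_{h,0}, a-bar_{h,k}]
        }
        self.kind = kind
        self.mlp = nn.Sequential(nn.Linear(in_dims[kind], hidden), nn.ReLU(),
                                 nn.Linear(hidden, option_dim))

    def forward(self, states, actions):
        # states: (B, K+1, state_dim) history; actions: (B, K, action_dim)
        if self.kind == "State":
            x = states[:, -1]
        elif self.kind == "Action":
            x = torch.cat([states[:, 0], actions[:, -1]], dim=-1)
        elif self.kind == "StateDiff":
            x = states[:, -1] - states[:, -2]
        elif self.kind == "StateAction":
            x = torch.cat([actions[:, -1], states[:, -1]], dim=-1)
        elif self.kind == "StateConcat":
            x = states.flatten(1)
        else:  # ActionConcat
            x = torch.cat([states[:, 0], actions.flatten(1)], dim=-1)
        return self.mlp(x)  # f_psi(.); then log q = -||f_psi(.) - u||^2
```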
" }, { "heading": "3.4 OFF-POLICY TRAINING", "text": "The scheduler and worker objectives (Eq. 3 and Eq. 5) are trained jointly. In principle, on-policy training such as A2C (Clemente et al., 2017) is needed due to the interplay between the scheduler and worker. However, to reuse training data and improve sample efficiency, we employ off-policy training (SAC (Haarnoja et al., 2018)) for both objectives with some modifications.

Modified worker objective In practice, the expectation over the scheduler $\pi_\theta$ in Eq. 5 is replaced with the expectation over its historical versions. Specifically, we sample options $u_h$ from a replay buffer, together with sub-trajectories $\{\bar{a}_{h,k}, \bar{s}_{h,k+1}\}$. This type of data distribution modification is conventional in off-policy training (Lillicrap et al., 2016).

Intrinsic reward relabeling We always recompute the rewards in Eq. 5 using the up-to-date discriminator for every update of $\phi$, which can be trivially done without any additional interaction with the environment.

Importance correction The data in the replay buffer was generated by historical worker policies. Thus a sampled option sub-trajectory will be outdated under the same option, causing confusion to the scheduler policy. To resolve this issue, when minimizing the temporal-difference (TD) error between the values of $s_{h,0}$ and $s_{h+1,0}$ for the scheduler, an importance ratio can be multiplied: $\prod_{k=0}^{K-1} \frac{\pi_\phi(a_{h,k}|\bar{s}_{h,k}, \bar{a}_{h,k-1}, u_h)}{\pi_\phi^{\text{old}}(a_{h,k}|\bar{s}_{h,k}, \bar{a}_{h,k-1}, u_h)}$. A similar correction can also be applied to the discriminator loss.

However, in practice we find that this ratio has a very high variance and hinders the training. Like the similar observations made in Nachum et al. (2018); Fedus et al. (2020), even without importance correction our method is able to perform well empirically2.

1In this paper we focus on non-image observations that can be processed with MLPs, although our method doesn't have any assumption about the observation space.
2One possible reason is that the deep RL process is "highly non-stationary anyway, due to changing policies, state distributions and bootstrap targets" (Schaul et al., 2016)." }, { "heading": "4 EXPERIMENTS", "text": "Environments We evaluate success rate and sample efficiency across two environment suites, as shown in Figure 2. Important details are presented here, with more information in appendix Section B. The first suite consists of two 7-DOF reaching and pushing environments evaluated in Chua et al. (2018). They both emulate a one-armed PR2 robot. The tasks have sparse rewards: the agent gets a reward of 0 at every timestep where the goal is not achieved, and 1 upon achievement. There is also a small L2 action penalty applied. In 7-DOF REACHER, the goal is achieved when the gripper reaches a 3D goal position. In 7-DOF PUSHER, the goal is to push an object to a 3D goal position. Episodes have a fixed length of 100; an episode counts as a success if the goal is achieved at its final step.

We also propose another suite of environments called SOCIALROBOT3. We construct two sparse reward robotic navigation and manipulation tasks, GOALTASK and KICKBALL. In GOALTASK, the agent gets a reward of 1 when it successfully navigates to a goal, -1 if the goal becomes too far, -0.5 every time it is too close to a distractor object, and 0 otherwise. In KICKBALL, the agent receives a reward of 1 for successfully pushing a ball into the goal, 0 otherwise, and has the same distractor object penalty. At the beginning of each episode, both the agent and the ball are spawned randomly. Both environments contain a small L2 action penalty, and terminate an episode upon a success.

Comparison methods One baseline algorithm for comparison is standard SAC (Haarnoja et al., 2018), the building block of our hierarchical method. To verify whether our worker policy can just be replaced with a naïve action repetition strategy, we compare with SAC+ActRepeat, which repeats actions for the same length $K$ as our option interval. We also compare against HIRO (Nachum et al., 2018), a data-efficient hierarchical method with importance-based option relabeling, and HiPPO (Li et al., 2020), which trains the lower-level and higher-level policies together with one unified PPO-based objective. Both are state-of-the-art hierarchical methods proposed to solve sparse reward tasks. Similar to our work, HiPPO makes no assumptions about options; however, it utilizes a discrete option space and its options are trained with environment reward.

We implement HIDIO based on an RL framework called ALF4.
3https://github.com/HorizonRobotics/SocialRobot
4https://github.com/HorizonRobotics/alf
A comprehensive hyperparameter search is performed for every method, with a far greater search space for HiPPO and HIRO than for our method HIDIO to ensure maximum fairness in comparison; details are presented in Appendix D.

Evaluation For every evaluation point during training, we evaluate the agent with the current deterministic policies (by taking the arg max of action distributions) for a fixed number of episodes and compute the mean success rate. We plot the mean evaluation curve over 3 randomly seeded runs, with standard deviations shown as the shaded area around the curve." }, { "heading": "4.1 WORKER DESIGN CHOICES", "text": "We ask and answer questions about the design choices in HIDIO specific to the worker policy $\pi_\phi$.

1. What sub-trajectory feature results in good option discovery? We evaluate all six features proposed in Section 3.3 in all four environments. These features are selected to evaluate how different types of sub-trajectory information affect option discovery and final performance. They encompass varying types of both local and global sub-trajectory information. We plot comparisons of sample efficiency and final performance in Figure 3 across all environments (solid lines), finding that Action, StateAction, and StateDiff are generally among the top performers. StateAction includes the current action and next state, encouraging $\pi_\phi$ to differentiate its options with different actions even at similar states. Similarly, Action includes the option initial state and current action, encouraging option diversity by differentiating between actions conditioned on initial states. Meanwhile, StateDiff simply encodes the difference between the next and current state, encouraging $\pi_\phi$ to produce options with different state changes at each step.

2. How do soft shortsighted workers (Soft) compare against hard shortsighted workers (Hard)? In Figure 3, we plot all features with Soft in dotted lines. We can see that in general there is not much difference in performance between Hard and Soft, except some extra instability of Soft in REACHER regarding the StateConcat and State features. One reason for this similar general performance could be that, since our options are very short-term in Hard, the scheduler policy has the opportunity to switch to a good option before the current one leads to bad consequences. In a few cases, Hard seems better learned, perhaps due to easier value bootstrapping for the worker." }, { "heading": "4.2 COMPARISON RESULTS", "text": "We compare the three best Hard sub-trajectory features from Section 4.1 against the SAC baselines and hierarchical RL methods across all four environments in Figure 4. Generally we see that HIDIO (solid lines) achieves greater final performance with superior sample efficiency compared to the other methods. Both SAC and SAC+ActRepeat perform poorly across all environments, and all baseline methods perform significantly worse than HIDIO on REACHER, GOALTASK, and KICKBALL.

In PUSHER, HiPPO displays competitive performance, rapidly improving from the start. However, all three HIDIO instantiations achieve nearly 100% success rates while HiPPO is unable to do so. Furthermore, HIRO and SAC+ActRepeat take much longer to start performing well, but never achieve success rates similar to HIDIO's. HIDIO is able to solve REACHER while HiPPO achieves only about a 60% success rate at best. Meanwhile, HIRO, SAC+ActRepeat, and SAC are unstable or non-competitive.
REACHER is a difficult exploration problem as the arm starts far from the goal position, and we see that HIDIO's automatically discovered options ease exploration for the higher-level policy to consistently reach the goal. HIDIO performs well on GOALTASK, achieving 60-80% success rates, while the task is too challenging for every other method. In KICKBALL, the most challenging task, HIDIO achieves 30-40% success rates while every other method again learns poorly, highlighting the need for the intrinsic option discovery of HIDIO in these environments.

In summary, HIDIO demonstrates greater sample efficiency and final reward gains over all other baseline methods. Regular RL (SAC) fails on all four environments, and while HiPPO is a strong baseline on PUSHER and REACHER, it is still outperformed in both by HIDIO. All other methods fail on GOALTASK and KICKBALL, while HIDIO is able to learn and perform better in both. This demonstrates the importance of the intrinsic, short-term option discovery employed by HIDIO, where the options are diverse enough to be useful for both exploration and task completion.

4.3 JOINT $\pi_\phi$ AND $\pi_\theta$ TRAINING

We ask the next question: is jointly training $\pi_\theta$ and $\pi_\phi$ necessary? To answer this, we compare HIDIO against a pre-training baseline where we first pre-train $\pi_\phi$ with uniformly sampled options $u$ for a fraction $\rho$ of the total number of training time steps, and then fix $\pi_\phi$ while training $\pi_\theta$ for the remaining $(1 - \rho)$ fraction of time steps. This is essentially using pre-trained options for downstream higher-level tasks, as demonstrated in DIAYN (Eysenbach et al., 2019). We conduct this experiment with the StateAction feature on both KICKBALL and PUSHER, with $\rho \in \{\frac{1}{16}, \frac{1}{8}, \frac{1}{4}\}$. The results are shown in Figure 6. We can see that in PUSHER, fewer pre-training time steps are more sample-efficient, as the environment is simple and options can be learned from a small amount of samples. The nature of PUSHER also only requires options that can be learned independently of the scheduler policy's evolution. Nevertheless, the pre-training baselines seem less stable. In KICKBALL, the best pre-training baseline uses $\rho = \frac{1}{8}$ of the total time steps. However, without the joint training scheme of HIDIO, the learned options cannot be used as efficiently for the difficult obstacle avoidance, navigation, and ball manipulation subtasks required for performing well." }, { "heading": "4.4 OPTION BEHAVIORS", "text": "Finally, since options discovered by HIDIO in our sparse reward environments help it achieve superior performance, we ask: what do useful options look like? To answer this question, after training, we sample options from the scheduler $\pi_\theta$ to visualize their behaviors in different environments in Figure 5. For each sampled option $u$, we fix it until the end of an episode and use the worker $\pi_\phi$ to output actions given $u$. We can see that the options learned by HIDIO are low-level navigation and manipulation skills useful for the respective environments. We present more visualizations in Figure 9 and more analysis in Section C.2 in the appendix. Furthermore, we present an analysis of task performance for different option lengths in appendix Section C.1 and Figures 7 and 8." }, { "heading": "5 RELATED WORK", "text": "Hierarchical RL Much of the previous work in hRL makes assumptions about the task structure and/or the skills needed to solve the task. While obtaining promising results under specific settings, these methods may have difficulties with different scenarios.
For example, SAC-X (Riedmiller et al., 2018) requires manually designing auxiliary subtasks as skills to solve a given downstream task. SNN4HRL (Florensa et al., 2016) is geared towards tasks with pre-training and downstream components. Lee et al. (2019; 2020) learn to modulate or compose given primitive skills that are customized for their particular robotics tasks. Ghavamzadeh & Mahadevan (2003) and Sohn et al. (2018) operate under the assumption that tasks can be manually decomposed into subtasks.

The feudal reinforcement learning proposal (Dayan & Hinton, 1993) has inspired another line of works (Vezhnevets et al., 2017; Nachum et al., 2018; Levy et al., 2019; Rafati & Noelle, 2019) which make higher-level manager policies output goals for lower-level worker policies to achieve. Usually the goal space is a subspace of the state space or defined according to the task so that lower-level rewards are easy to compute. This requirement of manually "grounding" goals in the environment poses generalization challenges for tasks that cannot be decomposed into state- or goal-reaching.

The MAXQ decomposition (Dietterich, 2000) defines an hRL task decomposition by breaking up the target MDP into a hierarchy of smaller MDPs such that the value function in the target MDP is represented as the sum of the value functions of the smaller ones. This has inspired works that use such decompositions (Mehta et al., 2008; Winder et al., 2020; Li et al., 2017) to learn structured, hierarchical world models or policies to complete target tasks or perform transfer learning. However, building such hierarchies makes these methods limited to MDPs with discrete action spaces.

Our method HIDIO makes few assumptions about the specific task at hand. It follows from the options framework (Sutton et al., 1999), which has recently been applied to continuous domains (Bacon et al., 2017), spawning a diverse set of recent hierarchical options methods (Bagaria & Konidaris, 2020; Klissarov et al., 2017; Riemer et al., 2018; Tiwari & Thomas, 2019; Jain et al., 2018). HIDIO automatically learns intrinsic options, avoiding explicit initiation or termination policies dependent on the task at hand. HiPPO (Li et al., 2020), like HIDIO, also makes no major assumptions about the task, but does not employ self-supervised learning for training the lower-level policy.

Self-supervised option/skill discovery There are also plenty of prior works which attempt to learn skills or options without task reward. DIAYN (Eysenbach et al., 2019) and VIC (Gregor et al., 2016) learn skills by maximizing the mutual information between trajectory states and their corresponding skills. VALOR (Achiam et al., 2018) learns options by maximizing the probability of options given their resulting observation trajectory. DADS (Sharma et al., 2019a) learns skills that are predictable by dynamics models. DISCERN (Warde-Farley et al., 2019) maximizes the mutual information between goal and option termination states to learn a goal-conditioned reward function. Brunskill & Li (2014) learn options in discrete MDPs that are guaranteed to improve a measure of sample complexity. Portable Option Discovery (Topin et al., 2015) discovers options by merging options from source policies to apply to some target domain. Eysenbach et al. (2019); Achiam et al. (2018); Sharma et al. (2019a); Lynch et al. (2020) demonstrate pre-trained options to be useful for hRL.
These methods usually pre-train options in an initial stage separate from downstream task learning; few works directly integrate option discovery into a hierarchical setting. For higher-dimensional input domains, Lynch et al. (2020) learns options from human-collected robot interaction data for image-based, goal-conditioned tasks, and Chuck et al. (2020) learns a hierarchy of options by discovering objects from environment images and forming options which can manipulate them. HIDIO can also be applied to image-based environments by replacing fully-connected layers with convolutional layers in the early stages of the policy and discriminator networks. However, we leave addressing the possible practical challenges arising in this process to future work." }, { "heading": "6 CONCLUSION", "text": "Towards solving difficult sparse reward tasks, we propose a new hierarchical reinforcement learning method, HIDIO, which can learn task-agnostic options in a self-supervised manner and simultaneously learn to utilize them to solve tasks. We evaluate several different instantiations of the discriminator of HIDIO for providing intrinsic rewards for training the lower-level worker policy. We demonstrate the effectiveness of HIDIO compared against other reinforcement learning methods in achieving high rewards with better sample efficiency across a variety of robotic navigation and manipulation tasks." }, { "heading": "A PSEUDO CODE FOR HIDIO", "text": "Algorithm 1: Hierarchical RL with Intrinsic Options Discovery" }, { "heading": "Input:", "text": "$T$: episode length; $K$: option interval; $M$: batches per iteration; $B$: batch size; $\alpha$: learning rate; scheduler $\pi_\theta(u_h|s_{h,0})$; worker $\pi_\phi(a_{h,k}|\bar{s}_{h,k}, u_h)$; discriminator $q_\psi(u_h|\bar{a}_{h,k}, \bar{s}_{h,k+1})$; environment dynamics $\mathcal{P}(s_{h,k+1}|s_{h,k}, a_{h,k})$.
Output: learned parameters $\theta$, $\phi$, and $\psi$.
Initialize: random model parameters $\theta$, $\phi$, and $\psi$; empty replay buffers $D_{scheduler}$ and $D_{worker}$.
while termination not met do
  /* Data collection */
  for scheduler step $h = 0 \dots T/K - 1$ do
    Sample an option $u_h \sim \pi_\theta(\cdot|s_{h,0})$.
    for worker step $k = 0 \dots K - 1$ do
      Sample an action $a_{h,k} \sim \pi_\phi(\cdot|\bar{s}_{h,k}, u_h)$.
      Step through the environment: $s_{h,k+1} \sim \mathcal{P}(\cdot|s_{h,k}, a_{h,k})$.
      $\bar{a}_{h,k}, \bar{s}_{h,k+1} \leftarrow [\bar{a}_{h,k-1}, a_{h,k}], [\bar{s}_{h,k}, s_{h,k+1}]$
      $D_{worker} \leftarrow D_{worker} \cup (u_h, \bar{a}_{h,k}, \bar{s}_{h,k+1})$
    end
    $R_h \leftarrow \sum_{k=0}^{K-1} r(s_{h,k}, a_{h,k}, s_{h,k+1})$
    $D_{scheduler} \leftarrow D_{scheduler} \cup (s_{h,0}, u_h, s_{h+1,0}, R_h)$
  end
  /* Model training */
  for batch $m = 0 \dots M - 1$ do
    /* Scheduler training */
    Uniformly sample transitions $\{(s_t, u_t, s_{t+1}, R_t)\}_{b=1}^{B} \sim D_{scheduler}$.
    Compute gradient $\Delta\theta$ according to Eq. 3; update $\theta \leftarrow \theta + \alpha\Delta\theta$.
    /* Worker training */
    Uniformly sample transitions $\{(u_h, \bar{a}_{h,k}, \bar{s}_{h,k+1})\}_{b=1}^{B} \sim D_{worker}$.
    Compute intrinsic rewards $r^{lo}_{h,k} \leftarrow q_\psi(u_h|\bar{a}_{h,k}, \bar{s}_{h,k+1})$.
    Compute gradients $\Delta\phi$ and $\Delta\psi$ according to Eq. 5; update $\phi \leftarrow \phi + \alpha\Delta\phi$ and $\psi \leftarrow \psi + \alpha\Delta\psi$.
  end
end
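To complement the pseudo code, here is a minimal Python sketch of the model-training half of Algorithm 1 (data collection mirrors the rollout loop illustrated in Section 3). All module and buffer interfaces are placeholder assumptions, not the released implementation.

```python
# Minimal sketch of one training iteration of Algorithm 1.
def train_iteration(sched_buf, work_buf, scheduler, worker, disc, M=100, B=1024, beta=0.01):
    for _ in range(M):
        # Scheduler: ordinary off-policy RL on (s, u, s', R) transitions (Eq. 3).
        s, u, s_next, R = sched_buf.sample(B)
        scheduler.update(s, u, s_next, R)
        # Worker: relabel intrinsic rewards with the *current* discriminator
        # before each update (Eq. 5), then do a max-entropy (SAC) update.
        u, a_bar, s_bar = work_buf.sample(B)
        r_lo = disc.log_prob(u, a_bar, s_bar) - beta * worker.log_prob(a_bar, s_bar, u)
        worker.update(a_bar, s_bar, u, r_lo)
        # Discriminator: maximize likelihood of options given sub-trajectories.
        disc.update(u, a_bar, s_bar)
```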
" }, { "heading": "B MORE ENVIRONMENT DETAILS", "text": "" }, { "heading": "B.0.1 PUSHER AND REACHER", "text": "These environments both have a time horizon of 100 with no early termination: each episode always runs for 100 steps regardless of goal achievement. For both, a success is when the agent achieves the goal at the final step of an episode. In REACHER, observations are 17-dimensional, including the positions, angles, and velocities of the robot arm; in PUSHER, observations also include the 3D object position. Both include the goal position in the observation space. Actions are 7-dimensional vectors for joint velocity control. The action range is [−20, 20] in REACHER and [−2, 2] in PUSHER. There is an action penalty in both environments: at every timestep the squared L2 norm of the agent action is subtracted from the reward. In PUSHER, this penalty is multiplied by a coefficient of 0.001; in REACHER, by 0.0001." }, { "heading": "B.0.2 GOALTASK AND KICKBALL", "text": "For both SOCIALROBOT environments, an episode terminates early when either a success is reached or the goal is out of range. For each episode, the positions of all objects (including the agent) are randomly picked. In GOALTASK, observations are 18-dimensional, including egocentric positions, distances, and directions from the agent to different objects, while in KICKBALL they are absolute positions and directions. In KICKBALL, the agent receives a reward of 1 for successfully pushing a ball into the goal (episode termination) and 0 otherwise. At the beginning of each episode, the ball is spawned randomly inside the neighborhood of the agent. Three distractor objects are included on the ground to increase task difficulty. In GOALTASK, the number of distractor objects increases to 5. Both environments contain a small L2 action penalty: at every time step the squared L2 norm of the agent action, multiplied by 0.01, is subtracted from the reward. GOALTASK has a time horizon of 100 steps, while KICKBALL's horizon is 200. In KICKBALL, observations are 30-dimensional, including absolute poses and velocities of the goal, the ball, and the agent. Both GOALTASK and KICKBALL use the same navigation robot PIONEER2DX, which has 2-dimensional actions that control the angular velocities (scaled to [−1, 1]) of the two wheels." }, { "heading": "C OPTION DETAILS", "text": "" }, { "heading": "C.1 OPTION LENGTH ABLATION", "text": "We ablate the option length K in all four environments on the three best HIDIO instantiations in Figure 7. K ∈ {1, 3, 5} timesteps per option are shown, with K = 3 and K = 5 performing similarly across all environments, but K = 1 performing very poorly in comparison. K = 1 provides no temporal abstraction, resulting in worse sample efficiency in PUSHER and REACHER, and failing to learn in GOALTASK and KICKBALL. Although K = 5 and K = 3 are generally similar, we see in GOALTASK that K = 5 results in better performance than K = 3 across all three instantiations, demonstrating the potential benefit of longer temporal abstraction lengths.

We also plot the distribution of (x, y) velocities5 in GOALTASK and (x, y) coordinates in KICKBALL of randomly sampled options of different lengths in Figure 8. Despite the fact that these two dimensions only represent a small subspace of the entire (30-dimensional) state space, they still demonstrate a difference in option behavior at different option lengths. We can see that as the option length K increases, the option behaviors become more consistent within a trajectory. Meanwhile, regarding coverage, K = 1's (blue) trajectory distribution in both environments is less concentrated near the center, while K = 5 (green) is the most concentrated at the center. K = 3 (orange) lies somewhere in between. We believe that this difference in behavior signifies a trade-off between the coverage of the state space and how consistent the learned options can be depending on the option length.
Given the same entropy coefficient (β in Eq. 5), with longer option lengths, it is likely that the discriminator can more easily discriminate the sub-trajectories created by these options, so that their coverage does not have to be as wide for the worker policy to obtain high intrinsic rewards. Meanwhile, with shorter option lengths, the shorter sub-trajectories have to be more distinct for the discriminator to be able to successfully differentiate between the options." }, { "heading": "C.2 OPTION VISUALIZATIONS", "text": "We visualize more option behaviors in Figure 9, produced in the same way as in Figure 5 and as detailed in Section 4.4. The top 4 picture reels are from KICKBALL. We see that KICKBALL options lead to varied directional driving behaviors that can be utilized for efficient navigation. For example, the second, third, and fourth highlight options that produce right-turning behavior, albeit at different speeds and angles. The option in the third reel is a quick turn that results in the robot tumbling over into an unrecoverable state, but the options in the second and fourth reels turn more slowly and do not result in the robot flipping. The first option simply proceeds forward from the robot starting position, kicking the ball into the goal.

The bottom 4 reels are from PUSHER. Each option results in different sweeping behaviors with varied joint positioning and arm height. These sweeping and arm-folding behaviors, when utilized in short sub-trajectories, are useful for controlling where and how to move the arm to push the puck into the goal." }, { "heading": "D HYPERPARAMETERS", "text": "To ensure a fair comparison across all methods, we perform a hyperparameter search over the following values for each algorithm and suite of environments.

5Velocities are relative to the agent's yaw rotation. Because GOALTASK has egocentric inputs, the agent is not aware of the absolute (x, y) coordinates in this task." }, { "heading": "D.1 PUSHER AND REACHER", "text": "Shared hyperparameters across all methods are listed below (where applicable, and except when overridden by hyperparameters listed for each individual method). For all methods, we take the hyperparameters that perform best across 3 random seeds in terms of the area under the evaluation success curve (AUC) in the PUSHER environment.

• Number of parallel actors/environments per rollout: 20
• Steps per episode: 100
• Batch size: 2048
• Learning rate: $10^{-4}$ for all network modules
• Policy/Q network hidden layers: (256, 256, 256) with ReLU non-linearities
• Polyak averaging coefficient for target Q: 0.999
• Target Q update interval (training iterations): 1
• Training batches per iteration: 100
• Episodes per evaluation: 50
• Initial environment steps for data collection before training: 10000

Rollouts and training iterations are performed alternately, one after the other. The rollout length searched below refers to how many time steps in each environment are taken per rollout/training iteration, effectively controlling the ratio of gradient steps to environment steps. A smaller rollout length corresponds to a higher ratio. This ratio is also searched over for HIPPO and HIRO.
Other hyperparameters searched separately for each algorithm are listed below, and selected ones are bolded.

D.1.1 SAC
• Target entropy min prob ∆6: {0.1, 0.2, 0.3}
• Replay buffer length per parallel actor: {50000, 200000}
• Rollout length: {12, 25, 50, 100}" }, { "heading": "D.1.2 SAC W/ ACTION REPETITION", "text": "• Action repetition length7: 3
• Rollout length: {4, 8, 16, 33}
Other hyperparameters are kept the same as the optimal SAC ones." }, { "heading": "D.1.3 HIDIO", "text": "The hyperparameters of HIDIO were mostly heuristically chosen, the full hyperparameter search space being too large.
• Latent option $u$ vector dimension ($D$): {8, 12}
• Policy/Q network hidden layers for $\pi_\phi$: (128, 128, 128)
• Steps per option ($K$): 3
• $\pi_\phi$ has a fixed entropy coefficient $\alpha$ of 0.01. Target entropy min prob ∆ for $\pi_\theta$ is 0.2.
• Discriminator network hidden layers: (64, 64)
• Replay buffer length per parallel actor: {50000, 200000}
• Rollout length: {25, 50, 100}

6The target entropy used for automatically adjusting $\alpha$ is calculated as $\sum_i [\ln(M_i - m_i) + \ln \Delta]$, where $M_i$/$m_i$ are the maximum/minimum value of action dimension $i$. Intuitively, the target distribution concentrates on a segment of length $(M_i - m_i)\Delta$ with a constant probability.
7Chosen to match the option interval $K$ of HIDIO.

D.1.4 HIRO
• Steps per option: {3, 5, 8}
• Replay buffer size (total): {500000, 2000000}
• Meta action space (actions are relative, e.g., meta-action is current obs + action): (-np.ones(obs space - 3D goal pos)*2, np.ones(obs space - 3D goal pos)*2)
• Policy stddev noise: {0.1, 0.3, 0.5}
• Number of gradient updates per training iteration: {100, 200, 400}" }, { "heading": "D.1.5 HIPPO", "text": "For most hyperparameters, the search ranges chosen were derived after discussion with the first author of HiPPO.
• Learning rate: $3 \times 10^{-4}$
• Policy network hidden layers: (256, 256)
• Skill selection network hidden layers: {(32, 32), (128, 64)}
• Latent skill vector size: {5, 10, 15}
• PPO clipping parameter: {0.05, 0.1}
• Time commitment range: {(2, 5), (3, 7)}
• Policy training steps per epoch: {25, 50, 100}" }, { "heading": "D.2 SOCIALROBOT", "text": "For all methods, we select the hyperparameters with the best area under the evaluation success curve (AUC) in the KICKBALL environment, and apply them to both KICKBALL and GOALTASK. The shared hyperparameters are as follows (if applicable to the algorithm, and except when overridden by the respective algorithm's list of hyperparameters):

• Number of parallel actors/environments per rollout: 10
• Steps per episode: 100 (GOALTASK), 200 (KICKBALL)
• Batch size: 1024
• Learning rate: $5 \times 10^{-4}$ for all network modules
• Policy/Q network hidden layers: (256, 256, 256) with ReLU non-linearities
• Polyak averaging coefficient for target Q: 0.95
• Target Q update interval (training iterations): 1
• Training batches per iteration: 100
• Episodes per evaluation: 100
• Evaluation interval (training iterations): 100
• Initial environment steps for data collection before training: 100000

The training terminology here generally follows Section D.1.

D.2.1 SAC
• Target entropy min prob ∆: {0.1, 0.2, 0.3}
• Replay buffer length per parallel actor: {20000, 100000}
• Rollout length: {12, 25, 50, 100}" }, { "heading": "D.2.2 SAC W/ ACTION REPETITION", "text": "• Action repetition length8: 3
• Rollout length: {4, 8, 16, 33}
Other hyperparameters are kept the same as the optimal SAC ones.
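As a small concrete aid for footnote 6 above, the following hedged helper computes the SAC target entropy from the action bounds and the min-prob ∆; the function name and interface are illustrative, not part of any library.

```python
# Minimal helper mirroring footnote 6: sum_i [ln(M_i - m_i) + ln(Delta)].
import math

def target_entropy(action_low, action_high, delta):
    return sum(math.log(hi - lo) + math.log(delta)
               for lo, hi in zip(action_low, action_high))

# e.g., for the 2-D wheel actions in [-1, 1] with Delta = 0.2:
# target_entropy([-1, -1], [1, 1], 0.2) == 2 * (math.log(2) + math.log(0.2))
```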
}, { "heading": "D.2.3 HIDIO", "text": "Due to the large hyperparameter search space, we only search over the option vector size and rollout length, and select everything else heuristically.\n• Latent option u vector dimension (D): {4, 6} • Policy/Q network hidden layers for πφ (128, 128, 128) • Steps per option (K): 3 • πφ has a fixed entropy coefficient α of 0.01. Target entropy min prob ∆ for πθ is 0.2. • Discriminator network hidden layers: (32, 32) • Replay buffer length per parallel actor: 20000 • Rollout Length: {50, 100}\nD.2.4 HIRO\n• Learning rate: 3× 10−4\n• Steps per option: {3, 5, 8} • Replay buffer size (total): {500000, 2000000} • Meta action space (actions are relative, e.g., meta-action is current obs + action):\n– GOALTASK: (-np.ones(obs space) * 2, np.ones(obs space) * 2)\n– KICKBALL: (-np.ones(obs space - goal space) * 2, np.ones(obs space - goal space) * 2) (because the goal position is given but will not change in the observation space)\n• Policy stddev noise {0.1, 0.3, 0.5} • Number of gradient updates per training iteration: {100, 200, 400}" }, { "heading": "D.2.5 HIPPO", "text": "• Learning rate: 3× 10−4\n• Policy network hidden layers: {(64, 64), (256, 256)} • Skill selection network hidden layers: {(32, 32), (128, 64)} • Latent skill vector size: {4, 8} • PPO clipping parameter: {0.05, 0.1} • Time commitment range: {(2, 5), (3, 7)} • Policy training steps per epoch: {25, 50, 100}\n8Chosen to match the option interval K of HIDIO." } ]
2021
DISCOVERING INTRINSIC OPTIONS
SP:8bdbbc8a8bc54620675393fd822f56fb9ec53ffc
[ "This work addresses practical challenges in applying full matrix pre-conditioner methods (such as Shampoo) on problems involving large datasets and architectures trained using a distributed setup. In particular, this work presents a practical extension for the Shampoo algorithm by (1) using only a left or right preconditioner for large layers (2) computing inverse pth roots via coupled Newton iteration algorithms (3) distributing preconditioner computation across CPU cores in a CPU-GPU/TPU cluster and (4) delaying preconditioner computation to occur only once per several steps. The proposed modifications lead to an implementation of Shampoo that consistently decreases the number of training steps and in certain cases provides a direct wall time improvement over Adagrad/Adam. " ]
Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, which involve second derivatives and/or second-order statistics of the data, are far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory and communication costs. In an attempt to bridge this gap between theoretical and practical optimization, we present a scalable implementation of a second-order preconditioned method (concretely, a variant of full-matrix Adagrad) that, along with several critical algorithmic and numerical improvements, provides significant convergence and wall-clock time improvements compared to conventional first-order methods on state-of-the-art deep models. Our novel design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models, consisting of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance compared to state-of-the-art on very large learning tasks such as machine translation with Transformers, language modeling with BERT, click-through rate prediction on Criteo, and image classification on ImageNet with ResNet-50.
[]
[ { "authors": [ "Naman Agarwal", "Brian Bullins", "Elad Hazan" ], "title": "Second order stochastic optimization in linear time", "venue": "arXiv preprint arXiv:1602.03943,", "year": 2016 }, { "authors": [ "Naman Agarwal", "Brian Bullins", "Xinyi Chen", "Elad Hazan", "Karan Singh", "Cyril Zhang", "Yi Zhang" ], "title": "The case for full-matrix adaptive regularization", "venue": null, "year": 2018 }, { "authors": [ "Naman Agarwal", "Rohan Anil", "Elad Hazan", "Tomer Koren", "Cyril Zhang" ], "title": "Disentangling adaptive gradient methods from learning rates", "venue": "arXiv preprint arXiv:2002.11803,", "year": 2020 }, { "authors": [ "Jimmy Ba", "James Martens", "Roger Grosse" ], "title": "Distributed second-order optimization using kroneckerfactored approximations", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "Ondrej Bojar", "Christian Buck", "Christian Federmann", "Barry Haddow", "PhilippKoehn", "Johannes Leveling", "Christof Monz", "Pavel Pecina", "Matt Post", "Herve Saint-Amand", "Radu Soricut", "Lucia Specia", "Ale s Tamchyna" ], "title": "Findings of the 2014 workshop on statistical machine translation", "venue": "In Proceedings of the Ninth Workshop on Statistical Machine Translation,", "year": 2014 }, { "authors": [ "Raghu Bollapragada", "Jorge Nocedal", "Dheevatsa Mudigere", "Hao-Jun Shi", "Ping Tak Peter Tang" ], "title": "A progressive batching l-bfgs method for machine learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew R Conn", "Nicholas IM Gould", "Philippe L Toint" ], "title": "Trust region methods", "venue": null, "year": 2000 }, { "authors": [ "Criteo Labs" ], "title": "Criteo releases industry’s largest-ever dataset for machine learning to academic community", "venue": "URL https://www.criteo.com/news/press-releases/2015/07/ criteo-releases-industrys-largest-ever-dataset/", "year": 2015 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc' Aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Murat A Erdogdu", "Andrea Montanari" ], "title": "Convergence rates of sub-sampled newton methods", "venue": "In Proceedings of the 28th International Conference onNeural Information Processing Systems-Volume", "year": 2015 }, { "authors": [ "Roger Fletcher" ], "title": "Practical methods of optimization", "venue": null, "year": 2013 }, { "authors": [ "Thomas George", "César Laurent", "Xavier Bouthillier", "Nicolas Ballas", "Pascal Vincent" ], "title": "Fast approximate natural gradient descent in a Kronecker factored eigenbasis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alon Gonen", "Shai Shalev-Shwartz" ], "title": "Faster sgd using sketched conditioning", "venue": "arXiv preprint arXiv:1506.02649,", "year": 2015 }, { "authors": [ "Chun-Hua Guo", "Nicholas J Higham" ], "title": "A Schur-Newton method for the matrix p’th root and its inverse", "venue": "SIAM Journal On Matrix Analysis and Applications,", "year": 2006 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Shampoo: Preconditioned stochastic tensor optimization", "venue": "In Proceedings of the 35th International Conference onMachine Learning,", "year": 2018 }, { "authors": [ "Elad Hazan" ], "title": "Introduction to online convex optimization", "venue": "Foundations and Trends in Optimization,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Tom Heskes" ], "title": "On “natural” learning and pruning in multilayered perceptrons", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Nicholas J Higham", "Srikara Pranesh" ], "title": "Simulating low precision floating-point arithmetic", "venue": "SIAM Journal on Scientific Computing,", "year": 2019 }, { "authors": [ "Bruno Iannazzo" ], "title": "On the Newton method for the matrix p-th root", "venue": "SIAM journal on matrix analysis and applications,", "year": 2006 }, { "authors": [ "Norman P Jouppi", "Cliff Young", "Nishant Patil", "David Patterson", "Gaurav Agrawal", "Raminder Bajwa", "Sarah Bates", "Suresh Bhatia", "Nan Boden", "Al Borchers" ], "title": "In-datacenter performance analysis of a tensor processing unit", "venue": "In Computer Architecture (ISCA),", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "FrederikKunstner", "PhilippHennig", "Lukas Balles" ], "title": "Limitations of the empirical fisher approximation for natural gradient descent", "venue": "In Advances in 
Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adrian S Lewis", "Michael L Overton" ], "title": "Nonsmooth optimization via quasi-newton methods", "venue": "Mathematical Programming,", "year": 2013 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with Kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "H Brendan McMahan", "Matthew Streeter" ], "title": "Adaptive bound optimization for online convex optimization", "venue": "COLT", "year": 2010 }, { "authors": [ "Maxim Naumov", "Dheevatsa Mudigere", "Hao-Jun Michael Shi", "Jianyu Huang", "Narayanan Sundaraman", "Jongsoo Park", "Xiaodong Wang", "Udit Gupta", "Carole-Jean Wu", "Alisson G Azzolini" ], "title": "Deep learning recommendation model for personalization and recommendation systems", "venue": "arXiv preprint arXiv:1906.00091,", "year": 2019 }, { "authors": [ "Jorge Nocedal" ], "title": "Updating quasi-newton matrices with limited storage", "venue": "Mathematics of computation,", "year": 1980 }, { "authors": [ "Kazuki Osawa", "Yohei Tsuji", "Yuichiro Ueno", "Akira Naruse", "Rio Yokota", "Satoshi Matsuoka" ], "title": "Large-scale distributed second-order optimization using kronecker-factored approximate curvature for deep convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Michael L Overton" ], "title": "Numerical computing with IEEE floating point arithmetic", "venue": null, "year": 2001 }, { "authors": [ "Mert Pilanci", "Martin J. Wainwright" ], "title": "Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence", "venue": "SIAM Journal on Optimization,", "year": 2017 }, { "authors": [ "Haidong Rong", "Yangzihao Wang", "Feihu Zhou", "Junjie Zhai", "Haiyang Wu", "Rui Lan", "Fan Li", "Han Zhang", "Yuekui Yang", "Zhenyu Guo" ], "title": "Distributed equivalent substitution training for large-scale recommender systems", "venue": "In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Mike Schuster", "Kaisuke Nakajima" ], "title": "Japanese and Korean voice search", "venue": "In ICASSP,", "year": 2012 }, { "authors": [ "Shai Shalev-Shwartz" ], "title": "Online learning and online convex optimization", "venue": "Foundations and Trends in Machine Learning,", "year": 2012 }, { "authors": [ "Noam Shazeer", "Youlong Cheng", "Niki Parmar", "Dustin Tran", "Ashish Vaswani", "Penporn Koanantakool", "Peter Hawkins", "HyoukJoong Lee", "Mingsheng Hong", "Cliff Young" ], "title": "Mesh-tensorflow: Deep learning for supercomputers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jonathan Shen", "Patrick Nguyen", "Yonghui Wu", "Zhifeng Chen" ], "title": "Lingvo: a modular and scalable framework for sequence-to-sequence modeling, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], 
"title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ruoxi Wang", "Bin Fu", "Gang Fu", "Mingliang Wang" ], "title": "Deep & cross network for ad click predictions", "venue": "In Proceedings of the ADKDD’17,", "year": 2017 }, { "authors": [ "Shibo Wang", "Pankaj Kanwar" ], "title": "Bfloat16: The secret to high performance on cloud tpus", "venue": "https://cloud.google.com/blog/products/ai-machine-learning/ bfloat16-the-secret-to-high-performance-on-cloud-tpus,", "year": 2019 }, { "authors": [ "Carole-Jean Wu", "Robin Burke", "Ed Chi", "Joseph Konstan", "Julian McAuley", "Yves Raimond", "Hao Zhang" ], "title": "Developing a recommendation benchmark for mlperf training and inference", "venue": null, "year": 2003 }, { "authors": [ "Peng Xu", "Jiyan Yang", "Farbod Roosta-Khorasani", "Christopher Ré", "Michael W Mahoney" ], "title": "Subsampled newtonmethodswith non-uniform sampling", "venue": "InAdvances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yuanzhong Xu", "HyoukJoong Lee", "Dehao Chen", "Hongjun Choi", "Blake Hechtman", "Shibo Wang" ], "title": "Automatic cross-replica sharding of weight update in data-parallel training", "venue": "arXiv preprint arXiv:2004.13336,", "year": 2020 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training bert in 76", "venue": null, "year": 1904 }, { "authors": [], "title": "K-FAC preconditioners are closely related to Shampoo preconditioners, especially when one uses the empirical Fisher (Kunstner et al., 2019). The main difficulty in implementing K-FAC on a model is that current optimizer APIs", "venue": null, "year": 2019 }, { "authors": [ "Vaswani" ], "title": "d/t, following the suggestion", "venue": null, "year": 2017 } ]
[ { "heading": "1 Introduction", "text": "Second order methods are among the most powerful algorithms in mathematical optimization. Algorithms in this family often use a preconditioning matrix to transform the gradient before applying each step. Classically, the preconditioner is the matrix of second-order derivatives (i.e., the Hessian) in the context of exact deterministic optimization (e.g., Fletcher, 2013; Lewis & Overton, 2013; Nocedal, 1980). While second-order methods often have significantly better convergence properties than first-order methods, the size of typical problems prohibits their use in practice, as they require quadratic storage and cubic computation time for each gradient update. Approximate algorithms such as quasi-Newton methods are aimed at significantly reducing these requirements; nonetheless, they still impose non-trivial memory costs equivalent to storing several copies of the model (and often quadratic computation, as in the popular two-loop recursion (Nocedal, 1980)), which severely limits their use at the immense scale of present-day deep learning.\nArguably, one of the greatest challenges of modern optimization is to bridge this gap between theoretical and practical optimization towards making second-order methods feasible to implement and deploy at immense scale. Besides the compelling scientific and mathematical developments it may stimulate, this challenge has also a clear real-world significance: recent practice of training deep learning models suggests that the utility of common first-order methods is quickly reaching a plateau, in large part because their time-per-step is already negligible (compared to other parts of the computation) and cannot be optimized further; thus, the only way to obtain faster training performance is by drastically reducing the number of update steps. To this end, utilizing second-order methods seem a very natural and promising approach.\nIn this paper we attempt to narrow the gap between theory and practice of second-order methods, focusing on second-order adaptivemethods for stochastic optimization. These methods can be thought of as full-matrix analogues of common adaptive algorithms such as AdaGrad (Duchi et al., 2011; McMahan & Streeter, 2010) and Adam (Kingma & Ba, 2014): they precondition each gradient with a second moment matrix, akin to a covariance matrix, that accumulates the outer products of the stochastic gradients. Full-matrix versions are potentially more powerful than first-order methods as they can exploit statistical correlations between (gradients of) different parameters; geometrically,\nthey can scale and rotate gradients whereas first order methods only scale gradients. However they suffer from similar prohibitive runtime and memory costs as Hessian-based methods.\nRecent developments in the space of second-order methods, on which we focus on in this paper, include the K-FAC (Heskes, 2000; Martens & Grosse, 2015) and Shampoo (Gupta et al., 2018) algorithms that exploit the structure of deep networks (and more generally, models described by a collection of tensors) for mitigating the space and runtime costs of full-matrix second-order algorithms. These methods approximate each preconditioning matrix using a factored representation that stems from the network structure. However, in very large applications, such algorithms are still impractical due to a number of numerical and infrastructural pitfalls and are difficult to parallelize.\nContributions. 
We provide solutions to practical concerns and challenges that arise in implementing and using second-order methods at large scale. Our focus will be on the Shampoo algorithm, but most of the challenges we address are relevant to the implementation of many other second-order methods. These include:
• We design and implement a pipelined version of the optimization algorithm, critically exploiting the heterogeneity and computing power of CPU-accelerator coupled architectures;
• We extend Shampoo in a number of ways so as to make it applicable to a larger range of deep architectures; in particular, the extensions allow Shampoo to be used for training very large layers such as the embedding layers ubiquitous in language and translation models;
• We replace expensive spectral decompositions (e.g., SVD) used for manipulating preconditioners with an efficient and numerically-stable iterative method for computing roots of PSD matrices;
• We describe practical challenges and limitations we faced in our design, which we argue could be useful for the design considerations of next-generation accelerator hardware architectures.
Our distributed implementation demonstrates significant improvements in performance, both in terms of number of steps, and often in actual wall-clock time, on some extremely large deep learning tasks:
• Machine translation: we train Transformer models (Vaswani et al., 2017) on the WMT’14 English to French translation task (Bojar et al., 2014) in half as many steps compared to state-of-the-art (well-tuned Adam), resulting in up to a 45% reduction in wall-time.
• Language modeling: we trained BERT (Devlin et al., 2018) in 16% fewer steps and achieved higher masked-LM accuracy compared to the state-of-the-art optimizer (You et al., 2019) at 32K batch size; overall wall-time decreased by 4% from 3.8 to 3.65 hours. (For this task, our system has not yet been tuned for performance; we discuss several possible optimizations below.)
• Click-Through Rate (CTR) prediction: we trained the DLRM model (Naumov et al., 2019) on the terabyte Criteo dataset (Criteo Labs, 2015) at 64K batch size in half as many steps as the current state-of-the-art optimizer, with a wall-time reduction of 37.5%. We achieve a new state-of-the-art performance of 80.56% AUC (≈ 0.3% improvement) on this task. (An improvement of 0.1% is considered significant; see Rong et al., 2020; Wang et al., 2017.)
• Image classification: we achieve the MLPerf target accuracy of 75.9% (Mattson et al., 2019) at 32K batch size on the standard ResNet-50 ImageNet benchmark in 10% fewer steps than the previous state-of-the-art. Here we do not see wall-time gains, mainly because the problem is too small (only a few thousand steps to convergence, which does not allow for amortization of costs). However, we expect that one would be able to better exploit parallelism via improved software and hardware support.
We note that one of our main points in this work was to demonstrate wall-time speedups with second-order methods implemented on a real-world distributed setup being used to train state-of-the-art deep models. In our view, this is important for influencing future hardware accelerator design and runtime software. Indeed, first-order methods have received huge investments in tuning, implementation, platform support and tailored accelerator hardware over the last decade; we believe there are numerous opportunities to improve the per-step time performance of preconditioned methods as well. 
For example, our results provide a concrete justification for incorporating 64-bit accumulation units in hardware for distributed training, adding larger on-chip memory, better model parallelism and tighter coupling between accelerators and CPUs, which would make second-order methods feasible across more domains and models.
Related work. Classic techniques for addressing the high storage and computation costs of second-order methods mostly belong to the quasi-Newton or the trust-region families of algorithms (Conn et al., 2000; Nocedal & Wright, 2006). Traditionally, these methods need nearly-accurate gradients in order to construct useful quadratic approximations and implement reliable line searches, rendering them suitable only for training with very large batch sizes, and resulting in expensive iterations that make the overall algorithm slow compared with stochastic first-order methods (see, e.g., Bollapragada et al., 2018 for a recent account). Hence, our focus in this paper is on adaptive second-order methods, which are directly applicable in a stochastic setting. That said, our effort could be relevant to quasi-Newton and trust-region methods as well: e.g., each iteration of typical trust-region methods amounts to solving a certain generalized eigenvalue problem, which presents numerical difficulties similar in nature to those encountered in the matrix root/inverse computations addressed here.
Various approximations to the preconditioning matrix have been proposed in the recent literature (e.g., Gonen & Shalev-Shwartz, 2015; Erdogdu & Montanari, 2015; Agarwal et al., 2016; Xu et al., 2016; Pilanci & Wainwright, 2017). However, so far the only prevalent and pragmatic approximation is the diagonal approximation. Some recent approaches for approximating a full-matrix preconditioner are K-FAC (Martens & Grosse, 2015), Shampoo (Gupta et al., 2018) and GGT (Agarwal et al., 2018). K-FAC uses a factored approximation of the Fisher-information matrix as a preconditioner. While our focus in this paper is on Shampoo, we believe that many of the techniques presented here could also be applied to make K-FAC practical at large scale (see Appendix C). GGT uses a clever trick to compute a low-rank approximation to the AdaGrad preconditioner. However, GGT maintains several hundred copies of the gradient in memory, which is too expensive even for mid-sized models.
Ba et al. (2017) took a first important step at experimenting with distributed K-FAC for training deep models, using a single machine with 8 GPUs to simulate a distributed environment for training. In contrast, a main thrust of our work is to demonstrate wall-time speedups with second-order methods on a real-world distributed setup used for training state-of-the-art deep models, which calls for design considerations crucially different from those in (Ba et al., 2017). More recently, Osawa et al. (2019) scaled up K-FAC for training convolutional networks, but fell short of reaching the accuracy of first-order methods, despite making changes to data augmentation and model architecture." }, { "heading": "2 Preliminaries", "text": "Adaptive preconditioning methods. First-order methods iteratively update the parameters solely based on gradient information: $w_{t+1} = w_t - \eta_t \bar{g}_t$, where $w_t$ and $\bar{g}_t$ are (column) vectors in $\mathbb{R}^d$. Here $\bar{g}_t$ denotes a linear combination of the current and past gradients $g_1, \dots, g_t$, where different algorithms use different combinations. Preconditioned methods take the form $w_{t+1} = w_t - P_t \bar{g}_t$, where $P_t$ is a $d \times d$ matrix. 
Whereas in Newton-type methods this matrix is related to the Hessian matrix of second-order derivatives, adaptive preconditioning is based on gradient-gradient correlations.
The parameters of a deep network are structured as a set of tensors of order two (i.e., a matrix), three, or four. For simplicity of presentation we focus on the matrix case—however our design, analysis, and implementation hold for tensors of arbitrary order. We denote the space of parameters by the matrix $W \in \mathbb{R}^{m \times n}$ and an estimate of its gradient by $G$. Full-matrix AdaGrad flattens $W, G$ to vectors of dimension $mn$; it thus requires $m^2 n^2$ space to store the preconditioner and $m^3 n^3$ time to perform the update. $m$ and $n$ are in the 1000's in state-of-the-art models, thus rendering full-matrix preconditioning impractical. For this reason, both AdaGrad and Adam constrain the preconditioning matrices to be diagonal. Shampoo bridges the gap between full-matrix preconditioning and the diagonal version by approximating the matrices.
The Shampoo algorithm. We describe Shampoo in the context of the Online Convex Optimization (OCO) framework, which generalizes stochastic optimization (see, e.g., Shalev-Shwartz, 2012; Hazan, 2016). In OCO, learning progresses in rounds where on round $t$ the learner receives an input $X_t$ and then uses the parameters $W_t$ to form a prediction denoted $\hat{y}_t$. After making the prediction, the true outcome $y_t$ is revealed. The discrepancy between the true and predicted outcomes is assessed by a loss function $\ell$ which takes values in $\mathbb{R}_+$. The learner then uses the discrepancy to update the matrix to $W_{t+1}$ and prepare for the next round. For instance, the input on round $t$ can be an example $x_t \in \mathbb{R}^n$ for which the learner predicts $\hat{y} = f(W_t, x_t)$ where $f : \mathbb{R}^m \to \mathbb{R}$, and the loss is a function $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ such as $\ell(\hat{y}, y) = (y - \hat{y})^2$ or $\ell(\hat{y}, y) = \log(1 + \exp(-y\hat{y}))$. Stochastic gradient methods use the gradient $G_t = \nabla_W \ell(f(W, x_t), y_t)$, thus $G_t \in \mathbb{R}^{m \times n}$ if the parameters are shaped as a matrix $W \in \mathbb{R}^{m \times n}$. For matrix-shaped parameters, Shampoo tracks two statistics over the course of its run, $L_t$ and $R_t$, which are defined as follows:
$$L_t = \epsilon I_m + \sum_{s=1}^{t} G_s G_s^{\top}; \qquad R_t = \epsilon I_n + \sum_{s=1}^{t} G_s^{\top} G_s.$$
Note that $L_t \in \mathbb{R}^{m \times m}$ and $R_t \in \mathbb{R}^{n \times n}$. These are used to precondition the gradient and update $W$:
$$W_{t+1} = W_t - \eta\, L_t^{-1/4} G_t R_t^{-1/4}.$$
The primary complexity of Shampoo arises from the computation of $L_t^{-1/4}$ and $R_t^{-1/4}$, which was naively implemented using spectral decompositions (i.e., SVD)." }, { "heading": "3 Full-Matrix Preconditioning: Challenges", "text": "We discuss the main challenges and design choices in the development of the distributed implementation of Shampoo. These largely arose from the fact that modern accelerators are highly optimized for training using first-order optimizers, which have low computational and memory requirements. The Shampoo algorithm is computationally expensive. The extra computation in Shampoo compared to standard first-order methods is in the following steps:
• Preconditioner statistics computation: $L_t = L_{t-1} + G_t G_t^{\top}$ and $R_t = R_{t-1} + G_t^{\top} G_t$;
• Inverse p'th root computation: $L_t^{-1/4}$ and $R_t^{-1/4}$;
• Preconditioned gradient computation: $L_t^{-1/4} G_t R_t^{-1/4}$.
Preconditioner statistics and gradient computations are expensive for large fully connected as well as embedding layers; we address these below. For other layers we show in Section 5 that they do not add significantly to the runtime of each step. 
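To make the three steps listed above concrete, the following is a minimal single-matrix sketch of the Shampoo update in NumPy. This is an illustration only, not the distributed implementation described below; in particular, the $\epsilon$ damping is folded into a naive spectral inverse root here, and the `ridge` value is an illustrative choice:

```python
import numpy as np

def inv_pth_root(A, p, ridge=1e-6):
    # Naive spectral computation of A^(-1/p) for a symmetric PSD matrix A.
    d, U = np.linalg.eigh(A + ridge * np.eye(A.shape[0]))
    return (U * np.maximum(d, 1e-12) ** (-1.0 / p)) @ U.T

def shampoo_step(W, G, L, R, lr):
    """One simplified Shampoo step for a matrix parameter W with gradient G.
    L and R are the running left/right statistics, updated in place."""
    L += G @ G.T                                      # statistics (left)
    R += G.T @ G                                      # statistics (right)
    P = inv_pth_root(L, 4) @ G @ inv_pth_root(R, 4)   # preconditioned gradient
    return W - lr * P
```

As the rest of this section explains, at scale it is precisely the `inv_pth_root` computations that dominate the cost.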
Computing the inverse p'th roots is very slow—as much as 100 times the step time in some cases—and performing these computations without slowing down training was a key challenge in our system." }, { "heading": "3.1 Algorithmic challenges", "text": "Modern ML architectures often use very large embedding layers, where the longer dimension can be in the millions. For example, DLRM (Naumov et al., 2019) on Criteo-1Tb uses a vocabulary with ∼186 million hash buckets, while in Transformer models (Shazeer et al., 2018) the largest layer can have up to 65536 units per dimension. This makes preconditioning impossible due to $O(d^2)$ memory and $O(d^3)$ computational complexity. We show how to extend Shampoo to overcome these problems; we provide proofs and convergence results in Appendix B.
Large layers. For embedding layers specifically, we extend the Shampoo algorithm to allow us to use only one of the preconditioners, in case both preconditioners are too expensive to compute. Our choice is empirically supported by the experiments shown in Figs. 2b, 3a and 5a, which suggest that there is a benefit from preconditioning one dimension of the large softmax and embedding layers with minimal increase in time. The following result allows us to choose a subset of preconditioners:
Lemma 1. Let $G_1, \dots, G_t \in \mathbb{R}^{m \times n}$ be matrices of rank at most $r$. Let $g_s = \mathrm{vec}(G_s)$ and define $\hat{H}_t = \epsilon I_{mn} + \sum_{s=1}^{t} g_s g_s^{\top}$. Let $L_t, R_t$ be defined as above: $L_t = \epsilon I_m + \sum_{s=1}^{t} G_s G_s^{\top}$, $R_t = \epsilon I_n + \sum_{s=1}^{t} G_s^{\top} G_s$. Then for any $p, q > 0$ such that $1/p + 1/q = 1$, we have $\hat{H}_t \preceq r\, L_t^{1/p} \otimes R_t^{1/q}$.
A consequence is that for any $p, q > 0$ such that $1/p + 1/q = 1$, the full AdaGrad preconditioned gradient $\hat{H}_t^{-1/2} g_t$ is approximated by $(L_t^{1/p} \otimes R_t^{1/q})^{-1/2} g_t$, giving us $\tilde{G}_t = L_t^{-1/2p} G_t R_t^{-1/2q}$. Now, by choosing $(p, q) = (1, \infty)$ and $(p, q) = (\infty, 1)$ we obtain the simple preconditioned gradients $G_t R_t^{-1/2}$ and $L_t^{-1/2} G_t$. Theorem 3 shows that Lemma 1 can be used to prove a regret bound for this extended Shampoo in the online convex optimization setting; this provides intuitive justification for the usefulness of this approximation. We further optimize the computation of these preconditioned gradients for embedding layers by taking advantage of the sparse inputs; see details in Appendix D.
Preconditioning blocks from large tensors. In addition to embedding layers, large models occasionally have large fully connected layers. To reduce the computational cost of computing statistics and preconditioned gradients, we divide the tensor into blocks and treat each block as a separate tensor. Concretely, this would entail dividing a tensor $W \in \mathbb{R}^{km \times kn}$ into blocks $W_{1,1}, \dots, W_{m,n}$ such that $W_{i,j} \in \mathbb{R}^{k \times k}$ for all $i, j$. Shampoo still converges in this case in the convex setting (Theorem 4), showing that the extension is justified.
Lemma 2. Assume that $g_1, \dots, g_t \in \mathbb{R}^{mk}$ are vectors, and let $g_i = [g_{i,1}, \dots, g_{i,k}]$ where $g_{i,j} \in \mathbb{R}^m$. Define $\hat{H}_t = \epsilon I_{mk} + \sum_{s=1}^{t} g_s g_s^{\top}$, and let $B_t \in \mathbb{R}^{mk \times mk}$ be the block-diagonal matrix with $k$ blocks of size $m \times m$, where the $j$-th block is $B_t^{(j)} = \epsilon I_m + \sum_{s=1}^{t} g_{s,j} g_{s,j}^{\top}$. Then $\hat{H}_t \preceq k B_t$.
We performed experiments to study the effect of partitioning intermediate layers into blocks, in which we observed that the latter had minimal impact on the quality of the solution while providing faster step time as well as reduced memory overheads; see Fig. 3b.
Delayed preconditioners. As remarked above, computing the preconditioners is the most expensive computation in every Shampoo step. In Fig.
3c we show that we can compute the preconditioners once every few hundred steps without a significant effect on the accuracy, which indicates that the loss function landscape does not change significantly with each step. We observe that there is a performance/quality tradeoff here — in our experiments we set the frequency of computing preconditioners to the smallest value that does not degrade performance, i.e., the number of training steps that can be completed in the amount of time needed to compute the largest preconditioner. The only way to increase the frequency of computing preconditioners is with better hardware/software support." }, { "heading": "3.2 Numerical challenges", "text": "Inverse p'th roots (where typically p = 2, 4, 8) can be computed using SVD, but there are efficient iterative algorithms such as the coupled Newton iteration algorithm (Guo & Higham, 2006; Iannazzo, 2006) that can compute the inverse p'th root via a sequence of matrix-vector and matrix-matrix products, which are highly optimized on modern accelerators. However, our experiments suggest that on real workloads the condition numbers of the $L_t, R_t$ matrices are very large (see Fig. 6 in Appendix E), so both SVD and the coupled iteration must be run in double precision, but this is very expensive on accelerators. We applied several further optimizations to speed up the coupled Newton iteration in our implementation; these are described in Appendix E." }, { "heading": "3.3 Infrastructural challenges", "text": "Heterogeneous training hardware. Neural network accelerators are custom designed to run machine learning workloads faster and at lower cost. Accelerator design is trending towards preferring lower-precision (8-bit/16-bit) arithmetic that satisfies both of these goals on existing benchmarks. Our method demands double-precision arithmetic as described above, which makes running the computation on accelerators a non-starter, and therefore we had to design the system to leverage the existing underutilized CPUs attached to the accelerators (Section 4).
API inflexibility. Deep learning libraries such as TensorFlow (Abadi et al., 2016) offer APIs for optimizer implementation that are well suited for first-order optimizers and for mini-batch training. Our design requires that we interact with the training loop in non-standard ways, which requires framework-level changes. Our Transformer experiments were carried out in the Lingvo (Shen et al., 2019) TensorFlow framework, while BERT-Large, DLRM, as well as ResNet-50 used the MLPerf v0.7 TensorFlow baselines (Mattson et al., 2019). Experimentation required changes to the training loop, such as gathering statistics at regular intervals, distributing computation across all the CPUs available in the cluster without blocking the TPU training, as well as updating the preconditioners. We anticipate that this proof-of-concept for full-matrix preconditioning will encourage the development of more flexible APIs to fully utilize heterogeneous hardware." }, { "heading": "4 Distributed System Design", "text": "We present our distributed system design of the modified Shampoo algorithm. Our method is designed to run effectively on modern neural network accelerators such as TPUs (Jouppi et al., 2017) or GPUs. We first describe the standard paradigm of data parallelism used in training models on these accelerators (Dean et al., 2012). 
Parameters are replicated on each core of the accelerator, and each core computes forward propagation and back propagation on a sub-batch (a subset of a mini-batch, which itself is a small randomly selected subset of the training set) of input examples. These gradients are averaged across all cores via all-reduction to get the average gradient for the mini-batch. Each core uses the average mini-batch gradient to update its copy of the parameters.
All-reduction adds a barrier, as all the cores need to synchronize to compute the mini-batch gradient. In Fig. 2b we measure the overhead of each of the steps on a Transformer model (Vaswani et al., 2017) described in the experiments section. We observe that the overheads from all-reduction and weight updates are a minor part (< 5%) of the overall step time.
The overall design of our implementation is illustrated by the timeline in Fig. 1. As discussed in the previous section, the preconditioner computation (inverse p'th root) is expensive and requires double precision, and we need to do this computation only once every few hundred steps. These observations naturally suggested using the often underutilized CPUs on the machines to which the accelerators such as GPUs or Cloud TPUs are attached. CPUs offer double-precision arithmetic but are slower than GPUs or Cloud TPUs, which makes them a perfect choice to run the preconditioner computation without adding any extra cost to the training run, as the computation is pipelined and runs asynchronously without blocking the training loop.
Preconditioners need to be computed for every layer of the network, so we distribute the computation across all the CPUs that are part of the training system. As a result, the most expensive step in Shampoo adds almost nothing to the overall training time. Moreover, the computational overhead of the preconditioned gradient computation is independent of the batch size. Thus, increasing the batch size allows us to linearly decrease the overhead, making Shampoo practical for very large scale training setups. On smaller problems such as CIFAR-10, we find that our design still results in training time improvements (Appendix G.3), as preconditioner computations take very little time." }, { "heading": "5 Experiments", "text": "We compare our method against various widespread optimization algorithms for training large state-of-the-art deep models for machine translation, language modeling, recommendation systems, as well as image classification. Details of the experiments are given in Appendix G, and we will open-source our code before publication." }, { "heading": "5.1 Machine Translation with a Transformer", "text": "We demonstrate the effectiveness of our implementation on the standard machine translation dataset from WMT'14 English to French (en→fr) with 36.3M sentence pairs. We used the state-of-the-art Transformer architecture (Vaswani et al., 2017). This architecture contains 93.3M parameters and consists of 6 layers for its encoder and decoder. Each layer is composed of 512 model dimensions, 2048 hidden dimensions, and 8 attention heads. The model makes use of a sub-word vocabulary that contains 32K word pieces (Schuster & Nakajima, 2012). The experiment was run on 32 cores of a Cloud TPU v3 Pod, and the implementation of the optimizer was carried out in Lingvo (Shen et al., 2019), a TensorFlow-based sequence-to-sequence modeling framework. Our results are shown in Fig.
2a: our algorithm achieves the same accuracy as AdaGrad or Adam in about half as many steps.
Preconditioning of embedding and softmax layers. Following the first extension in Section 3.1, the algorithm preconditions the large layers with only one of the preconditioners ($G_t R_t^{-1/2}$ or $L_t^{-1/2} G_t$) to make it tractable. Fig. 2b shows the increase in step time is only 6%, while Fig. 3a shows that we can reduce the number of steps to convergence by ≈20%.
Reducing overhead in fully-connected layers. Following the second extension in Section 3.1, we ran two experiments where we partitioned the fully connected layer of size [512, 2048] into two blocks of size [512, 1024] and four blocks of size [512, 512]. Our experiments show no drop in quality under this approximation, with a small reduction in runtime (<3%)." }, { "heading": "5.2 Transformer-Big model", "text": "We also ran experiments with a larger Transformer model with 375.4M parameters, consisting of 6 layers for its encoder and decoder. Each layer is composed of 1024 model dimensions, 8192 hidden dimensions, and 16 attention heads. Results are presented in Fig. 4a, where again we see an improvement in the end-to-end wall-clock time. For the softmax, embedding and the projection fully-connected layer (with 8192 hidden dimensions) we only make use of the left preconditioner. We note that the step time is dominated by the preconditioned gradient computation, which can be reduced by sub-blocking the layers.
On the overhead of the optimizer. We show the computational and memory complexity of the Shampoo extensions described in Section 3.1 in Table 2 in the appendix. We note that the overhead from computing the statistics, as well as from computing the preconditioned update for a single step of training, can be further reduced by increasing the batch size (indeed, these overheads are independent of the batch size), as shown in Fig. 4b, where the overhead dramatically reduces from 40% to 19%." }, { "heading": "5.3 Ads Click-Through Rate (CTR) prediction", "text": "We trained the Deep Learning Recommendations Model (DLRM) of Naumov et al. (2019) on the terabyte Criteo click logs dataset for the online advertisement click-through-rate prediction task (Criteo Labs, 2015). We compared Shampoo against the highly tuned SOTA baseline from the MLPerf v0.7 training benchmarks (Wu et al., 2020). We trained the model with a batch size of 65536 for 64000 steps (1 epoch). We trained a version of the model where Shampoo is applied only to the hidden layers, as well as one where we apply it to all layers. We only tune the learning rate, and keep the exact same setup as the baseline. We found that Shampoo achieves the target accuracy of 80.25% in only 30.97K steps, compared to 64K steps for the baseline. Moreover, Shampoo achieves a new state-of-the-art performance of 80.56% AUC (an ≈0.3% improvement) on this dataset; note that an improvement of 0.1% is considered significant in this task (see Rong et al., 2020; Wang et al., 2017). Here, preconditioning embedding layers further reduced the number of steps needed to reach the target accuracy from 39.96K to 30.97K." }, { "heading": "5.4 Language modeling", "text": "We trained BERT-Large (the Bidirectional Encoder Representation architecture of Devlin et al., 2018) for the language modeling task on the concatenation of Wikipedia and BooksCorpus, with 2.5B and 800M words respectively. BERT-Large is a large bidirectional transformer model containing 24 transformer blocks with 1024 hidden dimensions and 16 self-attention heads. 
It has 340M parameters and is set up to jointly optimize two objectives: (a) a masked language model (Masked-LM) loss, where the task is to predict masked tokens based on surrounding context, and (b) a next sentence prediction (NSP) loss, where the task is to predict whether two given sentences are consecutive in the text. In Fig. 5b we compare our results against the current state of the art in training BERT (You et al., 2019). Models were trained with batch size 16K; in these experiments we replaced the Adam update rule in LAMB (which produces the preconditioned gradient) with Shampoo. Both experiments used the existing well-tuned hyperparameters of the baseline." }, { "heading": "5.5 Image classification", "text": "We trained a ResNet-50 model (He et al., 2016) on the ImageNet-2012 (Russakovsky et al., 2015) dataset and compared it against the state-of-the-art baseline using SGD+Momentum. We base our experiments on the TensorFlow baseline available from Mattson et al. (2019), where the target criterion is reaching 75.9% accuracy. See results in Table 1; in particular, we find that Shampoo reaches the target accuracy in fewer steps than the current state of the art. Tuning details are in Appendix G.4." }, { "heading": "6 Concluding Remarks", "text": "We have presented an implementation of a second-order optimizer, and demonstrated step time as well as wall time improvements on multiple large tasks in different domains — in each case our implementation performed as well as or better than state-of-the-art optimizers specialized for each domain. The main point of our work is to demonstrate that second-order methods implemented on a real-world distributed setup can be used to train state-of-the-art deep models. We hope that this work will influence future hardware accelerator design and runtime software — first-order methods have received large investments in tuning, implementation, platform support and hardware tailored for them, and we believe there are several opportunities to improve the per-step time performance of second-order methods as well:
• Most second-order methods use symmetric matrices, but we haven't found support for typing operands as symmetric, which can reduce compute flops and storage by up to 50%.
• Several optimizations that are currently tuned towards first-order methods could be extended to second-order methods. For example, the weight update sharding optimization pattern-matches first-order methods (Xu et al., 2020) and dramatically reduces the time spent in the update step as well as the memory used. This change can also be applied to Shampoo with blocked preconditioners, but we do not have support for it yet, as it requires compiler-level support and is not expressible at the program layer. Currently every core must update all layers, which is quite inefficient.
• Mixed-precision algorithms may work for inverse p'th roots and can help increase the frequency of preconditioner computation.
• Increased memory per chip can allow larger preconditioners.
• Hardware support for high-precision arithmetic in accelerators can allow more frequent preconditioner computation. 
The benefits of high-precision arithmetic for optimization run counter to the prevailing wisdom in ML,1 which has led to the focus on low-precision formats such as bfloat16 (Wang & Kanwar, 2019).
• Hardware support for storing/packing and using upper/lower triangular matrices efficiently, as available in LAPACK.
Our hope is that these suggestions could result in innovations that would make second-order methods practical across more domains and models, especially in data-limited regimes where we may not be able to amortize the latency added by the data transfer between the accelerator and the CPU.
1For example, (Gupta et al., 2015) say "It is well appreciated that in the presence of statistical approximation and estimation errors, high-precision computation in the context of learning is rather unnecessary (Bottou & Bousquet, 2007)" and (Higham & Pranesh, 2019) say "... machine learning provides much of the impetus for the development of half precision arithmetic in hardware ..."" }, { "heading": "A Notation", "text": "We use lowercase letters to denote scalars and vectors, and uppercase letters to denote matrices. $\|A\|_F$ denotes the Frobenius norm of $A$, i.e., $\|A\|_F^2 = \sum_{i,j} A_{ij}^2$. $A \bullet B$ denotes the Hadamard or element-wise product of $A$ and $B$, which have the same shape, so $C = A \bullet B \iff C_{ij} = A_{ij} B_{ij}$. $D^{\bullet\alpha}$ is the element-wise power, $(D^{\bullet\alpha})_{ij} = D_{ij}^{\alpha}$. We use $\preceq$ to denote the Loewner order: given square symmetric matrices $A, B$, we write $A \preceq B$ iff $B - A$ is positive semidefinite (PSD).
Given a symmetric PSD matrix $A$ and $\alpha \in \mathbb{R}$, $A^{\alpha}$ is defined as follows: let $A = U D U^{\top}$ be the singular value decomposition of $A$, where $U$ is a unitary matrix and $D$ is a diagonal matrix (with $D_{ii} \ge 0$ as $A$ is PSD); then $A^{\alpha} = U D^{\alpha} U^{\top}$, where $(D^{\alpha})_{ii} = D_{ii}^{\alpha}$. If $\alpha < 0$, this is defined for positive definite matrices only, where $D_{ii} > 0$.
We use $\mathrm{vec}(A)$ to denote the flattening of the $m \times n$ matrix $A$: if $A$ has rows $a_1, \dots, a_m$, then $\mathrm{vec}(A)$ is the $mn \times 1$ column vector $\mathrm{vec}(A) = (a_1, \dots, a_m)^{\top}$. $A \otimes B$ denotes the Kronecker product of two matrices $A$ and $B$, and we will use the identities $(A \otimes B)^{\alpha} = A^{\alpha} \otimes B^{\alpha}$ for $\alpha \in \mathbb{R}$, and $(A \otimes B)\,\mathrm{vec}(C) = \mathrm{vec}(A C B^{\top})$." }, { "heading": "B Deferred Proofs", "text": "Proof (of Lemma 1). Lemma 8 in Gupta et al. (2018) shows that $\hat{H}_t \preceq r\,(L_t \otimes I_n)$ and $\hat{H}_t \preceq r\,(I_m \otimes R_t)$. By using Ando's inequality (Ando et al., 2004), we get
$$\hat{H}_t \preceq r\,(L_t \otimes I_n)^{1/p} (I_m \otimes R_t)^{1/q} = r\,(L_t^{1/p} \otimes I_n)(I_m \otimes R_t^{1/q}) = r\, L_t^{1/p} \otimes R_t^{1/q},$$
which concludes the proof.
This lemma immediately allows us to prove a regret bound for Shampoo with extended exponents:
Theorem 3. Assume that the gradients $G_1, \dots, G_T$ are matrices of rank at most $r$. Then the regret of Shampoo with extended exponents compared to any $W^{\star} \in \mathbb{R}^{m \times n}$ is bounded as follows,
$$\sum_{t=1}^{T} f_t(W_t) - \sum_{t=1}^{T} f_t(W^{\star}) \le \sqrt{2r}\, D\, \mathrm{Tr}\big(L_T^{\frac{1}{2p}}\big)\, \mathrm{Tr}\big(R_T^{\frac{1}{2q}}\big),$$
where
$$L_T = \epsilon I_m + \sum_{t=1}^{T} G_t G_t^{\top}, \qquad R_T = \epsilon I_n + \sum_{t=1}^{T} G_t^{\top} G_t, \qquad D = \max_{t \in [T]} \|W_t - W^{\star}\|_2,$$
and $1/p + 1/q = 1$, $p, q \ge 1$.
Proof. The proof follows the proof of Theorem 7 in Gupta et al. (2018). Let $H_t = L_t^{\frac{1}{2p}} \otimes R_t^{\frac{1}{2q}}$. Then the update rule of the extended Shampoo algorithm is equivalent to $w_{t+1} = w_t - \eta H_t^{-1} g_t$. Since $0 \preceq L_1 \preceq \dots \preceq L_T$ and $0 \preceq R_1 \preceq \dots \preceq R_T$, standard properties of the Kronecker product and the operator monotonicity of the function $x \mapsto x^{\alpha}$ for $\alpha \le 1$ (an immediate consequence of Ando's inequality) ensure that $0 \preceq H_1 \preceq \dots \preceq H_T$. Following the aforementioned proof, we have the regret bound
$$\sum_{t=1}^{T} f_t(W_t) - \sum_{t=1}^{T} f_t(W^{\star}) \le \frac{D^2}{2\eta} \mathrm{Tr}(H_T) + \frac{\eta}{2} \sum_{t=1}^{T} \|g_t\|_{H_t^*}^2,$$
where $D = \max_t \|W_t - W^{\star}\|_2$. 
Define $g_t = \mathrm{vec}(G_t)$ and $\hat{H}_t = \big(\epsilon I_{mn} + \sum_{s=1}^{t} g_s g_s^{\top}\big)^{1/2}$; then Lemma 1 shows that $\hat{H}_t \preceq \sqrt{r}\, H_t$, using operator monotonicity. Using this equation twice, along with Equation (6) from the proof of Theorem 7, we have
$$\sum_{t=1}^{T} \|g_t\|_{H_t^*}^2 \le \sqrt{r} \sum_{t=1}^{T} \|g_t\|_{\hat{H}_t^*}^2 \le 2\sqrt{r}\, \mathrm{Tr}(\hat{H}_T) \le 2r\, \mathrm{Tr}(H_T).$$
This gives us
$$\sum_{t=1}^{T} f_t(W_t) - \sum_{t=1}^{T} f_t(W^{\star}) \le \frac{D^2}{2\eta} \mathrm{Tr}(H_T) + \eta r\, \mathrm{Tr}(H_T).$$
Setting $\eta = D/\sqrt{2r}$ and observing that $\mathrm{Tr}(H_t) = \mathrm{Tr}(L_t^{1/2p})\, \mathrm{Tr}(R_t^{1/2q})$ gives us the required bound.
Proof (of Lemma 2). Let $x \in \mathbb{R}^{mk}$, and $x = [x_1, x_2, \dots, x_k]$, where $x_j \in \mathbb{R}^m$. Then
$$x^{\top} \hat{H}_t x = \epsilon \|x\|_2^2 + \sum_{s=1}^{t} x^{\top} g_s g_s^{\top} x = \epsilon \|x\|_2^2 + \sum_{s=1}^{t} (g_s^{\top} x)^2 = \epsilon \|x\|_2^2 + \sum_{s=1}^{t} \Big( \sum_{j=1}^{k} g_{s,j}^{\top} x_j \Big)^{2} \le k \epsilon \|x\|_2^2 + k \sum_{s=1}^{t} \sum_{j=1}^{k} (g_{s,j}^{\top} x_j)^2 = k \sum_{j=1}^{k} \Big( \epsilon \|x_j\|_2^2 + \sum_{s=1}^{t} x_j^{\top} g_{s,j} g_{s,j}^{\top} x_j \Big) = k \sum_{j=1}^{k} x_j^{\top} \Big( \epsilon I_m + \sum_{s=1}^{t} g_{s,j} g_{s,j}^{\top} \Big) x_j = k \sum_{j=1}^{k} x_j^{\top} B_t^{(j)} x_j = k\, x^{\top} B_t x.$$
Here we used the inequality $\big(\sum_{j=1}^{k} \alpha_j\big)^2 \le k \sum_{j=1}^{k} \alpha_j^2$, which follows from the convexity of $x \mapsto x^2$ (or from the fact that the variance of a random variable is non-negative).
This lemma once again allows us to prove a regret bound, exactly following the proof of the regret bound above:
Theorem 4. Assume that the gradients are $g_1, \dots, g_T \in \mathbb{R}^{mk}$, and let $g_i = [g_{i,1}, \dots, g_{i,k}]$ where $g_{i,j} \in \mathbb{R}^m$. Then the regret of Shampoo with blocking compared to any $w^{\star} \in \mathbb{R}^{mk}$ is bounded as follows:
$$\sum_{t=1}^{T} f_t(w_t) - \sum_{t=1}^{T} f_t(w^{\star}) \le \sqrt{2k}\, D \sum_{j=1}^{k} \mathrm{Tr}\Big( \Big( \epsilon I_m + \sum_{t=1}^{T} g_{t,j} g_{t,j}^{\top} \Big)^{\frac{1}{2}} \Big).$$
The two regret bounds can be combined to show that Shampoo with both extensions also converges." }, { "heading": "C Comparison with K-FAC", "text": "K-FAC is a natural gradient algorithm, and approximates the curvature of the loss using the Fisher information matrix:
$$F = \mathbb{E}_{p(x|\theta)} \big[ \nabla \log p(x|\theta)\, \nabla \log p(x|\theta)^{\top} \big] = \mathbb{E}_{p(x|\theta)} \big[ g_{p(x|\theta)}\, g_{p(x|\theta)}^{\top} \big].$$
For a fully connected layer with $W \in \mathbb{R}^{m \times n}$, where $W x = s$, the gradient for the layer $G_t \in \mathbb{R}^{m \times n}$ can be written via the chain rule as $G_t = \nabla_s \ell(s_t, y_t)\, x^{\top}$, and in vectorized form as $\nabla_s \ell(s_t, y_t) \otimes x$. We can then write the Fisher information matrix as:
$$F = \mathbb{E}_{p(x|\theta)} \big[ (\nabla_s \ell(s_t, y_t) \otimes x)(\nabla_s \ell(s_t, y_t) \otimes x)^{\top} \big] = \mathbb{E}_{p(x|\theta)} \big[ (\nabla_s \ell(s_t, y_t)\, \nabla_s \ell(s_t, y_t)^{\top}) \otimes (x_t x_t^{\top}) \big].$$
Assuming independence between $\nabla_s \ell(s_t, y_t)$ and $x$, K-FAC rewrites the Fisher in tractable form as:
$$F \approx \mathbb{E} \big[ \nabla_s \ell(s_t, y_t)\, \nabla_s \ell(s_t, y_t)^{\top} \big] \otimes \mathbb{E} \big[ x_t x_t^{\top} \big].$$
If we let $D = \mathbb{E}[\nabla_s \ell(s_t, y_t)\, \nabla_s \ell(s_t, y_t)^{\top}]$ and $X = \mathbb{E}[x_t x_t^{\top}]$, the update rule then becomes:
$$W_{t+1} \approx W_t - \eta\, D^{-1} G_t X^{-1}.$$
We note some of the differences and similarities between the two updates here. K-FAC preconditioners use an exponent of $-1$ (as the original Fisher is inverted), whereas Shampoo uses $-1/2p$ where $p$ is the rank of the tensor. 
KFAC computes statistics based on gradients with labels sampled from the model’s\npredictive distribution (hence requiring strictly more computation) where as Shampoo relies on the gradient of the mini-batch.\nNow we can compute each term in the Shampoo preconditioners as: GtGTt = ∇s`(st, yt )xTt xt∇s`(st, yt )T = ‖xt ‖22∇s`(st, yt )∇s`(st, yt )T; GTt Gt = xt∇s`(st, yt )T∇s`(st, yt )xTt = ‖∇s`(st, yt )‖22 xt xTt .\nDividing by the scale, and taking expectations on both sides:\nE [ GtGTt ‖xt ‖22 ] = E [ ∇s`(st, yt )∇s`(st, yt )T ] = D;\nE\n[ GTt Gt\n‖∇s`(st, yt )‖22\n] = E [ xt xTt ] = X .\nThis shows that K-FAC preconditioners are closely related to Shampoo preconditioners, especially when one uses the empirical Fisher (Kunstner et al., 2019).\nThe main difficulty in implementing K-FAC on a model is that current optimizer APIs make it difficult to send additional information such as ‖xt ‖22, ‖∇s`(st, yt )‖22 to the optimizer, so K-FAC implementations have to register the structure of each layer. Moreover, due to the dependence of K-FAC on the structure of the network, it is difficult to implement standard operators like batch norm, weight norm, layer norm, etc., which are prevalent in the tasks and models we considered. For example, if we write a fully connected layer with weight norm as s = W x/‖W ‖, then the gradient\nGt = 1 ‖W ‖ ∇s`(st, yt )x T − ∇s`(st, yt ) TW x ‖W ‖3 W,\nso rewriting E[vec(Gt ) vec(Gt )T] as a Kronecker product is not an easy task. The similarity between K-FAC and Shampoo preconditioners also allows us to use techniques explored by the K-FAC community for Shampoo. One of the extensions for KFAC is the E-KFAC algorithm (George et al., 2018) which constructs a better approximation of the Fisher matrix by using the eigenbasis computed from the Kronecker approximation, but rescaling the eigenvalues to match the diagonal of the Fisher matrix in this eigenbasis. This method produces a provably better approximation, and can immediately be applied to Shampoo too with a simple modification:\nLet Ĥt ≈ L1/2t ⊗ R1/2t . Let the singular value decompositions of the factors be L1/2t = UDUT and R1/2t = V D\n′VT. The L1/2t ⊗ R1/2t = (U ⊗ V)(D ⊗ D′)(U ⊗ V)T. Now the EKFAC correction replaces D ⊗ D′ by the optimal diagonal\nΛ = diag((U ⊗ V)TĤt (U ⊗ V))\n= ²I + t∑\ns=1 diag((U ⊗ V)T vec(Gs) vec(Gs)T(U ⊗ V))\n= ²I + t∑\ns=1 diag(vec(UTGsV) vec(UTGsV)T)\n= ²I + t∑\ns=1 vec(UTGsV) 2,\nThus we can approximately compute Λt+1 ≈ Λt + (UTGtV) 2, and the new update becomes: Wt+1 = Wt − ηtU(Λ−1/2t • (UTGtV))VT. This technique does have the disadvantage that it requires computing the singular value decompositions (which we already observed are much slower than coupled Newton iterations), and doubles the number of matrix multiplications in the preconditioned gradient computation. At this time our experiments did not show significant improvements over the standard Shampoo implementation, but we plan to explore this further." }, { "heading": "D Shampoo for embedding layers", "text": "In modern networks, embedding layers are usually very large, and even computing the left preconditioner as described in Section 3.1 can be prohibitively expensive. However we can take advantage\nof the fact that the inputs to the network are very sparse, and use this to reduce the computation significantly.\nLet our input example to such a network consist of a set of categorical features: each feature such as user language, user country etc consists of one out of a set of options. 
Then the output of the embedding layer is the concatenation of the embeddings for each such feature. If the embeddings are of width $d$ and there are $N$ such embeddings, then the embedding layer is $W \in \mathbb{R}^{d \times N}$. The input can be represented as $x \in \mathbb{R}^{N \times m}$, where $m$ is the number of categorical features, and each column is one-hot: if the $k$-th feature is $x(k)$, then $x_{jk} = \delta_{j, x(k)}$. The output of the layer is $y = W x$.
Now $G = \nabla_W \ell = \nabla_y \ell\, x^{\top}$, so $G G^{\top} = \nabla_y \ell\, x^{\top} x\, \nabla_y \ell^{\top}$. But $x^{\top} x = I_m$, so $G G^{\top} = \nabla_y \ell\, \nabla_y \ell^{\top}$. Thus we can compute the preconditioner for $W$ by computing it on the output of the embedding layer, and this is a much smaller computation: since $y$ is of dimension $d \times m$, this computation is $O(d^2 m)$ rather than $O(d^2 N)$. Note that sparse multiplication would also be $O(d^2 m)$, but accelerators usually implement sparse operations by densifying the tensors.
If each column of $x$ is multi-hot, as is the case when the features are words and their embeddings are averaged, $x^{\top} x$ is a diagonal matrix, where each diagonal entry is a function of the number of ones in each column of $x$. Computing $G G^{\top} = \nabla_y \ell\, (x^{\top} x)\, \nabla_y \ell^{\top}$ is still $O(d^2 m) \ll O(d^2 N)$." }, { "heading": "E A coupled Newton iteration for computation of inverse p-th roots", "text": "The Newton method for solving the matrix equation $X^{-p} - A = 0$ produces the iteration $X_{k+1} = \frac{1}{p}\big[ (p+1) X_k - X_k^{p+1} A \big]$, where we take $X_0 = \frac{1}{c} I$. This iteration satisfies $X_k \to A^{-1/p}$ as $k \to \infty$, but it is not numerically stable. Introducing the matrix $M_k = X_k^p A$, we get
$$X_{k+1} = X_k \bigg( \frac{(p+1) I - M_k}{p} \bigg), \qquad X_0 = \frac{1}{c} I,$$
and
$$M_{k+1} = X_{k+1}^p A = \bigg( \frac{(p+1) I - M_k}{p} \bigg)^{p} X_k^p A = \bigg( \frac{(p+1) I - M_k}{p} \bigg)^{p} M_k, \qquad M_0 = \frac{1}{c^p} A,$$
since $X_k$, $M_k$ and $A$ commute with each other. This is the coupled Newton iteration for computing inverse p-th roots, and it was shown to be numerically stable in (Guo & Higham, 2006; Iannazzo, 2006).
We implemented the following optimizations to the coupled Newton iteration method:
• Warm start: The coupled Newton iteration to compute $G^{-1/p}$ starts with $X = I$, $M = G$ and maintains the invariant $M = X^p G$ while driving $M \to I$, resulting in $X \to G^{-1/p}$. We need to find the p-th root of a sequence $G_t$, so we instead set $X = G_t^{-1/p}$, $M = X^p G_{t+1}$; since the difference between $G_t$ and $G_{t+1}$ is small, this ensures that $M$ is already close to $I$. In our experiments the warm start improves convergence (by up to 4x fewer steps).
• Projecting top singular values: In practice our $G_t$ matrices have large condition numbers, which sometimes leads to inaccurate results. As a rule of thumb, computing $G^{-1/p}$ leads to a loss of $\log_2\big(\frac{1}{p}\kappa(G)\big)$ bits of precision (Overton, 2001), where $\kappa(G)$ is the condition number of $G$. However, we also find that usually there is a sharp falloff within the first few singular values, so in order to reduce the condition number, we project away the largest singular values, since these are the least important after taking inverses. We find the top-$k$ singular values $\lambda_1, \dots, \lambda_k$ and their associated singular vectors using a standard iterative method, and replace each with $\lambda_{k+1}$. This produces a better approximation than adding $\epsilon I$ to each $G_t$: the latter can wash out the smallest (and most crucial) singular values; see Fig. 7, where the smallest singular value for a layer can be as small as $10^{-10}$ to $10^{-6}$ during the course of optimization.
• Dynamic tuning of projection: We dynamically tune the number of singular values we need to project in the previous step, by computing the condition number $\kappa(G_t)$ and using it to estimate the smallest singular value of $G_{t+1}$ as $\lambda_{\max}(G_{t+1}) / \kappa(G_t)$. 
Algorithm I: A coupled Newton iteration procedure for computing inverse p-th roots of a PSD matrix, with warm start and singular value projection.
procedure MaxSV(G):
  parameters: ε > 0, n_step; G ∈ R^{n×n}, v ∈ R^n
  i = 0, error = ∞, λ = 0
  while i < n_step and error > ε:
    v̂ = v / ‖v‖
    v = G v̂
    λ_old = λ; λ = v̂^T v
    error = |λ − λ_old|; i = i + 1
  return λ, v / ‖v‖
procedure Project(G, κ (optional), κ_d (optional), n_proj (optional)):
  i = 0; Δ = 0
  λ, v = MaxSV(G); λ_max = λ
  while λ > (κ_d / κ) λ_max or i < n_proj:
    G = G − λ v v^T
    Δ = Δ + v v^T
    λ, v = MaxSV(G); i = i + 1
  return G + λΔ
procedure CoupledIteration(G, p ∈ N, X (optional), κ (optional)):
  parameters: ε > 0, κ_d, n_proj; output: G^{-1/p}
  G = Project(G, κ, κ_d, n_proj)
  α = −1/p
  if X is provided: M = X^p G
  else: z = (1 + p)/(2‖G‖_F); X = z^{1/p} I; M = z G
  while ‖M − I‖_∞ > ε:
    M₁ = (1 − α) I + α M
    X = X M₁
    M = M₁^p M
  return X" }, { "heading": "F Implementation Details of Shampoo", "text": "Our implementation of the Shampoo algorithm for fully-connected layers is described in Algorithm II. The algorithm can use heavy-ball momentum for its updates, as well as an exponential moving average over the preconditioners, like Adam. The configuration parameter τ₁ denotes the number of steps between subsequent fetches of the latest available preconditioner by the accelerator. τ₁ must be set sufficiently high so that there is enough time for the CPU to complete the computation of the preconditioner asynchronously and pipeline it efficiently, but otherwise its setting does not have a significant effect on convergence. The configuration parameter τ₂ (default value 1) determines the frequency of gathering gradient statistics: we update L_t, R_t only every τ₂ steps, for efficiency." }, { "heading": "F.1 Computation cost of Shampoo", "text": "We capture the computational and memory complexity under the various schemes for handling large layers described in Section 3.1 in Table 2.
Algorithm II: Sketch of the Shampoo algorithm.
parameters: learning rate η_t, momentum β₁, β₂
for t = 1, ..., T:
  receive stochastic gradients G_t for each layer
  if t mod τ₂ = 0:
    if β₂ < 1:
      L_t ← β₂ L_{t−τ₂} + (1 − β₂) G_t G_t^T; R_t ← β₂ R_{t−τ₂} + (1 − β₂) G_t^T G_t
    else:
      L_t ← L_{t−τ₂} + G_t G_t^T; R_t ← R_{t−τ₂} + G_t^T G_t
  D_t ← D_{t−1} + G_t ∘ G_t
  M_t ← β₁ M_{t−1} + (1 − β₁) D_t^{-1/2} ∘ G_t
  if t mod τ₁ = 0:
    gather preconditioners L_{t−τ₁}^{-1/4}, R_{t−τ₁}^{-1/4} from CPUs; send L_t, R_t to the CPU host to compute L_t^{-1/4}, R_t^{-1/4}
  if t > τ₁:
    P_t ← β₁ P_{t−1} + (1 − β₁) L_t^{-1/4} G_t R_t^{-1/4}
    η_t ← η₀ ‖M_t‖_F / ‖P_t‖_F
    W_t = W_{t−1} − η_t P_t
  else:
    η_t ← η₀
    W_t = W_{t−1} − η_t M_t
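For concreteness, here is a condensed single-matrix NumPy rendering of Algorithm II. It is our own sketch rather than the actual implementation: the asynchronous preconditioner pipeline controlled by τ₁ is ignored, the inverse fourth roots are recomputed in place using the `inverse_pth_root` routine sketched in Appendix E, and the step magnitude is grafted from diagonal AdaGrad as described in Appendix G below.

```python
import numpy as np

class ShampooMatrix:
    """Sketch of one Shampoo-preconditioned weight matrix (no pipelining)."""
    def __init__(self, W, lr=0.1, beta2=1.0, tau2=1, eps=1e-6):
        self.W, self.lr, self.beta2, self.tau2, self.eps = W, lr, beta2, tau2, eps
        n, m = W.shape
        self.L = eps * np.eye(n)      # left statistics  L_t
        self.R = eps * np.eye(m)      # right statistics R_t
        self.D = np.zeros_like(W)     # diagonal AdaGrad statistics
        self.t = 0

    def step(self, G):
        self.t += 1
        if self.t % self.tau2 == 0:   # gather statistics every tau2 steps
            if self.beta2 < 1:        # exponential moving average, as in Adam
                self.L = self.beta2 * self.L + (1 - self.beta2) * G @ G.T
                self.R = self.beta2 * self.R + (1 - self.beta2) * G.T @ G
            else:
                self.L += G @ G.T
                self.R += G.T @ G
        self.D += G * G
        # Shampoo direction L^{-1/4} G R^{-1/4} ...
        P = inverse_pth_root(self.L, 4) @ G @ inverse_pth_root(self.R, 4)
        # ... with its magnitude grafted from the diagonal AdaGrad update.
        M = G / (np.sqrt(self.D) + self.eps)
        eta = self.lr * np.linalg.norm(M) / (np.linalg.norm(P) + self.eps)
        self.W -= eta * P
```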
" }, { "heading": "G Further Details on Experiments", "text": "Layer-wise learning rates. As seen in Fig. 7, the step size scale for each layer, which depends on the operator norm of the preconditioners (the inverse p-th root of the smallest singular value of the statistics matrix), has a large spread in its range, which results in optimization instabilities in practice. Moreover, as the statistics and the preconditioner computation are amortized across many steps, the norm does not grow at every step. Hence, we rely on a learning rate schedule based on the update directions of a well-tuned first-order optimizer: in our experiments we use diagonal AdaGrad for Transformers in machine translation as well as for Criteo, and the layer-wise scaling heuristic proposed in the LARS/LAMB optimizer, where each layer's learning rate is set to ‖W_t‖_F / ‖G_t‖_F, for BERT and ResNet training. For example, when used with diagonal AdaGrad, Shampoo is used to determine the direction of the update, and AdaGrad to determine its magnitude.
This procedure, termed grafting in (Agarwal et al., 2020), allows us to bootstrap a reasonable, well-tuned learning rate schedule for a specific problem, and to study the effect of preconditioned gradient directions in isolation. The weight matrix W_t is updated as W_t = W_{t−1} − A_t Ŝ_t, where:
D_t = Σ_{s=1}^t G_s ∘ G_s;  A_t = η₀ ‖D_t^{-1/2} ∘ G_t‖_F  (AdaGrad magnitude);
Ŝ_t = (L_t^{-1/4} G_t R_t^{-1/4}) / ‖L_t^{-1/4} G_t R_t^{-1/4}‖_F  (Shampoo direction)." }, { "heading": "G.1 Transformer model on WMT'14 en→fr", "text": "For all optimizers, we make use of a warmup schedule where the learning rate is increased from 0.0 to η over 40k steps. For the smaller Transformer experiments we use a quadratic warmup, and for the larger Transformer experiments we use a linear warmup. We found that quadratic warmup improves all optimizers equally and provides a better log-perplexity. For the Adam optimizer experiments, we use a learning rate decay schedule of the form η_t = η √(d/t), following the suggestion of Vaswani et al. (2017). For the smaller Transformer experiments, we tuned the hyperparameters for each algorithm over 100 trials. We took the best settings for the momentum and second-moment parameters, and tuned the learning rates until either the model became unstable or performance stopped improving. For Shampoo, we used a per-layer learning rate derived from AdaGrad (described above), and found that for the exact same hyperparameter settings as AdaGrad, Shampoo provides a modest improvement in performance. Moreover, Shampoo allows for larger learning rates than AdaGrad does, as shown in Fig. 4a." }, { "heading": "G.2 Step time for BERT-Large", "text": "Our current implementation showed a 14% increase in step time for BERT-Large, nearly wiping out all the gains from the reduced number of steps (16%). We note that, due to the amount of resources it would require to tune BERT, we used Shampoo with the exact same hyper-parameters as LAMB, with grafting, to understand the effect of the preconditioner. Moreover, the step time can be optimized considerably, as the current implementation is not heavily optimized. For example, larger batch sizes help amortize the preconditioning overhead and reduce the overall wall time to reach the same accuracy. Furthermore, in our current implementation, all TPU cores compute all the preconditioning statistics and the preconditioned gradients, which involves over a hundred 1024 × 1024 matrix multiplications.
This repeated work can be avoided by cross-replica sharding of the weight update (Xu et al., 2020), which distributes this computation across cores and should save at least half the step-time overhead." }, { "heading": "G.3 CIFAR-10", "text": "We train a ResNet-50 model on CIFAR-10 (Krizhevsky et al., 2009) with 2 cores of Cloud TPU-v2 at batch size 2048. Our baseline achieves 93.45% accuracy at 300 epochs, whereas Shampoo reaches the same accuracy in 143 epochs. We see an overall training time reduction of 42% (1428 seconds to 827 seconds). As it is a smaller problem, the time taken for the preconditioner inverse computation for the largest preconditioning matrix is less than 1 ms on the CPU. We use a total of 8 CPU cores to run these inverses." }, { "heading": "G.4 ImageNet", "text": "For SGD with momentum, the learning rate is warmed up over the first 5 epochs from 0 to 1.6, followed by 10x drops of the learning rate at 30, 60 and 80 epochs. For LARS, we warm up the learning rate over 20 epochs for the 4K and 16K batch sizes, and over 25 epochs for the 32K batch size, with a polynomial decay (p=2) until the end of training. For Shampoo we use the same layer-wise heuristics and hyperparameters as LARS, with grafting, such that the direction is changed to the one computed by Shampoo. We use weight decay with value λ₂ = 2×10⁻⁴ and label smoothing of 10⁻¹." }, { "heading": "G.5 Detailed results for experiments", "text": "Approximate wall clock times for the various tasks are as follows:
Task | Model | Baseline | Shampoo
Recommendations: Criteo-1Tb | DLRM | 13 min | 8.2 min
Translation: WMT-14 En-Fr | Transformer | ≈ 12 hrs | 6.5 hrs
Translation: WMT-14 En-Fr | Transformer-Big | ≈ 47 hrs | 29.5 hrs
Language Modeling: Wikipedia+Books | BERT-Large | 228 mins | 219 mins" }, { "heading": "G.6 Breakdown of step-time in Fig. 2b", "text": "Each step of training consists of the following phases, whose times are shown in Fig. 2b.
• Forward pass: Each core independently computes the predictions for each training example in its sub-batch.
• Gradient: The gradient for the sub-batch is computed using the back-propagation algorithm.
• All-reduce: The gradients for the sub-batches from all cores are averaged to compute the gradient for the minibatch. This is then sent back to each core.
• Preconditioner statistics: The preconditioner statistics for adaptive algorithms are updated: e.g., for AdaGrad we set H_i := H_i + g_i² for all parameters, while for Shampoo we set L_i := L_i + G G^T, etc.
• Preconditioned gradient: The preconditioned gradient is computed: e.g., for AdaGrad we compute g_i/√H_i, while for Shampoo we compute L^{-1/4} G R^{-1/4}.
• Parameter updates: The parameters are updated using the preconditioned gradients. This step is the same for all algorithms: W := W − η G̃, where G̃ is the preconditioned gradient.
Note that the Shampoo computation of the preconditioners L^{-1/4}, R^{-1/4} is pipelined on the host CPU, so it does not show up in the step times." } ]
2020
Towards Practical Second Order Optimization for Deep Learning
SP:f30f2cd322e3995e29563d5f6045e0f427c267af
[ "The authors claimed in this paper that as the most empirically successful approach to defending adversarial examples, PGD-based adversarial training, is computationally inefficient. Fast adversarial training could mitigate this issue by training a model using FGSM attacks initialized with large randomized perturbations, but the underlying reason for its success remains unclear and it may still suffer from catastrophic overfitting. The authors conducted a series of experiments to figure out the key to the success and properties of fast adversarial training. The experimental results showed that fast adversarial training cannot avoid catastrophic overfitting, but could be able to recover from catastrophic overfitting quickly. Based on all of the observations, the authors proposed a simple method to improve fast adversarial training by using PGD attack as training instead of R+FGSM attack (proposed in fast adversarial training) when overfitting happens, or using fast adversarial training as a warmup. The proposed methods could achieve slightly better performance than the current state-of-art approach while reducing the training time significantly." ]
Current neural-network-based classifiers are susceptible to adversarial examples. The most empirically successful approach to defending against such adversarial examples is adversarial training, which incorporates a strong self-attack during training to enhance its robustness. This approach, however, is computationally expensive and hence is hard to scale up. A recent work, called fast adversarial training, has shown that it is possible to markedly reduce computation time without sacrificing significant performance. This approach incorporates simple self-attacks, yet it can only run for a limited number of training epochs, resulting in sub-optimal performance. In this paper, we conduct experiments to understand the behavior of fast adversarial training and show the key to its success is the ability to recover from overfitting to weak attacks. We then extend our findings to improve fast adversarial training, demonstrating superior robust accuracy to strong adversarial training, with much-reduced training time.
[]
[ { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a query-efficient black-box adversarial attack via random search", "venue": null, "year": 1912 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "venue": "arXiv preprint arXiv:1907.02044,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "arXiv preprint arXiv:2003.01690,", "year": 2020 }, { "authors": [ "Krishnamurthy Dvijotham", "Robert Stanforth", "Sven Gowal", "Timothy Mann", "Pushmeet Kohli" ], "title": "A dual approach to scalable verification of deep networks", "venue": "arXiv preprint arXiv:1803.06567,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "J Zico Kolter", "Eric Wong" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "arXiv preprint arXiv:1711.00851,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial 
examples with differential privacy", "venue": "arXiv preprint arXiv:1802.03471,", "year": 2018 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Certified adversarial robustness with additive noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh" ], "title": "Mixed precision training", "venue": "arXiv preprint arXiv:1710.03740,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "arXiv preprint arXiv:1801.09344,", "year": 2018 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "arXiv preprint arXiv:2002.11569,", "year": 2020 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of neural networks using large learning rates", "venue": "In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Yusuke Tashiro", "Yang Song", "Stefano Ermon" ], "title": "Output diversified initialization for adversarial attacks", "venue": "arXiv preprint arXiv:2003.06878,", "year": 2020 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Shiqi Wang", "Yizheng Chen", "Ahmed Abdou", "Suman Jana" ], "title": "Mixtrain: Scalable training of formally robust neural networks", "venue": "arXiv preprint arXiv:1811.02625,", "year": 2018 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen 
Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Duane Boning", "Inderjit S Dhillon", "Luca Daniel" ], "title": "Towards fast computation of certified robustness for relu networks", "venue": "arXiv preprint arXiv:1804.09699,", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "arXiv preprint arXiv:1805.12514,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan L Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "arXiv preprint arXiv:1901.08573,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial examples are carefully crafted versions of the original data that successfully mislead a classifier (Szegedy et al., 2013), while realizing minimal change in appearance when viewed by most humans. Although deep neural networks have achieved impressive success on a variety of challenging machine learning tasks, the existence of such adversarial examples has hindered the application of deep neural networks and drawn great attention in the deep-learning community.\nEmpirically, the most successful defense thus far is based on Projected Gradient Descent (PGD) adversarial training (Goodfellow et al., 2014; Madry et al., 2017), augmenting the data of interest with strong adversarial examples, to help improve model robustness. Although effective, this approach is not efficient and may take multiple days to train a moderately large model. On the other hand, one of the early versions of adversarial training, based on a weaker Fast Gradient Signed Method (FGSM) attack, is much more efficient but suffers from “catastrophic overfitting,” a phenomenon where the robust accuracy with respect to strong attacks suddenly drops to almost zero during training (Tramèr et al., 2017; Wong et al., 2019), and fails to provide robustness against strong attacks.\nFast adversarial training (Wong et al., 2019) is a simple modification to FGSM, that mitigates this issue. By initializing FGSM attacks with large randomized perturbations, it can efficiently obtain robust models against strong attacks. Although the modification is simple, the underlying reason for its success remains unclear. Moreover, fast adversarial training is only compatible with a cyclic learning rate schedule (Smith & Topin, 2019), with a limited number of training epochs, resulting in sub-optimal robust accuracy compared to PGD adversarial training (Rice et al., 2020). When fast adversarial training runs for a large number of epochs, it still suffers from catastrophic overfitting, similar to vanilla FGSM adversarial training. Therefore, it remains an unfinished task to obtain the effectiveness of PGD adversarial training and the efficiency of FGSM adversarial training simultaneously.\nIn this paper, we conduct experiments to show that the key to the success of fast adversarial training is not avoiding catastrophic overfitting, but being able to retain the robustness of the model when catastrophic overfitting occurs. We then utilize this understanding to propose a simple fix to fast adversarial training, making possible the training of it for a large number of epochs, without sacrificing efficiency. We demonstrate that, as a result, we yield improved performance.\nWe also revisit a previously developed technique, FGSM adversarial training as a warmup (Wang et al., 2019), and combine it with our training strategy to further improve performance with small additional computational overhead. 
The resulting method outperforms the state-of-the-art approach, PGD adversarial training (Rice et al., 2020), while consuming much less training time.
Our contributions are summarized as follows:
• We conduct experiments to explain both the success and the failure of fast adversarial training in various cases.
• We propose an alternative training strategy as a fix to fast adversarial training, which is equally efficient but allows training for a large number of epochs, and hence achieves better performance.
• We propose to utilize the improved fast adversarial training as a warmup for PGD adversarial training, which surpasses state-of-the-art adversarial robustness with reduced computation." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "The existence of adversarial examples in deep learning was initially reported in (Szegedy et al., 2013). Since then, many approaches have been proposed to mitigate this issue and improve the adversarial robustness of models. A straightforward method is data augmentation, where adversarial examples are generated before the back-propagation at each iteration and used for model updates. This approach is referred to as adversarial training. It was first used with a gradient-based single-step adversarial attack, also known as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). Later, (Kurakin et al., 2016) found that models trained with FGSM tend to overfit and remain vulnerable to stronger attacks. They proposed a multi-step version of FGSM, namely the Basic Iterative Method (BIM), seeking to address its weaknesses. Randomized initialization for FGSM was then introduced in (Tramèr et al., 2017), leading to R+FGSM, to increase the diversity of attacks and mitigate the overfitting issue. Finally, (Madry et al., 2017) combined randomized initialization with multi-step attacks to propose projected gradient descent (PGD) attacks, and showed that the corresponding adversarial training is able to provide strong adversarial robustness (Athalye et al., 2018). As PGD adversarial training is effective, many works have tried to improve upon it (Zhang et al., 2019b; Xie et al., 2019). However, a recent study (Rice et al., 2020) conducted extensive experiments on adversarially trained models and demonstrated that the performance gain from almost all recently proposed algorithmic modifications to PGD adversarial training is no better than that from a simple piecewise learning rate schedule and early stopping to prevent overfitting.
In addition to adversarial training, a great number of adversarial defenses have been proposed, yet most remain vulnerable to stronger attacks (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Kurakin et al., 2016; Carlini & Wagner, 2017; Brendel et al., 2017; Athalye et al., 2018). A major drawback of many defensive models is that they are heuristic and vulnerable to adaptive attacks that are specifically designed to break them (Carlini et al., 2019; Tramer et al., 2020). 
To address this concern, many works have focused on providing provable/certified robustness of deep neural networks (Hein & Andriushchenko, 2017; Raghunathan et al., 2018; Kolter & Wong, 2017; Weng et al., 2018; Zhang et al., 2018; Dvijotham et al., 2018; Wong et al., 2018; Wang et al., 2018; Lecuyer et al., 2018; Li et al., 2019; Cohen et al., 2019), yet their certifiable robustness cannot match the empirical robustness obtained by adversarial training.
Among all adversarial defenses that claim empirical adversarial robustness, PGD adversarial training has stood the test of time. The only major caveat of PGD adversarial training is its computational cost, due to the iterative attacks at each training step. Many recent works try to reduce this computational overhead. (Shafahi et al., 2019) proposes to update adversarial perturbations and model parameters simultaneously; by performing multiple updates on the same batch, it is possible to imitate PGD adversarial training at an accelerated training speed. Redundant calculations during back-propagation for constructing adversarial examples are removed in (Zhang et al., 2019a) to reduce computational overhead. Recently, (Wong et al., 2019) showed the surprising result that FGSM adversarial training can obtain strongly robust models if a large randomized initialization is used for the FGSM attacks. However, they are forced to use a cyclic learning rate schedule (Smith & Topin, 2019) and a small number of training epochs, which limits performance, especially when compared to state-of-the-art PGD adversarial training with early stopping (Rice et al., 2020)." }, { "heading": "3 FAST ADVERSARIAL TRAINING", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "We consider the task of classification over samples (x, y) ∈ (X, Y). Consider a classifier f_θ : X → Y parameterized by θ, and a loss function L. For a natural example x ∈ X, an adversarial example x′ satisfies D(x, x′) < ε for a small ε > 0 and f_θ(x) ≠ f_θ(x′), where D(·, ·) is some distance metric; i.e., x′ is close to x but yields a different classification result. The distance is often described in terms of an ℓ_p metric, and we focus on the ℓ_∞ metric in this paper.
Adversarial training is an approach for training a robust model against adversarial attacks. It represents the objective of obtaining adversarial robustness in terms of a robust optimization problem, defined as
min_θ E_{(x,y)∼X} [ max_{‖x′−x‖_∞ < ε} L(f_θ(x′), y) ]    (1)
It approximates the inner maximization by constructing adversarial examples based on natural examples, and then the model parameters θ are updated via an optimization method with respect to the adversarial examples instead of the natural ones. One of the simplest choices of attack for adversarial training is the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014):
x′ = x + ε · sign(∇_x L(f_θ(x), y))    (2)
Before the introduction of fast adversarial training (Wong et al., 2019), which we will introduce later, it was commonly believed that FGSM adversarial training fails to provide strong robustness (Kurakin et al., 2016). During FGSM adversarial training, the robust accuracy of the model would suddenly drop to almost 0% after a certain point, when evaluated against PGD attacks. This phenomenon was referred to as "catastrophic overfitting" in (Wong et al., 2019). 
The cause of catastrophic overfitting was studied extensively in (Tramèr et al., 2017): during training, since FGSM is a simple attack, the model learns to fool FGSM attacks by inducing gradient masking/obfuscated gradients (Athalye et al., 2018); that is, the gradient is no longer a useful direction for constructing adversarial examples. The existence of catastrophic overfitting has prohibited the use of FGSM adversarial training.
To mitigate this issue, (Madry et al., 2017) introduced a multi-step variant of FGSM, namely Projected Gradient Descent (PGD), which takes multiple small steps of size α to construct adversarial examples, instead of one large step as in FGSM:
x′_{t+1} = Π_{‖x′−x‖_∞ ≤ ε} ( x′_t + α · sign(∇_{x′_t} L(f_θ(x′_t), y)) )    (3)
Extensive experimental results (Madry et al., 2017; Athalye et al., 2018) have shown that, unless the model is particularly designed for creating obfuscated gradients (Tramer et al., 2020), PGD attacks are generally exempt from overfitting. Consequently, adversarial training with PGD leads to robust models against strong attacks, although its computational cost is often an order of magnitude higher than that of standard training and FGSM adversarial training.
Recently, in contrast to conventional belief, (Wong et al., 2019) proposed fast adversarial training and suggested that it is possible to construct strongly robust models via FGSM adversarial training. They showed that it is important to initialize an FGSM attack with large randomized perturbations, to protect FGSM adversarial training from overfitting. Although randomly initialized FGSM (R+FGSM) had been used in previous works (Tramèr et al., 2017), (Wong et al., 2019) points out that the scale of the randomized initialization was too restrictive and needed to be enlarged. As a result, this simple modification enables R+FGSM adversarial training to obtain reasonable robustness against strong attacks.
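To make Eqs. (2) and (3) concrete, the following PyTorch-style sketch implements the three attacks discussed in this section: FGSM, multi-step PGD, and R+FGSM with the enlarged random initialization. It is our own illustration, not the authors' code, and it assumes inputs normalized to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM, Eq. (2): x' = x + eps * sign(grad_x L)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x)[0]
    return torch.clamp(x + eps * g.sign(), 0, 1).detach()  # clamp to image range

def pgd(model, x, y, eps, alpha, steps):
    """Multi-step PGD, Eq. (3), projecting onto the l_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        g = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project: ||x'-x||_inf <= eps
        x_adv = torch.clamp(x_adv, 0, 1)
    return x_adv.detach()

def r_fgsm(model, x, y, eps, alpha):
    """R+FGSM as used in fast adversarial training: a large uniform
    initialization in [-eps, eps], followed by one FGSM step of size alpha."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    x_init = torch.clamp(x + delta, 0, 1).requires_grad_(True)
    loss = F.cross_entropy(model(x_init), y)
    g = torch.autograd.grad(loss, x_init)[0]
    x_adv = x_init.detach() + alpha * g.sign()
    x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return torch.clamp(x_adv, 0, 1).detach()
```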
" }, { "heading": "3.2 SUB-OPTIMAL PERFORMANCE OF FAST ADVERSARIAL TRAINING", "text": "In (Wong et al., 2019) it is claimed that fast adversarial training has performance comparable to PGD adversarial training, yet they only compared to the original results from (Madry et al., 2017). Recent work (Rice et al., 2020) has shown that PGD adversarial training can be greatly improved with the standard piecewise learning rate schedule and early stopping.
In Figure 1 we compare fast adversarial training (solid lines) and PGD adversarial training (dashed lines) on a PreAct ResNet-18 (He et al., 2016a) model for classifying CIFAR-10 (Krizhevsky et al., 2009) under 10-step PGD attacks (ε = 8/255). For fast adversarial training, we use a cyclic learning rate schedule (Smith & Topin, 2019), which linearly increases and then decreases the learning rate. In particular, we linearly increase the learning rate from 0 to 0.2 in the first 12 epochs and decrease it to 0 over the last 18 epochs, as recommended in (Wong et al., 2019). For PGD adversarial training, we use a piecewise learning rate schedule which starts at 0.1 and decays by a factor of 0.1 at the 50th and the 75th epochs, for a total of 100 epochs.
A clear gap in robust accuracy is illustrated in Figure 1. There are two main factors accounting for this performance gap. First, although a model trained with the cyclic learning rate can converge in only a few epochs, this often results in sub-optimal performance in the adversarial training setting (Rice et al., 2020) compared to the piecewise learning rate schedule. The issue is that fast adversarial training is forced to use a cyclic learning rate schedule: if a piecewise learning rate schedule is used for fast adversarial training over a large number of epochs, the model will still encounter catastrophic overfitting. We ran fast adversarial training with 25 different random seeds for 100 epochs, with the same piecewise learning rate schedule as for PGD adversarial training, and terminated each run when catastrophic overfitting happened. We add in Figure 1 the epoch number at which the overfitting happens, versus the best clean and robust accuracy before overfitting.
The results show that none of the training runs exceeds even the 70th epoch without encountering catastrophic overfitting. For runs terminated before the 50th epoch, where the learning rate drops, performance is inferior due to insufficient training. On the other hand, although the remaining runs also terminate early, they consistently outperform fast adversarial training with the cyclic learning rate schedule. In other words, if fast adversarial training can run for more epochs with the piecewise learning rate schedule, it has the potential to improve upon fast adversarial training with the cyclic learning rate schedule.
Another reason for the inferior performance of fast adversarial training is the inherent weakness of FGSM compared to PGD attacks. As PGD is in general a better approximation to the solution of the inner maximization problem in (1), it is expected to produce more robust models. We seek to address this issue in Section 5." }, { "heading": "3.3 UNDERSTANDING FAST ADVERSARIAL TRAINING", "text": "Although (Wong et al., 2019) has shown that initialization with a large randomized perturbation results in effective FGSM adversarial training, the underlying mechanism for its effectiveness remains a puzzle. Moreover, even with the recommended setting, catastrophic overfitting still happens on occasion. Plot (a) in Figure 2 shows both a success mode (orange) and a failure mode (green) when we use fast adversarial training with the cyclic learning rate for 30 epochs. It seems that the model in the success mode never encounters overfitting, while the model in the failure mode encounters catastrophic overfitting at around the 26th epoch. However, surprisingly, if we look closer at the training progress in plot (b), where we report the test clean and robust accuracy every 10 batches (with a batch size of 128) for the last 5 epochs, we observe that the model in the success mode also encounters a sudden drop in test robust accuracy, indicating catastrophic overfitting, but it recovers immediately.
This finding explains why fast adversarial training can run for more epochs than vanilla FGSM adversarial training. It is not because it completely avoids catastrophic overfitting, as previously believed; rather, it is because it can recover from catastrophic overfitting within a few batches. This has not been observed before because a model is normally only evaluated once per epoch, while such "overfit-and-recover" behavior happens within a span of a few batches.
The observation in Figure 2 also suggests that, when catastrophic overfitting happens, although the model quickly turns into a non-robust one, it is fundamentally different from an ordinary non-robust model. In fact, the non-robust model resulting from catastrophic overfitting can quickly regain its robustness once the corresponding attack finds the correct direction for constructing attacks again. 
Thus, the root of the effectiveness of randomized initialization in fast adversarial training is its ability to help the model escape from catastrophic overfitting. On the other hand, this also explains why fast adversarial training still overfits over a long training run: randomized initialization works with high probability, but not always. As training continues, the model becomes more capable of overfitting, and fast adversarial training is less likely to find the correct direction for constructing FGSM attacks.
To verify our analysis, we conduct experiments that exclude the use of randomized initialization, that is, use vanilla FGSM adversarial training, but also run PGD adversarial training for a few batches when catastrophic overfitting happens. In particular, we monitor the PGD robust accuracy on a validation set during training. Once there is a sudden drop in the validation robust accuracy, which indicates the occurrence of overfitting, we run PGD adversarial training for a few batches to help the model recover from overfitting, as an alternative to R+FGSM. The same piecewise learning rate schedule as in Figure 1 is used. We also run vanilla fast adversarial training as a reference.
The results are illustrated in Figure 3, where we report the robust accuracy under 10-step PGD attacks with size ε = 8/255 every 20 batches. Comparing plots (a) and (b), while fast adversarial training overfits after around 15,000 batches, FGSM adversarial training without randomized initialization obtains robust accuracy despite several "overfit-and-recover" episodes. This confirms our hypothesis that the essential ingredient of successful FGSM adversarial training is not the randomized initialization, but the ability to recover from catastrophic overfitting." }, { "heading": "4 A SIMPLE FIX TO FAST ADVERSARIAL TRAINING", "text": "The analysis and experimental results above suggest that (i) FGSM adversarial training is useful as long as it can recover from catastrophic overfitting, and (ii) fast adversarial training can only run for a limited number of epochs because the randomized initialization is not reliable. Therefore, a more reliable way to mitigate catastrophic overfitting is needed. To this end, we propose a simple fix to fast adversarial training, incorporating PGD adversarial training when catastrophic overfitting is observed.
Algorithm 1: Improved fast adversarial training for T epochs, given some radius ε, N PGD steps, step size α, a threshold c, frequency of detection s, and a dataset of size M, for a network f_θ.
for t = 1, ..., T do
  for i = 1, ..., M do
    if Acc_last > Acc_valid + c then
      δ = PGD_Attack(f_θ, x_i, y_i)    // overfitting detected: run PGD adversarial training
    else
      δ = R+FGSM_Attack(f_θ, x_i, y_i)  // no overfitting: use R+FGSM adversarial training
    end if
    θ = θ − ∇_θ ℓ(f_θ(x_i + δ), y_i)    // update model weights with some optimizer, e.g. SGD
    if i mod s == 0 then
      let Acc_last = Acc_valid; update the robust accuracy Acc_valid under PGD attacks
    end if
  end for
end for
The proposed approach is described in Algorithm 1. The core idea is simple and has been described briefly in the previous section: we hold out a validation set and monitor its robust accuracy to detect overfitting. When there is a drop in the validation robust accuracy beyond a threshold, at which point catastrophic overfitting has happened, we run 10-step PGD adversarial training for a few batches to help the model regain its robustness.
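A PyTorch-style sketch of this detection-and-switch loop follows; it is our own simplified rendering of Algorithm 1. The `pgd` and `r_fgsm` functions are the attack sketches from Section 3.1, the validation batch is fixed here for simplicity (in our experiments it is resampled), and the data pipeline and optimizer are schematic.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def _accuracy(model, x, y):
    return (model(x).argmax(1) == y).float().mean().item()

def robust_accuracy(model, batch, eps):
    x, y = batch
    x_adv = pgd(model, x, y, eps, alpha=eps / 4, steps=10)  # 10-step PGD probe
    return _accuracy(model, x_adv, y)

def fast_adv_plus(model, opt, loader, val_batch, eps, alpha, c=0.1, s=20):
    """One epoch of improved fast adversarial training (sketch of Algorithm 1)."""
    acc_valid = robust_accuracy(model, val_batch, eps)
    acc_last = acc_valid
    overfitting = False
    for i, (x, y) in enumerate(loader):
        if overfitting:      # catastrophic overfitting detected: switch to PGD
            x_adv = pgd(model, x, y, eps, alpha=eps / 4, steps=10)
        else:                # normal regime: cheap R+FGSM
            x_adv = r_fgsm(model, x, y, eps, alpha)
        loss = F.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if i % s == 0:       # re-check validation robust accuracy every s batches
            acc_last, acc_valid = acc_valid, robust_accuracy(model, val_batch, eps)
            overfitting = acc_last > acc_valid + c
```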
Note that although the training run in plot (a) of Figure 3 overfits at around the 15,000th batch, the frequency of catastrophic overfitting is much lower than in the training run in plot (b), where no randomized initialization is used. This also implies that although randomized initialization cannot prevent catastrophic overfitting, it effectively reduces its occurrence. To verify this conjecture, we perform the same experiment as in plot (b), but now with randomized initialization, and show the training progress in plot (c) of Figure 3. The occurrences of catastrophic overfitting are much rarer than in plot (b), confirming our conjecture. Therefore, we keep the large randomized initialization for FGSM adversarial training in Algorithm 1, resulting in R+FGSM adversarial training. The infrequent occurrence of catastrophic overfitting also ensures that the additional PGD adversarial training adds little computational overhead to the training process.
Hyperparameters. In Table 1 we report the final clean and robust accuracy of the improved fast adversarial training (FastAdv+). We use the same piecewise learning rate schedule and early stopping as used in PGD adversarial training. We detect overfitting every s = 20 batches with a randomly sampled validation batch, and, when overfitting happens, PGD adversarial training runs for s = 20 batches. The threshold for detecting catastrophic overfitting is c = 0.1. The robust accuracy is evaluated against 50-step PGD attacks with 10 restarts for ε = 8/255. Note that we use half-precision computations (Micikevicius et al., 2017), as recommended in (Wong et al., 2019), to accelerate all training methods. All experiments are repeated 5 times, and both the mean and the standard deviation are reported.
Efficiency. Although FastAdv consumes less time and thus seems more efficient, it is worth noting that the computational time per epoch is almost the same for FastAdv and FastAdv+. FastAdv consumes less time merely because its training is forced to stop before convergence. On the other hand, our proposed FastAdv+ allows the training process to converge and results in better performance." }, { "heading": "5 FAST ADVERSARIAL TRAINING AS A WARMUP", "text": "We are able to improve the performance of fast adversarial training by allowing a longer training run. However, the resulting robust accuracy is still noticeably worse than that of PGD adversarial training. This is expected, as PGD is inherently a stronger attack than FGSM. In this section, we adapt a previously studied technique (Wang et al., 2019), using FGSM adversarial training as a warmup for PGD adversarial training, to close the gap between fast adversarial training and PGD adversarial training.
It has been observed in (Wang et al., 2019) that using FGSM at the early stage of PGD adversarial training does not degrade its performance, and even provides a slight improvement. The intuition behind this is that at the early stage of training, the model is vulnerable to adversarial attacks, and therefore there is no difference between using a weak attack and a strong attack. 
As the training proceeds, the model becomes more robust to weak attacks, and sometimes even overfits to them, at which point stronger attacks are more effective at increasing the robustness of the model.
However, due to the risk of catastrophic overfitting, only a few epochs of FGSM adversarial training were used in (Wang et al., 2019) as a warmup, and consequently it does not provide much improvement in robust accuracy, nor does it save much training time. As FastAdv+ can run for as many epochs as needed, it is possible to use it for most of the training epochs and PGD adversarial training for only a few epochs at the end.
Starting point of PGD adversarial training. Since early stopping always happens a few epochs after the learning rate decay, we start PGD adversarial training a few epochs before the learning rate decay, to minimize the span of PGD adversarial training for the purpose of efficiency. In the experiments, we run the improved fast adversarial training for the first 70 epochs and then switch to PGD adversarial training. Early stopping happens at the 78th epoch, meaning we run PGD adversarial training for no more than 10 epochs.
We report in Figure 4 the validation clean and robust accuracy during the whole training run for FastAdv+ as a warmup, termed FastAdvW. While FastAdvW improves upon FastAdv+, it still suffers from overfitting (although not catastrophic overfitting) in the later stage of training. We assume this is because the FGSM attack with a large randomized initialization is already strong, in contrast to the vanilla FGSM adversarial training used in (Wang et al., 2019). Following the intuition that only a weak attack is needed in the early stage, we reduce the size of the perturbation from 8/255 to 4/255 for the fast adversarial training stage. As shown in Figure 4, this change of attack size (FastAdvW 4-8) allows the model to reach higher robust accuracy. We also report in Table 1 the final test clean and robust accuracy, which is based on early stopping (for FastAdvW, we only consider stopping after PGD adversarial training starts). This shows that FastAdvW outperforms PGD adversarial training in robust accuracy and is comparable in clean accuracy, while consuming much less time.
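The warmup schedule just described can be summarized in a few lines; this is our own sketch, with the epoch boundary and perturbation sizes taken from the CIFAR-10 setting above.

```python
def fastadvw_config(epoch, switch_epoch=70, eps_small=4/255, eps_large=8/255):
    """FastAdvW 4-8 schedule: weak improved-fast-AT warmup, then 10-step PGD."""
    if epoch < switch_epoch:
        # FastAdv+ phase: R+FGSM with overfitting detection, small perturbation.
        return {"attack": "FastAdv+", "eps": eps_small}
    # Strong finish with PGD; early stopping typically triggers around epoch 78.
    return {"attack": "PGD-10", "eps": eps_large}
```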
" }, { "heading": "6 ADDITIONAL EXPERIMENTS", "text": "In the above analyses, we only evaluated the proposed approach on CIFAR-10 using the PreAct ResNet-18 architecture. In this section, we run more experiments on various datasets and model architectures to show the generality of our results.
We first show in Table 2 results for both CIFAR-10 and CIFAR-100 on the Wide-ResNet 34-10 (Zagoruyko & Komodakis, 2016), as this model structure is widely used in the adversarial training literature (Madry et al., 2017; Shafahi et al., 2019; Zhang et al., 2019b; Rice et al., 2020). The same hyperparameter settings as in Section 4 are used, except that the threshold for detecting catastrophic overfitting is reduced to c = 0.05 for CIFAR-100 to accommodate its range of robust accuracy.
In addition, we also include in Table 2 results for "free" adversarial training (Shafahi et al., 2019). This approach reduces the computational cost of PGD adversarial training via the "minibatch replay" technique, which adds adversarial perturbations and updates the model simultaneously on the same minibatch for several iterations, to imitate PGD adversarial training. As a result, this approach only needs to run for several epochs to converge. In these experiments, we follow the recommendation in (Shafahi et al., 2019) and replay each batch m = 8 times for a total of 25 epochs.
Finally, we conduct experiments on Tiny ImageNet, with results also summarized in Table 2. Although previous works (Wong et al., 2019; Shafahi et al., 2019) conduct experiments on ImageNet, training on it still requires several GPUs. As we only run experiments on a single RTX 2080ti, we consider Tiny ImageNet, which consists of 200 ImageNet classes at 64 × 64 resolution. The architecture we use is ResNet-50 (He et al., 2016b), and the hyperparameters, such as the learning rate and the attack size, are kept the same as for the CIFAR datasets.
The results in Table 2 are consistent with what we observed on the PreAct ResNet-18 architecture for CIFAR-10. While FastAdv+ outperforms vanilla fast adversarial training as a result of its longer training run, its robust accuracy is no better than that of PGD adversarial training. However, when we use FastAdv+ as a warmup, its clean accuracy becomes comparable to PGD adversarial training, while its robust accuracy consistently outperforms PGD adversarial training. In addition, FastAdvW only consumes 25% of the training time of PGD adversarial training. It is also worth noting that although free adversarial training uses a piecewise learning rate schedule as well, it only obtains its best performance at the end of training, and thus benefits from early stopping in neither performance nor efficiency." }, { "heading": "6.1 SANITY CHECK", "text": "We now perform sanity checks following the suggestions from (Athalye et al., 2018) to ensure our proposed approaches are truly robust. For all the sanity checks, we evaluate our approaches on both PreAct ResNet-18 and Wide-ResNet 34-10 for classifying CIFAR-10.
We first show in Figure 5 the clean and robust accuracy under 10-step PGD attacks with sizes varying from 0 to 12/255. It can be observed that the decreasing trend of the robust accuracy of our proposed approaches is consistent with models trained via PGD-AT and FastAdv, with FastAdvW outperforming PGD-AT and FastAdv+ outperforming FastAdv consistently.
In addition, we show robust accuracy under other types of attacks that do not rely on estimating gradients. AutoAttack (Croce & Hein, 2020) ensembles four diverse attacks to reliably evaluate robustness: two improved PGD attacks, a boundary-based attack, FAB (Croce & Hein, 2019), and a query-based black-box attack, Square Attack (Andriushchenko et al., 2019). We also use the recently proposed state-of-the-art black-box attack of (Tashiro et al., 2020) to evaluate our models. This attack utilizes output diversified sampling to construct transferable black-box attacks and is almost as powerful as a white-box PGD attack. The results are summarized in Table 3.
We observe that no attack significantly reduces the robust accuracy of our proposed approaches, and the relative performance across the different approaches is consistent. AutoAttack reduces robust accuracy by approximately 5% more than 10-step PGD, which is consistent with the observations in (Croce & Hein, 2020)." }, { "heading": "7 CONCLUSION", "text": "We have conducted experiments to show that the key to the success of FGSM adversarial training is the ability to recover from "catastrophic overfitting". Fast adversarial training utilizes randomized initialization to achieve this goal but still suffers from catastrophic overfitting for a large number of training epochs. 
We design a new training strategy that mitigates this caveat and enables the commonly used piecewise learning rate schedule for fast adversarial training, and, as a result, improves clean and robust accuracy. We also use the improved fast adversarial training as a warmup for PGD adversarial training, and find it is sufficient to use this warmup for the majority of the training epochs, saving time and further improving model robustness. As a result, we obtain performance superior to the expensive, state-of-the-art PGD adversarial training with much-reduced training time. Our proposed approaches are easy to implement, and thus could serve as baselines for empirical adversarial defense." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "B STRONG ATTACKS AFTER OVERFITTING", "text": "When FastAdv+ is used for training a model, even though the model can recover from catastrophic overfitting via PGD adversarial training, it is possible that the model overfits to PGD attacks and stays vulnerable to other attacks. Therefore, we extract the model right after its recovery from catastrophic overfitting and run several kinds of attacks on it, including 10-step PGD attacks, 50-step PGD attacks with 10 restarts, C&W attacks (Carlini & Wagner, 2017), and fast adaptive boundary (FAB) attacks (Croce & Hein, 2019).
The result shows that the model recovered from catastrophic overfitting is indeed robust. Note that the robust accuracy is relatively low, as we are not using the final model." }, { "heading": "C ABLATION ANALYSIS ON ADJUSTED ATTACK SIZE", "text": "In Section 5, we show that it is possible to improve the performance of FastAdvW by using a smaller attack size for the FGSM adversarial training stage. It is possible that the adjusted attack size benefits not only our approach but also PGD adversarial training. Therefore, we use the same setting (4/255 for the first 70 epochs and 8/255 for the rest) for full PGD adversarial training and compare it to vanilla PGD adversarial training.
The results show that PGD adversarial training enjoys limited benefits from the adjusted attack size. This strategy is more compatible with our proposed FastAdvW." } ]
2020
TOWARDS UNDERSTANDING FAST ADVERSARIAL TRAINING
SP:b5daf21a7a1df819b39afd967085b64a55d14fb4
[ "Overview of paper: this work tackles the task of adversarial augmentation for better generalization. Instead of augmentation the pixels space, which is expensive and potentially harder, they augment the intermediate feature representation. As the choice of the particular layer for application of the perturbations affects performance, the authors, optimize it jointly with the rest of the parameters. Experiments show this method improves accuracy over standard training." ]
Adversarial training is an effective method to combat adversarial attacks in order to create robust neural networks. By using an auxiliary batch normalization on adversarial examples, it has been shown recently to possess great potential in improving the generalization ability of neural networks for image recognition as well. However, crafting pixel-level adversarial perturbations is computationally expensive. To address this issue, we propose AdversariaL Feature Augmentation (ALFA), which advocates adversarial training on the intermediate layers of feature embeddings. ALFA utilizes both clean and adversarial augmented features jointly to enhance standard trained networks. To eliminate laborious tuning of key parameters such as locations and strength of feature augmentations, we further design a learnable adversarial feature augmentation (L-ALFA) framework to automatically adjust the perturbation magnitude of each perturbed feature. Extensive experiments demonstrate that our proposed ALFA and L-ALFA methods achieve significant and consistent generalization improvement over strong baselines on CIFAR-10, CIFAR-100 and ImageNet benchmarks across different backbone networks for image recognition.
[]
[ { "authors": [ "Zitian Chen", "Yanwei Fu", "Yinda Zhang", "Yu-Gang Jiang", "Xiangyang Xue", "Leonid Sigal" ], "title": "Multi-level semantic feature augmentation for one-shot learning", "venue": "IEEE Transactions on Image Processing,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR, pp", "year": 2009 }, { "authors": [ "Zhe Gan", "Yen-Chun Chen", "Linjie Li", "Chen Zhu", "Yu Cheng", "Jingjing Liu" ], "title": "Large-scale adversarial training for vision-and-language representation learning", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "M. Ishii", "A. Sato" ], "title": "Training deep neural networks with adversarially augmented features for smallscale training datasets", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Boyi Li", "Felix Wu", "Ser-Nam Lim", "Serge Belongie", "Kilian Q Weinberger" ], "title": "On feature normalization and data augmentation", "venue": "arXiv preprint arXiv:2002.11102,", "year": 2020 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Preetum Nakkiran" ], "title": "Adversarial robustness may be at odds with simplicity", "venue": "arXiv preprint arXiv:1901.00532,", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John C Duchi", "Percy Liang" ], "title": "Adversarial training can hurt generalization", "venue": "In ICMLW,", "year": 2019 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "Swami Sankaranarayanan", "Arpit Jain", "Rama Chellappa", "Ser Nam Lim" ], "title": "Regularizing deep networks using efficient layerwise adversarial training", "venue": "arXiv preprint arXiv:1705.07819,", "year": 2017 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", 
"Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Bernhard Schölkopf", "Chris Burges", "Vladimir Vapnik" ], "title": "Incorporating invariances in support vector learning machines", "venue": "In ICANN,", "year": 1996 }, { "authors": [ "Patrice Simard", "Yann LeCun", "John S Denker" ], "title": "Efficient pattern recognition using a new transformation distance", "venue": "In NeurIPS, pp", "year": 1993 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Disentangling adversarial robustness and generalization", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Ke Sun", "Zhanxing Zhu", "Zhouchen Lin" ], "title": "Towards understanding adversarial examples systematically: Exploring data size, task and model factors", "venue": null, "year": 1902 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2013 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "R. Volpi", "P. Morerio", "S. Savarese", "V. Murino" ], "title": "Adversarial feature augmentation for unsupervised domain adaptation", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Dilin Wang", "Chengyue Gong", "Qiang Liu" ], "title": "Improving neural language modeling via adversarial training", "venue": "arXiv preprint arXiv:1906.03805,", "year": 2019 }, { "authors": [ "Jingkang Wang", "Tianyun Zhang", "Sijia Liu", "Pin-Yu Chen", "Jiacen Xu", "Makan Fardad", "Bo Li" ], "title": "Towards a unified min-max framework for adversarial exploration and robustness", "venue": "arXiv preprint arXiv:1906.03563,", "year": 2019 }, { "authors": [ "Colin Wei", "Tengyu Ma" ], "title": "Improved sample complexities for deep networks and robust classification via an all-layer margin", "venue": null, "year": 1910 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Jiang Wang", "Alan L Yuille", "Quoc V Le" ], "title": "Adversarial examples improve image recognition", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Yinghui Zhang", "Bo Sun", "Yongkang Xiao", "Rong Xiao", "YunGang Wei" ], "title": "Feature augmentation for imbalanced classification with conditional mixture wgans", "venue": "Signal Processing: Image Communication,", "year": 2019 }, { "authors": [ "Chen Zhu", "Yu Cheng", "Zhe Gan", "Siqi Sun", "Tom Goldstein", "Jingjing Liu" ], "title": "Freelb: Enhanced adversarial training for natural language understanding", "venue": "In ICLR,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks often fall vulnerable when presented adversarial examples injected with imperceptible perturbations, and suffer significant performance drop when facing such attacks (Szegedy et al., 2013; Goodfellow et al., 2015b). Such susceptibility has motivated abundant studies on adversarial defense mechanisms for training robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019), among which adversarial training based methods (Madry et al., 2018b; Zhang et al., 2019a) have achieved consistently superior robustness than others.\nThe general focus of adversarial training is to enhance the robustness of gradient-based adversarial examples. A few recent studies (Zhu et al., 2020; Gan et al., 2020) turn to investigate the generalization ability of adversarial training on language models. However, in-depth exploration of extending this to the vision domain is still missing. Xie et al. (2020) proposes to utilize adversarial examples with an auxiliary batch normalization to improve standard accuracy for image recognition, but it still suffers from expensive computational cost from the generation of pixel-level perturbations.\nTo address this issue, we propose AdversariaL Feature Augmentation (ALFA) as a natural extension of adversarial training, with a focus on leveraging adversarial perturbations in the feature space to improve image recognition on clean data. As illustrated in Figure 1, ALFA introduces adversarial perturbations to multiple intermediate layers. These perturbed feature embeddings act as a special feature augmentation and implicit regularization to enhance the generalization ability of deep neural networks. Consequently, two challenges arise: (i) how to efficiently find the best locations to introduce adversarial perturbations; and (ii) how to decide on the strength of the created perturbations. Although a few recent works (Zhu et al., 2020; Gan et al., 2020; Sankaranarayanan et al., 2017) look into this field, they either add perturbations in the input embeddings or all the intermediate features, yet have not reached a coherent conclusion.\nTo efficiently learn an optimal strategy of perturbation injection, we further propose a learnable adversarial feature augmentation (L-ALFA) framework, which is capable of automatically adjusting the position and strength of introduced feature perturbations. The proposed approach not only\ncircumvents laborious hyper-parameter tuning, but also fully unleashes the power of adversarial feature augmentation. Experiments show that this strategy gains a substantial performance margin over existing feature augmentation methods (Li et al., 2020). In addition, we find that learnable ALFA and exhaustively-tuned ALFA exhibit consistent patterns: applying weak adversarial feature augmentations to the last layers of deep neural networks can boost generalization performance.\nThe main contributions are summarized as follows. (i) We introduce a new approach of adversarial feature augmentation (ALFA) to improve the generalization ability of neural networks, which applies adversarial perturbations to the feature space rather than raw image pixels. (ii) To tackle the dilemma of laborious hyper-parameter tuning in generating adversarial features, we propose learnable adversarial feature augmentation (L-ALFA) to automatically tailor target perturbations and their locations. 
(iii) Comprehensive experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets across multiple backbone networks demonstrate the superiority of the proposed methods." }, { "heading": "2 RELATED WORK", "text": "Adversarial Training Deep neural networks are notoriously vulnerable to adversarial samples (Szegedy et al., 2013; Goodfellow et al., 2015b), which are crafted with malicious yet negligible perturbations (Goodfellow et al., 2015a; Kurakin et al., 2016; Madry et al., 2018a). In order to improve robustness against adversarial samples, various defense mechanisms have been proposed (Zhang et al., 2019a; Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019). Among these works, adversarial-training-based methods (Madry et al., 2018b; Zhang et al., 2019a) have achieved consistently superior performance in defending against state-of-the-art adversarial attacks (Goodfellow et al., 2015a; Kurakin et al., 2016; Madry et al., 2018a). Although adversarial training substantially improves model robustness, it usually comes at the price of compromising standard accuracy (Tsipras et al., 2019), which has been demonstrated both empirically and theoretically (Zhang et al., 2019a; Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Stutz et al., 2019; Raghunathan et al., 2019).
Recently, researchers have started to investigate improving clean-set accuracy with adversarial training (Xie et al., 2020; Zhu et al., 2020; Wang et al., 2019a; Gan et al., 2020; Wei & Ma, 2019; Ishii & Sato, 2019). Xie et al. (2020) shows that performance on the clean dataset can be enhanced by using adversarial samples with pixel-level perturbation generation. Zhu et al. (2020) and Wang et al. (2019a) apply adversarial training to natural language understanding and language modeling, both successfully achieving better standard accuracy. Gan et al. (2020) achieves similar success on many vision-and-language tasks. There also exist parallel studies that employ handcrafted or auto-generated perturbed features to ameliorate generalization (Wei & Ma, 2019; Ishii & Sato, 2019) or robustness (Sankaranarayanan et al., 2017).
However, two key issues remain unexplored: (i) at which layers to introduce adversarial feature augmentations, and (ii) how strong the perturbations should be. For the former, Zhu et al. (2020), Wang et al. (2019a), and Gan et al. (2020) perturb the input embeddings of transformer models, while Wei & Ma (2019) and Sankaranarayanan et al. (2017) insert perturbations into all layers of a convolutional network. Regarding both issues, all these methods require arduous, heuristic tuning. In our paper, we present a different observation: augmenting the last layers' feature embeddings with weak adversarial feature perturbations yields higher standard accuracy. The L-ALFA framework, inspired by this observation, effectively alleviates the laborious tuning that is otherwise inevitable.
Feature Augmentation Although pixel-level data augmentation techniques (Simard et al., 1993; Schölkopf et al., 1996) have been widely adopted, feature-space augmentations have not received the same level of attention. A few pioneering works propose generative feature augmentation approaches for domain adaptation (Volpi et al., 2018), imbalanced classification (Zhang et al., 2019b), and few-shot learning (Chen et al., 2019). Another loosely related field is feature normalization (Ioffe & Szegedy, 2015; Li et al., 2020). 
MoEx (Li et al., 2020) is a newly proposed method that can be regarded as a feature augmentation technique; it leverages the first- and second-order moments extracted and re-injected by feature normalization. It is worth mentioning that all the aforementioned approaches are orthogonal to our proposed method and can be combined with it for further generalization improvement, which is left as future work." }, { "heading": "3 ADVERSARIAL FEATURE AUGMENTATIONS (ALFA)", "text": "In the proposed ALFA framework, we generate adversarial perturbations in the intermediate feature embedding space, rather than applying perturbations to raw image pixels as in common practice. Thus, adversarial training can be formulated as an effective regularization to improve the generalization ability of deep neural networks." }, { "heading": "3.1 NOTATIONS", "text": "Consider a dataset D = {x, y}, where x is the input image and y is the corresponding one-hot ground-truth label. Let f(x; Θ) denote the predictions of a deep neural network, and $f_i(x; \Theta^{(i)})|_{i=1}^{r+1}$ the intermediate feature embedding from the i-th layer. The (r + 1)-th layer denotes the classifier, therefore $f_{r+1}(x; \Theta^{(r+1)}) = f(x; \Theta)$. Adversarial training can be formulated as the following min-max optimization problem:

$$\min_{\Theta} \; \mathbb{E}_{(x,y) \in \mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}_{at}(f(x + \delta; \Theta); \Theta; y) \Big], \qquad (1)$$

where δ is the adversarial perturbation bounded by the ℓ_p norm ball centered at x with radius ε, the maximum perturbation magnitude. $\mathcal{L}_{at}$ is the cross-entropy loss for adversarial training (AT). $\mathbb{E}_{(x,y) \in \mathcal{D}}$ takes the expectation of the empirical objective over the training dataset D. The inner optimization generates the adversarial perturbation δ by maximizing the empirical objective. It can be reliably solved by multi-step projected gradient descent (PGD) (Madry et al., 2018b) (without loss of generality, we take the $\|\cdot\|_\infty$ perturbation as an example):

$$\delta_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon} \big[ \delta_t + \alpha \cdot \mathrm{sgn}(\nabla_x \mathcal{L}_{at}(f(x + \delta_t; \Theta); \Theta; y)) \big], \qquad (2)$$

where t is the step index, α denotes the learning rate of the inner maximization, sgn is the sign function, and $\mathcal{L}_{at}$ is the adversarial training objective on adversarial images." }, { "heading": "3.2 PERTURBATIONS IN THE EMBEDDING SPACE VIA ALFA", "text": "Here, we extend conventional adversarial perturbations to the feature embedding space. We start from the training objective of ALFA:

$$\min_{\Theta} \; \mathbb{E}_{(x,y) \in \mathcal{D}} \Big[ \mathcal{L}_{std}(x; \Theta; y) + \lambda \cdot \sum_i \max_{\|\delta^{(i)}\|_\infty \le \epsilon} \mathcal{L}_{at}(f_i(x; \Theta^{(i)}) + \delta^{(i)}; \Theta; y) \Big], \qquad (3)$$

where $\mathcal{L}_{std}$ is the cross-entropy (XE) loss on clean images, and $\mathcal{L}_{at}$ here is the cross-entropy loss for adversarial training (AT) on adversarially augmented feature embeddings. λ is the hyperparameter controlling the influence of the AT regularization, tuned by grid search. $\delta^{(i)}$ is the adversarial perturbation on the feature of layer i, generated as follows:

$$\delta^{(i)}_{t+1} = \Pi_{\|\delta\|_\infty \le \epsilon} \big[ \delta^{(i)}_t + \alpha \cdot \mathrm{sgn}(\nabla_x \mathcal{L}_{at}(f_i(x; \Theta^{(i)}) + \delta^{(i)}_t; \Theta; y)) \big]. \qquad (4)$$

It is worth noting that, for crafting $\delta^{(i)}$, at each step the gradient is only back-propagated to the i-th layer and no further, which is much more computationally efficient than generating perturbations in the input space. In practice, we set the maximum magnitude of the crafted feature perturbation to be unbounded, and projected gradient descent is replaced by plain gradient descent.
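For concreteness, the following is a minimal PyTorch-style sketch of the inner maximization in Eq. (4) and the per-layer ALFA objective of Eq. (3). The helper callables `backbone_to_i` and `head_from_i` (the sub-networks below and above layer i) are illustrative assumptions for this sketch, not the released implementation:

```python
import torch
import torch.nn.functional as F

def alfa_perturb(feat, head, y, steps=5, alpha=1.0 / 255):
    # Inner maximization of Eq. (4): sign-gradient ascent on a perturbation
    # added to a (detached) intermediate feature. The magnitude is unbounded
    # in practice, so there is no projection step.
    delta = torch.zeros_like(feat, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(head(feat + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).detach().requires_grad_(True)
    return delta.detach()

def alfa_loss(backbone_to_i, head_from_i, x, y, lam=1.0):
    # Eq. (3) for a single layer i: clean cross-entropy plus the adversarial
    # term on the perturbed feature embedding.
    feat = backbone_to_i(x)                        # f_i(x; Theta^(i))
    clean = F.cross_entropy(head_from_i(feat), y)
    # Detaching feat keeps perturbation crafting from back-propagating
    # below layer i, matching the efficiency argument above.
    delta = alfa_perturb(feat.detach(), head_from_i, y)
    adv = F.cross_entropy(head_from_i(feat + delta), y)
    return clean + lam * adv
```

Because the feature perturbation is unbounded, the sketch performs plain sign-gradient ascent with no projection, in line with the practice described above.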
In ALFA, the two most essential factors are: (i) where to introduce adversarial perturbations, and (ii) how strong the perturbations should be. Table 1 and Figure 2 present some preliminary results to understand this. Results show that the performance of ALFA depends strongly on the location (i.e., which blocks) and strength (i.e., step size α) of the introduced feature perturbations. An inadequate configuration (e.g., applying ALFA to all of blocks 1, 2, and 3, as shown in Figure 2) might cause accuracy degradation. More analyses are provided in Section 4.3. To determine the best configuration, we further design a learnable adversarial feature augmentation (L-ALFA) approach to automatically adjust the location and strength of the perturbations for the best augmentation performance, which is explained in the next sub-section." }, { "heading": "3.3 LEARNABLE PERTURBATIONS VIA L-ALFA", "text": "To ascertain the two critical settings (locations and strength) of feature augmentations, we introduce an enhanced ALFA method, L-ALFA, which also eliminates laborious tuning. Specifically, at layer i, for the PGD-generated perturbation $\delta^{(i)}$, we apply a learnable parameter $\eta_i$ to control the magnitude of $\delta^{(i)}$ before adding it to the corresponding feature embeddings. Thus, a learned near-zero $\eta_i$ indicates that it is unnecessary to inject feature perturbations at layer i. Furthermore, we also introduce an ℓ1 sparsity constraint on the learnable perturbation magnitudes η. The design philosophy is that applying ALFA to all layers of a deep neural network does not benefit (and may even hurt) standard accuracy, as exemplified in Figure 2 and Figure 3.

The optimization problem is then formulated as:

$$\min_{\Theta, \{\eta_i\}_1^r, \eta \in \mathcal{P}} \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Big[ \mathcal{L}_{std} + \lambda \cdot \sum_{i=1}^{r} \max_{\|\delta^{(i)}\|_\infty \le \epsilon} \mathcal{L}_{at}(f_i(x; \Theta^{(i)}) + \eta_i \cdot \delta^{(i)}; \Theta; y) + \gamma \cdot \|\eta\|_1 \Big], \qquad (5)$$

where $\mathcal{L}_{std} = \mathcal{L}_{XE}(x; y; \Theta)$, $\mathcal{P} = \{\eta \mid \mathbf{1}^T \eta = 1\}$, $\eta = (\eta_1, \eta_2, \cdots, \eta_r)$ is the vector of learnable strengths of the feature perturbations, and γ is the hyperparameter controlling the sparsity level. γ can be chosen from {0.5, 1.0, 2.0}. To solve equation 5, we first generate the feature perturbations $\delta^{(i)}$ via multi-step PGD.

Algorithm 1 Learnable Adversarial Feature Augmentation (L-ALFA).
1: Input: given $\Theta_0$, $\eta_0$, $\delta_0$. (In our case, $\eta_0 = (1, \cdots, 1) \in \mathbb{R}^r$.)
2: for n = 1, 2, · · · , N iterations do
3:   Given $\Theta_{n-1}$, $\eta_{n-1}$, generate the adversarial perturbation $\delta_n = (\delta^{(1)}_n, \cdots, \delta^{(r)}_n)$ via multi-step PGD;
4:   Given $\delta_n$, perform SGD to update $\Theta_n$, $\eta_n$;
5:   Project $\eta_n$ onto $\mathcal{P}$ via the bisection method (Wang et al., 2019b).
6: end for

Then, we minimize the empirical training objective to update the network weights Θ and η through stochastic gradient descent (SGD) (Ruder, 2016). Finally, we project η onto P and repeat the above steps until training converges. The full algorithm is summarized in Algorithm 1.
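As an illustration of Algorithm 1, the sketch below pairs the objective of Eq. (5) with a bisection-based projection of η onto P. The nonnegativity of η and the helper names are our assumptions for this sketch, not the paper's released code:

```python
import torch
import torch.nn.functional as F

def project_to_simplex(eta, iters=50):
    # Bisection on the shift mu so that sum(max(eta - mu, 0)) = 1, i.e. a
    # projection onto {eta : 1^T eta = 1, eta >= 0}. The paper states only
    # 1^T eta = 1; the nonnegativity here is our simplifying assumption.
    with torch.no_grad():
        lo, hi = eta.min() - 1.0, eta.max()
        for _ in range(iters):
            mu = (lo + hi) / 2
            if torch.clamp(eta - mu, min=0).sum() > 1.0:
                lo = mu   # too much mass remains -> shift further up
            else:
                hi = mu
        eta.copy_(torch.clamp(eta - (lo + hi) / 2, min=0))

def l_alfa_loss(clean_logits, feats, heads, deltas, eta, y, lam=1.0, gamma=1.0):
    # Eq. (5): clean loss + eta-scaled adversarial feature losses + an l1
    # sparsity penalty on the learnable strengths (one entry per layer).
    loss = F.cross_entropy(clean_logits, y)
    for h, head, d, s in zip(feats, heads, deltas, eta):
        loss = loss + lam * F.cross_entropy(head(h + s * d), y)
    return loss + gamma * eta.abs().sum()
```

In a training loop, one would call `project_to_simplex(eta)` after each SGD update of Θ and η, mirroring lines 4-5 of Algorithm 1.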
" }, { "heading": "4 EXPERIMENTS", "text": "We conduct extensive experiments on multiple benchmarks to validate the generalization ability of ALFA and L-ALFA, evaluating across different backbone networks for image recognition. Ablation studies and an analysis of the learned distribution of perturbation magnitudes are also provided." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets and Backbones We consider three representative datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009). In our experiments, the original training datasets are randomly split into 90% training and 10% validation. Early stopping is applied to find the top-performing checkpoints on the validation set. Then, the selected checkpoints are evaluated on the test set to report performance. From our observations, the hyperparameters are quite stable from the validation to the test sets. We evaluate large backbone networks (ResNet-18/50/101/152 (He et al., 2016)) on all three datasets, and also test smaller backbones (ResNet-20s/56s) on CIFAR-10 and CIFAR-100. Ablation studies are conducted on CIFAR-10, where the key observations generalize to the other datasets.

Training and Metrics For network training on CIFAR-10 and CIFAR-100, we adopt an SGD optimizer with a momentum of 0.9, a weight decay of 5 × 10−4, and a batch size of 128 for 200 epochs. The learning rate starts from 0.1 and decays to one-tenth at the 50th and 150th epochs. We also perform a linear learning rate warm-up over the first 200 iterations. For ImageNet experiments, following the official setting in the PyTorch repository,¹ we train deep networks for 90 epochs with a batch size of 512, and the learning rate decays at the 30th and 60th epochs. The SGD optimizer is adopted with a momentum of 0.9 and a weight decay of 1 × 10−4. We evaluate the generalization ability of a network with Standard Testing Accuracy (SA), which is the image recognition accuracy on the original clean test dataset.

¹ https://github.com/pytorch/examples/tree/master/imagenet" }, { "heading": "4.2 EVALUATION AND ANALYSIS OF ALFA", "text": "For the ALFA experiments, all hyperparameters are tuned by grid search, including the number of PGD steps, the step size α, and the layers at which to introduce adversarial perturbations. For the generated adversarial feature embeddings, we set the maximum perturbation magnitude to be unbounded,² since there are no explicit constraints on feature perturbations and the effect of such tuning can be absorbed by adjusting the number of PGD steps and the step size α.

² In practice, the magnitude of the crafted feature perturbation stays steadily in a range from 0.97 to 1.10 under the ℓ2 norm. Adversarial perturbations are usually applied to the normalized features from batch normalization.

Table 2 presents the standard testing accuracy of different models on CIFAR-10. Comparing standard training with our proposed ALFA, the main observations are: (i) ALFA obtains a consistent and substantial improvement in standard accuracy, e.g., 1.27% on ResNet-20s, 1.69% on ResNet-56s, and 0.51% on ResNet-50. This suggests that training with the augmented features generated by ALFA effectively enhances the generalization of deep neural networks.

Results on CIFAR-100 and ImageNet are summarized in Table 3. We observe that ALFA consistently boosts the generalization ability of multiple ResNets on both CIFAR-100 and ImageNet, e.g., by 1.14% for ResNet-56s on CIFAR-100 and 1.02% for ResNet-50 on ImageNet. Furthermore, we notice that ALFA favors different numbers of PGD steps to achieve superior performance on different datasets. To fully understand these sensitive yet critical factors, we conduct a systematic and comprehensive ablation study in the next sub-section." }, { "heading": "4.3 ABLATION STUDIES", "text": "Strength and Locations of ALFA To understand the effect of the strength of the injected adversarial perturbations, we train ResNet-18 on CIFAR-10 and examine performance across different step sizes and numbers of PGD steps. Table 4 shows that perturbing with step size α = 1.0/255 yields the largest gain, 0.35% SA. In addition, excessively weak (e.g., α = 0.5/255) or strong (e.g., α = 1.5/255) adversarial feature augmentations may incur performance degradation. For the ablation of PGD steps, we also train ResNet-18 on ImageNet. 
Table 5 demonstrates that ALFA with PGD-5 and PGD-1 works best for CIFAR-10 and ImageNet, respectively, indicating that the strength of the generated perturbations is an essential and sensitive hyperparameter for ALFA.

Then, we analyze the effect of locations (i.e., where to apply ALFA) on two typical backbones: ResNet-56s on CIFAR-10 and ResNet-18 on ImageNet. In each setting, we present a detailed analysis of at which layer, and at how many layers, the feature embeddings should be adversarially augmented to achieve the best performance. Figure 3 presents the layer preference of feature perturbations when applying ALFA to different blocks or combinations of blocks. We notice that introducing ALFA at the last block achieves better standard accuracy, while the performance deteriorates after injecting ALFA into multiple blocks. These results demonstrate that the strength and location of ALFA play a crucial role and need to be selected cautiously, which motivates us to design the learnable framework, L-ALFA.

Table 4: Standard testing accuracy (%) on the CIFAR-10 dataset. We perturb the feature embeddings in the last block of ResNet-18 via PGD-5 and diverse step sizes α. For reference, the SA of the standard trained model is 94.30%. ↑/↓ indicates SA improvement/degradation compared to the corresponding standard-training baseline.

Step size α | 0.5/255 | 1.0/255 | 1.5/255 | 2.0/255 | 4.0/255
ALFA | 93.15 (↓ 1.15) | 94.65 (↑ 0.35) | 94.34 (↑ 0.04) | 94.36 (↑ 0.06) | 93.30 (↓ 1.00)

Table 5: Standard testing accuracy (%) on the CIFAR-10 and ImageNet datasets. For CIFAR-10, we perturb the feature embeddings in the last block of ResNet-18 via PGD-1/3/5/7/10 with step size α = 1.0/255. As a reference, the SA of the standard trained model is 94.30%. For ImageNet, we also perturb the last-block features of ResNet-18 via PGD-1/3/5 with step size α = 0.5/255. The reference SA is 69.38%.

ALFA vs. Other Feature Augmentations One natural baseline is adding random noise to the feature embeddings. For each training iteration, fresh random noise sampled from a Gaussian distribution N(0, (1.0/255)²) is applied to the same feature embeddings. Another recently proposed representative feature augmentation is MoEx³ (Li et al., 2020), which we compare as another baseline. For ALFA, we choose the best hyperparameter configurations for ResNet-56s and ResNet-18 on CIFAR-10, i.e., perturbing the last-block feature embeddings with PGD-5 and step sizes α = 1.5/255 for ResNet-56s and α = 1.0/255 for ResNet-18. As shown in Table 7, ALFA significantly surpasses MoEx and the random-noise-based feature augmentation, demonstrating that the feature perturbations generated by ALFA are non-trivial.

ALFA vs. AdvProp We compare ALFA with AdvProp (Xie et al., 2020) on CIFAR-10 with ResNet-18 and on ImageNet with EfficientNet-B0 (Tan & Le, 2019), as presented in Table 6. Training on a single GTX 1080 Ti GPU for the CIFAR-10 experiments, AdvProp achieves 94.52% accuracy at 123 seconds per epoch; ALFA obtains 94.65% at 28 seconds per epoch, where standard training takes 23 seconds per epoch. 
The experiments on ImageNet (batch size 256) are conducted on 2 Quadro RTX 6000 GPUs with 24G×2 memory in total, and the reported running time is per epoch.

³ We implement MoEx based on its released repository, https://github.com/Boyiliee/MoEx

As shown in Table 6, ALFA obtains a similar performance improvement at a lower computational cost, compared with pixel-level adversarial augmentations (e.g., AdvProp).

Robust Performance of ALFA Although robust testing accuracy (RA) is not the focus of ALFA, we report it for completeness. We train standard, ALFA, and adversarially trained ResNet-18 networks on CIFAR-10. The adversarially trained model uses PGD-10 with step size α = 2/255 and ε = 8/255 for training. Then, PGD-20 with the same α and ε is applied to evaluate the robust performance of the three models. We observe that ALFA-trained models (4.86% RA) yield moderate robustness compared to the standard (0.00% RA) and adversarially trained (50.72% RA) models." }, { "heading": "4.4 EVALUATION AND ANALYSIS OF L-ALFA", "text": "Results on L-ALFA are presented in Table 8 and Figure 4. L-ALFA consistently improves the generalization of trained networks, by 0.32% on CIFAR-10, 1.18% on CIFAR-100, and 0.91% on ImageNet. Although the achieved performance is close to that of ALFA, L-ALFA avoids tedious tuning by automatically adjusting the strength and locations of the adversarial feature augmentations. Another interesting finding is that L-ALFA automatically learns the effective trick of perturbing only the last-block feature embeddings, which is consistent with our observations from ALFA.

Number of Perturbed Layers To study the effect of the dimension of the learnable perturbation magnitude η in L-ALFA, we run an additional experiment with ResNet-18 on CIFAR-10. ResNet-18 has four residual blocks and twenty convolution layers, and perturbing them results in dim(η) equal to 4 and 20, respectively. We also try introducing adversarial feature augmentations at only some of the intermediate layers, such as perturbing the features after each skip connection (i.e., dim(η) = 9). As shown in Table 9, an unduly high dimension of η, i.e., perturbing features in almost all layers, is harmful to model generalization. Therefore, learning feature perturbations by blocks or by skip connections is adequate for L-ALFA.

Ablation of the ℓ1 regularization in L-ALFA Results for ResNet-56s on CIFAR-100 are collected in Table 10. We observe that γ can be chosen roughly from {0.5, 1.0, 2.0} to obtain similarly competitive performance. Excessively large or small values of γ yield smaller performance improvements or even incur degradation." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present ALFA, an advanced adversarial training framework for image recognition. By applying adversarial perturbations to feature embeddings, and by jointly training with both clean and adversarially augmented features, ALFA improves the generalization of diverse neural network backbones across multiple image recognition datasets, including ImageNet. Systematic ablation studies reveal that introducing weak adversarial feature augmentations at the last layers of networks contributes the most, which differs from previous findings. Furthermore, we propose L-ALFA to learn a better augmentation and to avoid the laborious tuning of ALFA. For future work, we plan to extend ALFA to other vision tasks, such as object detection and semantic segmentation." } ]
2,020
null
SP:bba6a0856c8f3bb5a7ef8a768c38b999e6438df9
[ "The paper describes variants of an intelligent tutoring system (ITS) developed using a newer (but previously published) variant of Knowledge Tracing (HOT-DINA) for assessing student proficiency and an RL algorithm (PPO) for making decisions on items and content areas to try next. An empirical simulation calibrated to 8 students is reassessed on the same student simulations and improvements over the original tutoring system are empirically demonstrated. Four variants with differing levels action granularity and knowledge racing are analyzed." ]
We present STEP, a novel Deep Reinforcement Learning solution to the problem of learning instructional sequencing. STEP has three components: 1. Simulate the tutor by specifying what to sequence and the student by fitting a knowledge tracing model to data logged by an intelligent tutoring system. 2. Train instructional sequencing policies by using Proximal Policy Optimization. 3. Evaluate the learned instructional policies by estimating their local and global impact on learning gains. STEP leverages the student model by representing the student’s knowledge state as a probability vector of knowing each skill and using the student’s estimated learning gains as its reward function to evaluate candidate policies. A learned policy represents a mapping from each state to an action that maximizes the reward, i.e. the upward distance to the next state in the multi-dimensional space. We use STEP to discover and evaluate potential improvements to a literacy and numeracy tutor used by hundreds of children in Tanzania.
[ { "affiliations": [], "name": "SEQUENCING POLI" }, { "affiliations": [], "name": "CIES FOR AN" }, { "affiliations": [], "name": "INTELLIGENT TUTORING" } ]
[ { "authors": [ "Y.B. David", "A. Segal", "Y.K. Gal" ], "title": "Sequencing educational content in classrooms using bayesian knowledge tracing", "venue": "Proceedings of the Sixth International COnference on Learning Analytics Knowledge,", "year": 2016 }, { "authors": [ "S. Doroudi", "V. Aleven", "Brunskill" ], "title": "Robust evaluation matrix: Towards a more principled offline exploration of instructional policies", "venue": "Proceedings of the Fourth (2017) ACM Conference on Learning@ Scale,", "year": 2017 }, { "authors": [ "S. Doroudi", "V. Aleven", "E. Brunskill" ], "title": "Where’s the reward", "venue": "Int J Artif Intell Educ,", "year": 2019 }, { "authors": [ "Chung Laung Liu" ], "title": "A study in machine-aided learning", "venue": "PhD thesis, Massachussets Institute of Technology,", "year": 1960 }, { "authors": [ "Z.A. Pardos", "N.T. Heffernan" ], "title": "Kt-idem: Introducing item difficulty to the knowledge tracing model", "venue": "UMAP,", "year": 2011 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "A. Segal", "Y.B. David", "J.J. Williams", "K. Gal", "Y. Shalom" ], "title": "Combining difficulty ranking with multi-armed bandits to sequence educational content", "venue": "arXiv preprint arXiv:1804.05212,", "year": 2018 }, { "authors": [ "S. Shen", "M.S. Ausin", "B. Mostafavi", "M. Chi" ], "title": "Improving learning & reducing time: A constrained action-based reinforcement learning approach", "venue": "Proceedings of the 2018 Conference on User Modeling Adaptation and Personalization,", "year": 2018 }, { "authors": [ "J. Whitehill", "J. Movellan" ], "title": "Approximately optimal teaching of approximately optimal learners", "venue": "IEEE Transactions on Learning Technologies,", "year": 2017 }, { "authors": [ "Yanbo Xu", "Jack Mostow" ], "title": "A unified 5-dimensional framework for student models", "venue": "Proceedings of the EDM2014 Workshop on Approaching Twenty Years of Knowledge Tracing,", "year": 2014 }, { "authors": [ "Michael V. Yudelson", "Kenneth R. Koedinger", "Geoffrey J. Gordon" ], "title": "Individualized bayesian knowledge tracing models", "venue": "International Conference on Artificial Intelligence in Education,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "An Intelligent Tutoring System (ITS) aims at teaching a set of skills to users by individualizing instructions. Giving instruction to users requires many sequential decisions, such as what to teach, what activities to present, what problems to include, and what help to give. Our aim is to take decisions which maximize long-term rewards in the form of learning gains, so Reinforcement Learning (RL) is a natural approach to pursue, and was first proposed by Liu (1960).\nThe goal of an RL agent is to learn a policy π, defined as a mapping from state space S to action space A. Given any state, the RL agent follows a series of actions proposed by the learned policy to maximize the long-term expected reward. In the context of an ITS, we specify the RL agent as follows:\n• State st: We define the state as a combination of the student state and the tutor state. The tutor state determines the set of actions available to the RL agent at a given timestep. We represent the student state as a vector of probabilities where element i is the estimated probability that the student knows skill i.\n• Action at: The action taken by the RL agent corresponds to a tutor decision at a particular grain size.\n• Reward rt(st, at): Defined as the average difference between prior and posterior knowledge states based on the simulated student’s response to the tutor action at to the student simulator.\n• Next state st+1: The knowledge vector of a student after a Bayesian update based on the simulated student’s response to tutor action at in state st is the updated student knowledge state. The updated tutor state is given by the tutor simulator. The updated student knowledge state and tutor state, together gives the next state st+1.\nWe instantiate STEP in the context of RoboTutor, a Finalist in the Global Learning XPRIZE Competition to develop an open source Android tablet tutor to teach basic literacy and numeracy to chil-\ndren without requiring adult intervention. XPRIZE independently field-tested the Swahili version of RoboTutor for 15 months in 28 villages in Tanzania.\nFigure 1 shows an diagrammatic overview of STEP and the rest of the paper is organized as follows. Section 2 discusses the simulation of tutor and student (the environment block). Section 3 elaborates on the training of decision policies (the RL agent block). Section 4 evaluates the learned policies. Section 5 relates this work to prior research. Section 6 concludes." }, { "heading": "2 SIMULATING THE TUTOR AND THE STUDENT", "text": "To apply RL, we need to simulate the tutor’s actions and the student’s responses to them." }, { "heading": "2.1 TUTOR SIMULATOR", "text": "The data for this paper comes from the version of RoboTutor used during the last 3 months of XPRIZE’s 15-month field study. This version rotates through three content areas (literacy, numeracy, and stories), tracking the child’s position in each area’s curricular sequence of successively more advanced activities. It lets the child select among doing the activity at that position, advancing to the next activity, repeating the same activity (from the previous content area), or exiting RoboTutor. After selecting an activity, the child may complete all or part of it before selecting the next activity. RoboTutor has 1710 learning activities, each of which gives assisted practice of one or more skills on a sequence of items, such as letters or words to write, number problems to solve, or sentences to read. Each item requires one or more steps. 
We instantiate STEP in the context of RoboTutor, a Finalist in the Global Learning XPRIZE Competition to develop an open-source Android tablet tutor that teaches basic literacy and numeracy to children without requiring adult intervention. XPRIZE independently field-tested the Swahili version of RoboTutor for 15 months in 28 villages in Tanzania.

Figure 1 shows a diagrammatic overview of STEP, and the rest of the paper is organized as follows. Section 2 discusses the simulation of the tutor and the student (the environment block). Section 3 elaborates on the training of decision policies (the RL agent block). Section 4 evaluates the learned policies. Section 5 relates this work to prior research. Section 6 concludes." }, { "heading": "2 SIMULATING THE TUTOR AND THE STUDENT", "text": "To apply RL, we need to simulate the tutor's actions and the student's responses to them." }, { "heading": "2.1 TUTOR SIMULATOR", "text": "The data for this paper comes from the version of RoboTutor used during the last 3 months of XPRIZE's 15-month field study. This version rotates through three content areas (literacy, numeracy, and stories), tracking the child's position in each area's curricular sequence of successively more advanced activities. It lets the child select among doing the activity at that position, advancing to the next activity, repeating the same activity (from the previous content area), or exiting RoboTutor. After selecting an activity, the child may complete all or part of it before selecting the next activity. RoboTutor has 1710 learning activities, each of which gives assisted practice of one or more skills on a sequence of items, such as letters or words to write, number problems to solve, or sentences to read. Each item requires one or more steps, and each step may take one or more attempts.

The simulated tutor state identifies the current content area and the child's position in it. RoboTutor (actual or simulated) updates the position in the content area based on the percentage of correct attempts to perform the steps in an activity. Specifically, it uses fixed heuristic thresholds (called LOW, MID, and HI) on this percentage to demote BACK to the previous position, stay at the SAME position, promote to the NEXT position, or SKIP to the position thereafter. Figure 2 illustrates this scheme.
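A minimal sketch of this promote-demote rule follows; the comparison directions are our reading of the description above, not the actual RoboTutor source:

```python
def next_position(pct_correct, pos, low, mid, hi):
    """Fixed-threshold promotion heuristic sketched from the text above."""
    if pct_correct < low:
        return max(pos - 1, 0)  # BACK: demote to the previous position
    if pct_correct < mid:
        return pos              # SAME: stay at the current position
    if pct_correct < hi:
        return pos + 1          # NEXT: promote to the next activity
    return pos + 2              # SKIP: jump to the position thereafter
```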
" }, { "heading": "2.2 STUDENT SIMULATOR", "text": "A student simulator should behave like the students who use the tutor. Accordingly, the simulator uses a Bayesian Knowledge Tracing (BKT) student model trained on logged data using HOT-DINA. It has the same Guess, Slip, and Learn parameters as standard BKT, but estimates the Knew parameter based on skill difficulty, skill discrimination, and student proficiency from Item Response Theory. Thus, HOT-DINA extrapolates from the student's knowledge of other skills and from other students' knowledge of this skill, albeit at a high computational cost to fit the model. Xu & Mostow (2014) found HOT-DINA to have higher predictive accuracy than standard BKT.

To limit computation time, we fit the model on logged data from a single village, consisting of 42,010 attempts by 8 children to apply 22 skills. We fit one proficiency parameter for each child and 5 parameters for each skill (Guess, Slip, Learn, Difficulty, and Discrimination), 118 parameters in total. (Fitting 5 separate parameters per activity instead of per skill might achieve higher accuracy but would require fitting 8,558 parameters.) We use MCMC sampling for Bayesian inference with PyStan rather than the OpenBUGS Gibbs sampling package used in the original HOT-DINA work, because PyStan is faster and handles larger datasets. Nevertheless, fitting the 118-parameter HOT-DINA model to 42,010 attempts took approximately 4 days on a supercomputer with 128 GB of memory and 28 cores.

Table 1 shows converged values for a subset of HOT-DINA parameters. For example, the eight θ values refer to the 8 student proficiency parameters of the student model. For simplicity, we show only the first 6 values of b (the skill difficulty parameter) in the table. Once we obtain the model parameters, the student simulator must do two things: given an activity, simulate whether the student gets it right or wrong; and, based on this response, perform knowledge tracing over multiple skills to update the student's knowledge probabilities. To simulate a student's performance on an activity, we first estimate P(getting activity j correct) as in equations (2) and (3) below. We then simulate the student response (right or wrong) by a biased coin flip based on this estimated probability. Given the simulated response, we perform knowledge tracing over multiple skills using the update equations (4)-(6). The next few lines cover the basic notation and update equations for the simulated learning of a student. Note that the variables α, y, and Y are all binary, i.e., they take on a value of either 1 or 0.

θ_n : proficiency of student n
a_k : discrimination of skill k
b_k : difficulty of skill k
q_jk : 1 if activity j exercises skill k, 0 otherwise
P(α^{(t)}_{nk} = 1) : probability that student n knows skill k at time-step t
P(y^{(t)}_{nk} = 1) : probability that student n correctly answers an activity exercising only skill k at time-step t
P(Y^{(t)}_{nj} = 1) : probability that student n gets activity j correct at time-step t

$$P(\alpha^{(0)}_{nk} = 1) = \prod_{k=1}^{K} \left( \frac{1}{1 + \exp(-1.7 a_k (\theta_n - b_k))} \right)^{q_{jk}} \qquad (1)$$

$$P(y^{(t)}_{nk} = 1) = (1 - slip_k) \, P(\alpha^{(t)}_{nk} = 1) + guess_k \, P(\alpha^{(t)}_{nk} = 0) \qquad (2)$$

$$P(Y^{(t)}_{nj} = 1) = \prod_{k=1}^{K} P(y^{(t)}_{nk} = 1)^{q_{jk}} \qquad (3)$$

$$P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj} = 1) = P(\alpha^{(t)}_{nk} = 1) \cdot \left( \frac{1 - slip_k}{P(y^{(t)}_{nk} = 1)} \right)^{q_{jk}} \qquad (4)$$

$$P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj} = 0) = P(\alpha^{(t)}_{nk} = 0) \cdot \left( \frac{guess_k}{P(y^{(t)}_{nk} = 1)} \right)^{q_{jk}} \qquad (5)$$

$$P(\alpha^{(t+1)}_{nk} = 1) = P(\alpha^{(t)}_{nk} = 1 \mid Y^{(t)}_{nj}) + learn_k \cdot P(\alpha^{(t)}_{nk} = 0 \mid Y^{(t)}_{nj}) \qquad (6)$$

" }, { "heading": "3 TRAINING POLICIES WITH PPO", "text": "We discussed the student simulator and the tutor simulator in the last section. In this section, we discuss training a policy with STEP in the context of RoboTutor." }, { "heading": "3.1 THE REWARD FUNCTION", "text": "The RL agent learns a decision policy, that is, a mapping from states to actions, that maximizes the total expected reward of following the policy π_θ. As the reward function for student n, we use the knowledge gain as estimated by the student model, i.e., the posterior minus prior estimates of P(student n knows skill k), averaged over all skills. The posterior and prior refer to the knowledge states after and before applying the Bayesian updates (equations (4)-(6)) for the activity decided by action a_t. The information about prior knowledge is implicitly present in the knowledge state of s_t. To save computation time, we learn a policy for episodes of 100 timesteps using PPO, after which the episode terminates. Though our experiments stick to finite-horizon undiscounted returns with 100 steps, it is trivial to extend this approach to any finite number of steps, or even to infinite-horizon discounted returns with discount factor γ ∈ (0, 1) so that the rewards vanish at large timesteps. The reward function r_t for student n at a given step is the learning gain of the student from attempting an activity, as given in equation (7), where K is the total number of skills (22 for RoboTutor). The returns are just the sum of rewards over T = 100 steps.

$$r_t(s_t, a_t) = \frac{1}{K} \sum_{k=1}^{K} \left[ P(\alpha^{(t+1)}_{nk} = 1) - P(\alpha^{(t)}_{nk} = 1) \right] \qquad (7)$$

According to the student model trained by HOT-DINA on the 8 children's log data, their prior averaged 0.55 and their posterior averaged 0.73, a gain of 0.18 over their full usage, consisting of 42,010 attempts (up to 3 months). Their posterior after their first 100 attempts, averaged across the 8 students, was 0.64, for an average gain per attempt of 0.09/100 = 0.0009. We can train different types of RL agents depending on their state space and range of actions, which depend on how far they depart from RoboTutor's current decision policy.
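For concreteness, here is a minimal NumPy sketch of one simulated attempt, implementing equations (2)-(7) as printed above; it reflects our reading of the described simulator, and the function signature is a hypothetical helper rather than the authors' released code:

```python
import numpy as np

def simulate_attempt(p_know, guess, slip, learn, q_j, rng=None):
    """One simulated attempt on activity j under the printed update equations.
    p_know[k] = P(alpha_nk = 1); q_j is activity j's binary Q-matrix row;
    guess/slip/learn are per-skill parameter vectors."""
    rng = rng or np.random.default_rng()
    p_right = (1 - slip) * p_know + guess * (1 - p_know)   # Eq. (2)
    p_correct = np.prod(p_right ** q_j)                    # Eq. (3)
    Y = rng.random() < p_correct                           # biased coin flip

    if Y:
        cond = p_know * ((1 - slip) / p_right) ** q_j      # Eq. (4)
    else:
        # Eq. (5) as printed; we read the q_jk exponent as restricting the
        # update to exercised skills (others keep their prior).
        cond = np.where(q_j == 1, (1 - p_know) * (guess / p_right), p_know)
    p_next = cond + learn * (1 - cond)                     # Eq. (6), as printed

    reward = float(np.mean(p_next - p_know))               # Eq. (7)
    return p_next, reward, Y
```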
" }, { "heading": "3.2 STATES, ACTIONS AND RL AGENT TYPES", "text": "We model student n's state as the vector of estimated probabilities of student n knowing each skill, $[P(\alpha^{(t)}_{n1} = 1), \ldots, P(\alpha^{(t)}_{nK} = 1)]^T$. Depending on the RL agent type, the tutor state may include the current (active) content area (literacy, numeracy, or stories) and the student's current position in the curricular sequence for that area; just the content area; or neither.

The alternative ranges of actions for each agent type are:

• Type 1: 3 threshold actions (LOW, MID, HI), each action ∈ (0.0, 1.0).
• Type 2: a promote-demote decision choosing one of BACK, SAME, NEXT, and SKIP; 1 action from a Discrete(4) action space.
• Type 3: an activity from the current content area; 1 action from a Discrete(x) action space, where x is the number of activities in the current content area.
• Type 4: any activity from any content area; 1 action from a Discrete(1710) action space.

Table 2 summarizes the 4 types of RL agents we consider, whose tutor simulators operate in the following ways:

• Type 1 preserves RoboTutor's current choices but adjusts the thresholds that affect promote-demote decisions indirectly.
• Type 2 eliminates the need for thresholds by choosing promote-demote decisions directly from the state rather than from thresholds.
• Type 3 can jump to any activity within the current content area.
• Type 4 can jump to any activity in any content area; the area rotation constraint is removed." }, { "heading": "4 EVALUATING LEARNED POLICIES", "text": "We evaluate the learned policies along two metrics, which assess their local and global impact. The local impact is the average change in reward from replacing a single historical choice of activity with the activity chosen by the policy. The global impact is the overall change in reward per attempt from following the learned policy from the first attempt. Table 3 evaluates the learned policies, per agent type, by their impact on learning gains (expected reward) over the first 100 attempts, averaged across the 8 children, compared to the historical baseline of 0.0009.

From the table, we see an increasing trend in both the local and global impacts, in line with our belief that the less constrained the RL agent is, the greater the impact. Interestingly, the local impacts exceed the global impacts in all 4 cases. This is because, under the local-impact evaluation, successive attempts have independent rewards: the prior P(Know) at step t does not depend on the policy-proposed action at step t − 1. Thus, if some less-known activity allows a large one-time gain at step t − 1, the local-impact evaluation allows it to occur multiple times in subsequent steps, beating the average global gain per step.

Figure 3 compares the current RoboTutor policy (red) against the agent's learned policy on the simulated students for agent types 1 to 4. Each agent type has 8 subplots, one for each simulated student built from the 8 students' data logged by RoboTutor. The y-axis shows the student's average knowledge across skills, and the x-axis shows the number of attempts. Since we restricted the time horizon to 100 steps, the x-values have an upper limit of 100 attempts.
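A sketch of these two evaluations follows, assuming the hypothetical environment interface from Section 1 plus an added `reset()` and a `replay_step(state, action)` helper that re-evaluates a single action from a logged state; all of these names are assumptions for illustration:

```python
def global_impact(env, policy, T=100):
    """Average per-attempt knowledge gain when following the learned policy
    from the first attempt, as defined above."""
    state, total = env.reset(), 0.0
    for _ in range(T):
        state, reward = env.step(policy(state))
        total += reward
    return total / T

def local_impact(replay_step, policy, history):
    """Average change in one-step reward when a single historical action is
    replaced by the policy's action, holding the rest of the trace fixed.
    `history` is a list of logged (state, action, reward) triples."""
    diffs = [replay_step(s, policy(s)) - r for (s, a, r) in history]
    return sum(diffs) / len(diffs)
```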
" }, { "heading": "5 RELATION TO PRIOR WORK", "text": "Various researchers have worked on reinforcement learning for instructional sequencing (Doroudi et al., 2019). Table 4 summarizes work that used BKT for student modeling or deep RL for optimization.

Prior work by Yudelson et al. (2013) and Pardos & Heffernan (2011) used BKT methods that fit a parameter for the probability of already knowing a skill prior to instruction. In contrast, we use HOT-DINA, a higher-order BKT-IRT hybrid that estimates this probability based on skill difficulty and student proficiency, achieving higher accuracy than standard BKT.

Recent work by Shen et al. (2018) on instructional sequencing used deep reinforcement learning, specifically Deep Q-Networks. STEP uses a more powerful deep RL method, namely Proximal Policy Optimization (PPO) (Schulman et al., 2017).

Some prior work reviewed by Doroudi et al. (2019) specifies the reward as 1 when the probability of knowing a skill reaches 0.95 and 0 otherwise. In contrast, we define the reward as the estimated learning gain, so as to differentiate between actions that yield different gains in student knowledge." }, { "heading": "6 CONCLUSION", "text": "This paper contributes a novel framework for optimizing instructional sequencing in an intelligent tutoring system by combining knowledge tracing with deep reinforcement learning and evaluating the learned decision policy on historical data. We fit a simulated student to authentic log data from real children using RoboTutor in Tanzania, in contrast to earlier work that used synthetic data. We trained the student model using HOT-DINA because it is more accurate than other knowledge tracing methods. We used Proximal Policy Optimization because it learns better than reinforcement learning methods previously applied to ITSs. We use the knowledge probabilities estimated by the student model as the state, and we directly optimize learning gains, which we use as the reward. We evaluated the learned policies' local and global impact on expected knowledge gains relative to a historical baseline and explained the somewhat surprising results we observed.

The work has several limitations. The evaluation is based on data from 8 children from one village, to save computational expense, and it extrapolates from historical data. Future work should test the actual impact of the learned policies on children's learning. We also do not predict children backing out of activities, and we removed the 10-item-per-activity constraint in our experiments. Future work should include predicting student disengagement.

We use a 118-parameter HOT-DINA model to save computational expense, although the 8,558-parameter HOT-DINA model might have been more accurate since it has parameters per activity instead of per skill. Developing other student models that are more accurate than HOT-DINA might be fruitful. We focused on learning decision policies for choosing activities. Future work could explore optimizing tutor decisions at other levels of granularity, such as selecting which items to practice and what assistance to provide." } ]
2,020
null
SP:f4fc140928d2b4901d76664e62569545c70d8a5e
[ "This paper analyses the convergence of episodic memory-based continual learning methods by looking at it as a nonconvex optimisation problem. They analyse the convergence rates for the case where all memory from past tasks is stored, and then consider the case where there is only a subset of past data, leading to overfitting on the episodic memory. They then introduce a method that scales the learning rates of the their update method, with the goal of tightening the bound obtained in the convergence analysis. Finally, experiments are shown on different benchmarks, and the proposed method is compared to some competing baselines." ]
Continual learning aims to prevent catastrophic forgetting while learning a new task without accessing the data of previously learned tasks. The memory in such learning scenarios stores a small subset of the data from previous tasks and is used in various ways, such as quadratic programming and sample selection. Current memory-based continual learning algorithms are formulated as a constrained optimization problem, rephrasing the constraints as a gradient-based approach. However, previous works have not provided a theoretical proof of convergence on previously learned tasks. In this paper, we propose a theoretical convergence analysis of continual learning based on the stochastic gradient descent method. Our method, nonconvex continual learning (NCCL), can achieve the same convergence rate when the proposed catastrophic forgetting term is suppressed at each iteration. We also show that memory-based approaches have an inherent problem of overfitting to memory, which degrades the performance on previously learned tasks, much like catastrophic forgetting. We empirically demonstrate that NCCL successfully performs continual learning with episodic memory by scaling learning rates adaptively to mini-batches on several image classification tasks.
[]
[ { "authors": [ "Rahaf Aljundi", "Klaas Kelchtermans", "Tinne Tuytelaars" ], "title": "Task-free continual learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with A-GEM", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marcus Rohrbach", "Mohamed Elhoseiny", "Thalaiyasingam Ajanthan", "Puneet K Dokania", "Philip HS Torr", "Marc’Aurelio Ranzato" ], "title": "On tiny episodic memories in continual learning", "venue": "arXiv preprint arXiv:1902.10486,", "year": 2019 }, { "authors": [ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Uncertainty-guided continual learning with bayesian neural networks", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Robert M. French", "Nick Chater" ], "title": "Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting", "venue": "Neural Computation,", "year": 2002 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Accelerated gradient methods for nonconvex nonlinear and stochastic programming", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Jeremias Knoblauch", "Hisham Husain", "Tom Diethe" ], "title": "Optimal continual learning has perfect memory and is np-hard", "venue": "arXiv preprint arXiv:2006.05188,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kibok Lee", "Kimin Lee", "Jinwoo Shin", "Honglak Lee" ], "title": "Overcoming catastrophic forgetting with unlabeled data in the wild", "venue": "In Proceedings of the IEEE International Conference on 
Computer Vision,", "year": 2019 }, { "authors": [ "Soochan Lee", "Junsoo Ha", "Dongsu Zhang", "Gunhee Kim" ], "title": "A neural dirichlet process mixture model for task-free continual learning", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Lihua Lei", "Cheng Ju", "Jianbo Chen", "Michael I Jordan" ], "title": "Non-convex finite-sum optimization via scsg methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. Turner" ], "title": "Variational continual learning", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sashank J Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabás Póczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Mark B. Ring" ], "title": "Continual learning in reinforcement environments", "venue": "PhD thesis, University of Texas at Austin, TX,", "year": 1995 }, { "authors": [ "Sebastian Thrun" ], "title": "A lifelong learning perspective for mobile robot control", "venue": "In Intelligent Robots and Systems, Selections of the International Conference on Intelligent Robots and Systems", "year": 1994 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "Proceedings of machine learning research,", "year": 2017 }, { "authors": [ "Dongruo Zhou", "Quanquan Gu" ], "title": "Lower bounds for smooth nonconvex finite-sum optimization", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning new tasks without forgetting previously learned tasks is a key aspect of artificial intelligence to be as versatile as humans. Unlike the conventional deep learning that observes tasks from an i.i.d. distribution, continual learning train sequentially a model on a non-stationary stream of data (Ring, 1995; Thrun, 1994). The continual learning AI systems struggle with catastrophic forgetting when the data acess of previously learned tasks is restricted (French & Chater, 2002).\nTo overcome catastrophic forgetting, continual learning algorithms introduce a memory to store and replay the previously learned examples (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019b; Chaudhry et al., 2019a), penalize neural networks with regularization methods (Kirkpatrick et al., 2017; Zenke et al., 2017), use Bayesian approaches (Nguyen et al., 2018; Ebrahimi et al., 2020), and other novel methods (Yoon et al., 2018; Lee et al., 2019). Although Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) first formulated the continual learning as a constrained optimization problem, the theoretical convergence analysis of the performance of previously learned tasks, which implies a measure of catastrophic forgetting, has not been investigated yet.\nContinual learning with episodic memory utilizes a small subset of the data for previous tasks to keep the model staying in a feasible region corresponding to moderate suboptimal region. GEM-based approaches use the rephrased constraints, which are inequalities based on the inner product of loss gradient vectors for previous tasks and a current task. This intuitive reformulation of constrained optimization does not provide theoretical guarantee to prevent catastrophic forgetting. In addition, the memory-based approaches have the critical limitation of overfitting to memory. Choosing the perfect memory for continual learning is an NP-hard problem (Knoblauch et al., 2020), then the inductive bias by episodic memory is inevitable. This problem also degrades the performance on previously learned tasks like catastrophic forgetting but has not been discussed quantitatively to analyze backward transfer (BWT).\nIn this paper, we address the continual learning with episodic memory as a smooth nonconvex finitesum optimization problem. This generic form is well studied to demonstrate the convergence and complexity of stochastic gradient methods for the nonconvex setting (Zhou & Gu, 2019; Lei et al.,\n2017; Reddi et al., 2016; Zaheer et al., 2018). Unlike the convex case, the convergence is generally measured by the expectation of the squared norm of the gradient E‖∇f(x)‖2. The theoretical complexity is derived from the -accurate solution, which is also known as a stationary point with E‖∇f(x)‖2 ≤ . We formulate the proposed continual learning algorithm as a Stochastic gradient descent (SGD) based method that updates both previously learned tasks from episodic memory and the current task simultaneously. By leveraging the update method, we can introduce a theoretical analysis of continual learning problems.\nWe highlight our main contributions as follows.\n• We develop convergence analysis for continual learning with episodic memory • We show the degradation of backward transfer theoretically and experimentally as prob-\nlems of catastrophic forgetting and overfitting to memory. • We propose a nonconvex continual learning algorithm that scales learning rates based on\nsampled mini-batch." 
}, { "heading": "1.1 RELATED WORK", "text": "The literature in continual learning can be divided into episodic learning and task-free learning. Episodic learning based methods assume that a training model is able to access clear task boundaries and stores observed examples in the task-wise episodic memory (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a). On the other hand, an AI system experiences arbitrarily shifting data streams, which we are not able to access task boundaries in the real world. Task-free continual learning studies the general scenario without the task-boundary assumption. Aljundi et al. (2019a) introduces Memory-aware Synapses (MAS) and applies a learning protocol without waiting until a task is finished. Furthermore, the following work (Aljundi et al., 2019b) adopt the memory system of GEM selecting observed examples to store for preventing catastrophic forgetting.\nSmooth nonconvex finite-sum optimization problem has been widely employed to derive the theoretical complexity of computation for stochastic gradient methods (Ghadimi & Lan, 2013; 2016; Lei et al., 2017; Zaheer et al., 2018; Reddi et al., 2016). Unlike the convex optimization, the gradient based algorithms are not expected to converge to the global minimum but are evaluated by measuring the convergence rate to the stationary points in the nonconvex case. The complexity to reach a stationary point is a key aspect of building a new stochastic gradient method for nonconvex optimization. In constrast with general optimization, memory-based continual learning methods have a limited data pool for previously learned tasks, which causes an overfitting problem to memory. (Knoblauch et al., 2020) found that optimal continual learning algorithms and building a perfect memory is equivalent. Furthermore, the authors proved that these two problems are NP-hard. The theoretical result shows that overfitting to memory is inevitable." }, { "heading": "2 PRELIMINARIES", "text": "We consider a continual learning problem with episodic memory where a learner can access the boundary between the previous task and the current task. The continuum of data in (Lopez-Paz & Ranzato, 2017) is adopted as our task description of continual learning. First, we formulate our goal as the smooth nonconvex finite-sum optimization problems with two objectives,\nmin x∈Rd\nF (x) = f(x) + g(x) = 1\nnf nf∑ i=1 fi(x) + 1 ng ng∑ j=1 gj(x) (1)\nwhere x ∈ Rd is the model parameter, each objective component fi(x), gj(x) is differentiable and nonconvex, and nf , ng are the numbers of components. We define two different components of the finite-sum optimization as objectives from a sample i of previously learned tasks fi(x) and a sample j of the current task gj(x).\nUnlike the general stochastic optimization problem, we assume that the initial point x0 in continual learning is an -accurate solution of f(x) with E‖∇f(x)‖2 ≤ for some 1. By the property of nonconvex optimization, we know that there might exist multiple local optimal points that satisfy moderate performance on the previously learned task (Garipov et al., 2018). 
The continual learning algorithm with an episodic memory of size m cannot access the whole dataset of the previously learned tasks with n_f samples; it can only use the limited samples in the memory while training on the current task. This limited access partially prevents catastrophic forgetting; however, the fixed samples from the memory cause a biased gradient and an overfitting problem. In Section 3, we provide the convergence analysis of the previously learned tasks f(x), which are vulnerable to catastrophic forgetting.
We denote by f_i(x) the loss of sample i from the previously learned tasks with model parameter x, and by ∇f_i(x) its gradient. We use I_t, J_t for the mini-batches of samples at iteration t and write b_t^f, b_t^g for the mini-batch sizes |I_t|, |J_t| throughout the paper. We also note that g_j from the current task satisfies the assumptions above and below.
To formulate the convergence over iterations, we introduce the Incremental First-order Oracle (IFO) framework (Ghadimi & Lan, 2013), in which one unit of cost is charged for sampling the pair (∇f_i(x), f_i(x)). For example, a stochastic gradient descent algorithm incurs a cost equal to the batch size b_t at each step, so the total cost is the sum of batch sizes Σ_{t=1}^{T} b_t. Let T(ε) be the minimum number of iterations needed to guarantee an ε-accurate solution. Then the average IFO complexity is bounded by Σ_{t=1}^{T(ε)} b_t.
To analyze the convergence and compute the IFO complexity, we define the loss gap between two local optimal points, ∆_f, as
∆_f = f(x^0) − inf_{0 ≤ t ≤ T} f(x^t),   (2)
which might be much smaller than the corresponding loss gap for SGD. Suppose that the losses at all optimal points are equal, i.e., f(x*) = f(x^0); then ∆_f ≤ 0. This implies that ∆_f is not a cause of moving away from a stationary point of f, as we explain in Section 3.
We also define σ_f, σ_g for f, g, respectively, as the upper bounds on the variance of the stochastic gradients of a given mini-batch. For brevity, we write only σ_f:
σ_f = sup_x (1/b_f) Σ_{i=1}^{b_f} ‖∇f_i(x) − ∇f(x)‖².   (3)
Throughout the paper, we assume L-smoothness.
Assumption 1 f_i is L-smooth: there exists a constant L > 0 such that for any x, y ∈ R^d,
‖∇f_i(x) − ∇f_i(y)‖ ≤ L ‖x − y‖,   (4)
where ‖·‖ denotes the Euclidean norm. The following inequality then holds directly:
−(L/2) ‖x − y‖² ≤ f_i(x) − f_i(y) − ⟨∇f_i(y), x − y⟩ ≤ (L/2) ‖x − y‖².   (5)
In this paper, we consider the framework of continual learning with episodic memory. Following the assumption of GEM, we assign each task the same memory budget m, filled with i.i.d. samples from within its episode. In the learning phase of task k ∈ {1, 2, ..., K}, we sample a batch of size n_f from the memories of all previous tasks, of total size [m · (k − 1)].
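As a small illustration of this memory scheme, the sketch below implements a task-wise episodic memory with budget m per task and uniform sampling from the joint pool of size m · (k − 1); the class and variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

class EpisodicMemory:
    """Task-wise episodic memory: budget m i.i.d. samples per finished episode."""
    def __init__(self, m):
        self.m = m
        self.slots = []                        # one array of stored examples per past task
    def add_task(self, task_data):
        keep = rng.choice(len(task_data), size=min(self.m, len(task_data)), replace=False)
        self.slots.append(task_data[keep])
    def sample(self, batch_size):
        pool = np.concatenate(self.slots)      # all memories, total size m * (k - 1)
        idx = rng.choice(len(pool), size=batch_size, replace=False)
        return pool[idx]

mem = EpisodicMemory(m=50)
for k in range(3):                             # three finished episodes
    mem.add_task(rng.normal(loc=k, size=(1000, 4)))
I_t = mem.sample(batch_size=32)                # mini-batch used to estimate grad f
print(I_t.shape)                               # (32, 4)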
" }, { "heading": "3 NONCONVEX CONTINUAL LEARNING", "text": "In this section, we present the convergence analysis of continual learning in the nonconvex setting. The theoretical result shows why catastrophic forgetting occurs from the viewpoint of nonconvex optimization. Based on this analysis, we propose the Non-Convex Continual Learning (NCCL) algorithm, in which the learning rates for the previously learned tasks and for the current task are scaled by the value of the inner product of their gradients with respect to the parameter, as described in Section 3.3." }, { "heading": "3.1 ONE EPISODE ANALYSIS", "text": "The key element in preventing catastrophic forgetting is to apply a gradient compensation to the training step of the current task. It can be viewed as an additive term applied to the gradient of the current task; GEM (Lopez-Paz & Ranzato, 2017) realizes it via quadratic programming, while EWC (Kirkpatrick et al., 2017) introduces an auxiliary loss function. First, we present the proposed gradient compensation, which uses samples from the episodic memory, for a single new-task episode. We define the gradient update
x^{t+1} = x^t − α_{H_t} ∇f_{I_t}(x^t) − β_{H_t} ∇g_{J_t}(x^t),   (6)
where α_{H_t}, β_{H_t} are learning rates scaled according to the sampled mini-batches H_t = I_t ∪ J_t, and ∇f_{I_t}(x^t), ∇g_{J_t}(x^t) are estimates of the gradients ∇f(x^t), ∇g(x^t), respectively. Equation 6 states that the parameter is updated on the current task g with a gradient compensation α_{H_t} ∇f_{I_t}(x^t) for the previously learned tasks f. Our goal is to explain the effect of the update term β_{H_t} ∇g_{J_t}(x^t) on the convergence to stationary points of f(x), and to study the expectation of each term over I_t. For iteration t ∈ [1, T] and a constant L, we define the catastrophic forgetting term C_t as the expectation involving ∇g_{J_t}(x^t):
C_t = E[ (β_{H_t}² L / 2) ‖∇g_{J_t}(x^t)‖² − β_{H_t} ⟨∇f(x^t), ∇g_{J_t}(x^t)⟩ ],   (7)
which we derive in Appendix A. We temporarily make the following assumption to establish the convergence analysis of continual learning.
Assumption 2 Suppose that the episodic memory M contains the entire set of data points of the previously learned tasks [k − 1] in the k-th episode, and that the mini-batch I_t ⊂ M is replayed. Then ∇f_{I_t}(x^t) is an unbiased estimate, i.e., E[e^t] = 0 for e^t = ∇f_{I_t}(x^t) − ∇f(x^t).
In the next section, we drop Assumption 2 and investigate the bias of the episodic memory M, which causes overfitting to the memory. Our first main result is the following theorem, which characterizes the stepwise convergence behavior of our algorithm.
Theorem 1 Suppose that L α_{H_t}² − α_{H_t} ≤ γ for some γ > 0 and that α_{H_t} < 2/L. Under Assumptions 1 and 2, we have
E‖∇f(x^t)‖² ≤ (1 / (1 − (L/2) α_{H_t})) ( (1/α_{H_t}) ( E[f(x^t) − f(x^{t+1})] + C_t ) + (α_{H_t} L / (2 b_f)) σ_f² ).   (8)
We present the proof in Appendix A. Note that, unlike for plain SGD, the catastrophic forgetting term C_t appears, and this term increases the IFO complexity. Fortunately, we can tighten the upper bound of Equation 8 by minimizing C_t. Telescoping over a single episode of the current task, we obtain the following theorem.
Theorem 2 Let α_{H_t} = α = c/√T for some c > 0 and all t ∈ [T], and let 1 − (L/2) α = 1/A > 0 for some A. Under the conditions of Theorem 1, we have
min_t E‖∇f(x^t)‖² ≤ (A/√T) ( (1/c) ( ∆_f + Σ_{t=0}^{T−1} C_t ) + (L c / (2 b_f)) σ_f² ).   (9)
This theorem explains the theoretical background of catastrophic forgetting. The cumulative sum of the forgetting terms, Σ_t C_t, can increase drastically over iterations, which implies that the iterate can diverge from the stationary point x^0. An immediate consequence of Equation 9 is that the amount of catastrophic forgetting can be treated as an optimization-level quantity. Without the additive catastrophic forgetting term, Theorem 2 reduces to the standard result for SGD with a fixed learning rate (Ghadimi & Lan, 2013).
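To make the forgetting term tangible, the sketch below evaluates the quantity inside the expectation in Equation 7 for two hand-picked current-task gradients (all vectors and constants are illustrative): a positively aligned gradient makes the term negative, while an opposing one makes it positive.

import numpy as np

def forgetting_term(grad_f, grad_g, beta, L):
    # The quantity inside the expectation in Equation (7).
    return 0.5 * beta ** 2 * L * (grad_g @ grad_g) - beta * (grad_f @ grad_g)

L, beta = 1.0, 0.1
g_f = np.array([1.0, 0.0])                                    # previous-task gradient
print(forgetting_term(g_f, np.array([0.5, 0.5]), beta, L))    # -0.0475: the step also helps f
print(forgetting_term(g_f, np.array([-0.5, 0.5]), beta, L))   # +0.0525: the step hurts f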
Similar to SGD, the upper bound of Equation 9 becomes O((A/√T)(∆_f + Σ_t C_t)) when we assume that (L c / (2 b_f)) σ_f² = O(1).
Conversely, we can analyze the convergence of g(x) by exchanging the roles of f and g in Theorem 2. At the very beginning of the iterations, ∆_g dominates Equation 9, and the corresponding forgetting term C_{t,g}, which involves ∇f_{I_t}(x^t), is relatively small because x^t lies in the neighborhood of the stationary point. Under Assumption 2, with samples from the previously learned tasks constantly provided, the gradient norm ‖∇f_{I_t}(x^t)‖ is bounded; therefore, g(x) can reach a stationary point at the same rate as SGD. In the continual learning setting, however, we cannot access the full dataset of the previously learned tasks, and an extra term appears that interrupts the convergence of g(x); this is the overfitting problem. For now, we ignore this extra term and conjecture that ‖∇g_{J_t}(x)‖ is at least bounded. Then we have the following corollary.
Corollary 1 Let the expected stationarity of g(x) be O(δ/√T) for a constant δ > 0, and let β > 0 be an upper bound on the learning rate for g(x). The cumulative sum of the catastrophic forgetting terms, C, is O(β²δ√T). In the worst case, nonconvex continual learning with the update of Equation 6 does not converge as the algorithm iterates: min_t E‖∇f(x^t)‖² is O(β²δ) when 1 ≪ β²δ√T. When β²δ ≤ 1/√T, we have
min_t E‖∇f(x^t)‖² = O(1/√T).   (10)
The IFO complexity for achieving an ε-accurate solution of f(x) is then O(1/ε²).
We emphasize that catastrophic forgetting is inevitable in the worst-case scenario: the stationarity measure of f(x) does not decrease, and the convergence on f(x) cannot be recovered no matter how long we train. Building a tight bound on C is the key to preventing catastrophic forgetting. The naive option is to scale down the learning rate β so that β²δ ≤ 1/√T, which yields a decreasing C = O(1/√T); however, this slows down the convergence of the current task g(x) and is not an appropriate remedy. The other option is to minimize C_t itself rather than tightening the loose upper bound O(β²δ√T). We discuss how to minimize this term by scaling the two learning rates in Section 3.3. The constrained optimization formulation of GEM provides a useful rephrased constraint, but it can neither explain nor bound catastrophic forgetting in the nonconvex setting. Our convergence analysis is the first quantitative result on catastrophic forgetting from the perspective of nonconvex optimization." }, { "heading": "3.2 OVERFITTING TO EPISODIC MEMORY", "text": "In Section 3.1, we presented the theoretical convergence analysis of continual learning as a smooth nonconvex finite-sum optimization problem. Practical continual learning tasks restrict full access to the entire set of data points of the previously learned tasks, in contrast to Assumption 2. An episodic memory M of limited size incurs a bias in the estimate of ∇f(x^t). Suppose that we sample a mini-batch of previously learned tasks from the episodic memory M. Then we can write this bias E[e_M] as
E[e_M] = E[ ∇f_{I_t}(x^t) − ∇f(x^t) ] = ∇f_M(x^t) − ∇f(x^t).   (11)
This equation shows that the bias depends on the choice of M. During optimization, the bias drags the convergence of f(x) toward f_M(x); this is the overfitting to the memory M. Knoblauch et al. (2020) prove that selecting a perfect memory is hard, so we can conclude that E[e_M] ≠ 0.
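The bias in Equation 11 is easy to observe numerically. The following sketch, reusing the same illustrative quadratic per-sample losses as before, compares the gradient computed on a fixed memory M with the full-data gradient.

import numpy as np

rng = np.random.default_rng(2)
d, n_f, m = 4, 5000, 50

A = rng.normal(size=(n_f, d))                  # full data of the previous tasks
M = A[rng.choice(n_f, size=m, replace=False)]  # a fixed episodic memory of size m

def grad(data, x):                             # gradient of the mean quadratic loss
    return np.mean(x - data, axis=0)

x = rng.normal(size=d)
e_M = grad(M, x) - grad(A, x)                  # memory bias, Equation (11)
print(np.linalg.norm(e_M))                     # nonzero for any fixed M: E[e_M] != 0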
Now we extract the overfitting bias on M from the term ignored in Equation 21 of Appendix A and from the catastrophic forgetting term in Equation 7. The bias-related term B_M^t is added to the upper bound of Equation 9 and turns the catastrophic forgetting term into its practical counterpart Ĉ_t:
B_M^t = γ ⟨∇f(x^t), ∇f_M(x^t) − ∇f(x^t)⟩ + β_{H_t} ⟨∇f_M(x^t) − ∇f(x^t), ∇g_{J_t}(x^t)⟩,   (12)
Ĉ_t = E[ (β_{H_t}² L / 2) ‖∇g_{J_t}(x^t)‖² − β_{H_t} ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ].   (13)
Note that the upper bound of Ĉ_t is the same as that of C_t, even after this modification to the limited-memory scenario. The cumulative sum of B_M^t over iterations measures the disturbance caused by overfitting to memory. This inherent defect of memory-based continual learning frameworks can be viewed as a generalization-gap phenomenon (Keskar et al., 2016), and a small mini-batch size can alleviate it. In Section 4, we demonstrate the effect of different mini-batch sizes on alleviating the overfitting problem on the memory M." }, { "heading": "3.3 SCALING LEARNING RATES", "text": "The convergence analysis suggests a simple continual learning framework that only scales the two learning rates in the gradient update of Equation 6. As proved above, we should tighten the upper bound on Ĉ_t to prevent catastrophic forgetting. We propose an adaptive scaling method for the learning rates that minimizes or reduces Ĉ_t in both cases, ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0 and ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ > 0. Note that Equation 13 is a quadratic polynomial in β_{H_t}, with β_{H_t} > 0. First, when ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ > 0, differentiating with respect to β_{H_t} readily gives the minimum Ĉ_t* and the optimal learning rate β_{H_t}*:
β_{H_t}* = ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ / (L ‖∇g_{J_t}(x^t)‖²),   Ĉ_t* = − ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩² / (2 L ‖∇g_{J_t}(x^t)‖²).   (14)
A direct consequence, Ĉ_t* < 0, implies that the optimally scaled forgetting term surprisingly helps f(x) by decreasing the upper bound on its stationarity. For ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0, however, β_{H_t} would have to be negative to achieve the global minimum of Ĉ_t, which violates our assumption. Instead, we propose a surrogate of ∇g_{J_t}(x^t),
∇g̃_{J_t}(x^t) = ∇g_{J_t}(x^t) − ⟨∇f_{I_t}(x^t)/‖∇f_{I_t}(x^t)‖, ∇g_{J_t}(x^t)⟩ · ∇f_{I_t}(x^t)/‖∇f_{I_t}(x^t)‖.   (15)
The surrogate uses the gradient ∇f_{I_t}(x^t) to cancel from ∇g_{J_t}(x^t) the component opposing ∇f_{I_t}(x^t). We can then drastically reduce the catastrophic forgetting term by boosting the learning rate α_{H_t}, without modifying ∇g_{J_t}(x^t) directly. The remaining non-negative part of Ĉ_t is caused by the magnitude of ∇g_{J_t}(x^t) itself; this part cannot be avoided, since every continual learning framework must still learn the current task.
We summarize our results as follows:
α_{H_t} = α (1 − ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ / ‖∇f_{I_t}(x^t)‖²) if ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0, and α_{H_t} = α otherwise;   (16)
β_{H_t} = α if ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0, and β_{H_t} = ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ / (L ‖∇g_{J_t}(x^t)‖²) otherwise.   (17)
We derive the details of the results in this section in Appendix B.
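The scaling rules of Equations 16 and 17, combined with the update of Equation 6, reduce to a few lines; below is a minimal NumPy sketch (the function names are ours). With β_{H_t} = α, boosting α_{H_t} by the factor in Equation 16 is algebraically equivalent to applying the surrogate of Equation 15.

import numpy as np

def nccl_rates(grad_f, grad_g, alpha, L):
    # Scaled learning rates of Equations (16)-(17).
    inner = grad_f @ grad_g
    if inner <= 0.0:
        # Negative alignment: boost alpha so that the extra alpha * grad_f step
        # cancels the opposing component of grad_g (surrogate of Equation 15).
        return alpha * (1.0 - inner / (grad_f @ grad_f)), alpha
    # Positive alignment: beta* of Equation (14) minimizes the forgetting term.
    return alpha, inner / (L * (grad_g @ grad_g))

def nccl_update(x, grad_f, grad_g, alpha, L):
    a, b = nccl_rates(grad_f, grad_g, alpha, L)
    return x - a * grad_f - b * grad_g         # Equation (6)

x = np.zeros(2)
gf, gg = np.array([1.0, 0.0]), np.array([-0.6, 0.8])
print(nccl_update(x, gf, gg, alpha=0.1, L=1.0))   # [-0.1, -0.08]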
The existing GEM-based algorithms have focused only on canceling the negative direction of ∇f_M(x^t) from ∇g_{J_t}(x^t), at a high computational cost, and only for the case ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0. The proposed method, summarized in Algorithm 1, has the advantage of both leveraging Ĉ_t to achieve better convergence when ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ > 0 and reducing the effect of catastrophic forgetting through the term (β_{H_t}² L/2) ‖∇g_{J_t}(x^t)‖² when ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ≤ 0. Figure 1 illustrates intuitively how scaling the learning rates achieves convergence to a mutual stationary point x*_{P∪C}, matching the theoretical complexity we proved in Corollary 1.
Algorithm 1 Nonconvex Continual Learning (NCCL)
Input: K task data streams {D_1, ..., D_K}, initial model x^0, memories {M_k}, each of size m
for k = 1 to K do
  for t = 0 to T − 1 do
    Uniformly sample a mini-batch I_t ⊂ [m · (k − 1)] with |I_t| = b_f
    Uniformly sample a mini-batch J_t ⊂ D_k with |J_t| = b_g and store J_t into M_k
    Compute the learning rates α_{H_t}, β_{H_t} from ∇f_{I_t}(x^t), ∇g_{J_t}(x^t) via Equations 16 and 17
    x^{t+1} ← x^t − α_{H_t} ∇f_{I_t}(x^t) − β_{H_t} ∇g_{J_t}(x^t)
  end for
  x^0 ← x^{T−1}
end for" }, { "heading": "4 EXPERIMENTS", "text": "Based on our theoretical analysis of continual learning, we evaluate the proposed NCCL algorithm on episodic continual learning with three benchmark datasets. We run our experiments on a GPU server with an Intel i9-9900K CPU, 64 GB of RAM, and two NVIDIA GeForce RTX 2080 Ti GPUs." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Baselines. We compare NCCL to the following continual learning algorithms. Fine-tune is a basic baseline in which the model is trained on the stream naively, without any support such as a memory. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) regularizes the loss using Fisher information. Reservoir sampling (Chaudhry et al., 2019b) shows that simple experience replay can be a powerful continual learning algorithm; it randomly selects a fixed number of examples from the stream of task data, similarly to GEM and A-GEM. GEM and A-GEM (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a) are the original and a variant of Gradient Episodic Memory.
Datasets. We use the following datasets. 1) Kirkpatrick et al. (2017) design Permuted-MNIST, an MNIST-based (LeCun et al., 1998) dataset in which a fixed permutation of the pixels is applied to each data point, making the input distributions of different tasks unrelated. 2) Zenke et al. (2017) introduce the Split-MNIST dataset, which splits MNIST into five tasks; each task consists of two classes, e.g., (1, 7), and has approximately 12K images. 3) Split-CIFAR10, based on the CIFAR10 dataset (Krizhevsky et al., 2009), is one of the most commonly used continual learning benchmarks (Lee et al., 2020; Rebuffi et al., 2017; Zenke et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019b).
Training details. We use fully-connected neural networks with two hidden layers of size [100, 100] and ReLU activations. For the CIFAR10 dataset, we use a smaller version of ResNet18, following the setting in GEM. To match the empirical results to our theoretical analysis, we train all networks with vanilla SGD.
Performance measurement. We conduct our experiments on K tasks and evaluate them with two measures, ACC and BWT. ACC is the average test accuracy over all tasks after the whole learning process is finished. Backward transfer (BWT) is a measure of forgetting: it shows how much learning new tasks has affected the previously learned tasks. When BWT < 0, catastrophic forgetting has occurred. Formally, we define ACC and BWT as
ACC = (1/K) Σ_{k=1}^{K} ACC_{k,K},   BWT = (1/K) Σ_{k=1}^{K} (ACC_{k,K} − ACC_{k,k}),   (18)
where ACC_{i,j} is the accuracy on task i at the end of episode j.
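Equation 18 translates directly into code. The sketch below computes ACC and BWT from a K × K accuracy matrix; the example matrix is fabricated for illustration.

import numpy as np

def acc_bwt(R):
    # R[k, j] is the test accuracy on task k at the end of episode j (Equation 18).
    K = R.shape[0]
    acc = np.mean(R[:, K - 1])
    bwt = np.mean(R[:, K - 1] - np.diag(R))
    return acc, bwt

R = np.array([[0.95, 0.90, 0.85],              # accuracy on task 1 over three episodes
              [0.10, 0.93, 0.88],
              [0.10, 0.12, 0.94]])
print(acc_bwt(R))                              # (0.89, -0.05); BWT < 0 means forgetting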
" }, { "heading": "4.2 RESULTS", "text": "Table 1 and Table 2 show our main experimental results. We first explain a property of the Split datasets. A Split dataset divides the whole dataset by the number of tasks, so each task sees only a partial version of the dataset. In 5-Split-MNIST, for example, the number of data points per task corresponds to 0.2 epoch of the full dataset, so a single epoch over one task amounts to five repeated passes over its data points. We conduct experiments on 20-task Permuted-MNIST, 5-Split-MNIST, and 5-Split-CIFAR10. We notice that NCCL does not outperform GEM and A-GEM. We conjecture that the lower performance is due to the different optimization techniques applied to the new task: GEM-based methods solve a quadratic program to find a better surrogate for the negative direction between the previous task and the current task, but this procedure requires a much longer computation time and is not cost-effective. We also expect that a theoretical convergence analysis for the GEM surrogates can be achieved in future work. Compared to the other reported methods, NCCL attains reasonable performance. From these observations, we conclude the following.
• Our theoretical convergence analysis reasonably explains catastrophic forgetting.
• NCCL is supported both theoretically and empirically.
• We observe that a small mini-batch size from memory is more effective." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have presented the first generic theoretical convergence analysis of continual learning. Our proof shows that a training model can circumvent catastrophic forgetting by suppressing the disturbance term in the convergence bound of the previously learned tasks. We also demonstrate, theoretically and empirically, that the performance on past tasks under nonconvex continual learning with episodic memory is degraded for two separate reasons: catastrophic forgetting and overfitting to memory. To tackle these problems, nonconvex continual learning applies two techniques: scaling the learning rates adaptively to the sampled mini-batches, and sampling mini-batches from the episodic memory. Compared to other constrained optimization methods, the mechanism of NCCL utilizes both positive and negative alignments between the two stochastic gradients, one from the memory and one from the current task, to maintain stable performance on previous tasks. Finally, we expect the proposed nonconvex framework to be helpful in analyzing the convergence rates of other continual learning algorithms.
" }, { "heading": "A THEORETICAL ANALYSIS", "text": "Proof of Theorem 1. We analyze the convergence of nonconvex continual learning with episodic memory. Recall the gradient update
x^{t+1} = x^t − α_{H_t} ∇f_{I_t}(x^t) − β_{H_t} ∇g_{J_t}(x^t)
for all t ∈ {1, 2, ..., T}. Since f and g are L-smooth, applying Equation 5 gives
f(x^{t+1}) ≤ f(x^t) + ⟨∇f(x^t), x^{t+1} − x^t⟩ + (L/2) ‖x^{t+1} − x^t‖²
= f(x^t) − ⟨∇f(x^t), α_{H_t} ∇f_{I_t}(x^t) + β_{H_t} ∇g_{J_t}(x^t)⟩ + (L/2) ‖α_{H_t} ∇f_{I_t}(x^t) + β_{H_t} ∇g_{J_t}(x^t)‖²
≤ f(x^t) − α_{H_t} ⟨∇f(x^t), ∇f_{I_t}(x^t)⟩ − β_{H_t} ⟨∇f(x^t), ∇g_{J_t}(x^t)⟩ + (L/2) α_{H_t}² ‖∇f_{I_t}(x^t)‖² + (L/2) β_{H_t}² ‖∇g_{J_t}(x^t)‖².   (19)
Let e^t = ∇f_{I_t}(x^t) − ∇f(x^t) and define
C̃_t = (L/2) β_{H_t}² ‖∇g_{J_t}(x^t)‖² − β_{H_t} ⟨∇f(x^t), ∇g_{J_t}(x^t)⟩
for t ≥ 1. We then have
f(x^{t+1}) ≤ f(x^t) − α_{H_t} ⟨∇f(x^t), ∇f_{I_t}(x^t)⟩ + (L/2) α_{H_t}² ‖∇f_{I_t}(x^t)‖² + C̃_t
≤ f(x^t) − (α_{H_t} − (L/2) α_{H_t}²) ‖∇f(x^t)‖² − (α_{H_t} − L α_{H_t}²) ⟨∇f(x^t), e^t⟩ + (L/2) α_{H_t}² ‖e^t‖² + C̃_t.
Taking expectations with respect to I_t on both sides, and noting that C_t = E[C̃_t], we obtain
(α_{H_t} − (L/2) α_{H_t}²) ‖∇f(x^t)‖² ≤ f(x^t) − f(x^{t+1}) − (α_{H_t} − L α_{H_t}²) E[⟨∇f(x^t), e^t⟩] + (L/2) α_{H_t}² ‖e^t‖² + E[C̃_t]
≤ f(x^t) − f(x^{t+1}) + C_t + (L/2) α_{H_t}² ‖e^t‖² + (L α_{H_t}² − α_{H_t}) E[⟨∇f(x^t), e^t⟩].
Rearranging the terms, and assuming that L α_{H_t}² − α_{H_t} ≤ γ and 1 − (L/2) α_{H_t} > 0, we have
‖∇f(x^t)‖² ≤ (1 / (α_{H_t} (1 − (L/2) α_{H_t}))) ( f(x^t) − f(x^{t+1}) + C_t + (L α_{H_t}² − α_{H_t}) E[⟨∇f(x^t), e^t⟩] ) + ((L/2) α_{H_t} ‖e^t‖²) / (1 − (L/2) α_{H_t})
≤ (1 / (α_{H_t} (1 − (L/2) α_{H_t}))) ( f(x^t) − f(x^{t+1}) + C_t + γ E[⟨∇f(x^t), e^t⟩] ) + ((L/2) α_{H_t} ‖e^t‖²) / (1 − (L/2) α_{H_t}).   (20)
Note that, under Assumption 2, E[⟨∇f(x^t), e^t⟩] = 0, so we conclude
‖∇f(x^t)‖² ≤ (1 / (α_{H_t} (1 − (L/2) α_{H_t}))) ( f(x^t) − f(x^{t+1}) + C_t ) + ((L/2) α_{H_t} ‖e^t‖²) / (1 − (L/2) α_{H_t}).   (21)
Furthermore, the batch size b_f controls the error variance through E‖e^t‖² ≤ σ_f²/b_f, which yields Equation 8.
Proof of Theorem 2. Suppose that the learning rate α_{H_t} is a constant α = c/√T for some c > 0, with 1 − (L/2) α = 1/A > 0. Then, summing Equation 21 from t = 0 to T − 1, we have
min_t E‖∇f(x^t)‖² ≤ (1/T) Σ_{t=0}^{T−1} E‖∇f(x^t)‖²
≤ (1 / (1 − (L/2) α)) ( (1/(αT)) ( f(x^0) − f(x^T) + Σ_{t=0}^{T−1} C_t ) + (L/(2 b_f)) α σ_f² )
= (1 / (1 − (L/2) α)) ( (1/(c√T)) ( ∆_f + Σ_{t=0}^{T−1} C_t ) + (L c/(2 b_f √T)) σ_f² )
= (A/√T) ( (1/c) ( ∆_f + Σ_{t=0}^{T−1} C_t ) + (L c/(2 b_f)) σ_f² ).
Lemma 1 Let δ > 0 be a constant and β an upper bound with β > β_{H_t} > 0. The sum of the catastrophic forgetting terms over T iterations, Σ_{t=0}^{T−1} C_t, is O(β²δ√T). For β²δ ≤ 1/√T, it is O(1).
Proof. The catastrophic forgetting term is upper-bounded as
C_t = E[ (β_{H_t}² L/2) ‖∇g_{J_t}(x^t)‖² − β_{H_t} ⟨∇f(x^t), ∇g_{J_t}(x^t)⟩ ] ≤ E[ (β_{H_t}² L/2) ‖∇g_{J_t}(x^t)‖² + β_{H_t} ‖∇f(x^t)‖ ‖∇g_{J_t}(x^t)‖ ] = O( E[ ‖∇g_{J_t}(x^t)‖² ] ).
Since
‖∇g_{J_t}(x^t)‖² ≤ ‖∇g(x^t)‖² + ‖∇g_{J_t}(x^t) − ∇g(x^t)‖² ≤ ‖∇g(x^t)‖² + σ_g²/b_g,
where σ_g is analogous to Equation 3 and b_g is the mini-batch size of g, we have C_t = O(E‖∇g(x^t)‖²) = O(β²δ/√T) for t ∈ [T] and some δ > 0. Summing over time t, we have
C = Σ_{t=0}^{T−1} C_t = T · O(β²δ/√T) = O(β²δ√T).
Therefore, we obtain O(1) when β²δ√T ≤ 1.
Proof of Corollary 1. To formulate the IFO calls, let T(ε) = min{T : min_t E‖∇f(x^t)‖² ≤ ε}. Recall that min_t E‖∇f(x^t)‖² = O(Σ C_t / √T) by Theorem 2. Then, by Lemma 1, we have
min_t E‖∇f(x^t)‖² = O(β²δ√T / √T) = O(β²δ).
This implies that min_t E‖∇f(x^t)‖² does not decrease when 1 ≪ β²δ√T; then x^t cannot reach a stationary point. On the other hand, f(x) converges to a stationary point when β²δ ≤ 1/√T, in which case
min_t E‖∇f(x^t)‖² = O(β²δ) = O(1/√T).   (22)
To derive a bound for T(ε), note that O(1/√T) ≤ ε gives T(ε) = O(1/ε²). The IFO call count is defined as Σ_{t=1}^{T(ε)} b_{f,t}; therefore, the IFO complexity is O(1/ε²)." }, { "heading": "B DERIVATION OF EQUATIONS IN SECTION 3", "text": "Proof of Equation 15. Define the surrogate ∇g̃_{J_t}(x^t) as
∇g̃_{J_t}(x^t) = ∇g_{J_t}(x^t) − ⟨∇f_{I_t}(x^t)/‖∇f_{I_t}(x^t)‖, ∇g_{J_t}(x^t)⟩ · ∇f_{I_t}(x^t)/‖∇f_{I_t}(x^t)‖.   (23)
Then we have
Ĉ_t = E[ (β_{H_t}² L/2) ‖∇g̃_{J_t}(x^t)‖² − β_{H_t} ⟨∇f_{I_t}(x^t), ∇g̃_{J_t}(x^t)⟩ ]
= E[ (β_{H_t}² L/2) ( ‖∇g_{J_t}(x^t)‖² − 2 ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩² / ‖∇f_{I_t}(x^t)‖² + ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩² / ‖∇f_{I_t}(x^t)‖² ) − β_{H_t} ⟨∇f_{I_t}(x^t), ∇g̃_{J_t}(x^t)⟩ ]
= E[ (β_{H_t}² L/2) ( ‖∇g_{J_t}(x^t)‖² − ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩² / ‖∇f_{I_t}(x^t)‖² ) − β_{H_t} ( ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ − ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩ ) ]
= E[ (β_{H_t}² L/2) ( ‖∇g_{J_t}(x^t)‖² − ⟨∇f_{I_t}(x^t), ∇g_{J_t}(x^t)⟩² / ‖∇f_{I_t}(x^t)‖² ) ].   (24)" } ]
2020
null
SP:84f9003af6de793a1fd9c75c2cf9bb9dc495d56e
[ "This paper aims to address an important question in reinforcement learning: policy learning from high-dimensional sensory observations. The authors propose an algorithm for Learning Controllable Embedding (LCE) based on policy iteration in the latent space. The authors provide a theorem to show how the policy performance in latent-space policy improvement depends on the learned representation and develop three algorithmic variations that attempt to maximize the theoretical lower bounds. In the experiments, the proposed algorithm CARL shows improved performance when compared with other LCE baseline algorithms. " ]
A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and utilizing it to perform control in the latent space. Two important questions in this area are how to learn a representation that is amenable to the control problem at hand, and how to achieve an end-to-end framework for representation learning and control. In this paper, we take a few steps towards addressing these questions. We first formulate a LCE model to learn representations that are suitable to be used by a policy iteration style algorithm in the latent space. We call this model control-aware representation learning (CARL). We derive a loss function and three implementations for CARL. In the offline implementation, we replace the locally-linear control algorithm (e.g., iLQR) used by the existing LCE methods with a RL algorithm, namely model-based soft actor-critic, and show that it results in significant improvement. In online CARL, we interleave representation learning and control, and demonstrate further gain in performance. Finally, we propose value-guided CARL, a variation in which we optimize a weighted version of the CARL loss function, where the weights depend on the TD-error of the current policy. We evaluate the proposed algorithms by extensive experiments on benchmark tasks and compare them with several LCE baselines.
[ { "affiliations": [], "name": "REINFORCEMENT LEARNING" }, { "affiliations": [], "name": "Brandon Cui" }, { "affiliations": [], "name": "Yinlam Chow" }, { "affiliations": [], "name": "Mohammad Ghavamzadeh" } ]
[ { "authors": [ "R. Abachi", "M. Ghavamzadeh", "A. Farahmand" ], "title": "Policy-aware model learning for policy gradient methods", "venue": null, "year": 2003 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "arXiv preprint arXiv:1705.10528,", "year": 2017 }, { "authors": [ "E. Banijamali", "R. Shu", "M. Ghavamzadeh", "H. Bui", "A. Ghodsi" ], "title": "Robust locally-linear controllable embedding", "venue": "In Proceedings of the Twenty First International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "V. Borkar" ], "title": "Q-learning for risk-sensitive control", "venue": "Mathematics of operations research,", "year": 2002 }, { "authors": [ "M. Breivik", "T. Fossen" ], "title": "Principles of guidance-based path following in 2D and 3D", "venue": "In Proceedings of the 44th IEEE Conference on Decision and Control,", "year": 2005 }, { "authors": [ "Y. Chow", "B. Cui", "M. Ryu", "M. Ghavamzadeh" ], "title": "Variational model-based policy optimization", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "M. Deisenroth", "C. Rasmussen" ], "title": "PILCO: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "F. Ebert", "C. Finn", "S. Dasari", "A. Xie", "A. Lee", "S. Levine" ], "title": "Visual Foresight: Model-based deep reinforcement learning for vision-based robotic control", "venue": null, "year": 1812 }, { "authors": [ "A. Farahmand" ], "title": "Iterative value-aware model learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "A. Farahmand", "A. Barreto", "D. Nikovski" ], "title": "Value-aware loss function for model-based reinforcement learning", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "K. Furuta", "M. Yamakita", "S. Kobayashi" ], "title": "Swing up control of inverted pendulum", "venue": "In Proceedings of International Conference on Industrial Electronics, Control and Instrumentation,", "year": 1991 }, { "authors": [ "S. Geva", "J. Sitte" ], "title": "A cartpole experiment benchmark for trainable controllers", "venue": "IEEE Control Systems Magazine,", "year": 1993 }, { "authors": [ "T. Haarnoja", "A. Zhou", "P. Abbeel", "S. Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "D. Hafner", "T. Lillicrap", "J. Ba", "M. Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "D. Hafner", "T. Lillicrap", "M. Norouzi", "J. Ba" ], "title": "Mastering atari with discrete world models", "venue": "arXiv preprint arXiv:2010.02193,", "year": 2020 }, { "authors": [ "M. Janner", "J. Fu", "M. Zhang", "S. Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "D. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "D. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": null, "year": 2013 }, { "authors": [ "X. Lai", "A. Zhang", "M. 
Wu", "J. She" ], "title": "Singularity-avoiding swing-up control for underactuated three-link gymnast robot using virtual coupling between control torques", "venue": "International Journal of Robust and Nonlinear Control,", "year": 2015 }, { "authors": [ "A. Lee", "A. Nagabandi", "P. Abbeel", "S. Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "N. Levine", "Y. Chow", "R. Shu", "A. Li", "M. Ghavamzadeh", "H. Bui" ], "title": "Prediction, consistency, curvature: Representation learning for locally-linear control", "venue": "In Proceedings of the 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A. Graves", "I. Antonoglou", "D. Wierstra", "M. Riedmiller" ], "title": "Playing Atari with deep reinforcement learning", "venue": null, "year": 2013 }, { "authors": [ "A. Ruszczyński", "A. Shapiro" ], "title": "Optimization of convex risk functions", "venue": "Mathematics of operations research,", "year": 2006 }, { "authors": [ "J. Schulman", "S. Levine", "P. Abbeel", "M. Jordan", "P. Moritz" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. Klimov" ], "title": "Proximal policy optimization algorithms", "venue": null, "year": 2017 }, { "authors": [ "R. Shu", "T. Nguyen", "Y. Chow", "T. Pham", "K. Than", "M. Ghavamzadeh", "S. Ermon", "H. Bui" ], "title": "Predictive coding for locally-linear control", "venue": null, "year": 2003 }, { "authors": [ "M. Spong" ], "title": "The swing up control problem for the acrobot", "venue": "IEEE Control Systems Magazine,", "year": 1995 }, { "authors": [ "M. Watter", "J. Springenberg", "J. Boedecker", "M. Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "M. Zhang", "S. Vikram", "L. Smith", "P. Abbeel", "M. Johnson", "S. Levine" ], "title": "SOLAR: Deep structured representations for model-based reinforcement learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Control of non-linear dynamical systems is a key problem in control theory. Many methods have been developed with different levels of success in different classes of such problems. The majority of these methods assume that a model of the system is known and its underlying state is low-dimensional and observable. These requirements limit the usage of these techniques in controlling dynamical systems from high-dimensional raw sensory data (e.g., image), where the system dynamics is unknown, a scenario often seen in modern reinforcement learning (RL).\nRecent years have witnessed a rapid development of a large arsenal of model-free RL algorithms, such as DQN (Mnih et al., 2013), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and SAC (Haarnoja et al., 2018), with impressive success in solving high-dimensional control problems. However, most of this success has been limited to simulated environments (e.g., computer games), mainly due to the fact that these algorithms often require a large number of samples from the environment. This restricts their applicability in real-world physical systems, for which data collection is often a difficult process. On the other hand, model-based RL algorithms, such as PILCO (Deisenroth & Rasmussen, 2011), MBPO (Janner et al., 2019), and Visual Foresight (Ebert et al., 2018), despite their success, still face difficulties in learning a model (dynamics) in a high-dimensional (pixel) space.\nTo address the problems faced by model-free and model-based RL algorithms in solving highdimensional control problems, a class of algorithms have been developed, whose main idea is to first learn a low-dimensional latent (embedding) space and a latent model (dynamics), and then use this model to control the system in the latent space. This class has been referred to as learning controllable embedding (LCE) and includes algorithms, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), SOLAR (Zhang et al., 2019), PCC (Levine et al., 2020), Dreamer (Hafner et al., 2020a;b), PC3 (Shu et al., 2020), and SLAC (Lee et al., 2020). The following two properties are extremely important in designing LCE models and algorithms. First, to learn a representation that is the most suitable for the control problem at hand. This suggests incorporating the control algorithm in the\nprocess of learning representation. This view of learning control-aware representations is aligned with the value-aware and policy-aware model learning, VAML (Farahmand, 2018) and PAML (Abachi et al., 2020), frameworks that have been recently proposed in model-based RL. Second, to interleave the representation learning and control, and to update them both, using a unifying objective function. This allows to have an end-to-end framework for representation learning and control.\nLCE methods, such as SOLAR, Dreamer, and SLAC, have taken steps towards the second objective by performing representation learning and control in an online fashion. This is in contrast to offline methods like E2C, RCE, PCC, and PC3 that learn a representation once and then use it in the entire control process. On the other hand, methods like PCC and PC3 address the first objective by adding a term to their representation learning loss function that accounts for the curvature of the latent dynamics. 
This term regularizes the representation towards smoother latent dynamics, which are suitable for the locally-linear controllers, e.g., iLQR (Li & Todorov, 2004), used by these methods.\nIn this paper, we take a few steps towards the above two objectives. We first formulate a LCE model to learn representations that are suitable to be used by a policy iteration (PI) style algorithm in the latent space. We call this model control-aware representation learning (CARL) and derive a loss function for it that exhibits a close connection to the prediction, consistency, and curvature (PCC) principle for representation learning (Levine et al., 2020). We derive three implementations of CARL: offline, online, and value-guided. Similar to offline LCE methods, such as E2C, RCE, PCC, and PC3, in offline CARL, we first learn a representation and then use it in the entire control process. However, in offline CARL, we replace the locally-linear control algorithm (e.g., iLQR) used by these LCE methods with a PI-style (actor-critic) RL algorithm. Our choice of RL algorithm is the model-based implementation of soft actor-critic (SAC) (Haarnoja et al., 2018). Our experiments show significant performance improvement by replacing iLQR with SAC. Online CARL is an iterative algorithm in which at each iteration, we first learn a latent representation by minimizing the CARL loss, and then perform several policy updates using SAC in this latent space. Our experiments with online CARL show further performance gain over its offline version. Finally, in value-guided CARL (V-CARL), we optimize a weighted version of the CARL loss function, in which the weights depend on the TD-error of the current policy. This would help to further incorporate the control algorithm in the representation learning process. We evaluate the proposed algorithms by extensive experiments on benchmark tasks and compare them with several LCE baselines: PCC, SOLAR, and Dreamer." }, { "heading": "2 PROBLEM FORMULATION", "text": "We are interested in learning control policies for non-linear dynamical systems, where the states s ∈ S ⊆ Rns are not fully observed and we only have access to their high-dimensional observations x ∈ X ⊆ Rnx , nx ns. This scenario captures many practical applications in which we interact with a system only through high-dimensional sensory signals, such as image and audio. We assume that the observations x have been selected such that we can model the system in the observation space using a Markov decision process (MDP)1 MX = 〈X ,A, r, P, γ〉, where X and A are observation and action spaces; r : X ×A → R is the reward function with maximum value Rmax, defined by the designer of the system to achieve the control objective;2 P : X×A → P(X ) is the unknown transition kernel; and γ ∈ (0, 1) is the discount factor. Our goal is to find a mapping from observations to control signals, µ : X → P(A), with maximum expected return, i.e., J(µ) = E[ ∑∞ t=0 γ\ntr(xt, at) | P, µ]. Since the observations x are high-dimensional and the observation dynamics P is unknown, solving the control problem in the observation space may not be efficient. As discussed in Section 1, the class of learning controllable embedding (LCE) algorithms addresses this by learning a low-dimensional latent (embedding) space Z ⊆ Rnz , nz nx, together with a latent dynamics, and controlling the system there. 
The main idea behind LCE is to learn an encoder E : X → P(Z), a latent space dynamics F : Z × A → P(Z), and a decoder D : Z → P(X ),3 such that a good or optimal controller (policy) in Z performs well in the observation space X . This means that if we model the control problem in Z as a MDPMZ = 〈Z,A, r̄, F, γ〉 and solve it using a model-based RL algorithm to obtain a policy π : Z → P(A), the image of π back in the observation space, i.e.,\n1A method to ensure observations are Markovian is to buffer them for several time steps (Mnih et al., 2013). 2For example, in a goal tracking problem in which the agent (robot) aims at finding the shortest path to reach the observation goal xg (the observation corresponding to the goal state sg), we may define the reward for each observation x as the negative of its distance to xg , i.e., −‖x− xg‖2.\n3Some recent LCE models, such as PC3 (Shu et al., 2020), are advocating latent models without a decoder. Although we are aware of the merits of such approach, we use a decoder in the models proposed in this paper.\nAlgorithm 1 Latent Space Learning with Policy Iteration (LSLPI)\n1: Inputs: E(0), F (0), D(0); 2: Initialization: µ(0) = random policy; D ← samples generated from µ(0); 3: for i = 0, 1, . . . do 4: Compute π(i) as the projection of µ(i) in the latent space w.r.t. DKL ( π ◦ E || µ ) ; # µ(i) ≈ π(i) ◦ E(i) 5: Compute the value function of π(i) and set V (i) = Vπ(i) ; # policy evaluation (critic) 6: Compute the greedy policy w.r.t. V (i) and set π(i)+ = G[V (i)]; # policy improvement (actor) 7: Set µ(i+1) = π(i)+ ◦ E(i); # project the improved policy π (i) + back into the observation space 8: Learn (E(i+1), F (i+1), D(i+1), r̄(i+1)) from D, π(i), and π(i)+ ; # representation learning 9: Generate samples D(i+1) = {(xt, at, rt, xt+1)}nt=1 from µ(i+1); D ← D ∪D(i+1);\n10: end for\n(π ◦E)(a|x) = ∫ z dE(z|x)π(a|z), should have high expected return. Thus, the loss function to learn Z and (E,F,D) from observations {(xt, at, rt, xt+1)} should be designed to comply with this goal. This is why in this paper, we propose a LCE framework that tries to incorporate the control algorithm used in the latent space in the representation learning process. We call this model, control-aware representation learning (CARL). In CARL, we set the class of control (RL) algorithms used in the latent space to approximate policy iteration (PI), and more specifically to soft actor-critic (SAC) (Haarnoja et al., 2018). Before describing CARL in details in the following sections, we present a number of useful definitions and notations here.\nFor any policy µ in X , we define its value function Uµ and Bellman operator Tµ as\nUµ(x) = E[ ∞∑ t=0 γtrµ(xt) | Pµ, x0 = x], Tµ[U ](x) = Ex′∼Pµ(·|x)[rµ(x) + γU(x ′)], (1)\nfor all x∈X and U : X →R, where rµ(x)= ∫ a dµ(a|x)r(x, a) and Pµ(x′|x)= ∫ a dµ(a|x)P (x′|x, a) are the reward function and dynamics induced by µ. Similarly, for any policy π in Z , we define its induced reward function and dynamics as r̄π(z) = ∫ a dπ(a|z)r̄(z, a) and Fπ(z′|z) =∫\na dπ(a|z)F (z′|z, a). We also define its value function Vπ and Bellman operator Tπ as\nVπ(z) = E[ ∞∑ t=0 γtr̄π(zt) | Fπ, z0 = z], Tπ[V ](z) = Ez′∼Fπ(·|z)[r̄π(z) + γV (z ′)]. (2)\nFor any policy π and value function V in the latent space Z , we denote by π ◦ E and V ◦ E, their image in the observation space X , given encoder E, and define them as\n(π ◦ E)(a|x) = ∫ z dE(z|x)π(a|z), (V ◦ E)(x) = ∫ z dE(z|x)V (z). 
(3)" }, { "heading": "3 CARL MODEL: A CONTROL PERSPECTIVE", "text": "In this section, we formulate our LCE model, which we refer to as control-aware representation learning (CARL). As described in Section 2, CARL is a model for learning a low-dimensional latent space Z and the latent dynamics, from data generated in the observation space X , such that this representation is suitable to be used by a policy iteration (PI) style algorithm in Z . In order to derive the loss function used by CARL to learn Z and its dynamics, i.e., (E,F,D, r̄), we first describe how the representation learning can be interleaved with PI in Z . Algorithm 1 contains the pseudo-code of the resulting algorithm, which we refer to as latent space learning policy iteration (LSLPI).\nEach iteration i of LSLPI starts with a policy µ(i) in the observation space X , which is the mapping of the improved policy in Z in iteration i − 1, i.e., π(i−1)+ , back in X through the encoder E(i−1) (Lines 6 and 7). We then compute π(i), the current policy in Z , as the image of µ(i) in Z through the encoder E(i) (Line 4). Note that E(i) is the encoder learned at the end of iteration i− 1 (Line 8). We then use the latent space dynamics F (i) learned at the end of iteration i − 1 (Line 8), and first compute the value function of π(i) in the policy evaluation or critic step, i.e., V (i) = Vπ(i) (Line 5), and then use V (i) to compute the improved policy π(i)+ , as the greedy policy w.r.t. V (i),\ni.e., π(i+1) = G[V (i)], in the policy improvement or actor step (Line 6). Using the samples in the buffer D, together with the current policies in Z , i.e., π(i) and π(i)+ , we learn the new representation (E(i+1), F (i+1), D(i+1), r̄(i+1)) (Line 8). Finally, we generate samples D(i+1) by following µ(i+1), the image of the improved policy π(i)+ back in X using the old encoder E(i) (Line 7), and add it to the buffer D (Line 9), and the algorithm iterates. It is important to note that both critic and actor operate in the low-dimensional latent space Z . LSLPI is a PI algorithm in Z . However, what is desired is that it also acts as a PI algorithm in X , i.e., it results in (monotonic) policy improvement in X , i.e., Uµ(i+1) ≥ Uµ(i) . Therefore, we define the representation learning loss function for CARL, such that it ensures LSLPI also results in policy improvement in X . The following theorem, whose proof is reported in Appendix A, shows the relationship between the value functions of two consecutive polices generated by LSLPI in X .\nTheorem 1. Let µ, µ+, π, π+, and (E,F,D, r̄) be the policies µ(i), µ(i+1), π(i), π (i) + , and the learned latent representation (E(i+1), F (i+1), D(i+1), r̄(i+1)) at iteration i of the LSLPI algorithm (Algorithm 1). 
Then, the following holds for the value functions of µ and µ+:\nUµ+(x) ≥ Uµ(x)− ( 1 1− γ ∑\nπ̃∈{π,π+}\nEdγ π̃◦E [∆(E,F,D, r̄, π̃, ·)|x0 = x]\n+ √ 2γRmax 1− γ · Edγπ◦E [ √ DKL ( (π ◦ E)(·′|·) || µ(·′|·) )︸ ︷︷ ︸ Lreg(E,µ,π,·) |x0 = x] ) , (4)\nfor all x ∈ X , where dγπ◦E(x′|x0) = (1 − γ) · ∑∞ `=0 γ\n`P(x` = x′|x0; π ◦ E) is the γ-stationary distribution induced by policy π ◦ E, and the error term ∆ for a policy π is given by\n∆(E,F,D, r̄, π, x) = Rmax 1− γ (I)=Led(E,D,x)︷ ︸︸ ︷√ −1 2 ∫ z dE(z|x) logD(x|z) + 2 (II)=Lr(E,̄r,π,x)︷ ︸︸ ︷∣∣rπ◦E(x)− ∫ z dE(z|x)r̄π(z) ∣∣ (5)\n+ γRmax√ 2(1− γ)\n(√ DKL ( Pπ◦E(·|x) || (D ◦ Fπ ◦ E)(·|x) )︸ ︷︷ ︸ (III)=Lp(E,F,D,π,x) + √ DKL ( (E ◦ Pπ◦E)(·|x) || (Fπ ◦ E)(·|x) )︸ ︷︷ ︸ (IV) ) .\nIt is easy to see that LSLPI guarantees (policy) improvement in X , if the terms in the parentheses on the RHS of (4) are zero. We now describe these terms. The last term on the RHS of (4) is the KL between π(i) ◦ E and µ(i) = π(i) ◦ E(i). This term can be seen as a regularizer to keep the new encoder E close to the old one E(i). The four terms in (5) are: (I) The encoding-decoding error to ensure x ≈ (D ◦ E)(x); (II) The error that measures the mismatch between the reward of taking action according to policy π ◦ E at x ∈ X , and the reward of taking action according to policy π at the image of x in Z under E; (III) The error in predicting the next observation through paths in X and Z . This is the error between x′ and x̂′ shown in Fig. 1(a); and (IV) The error in predicting the next latent state through paths in X and Z . This is the error between z′ and z̃′ shown in Fig. 1(b).\nRepresentation Learning in CARL Theorem 1 provides us with a recipe (loss function) to learn the latent space Z and (E,F,D, r̄). In CARL, we propose to learn a representation for which the terms in the parentheses on the RHS of (4) are small. As mentioned earlier, the second term,\nLreg(E,µ, π, x), can be considered as a regularizer to keep the new encoderE close to the old oneE−, when the policy µ is given by π ◦ E−. Term (I) minimizes the reconstruction error between encoder and decoder, which is standard for training auto-encoders (Kingma & Welling, 2013). Term (II) that measures the mismatch between rewards can be kept small, or even zero, if the designer of the system selects the rewards in a compatible way4. Although CARL allows us to learn a reward function in the latent space, similar to several other LCE works (Watter et al., 2015; Banijamali et al., 2018; Levine et al., 2020; Shu et al., 2020), in this paper, we assume that a compatible latent reward function is given. Terms (III) and (IV) are the equivalent of the prediction and consistency terms in PCC (Levine et al., 2020) for a particular latent space policy π. Since PCC has been designed for an offline setting (i.e., one-shot representation learning and control), its prediction and consistency terms are independent of a particular policy and are defined for state-action pairs. While CARL is designed for an online setting (i.e., interleaving representation learning and control), and thus, its loss function at each iteration depends on the current latent space policies π and π+. As we will see in Section 4, in our offline implementation of CARL, these two terms are similar to prediction and consistency terms in PCC. Note that (IV) is slightly different than the consistency term in PCC. 
However, if we upper-bound it using Jensen's inequality, (IV) ≤ L_c(E, F, π, x) := ∫_{x′∈X} dP_{π◦E}(x′|x) · D_KL( E(·|x′) ‖ (F_π ◦ E)(·|x) ), the resulting loss, L_c(E, F, π, x), is similar to the consistency term in PCC. Similarly to PCC, we also add a curvature loss to the loss function of CARL to encourage smoother latent space dynamics F_π. Putting all these terms together, we obtain the following loss function for CARL:
min_{E,F,D} Σ_{x∼D} λ_ed L_ed(E, D, x) + λ_p L_p(E, F, D, π, x) + λ_c L_c(E, F, π, x) + λ_cur L_cur(F, π, x) + λ_reg L_reg(E, µ, π, x),   (6)
where (λ_ed, λ_p, λ_c, λ_cur, λ_reg) are hyper-parameters5 of the algorithm, (L_ed, L_p) are the encoding-decoding and prediction losses defined in (5), L_c is the consistency loss defined above, L_cur = E_{x,u}[ E[ ‖f_Z(z + ε_z, u + ε_u) − f_Z(z, u) − (∇_z f_Z(z, u) · ε_z + ∇_u f_Z(z, u) · ε_u)‖²₂ ] | E ] is the curvature loss that regularizes the second derivative of f_Z, the mean of the latent dynamics F, in which ε_z, ε_u are standard Gaussian noise, and L_reg is the regularizer that ensures the new encoder remains close to the old one." }, { "heading": "4 DIFFERENT IMPLEMENTATIONS OF CARL", "text": "The CARL loss function in (6) introduces an optimization problem that takes a policy π in Z as input and learns a representation suitable for its evaluation and improvement. To optimize this loss in practice, similar to the PCC model (Levine et al., 2020), we define P̂ = D ◦ F_π ◦ E as a latent variable model that is factorized as P̂(x_{t+1}, z_t, ẑ_{t+1} | x_t, π) = P̂(z_t | x_t) P̂(ẑ_{t+1} | z_t, π) P̂(x_{t+1} | ẑ_{t+1}), and use a variational approximation to the intractable negative log-likelihood of the loss terms in (6). The variational bounds for these terms can be obtained similarly to Eqs. 6 and 7 in Levine et al. (2020). Below we describe three practical instantiations of the CARL model. Implementation details can be found in Algorithm 2 in Appendix D. Although CARL is compatible with most PI-style (actor-critic) RL algorithms, we choose soft actor-critic (SAC) (Haarnoja et al., 2018) as its control algorithm. Since most actor-critic algorithms are based on first-order gradient updates, as discussed in Section 3, we regularize the curvature of the latent dynamics F (see Eqs. 8 and 9 in Levine et al. 2020) in CARL to improve its empirical stability and performance in policy learning.
1. Offline CARL We first implement CARL in an offline setting, where we generate a (relatively) large batch of observation samples {(x_t, a_t, r_t, x_{t+1})}_{t=1}^N using an exploratory (e.g., random) policy. We then use this batch to optimize the CARL loss function (6) via the variational approximation scheme described above, and learn a latent representation Z and (E, F, D). Finally, we solve the decision problem in Z using a model-based RL algorithm, which in our case is model-based SAC6. The learned policy π̂* in Z is then used to control the system from observations, i.e., a_t ∼ (π̂* ◦ E)(·|x_t). This is the setting used in several recent LCE works, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), PCC (Levine et al., 2020), and PC3 (Shu et al., 2020). Our offline implementation differs from these in that 1) we replace their locally-linear control algorithm, namely iterative LQR (iLQR) (Li & Todorov, 2004), with model-based SAC, which results in a significant performance improvement, as shown in Section 5, and 2) we optimize the CARL loss function, which, despite the close connection, is still different from the one used by PCC.
4For example, in goal-based RL problems, a compatible reward function can be the one that measures the negative distance between a latent state and the image of the goal in the latent space.
5Theorem 1 provides a high-level guideline for selecting the hyper-parameters of the loss function: λ_ed = 2R_max/(1−γ)², λ_c = λ_p = √2 γ R_max/(1−γ)², and λ_reg = √2 γ R_max/(1−γ).
6By model-based SAC, we refer to learning a latent policy with SAC using synthetic trajectories generated by unrolling the learned latent dynamics model F, similar to the MBPO algorithm (Janner et al., 2019).
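To ground the loss in Equation 6, here is a minimal PyTorch sketch of its main terms for diagonal-Gaussian E, F, and D on random data; the architecture, noise scale, and loss weights are our own illustrative choices, the reward and encoder-regularization terms are omitted, and the paper's actual variational bounds follow Levine et al. (2020).

import torch
import torch.nn as nn
import torch.distributions as td

class GaussianMLP(nn.Module):
    """Amortized diagonal-Gaussian conditional distribution."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * out_dim))
    def forward(self, x):
        mu, log_std = self.net(x).chunk(2, dim=-1)
        return td.Normal(mu, log_std.clamp(-5, 2).exp())

nx, nz, na = 16, 3, 1
E = GaussianMLP(nx, nz)        # encoder  E(z | x)
D = GaussianMLP(nz, nx)        # decoder  D(x | z)
F = GaussianMLP(nz + na, nz)   # latent dynamics F(z' | z, a)

def carl_loss(x, a, x_next, lam=(1.0, 1.0, 1.0, 0.1)):
    # One-sample surrogate of the main terms in Equation (6).
    z = E(x).rsample()
    za = torch.cat([z, a], dim=-1)
    f_next = F(za)
    l_ed = -D(z).log_prob(x).sum(-1).mean()                        # L_ed
    l_p = -D(f_next.rsample()).log_prob(x_next).sum(-1).mean()     # L_p
    l_c = td.kl_divergence(E(x_next), f_next).sum(-1).mean()       # L_c

    # L_cur: penalize deviation of the dynamics mean from its linearization.
    eps = 0.1 * torch.randn_like(za)
    mean_fn = lambda v: F(v).mean
    mu, jvp = torch.autograd.functional.jvp(mean_fn, (za,), (eps,), create_graph=True)
    l_cur = ((mean_fn(za + eps) - mu - jvp) ** 2).sum(-1).mean()
    return lam[0] * l_ed + lam[1] * l_p + lam[2] * l_c + lam[3] * l_cur

x, a, x_next = torch.randn(8, nx), torch.randn(8, na), torch.randn(8, nx)
carl_loss(x, a, x_next).backward()             # gradients flow into E, F, and D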
Our offline\n4For example, in goal-based RL problems, a compatible reward function can be the one that measures the negative distance between a latent state and the image of the goal in the latent space.\n5Theorem 1 provides a high-level guideline for selecting the hyper-parameters of the loss function: λed = 2Rmax/(1− γ)2, λc = λp = √ 2γRmax/(1− γ)2, and λreg = √ 2γRmax/(1− γ).\n6By model-based SAC, we refer to learning a latent policy with SAC using synthetic trajectories generated by unrolling the learned latent dynamics model F , similar to the MBPO algorithm (Janner et al., 2019).\nimplementation is different than those in which 1) we replace their locally-linear control algorithm, namely iterative LQR (iLQR) (Li & Todorov, 2004), with model-based SAC, which results in significant performance improvement, as shown in Section 5, and 2) we optimize the CARL loss function, that despite close connection, is still different than the one used by PCC.\nThe CARL loss function presented in Section 3 has been designed for an online setting in which at each iteration, it takes a policy as input and learns a representation that is suitable for evaluating and improving this policy. However, in the offline setting, the learned representation should be good for any policy generated in the course of running the PI-style control algorithm. Therefore, we marginalize out the policy from the (online) CARL’s loss function and use the RHS of the following corollary (proof in Appendix B) to construct the CARL loss function used in our offline experiments. Corollary 2. Let µ and µ+ be two consecutive policies inX generated by a PI-style control algorithm in the latent space constructed by (E,F,D,r̄). Then, the following holds for the value functions of µ and µ+, where ∆ is defined by (5) (in modulo replacing sampled action a∼π◦E with action a):\nUµ+(x) ≥ Uµ(x)− 2\n1− γ · max x,∈X ,a∈A ∆(E,F,D, r̄, a, x), ∀x ∈ X . (7)\n2. Online CARL In the online implementation of CARL, at each iteration i, the current policy π(i) is the improved policy of the last iteration, π(i−1)+ . We first generate a relatively (to offline CARL) small batch of samples using the image of the current policy in X , i.e., µ(i) = π(i) ◦E(i−1), and then learn a representation (E(i), F (i), D(i)) suitable for evaluating and improving the image of µ(i) in Z under the new encoder E(i). This means that with the new representation, the current policy that was the image of µ(i) in Z under E(i−1), should be replaced by its image π(i) under the new encoder, i.e., π(i) ◦E(i) ≈ µ(i). In online CARL, we address this by the following policy distillation step in which we minimize the following loss:7\nπ(i) ∈ arg min π ∑ x∼D DKL ( (π ◦ E(i))(·|x) || (π(i−1)+ ◦ E(i−1))(·|x) ) . (8)\nAfter the current policy π(i) is set, we perform multiple steps of (model-based) SAC in Z using the current model, (F (i), r̄(i)), and then send the resulting policy π(i)+ to the next iteration.\n3. Value-Guided CARL (V-CARL) While Theorem 1 shows that minimizing the loss in (6) guarantees performance improvement, this loss does not contain any information about the performance of the current policy µ, and thus, the LCE model trained with this loss may have low accuracy in regions of the latent space that are crucial for learning good RL policies. 
In V-CARL, we tackle this issue by modifying the loss function so that the resulting LCE model is more accurate in regions with higher anticipated future returns.

To derive the V-CARL loss function, we use the variational model-based policy optimization (VMBPO) framework of Chow et al. (2020), in which the optimal dynamics for model-based RL can be expressed in closed form as P*(x′|x, a) = P(x′|x, a) · exp( (τ/γ) (r(x, a) + γŨµ(x′) − W̃µ(x, a)) ), where Ũµ(x) := (1/τ) log E[ exp( τ ∑_{t=0}^{∞} γ^t rµ,t ) | Pµ, x0 = x ] and W̃µ(x, a) := r(x, a) + (γ/τ) log E_{x′∼P(·|x,a)}[exp(τ Uµ(x′))] are the optimistic value and action-value functions8 of policy µ, and τ > 0 is a temperature parameter. Note that in the VMBPO framework, the optimal dynamics P* is value-aware, because it re-weighs P with an exponential-twisting weight exp( (τ/γ) w(x, a, x′) ), where w(x, a, x′) := r(x, a) + γŨµ(x′) − W̃µ(x, a) is the temporal difference (TD) error.

In V-CARL, we use the VMBPO framework to modify the CARL prediction loss Lp(E, F, D, π, x). Since the regularizer loss Lreg(E, µ, π, x) in CARL forces the policies π ◦ E and µ to be close to each other, we may replace the transition dynamics Pπ◦E with Pµ in Lp. This makes minimizing Lp equivalent to maximizing the log-likelihood ∫_{x′} dPµ(x′|x) · log(D ◦ Fπ ◦ E)(x′|x). Finally, we replace Pµ with P*µ in this log-likelihood and obtain ∫_a dµ(a|x) ∫_{x′} dP(x′|x, a) · exp( (τ/γ) · w(x, a, x′) ) · log(D ◦ Fπ ◦ E)(x′|x), which is a log-likelihood function (w.r.t. P) weighted by the exponential TD weight w(x, a, x′). Note that this weight depends on the optimistic value functions Ũµ and W̃µ. When τ > 0 is small (see Appendix C for more details), these value functions can be approximated by their standard counterparts, i.e., Ũµ(x) ≈ Uµ(x) and W̃µ(x, a) ≈ Wµ(x, a) := r(x, a) + ∫_{x′} dP(x′|x, a) Uµ(x′), which can be further approximated by their latent-space counterparts, i.e., Uµ(x) ≈ (Vπ ◦ E)(x) and Wµ(x, a) ≈ (Qπ ◦ E)(x, a), according to Lemma 5 in Appendix A.1. Since the latent reward function r̄ is defined such that r(x, a) ≈ (r̄ ◦ E)(x, a), we may write the TD-error w(x, a, x′) in terms of the encoder E and the latent value functions as ŵ(x, a, x′) := ∫_{z,z′} dE(z|x) · dE(z′|x′) · ( r̄(z, a) − Qπ(z, a) + γVπ(z′) ).

7Our experiments reported in Appendix F.1 show that adding distillation improves the performance of online CARL. Thus, all our results for online CARL and V-CARL, unless mentioned otherwise, are with policy distillation.

8We refer to Ũµ as the optimistic value function (Ruszczyński & Shapiro, 2006), because it models the right tail of the return via the exponential utility ρτ(U(·)|x, a) = (1/τ) log E_{x′∼P(·|x,a)}[exp(τ · U(x′))]." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In this section, we experiment with the following continuous control domains: (i) Planar System, (ii) Inverted Pendulum (Swingup), (iii) Cartpole, and (iv) Three-link Manipulator (3-Pole), and compare the performance of our CARL algorithms with three LCE baselines, PCC (Levine et al., 2020), SOLAR (Zhang et al., 2019), and SLAC (Lee et al., 2020), as well as with two implementations of Dreamer (Hafner et al., 2020a) (described below).9 These tasks have underlying start and goal states that are "not" observable; instead, the algorithms only have access to the start and goal observations.
We report the detailed setup of the experiments in Appendix E, in particular, the description of the domains in Appendix E.1 and the implementation of the algorithms in Appendix E.3.

To evaluate the performance of the algorithms, similar to Levine et al. (2020), we report the %-time spent in the goal. The initial policy that is used for data generation is uniformly random (see Appendix E.2 for more details). To measure performance reproducibility for each experiment, we (i) train 25 models, and (ii) perform 10 control tasks for each model. For SOLAR, due to its high computation cost, we only train and evaluate 10 different models. Besides the average results, we also report the results of the best LCE models, averaged over the 10 control tasks.

General Results Table 1 shows the means and standard errors of %-time spent in the goal, averaged over all models and control tasks, and averaged over all control tasks for the best model. To compare data efficiency, we also report the number of samples required to train the latent space and controller in each algorithm. We also show the training curves (performance vs. number of samples) of the algorithms in Fig. 2. We report more experiments and ablation studies in Appendix F.

Below, we summarize the main observations from our experiments. First, offline CARL, which uses model-based SAC as its control algorithm, achieves significantly better performance than PCC, which uses iLQR, in all tasks. This can be attributed to SAC being more robust and effective in non-(locally-)linear environments. We report a more detailed comparison between PCC and offline CARL in Appendix F.3, where we explicitly compare their control performance and latent representation maps. Second, in all tasks, online CARL is more data-efficient than its offline counterpart, i.e., it achieves similar or better performance with fewer samples. In particular, online CARL is notably superior in Planar, Cartpole, and Swingup, in which it achieves similar performance to offline CARL with 2, 2.5, and 4 times fewer samples, respectively (see Fig. 2). In Appendix F.3, we show how the latent representation of online CARL progressively improves through the iterations of the algorithm (in particular, see Fig. 11). Third, in the simpler tasks (Planar, Swingup, Cartpole), V-CARL performs even better than online CARL. This corroborates our hypothesis that CARL can achieve extra improvement when its LCE model is more accurate in the regions of the latent space with higher temporal difference (regions with higher anticipated future return). In 3-Pole, the performance of V-CARL is worse than that of online CARL. This is likely due to instability in representation learning resulting from sample-variance amplification by the exponential-TD weight. Fourth, SOLAR requires significantly more samples to learn a reasonable latent space for control, and with limited data it fails to converge to a good policy. Even with the fine-tuned latent space from Zhang et al. (2019), its performance is not comparable to those of the CARL variants and Dreamer. We report more experiments with SOLAR in Appendix F.5, in which we show that SOLAR can perform better, especially in Planar, when we fix the start and goal locations. However, the improved performance is still not comparable with those of CARL and Dreamer. Fifth, we include an ablation study in Appendix F.2 to demonstrate how each term of the CARL loss function impacts policy learning.
It shows the importance of the prediction and consistency terms, without which the resulting algorithms struggle, and the (relatively) minor role of the curvature and encoder-decoder terms in the performance of the algorithms.

9We did not include E2C and RCE in our experiments, because Levine et al. (2020) previously showed that PCC outperforms them.

Dreamer As described in Section 2, most LCE algorithms, including E2C, PCC, and the CARL variants, assume the observation space X is selected such that the system is Markovian there. In contrast, Dreamer does not make this assumption and has been designed for a more general class of control problems that can be modeled as POMDPs. Thus, it is expected to perform worse than CARL (i.e., to require more samples to achieve the same performance) when the system is Markov in the observation space. Moreover, CARL and other LCE methods define the reward as the negative distance to the goal in the latent space. This cannot be done in Dreamer, where the encoder is an RNN that takes an entire observation trajectory as input. To address this, we propose two methods to train Dreamer's reward function in the latent space, which we refer to as Dreamer Pixel and Dreamer Oracle. While Dreamer Pixel uses the negative distance to the goal in the observation space X as the signal to train the reward function, Dreamer Oracle uses the negative distance in the (unobserved) underlying state space S. Thus, it is fairer to compare the CARL algorithms with Dreamer Pixel than with Dreamer Oracle, which has the advantage of having access to the underlying state space (see Appendix F.6 for more details). As expected, our results show that although both Dreamer implementations learn reasonably performing policies for most tasks (except Planar), they require two to 100 times more samples to achieve the same performance as the CARL algorithms. We report longer (more samples) experiments with Dreamer on all tasks in Appendix F.6 (Fig. 12).

Results with Environment-biased Sampling In the previous experiments, all the online LCE algorithms are warm-started with data collected by a uniformly random policy over the entire environment. With sufficient data, the latent dynamics is accurate enough on most parts of the state space for control; therefore, we do not observe a significant difference between online CARL and V-CARL. To further illustrate the advantage of V-CARL over online CARL, we modify the experimental setting by gathering initial samples only from a specific region of the environment (see Appendix E.1 for more details). Fig. 3 shows the learning curves of online CARL and V-CARL in this case. As expected, with biased data, both algorithms experience a certain level of performance degradation; yet, V-CARL clearly outperforms online CARL. This verifies our conjecture that control-aware LCE models are more robust to the initial data distribution and superior in policy optimization." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we argued for incorporating control into the representation learning process and for the interaction between control and representation learning in learning controllable embedding (LCE) algorithms. We proposed an LCE model called control-aware representation learning (CARL) that learns representations suitable for policy iteration (PI) style control algorithms. We proposed three implementations of CARL that combine representation learning with model-based soft actor-critic (SAC), as the controller, in offline and online fashions.
In the third implementation, called value-guided CARL, we further included the control process in representation learning by optimizing a weighted version of the CARL loss function, in which the weights depend on the TD-error of the current policy. We evaluated the proposed algorithms on benchmark tasks and compared them with several LCE baselines. The experiments show the importance of SAC as the controller and of the online implementation. Future directions include 1) investigating other PI-style algorithms in place of SAC, 2) developing LCE models suitable for value iteration style algorithms, and 3) identifying other forms of bias for learning an effective embedding and latent dynamics." } ]
2021
null
SP:892315ac5e3431d1be76ae8dbeb2121ea22b4ed8
[ "In this paper, the authors proposed Search Data Structure Learning (SDSL), which they claim to be a generalization of the standard Search Data Structure. They also present a new metric called Sequential Search Work Ratio (SSWR) to evaluate the quality and efficiency of the search. They introduced a new loss called F-beta Loss, showing their algorithm is better than two previous results, MIHash (Cakir et al. 2017) and HashNet (Cao et al. 2017)." ]
In our modern world, an enormous amount of data surrounds us, and we are rarely interested in more than a handful of data points at once. It is like searching for needles in a haystack, and in many cases, there is no better algorithm than a random search, which might not be viable. Previously proposed algorithms for efficient database access are made for particular applications, such as finding the min/max, finding all points within a range, or finding the k-nearest neighbours. Consequently, there is a lack of versatility concerning what we can search for when it comes to a gigantic database. In this work, we propose Search Data Structure Learning (SDSL), a generalization of the standard Search Data Structure (SDS), in which the machine has to learn how to search in the database. To evaluate approaches in this field, we propose a novel metric called the Sequential Search Work Ratio (SSWR), a natural way of measuring a search's efficiency and quality. Finally, we inaugurate the field with the Efficient Learnable Binary Access (ELBA), a family of models for Search Data Structure Learning. It requires a means to train two parametric functions and a search data structure for binary codes. For the training, we developed a novel loss function, the F-beta Loss. For the SDS, we describe the Multi-Bernoulli Search (MBS), a novel approach for probabilistic binary codes. Finally, we exhibit the synergy of the F-beta Loss and the MBS by experimentally showing that it is at least twice as good as using the alternative loss functions of MIHash and HashNet, and twenty times better than using another SDS based on the Hamming radius.
[]
[ { "authors": [ "Martin Aumüller", "Erik Bernhardsson", "Alexander Faithfull" ], "title": "Ann-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms", "venue": "In International Conference on Similarity Search and Applications,", "year": 2017 }, { "authors": [ "I Bayer", "E McCreight. R" ], "title": "organization and maintenance of large ordered indices", "venue": "In Proc. ACMSIGFIDET Workshop on Data Description and Access,", "year": 1970 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "arXiv preprint arXiv:1312.6203,", "year": 2013 }, { "authors": [ "Fatih Cakir", "Kun He", "Sarah Adel Bargal", "Stan Sclaroff" ], "title": "Mihash: Online hashing with mutual information", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yue Cao", "Bin Liu", "Mingsheng Long", "Jianmin Wang" ], "title": "Hashgan: Deep learning to hash with pair conditional wasserstein gan", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhangjie Cao", "Mingsheng Long", "Jianmin Wang", "Philip S Yu" ], "title": "Hashnet: Deep learning to hash by continuation", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Lawrence Cayton", "Sanjoy Dasgupta" ], "title": "A learning framework for nearest neighbor search", "venue": "In Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Paolo Ciaccia", "Marco Patella", "Pavel Zezula" ], "title": "M-tree: An e cient access method for similarity search in metric spaces", "venue": "In Proceedings of the 23rd VLDB conference,", "year": 1997 }, { "authors": [ "Paul Covington", "Jay Adams", "Emre Sargin" ], "title": "Deep neural networks for youtube recommendations", "venue": "In Proceedings of the 10th ACM conference on recommender systems,", "year": 2016 }, { "authors": [ "Rene De La Briandais" ], "title": "File searching using variable length keys", "venue": "In Papers presented at the the March 3-5,", "year": 1959 }, { "authors": [ "Jerome H Friedman", "Jon Louis Bentley", "Raphael Ari Finkel" ], "title": "An algorithm for finding best matches in logarithmic expected time", "venue": "ACM Transactions on Mathematical Software (TOMS),", "year": 1977 }, { "authors": [ "Cong Fu", "Deng Cai" ], "title": "Efanna: An extremely fast approximate nearest neighbor search algorithm based on knn graph", "venue": "arXiv preprint arXiv:1609.07228,", "year": 2016 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Antonin Guttman" ], "title": "R-trees: A dynamic index structure for spatial searching", "venue": "In Proceedings of the 1984 ACM SIGMOD international conference on Management of data,", "year": 1984 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Masajiro Iwasaki", "Daisuke Miyazaki" ], "title": "Optimization of indexing based on k-nearest neighbor graph for proximity search in high-dimensional data", "venue": "arXiv preprint arXiv:1810.07355,", "year": 2018 }, { "authors": [ "Qing-Yuan 
Jiang", "Wu-Jun Li" ], "title": "Asymmetric deep supervised hashing", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ankit Kumar", "Ozan Irsoy", "Peter Ondruska", "Mohit Iyyer", "James Bradbury", "Ishaan Gulrajani", "Victor Zhong", "Romain Paulus", "Richard Socher" ], "title": "Ask me anything: Dynamic memory networks for natural language processing", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Wu-Jun Li", "Sheng Wang", "Wang-Cheng Kang" ], "title": "Feature learning based deep supervised hashing with pairwise labels", "venue": "arXiv preprint arXiv:1511.03855,", "year": 2015 }, { "authors": [ "Zhen Li", "Huazhong Ning", "Liangliang Cao", "Tong Zhang", "Yihong Gong", "Thomas S Huang" ], "title": "Learning to search efficiently in high dimensions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "Yury A Malkov", "Dmitry A Yashunin" ], "title": "Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 2018", "venue": null, "year": 2018 }, { "authors": [ "Franco Manessi", "Alessandro Rozza", "Mario Manzo" ], "title": "Dynamic graph convolutional networks", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Andriy Mnih", "Geoffrey E Hinton" ], "title": "A scalable hierarchical distributed language model", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Frederic Morin", "Yoshua Bengio" ], "title": "Hierarchical probabilistic neural network language model", "venue": "In Aistats,", "year": 2005 }, { "authors": [ "Apurva Narayan", "Peter HO’N Roe" ], "title": "Learning graph dynamics using deep neural networks. 
IFACPapersOnLine", "venue": null, "year": 2018 }, { "authors": [ "David Nister", "Henrik Stewenius" ], "title": "Scalable recognition with a vocabulary tree", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Rodrigo Paredes", "Edgar Chávez" ], "title": "Using the k-nearest neighbor graph for proximity searching in metric spaces", "venue": "In International Symposium on String Processing and Information Retrieval,", "year": 2005 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Yuming Shen", "Jie Qin", "Jiaxin Chen", "Li Liu", "Fan Zhu", "Ziyi Shen" ], "title": "Embarrassingly simple binary representation learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Shupeng Su", "Chao Zhang", "Kai Han", "Yonghong Tian" ], "title": "Greedy hash: Towards fast optimization for accurate hash coding in cnn", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Avery Wang" ], "title": "An industrial strength audio search algorithm", "venue": "In Ismir,", "year": 2003 }, { "authors": [ "intelligence", "2014. Xin Yuan", "Liangliang Ren", "Jiwen Lu", "Jie Zhou" ], "title": "Relaxation-free deep hashing via policy", "venue": null, "year": 2014 }, { "authors": [ "HASHNET HashNet Cao" ], "title": "2017) optimize an increasingly closer to discrete sequence of tasks to alleviate the challenge of solving a discrete task with differentiable methods. It is possible to frame their approach entirely with sigmoids but it is is simpler to use the tanh function as in the original article", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "In many applications, the machines need to perform many searches in a gigantic database where the number of relevant documents is minuscule, e.g. ten in a billion. It is like searching for some needles in a haystack. In those cases, considering every document is extremely inefficient. For productivity, the search should not consider the whole database. Traditionally, this is accomplished by building a search data structure and seeking within it. Those data structures can take many forms. For example, there are tree-based structures such as the B-Tree (Bayer & McCreight, 1970), the k-d tree (Friedman et al., 1977), the R-Tree (Guttman, 1984) or the M-Tree (Ciaccia et al., 1997) to name a few. In addition to trees, KNNG (Paredes & Chávez, 2005) build a graph designed for the k-nearest neighbour search. Later approaches improve on KNNG, both for construction and search time and for the search quality itself. In those lines, there is Efanna (Fu & Cai, 2016), HNSW (Malkov & Yashunin, 2018) and ONNG (Iwasaki & Miyazaki, 2018). One of the most common types of search data structures is the hash table. It is so useful that it is implemented natively in programming languages such as Python (with the dictionary type). Hash table is often the main tool an application will use for efficiency. For example, from a short and noisy song sample, Shazam (Wang et al., 2003) can retrieve the whole song by using hash tables filled with well-designed fingerprints of each song.\nTraditionally, the design of a search data structure was for a particular type of search. For example, hash tables can retrieve documents very quickly, even in gigantic databases. However, the query must be equal to the key. This requirement makes the hash table not always applicable. For instance, if the database is indexed by date and time and we seek all documents from a specific day, then it might not be optimal to query every second of that day with an equality search. B-Tree (Bayer & McCreight, 1970) was precisely introduced for applications where a range search is preferable (and faster insertion than dichotomic search is needed). Equality and range are far from being the only\ntypes of search. For instance, the k-nearest neighbours is another well-studied type a search. Also, the subset search is a more exotic example that occurs when every queries and documents are sets and when a document is relevant if and only if it is a subset of the query. As a final example, the auto-complete function often uses a Trie data structure (De La Briandais, 1959) to suggest the end of each word.\nIt is easy not to realize how the problem of efficiently finding needles in a haystack was solved multiple times for specific applications. This is the ascertainment that make Search Data Structure Learning (SDSL) a significant subject. Machine Learning has been a very flexible paradigm, whether by solving multiple NLP (Natural Language Processing) tasks with a unique Transformer (Vaswani et al., 2017) or solving most Atari games with Reinforcement Learning (Mnih et al., 2013), the capacity of a single learning algorithm to perform on multiple tasks is outstanding. Search Data Structure Learning aims at developing generic learning algorithms meant for multiple types of search. Furthermore, what makes a document relevant need not to be described formally or even understood by a human. 
It might be k-nearest neighbours with a complex metric or something else altogether; the only thing we need for learning is a dataset. While we use the term "Search Data Structure Learning" for the first time, algorithms that fall into its paradigm already exist. The large video-hosting platform YouTube implements an SDSL algorithm (Covington et al., 2016) for its recommendation system (the user being the query and the videos being the documents).

Not having a formalized definition of what makes a document relevant and relying on Machine Learning has its challenges, the most important being the evaluation. Traditional search data structures, such as the hash table, the Trie, and the B-Tree, are exact, meaning the retrieved documents contain all and only the relevant documents. When possible, the comparison between two exact search data structures is made with asymptotic time complexity (the big-O notation). However, when the search is not exact, it is unclear how to compare structures with different efficiency and exactitude. The precision at Hamming distance of 2 is an attempt to unify those two properties into a single measure specific to the context of binary encoding. However, as described below, it fails in many aspects. It might seem like it is up to the programmer to decide what is more important between the speed and the quality of the retrieved documents. For example, the recall-queries per second plot (Aumüller et al., 2017) helps to visually understand the trade-off between speed and quality. In Section 3, we describe a reliable measure that simultaneously evaluates the efficiency and quality of any search data structure. This metric solidifies the Machine Learning subfield of Search Data Structure Learning.

This article presents the SDSL framework, which brings two crucial generalizations w.r.t. its predecessors (Li et al., 2011; Cayton & Dasgupta, 2008). First, it allows for dynamic databases, i.e. databases that might change or evolve after training. For example, it is plausible that a company wants to design and train a search engine ready for distribution to multiple clients without further training on each client's database. The current mindset is to retrain each time a new database is given; however, this is not feasible in many cases. Hopefully, this article motivates research towards models that can generalize to never-seen databases. Secondly, the previous frameworks do not support relative relations, i.e. cases where the relevance of a document w.r.t. a query depends on the other documents in the database. The most studied relative relation is probably the KNN task, which is relative since it is impossible to know if a document is among the k-nearest neighbours of a query without knowing the other documents. In contrast, radius search is an example of what we call an absolute relation, because it is possible to know if a document is relevant to a query only by looking at the query-document pair. In this work, however, we did not introduce relative relations only for KNN. Many interesting relative relation tasks exist; for example, another rather exciting one is the multiple supporting facts task:

A harder task is to answer questions where two supporting statements have to be chained to answer the question [...] where to answer the question "Where is the football?" one has to combine information from two sentences "John is in the playground" and "John picked up the football".
(Weston et al., 2015).

In this work, we first introduce a general framework to formalize the SDSL task, in which we present a novel metric to simultaneously evaluate the efficiency and quality of the search. Then, we inaugurate the field of SDSL with Efficient Learnable Binary Access (ELBA) (Section 4), which describes a family of models that use a traditional search data structure and parametric functions (e.g. neural networks) to create discrete binary code(s) for both the queries and documents. A reader familiar with the field will appreciate the difficulty that has to be overcome when dealing with (semi-)discrete structures. To instantiate ELBA, we concocted the F-beta Loss, used for the training, and the Multi-Bernoulli Search (MBS), a novel SDS technique designed for probabilistic binary codes. Finally, for comparisons, we will instantiate ELBA with other loss functions and another SDS, namely the MIHash loss (Cakir et al., 2017), the HashNet loss (Cao et al., 2017), and the Hamming Radius Search (Appendix C.4). We will then experimentally show the advantage of the F-beta Loss and MBS by putting their synergy in evidence." }, { "heading": "2 RELATED WORK", "text": "In data structure terminology, the concepts of dynamic and static structures describe whether or not the structure can change via insertion, deletion or merge. In SDSL, if the database(s) used for training are not the same as the one(s) used for evaluation, then the structure has to search for documents only seen once, at insertion. From a Machine Learning perspective, this is known as a One-Shot Learning task. For example, Matching Networks (Vinyals et al., 2016) try to match never-seen elements together. However, applying their technique would require a database scan; hence it is incompatible with a gigantic database. In the same vein, soft addressing (or attention) is a differentiable mechanism to select an element from many, and is thus compatible with gradient descent. Memory Networks (Kumar et al., 2016), the Neural Turing Machine (Graves et al., 2014) and the Transformer (Vaswani et al., 2017) all use some kind of soft addressing. It is interesting for training our models but cannot be used alone in SDSL since, for the same reason as above, it would require considering the whole database.

Finding the k-nearest neighbours is trivial with unlimited resources. In this field, the research focuses mainly on the efficiency of both the search and the structure's construction. In higher dimensions, the exact algorithms are no more efficient than a random search, due to the curse of dimensionality. Consequently, the focus has recently been on approximate k-nearest neighbours. The search data structures developed are mostly tree-based, such as the k-d tree (Friedman et al., 1977) or the K-Means tree (Nister & Stewenius, 2006), and graph-based, such as KNNG (Paredes & Chávez, 2005), Efanna (Fu & Cai, 2016), HNSW (Malkov & Yashunin, 2018) or ONNG (Iwasaki & Miyazaki, 2018), just to name a few. A good resource for comparing those approaches is the ann-benchmarks project (Aumüller et al., 2017). In this work, we generalize the problem to conceive algorithms able to learn what to search for efficiently.

Efficient Learnable Binary Access, described below, encodes queries and documents into binary vectors. In this work, we will use neural networks as the encoders. Such encoders already exist in the literature.
For example, CNNH (Xia et al., 2014), DPSH (Li et al., 2015), DHN (Zhu et al., 2016), GreedyHash (Su et al., 2018), PGDH (Yuan et al., 2018), HashGAN (Cao et al., 2018), ADSH (Jiang & Li, 2018) or JMLH (Shen et al., 2019), just to name a few. Below, we compare different loss functions: the F-beta Loss (Section 4), the MIHash loss (Cakir et al., 2017) and the HashNet loss (Cao et al., 2017).

Graph learning, introduced in Zhu et al. (2003) for semi-supervised learning, is a type of data structure learning that has experimentally been shown to be a strong idea. Those models learn to do inference from graphs, sometimes by generating them first. Some approaches work with static graphs (static structures) (Zhu et al., 2003; Perozzi et al., 2014; Scarselli et al., 2008; Bruna et al., 2013), while others work with dynamic graphs (dynamic structures) (Narayan & Roe, 2018; Manessi et al., 2020). While this literature does not focus on retrieval, these models learn to compute using a data structure.

To put SDSL in contrast with the Learning to Search framework (Li et al., 2011): as mentioned in the introduction, the latter does not support dynamic databases and relative relations. It is possible to update that framework to deal with dynamic databases by taking an expectation over the databases in the retrieval quality Q(T) and computational cost C(T). However, it is not clear how to deal with relative relations, because the selection function T(q, x) is a "matching function" that does not exist for relative tasks. Generalizing the selection function by allowing it to consider the whole database (i.e. with T(q, X)) does not work, because T(q, X) could use the ranking function s(x, q) on every document and nothing would penalize such exhaustive strategies, since the computational cost is the number of candidates. Nevertheless, this is not the main issue. As with the framework proposed in Cayton & Dasgupta (2008), the computational cost does not consider the retrieval cost but only the size of the candidate set (divided by the number of documents in the database for the latter framework). Those frameworks fail to quantify the work needed to retrieve the candidates. For example, while proposing the Learning to Search framework, the authors relied on timing to evaluate their model. The SDSL framework, proposed below, provides a unique quantity that quantifies both the cost of retrieval and the candidates' quality simultaneously." }, { "heading": "3 FORMALISATION OF THE PROBLEM", "text": "Let Q be the query universe, let U be the document universe, and let D be the set of databases, i.e. the set of all finite sets of documents. The task is formulated with a set of relations corresponding to each database: R = {RD : Q → 2^D | D ∈ D}. This is to allow the general case where the relation is relative. Definition 3.1. The relation set R is absolute if there is a match function M : Q × U → {True, False} s.t.
∀q ∈ Q, D ∈ D, d ∈ D, M(q, d) ⇔ d ∈ RD(q); otherwise, we say it is relative.

This definition can easily be generalized to the case where each RD is a probabilistic map, by using probabilities instead of truth values.

For the F-beta Loss to be defined in Section 4, we restrict ourselves to absolute relation sets; thus, only the query-document pair determines if the document is relevant. However, the rest of this formalization applies to both relative and absolute relation sets.

The mAP is a widely used metric in information retrieval. However, it does not consider the work done to perform the ranking. An SDS could compare the query to every document in the database and have a good mAP. In SDSL, we want to monitor the quality as well as the efficiency of retrieval. The Recall-Queries per second (RQPS) (Aumüller et al., 2017) is also used in the ANN literature. However, it is not suitable for theoretical analysis, since the results depend on the implementation and the hardware. It is possible to generalize the RQPS by changing what quantifies the amount of work done for retrieval (to something other than the number of seconds). Nevertheless, the RQPS has a parameter (k) that limits the number of candidates to generate. This parameter prevents a model from generating all documents in the database as its candidates, which would give 100% recall without any computation and, consequently, an excellent score for doing nothing. Ultimately, the parameter k is a fix for the flaw that the RQPS does not consider the precision. In SDSL, we want to legitimately compare models that might not generate the same number of candidates. To the best of our knowledge, the precision at a Hamming distance of two (p@2) is the only proposal in the literature to consolidate the search's efficiency and quality without relying on the hardware and the implementation. Obviously, this metric has many limitations. First, it is only relevant when the model transforms the queries and documents into binary codes. More importantly, it does not consider the recall. For example, if a query should return several relevant documents but only one relevant document (and no irrelevant ones) is within a Hamming distance of two according to the model, the p@2 score would be maximal for this query even though the system has a very poor recall. Another significant limitation is that it does not weight the score w.r.t. the distance. In many contexts, the amount of work increases a thousandfold between a perfect match (Hamming distance 0) and distance 2.

In this work, we present a generic metric for any SDSL task. We grounded the metric in a very pragmatic standpoint by asking what kind of strategy a programmer would use to find relevant documents in a database quickly. At first, one might consider a random search with a good matching function (e.g. a neural network). However, if the database is enormous, this strategy will give poor performance. One could then consider filtering a significant portion of the database using a search data structure, but the retrieved documents might contain multiple false positives, decreasing the precision. We believe we can have the best of both solutions by combining them: first filtering a large portion of the database with a search data structure and then using a good matching function to filter the retrieved documents.
Finally, to evaluate if the search data structure is useful, the programmer could compare the cost of searching in the structure plus the cost of a random search in the retrieved documents versus the cost of a random search in the whole database. This is the central idea of the Search Work Ratio, the precursor of the Sequential Search Work Ratio, both defined below. Definition 3.2. A relevance oracle is an oracle capable of computing if a document is relevant. Definition 3.3. The relevance oracle cost, noted C(N, K, k), is the expected number of oracle calls needed to find k relevant documents within a set containing N documents of which K are relevant, when doing a random search without replacement. Lemma 3.1. C(N, K, k) = k(N + 1)/(K + 1). (Proof in Appendix A.) For example, finding k = 1 relevant document among N = 10000 documents of which K = 1 is relevant costs 10001/2 = 5000.5 expected oracle calls.

As mentioned previously, we do not intend for the search data structure to produce the final results. Instead, another entity (e.g. a program or a human) should refine the retrieved documents. In many applications, mainly when the relevance function is absolute, it is conceivable that this entity is nearly perfect or, at least, significantly more precise than the SDS. Consequently, we define the cost of finding k relevant documents in a set of size N containing K relevant documents as the relevance oracle cost C(N, K, k). As a generalization, we can weight differently the calls where the oracle receives a relevant document versus those where it does not. The weighting would account for a real refinement entity's potential errors and assign different values to false positives and false negatives. For simplicity, in this work, we will not weight the calls to the oracle.

The Search Work Ratio (SWR) is the ratio between the work done using the SDS and the work done without using the SDS. Consequently, an SWR score of less than 1 implies that it is less costly to use the SDS, and vice-versa. Furthermore, the SWR has a simple interpretation. For example, an SWR of 1/2 implies that using the SDS reduces the cost by a factor of two. Definition 3.4. Let D ∈ D, R ∈ 2^D, k ∈ N, ω0 ∈ R and δ0 ∈ 2^D; then the Search Work Ratio is

SWR(D, R, k, ω0, δ0) = ( C(|δ0|, |δ0 ∩ R|, k) + ω0 ) / C(|D|, |R|, k) ∈ R+,

where D is a database, R is the set of relevant documents in this database, k is the minimum number of documents we want to find, ω0 ∈ R+ is the cost of searching with the SDS, and δ0 is the set of candidates retrieved by the SDS. The cost could be any complexity measure, e.g. time or space. In this work, since we work only with hash tables, ω0 will be the number of hashes computed. We assume that using the oracle has the same cost as computing a hash. The SWR has a significant flaw: it requires that the SDS find enough relevant documents; otherwise, it is not defined. We will now slightly generalize this definition using a relevance generator to avoid this issue.

It is not rare that an SDS can be slightly modified to produce a sequence of sets of candidates. For example, an approximate tree or graph search often employs a limit on the number of nodes explored. It is possible to modify those algorithms to generate candidates with an increasing number of nodes to explore. Definition 3.5. Let D ∈ D, R ∈ 2^D, k ∈ N, ω ∈ R^N and δ ∈ (2^D)^N s.t. T = min{ t s.t.
t ∈ N and |∪_{i=0}^{t} δi ∩ R| ≥ k } exists; then the Sequential Search Work Ratio is

SSWR(D, R, k, ω, δ) = ( C(|δT|, |δT ∩ R|, k − |H ∩ R|) + |H| + ∑_{t=0}^{T} ωt ) / C(|D|, |R|, k) ∈ R+,

with H = ∪_{i=0}^{T−1} δi if T > 0 and H = ∅ otherwise.

The SSWR's numerator corresponds to a random search with the relevance oracle on the last generated candidate set, plus the cost of considering all previously generated candidates, plus the amount of work for computing all candidate sets up to T. The SSWR uses the relevance oracle cost only over the last set of candidates because the generator had not found enough relevant documents before generating it. Consequently, an exhaustive search with the oracle was performed in the previous sets of candidates before asking the generator to yield more candidates. The SSWR accounts for this exhaustive search with the |H| term. Finally, the sets of candidates are intended to be mutually exclusive, because this reduces the relevance oracle cost computed over the last set of candidates and gives a better SSWR. However, it is not mandatory.

Definition 3.6. The Search Data Structure Learning (SDSL) framework consists of minimizing the expected SSWR over generators (of sets) of candidates w.r.t. database-query pairs. Formally, given a work function W : GD, q ↦ (ω0, ω1, . . .) ∈ R^N, where q ∈ Q and GD is a generator w.r.t. a database D ∈ D, the goal of SDSL is to minimize

min_G E_{D,q} [ SSWR(D, RD(q), k, W(GD, q), GD(q)) ].

We minimize the expectation over all databases to ensure the generator's quality even if the database changes, i.e. for dynamic databases. By letting the distribution over D be deterministic, we fall back to the framework with a static database." }, { "heading": "4 EFFICIENT LEARNABLE BINARY ACCESS", "text": "This section describes a family of models to tackle SDSL tasks: the Efficient Learnable Binary Access (ELBA). It consists of two parametric families of functions, FQ and FU (e.g. neural networks), called the query and document encoders, and a Multi-Bernoulli Search (MBS) data structure S that will be made explicit later. Any function from FQ and FU must have its domain on Q and U, respectively, and its image in [0, 1]^n, to be interpreted as the parameters of a Multi-Bernoulli1 (in its canonical form). Precisely, ELBA is specified by the following triplet: ELBA = (FQ, FU, S), with FQ = {fθ : Q → [0, 1]^n | θ ∈ ΘQ} and FU = {fθ : U → [0, 1]^n | θ ∈ ΘU}. Note that in the particular case where Q = U, it is possible to use the same function for the queries and the documents (ELBA = (F, S)). We call this the shared variant of ELBA, as opposed to the unshared variant, where the parametric families might be the same but the parameters are free to be different (e.g. the same neural network architecture but with different parameters).

The Multi-Bernoulli Search (MBS) data structure is a key-value based data structure that implements insert and search. This data structure uses M back-end key-value based data structures compatible with binary vector keys. The back-end data structures S1, S2, . . . , SM must also implement insert(S, key, value), where S ∈ {S1, S2, . . . , SM} is the data structure into which we insert the value w.r.t. the key. Similarly, the back-end structures must implement search(S, key), which must return the appropriate set of values. While the keys given to the back-end data structures are binary vectors, the key given to the MBS must be the parameters of a Multi-Bernoulli distribution of dimension n, i.e.

key = π = (π1, π2, . . .
, πn) ∈ [0, 1]^n.

For insertion, the MBS computes the M most probable outcomes of the Multi-Bernoulli (which might not be unique) and uses them as the keys for inserting into the back-end data structures. For searching, the MBS computes the T most probable outcomes of the Multi-Bernoulli (which, again, might not be unique) and uses them as the keys for searching in each back-end data structure. Consequently, the search performs TM back-end searches. The pseudo-code is in Appendix B.2. Note that the insert method does not require T. Consequently, we can make the insertions and then choose T. This fact makes it possible to modify the search to generate candidates every time it searches in a back-end structure. Finally, to conform with the SDSL framework, it is possible to generate candidates by yielding them each time we search in a back-end structure.

Computing the top-k most probable outcomes of a Multi-Bernoulli efficiently is not trivial. We describe how to do it in Appendix B. Throughout this work, we will use the Hashing Multi-Bernoulli Search (HMBS), an implementation of the MBS that uses hash tables as its back-end data structures. An example of inserting, searching and generating can be found in Appendix B.3.

To implement ELBA, we need a means to select a function from each parametric family. As the parametric families, we consider neural networks, which we aim to train with gradient descent. Consequently, we need a loss function. Thus, in this section, we describe the F-beta Loss, a novel loss function designed to perform well with MBS.

1The Multi-Bernoulli is a random vector composed of n independent but not identical Bernoulli.

We restrict ourselves to absolute relevance functions. For this reason, we use a dataset of the form {(qi, di, ri)}_{i=1}^{N} = {(qi, di, M(qi, di))}_{i=1}^{N}. The model will try to predict the matching function M. We denote the matching prediction function M̂θ. Since the Multi-Bernoulli Search requires the (canonical) parameters of the Multi-Bernoulli representation of the query, {πi^q}_{i=1}^{n}, for search and of the document, {πi^d}_{i=1}^{n}, for insertion (with n the chosen number of Bernoulli in the Multi-Bernoulli random variables), it is essential that the model provide these quantities in its computational pipeline. Let fθ^Q(q) = π^q and fθ^U(d) = π^d be the parametric functions for the queries and the documents, both implemented with a neural network ending with a sigmoid. Note that, depending on the case, the two neural networks might or might not share parameters. Finally, since we want to create a synergy with the Hashing Multi-Bernoulli Search, we need the bits to all be equal precisely when the matching prediction is one. Thus we define

M̂θ(q, d) = ∏_{i=1}^{n} ( πi^q πi^d + (1 − πi^q)(1 − πi^d) ),

which is the probability that the two Multi-Bernoulli random variables are equal according to the distributions {πi^q}_{i=1}^{n} and {πi^d}_{i=1}^{n}. We define three essential quantities: the recall, the fallout, and the predicted matching marginal (pmm for short), respectively:

rθ = E_{q,d|M(q,d)}[ M̂θ(q, d) ],  sθ = E_{q,d|¬M(q,d)}[ M̂θ(q, d) ],  mθ = E_{q,d}[ M̂θ(q, d) ].

We can compute empirical averages to produce unbiased estimators of those quantities:

r̂θ = (1/|I+|) ∑_{i∈I+} M̂θ(qi, di),  ŝθ = (1/|I−|) ∑_{i∈I−} M̂θ(qi, di),  m̂θ = q r̂θ + (1 − q) ŝθ,

where I+ and I− are the sets of indexes of query-document pairs in the dataset that match and do not match, respectively, and q is the probability of having a matching pair.
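As an illustration, here is a minimal PyTorch sketch of M̂θ and the three estimators; the tensor layout and the helper name match_prob are our own assumptions, not part of the paper.

import torch

def match_prob(pi_q, pi_d):
    # M_hat(q, d): probability that two independent Multi-Bernoulli draws
    # with parameters pi_q and pi_d agree on every one of the n bits.
    return (pi_q * pi_d + (1.0 - pi_q) * (1.0 - pi_d)).prod(dim=-1)

def estimators(pi_q, pi_d, match, q):
    # pi_q, pi_d: (batch, n) sigmoid outputs; match: (batch,) boolean labels.
    scores = match_prob(pi_q, pi_d)
    r_hat = scores[match].mean()            # empirical recall
    s_hat = scores[~match].mean()           # empirical fallout
    m_hat = q * r_hat + (1.0 - q) * s_hat   # predicted matching marginal
    return r_hat, s_hat, m_hat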
It is also possible to derive an estimator for the precision pθ = q rθ / mθ, namely p̂θ = q r̂θ / m̂θ. However, it is biased.

For numerical stability, and because the gradients w.r.t. rθ, sθ and mθ are near zero when training starts, we need to consider their logarithm in the loss function. Maximizing the precision gives a model capable of discriminating between positive and negative pairs, but it leaves the recall untouched, and having a high recall is vital for the HMBS to find the relevant documents. Furthermore, only maximizing the recall pushes the model towards a constant Multi-Bernoulli distribution with zero entropy, i.e., independent of the input and with all probabilities near 0 or 1. We tried maximizing the recall while minimizing the fallout, i.e. with a loss similar to max_θ log(rθ) − λ log(sθ). However, we found it extremely hard to optimize; there was no sweet spot for λ. When the model's recall was sufficient, it was because it had collapsed to a constant function. Scheduling λ to alternate between a small value and a relatively high value showed limited experimental success. In the end, we were looking for a trade-off between the precision and the recall. The F-beta came naturally. However, since our precision estimator is biased, it is simpler to reparameterize the standard F-beta with the pmm, using pθ = q rθ / mθ, which gives us

Fβ = (1 + β²) q rθ / (q β² + mθ),  with β ∈ R+.

For the above reasons, we consider the logarithm of the F-beta:

log Fθ = log((1 + β²) q) + log(rθ) − log(q β² + mθ),  with β ∈ R+.

However, if we replace the recall term directly by its estimator, we get the LogSumExp (LSE) w.r.t. log M̂θ(qi, di) for i ∈ I+, i.e., log(r̂θ) = LSE{ log M̂θ(qi, di) | i ∈ I+ } (up to an additive constant), which is known to act as a soft maximum (not to be confused with its gradient, the Softmax). Doing this would yield a near-zero gradient for the matching pairs with the lowest predicted matching value. This would be problematic, since those pairs are the ones that need the gradient the most.

Instead, we propose an alternative estimator of the log F-beta, which we call the F-beta Loss, in which we replace the LSE with the average logarithm of the predicted matching values:

log F̂θ = c + (1/|I+|) ∑_{i∈I+} log M̂θ(qi, di) − log(q β² + m̂θ),

with β ∈ R+ and c = log(1 + β²) + log q. Note that the logarithm of the sigmoid function is simple to compute with numerical stability; most, if not all, Machine Learning libraries natively define this function." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "We performed all experiments on a dataset built from MNIST (LeCun et al., 1998), which we call NoisyMnist. We do not intend NoisyMnist to be a challenging task but rather a tool to analyze the convergence properties and draw comparisons between the models' qualities. In this dataset, documents and queries are MNIST images with values ranging from 0 to 1, with additive Gaussian noise (the values can consequently go below 0 and above 1). The mean and std of the noise are 0 and 0.2, respectively. The relevance function of NoisyMnist is absolute, and we define the matching function as follows: a query matches a document if and only if their original MNIST images were the same before we added the noise. In Figure 1, there are 6 examples of query-document pairs. For evaluation, we build a fixed database with 10K different images from MNIST that are not accessible while training. From those 10K, we randomly selected 1K to create the queries.
Finally, we added the noise to each image (a query and its corresponding document do not share the same noise). In this evaluation database, there is a unique relevant document for each query. It is one in ten thousand, making it a proper database for SDSL.

We considered two alternatives to the F-beta Loss and one alternative to the MBS. For the loss, we selected the loss function of MIHash (Cakir et al., 2017) and the loss function of HashNet (Cao et al., 2017) because of their compatibility with ELBA. More specifically, they both produce quantities that can be interpreted as the parameters of a Multi-Bernoulli, and they can both be trivially generalized to the unshared case. Furthermore, as an alternative to MBS, we chose the Hamming Radius Search (HRS) described in Appendix C.4. Combining the three losses with the two data structures creates six Efficient Learnable Binary Access models. We deployed each model in both the shared and unshared categories, for a total of twelve scenarios.

Training was run 5 times for each model, and the top 5 sets of parameters w.r.t. the SSWR were selected, for a total of 25 sets of parameters per model. All the values provided are the average over those 25 points. Each training consisted of 100K batches of size 32, which was plenty for all models to converge. Every 500 batches, we performed an evaluation, giving us 200 sets of parameters to select from. We performed all evaluations using the same fixed set of 10K validation documents and the corresponding fixed set of 1K validation queries described above. Those documents and queries were never seen while training. All networks are ResNet18 (He et al., 2016) adapted to MNIST, i.e. the first convolution takes one channel with no stride, and the last linear layer outputs a vector of size 64 (for 64 bits). The hyperparameters and training schedule can be found in Appendix C. Finally, we use halting for all six models, as described below.

When using generators iterating over binary codes, such as HRS and MBS, it is crucial to halt the iteration before it finishes. For example, for 64 bits, there are 2^64 possible hashes, which is certainly larger than any database. In those cases, it does not make sense to compute every possible hash. Halting is the mechanism that decides when to stop a relevance generator and produce the database's remaining documents as the final candidate set. Note that, when halting, the SSWR is always greater than one. In all experiments with HRS, we used a halting of 2081. It corresponds to generating every document with a binary code within a radius of two from the query's code (for 64 bits). In this case, halting afterward is arbitrary, since the next 41664 codes (distance of 3) come in no particular order. Finally, in all experiments with HMBS, we used a halting of 5001, which corresponds to stopping when the amount of work done exceeds the expected amount of work without an SDS, i.e. C(10000, 1, 1). Table 1 shows that F-beta (from this work) outperforms MIHash and HashNet in both the unshared and shared categories. Notably, MIHash and HashNet fail in the unshared category. While HashNet successfully produces binary codes with lower Hamming distances for positive than for negative pairs, the distances are far too high to be used with hash tables. MIHash, on the other hand, only pushes towards increasing the mutual information between the Hamming distance and whether or not the pair matches.
This implies that there is no (explicit) pressure towards having a small Hamming distance. The synergy between the loss function and shared parameters is why MIHash and HashNet produce low Hamming distances for positive pairs in the shared problem. The solutions found for the unshared problem are not available when the query and document networks are the same: parameter sharing acts as a colossal regularizer. Those poor results suggest that both MIHash and HashNet are constrained to similarity search, making F-beta the superior loss function for ELBA with hash-table-based SDS.

On the other hand, HMBS's results are far superior to those of HRS. In the case of shared F-beta, they are 46 times better. The only case for which this is not true is shared HashNet; however, neither of those models is competitive. The fat tail of its Hamming distance distribution is the cause. In our experiments, we noted that varying the halting did not seem to change the results drastically. Additional figures provided in Appendix D might convey a more intuitive understanding of the contrast between the different scenarios." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this article, we proposed a novel and practical field of Machine Learning called Search Data Structure Learning, for which we proposed a natural metric, the SSWR. We inaugurated this field with a new family of models, the Efficient Learnable Binary Access, which we instantiated with the F-beta Loss and the MBS and which outperformed multiple alternatives, reducing the SSWR by at least a factor of two. We cannot overstate the importance of the F-beta Loss in this project. The capacity to obtain convergence on discrete outputs without being caught in a local minimum is a powerful tool. In NLP, the F-beta Loss led us to several exciting breakthroughs when learning semantically rich discrete embeddings for words. The fact that studying SDSL led to breakthroughs in other domains is, for us, an attestation of its significance.

Furthermore, in future work, we plan to extend the formalization to consider insertion, deletion, dependent queries, and dependent retrieved documents. Such generalizations could be useful for tasks like dialogue modelling or question answering. Also, in this work, the halting procedure was simplistic. In the future, we are interested in working with models that can decide when to stop based on the query and the retrieved documents. Finally, we are eager to work with non-hashing-based approaches, such as trees or graphs." }, { "heading": "A THE RELEVANCE ORACLE COST", "text": "Formally, let S be a set of N documents containing K ≤ N relevant documents. From these K elements, we want to find at least k ≤ K elements. We want to compute how many calls, in expectation, are needed to find those k elements if we sample from S uniformly without replacement. We denote this expectation C(N, K, k). Let G(k ; N, K, n) be the Hypergeometric Distribution with parameters N, K, n. This distribution gives the probability that, from a set with N documents of which K are relevant, we sample exactly k relevant documents in n uniform trials without replacement:

G(k ; N, K, n) = (K choose k)(N−K choose n−k) / (N choose n).

Let P(n ; N, K, k) be the probability distribution, defined below, with parameters N, K, k. It gives the probability that, from a set with N documents of which K are relevant, it takes n uniform trials without replacement to get exactly k relevant documents.
We have

P(n ; N, K, k) = G(k-1 ; N, K, n-1) \cdot \frac{K - (k-1)}{N - (n-1)},

which is the probability that we have k − 1 relevant documents in n − 1 trials, multiplied by the probability that we sample a relevant document from a set with N − (n − 1) documents of which K − (k − 1) are relevant. Finally, the expectation of P(n ; N, K, k) yields the wanted measure, i.e. if X ∼ P(· ; N, K, k) then

C(N, K, k) = E[X] = \sum_{n=1}^{N} n \, P(n ; N, K, k) = \frac{N+1}{K+1} \, k,

with the last equality shown in Lemma A.1 below.

Lemma A.1. If X ∼ P(· ; N, K, k) then

E[X] = \frac{N+1}{K+1} \, k.

Proof. The binomial identities used below hold for natural numbers A ≥ a, B ≥ b and A ≥ B.

E[X] = \sum_{n=1}^{N} n \, P(n ; N, K, k)

= \sum_{n=k}^{N-(K-k)} n \, P(n ; N, K, k)   (remove zero terms)

= \sum_{n=k}^{N-(K-k)} n \, G(k-1 ; N, K, n-1) \, \frac{K - (k-1)}{N - (n-1)}

= \sum_{n=k}^{N-(K-k)} n \, \frac{\binom{K}{k-1}\binom{N-K}{n-k}}{\binom{N}{n-1}} \, \frac{K - (k-1)}{N - (n-1)}

= \sum_{n=k}^{N-(K-k)} n \, \frac{\frac{k}{K-k+1}\binom{K}{k}\binom{N-K}{n-k}}{\frac{n}{N-n+1}\binom{N}{n}} \, \frac{K - (k-1)}{N - (n-1)}   (with \binom{A}{a-1} = \frac{a}{A-a+1}\binom{A}{a})

= \sum_{n=k}^{N-(K-k)} k \, \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}

= \sum_{n=k}^{N-(K-k)} k \, \frac{\binom{n}{k}\binom{N-n}{K-k}}{\binom{N}{K}}   (with \frac{\binom{B}{b}\binom{A-B}{a-b}}{\binom{A}{a}} = \frac{\binom{a}{b}\binom{A-a}{B-b}}{\binom{A}{B}})

= \frac{k}{\binom{N}{K}} \sum_{n=k}^{N-(K-k)} \binom{n}{k}\binom{N-n}{K-k}

= \frac{k}{\binom{N}{K}} \binom{N+1}{K+1}   (a variant of Vandermonde's identity)

= \frac{k}{\binom{N}{K}} \, \frac{N+1}{K+1} \binom{N}{K}   (with \binom{A+1}{a+1} = \frac{A+1}{a+1}\binom{A}{a})

= \frac{N+1}{K+1} \, k.
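As a sanity check, this closed form can be compared against a direct simulation of uniform sampling without replacement. The following minimal Python sketch is ours (it is not from the paper's code base); it simply counts oracle calls until k relevant documents are found:

import random

def calls_until_k_found(N, K, k):
    # One episode: shuffle N documents (K of them relevant) and count
    # uniform draws without replacement until k relevant ones are seen.
    docs = [1] * K + [0] * (N - K)
    random.shuffle(docs)
    found = 0
    for n, is_relevant in enumerate(docs, start=1):
        found += is_relevant
        if found == k:
            return n

N, K, k, trials = 10000, 10, 3, 10000
estimate = sum(calls_until_k_found(N, K, k) for _ in range(trials)) / trials
print(estimate, (N + 1) * k / (K + 1))  # both close to 2727.5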
" }, { "heading": "B TOP-K MAXIMAL MULTI-BERNOULLI OUTCOMES", "text": "Let b_i be n independent and non-identical Bernoulli random variables, each with probability π_i of being one. Let z_i be the most probable outcome of b_i and let p_i be the probability of that most probable outcome (b_i = z_i), i.e.

z_i = \begin{cases} 1 & \text{if } π_i > 1/2 \\ 0 & \text{otherwise} \end{cases} \qquad p_i = \begin{cases} π_i & \text{if } π_i > 1/2 \\ 1 - π_i & \text{otherwise} \end{cases}

Note that p_i is always greater than or equal to 1/2. Theorem B.2 gives us the following result:

P(b=x) > P(b=y) \iff \sum_{i : x_i ≠ z_i} \log(p_i) - \log(1-p_i) < \sum_{i : y_i ≠ z_i} \log(p_i) - \log(1-p_i).

This implies that finding the Top-K Minimal Subset Sums of A = {a_1, a_2, ..., a_n} with a_i = \log(p_i) - \log(1-p_i) ≥ 0 will yield the Top-K Maximal Multi-Bernoulli Outcomes. The Top-K Minimal Subset Sum problem can be solved using standard programming techniques. The pseudocode for the reduction is in Appendix B.2." }, { "heading": "B.1 REDUCTION TO TOP-K MINIMAL SUBSET SUM", "text": "Lemma B.1. Let p_i ∈ [0, 1], a_i ∈ {0, 1} and b_i ∈ {0, 1}, for i ∈ {1, ..., n}. Then

\prod_{i=1}^{n} p_i^{a_i}(1-p_i)^{1-a_i} < \prod_{i=1}^{n} p_i^{b_i}(1-p_i)^{1-b_i} \iff \prod_{i=1}^{n} p_i^{1-a_i}(1-p_i)^{a_i} > \prod_{i=1}^{n} p_i^{1-b_i}(1-p_i)^{b_i}.

Proof. Let I_+ = {i | 1 ≤ i ≤ n, a_i = b_i} and I_- = {i | 1 ≤ i ≤ n, a_i ≠ b_i}. We have

\prod_{i=1}^{n} p_i^{a_i}(1-p_i)^{1-a_i} < \prod_{i=1}^{n} p_i^{b_i}(1-p_i)^{1-b_i}

\iff \prod_{i∈I_+} p_i^{a_i}(1-p_i)^{1-a_i} \prod_{i∈I_-} p_i^{a_i}(1-p_i)^{1-a_i} < \prod_{i∈I_+} p_i^{b_i}(1-p_i)^{1-b_i} \prod_{i∈I_-} p_i^{b_i}(1-p_i)^{1-b_i}

\iff \prod_{i∈I_-} p_i^{a_i}(1-p_i)^{1-a_i} < \prod_{i∈I_-} p_i^{b_i}(1-p_i)^{1-b_i}   (the I_+ factors are equal and cancel)

\iff \prod_{i∈I_-} p_i^{1-b_i}(1-p_i)^{b_i} < \prod_{i∈I_-} p_i^{1-a_i}(1-p_i)^{a_i}   (since ∀i ∈ I_-, a_i = 1 - b_i)

\iff \prod_{i∈I_+} p_i^{1-b_i}(1-p_i)^{b_i} \prod_{i∈I_-} p_i^{1-b_i}(1-p_i)^{b_i} < \prod_{i∈I_+} p_i^{1-a_i}(1-p_i)^{a_i} \prod_{i∈I_-} p_i^{1-a_i}(1-p_i)^{a_i}

\iff \prod_{i=1}^{n} p_i^{1-b_i}(1-p_i)^{b_i} < \prod_{i=1}^{n} p_i^{1-a_i}(1-p_i)^{a_i}.

Theorem B.2. Let b be a Multi-Bernoulli of parameter π ∈ ]0, 1[^n, and let z_i and p_i be defined as above. Then, for any x ∈ {0, 1}^n and y ∈ {0, 1}^n, we have

P(b=x) > P(b=y) \iff \sum_{i : x_i ≠ z_i} \log(p_i) - \log(1-p_i) < \sum_{i : y_i ≠ z_i} \log(p_i) - \log(1-p_i).

Proof. For notational simplicity, let x̄_i = x_i ⊕ z_i and ȳ_i = y_i ⊕ z_i (with ⊕ being the exclusive or). Then

P(b=x) > P(b=y)

\iff \prod_{i=1}^{n} P(b_i=x_i) > \prod_{i=1}^{n} P(b_i=y_i)   (since the Bernoulli are independent)

\iff \prod_{i=1}^{n} π_i^{x_i}(1-π_i)^{1-x_i} > \prod_{i=1}^{n} π_i^{y_i}(1-π_i)^{1-y_i}

\iff \prod_{i=1}^{n} p_i^{1-x̄_i}(1-p_i)^{x̄_i} > \prod_{i=1}^{n} p_i^{1-ȳ_i}(1-p_i)^{ȳ_i}   (proved by cases: π_i > 1/2 and π_i ≤ 1/2)

\iff \prod_{i=1}^{n} p_i^{x̄_i}(1-p_i)^{1-x̄_i} < \prod_{i=1}^{n} p_i^{ȳ_i}(1-p_i)^{1-ȳ_i}   (by Lemma B.1)

\iff \sum_{i=1}^{n} x̄_i \log(p_i) + (1-x̄_i)\log(1-p_i) < \sum_{i=1}^{n} ȳ_i \log(p_i) + (1-ȳ_i)\log(1-p_i)

\iff \sum_{i=1}^{n} x̄_i(\log(p_i) - \log(1-p_i)) + \log(1-p_i) < \sum_{i=1}^{n} ȳ_i(\log(p_i) - \log(1-p_i)) + \log(1-p_i)

\iff \sum_{i=1}^{n} x̄_i(\log(p_i) - \log(1-p_i)) < \sum_{i=1}^{n} ȳ_i(\log(p_i) - \log(1-p_i))

\iff \sum_{i : x̄_i = 1} \log(p_i) - \log(1-p_i) < \sum_{i : ȳ_i = 1} \log(p_i) - \log(1-p_i).

Remark. The case where π_i is exactly one or zero for some i can be trivially taken into account by setting that bit to the only possible value; all outcomes that are not generated then have probability zero." }, { "heading": "B.2 ALGORITHMS", "text": "Algorithm 1 Multi-Bernoulli Search Data Structure - insert
Require: Int n, Int M, SDSArray[M] S, RealArray[n] key, Object value
  outcomes ← Top-K Maximal Multi-Bernoulli Outcomes(M, n, key)
  for i ∈ {1, ..., M} do
    insert(S_i, outcomes_i, value)
  end for

Algorithm 2 Multi-Bernoulli Search Data Structure - search
Require: Int n, Int M, Int T, SDSArray[M] S, RealArray[n] key
  values ← set()
  outcomes ← Top-K Maximal Multi-Bernoulli Outcomes(T, n, key)
  for j ∈ {1, ..., T} do
    for i ∈ {1, ..., M} do
      values ← values ∪ search(S_i, outcomes_j)
    end for
  end for
  return values

Algorithm 3 Top-K Maximal Multi-Bernoulli Outcomes
Require: Int k, Int n, RealArray[n] π
 1: z ← BinaryArray[n]
 2: p ← RealArray[n]
 3: a ← RealArray[n]
 4: for i = 1, ..., n do
 5:   if π_i > 1/2 then
 6:     z_i ← 1
 7:     p_i ← π_i
 8:   else
 9:     z_i ← 0
10:     p_i ← 1 − π_i
11:   end if
12:   a_i ← log(p_i) − log(1 − p_i)
13: end for
14: outcomes ← BinaryArray[k × n]
15: indexes ← TopKMinimalSubsetSumIndexes(k, n, a)
16: for j = 1, ..., k do
17:   for i = 1, ..., n do
18:     if i ∈ indexes_j then
19:       outcomes_{ji} ← 1 − z_i
20:     else
21:       outcomes_{ji} ← z_i
22:     end if
23:   end for
24: end for
25: return outcomes" }, { "heading": "B.3 HMBS EXAMPLE", "text": "In this example, the number of bits in the Multi-Bernoulli is n = 3, the number of back-end data structures is M = 3, and the number of search keys is T = 2. Since this is a Hashing Multi-Bernoulli Search data structure, the M = 3 back-end structures will be hash tables; we will call them H1, H2 and H3.
Let us say we want to insert a document with the following key,
π1 = (0.3, 0.1, 0.8).
We first need to compute its M = 3 most probable outcomes (see the sketch of this computation just below).
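This computation is exactly Algorithm 3; here is a minimal brute-force Python sketch of ours (not the paper's code). It enumerates all subsets of bit flips, which is fine at this scale; a real implementation would use a proper top-k minimal subset-sum routine, and it assumes 0 < π_i < 1 (the Remark above handles the degenerate cases):

import math
from itertools import combinations

def top_k_outcomes(pi, k):
    # Mode of the Multi-Bernoulli and per-bit flip costs (Theorem B.2):
    # flipping bit i away from the mode costs a_i = log(p_i) - log(1 - p_i).
    n = len(pi)
    z = [1 if q > 0.5 else 0 for q in pi]
    p = [max(q, 1 - q) for q in pi]
    a = [math.log(q) - math.log(1 - q) for q in p]
    # Enumerate all subsets of bits to flip, sorted by total flip cost.
    subsets = sorted(
        (sum(a[i] for i in s), s)
        for r in range(n + 1) for s in combinations(range(n), r)
    )
    outcomes = []
    for _, s in subsets[:k]:
        x = z[:]
        for i in s:
            x[i] = 1 - x[i]
        outcomes.append(tuple(x))
    return outcomes

print(top_k_outcomes((0.3, 0.1, 0.8), 3))
# [(0, 0, 1), (1, 0, 1), (0, 0, 0)]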
Here they are, in order:
(0, 0, 1), (1, 0, 1), (0, 0, 0).
Then we will insert the document in H1 using the most probable outcome as the key, insert the document in H2 with the second most probable outcome as the key, and insert the document in H3 with the third most probable outcome as the key.
Now, let us say we have two other documents to insert with the following keys, respectively:
π2 = (0.7, 0.9, 0.2), π3 = (0.3, 0.2, 0.1).
Here are their M = 3 most probable outcomes, respectively:
(1, 1, 0), (0, 1, 0), (1, 1, 1) and (0, 0, 0), (1, 0, 0), (0, 1, 0).
We will then insert both of them in the M = 3 hash tables as we did for the first document.
To search in the HMBS given a query π = (0.1, 0.6, 0.2), we first need to compute its T = 2 most probable outcomes:
(0, 1, 0), (0, 0, 0).
We will then search in the M = 3 hash tables with these T = 2 keys, doing a total of TM = 6 searches in the back-end structures. These searches will find all three documents, since the first document is in H3 with the key (0, 0, 0), the second document is in H2 with the key (0, 1, 0), and the third document is both in H1 with the key (0, 0, 0) and in H3 with the key (0, 1, 0).
To generate documents as in the SDSL framework, we do not need the parameter T. However, we might want to halt before considering all possible outcomes of the query. With the same query as above, say we want to generate documents but halt after the fifth hash, i.e. after doing 5 searches in the back-end structures. We will first generate the most probable outcome (0, 1, 0) and search in H1; however, we will find nothing. We will then try the same outcome in H2 and find the second document, which we will yield. Then we will try the same outcome in H3 and find the third document, which we will yield. After this, we will compute the second most probable outcome (0, 0, 0) and search in H1 to find the third document again; thus, we will do nothing. Afterward, we will try the same outcome in H2 and find nothing. Finally, we will halt since we computed 5 hashes, ultimately never finding the first document. Note that if multiple documents were found simultaneously, we would yield them together as a set." }, { "heading": "C EXPERIMENTS' MODELS", "text": "" }, { "heading": "C.1 FBETA", "text": "For the F-beta model, we use ramping for the β hyperparameter, following

\log_2 β_i = \begin{cases} \frac{(32 - 8)\, i}{10\text{K}} - 32 & \text{if } i < 10\text{K} \\ -8 & \text{otherwise} \end{cases}

for each batch i = 0, ..., 100K, i.e. \log_2 β ramps linearly from −32 to −8 over the first 10K batches." }, { "heading": "C.2 MIHASH", "text": "MIHash (Cakir et al., 2017) is based on the mutual information

I(X, Y) = \sum_{z∈Ω} P(X=z, Y=z) \log \frac{P(X=z, Y=z)}{P(X=z)P(Y=z)}

and a generalization of the Hamming distance,

d(x, y) = \frac{1}{2}(n - x \cdot y)

for x and y in R^n. Note that if x and y are in {0, 1}^n, then d(2x − 1, 2y − 1) is their Hamming distance.
Let us use the above notation, i.e. let f^Q_θ(q) = π^q ∈ [0, 1]^n and f^U_θ(d) = π^d ∈ [0, 1]^n be the parametric query and document functions (in the original article, they use the same function for the queries and the documents). Let X and Y be two Multi-Bernoulli of dimension n with parameters π^q and π^d respectively. Finally, with H = d(2X − 1, 2Y − 1), they aim to maximize

I(H, M(q, d)).

To allow gradient descent, they use differentiable histogram binning, i.e. they approximate P(H=k | M(q, d)=1) with

P(H=k | M(q, d)=1) ≈ \frac{1}{|I_+|} \sum_{i∈I_+} δ_{i,k}

with

δ_{i,k} = \begin{cases} d_i - (k - 1) & \text{if } d_i ∈ [k - 1, k] \\ (k + 1) - d_i & \text{if } d_i ∈ [k, k + 1] \\ 0 & \text{otherwise} \end{cases}

and d_i = d(2π^q_i − 1, 2π^d_i − 1); a small numerical sketch of this soft binning is given below.
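As an illustration (our sketch, not MIHash's released code): the triangular weights δ_{i,k} are just max(0, 1 − |d_i − k|), so the soft histogram can be written in a few lines of PyTorch:

import torch

def soft_histogram(d, n):
    # d: 1-D tensor of generalized Hamming distances for the pairs in I+;
    # returns a differentiable estimate of P(H = k | M = 1) for k = 0..n.
    k = torch.arange(n + 1, dtype=d.dtype)
    # triangular kernel, equivalent to the three-case definition above
    delta = torch.clamp(1.0 - (d.unsqueeze(-1) - k).abs(), min=0.0)
    return delta.mean(dim=0)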
Similarly, they estimate P(H=k | M(q, d)=0) and P(H=k) to compute the mutual information." }, { "heading": "C.3 HASHNET", "text": "HashNet (Cao et al., 2017) optimizes a sequence of tasks that is increasingly close to discrete, to alleviate the challenge of solving a discrete task with differentiable methods. It is possible to frame their approach entirely with sigmoids, but it is simpler to use the tanh function as in the original article. Let Net^Q_θ(q) = logits^q ∈ R^n and Net^U_θ(d) = logits^d ∈ R^n be the parametric query and document functions before activation. Let g^Q_θ(q) = \tanh(β \, Net^Q_θ(q)) and g^U_θ(d) = \tanh(β \, Net^U_θ(d)) be the activated functions. Note that

f^Q_θ(q) = \frac{g^Q_θ(q) + 1}{2}, \qquad f^U_θ(d) = \frac{g^U_θ(d) + 1}{2}.

For simplicity, let g^Q_i = g^Q_θ(q_i) and g^U_i = g^U_θ(d_i). In their work, they model the matching random variable with

M(q_i, d_i) ∼ \text{Ber}(σ(α \, g^Q_i \cdot g^U_i)).

This gives

P(M(q_i, d_i)=m) = σ(α \, g^Q_i \cdot g^U_i)^m \, (1 - σ(α \, g^Q_i \cdot g^U_i))^{1-m}.

Finally, they train the model with the (weighted) negative log-likelihood

J_i = w_i \big( \log(1 + \exp(α \, g^Q_i \cdot g^U_i)) - m_i \, α \, g^Q_i \cdot g^U_i \big)

with w_i a positive real number, useful when matches and non-matches are unbalanced. In the following experiments, we ignored this term, since the task is way too unbalanced and adding a weighting term would break the loss function.
The β term in the tanh is first set to 1 and then increased when a convergence criterion is reached. This process repeats ten times, creating a sequence of increasingly harder optimizations which, if repeated infinitely, would converge to a discrete optimization:

\lim_{β→∞} \tanh(βx) = \text{sign}(x).

In all experiments, we use α = 0.2 for the sigmoid to have enough signal in the range [−32, 32]." }, { "heading": "C.4 HAMMING RADIUS SEARCH", "text": "The Hamming Radius Search (HRS) is a naive approach to quickly find documents indexed with binary codes that are within a low Hamming distance r (the radius) of a particular binary code (the query). For insertion, we map each document's binary code to the document using a hash table. For searching, we compute all binary codes at distance 0 (i.e. only the query) and look them up in the hash table, then we do the same for each distance up to r, the radius.
The number of binary codes to consider grows very quickly w.r.t. the radius. For example, at radius 2 with 64-bit codes, the number of codes to consider is 2081:

\binom{64}{0} + \binom{64}{1} + \binom{64}{2} = 2081,

and for radius 3 there are 43745 codes to consider. Radius 3 would not make sense for a database of 10K elements, as the last 41664 codes come in no particular order. This is why, in the following experiments, we consider 2081 hashes before halting for Hamming Radius Search; these counts are easy to verify directly, as shown below.
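A trivial check of these binomial counts, in Python:

from math import comb
print(sum(comb(64, r) for r in range(3)))  # 2081 codes within Hamming radius 2
print(sum(comb(64, r) for r in range(4)))  # 43745 codes within Hamming radius 3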
" }, { "heading": "D SUPPLEMENTARY FIGURES", "text": "Figure 2: The average Hamming distance of the 25 models of each of the 6 HMBS models w.r.t. the positive (matching) pairs and negative (non-matching) pairs, using the fixed 10K documents and 1K queries, creating 1K positive pairs and 9999K negative pairs for which we computed the Hamming distance.
Figure 3: The average SSWR curves w.r.t. halt number for Fbeta, Shared-Fbeta and Shared-MIHash. The colored area is ±0.01×STD of the respective curve. The range of the y-axis changes across graphs, which could be misleading when comparing the STDs. As a reference, the average STD is 0.1259, 0.0374 and 0.0864 for Fbeta, Shared-Fbeta and Shared-MIHash respectively." } ]
2020
null
SP:d366dee57fb1f10beeef03e52f8a93ee6ff39f33
[ "The authors address neural architecture search (NAS) scenarios. In particular, a framework, MetaD2A, is proposed, which yields a neural architecture for a new dataset. In a nutshell, the framework learns a \"dataset-to-neural-network-architecture\" transformation using a database of datasets and architectures. Each dataset is encoded via a \"set encode\" and the architecutres are obtained via a \"graph decoder\". The experiments demonstrate the usefullness of the approach and its improvements over conventual NAS approaches. The approach could be described " ]
Despite the success of recent Neural Architecture Search (NAS) methods on various tasks, which have been shown to output networks that largely outperform human-designed networks, conventional NAS methods have mostly tackled the optimization of searching for the network architecture for a single task (dataset), which does not generalize well across multiple tasks (datasets). Moreover, since such task-specific methods search for a neural architecture from scratch for every given task, they incur a large computational cost, which is problematic when the time and monetary budget are limited. In this paper, we propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly search for a neural architecture for a novel dataset. The proposed MetaD2A (Meta Dataset-to-Architecture) model can stochastically generate graphs (architectures) from a given set (dataset) via a cross-modal latent space learned with amortized meta-learning. Moreover, we also propose a meta-performance predictor to estimate and select the best architecture without direct training on target datasets. The experimental results demonstrate that our model, meta-learned on subsets of ImageNet-1K and architectures from the NAS-Bench-201 search space, successfully generalizes to multiple unseen datasets including CIFAR-10 and CIFAR-100, with an average search time of 33 GPU seconds. Even under the MobileNetV3 search space, MetaD2A is 5.5K times faster than NSGANetV2, a transferable NAS method, with comparable performance. We believe that MetaD2A proposes a new research direction for rapid NAS as well as ways to utilize the knowledge from rich databases of datasets and architectures accumulated over the past years. Code is available at https://github.com/HayeonLee/MetaD2A.
[ { "affiliations": [], "name": "Hayeon Lee" }, { "affiliations": [], "name": "Eunyoung Hyung" }, { "affiliations": [], "name": "Sung Ju Hwang" } ]
[ { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your maml", "venue": "arXiv preprint arXiv:1810.09502,", "year": 2018 }, { "authors": [ "Bowen Baker", "Otkrist Gupta", "Nikhil Naik", "Ramesh Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": "In In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Han Cai", "Chuang Gan", "Tianzhe Wang", "Zhekai Zhang", "Song Han" ], "title": "Once for all: Train one network and specialize it for efficient deployment", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xiangning Chen", "Ruochen Wang", "Minhao Cheng", "Xiaocheng Tang", "Cho-Jui Hsieh" ], "title": "Dr{nas}: Dirichlet neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. 
Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "One-shot neural architecture search via self-evaluated template network", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (CVPR),", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Searching for a robust neural architecture in four gpu hours", "venue": "In Proceedings of the IEEE Conference on computer vision and pattern recognition (CVPR),", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Thomas Elsken", "Benedikt Staffler", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Meta-learning of neural architectures for few-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2020 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale", "venue": "arXiv preprint arXiv:1807.01774,", "year": 2018 }, { "authors": [ "Jiemin Fang", "Yuzhu Sun", "Kangjian Peng", "Qian Zhang", "Yuan Li", "Wenyu Liu", "Xinggang Wang" ], "title": "Fast neural network adaptation via parameter remapping and architecture", "venue": null, "year": 2001 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Ruibing Hou", "Hong Chang", "MA Bingpeng", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for few-shot classification", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric P Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport. 
In Advances in neural information processing systems (NeurIPS), 2018", "venue": null, "year": 2018 }, { "authors": [ "Jaehong Kim", "Sangyeul Lee", "Sungwan Kim", "Moonsu Cha", "Jung Kwon Lee", "Youngduck Choi", "Yongseok Choi", "Dong-Yeon Cho", "Jiwon Kim" ], "title": "Auto-meta: Automated gradient based meta learner search", "venue": "arXiv preprint arXiv:1806.06927,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2012 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." ], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Hae Beom Lee", "Hayeon Lee", "Donghyun Na", "Saehoon Kim", "Minseop Park", "Eunho Yang", "Sung Ju Hwang" ], "title": "Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Dongze Lian", "Yin Zheng", "Yintao Xu", "Yanxiong Lu", "Leyu Lin", "Peilin Zhao", "Junzhou Huang", "Shenghua Gao" ], "title": "Towards fast adaptation of neural architectures with meta learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "In In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Zhichao Lu", "Kalyanmoy Deb", "Erik Goodman", "Wolfgang Banzhaf", "Vishnu Naresh Boddeti" ], "title": "Nsganetv2: Evolutionary multi-objective surrogate-assisted neural architecture search", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization. 
In Advances in neural information processing systems (NeurIPS), 2018", "venue": null, "year": 2018 }, { "authors": [ "Subhransu Maji", "Esa Rahtu", "Juho Kannala", "Matthew Blaschko", "Andrea Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": "arXiv preprint arXiv:1306.5151,", "year": 2013 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "O.M. Parkhi", "A. Vedaldi", "A. Zisserman", "C.V. Jawahar" ], "title": "Cats and dogs", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence (AAAI),", "year": 2019 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization. 2019", "venue": null, "year": 2019 }, { "authors": [ "Albert Shaw", "Wei Wei", "Weiyang Liu", "Le Song", "Bo Dai" ], "title": "Meta architecture search", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning. In Advances in neural information processing systems (NIPS), 2017", "venue": null, "year": 2017 }, { "authors": [ "Yehui Tang", "Yunhe Wang", "Yixing Xu", "Hanting Chen", "Boxin Shi", "Chao Xu", "Chunjing Xu", "Qi Tian", "Chang Xu" ], "title": "A semi-supervised assessor of neural architectures", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning. 
In Advances in neural information processing systems (NIPS), 2016", "venue": null, "year": 2016 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "Pc-darts: Partial channel connections for memory-efficient architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "Nasbench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Muhan Zhang", "Shali Jiang", "Zhicheng Cui", "Roman Garnett", "Yixin Chen" ], "title": "D-vae: A variational autoencoder for directed acyclic graphs", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Dongzhan Zhou", "Xinchi Zhou", "Wenwei Zhang", "Chen Change Loy", "Shuai Yi", "Xuesen Zhang", "Wanli Ouyang" ], "title": "Econas: Finding proxies for economical neural architecture search", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rapid progress in the design of neural architectures has largely contributed to the success of deep learning on many applications (Krizhevsky et al., 2012; Cho et al., 2014; He et al., 2016; Szegedy et al.; Vaswani et al., 2017; Zhang et al., 2018). However, due to the vast search space, designing a novel neural architecture requires a time-consuming trial-and-error search by human experts. To tackle such inefficiency in the manual architecture design process, researchers have proposed various Neural Architecture Search (NAS) methods that automatically search for optimal architectures, achieving models with impressive performances on various tasks that outperform human-designed counterparts (Baker et al., 2017; Zoph & Le, 2017; Kandasamy et al., 2018; Liu et al., 2018; Luo et al., 2018; Pham et al., 2018; Liu et al., 2019; Xu et al., 2020; Chen et al., 2021).\nRecently, large benchmarks for NAS (NAS-101, NAS-201) (Ying et al., 2019; Dong & Yang, 2020) have been introduced, which provide databases of architectures and their performances on benchmark datasets. Yet, most conventional NAS methods cannot benefit from the availability of such databases, due to their task-specific nature which requires repeatedly training the model from scratch for each new dataset (See Figure 1 Left). Thus, searching for an architecture for a new task (dataset) may require a large number of computations, which may be problematic when the time and mon-\n∗These authors contributed equally to this work.\nConventional NAS Approach\nTraining\nNAS Model\nNAS Model\nNAS Model\n...\nSearch Cost O(N)\nTarget Dataset 1\nTarget Dataset 2\nTarget Dataset N\nHeavy Computation Cost\nTraining Training\nOur Meta-Trained NAS Model\nOur NAS Approach\n...\nSearch Cost O(1)\nTarget Dataset 1\nTarget Dataset 2\nTarget Dataset N\nRapid Search\nNo Retraining on Target\nTarget Dataset\nSearch Space\nValid Zone\nSource Database\nMeta-Training\nOur Meta-Learning Framework\nMeta-Test\nFigure 1: Left: Most conventional NAS approaches need to repeatedly train NAS model on each given target dataset, which results in enormous total search time on multiple datasets. Middle: We propose a novel NAS framework that generalizes to any new target dataset to generate specialized neural architecture without additional NAS model training after only meta-training on the source database. Thus, our approach cut down the search cost for training NAS model on multiple datasets from O(N) to O(1). Right: For unseen target dataset, we utilize amortized meta-knowledge represented as set-dependent architecture generative representations.\netary budget are limited. How can we then exploit the vast knowledge of neural architectures that have been already trained on a large number of datasets, to better generalize over an unseen task?\nIn this paper, we introduce amortized meta-learning for NAS, where the goal is to learn a NAS model that generalizes well over the task distribution, rather than a single task, to utilize the accumulated meta-knowledge to new target tasks. Specifically, we propose an efficient NAS framework that is trained once from a database containing datasets and their corresponding neural architectures and then generalizes to multiple datasets for searching neural architectures, by learning to generate a neural architecture from a given dataset. 
The proposed MetaD2A (Meta Dataset-to-Architecture) framework consists of a set encoder and a graph decoder, which are used to learn a cross-modal latent space for datasets and neural architectures via amortized inference. For a new dataset, MetaD2A stochastically generates neural architecture candidates from set-dependent latent representations, which are encoded from the new dataset, and selects the final neural architecture based on the accuracies predicted by a performance predictor, which is also trained with amortized meta-learning. The proposed meta-learning framework reduces the search cost from O(N) to O(1) for multiple datasets, since no training is performed on the target datasets. After a one-time building cost, our model takes just a few GPU seconds to search for a neural architecture on an unseen dataset (see Figure 1).
We meta-learn the proposed MetaD2A on subsets of ImageNet-1K and neural architectures from the NAS-Bench-201 search space. Then we validate it by searching for neural architectures on multiple unseen datasets, namely MNIST, SVHN, CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets. In this experiment, our meta-learned model obtains a neural architecture within 33 GPU seconds on average, without direct training on a target dataset, and largely outperforms all baseline NAS models. Further, we compare our model with a representative transferable NAS method (Lu et al., 2020) on the MobileNetV3 search space. We meta-learn our model on subsets of ImageNet-1K and neural architectures from the MobileNetV3 search space. The meta-learned model successfully generalizes, achieving extremely fast search with competitive performance on four unseen datasets: CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets.
To summarize, our contribution in this work is threefold:
• We propose a novel NAS framework, MetaD2A, which rapidly searches for a neural architecture on a new dataset by sampling architectures from latent embeddings of the given dataset and then selecting the best one based on their predicted performances.
• To this end, we propose to learn a cross-modal latent space of datasets and architectures by performing amortized meta-learning, using a set encoder and a graph decoder, on subsets of ImageNet-1K.
• The meta-learned model successfully searches for neural architectures on multiple unseen datasets and achieves state-of-the-art performance on them in the NAS-Bench-201 search space, in particular searching for architectures within 33 GPU seconds on average." }, { "heading": "2 RELATED WORK", "text": "Neural Architecture Search (NAS) NAS is an automated architecture search process which aims to overcome the suboptimality of manual architecture designs when exploring the extensive search space. NAS methods can be roughly categorized into reinforcement learning-based methods (Zoph & Le, 2017; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithm-based methods (Real et al., 2019; Lu et al., 2020), and gradient-based methods (Liu et al., 2019; Cai et al., 2019; Luo et al., 2018; Dong & Yang, 2019b; Chen et al., 2021; Xu et al., 2020; Fang et al., 2020). Among existing approaches, perhaps the most relevant approach to ours is NAO (Luo et al., 2018), which maps DAGs onto a continuous latent embedding space. However, while NAO performs graph reconstruction for a single task, ours generates data-dependent Directed Acyclic Graphs (DAGs) across multiple tasks.
Another important open problem in NAS is reducing the tremendous computational cost resulting from the large search space (Cai et al., 2019; Liu et al., 2018; Pham et al., 2018; Liu et al., 2019; Chen et al., 2021). GDAS (Dong & Yang, 2019b) tackles this by optimizing sampled sub-graphs of the DAG. PC-DARTS (Xu et al., 2020) reduces GPU overhead and search time by partially selecting channel connections. However, due to the task-specific nature of those methods, they must be retrained from scratch for each new unseen task, which takes a few GPU hours each time. The accuracy-predictor-based transferable NAS method NSGANetV2 (Lu et al., 2020) alleviates this issue by adapting the ImageNet-1K pre-trained network to multiple target datasets; however, this method is still expensive due to the adaptation procedure on each dataset.
Meta-learning Meta-learning (learning to learn) aims to train a model to generalize over a distribution of tasks, such that it can rapidly adapt to a new task (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Nichol et al., 2018; Lee et al., 2019b; Hou et al., 2019). Recently, LEO (Rusu et al., 2019) proposed a scalable meta-learning framework which learns latent generative representations of model parameters for given data in a low-dimensional space for few-shot classification. Similarly to LEO (Rusu et al., 2019), our method learns a low-dimensional latent embedding space, but we learn a cross-modal space for both datasets and models, for task-dependent model generation.
Neural Architecture Search with Meta-Learning Recent NAS methods with gradient-based meta-learning (Elsken et al., 2020; Lian et al., 2019; Shaw et al., 2019) have shown promising results on adapting to different tasks. However, they are only applicable to small-scale tasks such as few-shot classification (Elsken et al., 2020; Lian et al., 2019) and require long computation times, due to the multiple unrolled gradient steps for one meta-update per task. Some attempt to bypass the bottleneck with a first-order approximation (Lian et al., 2019; Shaw et al., 2019) or parallel computation on GPUs (Shaw et al., 2019), but their scalability is intrinsically limited due to gradient updates over a large number of tasks. To tackle this scalability issue, we perform amortized inference over the multiple tasks by encoding a dataset into a low-dimensional latent vector, and exploit fast GNN propagation instead of expensive gradient updates." }, { "heading": "3 METHOD", "text": "Our goal is to rapidly output a high-performing neural architecture for a given dataset by learning the prior knowledge obtained from a rich database consisting of datasets and their corresponding neural architectures. To this end, we propose the Meta Dataset-to-Architecture (MetaD2A) framework, which learns the cross-modal latent space of datasets and their neural architectures. Further, we introduce a meta-performance predictor, which predicts the accuracies of given architectures without training the predictor on an unseen target dataset. An overview of the proposed approach is illustrated in Figure 1." }, { "heading": "3.1 META-TRAINING NAS MODEL", "text": "To formally define the problem, let us assume that we have a source database of Nτ tasks, where each task τ = {D, G, s} consists of a dataset D, a neural architecture represented as a Directed Acyclic Graph (DAG) G, and an accuracy s obtained from the neural architecture G trained on D.
In the meta-training phase, both the dataset-to-architecture generator and the meta-predictor learn to generalize over the task distribution p(τ) using the source database. We describe how we empirically construct the source database in Section 4.1.1." }, { "heading": "3.1.1 LEARNING TO GENERATE GRAPHS FROM DATASETS", "text": "We propose a dataset-to-architecture generator which takes a dataset and generates high-quality architecture candidates for that set. We want the generator to generate, at meta-test time, even novel architectures that are not contained in the source database. Thus, the generator learns a continuous cross-modal latent space Z of datasets and neural architectures from the source database. For each task τ, the generator encodes the dataset D as a vector z through the set encoder q_φ(z|D) parameterized by φ, and then decodes a new graph G̃ from z, which is sampled from q_φ(z|D), by using the graph decoder p_θ(G|z) parameterized by θ. Our goal is for the G̃ generated from D to match the true G paired with D. We meta-learn the generator using set-amortized inference, by maximizing the approximated evidence lower bound (ELBO) as follows:

\max_{φ,θ} \sum_{τ∼p(τ)} L^τ_{φ,θ}(D, G)    (1)

where L^τ_{φ,θ}(D, G) = E_{z∼q_φ(z|D)}\big[\log p_θ(G|z)\big] - λ \cdot L^τ_{KL}\big[q_φ(z|D) \,\|\, p(z)\big]    (2)

Each dimension of the prior p(z) factorizes into N(0, 1). L^τ_{KL} is the KL divergence between two multivariate Gaussian distributions, which has a simple closed form (Kingma & Welling, 2014), and λ is a scalar weighting value. Using the reparameterization trick on z, we optimize the above objective by stochastic gradient variational Bayes (Kingma & Welling, 2014). We use the set encoder described in Section 3.1.3, and we adopt a Graph Neural Network (GNN)-based decoder for directed acyclic graphs (DAGs) (Zhang et al., 2019), which allows message passing to happen only along the topological order of the DAGs. For a detailed description of the generator, see Section A of the Suppl." }, { "heading": "3.1.2 META-PERFORMANCE PREDICTOR", "text": "While many performance predictors for NAS have been proposed (Luo et al., 2018; Cai et al., 2020; Lu et al., 2020; Zhou et al., 2020; Tang et al., 2020), those performance predictors repeatedly collect an architecture-accuracy database for each new dataset, which results in a huge total cost over many datasets. Thus, the proposed predictor f_ω(s|D, G) takes a dataset as well as a graph as input to support multiple datasets, while existing performance predictors take a graph only. The proposed predictor then meta-learns a set-dependent performance proxy generalized over the task distribution p(τ) in the meta-training stage. This allows the meta-learned predictor to accurately predict performance on unseen datasets without additional training. The proposed predictor f_ω consists of a dataset encoder and a graph encoder, followed by two linear layers with ReLU. For dataset encoding, we use the set encoder of Section 3.1.3, which takes D as an input. We adopt a directed acyclic graph encoder (Zhang et al., 2019) for the DAG G (please refer to Section B of the Suppl.). We concatenate the outputs of the graph encoder and the set encoder, and feed them to two linear layers with ReLU to predict the accuracy; a minimal sketch of this prediction head is given below.
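The following PyTorch sketch is ours and only illustrates the head described above; the encoder modules, their output widths, and the hidden size are placeholders, not the authors' implementation:

import torch
import torch.nn as nn

class MetaPredictor(nn.Module):
    # set_encoder / graph_encoder stand in for the modules of Section 3.1.3
    # and Section B of the Suppl.; d_set / d_graph are their output widths.
    def __init__(self, set_encoder, graph_encoder, d_set, d_graph, d_hidden=128):
        super().__init__()
        self.set_encoder = set_encoder
        self.graph_encoder = graph_encoder
        self.head = nn.Sequential(
            nn.Linear(d_set + d_graph, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, dataset, graph):
        h_d = self.set_encoder(dataset)      # dataset embedding h_e
        h_g = self.graph_encoder(graph)      # DAG embedding
        s_hat = self.head(torch.cat([h_d, h_g], dim=-1))
        return s_hat.squeeze(-1)             # predicted accuracy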
We train the predictor f_ω to minimize the MSE loss L^τ_ω(s, D, G) between the predicted accuracy and the true accuracy s of the model on each task sampled from the source database:

\min_ω \sum_{τ∼p(τ)} L^τ_ω(s, D, G) = \sum_{τ∼p(τ)} \big(s - f_ω(D, G)\big)^2    (3)" }, { "heading": "3.1.3 SET ENCODER", "text": "The efficacy of the proposed framework depends on how accurately the set encoder captures the distribution of the target dataset and extracts information related to the goals of the generator and the predictor. To compress all the instances of a dataset D into a single latent code z, the set encoder should process input sets of any size and summarize consistent information regardless of the order of the instances (permutation invariance). Existing set encoders such as DeepSet (Zaheer et al., 2017), SetTransformer (Lee et al., 2019a), and StatisticsPooling (Lee et al., 2020) fulfill those requirements and might be used. However, DeepSet and SetTransformer are non-hierarchical poolings and thus cannot accurately model individual classes in the given dataset. Moreover, DeepSet and StatisticsPooling resort to simple averaging of the instance-wise representations.
Therefore, we introduce a novel set encoder which stacks two permutation-invariant modules with attention-based learnable parameters. The lower-level intra-class encoder captures the class prototypes that reflect label information, and the higher-level inter-class encoder considers the relationships between class prototypes and aggregates them into a latent vector. The proposed structure of the set encoder models high-order interactions between the set elements, allowing the generator and predictor to effectively extract the information needed to achieve their respective goals.
Specifically, a given dataset D = {X, Y} consists of X = {X_c}_{c=1}^{C} and Y = {Y_c}_{c=1}^{C}, the sets of instances and target labels of C classes, respectively. We randomly sample instances {x | x ∈ B_c} ∈ R^{b_c×d_x} of class c, where x is a d_x-dimensional feature vector, B_c ⊂ X_c and |B_c| = b_c. We input the sampled instances into IntraSetPool, the intra-class encoder, to encode a class prototype v_c ∈ R^{1×d_{v_c}} for each class c = 1, ..., C. Then we further feed the class-specific set representations {v_c}_{c=1}^{C} into InterSetPool, the inter-class encoder, to generate the dataset representation h_e ∈ R^{1×d_{h_e}} as follows:

v_c = \text{IntraSetPool}(\{x \,|\, x ∈ B_c\}), \qquad h_e = \text{InterSetPool}(\{v_c\}_{c=1}^{C})    (4)

Both set poolings are stacked attention-based blocks borrowed from Lee et al. (2019a). Note that while Lee et al. (2019a) is an attention-based set encoder, it ignores the class label information of the given dataset, which may lead to poor performance. Please see Section C of the Suppl. for more details." }, { "heading": "3.2 META-TEST (SEARCHING)", "text": "In the meta-test stage, for an unseen dataset D̂, we can obtain n set-dependent DAGs {Ĝ_i}_{i=1}^{n} with the meta-trained generator parameterized by φ* and θ*, by feeding D̂ as an input. Through such set-level amortized inference, our method can easily generate neural architecture(s) for the novel dataset. The latent code z ∈ R^{1×d_z} can be sampled from a dataset-conditioned Gaussian distribution with diagonal covariance, where NN_µ and NN_σ are single linear layers:

z ∼ q_φ(z|D) = N(µ, σ^2) \quad \text{where} \quad µ, σ = NN_µ(h_e), NN_σ(h_e)    (5)

At meta-test, the predictor f_{ω*}(ŝ_i|D̂, Ĝ_i) predicts the accuracies {ŝ_i}_{i=1}^{n} for a given unseen dataset D̂ and each generated architecture in {Ĝ_i}_{i=1}^{n}, and then selects the neural architecture with the highest predicted accuracy among {ŝ_i}_{i=1}^{n}; a minimal sketch of this search procedure follows."
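Putting the pieces together, the meta-test search can be sketched as below. This is our illustrative Python code: the encode/decode methods of the generator and the MetaPredictor above are hypothetical names, not the released API.

import torch

def meta_test_search(dataset, generator, predictor, n=500, top=30):
    # Encode the dataset once into (mu, sigma), then sample n latent codes
    # and decode each into a candidate DAG (Eq. 5 and Section 3.2).
    mu, sigma = generator.encode(dataset)
    candidates = [generator.decode(mu + sigma * torch.randn_like(sigma))
                  for _ in range(n)]
    # Rank candidates by the meta-predictor and keep the best ones.
    scores = [predictor(dataset, g).item() for g in candidates]
    ranked = sorted(zip(scores, candidates), key=lambda t: t[0], reverse=True)
    return [g for _, g in ranked[:top]]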
}, { "heading": "4 EXPERIMENT", "text": "We conduct extensive experiments to validate MetaD2A framework. First, we compare our model with conventional NAS methods on NAS-Bench-201 search space in Section 4.1. Second, we compare our model with transferable NAS method under a large search space in Section 4.2. Third, we compare our model with other Meta-NAS approaches on few-shot classification tasks in Section 4.3. Finally, we analyze the effectiveness of our framework in Section 4.4." }, { "heading": "4.1 NAS-BENCH-201 SEARCH SPACE", "text": "" }, { "heading": "4.1.1 EXPERIMENT SETUP", "text": "We learn our model on source database consisting of subsets of ImageNet-1K and neural architectures of NAS-Bench-201 (Dong & Yang, 2020) and (meta-)test our model by searching for architectures on 6 benchmark datasets without additional NAS model training.\nNAS-Bench-201 search space contains cell-based neural architectures, where each cell is represented as directed ayclic graph (DAG) consisting of the 4 nodes and the 6 edge connections. For each edge connection, NAS models select one of 5 operation candidates such as zerorize, skip connection, 1-by-1 convolution, 3-by-3 convolution, and 3-by-3 average pooling.\nSource Database To meta-learn our model, we practically collect multiple tasks where each task consists of (dataset, architecture, accuracy). We compile ImageNet-1K (Deng et al., 2009) as multiple sub-sets by randomly sampling 20 classes with an average of 26K images for each sub-sets and assign them to each task. All images are downsampled by 32×32 size. We search for the setspecific architecture of each sampled dataset using random search among high-quality architectures which are included top-5000 performance architecture group on ImageNet-16-120 or GDAS (Dong & Yang, 2019b). For the predictor, we additionally collect 2,920 tasks through random sampling. We obtain its accuracy by training the architecture on dataset of each task. We collect Nτ =1,310/4,230 meta-training tasks for the generator/predictor and 400/400 meta-validation tasks for them, respectively. Meta-training time is 12.7/8.4 GPU hours for the generator/the predictor and note that metatraining phase is needed only once for all experiments of NAS-Bench-201 search space.\nMeta-Test Datasets We apply our model trained from source database to 6 benchmark datasets such as 1) CIFAR-10 (Krizhevsky et al., 2009), 2) CIFAR-100 (Krizhevsky et al., 2009), 3) MNIST (LeCun & Cortes, 2010), 4) SVHN (Netzer et al., 2011), 5) Aircraft (Maji et al., 2013), and 6) OxfordIIIT Pets (Parkhi et al., 2012). On CIFAR10 and CIFAR100, the generator generates 500 neural architectures and we select 30 architectures based on accuracies predicted by the predictor. Following SETN (Dong & Yang, 2019a), we retrieve the accuracies of N architecture candidates from the NAS-bench-201 and report the highest final accuracy for each run. While N = 1000 in SETN, we set a smaller number of samples (N = 30) for MetaD2A. We report the mean accuracies over 10 runs of the search process by retrieving accuracies of searched architectures from NAS-Bench-201. On MNIST, SVHN, Aircraft, and Oxford-IIIT Pets, the generator generates 50 architectures and select the best one with the highest predicted accuracy. we report the accuracy averaging over 3 runs with different seeds. For fair comparison, the searched architectures from our model are trained on each target datasets from the scratch. 
Note that, once trained, MetaD2A can be used for more datasets without additional training. Our model runs on a single Nvidia 2080ti GPU." }, { "heading": "4.1.2 RESULTS ON UNSEEN DATASETS", "text": "Table 1 shows that our model, meta-learned on the source database, successfully generalizes to 6 unseen datasets (MNIST, SVHN, CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets), outperforming all baselines. Since the meta-learned MetaD2A can output set-specialized architectures for target datasets through an inference process with no training cost, the search speed is extremely fast. As shown in Table 1, the search time of MetaD2A, averaged over the 6 benchmark datasets, is within 33 GPU seconds. This is an impressive result, in that it is at least 147× (maximum: 12169×) faster than conventional set-specific NAS approaches, which need to train NAS models on each target dataset. The rapid search of REA, RS, REINFORCE and BOHB is only possible when all of the accuracies are pre-computed, as in NAS-Bench-201, so that they can be retrieved instantly for the target dataset; therefore, it is difficult to apply them to other non-benchmark datasets. In particular, we observe that MetaD2A, which is learned over multiple tasks, benefits the search for set-dependent neural architectures on fine-grained datasets such as Aircraft and Oxford-IIIT Pets." }, { "heading": "4.2 MOBILENETV3 SEARCH SPACE", "text": "" }, { "heading": "4.2.1 EXPERIMENT SETUP", "text": "We apply our meta-trained model to four unseen datasets, comparing with a transferable NAS method (NSGANetV2 (Lu et al., 2020)) under the same MobileNetV3 search space, which contains more than 10^19 architectures. Each CNN architecture consists of five sequential blocks, and the search targets are the number of layers, the number of channels, the kernel sizes, and the input resolution. For a fair comparison, we also exploit the supernet for the parameters, as NSGANetV2 does. We collect Nτ = 3,018/153,408 meta-training tasks for the generator/predictor and 646/32,872 meta-validation tasks, respectively, as a source database from the ImageNet-1K dataset and architectures of the MobileNetV3 search space. Meta-training time is 2.21/1.41 GPU days for the generator/the predictor. Note that the meta-training phase is needed only once on the source database.
Table 1: Performance on Unseen Datasets (Meta-Test). MetaD2A conducts amortized inference on unseen target datasets after meta-training on a source database consisting of subsets of ImageNet-1K and architectures of the NAS-Bench-201 search space. Meta-training time is 12.7/8.4 GPU hours for the generator/the predictor. For a fair comparison, the parameters of the searched architectures are trained on each dataset from scratch instead of transferring parameters from ImageNet. T is the time to construct the precomputed architecture database for each target.
We report accuracies with 95% confidence intervals.

Target Dataset | NAS Method | NAS Training-free | Params (M) | Search Time (GPU Sec) | Speed Up | Search Cost ($) | Accuracy on Target (%)
CIFAR-10 | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 93.97±0.00
CIFAR-10 | REA (Real et al., 2019) | | - | 0.02+T | - | - | 93.92±0.30
CIFAR-10 | RS (Bergstra & Bengio, 2012) | | - | 0.01+T | - | - | 93.70±0.36
CIFAR-10 | REINFORCE (Williams, 1992) | | - | 0.12+T | - | - | 93.85±0.37
CIFAR-10 | BOHB (Falkner et al., 2018) | | - | 3.59+T | - | - | 93.61±0.52
CIFAR-10 | RSPS (Li & Talwalkar, 2019) | | - | 10200 | 147× | 4.13 | 84.07±3.61
CIFAR-10 | SETN (Dong & Yang, 2019a) | | - | 30200 | 437× | 12.25 | 87.64±0.00
CIFAR-10 | GDAS (Dong & Yang, 2019b) | | - | 25077 | 363× | 10.17 | 93.61±0.09
CIFAR-10 | PC-DARTS (Xu et al., 2020) | | 1.17 | 10395 | 150× | 4.21 | 93.66±0.17
CIFAR-10 | DrNAS (Chen et al., 2021) | | 1.53 | 21760 | 315× | 8.82 | 94.36±0.00
CIFAR-10 | MetaD2A (Ours) | ✓ | 1.11 | 69 | 1× | 0.028 | 94.37±0.03
CIFAR-100 | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 70.86±0.00
CIFAR-100 | REA (Real et al., 2019) | | - | 0.02+T | - | - | 71.84±0.99
CIFAR-100 | RS (Bergstra & Bengio, 2012) | | - | 0.01+T | - | - | 71.04±1.07
CIFAR-100 | REINFORCE (Williams, 1992) | | - | 0.12+T | - | - | 71.71±1.09
CIFAR-100 | BOHB (Falkner et al., 2018) | | - | 3.59+T | - | - | 70.85±1.28
CIFAR-100 | RSPS (Li & Talwalkar, 2019) | | - | 18841 | 196× | 7.64 | 52.31±5.77
CIFAR-100 | SETN (Dong & Yang, 2019a) | | - | 58808 | 612× | 23.85 | 59.09±0.24
CIFAR-100 | GDAS (Dong & Yang, 2019b) | | - | 51580 | 537× | 20.91 | 70.70±0.30
CIFAR-100 | PC-DARTS (Xu et al., 2020) | | 0.26 | 19951 | 207× | 8.09 | 66.64±2.34
CIFAR-100 | DrNAS (Chen et al., 2021) | | 1.20 | 34529 | 359× | 14.00 | 73.51±0.00
CIFAR-100 | MetaD2A (Ours) | ✓ | 1.07 | 96 | 1× | 0.039 | 73.51±0.00
MNIST | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 99.67±0.01
MNIST | RSPS (Li & Talwalkar, 2019) | | 0.25 | 22457 | 3208× | 9.10 | 99.63±0.02
MNIST | SETN (Dong & Yang, 2019a) | | 0.56 | 69656 | 9950× | 28.24 | 99.69±0.04
MNIST | GDAS (Dong & Yang, 2019b) | | 0.82 | 60186 | 8598× | 24.40 | 99.64±0.04
MNIST | PC-DARTS (Xu et al., 2020) | | 0.62 | 24857 | 3551× | 10.08 | 99.66±0.04
MNIST | DrNAS (Chen et al., 2021) | | 1.53 | 44131 | 6304× | 17.89 | 99.59±0.02
MNIST | MetaD2A (Ours) | ✓ | 0.61 | 7 | 1× | 0.002 | 99.71±0.08
SVHN | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 96.13±0.19
SVHN | RSPS (Li & Talwalkar, 2019) | | 0.48 | 27962 | 3994× | 11.34 | 96.17±0.12
SVHN | SETN (Dong & Yang, 2019a) | | 0.48 | 85189 | 12169× | 34.54 | 96.02±0.12
SVHN | GDAS (Dong & Yang, 2019b) | | 0.24 | 71595 | 10227× | 10.17 | 95.57±0.57
SVHN | PC-DARTS (Xu et al., 2020) | | 0.47 | 31124 | 4446× | 12.62 | 95.40±0.67
SVHN | DrNAS (Chen et al., 2021) | | 1.53 | 52791 | 7541× | 21.40 | 96.30±0.05
SVHN | MetaD2A (Ours) | ✓ | 0.86 | 7 | 1× | 0.004 | 96.34±0.37
Aircraft | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 47.01±1.16
Aircraft | RSPS (Li & Talwalkar, 2019) | | 0.22 | 18697 | 1869× | 7.58 | 42.19±3.88
Aircraft | SETN (Dong & Yang, 2019a) | | 0.44 | 18564 | 1856× | 7.52 | 44.84±3.96
Aircraft | GDAS (Dong & Yang, 2019b) | | 0.62 | 18508 | 1850× | 7.50 | 53.52±0.48
Aircraft | PC-DARTS (Xu et al., 2020) | | 0.32 | 3524 | 352× | 1.42 | 26.33±3.40
Aircraft | DrNAS (Chen et al., 2021) | | 1.03 | 34529 | 3452× | 13.14 | 46.08±7.00
Aircraft | MetaD2A (Ours) | ✓ | 0.83 | 10 | 1× | 0.004 | 58.43±1.18
Oxford-IIIT Pets | ResNet (He et al., 2016) | | 0.86 | N/A | N/A | N/A | 25.58±3.43
Oxford-IIIT Pets | RSPS (Li & Talwalkar, 2019) | | 0.32 | 3360 | 420× | 1.36 | 22.91±1.65
Oxford-IIIT Pets | SETN (Dong & Yang, 2019a) | | 0.32 | 8625 | 1078× | 3.49 | 25.17±1.68
Oxford-IIIT Pets | GDAS (Dong & Yang, 2019b) | | 0.83 | 6965 | 870× | 2.82 | 24.02±2.75
Oxford-IIIT Pets | PC-DARTS (Xu et al., 2020) | | 0.44 | 2844 | 355× | 1.15 | 25.31±1.38
Oxford-IIIT Pets | DrNAS (Chen et al., 2021) | | 0.44 | 6019 | 752× | 2.44 | 26.73±2.61
Oxford-IIIT Pets | MetaD2A (Ours) | ✓ | 0.83 | 8 | 1× | 0.003 | 41.50±4.39

[Figure 3: four panels plotting ACC (%) versus FLOPS (M) on CIFAR10, AIRCRAFT, CIFAR100, and Oxford-IIIT-Pets, each comparing NSGANetV2 and MetaD2A.]
Figure 3: Performance on Unseen Datasets (Meta-Test). We show accuracy versus FLOPs for both MetaD2A and a transferable NAS method referred to as NSGANetV2 (Lu et al., 2020)
after meta-training MetaD2A on a source database consisting of subsets of ImageNet-1K and architectures in the MobileNetV3 search space. Note that each plot point is searched within 125 GPU seconds by MetaD2A." }, { "heading": "4.2.2 RESULTS ON UNSEEN DATASETS", "text": "We search and evaluate architectures multiple times with both NSGANetV2 and our model on four unseen datasets (CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets) with different random seeds. The search times of MetaD2A for CIFAR-10, CIFAR-100, Aircraft, and Oxford-IIIT Pets are within 57, 195, 77, and 170 GPU seconds on average, respectively, with a single Nvidia RTX 2080ti GPU, while NSGANetV2 needs 1 GPU day with 8 1080ti GPUs on each dataset, which is 5,523 times slower than MetaD2A. Besides the huge speed-up, Figure 3 shows that our model can search for architectures comparable to those of NSGANetV2 across FLOPs without a performance drop. Interestingly, even though we use naive FLOP filtering while NSGANetV2 uses an objective function with FLOP constraints, MetaD2A performs consistently comparably to NSGANetV2 over the different FLOP budgets. Overall, the results demonstrate that, with its meta-knowledge, our model can generalize to unseen datasets not only under the NAS-Bench-201 space but also under the larger MobileNetV3 space." }, { "heading": "4.3 COMPARISON WITH META-NAS APPROACHES", "text": "We further compare our method against Meta-NAS methods (Kim et al., 2018; Elsken et al., 2020; Lian et al., 2019; Shaw et al., 2019) on few-shot classification tasks, which are the main setting existing Meta-NAS methods have considered. Following (Elsken et al., 2020; Lian et al., 2019), we adopt bi-level optimization (e.g., the MAML framework) to meta-learn the initial weights of the neural architectures searched by our model on the meta-training set of MiniImageNet. As shown in Table 2, the few-shot classification results on MiniImageNet clearly show MetaD2A's effectiveness over existing Meta-NAS methods, as well as over conventional meta-learning methods without NAS (Finn et al., 2017; Antoniou et al., 2018)." }, { "heading": "4.4 EFFECTIVENESS OF METAD2A", "text": "Now, we verify the efficacy of each component of MetaD2A with further analysis.
Ablation Study on MetaD2A We train different variations of our model on the subsets of ImageNet-1K, and test on CIFAR10, CIFAR100, and Aircraft in Table 3, with the same experimental setup as the main experiments in Table 1. The MetaD2A generator without the performance predictor (Generator only) outperforms the simple random architecture sampler (Random Sampling), especially by 15.3% on Aircraft, which demonstrates the effectiveness of MetaD2A over the random sampler. Also, we observe that combining the meta-performance predictor with the random architecture sampler (Predictor only) enhances the accuracy of the final architecture on all datasets. Finally, MetaD2A combined with the performance predictor (MetaD2A) outperforms all baselines, especially by 20.28% on Aircraft, suggesting that our MetaD2A can output architectures that are more relevant to the given task.
Effectiveness of Set-to-Architecture Generator
[Figure 4: T-SNE plot of latent codes; markers correspond to subsets of MNIST, SVHN, CIFAR10, CIFAR100, Aircraft, and Pets under the NAS-Bench-201 search space.]
Figure 4: T-SNE vis.
[Figure 5: two histograms of the number of networks over CIFAR10 accuracy and CIFAR100 accuracy.]
Figure 5: The Quality of Generated Architectures.
We first visualize the cross-modal latent embeddings {z} of unseen datasets encoded by the meta-learned generator with T-SNE in Figure 4. Each marker indicates the {z} of subsets sampled from each dataset with different seeds. We observe that the generator separates the embeddings {z} well by dataset in the latent space, while clustering the z of subsets of the same dataset. Furthermore, we investigate the quality of the architectures generated from those embeddings {z}. In Figure 5, the generator samples 2,000 architecture candidates from the embeddings encoding each target dataset, and we compute the validation accuracy of those architectures. The proposed generator generates more high-performing architectures than the simple random architecture sampler for each target dataset. These results are consistent with Table 3, where the generator (Generator only) consistently enhances the performance compared with the simple random architecture sampler (Random Sampling) on CIFAR10 and CIFAR100. The meta-learned generator enables effective and efficient search by excluding the poor-performing architectures of the broad search space. We believe the generator could replace the random sampling stage of other NAS methods; we leave validating this to future work.
Could the Generator Create Novel Architectures? Since the generator maps set-architecture pairs in the continuous latent space, it can generate novel architectures at meta-test time that are not contained in the source database. To validate this, we evaluate 10,000 generated neural architecture samples in both search spaces with the measures Validity, Uniqueness, and Novelty, following (Zhang et al., 2019), in Table 4. These are defined as how often the model can generate valid neural architectures from the prior distribution, the proportion of unique graphs out of the valid generations, and the proportion of valid generations that are not included in the training set, respectively. For the NAS-Bench-201 and MobileNetV3 search spaces, respectively, the results show that the meta-learned generator can generate 67.31%/100% novel graphs that do not belong to the training set, and 35.19%/100% unique graphs, rather than always picking the same architectures seen in the source database.
Effectiveness of Meta-Predictor We first demonstrate the necessity of set encoding to handle multiple datasets with a single predictor. In Table 5, we meta-train all models on the source database of the NAS-Bench-201 search space and measure the Pearson correlation coefficient on the validation tasks (400 unseen tasks) of the source database. The Pearson correlation coefficient is the linear correlation between the actual performance and the predicted performance (the higher the better). Using both the dataset and the computational graph of the target architecture as inputs, instead of using graphs only (Graph Encoder Only), clearly leads to better performance when supporting multiple datasets. Moreover, the predictor with the proposed set encoder clearly shows a higher correlation than other set encoders (DeepSet (Zaheer et al., 2017), SetTransformer (Lee et al., 2019a), and Statistical Pooling (Lee et al., 2020))." }, { "heading": "5 CONCLUSION", "text": "We proposed a novel NAS framework, MetaD2A (Meta Dataset-to-Architecture), that can output a neural architecture for an unseen dataset.
The MetaD2A generator learns a dataset-to-architecture transformation over a database of datasets and neural architectures by encoding each dataset using a set encoder and generating each neural architecture with a graph decoder. While the model can generate a novel architecture for a new dataset via amortized inference, we further learn a meta-performance predictor to select the best architecture for the dataset among multiple sampled architectures. The experimental results show that our method achieves performance competitive with conventional NAS methods on various datasets with a very small search time, as it generalizes well across datasets. We believe that our work is a meaningful step toward building a practical NAS system for real-world scenarios, where we need to handle diverse datasets while minimizing the search cost.
Acknowledgements This work was conducted by the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD)." }, { "heading": "A DETAILS OF THE GENERATOR", "text": "A.1 GRAPH DECODING
To generate the $i$-th node $v_i$, we compute the operation type $o_{v_i} \in \mathbb{R}^{1 \times n_o}$ over $n_o$ operations based on the current graph state $h_G := h_{v_{i-1}}$, and then predict whether an edge exists between the node $v_i$ and each of the existing nodes. Following (Zhang et al., 2019), when we compute the edge probability $e_{\{v_j, v_i\}}$, we consider the nodes $\{v_j \mid j = i-1, \ldots, 1\}$ in reverse order, so that information from nodes close to $v_i$ is reflected before that of nodes close to the root when deciding on edge connections. Note that the proposed process guarantees the generation of a directed acyclic graph, since directed edges are always created from existing nodes to a new node.
The graph decoder starts from an initial hidden state $h_{v_0} = \mathrm{NN}_{init}(z)$, where $\mathrm{NN}_{init}$ is an MLP followed by tanh. For the $i$-th node $v_i$ in topological order, we compute the probability of each operation type $o_{v_i} \in \mathbb{R}^{1 \times n_o}$ over $n_o$ operations, given the current graph state as the last hidden node $h_G := h_{v_i}$. That is, $o_{v_i} = \mathrm{NN}_{node}(h_G)$, where $\mathrm{NN}_{node}$ is an MLP followed by softmax. When the predicted type of $v_i$ is the end-of-graph token, we stop the decoding process and connect all leaf nodes to $v_i$. Otherwise, we update the hidden state $h^{(t)}_{v_i}$ at time step $t$ as follows:
$$h^{(t+1)}_{v_i} = \mathrm{UPDATE}(i, m^{(t)}_{v_i}) \quad \text{where} \quad m^{(t)}_{v_i} = \sum_{u \in V^{in}_{v_i}} \mathrm{AGGREGATE}(h^{(t)}_u) \qquad (6)$$
The function $\mathrm{UPDATE}$ is a gated recurrent unit (GRU) (Cho et al., 2014), $i$ is the order of $v_i$, and $m^{(t)}_{v_i}$ is the incoming message to $v_i$. The function $\mathrm{AGGREGATE}$ consists of mapping and gating functions with MLPs, where $V^{in}_{v_i}$ is the set of predecessors with incoming edges to $v_i$. For all previously processed nodes $\{v_j \mid j = i-1, \ldots, 1\}$, we decide whether to link an edge from $v_j$ to $v_i$ by sampling the edge based on the edge connection probability $e_{\{v_j, v_i\}} = \mathrm{NN}_{edge}(h_j, h_i)$, where $\mathrm{NN}_{edge}$ is an MLP followed by sigmoid. We update $h_{v_i}$ by Eq. (6) whenever a new edge is connected to $v_i$. For the meta-test, we select the operation with the maximum probability for each node, and the edges with $e_{\{v_j, v_i\}} > 0.5$.
A.2 META-TRAINING OBJECTIVE
We meta-learn the model using Eq. (1). The expectation of the log-likelihood $\mathbb{E}_{z \sim q_\phi(z|D)}[\log p_\theta(G|z)]$ in Eq. (2) can be rewritten with the negative cross-entropy loss $-\mathcal{L}^{\tau}_{CE}$ for nodes and the negative binary cross-entropy loss $-\mathcal{L}^{\tau}_{BCE}$ for edges, and we slightly modify it to take the generated set-dependent graph $\tilde{G}$ and the ground-truth graph $G$ as input, as follows:
$$-\sum_{i \in V} \Big\{ \mathcal{L}^{\tau}_{CE}(\tilde{o}_i, o_i) + \sum_{j \in V_i} \mathcal{L}^{\tau}_{BCE}(\tilde{e}_{\{j,i\}}, e_{\{j,i\}}) \Big\} \qquad (7)$$
We substitute the log-likelihood term of Eq. (2) with Eq. (7) and learn the proposed generator by maximizing the objective (1) to learn $\phi, \theta$, which are shared across all tasks.
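To make the decoding step concrete, below is a minimal PyTorch sketch of one node-generation step: aggregate messages from the predecessors (Eq. (6)), update the node state with a GRU, then predict the operation type and the incoming-edge probabilities. The layer sizes and the single-layer AGGREGATE are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GraphDecoderStep(nn.Module):
    """Sketch of one decoding step: message aggregation (Eq. 6), GRU state
    update, operation-type prediction (NN_node) and edge prediction (NN_edge)."""
    def __init__(self, hidden_dim, num_ops):
        super().__init__()
        self.aggregate = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.update = nn.GRUCell(hidden_dim, hidden_dim)   # the UPDATE function
        self.nn_node = nn.Linear(hidden_dim, num_ops)      # followed by softmax
        self.nn_edge = nn.Linear(2 * hidden_dim, 1)        # followed by sigmoid

    def forward(self, prev_states, h_vi):
        # m_vi = sum of AGGREGATE(h_u) over predecessors u (zero if there are none)
        m = self.aggregate(prev_states).sum(dim=0, keepdim=True)
        h_vi = self.update(m, h_vi)                        # h_vi^{t+1}
        op_logits = self.nn_node(h_vi)                     # scores over the n_o operations
        # e_{v_j, v_i} for every previously processed node v_j, in one batched call
        pairs = torch.cat([prev_states, h_vi.expand_as(prev_states)], dim=-1)
        edge_probs = torch.sigmoid(self.nn_edge(pairs)).squeeze(-1)
        return h_vi, op_logits, edge_probs
```

At meta-test time, one would take `op_logits.argmax()` for the operation and keep the edges with `edge_probs > 0.5`, matching the selection rule above.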
" }, { "heading": "B GRAPH ENCODING OF THE SET-DEPENDENT PREDICTOR", "text": "For a given graph candidate $G$, we sequentially perform message passing for the nodes from their predecessors, following the topological order of the DAG $G$. We iteratively update the hidden states $h^{(t)}_{v_i}$ using Eq. (8), feeding in the hidden states of the predecessors $\{u \in V^{in}_{v_i}\}$:
$$h^{(t+1)}_{v_i} = \mathrm{UPDATE}(y_{v_i}, m^{(t)}_{v_i}) \quad \text{where} \quad m^{(t)}_{v_i} = \sum_{u \in V^{in}_{v_i}} \mathrm{AGGREGATE}(h^{(t)}_u) \qquad (8)$$
For the starting node $v_0$, whose set of predecessors is empty, we output the zero vector as the hidden state of $v_0$. We use the last hidden state of the ending node as the output $h_f$ of the graph encoder. Additionally, we exploit bi-directional encoding (Zhang et al., 2019), which reverses the node order to perform the encoding process. In this case, the final node becomes the starting point, and the backward graph encoder outputs $h_b$, the last hidden state of the starting node. We concatenate the output $h_f$ of the forward graph encoder and $h_b$ of the backward graph encoder as the final output of the bi-directional graph encoding." }, { "heading": "C BUILDING BLOCKS OF THE SET ENCODER", "text": "We use the Set Attention Block (SAB) and Pooling by Multi-head Attention (PMA) (Lee et al., 2019a), where the former learns features for each element in the set using self-attention, while the latter pools the input features into $k$ representative vectors. The Set Attention Block (SAB) is an attention-based block that makes the features of all instances in the set reflect the relations between each instance and the others:
$$\mathrm{SAB}(X) = \mathrm{LN}(H + \mathrm{MLP}(H)) \quad \text{where} \quad H = \mathrm{LN}(X + \mathrm{MH}(X, X, X)) \qquad (9)$$
where $\mathrm{LN}$ and $\mathrm{MLP}$ denote layer normalization (Ba et al., 2016) and a multilayer perceptron, respectively, and $H \in \mathbb{R}^{n_{B_c} \times d_H}$ is computed with multi-head attention $\mathrm{MH}(Q, K, V)$ (Vaswani et al., 2017), whose queries, keys, and values are the elements of the input set $X$.
Features encoded by the SAB layers can be pooled by PMA with learnable seed vectors $S \in \mathbb{R}^{k \times d_S}$ to produce $k$ vectors, by slightly modifying the computation of $H$ in Eq. (9):
$$\mathrm{PMA}(X) = \mathrm{LN}(H + \mathrm{MLP}(H)) \quad \text{where} \quad H = \mathrm{LN}(X + \mathrm{MH}(S, \mathrm{MLP}(X), \mathrm{MLP}(X))) \qquad (10)$$
While $k$ can be any size (e.g., $k = 1, 2, 10, 16$), we set $k = 1$ to generate a single latent vector. To extract consistent information that does not depend on the order or size of the input elements, the encoding functions should be constructed by stacking permutation-equivariant layers $E$, which satisfy the following condition for any permutation $\pi$ on a set $X$ (Zaheer et al., 2017):
$$E(\{x \mid x \in \pi X\}) = \pi E(\{x \mid x \in X\}) \qquad (11)$$
Since all of the components in SAB and PMA are row-wise computation functions, SAB and PMA are permutation equivariant by the definition in Eq. (11).
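A compact PyTorch sketch of SAB (Eq. (9)) and PMA (Eq. (10)) may be helpful. The head count and the residual taken around the seed vectors in PMA are our assumptions (the residual term in Eq. (10) is written over $X$, whose shape only matches the seeds), so treat this as an illustration rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Set Attention Block (Eq. 9): row-wise self-attention plus MLP,
    each wrapped in a residual connection and LayerNorm."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.mh = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                    # x: (batch, set_size, dim)
        h = self.ln1(x + self.mh(x, x, x)[0])
        return self.ln2(h + self.mlp(h))

class PMA(nn.Module):
    """Pooling by Multi-head Attention (Eq. 10): k learnable seeds attend
    over the MLP-transformed set and pool it into k vectors (k=1 here)."""
    def __init__(self, dim, num_heads=4, k=1):
        super().__init__()
        self.seeds = nn.Parameter(torch.randn(1, k, dim))
        self.mh = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pre = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                    # x: (batch, set_size, dim)
        s = self.seeds.expand(x.size(0), -1, -1)
        v = self.pre(x)
        h = self.ln1(s + self.mh(s, v, v)[0])
        return self.ln2(h + self.mlp(h))     # (batch, k, dim)
```

Both blocks compute row-wise functions only, so they satisfy the permutation-equivariance condition of Eq. (11), which is what makes the pooled dataset encoding order-invariant.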
" }, { "heading": "D SEARCH SPACE", "text": "Following NAS-Bench-201 (Dong & Yang, 2020), we explore a search space consisting of 15,625 possible cell-based neural architectures for all experiments. The macro skeleton is stacked with one stem cell, three stages of 5 cells each, and a residual block (He et al., 2016) between stages. The stem cell consists of a 3-by-3 convolution with 16 channels, and the cells of the first, second, and third stages have 16, 32, and 64 channels, respectively. The residual blocks have a convolution layer with stride 2 for down-sampling. A fully connected layer is attached to the macro skeleton for classification. Each cell is a DAG consisting of 4 fixed nodes and 6 fixed edge connections. For each edge connection, NAS models select one of 5 operation candidates: zeroize, skip connection, 1-by-1 convolution, 3-by-3 convolution, and 3-by-3 average pooling. To effectively encode the operation information as node features, we represent the edges of graphs in NAS-Bench-201 as nodes, and their nodes as edges. Additionally, we add a starting node and an ending node to the cell during training. All nodes which have no predecessors (successors) are connected to the starting (ending) node, which we delete after generating the full neural architectures." }, { "heading": "E EXPERIMENTAL SETUP", "text": "E.1 DATASET
1) CIFAR-10 (Krizhevsky et al., 2009): This dataset is a popular benchmark for NAS, which consists of 32×32 colour images from 10 general object classes. The training set consists of 50K images, 5K for each class, and the test set consists of 10K images, 1K for each class. 2) CIFAR-100 (Krizhevsky et al., 2009): This dataset consists of colored images from 100 fine-grained general object classes. Each class has 500/100 images for training and test, respectively. 3) MNIST (LeCun & Cortes, 2010): This is a standard image classification dataset which contains 70K 28×28 grey-scale images depicting 10 digits. We upsample the images to 32×32 pixels to satisfy the minimum input size required by NAS-Bench-201, due to the residual blocks in the macro skeleton. We use the training/test split from the original dataset, where 60K images are used for training and 10K for test. 4) SVHN (Netzer et al., 2011): This dataset consists of 32×32 color images, each showing a digit against a natural scene background. The 10 classes correspond to digits 1 to 10, and the numbers of training/test images are 73,257/26,032, respectively. 5) Aircraft (Maji et al., 2013): This is a fine-grained classification benchmark dataset containing 10K images from 30 different aircraft classes. We resize all images to 32×32. 6) Oxford-IIIT Pets (Parkhi et al., 2012): This is a fine-grained classification dataset with 37 breeds of pets and roughly 200 instances per class. Since no split file is provided, we use 85% of the dataset for training and the remaining 15% as the test set. We also resize all images to 32×32. For CIFAR10 and CIFAR100, we use the training, validation, and test splits from NAS-Bench-201, and we create random validation/test splits for MNIST, SVHN, Aircraft, and Oxford-IIIT Pets by splitting the test set into two subsets of the same size. The validation set is used as a supervision signal to update the search algorithms, and the test set is used to evaluate the performance of the searched architectures.
E.2 BASELINES
We now briefly describe the baseline models and our MetaD2A model. 1) ResNet (He et al., 2016) This is a convolutional network with residual connections that add the output of a previous layer to the input of the current layer. It has achieved impressive performance on many challenging image tasks. We use ResNet56 in all experiments. 2) REA (Real et al., 2019) This is an evolution-based search method using aging-based tournament selection, showing that evolution can work in NAS. 3) RS (Bergstra & Bengio, 2012) This is based on random search: we randomly sample architectures until the total time of training and evaluation reaches the budget. 4) REINFORCE (Williams, 1992) This is an RL-based NAS. We reward the model with the validation accuracy after 12 epochs of training.
5) BOHB (Falkner et al., 2018) This combines the strengths of tree-structured Parzen estimator-based Bayesian optimization and Hyperband, performing better than standard Bayesian optimization methods. 6) RSPS (Li & Talwalkar, 2019) This method is a combination of random search and weight sharing, which trains randomly sampled sub-graphs from a weight-shared DAG of the search space. The method then selects the best-performing sub-graph among the sampled ones as the final neural architecture. 7) SETN (Dong & Yang, 2019a) SETN is a one-shot NAS method, which selectively samples competitive child candidates by learning to evaluate the quality of the candidates based on the validation loss. 8) GDAS (Dong & Yang, 2019b) This is a Gumbel-Softmax-based differentiable neural architecture sampler, which is trained to minimize the validation loss with architectures sampled from DAGs. 9) PC-DARTS (Xu et al., 2020) This is a gradient-based NAS which partially samples channels when applying operations, to improve the efficiency of NAS in terms of memory usage and search time compared to DARTS. We use the code at https://github.com/yuhuixu1993/PC-DARTS. 10) DrNAS (Chen et al., 2021) This is a NAS approach that introduces a Dirichlet distribution to approximate the architecture distribution, to enhance the generalization performance of differentiable architecture search. We use the code at https://github.com/xiangning-chen/DrNAS. We report the results on CIFAR10 and CIFAR100 in this paper using the authors' code on the NAS-Bench-201 splits, while the results reported in the authors' paper, 94.37 and 73.51 respectively, use random training/test splits of CIFAR10 and CIFAR100. 11) MetaD2A (Ours) This is our meta-NAS framework described in Section 3, which can stochastically generate task-dependent computational graphs from a given dataset and use the performance predictor to select the best-performing candidates. We follow the same settings as NAS-Bench-201 (Dong & Yang, 2020) for all baselines and use the code at https://github.com/D-X-Y/AutoDL-Projects, except for 9), 10), and 11).
E.3 IMPLEMENTATION DETAILS
We use embedding features as inputs to the proposed set encoder instead of raw images, where the embedding features are generated by a ResNet18 (He et al., 2016) pretrained on ImageNet-1K (Deng et al., 2009). We adopt the teacher-forcing training strategy (Jin et al., 2018), which performs the current decoding step after correcting the graph decoded up to the previous step to match the true graph. This strategy is used only during meta-training; in the meta-test, we generate subsequent parts based on the currently decoded graph without the true graph information. We use mini-batch gradient descent to train the model with Eq. (1). The hyperparameter values used for both the MetaD2A generator and predictor in this paper are described in Table 6. To train the searched neural architectures for all datasets, we follow the hyperparameter setting of NAS-Bench-201 (Dong & Yang, 2020), which is used for training searched neural architectures on CIFAR10 and CIFAR100. We report accuracy after training for 50 epochs on MNIST, and after 200 epochs for all other datasets." } ]
2,021
null
SP:74dc640c4b7e724036bc4f772059fab7e9e33007
[ "In this paper, the authors investigate the inner-loop optimization mechanism of meta-learning algorithms. The analysis shows the effectiveness of the multi-step adaptation and (1) the key of meta-learning is how to design a well-differentiated classifier. They then propose Random Decision Planes (RDP) and Meta Contrastive Learning (MCL) and achieve comparable performance with existing methods." ]
Meta learning, an effective way of learning unseen tasks with few samples, is an important research area in machine learning. Model-Agnostic Meta-Learning (MAML) (Finn et al. (2017)) is one of the most well-known gradient-based meta learning algorithms; it learns the meta-initialization through an inner and an outer optimization loop. The inner loop performs fast adaptation in several gradient update steps with the support datapoints, while the outer loop generalizes the updated model to the query datapoints. Recently, it has been argued that, rather than enabling rapid learning and adaptation, the meta-initialization learned through MAML has already absorbed a high-quality feature prior, with the task-specific head at training facilitating the feature learning. In this work, we investigate the impact of the task-specific adaptation of MAML and discuss the general formula for other gradient-based and metric-based meta-learning approaches. From our analysis, we further devise the Random Decision Planes (RDP) algorithm to find a suitable linear classifier without any gradient descent step, and the Meta Contrastive Learning (MCL) algorithm to exploit the inter-sample relationship instead of the expensive inner-loop adaptation. We conduct extensive experiments on various datasets to explore our proposed algorithms.
[]
[ { "authors": [ "Luca Bertinetto", "Joao F Henriques", "Philip HS Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "arXiv preprint arXiv:1805.08136,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": null, "year": 1904 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "arXiv preprint arXiv:1703.03400,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Micah Goldblum", "Steven Reich", "Liam Fowl", "Renkun Ni", "Valeriia Cherepanova", "Tom Goldstein" ], "title": "Unraveling meta-learning: Understanding feature representations for few-shot tasks", "venue": "arXiv preprint arXiv:2002.06753,", "year": 2020 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical bayes", "venue": "arXiv preprint arXiv:1801.08930,", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 (canadian institute for advanced research)", "venue": "URL http://www. cs. toronto. edu/kriz/cifar. 
html,", "year": 2010 }, { "authors": [ "Hae Beom Lee", "Hayeon Lee", "Donghyun Na", "Saehoon Kim", "Minseop Park", "Eunho Yang", "Sung Ju Hwang" ], "title": "Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks", "venue": "arXiv preprint arXiv:1905.12917,", "year": 2019 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": null, "year": 1909 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "arXiv preprint arXiv:1807.05960,", "year": 2018 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Metalearning with memory-augmented neural networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in 
neural information processing systems,", "year": 2016 }, { "authors": [ "Risto Vuorio", "Shao-Hua Sun", "Hexiang Hu", "Joseph J Lim" ], "title": "Multimodal model-agnostic metalearning via task-aware modulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Few-shot learning, aiming to learn from few labelled examples, is a great challenge for modern machine learning systems. Meta learning, an effective way for tracking this challenge, enables the model to learn general knowledge across a distribution of tasks. Various ideas of meta learning have been proposed to address the few-shot problems. Gradient-based meta learning (Finn et al. (2017); Nichol et al. (2018)) learns the meta-parameters that can be quickly adapted to new tasks by few gradient descent steps. Metric-based meta learning (Koch et al. (2015); Vinyals et al. (2016); Snell et al. (2017)) proposes to learn a metric space by comparing different datapoints. Memorybased meta learning (Santoro et al. (2016)) can rapidly assimilate new data and leverage the stored information to make predictions.\nModel Agnostic Meta-Learning (MAML) (Finn et al. (2017)) is one of the most well-known gradient-based meta learning algorithms, that learns the meta-initialization parameters through the inner optimization loop and the outer optimization loop. For a given task, the inner loop is to perform fast adaptation in several gradient descent steps with the support datapoints, while the outer loop to generalize the updated model to the query datapoints. With the learned meta-initialization, the model can be quickly adapted to the unseen tasks with few labelled samples. Following the MAML algorithm, many significant variants (Finn et al. (2018); Rusu et al. (2018); Oreshkin et al. (2018); Bertinetto et al. (2018); Lee et al. (2019b)) are studied under the few-shot setting.\nTo understand how the MAML works, Raghu et al. (2019) conduct a series of experiments and claim that rather than rapid learning and adaptation, the learned meta-initialization has already absorbed the high-quality features prior, thus the representations after fine-tuning are almost the same for the coming unseen tasks. Also, the task specific head of MAML at training facilitates the learning of better features. In this paper, we further design more representative experiments and present a formal argument to explain the importance of the task specific adaptation. Actually, the multi-step taskspecific adaptation, making the body and head have similar classification capabilities, can provide better gradient descent direction for the features learning of body. We also notice that for both the\ngradient-based methods (e.g. MAML (Finn et al. (2017)), MetaOptNet (Lee et al. (2019b))) and metric-based methods (e.g. Prototypical Networks (Snell et al. (2017))) that attempt to learn a taskspecific head using the support datapoints, the adaptation is a common mode for features learning of body but varied in different methods.\nBased on our analysis, we first propose a new training paradigm to find a decision plane (linear classifier) for guidance with no gradient descent step during the inner loop and get more supporting conclusions. Moreover, we devise another training paradigm that removes the inner loop and trains the model with only the query datapoints. Specifically, inspired by contrastive representation learning (Oord et al. (2018); Chen et al. (2020); He et al. (2020)), we exploit the inter-samples relationship of query set to find a guidance for the body across different tasks. This meta contrastive learning algorithm even achieves competitive results comparable to some state-of-the-art methods. In total, our contributions can be listed as follows:\n1. 
We present extensive experiments and a formal argument to explore the impact of the task-specific adaptation on the body's feature learning, and discuss the general formula for other gradient-based and metric-based meta-learning approaches.
2. We devise a training algorithm that obtains a decision plane with no gradient descent step during the inner loop, named Random Decision Planes (RDP), and obtain more supporting conclusions.
3. Unlike prior gradient-based methods, we propose the Meta Contrastive Learning (MCL) algorithm to exploit inter-sample relations instead of training a task-specific head during the inner loop. Even without the task-specific adaptation for guidance, our algorithm still achieves better results with even lower computation costs.
4. We empirically show the effectiveness of the proposed algorithms with different backbones on four benchmark datasets: miniImageNet (Vinyals et al. (2016)), tieredImageNet (Ren et al. (2018)), CIFAR-FS (Bertinetto et al. (2018)) and FC100 (Oreshkin et al. (2018))." }, { "heading": "2 RELATED WORKS", "text": "MAML (Finn et al. (2017)) is a highly influential gradient-based meta learning algorithm for few-shot learning. Its strong experimental results on several public few-shot datasets have proved its effectiveness. Following the core idea of MAML, numerous works address the data insufficiency problem in few-shot learning. Some works (Oreshkin et al. (2018); Vuorio et al. (2019)) introduce task-dependent representations by conditioning the feature extractor on the specific task to improve performance. Sun et al. (2019) also employ meta-learned scaling and shifting parameters for transferring from another large-scale dataset. Others (Grant et al. (2018); Finn et al. (2018); Lee et al. (2019a)) study this problem from a Bayesian perspective. Unlike prior methods, we provide two training paradigms, one with no gradient descent step during the inner loop, and another removing the inner loop and exploiting inter-sample relations for training.
Recent works also explore the key factors that make the meta-learned model perform better than others at few-shot tasks. Chen et al. (2019) discover that a deeper backbone has a large effect on the success of meta-learning algorithms, while Goldblum et al. (2020) find that meta-learning tends to cluster object classes more tightly in feature space for those methods that fix the backbone during the inner loop (Bertinetto et al. (2018); Rusu et al. (2018)). A very recent work (Raghu et al. (2019)) argues that the meta-trained model can be applied to new tasks due to the high-quality feature prior learned by the meta-initialized parameters, rather than due to rapid learning. In this paper, we further study the impact of the task-specific adaptation on feature learning. Based on the analysis, we devise two algorithms, Random Decision Planes (RDP) and Meta Contrastive Learning (MCL), requiring less computation but still achieving competitive performance." }, { "heading": "3 MODEL-AGNOSTIC META LEARNING (MAML)", "text": "MAML aims to learn the meta-initialized parameters $\theta$ for unseen tasks through the inner optimization loop and the outer optimization loop. Under the $N$-way-$K$-shot setting, for a task $T_b$ sampled from the task distribution $P(T)$, we have a support set $T^s_b$ of $N \times K$ examples and a query set $T^q_b$, where $N$ is the number of sampled classes and $K$ is the number of instances per class.
During the inner loop, with the support set $T^s_b$, we perform fast adaptation in several gradient descent steps and obtain the task-specific parameters $\theta^t_{T_b}$, where $t$ is the number of gradient descent steps, given by:
$$\theta^t_{T_b} = \theta^{t-1}_{T_b} - \alpha \nabla_{\theta^{t-1}_{T_b}} \mathcal{L}_{T^s_b}(\theta^{t-1}_{T_b}) \qquad (1)$$
where $\alpha$ is the step size for the inner loop and $\mathcal{L}_{T^s_b}(\theta^{t-1}_{T_b})$ denotes the loss on the support set $T^s_b$ after $t-1$ steps. With the query set $T^q_b$, we compute the meta loss on the task-specific parameters $\theta^t_{T_b}$ and backpropagate to update the meta-initialized parameters $\theta$, given by
$$\theta = \theta - \beta \nabla_{\theta} \frac{1}{B} \sum_{b=1}^{B} \mathcal{L}_{T^q_b}(\theta^t_{T_b}) \qquad (2)$$
where $\beta$ is the learning rate and $B$ is the number of sampled tasks in a batch."
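The two loops of Eqs. (1)-(2) can be sketched in a few lines of PyTorch. This is a minimal second-order illustration rather than a full implementation; `support_loss_fn` and `query_loss_fn` are assumed to be task-specific loss closures over a list of parameter tensors.

```python
import torch

def maml_meta_step(theta, tasks, alpha=0.01, beta=1e-3, inner_steps=5):
    """One meta-update: per-task inner-loop adaptation (Eq. 1) followed by
    an outer-loop update of the meta-initialization theta (Eq. 2)."""
    meta_grads = [torch.zeros_like(p) for p in theta]
    for support_loss_fn, query_loss_fn in tasks:
        # Inner loop (Eq. 1): t gradient steps on the support set.
        adapted = [p.clone() for p in theta]          # clone keeps the graph to theta
        for _ in range(inner_steps):
            grads = torch.autograd.grad(support_loss_fn(adapted), adapted,
                                        create_graph=True)
            adapted = [p - alpha * g for p, g in zip(adapted, grads)]
        # Outer loop (Eq. 2): differentiate the query loss back to theta.
        grads = torch.autograd.grad(query_loss_fn(adapted), theta)
        meta_grads = [m + g / len(tasks) for m, g in zip(meta_grads, grads)]
    with torch.no_grad():
        for p, g in zip(theta, meta_grads):
            p -= beta * g                             # theta <- theta - beta * grad
```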
 }, { "heading": "4 IMPACT OF TASK-SPECIFIC ADAPTATION", "text": "" }, { "heading": "4.1 THE MULTI-STEP TASK-SPECIFIC ADAPTATION IS IMPORTANT.", "text": "To explore the effectiveness of MAML, Raghu et al. (2019) have conducted extensive experiments, indicating that the network body (the representation layers) has already absorbed a high-quality feature prior. During meta-testing, instead of fine-tuning the network head (the classifier), simply building prototypes with the support set can achieve performance comparable to MAML. Raghu et al. (2019) also show that the task specificity of the head at training can facilitate feature learning and ensure good representation learning in the network body. In our work, we show that besides the task specificity of the head, the multi-step adaptation is also essential, and we further study the roles of the network body and head during meta-training. We devise several methods using different training regimes: (1) Multi-Task, where all tasks simply share one common head and the model is trained in a traditional way without inner-loop adaptation; (2) Multi-Head, where different tasks are equipped with different heads for task specificity and the model is trained in a traditional way without inner-loop adaptation; (3) Almost No Inner Loop (ANIL), where the network body is fixed during the inner loop; (4) Body Outer Loop, Head Inner Loop (BOHI), where the network body is updated only by the outer loop and the head is adapted only during the inner loop, leaving the head's meta-initialized parameters unchanged. More algorithm details can be found in Appendix B, and implementation details can be found in Appendix C.1.
Following Raghu et al. (2019), we employ the cosine similarities between prototypes and the query datapoints to evaluate the quality of the learned features. As Table 1 shows, even equipped with task-specific heads, the Multi-Head training still performs worse than the standard MAML algorithm by a large margin, indicating that the multi-step adaptation of MAML is helpful for feature learning. The results of Multi-Head and Multi-Task show the importance of multi-step task-specific adaptation.
As the results in Table 1 show, ANIL training remains comparably effective to the standard MAML algorithm, indicating that task-specific adaptation of the network body is unnecessary for learning good features. More interestingly, the BOHI training that keeps the meta-initialization of the head unchanged even performs better than MAML, further demonstrating that good feature learning depends more on the multi-step task-specific adaptation of the head during the inner loop than on updating the head's meta-initialization in the outer loop. Also, ANIL and BOHI have similar performance, indicating that, compared with the learned prior knowledge in the head, the inner-loop adaptation, as a form of guidance, contributes more to feature learning. More experimental results can be found in Appendix C.2." }, { "heading": "4.2 WHY IS MULTI-STEP TASK-SPECIFIC ADAPTATION IMPORTANT?", "text": "Having observed that the MAML algorithm outperforms Multi-Task training by a large margin and that the multi-step task-specific adaptation is important for feature learning, we extend our analysis to explore why the inner-loop adaptation is essential for MAML at different stages of meta-training. Specifically, we freeze the initialized MAML model and the model at 5,000 iterations, sample validation tasks from the task distribution, and record the test accuracy of the models at different inner-loop steps. Both the body accuracy based on prototype construction and the head accuracy based on fine-tuning are given in Figure 1 and Figure 2, where "Task ID" stands for different tasks. As the results show, at different stages of meta-training, the head accuracy increases significantly in the first few adaptation steps, since the model has learnt the correspondence between samples and labels. However, at the beginning of training, there is only a small improvement in the body accuracy after the first adaptation step. In Figure 2, as the model converges, the body accuracy even decreases in the first few adaptation steps. In the following steps, with the task-specific adaptation of the head, the network body then learns better representations, further demonstrating that the multi-step task-specific adaptation, making the body and head have similar classification capabilities, can be regarded as a guidance that provides a better gradient descent direction for the feature learning of the body.
Algorithm 1 The Random Decision Planes (RDP) Algorithm for N-way-K-shot learning
Input: Network body $f_\theta$, learning rate $\beta$, task distribution $P(T)$
Perform the Gram-Schmidt method on random matrices to get the classifier set $P = \{W_i\}_{i=1}^{n_p}$
while not done do
  Sample a batch of tasks $\{T_b\}_{b=1}^{B}$, where $T_b \sim P(T)$
  for $b \in \{1, \ldots, B\}$ do
    Sample the support set $T^s_b = \{(x^s_i, y^s_i)\}_{i=1}^{N \times K}$ and the query set $T^q_b = \{(x^q_i, y^q_i)\}_{i=1}^{N \times K}$ from task $T_b$
    for each sample $x$ in $\{T^s_b, T^q_b\}$ do
      $z = f_\theta(x) / \|f_\theta(x)\|$
    end for
    Define CrossEntropyLoss($H$, $D$) as the cross-entropy loss on the feature representation set $D$ with head $H$
    $W^\star = \arg\min_{W \in P} \mathrm{CrossEntropyLoss}(W, \{(z^s_i, y^s_i)\}_{i=1}^{N \times K})$
    $L_b = \mathrm{CrossEntropyLoss}(W^\star, \{(z^q_i, y^q_i)\}_{i=1}^{N \times K})$
  end for
  $\theta = \theta - \beta \nabla_\theta \frac{1}{B} \sum_{b=1}^{B} L_b$
end while
To understand this intuitive argument better, we consider a sample $(x, y)$ for few-shot classification where the cross-entropy loss is employed, formulated as:
$$\mathcal{L}_c = -\log\Big(\frac{\exp(w_y^\top h)}{\sum_k \exp(w_k^\top h)}\Big) = -w_y^\top h + \log\Big(\sum_k \exp(w_k^\top h)\Big) \qquad (3)$$
where $\{w_1, w_2, \ldots, w_k\}$ are the weights of the classifier head and $h$ is the body representation of $x$. The gradient of the loss $\mathcal{L}_c$ with respect to the body representation $h$ is
$$\frac{\partial \mathcal{L}_c}{\partial h} = -w_y + \frac{\sum_k w_k \exp(w_k^\top h)}{\sum_k \exp(w_k^\top h)} = -w_y + \bar{w} \qquad (4)$$
where $\bar{w}$ is exactly the softmax-weighted average of the weights $\{w_1, w_2, \ldots, w_k\}$. As shown in Equation 4, a reasonable direction for the network body to minimize the target loss $\mathcal{L}_c$ is to make the representation $h$ closer to the corresponding class weight $w_y$, given by $h = h + \lambda(w_y - \bar{w})$.
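The identity in Eq. (4) is easy to verify numerically with autograd; the following check (our own, not from the paper) confirms that the gradient of the cross-entropy loss with respect to $h$ equals $-w_y + \bar{w}$.

```python
import torch

torch.manual_seed(0)
K, d = 5, 8
W = torch.randn(K, d)                   # classifier weights w_1, ..., w_K
h = torch.randn(d, requires_grad=True)  # body representation
y = 2                                   # true class

logits = W @ h
loss = -logits[y] + torch.logsumexp(logits, dim=0)   # Eq. (3)
loss.backward()

p = torch.softmax(logits.detach(), dim=0)
w_bar = (p.unsqueeze(1) * W).sum(dim=0)              # softmax-weighted average
print(torch.allclose(h.grad, -W[y] + w_bar, atol=1e-5))   # True
```

A gradient step on $h$ therefore moves it toward $w_y$ and away from $\bar{w}$, which is exactly the update direction $h + \lambda(w_y - \bar{w})$ discussed above.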
As the model converges, in the first few adaptation steps there is a significant margin between the performance of the head and the body, and the classifier weights contain little knowledge about the correspondence between samples and labels or the differences between classes. With a low-performance head, this update rule for the body may lead to a decline in the quality of the features, which also explains why the simpler BOHI and ANIL even perform better than MAML in Table 1. After several adaptation steps during the inner loop, the body then receives useful guidance for feature learning from the task-specific head, since $w_y$ can better express its corresponding class. The formulation above shows that the multi-step task-specific adaptation, making the body and head have similar classification capabilities, can provide a better gradient descent direction for the feature learning of the body." }, { "heading": "4.3 TASK-SPECIFIC ADAPTATION IN OTHER META-LEARNING ALGORITHMS", "text": "We have seen that the multi-step task-specific adaptation of MAML, which promotes the performance of the head, can facilitate the feature learning of the body. It works similarly for other gradient-based methods that use end-to-end fine-tuning, such as Reptile (Nichol et al. (2018)). In the case of meta-learning methods that fix the network body and only update the head during the inner loop, such as MetaOptNet (Lee et al. (2019b)) and R2-D2 (Bertinetto et al. (2018)), the convex optimization of the head also aims to provide a classifier with better classification capabilities. For metric-based methods, such as Prototypical Networks (Snell et al. (2017)), the adaptation of the head is actually conducted through the nearest-neighbor algorithm. In conclusion, the adaptation is a common mechanism but varies across methods. These meta-learning algorithms reveal a general formula: the inner loop builds a task-specific head that matches the classification capabilities of the body, and the outer loop performs task-independent feature learning." }, { "heading": "5 THE RANDOM DECISION PLANES ALGORITHM", "text": "As discussed above, the multi-step adaptation based on gradient descent during the inner loop aims to provide guidance for the feature learning of the body. From this consideration, we suppose that if a suitable linear classifier is given, feature learning can be facilitated even without gradient descent during the inner loop. To this end, we devise an algorithm named Random Decision Planes (RDP), where a classifier is chosen from a predefined set $P$ according to the target loss on the support set. The predefined classifier set $P$ consists of $n_p$ different orthonormal matrices generated by applying the Gram-Schmidt method to random matrices. During the inner loop, without gradient descent, we directly choose the most suitable classifier as the network head, i.e., the one minimizing the cross-entropy loss on the support set. In the outer loop, we compute the loss based on the chosen head and backpropagate to update the network body. A formal description of RDP is presented in Algorithm 1. The implementation details can be found in Appendix C.1.
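A minimal PyTorch sketch of the two gradient-free pieces of Algorithm 1 — building the orthonormal classifier set and selecting the best plane on the support set — might look as follows. We use QR decomposition as a stand-in for Gram-Schmidt, and details such as feature normalization follow our reading of the algorithm rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def build_decision_planes(n_planes, n_classes, feat_dim):
    """Classifier set P: orthonormal rows obtained via QR decomposition
    (equivalent to Gram-Schmidt) of random matrices; assumes feat_dim >= n_classes."""
    planes = []
    for _ in range(n_planes):
        q, _ = torch.linalg.qr(torch.randn(feat_dim, n_classes))
        planes.append(q.t())                 # (n_classes, feat_dim), orthonormal rows
    return planes

def pick_best_plane(planes, z_support, y_support):
    """Inner loop of Algorithm 1: pick the plane with the lowest support
    cross-entropy; no gradient step is taken."""
    with torch.no_grad():
        losses = torch.stack([F.cross_entropy(z_support @ W.t(), y_support)
                              for W in planes])
    return planes[int(losses.argmin())]
```

The outer loop then computes the query cross-entropy with the selected plane and backpropagates only into the body $f_\theta$.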
The overall evaluation results on three datasets are presented in Table 2. Note that we also remove the head and construct the prototypes from the body network $f_\theta$ for predictions during meta-testing. The proposed RDP algorithm performs comparably to the standard MAML method on the three datasets, especially on the FC100 dataset. Without any task-specific adaptation of the network body, the best-performing classifier chosen from a set of randomly generated subspaces can also serve as guidance to facilitate feature learning. This further suggests that a head with better classification capabilities is the key factor for learning good representations, even if the chosen approximate head performs worse than a gradient-based head, and that the main purpose of task-specific adaptation is to adjust the low-performance head for the body's feature learning.
Algorithm 2 The Meta Contrastive Learning (MCL) Algorithm for N-way learning
Input: Network body $f_\theta$, projection layer $g_\phi$, learning rate $\beta$, constant $\tau$, task distribution $P(T)$
while not done do
  Sample a batch of tasks $\{T_b\}_{b=1}^{B}$, where $T_b \sim P(T)$
  for $b \in \{1, \ldots, B\}$ do
    Sample the query set $T^q_b = \{(x^q_i, y^q_i)\}_{i=1}^{2N}$ from task $T_b$, where $y^q_{2k-1} = y^q_{2k}$ for $k \in \{1, \ldots, N\}$
    for $i \in \{1, \ldots, 2N\}$ do
      $z_i = g_\phi(f_\theta(x^q_i))$
    end for
    for $i \in \{1, \ldots, 2N\}$ and $j \in \{1, \ldots, 2N\}$ do
      $s_{i,j} = z_i^\top z_j / (\|z_i\| \|z_j\|)$
    end for
    Define $l(i, j) = -\log\big(\exp(s_{i,j}/\tau) / \sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(s_{i,k}/\tau)\big)$
    $L_b = \frac{1}{2N} \sum_{k=1}^{N} [\, l(2k-1, 2k) + l(2k, 2k-1) \,]$
  end for
  $\theta = \theta - \beta \nabla_\theta \frac{1}{B} \sum_{b=1}^{B} L_b$
  $\phi = \phi - \beta \nabla_\phi \frac{1}{B} \sum_{b=1}^{B} L_b$
end while
Also, we conduct experiments to explore the impact of the number of decision planes. Results on two datasets are shown in Figure 3. With a small set of decision planes, it can be more difficult to find a suitable head to guide feature learning, while with enough decision planes the performance reaches its upper limit." }, { "heading": "6 THE META CONTRASTIVE LEARNING ALGORITHM", "text": "We have already seen that the multi-step task-specific adaptation, which improves the classifier head, can essentially facilitate the body's feature learning. In general, prior gradient-based methods based on the cross-entropy loss learn the correspondence between samples and assigned labels for different tasks, thus requiring task-specific adaptation of the classifier head during the inner loop. Since the task-specific head also serves the body's feature learning, we wonder whether we can remove the inner loop, or the adaptation, and make full use of the label information in another way to guide feature learning. From this consideration, and inspired by recent works (Chen et al. (2020); He et al. (2020)) on self-supervised contrastive learning, we devise the Meta Contrastive Learning (MCL) algorithm, which directly removes the inner loop and exploits the inter-sample relationship with only the query set.
Specifically, rather than using the cross-entropy loss for task-specific adaptation, we simply impose that normalized representations from the same class are closer together than representations from different classes. For $N$-way few-shot learning, we sample two examples per class to build the query set. Then, for a given anchor example, the meta contrastive loss pulls it closer to the point of the same class while pushing it farther away from the negative examples of the other classes. Following Chen et al. (2020), we also employ a small neural network projection layer that maps the body features to the space where the contrastive loss is applied. A formal description of MCL is presented in Algorithm 2. The implementation details can be found in Appendix C.1.
During meta-testing, we discard the projection layer $g_\phi$ and construct the prototypes from the body network $f_\theta$ for predictions.
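The loss of Algorithm 2 can be written compactly; the sketch below is an illustrative PyTorch version, assuming the $2N$ query embeddings are ordered so that consecutive pairs $(2k-1, 2k)$ share a class.

```python
import torch
import torch.nn.functional as F

def meta_contrastive_loss(z, tau=0.5):
    """Loss of Algorithm 2: each anchor is pulled toward its same-class
    positive and pushed away from the remaining 2N - 2 embeddings."""
    z = F.normalize(z, dim=1)                  # so dot products are cosine s_{i,j}
    sim = z @ z.t() / tau                      # (2N, 2N) similarity matrix
    n = z.size(0)
    mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf')) # the 1[k != i] exclusion
    pos = torch.arange(n) ^ 1                  # positive index: 0<->1, 2<->3, ...
    # cross_entropy over each row equals l(i, j), averaged over all 2N anchors
    return F.cross_entropy(sim, pos)
```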
The overall evaluation results on the MiniImageNet, TieredImageNet and FC100 datasets are presented in Table 3. Note that TADAM (Oreshkin et al. (2018)) employs an extra task embedding network (TEN) block to predict element-wise scale and shift vectors, and MetaOptNet (Lee et al. (2019b)) proposes to learn a linear support vector machine (SVM) as the classifier head during the inner loop. Unlike those methods, our MCL method is arguably simpler. By exploiting the relationships between different samples, we are able to remove the inner loop, which contains a complex adaptation process, and devise a contrastive loss to train the network body directly. As the results show, our method outperforms almost all previous well-designed methods and also achieves results comparable to MetaOptNet. More experimental results and a time-efficiency analysis can be found in Appendices C.2 and C.3.
We also study the impact of the projection layer $g_\phi$. Figure 4 shows the evaluation results with different output dimensions. Note that "None" means that there is no projection layer for the loss computation. As the results show, for a deeper ResNet12 backbone, the projection layer facilitates feature learning considerably (>6% for 5-shot, >5% for 1-shot). We conjecture that the projection layer is trained to extract task-specific information useful for the contrastive loss, while the body representation $h$ learns more general information. More analysis can be found in Appendix C.4." }, { "heading": "7 CONCLUSION", "text": "In this paper, based on the hypothesis that feature reuse is the dominant factor for the success of the MAML algorithm, we further study the impact of task-specific adaptation and devise several training regimes, including BOHI and Multi-Head. Also, we provide a more formal argument from the perspective of gradient descent optimization. Based on the analysis above, we find that the multi-step task-specific adaptation, making the body and head have similar classification capabilities, can provide a better gradient descent direction for the feature learning of the body. We further connect our results to other meta-learning algorithms, showing that the adaptation is a common mechanism that varies across methods. From this consideration, we devise the RDP algorithm, where a suitable linear classifier is chosen without gradient descent, and obtain more supporting conclusions. We also build the MCL algorithm that removes the inner loop and exploits the inter-sample relationship, achieving results comparable to some state-of-the-art methods." }, { "heading": "A FEW-SHOT IMAGE CLASSIFICATION DATASETS", "text": "In this section, we introduce four benchmark datasets often used for few-shot image classification: miniImageNet (Vinyals et al. (2016)), tieredImageNet (Ren et al. (2018)), CIFAR-FS (Bertinetto et al. (2018)) and FC100 (Oreshkin et al. (2018)).
The miniImageNet (Vinyals et al. (2016)) dataset is a standard benchmark for few-shot image classification, comprising 100 classes randomly chosen from the original ImageNet (Russakovsky et al. (2015)) dataset, where 64 classes are used for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. Each class contains 600 images of size 84 × 84. Since the original class splits are unavailable, we use the commonly-used split proposed in Ravi & Larochelle (2016).
The tieredImageNet (Ren et al. (2018)) dataset is another, larger subset of ImageNet (Russakovsky et al. (2015)).
This dataset contains 608 classes that are grouped into 34 high-level categories, where 20 categories (351 classes) are used for meta-training, 6 categories (97 classes) for meta-validation, and 8 categories (160 classes) for meta-testing. All images are also of size 84 × 84. The CIFAR-FS (Bertinetto et al. (2018)) dataset is a few-shot image classification benchmark consisting of all 100 classes from CIFAR-100 (Krizhevsky et al. (2010)). These classes are randomly split into 64, 16, and 20 for meta-training, meta-validation, and meta-testing, respectively. Each class contains 600 images of size 32 × 32. The FC100 (Oreshkin et al. (2018)) dataset is another benchmark derived from CIFAR-100 (Krizhevsky et al. (2010)). This dataset comprises 100 classes that are grouped into 20 high-level categories, where 12 categories (60 classes) are used for meta-training, 4 categories (20 classes) for meta-validation, and 4 categories (20 classes) for meta-testing. Each class contains 600 images of size 32 × 32." }, { "heading": "B MORE DETAILS ABOUT ALGORITHMS", "text": "In this section, we provide further details about the training regimes and algorithms mentioned above. Note that we denoted the meta parameters of the network as $\theta$ in previous sections. Considering that the network is composed of the body (feature extractor) and the head (classifier), we further rewrite $\theta$ as $\theta = [\theta_f, \theta_c]$, where $\theta_f$ and $\theta_c$ are the parameters of the body and head, respectively. For a given task $T_b = \{T^s_b, T^q_b\}$, the meta-initialization update of MAML can be expressed as follows:
$$\theta^t_f = \theta^{t-1}_f - \alpha \nabla_{\theta^{t-1}_f} \mathcal{L}_{T^s_b}(\theta^{t-1}_f, \theta^{t-1}_c), \quad \theta^t_c = \theta^{t-1}_c - \alpha \nabla_{\theta^{t-1}_c} \mathcal{L}_{T^s_b}(\theta^{t-1}_f, \theta^{t-1}_c)$$
$$\theta_f = \theta_f - \beta \nabla_{\theta_f} \mathcal{L}_{T^q_b}(\theta^t_f, \theta^t_c), \quad \theta_c = \theta_c - \beta \nabla_{\theta_c} \mathcal{L}_{T^q_b}(\theta^t_f, \theta^t_c) \qquad (5)$$
where $\alpha$ is the step size of the inner loop and $\beta$ is the learning rate. In our work, we devise several methods using different training regimes, including Multi-Task, Multi-Head, Almost No Inner Loop (ANIL) and Body Outer Loop, Head Inner Loop (BOHI), to study the roles of the network body and head during meta-training.
The update rule of Multi-Task can be expressed as follows:
$$\theta_f = \theta_f - \beta \nabla_{\theta_f} \mathcal{L}_{T^q_b}(\theta_f, \theta_c), \quad \theta_c = \theta_c - \beta \nabla_{\theta_c} \mathcal{L}_{T^q_b}(\theta_f, \theta_c) \qquad (6)$$
For different tasks, Multi-Head uses different heads for task specificity, given by
$$\theta_f = \theta_f - \beta \nabla_{\theta_f} \mathcal{L}_{T^q_b}(\theta_f, \theta^{T_b}_c), \quad \theta^{T_b}_c = \theta^{T_b}_c - \beta \nabla_{\theta^{T_b}_c} \mathcal{L}_{T^q_b}(\theta_f, \theta^{T_b}_c) \qquad (7)$$
where $\theta^{T_b}_c$ denotes the head parameters specific to task $T_b$. The network body is fixed during the inner loop for ANIL, given by
$$\theta^t_c = \theta^{t-1}_c - \alpha \nabla_{\theta^{t-1}_c} \mathcal{L}_{T^s_b}(\theta_f, \theta^{t-1}_c)$$
$$\theta_f = \theta_f - \beta \nabla_{\theta_f} \mathcal{L}_{T^q_b}(\theta_f, \theta^t_c), \quad \theta_c = \theta_c - \beta \nabla_{\theta_c} \mathcal{L}_{T^q_b}(\theta_f, \theta^t_c) \qquad (8)$$
For BOHI, the network body is updated only by the outer loop and the head is adapted only during the inner loop, leaving the head's meta-initialized parameters unchanged, given by
$$\theta^t_c = \theta^{t-1}_c - \alpha \nabla_{\theta^{t-1}_c} \mathcal{L}_{T^s_b}(\theta_f, \theta^{t-1}_c)$$
$$\theta_f = \theta_f - \beta \nabla_{\theta_f} \mathcal{L}_{T^q_b}(\theta_f, \theta^t_c) \qquad (9)$$"
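Since Eqs. (5)-(9) differ only in which parameter groups each loop touches, the regimes can be summarized as a small configuration table in code (our own summary; 'task_head' marks the per-task heads of Eq. (7)):

```python
# Which parameters each regime touches: 'inner' is adapted on the support
# set with step size alpha; 'outer' is meta-updated from the query loss.
REGIMES = {
    'MAML':       {'inner': ('body', 'head'), 'outer': ('body', 'head')},      # Eq. (5)
    'Multi-Task': {'inner': (),               'outer': ('body', 'head')},      # Eq. (6)
    'Multi-Head': {'inner': (),               'outer': ('body', 'task_head')}, # Eq. (7)
    'ANIL':       {'inner': ('head',),        'outer': ('body', 'head')},      # Eq. (8)
    'BOHI':       {'inner': ('head',),        'outer': ('body',)},             # Eq. (9)
}
```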
 }, { "heading": "C MORE EXPERIMENTAL DETAILS", "text": "C.1 IMPLEMENTATION DETAILS
For all training regimes, RDP, and MCL, we use the Adam optimizer with a weight decay of 5e-4, and the learning rate is set to 1e-3. For the 4-layer convolutional network with 64 filters, we flatten the output feature map of the network body and obtain 1600-d features for miniImageNet and tieredImageNet, and 256-d features for CIFAR-FS and FC100. For the ResNet12 network, we employ a global max-pooling layer on the output feature map of the network body and obtain 512-d features for all four public datasets. During meta-training, we adopt horizontal flip, random crop, and color (brightness, contrast, and saturation) jitter data augmentation, as proposed in Gidaris & Komodakis (2018); Qiao et al. (2018). We train all models for 100 epochs with 500 batches per epoch. For MAML, BOHI, and ANIL, the models are trained using 5 gradient steps of size $\alpha = 0.01$ for Conv4 and $\alpha = 0.1$ for ResNet12. For the Random Decision Planes algorithm, the number of decision planes $n_p$ is set to 64. For the Meta Contrastive Learning (MCL) algorithm, we apply a two-layer nonlinear projection layer with a hidden size of 512. Also, the query datapoints come from 10 different classes for each sampled task, which helps accelerate model convergence.
C.2 MORE RESULTS FOR BOHI, ANIL, MAML, MCL
In this section, we provide complete experimental results for BOHI, ANIL, MAML, and MCL with different backbones on four datasets. The complete results are presented in Tables 4, 5, 6, and 7, respectively. The results further verify the observations above. Good feature learning depends more on the multi-step task-specific adaptation of the head during the inner loop than on updating the head's meta-initialization in the outer loop. With a low-performance head, the update of the body may even lead to a decline in the quality of the features. In addition, the results on the four datasets further demonstrate the effectiveness of our proposed MCL algorithm.
C.3 THE TIME-EFFICIENCY ANALYSIS FOR BOHI, ANIL, MAML, MCL
It is clear that ANIL, BOHI, and MCL speed up training. The comparison of computation times is presented in Table 8. We implement our methods in PyTorch, and model training is run on two NVIDIA 1080Ti GPUs. Notice that our MCL runs much faster than BOHI and ANIL while achieving better evaluation results. The training speedups also illustrate the significant computational benefit of MCL and prove its effectiveness.
C.4 ABOUT THE PROJECTION LAYER OF MCL
We have found that, with a deeper backbone, feature learning is greatly facilitated by the projection layer. We further evaluate the quality of the features extracted by the network body and by the projection layer. The evaluation results are given in Table 9. Even though the contrastive loss is applied to the projection layer's output, the network body learns better and more general representations. We conjecture that during meta-training the projection layer absorbs more task-specific information, while the backbone tends to learn task-independent representations." } ]
2,020
null
SP:5b537c8e2d4559f2980b079e46f23eeb8b6f30ad
[ "Authors introduce a new meta-RL algorithm based on SAC. It uses a context variable $c$ that they condition the Q-function on and the adaptation mechanism which is based on the values of the value function (ie. $\\mathbb{E_a} Q(\\dot, a)$) instead of the true returns. Authors claim their method reduces variance and bias of the meta-gradient estimation, is closer to human learning, encourages the agent to learn to explore, is more data-efficient in test-time and has competitive performance among gradient-based algorithms." ]
Meta-reinforcement learning (meta-RL) algorithms have successfully trained agent systems to perform well on different tasks within only a few updates. However, in gradient-based meta-RL algorithms, the Q-function at the adaptation step is mainly estimated from the returns of a few trajectories, which can lead to high variance in the Q-value and biased meta-gradient estimation; moreover, the adaptation uses a large number of batched trajectories. To address these challenges, we propose a new meta-RL algorithm that can reduce the variance and bias of the meta-gradient estimation and perform few-shot task data sampling, which makes the meta-policy more interpretable. We reformulate the meta-RL objective, introduce a contextual Q-function as a meta-policy critic during the task adaptation step, and learn the Q-function under a soft actor-critic (SAC) framework. The experimental results on a 2D navigation task and meta-RL benchmarks show that our approach learns a more interpretable meta-policy to explore unknown environments, with performance comparable to previous gradient-based algorithms.
[]
[ { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl2: Fast reinforcement learning via slow reinforcement learning", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "arXiv preprint arXiv:1703.03400,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Vijay R Konda", "John N Tsitsiklis" ], "title": "Actor-critic algorithms. In Advances in neural information processing", "venue": null, "year": 2000 }, { "authors": [ "Hao Liu", "Richard Socher", "Caiming Xiong" ], "title": "Taming maml: Efficient unbiased metareinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Jonas Rothfuss", "Dennis Lee", "Ignasi Clavera", "Tamim Asfour", "Pieter Abbeel" ], "title": "Promp: Proximal meta-policy search", "venue": "arXiv preprint arXiv:1810.06784,", "year": 2018 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Bradly C Stadie", "Ge Yang", "Rein Houthooft", "Xi Chen", "Yan Duan", "Yuhuai Wu", "Pieter Abbeel", "Ilya Sutskever" ], "title": "Some considerations on learning to explore via meta-reinforcement learning", "venue": "arXiv preprint arXiv:1803.01118,", "year": 2018 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Emanuel Todorov", "Tom 
Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning problems have been studied for a long time and there are many impressive works that achieved human-level control in real world tasks (Mnih et al., 2013; Silver et al., 2017; Vinyals et al., 2019; Schrittwieser et al., 2019). These agents are trained separately on each task and may require huge sampled data and millions of trails. However, in a many real world tasks, the cost of sampling data is not negligible, thus we cannot give agent a large number of trails in environment. In contrast, human can laverage past experiences and learn new tasks quickly in few trails, which is very efficient. Many tasks in fact share similar structures that can be extracted as prior knowledge, e.g., shooting games aims to eliminate enemies with weapons in different environments, which can help agent generalize quickly through different tasks. Meta-learn (Thrun & Pratt, 2012) reinforcement learning tasks can be a suitable chioce.\nMeta-reinforcement learning (meta-RL) aims to learn a policy that can adapt to the unknown environment within few interactions with environment. Meta-policy can be seen as a policy that can derive new a policy maximizes the performance in the new environment. Gradient-based algorithms in meta-RL (Finn et al., 2017; Stadie et al., 2018; Rothfuss et al., 2018; Liu et al., 2019) showed that a meta-policy can be obtained by reinforcement learning a policy adapted by few reinforcement learning steps. The experiment results suggests that gradient-based methods can learn to sample and utilize sampled data in some extent. Nevertheless, the learning style and learned meta-policy are still far from human. Human learns a new task by interacting with the task sequentially and efficiently. With the obtaining of environment data, human gradually understanding where to sampling data and how to utilize the sampled data to adjust the policy, while gradient-based algorithms use parallel sampling neglecting the relations between data. Sampling independently is not data-efficient, usually needs a number of stochastic trajectories to do plicy adaptation. This causes the agent relying on the stochasticity to sample and only learns how to utilize data.\nInspired by the human behavior, we propose a K-shot meta-RL problem that constrains on the data amount accessed by agent, e.g., adapting policy within only two trails. Low resource environment simulates the real world tasks that have high costs on data obtaining, therefore, requires agent to learn a stable strategy to explore environment. To address the K-shot problem, we also propose a contextual gradient-based algorithm using actor-critic method. The adptation step uses a trail buffer\nD to store all the transitions in K-shot sampling and optimizes expected value for the states in D. The meta-learning step optimizes the expected return performed by adapted policy while learning the value functions and context encoder using soft actor-critic (Haarnoja et al., 2018) objectives. We learn the policy with reparameterized objective that derives an unbiased meta-gradient estimation and reduces the estimation variance for Q-value. 
Our contributions can be summarized as follows:\n• We reformulate and propose the K-shot meta-RL problem to simulate real-world environments.\n• We propose a new gradient-based objective to address the K-shot problem.\n• We introduce a context-based policy and value functions to perform efficient data sampling.\n• We use the actor-critic method to reduce the variance and bias of the Q-value and meta-gradient estimation." }, { "heading": "2 RELATED WORK", "text": "Meta-reinforcement learning algorithms fall mainly into three categories: gradient-based methods (Finn et al., 2017; Stadie et al., 2018; Rothfuss et al., 2018; Liu et al., 2019; Nichol et al., 2018), recurrent meta-learners (Wang et al., 2016; Duan et al., 2016), and multi-task learners (Fakoor et al., 2019; Rakelly et al., 2019). Gradient-based algorithms like MAML (Finn et al., 2017) optimize the policy updated by one step of reinforcement learning, aiming at learning a good initialization of the policy weights. E-MAML (Stadie et al., 2018) considered the fact that the data obtained by the meta-policy can influence the adapted policy's performance and assigned credit to the meta-policy accordingly, while ProMP (Rothfuss et al., 2018) modified the adaptation gradient estimator to have low variance on the second-order gradient. Recurrent meta-learners (Wang et al., 2016; Duan et al., 2016) use an RNN as a meta-learner that can learn a new task from environment data while exploring. The RNN learners are optimized end-to-end over sequentially performed episodes, which is more similar to the learning process of humans and yields a more interpretable meta-policy. Multi-task learners (Fakoor et al., 2019; Rakelly et al., 2019) learn a multi-task objective to solve meta-learning problems. They argue that meta-learning can be done by explicitly reusing the learned features through a context variable. MQL (Fakoor et al., 2019) can even perform well without adaptation. PEARL (Rakelly et al., 2019) constructs a context encoder to infer the latent task variable and also learns a multi-task objective; the trained policy can perform structured exploration by inferring the task while interacting with the environment. Our approach is closely related to the gradient-based line of research, which also tries to reduce the estimation variance and bias of the second-order gradient; however, we estimate the second-order gradient with value functions, and we still aim to perform structured exploration in data-expensive environments." }, { "heading": "3 BACKGROUND", "text": "This section focuses on the problem definition and notation of reinforcement learning and meta-reinforcement learning problems." }, { "heading": "3.1 REINFORCEMENT LEARNING", "text": "Reinforcement learning (RL) problems aim to maximize the expectation of episode returns\nE_{τ∼P(τ|θ)}[R(τ)] = E_{τ∼P(τ|θ)}[∑_t γ^t r(s_t, a_t)] (1)\nfor a single task and agent, where τ = {s0, a0, r0, . . . } is the trajectory performed by the agent, s0 ∼ ρ0 is the initial state, at ∼ πθ(at|st) is the action sampled from the policy π parameterized by θ, st+1 ∼ P(st+1|at, st) is the state at timestep t + 1, and P(st+1|at, st) is the transition probability.
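As a concrete illustration of the objective in Eq. (1), the following minimal Python sketch estimates it by Monte Carlo from sampled reward sequences; the function names and the toy rewards are assumptions for illustration only.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_t gamma^t * r(s_t, a_t) for a single trajectory."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def estimate_objective(reward_sequences, gamma=0.99):
    """Monte Carlo estimate of E_{tau ~ P(tau|theta)}[R(tau)] from
    trajectories sampled under the current policy."""
    return float(np.mean([discounted_return(rs, gamma) for rs in reward_sequences]))

# Toy example: two sampled reward sequences.
print(estimate_objective([[1.0, 0.0, 1.0], [0.5, 0.5]]))
```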
The problem can be represented by a Markov Decision Process (MDP) with tuple M = (S, A, P, R, ρ0, γ, H), where S ⊆ Rn is the set of states, A ⊆ Rm is the set of actions, P(s′|s, a) ∈ R+ is the system transition probability, R(s, a) ∈ R is the reward function of the task, and H is the horizon.\nOptimizing (1) usually uses gradient descent, and the gradient is estimated with the vanilla policy gradient (VPG) estimator (Williams, 1992)\n∇θ E_{τ∼P(τ|θ)}[R(τ)] = E_{τ∼P(τ|θ)}[∇θ log π(τ)R(τ)]\n≈ (1/N) ∑_i ∑_t ∇θ log πθ(a_{i,t}|s_{i,t}) (∑_{t′=t}^{H} R(s_{i,t′}, a_{i,t′})) (2)" }, { "heading": "3.2 GRADIENT-BASED META-REINFORCEMENT LEARNING", "text": "Meta-reinforcement learning (meta-RL) aims to learn a fast adaptation procedure that can leverage prior knowledge learned from training tasks and adapt to new tasks within a few steps. A task T in meta-RL can also be defined by an MDP MT = (S, A, PT, RT, ρ0, γ, H). The task is drawn from a distribution T ∼ P(T); for simplicity, we only consider tasks with different reward functions or system transitions but the same state and action space.\nGradient-based meta-RL algorithms (Finn et al., 2017; Stadie et al., 2018) are mainly based on the basic meta-objective (Rothfuss et al., 2018)\nJ(θ) = E_{T∼P(T)}[E_{τ′∼PT(τ′|θ′)}[R(τ′)]], θ′ = U(θ, T) = θ + α∇θ E_{τ∼PT(τ|θ)}[R(τ)], (3)\nwhere θ is the weights of the meta-policy, and θ′ is the adapted weights after one gradient step. The meta-objective J(θ) optimizes the expectation of the episode return sampled from the adapted policy πθ′. The meta-gradient can be written as\n∇θJ(θ) = E_{T∼P(T)}[E_{τ′∼PT(τ′|θ′)}[∇θ′ log PT(τ′|θ′)R(τ′)∇θθ′]]\n∇θθ′ = I + α∇²θ E_{τ∼PT(τ|θ)}[R(τ)] (4)" }, { "heading": "4 METHOD", "text": "" }, { "heading": "4.1 REFORMULATE META-REINFORCEMENT LEARNING PROBLEM", "text": "Different tasks have different features in their MDPs; a task can be inferred from a few important states and transitions in the environment, e.g., different friction coefficients on the floor, different rewards for the same state and action, or states that only exist in certain environments. We name these states and transitions the feature points of the environment. Humans usually learn a task sequentially and efficiently since they can easily recognize the feature points in an environment. The exploration policy of a human changes significantly after obtaining data from the environment; thus they can decide where to explore and learn a task quickly. However, as in formula (3), fast adaptation U(θ, T) usually refers to a few gradient descent steps on the initial weights θ, and unlike humans, the update is performed in a batched style as in normal reinforcement learning. Batched sampling usually contains a large number of trajectories collected in parallel, which can be inefficient for inferring the task. E-MAML (Stadie et al., 2018) also tried to improve the sampling efficiency of the meta-policy by accounting for the fact that samples drawn from the meta-policy will impact the adapted policy. Inspired by the learning procedure of humans, we reformulate the meta-RL problem as K-shot meta-reinforcement learning.\nDefinition. Given a task T ∼ P(T), the agent samples data in the trial phase and performs a good policy in the test phase. In the trial phase, the agent can only sequentially sample K trajectories in total to adjust its policy, each of length H. In the test phase, the agent is required to perform only one trajectory and make the return as high as possible.
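Before introducing our objective, here is a minimal PyTorch sketch of the one-step adaptation U(θ, T) of Eq. (3); the helper `inner_objective` and the parameter-list representation are assumptions for illustration. Using `create_graph=True` keeps the update differentiable, so the second-order term ∇θθ′ of Eq. (4) can be obtained by backpropagating through θ′.

```python
import torch

def adapt(theta, inner_objective, alpha=0.1):
    """One-step gradient-based adaptation of Eq. (3):
    theta' = theta + alpha * grad_theta E_tau[R(tau)].

    theta:            list of torch tensors with requires_grad=True
    inner_objective:  callable mapping the parameter list to a scalar
                      estimate of the expected return (an assumption;
                      in practice this would be the VPG estimate of Eq. (2))
    """
    ret = inner_objective(theta)
    grads = torch.autograd.grad(ret, theta, create_graph=True)
    # Gradient *ascent* on the return; theta' stays differentiable
    # w.r.t. theta, which is what Eq. (4) backpropagates through.
    return [p + alpha * g for p, g in zip(theta, grads)]
```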
The K-shot meta-RL problem defined above constrains the amount of data that can be accessed by the agent, and is more similar to real-world meta-RL problems, e.g., Super Mario Maker. In the K-shot setting, the meta-policy can still be updated using U(θ, T) with batched trajectories, since they can be seen as sampled independently in sequence. However, the variance of the gradient estimation grows as K decreases, which means the performance becomes more unstable. To optimize the problem, we propose a new meta-objective\nJ^{K-shot}(θ) = E_{T∼P(T)}[E_{τ′∼PT(τ′|θ′)}[R(τ′)]],\nθ′ = U(θ, D) = θ + α∇θ E_{s∼D}[V^π(s|c)] (5)\nfor the K-shot setting. Here D is the state buffer sampled by the meta-policy in the trial phase, and V^π(s|c) is the expected return of policy π at state s under context c (see 4.2 for details). The state buffer D contains K × H states, as described in the definition, which means the agent can only use a few states to update its policy. Due to the constraint on available environment information, the agent is encouraged to learn to explore the more important states that can help it perform well in the test phase." }, { "heading": "4.2 INTRODUCING CONTEXT", "text": "In meta-RL, the task T sampled from the task distribution is not given to the agent and can be thought of as a latent variable of the task MDP. This latent variable can be inferred and has a strong correlation with the context variable c, which is encoded from the experience (st, at, rt, st+1) collected until time step t. The context variable contains information of two kinds. First, the experience is one step of transition and reward that represents the system dynamics and reward function of the environment. Second, the decision history {(st, at)}n represents the agent's policy in the current environment. The Q-function uses the state-action tuple (st, at) to evaluate the future discounted return of the policy at state st taking action at, which needs the same two kinds of information about policy and dynamics. Therefore, we introduce a contextual Q-function Q(s, a|c) that can evaluate the policy in an unknown environment. To encourage the agent to learn how to sample efficiently and infer the task from the unknown environment, the agent should also use a context-dependent policy πθ(a|s, c) to memorize past states. We encode the context variable c with a Long Short-Term Memory (LSTM) network (Hochreiter & Schmidhuber, 1997). The context encoder takes as input the history of experience so far and outputs a context variable c deterministically. The LSTM encoder has the advantage of handling sequential data such as history transitions, and thus can give a good representation of the context. Additionally, the LSTM context can distinguish the same current state under different histories, which helps the agent explore more states and the Q-function evaluate the discounted return correctly.\nWe follow the setting in (Duan et al., 2016) to design the context encoding. Transitions are continuously fed into the LSTM encoder while the agent performs trajectories. The initial context is a zero vector, and the context is not reset after an episode ends. This means the agent can keep information between episodes and decide how to explore in the next steps. With this setting, the adaptation procedure is divided into two parts. First, the agent samples states that are important for itself in the environment according to the data collected so far. Second, the agent uses all data available in buffer D to adapt its policy. Through this process, the agent can learn how to explore the environment and how to utilize the transition data, which is a more structured learning scheme." },
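A minimal PyTorch sketch of such an LSTM context encoder is given below; the dimensions and the use of `nn.LSTMCell` are illustrative assumptions. As described above, the hidden state starts from a zero vector and is not reset between episodes.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """LSTM context encoder: c_{t+1} = enc(s_t, a_t, r_t | c_t).
    Dimensions and the single-cell design are assumptions for
    illustration."""

    def __init__(self, state_dim, action_dim, context_dim=64):
        super().__init__()
        self.cell = nn.LSTMCell(state_dim + action_dim + 1, context_dim)
        self.context_dim = context_dim

    def initial_context(self, batch_size=1):
        # Zero initial context; NOT reset between episodes, so
        # information can be carried across the K trials.
        h = torch.zeros(batch_size, self.context_dim)
        return h, torch.zeros_like(h)

    def forward(self, s, a, r, hc):
        # One transition (s_t, a_t, r_t) is fed in per environment step.
        x = torch.cat([s, a, r.view(-1, 1)], dim=-1)
        h, c = self.cell(x, hc)
        return h, (h, c)   # h serves as the context variable c_t
```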
{ "heading": "4.3 LEARNING WITH ACTOR-CRITIC METHOD", "text": "Solving the K-shot problem in 4.1 requires value functions to evaluate the future expected return of policy π; therefore, training the agent in an actor-critic style is a natural choice. The adaptation step in (3) uses the reward term to estimate the Q-value. Even though this is an unbiased point estimate of Q(st, at), its variance can be very high (Konda & Tsitsiklis, 2000) and may lead to an unstable learning process. Actor-critic algorithms can trade off between the variance and bias of the estimation, and the learned value functions can be used for adaptation.\nTo learn the value functions, we use the soft actor-critic (SAC) (Haarnoja et al., 2018) framework. SAC is an off-policy RL algorithm that learns a maximum-entropy policy, so the agent can trade off between exploration and exploitation. We modify the SAC objective as\nJ_SAC(θ) = E_{s∼D}[V^π(s|c)] = E_{s∼D, a∼πθ(a|s,c)}[Q^π(s, a|c) − α log πθ(a|s, c)]\n= −E_{s∼D}[D_KL(πθ(·|s, c) ‖ exp(Q^π(s, ·|c)/α))], (6)\nadding context dependency to the value functions and the policy. The value functions also satisfy the Bellman equations\nQ^π(st, at|ct) = R(st, at) + E_{st+1∼P(st+1|st,at)}[V^π(st+1|ct+1)] (7)\nand\nV^π(st|ct) = E_{at∼πθ(at|st,ct)}[Q^π(st, at|ct) − α log πθ(at|st, ct)] (8)\nwhere ct+1 = enc(st, at, rt|ct) and enc is the LSTM context encoder mentioned in 4.2. Learning the Q-function, V-function and LSTM encoder requires minimizing the losses\nL_Q = E_{(s,a,s′)∼D}[(Q^π(s, a|enc(τ_{1:t−1})) − (r(s, a) + γV̂^π(s′|enc(τ_{1:t}))))²] (9)\nand\nL_V = E_{s∼D}[(V^π(s|c) − E_{a∼πθ(a|s,c)}[Q(s, a|c) − α log πθ(a|s, c)])²] (10)\nwhere D is the replay buffer (Mnih et al., 2015) storing the transitions experienced by the agent, s is the state, a is the action taken at state s, s′ is the next state given (s, a), r(s, a) is the reward at state s after taking action a, τ_{1:t−1} and τ_{1:t} denote the trajectory before state s and the trajectory including state s, and V̂ is the target value function used to stabilize value iteration.\nSubstituting the adaptation objective in (5) with (6), we have\nU(θ, D) = θ + α∇θ E_{s∼D, a∼πθ(a|s,c)}[Q^π(s, a|c) − α log πθ(a|s, c)], (11)\nwhere c refers to the context at state s. The gradient estimate of the second term using the VPG estimator is\n∇θ E_{s∼D, a∼πθ(a|s,c)}[Q^π(s, a|c) − α log πθ(a|s, c)]\n≈ (1/N) ∑_i ∇θ log πθ(a|s, c)(Q^π(s, a|c) − α log πθ(a|s, c))⊥ (12)\nHere ⊥ means stop gradient. However, the second-order gradients of the analytical form and of the Monte Carlo approximation are not the same; they are\nE_{s∼D, a∼πθ(a|s,c)}[(∇²θ log πθ(a|s, c) + (∇θ log πθ(a|s, c))²)(Q^π(s, a|c) − α log πθ(a|s, c))] (13)\nand\n(1/N) ∑_i ∇²θ log πθ(a|s, c)(Q^π(s, a|c) − α log πθ(a|s, c))⊥. (14)\nThis causes a biased estimate of the meta-gradient. Suppose the policy πθ is a Gaussian distribution; then the action can be rewritten in the deterministic form a = µθ(ε; s|c), where ε ∼ N(0, 1), and the gradient term in (11) can be reparameterized as\n∇θ E_{s∼D, ε∼N(0,1)}[Q^π(s, µθ(ε; s|c)|c) − α log πθ(µθ(ε; s|c)|s, c)] (15)
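A minimal PyTorch sketch of the reparameterized objective in Eq. (15) follows; the `policy` and `q_fn` interfaces are assumptions for illustration. `rsample()` draws a = µ + σ·ε with ε ∼ N(0, 1), so gradients flow through the action into θ, which is what yields the unbiased second-order estimate discussed next.

```python
import torch

def adaptation_objective(policy, q_fn, states, contexts, alpha=0.2):
    """Reparameterized estimate of Eq. (15):
    E_{s~D, eps~N(0,1)}[ Q(s, mu_theta(eps; s|c) | c)
                         - alpha * log pi_theta(mu_theta(eps; s|c) | s, c) ].

    Assumed interfaces (not from the paper's code):
      policy(states, contexts) -> torch.distributions.Normal
      q_fn(states, actions, contexts) -> per-state Q-values
    """
    dist = policy(states, contexts)
    actions = dist.rsample()                 # a = mu_theta(eps; s|c), differentiable
    log_prob = dist.log_prob(actions).sum(-1)
    q_values = q_fn(states, actions, contexts)
    return (q_values - alpha * log_prob).mean()
```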
Algorithm 1 K-shot Meta-Reinforcement Learning\nRequire: trials K, horizon H, task distribution P(T), learning rates α, β, δ\nInitialize trial buffer D̂i and replay buffer Di for each training task\nInitialize weights of µθ, encφ, Qψ, Vη, V̂η′\n1: while not done do\n2: for i = 1, 2, . . . , N do\n3: Clear trial buffer D̂i\n4: Sample Ti from P(T)\n5: Sample K trajectories from Ti with µθ while encoding experiences, and add them to D̂i and Di\n6: Compute the adapted policy using θ′ = U(θ, D̂i) in (17)\n7: Run the adapted policy θ′ for several episodes to estimate the average return R(τ′)\n8: Compute the meta-gradient ∇θJ^{K-shot}_i(θ)\n9: Sample a batch of transitions from Di\n10: Compute the gradients ∇φL^Q_i, ∇ψL^Q_i, ∇ηL^V_i using the sampled batch\n11: end for\n12: θ ← θ + β (1/N) ∑_i ∇θJ^{K-shot}_i(θ)\n13: ψ ← ψ + β (1/N) ∑_i ∇ψL^Q_i\n14: η ← η + β (1/N) ∑_i ∇ηL^V_i\n15: φ ← φ + β (1/N) ∑_i ∇φL^Q_i\n16: Soft update target η′ ← (1 − δ)η′ + δη\n17: end while\nThus the second-order gradient of the Monte Carlo approximation\n(1/N) ∑_i (∇²_a Q(s, a|c)(∇θµθ(ε_i; s|c))² + ∇_a Q(s, a|c)∇²θµθ(ε_i; s|c) − α(∇²_a log πθ(a|s, c)(∇θµθ(ε_i; s|c))² + ∇_a log πθ(a|s, c)∇²θµθ(ε_i; s|c))) (16)\nis an unbiased estimate of the analytical form, and from (4) we know that the meta-gradient estimate can be unbiased using this adaptation form. To utilize all the available data in D, we use the deterministic form for the adaptation step in (5) and rewrite the K-shot meta-objective as\nJ^{K-shot}(θ) = E_{T∼P(T)}[E_{τ′∼PT(τ′|θ′)}[R(τ′)]],\nθ′ = U(θ, D) = θ + α∇θ (1/N) ∑_{i=1}^{|D|} (Q^π(s_i, µθ(ε_i; s_i|c_i)|c_i) − α log πθ(µθ(ε_i; s_i|c_i)|s_i, c_i)) (17)\nwhere s_i, ε_i, c_i are the ith data in the replay buffer collected in the trial phase. The meta-RL problem proposed in 4.1 can be directly optimized with (17) while learning the value functions with (9) and (10)." },
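To summarize Algorithm 1, here is a condensed Python sketch of one outer-loop iteration; every helper name below is a placeholder assumption, and `meta_opt` is assumed to be a torch optimizer over θ.

```python
def meta_train_step(train_tasks, theta, sample_trials, adapt, evaluate, meta_opt):
    """One outer iteration of Algorithm 1 (condensed sketch).

    sample_trials(task): runs the K trials with the context encoder and
        returns the trial buffer D_hat (lines 3-5).
    adapt(theta, d_hat):  the update theta' = U(theta, D_hat) of Eq. (17)
        (line 6).
    evaluate(task, theta_prime): differentiable estimate of the average
        return R(tau') of the adapted policy (lines 7-8).
    """
    meta_objective = 0.0
    for task in train_tasks:                    # lines 2-11
        d_hat = sample_trials(task)             # K-shot trial phase
        theta_prime = adapt(theta, d_hat)       # Eq. (17)
        meta_objective = meta_objective + evaluate(task, theta_prime)
    meta_opt.zero_grad()
    (-meta_objective / len(train_tasks)).backward()   # ascent on J^{K-shot}
    meta_opt.step()                             # line 12
    # Value-function and encoder updates (lines 9-10 and 13-16 of
    # Algorithm 1) are omitted here for brevity.
```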
{ "heading": "5 EXPERIMENTS", "text": "To evaluate the algorithm proposed above, we implemented our approach in several different meta-reinforcement learning environments, including the 2D navigation task from (Rothfuss et al., 2018) and meta-RL benchmarks previously used by (Rakelly et al., 2019) in MuJoCo (Todorov et al., 2012)." }, { "heading": "5.1 ENVIRONMENT SETUP", "text": "First we introduce the 2D navigation task. This task requires the agent to explore a sparse-reward environment and infer the goal point in an unbounded 2D plane. The plane is divided into four parts, with one goal in each part. The agent starts from the center of the plane, tries to obtain task data in the trial phase, and must reach the goal in the test phase. The observation received is the agent's coordinates concatenated with the number of remaining available steps. The reward is sparse: it is set to the difference between the distances to the goal over two consecutive steps when the agent is near the goal, and to zero otherwise. We use this environment to test whether the agent can learn to sample different states and use these states to adapt to the right policy. Second, we describe the MuJoCo benchmarks. MuJoCo tasks are environments for controlling a robot in a simulated physical world to learn task adaptation. We tested three MuJoCo environments: HalfCheetahForwardBack, AntRandDir and HalfCheetahRandVel. HalfCheetahForwardBack requires the agent to run forward or backward as fast as possible, AntRandDir requires the agent to run in one of two randomly selected directions as fast as possible, and HalfCheetahRandVel requires the agent to run at specified speeds." }, { "heading": "5.2 RESULTS", "text": "In this section we show the experimental results of our approach. On the 2D navigation task, the meta-policy learning curve converged quickly in the early training steps, and the trained meta-policy is shown in Figure 3. The agent has three trials on each task, then performs the adapted policy for testing. In each trial the agent performs 100 steps, for a total of 300 steps in the trial phase. This means our approach uses fewer steps to figure out the task than Rothfuss et al. (2018), which used 2000 steps. As shown in Figure 3, the agent chose very different states to explore. In each trial the agent visits states that have not yet been explored, and the states are separated in the plane with clear boundaries. These states help the agent infer the task efficiently, and the meta-policy can be executed in data-expensive environments.\nWe also evaluated our algorithm in the MuJoCo meta-RL environments. The results1 are shown in Figure 4. The performance of our algorithm is slightly higher than that of the previous gradient-based algorithms. The amount of data we sampled in the trial phase is also less than in MAML, ProMP and even PEARL (Rakelly et al., 2019). In each trial we sampled 200 steps, for a total of 2 trials; the data used for adaptation in our algorithm is 10% of that used in PEARL and 5% of that used in ProMP.\n1The MAML and ProMP results are obtained from published results in Rakelly et al. (2019)" }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed a new meta-RL problem that constrains the amount of data utilized by the agent, and presented a new meta-RL algorithm that optimizes a contextual policy within an actor-critic framework. Our approach can estimate an unbiased meta-gradient and reduce the estimation variance of the Q-function. Our experiments demonstrated that the contextual policy can sample efficiently in data-constrained environments. Finally, the experiments on the MuJoCo environments suggested that our algorithm achieves competitive performance compared with other gradient-based algorithms. Human behavior can often inspire the design of intelligent systems and may be a key to AGI." } ]
2,020
null
SP:e7976ca1bd206e20cbff3147a2b607ff6d658b2a
[ "1. In this paper the authors proposed a transferrable framework for multi-agent RL, which enables the learned policies easily generalize to more challenging scenarios. This seems to be a good contribution to the community of multi-agent RL. It bears a potential to handle large-scale tasks with only limited training data, while also demonstrates more explanable policies." ]
Recent advances in multi-agent reinforcement learning have been largely limited to training one model from scratch for every new task. This limitation occurs due to the restriction of the model architecture to fixed input and output dimensions, which hinders the experience accumulation and transfer of the learned agent over tasks across diverse levels of difficulty (e.g. 3 vs 3 or 5 vs 6 multi-agent games). In this paper, we make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit tasks with different observation and action configuration requirements. Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy by decoupling the policy distribution from the intertwined input observation, using an importance weight determined with the aid of the self-attention mechanism. Compared to a standard transformer block, the proposed model, which we name Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task’s decision process more explainable. UPDeT is general enough to be plugged into any multi-agent reinforcement learning pipeline and equip it with strong generalization abilities that enable multiple tasks to be handled at a time. Extensive experiments on large-scale SMAC multi-agent competitive games demonstrate that the proposed UPDeT-based multi-agent reinforcement learning achieves significant improvements relative to state-of-the-art approaches, demonstrating advantageous transfer capability in terms of both performance and training speed (10 times faster). Code is available at https://github.com/hhhusiyi-monash/UPDeT
[ { "affiliations": [], "name": "Siyi Hu" }, { "affiliations": [], "name": "Fengda Zhu" }, { "affiliations": [], "name": "Xiaojun Chang" }, { "affiliations": [], "name": "Xiaodan Liang" } ]
[ { "authors": [ "Haitham B Ammar", "Karl Tuyls", "Matthew E Taylor", "Kurt Driessens", "Gerhard Weiss" ], "title": "Reinforcement learning transfer via sparse coding", "venue": "In Proceedings of the 11th international conference on autonomous agents and multiagent systems,", "year": 2012 }, { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D Jackel", "Mathew Monfort", "Urs Muller", "Jiakai Zhang" ], "title": "End to end learning for self-driving cars", "venue": "arXiv preprint arXiv:1604.07316,", "year": 2016 }, { "authors": [ "Georgios Boutsioukis", "Ioannis Partalas", "Ioannis Vlahavas" ], "title": "Transfer learning in multi-agent reinforcement learning domains", "venue": "In European Workshop on Reinforcement Learning,", "year": 2011 }, { "authors": [ "Xiaojun Chang", "Po-Yao Huang", "Yi-Dong Shen", "Xiaodan Liang", "Yi Yang", "Alexander G. Hauptmann" ], "title": "RCAA: relational context-aware agents for person search", "venue": "In Computer Vision - ECCV 2018 - 15th European Conference,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Felipe Leno Da Silva", "Anna Helena Reali Costa" ], "title": "A survey on transfer learning for multiagent reinforcement learning systems", "venue": "Journal of Artificial Intelligence Research,", "year": 2019 }, { "authors": [ "Yali Du", "Lei Han", "Meng Fang", "Ji Liu", "Tianhong Dai", "Dacheng Tao" ], "title": "Liir: Learning individual intrinsic reward in multi-agent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jakob Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "arXiv preprint arXiv:1705.08926,", "year": 2017 }, { "authors": [ "Abhishek Gupta", "Coline Devin", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Learning invariant feature spaces to transfer skills with reinforcement learning", "venue": "arXiv preprint arXiv:1703.02949,", "year": 2017 }, { "authors": [ "Matthew Hausknecht", "Peter Stone" ], "title": "Deep recurrent q-learning for partially observable mdps", "venue": "arXiv preprint arXiv:1507.06527,", "year": 2015 }, { "authors": [ "Todd Hester", "Michael Quinlan", "Peter Stone" ], "title": "Generalized model learning for reinforcement learning on a humanoid robot", "venue": "In 2010 IEEE International Conference on Robotics and Automation,", "year": 2010 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Kyunghwan Son", "Daewoo Kim", "Yung Yi Qtran" ], "title": "Learning to factorize with transformation for cooperative multi-agent reinforcement learning", "venue": "In Proceedings of the 31st International Conference on Machine Learning, Proceedings of Machine Learning Research. 
PMLR,", "year": 2019 }, { "authors": [ "Ryan Lowe", "Yi I Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multiagent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Anuj Mahajan", "Tabish Rashid", "Mikayel Samvelyan", "Shimon Whiteson" ], "title": "Maven: Multi-agent variational exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Frans A Oliehoek", "Christopher Amato" ], "title": "A concise introduction to decentralized POMDPs, volume", "venue": null, "year": 2016 }, { "authors": [ "Ankur P Parikh", "Oscar Täckström", "Dipanjan Das", "Jakob Uszkoreit" ], "title": "A decomposable attention model for natural language inference", "venue": null, "year": 1933 }, { "authors": [ "Emilio Parisotto", "Jimmy Lei Ba", "Ruslan Salakhutdinov" ], "title": "Actor-mimic: Deep multitask and transfer reinforcement learning", "venue": "arXiv preprint arXiv:1511.06342,", "year": 2015 }, { "authors": [ "Peng Peng", "Ying Wen", "Yaodong Yang", "Quan Yuan", "Zhenkun Tang", "Haitao Long", "Jun Wang" ], "title": "Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games", "venue": "arXiv preprint arXiv:1703.10069,", "year": 2017 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder De Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "Mikayel Samvelyan", "Tabish Rashid", "Christian Schroeder de Witt", "Gregory Farquhar", "Nantas Nardelli", "Tim GJ Rudner", "Chia-Man Hung", "Philip HS Torr", "Jakob Foerster", "Shimon Whiteson" ], "title": "The starcraft multi-agent challenge", "venue": null, "year": 1902 }, { "authors": [ "Kun Shao", "Yuanheng Zhu", "Dongbin Zhao" ], "title": "Starcraft micromanagement with reinforcement learning and curriculum transfer learning", "venue": "IEEE Transactions on Emerging Topics in Computational Intelligence,", "year": 2018 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinicius Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning", "venue": "arXiv preprint arXiv:1706.05296,", "year": 2017 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Matthew E Taylor", "Peter Stone" ], "title": "Transfer learning for reinforcement learning domains: A survey", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Weixun Wang", "Tianpei Yang", "Yong Liu", "Jianye Hao", "Xiaotian Hao", "Yujing Hu", "Yingfeng Chen", "Changjie Fan", "Yang Gao" ], "title": "From few to more: Large-scale dynamic multiagent curriculum learning", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yaodong Yang", "Lantao Yu", "Yiwei Bai", "Jun Wang", "Weinan Zhang", "Ying Wen", "Yong Yu" ], "title": "A study of ai population dynamics with million-agent reinforcement learning", "venue": "arXiv preprint arXiv:1709.04511,", "year": 2017 }, { "authors": [ "Yaodong Yang", "Rui Luo", "Minne Li", "Ming Zhou", "Weinan Zhang", "Jun Wang" ], "title": "Mean field multiagent reinforcement learning", "venue": "arXiv preprint arXiv:1802.05438,", "year": 2018 }, { "authors": [ "Yaodong Yang", "Ying Wen", "Lihuan Chen", "Jun Wang", "Kun Shao", "David Mguni", "Weinan Zhang" ], "title": "Multi-agent determinantal q-learning", "venue": "arXiv preprint arXiv:2006.01482,", "year": 2020 }, { "authors": [ "Lianmin Zheng", "Jiacheng Yang", "Han Cai", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Magent: A many-agent reinforcement learning platform for artificial collective intelligence", "venue": "arXiv preprint arXiv:1712.00600,", "year": 2017 }, { "authors": [ "Meng Zhou", "Ziyu Liu", "Pengwei Sui", "Yixuan Li", "Yuk Ying Chung" ], "title": "Learning implicit credit assignment for multi-agent actor-critic", "venue": "arXiv preprint arXiv:2007.02529,", "year": 2020 }, { "authors": [ "Fengda Zhu", "Yi Zhu", "Xiaojun Chang", "Xiaodan Liang" ], "title": "Vision-language navigation with selfsupervised auxiliary reasoning tasks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [], "title": "2019)): this method formulates multi-agent learning", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement Learning (RL) provides a framework for decision-making problems in an interactive environment, with applications including robotics control (Hester et al. (2010)), video gaming (Mnih et al. (2015)), auto-driving (Bojarski et al. (2016)), person search (Chang et al. (2018)) and visionlanguage navigation (Zhu et al. (2020)). Cooperative multi-agent reinforcement learning (MARL), a long-standing problem in the RL context, involves organizing multiple agents to achieve a goal, and is thus a key tool used to address many real-world problems, such as mastering multi-player video games (Peng et al. (2017)) and studying population dynamics (Yang et al. (2017)).\nA number of methods have been proposed that exploit an action-value function to learn a multiagent model (Sunehag et al. (2017), Rashid et al. (2018), Du et al. (2019), Mahajan et al. (2019), Hostallero et al. (2019), Zhou et al. (2020), Yang et al. (2020)). However, current methods have poor representation learning ability and fail to exploit the common structure underlying the tasks this is because they tend to treat observation from different entities in the environment as an integral part of the whole. Accordingly, they give tacit support to the assumption that neural networks are able to automatically decouple the observation to find the best mapping between the whole observation and policy. Adopting this approach means that they treat all information from other agents or different parts of the environment in the same way. The most commonly used method involves concatenating\n∗Corresponding author.\nthe observations from each entity in to a vector that is used as input (Rashid et al. (2018), Du et al. (2019), Zhou et al. (2020)). In addition, current methods ignore the rich physical meanings behind each action. Multi-agent tasks feature a close relationship between the observation and output. If the model does not decouple the observation from the different agents, individual functions maybe misguided and impede the centralized value function. Worse yet, conventional models require the input and the output dimensions to be fixed (Shao et al. (2018), Wang et al. (2020)), which makes zero-shot transfer impossible. Thus, the application of current methods is limited in real-world applications.\nOur solution to these problems is to develop a multi-agent reinforcement learning (MARL) framework with no limitation on input or output dimension. Moreover, this model should be general enough to be applicable to any existing MARL methods. More importantly, the model should be explainable and capable of providing further improvement for both the final performance on singletask scenarios and transfer capability on multi-task scenarios.\nInspired by the self-attention mechanism (Vaswani et al. (2017)), we propose a transformer-based MARL framework, named Universal Policy Decoupling Transformer (UPDeT). There are four key advantages of this approach: 1) Once trained, it can be universally deployed; 2) it provide more robust representation with a policy decoupling strategy; 3) it is more explainable; 4) it is general enough to be applied on any MARL model. We further design a transformer-based function to handle various observation sizes by treating individual observations as ”observation-entities”. 
We match each related observation-entity with an action-group by separating the action space into several action-groups with reference to the corresponding observation-entities, allowing us to obtain a set of matched observation-entity — action-group pairs. We further use a self-attention mechanism to learn the relationship between a matched observation-entity and the other observation-entities. Through the use of the self-attention map and the embedding of each observation-entity, UPDeT can optimize the policy at an action-group level. We refer to this strategy as Policy Decoupling. By combining the transformer and policy decoupling strategies, UPDeT significantly outperforms conventional RNN-based models.\nIn UPDeT, there is no need to introduce any new parameters for new tasks. We also show that it is only with a decoupled policy and matched observation-entity — action-group pairs that UPDeT can learn a strong representation with high transfer capability. Finally, our proposed UPDeT can be plugged into any existing method with almost no changes to the framework architecture required, while still bringing significant improvements to the final performance, especially in hard and complex multi-agent tasks.\nThe main contributions of this work are as follows: First, our UPDeT-based MARL framework outperforms RNN-based frameworks by a large margin in terms of final performance with state-of-the-art centralized functions. Second, our model has strong transfer capability and can handle a number of different tasks at a time. Third, our model accelerates transfer learning (in terms of total step cost), making it roughly 10 times faster compared to RNN-based models in most scenarios." }, { "heading": "2 RELATED WORK", "text": "Attention mechanisms have become an integral part of models that capture global dependencies. In particular, self-attention (Parikh et al. (2016)) calculates the response at a specific position in a sequence by attending to all positions within this sequence. Vaswani et al. (2017) demonstrated that machine translation models can achieve state-of-the-art results solely by using a self-attention model. Parmar et al. (2018) proposed an Image Transformer model that applies self-attention to image generation. Wang et al. (2018) formalized self-attention as a non-local operation in order to model the spatial-temporal dependencies in video sequences. In spite of this, self-attention mechanisms have not yet been fully explored in multi-agent reinforcement learning.\nAnother line of research is multi-agent reinforcement learning (MARL). Existing work in MARL focuses primarily on building a centralized function to guide the training of individual value functions (Lowe et al. (2017), Sunehag et al. (2017), Rashid et al. (2018), Mahajan et al. (2019), Hostallero et al. (2019), Yang et al. (2020), Zhou et al. (2020)). Few works have attempted to form better individual functions with strong representation and transfer capabilities. In standard reinforcement learning, this kind of generalization has been studied extensively (Taylor & Stone (2009), Ammar et al. (2012), Parisotto et al. (2015), Gupta et al. (2017), Da Silva & Costa (2019)), while multi-agent transfer learning has been proven to be more difficult than the single-agent scenario (Boutsioukis et al. (2011), Shao et al. (2018), Vinyals et al. (2019)).
However, the transfer capability of a multi-agent system is of greater significance due to the varying numbers of agents, observation sizes and policy distributions.\nTo the best of our knowledge, we are the first to develop a multi-agent framework capable of handling multiple tasks at a time. Moreover, we provide a policy decoupling strategy to further improve the model performance and facilitate multi-agent transfer learning, which is a significant step towards real-world multi-agent applications." }, { "heading": "3 METHOD", "text": "We begin by introducing the notation and basic task settings necessary for our approach. We then describe a transformer-based individual function and policy decoupling strategy under MARL. Finally, we introduce different temporal units and assimilate our Universal Policy Decoupling Transformer (UPDeT) into Dec-POMDP." }, { "heading": "3.1 NOTATIONS AND TASK SETTINGS", "text": "Multi-agent Reinforcement Learning A cooperative multi-agent task is a decentralized partially observable Markov decision process (Oliehoek et al. (2016)) with a tuple G = 〈S, A, U, P, r, Z, O, n, γ〉. Let S denote the global state of the environment, while A represents the set of n agents and U is the action space. At each time step t, agent a ∈ A ≡ {1, ..., n} selects an action u ∈ U, forming a joint action u ∈ U ≡ U^n, which in turn causes a transition in the environment represented by the state transition function P(s′|s, u) : S × U × S → [0, 1]. All agents share the same reward function r(s, u) : S × U → R, while γ ∈ [0, 1) is a discount factor. We consider a partially observable scenario in which each agent makes individual observations z ∈ Z according to the observation function O(s, a) : S × A → Z. Each agent has an action-observation history that conditions a stochastic policy πt, creating the following joint action value: Qπ(st, ut) = E_{st+1:∞, ut+1:∞}[Rt|st, ut], where Rt = ∑_{i=0}^{∞} γ^i r_{t+i} is the discounted return.\nCentralized training with decentralized execution Centralized training with decentralized execution (CTDE) is a commonly used architecture in the MARL context. Each agent is conditioned only on its own action-observation history to make a decision using the learned policy. The centralized value function provides a centralized gradient to update the individual functions based on their outputs. Therefore, a stronger individual value function can benefit the centralized training." }, { "heading": "3.2 TRANSFORMER-BASED INDIVIDUAL VALUE FUNCTION", "text": "In this section, we present a mathematical formulation of our transformer-based model UPDeT. We describe the calculation of the global Q-function with the self-attention mechanism. First, the observation O is embedded into a semantic embedding to handle varying observation spaces. For example, if an agent ai observes k other entities {o_{i,1}, ..., o_{i,k}} at time step t, all observation entities are embedded via an embedding layer E as follows:\ne^t_i = {E(o^t_{i,1}), ..., E(o^t_{i,k})}. (1)\nHere, i is the index of the agent, i ∈ {1, ..., n}. Next, the value functions {Q1, ..., Qn} for the n agents at each step are estimated as follows:\nq^t_i = Qi(h^{t−1}_i, e^t_i, u_t). (2)\nWe introduce h^{t−1}_i, the temporal hidden state at the last time step t − 1, since a POMDP policy is highly dependent on the historical information. e^t_i denotes the observation embedding, while u^t_i is the candidate action, u^t_i ∈ U. θi is the parameter that defines Qi.
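A minimal PyTorch sketch of Eqs. (1)-(2) is given below, together with the scaled dot-product self-attention that is formalized in Eqs. (4)-(5) in what follows; the dimensions and single-layer design are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityAttention(nn.Module):
    """Sketch of the transformer-based individual function: each
    observation entity o_{i,k} is embedded (Eq. 1), concatenated with
    the temporal hidden state, and processed by scaled dot-product
    self-attention (Eqs. 4-5 below). Dimensions are assumptions."""

    def __init__(self, obs_dim, emb_dim=32):
        super().__init__()
        self.embed = nn.Linear(obs_dim, emb_dim)    # E in Eq. (1)
        self.qkv = nn.Linear(emb_dim, 3 * emb_dim)  # LF_{Q,K,V}
        self.scale = emb_dim ** -0.5

    def forward(self, entities, hidden):
        # entities: (batch, k, obs_dim); hidden: (batch, 1, emb_dim)
        e = self.embed(entities)
        r = torch.cat([hidden, e], dim=1)           # R^1_i = {h^{t-1}_i, e^t_i}
        q, k, v = self.qkv(r).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                             # per-entity features R^{l+1}_i
```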
Finally, the global Q-function Qπ is calculated from all individual value functions, as follows:\nQπ(st, ut) = F(q^t_1, ..., q^t_n) (3)\nFi is the credit assignment function defined by φi for each agent ai, as utilized in Rashid et al. (2018) and Sunehag et al. (2017). For example, in VDN, F is a sum function that can be expressed as F(q^t_1, ..., q^t_n) = ∑_{i=1}^{n} q^t_i.\nImplement Q-function with Self-attention Vaswani et al. (2017) adopts three matrices, K, Q, V, representing a set of keys, queries and values respectively. The attention is computed as follows:\nAttention(Q, K, V) = softmax(QK^T / √d_k)V, (4)\nwhere d_k is a scaling factor equal to the dimension of the key. In our method, we adopt self-attention to learn the features and relationships from the observation entity embeddings and the global temporal information. To learn an independent policy in decentralized multi-agent learning, we define Ki, Qi and Vi as the key, query and value matrices for each agent ai. We further compute the query, key and value from the same matrix R^l_i = Ki = Qi = Vi, where l ∈ {1, ..., L} indexes the layers of the transformer. Thus, we formulate our transformer as follows:\nR^1_i = {h^{t−1}_i, e^t_i}\nQ^l_i, K^l_i, V^l_i = LF_{Q,K,V}(R^l_i)\nR^{l+1}_i = Attention(Q^l_i, K^l_i, V^l_i). (5)\nwhere LF represents the linear functions used to compute K, Q, V. Finally, we project the entity features of the last transformer layer R^L_i to the output space of the value function Qi. We implement the projection using a linear function P:\nQi(h^{t−1}_i, e^t_i, ui) = P(R^L_i, ui). (6)" }, { "heading": "3.3 POLICY DECOUPLING", "text": "A single transformer-based individual function with a self-attention mechanism is still unable to handle the various required policy distributions. A flexible mapping function P in Eq. 6 is needed to deal with the various input and output dimensions and provide strong representation ability. Using the correlation between input and output, we design a strategy called policy decoupling, which is the key part of UPDeT.\nThe main idea behind the policy decoupling strategy can be summarized in three points:\n• Point 1: No restriction on policy dimension. The output dimension of a standard transformer block must be equal to or less than the input dimension. This is unacceptable in some MARL tasks, as the action number can be larger than the entity number.\n• Point 2: Ability to handle multiple tasks at a time. This requires a fixed model architecture without new parameters being introduced for new tasks. Unfortunately, if point 1 is satisfied, point 2 becomes very problematic to achieve. The difficulty lies in how to reconcile points 1 and 2.\n• Point 3: Make the model more explainable. It would be preferable if we could replace the conventional RNN-based model with a more explainable policy generation structure.\nFollowing the above three points, we propose three policy decoupling methods, namely the Vanilla Transformer, the Aggregation Transformer and the Universal Policy Decoupling Transformer (UPDeT). The pipelines are illustrated in Fig. 2. The details of the Vanilla Transformer and Aggregation Transformer are presented in the experiment section and act as our baselines. In this section, we mainly discuss the mechanism of our proposed UPDeT.\nTaking the entity features of the last transformer layer outlined in Eq. 5, the main challenge is to build a strong mapping between the features and the policy distribution. UPDeT first matches the input entity with the related output policy part.
This correspondence is easy to find in the MARL task, as interactive actions between two agents are quite common. Once we match the corresponding entity features and actions, we substantially reduce the burden of learning representations with the self-attention mechanism. Moreover, considering that there might be more than one interactive action for a matched entity feature, we separate the action space into several action-groups, each of which consists of several actions matched with one entity. The pipeline of this process is illustrated in the left part of Fig. 3. In the mapping function, to satisfy point 1 and point 2, we adopt two strategies. First, if the action-group of an entity feature contains more than one action, a shared fully connected layer is added to map the output to the dimension of the number of actions. Second, if an entity feature has no corresponding action, we abandon it; there is no danger of losing the information carried by this kind of entity feature, as the transformer has aggregated the information necessary for each output. The pipeline of UPDeT can be found in the right part of Fig. 3. With UPDeT, there is no action restriction and no new parameter introduced in new scenarios. A single model can be trained on multiple tasks and deployed universally. In addition, matching the corresponding entity features and action-groups satisfies point 3, as the policy is explainable via an attention heatmap, as we will discuss in Section 4.4." }, { "heading": "3.4 TEMPORAL UNIT STRUCTURE", "text": "Notably, however, a transformer-based individual value function with a policy decoupling strategy cannot handle a partially observable decision process without trajectory or history information. In a Dec-POMDP (Oliehoek et al. (2016)), each agent a chooses an action according to πa(ua|τa), where u and τ represent the action and the action-observation history, respectively. In GRU and LSTM units, a hidden state is adopted to hold the information of the action-observation history. However, the combination of a transformer block and a hidden state has not yet been fully studied. In this section, we provide two approaches to handling the hidden state in UPDeT:\n1) The global temporal unit treats the hidden state as an additional input of the transformer block. The process is formulated in a similar way to Eq. 5 with the relation R^1 = {h^{t−1}_G, e^t_1} and {h^t_G, e^t_L} = R^L. Here, we drop the subscript i and instead use G to represent ’global’. The global temporal unit is simple but efficient, and provides robust performance in most scenarios.\n2) The individual temporal unit treats the hidden state as an inner part of each entity. In other words, each input maintains its own hidden state, while each output projects a new hidden state for the next time step. The individual temporal unit controls history information more precisely, as it splits the global hidden state into individual parts. We use j to represent the number of entities. The relation between input and output is formulated as R^1 = {h^{t−1}_1 ... h^{t−1}_j, e^t_1} and {h^t_1 ... h^t_j, e^t_L} = R^L. However, this method introduces the additional burden of learning the hidden state independently for each entity. In Section 4.1.2, we test both variants and discuss them further." }, { "heading": "3.5 OPTIMIZATION", "text": "We use the standard squared TD error in DQNs (Mnih et al. (2015)) to optimize our entire framework as follows:\nL(θ) = ∑_{i=1}^{b} [(y^DQN_i − Q(s, u; θ))²] (7)\nHere, b represents the batch size.
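A minimal PyTorch sketch of the squared TD error in Eq. (7) follows; the batch layout and the max-over-actions target are standard DQN conventions (Mnih et al. (2015)) assumed for illustration, not the exact target used with each centralized mixer.

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, target_net, batch, gamma=0.99):
    """Squared TD error of Eq. (7) over a batch of b transitions.
    batch = (s, u, r, s_next, done), with u holding action indices."""
    s, u, r, s_next, done = batch
    q_taken = q_net(s).gather(1, u.unsqueeze(1)).squeeze(1)   # Q(s, u; theta)
    with torch.no_grad():                                     # y^DQN target
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_taken, y)
```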
In partially observable settings, agents can benefit from conditioning on the action-observation history. Hausknecht & Stone (2015) propose Deep Recurrent Q-networks (DRQN) for this sequential decision process. For our part, we replace the widely used GRU (Chung et al. (2014))/LSTM (Hochreiter & Schmidhuber (1997)) unit in DRQN with a transformer-based temporal unit and then train the whole model." }, { "heading": "4 STARCRAFT II EXPERIMENT", "text": "In this section, we evaluate UPDeT and its variants with different policy decoupling methods in the context of challenging micromanagement games in StarCraft II. We compare UPDeT with the RNN-based model on single scenarios and test the transfer capability on multiple-scenario transfer tasks. The experimental results show that UPDeT achieves significant improvement compared to the RNN-based model." }, { "heading": "4.1 SINGLE SCENARIO", "text": "In the single-scenario experiments, we evaluate the model performance on different scenarios from SMAC (Samvelyan et al. (2019)). Specifically, the scenarios considered are as follows: 3 Marines vs 3 Marines (3m, Easy), 8 Marines vs 8 Marines (8m, Easy), 4 Marines vs 5 Marines (4m vs 5m, Hard+) and 5 Marines vs 6 Marines (5m vs 6m, Hard). In all these games, only the units from the player’s side are treated as agents. Dead enemy units are masked out from the action space to ensure that the executed action is valid. More detailed settings can be found in the SMAC environment (Samvelyan et al. (2019))." }, { "heading": "4.1.1 METHODS AND TRAINING DETAILS", "text": "The MARL methods used for evaluation include VDN (Sunehag et al. (2017)), QMIX (Rashid et al. (2018)) and QTRAN (Hostallero et al. (2019)). The original implementations of all three SOTA methods can be found at https://github.com/oxwhirl/pymarl. These methods were selected due to their robust performance across different multi-agent tasks. Other methods, including COMA (Foerster et al. (2017)) and IQL (Tan (1993)), do not perform stably across all tasks, as has been shown in several recent works (Rashid et al. (2018), Mahajan et al. (2019), Zhou et al. (2020)). Therefore, we combined UPDeT with VDN, QMIX and QTRAN to demonstrate that our model can improve performance significantly compared to the GRU-based model." }, { "heading": "4.1.2 RESULT", "text": "The model performance with different policy decoupling methods can be found in Fig. 4a. The Vanilla Transformer is our baseline for all transformer-based models. This transformer only satisfies point 2: each output embedding can either be projected to an action or abandoned. The vanilla transformer fails to beat the enemies in the experiment. The Aggregation Transformer is a variant of the vanilla transformer whose embeddings are aggregated into a global embedding and then projected to a policy distribution. This transformer only satisfies point 1. The performance of the aggregation transformer is worse than that of the GRU-based model. These results prove that it is only with a policy decoupling strategy that the transformer-based model can outperform the conventional RNN-based model. Next, we use UPDeT to find the best temporal unit architecture in Fig. 4b. The results show that without a hidden state, the performance decreases significantly. The temporal unit with a global hidden state is more efficient in terms of convergence speed than the one with individual hidden states. However, the final performances are almost the same.
To test the generalization of our model, we combine UPDeT with VDN / QMIX / QTRAN respectively and compare the final performance with the RNN-based methods in Fig. 4c. We evaluate the model performance on the 5m vs 6m (Hard) scenario. Combined with UPDeT, all three MARL methods obtain significant improvements over the GRU-based model. This result shows that our model can be plugged into any existing state-of-the-art MARL method to yield better performance. Furthermore, we combine UPDeT with VDN and evaluate the model performance on different scenarios from Easy to Hard+ in Fig. 4d and Fig. 4e. The results show that UPDeT performs stably on easy scenarios and significantly outperforms the GRU-based model on hard scenarios; in the 4m vs 5m (Hard+) scenario, the performance improvement achieved by UPDeT relative to the GRU-based model is around 80%. Finally, we conduct an ablation study on UPDeT with paired and unpaired observation-entity—action-group matching, the results of which are presented in Fig. 4f. We disrupt the original correspondence between the 'attack' actions and the enemy units. The final performance drops heavily compared to the original model, and is even worse than the GRU-based model. We accordingly conclude that only with policy decoupling and a paired observation-entity—action-group strategy can UPDeT learn a strong policy." }, { "heading": "4.2 MULTIPLE SCENARIOS", "text": "In this section, we discuss the transfer capability of UPDeT compared to the RNN-based model. We evaluate the model performance in a curriculum style. First, the model is trained on the 3m (3 Marines vs 3 Marines) scenario. We then use the pretrained 3m model to continue training on the 5m (5 Marines vs 5 Marines) and 7m (7 Marines vs 7 Marines) scenarios. We also conduct an experiment in reverse, from 7m to 3m. During transfer learning, the model architecture of UPDeT remains fixed. Considering that the RNN-based model cannot handle varying input and output dimensions, we modify the architecture of the source RNN model when training on the target scenario: we preserve the parameters of the GRU cell and initialize the fully connected layers with the proper input and output dimensions to fit the new scenario. The final results can be seen in Fig. 5a and Fig. 5b. Our proposed UPDeT achieves significantly better results than the GRU-based model. Statistically, UPDeT needs at least 10 times fewer timesteps to converge than the GRU-based model and 100 times fewer than training from scratch. Moreover, the model demonstrates a strong generalization ability without finetuning, indicating that UPDeT learns a robust policy with meta-level skills." }, { "heading": "4.3 EXTENSIVE EXPERIMENT ON LARGE-SCALE MAS", "text": "To evaluate the model performance in large-scale scenarios, we test our proposed UPDeT on the 10m vs 11m and 20m vs 21m scenarios from SMAC and a 64 vs 64 battle game in the MAgent environment (Zheng et al. (2017)). The final results can be found in Appendix E." }, { "heading": "4.4 ATTENTION BASED STRATEGY: AN ANALYSIS", "text": "The significant performance improvement achieved by UPDeT on the SMAC multi-agent challenge can be credited to the self-attention mechanism brought by the transformer blocks together with the policy decoupling strategy of UPDeT. In this section, we mainly discuss how the attention mechanism helps in learning a much more robust and explainable strategy.
Here, we use the 3 Marines vs 3 Marines game (where the size of the raw attention matrix is therefore 6x6) as an example to demonstrate how the attention mechanism works. As mentioned in the caption of Fig. 6, we simplify the raw complete attention matrix to a grouped attention matrix. Fig. 6b presents three different stages in one episode, namely Game Start, Attack and Survive, with their corresponding attention matrices and strategies. In the Game Start stage, the highest attention is in line 1, col 3 of the matrix, indicating that the agent pays more attention to its allies than to its enemies. This phenomenon can be interpreted as follows: in the startup stage of a game, all the allies are spawned on the left side of the map and are encouraged to find and attack the enemies on the right side. In the Attack stage, the highest attention is in line 2, col 2 of the matrix, which indicates that the enemy is now within the agent's attack range; therefore, the agent will attack the enemy to get more reward. Surprisingly, the agent chooses to attack the enemy with the lowest health value. This indicates that a long-term plan can be learned based on the attention mechanism, since killing the weakest enemy first decreases the punishment from future enemy attacks. In the Survive stage, the agent's health value is low, meaning that it needs to avoid being attacked. The highest attention is located in line 1, col 1, which clearly shows that the most important thing under the current circumstances is to stay alive. For as long as the agent is alive, there is still a chance for it to return to the front line and get more reward while the enemies are attacking the allies instead of the agent itself.

In conclusion, the self-attention mechanism and policy decoupling strategy of UPDeT provide a strong and clear relation between attention weights and final strategies. This relation can help us better understand policy generation based on the distribution of attention among different entities. An interesting idea presents itself here: namely, if we can find a strong mapping between the attention matrix and the final policy, the character of the agent could be modified in an unsupervised manner." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose UPDeT, a universal policy decoupling transformer model that extends MARL to much broader scenarios. UPDeT is general enough to be plugged into any existing MARL method. Moreover, our experimental results show that, when combined with UPDeT, existing state-of-the-art MARL methods can achieve further significant improvements with the same training pipeline. On transfer learning tasks, our model is 100 times faster than training from scratch and 10 times faster than training the RNN-based model. In the future, we aim to develop a centralized value function based on UPDeT and apply the self-attention mechanism to the entire MARL pipeline to yield further improvements." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. U19A2073, in part by the National Natural Science Foundation of China (NSFC) under Grants No. 61976233 and No. 61906109, by the Australian Research Council Discovery Early Career Researcher Award (DE190100626), and by the Funding of the “Leading Innovation Team of the Zhejiang Province” (2018R01017)."
}, { "heading": "A DETAILS OF SMAC ENVIRONMENT", "text": "The action space contains four movement directions, k attack actions (where k is the fixed maximum number of the enemy units in a map), stop and none-operation. At each time step, the agents receive a joint team reward, which is defined by the total damage incurred by the agents and the total damage from the enemy side. Each agent is described by several attributes, including health point HP , weapon cool down (CD), unit type, last action and the relative distance of the observed units. The enemy units are described in the same way except that CD is excluded. The partial observation of an agent comprises the attributes of the units, including both the agents and the enemy units, that exist within its view range, which is a circle with a specific radius." }, { "heading": "B DETAILS OF MODEL", "text": "The transformer block in all different experiments consists of 3 heads and 2 layer transformer blocks. The other important training hyper parameters are as follows:\nList of Hyper Parameters Name Value batch size 32 test interval 2000 gamma 0.99 buffer size 5000 token dimension (UPDeT) 32 channel dimension (UPDeT) 32 epsilon start 1.0 epsilon end 0.05 rnn hidden dimension 64 target net update interval 200 mixing embeddding dimension (QMIX) 32 hypernet layers (QMIX) 2 hypernet embedding (QMIX) 64 mixing embeddding dimension (QTRAN) 32 opt loss (QTRAN) 1 nopt min loss (QTRAN) 0.1" }, { "heading": "C SOTA MARL VALUE-BASED FRAMEWORK", "text": "The three SOTA method can be briefly summarized as follows:\n• VDN (Sunehag et al. (2017)): this method learns an individual Q-value function and represents Qtot as a sum of individual Q-value functions that condition only on individual observations and actions.\n• QMIX (Rashid et al. (2018)): this method learns a decentralized Q-function for each agent, with the assumption that the centralized Q-value increases monotonically with the individual Q-values.\n• QTRAN (Hostallero et al. (2019)): this method formulates multi-agent learning as an optimization problem with linear constraints and relaxes it with L2 penalties for tractability." }, { "heading": "D UPDET ON SMAC: A REAL CASE", "text": "We take the 3 Marines vs 3 Marines challenge from SMAC with UPDeT as an example; more details can be found in Fig. 7. The observation are separated into 3 groups: main agent, two other ally agents and three enemies. The policy output includes basic action corresponding to the main agent’s observation and attack actions, one for each enemy observation. The hidden state is added after the embedding layer. The output of other agents is abandoned as there is no corresponding\naction. Once an agent or enemy has died, we mask corresponding unavailable action in the action select stage to ensure only the available actions are selected." }, { "heading": "E RESULTS OF EXTENSIVE EXPERIMENT ON LARGE SCALE", "text": "We further test the robustness of UPDeT in a large-scale multi-agent system. To do so, we enlarge the game size in SMAC (Samvelyan et al. (2019)) to incorporate more agents and enemies on the battle field. We use a 10 Marines vs 11 Marines game and a 20 Marines vs 21 Marines game to compare the performance between the UPDeT and GRU-based approaches. In the 20 Marines vs 21 Marines game, to accelerate the training and satisfy the hardware limitations, we decrease the batch size of both the GRU baseline and UPDeT from 32 to 24 in the training stage. The final results can be found in Fig. 8a. 
}, { "heading": "E RESULTS OF EXTENSIVE EXPERIMENT ON LARGE SCALE", "text": "We further test the robustness of UPDeT in large-scale multi-agent systems. To do so, we enlarge the game size in SMAC (Samvelyan et al. (2019)) to incorporate more agents and enemies on the battlefield. We use a 10 Marines vs 11 Marines game and a 20 Marines vs 21 Marines game to compare the performance of the UPDeT- and GRU-based approaches. In the 20 Marines vs 21 Marines game, to accelerate training and satisfy hardware limitations, we decrease the batch size of both the GRU baseline and UPDeT from 32 to 24 in the training stage. The final results can be found in Fig. 8a. The improvement is still significant in terms of both sample efficiency and final performance. Moreover, it is worth mentioning that the model size of UPDeT stays fixed, while the GRU-based model becomes larger in large-scale scenarios. In the 20 Marines vs 21 Marines game, the model size of the GRU-based model is almost double that of UPDeT. This indicates that UPDeT keeps the model lightweight while still maintaining good performance.

We also test the model performance in the MAgent environment (Zheng et al. (2017)). The settings of MAgent are quite different from those of SMAC. First, the observation size and the number of available actions are not related to the number of agents. Second, the 64 vs 64 battle game we test is a two-player zero-sum game, a setting that combines MARL and game theory (GT); the most successful attempt in this area adopts a mean-field approximation of GT in MARL to accelerate self-play training (Yang et al. (2018)). Third, as for the model architecture, there is no need to use a recurrent network like a GRU in MAgent, and the large observation size requires a CNN for the embedding. However, by treating UPDeT as a pure encoder without a recurrent architecture, we can still conduct experiments on MAgent; the final results can be found in Fig. 8b. As the results show, UPDeT performs better than the DQN baseline, although the improvement is not as significant as in SMAC." } ]
2,021
null
SP:63859002bed6542b5fe469aecb01e3070572885c
[ "In this paper, the authors present and analyze a class of gradient-descent algorithms for solving min-max problems when the first (minimization) variable is constrained to live on a Riemaniann manifold. In the case when i) a retraction and an isometric transport are available on the manifold; and ii) the objective is strongly convex and smooth in the second variable, the authors show convergence rates. Experiments are performed with the setting of minimizing losses of neural nets whose weights are constrained to live in the Stiefeld manifold while an attacker of small norm perturbs the input." ]
In this paper, we study a class of useful non-convex minimax optimization problems on Riemannian manifolds and propose a class of Riemannian gradient descent ascent algorithms to solve these minimax problems. Specifically, we propose a new Riemannian gradient descent ascent (RGDA) algorithm for deterministic minimax optimization. Moreover, we prove that the RGDA has a sample complexity of $O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary point of nonconvex-strongly-concave minimax problems, where $\kappa$ denotes the condition number. At the same time, we introduce a Riemannian stochastic gradient descent ascent (RSGDA) algorithm for stochastic minimax optimization. In the theoretical analysis, we prove that the RSGDA achieves a sample complexity of $O(\kappa^4\epsilon^{-4})$. To further reduce the sample complexity, we propose a novel momentum variance-reduced Riemannian stochastic gradient descent ascent (MVR-RSGDA) algorithm based on the momentum variance-reduced technique of STORM. We prove that the MVR-RSGDA algorithm achieves a lower sample complexity of $\tilde{O}(\kappa^4\epsilon^{-3})$ without large batches, which nearly matches the best known sample complexity of its Euclidean counterparts. Extensive experimental results on robust deep neural network training over the Stiefel manifold demonstrate the efficiency of our proposed algorithms.
[ { "affiliations": [], "name": "MIN-MAX PROB" }, { "affiliations": [], "name": "LEMS ON" }, { "affiliations": [], "name": "RIEMANNIAN MANIFOLDS" } ]
[ { "authors": [ "Nitin Bansal", "Xiaohan Chen", "Zhangyang Wang" ], "title": "Can we gain more from orthogonality regularizations in training deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Radu Ioan Boţ", "Axel Böhm" ], "title": "Alternating proximal-gradient steps for (stochastic) nonconvexconcave minimax problems", "venue": "arXiv preprint arXiv:2007.13605,", "year": 2020 }, { "authors": [ "Ashutosh Chaubey", "Nikhil Agrawal", "Kavya Barnwal", "Keerat K Guliani", "Pramod Mehta" ], "title": "Universal adversarial perturbations: A survey", "venue": "arXiv preprint arXiv:2005.08087,", "year": 2020 }, { "authors": [ "Robert S Chen", "Brendan Lucier", "Yaron Singer", "Vasilis Syrgkanis" ], "title": "Robust optimization for nonconvex objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Michael Cogswell", "Faruk Ahmed", "Ross Girshick", "Larry Zitnick", "Dhruv Batra" ], "title": "Reducing overfitting in deep networks by decorrelating representations", "venue": "arXiv preprint arXiv:1511.06068,", "year": 2015 }, { "authors": [ "Ashok Cutkosky", "Francesco Orabona" ], "title": "Momentum-based variance reduction in non-convex sgd", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Orizon Pereira Ferreira", "LR Lucambio Pérez", "Sandor Z Németh" ], "title": "Singularities of monotone vector fields and an extragradient-type algorithm", "venue": "Journal of Global Optimization,", "year": 2005 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Andi Han", "Junbin Gao" ], "title": "Riemannian stochastic recursive momentum method for non-convex optimization", "venue": "arXiv preprint arXiv:2008.04555,", "year": 2020 }, { "authors": [ "Andi Han", "Junbin Gao" ], "title": "Variance reduction for riemannian non-convex optimization with batch size adaptation", "venue": "arXiv preprint arXiv:2007.01494,", "year": 2020 }, { "authors": [ "Feihu Huang", "Shangqian Gao", "Jian Pei", "Heng Huang" ], "title": "Accelerated zeroth-order momentum methods from mini to minimax optimization", "venue": "arXiv preprint arXiv:2008.08170,", "year": 2020 }, { "authors": [ "Lei Huang", "Xianglong Liu", "Bo Lang", "Adams Wei Yu", "Yongliang Wang", "Bo Li" ], "title": "Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks", "venue": "In 32nd AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Pratik Jawanpuria", "Bamdev Mishra" ], "title": "A unified framework for structured low-rank matrix learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hiroyuki Kasai", "Hiroyuki Sato", "Bamdev Mishra" ], "title": "Riemannian stochastic recursive gradient algorithm", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hiroyuki Kasai", "Pratik Jawanpuria", "Bamdev Mishra" ], 
"title": "Riemannian adaptive stochastic gradient algorithms on matrix manifolds", "venue": "arXiv preprint arXiv:1902.01144,", "year": 2019 }, { "authors": [ "Chong Li", "Genaro López", "Victoria Martı́n-Márquez" ], "title": "Monotone vector fields and the proximal point algorithm on hadamard manifolds", "venue": "Journal of the London Mathematical Society,", "year": 2009 }, { "authors": [ "Jun Li", "Li Fuxin", "Sinisa Todorovic" ], "title": "Efficient riemannian optimization on the stiefel manifold via the cayley transform", "venue": "arXiv preprint arXiv:2002.01113,", "year": 2020 }, { "authors": [ "Tianyi Lin", "Chi Jin", "Michael I Jordan" ], "title": "On gradient descent ascent for nonconvex-concave minimax problems", "venue": "arXiv preprint arXiv:1906.00331,", "year": 2019 }, { "authors": [ "Tianyi Lin", "Chi Jin", "Michael Jordan" ], "title": "Near-optimal algorithms for minimax optimization", "venue": "arXiv preprint arXiv:2002.02417,", "year": 2020 }, { "authors": [ "Mingrui Liu", "Youssef Mroueh", "Jerret Ross", "Wei Zhang", "Xiaodong Cui", "Payel Das", "Tianbao Yang" ], "title": "Towards better understanding of adaptive gradient algorithms in generative adversarial nets", "venue": null, "year": 1912 }, { "authors": [ "Yuanyuan Liu", "Fanhua Shang", "James Cheng", "Hong Cheng", "Licheng Jiao" ], "title": "Accelerated first-order methods for geodesically convex optimization on riemannian manifolds", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Luo Luo", "Haishan Ye", "Tong Zhang" ], "title": "Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems", "venue": "arXiv preprint arXiv:2001.03724,", "year": 2020 }, { "authors": [ "Mayank Meghwanshi", "Pratik Jawanpuria", "Anoop Kunchukuttan", "Hiroyuki Kasai", "Bamdev Mishra" ], "title": "Mctorch, a manifold optimization library for deep learning", "venue": "arXiv preprint arXiv:1810.01811,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal adversarial perturbations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Lam M Nguyen", "Jie Liu", "Katya Scheinberg", "Martin Takáč. 
Sarah" ], "title": "A novel method for machine learning problems using stochastic recursive gradient", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Maher Nouiehed", "Maziar Sanjabi", "Tianjian Huang", "Jason D Lee", "Meisam Razaviyayn" ], "title": "Solving a class of non-convex min-max games using iterative first order methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dmitrii M Ostrovskii", "Andrew Lowy", "Meisam Razaviyayn" ], "title": "Efficient search of first-order nash equilibria in nonconvex-concave smooth min-max problems", "venue": "arXiv preprint arXiv:2002.07919,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Hassan Rafique", "Mingrui Liu", "Qihang Lin", "Tianbao Yang" ], "title": "Non-convex min-max optimization: Provable algorithms and applications in machine learning", "venue": null, "year": 1810 }, { "authors": [ "Hamed Rahimian", "Sanjay Mehrotra" ], "title": "Distributionally robust optimization: A review", "venue": "arXiv preprint arXiv:1908.05659,", "year": 2019 }, { "authors": [ "Meisam Razaviyayn", "Tianjian Huang", "Songtao Lu", "Maher Nouiehed", "Maziar Sanjabi", "Mingyi Hong" ], "title": "Nonconvex min-max optimization: Applications, challenges, and recent theoretical advances", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Hiroyuki Sato", "Hiroyuki Kasai", "Bamdev Mishra" ], "title": "Riemannian stochastic variance reduced gradient algorithm with retraction and vector transport", "venue": "SIAM Journal on Optimization,", "year": 2019 }, { "authors": [ "Ju Sun", "Qing Qu", "John Wright" ], "title": "Complete dictionary recovery over the sphere ii: Recovery by riemannian trust-region method", "venue": "IEEE Transactions on Information Theory,", "year": 2016 }, { "authors": [ "Yifan Sun", "Liang Zheng", "Weijian Deng", "Shengjin Wang" ], "title": "Svdnet for pedestrian retrieval", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Kiran K Thekumparampil", "Prateek Jain", "Praneeth Netrapalli", "Sewoong Oh" ], "title": "Efficient algorithms for smooth minimax optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bart Vandereycken" ], "title": "Low-rank matrix completion by riemannian optimization", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "John Von Neumann", "Oskar Morgenstern" ], "title": "Theory of games and economic behavior (commemorative edition)", "venue": "Princeton university press,", "year": 2007 }, { "authors": [ "JH Wang", "G López", "Victoria Martı́n-Márquez", "Chong Li" ], "title": "Monotone and accretive vector fields on riemannian manifolds", "venue": "Journal of optimization theory and applications,", "year": 2010 }, { "authors": [ "Di Xie", "Jiang Xiong", "Shiliang Pu" ], "title": "All you need is beyond a good init: Exploring better solution for training extremely deep convolutional neural networks with orthonormality and modulation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 
}, { "authors": [ "Tengyu Xu", "Zhe Wang", "Yingbin Liang", "H Vincent Poor" ], "title": "Enhanced first and zeroth order variance reduced algorithms for min-max optimization", "venue": "arXiv preprint arXiv:2006.09361,", "year": 2020 }, { "authors": [ "Zi Xu", "Huiling Zhang", "Yang Xu", "Guanghui Lan" ], "title": "A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems", "venue": "arXiv preprint arXiv:2006.02032,", "year": 2020 }, { "authors": [ "Yan Yan", "Yi Xu", "Qihang Lin", "Wei Liu", "Tianbao Yang" ], "title": "Sharp analysis of epoch stochastic gradient descent ascent methods for min-max optimization", "venue": "arXiv preprint arXiv:2002.05309,", "year": 2020 }, { "authors": [ "Junchi Yang", "Negar Kiyavash", "Niao He" ], "title": "Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems", "venue": "arXiv preprint arXiv:2002.09621,", "year": 2020 }, { "authors": [ "Hongyi Zhang", "Suvrit Sra" ], "title": "First-order methods for geodesically convex optimization", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Sashank J Reddi", "Suvrit Sra" ], "title": "Riemannian svrg: Fast stochastic optimization on riemannian manifolds", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jingzhao Zhang", "Hongyi Zhang", "Suvrit Sra" ], "title": "R-spider: A fast riemannian stochastic optimization algorithm with curvature independent rate", "venue": "arXiv preprint arXiv:1811.04194,", "year": 2018 }, { "authors": [ "Kaiqing Zhang", "Zhuoran Yang", "Tamer Başar" ], "title": "Multi-agent reinforcement learning: A selective overview of theories and algorithms", "venue": "arXiv preprint arXiv:1911.10635,", "year": 2019 }, { "authors": [ "Kaiqing Zhang", "Sham M Kakade", "Tamer Başar", "Lin F Yang" ], "title": "Model-based multi-agent rl in zero-sum markov games with near-optimal sample complexity", "venue": "arXiv preprint arXiv:2007.07461,", "year": 2020 }, { "authors": [ "Pan Zhou", "Xiaotong Yuan", "Shuicheng Yan", "Jiashi Feng" ], "title": "Faster first-order methods for stochastic non-convex optimization on riemannian manifolds. IEEE transactions on pattern analysis and machine intelligence, 2019", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the paper, we study a class of useful non-convex minimax (a.k.a. min-max) problems on the Riemannian manifoldM with the definition as:\nmin x∈M max y∈Y f(x, y), (1)\nwhere the function f(x, y) is µ-strongly concave in y but possibly nonconvex in x. Here Y ⊆ Rd is a convex and closed set. f(·, y) : M → R for all y ∈ Y is a smooth but possibly nonconvex real-valued function on manifoldM, and f(x, ·) : Y → R for all x ∈ M a smooth and (strongly)concave real-valued function. In this paper, we mainly focus on the stochastic minimax optimization problem f(x, y) := Eξ∼D[f(x, y; ξ)], where ξ is a random variable that follows an unknown distribution D. In fact, the problem (1) is associated to many existing machine learning applications: 1). Robust Training DNNs over Riemannian manifold. Deep Neural Networks (DNNs) recently have been demonstrating exceptional performance on many machine learning applications. However, they are vulnerable to the adversarial example attacks, which show that a small perturbation in the data input can significantly change the output of DNNs. Thus, the security properties of DNNs have been widely studied. One of secured DNN research topics is to enhance the robustness of DNNs under the adversarial example attacks. To be more specific, given training data D := {ξi = (ai, bi)}ni=1, where ai ∈ Rd and bi ∈ R represent the features and label of sample ξi respectively. Each data sample ai can be corrupted by a universal small perturbation vector y to generate an adversarial attack sample ai + y, as in (Moosavi-Dezfooli et al., 2017; Chaubey et al., 2020). To make DNNs robust against adversarial attacks, one popular approach is to solve the following robust training problem:\nmin x max y∈Y\n1\nn n∑ i=1 `(h(ai + y;x), bi) , (2)\nwhere y ∈ Rd denotes a universal perturbation, and x is the weight of the neural network; h(·;x) is the the deep neural network parameterized by x; and `(·) is the loss function. Here the constraint Y = {y : ‖y‖ ≤ ε} indicates that the poisoned samples should not be too different from the original ones.\nRecently, the orthonormality on weights of DNNs has gained much interest and has been found to be useful across different tasks such as person re-identification (Sun et al., 2017) and image classification (Xie et al., 2017). In fact, the orthonormality constraints improve the performances of DNNs (Li et al., 2020; Bansal et al., 2018), and reduce overfitting to improve generalization (Cogswell et al., 2015). At the same time, the orthonormality can stabilize the distribution of activations over layers within DNNs (Huang et al., 2018). Thus, we consider the following robust training problem over the Stiefel manifoldM:\nmin x∈M max y∈Y\n1\nn n∑ i=1 `(h(ai + y;x), bi). (3)\nWhen data are continuously coming, we can rewrite the problem (3) as follows:\nmin x∈M max y∈Y\nEξ[f(x, y; ξ)], (4)\nwhere f(x, y; ξ) = `(h(a+ y;x), b) with ξ = (a, b).\n2). Distributionally Robust Optimization over Riemannian manifold. Distributionally robust optimization (DRO) (Chen et al., 2017; Rahimian & Mehrotra, 2019) is an effective method to deal with the noisy data, adversarial data, and imbalanced data. At the same time, the DRO in the Riemannian manifold setting is also widely applied in machine learning problems such as robust principal component analysis (PCA). 
2). Distributionally robust optimization over a Riemannian manifold. Distributionally robust optimization (DRO) (Chen et al., 2017; Rahimian & Mehrotra, 2019) is an effective method for dealing with noisy data, adversarial data, and imbalanced data. At the same time, DRO in the Riemannian manifold setting is widely applied in machine learning problems such as robust principal component analysis (PCA). To be more specific, given a set of data samples $\{\xi_i\}_{i=1}^n$, the DRO over a Riemannian manifold $\mathcal{M}$ can be written as the following minimax problem:

$$\min_{x\in\mathcal{M}} \max_{p\in S} \Big\{ \sum_{i=1}^{n} p_i\,\ell(x;\xi_i) - \big\|p - \tfrac{1}{n}\big\|^2 \Big\}, \qquad (5)$$

where $p = (p_1,\cdots,p_n)$ and $S = \{p\in\mathbb{R}^n : \sum_{i=1}^n p_i = 1,\ p_i \ge 0\}$. Here $\ell(x;\xi_i)$ denotes a loss function over the Riemannian manifold $\mathcal{M}$, which applies to many machine learning problems such as PCA (Han & Gao, 2020a), dictionary learning (Sun et al., 2016), DNNs (Huang et al., 2018), and structured low-rank matrix learning (Jawanpuria & Mishra, 2018), among others. For example, the task of PCA can be cast on a Grassmann manifold.

To the best of our knowledge, the existing explicit minimax optimization methods such as gradient descent ascent only focus on minimax problems in Euclidean space. To fill this gap, in this paper we propose a class of efficient Riemannian gradient descent ascent algorithms to solve problem (1) via general retractions and vector transports. When problem (1) is deterministic, we propose a new deterministic Riemannian gradient descent ascent algorithm. When problem (1) is stochastic, we propose two efficient stochastic Riemannian gradient descent ascent algorithms. Our main contributions can be summarized as follows:

1) We propose a novel Riemannian gradient descent ascent (RGDA) algorithm for the deterministic minimax optimization problem (1). We prove that the RGDA has a sample complexity of $O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary point.

2) We also propose a new Riemannian stochastic gradient descent ascent (RSGDA) algorithm for stochastic minimax optimization. In the theoretical analysis, we prove that the RSGDA has a sample complexity of $O(\kappa^4\epsilon^{-4})$.

3) To further reduce the sample complexity, we introduce a novel momentum variance-reduced Riemannian stochastic gradient descent ascent (MVR-RSGDA) algorithm based on the momentum variance-reduced technique of STORM (Cutkosky & Orabona, 2019). We prove that the MVR-RSGDA achieves a lower sample complexity of $\tilde{O}(\kappa^4\epsilon^{-3})$ (please see Table 1), which nearly matches the best known sample complexity of its Euclidean counterparts.

4) Extensive experimental results on robust DNN training over the Stiefel manifold demonstrate the efficiency of our proposed algorithms." }, { "heading": "2 RELATED WORKS", "text": "In this section, we briefly review minimax optimization and Riemannian manifold optimization research." }, { "heading": "2.1 MINIMAX OPTIMIZATION", "text": "Minimax optimization has recently been widely applied in many machine learning problems such as adversarial training (Goodfellow et al., 2014; Liu et al., 2019), reinforcement learning (Zhang et al., 2019; 2020), and distribution learning (Razaviyayn et al., 2020). At the same time, many efficient min-max methods (Rafique et al., 2018; Lin et al., 2019; Nouiehed et al., 2019; Thekumparampil et al., 2019; Lin et al., 2020; Yang et al., 2020; Ostrovskii et al., 2020; Yan et al., 2020; Xu et al., 2020a; Luo et al., 2020; Xu et al., 2020b; Boţ & Böhm, 2020; Huang et al., 2020) have been proposed for solving these minimax optimization problems. For example, Thekumparampil et al. (2019) have proposed a class of efficient dual implicit accelerated gradient algorithms to solve smooth min-max optimization. Lin et al. (2019) have proposed a class of efficient gradient descent ascent methods for non-convex minimax optimization. Subsequently, accelerated first-order algorithms (Lin et al., 2020) have been proposed for minimax optimization.
Xu et al. (2020b) have proposed a unified single-loop alternating gradient projection algorithm for (non)convex-(non)concave minimax problems. Ostrovskii et al. (2020) have proposed an efficient algorithm for finding first-order Nash equilibria in nonconvex-concave minimax problems. Xu et al. (2020a); Luo et al. (2020) have proposed a class of fast stochastic variance-reduced GDA algorithms to solve stochastic minimax problems. More recently, Huang et al. (2020) have presented a class of new momentum-based first-order and zeroth-order descent ascent methods for nonconvex-strongly-concave minimax problems." }, { "heading": "2.2 RIEMANNIAN MANIFOLD OPTIMIZATION", "text": "Riemannian manifold optimization methods have been widely applied in machine learning problems including dictionary learning (Sun et al., 2016), matrix factorization (Vandereycken, 2013), and DNNs (Huang et al., 2018). Many Riemannian optimization methods have recently been proposed. For example, Zhang & Sra (2016); Liu et al. (2017) have proposed efficient first-order gradient methods for geodesically convex functions. Subsequently, Zhang et al. (2016) have presented fast stochastic variance-reduced methods for Riemannian manifold optimization. More recently, Sato et al. (2019) have proposed fast first-order gradient algorithms for Riemannian manifold optimization by using general retractions and vector transports. Based on these retractions and vector transports, several fast Riemannian gradient-based methods (Zhang et al., 2018; Kasai et al., 2018; Zhou et al., 2019; Han & Gao, 2020a) have been proposed for non-convex optimization, and Riemannian Adam-type algorithms (Kasai et al., 2019) have been introduced for matrix manifold optimization. In addition, some algorithms (Ferreira et al., 2005; Li et al., 2009; Wang et al., 2010) have been studied for variational inequalities on Riemannian manifolds, which are implicit min-max problems on Riemannian manifolds.

Notations: $\|\cdot\|$ denotes the $\ell_2$ norm for vectors and the spectral norm for matrices. $\langle x, y\rangle$ denotes the inner product of two vectors $x$ and $y$. For a function $f(x,y)$, $f(x,\cdot)$ denotes the function w.r.t. the second variable with $x$ fixed, and $f(\cdot,y)$ denotes the function w.r.t. the first variable with $y$ fixed. Given a convex closed set $\mathcal{Y}$, we define the projection onto $\mathcal{Y}$ as $P_{\mathcal{Y}}(y_0) = \arg\min_{y\in\mathcal{Y}} \frac{1}{2}\|y - y_0\|^2$. We write $a = O(b)$ if $a \le Cb$ for some constant $C > 0$, and the notation $\tilde{O}(\cdot)$ hides logarithmic terms. $I_d$ denotes the $d$-dimensional identity matrix. The operation $\oplus$ denotes the Whitney sum. Given $\mathcal{B}_t = \{\xi_t^i\}_{i=1}^B$ for any $t \ge 1$, let $\nabla f_{\mathcal{B}_t}(x,y) = \frac{1}{B}\sum_{i=1}^B \nabla f(x,y;\xi_t^i)$." }, { "heading": "3 PRELIMINARIES", "text": "In this section, we first revisit some basic facts about the Riemannian manifold $\mathcal{M}$. In general, the manifold $\mathcal{M}$ is endowed with a smooth inner product $\langle\cdot,\cdot\rangle_x : T_x\mathcal{M}\times T_x\mathcal{M}\to\mathbb{R}$ on the tangent space $T_x\mathcal{M}$ for every $x\in\mathcal{M}$. The induced norm $\|\cdot\|_x$ of a tangent vector in $T_x\mathcal{M}$ is associated with the Riemannian metric. We first define a retraction $R_x : T_x\mathcal{M}\to\mathcal{M}$ mapping the tangent space $T_x\mathcal{M}$ onto $\mathcal{M}$ with a local rigidity condition that preserves gradients at $x\in\mathcal{M}$ (please see Fig. 1(a)). The retraction $R_x$ satisfies both of the following: 1) $R_x(0) = x$, where $0\in T_x\mathcal{M}$; 2) $\langle\nabla R_x(0), u\rangle_x = u$ for $u\in T_x\mathcal{M}$, i.e., the differential of $R_x$ at $0$ is the identity. In fact, the exponential mapping $\mathrm{Exp}_x$ is a special case of a retraction, and a general retraction $R_x$ locally approximates the exponential mapping $\mathrm{Exp}_x$ to first order on the manifold. A concrete instance of these primitives on the Stiefel manifold is sketched below.
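A minimal sketch of one standard way to instantiate these primitives on the Stiefel manifold $\mathrm{St}(r,d)$ used in the experiments: tangent-space projection (which yields a Riemannian gradient from a Euclidean one), a QR-based retraction, and a projection-based vector transport. These are common textbook choices, also available in libraries such as McTorch; the analysis below only requires some retraction and an isometric vector transport, and the projection transport is a practical surrogate that need not be exactly isometric.

```python
# A minimal sketch of Stiefel-manifold primitives; standard choices, assumed here
# rather than taken from the paper's code.
import torch

def tangent_proj(x, g):
    # Projection onto the tangent space of St(r, d) at x (embedded metric):
    # P_x(g) = g - x sym(x^T g), with sym(A) = (A + A^T) / 2.
    xtg = x.T @ g
    return g - x @ ((xtg + xtg.T) / 2)

def retract_qr(x, u):
    # QR-based retraction R_x(u) = qf(x + u); fixing the signs of diag(R) makes
    # the factorization unique, so R_x(0) = x (assumes diag(R) has no zeros).
    q, r = torch.linalg.qr(x + u)
    return q * torch.sign(torch.diagonal(r)).unsqueeze(0)

def transport(x_new, v):
    # Projection-based vector transport: re-project v onto the tangent space at x_new.
    return tangent_proj(x_new, v)
```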
Next, we define a vector transport $\mathcal{T} : T\mathcal{M}\oplus T\mathcal{M}\to T\mathcal{M}$ (please see Fig. 1(b)) that satisfies all of the following: 1) $\mathcal{T}$ has an associated retraction $R$, i.e., for $x\in\mathcal{M}$ and $u, w\in T_x\mathcal{M}$, $\mathcal{T}_u w$ is a tangent vector at $R_x(u)$; 2) $\mathcal{T}_0 v = v$; 3) $\mathcal{T}_u(av + bw) = a\mathcal{T}_u v + b\mathcal{T}_u w$ for all $a, b\in\mathbb{R}$ and $u, v, w\in T\mathcal{M}$. The vector transport $\mathcal{T}_x^y v$, or equivalently $\mathcal{T}_u v$ with $y = R_x(u)$, transports $v\in T_x\mathcal{M}$ along the retraction curve defined by the direction $u$. Here we focus on isometric vector transports $\mathcal{T}_x^y$, which satisfy $\langle u, v\rangle_x = \langle\mathcal{T}_x^y u, \mathcal{T}_x^y v\rangle_y$ for all $u, v\in T_x\mathcal{M}$. Let $\nabla f(x,y) = (\nabla_x f(x,y), \nabla_y f(x,y))$ denote the gradient in Euclidean space, and let $\mathrm{grad} f(x,y) = (\mathrm{grad}_x f(x,y), \mathrm{grad}_y f(x,y)) = \mathrm{Proj}_{T_x\mathcal{M}}(\nabla f(x,y))$ denote the Riemannian gradient on the tangent space $T_x\mathcal{M}$, where $\mathrm{Proj}_{\mathcal{X}}(z) = \arg\min_{x\in\mathcal{X}}\|x - z\|$ is a projection operator. Based on the above definitions, we provide some standard assumptions about problem (1). Although problem (1) is non-convex, following (Von Neumann & Morgenstern, 2007), there exists a local solution or stationary point $(x^*, y^*)$ that satisfies the Nash equilibrium condition, i.e.,

$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*),$$

for all $x\in\mathcal{X}$ and $y\in\mathcal{Y}$. Here $\mathcal{X}\subset\mathcal{M}$ is a neighbourhood around an optimal point $x^*$.

Assumption 1. $\mathcal{X}$ is compact. Each component function $f(x,y)$ is twice continuously differentiable in both $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, and there exist constants $L_{11}$, $L_{12}$, $L_{21}$ and $L_{22}$ such that for every $x, x_1, x_2\in\mathcal{X}$ and $y, y_1, y_2\in\mathcal{Y}$, we have

$$\|\mathrm{grad}_x f(x_1, y; \xi) - \mathcal{T}_{x_2}^{x_1}\,\mathrm{grad}_x f(x_2, y; \xi)\| \le L_{11}\|u\|,$$
$$\|\mathrm{grad}_x f(x, y_1; \xi) - \mathrm{grad}_x f(x, y_2; \xi)\| \le L_{12}\|y_1 - y_2\|,$$
$$\|\nabla_y f(x_1, y; \xi) - \nabla_y f(x_2, y; \xi)\| \le L_{21}\|u\|,$$
$$\|\nabla_y f(x, y_1; \xi) - \nabla_y f(x, y_2; \xi)\| \le L_{22}\|y_1 - y_2\|,$$

where $u\in T_{x_1}\mathcal{M}$ and $x_2 = R_{x_1}(u)$.

Assumption 1 is commonly used in Riemannian optimization (Sato et al., 2019; Han & Gao, 2020a) and min-max optimization (Lin et al., 2019; Luo et al., 2020; Xu et al., 2020b). Here, the constants $L_{11}$, $L_{12}$ and $L_{21}$ implicitly contain the curvature information, as in (Sato et al., 2019; Han & Gao, 2020a). Specifically, Assumption 1 implies that the partial Riemannian gradient $\mathrm{grad}_x f(\cdot, y; \xi)$ for all $y\in\mathcal{Y}$ is retraction $L_{11}$-Lipschitz continuous as in (Han & Gao, 2020a), and that the partial gradient $\nabla_y f(x, \cdot; \xi)$ for all $x\in\mathcal{X}$ is $L_{22}$-Lipschitz continuous as in (Lin et al., 2019). Since $\|\mathrm{grad}_x f(x, y_1; \xi) - \mathrm{grad}_x f(x, y_2; \xi)\| = \|\mathrm{Proj}_{T_x\mathcal{M}}(\nabla_x f(x, y_1; \xi)) - \mathrm{Proj}_{T_x\mathcal{M}}(\nabla_x f(x, y_2; \xi))\| \le \|\nabla_x f(x, y_1; \xi) - \nabla_x f(x, y_2; \xi)\| \le L_{12}\|y_1 - y_2\|$, the second inequality above follows from the $L_{12}$-Lipschitz continuity of $\nabla_x f(x, \cdot; \xi)$ for all $x\in\mathcal{X}$. Let the partial Riemannian gradient $\mathrm{grad}_y f(\cdot, y; \xi)$ for all $y\in\mathcal{Y}$ be retraction $\tilde{L}_{21}$-Lipschitz, i.e., $\|\mathrm{grad}_y f(x_1, y; \xi) - \mathcal{T}_{x_2}^{x_1}\,\mathrm{grad}_y f(x_2, y; \xi)\| \le \tilde{L}_{21}\|u\|$. Since $\|\mathrm{grad}_y f(x_1, y; \xi) - \mathcal{T}_{x_2}^{x_1}\,\mathrm{grad}_y f(x_2, y; \xi)\| = \|\mathrm{Proj}_{T_x\mathcal{M}}(\nabla_y f(x_1, y; \xi)) - \mathcal{T}_{x_2}^{x_1}\,\mathrm{Proj}_{T_x\mathcal{M}}(\nabla_y f(x_2, y; \xi))\| \le \|\nabla_y f(x_1, y; \xi) - \nabla_y f(x_2, y; \xi)\| \le L_{21}\|u\|$, we have $L_{21} \ge \tilde{L}_{21}$.

For the deterministic problem, we use $f(x,y)$ instead of $f(x,y;\xi)$ in Assumption 1. Since $f(x,y)$ is strongly concave in $y\in\mathcal{Y}$, there exists a unique solution to the problem $\max_{y\in\mathcal{Y}} f(x,y)$ for any $x$. We define the function $\Phi(x) = \max_{y\in\mathcal{Y}} f(x,y)$ and $y^*(x) = \arg\max_{y\in\mathcal{Y}} f(x,y)$.

Assumption 2. The function $\Phi(x)$ is retraction $L$-smooth: there exists a constant $L > 0$ such that, for all $x\in\mathcal{X}$ and $z = R_x(u)$ with $u\in T_x\mathcal{M}$,

$$\Phi(z) \le \Phi(x) + \langle\mathrm{grad}\,\Phi(x), u\rangle + \frac{L}{2}\|u\|^2. \qquad (6)$$
Assumption 3. The objective function $f(x,y)$ is $\mu$-strongly concave w.r.t. $y$, i.e., for any $x\in\mathcal{M}$,

$$f(x, y_1) \le f(x, y_2) + \langle\nabla_y f(x, y_2), y_1 - y_2\rangle - \frac{\mu}{2}\|y_1 - y_2\|^2, \quad \forall y_1, y_2\in\mathcal{Y}. \qquad (7)$$

Assumption 4. The function $\Phi(x)$ is bounded from below on $\mathcal{M}$, i.e., $\Phi^* = \inf_{x\in\mathcal{M}}\Phi(x) > -\infty$.

Assumption 5. The variance of the stochastic gradient is bounded, i.e., there exists a constant $\sigma_1 > 0$ such that for all $x$, $\mathbb{E}_\xi\|\mathrm{grad}_x f(x,y;\xi) - \mathrm{grad}_x f(x,y)\|^2 \le \sigma_1^2$, and there exists a constant $\sigma_2 > 0$ such that for all $y$, $\mathbb{E}_\xi\|\nabla_y f(x,y;\xi) - \nabla_y f(x,y)\|^2 \le \sigma_2^2$. We also define $\sigma = \max\{\sigma_1, \sigma_2\}$.

Assumption 2 imposes retraction smoothness of the function $\Phi(x)$, as in Sato et al. (2019); Han & Gao (2020b;a). Assumption 3 imposes strong concavity of $f(x,y)$ in the variable $y$, as in (Lin et al., 2019; Luo et al., 2020). Assumption 4 guarantees the feasibility of the nonconvex-strongly-concave problems, as in (Lin et al., 2019; Luo et al., 2020). Assumption 5 imposes bounded variance of the stochastic (Riemannian) gradients, which is commonly used in stochastic optimization (Han & Gao, 2020b; Lin et al., 2019; Luo et al., 2020)." }, { "heading": "4 RIEMANNIAN GRADIENT DESCENT ASCENT", "text": "In this section, we propose a class of Riemannian gradient descent ascent algorithms to solve the deterministic and stochastic minimax optimization problem (1), respectively." }, { "heading": "4.1 RGDA AND RSGDA ALGORITHMS", "text": "In this subsection, we propose an efficient Riemannian gradient descent ascent (RGDA) algorithm to solve the deterministic min-max problem (1). At the same time, we propose a standard Riemannian stochastic gradient descent ascent (RSGDA) algorithm to solve the stochastic min-max problem (1). Algorithm 1 summarizes the algorithmic framework of our RGDA and RSGDA algorithms.

At step 5 of Algorithm 1, we apply the retraction operator to ensure that the variable $x_t$ stays on the manifold $\mathcal{M}$ for all $t \ge 1$. At step 6 of Algorithm 1, we use $0 < \eta_t \le 1$ to ensure that the variable $y_t$ stays in the convex constraint set $\mathcal{Y}$ for all $t \ge 1$. We define the following metric to measure convergence:

$$H_t = \|\mathrm{grad}\,\Phi(x_t)\| + \tilde{L}\|y_t - y^*(x_t)\|, \qquad (10)$$

where $\tilde{L} = \max(1, L_{11}, L_{12}, L_{21}, L_{22})$; the first term of $H_t$ measures convergence of the iterates $\{x_t\}_{t=1}^T$, and the last term measures convergence of the iterates $\{y_t\}_{t=1}^T$. Since the function $f(x,y)$ is strongly concave in $y\in\mathcal{Y}$, there exists a unique solution $y^*(x)$ to the problem $\max_{y\in\mathcal{Y}} f(x,y)$ for any $x\in\mathcal{M}$. Thus, we apply the standard metric $\|y_t - y^*(x_t)\|$ to measure convergence of the parameter $y$. Given $y = y^*(x_t)$, we use the standard metric $\|\mathrm{grad}\,\Phi(x_t)\| = \|\mathrm{grad}_x f(x_t, y^*(x_t))\|$ to measure convergence of the parameter $x$. Note that we use the coefficient $\tilde{L}$ to balance the scales of the metrics for the variables $x$ and $y$.

Algorithm 1 RGDA and RSGDA Algorithms for Min-Max Optimization
1: Input: $T$, parameters $\{\gamma, \lambda, \eta_t\}_{t=1}^T$, mini-batch size $B$, and initial inputs $x_1\in\mathcal{M}$, $y_1\in\mathcal{Y}$;
2: for $t = 1, 2, \ldots, T$ do
3: (RGDA) Compute the deterministic gradients
$$v_t = \mathrm{grad}_x f(x_t, y_t), \quad w_t = \nabla_y f(x_t, y_t); \qquad (8)$$
4: (RSGDA) Draw $B$ i.i.d. samples $\{\xi_t^i\}_{i=1}^B$, then compute the stochastic gradients
$$v_t = \frac{1}{B}\sum_{i=1}^B \mathrm{grad}_x f(x_t, y_t; \xi_t^i), \quad w_t = \frac{1}{B}\sum_{i=1}^B \nabla_y f(x_t, y_t; \xi_t^i); \qquad (9)$$
5: Update: $x_{t+1} = R_{x_t}(-\gamma\eta_t v_t)$;
6: Update: $\tilde{y}_{t+1} = P_{\mathcal{Y}}(y_t + \lambda w_t)$ and $y_{t+1} = y_t + \eta_t(\tilde{y}_{t+1} - y_t)$;
7: end for
8: Output: $x_\zeta$ and $y_\zeta$ chosen uniformly at random from $\{x_t, y_t\}_{t=1}^T$.

Algorithm 2 MVR-RSGDA Algorithm for Min-Max Optimization
1: Input: $T$, parameters $\{\gamma, \lambda, b, m, c_1, c_2\}$ and initial inputs $x_1\in\mathcal{M}$ and $y_1\in\mathcal{Y}$;
2: Draw $B$ i.i.d.
samples $\mathcal{B}_1 = \{\xi_1^i\}_{i=1}^B$, then compute $v_1 = \mathrm{grad}_x f_{\mathcal{B}_1}(x_1, y_1)$ and $w_1 = \nabla_y f_{\mathcal{B}_1}(x_1, y_1)$;
3: for $t = 1, 2, \ldots, T$ do
4: Compute $\eta_t = \frac{b}{(m+t)^{1/3}}$;
5: Update: $x_{t+1} = R_{x_t}(-\gamma\eta_t v_t)$;
6: Update: $\tilde{y}_{t+1} = P_{\mathcal{Y}}(y_t + \lambda w_t)$ and $y_{t+1} = y_t + \eta_t(\tilde{y}_{t+1} - y_t)$;
7: Compute $\alpha_{t+1} = c_1\eta_t^2$ and $\beta_{t+1} = c_2\eta_t^2$;
8: Draw $B$ i.i.d. samples $\mathcal{B}_{t+1} = \{\xi_{t+1}^i\}_{i=1}^B$, then compute
$$v_{t+1} = \mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_{t+1}, y_{t+1}) + (1 - \alpha_{t+1})\,\mathcal{T}_{x_t}^{x_{t+1}}\big[v_t - \mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_t, y_t)\big], \qquad (12)$$
$$w_{t+1} = \nabla_y f_{\mathcal{B}_{t+1}}(x_{t+1}, y_{t+1}) + (1 - \beta_{t+1})\big[w_t - \nabla_y f_{\mathcal{B}_{t+1}}(x_t, y_t)\big]; \qquad (13)$$
9: end for
10: Output: $x_\zeta$ and $y_\zeta$ chosen uniformly at random from $\{x_t, y_t\}_{t=1}^T$." }, { "heading": "4.2 MVR-RSGDA ALGORITHM", "text": "In this subsection, we propose a novel momentum variance-reduced Riemannian stochastic gradient descent ascent (MVR-RSGDA) algorithm to solve the stochastic min-max problem (1), which builds on the momentum-based variance reduction technique of STORM (Cutkosky & Orabona, 2019). Algorithm 2 describes the algorithmic framework of the MVR-RSGDA method.

In Algorithm 2, we use the momentum-based variance-reduced technique of STORM to update the stochastic Riemannian gradient $v_t$:

$$v_{t+1} = \alpha_{t+1}\underbrace{\mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_{t+1}, y_{t+1})}_{\text{SGD}} + (1 - \alpha_{t+1})\underbrace{\Big(\mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_{t+1}, y_{t+1}) - \mathcal{T}_{x_t}^{x_{t+1}}\big(\mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_t, y_t) - v_t\big)\Big)}_{\text{SPIDER}} = \mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_{t+1}, y_{t+1}) + (1 - \alpha_{t+1})\,\mathcal{T}_{x_t}^{x_{t+1}}\big(v_t - \mathrm{grad}_x f_{\mathcal{B}_{t+1}}(x_t, y_t)\big), \qquad (11)$$

where $\alpha_{t+1}\in(0,1]$. When $\alpha_{t+1} = 1$, $v_t$ degenerates to a vanilla stochastic Riemannian gradient; when $\alpha_{t+1} = 0$, $v_t$ degenerates to a stochastic Riemannian gradient based on the variance-reduced technique of SPIDER (Nguyen et al., 2017; Fang et al., 2018). Similarly, we use this momentum-based variance-reduced technique to estimate the stochastic gradient $w_t$." }, { "heading": "5 CONVERGENCE ANALYSIS", "text": "In this section, we study the convergence properties of our RGDA, RSGDA, and MVR-RSGDA algorithms under some mild conditions. For notational simplicity, let $\tilde{L} = \max(1, L_{11}, L_{12}, L_{21}, L_{22})$, and let $\kappa = L_{21}/\mu$ denote the condition number of the function $f(x,y)$. We first give a useful lemma.

Lemma 1. Under the assumptions in §3, the gradient of the function $\Phi(x) = \max_{y\in\mathcal{Y}} f(x,y)$ is retraction $G$-Lipschitz, and the mapping $y^*(x) = \arg\max_{y\in\mathcal{Y}} f(x,y)$ is retraction $\kappa$-Lipschitz. Given any $x_1, x_2 = R_{x_1}(u)\in\mathcal{X}\subset\mathcal{M}$ and $u\in T_{x_1}\mathcal{M}$, we have:

$$\|\mathrm{grad}\,\Phi(x_1) - \mathcal{T}_{x_2}^{x_1}\,\mathrm{grad}\,\Phi(x_2)\| \le G\|u\|, \quad \|y^*(x_1) - y^*(x_2)\| \le \kappa\|u\|, \qquad (14)$$

where $G = \kappa L_{12} + L_{11}$ and $\kappa = L_{21}/\mu$." }, { "heading": "5.1 CONVERGENCE ANALYSIS OF BOTH THE RGDA AND RSGDA ALGORITHMS", "text": "In this subsection, we study the convergence properties of the deterministic RGDA and stochastic RSGDA algorithms. The related proofs are provided in Appendix A.1.

Theorem 1. Suppose the sequence $\{x_t, y_t\}_{t=1}^T$ is generated from Algorithm 1 by using deterministic gradients. Given $\eta = \eta_t$ for all $t \ge 1$, $0 < \eta \le \min(1, \frac{1}{2\gamma L})$, $0 < \lambda \le \frac{1}{6\tilde{L}}$ and $0 < \gamma \le \frac{\mu\lambda}{10\tilde{L}\kappa}$, we have

$$\frac{1}{T}\sum_{t=1}^T\Big[\|\mathrm{grad}\,\Phi(x_t)\| + \tilde{L}\|y_t - y^*(x_t)\|\Big] \le \frac{2\sqrt{\Phi(x_1) - \Phi^*}}{\sqrt{\gamma\eta T}}. \qquad (15)$$

Remark 1. Since $0 < \eta \le \min(1, \frac{1}{2\gamma L})$ and $0 < \gamma \le \frac{\mu\lambda}{10\tilde{L}\kappa}$, we have $0 < \eta\gamma \le \min(\frac{\mu\lambda}{10\tilde{L}\kappa}, \frac{1}{2L})$. Letting $\eta\gamma = \min(\frac{\mu\lambda}{10\tilde{L}\kappa}, \frac{1}{2L})$, we have $\eta\gamma = O(\frac{1}{\kappa^2})$. The RGDA algorithm thus has a convergence rate of $O(\frac{\kappa}{T^{1/2}})$. By $\frac{\kappa}{T^{1/2}} \le \epsilon$, i.e., $\mathbb{E}[H_\zeta] \le \epsilon$, we choose $T \ge \kappa^2\epsilon^{-2}$. In the deterministic RGDA algorithm, we need one sample to estimate the gradients $v_t$ and $w_t$ at each iteration, and we need $T$ iterations. Thus, the RGDA reaches a sample complexity of $T = O(\kappa^2\epsilon^{-2})$ for finding an $\epsilon$-stationary point.

Theorem 2. Suppose the sequence $\{x_t, y_t\}_{t=1}^T$ is generated from Algorithm 1 by using stochastic gradients.
Given $\eta = \eta_t$ for all $t \ge 1$, $0 < \eta \le \min(1, \frac{1}{2\gamma L})$, $0 < \lambda \le \frac{1}{6\tilde{L}}$ and $0 < \gamma \le \frac{\mu\lambda}{10\tilde{L}\kappa}$, we have

$$\frac{1}{T}\sum_{t=1}^T\mathbb{E}\Big[\|\mathrm{grad}\,\Phi(x_t)\| + \tilde{L}\|y_t - y^*(x_t)\|\Big] \le \frac{2\sqrt{\Phi(x_1) - \Phi^*}}{\sqrt{\gamma\eta T}} + \frac{\sqrt{2}\sigma}{\sqrt{B}} + \frac{5\sqrt{2}\tilde{L}\sigma}{\sqrt{B}\mu}. \qquad (16)$$

Remark 2. Since $0 < \eta \le \min(1, \frac{1}{2\gamma L})$ and $0 < \gamma \le \frac{\mu\lambda}{10\tilde{L}\kappa}$, we have $0 < \eta\gamma \le \min(\frac{\mu\lambda}{10\tilde{L}\kappa}, \frac{1}{2L})$. Letting $\eta\gamma = \min(\frac{\mu\lambda}{10\tilde{L}\kappa}, \frac{1}{2L})$, we have $\eta\gamma = O(\frac{1}{\kappa^2})$. Let $B = T$; then the RSGDA algorithm has a convergence rate of $O(\frac{\kappa}{T^{1/2}})$. By $\frac{\kappa}{T^{1/2}} \le \epsilon$, i.e., $\mathbb{E}[H_\zeta] \le \epsilon$, we choose $T \ge \kappa^2\epsilon^{-2}$. In the stochastic RSGDA algorithm, we need $B$ samples to estimate the gradients $v_t$ and $w_t$ at each iteration, and we need $T$ iterations. Thus, the RSGDA reaches a sample complexity of $BT = O(\kappa^4\epsilon^{-4})$ for finding an $\epsilon$-stationary point." }, { "heading": "5.2 CONVERGENCE ANALYSIS OF THE MVR-RSGDA ALGORITHM", "text": "In this subsection, we provide the convergence properties of the MVR-RSGDA algorithm. The related proofs are provided in Appendix A.2.

Theorem 3. Suppose the sequence $\{x_t, y_t\}_{t=1}^T$ is generated from Algorithm 2. Given $y_1 = y^*(x_1)$, $c_1 \ge \frac{2}{3b^3} + 2\lambda\mu$, $c_2 \ge \frac{2}{3b^3} + \frac{50\lambda\tilde{L}^2}{\mu}$, $b > 0$, $m \ge \max(2, (\tilde{c}b)^3)$, $0 < \gamma \le \frac{\mu\lambda}{2\kappa\tilde{L}\sqrt{25 + 4\mu\lambda}}$ and $0 < \lambda \le \frac{1}{6\tilde{L}}$, we have

$$\frac{1}{T}\sum_{t=1}^T\mathbb{E}\Big[\|\mathrm{grad}\,\Phi(x_t)\| + \tilde{L}\|y_t - y^*(x_t)\|\Big] \le \frac{\sqrt{2M'}\,m^{1/6}}{T^{1/2}} + \frac{\sqrt{2M'}}{T^{1/3}}, \qquad (17)$$

where $\tilde{c} = \max(1, c_1, c_2, 2\gamma L)$ and $M' = \frac{2(\Phi(x_1) - \Phi^*)}{\gamma b} + \frac{2\sigma^2}{B\lambda\mu\eta_0 b} + \frac{2(c_1^2 + c_2^2)\sigma^2 b^2}{B\lambda\mu}\ln(m + T)$.

Remark 3. Let $c_1 = \frac{2}{3b^3} + 2\lambda\mu$, $c_2 = \frac{2}{3b^3} + \frac{50\lambda\tilde{L}^2}{\mu}$, $\lambda = \frac{1}{6\tilde{L}}$, $\gamma = \frac{\mu\lambda}{2\kappa\tilde{L}\sqrt{25 + 4\mu\lambda}}$ and $\eta_0 = \frac{b}{m^{1/3}}$. It is easily verified that $\gamma = O(\frac{1}{\kappa^2})$, $\lambda = O(1)$, $\lambda\mu = O(\frac{1}{\kappa})$, $c_1 = O(1)$, $c_2 = O(\kappa)$, $m = O(\kappa^3)$ and $\eta_0 = O(\frac{1}{\kappa})$. Without loss of generality, let $T \ge m = O(\kappa^3)$; then $M' = O\big(\kappa^2 + \frac{\kappa^2}{B} + \frac{\kappa^3}{B}\ln(T)\big)$. When $B = \kappa$, we have $M' = O(\kappa^2\ln(T))$. Thus, the MVR-RSGDA algorithm has a convergence rate of $\tilde{O}(\frac{\kappa}{T^{1/3}})$. By $\frac{\kappa}{T^{1/3}} \le \epsilon$, i.e., $\mathbb{E}[H_\zeta] \le \epsilon$, we choose $T \ge \kappa^3\epsilon^{-3}$. In Algorithm 2, we require $B$ samples to estimate the stochastic gradients $v_t$ and $w_t$ at each iteration, and we need $T$ iterations. Thus, the MVR-RSGDA has a sample complexity of $BT = \tilde{O}(\kappa^4\epsilon^{-3})$ for finding an $\epsilon$-stationary point of problem (1). Similarly, when $B = 1$, the MVR-RSGDA algorithm has a convergence rate of $\tilde{O}(\frac{\kappa^{3/2}}{T^{1/3}})$ and a sample complexity of $BT = \tilde{O}(\kappa^{9/2}\epsilon^{-3})$ for finding an $\epsilon$-stationary point.

Remark 4. In the above theoretical analysis, we only assume convexity of the constraint set $\mathcal{Y}$, while Lin et al. (2019) not only assume convexity of the set $\mathcal{Y}$ but also assume and use its boundedness (please see Assumption 4.2 in (Lin et al., 2019)). Clearly, our assumption is milder than that of (Lin et al., 2019). When there is no constraint set on the parameter $y$, i.e., $\mathcal{Y} = \mathbb{R}^d$, our algorithms and theoretical results still work, while those of Lin et al. (2019) do not." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we conduct deep neural network (DNN) robust training over the Stiefel manifold $\mathrm{St}(r,d) = \{W\in\mathbb{R}^{d\times r} : W^\top W = I_r\}$ to evaluate the performance of our algorithms. In the experiments, we use the MNIST, CIFAR-10, and CIFAR-100 datasets to train the model (more experimental results on the SVHN, STL10, and FashionMNIST datasets are provided in Appendix B). Considering that the sample size is large in these datasets, we only compare the proposed stochastic algorithms (RSGDA and MVR-RSGDA) in the experiments. Here, we use the SGDA algorithm (Lin et al., 2019) as a baseline, which does not apply the orthogonal regularization in the DNN robust training."
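For concreteness, the following is a minimal sketch of one MVR-RSGDA iteration (Algorithm 2) as used in these experiments. Here `grad_x` / `grad_y` are assumed helpers returning the mini-batch Riemannian and Euclidean partial gradients evaluated on the same samples at both the new and old iterates, as Eqs. (12)-(13) require, and `retract` / `transport` are manifold primitives such as the Stiefel sketch in Section 3; the defaults mirror the hyperparameters reported in Section 6.1, and the min(1, ·) clipping keeps α, β in (0, 1] as assumed by the analysis.

```python
# A minimal sketch of one MVR-RSGDA iteration (Algorithm 2); grad_x, grad_y,
# retract and transport are assumed callables, not part of the paper's code.
import torch

def proj_ball(y, eps):
    # P_Y with Y = {||y|| <= eps}
    n = y.norm()
    return y if n <= eps else y * (eps / n)

def mvr_rsgda_step(x, y, v, w, t, batch, grad_x, grad_y, retract, transport,
                   gamma=1.0, lam=0.1, b=0.5, m=8.0, c1=512.0, c2=512.0, eps=0.05):
    eta = b / (m + t) ** (1.0 / 3.0)                  # step 4: eta_t = b / (m + t)^(1/3)
    gx_old, gy_old = grad_x(x, y, batch), grad_y(x, y, batch)

    x_new = retract(x, -gamma * eta * v)              # step 5: retraction update of x
    y_tilde = proj_ball(y + lam * w, eps)             # step 6: projected ascent on y
    y_new = y + eta * (y_tilde - y)

    alpha = min(1.0, c1 * eta ** 2)                   # step 7, clipped to (0, 1]
    beta = min(1.0, c2 * eta ** 2)
    gx_new, gy_new = grad_x(x_new, y_new, batch), grad_y(x_new, y_new, batch)
    v_new = gx_new + (1 - alpha) * transport(x_new, v - gx_old)   # Eq. (12)
    w_new = gy_new + (1 - beta) * (w - gy_old)                    # Eq. (13)
    return x_new, y_new, v_new, w_new
```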
}, { "heading": "6.1 EXPERIMENTAL SETTING", "text": "Given a deep neural network h(·;x) parameterized by x as shown in the above problem (2), the weights of l-th layer is xi ∈ St(nlin, nlout), where St(nlin, nlout) is the Stiefel manifold of l-th layer.\nFor both RSGDA and MVR-RSGDA algorithms, we set {γ, λ} to {1.0, 0.1}. We further set {b, m, c1, c2} to 0.5, 8, 512, 512 for MVR-RSGDA. η in RSGDA is set to 0.01. For both algorithms, the mini-batch size is set to 512. We set for y as 0.05 and 0.03 for the MNIST dataset and CIFAR10/100 datasets. The above settings are the same for all datasets. An 8-layer (5 convolution layers and 3 dense layers) deep neural network is used in all experiments. All codes are implemented with McTorch (Meghwanshi et al., 2018) which is based on PyTorch (Paszke et al., 2019)." }, { "heading": "6.2 EXPERIMENTAL RESULTS", "text": "The training loss plots of the robust training problem in the above Eq. (2) are shown in Fig. 2. From the figure, we can see that MVR-RSGDA enjoys a faster convergence speed compared to the baseline RSGDA. It’s also clear that when the dataset becomes complicate (from MNIST to CIFAR-10/100), the advantage of MVR-RSGDA becomes larger.\nWhen it comes to robust training, the training loss is not enough to identify which algorithm is better. We also use a variant of uniform perturbation to attack the model trained by our algorithms. We follow the design of uniform attack in previous works (Moosavi-Dezfooli et al., 2017; Chaubey et al., 2020), and the detail uniform attack objective is shown below:\nmin y∈Y\n1\nn n∑ i=1 max ( hbi(y + ai)−max j 6=bi hj(y + ai), 0 ) , s.t. Y = {‖y‖∞ ≤ ε}\nwhere hj is the j-th logit of the output from the deep neural network, and y here is a uniform permutation added for all inputs. In practice, we sample a mini-batch with 512 samples at each iteration. The optimization of the uniform permutation lasts for 1000 iterations for all settings. The attack loss is presented in Fig 3. The attack loss for the model trained by MVR-RSGDA is higher compared to both RSGDA and SGDA, which indicates the model trained by MVR-RSGDA is harder to attack and thus more robust. The test accuracy with natural image and uniform attack is shown in Tab. 2, which also suggests the advantage of MVR-RSGDA. More results are provided in Appendix B." }, { "heading": "7 CONCLUSION", "text": "In the paper, we investigated a class of useful min-max optimization problems on the Riemanian manifold. We proposed a class of novel efficient Riemanian gradient descent ascent algorithms to solve these minimax problems, and studied the convergence properties of the proposed algorithms. For example, we proved that our new MVR-RSGDA algorithm achieves a sample complexity of Õ(κ4 −3) without large batches, which reaches near the best known sample complexity for its Euclidean counterparts." }, { "heading": "A APPENDIX", "text": "In this section, we provide the detailed convergence analysis of our algorithms. We first review some useful lemmas.\nLemma 2. (Nesterov, 2018) Assume that f(x) is a differentiable convex function and X is a convex set. x∗ ∈ X is the solution of the constrained problem minx∈X f(x), if\n〈∇f(x∗), x− x∗〉 ≥ 0, x ∈ X . (18)\nLemma 3. (Nesterov, 2018) Assume the function f(x) is L-smooth, i.e., ‖∇f(x) − ∇f(y)‖ ≤ L‖x− y‖, and then the following inequality holds\n|f(y)− f(x)−∇f(x)T (y − x)| ≤ L 2 ‖x− y‖2. (19)\nNext, based on the above assumptions and Lemmas, we gives some useful lemmas:\nLemma 4. 
The gradient of function Φ(x) = maxy∈Y f(x, y) is retraction G-Lipschitz, and the mapping or function y∗(x) = arg maxy∈Y f(x, y) is retraction κ-Lipschitz. Given any x1, x2 = Rx1(u) ∈ X ⊂M and u ∈ Tx1M, we have\n‖gradΦ(x1)− T x1x2 gradΦ(x2)‖ ≤ G‖u‖, ‖y∗(x1)− y∗(x2)‖ ≤ κ‖u‖,\nwhere G = κL12 + L11 and κ = L21/µ, and vector transport T x1x2 transport the tangent space of x1 to that of x2.\nProof. Given any x1, x2 = Rx1(u) ∈ X and u ∈ Tx1M, define y∗(x1) = arg maxy∈Y f(x1, y) and y∗(x2) = arg maxy∈Y f(x2, y), by the above Lemma 2, we have\n(y − y∗(x1))T∇yf(x1, y∗(x1)) ≤ 0, ∀y ∈ Y (20) (y − y∗(x2))T∇yf(x2, y∗(x2)) ≤ 0, ∀y ∈ Y. (21)\nLet y = y∗(x2) in the inequality (20) and y = y∗(x1) in the inequality (21), then summing these inequalities, we have\n(y∗(x2)− y∗(x1))T ( ∇yf(x1, y∗(x1))−∇yf(x2, y∗(x2)) ) ≤ 0. (22)\nSince the function f(x1, ·) is µ-strongly concave, we have\nf(x1, y ∗(x1)) ≤ f(x1, y∗(x2)) + (∇yf(x1, y∗(x2)))T (y∗(x1)− y∗(x2))−\nµ 2 ‖y∗(x1)− y∗(x2)‖2,\n(23)\nf(x1, y ∗(x2)) ≤ f(x1, y∗(x1)) + (∇yf(x1, y∗(x1)))T (y∗(x2)− y∗(x1))−\nµ 2 ‖y∗(x1)− y∗(x2)‖2.\n(24)\nCombining the inequalities (23) with (24), we obtain (y∗(x2)− y∗(x1))T ( ∇yf(x1, y∗(x2))−∇yf(x1, y∗(x1)) ) + µ‖y∗(x1)− y∗(x2)‖2 ≤ 0. (25)\nBy plugging the inequalities (22) into (25), we have µ‖y∗(x1)− y∗(x2)‖2 ≤ (y∗(x2)− y∗(x1))T ( ∇yf(x2, y∗(x2))−∇yf(x1, y∗(x2)) ) ≤ ‖y∗(x2)− y∗(x1)‖‖∇yf(x2, y∗(x2))−∇yf(x1, y∗(x2))‖ ≤ L21‖u‖‖y∗(x2)− y∗(x1)‖, (26)\nwhere the last inequality is due to Assumption 1. Thus, we have\n‖y∗(x1)− y∗(x2)‖ ≤ κ‖u‖, (27)\nwhere κ = L21/µ and x2 = Rx1(u), u ∈ Tx1M.\nSince Φ(x) = f(x, y∗(x)), we have gradΦ(x) = gradxf(x, y ∗(x)). Then we have\n‖gradΦ(x1)−T x1x2 gradΦ(x2)‖ = ‖gradxf(x1, y∗(x1))− T x1x2 gradxf(x2, y\n∗(x2))‖ ≤ ‖gradxf(x1, y∗(x1))−gradxf(x1, y∗(x2))‖+‖gradxf(x1, y∗(x2))−T x1x2 gradxf(x2, y\n∗(x2))‖ ≤ L12‖y∗(x1)− y∗(x2)‖+ L11‖u‖ ≤ (κL12 + L11)‖u‖, (28)\nwhere u ∈ Tx1M.\nLemma 5. Suppose the sequence {xt, yt}Tt=1 is generated from Algorithm 1 or 2. Given 0 < ηt ≤ 1 2γL , we have\nΦ(xt+1) ≤ Φ(xt) + γL12ηt‖y∗(xt)− yt‖2 + γηt‖gradxf(xt, yt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2\n− γηt 4 ‖vt‖2. (29)\nProof. According to Assumption 2, i.e., the function Φ(x) is retraction L-smooth, we have\nΦ(xt+1) ≤ Φ(xt)− γηt〈gradΦ(xt), vt〉+ γ2η2tL\n2 ‖vt‖2 (30)\n= Φ(xt) + γηt 2 ‖gradΦ(xt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2 + (\nγ2η2tL\n2 − γηt 2 )‖vt‖2\n= Φ(xt) + γηt 2 ‖gradΦ(xt)− gradxf(xt, yt) + gradxf(xt, yt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2\n+ ( γ2η2tL 2 − γηt 2 )‖vt‖2\n≤ Φ(xt) + γηt‖gradΦ(xt)− gradxf(xt, yt)‖2 + γηt‖gradxf(xt, yt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2\n+ ( Lγ2η2t 2 − γηt 2 )‖vt‖2\n≤ Φ(xt) + γηt‖gradΦ(xt)− gradxf(xt, yt)‖2 + γηt‖gradxf(xt, yt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2\n− γηt 4 ‖vt‖2,\nwhere the last inequality is due to 0 < ηt ≤ 12γL .\nConsider an upper bound of ‖gradΦ(xt)− gradxf(xt, yt)‖2, we have ‖gradΦ(xt)− gradxf(xt, yt)‖2 = ‖gradxf(xt, y∗(xt))− gradxf(xt, yt)‖2 ≤ L12‖y∗(xt)− yt‖2. (31) Then we have\nΦ(xt+1) ≤ Φ(xt) + γηtL12‖y∗(xt)− yt‖2 + γηt‖gradxf(xt, yt)− vt‖2 − γηt 2 ‖gradΦ(xt)‖2\n− γηt 4 ‖vt‖2. (32)\nLemma 6. Suppose the sequence {xt, yt}Tt=1 is generated from Algorithm 1 or 2. Under the above assumptions, and set 0 < ηt ≤ 1 and 0 < λ ≤ 16L̃ , we have\n‖yt+1 − y∗(xt+1)‖2 ≤ (1− ηtµλ\n4 )‖yt − y∗(xt)‖2 − 3ηt 4 ‖ỹt+1 − yt‖2\n+ 25ηtλ\n6µ ‖∇yf(xt, yt)− wt‖2 + 25γ2κ2ηt 6µλ ‖vt‖2, (33)\nwhere κ = L21/µ.\nProof. 
Lemma 6. Suppose the sequence {x_t, y_t}_{t=1}^T is generated by Algorithm 1 or 2. Under the above assumptions, and with 0 < η_t ≤ 1 and 0 < λ ≤ 1/(6L̃), we have

‖y_{t+1} − y*(x_{t+1})‖² ≤ (1 − η_tµλ/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25η_tλ/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖², (33)

where κ = L_21/µ.

Proof. According to Assumption 3, i.e., the function f(x, y) is µ-strongly concave w.r.t. y, we have

f(x_t, y) ≤ f(x_t, y_t) + ⟨∇_y f(x_t, y_t), y − y_t⟩ − (µ/2)‖y − y_t‖²
 = f(x_t, y_t) + ⟨w_t, y − ỹ_{t+1}⟩ + ⟨∇_y f(x_t, y_t) − w_t, y − ỹ_{t+1}⟩ + ⟨∇_y f(x_t, y_t), ỹ_{t+1} − y_t⟩ − (µ/2)‖y − y_t‖². (34)

According to Assumption 1, i.e., the function f(x, y) is L_22-smooth w.r.t. y, and L̃ ≥ L_22, we have

f(x_t, ỹ_{t+1}) − f(x_t, y_t) − ⟨∇_y f(x_t, y_t), ỹ_{t+1} − y_t⟩ ≥ −(L_22/2)‖ỹ_{t+1} − y_t‖² ≥ −(L̃/2)‖ỹ_{t+1} − y_t‖². (35)

Combining the inequalities (34) and (35), we have

f(x_t, y) ≤ f(x_t, ỹ_{t+1}) + ⟨w_t, y − ỹ_{t+1}⟩ + ⟨∇_y f(x_t, y_t) − w_t, y − ỹ_{t+1}⟩ − (µ/2)‖y − y_t‖² + (L̃/2)‖ỹ_{t+1} − y_t‖². (36)

According to step 6 of Algorithm 1 or 2, we have ỹ_{t+1} = P_Y(y_t + λw_t) = arg min_{y∈Y} (1/2)‖y − y_t − λw_t‖². Since Y is a convex set and the function (1/2)‖y − y_t − λw_t‖² is convex, according to Lemma 2 we have

⟨ỹ_{t+1} − y_t − λw_t, y − ỹ_{t+1}⟩ ≥ 0, ∀y ∈ Y. (37)

Then we obtain

⟨w_t, y − ỹ_{t+1}⟩ ≤ (1/λ)⟨ỹ_{t+1} − y_t, y − ỹ_{t+1}⟩
 = (1/λ)⟨ỹ_{t+1} − y_t, y_t − ỹ_{t+1}⟩ + (1/λ)⟨ỹ_{t+1} − y_t, y − y_t⟩
 = −(1/λ)‖ỹ_{t+1} − y_t‖² + (1/λ)⟨ỹ_{t+1} − y_t, y − y_t⟩. (38)

Combining the inequalities (36) and (38), we have

f(x_t, y) ≤ f(x_t, ỹ_{t+1}) + (1/λ)⟨ỹ_{t+1} − y_t, y − y_t⟩ + ⟨∇_y f(x_t, y_t) − w_t, y − ỹ_{t+1}⟩ − (1/λ)‖ỹ_{t+1} − y_t‖² − (µ/2)‖y − y_t‖² + (L̃/2)‖ỹ_{t+1} − y_t‖². (39)

Letting y = y*(x_t), we obtain

f(x_t, y*(x_t)) ≤ f(x_t, ỹ_{t+1}) + (1/λ)⟨ỹ_{t+1} − y_t, y*(x_t) − y_t⟩ + ⟨∇_y f(x_t, y_t) − w_t, y*(x_t) − ỹ_{t+1}⟩ − (1/λ)‖ỹ_{t+1} − y_t‖² − (µ/2)‖y*(x_t) − y_t‖² + (L̃/2)‖ỹ_{t+1} − y_t‖². (40)

Due to the concavity of f(x_t, ·) and y*(x_t) = arg max_{y∈Y} f(x_t, y), we have f(x_t, y*(x_t)) ≥ f(x_t, ỹ_{t+1}). Thus, we obtain

0 ≤ (1/λ)⟨ỹ_{t+1} − y_t, y*(x_t) − y_t⟩ + ⟨∇_y f(x_t, y_t) − w_t, y*(x_t) − ỹ_{t+1}⟩ − (1/λ − L̃/2)‖ỹ_{t+1} − y_t‖² − (µ/2)‖y*(x_t) − y_t‖². (41)

By y_{t+1} = y_t + η_t(ỹ_{t+1} − y_t), we have

‖y_{t+1} − y*(x_t)‖² = ‖y_t + η_t(ỹ_{t+1} − y_t) − y*(x_t)‖² = ‖y_t − y*(x_t)‖² + 2η_t⟨ỹ_{t+1} − y_t, y_t − y*(x_t)⟩ + η_t²‖ỹ_{t+1} − y_t‖². (42)

Then we obtain

⟨ỹ_{t+1} − y_t, y*(x_t) − y_t⟩ ≤ (1/(2η_t))‖y_t − y*(x_t)‖² + (η_t/2)‖ỹ_{t+1} − y_t‖² − (1/(2η_t))‖y_{t+1} − y*(x_t)‖². (43)

Considering an upper bound on the term ⟨∇_y f(x_t, y_t) − w_t, y*(x_t) − ỹ_{t+1}⟩, we have

⟨∇_y f(x_t, y_t) − w_t, y*(x_t) − ỹ_{t+1}⟩ = ⟨∇_y f(x_t, y_t) − w_t, y*(x_t) − y_t⟩ + ⟨∇_y f(x_t, y_t) − w_t, y_t − ỹ_{t+1}⟩
 ≤ (1/µ)‖∇_y f(x_t, y_t) − w_t‖² + (µ/4)‖y*(x_t) − y_t‖² + (1/µ)‖∇_y f(x_t, y_t) − w_t‖² + (µ/4)‖y_t − ỹ_{t+1}‖²
 = (2/µ)‖∇_y f(x_t, y_t) − w_t‖² + (µ/4)‖y*(x_t) − y_t‖² + (µ/4)‖y_t − ỹ_{t+1}‖². (44)

By plugging the inequalities (43) and (44) into (41), we have

(1/(2η_tλ))‖y_{t+1} − y*(x_t)‖² ≤ (1/(2η_tλ) − µ/4)‖y_t − y*(x_t)‖² + (η_t/(2λ) + µ/4 + L̃/2 − 1/λ)‖ỹ_{t+1} − y_t‖² + (2/µ)‖∇_y f(x_t, y_t) − w_t‖²
 ≤ (1/(2η_tλ) − µ/4)‖y_t − y*(x_t)‖² + (3L̃/4 − 1/(2λ))‖ỹ_{t+1} − y_t‖² + (2/µ)‖∇_y f(x_t, y_t) − w_t‖²
 = (1/(2η_tλ) − µ/4)‖y_t − y*(x_t)‖² − (3/(8λ) + 1/(8λ) − 3L̃/4)‖ỹ_{t+1} − y_t‖² + (2/µ)‖∇_y f(x_t, y_t) − w_t‖²
 ≤ (1/(2η_tλ) − µ/4)‖y_t − y*(x_t)‖² − (3/(8λ))‖ỹ_{t+1} − y_t‖² + (2/µ)‖∇_y f(x_t, y_t) − w_t‖², (45)

where the second inequality holds by L̃ ≥ L_22 ≥ µ and 0 < η_t ≤ 1, and the last inequality is due to 0 < λ ≤ 1/(6L̃).

This implies that

‖y_{t+1} − y*(x_t)‖² ≤ (1 − η_tµλ/2)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (4η_tλ/µ)‖∇_y f(x_t, y_t) − w_t‖². (46)

Next, we decompose the term ‖y_{t+1} − y*(x_{t+1})‖² as follows:

‖y_{t+1} − y*(x_{t+1})‖² = ‖y_{t+1} − y*(x_t) + y*(x_t) − y*(x_{t+1})‖²
 = ‖y_{t+1} − y*(x_t)‖² + 2⟨y_{t+1} − y*(x_t), y*(x_t) − y*(x_{t+1})⟩ + ‖y*(x_t) − y*(x_{t+1})‖²
 ≤ (1 + η_tµλ/4)‖y_{t+1} − y*(x_t)‖² + (1 + 4/(η_tµλ))‖y*(x_t) − y*(x_{t+1})‖²
 ≤ (1 + η_tµλ/4)‖y_{t+1} − y*(x_t)‖² + (1 + 4/(η_tµλ))η_t²γ²κ²‖v_t‖², (47)

where the first inequality holds by the Cauchy–Schwarz inequality and Young's inequality, and the last inequality is due to Lemma 4.

By combining the above inequalities (46) and (47), we have

‖y_{t+1} − y*(x_{t+1})‖² ≤ (1 + η_tµλ/4)(1 − η_tµλ/2)‖y_t − y*(x_t)‖² − (1 + η_tµλ/4)(3η_t/4)‖ỹ_{t+1} − y_t‖² + (1 + η_tµλ/4)(4η_tλ/µ)‖∇_y f(x_t, y_t) − w_t‖² + (1 + 4/(η_tµλ))η_t²γ²κ²‖v_t‖². (48)

Since 0 < η_t ≤ 1, 0 < λ ≤ 1/(6L̃) and L̃ ≥ L_22 ≥ µ, we have λ ≤ 1/(6L̃) ≤ 1/(6µ) and η_t ≤ 1 ≤ 1/(6µλ). Then we obtain

(1 + η_tµλ/4)(1 − η_tµλ/2) = 1 − η_tµλ/2 + η_tµλ/4 − η_t²µ²λ²/8 ≤ 1 − η_tµλ/4,
−(1 + η_tµλ/4)(3η_t/4) ≤ −3η_t/4,
(1 + η_tµλ/4)(4η_tλ/µ) ≤ (1 + 1/24)(4η_tλ/µ) = 25η_tλ/(6µ),
(1 + 4/(η_tµλ))γ²κ²η_t² = γ²κ²η_t² + 4γ²κ²η_t/(µλ) ≤ γ²κ²η_t/(6µλ) + 4γ²κ²η_t/(µλ) = 25γ²κ²η_t/(6µλ). (49)

Thus we have

‖y_{t+1} − y*(x_{t+1})‖² ≤ (1 − η_tµλ/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25η_tλ/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖². (50)
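The four scalar bounds collected in (49) depend only on the product a = η_tµλ lying in (0, 1/6]; a minimal sketch that checks them on a grid (all quantities here are placeholders for illustration):

import numpy as np

# The bounds in (49) as functions of a = eta_t * mu * lam, for 0 < a <= 1/6.
for a in np.linspace(1e-6, 1.0 / 6.0, 1000):
    assert (1 + a / 4) * (1 - a / 2) <= 1 - a / 4 + 1e-15
    assert 1 + a / 4 <= 1 + 1.0 / 24.0 + 1e-15          # gives the 25/24 factor
    # last bound: with eta_t <= 1/(6*mu*lam), eta_t^2 <= eta_t/(6*mu*lam), so
    # eta_t^2 + 4*eta_t/(mu*lam) <= (1/6 + 4)*eta_t/(mu*lam) = 25*eta_t/(6*mu*lam)
print("scalar steps of (49) hold on the grid")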
A.1 CONVERGENCE ANALYSIS OF THE RGDA AND RSGDA ALGORITHMS

In this subsection, we study the convergence properties of the deterministic RGDA and stochastic RSGDA algorithms, respectively. For notational simplicity, let L̃ = max(1, L_11, L_12, L_21, L_22).

Theorem 4. Suppose the sequence {x_t, y_t}_{t=1}^T is generated by Algorithm 1 using deterministic gradients. Given η = η_t for all t ≥ 1, 0 < η ≤ min(1, 1/(2γL)), 0 < λ ≤ 1/(6L̃) and 0 < γ ≤ µλ/(10L̃κ), we have

(1/T) Σ_{t=1}^T [L̃‖y_t − y*(x_t)‖ + ‖grad Φ(x_t)‖] ≤ 2√(Φ(x_1) − Φ*) / √(γηT). (51)

Proof. According to Lemma 6, we have

‖y_{t+1} − y*(x_{t+1})‖² ≤ (1 − η_tµλ/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25η_tλ/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖². (52)

We first define a Lyapunov function Λ_t, for any t ≥ 1:

Λ_t = Φ(x_t) + (6γL̃²/(λµ))‖y_t − y*(x_t)‖². (53)

According to Lemma 5, we have

Λ_{t+1} − Λ_t = Φ(x_{t+1}) − Φ(x_t) + (6γL̃²/(λµ))(‖y_{t+1} − y*(x_{t+1})‖² − ‖y_t − y*(x_t)‖²)
 ≤ γη_tL_12‖y_t − y*(x_t)‖² + γη_t‖grad_x f(x_t, y_t) − v_t‖² − (γη_t/2)‖grad Φ(x_t)‖² − (γη_t/4)‖v_t‖²
  + (6γL̃²/(λµ))(−(µλη_t/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25λη_t/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖²)
 ≤ −(L̃²γη_t/2)‖y_t − y*(x_t)‖² − (γη_t/2)‖grad Φ(x_t)‖² − (9γL̃²η_t/(2λµ))‖ỹ_{t+1} − y_t‖² − (1/4 − 25κ²L̃²γ²/(µ²λ²))γη_t‖v_t‖²
 ≤ −(L̃²γη_t/2)‖y_t − y*(x_t)‖² − (γη_t/2)‖grad Φ(x_t)‖², (54)

where the first inequality holds by the inequality (52); the second inequality is due to L̃ = max(1, L_11, L_12, L_21, L_22) and v_t = grad_x f(x_t, y_t), w_t = ∇_y f(x_t, y_t); and the last inequality is due to 0 < γ ≤ µλ/(10L̃κ). Thus, we obtain

(L̃²γη_t/2)‖y_t − y*(x_t)‖² + (γη_t/2)‖grad Φ(x_t)‖² ≤ Λ_t − Λ_{t+1}. (55)

Since the initial solution satisfies y_1 = y*(x_1) = arg max_{y∈Y} f(x_1, y), we have

Λ_1 = Φ(x_1) + (6γL̃²/(λµ))‖y_1 − y*(x_1)‖² = Φ(x_1). (56)

Averaging over t = 1, 2, ..., T on both sides of the inequality (55), we have

(1/T) Σ_{t=1}^T [(L̃²η_t/2)‖y_t − y*(x_t)‖² + (η_t/2)‖grad Φ(x_t)‖²] ≤ (Λ_1 − Λ_{T+1})/(γT) ≤ (Φ(x_1) − Φ*)/(γT), (57)

where the last inequality is due to the above equality (56) and Assumption 4. Let η = η_1 = ··· = η_T; we have

(1/T) Σ_{t=1}^T [L̃²‖y_t − y*(x_t)‖² + ‖grad Φ(x_t)‖²] ≤ 2(Φ(x_1) − Φ*)/(γηT). (58)

According to Jensen's inequality, we have

(1/T) Σ_{t=1}^T [L̃‖y_t − y*(x_t)‖ + ‖grad Φ(x_t)‖] ≤ ((2/T) Σ_{t=1}^T [L̃²‖y_t − y*(x_t)‖² + ‖grad Φ(x_t)‖²])^{1/2} ≤ (4(Φ(x_1) − Φ*)/(γηT))^{1/2} = 2√(Φ(x_1) − Φ*)/√(γηT). (59)

Theorem 5. Suppose the sequence {x_t, y_t}_{t=1}^T is generated by Algorithm 1 using stochastic gradients. Given η = η_t for all t ≥ 1, 0 < η ≤ min(1, 1/(2γL)), 0 < λ ≤ 1/(6L̃) and 0 < γ ≤ µλ/(10L̃κ), we have

(1/T) Σ_{t=1}^T E[L̃‖y_t − y*(x_t)‖ + ‖grad Φ(x_t)‖] ≤ 2√(Φ(x_1) − Φ*)/√(γηT) + √2σ/√B + 5√2 L̃σ/(√B µ). (60)

Proof. According to Lemma 6, we have

‖y_{t+1} − y*(x_{t+1})‖² ≤ (1 − η_tµλ/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25η_tλ/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖². (61)

We first define a Lyapunov function Θ_t, for any t ≥ 1:

Θ_t = E[Φ(x_t) + (6γL̃²/(λµ))‖y_t − y*(x_t)‖²]. (62)

By Assumption 5, we have

E‖grad_x f(x_t, y_t) − v_t‖² = E‖grad_x f(x_t, y_t) − (1/B) Σ_{i=1}^B grad_x f(x_t, y_t; ξ_t^i)‖² ≤ σ²/B, (63)
E‖∇_y f(x_t, y_t) − w_t‖² = E‖∇_y f(x_t, y_t) − (1/B) Σ_{i=1}^B ∇_y f(x_t, y_t; ξ_t^i)‖² ≤ σ²/B. (64)

According to Lemma 5, we have

Θ_{t+1} − Θ_t = E[Φ(x_{t+1})] − E[Φ(x_t)] + (6γL̃²/(λµ))(E‖y_{t+1} − y*(x_{t+1})‖² − E‖y_t − y*(x_t)‖²)
 ≤ γη_tL_12 E‖y_t − y*(x_t)‖² + γη_t E‖grad_x f(x_t, y_t) − v_t‖² − (γη_t/2) E‖grad Φ(x_t)‖² − (γη_t/4)‖v_t‖²
  + (6γL̃²/(λµ))(−(µλη_t/4) E‖y_t − y*(x_t)‖² − (3η_t/4) E‖ỹ_{t+1} − y_t‖² + (25λη_t/(6µ)) E‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖²)
 ≤ −(L̃²γη_t/2) E‖y_t − y*(x_t)‖² − (γη_t/2) E‖grad Φ(x_t)‖² − (9γL̃²η_t/(2λµ)) E‖ỹ_{t+1} − y_t‖² − (1/4 − 25κ²L̃²γ²/(µ²λ²))γη_t‖v_t‖² + γη_t E‖grad_x f(x_t, y_t) − v_t‖² + (25L̃²γη_t/µ²) E‖∇_y f(x_t, y_t) − w_t‖²
 ≤ −(L̃²γη_t/2) E‖y_t − y*(x_t)‖² − (γη_t/2) E‖grad Φ(x_t)‖² + γη_tσ²/B + 25L̃²γη_tσ²/(Bµ²), (65)

where the first inequality holds by the inequality (61); the second inequality is due to L̃ = max(1, L_11, L_12, L_21, L_22); and the last inequality is due to 0 < γ ≤ µλ/(10L̃κ) and Assumption 5. Thus, we obtain

(L̃²γη_t/2) E‖y_t − y*(x_t)‖² + (γη_t/2) E‖grad Φ(x_t)‖² ≤ Θ_t − Θ_{t+1} + γη_tσ²/B + 25L̃²γη_tσ²/(Bµ²). (66)

Since the initial solution satisfies y_1 = y*(x_1) = arg max_{y∈Y} f(x_1, y), we have

Θ_1 = Φ(x_1) + (6γL̃²/(λµ))‖y_1 − y*(x_1)‖² = Φ(x_1). (67)

Averaging over t = 1, 2, ..., T on both sides of the inequality (66), we have

(1/T) Σ_{t=1}^T E[(L̃²η_t/2)‖y_t − y*(x_t)‖² + (η_t/2)‖grad Φ(x_t)‖²] ≤ (Θ_1 − Θ_{T+1})/(γT) + (1/T) Σ_{t=1}^T η_tσ²/B + (1/T) Σ_{t=1}^T 25L̃²η_tσ²/(Bµ²)
 ≤ (Φ(x_1) − Φ*)/(γT) + (1/T) Σ_{t=1}^T η_tσ²/B + (1/T) Σ_{t=1}^T 25L̃²η_tσ²/(Bµ²), (68)

where the last inequality is due to the equality (67) and Assumption 4. Let η = η_1 = ··· = η_T; we have

(1/T) Σ_{t=1}^T E[L̃²‖y_t − y*(x_t)‖² + ‖grad Φ(x_t)‖²] ≤ 2(Φ(x_1) − Φ*)/(γηT) + σ²/B + 25L̃²σ²/(Bµ²). (69)

According to Jensen's inequality, we have

(1/T) Σ_{t=1}^T E[L̃‖y_t − y*(x_t)‖ + ‖grad Φ(x_t)‖] ≤ ((2/T) Σ_{t=1}^T E[L̃²‖y_t − y*(x_t)‖² + ‖grad Φ(x_t)‖²])^{1/2}
 ≤ (4(Φ(x_1) − Φ*)/(γηT) + 2σ²/B + 50L̃²σ²/(Bµ²))^{1/2}
 ≤ 2√(Φ(x_1) − Φ*)/√(γηT) + √2σ/√B + 5√2 L̃σ/(√B µ), (70)

where the last inequality is due to (a_1 + a_2 + a_3)^{1/2} ≤ a_1^{1/2} + a_2^{1/2} + a_3^{1/2} for all a_1, a_2, a_3 > 0.
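To make the recursion analyzed in Theorems 4 and 5 concrete, here is a minimal Euclidean instance of the update rule (retraction = addition, vector transport = identity) on the illustrative toy problem f(x, y) = xy − (µ/2)y² with Y = [−1, 1]; in this case grad Φ(x) = y*(x) by Danskin's theorem, so the printed quantity is the stationarity measure on the left-hand side of (51). All step sizes below are illustrative assumptions, not tuned values from the paper.

import numpy as np

# f(x, y) = x * y - (mu / 2) * y**2, maximized over y in Y = [-1, 1].
mu = 0.5
gamma, eta, lam = 0.2, 1.0, 0.1     # illustrative step sizes

def y_star(x):
    return float(np.clip(x / mu, -1.0, 1.0))

x = 2.0
y = y_star(x)                        # y_1 = y*(x_1), as the theorems assume
T, measure = 500, 0.0
for t in range(T):
    v = y                            # grad_x f(x_t, y_t)
    w = x - mu * y                   # grad_y f(x_t, y_t)
    x = x - gamma * eta * v          # x_{t+1} = R_{x_t}(-gamma * eta_t * v_t)
    y_tilde = float(np.clip(y + lam * w, -1.0, 1.0))   # projection step
    y = y + eta * (y_tilde - y)
    measure += abs(y_star(x)) + abs(y - y_star(x))     # grad Phi(x) = y*(x)
print("averaged stationarity measure:", measure / T)   # shrinks as T grows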
A.2 CONVERGENCE ANALYSIS OF THE MVR-RSGDA ALGORITHM

In this subsection, we study the convergence properties of the MVR-RSGDA algorithm. For notational simplicity, let L̃ = max(1, L_11, L_12, L_21, L_22).

Lemma 7. Suppose the stochastic gradient estimators v_t and w_t are generated by Algorithm 2. Given 0 < α_{t+1} ≤ 1 and 0 < β_{t+1} ≤ 1, we have

E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖² ≤ (1 − α_{t+1})² E‖grad_x f(x_t, y_t) − v_t‖² + 4(1 − α_{t+1})²L_11²γ²η_t²‖v_t‖² + 4(1 − α_{t+1})²L_12²η_t²‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/B, (71)
E‖∇_y f(x_{t+1}, y_{t+1}) − w_{t+1}‖² ≤ (1 − β_{t+1})² E‖∇_y f(x_t, y_t) − w_t‖² + 4(1 − β_{t+1})²L_21²γ²η_t²‖v_t‖² + 4(1 − β_{t+1})²L_22²η_t²‖ỹ_{t+1} − y_t‖² + 2β_{t+1}²σ²/B. (72)

Proof. We first prove the inequality (71). According to the definition of v_t in Algorithm 2, we have

v_{t+1} − T_{x_t}^{x_{t+1}} v_t = −α_{t+1} T_{x_t}^{x_{t+1}} v_t + (1 − α_{t+1})(grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)) + α_{t+1} grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}). (73)

Then we have

E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖²
 = E‖grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} v_t − (v_{t+1} − T_{x_t}^{x_{t+1}} v_t)‖²
 = E‖grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} v_t + α_{t+1} T_{x_t}^{x_{t+1}} v_t − α_{t+1} grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − (1 − α_{t+1})(grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t))‖²
 = E‖(1 − α_{t+1}) T_{x_t}^{x_{t+1}}(grad_x f(x_t, y_t) − v_t) + (1 − α_{t+1})(grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_t) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) + T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)) + α_{t+1}(grad_x f(x_{t+1}, y_{t+1}) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}))‖²
 = (1 − α_{t+1})² E‖grad_x f(x_t, y_t) − v_t‖² + α_{t+1}² E‖grad_x f(x_{t+1}, y_{t+1}) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1})‖²
  + (1 − α_{t+1})² E‖grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_t) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) + T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)‖²
  + 2α_{t+1}(1 − α_{t+1})⟨grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_t) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) + T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t), grad_x f(x_{t+1}, y_{t+1}) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1})⟩
 ≤ (1 − α_{t+1})² E‖grad_x f(x_t, y_t) − v_t‖² + 2α_{t+1}² E‖grad_x f(x_{t+1}, y_{t+1}) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1})‖² + 2(1 − α_{t+1})² E‖grad_x f(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_t) − grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) + T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)‖²
 ≤ (1 − α_{t+1})² E‖grad_x f(x_t, y_t) − v_t‖² + 2α_{t+1}²σ²/B + 2(1 − α_{t+1})² T_1, (74)

where T_1 = E‖grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)‖²; the fourth equality follows from E[grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1})] = grad_x f(x_{t+1}, y_{t+1}) and E[grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − grad_x f_{B_{t+1}}(x_t, y_t)] = grad_x f(x_{t+1}, y_{t+1}) − grad_x f(x_t, y_t); the first inequality holds by Young's inequality; and the last inequality is due to the equality E‖ζ − E[ζ]‖² = E‖ζ‖² − ‖E[ζ]‖² and Assumption 5.

Next, we bound the term T_1 as follows:

T_1 = E‖grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)‖² (75)
 = E‖grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_{t+1}; ξ_{t+1}) + T_{x_t}^{x_{t+1}} grad_x f(x_t, y_{t+1}; ξ_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f_{B_{t+1}}(x_t, y_t)‖²
 ≤ 2E‖grad_x f_{B_{t+1}}(x_{t+1}, y_{t+1}) − T_{x_t}^{x_{t+1}} grad_x f(x_t, y_{t+1}; ξ_{t+1})‖² + 2E‖grad_x f(x_t, y_{t+1}; ξ_{t+1}) − grad_x f_{B_{t+1}}(x_t, y_t)‖²
 ≤ 2L_11²γ²η_t²‖v_t‖² + 2L_12²‖y_{t+1} − y_t‖² = 2L_11²γ²η_t²‖v_t‖² + 2L_12²η_t²‖ỹ_{t+1} − y_t‖², (76)

where the last inequality is due to Assumption 1. Thus, we have

E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖² ≤ (1 − α_{t+1})² E‖grad_x f(x_t, y_t) − v_t‖² + 4(1 − α_{t+1})²L_11²γ²η_t²‖v_t‖² + 4(1 − α_{t+1})²L_12²η_t²‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/B. (77)

Applying a similar analysis to the estimator w_t, we obtain

E‖∇_y f(x_{t+1}, y_{t+1}) − w_{t+1}‖² ≤ (1 − β_{t+1})² E‖∇_y f(x_t, y_t) − w_t‖² + 4(1 − β_{t+1})²L_21²γ²η_t²‖v_t‖² + 4(1 − β_{t+1})²L_22²η_t²‖ỹ_{t+1} − y_t‖² + 2β_{t+1}²σ²/B. (78)
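Here is a minimal Euclidean sketch of the momentum-based variance-reduced estimator analyzed in Lemma 7 (vector transport = identity), on an illustrative noisy quadratic of our own choosing; the key point is the recursion v_{t+1} = g(x_{t+1}) + (1 − α)(v_t − g(x_t)) with both minibatch gradients evaluated on the same samples, cf. (73).

import numpy as np

rng = np.random.default_rng(1)
sigma, B, alpha = 1.0, 4, 0.1

def grad_batch(x, xi):
    # Minibatch gradient of F(x) = x**2 / 2 with additive noise: g(x; xi) = x + xi.
    return float(np.mean(x + xi))

x_old = 3.0
v = grad_batch(x_old, rng.normal(0.0, sigma, size=B))   # plain initialization
err_mvr, err_sgd = [], []
for t in range(2000):
    x_new = x_old - 0.01 * v                            # some iterate update
    xi = rng.normal(0.0, sigma, size=B)
    g_new = grad_batch(x_new, xi)                       # same samples xi at both
    g_old = grad_batch(x_old, xi)                       # iterates, as in (73)
    v = g_new + (1.0 - alpha) * (v - g_old)             # STORM-style recursion
    err_mvr.append(abs(v - x_new))                      # true gradient is x_new
    err_sgd.append(abs(g_new - x_new))
    x_old = x_new
print("mean error, MVR:", np.mean(err_mvr), "plain minibatch:", np.mean(err_sgd))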
Theorem 6. Suppose the sequence {x_t, y_t}_{t=1}^T is generated by Algorithm 2. Given y_1 = y*(x_1), c_1 ≥ 2/(3b³) + 2λµ, c_2 ≥ 2/(3b³) + 50λL̃²/µ, b > 0, m ≥ max(2, (c̃b)³), 0 < γ ≤ µλ/(2κL̃√(25 + 4µλ)) and 0 < λ ≤ 1/(6L̃), we have

(1/T) Σ_{t=1}^T E[‖grad Φ(x_t)‖ + L̃‖y_t − y*(x_t)‖] ≤ √(2M′) m^{1/6}/T^{1/2} + √(2M′)/T^{1/3}, (79)

where c̃ = max(2γL, c_1, c_2, 1) and M′ = 2(Φ(x_1) − Φ*)/(γb) + 2σ²/(λµη_0bB) + (2(c_1² + c_2²)σ²b²/(λµB)) ln(m + T).

Proof. Since η_t is decreasing and m ≥ b³, we have η_t ≤ η_0 = b/m^{1/3} ≤ 1. Similarly, since m ≥ (2γLb)³, we have η_t ≤ η_0 = b/m^{1/3} ≤ 1/(2γL). Since 0 < η_t ≤ 1 and m ≥ max((c_1b)³, (c_2b)³), we have α_{t+1} = c_1η_t² ≤ c_1η_t ≤ c_1b/m^{1/3} ≤ 1 and β_{t+1} = c_2η_t² ≤ c_2η_t ≤ c_2b/m^{1/3} ≤ 1.

According to Lemma 7, we have

(1/η_t) E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖² − (1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖²
 ≤ ((1 − α_{t+1})²/η_t − 1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖² + 4(1 − α_{t+1})²L_11²γ²η_t‖v_t‖² + 4(1 − α_{t+1})²L_12²η_t‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/(η_tB)
 ≤ ((1 − α_{t+1})/η_t − 1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖² + 4L_11²γ²η_t‖v_t‖² + 4L_12²η_t‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/(η_tB)
 = (1/η_t − 1/η_{t−1} − c_1η_t) E‖grad_x f(x_t, y_t) − v_t‖² + 4L_11²γ²η_t‖v_t‖² + 4L_12²η_t‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/(η_tB), (80)

where the second inequality is due to 0 < α_{t+1} ≤ 1. In the same way, we also obtain

(1/η_t) E‖∇_y f(x_{t+1}, y_{t+1}) − w_{t+1}‖² − (1/η_{t−1}) E‖∇_y f(x_t, y_t) − w_t‖² ≤ (1/η_t − 1/η_{t−1} − c_2η_t) E‖∇_y f(x_t, y_t) − w_t‖² + 4L_21²γ²η_t‖v_t‖² + 4L_22²η_t‖ỹ_{t+1} − y_t‖² + 2β_{t+1}²σ²/(η_tB). (81)

By η_t = b/(m + t)^{1/3}, we have

1/η_t − 1/η_{t−1} = (1/b)((m + t)^{1/3} − (m + t − 1)^{1/3}) ≤ 1/(3b(m + t − 1)^{2/3}) ≤ 1/(3b(m/2 + t)^{2/3}) ≤ 2^{2/3}/(3b(m + t)^{2/3}) = (2^{2/3}/(3b³)) · b²/(m + t)^{2/3} = (2^{2/3}/(3b³))η_t² ≤ (2/(3b³))η_t, (82)

where the first inequality holds by the concavity of the function g(x) = x^{1/3}, i.e., (x + y)^{1/3} ≤ x^{1/3} + y/(3x^{2/3}); the second inequality is due to m ≥ 2; and the last inequality is due to 0 < η_t ≤ 1.

Let c_1 ≥ 2/(3b³) + 2λµ; then we have

(1/η_t) E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖² − (1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖² ≤ −2λµη_t E‖grad_x f(x_t, y_t) − v_t‖² + 4L_11²γ²η_t‖v_t‖² + 4L_12²η_t‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/(η_tB). (83)

Let c_2 ≥ 2/(3b³) + 50λL̃²/µ; then we have

(1/η_t) E‖∇_y f(x_{t+1}, y_{t+1}) − w_{t+1}‖² − (1/η_{t−1}) E‖∇_y f(x_t, y_t) − w_t‖² ≤ −(50λL̃²/µ)η_t E‖∇_y f(x_t, y_t) − w_t‖² + 4L_21²γ²η_t‖v_t‖² + 4L_22²η_t‖ỹ_{t+1} − y_t‖² + 2β_{t+1}²σ²/(η_tB). (84)

According to Lemma 6, we have

‖y_{t+1} − y*(x_{t+1})‖² − ‖y_t − y*(x_t)‖² ≤ −(η_tµλ/4)‖y_t − y*(x_t)‖² − (3η_t/4)‖ỹ_{t+1} − y_t‖² + (25λη_t/(6µ))‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖². (85)

Next, we define a Lyapunov function Ω_t, for any t ≥ 1:

Ω_t = E[Φ(x_t)] + (γ/(2λµ))((1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖² + (1/η_{t−1}) E‖∇_y f(x_t, y_t) − w_t‖²) + (6γL̃²/(λµ)) E‖y_t − y*(x_t)‖². (86)

Then we have

Ω_{t+1} − Ω_t = E[Φ(x_{t+1})] − E[Φ(x_t)] + (6γL̃²/(λµ))(E‖y_{t+1} − y*(x_{t+1})‖² − E‖y_t − y*(x_t)‖²)
  + (γ/(2λµ))((1/η_t) E‖grad_x f(x_{t+1}, y_{t+1}) − v_{t+1}‖² − (1/η_{t−1}) E‖grad_x f(x_t, y_t) − v_t‖² + (1/η_t) E‖∇_y f(x_{t+1}, y_{t+1}) − w_{t+1}‖² − (1/η_{t−1}) E‖∇_y f(x_t, y_t) − w_t‖²)
 ≤ L_12γη_t E‖y_t − y*(x_t)‖² + γη_t E‖grad_x f(x_t, y_t) − v_t‖² − (γη_t/2) E‖grad Φ(x_t)‖² − (γη_t/4)‖v_t‖²
  + (6γL̃²/(λµ))(−(µλη_t/4) E‖y_t − y*(x_t)‖² − (3η_t/4) E‖ỹ_{t+1} − y_t‖² + (25λη_t/(6µ)) E‖∇_y f(x_t, y_t) − w_t‖² + (25γ²κ²η_t/(6µλ))‖v_t‖²)
  + (γ/(2λµ))(−2λµη_t E‖grad_x f(x_t, y_t) − v_t‖² + 4L_11²γ²η_t‖v_t‖² + 4L_12²η_t E‖ỹ_{t+1} − y_t‖² + 2α_{t+1}²σ²/(η_tB) − (50λL̃²/µ)η_t E‖∇_y f(x_t, y_t) − w_t‖² + 4L_21²γ²η_t‖v_t‖² + 4L_22²η_t E‖ỹ_{t+1} − y_t‖² + 2β_{t+1}²σ²/(η_tB))
 ≤ −(γL̃²η_t/2) E‖y_t − y*(x_t)‖² − (γη_t/2) E‖grad Φ(x_t)‖² − (γL̃²η_t/(2λµ)) E‖ỹ_{t+1} − y_t‖² − (γ/4 − 25γ³κ²L̃²/(µ²λ²) − 4γ³L̃²/(µλ))η_t‖v_t‖² + γα_{t+1}²σ²/(λµη_tB) + γβ_{t+1}²σ²/(λµη_tB)
 ≤ −(γL̃²η_t/2) E‖y_t − y*(x_t)‖² − (γη_t/2) E‖grad Φ(x_t)‖² + γα_{t+1}²σ²/(λµη_tB) + γβ_{t+1}²σ²/(λµη_tB), (87)

where the first inequality holds by Lemma 5 and the above inequalities (83), (84) and (85); the second inequality is due to L̃ = max(1, L_11, L_12, L_21, L_22); and the last inequality is due to 0 < γ ≤ µλ/(2κL̃√(25 + 4µλ)) and κ ≥ 1.

According to the above inequality (87), we have

(γη_t/2)(E‖grad Φ(x_t)‖² + L̃² E‖y_t − y*(x_t)‖²) ≤ Ω_t − Ω_{t+1} + γα_{t+1}²σ²/(λµη_tB) + γβ_{t+1}²σ²/(λµη_tB). (88)
Taking the average over t = 1, 2, ..., T on both sides of the inequality (88), we have

(1/T) Σ_{t=1}^T η_t E(‖grad Φ(x_t)‖² + L̃²‖y_t − y*(x_t)‖²) ≤ Σ_{t=1}^T 2(Ω_t − Ω_{t+1})/(γT) + (1/T) Σ_{t=1}^T (2α_{t+1}²σ²/(λµη_tB) + 2β_{t+1}²σ²/(λµη_tB)).

Since the initial solution satisfies y_1 = y*(x_1) = arg max_{y∈Y} f(x_1, y), we have

Ω_1 = Φ(x_1) + (6γL̃²/(λµ))‖y_1 − y*(x_1)‖² + (γ/(2λµ))((1/η_0)‖grad_x f(x_1, y_1) − v_1‖² + (1/η_0)‖∇_y f(x_1, y_1) − w_1‖²)
 = Φ(x_1) + (γ/(2λµ))((1/η_0)‖grad_x f(x_1, y_1) − grad_x f_{B_1}(x_1, y_1)‖² + (1/η_0)‖∇_y f(x_1, y_1) − ∇_y f_{B_1}(x_1, y_1)‖²)
 ≤ Φ(x_1) + γσ²/(λµη_0B), (89)

where the last inequality holds by Assumption 5.

Since η_t is decreasing, i.e., η_T^{−1} ≥ η_t^{−1} for any 1 ≤ t ≤ T, we have

(1/T) Σ_{t=1}^T E(‖grad Φ(x_t)‖² + L̃²‖y_t − y*(x_t)‖²)
 ≤ Σ_{t=1}^T 2(Ω_t − Ω_{t+1})/(Tγη_T) + (1/(Tη_T)) Σ_{t=1}^T (2α_{t+1}²σ²/(λµη_tB) + 2β_{t+1}²σ²/(λµη_tB))
 ≤ (1/(Tη_T))(2Φ(x_1)/γ + 2σ²/(λµη_0B) − 2Φ*/γ) + (1/(Tη_T)) Σ_{t=1}^T (2α_{t+1}²σ²/(λµη_tB) + 2β_{t+1}²σ²/(λµη_tB))
 = 2(Φ(x_1) − Φ*)/(Tγη_T) + 2σ²/(Tλµη_0η_TB) + (2(c_1² + c_2²)σ²/(Tη_TλµB)) Σ_{t=1}^T η_t³
 ≤ 2(Φ(x_1) − Φ*)/(Tγη_T) + 2σ²/(Tλµη_0η_TB) + (2(c_1² + c_2²)σ²/(Tη_TλµB)) ∫_1^T b³/(m + t) dt
 ≤ 2(Φ(x_1) − Φ*)/(Tγη_T) + 2σ²/(Tλµη_0η_TB) + (2(c_1² + c_2²)σ²b³/(Tη_TλµB)) ln(m + T)
 = 2(Φ(x_1) − Φ*)(m + T)^{1/3}/(Tγb) + 2σ²(m + T)^{1/3}/(Tλµη_0bB) + (2(c_1² + c_2²)σ²b²/(TλµB)) ln(m + T)(m + T)^{1/3}, (90)

where the third inequality holds by Σ_{t=1}^T η_t³ ≤ ∫_1^T η_t³ dt with η_t = b/(m + t)^{1/3}. Let M′ = 2(Φ(x_1) − Φ*)/(γb) + 2σ²/(λµη_0bB) + (2(c_1² + c_2²)σ²b²/(λµB)) ln(m + T); then we can rewrite the above inequality as

(1/T) Σ_{t=1}^T E(‖grad Φ(x_t)‖² + L̃²‖y_t − y*(x_t)‖²) ≤ (M′/T)(m + T)^{1/3}. (91)

According to Jensen's inequality, we have

(1/T) Σ_{t=1}^T E(‖grad Φ(x_t)‖ + L̃‖y_t − y*(x_t)‖) ≤ ((2/T) Σ_{t=1}^T E(‖grad Φ(x_t)‖² + L̃²‖y_t − y*(x_t)‖²))^{1/2} ≤ √(2M′)(m + T)^{1/6}/T^{1/2} ≤ √(2M′)m^{1/6}/T^{1/2} + √(2M′)/T^{1/3}, (92)

where the last inequality is due to (a_1 + a_2)^{1/6} ≤ a_1^{1/6} + a_2^{1/6} for all a_1, a_2 > 0." }, { "heading": "B ADDITIONAL EXPERIMENTAL RESULTS", "text": "In this section, we provide additional experimental results on the SVHN, Fashion-MNIST and STL-10 datasets; the benchmark datasets are summarized in Table 3.

Table 3: Benchmark Datasets Used in Experiments

datasets | #samples | #dimension | #classes
MNIST | 60,000 | 28×28 | 10
CIFAR-10 | 50,000 | 32×32×3 | 10
CIFAR-100 | 50,000 | 32×32×3 | 100
SVHN | 73,257 | 32×32×3 | 10
Fashion-MNIST | 60,000 | 28×28 | 10
STL-10 | 5,000 | 32×32×3 | 10

The training loss and attack loss under the uniform attack are shown in Fig. 4. The test accuracy on natural images and under the uniform attack is shown in Table 4. These results indicate that our methods are robust to the uniform attack when training DNNs." } ]
2,020
null
SP:2308aac0572e5a7bca7552cfaf89617012da87b4
[ "The authors show that certain complete neural network verifiers can be mislead by carefully crafted neural networks that exploit round-off errors, which when large magnitude values overwhelm low magnitude values. Such a construction can be obfuscated by taking advantage of the compounding effect when there are many layers of the network. This can also be used to add backdoors to existing networks, albeit in a way that looks quite artificial." ]
The efficient and accurate characterization of the robustness of neural networks to input perturbation is an important open problem. Many approaches exist including heuristic and exact (or complete) methods. Complete methods are expensive but their mathematical formulation guarantees that they provide exact robustness metrics. However, this guarantee is valid only if we assume that the verified network applies arbitrary-precision arithmetic and the verifier is reliable. In practice, however, both the networks and the verifiers apply limited-precision floating point arithmetic. In this paper, we show that numerical roundoff errors can be exploited to craft adversarial networks, in which the actual robustness and the robustness computed by a state-of-the-art complete verifier radically differ. We also show that such adversarial networks can be used to insert a backdoor into any network in such a way that the backdoor is completely missed by the verifier. The attack is easy to detect in its naive form but, as we show, the adversarial network can be transformed to make its detection less trivial. We offer a simple defense against our particular attack based on adding a very small perturbation to the network weights. However, our conjecture is that other numerical attacks are possible, and exact verification has to take into account all the details of the computation executed by the verified networks, which makes the problem significantly harder.
[ { "affiliations": [], "name": "Dániel Zombori" }, { "affiliations": [], "name": "Balázs Bánhelyi" }, { "affiliations": [], "name": "Tibor Csendes" }, { "affiliations": [], "name": "István Megyeri" }, { "affiliations": [], "name": "Márk Jelasity" } ]
[ { "authors": [ "Stanley Bak", "Hoang-Dung Tran", "Kerianne Hobbs", "Taylor T. Johnson" ], "title": "Improved geometric path enumeration for verifying relu neural networks", "venue": "Computer Aided Verification,", "year": 2020 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "Proceedings of COMPSTAT’2010, pp. 177–186,", "year": 2010 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Kümmerer", "Ivan Ustyuzhaninov", "Matthias Bethge" ], "title": "Accurate, reliable and fast robustness evaluation", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rudy Bunel", "Jingyue Lu", "Ilker Turkaslan", "Philip H.S. Torr", "Pushmeet Kohli", "M. Pawan Kumar" ], "title": "Branch and bound for piecewise linear neural network verification", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy, SP 2017,", "year": 2017 }, { "authors": [ "Chih-Hong Cheng", "Georg Nührenberg", "Harald Ruess" ], "title": "Maximum resilience of artificial neural networks", "venue": "Automated Technology for Verification and Analysis,", "year": 2017 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Training deep neural networks with low precision multiplications", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Souradeep Dutta", "Susmit Jha", "Sriram Sankaranarayanan", "Ashish Tiwari" ], "title": "Output range analysis for deep feedforward neural networks", "venue": "NASA Formal Methods – 10th International Symposium,", "year": 2018 }, { "authors": [ "Timon Gehr", "Matthew Mirman", "Dana Drachsler-Cohen", "Petar Tsankov", "Swarat Chaudhuri", "Martin T. Vechev" ], "title": "AI2: safety and robustness certification of neural networks with abstract interpretation", "venue": "IEEE Symposium on Security and Privacy, SP 2018,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In 3rd International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L. Dill", "Kyle Julian", "Mykel J. Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In Rupak Majumdar and Viktor Kunčak (eds.), Computer Aided Verification,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: A simple and accurate method to fool deep neural networks", "venue": "In The IEEE Conf. 
on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Jean-Michel Muller", "Nicolas Brisebarre", "Florent de Dinechin", "Claude-Pierre Jeannerod", "Vincent Lefèvre", "Guillaume Melquiond", "Nathalie Revol", "Damien Stehlé", "Serge Torres" ], "title": "Handbook of Floating-Point Arithmetic", "venue": null, "year": 2010 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Franca Salis-Madinier" ], "title": "Building trust in human-centric artificial intelligence. Communication INT/887-EESC-2019", "venue": "European Economic and Social Committee,", "year": 2019 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" ], "title": "An abstract domain for certifying neural networks", "venue": "Proc. ACM Program. Lang.,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Vincent Tjeng", "Kai Y. Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Efficient formal safety analysis of neural networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Formal security analysis of neural networks using symbolic intervals", "venue": "In Proceedings of the 27th USENIX Conference on Security Symposium,", "year": 2018 }, { "authors": [ "Lily Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Luca Daniel", "Duane Boning", "Inderjit Dhillon" ], "title": "Towards fast computation of certified robustness for ReLU networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In their seminal work, Szegedy et al. found that for a given neural network and input example one can always find a very small adversarial input perturbation that results in an incorrect output (Szegedy et al., 2014). This striking discovery motivated a substantial amount of research. In this area, an important research direction is verification, that is, the characterization of the robustness of a given network in a principled manner. A usual way of defining the verification problem involves the specification of an input domain and a property that should hold over the entire domain. For example, we might require that all the points within a certain distance from an input example share the same output label as the example itself. The verification problem is then to prove or disprove the property over the domain for a given network (Bunel et al., 2020).\nThere are a large number of verifiers offering different types of guarantees about their output. Complete verifiers offer the strongest guarantee: they are able to decide whether a given property holds in any given input domain. For example, the verifier of Tjeng et al. is a state-of-the-art complete verifier that we will focus on in this paper (Tjeng et al., 2019). However, it is currently standard practice to ignore the details of the computations that the network under investigation performs, such as the floating point representation or the order in which input signals are summed.\nIn this paper, we claim that such implicit assumptions make verifiers vulnerable to a new kind of attack where the attacker designs a network that fools the verifier, exploiting the differences between how the verifier models the computation and how the computation is actually performed in the network. We will argue that such attacks can achieve an arbitrary divergence between the modeled and the actual behavior.\nThis new attack has practical implications as well. Concerns about the safety of AI systems are expected to lead to the establishment of standard requirements certified by a designated authority (Salis-Madinier, 2019). These certification procedures might involve verification methods as well. Fooling such methods makes it possible to get unsafe systems certified that might even contain a backdoor allowing for triggering arbitrary behavior.\nNumerical precision has not been a key practical concern in machine learning. Networks do sometimes produce numerical errors (e.g., Inf or NaN values), most often due to the non-linear operations within the loss function (Odena et al., 2019) or divergence during training. However, the network weights are normally robust to small perturbations due to stochastic learning algorithms (Bottou, 2010), and due to regularizers such as standard variants of weight decay and dropout (Srivastava et al., 2014). Due to this robustness, low precision arithmetic can be applied as well (Courbariaux et al., 2015; Gupta et al., 2015). Our results indicate that, when it comes to exact methods for verification, numerical issues become a central problem that can cause arbitrary errors and enable backdoors.\nOur contributions are the following. In Section 3, we introduce a simple adversarial network that misleads the verifier of Tjeng et al. (2019). In Section 4, we show how to hide the large weights that are present in the simple network. In Section 5, we describe a way to add a backdoor to an existing network with the help of the adversarial networks we proposed. 
Finally, in Section 6 we offer a defense against the attack we presented." }, { "heading": "2 BACKGROUND", "text": "Let us first formulate the verification problem, namely the problem of checking whether a given property holds in a given domain. We adopt the notation used in (Tjeng et al., 2019). For a possible input x, let G(x) denote the set of inputs that are considered similar to x in the sense that we expect all the points in G(x) to get the same label as x. The set G(x) is normally defined as a ball around x in some metric space defined by a suitable vector norm. The input domain we need to consider is given as G(x) ∩ X_valid, where X_valid denotes the set of valid input points. For example, we have X_valid = [0, 1]^m if the input is an image of m pixels with each pixel taking values from the interval [0, 1].

We now have to formulate the property that we wish to have in this domain. Informally, we want all the points in the domain G(x) ∩ X_valid to get the same classification label as x. Let λ(x) denote the true label of x and let f(x; θ) : R^m → R^n denote the neural network, parameterized by θ. This network has n outputs, classifying each input x into n classes. The label of x as predicted by the network is given by argmax_i f(x; θ)_i. Using this notation, the verification problem can be expressed as deciding the feasibility of the constraint

x′ ∈ (G(x) ∩ X_valid) ∧ (λ(x) ≠ argmax_i f(x′; θ)_i), (1)

with x′ as our variable. If this constraint is feasible then there is an x′ that violates the property. If it is infeasible then (provided G(x) ∩ X_valid is not empty) there is no such x′." }, { "heading": "2.1 APPROACHES TO VERIFICATION", "text": "There are many approaches to tackle this problem. We can, for example, search for a suitable x′ in the given domain using heuristic optimization methods (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Kurakin et al., 2017; Carlini & Wagner, 2017; Brendel et al., 2019). If the search succeeds, we can decide that equation 1 is feasible. Otherwise we cannot decide.

Other methods attempt to find a proof for the infeasibility of equation 1; however, they do not guarantee such a proof. Examples include (Wong & Kolter, 2018; Weng et al., 2018; Gehr et al., 2018; Raghunathan et al., 2018; Singh et al., 2019). If a proof is found, we can decide that equation 1 is infeasible. Otherwise we cannot decide. Such methods are sometimes called incomplete (Tjeng et al., 2019; Bunel et al., 2020).

The strongest guarantee is given by methods that are able to decide the feasibility of equation 1. These methods are sometimes called complete (Tjeng et al., 2019; Bunel et al., 2020).

Examples of such methods include Reluplex (Katz et al., 2017), a method based on an SMT solver. A number of verifiers are based on MILP solvers, for example, (Cheng et al., 2017; Dutta et al., 2018). MIPVerify (Tjeng et al., 2019) also uses an MILP formulation, along with several additional techniques to improve efficiency (see Section 2.2). Symbolic interval propagation has also been proposed for ReLU networks by Wang et al. in ReluVal (Wang et al., 2018b), and as part of Neurify (Wang et al., 2018a). In Neurify, interval propagation is used as a technique to tighten the bounds used for linear relaxation. Nnenum is another geometric method that is based on propagating linear star sets (Bak et al., 2020).
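To make the MILP-based complete approach concrete, here is the textbook big-M encoding of a single ReLU with known pre-activation bounds — a generic sketch, not a transcription of any particular verifier's implementation:

import numpy as np

# Big-M encoding of y = max(x, 0) with known bounds l <= x <= u (l < 0 < u):
#   y >= 0,   y >= x,   y <= u * d,   y <= x - l * (1 - d),   d in {0, 1}.
l, u = -3.0, 5.0

def feasible(x, y, d):
    return (y >= 0 and y >= x and
            y <= u * d + 1e-9 and y <= x - l * (1 - d) + 1e-9)

for x in np.linspace(l, u, 201):
    y = max(x, 0.0)
    # the true ReLU output is always feasible for some binary d ...
    assert feasible(x, y, 0) or feasible(x, y, 1)
    # ... and a wrong output is excluded for every d
    assert not (feasible(x, y + 0.5, 0) or feasible(x, y + 0.5, 1))
print("big-M constraints encode the ReLU exactly on the grid")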
" }, { "heading": "2.2 MIPVERIFY", "text": "Although the idea behind the attack is not specific to a particular verifier—as we discuss in Section C of the Appendix—we develop and evaluate the attack in detail for a state-of-the-art complete verifier: MIPVerify (Tjeng et al., 2019). It is based on a mixed integer linear programming (MILP) formulation. As long as the domain G(x) ∩ X_valid is the union of a set of polyhedra, and the neural network f(x, θ) is a piecewise linear function of x with parameters θ, the problem of checking the feasibility of the constraint in equation 1 can be formulated as a MILP instance.

G(x) is normally defined as a ball in a suitable norm with x as the center. In the ℓ∞ or ℓ1 norm, G(x) is thus a cube. Also, X_valid is normally a box or a set of boxes, so the domain is indeed the union of a set of polyhedra. The neural network is piecewise linear as long as the nonlinearities used are ReLUs (note that the last softmax normalization layer adds no extra information and can thus be ignored). For the details of the MILP formalization, please see (Tjeng et al., 2019).

Importantly, MIPVerify applies a presolve step that greatly increases its efficiency. In this step, the authors attempt to tighten the bounds on the variables of the problem, including on the inputs to each ReLU computation. If in this step it turns out that the input of a ReLU gate is always non-positive, the output can be fixed as a constant zero, and if the input is always non-negative then the ReLU gate can be removed from the model, as it will have no effect.

The presolve step applies three approaches in a progressive manner. First, a fast but inaccurate interval arithmetic approach is used. The resulting bounds are further improved by solving a relaxed LP problem for every variable. Finally, the full MILP problem is solved for the variables, but with early stopping." }, { "heading": "2.3 FLOATING POINT REPRESENTATION", "text": "Floating point representations of real numbers are successful and efficient tools for most real-life applications (Muller et al., 2010). This arithmetic is available on most modern computers via sophisticated hardware implementations. A floating point number is represented as s · b^e, where s is the signed significand, b is the base and e is the exponent. There are numerous standards that implement the exact details of this idea; they differ mainly in the number of bits that the significand and the exponent use. The formula to compute the represented real number has several possible variations as well.

Here, we will use the double precision (binary64) arithmetic defined by the IEEE 754-1985 standard (IEEE, 1985). There, b = 2 and we have a sign bit, an 11 bit exponent, and a 53 bit significand (with 52 bits stored explicitly). The so-called machine epsilon (the maximum relative rounding error) is 2^−53 ≈ 1.11e−16. This means that, for example, the computation 10^20 + 2020 − 10^20 will result in zero in this representation, if executed in the specified order. In our attack, we will exploit roundoff errors of this type. Note that in the order 10^20 − 10^20 + 2020 we obtain the correct result of 2020.
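The order dependence described above is easy to reproduce on any IEEE-754 double precision implementation; a minimal sketch:

# IEEE 754 double precision: machine epsilon is 2**-53 ~ 1.11e-16.
big = 1e20
print(big + 2020.0 - big)        # 0.0: 2020 is absorbed when rounding big + 2020
print(big - big + 2020.0)        # 2020.0: reordering recovers the exact result
print(2.0**53 + 1.0 - 2.0**53)   # 0.0: 2**53 + 1 rounds back to 2**53
print(2.0**53 - 2.0**53 + 1.0)   # 1.0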
" }, { "heading": "2.4 WHAT IS THE OBJECT OF VERIFICATION?", "text": "In related work, it is almost always implicitly assumed that the object of verification is a neural network computed using precise arithmetic. However, a more appropriate and also more ambitious goal is to consider the network as it is computed in practice, that is, using floating point arithmetic, with an arbitrary ordering of the parallelizable or associative operations.

If we consider the less ambitious goal of the verification of the precise model, most complete methods still fall short, as they use floating point representation internally without any hard guarantees for precision. Those verifiers that are based on different linear programming formulations all belong to this category. Although Katz et al. explicitly consider this issue for Reluplex (Katz et al., 2017), they also propose floating point representation, citing efficiency reasons.

However, striving for precision is a wrong direction, as actual networks use floating point representation themselves. This fact means that actual networks include non-linearities that need to be modeled explicitly if our goal is verification with mathematical strength.
Nevertheless, we also conjecture that any fixed order of addition, or indeed any fixed algorithm for determining the order could similarly be exploited in an attack.\nIn approaches such as MIPVerify, which rely on state-of-the-art commercial solvers like Gurobi (Gurobi, 2020), the mapping of the actual computation—such as the order of addition— to computations performed by the solver is non-trivial and hard to control, as it is defined by the many (typically proprietary, hence black box) heuristics and techniques applied while solving the MILP problem.\nThe simplest form of our adversarial network is shown in Figure 1. This network performs a binary\nclassification over its input x ∈ [0, 1]. By construction, we know that y1 ∈ [0, 1]. Since MIPVerify expects multi-class models and thus two outputs, we add another technical output y2, such that the two classes are defined by y1 < y2 and y1 ≥ y2, respectively. Also, we include a neuron with a constant input of zero.\nThe key element of the network is neuron C (Figure 1). The idea is that the maximal value of C(x) is given by ω− ω+ 1. The computation of this value might lead to a roundoff error if ω is too large and if 1 is not the last addend. For example, when using the 64 bit floating point representation, if ω > 253 (recall that 2−53 is the machine epsilon) then a roundoff error is possible. In the case of a roundoff error ω + 1 − ω is computed to be zero, that is, C(x) = 0, x ∈ [0, 1]. This means that we get the incorrect output y1(x) = 0, x ∈ [0, 1]. In other words, the entire input domain appears to belong to the y2 > y1 class. The roundoff error thus masks the fact that there are input points that in reality belong to the other class. This property will be used later on to add a backdoor to an existing network.\nThe role of σ is more subtle. It defines the steepness of the transition region of B(x). We should set σ so that the range of B(x) is the interval [0, 1]. This means that we need to have σ < −2. It should be emphasized that the roundoff error drastically changes the behavior of the network. Our goal is not to generate a small perturbation to push the network over a decision boundary; instead, we create a switch with two very distinct positions, which is turned on or off depending on whether the roundoff error occurs or not.\nEmpirical evaluation shows that the attack is successful. We evaluated MIPVerify experimentally using two commercial solvers: Gurobi (Gurobi, 2020), and CPLEX (CPLEX, 2020), and the open source GLPK (GLPK, 2020). During these evaluations, we experimented with different values of σ and ω to see whether our adversarial networks could fool the MIPVerify approach. We randomly generated 500 values for σ from the interval [−15,−2] and for all the sampled σ values we tested ω values 254, 255, . . . , 270. For each parameter setting we tested whether the input point x = 0.75 has an adversarial example within a radius of 1. Recall that the valid input range is x ∈ [0, 1], so in fact we evaluated the problem over the entire valid input range. Clearly, the correct answer is yes, that is, the constraint in equation 1 is feasible. Yet, we found that all three solvers found the problem infeasible for all the parameter combinations we tried. That is, our simple adversarial network reliably fools the MIPVerify model." 
}, { "heading": "4 OBFUSCATING THE NETWORK", "text": "The naive network in Figure 1 works as an attack, but it is painfully obvious to spot as it has very large weights (which is highly unusual) and it also has a neuron with constant zero input (which is also suspicious). Here, we argue that the network can be made to look more “normal” relatively easily. Obviously, this problem might also lead to an arms race between obfuscators and detectors, and it is not our goal to offer an undetectable obfuscation. We simply wish to argue that detection can be made harder if an attacker puts in some effort to hide the attack. Here, we focus on making the weight distribution more realistic.\nFigure 2 illustrates the construction. We wire the neuron that requires a constant zero input to a neuron that has a constant zero output over the input domain, due to the application of the rectifier nonlinearity. We then add n extra layers and spread the weights ω and −ω over these layers effec-\ntively creating two large products ∏ i ω1i and ∏\ni ω2i. The last weight on both the top and bottom paths is set such that the entire product is ω and−ω, respectively. Each weight ωji is drawn from the normal distribution ωji ∼ N (ω 1 n , (ω 1 n /4)2) independently, but negative values are redrawn until a positive value is found.\nThe obfuscated attack is also successful. We evaluated these networks using Gurobi, GLPK and CPLEX. The values for parameter n (the number of layers) we tested were n = 20 and n = 50. In both cases, we experimented with the same values of σ and ω as in the case of the simple network, and followed the same evaluation methodology. We generated the network weights ωij based on the algorithm above, independently for all the pairs of σ and ω parameters. The algorithm was successful in generating a good network—where the product of the weights along the top and bottom paths is the same in absolute value—for the first try in at least 70% of the cases, so it is not that difficult to generate a suitable network: we can simply try a different random seed until the algorithm is successful.\nWe found that MIPVerify was fooled independently of what solver was used, that is, the problem was found infeasible for all the parameter combinations we tried, when ω > 254. For the value of ω = 254, MIPVerify still found the problem infeasible for the vast majority of the networks with all three underlying solvers. In the remaining few cases, MIPVerify found an adversarial sample with at least one of the three solvers. Table 1 contains the percentage of the successful adversarial networks (that is, the networks that fooled MIPVerify with all three solvers) in different ranges of σ.\nFurther ideas for obfuscation. The values of all the weights ωij are positive. One could also add negative weights if the desired mean weight is zero. Such links could point to “garbage” neurons that have no effect on the output by design. Besides, when using this network as a backdoor to some relatively larger legitimate network (see Section 5), one could imitate the weight distribution of the legitimate network and integrate the backdoor structurally as well into the legitimate network." }, { "heading": "5 CREATING BACKDOORS", "text": "We shall now demonstrate that the adversarial network we described can be used to extend a nontrivial network with a backdoor, so that the extended network passes verification exactly as the original network, but in practice it will have a backdoor that can be used to trigger arbitrary behavior. 
The idea is that, when the backdoor pattern is present in the input, the integrated adversarial network will operate in the “interesting” domain in which the roundoff error is on, so the verified behavior will be different from the actual behavior. When the backdoor pattern is not present, the adversarial network will operate in the “boring” domain where its output is zero and the roundoff error does not have any effect.\nWe will work with the MNIST dataset and we fix the backdoor pattern to be the top left pixel being larger than 0.05 (assuming the pixels are in [0, 1]). Note that in the MNIST dataset the top left pixel is 0 in every example.\nThe legitimate network to insert the backdoor into. For our evaluation, we selected an MNIST classifier described in Wong & Kolter (2018) and used in (Tjeng et al., 2019) to evaluate MIPVerify. We will refer to this network as WK17a. It has two convolutional layers (stride length: 2) with 16 and 32 filters (size: 4×4) respectively, followed by a fully-connected layer with 100 units. All these layers use ReLU activations. (note that in (Tjeng et al., 2019) it was referred to as CNNA). The network was trained to be robust to attacks of radius 0.1 in the `∞ norm by the method of Wong and Kolter.\nInserting the backdoor. The backdoor construction is shown in Figure 3. The basic idea is that we insert the adversarial network as a switch that is triggered by a pattern in the input. Here, the backdoor is activated (that is, the roundoff error becomes effective) whenever the top left pixel is\nlarger than 0.05. To achieve this, the parameters of neuron A in the simple network in Figure 1 had to be modified.\nThe backdoor is integrated into the convolutional architecture. Figure 3 is a conceptual diagram. In reality, the adversarial network is fully integrated into the convolutional architecture of WK17a. This requires the creation of an additional filter for each neuron in the corresponding layer. The output of every such neuron can then be recovered from the computed new feature map and it can be used in the next layers. The neuron with the two large input weights is already in the fully connected layer, so it can simply be added to that layer. This neuron is wired to just two pixels in the two feature maps in the previous convolutional layer that correspond to the outputs of its two input neurons. The neurons in the last layer of the adversarial network are added to the logits layer of WK17a.\nImplementing the backdoor behavior. When the backdoor is active, we will make the network shift its predictions by one position. This needs the addition of two extra layers after the logits layer of WK17a, as well as the modification of the WK17a logits layer. We empirically determined that the absolute value of the signal in each unit is always less than 1000 in the logits layer of WK17a, so we increase the bias of each unit by 1000. This constant is large enough to ensure that all the values are positive. This further means that the ReLU activation (that we add because now this is no longer the last layer) will not alter these values.\nThe next layer is a new one, into which we copy the WK17a logits layer and we copy a shifted version of the same logits layer as well. Copying can be implemented by zero bias and input weights of 1. Shifting can be implemented by wiring the units in a shifted pattern. The adversarial network is then used to suppress either the original copy or the shifted copy. 
If the backdoor is activated then the original copy will be suppressed, otherwise the shifted one. Suppression is implemented by subtracting the value 2000 from all the inputs so that they become negative (the output signal of the corresponding switch neuron, adversarial or original, is 1). This way, the output will be zero due to the ReLU activation.\nIn the final layer, the original and shifted halves of the previous layer are added. Since exactly one of these halves has been suppressed, the last layer will be equal to either the original layer or the shifted layer. We then apply softmax normalization to obtain the predicted label distribution y. This distribution will be identical to the one predicted by the original MNIST network if the backdoor is not active; otherwise it will be shifted, so that if the original prediction was i then the shifted prediction is i + 1 mod 10. Note that the backdoor could trigger an arbitrary behavior, shifting is used here as an ad hoc example.\nVerification fails, as it misses the backdoor. We verified the backdoored network—that is, WK17a extended with the adversarial network that implements the backdoor mechanism—using MIPVerify\nwith Gurobi as our solver over the test set of the MNIST dataset, using a radius of 0.1 in the `∞ norm. The verification result was identical to that reported in (Tjeng et al., 2019), namely 4.38% adversarial error, as if no backdoor had been present. The correct verification result should have been 100% adversarial error because, by design, the backdoor mechanism is fully functional in the verified network, and the backdoor pattern is at a distance of at most 0.05, that is, well within the radius of 0.1 from any example. Also, when the backdoor pattern is active, the label is guaranteed to be different from the prediction of WK17a. This means that if the original prediction was correct, the backdoor will certainly introduce an adversarial example." }, { "heading": "6 A DEFENSE", "text": "A naive idea for a defense could be to use a precision that is higher than that of the network while solving the optimization problem. This might indeed work but it would open another similar attack, namely the network’s design could deliberately activate a certain roundoff error that is missed by the verifier. Using combinations of different precisions is also an option but here—instead of attempting to predict the outcome of such an arms race—we assume that both the network and the optimizer use the same double precision internal representation.\nWeight perturbation as a defense. We propose to add a very small perturbation to the parameters of the network. That is, we replace every parameter w with ŵ ∼ w+U(−|w| , |w| ), where is the relative scale parameter of the uniform noise term. The key insight is that natural networks are very robust to small perturbations, so their normal behavior will not be affected. However, even a small perturbation will change the numerical behavior of neuron C in the simple adversarial network (Figure 1). In particular, its positivity will no longer depend on the roundoff error and so the verifier will correctly detect its behavior. Note that the roundoff error might still occur, only the positivity of C will not depend on whether it occurs or not.\nAccuracy is robust to small weight perturbation. We tested the sensitivity of the WK17a network we studied in Section 5. 
We perturbed the parameters of both the original version and the backdoor version (see Figure 3), using various values of and we measured the test accuracy of the networks. The results are shown in Table 2 (the results are averages of 10 independent perturbed networks). Although the network with a backdoor is somewhat less robust, for a small noise such as = 10−9 the prediction performance of both networks remains unaltered. Note that the test examples do not activate the backdoor.\nAdversarial error is robust to small weight perturbation. We also verified a perturbed WK17a network without the backdoor with MIPVerify, with = 10−3, 10−6 and 10−9, and the results are identical to that of the unperturbed network for every single test example for = 10−6 and 10−9, yielding 4.38% adversarial error. For = 10−3 the adversarial error was 4.37%. Thus, in this network, even adversarial robustness is remarkably robust to small weight perturbation not only test accuracy.\nThe backdoor is revealed to verification with a 50% probability. As mentioned above, the positivity of neuron C in the adversarial network will no longer depend on whether the roundoff error occurs or not. This is because, due to the small noise, the two large incoming weights will no longer sum to zero but instead their sum will be positive or negative with an absolute value that is several orders of magnitude larger than the bias of neuron C, that is, 1. These two cases both have a probability of 50%. The same reasoning applies to the obfuscated network as well. A detailed analysis of this is included in Section B in the Appendix. If the input weights of neuron C sum to a negative value then its output will be a constant zero. If such a network is used as the switch for the backdoor then the backdoor will stay off permanently, independently of the input pattern. If the sum of the incoming weights is positive then C has a large positive output for the range x < 0.5 and so the behavior of the network remains intact, however, the verifier will now detect this behavior.\nOur empirical measurements support the theoretical predictions. We verified the simple as well as the obfuscated networks (Figures 1 and 2), with = 10−3, 10−6 and 10−9, using the same methodology as in the previous sections, with the same hyperparameters σ, ω and n, as before. We tested only those networks where the original behavior remained intact (which happens in approximately 50% of the perturbed networks, as we argued above). With Gurobi, the verification revealed the true behavior in 100% of the cases for all our settings, so the defense indeed worked.\nFinally, we verified a perturbed WK17a network with the backdoor added, perturbed with = 10−3, 10−6 and 10−9, using Gurobi. In all three cases, we selected a perturbation where the backdoor switch remained functional. This time, the result of the verification successfully revealed the backdoor for 75.85%, 91.03% and 98.3% of the test examples, respectively. Since such a perturbation has a probability of only about 50%, it might be necessary to repeat the verification with independently sampled perturbations. This allows one to increase this probability to a desired level. Alternatively, the approval might be assigned to the perturbed network, as opposed to the original network. This way, if the perturbation turns the backdoor off permanently (and thus the verification does not find problems) the approval is still valid.\nSelecting . 
Based on the observations above, we can summarize the requirements for selecting a suitable value for . First, we need the smallest possible so that the behavior of the network is not changed. Second, we need a large enough so that ω 1. Fortunately, these two requirements can easily be satisfied simultaneously since neural networks are in general very robust to small weight perturbations, while ω is very large. In our case, = 10−9 was a suitable value." }, { "heading": "7 CONCLUSIONS", "text": "We proposed an attack against a complete verifier, MIPVerify (Tjeng et al., 2019). The idea was that we exploited a floating point roundoff error that was made by all the MILP solvers we tested to solve the MIPVerify model. The attack allowed us to modify any given network by adding a backdoor that enables triggering arbitrary behavior using a specified pattern in the input. This backdoor was completely missed by the verification. Our preliminary results with other verifiers indicate that a similar attack might be effective on a number of other methods as well (see Appendix, Section C).\nAlthough we did offer a defense for the particular attack we presented, we believe that our work still implies that for a reliable verification, a verifier must take into account all the details of the implementation of the network. This includes the details of the representation of the numeric parameters as well as the order of the operations. Otherwise, potentially exploitable differences in the actual computation and the model are guaranteed to exist. This way, though, the verification would be valid only for a specific implementation. The implementation of a network can also be non-deterministic. For example, a parallel hierarchical implementation of addition can result in an exponential number of different actual executions of the same addition, depending on the specifics of the hardware the network is running on. In this case, the verifier must make sure that its output is valid for every possible grouping and ordering of each operation performed during a forward pass of the network.\nThe attack we proposed is rather straightforward, just like the defense. However, without the defense, the attack can completely alter the behavior of any network undetected. This means that it is important to keep the numerical vulnerability of verification methods in mind, and further research is needed to find solutions that explicitly prevent numeric attacks in a scalable and efficient manner." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program and the Artificial Intelligence National Excellence Program (grant 2018-1.2.1-NKP-2018-00008), as well as grant NKFIH-12792/2020, project “Extending the activities of the HU-MATHS-IN Hungarian Industrial and Innovation Mathematical Service Network” (grant EFOP-3.6.2-16-2017-00015), the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and the Unkp-19-4-Bolyai+ New National Excellence Program of the Ministry of Human Capacities. We are also grateful to our reviewers and commenters for their very helpful feedback that helped us make the paper more complete and better organized." 
}, { "heading": "A SPECIFICATION OF OUR EXPERIMENTAL ENVIRONMENT", "text": "Since our work depends on the internals of commercial solvers, for reproducibility, we give the full specification of the environment that we used:\n• CPU: Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz\n• Operating System: Ubuntu 18.04.4 LTS\n• GLIBC 2.27\n• Julia version 1.5.0\n• Gurobi: Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)\n• Gurobi julia package: Gurobi v0.8.1\n• CPLEX: IBM(R) ILOG(R) CPLEX(R) Interactive Optimizer 12.10.0.0\n• CPLEX julia package: CPLEX v0.6.6\n• GLPK v4.64\n• GLPK julia package: GLPK v0.13.0, GLPKMathProgInterface v0.5.0\n• MIPVerify julia package: MIPVerify v0.2.3\n• JuMP julia package: JuMP v0.18.6, ConditionalJuMP v0.1.0\n• MathProgBase julia package: MathProgBase v0.7.8\nThe code is shared at https://github.com/szegedai/nn_backdoor." }, { "heading": "B ANALYSIS OF THE DEFENSE PERTURBATION", "text": "Let us consider the simple network in Figure 1. The defense consists of adding a small perturbation to the parameters of the network with uniform distribution. More precisely, we replace every parameter w with ŵ ∼ w + U(−|w| , |w| ), where is the relative scale parameter of the uniform noise term.\nWe will assume that < 1, which means that adding noise to a weight will never change the sign of the weight. In practice, should be very small, for example, for double precision we used = 10−9. From the construction, we also know that x ∈ [0, 1] and σ < −2. For simplification, we will assume here that ω > 0 although that is not strictly necessary. Note that, in practice, we set ω > 254 for attacking double precision floating point arithmetic, because smaller values do not guarantee adversariality.\nFor a given input x, the output of every neuron is now a random variable depending on the random perturbation. From the definition of neuron A, however, we know that for every x ≤ (1 − )/2 we have A(x) = 0. Formally, we have Pr(A(x) = 0|x ≤ (1 − )/2) = 1. For this reason, the distribution of the output of every neuron is independent of x, if x ≤ (1− )/2, because the output of each neuron depends on x only through neuron A. This means that it suffices to study x = 0 to describe the distribution of the output in this interval.\nFrom now on, any variable ui will denote a random variable with the distribution ui ∼ U(− ,+ ). We also assume that each variable ui is drawn independently. We have seen that A(0) = 0. From this, it follows that B(0) ∈ [1 − , 1 + ] because B(0) = 1 + u0. Let us now examine the input function of C, fC , that we can derive from Figure 1. We have fC(0) = ω(1 + u1)B(0) − ω(1 + u2)(1 + u3) + (1 + u4) = ω(1 + u1)(1 + u0)− ω(1 + u2)(1 + u3) + (1 + u4). From this, we can compute lower and upper bounds to get the range of fC(0):\nfC(0) ∈ [−4ω + 1− , 4ω + 1 + ]. (2) Within this interval, the distribution of fC(0) is symmetrical about the center of the interval, that is, 1. This further means that we have Pr(fC(0) < 1) = 1/2. However, the probability mass of fC(0) is very small in the interval [0, 1] for typical parameter settings, because ω is several orders larger than 1. For example, when = 10−9 and ω = 254, we have ω = 254 · 10−9 ≈ 1.8e7. So, we have Pr(fC(0) < 0) ≈ 1/2. If fC(0) < 0 then it is easy to see that fC(x) < 0, x ∈ [0, 1]. This is because fC(0) is an upper bound of fC(x), which follows from the fact that our only input x has an effect only through a linear chain of neurons all of which will thus have a monotonous output. 
However, if f_C(x) < 0 then C(x) = 0 due to the ReLU activation, which means that y_1(x) = 0, thus y_1(x) < y_2(x) for all x ∈ [0, 1]. Now, let us consider the case where f_C(0) > 0 and thus C(0) = f_C(0). In this case, we have y_1(0) < y_2(0) if and only if C(0)(1 + u_5) < −2C(0)(1 + u_6) + 1 + u_7. Here, we know that C(0)(1 − ε) < C(0)(1 + u_5) and −2C(0)(1 + u_6) + 1 + u_7 < −2C(0)(1 − ε) + 1 + ε. From this, it follows that if C(0) > 1 then we must have ε > 1/2 to have y_1(0) < y_2(0) with a probability larger than zero. Since typical values of ε will be much smaller than 1/2, we conclude that if C(0) > 1 then y_1(0) > y_2(0). Previously, we showed that Pr(C(0) > 1) = 1/2. In the interval C(0) ∈ [0, 1] the output depends on the value of ε as well as on the actual values of u_5, u_6 and u_7. However, as mentioned earlier, the probability of a perturbation that results in C(0) ending up in this interval is negligible.\nLet us now consider the case where x > (1 − ε)/2. Looking at Figure 1, we notice that the transition interval where C(x) decreases from 1 to 0 is very short. We will show that, in fact, it is shorter than the machine epsilon, so C(x) is practically a step function. When adding noise, this transition interval becomes somewhat longer (that is, when C(0) > 0, because otherwise there would be no transition at all), and it will be on the order of ε at most. To see this, we will need an upper bound on f_C and we need to derive the point where it reaches zero. We know that\nf_C(x) ≤ ω(1 + ε)B(x) − ω(1 − ε)(1 − ε) + (1 + ε), (3)\nB(x) ≤ σ(1 − ε)A(x) + (1 + ε), and (4)\nA(x) ≥ x − (1/2)(1 + ε). (5)\nNote that we need a lower bound on A(x) because σ < 0. Now, we need to substitute the bounds on A(x) and B(x) into the bound on f_C(x) and find the x for which this bound is zero. This gives\nx = (1/2)(1 + ε) + (1/σ)( (1 − ε)/(1 + ε) − (1 + ε)/(1 − ε) − 1/(ω(1 − ε)) ) ≤ 1/2 + 2ε/(1 − ε²) + 1/(2ω(1 − ε)), (6)\nwhere we used the fact that σ ≤ −2. This bound on x is very close to 1/2. In fact, without perturbation (that is, with ε = 0), the offset is just 1/(2ω), which is less than the machine epsilon for our settings of ω. Since ε² and 1/(2ω) are negligibly small, we can approximate the bound as 1/2 + 2ε. Since f_C(x) is monotone decreasing, this means that f_C(x) < 0 for x ∈ [1/2 + 2ε, 1]. This further implies that C(x) = 0 for x ∈ [1/2 + 2ε, 1], and thus y_1(x) < y_2(x) over this interval. To sum up, we proved that if x ∈ [0, 1/2 − ε/2] then with at least 50% probability we have y_1(x) ≥ y_2(x) and with almost 50% probability we have y_1(x) < y_2(x), and if x ∈ [1/2 + 2ε, 1] then we always have y_1(x) < y_2(x). When y_1(x) ≥ y_2(x), the value of C(x) is large enough with overwhelming probability for reasonable parameter settings (e.g., ε = 10^-9, ω = 2^54) to prevent roundoff errors from occurring. The interval x ∈ [1/2 − ε/2, 1/2 + 2ε] was not discussed; here, the outcome depends on the actual noise values and the other parameters, however, this is an extremely short interval of length 2.5ε.\nAs a final note, the network in Figure 2 can be treated in a very similar fashion, the only difference being that the noise that is effectively added to ω and −ω will follow a different distribution. Focusing on ω (−ω is very similar), the noisy product has the form ∏_i(ω_{1i} + u_{1i}). The effective absolute noise added to ω will be more similar to a normal distribution, as it is mainly defined by the sum of the first-order noise terms: Σ_{i=1}^n u_{ji} ∏_{k≠i} ω_{jk}. Thus, the expectation is zero and the variance grows with ω^{(n−1)/n} √n. The effective relative noise is thus increased by a factor of √n, approximately.
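As a quick check of this √n claim, the following sketch perturbs a product representation of ω. It assumes, for illustration, n equal factors ω^{1/n} with independent relative noise on each factor; the exact factorization used in the obfuscated network may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, omega = 1e-9, 2.0**54

for n in (1, 4, 16, 64):
    f = omega ** (1.0 / n)                         # n equal factors whose product is omega
    u = rng.uniform(-eps, eps, size=(100_000, n))  # independent relative perturbations
    rel = np.prod(f * (1.0 + u), axis=1) / omega - 1.0
    # std of the relative noise, normalized by the n = 1 value eps/sqrt(3)
    print(n, rel.std() / (eps / np.sqrt(3.0)))     # grows roughly like sqrt(n)
```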
So, Pr(0 < C(x) < 1) is still very small, and the range of f_C(x) is larger, so our arguments about the simple case transfer to the obfuscated case as well. The upper bound on the length of the transition interval will be somewhat larger due to the larger variance, but it will still be very small." }, { "heading": "C ATTACKING ADDITIONAL VERIFIERS", "text": "Although we focused on MIPVerify, the idea of the attack, and the attack itself, is potentially viable for other state-of-the-art verifiers as well. Here, we briefly present a number of preliminary measurements. We emphasize that these measurements are not intended to be thorough or systematic, but are the result of simply making an honest effort to run the public implementations of these verifiers with no parameter tuning and only the minimal modifications that were necessary to process our networks. Nevertheless, these preliminary results are still informative, as they support the conjecture that the type of attack we discussed is not specific to MIPVerify and could be viable for other verifiers as well. Further analysis of these verifiers is an interesting direction for future work.\nC.1 RELUVAL\nReluVal (Wang et al., 2018b) is a complete method based on symbolic interval arithmetic. We used the implementation available on GitHub1. Since this implementation is not able to process convolutional networks, we could test only our simple adversarial network. ReluVal was able to detect the adversarial example in any setting we tried. In other words, ReluVal was not fooled by our adversarial network.\nWe would like to add, though, that upon inspecting the implementation, we found a number of signs suggesting that the implementation itself is not completely reliable. For example, the outward rounding of intervals is done using a fixed constant, instead of an adaptive method. Also, the parameters of the linear expressions in the symbolic intervals are not treated reliably. This makes it likely that one could design an attack specifically for this implementation.\nC.2 NEURIFY\nNeurify (Wang et al., 2018a) is a successor of ReluVal. It is much more efficient and it also uses linear relaxations that define an LP, which needs to be solved. This fact made it likely that our attack might work. We used the GitHub implementation2. Neurify can process convolutional networks, so we could run the verification on both the simple adversarial network and the WK17a networks with or without the backdoor, although that required a slight modification of the code: we had to fix a trivial indexing bug that was unrelated to the verification itself.\nFor the simple adversarial network, Neurify was not able to correctly find adversarial examples when the radius of the input ball was larger than about 0.85. Thus, this setup fools the method (or at least this implementation of it). With smaller radii, the adversarial examples were found.\nWe tested the original and backdoored variants of WK17a within ℓ∞ radii of 10% and 100% of the input space diameter. For the original WK17a network, the implementation was not able to process all the input examples: some of the examples caused the error message “Not implemented: At least one node needs to be able to be split to test the LP.” Some other examples resulted in very long runs that never terminated. We were able to run the verification for a number of examples.\n1https://github.com/tcwangshiqi-columbia/ReluVal\n2https://github.com/tcwangshiqi-columbia/Neurify
For these examples, the verification was correct.\nFor the WK17a network with the backdoor added, the verification terminated for all the 1000 examples in the implementation, and in all the cases the answer was “safe”, which is an incorrect answer. This means that this implementation of Neurify is fooled by our backdoored network. This result might be due to an implementation issue because, for example, we saw Inf and NaN values among the bounds.\nC.3 NNENUM\nNnenum (Bak et al., 2020) is a geometric method that is based on propagating linear star sets. We used the GitHub implementation3.\nWe tested the simple adversarial network first, with an ℓ∞ radius of 0.1. Nnenum is not fooled on this network. However, a small modification of the simple network allows us to fool the method. The original adversarial network in Figure 1 creates a step function (C(x) = 1, x ≤ 0.5), while setting up a roundoff error trap. We added a new neuron, similar to neuron A, to the first layer, with parameters chosen so as to have neuron C represent a roughly rectangular function with C(x) = 1, x ∈ [0.475, 0.5]. When testing this network with x = 0.55 and radius 0.1, Nnenum output “safe”, which is incorrect.\nOn WK17a with the backdoor, out of the 980 correctly classified examples we tested, 180 were incorrectly verified as “safe” and the remaining 800 were “unknown”. No example was verified as “unsafe” (using a timeout of 1 minute).\nC.4 ERAN REFINEPOLY\nDeepPoly is a verification method that is claimed to be sound with respect to floating point operations (Singh et al., 2019). We tested the GitHub implementation4.\nWe were unable to process our simple adversarial network, as it would have required substantial modifications of the code base. We verified WK17a and WK17a with the backdoor. We should add that we were not able to reproduce exactly the measurements in (Singh et al., 2019), although the results are close and we got no error messages or warnings. For this reason, our tests might not be entirely accurate.\nWe ran DeepPoly with the “complete” option, using the usual ℓ∞ radius of 0.1. This instance is referred to as RefinePoly, where DeepPoly is combined with MILP. RefinePoly was able to process WK17a and it correctly verified 928 safe examples out of the 980 correctly classified examples in the test set. For the rest of the examples it returned with a “failed” status, meaning it was not able to decide about safety. However, for the backdoored version of WK17a, RefinePoly incorrectly output “safe” for 33 out of the 980 examples, all of which are in fact unsafe with respect to this network. For the remaining examples the output was “failed”, which means that RefinePoly was unable to determine whether the input is safe or not. The 33 examples over which RefinePoly is fooled represent a small percentage, yet they are proof that RefinePoly is not immune to our attack either." }, { "heading": "D HEURISTIC ATTACKS", "text": "Our work focuses on complete verification, but it is still interesting to ask how our backdoor construction performs against heuristic attacks such as PGD (Kurakin et al., 2017) or Brendel-Bethge (BB for short) (Brendel et al., 2019). We ran these attacks against the backdoored WK17a network in the ℓ∞ norm. Both attacks successfully found adversarial examples created by the backdoor (PGD (40 iterations) and BB have success rates of more than 30% and 90%, respectively).
The reason is that, although the backdoor switch network itself does not provide any useful gradient, this is not needed, because the PGD attack is led by the gradient of the original WK17a network’s loss function in the right direction (increasing the top left pixel value), while the BB attack starts from a random point that will be in the backdoor input space (top left pixel larger than 0.05) with high probability.\n3https://github.com/stanleybak/nnenum\n4https://github.com/eth-sri/eran\nIt is interesting to note, though, that with a small modification the backdoor can be hidden from these heuristic attacks as well. Namely, instead of using just one pixel as a backdoor pattern, we can use more (say, a 3x3 area at the top left corner), requiring, for example, half of these pixels to be less than 0.05 and the other half to be larger than 0.05. The switch can easily be modified to be sensitive to this more complex pattern. When attacking this modified backdoor, both algorithms failed to find it, and instead their success rates became identical to those over the original unmodified WK17a network (less than 3% for both algorithms). This is because this more complex backdoor pattern represents a subspace of relatively very small volume (hence BB will very rarely be initialized inside of it) and the natural gradient of WK17a is very unlikely to point towards this specific pattern." } ]
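As an aside on the last paragraph above: the more complex trigger can be expressed as a simple predicate on the 3x3 corner. The sketch below is illustrative only; the 0.05 threshold and the roughly half/half requirement come from the text, while the concrete split of the nine pixels into a "low" set and a "high" set is an arbitrary assumption.

```python
import numpy as np

THRESHOLD = 0.05  # trigger threshold from the text

# Illustrative fixed partition of the 3x3 top-left corner; any fixed split works,
# the point is that a random or naturally perturbed image almost never matches it.
LOW = [(0, 0), (0, 2), (1, 1), (2, 0)]
HIGH = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]

def trigger_active(img: np.ndarray) -> bool:
    """True iff the multi-pixel backdoor pattern is present (img is 2D in [0, 1])."""
    return (all(img[r, c] < THRESHOLD for r, c in LOW)
            and all(img[r, c] > THRESHOLD for r, c in HIGH))
```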
2,021
FOOLING A COMPLETE NEURAL NETWORK VERIFIER
SP:12c875bb1a25581a9f1e4eebfb1e1519d47ee6c7
[ "This paper proposes a paradigm which speeds up the training/inference time of GATs while not compromising too much performance. The method adopts a layerwise sampling procedure. In particular. The authors propose to sample a sub-portion of edges for each layer based on their effective resistance. Such sampling keeps the spectral similar to the original results theoretically and gives a guarantee to the performance drop." ]
The attention mechanism has demonstrated superior performance for inference over nodes in graph neural networks (GNNs); however, it results in a high computational burden during both training and inference. We propose FastGAT, a method to make attention based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph. This results in a per-epoch time that is almost linear in the number of graph nodes as opposed to quadratic. We theoretically prove that spectral sparsification preserves the features computed by the GAT model, thereby justifying our FastGAT algorithm. We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks under both inductive and transductive settings. FastGAT can dramatically reduce (up to 10x) the computational time and memory requirements, allowing the usage of attention based GNNs on large graphs.
[]
[ { "authors": [ "Ingo Althöfer", "Gautam Das", "David Dobkin", "Deborah Joseph", "José Soares" ], "title": "On sparse spanners of weighted graphs", "venue": "Discrete & Computational Geometry,", "year": 1993 }, { "authors": [ "András A. Benczúr", "David R. Karger" ], "title": "Approximating s-t minimum cuts in Õ(n2) time", "venue": "In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing,", "year": 1996 }, { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Lorenzo Livi", "Cesare Alippi" ], "title": "Hierarchical representation learning in graph neural networks with node decimation pooling", "venue": null, "year": 1910 }, { "authors": [ "Daniele Calandriello", "Ioannis Koutis", "Alessandro Lazaric", "Michal Valko" ], "title": "Improved large-scale graph learning through ridge spectral sparsification", "venue": null, "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Timothy Chu", "Yu Gao", "Richard Peng", "Sushant Sachdeva", "Saurabh Sawlani", "Junxing Wang" ], "title": "Graph sparsification, spectral sketches, and faster resistance computation, via short cycle decompositions", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 }, { "authors": [ "Talya Eden", "Shweta Jain", "Ali Pinar", "Dana Ron", "C Seshadhri" ], "title": "Provable and practical approximations for the degree distribution using sublinear graph samples", "venue": "In Proceedings of the 2018 World Wide Web Conference,", "year": 2018 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "CoRR, abs/1706.02216,", "year": 2017 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Christian Hübler", "Hans-Peter Kriegel", "Karsten Borgwardt", "Zoubin Ghahramani" ], "title": "Metropolis algorithms for representative subgraph sampling", "venue": "In 2008 Eighth IEEE International Conference on Data Mining,", "year": 2008 }, { "authors": [ "Vassilis N. Ioannidis", "Siheng Chen", "Georgios B. 
Giannakis" ], "title": "Pruned graph scattering transforms", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Boris Knyazev", "Graham W Taylor", "Mohamed Amer" ], "title": "Understanding attention and generalization in graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bohyun Lee", "Shuo Zhang", "Aleksandar Poleksic", "Lei Xie" ], "title": "Heterogeneous multi-layered network model for omics data integration and analysis", "venue": "Frontiers in Genetics,", "year": 2020 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P Kingma" ], "title": "Learning sparse neural networks through l_0 regularization", "venue": "arXiv preprint arXiv:1712.01312,", "year": 2017 }, { "authors": [ "Julian McAuley", "Rahul Pandey", "Jure Leskovec" ], "title": "Inferring networks of substitutable and complementary products", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Veeru Sadhanala", "Yu-Xiang Wang", "Ryan Tibshirani" ], "title": "Graph sparsification approaches for laplacian smoothing", "venue": "Proceedings of the 19th International Conference on Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Daniel A. Spielman", "Nikhil. 
Srivastava" ], "title": "Graph sparsification by effective resistances", "venue": "SIAM Journal on Computing,", "year": 2011 }, { "authors": [ "Daniel A Spielman", "Shang-Hua Teng" ], "title": "Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems", "venue": "In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing,", "year": 2004 }, { "authors": [ "Daniel A Spielman", "Shang-Hua Teng" ], "title": "Spectral sparsification of graphs", "venue": "SIAM Journal on Computing,", "year": 2011 }, { "authors": [ "Gabriel Taubin" ], "title": "A signal processing approach to fair surface design", "venue": "In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques,", "year": 1995 }, { "authors": [ "Kiran K Thekumparampil", "Chong Wang", "Sewoong Oh", "Li-Jia Li" ], "title": "Attention-based graph neural network for semi-supervised learning", "venue": "arXiv preprint arXiv:1803.03735,", "year": 2018 }, { "authors": [ "Zhang Xinyi", "Lihui Chen" ], "title": "Capsule graph neural network", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yang Ye", "Shihao Ji" ], "title": "Sparse graph attention networks", "venue": "arXiv preprint arXiv:1912.00552,", "year": 2019 }, { "authors": [ "Jiani Zhang", "Xingjian Shi", "Junyuan Xie", "Hao Ma", "Irwin King", "Dit-Yan Yeung" ], "title": "Gaan: Gated attention networks for learning on large and spatiotemporal graphs", "venue": "arXiv preprint arXiv:1803.07294,", "year": 2018 }, { "authors": [ "Peixiang Zhao" ], "title": "gsparsify: Graph motif based sparsification for graph clustering", "venue": "In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management,", "year": 2015 }, { "authors": [ "Cheng Zheng", "Bo Zong", "Wei Cheng", "Dongjin Song", "Jingchao Ni", "Wenchao Yu", "Haifeng Chen", "Wei Wang" ], "title": "Robust graph representation learning via neural sparsification, 2020", "venue": "URL https://openreview.net/forum?id=S1emOTNKvS", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs are efficient representations of pairwise relations, with many real-world applications including product co-purchasing network ((McAuley et al., 2015)), co-author network ((Hamilton et al., 2017b)), etc. Graph neural networks (GNN) have become popular as a tool for inference from graph based data. By leveraging the geometric structure of the graph, GNNs learn improved representations of the graph nodes and edges that can lead to better performance in various inference tasks ((Kipf & Welling, 2016; Hamilton et al., 2017a; Veličković et al., 2018)). More recently, the attention mechanism has demonstrated superior performance for inference over nodes in GNNs ((Veličković et al., 2018; Xinyi & Chen, 2019; Thekumparampil et al., 2018; Lee et al., 2020; Bianchi et al., 2019; Knyazev et al., 2019)). However, attention based GNNs suffer from huge computational cost. This may hinder the applicability of the attention mechanism to large graphs.\nGNNs generally rely on graph convolution operations. For a graph G with N nodes, graph convolution with a kernel gw : R! R is defined as\ngw ? h = Ugw(⇤)U > h (1)\nwhere U is the matrix of eigenvectors and ⇤ is the diagonal matrix of the eigenvalues of the normalized graph Laplacian matrix defined as\nLnorm = I D 1/2AD 1/2, (2)\nwith D and A being the degree matrix and the adjacency matrix of the graph, and gw is applied elementwise. Since computing U and ⇤ can be very expensive (O(N3)), most GNNs use an approximation of the graph convolution operator. For example, in graph convolution networks (GCN) (Kipf & Welling, 2016), node features are updated by computing averages as a first order approximation of Eq.equation 1 over the neighbors of the nodes. A single neural network layer is defined as:\nH (l+1) GCN =\n⇣ eD 1/2 eA eD 1/2H(l)W (l) ⌘ , (3)\nwhere H(l) and W (l) are the activations and the weight matrix at the lth layer respectively and eA = A+ I and eD is the degree matrix of eA.\nAttention based GNNs add another layer of complexity: they compute pairwise attention coefficients between all connected nodes. This process can significantly increase the computational burden,\nespecially on large graphs. Approaches to speed up GNNs were proposed in (Chen et al., 2018; Hamilton et al., 2017a). However, these sampling and aggregation based methods were designed for simple GCNs and are not applicable to attention based GNNs. There has also been works in inducing sparsity in attention based GNNs (Ye & Ji, 2019; Zheng et al., 2020), but they focus on addressing potential overfitting of attention based models rather than scalability.\nIn this paper, we propose Fast Graph Attention neTwork (FastGAT), an edge-sampling based method that leverages effective resistances of edges to make attention based GNNs lightweight. The effective resistance measures importance of the edges in terms of preserving the graph connectivity. FastGAT uses this measure to prune the input graph and generate a randomized subgraph with far fewer edges. Such a procedure preserves the spectral features of a graph, hence retaining the information that the attention based GNNs need. At the same time, the graph is amenable to more complex but computationally intensive models such as attention GNNs. With the sampled subgraph as their inputs, the attention based GNNs enjoy much smaller computational complexity. Note that FastGAT is applicable to all attention based GNNs. 
In this paper, we mostly focus on the Graph Attention NeTwork model (GAT) proposed by (Veličković et al., 2018). However, we also show FastGAT is generalizable to two other attention based GNNs, namely the cosine similarity based approach (Thekumparampil et al., 2018) and Gated Attention Networks (Zhang et al., 2018).\nWe note that Graph Attention Networks can be re-interpreted as convolution based GNNs. We show this explicitly in the Appendix. Based on this re-interpretation, we theoretically prove that spectral sparsification preserves the feature representations computed by the GAT model. We believe this interpretation also opens up interesting connections between sparsifying state transition matrices of random walks and speeding up computations in GNNs.\nThe contributions of our paper are as outlined below:\n• We propose FastGAT, a method that uses effective resistance based spectral graph sparsification to accelerate attention GNNs in both inductive and transductive learning tasks. The rapid subsampling and the spectrum preserving property of FastGAT help attention GNNs retain their accuracy advantages and become computationally light.\n• We provide a theoretical justification for using spectral sparsification in the context of attention based GNNs by proving that spectral sparsification preserves the features computed by GNNs.\n• FastGAT outperforms state-of-the-art algorithms across a variety of datasets under both transductive and inductive settings in terms of computation, achieving a speedup of up to 10x in training and inference time. On larger datasets such as Reddit, the standard GAT model runs out of memory, whereas FastGAT achieves an F1 score of 0.93 with a per-epoch training time of 7.73 seconds.\n• Further, FastGAT is generalizable to other attention based GNNs such as the cosine similarity based attention (Thekumparampil et al., 2018) and the Gated Attention Network (Zhang et al., 2018)." }, { "heading": "2 RELATED WORK", "text": "Accelerating graph based inference has drawn increasing interest. Two methods proposed in (Chen et al., 2018) (FastGCN) and (Huang et al., 2018) speed up GCNs by using importance sampling to sample a subset of nodes per layer during training. Similarly, GraphSAGE (Hamilton et al., 2017a) also proposes an edge sampling and aggregation based method for inductive learning based tasks. All of the above works use simple aggregation and target simple GCNs, while our work focuses on more recent attention based GNNs such as (Veličković et al., 2018). We are able to take advantage of the attention mechanism, while still being computationally efficient.\nGraph sparsification aims to approximate a given graph by a graph with fewer edges for efficient computation. Depending on the final goal, there are cut sparsifiers ((Benczúr & Karger, 1996)), pairwise distance preserving sparsifiers ((Althöfer et al., 1993)) and spectral sparsifiers ((Spielman & Teng, 2004; Spielman & Srivastava, 2011)), among others ((Zhao, 2015; Calandriello et al., 2018; Hübler et al., 2008; Eden et al., 2018; Sadhanala et al., 2016)). In this work, we use spectral sparsification to choose a randomized subgraph while preserving spectral properties. Apart from providing the strongest guarantees in preserving graph structure ((Chu et al., 2018)), spectral sparsifiers align well with GNNs due to their connection to spectral graph convolutions.\nGraph sparsification for neural networks has been studied recently ((Ye & Ji, 2019; Zheng et al., 2020; Ioannidis et al., 2020; Louizos et al., 2017)).
However, their main goal is to alleviate overfitting in GNNs, not to reduce the training time. They still require learning attention coefficients and binary gate values for all edges in the graph, hence not leading to any computational or memory benefit. In contrast, FastGAT uses a fast subsampling procedure, thus resulting in a drastic improvement in training and inference time. It is also highly stable in terms of training and inference." }, { "heading": "3 FASTGAT: ACCELERATING GRAPH ATTENTION NETWORKS VIA EDGE SAMPLING", "text": "" }, { "heading": "3.1 THE FASTGAT ALGORITHM", "text": "Let G(E, V) be a graph with N nodes and M edges. An attention based GNN computes attention coefficients α_{i,j} for every pair of connected nodes i, j ∈ V in every layer ℓ. The α_{i,j}'s are then used as averaging weights to compute the layer-wise feature updates. In the original GAT formulation, the attention coefficients are\n$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}(a^\top [W h_i \,\|\, W h_j])\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}(a^\top [W h_i \,\|\, W h_k])\right)}$, (4)\nwhere the $h_i$'s are the input node features to the layer, $W$ and $a$ are linear mappings that are learnt, $\mathcal{N}_i$ denotes the set of neighbors of node $i$, and $\|$ denotes concatenation. With the $\alpha_{ij}$'s as defined above, the node-$i$ output embedding of a GAT layer is\n$h'_i = \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij} W h_j\Big)$. (5)\nFor multi-head attention, the coefficients are computed independently in each attention head with head-dependent matrices $W$ and attention vector $a$. Note that the computational burden in GATs arises directly from computing the $\alpha_{i,j}$'s in every layer, every attention head and every forward pass during training.\nGoal: Our objective is to achieve performance equivalent to that of full graph attention networks (GAT), but with only a fraction of the original computational complexity. This computational saving is achieved by reducing the number of attention computations.\nIdea: We propose to use edge-sampling functions that sparsify graphs by removing nonessential edges. This leads to a direct reduction in the number of attention coefficients to be computed, hence reducing the burden. Choosing the sampling function is crucial for retaining the graph connectivity.\nLet EdgeSample(E, A, q) denote a randomized sampling function that, given an edge set E, adjacency matrix A and a number of edges to be sampled q, returns a subset of the original edge set E_s ⊂ E with |E_s| = q. Our algorithm then uses this function to sparsify the graph in every layer and attention head. Following this, the attention coefficients are computed only for the remaining edges. A more detailed description is given in Algorithm 1. In every layer and attention head, a randomized subgraph with q ≪ M edges is generated and the attention coefficients are computed only for this subset of edges. We use a specialized distribution that depends on the contribution of each edge to the graph connectivity. We provide further details in Section 3.2.\nNote that in the general description below, the attention coefficients themselves are used as weights for sparsification and the reweighted attention coefficients are used to compute the feature update. Doing so helps in the theoretical analysis of the algorithm. However, in practice, we replace this expensive procedure with a one-time sampling of the graph with the original edge weights and compute the attention coefficients for only the remaining edges (see the sketch below). In particular, we use two simpler variations of FastGAT: i) FastGAT-const, where the sampled subgraph is kept constant in all the layers and attention heads, and ii) FastGAT-layer, where the subgraph is different in each layer (drawn stochastically from the original edge weights), but the same across all the attention heads within a layer.
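The following is a minimal dense sketch of Eq. (4) restricted to a given edge list; this restriction is exactly where FastGAT saves work, since fewer edges means fewer coefficients. It is an illustration under simple assumptions (a dictionary-based softmax, single head), not the paper's implementation.

```python
import numpy as np

def gat_attention(H, W, a, edges, slope=0.2):
    """Attention coefficients of Eq. (4), computed only for surviving edges.
    H: (N, D) features; W: (D, F); a: (2F,); edges: iterable of (i, j) pairs,
    where j ranges over the (sparsified) neighborhood of i."""
    Z = H @ W
    raw = {}
    for i, j in edges:
        e = a @ np.concatenate([Z[i], Z[j]])
        raw[(i, j)] = np.exp(e if e > 0 else slope * e)  # LeakyReLU, then exp
    norm = {}
    for (i, j), v in raw.items():  # softmax-normalize over each node's neighbors
        norm[i] = norm.get(i, 0.0) + v
    return {(i, j): v / norm[i] for (i, j), v in raw.items()}
```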
Algorithm 1: The FastGAT Algorithm\nInput: Graph G(V, E); number of layers L; number of attention heads K^{(ℓ)}, ℓ = 1, ..., L; initial weight matrices W^{(ℓ)}; non-linearity σ; feature matrix H ∈ R^{N×D}; randomized edge sampling function EdgeSample(·); attention function Θ(·); number of edges sampled q\nfor each layer ℓ do\n  for each attention head k ∈ {1, 2, ..., K^{(ℓ)}} do\n    Compute the attention matrix Θ_k^{(ℓ)} ∈ R^{N×N}, with Θ_k^{(ℓ)}(i, j) = θ_k(h_i^{(ℓ)}, h_j^{(ℓ)})\n    Sample a graph Θ̂_k^{(ℓ)} = EdgeSample(Θ_k^{(ℓ)}, A, q)\n    Compute H_k^{(ℓ+1)} = σ(Θ̂_k^{(ℓ)} H_k^{(ℓ)} W^{(ℓ)})\n  H^{(ℓ+1)} = ||_k H_k^{(ℓ+1)} // concatenate the output of the attention heads\nCompute the loss and update the W's // gradient based weight update" }, { "heading": "3.2 SAMPLING GRAPH EDGES USING EFFECTIVE RESISTANCES", "text": "We use a particular edge sampling function EdgeSample(·) that is motivated by the field of spectral graph sparsification. Let L represent the graph Laplacian (defined as L = D − A, where D is the degree matrix), let λ_i(L) denote the ith eigenvalue of L, and let A† denote the Moore-Penrose inverse of a matrix A.\nMotivated by the fact that GNNs are approximations of spectral graph convolutions (defined in equation 1), we aim to preserve the spectrum (or eigenstructure) of the graph. Formally, let L_G and L_H be the Laplacian matrices of the original graph G and the sparsified graph H. Then, spectral graph sparsification ensures that the spectral content of H is similar to that of G:\n$(1 - \epsilon)\,\lambda_i(L_G) \le \lambda_i(L_H) \le (1 + \epsilon)\,\lambda_i(L_G), \quad \forall i$ (6)\nwhere ε is any desired threshold. (Spielman & Srivastava, 2011) showed how to achieve this by using a distribution proportional to the effective resistances of the edges.\nDefinition 1 (Effective Resistance) (Spielman & Srivastava, 2011) The effective resistance between any two nodes of a graph can be defined as the potential difference induced across the two nodes, when a unit current is injected at one node and extracted from the other node. Mathematically, it is defined as below.\n$R_e(u, v) = b_e^\top L^{\dagger} b_e$,\nwhere $b_e = \delta_u - \delta_v$ ($\delta_l$ is a standard basis vector with 1 in the lth position) and $L^{\dagger}$ is the pseudo-inverse of the graph Laplacian matrix.\nThe effective resistance measures the importance of an edge to the graph structure. For example, the removal of an edge with high effective resistance can harm the graph connectivity. The particular function EdgeSample we use in FastGAT is described in Algorithm 2.\nAlgorithm 2: Effective resistance based EdgeSample function (Spielman & Srivastava, 2011)\nInput: Graph G(E_G, V); edge weights w_e for e ∈ E; 0 < ε < 1\nFor each edge e(u, v), compute R_e(u, v) using the fast algorithm in (Spielman & Srivastava, 2011)\nSet q = max(M, int(0.16 N log N / ε²)), H = Graph(E_H = ∅, V)\nfor i ← 1 to q do\n  Sample an edge e_i from the distribution p_e proportional to w_e R_e\n  if e_i ∈ E_H then add w_e/(q p_e) to its weight // increase the weight of an existing edge\n  else add e_i to E_H with weight w_e/(q p_e) // add the edge for the first time\nH = Graph(E_H, V)\nThe effective-resistance based edge-sampling function is described in Algorithm 2. For a desired value of ε, the algorithm samples q = O(N log N/ε²) edges such that equation 6 is satisfied.
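A minimal sketch of Algorithm 2 is given below. The effective resistances are computed exactly via the Laplacian pseudo-inverse, a dense O(N³) approach suitable only for small graphs; the paper instead relies on the near-linear-time approximation of Spielman & Srivastava (2011). Function names are our choices.

```python
import numpy as np

def effective_resistances(A):
    """R_e(u, v) = b_e^T L^+ b_e via the Laplacian pseudo-inverse.
    Assumes a symmetric adjacency A with zero diagonal."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    us, vs = np.nonzero(np.triu(A, k=1))          # undirected edges (u < v)
    R = Lp[us, us] + Lp[vs, vs] - 2 * Lp[us, vs]
    return np.stack([us, vs], axis=1), R

def edge_sample(A, eps, seed=0):
    """Algorithm 2 sketch: draw q edges with replacement, p_e ~ w_e * R_e,
    and reweight each kept edge by w_e / (q * p_e)."""
    rng = np.random.default_rng(seed)
    edges, R = effective_resistances(A)
    w = A[edges[:, 0], edges[:, 1]]
    p = w * R / np.sum(w * R)
    N = A.shape[0]
    q = int(0.16 * N * np.log(N) / eps**2)        # sample-size rule from the paper
    H = {}
    for k in rng.choice(len(edges), size=q, p=p):
        e = tuple(edges[k])
        H[e] = H.get(e, 0.0) + w[k] / (q * p[k])
    return H                                      # sparsified weighted edge set
```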
Choosing ε. As shown in Algorithm 2, it requires setting a pruning parameter ε, which determines the quality of the approximation after sparsification and also determines the number of edges retained, q. The choice of ε is a design parameter at the discretion of the user. To remove the burden of choosing ε, we also provide an adaptive algorithm in Section B.4 in the appendix.\nComplexity. The sample complexity q in Algorithm 2 directly determines the final complexity. If q = O(N log N/ε²), then the spectral approximation in equation 6 can be achieved (Spielman & Srivastava, 2011). Note that this results in a number of edges that is almost linear in the number of nodes, as compared to quadratic in the case of dense graphs. The complexity of computing R_e for all edges (up to a constant factor approximation) is O(M log N) time, where M is the number of edges (Spielman & Srivastava, 2011). While we describe the algorithm in detail in the appendix (Section B.3), it uses a combination of fast solutions to Laplacian based linear systems and the Johnson-Lindenstrauss Lemma1. This is almost linear in the number of edges, and hence much smaller than the complexity of computing attention coefficients in every layer and forward pass of GNNs. Another important point is that the computation of the R_e's is a one-time cost. Unlike graph attention coefficients, we do not need to recompute the effective resistances in every training iteration. Hence, once sparsified, the same graph can be used in all subsequent experiments. Further, since each edge is sampled independently, the edge sampling process itself can be parallelized." }, { "heading": "4 THEORETICAL ANALYSIS OF FASTGAT", "text": "In this section we provide the theoretical analysis of FastGAT. Although we used the sampling strategy provided in (Spielman & Srivastava, 2011), their work addresses the preservation of only the eigenvalues of L. However, we are interested in the following question: can preserving the spectral structure of the graph lead to good performance under the GAT model? To answer this question, we give an upper bound on the error between the feature updates computed by a single layer of the GAT model using the full graph and a sparsified graph produced by FastGAT.\nSpectral sparsification preserves the spectrum of the underlying graph. This then hints that neural network computations that utilize spectral convolutions can be approximated by using sparser graphs. We first show that this is true in a layer-wise sense for the GCN (Kipf & Welling, 2016) model and then show a similar result for the GAT model as well. Below, we use ReLU to denote the standard Rectified Linear Unit and ELU to denote the Exponential Linear Unit.\nTheorem 1 At any layer l of a GCN model with input features H^{(l)} ∈ R^{N×D} and weight matrix W^{(l)} ∈ R^{D×F}, if the element-wise non-linearity function is either the ReLU or the ELU function, the features Ĥ_f and Ĥ_s computed using equation 3 with the full and a layer dependent spectrally sparsified graph obey\n$\|\hat{H}_f - \hat{H}_s\|_F \le 8\epsilon\, \|H^{(l)} W^{(l)}\|_F$. (7)\nwhere L_norm is as defined in equation 2 and ‖·‖ denotes the spectral norm.\nIn our next result, we show a similar upper bound on the features computed with the full and the sparsified graphs using the GAT model.\nTheorem 2 At any layer l of GAT with input features H^{(ℓ)} ∈ R^{N×D} and weight matrix W^{(l)} ∈ R^{D×F}, let the α_{ij}'s be the attention coefficients in that layer. Let the non-linearity used be either the ReLU or the ELU function.
Then, the features Ĥ_f and Ĥ_s computed using equation 5 with the full and a layer dependent spectrally sparsified graph obey\n$\|\hat{H}_f - \hat{H}_s\|_F \le 12\epsilon\, \|H^{(l)} W^{(l)}\|_F$ (8)\nwhere ‖·‖ denotes the spectral norm of the matrix.\n1https://en.wikipedia.org/wiki/Johnson-Lindenstrauss_lemma\nTheorem 2 shows that our proposed layer-wise spectral sparsification leads to good approximations of the latent embeddings Ĥ for the GAT model as well. The guarantees given above assume a layer-wise sparsification that is updated based on the attention coefficients. To circumvent the associated computational burden, we use the simpler versions such as FastGAT-const and always use the original weight matrix to sparsify the graph in each layer. In the experiment section, we show that such a relaxation by a one-time spectral sparsification does not lead to any degradation in performance.\nApproximation of weight matrices. Theorems 1 and 2 provide an upper bound on the feature updates obtained using the full and sparsified graphs. In practice, we observe an even stronger notion of approximation between GAT and FastGAT: the weight matrices of the two models post training are good approximations of each other. We report this observation in Section A.4 in the appendix. We show that the error between the learned matrices is small and proportional to the value of ε itself." }, { "heading": "5 EXPERIMENTS", "text": "Datasets. We evaluated FastGAT on large graph datasets using semi-supervised node classification tasks. This is a standard task to evaluate GNNs, as done in (Veličković et al., 2018; Hamilton et al., 2017a; Kipf & Welling, 2016). Datasets are sourced from the DGLGraph library (DGL). Their statistics are provided in Table 1. We evaluate on both transductive and inductive tasks. The PPI dataset serves as a standard benchmark for inductive classification and the rest of the datasets for transductive classification. Further details about the datasets, including the train/validation/test splits, are given in the appendix (Section B.1). We also evaluated on smaller datasets including Cora, Citeseer and Pubmed, but present their results in the appendix (Section B.2).\nBaselines. Transductive learning: We compare FastGAT with the following baseline methods: (1) the original graph attention networks (GAT) (Veličković et al., 2018), (2) SparseGAT (Ye & Ji, 2019), which learns edge coefficients to sparsify the graph, (3) random subsampling of edges, and (4) FastGCN (Chen et al., 2018), which is also designed for GNN speedup. Note that we compare against SparseGAT in a transductive setting, whereas the original paper (Ye & Ji, 2019) uses an inductive setting. We thus demonstrate that FastGAT can handle the full input graph, unlike any previous attention based baseline method. Inductive learning: For this task, we compare with both GAT (Veličković et al., 2018) and GraphSAGE (Hamilton et al., 2017a). More importantly, for both inductive and transductive tasks, we show that a uniform random subsampling of edges results in a drop in performance, whereas FastGAT does not.\nEvaluation setup and model implementation details are provided in Section B in the appendix." }, { "heading": "Q1. FASTGAT PROVIDES FASTER TRAINING WITH STATE-OF-THE-ART ACCURACY.", "text": "Our first experiment studies the effect of FastGAT on the accuracy and time performance of attention based GNNs in node classification.
We sample q = int(0.16 N log N / ε²) edges from the distribution p_e with replacement, as described in Section 3.2.\nTransductive learning: In this setting, we assume that the features of all the nodes in the graph, including the train, validation and test nodes, are available, but only the labels of the training nodes are available during training, similar to (Veličković et al., 2018). First, we provide a direct comparison between FastGAT and the original GAT model and report the results in Table 2. As can be observed from the results, FastGAT achieves the same test accuracy as the full GAT across all datasets, while being dramatically faster: we are able to achieve up to a 5x speedup on GPU (10x on CPU). We then compare FastGAT with the following baselines: SparseGAT (Ye & Ji, 2019), random subsampling of edges and FastGCN (Chen et al., 2018) in Table 3. SparseGAT uses the attention mechanism to learn embeddings and a sparsifying mask on the edges. We compare the training time per epoch for the baseline methods against FastGAT in Figure 1. The results show that FastGAT matches state-of-the-art accuracy (F1-score), while being much faster. Random subsampling of edges, in contrast, leads to a model that is as fast as ours, but with a degradation in accuracy. FastGAT is also faster than FastGCN on some large datasets, even though FastGCN does not compute any attention coefficients. Overall, the classification accuracy of FastGAT remains the same (or sometimes even improves) compared to standard GAT, while the training time reduces drastically. This is most evident in the case of the Reddit dataset, where the vanilla GAT model runs out of memory on a machine with 128GB RAM and a Tesla P100 GPU when computing attention coefficients over 57 million edges, while FastGAT can train with 10 seconds per epoch.\nInductive learning. FastGAT can also be applied to the inductive learning framework, where features of only the training nodes are available and training is performed using the subgraph consisting of the training nodes.\nIn Tables 2 and 3, FastGCN-400 denotes that we sample 400 nodes in every forward pass, as described in (Chen et al., 2018) (similarly, in FastGCN-800, we sample 800 nodes). FastGAT-0.5 denotes that we use ε = 0.5. GAT-rand-0.5 uses random subsampling of edges, but keeps the same number of edges as FastGAT-0.5.\nTo show the utility of FastGAT in such a setting, we use the Protein-Protein Interaction (PPI) dataset ((Zitnik & Leskovec, 2017)). Our model parameters are the same as in (Veličković et al., 2018), but we sparsify each of the 20 training graphs before training on them. The other 4 graphs are used for validation and testing (2 each). We use ε = 0.25, 0.5 in our experiments, since the PPI dataset is smaller than the other datasets. We report the experimental results in Table 4. FastGAT clearly outperforms baselines like GraphSAGE and uniform subsampling of edges. While it has the same accuracy performance as the GAT model (which is expected), it has a much smaller training time, as reported in Table 4." }, { "heading": "Q2. FASTGAT CAN BE APPLIED TO OTHER ATTENTION BASED GRAPH INFERENCE METHODS.", "text": "Finally, we study if FastGAT is sensitive to the particular formulation of the attention function. There have been alternative formulations proposed to capture pairwise similarity.
For example, (Thekumparampil et al., 2018) proposes a cosine similarity based approach, where the attention coefficient of an edge is defined as in equation 9,\n$\alpha^{(\ell)}_{ij} = \mathrm{softmax}_{j \in \mathcal{N}_i}\left(\beta^{(\ell)} \cos(h^{(\ell)}_i, h^{(\ell)}_j)\right)$ (9)\nwhere $\beta^{(\ell)}$ is a layer-wise learnable parameter and $\cos(x, y) = x^\top y / (\|x\| \|y\|)$. Another definition is proposed in (Zhang et al., 2018) (GaAN: Gated Attention Networks), which defines attention as in equation 10,\n$\alpha^{(l)}_{ij} = \mathrm{softmax}_{j \in \mathcal{N}_i}\left(\langle \mathrm{FC}_{src}(h^{\ell}_i), \mathrm{FC}_{dst}(h^{\ell}_j) \rangle\right)$ (10)\nwhere FC_src and FC_dst are 2-layered fully connected neural networks.\nWe performed similar experiments on these attention definitions. Table 5 confirms that FastGAT generalizes to different attention functions. Note that the variability in accuracy performance across Tables 2 and 5 comes from the different definitions of the attention functions and not from FastGAT. Our goal is to show that given a specific GAT model, FastGAT can achieve similar accuracy performance as that model, but in a much shorter time." }, { "heading": "6 CONCLUSION", "text": "In this paper, we introduced FastGAT, a method to make attention based GNNs lightweight by using spectral sparsification. We theoretically justified our FastGAT algorithm. FastGAT can significantly reduce the computational time across multiple large real world graph datasets while attaining state-of-the-art performance." } ]
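As an aside on Eq. (9): a minimal sketch of the cosine-similarity attention is given below. FastGAT's only change is that the neighbor sets come from the sparsified graph; the function shape and names are illustrative assumptions.

```python
import numpy as np

def cosine_attention(H, beta, neighbors):
    """Eq. (9) sketch: alpha_ij = softmax_j(beta * cos(h_i, h_j)) over each
    node's neighbor set; beta is the layer-wise learnable scalar.
    H: (N, D) features with nonzero rows; neighbors: {i: list of j}."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # row-normalize once
    alpha = {}
    for i, nbrs in neighbors.items():
        s = beta * (Hn[nbrs] @ Hn[i])                  # cosine similarities
        e = np.exp(s - s.max())                        # numerically stable softmax
        alpha[i] = dict(zip(nbrs, e / e.sum()))
    return alpha
```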
2,020
FAST GRAPH ATTENTION NETWORKS USING EFFECTIVE RESISTANCE BASED GRAPH SPARSIFICATION
SP:5207e34f58574e18c30192be6e2312863129fccd
[ "This paper proposes cut-and-paste neural rendering that allows to insert objects into a target scene in a plausible manner, i.e., in terms of shading plausibility. At the core of the approach is a deep image prior that allows to match the shading and albedo fields based on shading and albedo consistency losses. A normal estimation network that is trained based on synthetic data is used to further inform shading estimation. The approach is interesting and shows plausible results." ]
Cut-and-paste methods take an object from one image and insert it into another. Doing so often results in unrealistic looking images because the inserted object's shading is inconsistent with the target scene's shading. Existing reshading methods require a geometric and physical model of the inserted object, which is then rendered using environment parameters. Accurately constructing such a model from only a single image is beyond the current understanding of computer vision. We describe an alternative procedure, cut-and-paste neural rendering, to render the inserted fragment's shading field consistent with the target scene. We use a Deep Image Prior (DIP) as a neural renderer trained to render an image with consistent image decomposition inferences. The resulting rendering from DIP should have an albedo consistent with the cut-and-paste albedo; it should have a shading field that, outside the inserted fragment, is the same as the target scene's shading field; and the cut-and-paste surface normals should be consistent with the final rendering's shading field. The result is a simple procedure that produces convincing and realistic shading. Moreover, our procedure does not require rendered images or image decompositions of real images, or any form of labeled annotations, during training. In fact, our only use of simulated ground truth is our use of a pre-trained normal estimator. Qualitative results are strong, supported by a user study comparing against a state-of-the-art image harmonization baseline.
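The abstract above names three consistency requirements; the following is a hypothetical sketch of how such an objective could be assembled. The function names, the L2 form of each term, and the weighting are our assumptions, not the paper's implementation.

```python
import torch

def cut_and_paste_loss(render, albedo_cp, shading_tgt, normals_cp, mask,
                       decompose, shade_from_normals, w=(1.0, 1.0, 1.0)):
    """Hypothetical composite objective for the DIP renderer described above.
    decompose: render -> (albedo, shading); shade_from_normals: normals -> shading.
    mask is 1 on the pasted fragment and 0 elsewhere; all weights are assumptions."""
    albedo, shading = decompose(render)
    l_albedo = ((albedo - albedo_cp) ** 2).mean()                     # albedo consistency
    l_shading = (((1 - mask) * (shading - shading_tgt)) ** 2).mean()  # outside the fragment
    l_normal = ((mask * (shading - shade_from_normals(normals_cp))) ** 2).mean()
    return w[0] * l_albedo + w[1] * l_shading + w[2] * l_normal
```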
[]
[ { "authors": [ "Aayush Bansal", "Yaser Sheikh", "Deva Ramanan" ], "title": "Shapes and context: In-the-wild image synthesis & manipulation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sean Bell", "Kavita Bala", "Noah Snavely" ], "title": "Intrinsic images in the wild", "venue": "ACM Transactions on Graphics (TOG),", "year": 2014 }, { "authors": [ "Sai Bi", "Xiaoguang Han", "Yizhou Yu" ], "title": "An l 1 image transform for edge-preserving smoothing and scene-level intrinsic decomposition", "venue": "ACM Transactions on Graphics (TOG),", "year": 2015 }, { "authors": [ "Patrick Cavanagh", "George A Alvarez" ], "title": "Tracking multiple targets with multifocal attention", "venue": "Trends in cognitive sciences,", "year": 2005 }, { "authors": [ "Wenyan Cong", "Jianfu Zhang", "Li Niu", "Liu Liu", "Zhixin Ling", "Weiyuan Li", "Liqing Zhang" ], "title": "Dovenet: Deep image harmonization via domain verification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Paul Debevec" ], "title": "Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography", "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques,", "year": 1998 }, { "authors": [ "Aysegul Dundar", "Karan Sapra", "Guilin Liu", "Andrew Tao", "Bryan Catanzaro" ], "title": "Panoptic-based image synthesis", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Debidatta Dwibedi", "Ishan Misra", "Martial Hebert" ], "title": "Cut, paste and learn: Surprisingly easy synthesis for instance detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Qingnan Fan", "Jiaolong Yang", "Gang Hua", "Baoquan Chen", "David Wipf" ], "title": "Revisiting deep intrinsic image decompositions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Marc-André Gardner", "Yannick Hold-Geoffroy", "Kalyan Sunkavalli", "Christian Gagné", "Jean-François Lalonde" ], "title": "Deep parametric indoor lighting estimation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mathieu Garon", "Kalyan Sunkavalli", "Sunil Hadap", "Nathan Carr", "Jean-François Lalonde" ], "title": "Fast spatiallyvarying indoor lighting estimation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Isabel Gauthier", "Pepper Williams", "Michael J Tarr", "James Tanaka" ], "title": "Training ‘greeble’experts: a framework for studying expert object recognition processes", "venue": "Vision research,", "year": 1998 }, { "authors": [ "Yannick Hold-Geoffroy", "Akshaya Athawale", "Jean-François Lalonde" ], "title": "Deep sky modeling for single image outdoor lighting estimation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hamid Izadinia", "Steven M Seitz" ], "title": "Scene recomposition by learning-based icp", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Michael Janner", "Jiajun Wu", 
"Tejas D Kulkarni", "Ilker Yildirim", "Josh Tenenbaum" ], "title": "Self-supervised intrinsic image decomposition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiaya Jia", "Jian Sun", "Chi-Keung Tang", "Heung-Yeung Shum" ], "title": "Drag-and-drop pasting", "venue": "ACM Transactions on graphics (TOG),", "year": 2006 }, { "authors": [ "Kevin Karsch", "Varsha Hedau", "David Forsyth", "Derek Hoiem" ], "title": "Rendering synthetic objects into legacy photographs", "venue": "ACM Transactions on Graphics (TOG),", "year": 2011 }, { "authors": [ "Kevin Karsch", "Kalyan Sunkavalli", "Sunil Hadap", "Nathan Carr", "Hailin Jin", "Rafael Fonte", "Michael Sittig", "David Forsyth" ], "title": "Automatic scene inference for 3d object compositing", "venue": "ACM Transactions on Graphics (TOG),", "year": 2014 }, { "authors": [ "Jean-François Lalonde", "Derek Hoiem", "Alexei A Efros", "Carsten Rother", "John Winn", "Antonio Criminisi" ], "title": "Photo clip art", "venue": "ACM transactions on graphics (TOG),", "year": 2007 }, { "authors": [ "Edwin H Land" ], "title": "Color vision and the natural image", "venue": "i. Proceedings of the National Academy of Sciences of the United States of America,", "year": 1959 }, { "authors": [ "Edwin H Land" ], "title": "Color vision and the natural image part ii", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1959 }, { "authors": [ "Edwin H Land" ], "title": "The retinex theory of color vision", "venue": "Scientific american,", "year": 1977 }, { "authors": [ "Edwin H Land", "John J McCann" ], "title": "Lightness and retinex theory", "venue": null, "year": 1971 }, { "authors": [ "Zhengqi Li", "Noah Snavely" ], "title": "Cgintrinsics: Better intrinsic image decomposition through physically-based rendering", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhengqin Li", "Zexiang Xu", "Ravi Ramamoorthi", "Kalyan Sunkavalli", "Manmohan Chandraker" ], "title": "Learning to reconstruct shape and spatially-varying reflectance from a single image", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Zhengqin Li", "Mohammad Shafiei", "Ravi Ramamoorthi", "Kalyan Sunkavalli", "Manmohan Chandraker" ], "title": "Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and svbrdf from a single", "venue": null, "year": 2020 }, { "authors": [ "Zicheng Liao", "Ali Farhadi", "Yang Wang", "Ian Endres", "David Forsyth" ], "title": "Building a dictionary of image fragments", "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Zicheng Liao", "Kevin Karsch", "David Forsyth" ], "title": "An approximate shading model for object relighting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Zicheng Liao", "Kevin Karsch", "Hongyi Zhang", "David Forsyth" ], "title": "An approximate shading model with detail decomposition for object relighting", "venue": null, "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision. 
Springer,", "year": 2014 }, { "authors": [ "Guilin Liu", "Fitsum A Reda", "Kevin J Shih", "Ting-Chun Wang", "Andrew Tao", "Bryan Catanzaro" ], "title": "Image inpainting for irregular holes using partial convolutions", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yunfei Liu", "Yu Li", "Shaodi You", "Feng Lu" ], "title": "Unsupervised learning for intrinsic image decomposition from a single image", "venue": null, "year": 2020 }, { "authors": [ "Ben Mildenhall", "Pratul P Srinivasan", "Matthew Tancik", "Jonathan T Barron", "Ravi Ramamoorthi", "Ren Ng" ], "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "venue": null, "year": 2003 }, { "authors": [ "Anish Mittal", "Rajiv Soundararajan", "Alan C Bovik" ], "title": "Making a “completely blind” image quality analyzer", "venue": "IEEE Signal processing letters,", "year": 2012 }, { "authors": [ "Vladimir Nekrasov", "Thanuja Dharmasiri", "Andrew Spek", "Tom Drummond", "Chunhua Shen", "Ian Reid" ], "title": "Realtime joint semantic segmentation and depth estimation using asymmetric annotations", "venue": "In 2019 International Conference on Robotics and Automation (ICRA)", "year": 2019 }, { "authors": [ "Thomas Nestmeyer", "Jean-François Lalonde", "Iain Matthews" ], "title": "Epic Games, Andreas Lehrmann, and AI Borealis. Learning physics-guided face relighting under directional light", "venue": null, "year": 2020 }, { "authors": [ "Patrick Pérez", "Michel Gangnet", "Andrew Blake" ], "title": "Poisson image editing", "venue": "In ACM SIGGRAPH", "year": 2003 }, { "authors": [ "Ken Perlin" ], "title": "An image synthesizer", "venue": "ACM Siggraph Computer Graphics,", "year": 1985 }, { "authors": [ "Vilayanur S Ramachandran" ], "title": "Perceiving shape from shading", "venue": "Scientific American,", "year": 1988 }, { "authors": [ "René Ranftl", "Katrin Lasinger", "David Hafner", "Konrad Schindler", "Vladlen Koltun" ], "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2020 }, { "authors": [ "Edgar Schönfeld", "Bernt Schiele", "Anna Khoreva" ], "title": "A u-net based discriminator for generative adversarial networks", "venue": "arXiv preprint arXiv:2002.12655,", "year": 2020 }, { "authors": [ "Soumyadip Sengupta", "Jinwei Gu", "Kihwan Kim", "Guilin Liu", "David W Jacobs", "Jan Kautz" ], "title": "Neural inverse rendering of an indoor scene from a single image", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Meng-Li Shih", "Shih-Yang Su", "Johannes Kopf", "Jia-Bin Huang" ], "title": "3d photography using context-aware layered depth inpainting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Shuran Song", "Thomas Funkhouser" ], "title": "Neural illumination: Lighting prediction for indoor environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Pratul P Srinivasan", "Ben Mildenhall", "Matthew Tancik", "Jonathan T Barron", "Richard Tucker", "Noah Snavely" ], "title": "Lighthouse: Predicting lighting volumes for spatially-coherent", "venue": null, "year": 2020 }, { "authors": [ "Tiancheng Sun", "Jonathan T Barron", "Yun-Ta Tsai", "Zexiang Xu", "Xueming Yu", "Graham Fyffe", 
"Christoph Rhemann", "Jay Busch", "Paul Debevec", "Ravi Ramamoorthi" ], "title": "Single image portrait relighting", "venue": "ACM Transactions on Graphics (Proceedings SIGGRAPH),", "year": 2019 }, { "authors": [ "Kalyan Sunkavalli", "Micah K Johnson", "Wojciech Matusik", "Hanspeter Pfister" ], "title": "Multi-scale image harmonization", "venue": "ACM Transactions on Graphics (TOG),", "year": 2010 }, { "authors": [ "Yi-Hsuan Tsai", "Xiaohui Shen", "Zhe Lin", "Kalyan Sunkavalli", "Xin Lu", "Ming-Hsuan Yang" ], "title": "Deep image harmonization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sheng-Yu Wang", "Oliver Wang", "Richard Zhang", "Andrew Owens", "Alexei A Efros" ], "title": "Cnn-generated images are surprisingly easy to spot...for now", "venue": null, "year": 2020 }, { "authors": [ "Ye Yu", "William AP Smith" ], "title": "Inverserendernet: Learning single image inverse rendering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Fangneng Zhan", "Shijian Lu", "Changgong Zhang", "Feiying Ma", "Xuansong Xie" ], "title": "Adversarial image composition with auxiliary illumination", "venue": "arXiv preprint arXiv:2009.08255,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Hang Zhao", "Xavier Puig", "Sanja Fidler", "Adela Barriuso", "Antonio Torralba" ], "title": "Scene parsing through ade20k dataset", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Hao Zhou", "Sunil Hadap", "Kalyan Sunkavalli", "David W Jacobs" ], "title": "Deep single-image portrait relighting", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 } ]
[ { "heading": null, "text": "Cut-and-paste methods take an object from one image and insert it into another. Doing so often results in unrealistic looking images because the inserted object’s shading is inconsistent with the target scene’s shading. Existing reshading methods require a geometric and physical model of the inserted object, which is then rendered using environment parameters. Accurately constructing such a model only from a single image is beyond the current understanding of computer vision.\nWe describe an alternative procedure – cut-and-paste neural rendering, to render the inserted fragment’s shading field consistent with the target scene. We use a Deep Image Prior (DIP) as a neural renderer trained to render an image with consistent image decomposition inferences. The resulting rendering from DIP should have an albedo consistent with cut-and-paste albedo; it should have a shading field that, outside the inserted fragment, is the same as the target scene’s shading field; and cut-and-paste surface normals are consistent with the final rendering’s shading field. The result is a simple procedure that produces convincing and realistic shading. Moreover, our procedure does not require rendered images or image decomposition from real images or any form of labeled annotations in the training. In fact, our only use of simulated ground truth is our use of a pre-trained normal estimator. Qualitative results are strong, supported by a user study comparing against state-of-the-art image harmonization baseline." }, { "heading": "1 INTRODUCTION", "text": "Cut-and-Paste rendering involves creating a new image by cutting fragments out of one or more source images and pasting them into a target image; the idea originates with Lalonde et al. (2007). Results are often unrealistic, because of the difference in illumination between the source and target images. But the procedure is useful to artists, and there is consistent evidence that such procedures can be used to train detectors (Liao et al., 2012; Dwibedi et al., 2017). When the geometry and material of the inserted object are known, it is enough to infer an illumination model from the target, render and composite. But current procedures for recovering shape and material from a single fragment simply can’t deal with most realistic fragments (think of, say, a furry cat).\nThis paper describes an alternative method, Cut-and-Paste Neural Rendering, that can render convincing composite images by adjusting the cut-and-paste images so that some simple image inferences are consistent with cut-and-paste predictions. So the albedo from the adjusted image should look like cut-and-paste albedo; the shading should look like a shading field; and the image should look like an image. A simple post-processing trick produces very high-resolution composites. Note that all our rendered images are 1024x1024 pixels resolution and are best viewed on screen. Evaluation is mostly qualitative, but we show that our method fools a recent method for detecting tampering.\nOur contribution is a method that can realistically correct shading in composite images, without requiring labeled data; our method works for matte, glossy and specular fragments without an explicit geometric or physical model; and human subjects prefer the results of our method over cut-and-paste and image harmonization." }, { "heading": "2 RELATED WORK", "text": "Object Insertion starts with Lalonde et al. (2007), who insert fragments into target images. Lalonde et al. 
(2007) control illumination problems by checking fragments for compatibility with targets; Bansal et al. (2019) do so by matching contexts. Poisson blending (Pérez et al., 2003; Jia et al., 2006) can resolve nasty boundary artifacts, but significant illumination and color mismatches will cause cross-talk between target and fragment, producing ugly results. Karsch et al. (2011) show that computer graphics (CG) objects can be convincingly inserted into inverse rendering models obtained with a geometric inference or with single-image depth reconstruction (Karsch et al., 2014). Inverse rendering trained with rendered images can produce excellent reshading of CG objects (Ramachandran, 1988). However, recovering a renderable model from an image fragment is extremely difficult, particularly if the fragment has an odd surface texture. Liao et al. showed that a weak geometric model of the fragment can be sufficient to correct shading if one has strong geometric information about the target scene (Liao et al., 2015; 2019). In contrast, our work is entirely image-based: one takes a fragment from one image, drops it into another, and expects a system to correct it.
We use image harmonization (IH) methods as a strong baseline. These procedures aim to correct corrupted images. IH methods are trained to map images in which a fragment has been adjusted by some noise process (made brighter; recolored; etc.) back to the original image (Sunkavalli et al., 2010; Tsai et al., 2017; Cong et al., 2020), and so could clearly be applied here. But we find that those image harmonization methods very often change the albedo of an inserted object rather than its shading. This is because they rely on ensuring consistency of color representations across the image. For example, on the iHarmony dataset from Cong et al. (2020), they change pink candy to brown (an albedo change; see Fig 12 in Appendix). In contrast, we wish to correct shading alone.
Image Relighting. With appropriate training data, for indoor scenes, one can predict multiple spherical harmonic components of illumination (Garon et al., 2019), a parametric lighting model (Gardner et al., 2019), or even full radiance maps at scene points from images (Song & Funkhouser, 2019; Srinivasan et al., 2020). For outdoor scenes, the sun's position is predicted in panoramas using a learning-based approach (Hold-Geoffroy et al., 2019). One can also construct a volumetric radiance field from multi-view data to synthesize novel views (Mildenhall et al., 2020). However, we do not have access to either training data with lighting parameters/environment maps or multi-view data to construct such a radiance field. Our renderings are entirely image-based. Recent single-image relighting methods relight portrait faces under directional lighting (Sun et al., 2019; Zhou et al., 2019; Nestmeyer et al., 2020). Our method can relight matte, glossy and specular objects with complex material properties, like cars (Fig 7), in both indoor and outdoor environments with spatially varying illumination, from only a single image and without requiring a physics-based BRDF (Li et al., 2020).
Image decomposition. Land's influential Retinex model assumes effective albedo displays sharp, localized changes (which result in large image gradients), and that shading has small gradients (Land, 1959a;b; 1977; Land & McCann, 1971). These models require no ground truth. 
An alternative is to use CG rendered images for image decomposition training (Li & Snavely, 2018), particularly with specialized losses (Bi et al., 2015; Fan et al., 2018). One can also train using rendering constraints to produce a form of self-supervised training (Janner et al., 2017). Current image decomposition evaluation uses the weighted human disagreement rate (WHDR) (Bell et al., 2014); the current champion is Fan et al. (2018). We use an image decomposition method built around approximate statistical models of albedo and shading (paradigms) to train our image decomposition network without requiring ground truth decompositions of real images. Our method has reasonable, but not SOTA, WHDR; but we show that improvements in WHDR do not result in improvements in reshading (Fig 5)." }, { "heading": "3 CUT-AND-PASTE NEURAL RENDERER", "text": "We synthesize a reshaded composite image containing a fragment transferred from a source image into a target scene image. We use a deep image prior (DIP) (Ulyanov et al., 2018) as a neural renderer to produce a reshaded image that yields consistent image decomposition inferences. We use an image decomposition network trained on paradigms (statistical samples of albedo, shading and gloss; Fig 4a) rather than real images, as described in section 3.3, and normals inferred by the method of Nekrasov et al. (2019) to meet the shading consistency tests (section 3.2). The final reshaded image's albedo must be like the cut-and-paste albedo; the reshaded image's shading must match the shading of the target scene outside the fragment; and the shading of the reshaded image must have reasonable spherical harmonic properties and meet a consistency test everywhere. Fig 2 summarizes our method." }, { "heading": "3.1 DEEP IMAGE PRIOR FOR RENDERING CUT-AND-PASTE IMAGES", "text": "Assume we have a noisy image It, and wish to reconstruct the original. Write z for a random vector, fθ for a CNN with parameters θ, and E(fθ(z); It) for a loss comparing the image fθ(z) to It. The Deep Image Prior seeks
$\hat{\theta} = \arg\min_{\theta} E(f_{\theta}(z); I_t)$ (1)
and then reports fθ̂(z). We modify this formulation by requiring that the loss E(·; It) impose inferential consistency. In particular, write gφ for some inference network(s) and tψ(Is, It) for inferences constructed out of It and the source image Is. We seek
$\hat{\theta} = \arg\min_{\theta} E(g_{\phi}(f_{\theta}(z)); t_{\psi}(I_s, I_t)).$ (2)
For us, gφ is an image decomposition network (pretrained and fixed), and tψ creates target albedo (At), shading (St) and gloss (Gt) fields. We then train DIP to produce an image that has reasonable intrinsic image properties. For DIP, the input z is the cut-and-paste image, and fθ is optimized to inpaint the inserted fragment and also to meet satisfactory intrinsic image properties.
We use a U-Net with partial convolutions (Liu et al., 2018; Shih et al., 2020; Dundar et al., 2020). However, we find that the standard partial convolution often converges to a trivial solution, producing images close to cut-and-paste and without convincing reshading. To prevent this overfitting to cut-and-paste images, we flip the context for the partial convolution; that is, we consider the inserted fragment as the context and hallucinate/outpaint the entire target scene around it. We can view this as an inverse partial convolution.
We use CP(Is; It; s) for an operator that cuts the fragment out of the source image Is, scales it by s, and places it in the relevant location in the target image It. We write M for a mask with the size of the target image that is 0 inside the fragment and 1 outside. 
The reconstruction loss for the background is given by:
$L_{recons} = \| I_t \odot M - f_{\theta}(CP(I_s; I_t; s); M) \odot M \|^2$ (3)
We then pass the DIP rendered image through the image decomposition network gφ, yielding Arender, Srender and Grender for the albedo, shading and gloss respectively. Our consistent image decomposition inference losses to train DIP are:
$L_{decomp} = \| A_{CP(I_s;I_t;s)} - A_{render} \|^2 + \| S_t \odot M - S_{render} \odot M \|^2 + \| G_t \odot M - G_{render} \odot M \|^2$ (4)" }, { "heading": "3.2 SHADING CONSISTENCY LOSSES", "text": "We use two shading consistency losses to impose the very strong structure of a shading field on a DIP's rendering. There is good evidence that shading is tied across surface normals (this underlies spherical harmonic models (Liao et al., 2019; Li et al., 2018; Yu & Smith, 2019)), and one should think of a surface normal as a latent variable that explains shading similarities in images. We assume that the resulting illumination, approximated with the first 9 spherical harmonics coefficients (SHC; a 9-dimensional vector), does not change when new objects are added to a target scene. We get the SHC by solving the least squares regression between normals (N) and shading (S) for both the target and the resulting rendered image. We briefly explain the least squares regression formulation between normals and shading. Suppose we have k = m × n pixels in an image, with N ∈ R^{k×3} and S ∈ R^{k×1}. We estimate the first 9 spherical harmonics basis functions (B(N) ∈ R^{k×9}) from N. We can now write S = B(N) × SHC. The solution for SHC is then B(N)†S.¹ We get normal estimates from Nekrasov et al. (2019) and then minimize a loss (LSHC) between the target and rendered images' SHC(S;N). We use the Huber loss for LSHC:
$L_{SHC} = \begin{cases} 0.5 \times (SHC(S_t;N_t) - SHC(S_{render};N_{CP}))^2 & \text{for } |SHC_t - SHC_{render}| \le 1, \\ |SHC(S_t;N_t) - SHC(S_{render};N_{CP})| - 0.5 & \text{otherwise.} \end{cases}$ (5)
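To make the least squares fit concrete, here is a minimal NumPy sketch of how SHC(S;N) and the Huber comparison could be computed. The particular real spherical-harmonics basis convention and the helper names (sh_basis, fit_shc) are our own illustrative choices, not the authors' released code.

```python
import numpy as np

def sh_basis(normals):
    # First 9 real spherical harmonic basis functions (l = 0, 1, 2),
    # evaluated at unit normals; one standard convention among several.
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                      # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,        # l = 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),  # l = 2
    ], axis=1)                                           # B(N), shape (k, 9)

def fit_shc(shading, normals):
    # Least squares fit of S = B(N) @ SHC, so SHC = pinv(B(N)) @ S.
    B = sh_basis(normals.reshape(-1, 3))
    return np.linalg.pinv(B) @ shading.reshape(-1)       # shape (9,)

def huber(a, b, delta=1.0):
    d = np.abs(a - b)
    return np.where(d <= delta, 0.5 * d ** 2, d - 0.5 * delta).sum()

# L_SHC compares the 9-vector fitted on the target scene against the one
# fitted on the DIP rendering with the cut-and-paste normals:
# loss = huber(fit_shc(S_t, N_t), fit_shc(S_render, N_cp))
```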
Spherical harmonic shading fields have some disadvantages: every point with the same normal must have the same shading value, which results in poor models of (say) indoor shading on walls. To control this effect, we use a neural shading consistency loss (LNSC) that allows the shading field to depart from a spherical harmonic shading field, but only in ways consistent with past inferences. Inspired by the pixel-wise discriminator in Schönfeld et al. (2020), we use a shading consistency U-Net, ζ(S;N), trained to discriminate real and fake shading-normal pairs. Our shading-normal consistency network (ζ(S;N)) produces two outputs: a pixel-level map, yielding the first loss term in Eq. 6, which measures per-pixel consistency; and an image-level value, yielding the second term in Eq. 6, which measures consistency for the whole image. The LNSC loss is a binary cross-entropy loss. We train DIP using LNSC to produce a shading field that achieves a pixel-wise consistency score when stacked with the cut-and-paste normals (see Fig 3). Let m × n be the resolution of our renderings; then LNSC is given by
$L_{NSC} = -\sum_{i=1}^{m} \sum_{j=1}^{n} \log \zeta(S_{render}[i,j]; N_{CP}[i,j]) - \log \zeta(S_{render}; N_{CP})$ (6)

Figure 3: The shading consistency U-Net is trained separately to discriminate consistent and inconsistent pixel-level shading-normal pairs.

In summary, the overall rendering loss that we minimize to update DIP is
$L_{render} = L_{recons} + L_{decomp} + L_{SHC} + L_{NSC}$ (7)

¹B(N)† is the pseudo-inverse of B(N).

Figure 4: (a) Paradigms (albedo, shading and gloss samples). (b) Image decomposition on real images (MS COCO); image, albedo, shading and gloss, from top to bottom." }, { "heading": "3.3 INTRINSIC IMAGE DECOMPOSITION", "text": "We believe we could use any competent image decomposition network's inferences for reshading. However, our experience suggests that accurate albedo recovery (measured by strong WHDR performance) results in poor reshading outcomes (Fig 5). It is also helpful to have a small and efficient network to reduce the overall back-propagation time when training a DIP. Therefore, we trained a small U-Net to produce albedo, shading and gloss layers from images using paradigms, samples from statistical models intended to capture the spatial statistics of albedo, gloss and shading. This is a simple extension of the Retinex model (Land, 1977). Albedo paradigms are Mondrian images. Shading paradigms are Perlin noise (Perlin, 1985), and gloss paradigms comprise light bars on a dark background. These are used to produce fake images by a simple composition (AS + G), which in turn are used to train the image decomposition network. Figure 4a shows some samples from each. As Figure 4b illustrates, the resulting intrinsic image models are satisfactory on MS COCO real images (Lin et al., 2014). For our experiments, we trained two models (Paradigm I and Paradigm II), with samples drawn from different statistical models, to investigate the consequences of strong WHDR recovery (II has significantly better WHDR than I). The major difference between the two is that Paradigm I has high-frequency, fine-grained details in albedo and not shading, while for Paradigm II it is the opposite." }, { "heading": "3.4 POST-PROCESSING", "text": "Removing DIP artifacts. In our approach (see Fig 2), DIP sees both the target image and the cut-and-paste image. We require that DIP inpaints the shading field of the original target image and also the cut-and-paste composite image. This process is analogous to rendering scenes once with an object added and once without an object (see (Debevec, 1998; Karsch et al., 2011)). Doing so means the target scene's shading field acts like a regularizer that prevents DIP from copying cut-and-paste. Furthermore, we can remove DIP-specific noise artifacts from the reconstructed image. Using the notation above and writing Iobj for the target image rendered by DIP with the object and Inoobj for the target scene rendered by DIP (i.e. no object), we form
$I_{final} = (1 - M) \odot I_{obj} + M \odot (I_t + I_{obj} - I_{noobj})$ (8)
High-resolution Rendering. Our image decomposition network can decompose very high-resolution albedos reliably. Since our shading and gloss are both locally smooth for Paradigm I (fine details in albedo), we can easily upsample them to high resolution. This allows us to render very-high-resolution final reshaded images (1024p) with no additional computational budget. Note that we train DIP to render images at 256p resolution only, and all our final rendered images in this paper are at 1024p resolution. The IH baseline (Cong et al., 2020) can only reshade 256p resolution images.
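Putting section 3 together, a condensed PyTorch-style sketch of one DIP update is given below. The module names (dip, decomp, zeta, shc_loss) are placeholders for the components described above, under our own assumptions about their interfaces; this is an illustration, not the authors' released implementation.

```python
import torch

def render_step(dip, decomp, zeta, shc_loss, opt,
                cp_image, mask, I_t, A_cp, S_t, G_t, N_cp):
    # dip:    trainable U-Net f_theta with inverse partial convolutions
    # decomp: frozen paradigm-trained decomposition network g_phi
    # zeta:   frozen shading-normal consistency U-Net (pixel map, image score)
    out = dip(cp_image, mask)                     # f_theta(CP(Is; It; s); M)
    A_r, S_r, G_r = decomp(out)                   # albedo, shading, gloss

    l_recons = ((I_t * mask - out * mask) ** 2).mean()              # Eq. 3
    l_decomp = (((A_cp - A_r) ** 2).mean()
                + ((S_t * mask - S_r * mask) ** 2).mean()
                + ((G_t * mask - G_r * mask) ** 2).mean())          # Eq. 4
    l_shc = shc_loss(S_t, S_r, N_cp)                                # Eq. 5
    pix, img = zeta(S_r, N_cp)
    l_nsc = -(torch.log(pix + 1e-8).mean()
              + torch.log(img + 1e-8).mean())                       # Eq. 6

    loss = l_recons + l_decomp + l_shc + l_nsc                      # Eq. 7
    opt.zero_grad()
    loss.backward()
    opt.step()
    return out
```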
}, { "heading": "4 EXPERIMENTS", "text": "Dataset. We collected about 75 diverse images with spatially varying illumination, both indoorsoutdoors, day-night to act as target scenes in our experiments. We tested our cut-and-paste rendering on circle cutouts as spheres (Fig 6), plant, chair, lego and a set of sixteen materials used in NeRF\n(Mildenhall et al., 2020) (Fig. 8). We also show reshading results for cars (Fig 7). Many other objects can be found in our Appendix. We use ADE20k validation set Zhou et al. (2017) for supplying real residual loss to our image decomposition network and also to train our shading consistency network. Note that the ADE20k does not have ground truth surface normals and we use normals from a pretrained network (Nekrasov et al., 2019) as described in section 3.2.\nTraining Details. We used U-Net for DIP, Image Decomposition and Shading Consistency Network. Network architecture and other training details are in our supplementary. We update our DIP for a fixed 10k iterations and this takes about 1200 seconds using our image decomposition network and 1600 seconds when using CGIntrinsic (Li & Snavely, 2018).\nIH Baseline. We use DoveNet from Cong et al. (2020) as our IH baseline. We use their provided pretrained model for our evaluation 2.\nIntrinsic images. Figure 5 explains how better albedo recovery can lead to weaker reshading. Paradigm I achieves realistic reshading and we use image decomposition model trained on Paradigm I for all our experiments.\nFooling users. We wish to know whether our reshaded images are reliably preferred to cut and paste images. We have conducted a user study to compare our reshading method (RS), cut-and-paste (CP), and image harmonization (IH). We collected data from a total of 122 unique users in 500 studies from Amazon Mechanical Turk. Each study consists of a prequalifying process, followed by 9 pair-wise comparisons, where the user is asked which of two images are more realistic. The prequalifying process presents the user with five tests; each consists of an image with inserted white spheres which are not reshaded (i.e. bright white disks) and an image with inserted spheres which have been reshaded (see Fig 6). We ignore any study where the user does not correctly identify all five reshaded images, on the grounds that the difference is very obvious and the user must not have been paying attention. The result is 109 prequalified studies. The comparisons are balanced (each study is 3 RS-IH pairs, 3 CP-IH pairs and 3 RS-CP pairs, in random order and presentation).\nThe simplest analysis strongly supports the idea that RS is preferred over both alternatives. One compares the probability that RS is preferred to IH (.673, over 327 comparisons, so standard error is .026, and the difference from 0.5 is clearly significant); RS is preferred to CP (.645, over 327 comparisons, so the standard error is .026, and the difference from 0.5 is clearly significant); IH is preferred to CP (.511, over 327 comparisons, so standard error is .027, and there is no significant difference from 0.5). An alternative is a Bradley-Terry model (Tsai et al., 2017; Cong et al., 2020)\n2https://github.com/bcmi/Image_Harmonization_Datasets/tree/master/ DoveNet\nused in image harmonization evaluation, regressing the quality predicted by the Bradley-Terry model against the class of algorithm. This yields coefficients of 0 for IH, -0.347 for CP and 0.039 for RS, implying again that RS is preferred over IH and strongly preferred over CP.\nQuantitative Analysis. 
Quantitative Analysis. We cannot quantitatively evaluate our reshading method, because we do not know the ground truth. But we tried scoring images with NIQE, a no-reference quality measure (Mittal et al., 2012). Quite surprisingly, NIQE gets a significantly worse score for target scenes without any fragments added. This shows that getting a reliable quantitative evaluation for reshading is extremely hard. We also tried the recent CNN detector of image fakes described by Wang et al. (2020). We find that it does not detect any of our synthesized images as fake; but it also fails to detect cut-and-paste images and image harmonization images." }, { "heading": "5 DISCUSSION", "text": "Evaluation on synthetic scenes. One could build ground truth data on both synthetic (easy, but possibly misleading) and real (experimentally hard, see Karsch et al. (2011)) scenes. But evaluating using these will be misleading. The problem is that it is not possible to recover a renderable representation from an image fragment using current computer vision technology. This means the inserted object will be rendered incorrectly, so evaluating using a metric like Mean Squared Error (MSE) is problematic. Worse, MSE says nothing about the realism of the reshaded image. One could have good MSE but poor qualitative results; one could also have bad MSE but strong qualitative results. Only a user study remains as an evaluation.
Why could this work? It should be clear that corrections to inserted fragments are not veridical. As Liao et al. (2015; 2019) noted, corrected fragments often fool humans more effectively than physically accurate lighting, likely because humans attend to complex materials much more than to consistent illumination (Berzhanskaya et al., 2005). The alternative physics theory of Cavanagh & Alvarez (2005) argues that the brain employs a set of rules that are convenient, but not strictly physical, when interpreting a scene from the retinal image. When these rules are violated, a perception alarm is fired, or recognition is negatively affected (Gauthier et al., 1998). Otherwise, the scene “looks right”. This means that humans may tolerate a considerable degree of estimation error, as long as it is of the right kind. By insisting that the image produces consistent inferences, we appear to be forcing errors to be “of the right kind”.
Need For Speed. A desirable cut-and-paste reshading application would correct shading on inserted objects requiring no explicit 3D reasoning about either fragment or scene. It would do so in ways that produce consistent image analysis inferences. Finally, it would be quick. We described a method that meets two of these three desiderata (DIP still requires minutes to render an image).
Casting Shadows. Our renderer can adjust the background with cast shadows, but often unrealistically. Therefore, we only show shading corrections on the fragment in this paper. Having better inferences (e.g., a better normal prediction network) should improve our results significantly. Also, better illumination modeling like Lighthouse (Srinivasan et al., 2020) and residual appearance networks (Sengupta et al., 2019) should improve background shading changes. We also believe GAN-based losses should help in correcting background shadows. This can also be combined with a few recent promising works in this direction (Izadinia & Seitz, 2020; Zhan et al., 2020).
Extending to Videos. Our method can reshade multiple frames at a time quite convincingly (Fig 14 in the Appendix). Extending to videos is a natural next step. 
Once the speed of our renderings is improved, we can also use our techniques for improving frame interpolation." } ]
2020
CUT-AND-PASTE NEURAL RENDERING
SP:0e68a02aff6bc3918d91083d6b48a3d625ebdc5d
[ "This paper presents a method for improving a fine-turned Transformer in terms of a specific metric such as size, speed, or accuracy. The candidates of removed elements are considered hierarchically with some heuristics and are evaluated in terms of training and validation loss to determine whether they should actually be removed from the model. The authors apply their method to several state-of-the-art Transformer models and show that they can produce fast and compact models without losing much accuracy." ]
Transformer models have garnered a lot of interest in recent years by delivering state-of-the-art performance in a range of Natural Language Processing (NLP) tasks. However, these models can have over a hundred billion parameters, presenting very high computational and memory requirements. We address this challenge through Approximate Computing, specifically targeting the use of Transformers in NLP tasks. Transformers are typically pre-trained and subsequently specialized for specific tasks through transfer learning. Based on the observation that pretrained Transformers are often over-parameterized for several downstream NLP tasks, we propose a framework to create smaller, faster and in some cases more accurate models. The key cornerstones of the framework are a Significance Analysis (SA) method that identifies components in a pre-trained Transformer that are less significant for a given task, and techniques to approximate the less significant components. Our approximations include pruning of blocks, attention heads and weight groups, quantization of less significant weights and a low-complexity sign-matching based attention mechanism. Our framework can be adapted to produce models that are faster, smaller and/or more accurate, depending on the user’s constraints. We apply our framework to seven Transformer models, including optimized models like DistilBERT and Q8BERT, and three downstream tasks. We demonstrate that our framework produces models that are up to 4× faster and up to 14× smaller (with less than 0.5% relative accuracy degradation), or up to 5.5% more accurate with simultaneous improvements of up to 9.83× in model size or 2.94× in speed.
[ { "affiliations": [], "name": "OPTIMIZING TRANSFORMERS" } ]
[ { "authors": [ "Tom B. Brown", "Benjamin Mann", "Nick Ryder" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Maha Elbayad", "Jiatao Gu", "Edouard Grave", "Michael Auli" ], "title": "Depth-adaptive transformer", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Angela Fan", "Edouard Grave", "Armand Joulin" ], "title": "Reducing transformer depth on demand with structured dropout", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Saurabh Goyal", "Anamitra Roy Choudhury", "Venkatesan T. Chakaravarthy", "Saurabh ManishRaje", "Yogish Sabharwal", "Ashish Verma" ], "title": "Power-bert: Accelerating BERT inference for classification", "venue": "tasks. CoRR,", "year": 2020 }, { "authors": [ "Ganesh Jawahar", "Benoı̂t Sagot", "Djamé Seddah" ], "title": "What does BERT learn about the structure of language", "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "Tinybert: Distilling BERT for natural language understanding", "venue": "URL http://arxiv.org/abs/1909.10351", "year": 1909 }, { "authors": [ "Ashish Khetan", "Zohar S. Karnin" ], "title": "schubert: Optimizing elements of BERT", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized BERT pretraining approach", "venue": "URL http://arxiv.org/abs/1907.11692", "year": 1907 }, { "authors": [ "Mitchell P. Marcus", "Grace Kim", "Mary Ann Marcinkiewicz", "Robert MacIntyre", "Ann Bies", "Mark Ferguson", "Karen Katz", "Britta Schasberger" ], "title": "The penn treebank: Annotating predicate argument structure. 
In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jersey", "venue": "USA, March 8-11,", "year": 1994 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning convolutional neural networks for resource efficient inference", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "URL http://arxiv.org/abs/1910.10683", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter", "venue": "CoRR, abs/1910.01108,", "year": 2019 }, { "authors": [ "Sheng Shen", "Zhen Dong", "Jiayu Ye", "Linjian Ma", "Zhewei Yao", "Amir Gholami", "Michael W. Mahoney", "Kurt Keutzer" ], "title": "Q-BERT: hessian based ultra low precision quantization of BERT", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Mohammad Shoeybi", "Mostofa Patwary", "Raul Puri", "Patrick LeGresley", "Jared Casper", "Bryan Catanzaro" ], "title": "Megatron-lm: Training multi-billion parameter language models using model parallelism", "venue": "CoRR, abs/1909.08053,", "year": 2019 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "Mobilebert: a compact task-agnostic BERT for resource-limited devices", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020,", "year": 2020 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. 
Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Jamie Brew" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": "URL http://arxiv.org/abs/1910.03771", "year": 1910 }, { "authors": [ "Zhanghao Wu", "Zhijian Liu", "Ji Lin", "Yujun Lin", "Song Han" ], "title": "Lite transformer with long-short range attention", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ji Xin", "Raphael Tang", "Jaejun Lee", "Yaoliang Yu", "Jimmy Lin" ], "title": "Deebert: Dynamic early exiting for accelerating BERT inference", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020,", "year": 2020 }, { "authors": [ "Canwen Xu", "Wangchunshu Zhou", "Tao Ge", "Furu Wei", "Ming Zhou" ], "title": "Bert-of-theseus: Compressing BERT by progressive module replacing", "venue": "CoRR, abs/2002.02925,", "year": 2020 }, { "authors": [ "Zihao Ye", "Qipeng Guo", "Quan Gan", "Xipeng Qiu", "Zheng Zhang" ], "title": "Bp-transformer: Modelling longrange context via binary partitioning", "venue": "CoRR, abs/1911.04070,", "year": 2019 }, { "authors": [ "Ofir Zafrir", "Guy Boudoukh", "Peter Izsak", "Moshe Wasserblat" ], "title": "Q8BERT: quantized 8bit BERT", "venue": "CoRR, abs/1910.06188,", "year": 2019 }, { "authors": [ "Sajjad" ], "title": "auto-regressive), similar to the observation", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transformer networks with hundreds of billions of parameters, such as T5 (Raffel et al. (2019)), Megatron (Shoeybi et al. (2019)), BERT (Devlin et al. (2019)), GPT-2 (Radford et al. (2019)) and GPT-3 (Brown et al. (2020)), have achieved state-of-the-art performance in several Natural Language Processing tasks. Model sizes are expected to grow further in the future as increasing the number of parameters has been shown to improve performance. For instance, increasing the number of parameters from 1.5B to 175B enabled a reduction in perplexity for Language Modelling (Penn Treebank) from 35.8 in GPT-2 to 20.5 in GPT-3. This makes it computationally challenging to train Transformers as well as perform inference using them. The challenges associated with training these models are alleviated through the (re-)use of pre-trained models that are subsequently fine-tuned for different tasks. Consequently, these models incur a major one-time cost in computational resources, time and energy during the pre-training process, but the repeated fine-tuning for individual downstream tasks is performed at a considerably lower cost.\nHowever, performing inference using fine-tuned Transformer models continues to remain a challenge because of the large amount of storage and compute operations required. Prior research efforts have explored different techniques for improving the efficiency of Transformer inference. However, several of the proposed approaches either require training the network completely from scratch (which is extremely compute and memory-intensive), or cause significant degradation in accuracy on the downstream task. In this work, we overcome these limitations by exploiting the transfer learning step in Transformers to produce individually optimized models for the different\ndownstream tasks, using techniques that do not require training from scratch and maintain or improve accuracy levels.\nFrom the runtime and memory breakdown of Transformers (Fig. 1), we observe that the most timeconsuming and memory-intensive operations in a Transformer are the self-attention (ATTN) blocks, which are used to identify and form relationships between the different tokens in text, and the feedforward neural network blocks (FFN blocks) in the Transformer layers. These blocks together account for more than 85% of the inference time (and more than 75% of the model’s parameters). We accordingly optimize the execution of these two components in our approach. The self-attention component dominates the execution time and memory size for longer context lengths as its operation scales quadratically in time and memory with sequence length. Some previous works (Kitaev et al. (2020), Ye et al. (2019)) have addressed this issue, accelerating training and inference of Transformers when large context lengths are used. However, they suffer from significant overheads and slowdowns in applications with smaller context lengths, such as question answering, where questions and answers are usually short, in the order of a few hundred tokens. Our approach, on the other hand, performs well across context lengths, size of hidden layers, number of layers and other network characteristics.\nThe pre-training of Transformer models with some initial objective (most commonly predicting masked words in a large text corpus) and the subsequent fine-tuning on a downstream task leads to highly over-parameterized models for many downstream tasks (Michel et al. 
(2019)), providing ample opportunities for approximations. As these models grow larger, such opportunities are expected to increase even further. We observe that for a given downstream task, some parts of the pre-trained Transformer are more significant for obtaining good accuracy, while other parts are less important or unimportant. In order to exploit this observation in a principled manner, we introduce a framework that applies approximations while fine-tuning a pre-trained Transformer network, optimizing for either size, latency, or accuracy of the final network. We perform significance analysis and apply approximations in a hierarchical manner, first pruning entire blocks, followed by attention heads, and finally pruning weight groups. We achieve further gains by also allowing elements that cannot be pruned to be approximated by other techniques. We specifically apply two forms of approximations, depending on the element type. For weights, we utilize quantization. For the self-attention operation, we replace the scaled dot product attention mechanism with a novel sign matching-based attention mechanism.
We summarize our main contributions as follows:
• We introduce a framework for creating fine-tuned models from pre-trained Transformer models that are optimized for various metrics (size, latency, accuracy).
• We incorporate multiple heuristics in the framework, such as hierarchical processing, model-driven insights, and run-time based ordering of elements.
• We propose a significance analysis technique to identify the importance of each element of the pre-trained Transformer for a given downstream task. We use this technique to prune entire blocks, attention heads, and weight groups and to guide the quantization of low-importance weights.
• We propose a low-complexity attention mechanism, sign matching, in order to approximate dot product attention in the less significant attention layers.
• Across a suite of different Transformer networks, including previously proposed optimized networks, we demonstrate that our techniques produce models that are up to 4× faster and up to 14× smaller (with less than 0.5% relative accuracy degradation), or up to 5.5% more accurate with simultaneous size and latency improvements." }, { "heading": "2 RELATED WORK", "text": "Given the effectiveness and popularity of Transformer models, several techniques have been proposed to overcome their computational and memory challenges, and to accelerate inference using these models. Most of these works directly pre-train efficient models from scratch. For example, DistilBERT (Sanh et al. (2019)), MobileBERT (Sun et al. (2020)) and TinyBERT (Jiao et al. (2019)) use knowledge distillation to train smaller and faster networks using the original network as a teacher. LayerDrop (Fan et al. (2020)) randomly drops layers during pre-training, thereby enabling their dropping during inference. SchuBERT (Khetan & Karnin (2020)) learns the optimal sizes of the BERT elements during pre-training. Lite Transformer (Wu et al. (2020)) uses Long-Short Range Attention to speed up the self-attention operation, with different attention heads attending to local and global context. Depth-adaptive Transformer (Elbayad et al. (2020)) and DeeBERT (Xin et al. (2020)) modulate Transformer depth depending on the complexity of each input sample using gating functions that are trained along with the model. AlBERT (Lan et al. (2020)) uses factorized embeddings and cross-layer parameter sharing. 
These works are orthogonal to ours, as the models that they produce are still subsequently fine-tuned for downstream tasks. We demonstrate using DistilBERT, AlBERT and LayerDrop as examples that these optimized networks still have significant opportunities that our techniques can take advantage of.
Other works (including ours) aim to improve the inference efficiency of Transformers using techniques that do not require training new models from scratch. Among these, PoWER-BERT (Goyal et al. (2020)), which eliminates redundant word vectors from the model without removing any parameters, and Q8BERT (Zafrir et al. (2019)), which quantizes all weights and activations in the model to 8-bit integers through the use of Quantization-Aware Training at fine-tuning time, are orthogonal and complementary to our work. Poor Man's BERT (Sajjad et al. (2020)) evaluates several layer-dropping techniques that do not require re-training. Compared to such layer-dropping techniques, our approach produces models that are up to 20% more accurate at comparable inference speed, and this is especially true when working with highly optimized baselines such as Q8BERT. Our framework can also be adapted to satisfy a wide range of user constraints." }, { "heading": "3 PRELIMINARIES", "text": "A Transformer (Fig. 1) consists of an embedding layer, followed by multiple transformer layers stacked together, and a task-specific final layer. A transformer layer consists of the multi-headed self-attention operation (ATTN block), followed by a feed-forward neural network (FFN block), with layer norm operations at the input and output of the layer. In this work, we define the elements of a Transformer to include different levels of granularity, i.e., ATTN blocks, FFN blocks, Attention Heads and Weight Groups. We define Weight Groups only along dimensions that do not impact the shape of the output of the block when these groups are removed.
The self-attention operation takes as input a sequence of n vectors X, and computes three matrices, Query = X × Wq, Key = X × Wk and Value = X × Wv. Then, the output of the self-attention operation is computed as Y = softmax((Query × Key^T) + attention_mask) × Value. For auto-regressive models, tokens are not allowed to attend to future tokens. Hence, an attention mask is applied before the softmax operation, setting attention scores with future tokens to a very large negative number, which becomes zero after the softmax operation. This operation has multiple “attention heads” working in parallel on the input sequence, where each head has its own set of parameters to compute the query, key and value matrices. The independent attention outputs are concatenated and transformed into the expected output dimensions. The self-attention operation scales quadratically in time and memory with sequence length n since Query × Key^T has n² entries.
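As a concrete reference for this notation, here is a minimal single-head PyTorch sketch of the operation just described (our own illustration; note that, to match the text, the common 1/sqrt(d) scaling is omitted):

```python
import torch

def self_attention(X, Wq, Wk, Wv, causal=False):
    # X: (n, d) input token vectors; Wq/Wk/Wv: (d, y) projection matrices.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T                              # (n, n) attention scores
    if causal:
        # Auto-regressive mask: future positions get a very large negative
        # score, which the softmax maps to (near) zero.
        n = X.shape[0]
        future = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
    return torch.softmax(scores, dim=-1) @ V      # (n, y)
```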
" }, { "heading": "4 DESIGN METHODOLOGY", "text": "We propose a framework for producing fine-tuned Transformer models that are optimized for a specific metric (speed, model size, or accuracy). Fig. 2 presents an overview of the proposed framework. As shown in the figure, the inputs to the framework are a pre-trained Transformer model, the fine-tuning dataset, the goal of optimization (speed, size or accuracy) and the acceptable accuracy loss (when optimizing for speed or size). The framework has three major components: (i) a set of heuristics used to build an ordered queue of elements (TransElements) to be considered, (ii) a significance analysis method to identify insignificant elements in a pre-trained Transformer, and (iii) a set of techniques to prune or approximate the insignificant elements. The framework proceeds in an iterative manner. That is, we first start with the original Transformer. We then remove an element from the TransElements queue, analyze its significance, and apply pruning/approximation techniques to the element. This results in a new Transformer, where the element is replaced by the pruned or approximated version. This modified Transformer is then used as the baseline for the next iteration. After processing all of the identified elements, we fine-tune on the downstream task for the same number of epochs as the baseline model to obtain the final, optimized model. A detailed description of our methodology for approximating Transformers is presented in Fig. 2 and in Algorithm 4. In the following subsections, we further describe our techniques for generating the ordered queue TransElements, followed by the significance analysis method, and finally the pruning and approximation techniques for different Transformer elements.
TransElement Ordered Queue. In order to optimize a given model, we would ideally want to characterize the significance of each and every parameter in the model, rank them in order of importance, and finally prune/approximate only the least significant parameters, as in Molchanov et al. (2017). However, Transformers have billions of parameters, making this process computationally infeasible. In addition, previously proposed techniques that can efficiently estimate the importance of each parameter, such as using Taylor expansion, are not applicable. This is because the {approximate, fine-tune, approximate} cycle does not work for Transformers during fine-tuning, since they very quickly overfit the training data for the downstream task (usually within 5 epochs). We take advantage of the hierarchical structure of Transformers and consider their elements in a hierarchical manner, ordered by increasing granularity. Specifically, we place entire FFN and ATTN blocks earlier in the queue, followed by heads, and finally weight groups. Through this ordering, we are able to quickly eliminate large numbers of parameters from further consideration, speeding up future iterations of the framework. For example, eliminating a single FFN block in the BERT-Base model removes 5.6% of all parameters under consideration. To further reduce the number of elements under consideration, we also dynamically remove elements from the queue if they are encompassed by a high-importance block. For example, if a given ATTN block is determined to be of high importance, we remove all heads and weight groups within that block from the TransElement queue.
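As an illustration, a minimal sketch of how such a hierarchical queue could be built and dynamically filtered is shown below. The TransElement fields and helper names are our own; the paper's Algorithm 1 is the authoritative version.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TransElement:
    layer: int
    kind: str          # "FFN", "ATTN", "HEAD", or "WEIGHT_GROUP"
    index: int = 0     # head or weight-group index within the block

def build_queue(num_layers, num_heads, num_groups):
    # Coarse granularity first (whole blocks), then heads, then weight groups.
    # Layers are visited from last to first, since top layers are often the
    # most prunable for language understanding tasks.
    q = deque()
    for layer in reversed(range(num_layers)):
        q.append(TransElement(layer, "FFN"))
        q.append(TransElement(layer, "ATTN"))
    for layer in reversed(range(num_layers)):
        for h in range(num_heads):
            q.append(TransElement(layer, "HEAD", h))
        for g in range(num_groups):
            q.append(TransElement(layer, "WEIGHT_GROUP", g))
    return q

def drop_descendants(q, layer):
    # If a block turns out to be highly significant, remove its finer-grained
    # elements (heads, weight groups) from further consideration.
    return deque(e for e in q
                 if not (e.layer == layer and e.kind in ("HEAD", "WEIGHT_GROUP")))
```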
Since the framework iterates through the entries of the TransElement queue sequentially, its efficacy is dependent on the ordering of the elements at each level of granularity. In order to minimize the run-time of the framework, we provide two additional heuristics to guide the ordering of elements. First, we use the unique linguistic properties captured by the different Transformer layers (Jawahar et al. (2019)). These properties depend on both the Transformer and the downstream task under consideration, since different tasks require different types of linguistic knowledge. For example, top layers usually have low significance for Language Understanding tasks, since long-range dependency information is not required for most tasks (for example, sentiment analysis requires only local context). Hence, we place the final layer at the front of the queue, and work our way backwards towards the first layer, since blocks in the final layers are more likely to be removed, thereby speeding up future iterations. Second, we use a run-time (or parameter-count) aware ordering of the TransElements, such that the most time-consuming blocks (or blocks with the most parameters) are likely to be removed earlier in the algorithm. For example, at large context lengths, we start with the ATTN blocks in all layers before moving on to the FFN blocks, and vice-versa at small context lengths. This has the dual benefit of producing highly optimized models for inference, as well as speeding up the significance analysis process by eliminating time-consuming blocks early and making further iterations faster. Algorithm 1 and Fig. 2 describe the process of creating the TransElement Queue. The utility of this framework and the heuristics used are discussed in Appendix C.
Significance Analysis. To determine the significance of each Transformer element, we first fine-tune the original Transformer model for the given downstream task to obtain the baseline loss. We then use this baseline loss, along with the provided acceptable accuracy degradation, to generate a set of loss thresholds that determine whether a given element is of low importance and can therefore be pruned/approximated. This is a one-time step, performed globally for all elements in the TransElements queue. Then, for the element under consideration in each iteration of the framework, we compute the loss of the current Transformer model with the element removed. We then compare this loss to the thresholds determined above. The exact thresholds used are dependent on the optimization metric: speed, size, or accuracy. If we are optimizing the network for speed or size, we prune the element under consideration if the training/validation loss upon removing it from the Transformer is less than the pruning threshold. If we are optimizing for accuracy, we prune the element only if the training/validation loss when it is removed is less than the minimum loss seen thus far during the optimization process, since the goal is to find a model with minimum loss. Similarly, we apply approximations if the loss with the element removed from the Transformer is greater than the pruning threshold but lower than the approximation threshold. Algorithm 2 describes Significance Analysis.
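A compact sketch of this decision rule is given below (our own rendering; the removed_from context manager and other helpers are hypothetical placeholders for the paper's Algorithm 2):

```python
def process_element(model, element, loss_fn, thresholds, goal, best_loss):
    # Temporarily remove the element and measure training/validation loss.
    with element.removed_from(model):       # hypothetical context manager
        loss = loss_fn(model)

    if goal == "accuracy":
        # Prune only if removal actually reduces the best loss seen so far.
        if loss < best_loss:
            element.prune(model)
            best_loss = loss
    else:  # goal is "speed" or "size"
        if loss < thresholds["prune"]:
            element.prune(model)
        elif loss < thresholds["approx"]:
            element.approximate(model)      # e.g. quantization or sign matching
    return best_loss
```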
Pruning and Approximating. As evident from Section 3, the structure and functionality of ATTN blocks differ significantly from those of FFN blocks in a Transformer. We accordingly adopt different strategies for approximating them, as described below. Pruning an entire ATTN or FFN block, however, is effectively the same for both: it simply involves using the skip connection to bypass that block. The pruning strategies for the FFN and ATTN blocks are illustrated in Fig. 4 and Fig. 5.
Pruning Weight Groups within approximable FFN Blocks. Consider an approximable FFN block that performs the transformation R^{n×d} × R^{d×y} → R^{n×y}, with weight groups defined along the d dimension (d/W weight groups of W weights each, where W is a hyperparameter that defines the granularity of approximations). When optimizing models for accuracy, we remove weight groups only if doing so reduces the model loss. When optimizing for size, we remove weight groups whose removal keeps the loss within the pruning threshold. When optimizing for speed, however, removing weight groups with low significance from arbitrary locations does not help, since it introduces unstructured sparsity in the weight matrix that is difficult to exploit for speedups. Instead, we impose structure on our pruning. Specifically, we use a "greedy shrinking" algorithm that finds the largest number of weight groups that can be removed while maintaining loss below the threshold, such that the weight groups that remain in the model form a contiguous block. We first start from the bottom (weight group 0), work our way up, and remove as many weight groups as possible while staying within the loss threshold. We then start from the top (weight group d/W), work our way down, and remove as many weight groups as possible while staying within the loss threshold. When this process is completed, the weight groups that remain form a contiguous dense block, enabling speedups on all hardware platforms. Since weight groups are removed along the "hidden" dimension d, our methods do not change the shape of the output of this block; instead, we are simply "shrinking" the effective hidden dimension through structured pruning. A sketch of this two-ended shrinking procedure is given below.
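The following sketch illustrates the greedy shrinking idea, removing one weight group at a time from either end; loss_after_dropping and drop_group are hypothetical helpers that re-measure the task loss for a candidate removal and remove a group, respectively.

def greedy_shrink(model, block, num_groups, loss_thr, task_data):
    # Remove contiguous weight groups from both ends of the hidden dimension
    # while the task loss stays below the threshold.
    lo, hi = 0, num_groups  # surviving groups form the range [lo, hi)
    while lo < hi and loss_after_dropping(model, block, lo, task_data) < loss_thr:
        drop_group(model, block, lo)      # shrink from the bottom
        lo += 1
    while hi > lo and loss_after_dropping(model, block, hi - 1, task_data) < loss_thr:
        drop_group(model, block, hi - 1)  # shrink from the top
        hi -= 1
    return lo, hi  # the remaining groups form one contiguous dense block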
Quantizing Weight Groups within approximable FFN and ATTN Blocks. When optimizing the Transformer for size, we also quantize weight values within weight groups for which the loss lies between the pruning and approximation thresholds. We use uniform quantization with Quantization-Aware Training, as proposed in Q8BERT (Zafrir et al. (2019)), within our hierarchical framework to quantize insignificant weight groups to lower precisions. This reduces the memory requirements of those weight groups but does not improve the execution time, as the computations are still performed at the baseline precision.
Pruning ATTN Heads and Weight Groups within approximable ATTN Blocks. We divide the multi-headed self-attention operation into two main steps. In the first step, we compute the Query, Key and Value matrices by multiplying the input to this layer with the corresponding weight matrices (R^{n×d} × R^{d×y} → R^{n×y}), and then reshape them into multiple attention heads (R^{n×y} → R^{n×h×(y/h)}). Our approach to pruning this step is exactly the same as for the FFN blocks: we iteratively prune weight groups along the d dimension using our shrinking algorithm. In the second step, we compute the "attention output" as Y = softmax(Query × Key^T + attention_mask) × Value. To optimize this step, we apply two techniques. First, we identify insignificant attention heads and prune them from the model. Removing attention heads changes the shape of the output of this layer; we overcome this by keeping track of the pruned heads and padding the output with zeros in the corresponding locations. In spite of this overhead, we still achieve significant speedups from this technique, since pruning heads makes multiple downstream operations (computing the attention scores, applying softmax to the attention scores, and computing the final score) considerably faster. Here we therefore do not use our greedy shrinking method, but rather rely on unstructured pruning, as it allows for greater pruning, which further benefits the downstream operations. Second, we dynamically reduce the size of the Key and Value matrices by pruning weight groups from the same locations along the n dimension in both matrices, based on sign matches with the query vectors. This again makes multiple downstream operations considerably faster and does not change the shape of the output of the pruned block.
Approximating self-attention within approximable ATTN Blocks. We observe that the "attention scores" matrix is highly sparse, especially after the softmax operation. This sparsity implies that most of the dot products between the queries and the keys are unnecessary. Thus, we would ideally like to perform the attention operations only for the key vectors that yield the highest dot-products with the query vectors, without explicitly computing all of the dot products. To this end, we propose replacing the O(n^2) dot-product-based attention mechanism with a linear-time sign-matching-based mechanism in approximable ATTN blocks. Sign-matching attention (SM) is based on the idea that key vectors whose signs match those of the largest number of query vectors will have high dot-products with the maximum number of query vectors. However, computing a sign match for all pairs of query-key vectors is expensive, as the cost grows quadratically. Instead, we employ a low-cost approximation. For each column of the query matrix, we identify whether more vectors have a positive or a negative value in that column; this becomes the representative sign of that column for all the query vectors. Each key vector is then scored by how well the sign of each of its elements matches the representative query signs, by computing the Hamming distance between the two sign vectors. This score is used to select the top K key vectors. As a result, we reduce the number of computations required to score the key vectors (and the overall complexity) from O(n^2) to O(n). Sign matching is illustrated in Fig. 2 and explained in detail in Appendix B. As this approximation neither increases the accuracy of the models nor decreases the number of parameters, it is only applied when optimizing the fine-tuned models for speed. A minimal sketch of the key-selection step is given below.
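The key-selection step can be written in a few lines of PyTorch; the sketch below is an illustrative reconstruction of Algorithm 4 (Appendix B), not the actual implementation. Scoring by sign agreement is equivalent to ranking by Hamming distance between sign vectors.

import torch

def sign_matching_topk(query, key, k):
    # query, key: (n, d) tensors. Returns indices of the k key vectors whose
    # signs best match the per-column majority sign of the query matrix.
    rep_sign = torch.sign(torch.sign(query).sum(dim=0))   # representative sign per column
    agreement = (torch.sign(key) * rep_sign).sum(dim=1)   # high agreement = low Hamming distance
    return torch.topk(agreement, k).indices

Attention would then be computed only against the selected keys and values, i.e., gathering key[idx] and value[idx] with idx = sign_matching_topk(query, key, k) before the usual softmax(QK^T)V computation.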
}, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "We implement our techniques within Huggingface's Transformers library in PyTorch (Wolf et al. (2019)). We use Intel AI's NLP Architect for experiments on Q8BERT. The experiments were performed on a GeForce RTX 2080 Ti GPU with 11GB memory. All results reported on the dev set are the average of 10 runs with random seeds after 3 epochs of fine-tuning, unless otherwise specified. When reporting results on the development set, for the larger datasets (like MNLI) we create a validation set using a small subset (~15%) of the training data and use the loss on this set to characterize significance. On the smaller datasets (like WNLI), there is not enough data for a meaningful training-validation split; hence, we directly use the loss on the training set. When reporting results on the test set, we use the loss on the development set to characterize significance. Detailed descriptions of the tasks and Transformers used in our experiments are given in Appendix E. Additional results on the GLUE test set are presented in Appendix F.
Primary Results. We present results on GLUE (Wang et al. (2019)) in Table 1, and on SQuAD v1.1 (Rajpurkar et al. (2016)) and Penn Treebank (Marcus et al. (1994)) in Table 2. When optimizing for speed, we aim to reduce inference time as much as possible while maintaining at least 99.5% of baseline accuracy. While the focus in these experiments is on speed, we find that our framework still leads to models that are 1.29×-10.65× smaller due to TransElements being dropped from the pre-trained model. On the other hand, when optimizing for size, we focus on reducing the model size as much as possible while maintaining the <0.5% accuracy degradation constraint. We use uniform quantization with Quantization-Aware Training, as proposed in Q8BERT (Zafrir et al. (2019)), within our hierarchical framework to quantize insignificant blocks, heads and weight groups to lower precisions. This leads to models that are smaller than, and at least as fast as, a uniformly 8-bit-quantized model such as Q8BERT (Table 1). Our results are competitive with QBERT (Shen et al. (2020)), while maintaining the advantages of uniform 8-bit quantization over the group-wise quantization proposed in QBERT. The compression is lowest for ALBERT, since its parameters are shared across layers and most of the compression comes from quantization. While the focus in these experiments is on size, we find that our framework still leads to models that are 1.07×-3.26× faster due to elements being dropped from the pre-trained model, with potential for much greater speedups on optimized 8-bit integer kernels. When optimizing for accuracy, the goal is to maximize the accuracy of the pre-trained Transformer model for any given downstream task. While the focus in these experiments is on accuracy, we find that our framework still leads to models that are 1.28×-9.83× smaller and 1.03×-2.94× faster due to TransElements being dropped from the pre-trained model.
Table 2: [Left] Results on SQuAD v1.1. We report the Exact Match score. The compression is lowest for ALBERT since parameters are shared across layers, and most of the compression is from quantization. [Right] Results on Penn Treebank. We report perplexity (lower is better).
Figure 3: Tuning the SA approximation knobs with size and speed focus. The average GLUE scores across the 9 tasks using Q8BERT are reported for different acceptable accuracy loss levels." }, { "heading": "Tuning the Approximation Knobs:", "text": "In this work, we considered a tight accuracy constraint of <0.5% accuracy degradation while optimizing the model, and determined the hyperparameter values (PruneThreshold and ApproxThreshold) empirically for that constraint. However, users of different applications and platforms may be willing to relax the accuracy constraint to obtain faster or smaller models. In view of this, we demonstrate the ability of our framework to operate at different points on the speed-size-accuracy trade-off curve (Fig. 3) through different hyperparameter values. We note that directly using optimized pre-trained Transformers for inference works best when there is a need for significant speed/size improvement with negligible loss in accuracy (<2%), or when there is a need for more accurate models. When significant degradation in accuracy (>3%) is acceptable, techniques that distill knowledge into simpler networks that no longer maintain the structure of Transformers may be more beneficial.
Even in these situations, our techniques remain useful, since they produce better teachers/baselines for distillation/architecture search.
Comparison to previously proposed compression techniques: A majority of previous works on improving the efficiency of Transformers directly pre-train efficient models from scratch. Using a representative subset of these networks (covering the most commonly used techniques for creating efficient models), we demonstrate that our techniques are complementary, since these efficient networks are still fine-tuned for different downstream tasks, providing opportunities for optimization. In addition, we show that our techniques are also complementary to Q8BERT, a quantization-based method. Poor Man's BERT (Sajjad et al. (2020)) evaluated several layer-dropping strategies that do not require pre-training, and found top-layer dropping to produce the least accuracy degradation across tasks. Comparing our framework to top-layer dropping, we observe greater speedups/compression at iso-accuracy across all tasks and networks; the largest benefits are observed on Q8BERT, where the use of quantization greatly reduces the resilience of the network, making it unsuitable for drastic changes such as dropping entire layers. By approaching the problem of improving inference efficiency in a hierarchical manner and with finer granularity, we are able to exploit redundancies in the model that a layer-only strategy misses, achieving greater benefits without significant loss in accuracy. In fact, in our experiments, we observe that starting on the layers as a first step leads to worse models than starting with blocks. We find that the effect of removing an ATTN block of relatively high significance may be masked by removing an FFN block of very low significance in the same layer (and vice versa), leading to low significance for the entire layer. This has consequences further along in the process, since removing a high-significance block greatly reduces further opportunities for pruning and approximating the model. For experiments with LayerDrop (Fan et al. (2020)), we experiment on RoBERTa (Liu et al. (2019)) using fairseq (Ott et al. (2019)) pre-trained with a layer drop rate of 0.5, and then drop every other layer at fine-tuning time. For QBERT, we directly use the results reported by the authors (Table 3).
Impact on fine-tuning time. Unlike the baseline models, our framework requires multiple fine-tuning passes to optimize the model (Table 4). We minimize this overhead in two ways. First, since our iterative method potentially eliminates a component in each pass and our ordering of elements ensures that time-consuming components are eliminated early, each subsequent optimization fine-tuning pass takes less time. Second, for the optimization fine-tuning passes on large datasets, we do not use the entire dataset. Instead, we compute the thresholds based on a smaller subset of the target data. Specifically, we randomly sample a small subset of the training data (~20%) to fine-tune the model, and a validation set (~15% of the training set) to characterize significance. We find empirically that doing so results in the same elements getting pruned and approximated as when the entire training data is used. We further see that this subsampling is robust across models: if the reduced dataset works for one model, it works for all other models. Thus, by both greedily reducing the size of the model to be fine-tuned and reducing the amount of work performed in each optimization fine-tuning pass, we can quickly explore the search space. A minimal sketch of this subsampling step is given below."
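As an illustration of the subsampling just described, the sketch below draws disjoint ~20%/~15% index splits from the training data; the function itself is hypothetical, and any split respecting these fractions would do.

import random

def subsample_for_optimization(num_train_examples, seed=0):
    # ~20% of the training data for optimization fine-tuning passes and a
    # disjoint ~15% of the training data to characterize significance.
    idx = list(range(num_train_examples))
    random.Random(seed).shuffle(idx)
    n_ft = int(0.20 * num_train_examples)
    n_val = int(0.15 * num_train_examples)
    return idx[:n_ft], idx[n_ft:n_ft + n_val]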
}, { "heading": "6 CONCLUSION", "text": "We proposed an approximate computing framework to optimize pre-trained Transformers. The framework identifies elements that are insignificant for the downstream task at hand, and applies techniques to prune or approximate these elements. We demonstrated that this framework can be adapted to produce models that are faster, smaller or more accurate, depending on the user's constraints. Using this framework, we produced models that were up to 4.22× faster, up to 14.08× smaller (with less than 0.5% relative accuracy degradation) and up to 5.46 absolute percentage points more accurate, with simultaneous speed and size benefits." }, { "heading": "A DETAILED DESCRIPTION OF THE OPTIMIZATION FRAMEWORK", "text": "" }, { "heading": "A.1 ALGORITHMS AND ILLUSTRATION OF PRUNING STRATEGIES", "text": "Algorithm 1: Creating TransElement Queue
Input: Pre-trained Transformer T, Focus of optimization F, Downstream task D
Output: TransElement Queue Q, containing ordered elements of T for Significance Analysis
Q = empty queue()
if FFN blocks are more time-consuming/parameter-intensive then
    if knowledge in bottom layers is more important then
        for layer = 1 to num_layers do
            Q[granularity=0].push(FFN block[layer])
            Q[granularity=1].push(FFN block[layer].Weight Groups)
        for layer = 1 to num_layers do
            Q[granularity=0].push(ATTN block[layer])
            Q[granularity=1].push(ATTN block[layer].Attention Heads)
            Q[granularity=2].push(ATTN block[layer].Weight Groups)
    else if knowledge in top layers is more important then
        for layer = num_layers to 1 do
            Q[granularity=0].push(FFN block[layer])
            Q[granularity=1].push(FFN block[layer].Weight Groups)
        for layer = num_layers to 1 do
            Q[granularity=0].push(ATTN block[layer])
            Q[granularity=1].push(ATTN block[layer].Attention Heads)
            Q[granularity=2].push(ATTN block[layer].Weight Groups)
else if ATTN blocks are more time-consuming/parameter-intensive then
    if knowledge in bottom layers is more important then
        for layer = 1 to num_layers do
            Q[granularity=0].push(ATTN block[layer])
            Q[granularity=1].push(ATTN block[layer].Attention Heads)
            Q[granularity=2].push(ATTN block[layer].Weight Groups)
        for layer = 1 to num_layers do
            Q[granularity=0].push(FFN block[layer])
            Q[granularity=1].push(FFN block[layer].Weight Groups)
    else if knowledge in top layers is more important then
        for layer = num_layers to 1 do
            Q[granularity=0].push(ATTN block[layer])
            Q[granularity=1].push(ATTN block[layer].Attention Heads)
            Q[granularity=2].push(ATTN block[layer].Weight Groups)
        for layer = num_layers to 1 do
            Q[granularity=0].push(FFN block[layer])
            Q[granularity=1].push(FFN block[layer].Weight Groups)
return Q

Algorithm 2: Significance Analysis
Input: Current state of the Transformer model T, fine-tuning dataset (or its reduced subset) D, TrialElement E, thresholds {Pruning Threshold, Approximation Threshold}
Output: Action to be performed on E (prune E, approximate E, or retain E as-is)
action = NULL
T1 = T − E
TransElement Loss = Fine-tune(T1, D)
if TransElement Loss < Pruning Threshold then
    action = "Prune"
else if Pruning Threshold <= TransElement Loss < Approximation Threshold then
    action = "Approximate"
return action

Algorithm 3: Transformer Optimization
Input: Pre-trained Transformer T, fine-tuning dataset D (and its reduced subset D' for large datasets;
otherwise D' = D), Focus of optimization F, Acceptable accuracy loss A
Output: Optimized and fine-tuned Transformer T' for the given task
Baseline Loss = Fine-tune(T, D')
Pruning Threshold, Approximation Threshold = Compute Thresholds(Baseline Loss, F, A)
Q = Create TransElement Queue(T, F, D')
granularity = 0
while Q is not empty do
    TrialElement = Q[granularity].pop()
    action = Significance Analysis(T, D', TrialElement, Pruning Threshold, Approximation Threshold)
    if action = "Prune" then
        modified TransElement = None
        Q.pop(all elements encompassed by TrialElement)
    else if action = "Approximate" then
        if granularity = max granularity then
            if Focus = "Accuracy" then
                modified TransElement = TrialElement
            else if Focus = "Size" then
                modified TransElement = quantize lower(TrialElement)
            else if Focus = "Speed" then
                if TrialElement is encompassed by an FFN block then
                    modified TransElement = TrialElement
                else if TrialElement is encompassed by an ATTN block then
                    modified TransElement = Sign Matching(TrialElement)
        else
            modified TransElement = TrialElement
    else
        Q.pop(all elements encompassed by TrialElement)
        modified TransElement = TrialElement
    if Q[granularity] is empty then
        granularity++
    T = T − TrialElement + modified TransElement
T, Final Loss = Fine-tune(T, D)
return T" }, { "heading": "B SIGN MATCHING - DETAILED DESCRIPTION AND ABLATION STUDIES", "text": "" }, { "heading": "B.1 ALGORITHM", "text": "Algorithm 4: Sign Matching
Input: Set of query vectors Query = [q1, q2, ..., qn], set of key vectors Key = [k1, k2, ..., kn], set of value vectors Value = [v1, v2, ..., vn], number of key vectors to select K
Output: Set of key vectors with the highest sign match with the query vectors Key' = [k'1, k'2, ..., k'K], and the set of corresponding value vectors Value' = [v'1, v'2, ..., v'K]
/* Build the sign representation of the query vectors */
i ← 1; count ← 0
while i <= d do
    j ← 1
    while j <= n do
        if q_{j,i} > 0 then
            count[i] ← count[i] + 1
        j ← j + 1
    if count[i] >= n/2 then
        val[i] ← 1
    else
        val[i] ← −1
    i ← i + 1
/* Compute sign matches of the key vectors with the representative query vector */
i ← 1
while i <= n do
    H_Dist[i] ← Hamming_Distance(sign(k_i), val)
    i ← i + 1
matches ← indices(sort_ascending(H_Dist))
matches ← matches[1 : K]
Key' ← gather(Key, matches)
Value' ← gather(Value, matches)" }, { "heading": "B.2 SIGN MATCHING IN AUTO-REGRESSIVE MODELS", "text": "In auto-regressive models (XLNet, GPT-2, Transformer-XL, etc.), tokens are not allowed to attend to tokens in the future, and an attention mask is applied to set the corresponding weights to a large negative value. This is a problem because the key vectors selected by SM may be such that vectors at the start of the sequence (the first few query vectors) cannot attend to any of the selected key vectors (i.e., their attention outputs will be 0), leading to significant loss of information and degradation in accuracy. We avoid this by selecting the top-scoring K/4 vectors from the top quarter of the key matrix, plus the top-scoring 3K/4 vectors from the overall key matrix that are not among the K/4 vectors initially selected, instead of directly selecting the top K vectors from the overall key matrix. This reduces the probability of vectors having no vectors from their past to attend to." }, { "heading": "B.3 COMPARISON TO OTHER DYNAMIC KEY-SELECTION TECHNIQUES", "text": "We compare our sign-matching-based attention with other intuitive dynamic key-selection techniques that do not require training from scratch and provide speedups even at small context lengths.
We find that Sign Matching provides the best trade-off between accuracy and speed. The techniques considered are described below:
Norm-based Selection (NBS). NBS is based on the idea that the "most important" key vectors (corresponding to the "most important" tokens in the dataset) will have the highest norms, and hence the highest attention (dot-products) with the query vectors. The key vectors are ranked in descending order of their norm, and the top K vectors are selected. Attention is then computed only between the query vectors and the selected key vectors.
Value-based Selection (VBS). One disadvantage of NBS is that a vector with only one very large value will have a high norm, but is unlikely to produce high dot-products with other vectors. VBS overcomes this by using the distribution of values in a vector, rather than its norm, as the selection criterion. In particular, we count the number of elements in each vector greater than a specified "value threshold". We then select the K vectors with the maximum number of elements whose absolute values exceed the "value threshold".
Grouped Attention (GA). GA places vectors that are likely to have high dot-products with each other in the same group, and vectors likely to have low dot-products with each other in different groups, with high probability. The concept of GA was previously explored in Reformer (Kitaev et al. (2020)), where Locality Sensitive Hashing was used to group vectors. However, since we apply this approximation only to resilient layers, we use a simpler and faster grouping criterion: the position of the maximum and minimum value. In addition, there is no need for the multiple iterations of grouping and computing attention scores that were used in Reformer to ensure that query-key pairs with high attention scores are placed in the same group in at least one of the iterations. Both of these factors together greatly reduce our overheads, enabling speedups even at small context lengths. Our grouping criterion is based on the intuition that vectors that have their highest positive/negative values in the same position will have high dot-products with each other. Attention scores are then computed only between query-key pairs in the same group, since they are most likely to have high dot-products with each other. We limit the number of key vectors in each group to the K vectors with the highest absolute value in that position, and therefore GA scales linearly in time and memory with context length instead of quadratically." }, { "heading": "B.4 SPEEDUP WITH INCREASING CONTEXT LENGTH", "text": "Since Sign Matching is a linear-time approximation of the quadratic self-attention operation, the speedup increases significantly with context length. As context length increases, ATTN blocks become time-dominant, and hence more emphasis is placed on these blocks by our framework. In addition, the memory requirements increase quadratically with context length due to the self-attention operation, making Transformers extremely memory-bottlenecked; Sign Matching helps alleviate this bottleneck. Through the combination of these factors, we find a large increase in the speedup from our Sign Matching technique as context length increases (Table 5)." }, { "heading": "C ANALYSIS OF THE OPTIMIZATION FRAMEWORK", "text": "" }, { "heading": "C.1 PREVIOUSLY PROPOSED METHODS FOR ESTIMATING IMPORTANCE OF ELEMENTS ARE NOT APPLICABLE", "text": "Due to the enormous size of Transformer models, brute-force approaches to estimating importance are not feasible.
In addition, previously proposed techniques for efficient importance estimation are not well-suited, because Transformers cannot be repeatedly fine-tuned to recover the accuracy losses from approximating the model: they very quickly overfit the training data for the downstream tasks (usually within 5 epochs). Therefore, Taylor expansion, which uses the gradient of the loss to estimate importance, is not reliable, as evidenced in Table 6. We observe that in addition to providing greater control over the accuracy of the final model (and the ability to increase accuracy), our framework also provides better speedup and compression at similar accuracy." }, { "heading": "C.2 EVALUATION OF HEURISTICS USED", "text": "We compare different possible heuristics to the ones used in our framework (Table 7) on MRPC using DistilBERT. When we remove the element with the lowest loss in each iteration (with loss characterized using our method), there is negligible change in the quality of the final model produced, but the fine-tuning+optimization process is an order of magnitude slower if elements are still considered in order of coarser to finer granularity, and two orders of magnitude slower otherwise, compared to our approach. If the loss is characterized using Taylor expansion, it greatly destabilizes the system, leading to models that do not meet the accuracy constraints. To verify that our greedy approach combined with a global error bound does not lead to inferior models, we also try an adaptive loss threshold: a very tight constraint when analyzing elements at coarse granularities, relaxed as we move towards finer granularities. We again find negligible change in the quality of the final model produced, but the fine-tuning+optimization process is significantly slower. We hypothesize that a single global error bound is sufficient because, for the given task at hand, our ordering places at the head of the queue the elements that the linguistic knowledge in the different layers suggests are likely to be removable. It is therefore reasonable to expect that if an element at the head of the queue is identified by our framework as prunable, it can be pruned without using up a large portion of the error budget." }, { "heading": "C.3 GAINS FROM DIFFERENT OPTIMIZATION TECHNIQUES", "text": "The gains obtained from the different optimization techniques for different tasks and models depend on two factors: the number of elements to which each technique is applied, and the gain from applying each technique to a single element. In general, we observe that the largest gains are obtained from pruning the entire blocks that are most time-consuming/parameter-intensive. This means that at small context lengths (such as BERT-Base on MRPC in Fig. 6), pruning entire FFN blocks produces the maximum gain, and at large context lengths (such as GPT-2 Base on Penn Treebank in Fig. 6), pruning entire ATTN blocks provides the maximum gain. Our analysis also demonstrates that all techniques are vital for producing highly optimized models, since no single strategy can provide drastic gains with minimal accuracy degradation." }, { "heading": "D ANALYSIS OF IMPORTANT ELEMENTS ACROSS DOWNSTREAM TASKS AND TRANSFORMER MODELS", "text": "We study which blocks are pruned and approximated for different downstream tasks using different Transformers (Fig. 7).
We find that the differences in the importance of blocks are more pronounced across different tasks than across different models. For example, for sentiment analysis, long-range dependency information is not very important; hence, for all models fine-tuned for sentiment analysis, we observe that components in the later layers (closer to the output) are more likely to be pruned. Across models, we only observe subtle differences. For example, we find that XLNet (auto-regressive) is able to learn important task-specific information earlier than BERT (non-auto-regressive), similar to the observation made in (Sajjad et al. (2020)). Hence, we are able to drop more components (in earlier layers) in XLNet than in BERT, leading to more efficient models for inference. In DistilBERT (a distilled model), we find that there is a clear demarcation of linguistic knowledge across layers due to the reduced capacity of the model. This is evidenced by the fact that components in the top four layers are never pruned across all language understanding tasks, while the boundaries are softer in the original models. At a high level, these trends agree with previous works on understanding how Transformers process language (Jawahar et al. (2019)) that use probing classifiers to discern the linguistic knowledge captured by the different layers. For example, on GLUE tasks, we expect that long-range dependency information is not required for most tasks, since most of these tasks depend on local contexts. This is confirmed by the fact that, under our framework, blocks in later layers are more likely to be pruned/approximated than those in earlier layers. Similarly, we expect that this is not the case for language modelling, since long-range dependency information is vital for fully understanding the text; this is also confirmed by the trends observed with our framework. As future work, our framework can be combined with previously proposed techniques to gain a deeper understanding of the inner workings of Transformers, especially at finer levels of granularity." }, { "heading": "E EXPERIMENT DETAILS AND HYPERPARAMETERS", "text": "" }, { "heading": "E.1 DESCRIPTION OF TASKS AND TRANSFORMERS USED IN OUR EXPERIMENTS", "text": "" }, { "heading": "E.2 HYPERPARAMETERS USED IN OUR EXPERIMENTS", "text": "" }, { "heading": "F RESULTS ON THE GLUE TEST SET", "text": "While our main results are on the dev set following standard practice, we also report results on the test set using the BERT (base) model in Table 10. We use the GLUE evaluation server to obtain the scores, and make use of code from Xu et al. (2020) to prepare the data for submission." } ]
2020
null
SP:c0f80cb8844c1d9e6490f25a0b8feaa27557086c
[ "This submission proposes a new method of learning from data with partially observed labels. In this problem, every instance has a label candidate set, which contains the true label. This submission introduces adversarial learning to improve the disambiguation of inexact labels. Particularly, there are two adversarial learning component. In the first component, a generator tries to match the distribution of label candidate sets given the \"true\" label of an instance. In the second component, a generator tries to learn the distribution of instances give their \"true\" labels. Since the the \"true\" label is not accessible, the \"true\" label is actually from a predictive model. " ]
Partial label (PL) learning tackles the problem where each training instance is associated with a set of candidate labels that includes both the true label and irrelevant noise labels. In this paper, we propose a novel multi-level generative model for partial label learning (MGPLL), which tackles the PL problem by learning both a label-level adversarial generator and a feature-level adversarial generator under a bi-directional mapping framework between the label vectors and the data samples. MGPLL uses a conditional noise label generation network to model the non-random noise labels and perform label denoising, and uses a multi-class predictor to map the training instances to the denoised label vectors, while a conditional data feature generator is used to form an inverse mapping from the denoised label vectors to data samples. Both the noise label generator and the data feature generator are learned in an adversarial manner to match the observed candidate labels and data features, respectively. We conduct extensive experiments on both synthesized and real-world partial label datasets. The proposed approach demonstrates state-of-the-art performance for partial label learning.
[]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": null, "year": 1904 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Forrest Briggs", "Xiaoli Z Fern", "Raviv Raich" ], "title": "Rank-loss support instance machines for miml instance annotation", "venue": "In Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining (KDD),", "year": 2012 }, { "authors": [ "Jingwen Chen", "Jiawei Chen", "Hongyang Chao", "Ming Yang" ], "title": "Image blind denoising with generative adversarial network based noise modeling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Timothee Cour", "Ben Sapp", "Ben Taskar" ], "title": "Learning from partial labels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Thomas G Dietterich", "Ghulum Bakiri" ], "title": "Solving multiclass learning problems via error-correcting output codes", "venue": "Journal of artificial intelligence research,", "year": 1994 }, { "authors": [ "Qi Dou", "Cheng Ouyang", "Cheng Chen", "Hao Chen", "Pheng-Ann Heng" ], "title": "Unsupervised crossmodality domain adaptation of convnets for biomedical image segmentations with adversarial loss", "venue": "arXiv preprint arXiv:1804.10916,", "year": 2018 }, { "authors": [ "Jun-Peng Fang", "Min-Ling Zhang" ], "title": "Partial multi-label learning via credible label elicitation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lei Feng", "Bo An" ], "title": "Leveraging latent label distributions for partial label learning", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Lei Feng", "Bo An" ], "title": "Partial label learning by semantic difference maximization", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2019 }, { "authors": [ "Chen Gong", "Tongliang Liu", "Yuanyan Tang", "Jian Yang", "Jie Yang", "Dacheng Tao" ], "title": "A regularization approach for instance-based superset label learning", "venue": "IEEE transactions on cybernetics,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Matthieu Guillaumin", "Jakob Verbeek", "Cordelia Schmid" ], "title": "Multiple instance metric learning from automatically labeled bags of faces", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2010 }, { "authors": [ "Eyke Hüllermeier", "Jürgen Beringer" ], "title": "Learning from ambiguously labeled examples", "venue": "Intelligent Data Analysis,", "year": 2006 }, { "authors": [ "Rong Jin", "Zoubin Ghahramani" ], 
"title": "Learning with multiple labels. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2003 }, { "authors": [ "Feng Lei", "Bo An" ], "title": "Partial label learning with self-guided retraining", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Liping Liu", "Thomas G Dietterich" ], "title": "A conditional multinomial mixture model for superset label learning", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jie Luo", "Francesco Orabona" ], "title": "Learning from candidate labeling sets", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2010 }, { "authors": [ "Nam Nguyen", "Rich Caruana" ], "title": "Classification with partial labels", "venue": "In Proceedings of the ACM SIGKDD international conference on Knowledge discovery and data mining (KDD),", "year": 2008 }, { "authors": [ "Gabriel Panis", "Andreas Lanitis" ], "title": "An overview of research activities in facial age estimation using the fg-net aging database", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Cai-Zhi Tang", "Min-Ling Zhang" ], "title": "Confidence-rated discriminative partial label learning", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2017 }, { "authors": [ "Kiran K Thekumparampil", "Ashish Khetan", "Zinan Lin", "Sewoong Oh" ], "title": "Robustness of conditional gans to noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "Deng-Bao Wang", "Li Li", "Min-Ling Zhang" ], "title": "Adaptive graph guided disambiguation for partial label learning", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Haobo Wang", "Weiwei Liu", "Yang Zhao", "Chen Zhang", "Tianlei Hu", "Gang Chen" ], "title": "Discriminative and correlative partial multi-label learning", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xuan Wu", "Min-Ling Zhang" ], "title": "Towards enabling binary decomposition for partial label learning", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Ming-Kun Xie", "Sheng-Jun Huang" ], "title": "Partial multi-label learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ning Xu", "Jiaqi Lv", "Xin Geng" ], "title": "Partial label learning via label enhancement", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Yilun Xu", "Peng Cao", "Yuqing Kong", "Yizhou Wang. 
L" ], "title": "dmi: A novel information-theoretic loss function for training deep nets robust to label noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yan Yan", "Yuhong Guo" ], "title": "Partial label learning with batch label correction", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ "Yao Yao", "Chen Gong", "Jiehui Deng", "Xiuhua Chen", "Jianxin Wu", "Jian Yang" ], "title": "Deep discriminative cnn with temporal ensembling for ambiguously-labeled image classification", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ "Fei Yu", "Min-Ling Zhang" ], "title": "Maximum margin partial label learning", "venue": "In Asian Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zinan Zeng", "Shijie Xiao", "Kui Jia", "Tsung-Han Chan", "Shenghua Gao", "Dong Xu", "Yi Ma" ], "title": "Learning by associating ambiguously labeled images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2013 }, { "authors": [ "Min-Ling Zhang", "Fei Yu" ], "title": "Solving the partial label learning problem: An instance-based approach", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2015 }, { "authors": [ "Min-Ling Zhang", "Bin-Bin Zhou", "Xu-Ying Liu" ], "title": "Partial label learning via feature-aware disambiguation", "venue": "In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD),", "year": 2016 }, { "authors": [ "Min-Ling Zhang", "Fei Yu", "Cai-Zhi Tang" ], "title": "Disambiguation-free partial label learning", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Zhengli Zhao", "Dheeru Dua", "Sameer Singh" ], "title": "Generating natural adversarial examples", "venue": "arXiv preprint arXiv:1710.11342,", "year": 2017 }, { "authors": [ "Zhi-Hua Zhou" ], "title": "A brief introduction to weakly supervised learning", "venue": "National Science Review,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Partial label (PL) learning is a weakly supervised learning problem with ambiguous labels (Hüllermeier & Beringer, 2006; Zeng et al., 2013), where each training instance is assigned a set of candidate labels, among which only one is the true label. Since it is typically difficult and costly to annotate instances precisely, the task of partial label learning naturally arises in many real-world learning scenarios, including automatic face naming (Hüllermeier & Beringer, 2006; Zeng et al., 2013), and web mining (Luo & Orabona, 2010).\nAs the true label information is hidden in the candidate label set, the main challenge of PL lies in identifying the ground truth labels from the candidate noise labels, aiming to learn a good prediction model. Some previous works have made effort on adjusting the existing effective learning techniques to directly handle the candidate label sets and perform label disambiguation implicitly (Gong et al., 2018; Nguyen & Caruana, 2008; Wu & Zhang, 2018). These methods are good at exploiting the strengths of the standard classification techniques and have produced promising results on PL learning. Another set of works pursue explicit label disambiguation by trying to identify the true labels from the noise labels in the candidate label sets. For example, the work in (Feng & An, 2018) tries to estimate the latent label distribution with iterative label propagations and then induce a prediction model by fitting the learned latent label distribution. Another work in (Lei & An, 2019) exploits a self-training strategy to induce label confidence values and learn classifiers in an alternative manner by minimizing the squared loss between the model predictions and the learned label confidence matrix. However, these methods suffer from the cumulative errors induced in either the separate label distribution estimation steps or the error-prone label confidence estimation process. Moreover, all these methods have a common drawback: they automatically assumed random noise in the label space – that is, they assume the noise labels are randomly distributed in the label space for each instance. However, in real world problems the appearance of noise labels is usually dependent on the target true label. For example, when the object contained in an image is a “computer”, a noise label “TV” could be added due to a recognition mistake or image ambiguity, but it is less likely to annotate the object as “lamp” or “curtain”, while the probability of getting noise labels such as “tree” or “bike” is even smaller.\nIn this paper, we propose a novel multi-level adversarial generative model, MGPLL, for partial label learning. The MGPLL model comprises of conditional data generators at both the label level and feature level. The noise label generator directly models non-random appearances of noise labels conditioning on the true label by adversarially matching the candidate label observations, while the data feature generator models the data samples conditioning on the corresponding true labels by adversarially matching the observed data sample distribution. Moreover, a prediction network is incorporated to predict the denoised true label of each instance from its input features, which forms inverse mappings between labels and features, together with the data feature generator. 
The learning of the overall model corresponds to a minimax adversarial game, which simultaneously identifies the true labels of the training instances from both the observed data features and the observed candidate labels, while inducing an accurate prediction network that maps input feature vectors to (denoised) true label vectors. To the best of our knowledge, this is the first work that exploits multi-level generative models to model non-random noise labels for partial label learning. We conduct extensive experiments on real-world and synthesized PL datasets, and the empirical results show that the proposed MGPLL achieves state-of-the-art PL performance." }, { "heading": "2 RELATED WORK", "text": "Partial label (PL) learning is a popular weakly supervised learning framework (Zhou, 2018) in many real-world domains, where the true label of each training instance is hidden within a given candidate label set. The challenge of PL learning lies in disambiguating the true labels from the candidate label sets to induce good prediction models.
One strategy towards PL learning is to adjust standard learning techniques and implicitly disambiguate the noise candidate labels through the statistical prediction pattern of the data. For example, with maximum likelihood techniques, the likelihood of each PL training sample can be defined over its candidate label set instead of its implicit ground-truth label (Jin & Ghahramani, 2003; Liu & Dietterich, 2012). For the k-nearest neighbor technique, the candidate labels of neighboring instances can be aggregated to induce the final prediction on a test instance (Hüllermeier & Beringer, 2006; Gong et al., 2018; Zhang & Yu, 2015). For the maximum margin technique, the classification margin can be defined over the predictive difference between the candidate labels and the non-candidate labels for each PL training sample (Nguyen & Caruana, 2008; Yu & Zhang, 2016). For the boosting technique, the weight of each PL training instance and the confidence value of each candidate label being the ground-truth label can be refined in each boosting round (Tang & Zhang, 2017). For the error-correcting output codes (ECOC) technique, multiple binary classifiers corresponding to the ECOC coding matrix are built on the transformed binary training sets (Zhang et al., 2017). For binary decomposition techniques, a one-vs-one decomposition strategy has been adopted to address PL learning by considering the relevance of each label pair (Wu & Zhang, 2018).
Recently, there has been increasing attention on designing explicit feature-aware disambiguation strategies (Feng & An, 2018; Xu et al., 2019a; Feng & An, 2019; Wang et al., 2019a). The authors of (Feng & An, 2018) attempt to refine the latent label distribution using iterative label propagation and then induce a predictive model based on the learned latent label distribution. However, the latent label distribution estimation in this approach can be impaired by the cumulative error induced in the propagation process, which can consequently degrade the PL learning performance, especially when the noisy labels dominate. Another work in (Lei & An, 2019) tries to refine the label confidence values with a self-training strategy and induce the prediction model over the refined label confidence scores via alternating optimization. Its estimation error on confidence values, however, can negatively impact the coupled partial label classifier due to the nature of alternating optimization.
A recent work in (Yao et al., 2020) proposes to address the PL learning problem by enhancing the representation ability via deep features and improving the discrimination ability through margin maximization between the candidate labels and the non-candidate labels. Another recent work in (Yan & Guo, 2020) proposes to dynamically correct label confidence values with a batch-wise label correction strategy and induce a robust predictive model based on MixUp-enhanced data. Although these works demonstrate good empirical performance, they share a common drawback of assuming random distributions of noise labels by default, which does not hold in many real-world learning scenarios. This paper presents the first work that explicitly models non-random noise labels for partial label learning.
PL learning is related to other types of weakly supervised learning problems, including noise label learning (NLL) (Xu et al., 2019b; Thekumparampil et al., 2018; Arazo et al., 2019) and partial multi-label learning (PML) (Wang et al., 2019b; Fang & Zhang, 2019; Xie & Huang, 2018), but addresses a different problem from both. The main difference between PL learning and these two well-established learning problems lies in the assumption on the label information provided by the training samples. Both PL learning and NLL aim to induce a multi-class prediction model from training instances with noise-corrupted labels. However, NLL assumes the true labels on some training instances are replaced by noise labels, while PL assumes the true label coexists with the noise labels in the candidate label set of each training instance; hence, off-the-shelf NLL methods cannot be directly applied to the PL learning problem. Both PL learning and PML learn from training samples with ambiguous candidate label sets, which contain the true labels and additional noise labels. But PL learning addresses a multi-class learning problem where each candidate label set contains exactly one true label, while PML addresses a multi-label learning problem where each candidate label set contains an unknown number of true labels.
The Wasserstein Generative Adversarial Network (WGAN) (Arjovsky et al., 2017), which performs minimax adversarial training with a generator and a discriminator, is a popular alternative to the standard GAN (Goodfellow et al., 2014b) due to its effective and stable training. Over the past few years, WGANs have been successfully applied in various settings, including adversarial sample generation (Zhao et al., 2017), domain adaptation (Dou et al., 2018), and learning with noisy labels (Chen et al., 2018). This paper presents the first work that exploits WGANs to model non-random noise labels for partial label learning." }, { "heading": "3 PROPOSED APPROACH", "text": "Given a partial label training set S = {(x_i, y_i)}_{i=1}^{n}, where x_i ∈ R^d is a d-dimensional feature vector for the i-th instance, and y_i ∈ {0, 1}^L denotes the candidate label indicator vector associated with x_i, which has multiple 1 values corresponding to the ground-truth label and the additional noise labels, the task of PL learning is to learn a good multi-class prediction model from S. In real-world scenarios, the irrelevant noise labels are typically not presented in a random manner, but rather correlated with the ground-truth label.
In this section, we present a novel multi-level generative model for partial label learning, MGPLL, which models non-random noise labels using an adversarial conditional noise label generator, and builds connections between the denoised label vectors and instance features using a label-conditioned feature generator and a label prediction network. The overall model learning problem corresponds to a minimax adversarial game, which conducts multi-level generator learning by matching the observed data in both the feature and label spaces, while boosting the correspondence between features and labels to induce an accurate multi-class prediction model.
Figure 1 illustrates the proposed multi-level generative model, MGPLL, which addresses the partial label learning problem at both the label level and the feature level under a bi-directional mapping framework. The MGPLL model comprises five component networks: the conditional noise label generator, G_n, which models the noise labels conditioned on the ground-truth label at the label level; the conditional data generator, G_x, which generates data samples at the feature level conditioned on the denoised label vectors; the discriminator, D_n, which separates the generated candidate label vectors from the observed candidate label vectors in the real training data; the discriminator, D_x, which separates the generated samples from the real data in the feature space; and the prediction network, F, which predicts the denoised label of each sample from its input features. z_p denotes a one-hot label indicator vector sampled from a multinomial distribution P_z. The conditional noise label generator G_n induces the denoised prediction target for the prediction network F, while the conditional data generator G_x learns an inverse mapping at the feature level that maps the denoised label vectors in the label space to the data samples in the feature space. Below we present the details of the two levels of generation and the overall learning algorithm." }, { "heading": "3.1 CONDITIONAL NOISE LABEL GENERATION", "text": "The key challenge of partial label learning lies in the fact that the ground-truth label is hidden among the noise labels in the given candidate label set. As mentioned above, in real-world partial label learning problems, the presence of noise labels typically does not happen at random, but rather correlates with the ground-truth labels. Hence we propose a conditional noise label generation model that models the appearances of target-label-dependent noise labels by adversarially matching the observed candidate label distribution in the training data, aiming to help identify the true labels.
Specifically, given a noise value ε sampled from a uniform distribution P_ε and a one-hot label indicator vector z sampled from a multinomial distribution P_z, we use a noise label generator G_n(z, ε) to generate a noise label vector conditioned on the true label z, which can be combined with z via a rectified sum, "⊕", to form a generated candidate label vector ỹ, such that

ỹ = G_n(z, ε) ⊕ z = min(G_n(z, ε) + z, 1)    (1)

Here we assume the generator G_n generates non-negative values. We then adopt the adversarial learning principle to learn such a noise label generation model by introducing a discriminator D_n(y), which is a two-class classifier that predicts how likely a given label vector y comes from the real data rather than the generated data.
By adopting the adversarial loss of the Wasserstein Generative Adversarial Network (WGAN), our adversarial learning problem can be formulated as the following minimax optimization problem:

min_{G_n} max_{D_n} L^n_adv(G_n, D_n) = E_{(x_i, y_i) ∼ S}[D_n(y_i)] − E_{z ∼ P_z, ε ∼ P_ε}[D_n(G_n(z, ε) ⊕ z)]    (2)

Here the discriminator D_n attempts to maximally distinguish the generated candidate label vectors from the observed candidate label indicator vectors in the real training data, while the generator G_n tries to generate noise label vectors, and hence candidate label vectors, that are similar to the real data in order to maximally confuse the discriminator D_n. By playing a minimax game between the generator G_n and the discriminator D_n, the adversarial learning is expected to induce a generator G_n^* such that the generated candidate label distribution matches the observed candidate label distribution in the training data. We adopt the training loss of the WGAN here, as WGANs overcome the mode collapse problem and have improved learning stability compared to standard GAN models (Arjovsky et al., 2017).
Note that although the proposed generator G_n is designed to model true-label-dependent noise labels, it can easily be modified to model random noise label distributions by simply dropping the label vector input from the generator, which yields G_n(ε)." }, { "heading": "3.2 PREDICTION NETWORK", "text": "The ultimate goal of partial label learning is to learn an accurate prediction network F. To train a good predictor, we need to obtain denoised labels on the training data. For a candidate label indicator vector y, if the noise label indicator vector y_n is given, one can simply perform label denoising as follows to obtain the corresponding true label vector z:

z = y ⊖ y_n = max(y − y_n, 0)    (3)

Here the rectified minus operator "⊖" is introduced to generalize the standard minus operator "−" to the non-ideal case, where the noise label indicator vector y_n is not entirely contained in the candidate label indicator vector. A small PyTorch sketch of these rectified operations and the corresponding WGAN critic loss is given below.
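The rectified operations of Eqs. (1) and (3) and the critic side of the WGAN objective in Eq. (2) are straightforward to express in PyTorch; the sketch below is illustrative only, and assumes label vectors of length L with entries in [0, 1].

import torch

def rectified_sum(noise, z):
    # Eq. (1): candidate label vector = noise labels "⊕" true label
    return torch.clamp(noise + z, max=1.0)

def rectified_minus(y, y_noise):
    # Eq. (3): denoised label vector = candidate labels "⊖" noise labels
    return torch.clamp(y - y_noise, min=0.0)

def noise_label_critic_loss(D_n, y_real, y_fake):
    # Eq. (2), critic side: maximize D_n on observed candidate label vectors
    # and minimize it on generated ones (written as a loss to minimize).
    return -(D_n(y_real).mean() - D_n(y_fake).mean())

Here y_fake would be rectified_sum(G_n(z, eps), z) for sampled z and eps, and the generator G_n is updated with the opposite sign of the second term.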
}, { "heading": "3.3 CONDITIONAL FEATURE LEVEL DATA GENERATION", "text": "With the noise label generation model and the prediction network above, the observed training data in both the label and feature spaces are exploited to recognize the true labels and induce good prediction models. Next, we incorporate a conditional data generator Gx(z, ε) at the feature level to map (denoised) label vectors in the label space into instances in the feature space, aiming to further strengthen the mapping relations between data samples and the corresponding labels, enhance label denoising, and hence improve the partial label learning performance. Specifically, given a noise value ε sampled from a uniform distribution Pε and a one-hot label vector z sampled from a multinomial distribution Pz, Gx(z, ε) generates an instance in the feature space that corresponds to label z. Given the training label vectors in S denoised with Gn, the data generator Gx is also expected to regenerate the corresponding training instances in the feature space. This assumption can be captured using the following generation loss:

Lg(F, Gn, Gx) = E_{(xi, yi) ∼ S, ε1, ε2 ∼ Pε} ℓg(Gx(zi, ε2), xi),  with zi = yi ⊖ Gn(F(xi), ε1),   (5)

where zi denotes the denoised label vector for the i-th training instance, and ℓg(·, ·) is a mean squared error loss function.
Moreover, by introducing a discriminator Dx(x), which predicts how likely a given instance x is real, we can deploy an adversarial learning scheme to learn the generator Gx through the following minimax optimization problem with the WGAN loss:

min_{Gx} max_{Dx} L^x_adv(Gx, Dx) = E_{(xi, yi) ∼ S} Dx(xi) − E_{z ∼ Pz, ε ∼ Pε} Dx(Gx(z, ε))   (6)

By playing a minimax game between Gx and Dx, this adversarial learning is expected to induce a generator G∗x that can generate samples with the same distribution as the observed training instances. Together with the generation loss in Eq.(5), we expect the mapping relation from label vectors to samples induced by G∗x to be consistent with the observed data. Moreover, the consistency between the mapping relation induced by Gx and the inverse mapping from samples to label vectors through the prediction network F can be further strengthened by enforcing an auxiliary classification loss on the generated data:

Lc′(F, Gx) = E_{z ∼ Pz, ε ∼ Pε} ℓc′(F(Gx(z, ε)), z)   (7)

where ℓc′(·, ·) can be a cross-entropy loss between the label prediction probability vector and the sampled true label indicator vector." }, { "heading": "3.4 LEARNING THE MGPLL MODEL", "text": "By integrating the classification loss in Eq.(4), the adversarial losses in Eq.(2) and Eq.(6), the generation loss in Eq.(5), and the auxiliary classification loss in Eq.(7) together, MGPLL learning can be formulated as the following min-max optimization problem:

min_{Gn, Gx, F} max_{Dn, Dx}  Lc(F, Gn) + L^n_adv(Gn, Dn) + α L^x_adv(Gx, Dx) + β Lg(F, Gn, Gx) + γ Lc′(F, Gx)   (8)

where α, β and γ are trade-off hyperparameters. The learning of the overall model corresponds to a minimax adversarial game. We develop a batch-based stochastic gradient descent algorithm to solve it by conducting minimization over {Gn, Gx, F} and maximization over {Dn, Dx} alternately. The overall training algorithm is provided in the appendix." }, { "heading": "4 EXPERIMENT", "text": "We conducted extensive experiments on both controlled synthetic PL datasets and real-world PL datasets to investigate the empirical performance of the proposed model. In this section, we present our experimental settings, comparison results, and discussions."
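Before the experimental details, the sketch below illustrates one alternating update for the full objective in Eq. (8) above. The five-loss structure follows Eqs. (2)–(8); the weight-clipping constant for the WGAN critics, the noise dimension, and the default trade-off values are my assumptions (the paper reports only RMSProp and batch size 32 in Appendix A.3).

import torch
import torch.nn.functional as nnf

def mgpll_step(x, y, g_n, g_x, f, d_n, d_x, opt_g, opt_d,
               noise_dim=10, alpha=0.1, beta=0.1, gamma=0.1, clip=0.01):
    B, C = y.shape
    z_p = torch.eye(C)[torch.randint(0, C, (B,))]        # z ~ P_z
    eps = lambda: torch.rand(B, noise_dim)               # eps ~ P_eps

    # Maximization over the critics D_n and D_x (Eqs. (2) and (6), negated).
    y_fake = torch.clamp(g_n(z_p, eps()) + z_p, max=1.0)     # Eq. (1)
    x_fake = g_x(z_p, eps())
    d_loss = -(d_n(y).mean() - d_n(y_fake.detach()).mean()) \
             - alpha * (d_x(x).mean() - d_x(x_fake.detach()).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    for p_ in list(d_n.parameters()) + list(d_x.parameters()):
        p_.data.clamp_(-clip, clip)      # crude WGAN Lipschitz constraint (assumed)

    # Minimization over G_n, G_x, and F (Eqs. (4), (5), (7) plus adversarial terms).
    probs = f(x)
    z_den = torch.clamp(y - g_n(probs, eps()), min=0.0)      # Eq. (3) denoising
    l_c = nnf.mse_loss(probs, z_den)                         # Eq. (4)
    y_fake = torch.clamp(g_n(z_p, eps()) + z_p, max=1.0)
    x_fake = g_x(z_p, eps())
    l_adv = -d_n(y_fake).mean() - alpha * d_x(x_fake).mean()
    l_g = nnf.mse_loss(g_x(z_den, eps()), x)                 # Eq. (5)
    l_aux = -(z_p * torch.log(f(x_fake) + 1e-8)).sum(1).mean()  # Eq. (7), CE on probabilities
    g_loss = l_c + l_adv + beta * l_g + gamma * l_aux        # Eq. (8)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()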
}, { "heading": "4.1 EXPERIMENT SETTING", "text": "Datasets The synthetic datasets are generated from six UCI datasets: ecoli, deter, vehicle, segment, satimage and letter. From each UCI dataset, we generated synthetic PL datasets using three controlling parameters p, r and ε, following the controlling protocol in previous studies (Wu & Zhang, 2018; Xu et al., 2019a; Lei & An, 2019). Among the three parameters, p controls the proportion of instances that have noise candidate labels, r controls the number of false positive labels, and ε controls the probability of a specific false positive label co-occurring with the true label. Under different parameter configurations, multiple PL variants can be generated from each UCI dataset. Given that both random noise labels and target-label-dependent noise labels may exist in real-world applications, we considered two types of settings. In the first type of setting, we consider random noise labels with the following three groups of configurations: (I) r = 1, p ∈ {0.1, 0.2, · · ·, 0.7}; (II) r = 2, p ∈ {0.1, 0.2, · · ·, 0.7}; and (III) r = 3, p ∈ {0.1, 0.2, · · ·, 0.7}. In the second type of setting, we consider target-label-dependent noise labels with the following configuration: (IV) p = 1, r = 1, ε ∈ {0.1, 0.2, · · ·, 0.7} (a code sketch of this generation protocol follows the appendix below). In total, the four groups of configurations provide us with 168 (28 configurations × 6 UCI datasets) synthetic PL datasets. We used five real-world PL datasets that are collected from several application domains, including FG-NET (Panis & Lanitis, 2014) for facial age estimation, Lost (Cour et al., 2011) and Yahoo! News (Guillaumin et al., 2010) for automatic face naming in images or videos, MSRCv2 (Dietterich & Bakiri, 1994) for object classification, and BirdSong (Briggs et al., 2012) for bird song classification.
Comparison Methods We compared the proposed MGPLL approach with the following PL methods, each configured with the suggested parameters according to the respective literature: PL-KNN (Hüllermeier & Beringer, 2006), PL-SVM (Nguyen & Caruana, 2008), CLPL (Cour et al., 2011), PALOC (Wu & Zhang, 2018), and SURE (Lei & An, 2019)." }, { "heading": "4.2 RESULTS ON SYNTHETIC PL DATASETS", "text": "We conducted experiments on two types of synthetic PL datasets generated from the UCI datasets, with random noise labels and target-label-dependent noise labels, respectively. For each PL dataset, ten-fold cross-validation is performed and the average test accuracy results are recorded. Figure 2 presents the comparison results for the configuration setting (IV) on four datasets. We can see that the proposed MGPLL consistently outperforms all the other methods.
To statistically study the significance of the performance gains achieved by MGPLL over the other comparison methods, we conducted pairwise t-tests at the 0.05 significance level based on the comparison results of ten-fold cross-validation over all 168 synthetic PL datasets obtained from all the different configuration settings. The detailed win/tie/loss counts between MGPLL and each comparison method are reported in Table 1. From the results, we have the following observations: (1) MGPLL achieves superior or at least comparable performance against PALOC, CLPL, PL-SVM and PL-KNN in all cases, which is not easy given that the comparison methods have different strengths across different datasets. (2) MGPLL significantly outperforms PALOC, CLPL, PL-SVM and PL-KNN in 75.6%, 79.1%, 86.9% and 82.7% of the cases respectively, and produces ties in the remaining cases.
(3) MGPLL significantly outperforms SURE in 61.3% of the cases, achieves comparable performance with SURE in 34.5% of the cases, and is outperformed by SURE in only 4.2% of the cases. (4) On the PL datasets with target-label-dependent noise labels, MGPLL significantly outperforms SURE, PALOC, CLPL, PL-SVM and PL-KNN in 59.5%, 71.4%, 76.2%, 83.3% and 78.6% of the cases, respectively. (5) It is worth noting that MGPLL is never significantly outperformed by any comparison method on datasets with label-dependent noise labels. In summary, these results on the controlled PL datasets clearly demonstrate the effectiveness of MGPLL for partial label learning under different settings." }, { "heading": "4.3 RESULTS ON REAL-WORLD PL DATASETS", "text": "We compared the proposed MGPLL method with the comparison methods on five real-world PL datasets. For each dataset, ten-fold cross-validation is conducted. The mean test accuracy and standard deviation results are reported in Table 2. Moreover, a statistical pairwise t-test at the 0.05 significance level is conducted to compare MGPLL with each comparison method based on the results of ten-fold cross-validation. The significance results are indicated in Table 2 as well. Note that the average number of candidate labels (avg.#CLs) of the FG-NET dataset is quite large, which causes poor performance for all the comparison methods. For better evaluation of this facial age estimation task, we employ the conventional mean absolute error (MAE) (Zhang et al., 2016) to conduct two extra experiments. Two extra test accuracies are reported on the FG-NET dataset, where a test sample is considered to be correctly predicted if the difference between the predicted age and the ground-truth age is less than 3 years (MAE3) or 5 years (MAE5). From Table 2 we have the following observations: (1) Compared with all the other five PL methods, MGPLL consistently produces the best results on all the datasets, with remarkable performance gains in many cases. For example, MGPLL outperforms the best alternative comparison method by 5.2%, 3.4% and 2.0% on MSRCv2, Yahoo! News and BirdSong, respectively. (2) Out of the total 35 comparison cases (5 comparison methods × 7 datasets), MGPLL significantly outperforms all the comparison methods across 77.1% of the cases, and achieves competitive performance in the remaining 22.9% of cases. (3) It is worth noting that the performance of MGPLL is never significantly inferior to any other comparison method. These results again validate the efficacy of the proposed method." }, { "heading": "4.4 ABLATION STUDY", "text": "The objective function of MGPLL contains five loss terms: the classification loss, the adversarial loss at the label level, the adversarial loss at the feature level, the generation loss, and the auxiliary classification loss. To assess the contribution of each part, we conducted an ablation study by comparing MGPLL with the following ablation variants: (1) CLS-w/o-advn, which drops the adversarial loss at the label level; (2) CLS-w/o-advx, which drops the adversarial loss at the feature level; (3) CLS-w/o-g, which drops the generation loss; (4) CLS-w/o-aux, which drops the auxiliary classification loss; and (5) CLS, which uses only the classification loss by dropping all the other loss terms. The comparison results are reported in Table 3. We can see that, compared to the full model, all five variants produce inferior results in general and have performance degradations to different degrees.
This demonstrates that the different components in MGPLL all contribute to the proposed model to some extent. From Table 3, we can also see that the variant CLS-w/o-advn has a relatively larger performance degradation from dropping the adversarial loss at the label level, while the variant CLS-w/o-aux has a small performance degradation from dropping the auxiliary classification loss. This makes sense: by dropping the adversarial loss for learning the noise label generator, the generator can produce poor predictions and seriously impact the label denoising of the MGPLL model. This suggests that our non-random noise label generation through adversarial learning is a very effective and important component of MGPLL. For CLS-w/o-aux, as we already have the classification loss on real data, it is reasonable that the auxiliary classification loss on generated data can help but is not critical. Overall, the ablation results suggest that the proposed MGPLL is effective." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a novel multi-level generative model, MGPLL, for partial label learning. MGPLL uses a conditional label-level generator to model the target-label-dependent non-random noise label appearances, which directly performs candidate label denoising, while using a conditional feature-level generator to generate data samples from denoised label vectors. Moreover, a prediction network is incorporated to predict the denoised true label of each instance from its input features, which, together with the data feature generator, forms bi-directional inverse mappings between labels and features. The adversarial learning of the overall model simultaneously identifies the true labels of the training instances from both the observed data features and the observed candidate labels, while inducing accurate prediction networks that map input feature vectors to (denoised) true label vectors. We conducted extensive experiments on real-world and synthesized PL datasets. The proposed MGPLL model demonstrates state-of-the-art PL performance." }, { "heading": "A APPENDIX", "text": "A.1 THE OVERALL TRAINING ALGORITHM

The overall training algorithm for solving the formulated min-max optimization problem in Eq.(8) is outlined in Algorithm 1.

A.2 THE CHARACTERISTICS OF THE DATASETS

The characteristics of the UCI datasets and the real-world PL datasets are summarized in Table 4.

A.3 IMPLEMENTATION DETAILS

The proposed MGPLL model has five component networks, all of which are designed as multilayer perceptrons with Leaky ReLU activation for the middle layers. The noise label generator is a four-layer network with sigmoid activation in the output layer. The conditional data generator is a five-layer network with tanh activation in the output layer, with batch normalization deployed in its three middle layers. The predictor is a three-layer network with softmax activation in the output layer. Both the noise label discriminator and the data discriminator are three-layer networks without activation in the output layer. We used the RMSProp (Tieleman & Hinton, 2012) optimizer in our implementation, and the mini-batch size m is set to 32.
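The architecture description above can be sketched in PyTorch as follows. Layer counts, activations, the optimizer, and the batch size follow this appendix; hidden widths, the noise dimension, the learning rate, and the exact batch-normalization placement (applied here to all middle layers rather than exactly three) are my placeholders.

import torch
import torch.nn as nn

def mlp(sizes, out_act=None, batch_norm=False):
    # Builds an MLP with Leaky ReLU middle layers and an optional output activation.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:                      # middle layers only
            if batch_norm:
                layers.append(nn.BatchNorm1d(sizes[i + 1]))
            layers.append(nn.LeakyReLU(0.2))
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

D, C, NZ, H = 64, 5, 10, 128   # feature dim, classes, noise dim, hidden width (assumed)
G_n = mlp([C + NZ, H, H, H, C], nn.Sigmoid())                   # four-layer noise label generator
G_x = mlp([C + NZ, H, H, H, H, D], nn.Tanh(), batch_norm=True)  # five-layer data generator
F_net = mlp([D, H, H, C], nn.Softmax(dim=1))                    # three-layer predictor
D_n = mlp([C, H, H, 1])                                         # label discriminator, linear output
D_x = mlp([D, H, H, 1])                                         # data discriminator, linear output
gen_params = list(G_n.parameters()) + list(G_x.parameters()) + list(F_net.parameters())
opt_g = torch.optim.RMSprop(gen_params, lr=1e-4)
opt_d = torch.optim.RMSprop(list(D_n.parameters()) + list(D_x.parameters()), lr=1e-4)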
We selected the hyperparameters α, β and γ from {0.001, 0.01, 0.1, 1, 10} in a heuristic way based on the classification loss value Lc in the training objective function; that is, we chose the values that lead to the smallest training Lc loss.

A.4 MORE RESULTS ON SYNTHETIC PL DATASETS

We conducted experiments on two types of synthetic PL datasets generated from the UCI datasets, with random noise labels and target-label-dependent noise labels, respectively. For each PL dataset, ten-fold cross-validation is performed and the average test accuracy results are recorded. First we study the comparison results over the synthetic PL datasets with target-label-dependent noise labels under the PL configuration setting (IV). In this setting, a specific label is selected as the coupled label that co-occurs with the ground-truth label with probability ε, and any other label can be randomly chosen as a noisy label with probability 1 − ε. Figure 3 presents the comparison results for the configuration setting (IV), where ε increases from 0.1 to 0.7 with p = 1 and r = 1. From Figure 3 we can see that the proposed MGPLL produces impressive results. It consistently outperforms all the other methods across different ε values on four datasets, vehicle, segment, satimage and letter, while achieving remarkable performance gains on segment and satimage. On the other two datasets, ecoli and deter, MGPLL also produces the best results in most cases and remains the most effective method. By contrast, the performance of the other comparison methods varies largely across different datasets. For example, CLPL and SURE demonstrate good performance on ecoli, deter and vehicle, but present inferior results to PL-KNN in many cases on the other three datasets. PALOC and PL-SVM share the drawback of producing poor results on some datasets. Our proposed MGPLL demonstrates good overall performance across these varying cases.

We also conducted experiments on the PL datasets with random noise labels produced under the PL configuration settings (I), (II) and (III). The comparison results for these three sets of configurations are reported in Figure 4, Figure 5 and Figure 6, respectively. From these figures we can see that the proposed MGPLL (with the noise label generator Gn(ε)) achieves similarly positive comparison results as in the configuration setting (IV). In particular, the proposed method achieves remarkable performance gains on four of the six datasets: segment, satimage, vehicle and letter.

A.5 PARAMETER SENSITIVITY ANALYSIS

We also conducted parameter sensitivity analysis on two real-world PL datasets, BirdSong and Yahoo! News, to study how the trade-off hyperparameters α, β and γ influence the performance of MGPLL. We conducted the experiments using different combination settings of the α, β and γ values from {0.001, 0.01, 0.1, 1, 10}. We vary each parameter's value while keeping the other two fixed at their best setting. Note that a larger value of α, β or γ gives larger weight to the feature-level WGAN loss, the generation loss, and the auxiliary classification loss, respectively.

The three panels in Figure 7 report the average test results as well as standard deviations for different α, β and γ values, respectively. We can see that when α is very small, the performance of MGPLL is not very good, since the feature-level WGAN loss is not allowed to contribute much to the learning. With the increase of α, the performance improves, which suggests that the WGAN loss is important.
When α is too large, the performance degrades as the WGAN loss dominates. This is reasonable, since the WGAN loss is expected to help the predictive model rather than dominate the learning process. A similar phenomenon can be observed for γ. For the parameter β, the proposed method performs badly when β is very small. With the increase of β, the performance of MGPLL improves and remains relatively stable over a broader range, i.e., β ∈ [0.01, 1]. This shows that the proposed model is not very sensitive to the β parameter within the considered range of values." } ]
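For reference, the synthetic partial-label generation protocol of Section 4.1 (settings (I)–(IV) above) can be sketched in NumPy as follows. The choice of the coupled label and the handling of edge cases are my assumptions about the protocol, which follows the cited prior work rather than code released with this paper.

import numpy as np

def make_partial_labels(true_labels, num_classes, p=1.0, r=1, epsilon=0.0, seed=0):
    # p: fraction of instances receiving noise labels; r: number of false
    # positives; epsilon: probability that one designated "coupled" label
    # co-occurs with the ground truth (configuration (IV): p = 1, r = 1).
    rng = np.random.default_rng(seed)
    Y = np.eye(num_classes)[true_labels]           # start from one-hot candidates
    for i, c in enumerate(true_labels):
        if rng.random() >= p:
            continue                               # this instance keeps a clean label set
        others = [k for k in range(num_classes) if k != c]
        if epsilon > 0.0:                          # target-label-dependent setting (IV)
            coupled = (c + 1) % num_classes        # assumed coupled-label choice
            if rng.random() < epsilon:
                Y[i, coupled] = 1.0
            else:
                Y[i, rng.choice([k for k in others if k != coupled])] = 1.0
        else:                                      # random-noise settings (I)-(III)
            Y[i, rng.choice(others, size=r, replace=False)] = 1.0
    return Y   # candidate label matrix: ground truth plus injected false positives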
2020
null
SP:c9bda3b4e9859b304a8a3d1bc30ae0c8618a509d
[ "This paper proposes a sparse classifier based on a discriminative GMM, trained via sparse Bayesian learning. The sparsity constraint removes redundant Gaussian components, which reduces the number of parameters and improves generalization. The framework can potentially be embedded into deep models and trained in an end-to-end fashion. The main motivation is that the proposed model (i.e., SDGM) can handle multimodal data, while conventional softmax classifiers assume unimodality for each class. Experimental results show the superiority of the SDGM over existing softmax-based discriminative models." ]
In probabilistic classification, a discriminative model based on the softmax function has a potential limitation in that it assumes unimodality for each class in the feature space. The mixture model can address this issue, although it leads to an increase in the number of parameters. We propose a sparse classifier based on a discriminative GMM, referred to as a sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained via sparse Bayesian learning. Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this learning method reduces the model complexity, thereby improving the generalization capability. Furthermore, the SDGM can be embedded into neural networks (NNs), such as convolutional NNs, and can be trained in an end-to-end manner. Experimental results demonstrated that the proposed method outperformed the existing softmax-based discriminative models.
[ { "affiliations": [], "name": "Hideaki Hayashi" }, { "affiliations": [], "name": "Seiichi Uchida" } ]
[ { "authors": [ "Scott Axelrod", "Vaibhava Goel", "Ramesh Gopinath", "Peder Olsen", "Karthik Visweswariah" ], "title": "Discriminative estimation of subspace constrained Gaussian mixture models for speech recognition", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2006 }, { "authors": [ "Lalit R Bahl", "Mukund Padmanabhan", "David Nahamoo", "PS Gopalakrishnan" ], "title": "Discriminative training of Gaussian mixture models for large vocabulary speech recognition systems", "venue": "In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings (ICASSP),", "year": 1996 }, { "authors": [ "Anita Faul", "Michael Tipping" ], "title": "Analysis of sparse Bayesian learning", "venue": "Proceedings of the Advances in Neural Information Processing Systems (NIPS),", "year": 2001 }, { "authors": [ "Stephane Gaiffas", "Bertrand Michel" ], "title": "Sparse bayesian unsupervised learning", "venue": "arXiv preprint arXiv:1401.8017,", "year": 2014 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Cho-Jui Hsieh", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Mátyás A Sustik" ], "title": "Sparse inverse covariance matrix estimation using quadratic approximation", "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NIPS),", "year": 2011 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Aldebaro Klautau", "Nikola Jevtic", "Alon Orlitsky" ], "title": "Discriminative Gaussian mixture models: A comparison with kernel classifiers", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2003 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Julia A. Lasserre", "Christopher M. Bishop", "Thomas P. 
Minka" ], "title": "Principled hybrids of generative and discriminative models", "venue": "In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2006 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Meng Yang" ], "title": "Large-margin softmax loss for convolutional neural networks", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Chen Luo", "Shiliang Sun" ], "title": "Variational mixtures of gaussian processes for classification", "venue": "In Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI),", "year": 2017 }, { "authors": [ "Tom Minka" ], "title": "Discriminative models, not discriminative training", "venue": "Technical report, Technical Report MSR-TR-2005-144, Microsoft Research,", "year": 2005 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Gunnar Rätsch", "Takashi Onoda", "K-R Müller" ], "title": "Soft margins for adaboost", "venue": "Machine learning,", "year": 2001 }, { "authors": [ "Brian D Ripley" ], "title": "Pattern Recognition and Neural Networks", "venue": null, "year": 2006 }, { "authors": [ "Michael E Tipping" ], "title": "Sparse Bayesian learning and the relevance vector machine", "venue": "Journal of Machine Learning research,", "year": 2001 }, { "authors": [ "Volker Tresp" ], "title": "Mixtures of gaussian processes", "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NIPS),", "year": 2001 }, { "authors": [ "Wuei-He Tsai", "Wen-Whei Chang" ], "title": "Discriminative training of Gaussian mixture bigram models with application to Chinese dialect identification", "venue": "Speech Communication,", "year": 2002 }, { "authors": [ "Toshio Tsuji", "Osamu Fukuda", "Hiroyuki Ichinobe", "Makoto Kaneko" ], "title": "A log-linearized Gaussian mixture network and its application to EEG pattern classification", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews,", "year": 1999 }, { "authors": [ "Zoltán Tüske", "Muhammad Ali Tahir", "Ralf Schlüter", "Hermann Ney" ], "title": "Integrating Gaussian mixtures into deep neural networks: Softmax layer with hidden variables", "venue": "In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Ehsan Variani", "Erik McDermott", "Georg Heigold" ], "title": "A Gaussian mixture model layer jointly optimized with discriminative features within a deep neural network architecture", "venue": "In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Jue Wang" ], "title": "Discriminative Gaussian mixtures for interactive image segmentation", "venue": "In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2007 }, { "authors": [ "Florian Wenzel", "Théo Galy-Fajou", "Christan Donner", "Marius Kloft", "Manfred Opper" ], "title": "Efficient gaussian process classification using 
Pólya-gamma data augmentation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "CF Jeff Wu" ], "title": "On the convergence properties of the EM algorithm", "venue": "The Annals of statistics,", "year": 1983 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In probabilistic classification, a discriminative model is an approach that assigns a class label c to an input sample x by estimating the posterior probability P (c | x). The posterior probability P (c | x) should correctly be modeled because it is not only related to classification accuracy, but also to the confidence of decision making in real-world applications such as medical diagnosis support. In general, the model calculates the class posterior probability using the softmax function after nonlinear feature extraction. Classically, a combination of the kernel method and the softmax function has been used. The recent mainstream method is to use a deep neural network for representation learning and softmax for the calculation of the posterior probability.\nSuch a general procedure for developing a discriminative model potentially contains a limitation due to unimodality. The softmax-based model, such as a fully connected (FC) layer with a softmax function that is often used in deep neural networks (NNs), assumes a unimodal Gaussian distribution for each class (details are shown in Appendix A). Therefore, even if the feature space is transformed into discriminative space via the feature extraction part, P (c | x) cannot correctly be modeled if the multimodality remains, which leads to a decrease in accuracy.\nMixture models can address this issue. Mixture models are widely used for generative models, with a Gaussian mixture model (GMM) as a typical example. Mixture models are also effective in discriminative models; for example, discriminative GMMs have been applied successfully in various fields, e.g., speech recognition (Tüske et al. 2015; Wang 2007). However, the number of parameters increases if the number of mixture components increases, which may lead to over-fitting and an increase in memory usage; this is useful if we can reduce the number of redundant parameters while maintaining multimodality.\nIn this paper, we propose a discriminative model with two important properties; multimodality and sparsity. The proposed model is referred to as the sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is formulated and trained via sparse Bayesian\nlearning. This learning algorithm reduces memory usage without losing generalization capability by obtaining sparse weights while maintaining the multimodality of the mixture model.\nThe technical highlight of this study is twofold: One is that the SDGM finds the multimodal structure in the feature space and the other is that redundant Gaussian components are removed owing to sparse learning. Figure 1 shows a comparison of the decision boundaries with other discriminative models. The two-class data are from Ripley’s synthetic data (Ripley 2006), where two Gaussian components are used to generate data for each class. The FC layer with the softmax function, which is often used in the last layer of deep NNs, assumes a unimodal Gaussian for each class, resulting in an inappropriate decision boundary. Kernel Bayesian methods, such as the Gaussian process (GP) classifier (Wenzel et al. 2019) and relevance vector machine (RVM) (Tipping 2001), estimate nonlinear decision boundaries using nonlinear kernels, whereas these methods cannot find multimodal structures. Although the discriminative GMM finds multimodal structure, this model retains redundant Gaussian components. 
However, the proposed SDGM finds a multimodal structure of data while removing redundant components, which leads to an accurate decision boundary.\nFurthermore, the SDGM can be embedded into NNs, such as convolutional NNs (CNNs), and trained in an end-to-end manner with an NN. The proposed SDGM is also considered as a mixture, nonlinear, and sparse expansion of the logistic regression, and thus the SDGM can be used as the last layer of an NN for classification by replacing it with the fully connected (FC) layer with a softmax activation function.\nThe contributions of this study are as follows:\n• We propose a novel sparse classifier based on a discriminative GMM. The proposed SDGM has both multimodality and sparsity, thereby flexibly estimating the posterior distribution of classes while removing redundant parameters. Moreover, the SDGM automatically determines the number of components by simultaneously removing the redundant components during learning. • From the perspective of the Bayesian kernel methods, the SDGM is considered as the\nexpansion of the GP and RVM. The SDGM can estimate the posterior probabilities more flexibly than the GP and RVM owing to multimodality. The experimental comparison using benchmark data demonstrated superior performance to the existing Bayesian kernel methods. • This study connects both fields of probabilistic models and NNs. From the equivalence of\na discriminative model based on a Gaussian distribution to an FC layer, we demonstrate that the SDGM can be used as a module of a deep NN. We also demonstrate that the SDGM exhibits superior performance to the FC layer with a softmax function via end-toend learning with an NN on the image recognition task." }, { "heading": "2 RELATED WORK AND POSITION OF THIS STUDY", "text": "The position of the proposed SDGM among the related methods is summarized in Figure 2. Interestingly, by summarizing the relationships, we can confirm that the three separately developed fields, generative models, discriminative models, and kernel Bayesian methods, are related to each other. Starting from the Gaussian distribution, all the models shown in Figure 2 are connected via\nfour types of arrows. There is an undeveloped area in the upper right part, and the development of the area is the contribution of this study.\nA (unimodal) Gaussian distribution is used as the most naive generative model in machine learning and is the foundation of this relationship diagram. A GMM is the mixture expansion of the Gaussian distributions. Since the GMM can express (almost) arbitrary continuous distributions using multiple Gaussian components, it has been utilized for a long time. Since Gaussian fitting requires numerous parameters, the sparsified versions of Gaussian (Hsieh et al. 2011) and GMM (Gaiffas & Michel 2014) have been proposed.\nThe discriminative models and the generative models are mutually related (Lasserre et al. 2006; Minka\n2005). According to Lasserre et al. (2006), the only difference between these models is their statistical parameter constraints. Therefore, given a generative model, we can derive a corresponding discriminative model. For example, discriminative models corresponding to the Gaussian mixture model have been proposed (Axelrod et al. 2006; Bahl et al. 1996; Klautau et al. 2003; Tsai & Chang 2002; Tsuji et al. 1999; Tüske et al. 2015; Wang 2007). 
They indicate more flexible fitting capability for classification problems than the generative GMM because the discriminative models have a lower statistical bias than the generative models. Furthermore, as shown by Tüske et al. (2015) and Variani et al. (2015), these models can be used as the last layer of an NN because they output the class posterior probability.
From the perspective of kernel Bayesian methods, the GP classifier (Wenzel et al. 2019) and the mixture of GPs (MGP) (Luo & Sun 2017) are the Bayesian kernelized versions of logistic regression and the discriminative GMM, respectively. The SDGM with kernelization is also regarded as a kernel Bayesian method because the posterior distribution of the weights is estimated during learning instead of directly estimating the weights as points, as with the GP and MGP. The RVM (Tipping 2001) is the sparse version of the GP classifier and is the most important related study. The learning algorithm of the SDGM is based on that of the RVM; however, it is extended to the mixture model.
If we use kernelization, the SDGM becomes one of the kernel Bayesian methods and is considered the mixture expansion of the RVM or the sparse expansion of the MGP. Therefore, its classification capability and sparsity are compared with kernel Bayesian methods in Section 4.1. Otherwise, the SDGM is considered one of the discriminative models and can be embedded in an NN. The comparison with other discriminative models is conducted in Section 4.2 via image classification in combination with a CNN." }, { "heading": "3 SPARSE DISCRIMINATIVE GAUSSIAN MIXTURE (SDGM)", "text": "The SDGM takes a continuous variable as its input and outputs the posterior probability of each class, acquiring a sparse structure by removing redundant components via sparse Bayesian learning. Figure 3 shows how the SDGM is trained by removing unnecessary components while maintaining discriminability. In this training, we set the initial number of components to three for each class. As the training progresses, one of the components for each class gradually becomes small and is removed." }, { "heading": "3.1 NOTATION", "text": "Let x ∈ R^D be a continuous input variable and tc (c ∈ {1, . . . , C}, C is the number of classes) be a discrete target variable that is coded in a one-of-C form, where tc = 1 if x belongs to class c, and tc = 0 otherwise. Also, let zcm be a discrete latent variable, where zcm = 1 when x from class c belongs to the m-th component (m ∈ {1, . . . , Mc}, Mc is the number of components for class c), and zcm = 0 otherwise. For simplicity, in this paper, the probabilities for classes and components are described using only c and m; e.g., we use P(c, m | x) instead of P(tc = 1, zcm = 1 | x)." }, { "heading": "3.2 MODEL FORMULATION", "text": "The posterior probabilities of each class c given x are calculated as follows:

P(c | x) = Σ_{m=1}^{Mc} P(c, m | x),   P(c, m | x) = πcm exp[wcm^T φ] / Σ_{c′=1}^{C} Σ_{m′=1}^{Mc′} πc′m′ exp[wc′m′^T φ],   (1)

φ = [1, x^T, x1^2, x1 x2, . . . , x1 xD, x2^2, x2 x3, . . . , xD^2]^T,   (2)

where πcm is the mixture weight, which is equivalent to the prior of each component P(c, m). It should be noted that we use wcm ∈ R^H, which is the weight vector representing the m-th Gaussian component of class c. The dimension of wcm, i.e., H, is the same as that of φ; namely, H = 1 + D(D + 3)/2.

Derivation.
Utilizing a Gaussian distribution as the conditional distribution of x given c and m, P(x | c, m), the posterior probability of c given x, P(c | x), is calculated as follows:

P(c | x) = Σ_{m=1}^{Mc} P(c, m) P(x | c, m) / Σ_{c=1}^{C} Σ_{m=1}^{Mc} P(c, m) P(x | c, m),   (3)

P(x | c, m) = (2π)^{-D/2} |Σcm|^{-1/2} exp[-(1/2)(x − µcm)^T Σcm^{-1} (x − µcm)],   (4)

where µcm ∈ R^D and Σcm ∈ R^{D×D} are the mean vector and the covariance matrix for component m in class c. Since the calculation inside the exponential function in (4) is of quadratic form, the conditional distribution can be transformed as follows:

P(x | c, m) = exp[wcm^T φ],   (5)

where

wcm = [−(D/2) ln 2π − (1/2) ln|Σcm| − (1/2) Σ_{i=1}^{D} Σ_{j=1}^{D} scmij µcmi µcmj,  Σ_{i=1}^{D} scmi1 µcmi, . . . , Σ_{i=1}^{D} scmiD µcmi,  −(1/2) scm11, −scm12, . . . , −scm1D, −(1/2) scm22, . . . , −(1/2) scmDD]^T.   (6)

Here, scmij is the (i, j)-th element of Σcm^{-1}." }, { "heading": "3.3 DUAL FORM VIA KERNELIZATION", "text": "Since φ is of second-order polynomial form, we can derive the dual form of the SDGM using polynomial kernels. By kernelization, we can treat the SDGM as a kernel Bayesian method, as described in Section 2.

Let ψcm ∈ R^N be a novel weight vector for the (c, m)-th component. Using ψcm and the training dataset {xn}_{n=1}^{N}, the weight of the original form wcm is represented as

wcm = [φ(x1), · · · , φ(xN)] ψcm,   (7)

where φ(xn) is the transformation of xn using (2). Then, (5) is reformulated as follows:

P(x | c, m) = exp[wcm^T φ] = exp[ψcm^T [φ(x1)^T φ(x), · · · , φ(xN)^T φ(x)]^T] = exp[ψcm^T K(X, x)],   (8)

where K(X, x) is an N-dimensional vector whose elements are kernel functions defined as k(xn, x) = φ(xn)^T φ(x) = (xn^T x + 1)^2, and X is a data matrix that has xn^T in its n-th row. Whereas the computational complexity of the original form in Section 3.2 grows with the square of the input dimension D, the dimensionality of this dual form is proportional to N. When we use this dual form, we use N and k(xn, ·) instead of H and φ(·), respectively.

3.4 LEARNING ALGORITHM

A set of training data and target values {xn, tnc} (n = 1, · · · , N) is given. We also define π and z as vectors that comprise πcm and zncm as their elements, respectively. As the prior distribution of each weight wcmh, we employ a Gaussian distribution with a mean of zero. Using a different precision parameter (inverse of the variance) αcmh for each weight wcmh, the joint probability of all the weights is represented as follows:

P(w | α) = Π_{c=1}^{C} Π_{m=1}^{Mc} Π_{h=1}^{H} √(αcmh / 2π) exp[−(1/2) αcmh wcmh^2],   (9)

where w and α are vectors with wcmh and αcmh as their elements, respectively. During learning, we update not only w but also α. If αcmh → ∞, the prior (9) concentrates at wcmh = 0; hence a sparse solution is obtained by optimizing α, as shown in Figure 4.

Using these variables, the expectation of the log-likelihood function over z, J, is defined as follows:

J = E_z[ln P(T, z | X, w, π, α)] = Σ_{n=1}^{N} Σ_{c=1}^{C} Σ_{m=1}^{Mc} rncm tnc ln P(c, m | xn),

where T is a matrix with tnc as its elements. The variable rncm on the right-hand side corresponds to P(m | c, xn) and can be calculated as rncm = P(c, m | xn)/P(c | xn). The posterior probability of the weight vector w is described as follows:

P(w | T, z, X, π, α) = P(T, z | X, w, π, α) P(w | α) / P(T, z | X, α).   (10)

An optimal w is obtained as the point where (10) is maximized. The denominator of the right-hand side of (10) is called the evidence term, and we maximize it with respect to α.
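Before turning to how this evidence maximization is carried out, the following NumPy sketch (my own, not the authors' code) makes the forward computations above concrete: the quadratic feature map of Eq. (2), the component posteriors of Eq. (1), and the polynomial kernel of the dual form in Eq. (8). Array shapes and names are assumptions; note that the stated kernel identity holds for a suitably rescaled feature map.

import numpy as np

def phi(x):
    # Eq. (2): [1, x, upper triangle of x x^T]; dimension H = 1 + D(D+3)/2
    iu = np.triu_indices(len(x))
    return np.concatenate([[1.0], x, np.outer(x, x)[iu]])

def sdgm_posterior(x, W, log_pi):
    # Eq. (1): P(c, m | x) proportional to pi_cm * exp(w_cm^T phi(x));
    # W: (C, M, H) weights, log_pi: (C, M) log mixture weights.
    logits = W @ phi(x) + log_pi                 # (C, M)
    joint = np.exp(logits - logits.max())        # numerically stable normalization
    joint /= joint.sum()                         # P(c, m | x) over all pairs (c, m)
    return joint.sum(axis=1)                     # P(c | x) = sum_m P(c, m | x)

def poly_kernel(X, x):
    # Dual form, Eq. (8): k(x_n, x) = (x_n^T x + 1)^2, matching phi up to
    # constant rescalings of individual feature coordinates.
    return (X @ x + 1.0) ** 2

# Usage on random parameters: posteriors over C = 3 classes, M = 2 components each.
D, C, M = 4, 3, 2
H = 1 + D * (D + 3) // 2
rng = np.random.default_rng(0)
print(sdgm_posterior(rng.normal(size=D), 0.1 * rng.normal(size=(C, M, H)),
                     np.full((C, M), np.log(1.0 / (C * M)))))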
However, this maximization problem cannot be solved analytically; therefore we introduce the Laplace approximation, described by the following procedure.

With α fixed, we obtain the mode of the posterior distribution of w. The solution is given by the point where the following equation is maximized:

E_z[ln P(w | T, z, X, π, α)] = E_z[ln P(T, z | X, w, π, α)] + ln P(w | α) + const. = J − w^T A w + const.,   (11)

where A = diag(αcmh). We obtain the mode of (11) via Newton's method. The gradient and Hessian required for this estimation can be calculated as follows:

∇ E_z[ln P(w | T, z, X, π, α)] = ∇J − Aw,   (12)

∇∇ E_z[ln P(w | T, z, X, π, α)] = ∇∇J − A.   (13)

Each element of ∇J and ∇∇J is calculated as follows:

∂J/∂wcmh = Σ_{n=1}^{N} (rncm tnc − P(c, m | xn)) φh,   (14)

∂²J/(∂wcmh ∂wc′m′h′) = Σ_{n=1}^{N} P(c′, m′ | xn)(P(c, m | xn) − δcc′mm′) φh φh′,   (15)

where δcc′mm′ is a variable that takes 1 if both c = c′ and m = m′, and 0 otherwise. Hence, the posterior distribution of w can be approximated by a Gaussian distribution with a mean of ŵ and a covariance matrix Λ, where

Λ = −(∇∇ E_z[ln P(ŵ | T, z, X, π, α)])^{-1}.   (16)

Since the evidence term can be represented using the normalization term of this Gaussian distribution, we obtain the following updating rule by calculating its derivative with respect to αcmh:

αcmh ← (1 − αcmh λcmh) / ŵcmh²,   (17)

where λcmh is the corresponding diagonal element of Λ. The mixture weight πcm can be estimated using rncm as follows:

πcm = (1/Nc) Σ_{n=1}^{Nc} rncm,   (18)

where Nc is the number of training samples belonging to class c. As described above, we obtain a sparse solution by alternately repeating the update of the hyperparameters, as in (17) and (18), and the estimation of the posterior distribution of w using the Laplace approximation. As a result of the optimization, some of the αcmh approach infinite values, and the wcmh corresponding to those αcmh have prior distributions with mean and variance both zero, as shown in (4); hence such wcmh are removed, because their posterior distributions also have mean and variance both zero. During this procedure, the (c, m)-th component is eliminated if πcm becomes 0 or if all the weights wcmh corresponding to that component become 0.

The learning algorithm of the SDGM is summarized in Algorithm 1:

Algorithm 1: Learning algorithm of the SDGM
Input: training dataset X and teacher matrix T.
Output: trained weights w obtained by maximizing (11).
Initialize the weights w, hyperparameters α, mixture coefficients π, and posterior probabilities r;
while α has not converged do
    Calculate J using (10);
    while r has not converged do
        while w has not converged do
            Calculate the gradient using (12);
            Calculate the Hessian using (13);
            Maximize (11) w.r.t. w;
            Calculate P(c, m | xn) and P(c | xn);
        end
        rncm = P(c, m | xn)/P(c | xn);
    end
    Calculate Λ using (16);
    Update α using (17);
    Update π using (18);
end

In this algorithm, the optimal weight is obtained as the maximum a posteriori solution. We obtain a sparse solution by optimizing the prior distribution set on each weight simultaneously with the weight optimization." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 COMPARATIVE STUDY USING BENCHMARK DATA", "text": "To evaluate the capability of the SDGM quantitatively, we conducted a classification experiment using benchmark datasets. The datasets used in this experiment were Ripley's synthetic data (Ripley 2006) (Ripley hereinafter) and four datasets cited from Rätsch et al. (2001): Banana, Waveform, Titanic, and Breast Cancer.
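As a companion to Algorithm 1 and Eqs. (17)–(18) above, here is a compact numerical sketch of the hyperparameter updates and the pruning they induce, assuming the Laplace quantities ŵ and the diagonal of Λ from Eq. (16) are already available. The pruning threshold and array layout are illustrative choices of mine.

import numpy as np

def update_alpha(alpha, w_hat, lam, eps=1e-12):
    # Eq. (17): alpha <- (1 - alpha * lambda) / w_hat^2
    return (1.0 - alpha * lam) / (w_hat ** 2 + eps)

def update_pi(r, class_ids, c, m):
    # Eq. (18): pi_cm is the mean responsibility r_ncm over samples of class c
    return r[class_ids == c, c, m].mean()

def prune(w_hat, alpha, alpha_max=1e6):
    # Weights whose precision diverges have zero-mean, zero-variance posteriors
    # and can be removed; a component dies when all of its weights are pruned.
    keep = alpha < alpha_max
    return np.where(keep, w_hat, 0.0), keep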
Ripley is a synthetic dataset that is generated from a two-dimensional (D = 2) Gaussian mixture model, and 250 and 1,000 samples are provided for training and test, respectively. The number of classes is two (C = 2), and each class comprises two components. The remaining four datasets are all two-class (C = 2) datasets, which comprise different data sizes and dimensionality. Since they contain 100 training/test splits, we repeated experiments 100 times and then calculated average statistics.\nFor comparison, we used three kernel Bayesian methods: a GP classifier, an MPG classifier (Tresp 2001; Luo & Sun 2017), and an RVM (Tipping 2001), which are closely related to the SDGM from the perspective of sparsity, multimodality, and Bayesian learning, as described in Section 2. In the evaluation, we compared the recognition error rates for discriminability and the number of nonzero weights for sparsity on the test data. The results of RVM were cited from (Tipping 2001). By way of summary, the statistics were normalized by those of the SDGM, and the overall mean was shown.\nTable 1 shows the recognition error rates and the number of nonzero weights for each method. The results in Table 1 demonstrated that the SDGM achieved a better accuracy on average compared to the other kernel Bayesian methods. The SDGM is developed based on a Gaussian mixture model and is particularly effective for data where a Gaussian distribution can be assumed, such as the Ripley dataset. Since the SDGM explicitly models multimodality, it could more accurately represent the sharp changes in decision boundaries near the border of components compared to the RVM, as shown in Figure 1. Although the SDGM did not necessarily outperform the other methods in all datasets, it achieved the best accuracy on average. In terms of sparsity, the number of initial weights for the SDGM is the same as MGP, and the SDGM reduced 90.0–99.5% of weights from the initial state due to the sparse Bayesian learning, which leads to drastically efficient use of memory compared to non-sparse classifiers (GP and MGP). The results above indicated that the SDGM demonstrated generalization capability and a sparsity simultaneously." }, { "heading": "4.2 IMAGE CLASSIFICATION", "text": "In this experiment, the SDGM is embedded into a deep neural network. Since the SDGM is differentiable with respect to the weights, SDGM can be embedded into a deep NN as a module and is trained in an end-to-end manner. In particular, the SDGM plays the same role as the softmax function since the SDGM calculates the posterior probability of each class given an input vector. We can show that a fully connected layer with the softmax is equivalent to the discriminative model based on a single Gaussian distribution for each class by applying a simple transformation (see Appendix A), whereas the SDGM is based on the Gaussian mixture model.\nTo verify the difference between them, we conducted image classification experiments. Using a CNN with a softmax function as a baseline, we evaluated the capability of SDGM by replacing softmax with the SDGM. We also used a CNN with a softmax function trained with L1 regularization, a CNN with a large margin softmax (Liu et al. 2016), and a CNN with the discriminative GMM as other baselines.\nIn this experiment, we used the original form of the SDGM. To achieve sparse optimization during end-to-end training, we employed an approximated sparse Bayesian learning based on Gaussian dropout proposed by Molchanov et al. (2017). 
This is because it is difficult to execute the learning algorithm in Section 3.4 with backpropagation due to large computational costs for inverse matrix calculation of the Hessian in (16), which takes O(N3)." }, { "heading": "4.2.1 DATASETS AND EXPERIMENTAL SETUPS", "text": "We used the following datasets and experimental settings in this experiment.\nMNIST: This dataset includes 10 classes of handwritten binary digit images of size 28×28 (LeCun et al. 1998). We used 60,000 images as training data and 10,000 images as testing data. As a feature extractor, we used a simple CNN that consists of five convolutional layers with four max pooling layers between them and a fully connected layer. To visualize the learned CNN features, we first set the output dimension of the fully connected layer of the baseline CNN as two (D = 2). Furthermore, we tested by increasing the output dimension of the fully connected layer from two to ten (D = 10).\nFashion-MNIST: Fashion-MNIST (Xiao et al. 2017) includes 10 classes of binary fashion images with a size of 28 × 28. It includes 60,000 images for training data and 10,000 images for testing data. We used the same CNN as in MNIST with 10 as the output dimension.\nCIFAR-10 and CIFAR-100: CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton 2009) consist of 60,000 32 × 32 color images in 10 classes and 100 classes, respectively. There are 50,000 training images and 10,000 test images for both datasets. For these datasets, we trained DenseNet (Huang et al. 2017) with a depth of 40 and a growth rate of 12 as a baseline CNN.\nImageNet: ImageNet classification dataset (Russakovsky et al. 2015) includes 1,000 classes of generic object images with a size of 224 × 224. It consists of 1,281,167 training images, 50,000 validation images, and 100,000 test images. For this dataset, we used MobileNet (Howard et al. 2017) as a baseline CNN.\nIt should be noted that we did not employ additional techniques to increase classification accuracy such as hyperparameter tuning and pre-trained models; therefore, the accuracy of the baseline model did not reach the state-of-the-art. This is because we considered that it is not essential to confirm the effectiveness of the proposed method." }, { "heading": "4.2.2 RESULTS", "text": "Figure 5 shows the two-dimensional feature embeddings on the MNIST dataset. Different feature embeddings were acquired for each method. When softmax was used, the features spread in a fan shape and some parts of the distribution overlapped around the origin. However, when the SDGM was used, the distribution for each class exhibited an ellipse shape and margins appeared between the\nclass distributions. This is because the SDGM is based on a Gaussian mixture model and functions to push the features into a Gaussian shape.\nTable 2 shows the recognition error rates on each dataset. SDGM achieved better performance than softmax. Although sparse learning was ineffective in two out of six comparisons according to the comparison with the discriminative GMM, replacing softmax with SDGM was effective in all the comparisons. As shown in Figure 5, SDGM can create margins between classes by pushing the features into a Gaussian shape. This phenomenon positively affected classification capability. Although large-margin softmax, which has the effect of increasing the margin, and the discriminative GMM, which can represent multimodality, also achieved relatively high accuracy, the SDGM can achieve the same level of accuracy with sparse weights." 
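To illustrate how the SDGM can stand in for the final fully connected layer plus softmax in these experiments, the PyTorch module below implements Eqs. (1)–(2) as a differentiable head over CNN features. It is a simplification of the paper's setup: the per-weight sparse Bayesian treatment (the Gaussian-dropout approximation of Molchanov et al., 2017) is omitted, so only the multimodal discriminative structure is reproduced, and the initialization scale is my own choice.

import torch
import torch.nn as nn

class SDGMHead(nn.Module):
    # Drop-in replacement for a fully connected layer with softmax: returns
    # P(c | x) by marginalizing the Gaussian-component posteriors of Eq. (1).
    def __init__(self, feat_dim, num_classes, num_components=3):
        super().__init__()
        D = feat_dim
        self.H = 1 + D * (D + 3) // 2
        self.W = nn.Parameter(0.01 * torch.randn(num_classes, num_components, self.H))
        self.log_pi = nn.Parameter(torch.zeros(num_classes, num_components))
        self.register_buffer("iu", torch.triu_indices(D, D))

    def forward(self, x):                          # x: (B, D) CNN features
        quad = x.unsqueeze(2) * x.unsqueeze(1)     # (B, D, D) outer products
        ones = torch.ones(x.size(0), 1, device=x.device)
        feats = torch.cat([ones, x, quad[:, self.iu[0], self.iu[1]]], dim=1)  # Eq. (2)
        logits = torch.einsum("bh,cmh->bcm", feats, self.W) + self.log_pi
        joint = torch.softmax(logits.flatten(1), dim=1).view_as(logits)       # Eq. (1)
        return joint.sum(dim=2)                    # P(c | x)

# Usage: head = SDGMHead(feat_dim=2, num_classes=10); train with an NLL loss
# applied to torch.log(head(features) + 1e-8).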
}, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a sparse classifier based on a Gaussian mixture model (GMM), named the sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning. This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components. The SDGM can be embedded into neural networks (NNs), such as convolutional NNs, and can be trained in an end-to-end manner.
In the experiments, we demonstrated that the SDGM can reduce the number of weights via sparse Bayesian learning, thereby improving its generalization capability. The comparison using benchmark datasets suggested that the SDGM outperforms conventional kernel Bayesian classifiers. We also demonstrated that the SDGM outperforms the fully connected layer with the softmax function when used as the last layer of a deep NN.
One of the limitations of this study is that the proposed sparse learning reduces redundant Gaussian components but cannot obtain the optimal number of components, which should be improved in future work. Since the learning of the proposed method can be interpreted as the incorporation of the EM algorithm into sparse Bayesian learning, we will tackle a theoretical analysis by utilizing the proofs for the EM algorithm (Wu 1983) and sparse Bayesian learning (Faul & Tipping 2001). Furthermore, we would like to tackle a theoretical analysis of error bounds using the PAC-Bayesian theorem. We will also develop a sparse learning algorithm for the whole deep NN structure, including the feature extraction part. This will improve the ability of the CNN for larger-scale data classification. Further applications using the probabilistic properties of the proposed model, such as semi-supervised learning, uncertainty estimation, and confidence calibration, will be considered." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by JSPS KAKENHI Grant Number JP17K12752 and JST ACT-I Grant Number JPMJPR18UO." }, { "heading": "A RELATIONSHIP BETWEEN THE DISCRIMINATIVE GAUSSIAN AND LOGISTIC REGRESSION", "text": "We show that a fully connected layer with the softmax function, i.e., logistic regression, can be regarded as a discriminative model based on a Gaussian distribution by a transformation of the equations. Let us consider the case in which the class-conditional probability P(x | c) is a Gaussian distribution. In this case, we can omit m from equations (3)–(6).

If all classes share the same covariance matrix and the mixture weight πcm, the terms πcm in (1); x1^2, x1 x2, . . . , x1 xD, x2^2, x2 x3, . . . , x2 xD, . . . , xD^2 in (2); and −(1/2) sc11, . . . , −(1/2) scDD in (6) cancel; hence the calculation of the posterior probability P(c | x) is also simplified as

P(c | x) = exp(wc^T φ) / Σ_{c′=1}^{C} exp(wc′^T φ),

where

wc = [log P(c) − (1/2) Σ_{i=1}^{D} Σ_{j=1}^{D} scij µci µcj + (D/2) log 2π + (1/2) log|Σc|,  Σ_{i=1}^{D} sci1 µci, · · · , Σ_{i=1}^{D} sciD µci]^T,

φ = [1, x^T]^T.

This is equivalent to a fully connected layer with the softmax function, i.e., linear logistic regression." }, { "heading": "B EVALUATION OF CHARACTERISTICS USING SYNTHETIC DATA", "text": "To evaluate the characteristics of the SDGM, we conducted classification experiments using synthetic data. The dataset comprises two classes. The data were sampled from a Gaussian mixture model with eight components for each class.
The numbers of training data and test data were 320 and 1,600, respectively. The scatter plot of this dataset is shown in Figure 6.\nIn the evaluation, we calculated the error rates for the training data and the test data, the number of components after training, the number of nonzero weights after training, and the weight reduction ratio (the ratio of the number of the nonzero weights to the number of initial weights), by varying the number of initial components as 2, 4, 8, . . . , 20. We repeated evaluation five times while regenerating the training and test data and calculated the average value for each evaluation criterion. We used the dual form of the SDGM in this experiment.\nFigure 6 displays the changes in the learned class boundaries according to the number of initial components. When the number of components is small, such as that shown in Figure 6(a), the decision boundary is simple; therefore, the classification performance is insufficient. However, according to the increase in the number of components, the decision boundary fits the actual class boundaries. It is noteworthy that the SDGM learns the GMM as a discriminative model instead of a generative model; an appropriate decision boundary was obtained even if the number of components for the model is less than the actual number (e.g., 6(c)).\nFigure 7 shows the evaluation results of the characteristics. Figures 7(a), (b), (c), and (d) show the recognition error rate, number of components after training, number of nonzero weights after training, and weight reduction ratio, respectively. The horizontal axis shows the number of initial components in all the graphs.\nIn Figure 7(a), the recognition error rates for the training data and test data are almost the same with the few numbers of components and decrease according to the increase in the number of initial components while it is 2 to 6. This implied that the representation capability was insufficient when the number of components was small, and that the network could not accurately separate the classes. Meanwhile, changes in the training and test error rates were both flat when the number of initial components exceeded eight, even though the test error rates were slightly higher than the training error rate. In general, the training error decreases and the test error increases when the complexity of\nthe classifier is increased. However, the SDGM suppresses the increase in complexity using sparse Bayesian learning, thereby preventing overfitting.\nIn Figure 7(b), the number of components after training corresponds to the number of initial components until the number of initial components is eight. When the number of initial components exceeds ten, the number of components after training tends to be reduced. In particular, eight components are reduced when the number of initial components is 20. The results above indicate the SDGM can reduce unnecessary components.\nFrom the results in Figure 7(c), we confirm that the number of nonzero weights after training increases according to the increase in the number of initial components. This implies that the complexity of the trained model depends on the number of initial components, and that the minimum number of components is not always obtained.\nMeanwhile, in Figure 7(d), the weight reduction ratio increases according to the increase in the number of initial components. This result suggests that the larger the number of initial weights, the more weights were reduced. 
Moreover, the weight reduction ratio is greater than 99% in all cases. The results above indicate that the SDGM can prevent overfitting by obtaining high sparsity and can remove unnecessary components." }, { "heading": "C DETAILS OF INITIALIZATION", "text": "In the experiments in this study, the trainable parameters for the $m$-th component of the $c$-th class were initialized as follows ($H = 1 + D(D+3)/2$, where $D$ is the input dimension, for the original form, and $H = N$, where $N$ is the number of training samples, for the kernelized form):
• $\mathbf{w}_{cm}$ (for the original form): a zero vector $\mathbf{0} \in \mathbb{R}^{H}$.
• $\boldsymbol{\psi}_{cm}$ (for the kernelized form): a zero vector $\mathbf{0} \in \mathbb{R}^{H}$.
• $\boldsymbol{\alpha}_{cm}$: an all-ones vector $\mathbf{1} \in \mathbb{R}^{H}$.
• $\pi_{cm}$: the scalar $\frac{1}{\sum_{c=1}^{C} M_c}$, where $C$ is the number of classes and $M_c$ is the number of components for the $c$-th class.
• $r_{ncm}$: initialized based on the results of k-means clustering applied to the training data; $r_{ncm} = 1$ if the $n$-th sample belongs to class $c$ and is assigned to the $m$-th component by k-means clustering, and $r_{ncm} = 0$ otherwise." } ]
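To make the initialization recipe above concrete, here is a minimal Python sketch. The function name, the array layout, the assumption of an equal number of components per class, and the use of scikit-learn's k-means are all illustrative choices, not the authors' released code.

```python
# Illustrative sketch of the Appendix C initialization; shapes and helper
# names are assumptions, not taken from the authors' implementation.
import numpy as np
from sklearn.cluster import KMeans

def init_sdgm_params(X, y, n_classes, n_components, kernelized=False):
    N, D = X.shape
    # Per-component weight dimension: H = 1 + D(D+3)/2 (original form),
    # H = N (kernelized form).
    H = N if kernelized else 1 + D * (D + 3) // 2

    w = np.zeros((n_classes, n_components, H))      # w_cm (or psi_cm): zeros
    alpha = np.ones((n_classes, n_components, H))   # alpha_cm: all ones
    # pi_cm = 1 / sum_c M_c (here every class shares M_c = n_components)
    pi = np.full((n_classes, n_components), 1.0 / (n_classes * n_components))

    # r_ncm: hard assignments from a k-means run within each class
    r = np.zeros((N, n_classes, n_components))
    for c in range(n_classes):
        idx = np.flatnonzero(y == c)
        labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(X[idx])
        r[idx, c, labels] = 1.0
    return w, alpha, pi, r
```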
2021
A DISCRIMINATIVE GAUSSIAN MIXTURE MODEL WITH SPARSITY
SP:f0574c6588c9dc844b3e651e490092f058b7eb3c
[ "This work is an exploration of model behaviour upon meta-learning tasks with compositional structure. The authors discover that, unlike humans, machine learning models do not readily pick up on the underlying compositional generative structure of a set of tasks, and hence cannot match the performance of humans. Conversely, when the task is structured to leverage other statistical patterns, models do well. " ]
In recent years, meta-learning, in which a model is trained on a family of tasks (i.e. a task distribution), has emerged as an approach to training neural networks to perform tasks that were previously assumed to require structured representations, making strides toward closing the gap between humans and machines. However, we argue that evaluating meta-learning remains a challenge, and can miss whether meta-learning actually uses the structure embedded within the tasks. These meta-learners might therefore still be significantly different from human learners. To demonstrate this difference, we first define a new meta-reinforcement learning task in which a structured task distribution is generated using a compositional grammar. We then introduce a novel approach to constructing a “null task distribution” with the same statistical complexity as this structured task distribution but without the explicit rule-based structure used to generate the structured task. We train a standard meta-learning agent, a recurrent network trained with model-free reinforcement learning, and compare it with human performance across the two task distributions. We find a double dissociation in which humans do better in the structured task distribution whereas agents do better in the null task distribution – despite comparable statistical complexity. This work highlights that multiple strategies can achieve reasonable meta-test performance, and that careful construction of control task distributions is a valuable way to understand which strategies meta-learners acquire, and how they might differ from humans.
[ { "affiliations": [], "name": "Sreejan Kumar" }, { "affiliations": [], "name": "Ishita Dasgupta" }, { "affiliations": [], "name": "Jonathan D. Cohen" }, { "affiliations": [], "name": "Nathaniel D. Daw" }, { "affiliations": [], "name": "Thomas L. Griffiths" } ]
[ { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Matthew Botvinick", "Sam Ritter", "Jane X Wang", "Zeb Kurth-Nelson", "Charles Blundell", "Demis Hassabis" ], "title": "Reinforcement learning, fast and slow", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Jeff Clune" ], "title": "Ai-gas: Ai-generating algorithms, an alternate paradigm for producing general artificial intelligence", "venue": "arXiv preprint arXiv:1905.10985,", "year": 2019 }, { "authors": [ "Ishita Dasgupta", "Demi Guo", "Andreas Stuhlmüller", "Samuel J Gershman", "Noah D Goodman" ], "title": "Evaluating compositionality in sentence embeddings", "venue": "arXiv preprint arXiv:1802.04302,", "year": 2018 }, { "authors": [ "Ishita Dasgupta", "Jane Wang", "Silvia Chiappa", "Jovana Mitrovic", "Pedro Ortega", "David Raposo", "Edward Hughes", "Peter Battaglia", "Matthew Botvinick", "Zeb Kurth-Nelson" ], "title": "Causal reasoning from meta-reinforcement learning", "venue": null, "year": 1901 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl2: Fast reinforcement learning via slow reinforcement learning", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2016 }, { "authors": [ "Rachit Dubey", "Pulkit Agrawal", "Deepak Pathak", "Thomas L Griffiths", "Alexei A Efros" ], "title": "Investigating human priors for playing video games", "venue": "arXiv preprint arXiv:1802.10217,", "year": 2018 }, { "authors": [ "Kevin Ellis", "Catherine Wong", "Maxwell Nye", "Mathias Sable-Meyer", "Luc Cary", "Lucas Morales", "Luke Hewitt", "Armando Solar-Lezama", "Joshua B Tenenbaum" ], "title": "Dreamcoder: Growing generalizable, interpretable knowledge with wake-sleep bayesian program learning", "venue": "arXiv preprint arXiv:2006.08381,", "year": 2020 }, { "authors": [ "Jerry A Fodor", "Zenon W Pylyshyn" ], "title": "Connectionism and cognitive architecture: A critical analysis", "venue": null, "year": 1988 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical bayes", "venue": "arXiv preprint arXiv:1801.08930,", "year": 2018 }, { "authors": [ "Thomas L Griffiths", "Frederick Callaway", "Michael B Chang", "Erin Grant", "Paul M Krueger", "Falk Lieder" ], "title": "Doing more with less: meta-reasoning and meta-learning in humans and machines", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Felix Hill", "Andrew Lampinen", "Rosalia Schneider", "Stephen Clark", "Matthew Botvinick", "James L McClelland", "Adam Santoro" ], "title": "Environmental drivers of systematicity and generalization in a situated agent", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic 
dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Charles Kemp", "Joshua B Tenenbaum" ], "title": "The discovery of structural form", "venue": "Proceedings of the National Academy of Sciences,", "year": 2008 }, { "authors": [ "Brenden Lake", "Marco Baroni" ], "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Brenden M Lake" ], "title": "Compositional generalization through meta sequence-to-sequence learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Shirley Mark", "Rani Moran", "Thomas Parr", "Steve W Kennerley", "Timothy EJ Behrens" ], "title": "Transferring structural knowledge across cognitive maps in humans and models", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "R Thomas McCoy", "Erin Grant", "Paul Smolensky", "Thomas L Griffiths", "Tal Linzen" ], "title": "Universal linguistic inductive biases via meta-learning", "venue": "arXiv preprint arXiv:2006.16324,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Nicolas Heess", "Alex Graves" ], "title": "Recurrent models of visual attention", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Neil C Rabinowitz", "Frank Perbet", "H Francis Song", "Chiyuan Zhang", "SM Eslami", "Matthew Botvinick" ], "title": "Machine theory of mind", "venue": "arXiv preprint arXiv:1802.07740,", "year": 2018 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in bertology: What we know about how bert works", "venue": "arXiv preprint arXiv:2002.12327,", "year": 2020 }, { "authors": [ "Eric Schulz", "Joshua B Tenenbaum", "David Duvenaud", "Maarten Speekenbrink", "Samuel J Gershman" ], "title": "Compositional inductive biases in function learning", "venue": "Cognitive psychology,", "year": 2017 }, { "authors": [ "Joshua B Tenenbaum", "Charles Kemp", "Thomas L Griffiths", "Noah D Goodman" ], "title": "How to grow a mind: Statistics, structure, and abstraction", "venue": null, "year": 2011 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn: Introduction and overview", "venue": "In Learning to learn,", "year": 1998 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "ZD Zhang" ], "title": "Conjectures on the exact solution of three-dimensional (3d) simple orthorhombic ising lattices", "venue": "Philosophical Magazine,", "year": 2007 } ]
[ { "heading": "1 INTRODUCTION", "text": "While machine learning has supported tremendous progress in artificial intelligence, a major weakness – especially in comparison to humans – has been its relative inability to learn structured representations, such as compositional grammar rules, causal graphs, discrete symbolic objects, etc. (Lake et al., 2017). One way that humans acquire these structured forms of reasoning is via “learning-to-learn”, in which we improve our learning strategies over time to give rise to better reasoning strategies (Thrun & Pratt, 1998; Griffiths et al., 2019; Botvinick et al., 2019). Inspired by this, researchers have renewed investigations into meta-learning. Under this approach, a model is trained on a family of learning tasks based on structured representations such that they achieve better performance across the task distribution. This approach has demonstrated the acquisition of sophisticated abilities including model-based learning (Wang et al., 2016), causal reasoning (Dasgupta et al., 2019), compositional generalization (Lake, 2019), linguistic structure (McCoy et al., 2020), and theory of mind (Rabinowitz et al., 2018), all in relatively simple neural network models. The meta-learning approach, along with interaction with designed environments, has also been suggested as a general way to automatically generate artificial intelligence (Clune, 2019). These approaches have made great strides, and have great promise, toward closing the gap between human and machine learning.\nHowever, in this paper, we argue that significant challenges remain in how we evaluate whether structured forms of reasoning have indeed been acquired. There are often multiple strategies that\ncan result in good meta-test performance, and there is no guarantee a priori that meta-learners will learn the strategies we intend when generating the training distribution. Previous work on metalearning structured representations do partially acknowledge this. In this paper, we highlight these challenges more generally. At the end of the day, meta-learning is simply another learning problem. And similar to any vanilla learning algorithm, meta-learners themselves have inductive biases (which we term meta-inductive bias). Note that meta-learning is a way to learn inductive biases for vanilla learning algorithms Grant et al. (2018). Here, we consider the fact the meta-learners themselves have inductive biases that impact the kinds of strategies (and inductive biases) they prefer to learn.\nIn this work, the kind of structure we study is that imposed by compositionality, where simple rules can be recursively combined to generate complexity (Fodor et al., 1988). Previous work demonstrates that some aspects of compositionality can be meta-learned (Lake, 2019). Here, we introduce a broader class of compositionally generated task environments using an explicit generative grammar, in an interactive reinforcement learning setting. A key contribution of our work is to also develop control task environments that are not generated using the same simple recursively applied rules, but are comparable in statistical complexity. We provide a rigorous comparison between human and meta-learning agent behavior in tasks performed in distributions of environments of each type. We show through three different analyses that human behavior is consistent with having learned the structure that results from our compositional rules in the structured environments. 
In contrast, despite training on distributions that contain this structure, standard meta-learning agents instead prefer (i.e. have a meta-inductive bias toward) more global statistical patterns that are a downstream consequence of these low-dimensional rules. Our results show that simply doing well at meta-test on tasks in a distribution of structured environments does not necessarily indicate meta-learning of that structure. We therefore argue that architectural inductive biases still play a crucial role in the kinds of structure acquired by meta-learners, and simply embedding the requisite structure in a training task distribution may not be adequate." }, { "heading": "2 EMBEDDING STRUCTURE IN A TASK DISTRIBUTION", "text": "In this work, we define a broad family of task distributions in which tasks take place in environments generated from abstract compositional structures, by recursively composing those environments using simple, low-dimensional rules. Previous work on such datasets (Lake & Baroni, 2018; Johnson et al., 2017) focuses primarily on language. Here we instead directly consider the domain of structure learning. This is a fundamental tenet of human cognition and has been linked to how humans learn quickly in novel environments (Tenenbaum et al., 2011; Mark et al., 2020). Structure learning is required in a vast range of domains: from planning (understanding an interrelated sequence of steps for cooking) and category learning (the hierarchical organization of biological species) to social inference (understanding a chain of command at the workplace, or social cliques in a high school). A task distribution based on structure learning can therefore be embedded into several domains relevant for machine learning.
Kemp & Tenenbaum (2008) provide a model for how people infer such structure. They present a probabilistic context-free graph grammar that produces a space of possible structures, over which humans do inference. A grammar consists of a start symbol S, terminal and non-terminal symbols Σ and V, as well as a set of production rules R. Different structural forms arise from recursively applying these production rules. This framework allows us to specify abstract structures (via the grammar) and to produce various instantiations of this abstract structure (via the noisy generation process), naturally producing different families of task environments, henceforth referred to as task distributions.
We consider three structures: chains, trees, and loops. These exist in the real world across multiple domains. Chains describe objects on a one-dimensional spectrum, like people on the left-right political spectrum. Trees describe objects organized in hierarchies, like evolutionary trees. Loops describe cycles, like the four seasons. Here we embed these structures into a grid-based task.
Exploration on a grid is an extensively studied problem in machine learning, particularly in reinforcement learning. Further, it is also a task that is easy for humans to perform on online crowdsourcing platforms – but not trivially so. This allows us to directly compare human and machine performance on the same task. Fig. 1 displays the symbols of the grammar we use and the production rules that give rise to grids of different structural forms." }, { "heading": "2.1 A TASK TO TEST STRUCTURE LEARNING", "text": "Here we describe the specific task built atop this embedding of structural forms. We use a tile-revealing task on the grid. 
Humans as well as agents are shown a 7 × 7 grid of tiles, which are initially white except for one red tile. The first red tile revealed at the beginning of the episode is the same tile as the initial start tile of the grid’s generative process (see Fig. 1). Clicking a white tile reveals it to be either red or blue. The episode finishes when the agent reveals all the red tiles. There is a reward for each red tile revealed, and a penalty for every blue tile revealed. The goal therefore is to reveal all the red tiles while revealing as few blue tiles as possible. The particular configuration of the red tiles defines a single task. The distribution of tasks for meta-learning is defined by the grammar from which these structures are sampled. Here, we randomly sampled from a uniform mixture of chains, trees, and loops as defined in Fig. 1." }, { "heading": "2.2 A STATISTICALLY EQUIVALENT NULL TASK DISTRIBUTION", "text": "Previous approaches to evaluating whether machine-learning systems can extract compositional structure (Lake & Baroni, 2018; Dasgupta et al., 2018) have relied on examining average performance on held-out examples from compositionally structured task distributions. However, we argue that this often fails to distinguish whether a system has truly internalized the underlying structure from whether it is relying on statistical patterns that come about as a consequence of compositional rules.
To directly examine whether structured reasoning is a factor in how humans and meta-learning agents perform this task, we need a control task distribution that is similar in statistical complexity; we generate one based on those statistics rather than on direct use of the compositional grammar. To this end, we trained a fully connected neural network (3 layers, 49 units each) to learn the conditional distribution of each tile given all the other tiles on the compositional boards. Note that these conditional distributions contain all the relevant statistical information about the boards. We do this by training on an objective inspired by masked language models like BERT (Devlin et al., 2018). The network was given a compositional board with a random tile masked out and trained to reproduce the entire board including the randomly masked tile. The loss was binary cross entropy between the predicted and actual masked tiles. The network was trained on all possible compositional boards for $10^4$ epochs, and achieved a training accuracy of ∼99%. We then sampled boards from these conditionals with Gibbs sampling. We started with a grid in which each tile was randomly set to red or blue with probability 0.5. We then masked out a tile and ran the grid through the network to get the conditional probability of the tile being red given the other tiles, turning the tile red with that probability. We repeated this by masking each tile in the 7 × 7 grid (in a random order) to complete a single Gibbs sweep, and repeated this whole Gibbs sweep 20 times to generate a single sample. We refer to the distribution of boards generated this way as the null task distribution. Fig. 2 shows example compositional and null distribution grids.
While the statistical structure looks similar, the non-compositional null boards shown could not have been generated by the grammar in Fig. 1. The conditional distributions for the two distributions are similar by design; we further quantify statistical similarity using Ising statistics (Zhang, 2007). We compared the 0th order, 1st order, and 2nd order effects defined as follows. 
The 0th order statistic corresponds to the number of red tiles minus the number of blue tiles. The 1st order statistic counts the number of agreeing neighbours (vertically or horizontally adjacent) minus the disagreeing ones, where agreeing means being of the same color. The 2nd order statistic is the number of triples (tile + its neighbor + its neighbor’s neighbor) that agree, minus those that don’t. Fig. 2b shows that the two distributions are not significantly different in terms of the Ising statistics measured (p > 0.05 for all three orders).
The principal difference between these two task distributions is the way in which they were generated. The compositional task distribution was generated through the recursive application of simple, low-dimensional rules that generate a mixture of three discrete structures, whereas the null task distribution was generated through a more complex Gibbs sampling procedure that is not explicitly compositional and does not utilize explicit simple, low-dimensional rules. Although it is true that some boards within the null task distribution may be consistent with a simple compositional grammar, the distribution as a whole was not generated through a compositional grammar." }, { "heading": "3 EXPERIMENTS", "text": "We analyze and compare the performance of standard meta-learners and human learning on our tile-revealing task. We test them on boards that are sampled from the generative grammar and contain explicit compositional structure, as well as on boards that are matched for statistical complexity, but are sampled from a null distribution that was constructed without using explicit compositional structure. Comparing performance across these two task distributions allows us to pinpoint the role of simple forms of structure as distinct from statistical patterns that arise as a downstream consequence of compositional rules based on such structure." }, { "heading": "3.1 METHODS", "text": "Meta-Reinforcement Learning Agent Following previous work in meta-reinforcement learning (Wang et al., 2016; Duan et al., 2016), we use an LSTM meta-learner that takes the full board as input, passes it through 2 fully connected layers (49 units each) and feeds that, along with the previous action and reward, to 120 LSTM units. It is trained with a linear learning rate schedule and a discount factor of 0.9. The reward function was: +1 for revealing red tiles, -1 for blue tiles, +10 for the last red tile, and -2 for choosing an already revealed tile. The agent was trained using Advantage Actor Critic (A2C) (Stable Baselines package; Hill et al., 2018). The agent was trained for $10^6$ episodes. We performed a hyperparameter sweep (value function loss coefficient, entropy loss coefficient, learning rate) using a held-out validation set for evaluation (see Appendix). The selected model’s performance was evaluated on held-out test grids. We trained different agents in the same way on the compositional and null task distributions, with separate hyperparameter sweeps for each.
Human Experiment We crowdsourced human performance on our task using Prolific (www.prolific.co) for a compensation of $1.50. Participants were shown the 7 × 7 grid on their web browser and used mouse-clicks to reveal tiles. Each participant was randomly assigned to the compositional or null task distribution, with 25 participants in each. Each participant was evaluated on the same test set of grids used to evaluate the models (24 grids from their assigned task distribution in randomized order). 
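To make the Ising statistics defined in Section 2.2 concrete, here is a minimal sketch. The ±1 color encoding and the restriction of triples to horizontal and vertical runs are our assumptions about details the text leaves implicit.

```python
# Minimal sketch of the 0th/1st/2nd order Ising statistics for a 7x7 board
# encoded with +1 for red tiles and -1 for blue tiles (encoding assumed).
import numpy as np

def ising_stats(board):
    s = np.asarray(board)
    # 0th order: number of red tiles minus number of blue tiles.
    order0 = int(s.sum())
    # 1st order: agreeing minus disagreeing adjacent pairs; the product of
    # two +/-1 entries is +1 exactly when the neighbors share a color.
    order1 = int((s[:, :-1] * s[:, 1:]).sum() + (s[:-1, :] * s[1:, :]).sum())
    # 2nd order: a triple (tile, neighbor, neighbor's neighbor) counts +1
    # when all three tiles agree, and -1 otherwise.
    def triples(a, b, c):
        agree = (a == b) & (b == c)
        return int(np.where(agree, 1, -1).sum())
    order2 = (triples(s[:, :-2], s[:, 1:-1], s[:, 2:])
              + triples(s[:-2, :], s[1:-1, :], s[2:, :]))
    return order0, order1, order2
```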
Note that a key difference between the human participants and model agents was that the humans did not receive training on either task distribution. While we are interested in examining whether agents can meta-learn abstract structures (by training on compositional task distributions), we assume that humans already have this ability from pre-experimental experience. Since participants had to reveal all red tiles to move on to the next grid, they were implicitly incentivized to be efficient (clicking as few blue tiles as possible) in order to finish the task quickly. We found that this was adequate to get good performance. A reward structure similar to that given to agents was displayed as the number of points accrued, but did not translate to monetary reward.
Evaluation Unless specified otherwise, performance is evaluated as the number of blue tiles revealed before all red tiles are revealed (lower is better). All error bars are 95% non-parametric bootstrap confidence intervals calculated across agents/participants. Non-overlapping confidence intervals indicate a significant difference, but we also include non-parametric bootstrapped p-values for differences across different samples (e.g. human vs agent)." }, { "heading": "3.2 RESULTS", "text": "In this section, we first describe human behavior on this novel task. We see that humans perform better on the compositional distribution, without extensive training and even while directly controlling for statistical complexity. We then compare human performance with that of a meta-learning agent, which has had extensive training on this task and therefore has had the chance to learn the structure relevant to this task distribution. We find significant qualitative and quantitative differences in behavior, and examine the role of meta-inductive bias – i.e. what kinds of cross-task structure do meta-learners prefer to represent? In particular, we consider compositional and spatial structure. Finally, we demonstrate the effect of an architectural change (adding convolutions) in the meta-learner that makes it easier for it to discover spatial structure. We demonstrate that, while this helps agent performance overall, it further highlights the divergence between human and agent behavior in learning the compositional rule-based structure in our task distributions.
Human performance: We found that participants perform better on the compositional task distribution than the null task distribution (see Fig. 3a). Despite not having been trained on this task beforehand, human participants do fairly well on this task from the outset, suggesting that humans might have some of the relevant inductive biases from pre-experimental experience. To test if there is learning within the experiment itself, we correlated trial number with the number of blue tiles revealed (Fig. 3b), and found improvement across both conditions but significantly greater improvement for the compositional distribution. Finally, we investigate performance on the null task distribution more closely. There is some overlap between the null and compositional distributions, because some of the generated null boards could have been generated by the same exact production rules of the generative grammar for the compositional task distribution. We split the null test set by whether or not the board is ‘compositional-passing’ and compare human performance across these. 
To do this, we generated the set of all possible compositional boards on a 7 × 7 grid and labeled any null task distribution board as compositional-passing if it happened to be a part of this set. We find that humans do significantly better on boards that could have been generated by the compositional production rules (Fig. 3c). This further suggests recognition and use of low-dimensional rules that align more closely with the compositional distribution than the null distribution.
Comparing human and agent performance: First, we note that the meta-learners perform relatively well on this task (Fig. 5), indicating that they have learned some generalizable information from the distribution of tasks. Since the test set consists of held-out boards from the compositional grammar, this might be taken as evidence that the agents discovered the compositional structure used to generate the boards. Here, we attempt to decouple this possibility – that is, that agents learn to infer and use the simple, low-dimensional compositional rules as humans appear to do – from the possibility that agents learn statistical patterns that are a consequence of the use of compositional rules.
We start with an example, involving the chain structure, that highlights the difference between human and agent policies on this task (Fig. 4). In this example, once humans figure out that the board is a chain structural form, they never deviate from the chain’s production direction while agents do. This suggests that humans are learning the simple generative rule of the chain form and using this rule to determine their actions, while the agent is using statistical patterns associated with the chain rule rather than the rule itself.
We now consider various ways to quantify this difference. First, we see that humans do better overall on both the compositional and null distributions (Fig. 5; p < 0.0001 for both task distributions). This is despite, unlike the agents, having no direct experience with this task. This suggests that humans have useful inductive biases from pre-experimental experience that are valuable in this task (Dubey et al., 2018); for example, the tendency to recognize and utilize low-dimensional, composable rules, and the tendency to look for spatial patterns. We discuss the role of these inductive biases in the following sections. The meta-learner has had extensive experience with each task distribution, and had the chance to discover the structure/inductive biases relevant for this task. The differences in performance indicate that standard meta-learners differ from humans in the kinds of structure/inductive biases they learn (i.e. in their meta-inductive biases).
Bias toward simple discrete rules First, we note that humans perform better on the compositional versus the null distribution (Fig. 5a), whereas the agent does better on the null task distribution than on the compositional tasks. This reflects a notable difference in their performance. We hypothesized that humans perform well on the compositional task distribution by first inferring what kind of structure is relevant for the current task, and then following the production rules for that structure. Since such structure was not used to create the null distribution, the act of learning, inferring, and using this structure is not as helpful in the null task distribution. Further, we hypothesized that the agents learn statistical patterns instead.
Fig. 4 supports this intuition but here we look to quantify it further. 
If a system represents a set of discrete classes of structures corresponding to our compositional rules, we would expect success rate (the rate of choosing red tiles) to be low at the beginning of a trial, while the system figures out which structure underlies the trial. Conversely, we would expect a higher success rate towards the end, while the system follows inferred production rules to reveal red tiles. To test this hypothesis, we split human and agent behavior in each trial into the first and last half, and examine success rate in each (Fig. 5b and c). For the compositional distribution, we find that humans have a higher success rate in the second half, providing support for our hypothesis. In contrast, we find that agent success rate does not increase over a trial, and in fact decreases. We also find that humans do not show increasing success rate in the null task distribution while agents do, providing further evidence for our hypothesis.
Bias toward spatial proximity. We note that humans outperform the agent even in the null task distribution, despite extensive training for the agent. One possibility is that good human performance in the null task is explained by their performance on the compositional-passing examples in the null task distribution (Fig. 3c). However, another possibility is that humans come to the task with strong inductive biases about spatial proximity.¹ While the starting tile for the grammar can be randomly chosen, the production rules operate over nearest-neighbour adjacencies. A system that has a bias toward local spatial structure might therefore perform better at the task.
We test this possibility by comparing performance to a heuristic that uses only local spatial information. This heuristic selects uniformly from the (unrevealed) nearest neighbors of a randomly selected red tile. We evaluated this heuristic 1,000 times on each test board and formed a z-score statistic by subtracting the mean heuristic performance from the human’s/agent’s performance on each board and dividing by the standard deviation of the heuristic’s performance. We find that humans do better than the neighbor heuristic (Fig. 6a), while the agent does not. This indicates that humans’ inductive bias for spatial proximity may partially explain the differences in performance across humans and agents.
We can give a neural network a bias toward spatial proximity using convolutions (LeCun et al., 1989). To test if this helps the agent, we replaced the agent’s first fully connected layer with a convolutional layer. We find that this agent outperforms humans on the null task distribution (Fig. 6b). We also find that it outperforms the spatial heuristic described above (Fig. 6c). Note that this strictly reduces the expressivity (i.e. the number of parameters) of the model, and any improvements are due to the right meta-inductive bias (i.e. the right architectural inductive bias given to the meta-learner). However, humans still perform better than the agent in the compositional task distribution. Crucially, this means that which distribution is easier is different for the human and the agent – humans find the compositional tasks easier than null, while the agent finds the null tasks easier than the compositional.¹
¹Spatial structure is shared by both distributions (Fig. 2) and can’t explain why humans are better at compositional tasks while agents are better at null. However, here we investigate whether it can explain why humans perform better overall.
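To make the spatial heuristic and its z-score baseline above concrete, here is a minimal sketch; the environment interface (the sets of revealed and revealed-red tiles) is a hypothetical stand-in, since the paper does not specify one.

```python
# Illustrative sketch of the nearest-neighbor heuristic and the z-score
# comparison; all names are assumptions, not the authors' code.
import random
import numpy as np

def neighbor_heuristic_step(revealed_red, revealed, size=7):
    """Pick an unrevealed neighbor of a randomly chosen revealed red tile.

    Returns None if the chosen red tile has no unrevealed neighbors;
    a full implementation would then resample another red tile.
    """
    x, y = random.choice(list(revealed_red))
    neighbors = [(x + dx, y + dy)
                 for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= x + dx < size and 0 <= y + dy < size
                 and (x + dx, y + dy) not in revealed]
    return random.choice(neighbors) if neighbors else None

def z_score(subject_blue_count, heuristic_blue_counts):
    """Standardize a human/agent score against repeated heuristic runs
    on the same board (1,000 runs per board in the paper)."""
    mu = np.mean(heuristic_blue_counts)
    sigma = np.std(heuristic_blue_counts)
    return (subject_blue_count - mu) / sigma
```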
This result exhibits a double dissociation between learning simple, abstract structure and statistical learning. It shows that the gap between humans and agents on the compositional task is not due to artificial meta-learners being overall worse learners than humans – the convolutional meta-learner actually outperforms humans on the null distribution of equal statistical complexity. This provides further evidence that the inductive bias toward, and representation of, simple abstract structure is what may give humans a competitive advantage over the agent on the compositional task distributions, and that meta-learners do not learn it despite access to compositional training distributions." }, { "heading": "4 DISCUSSION", "text": "The ability to recognize structure in environments, as well as learn and utilize this structure, is a central tenet of human intelligence (Lake et al., 2017). One example of this kind of structure is compositional grammars. These use simple, low-dimensional rules that can be recursively applied to produce arbitrary complexity and that generalize widely outside the training distribution. An inductive bias toward structured representations could be of great value to machine learning systems. Recent developments in meta-learning hold promise as an approach to endowing systems with such useful inductive biases. In this work, we make several methodological and scientific contributions to provide a rigorous way to test for structured forms of reasoning using compositional grammars as a case study. We show that human behavior is consistent with learning and utilizing low-dimensional compositional rules. We also show that standard meta-learning approaches, in sharp contrast to humans, struggle with discrete abstract structures and prefer statistical patterns.
Our first contribution is the development of compositionally structured, rule-based task distributions for meta-learning using explicit generative grammars (Kemp & Tenenbaum, 2008). Previous work on generating compositional datasets has focused on language. We argue that using explicit generative grammars has the dual advantage of being generalizable to a variety of structures, as well as being easy to embed in multiple domains relevant to machine learning. In this work, we embed this structure into a grid-based task. Grid-based tasks are commonly studied in reinforcement learning, are easy for humans to perform on online platforms, and behavior on this task is easy to visualize, analyze, and interpret. This provides fertile ground for direct comparisons between human and machine behavior, as we demonstrate in our experiments. Previous work on meta-learning compositionality uses performance on a compositional task distribution as an indicator for meta-learning this structure (Lake, 2019). However, we show that it is possible for meta-learning systems to perform well using statistical patterns instead.
Our second methodological contribution is to create distributions with statistical complexity comparable to the structured distribution that do not directly use the rules used to form the structured distribution. This control distribution allows us to disentangle statistical pattern matching from structured reasoning (which, in this specific case, is rule-based compositionality) and highlights the difference between actually learning and utilizing simple abstract structures (e.g. low-dimensional compositional rules) versus using the statistical patterns that may be a downstream consequence of those structures. 
Our method closely approximates the global statistics that emerge from the compositional rules by using a neural network to learn the conditional distributions and generating Gibbs samples from these conditionals. This approach is similar to masked language modelling (Devlin et al., 2018), and our findings—that this procedure generates statistically similar but not explicitly compositional distributions that are in fact easier for downstream networks to learn than the true compositional distribution—are also relevant to understanding the representations learned by these systems more broadly (Rogers et al., 2020).
In our experiments, we first show that humans have a bias toward using the compositional rule-based structure, while directly controlling for statistical complexity. This generalizes findings in the space of function learning (Schulz et al., 2017) to grid-based reinforcement learning tasks. Further, we find that agents (a recurrent network trained with model-free reinforcement learning, following Wang et al., 2016; Duan et al., 2016) find the non-compositional null distribution easier to learn than the compositional one. This is in direct contrast with human behavior, indicating that agents do not learn the same strategies that humans use through meta-learning. A follow-up experiment with a convolutional agent directly dissociates the effectiveness of statistical learning from the inductive bias toward compositional rules, and highlights learning and use of these simple, low-dimensional rules as the key difference between humans and agents in this task. In both sets of experiments, we find a double dissociation between humans and agents: humans find the compositional task easier than the null task, while the pattern is reversed for the agent. This indicates a significant difference (orthogonal to overall performance) between the kinds of strategies humans and agents use to solve this task. Our results therefore indicate that learning abstract structure, such as explicit compositional rules, remains difficult for artificial meta-learners – and that they prefer other statistical features when possible. In other words, they do not have a meta-inductive bias toward learning low-dimensional compositional rules.
Although the architecture we investigate here does not successfully meta-learn the ability to recognize and use abstract compositional rules, our point is not that this inductive bias cannot be meta-learned. Rather, it is that every meta-learning architecture has its own meta-inductive bias, and we show a specific case in which a standard and widely-used architecture’s meta-inductive bias leads to encoding statistical features rather than the low-dimensional compositional rules used to generate the task distribution. When setting out to meta-learn a structured representation using a meta-learning system, it is important to consider the meta-inductive bias of that system in addition to engineering its task distribution. Graph neural networks (Battaglia et al., 2018), neurosymbolic approaches (Ellis et al., 2020), as well as attention mechanisms (Mnih et al., 2014), permit abstraction by (implicitly or explicitly) decomposing the input into parts. 
Using these in meta-learning architectures might favor structured representations and reasoning.
Although we encourage exploring the space of architectural changes to give meta-learning agents a better chance to learn such structured forms of reasoning, it may also be true that simpler architectures can acquire relevant inductive biases if given a rich enough high-dimensional training environment that rivals the environment(s) in which the species has evolved and individuals learn (Hill et al., 2019). However, the amount of data required to acquire these structured forms of reasoning in “vanilla” architectures may be prohibitively large, making this approach largely infeasible. Further, even as training within these extraordinarily large environments becomes more feasible, the biases of the correspondingly large networks being used may continue to affect the ease with which they can learn structured representations. Therefore, it is still worthwhile to investigate the role of architectural modifications on the ability to meta-learn structured representations in smaller environments, such as the ones we present here, so that one day we may transfer those insights to training on larger, more naturalistic environments. An exciting direction for future work is to examine a range of approaches to learning structured representations with the tools we set forth in this paper, and to use the resulting insights to move toward closing the gap between human and machine intelligence." }, { "heading": "5 ACKNOWLEDGEMENTS", "text": "We thank Erin Grant for providing helpful comments on the initial version of the manuscript. S.K. is supported by NIH T32MH065214. This work was supported by the DARPA L2M Program and the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation." }, { "heading": "A APPENDIX", "text": "A.1 HYPERPARAMETER DETAILS FOR REINFORCEMENT LEARNING
We did a hyperparameter search over the following: value function coefficient, entropy coefficient, and learning rate. In particular, we evaluated each set of hyperparameters on a separate validation set, selected the highest-performing set, and re-trained the model to be evaluated on a previously unseen test set of boards. Note that the final test set is not seen by the model until the last, final evaluation step. Searches were run independently for both task distributions (compositional and null). The final selected hyperparameters for both task distributions were: value function coefficient = 0.000675, entropy coefficient = 0.000675, learning rate = 0.00235.
A.2 DESCRIPTION OF COMPOSITIONAL GRAMMAR
Here we provide an intuitive description of all the compositional grammar rules shown in Fig. 1.
Start Tile Each grammar begins with a start square somewhere on or after the 3rd column/3rd row of the 7 × 7 grid (so a grammar cannot start with a tile on the first or second row/column of the grid).
Probabilistic Production Each grammar rule corresponds to a particular abstract structure. These structures can vary in size based on how many times that grammar rule is applied. Whenever a grammar rule is applied, the grammar will terminate with probability p = 0.5. If a grammar rule cannot be applied again (e.g. 
the current tiles are on the edge and the next production would go off the 7 × 7 board), then the grammar automatically terminates.
Chain Production On the first chain production, the next two red tiles after the start tile $(s_x, s_y)$ will be either $(s_x - 1, s_y), (s_x + 1, s_y)$ or $(s_x, s_y - 1), (s_x, s_y + 1)$. Any subsequent chain productions after the first one will follow the direction of the first production (for example, if the first chain production places red tiles on $(s_x - 1, s_y), (s_x + 1, s_y)$, the second would add $(s_x - 2, s_y), (s_x + 2, s_y)$).
Tree Production On the first tree production, the next two red tiles will be either $t_1 = (s_x + 1, s_y), t_2 = (s_x, s_y - 1)$ or $t_1 = (s_x + 1, s_y), t_2 = (s_x, s_y + 1)$ or $t_1 = (s_x - 1, s_y), t_2 = (s_x, s_y - 1)$ or $t_1 = (s_x - 1, s_y), t_2 = (s_x, s_y + 1)$. The tree production rule always builds in two orthogonal directions. On subsequent tree productions, one of the two red tiles added by the previous production is picked, and two orthogonal directions are picked for the next two red tiles. The defining characteristic of the tree structure is the “lack of loops”, which means there can never be a 2 × 2 sub-square of all red tiles. Therefore, a currently red tile $t$ is chosen as the center of production such that there exists a pair of tiles $t_1, t_2$ in orthogonal directions to $t$ for which making both $t_1, t_2$ red does not create a 2 × 2 sub-square of red tiles.
Loop Production On the first loop production, a 2 × 2 red sub-square will form in one of four directions by coloring three tiles surrounding the start square ($t_1 = (s_x + 1, s_y), t_2 = (s_x + 1, s_y + 1), t_3 = (s_x, s_y + 1)$ or $t_1 = (s_x - 1, s_y), t_2 = (s_x - 1, s_y - 1), t_3 = (s_x, s_y - 1)$ or $t_1 = (s_x - 1, s_y), t_2 = (s_x - 1, s_y + 1), t_3 = (s_x, s_y + 1)$ or $t_1 = (s_x + 1, s_y), t_2 = (s_x + 1, s_y - 1), t_3 = (s_x, s_y - 1)$). The next production rule will form another 2 × 2 red square surrounding a tile adjacent to the original 2 × 2 red square, such that the new 2 × 2 red square shares only one edge with the old 2 × 2 red square (see Fig. 1 and Fig. 2 for what this looks like exactly).
A.3 REWARD OF AGENTS OVER TRAINING EPISODES
A.4 ALL PERFORMANCE DIFFERENCES ACROSS HUMANS AND AGENTS FOR ALL CONDITIONS" } ]
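To ground the production rules in Appendix A.2, here is a minimal sketch of the chain rule (the tree and loop rules follow the same pattern of stochastic, recursive application); the function name, board encoding, and exact placement of the termination check are illustrative assumptions.

```python
# Illustrative sketch of the chain production rule from Appendix A.2;
# names and encoding are assumptions, not the authors' code.
import random

def sample_chain(start, p_stop=0.5, size=7):
    """Grow a chain from `start`, adding one tile at each end per production."""
    sx, sy = start
    direction = random.choice([(1, 0), (0, 1)])  # horizontal or vertical
    red = {start}
    k = 1  # number of productions applied so far
    while True:
        dx, dy = direction
        new_tiles = [(sx - k * dx, sy - k * dy), (sx + k * dx, sy + k * dy)]
        # If the next production would go off the board, terminate.
        if any(not (0 <= x < size and 0 <= y < size) for x, y in new_tiles):
            break
        red.update(new_tiles)
        # After each applied production, terminate with probability 0.5.
        if random.random() < p_stop:
            break
        k += 1
    return red
```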
2021
null
SP:21f106f8f8fa276557c2d46d25ab456370502f75
[ "This paper proposes an independent mechanism that divides hidden representations and parameters into multiple independent mechanisms. The authors claim that the mechanism benefits the computation of sparse tensors; it does learn better inductive biases than a sizeable monolithic model. This idea is particularly similar to Recurrent Independent Mechanisms (RIM) [1], mentioned in the paper. The main contribution of this work is introducing competition between independent mechanisms. The authors evaluate their models on the image transformer model, speech enhancement, and NLP tasks." ]
An important development in deep learning from the earliest MLPs has been a move towards architectures with structural inductive biases which enable the model to keep distinct sources of information and routes of processing well-separated. This structure is linked to the notion of independent mechanisms from the causality literature, in which a mechanism is able to retain the same processing as irrelevant aspects of the world are changed. For example, convnets enable separation over positions, while attention-based architectures (especially Transformers) learn which combination of positions to process dynamically. In this work we explore a way in which the Transformer architecture is deficient: it represents each position with a large monolithic hidden representation and a single set of parameters which are applied over the entire hidden representation. This potentially throws unrelated sources of information together, and limits the Transformer’s ability to capture independent mechanisms. To address this, we propose Transformers with Independent Mechanisms (TIM), a new Transformer layer which divides the hidden representation and parameters into multiple mechanisms, which only exchange information through attention. Additionally, we propose a competition mechanism which encourages these mechanisms to specialize over time steps, and thus be more independent. We study TIM on a large-scale BERT model, on the Image Transformer, and on speech enhancement and find evidence for semantically meaningful specialization as well as improved performance.
[]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Emergence of invariance and disentanglement in deep representations", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Ferran Alet", "Tomás Lozano-Pérez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "arXiv preprint arXiv:1806.10166,", "year": 2018 }, { "authors": [ "Bang An", "Jie Lyu", "Zhenyi Wang", "Chunyuan Li", "Changwei Hu", "Fei Tan", "Ruiyi Zhang", "Yifan Hu", "Changyou Chen" ], "title": "Repulsive attention: Rethinking multi-head attention as bayesian inference", "venue": null, "year": 2009 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Yoshua Bengio" ], "title": "Learning deep architectures for AI", "venue": "Now Publishers Inc,", "year": 2009 }, { "authors": [ "Steven Boll" ], "title": "Suppression of acoustic noise in speech using spectral subtraction", "venue": "IEEE Transactions on acoustics, speech, and signal processing,", "year": 1979 }, { "authors": [ "Hyeong-Seok Choi", "Hoon Heo", "Jie Hwan Lee", "Kyogu Lee" ], "title": "Phase-aware single-stage speech denoising and dereverberation with u-net", "venue": "arXiv preprint arXiv:2006.00687,", "year": 2020 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": null, "year": 1906 }, { "authors": [ "Hongyi Cui", "Shohei Iida", "Po-Hsuan Hung", "Takehito Utsuro", "Masaaki Nagata" ], "title": "Mixed multi-head self-attention for neural machine translation", "venue": "In Proceedings of the 3rd Workshop on Neural Generation and Translation,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yariv Ephraim", "David Malah" ], "title": "Speech Enhancement Using a Minimum Mean-Square Error Short-Time Spectral Amplitude Estimator", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 1984 }, { "authors": [ "Szu-Wei Fu", "Chien-Feng Liao", "Tsun-An Hsieh", "Kuo-Hsuan Hung", "Syu-Siang Wang", "Cheng Yu", "Heng-Cheng Kuo", "Ryandhimas E Zezario", "You-Jin Li", "Shang-Yi Chuang" ], "title": "Boosting objective scores of speech enhancement model through metricgan post-processing", "venue": "arXiv preprint arXiv:2006.10296,", "year": 2020 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Jordan Hoffmann", "Shagun Sodhani", "Sergey Levine", "Yoshua Bengio", "Bernhard Schölkopf" ], "title": "Recurrent independent mechanisms", "venue": null, "year": 1909 }, { "authors": [ "Yanxin Hu", "Yun Liu", "Shubo Lv", "Mengtao Xing", "Shimin Zhang", "Yihui Fu", "Jian Wu", "Bihong Zhang", "Lei Xie" ], "title": "Dccrn: Deep complex convolution recurrent network for phase-aware speech enhancement", "venue": null, "year": 2008 }, { "authors": [ "Umut Isik", "Ritwik Giri", "Neerad Phansalkar", "Jean-Marc Valin", "Karim Helwani", "Arvindh Krishnaswamy" ], "title": "Poconet: Better speech enhancement with frequency-positional embeddings, semisupervised conversational data, and biased loss", "venue": null, "year": 2008 }, { "authors": [ "Jesper R. Jensen", "Jingdong Chen" ], "title": "Speech Enhancement–A Signal Subspace Perspective", "venue": null, "year": 2015 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Jaeyoung Kim", "Mostafa El-Khamy", "Jungwon Lee" ], "title": "T-gsa: Transformer with gaussian-weighted self-attention for speech enhancement", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Philipp Koehn", "Hieu Hoang", "Alexandra Birch", "Chris Callison-Burch", "Marcello Federico", "Nicola Bertoldi", "Brooke Cowan", "Wade Shen", "Christine Moran", "Richard Zens", "Chris Dyer", "Ondrej Bojar", "Alexandra Constantin", "Evan Herbst. Moses" ], "title": "Open source toolkit for statistical machine translation", "venue": "In ACL,", "year": 2007 }, { "authors": [ "Yuichiro Koyama", "Tyler Vuong", "Stefan Uhlich", "Bhiksha Raj" ], "title": "Exploring the best loss function for dnn-based low-latency speech enhancement with temporal convolutional networks", "venue": "arXiv preprint arXiv:2005.11611,", "year": 2020 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Y. Cortes C. LeCun", "C.J. 
Burges" ], "title": "The mnist database of handwritten digits", "venue": "arXiv preprint arXiv:2001.08361,", "year": 1998 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Michael F Mathieu", "Junbo Jake Zhao", "Junbo Zhao", "Aditya Ramesh", "Pablo Sprechmann", "Yann LeCun" ], "title": "Disentangling factors of variation in deep representation using adversarial training", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Sarthak Mittal", "Alex Lamb", "Anirudh Goyal", "Vikram Voleti", "Murray Shanahan", "Guillaume Lajoie", "Michael Mozer", "Yoshua Bengio" ], "title": "Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules", "venue": "arXiv preprint arXiv:2006.16981,", "year": 2020 }, { "authors": [ "Giambattista Parascandolo", "Niki Kilbertus", "Mateo Rojas-Carulla", "Bernhard Schölkopf" ], "title": "Learning independent causal mechanisms", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Sungrae Park", "Geewook Kim", "Junyeop Lee", "Junbum Cha", "Ji-Hoon Kim Hwalsuk Lee" ], "title": "Grouptransformer: Towards a lightweight character-level language model, 2020", "venue": "URL https:// openreview.net/forum?id=rkxdexBYPB", "year": 2020 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Mirco Ravanelli", "Maurizio Omologo" ], "title": "Contaminated speech training methods for robust DNNHMM distant speech recognition", "venue": "In Proc. of Interspeech,", "year": 2015 }, { "authors": [ "Chandan KA Reddy", "Vishak Gopal", "Ross Cutler", "Ebrahim Beyrami", "Roger Cheng", "Harishchandra Dubey", "Sergiy Matusevych", "Robert Aichner", "Ashkan Aazami", "Sebastian Braun" ], "title": "The interspeech 2020 deep noise suppression challenge: Datasets, subjective testing framework, and challenge results", "venue": "arXiv preprint arXiv:2005.13981,", "year": 2020 }, { "authors": [ "Salah Rifai", "Yoshua Bengio", "Aaron Courville", "Pascal Vincent", "Mehdi Mirza" ], "title": "Disentangling factors of variation for facial expression recognition", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Antony W Rix", "John G Beerends", "Michael P Hollier", "Andries P Hekstra" ], "title": "Perceptual evaluation of speech quality (pesq)-a new method for speech quality assessment of telephone networks and codecs", "venue": "In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2001 }, { "authors": [ "Eric Ronco", "Henrik Gollee", "Peter J Gawthrop" ], "title": "Modular neural networks and self-decomposition", "venue": "Technical Report CSC-96012,", "year": 1997 }, { "authors": [ "Pascal Scalart", "Jozué V. 
Filho" ], "title": "Speech enhancement based on a priori signal to noise estimation", "venue": "In Acoustics, Speech, and Signal Processing,", "year": 1996 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Joachim Thiemann", "Nobutaka Ito", "Emmanuel Vincent" ], "title": "The diverse environments multi-channel acoustic noise database: A database of multichannel environmental noise recordings", "venue": "The Journal of the Acoustical Society of America,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jesse Vig", "Ali Madani", "Lav R Varshney", "Caiming Xiong", "Richard Socher", "Nazneen Fatema Rajani" ], "title": "Bertology meets biology: Interpreting attention in protein language models", "venue": null, "year": 2006 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "CoRR, abs/1804.07461,", "year": 2018 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Richard Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In arXiv preprint arXiv:1506.06724,", "year": 2015 }, { "authors": [ "BERT (Devlin" ], "title": "2018) is one of the most popularly used methods to learn the representation of natural language. The BERT model uses a multi-layer Transformer encoder and is trained by the masked language modeling task using Web data corpus", "venue": "(Liu et al.,", "year": 2019 }, { "authors": [ "Devlin" ], "title": "2018), we use English Wikipedia corpus2 and BookCorpus3 for pre-training. By concatenating these two datasets, we obtain a corpus with roughly 3400M words in total. We follow a couple of consecutive pre-processing steps: segmenting documents into sentences by Spacy 4, normalizing, lower-casing, and tokenizing the texts by Moses decoder", "venue": "(Koehn et al.,", "year": 2007 }, { "authors": [ "Liu" ], "title": "BPE) (Sennrich et al., 2016) with setting the vocabulary size", "venue": null, "year": 2019 }, { "authors": [ "Adam Kingma", "Ba" ], "title": "optimizer, and set the hyper-parameter β as (0.9", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "A major theme throughout the history of deep learning has been the introduction of inductive biases in neural architectures, more recently with a focus on the ability to dynamically keep distinct types of information separated. While an MLP architecture has one large hidden representation at each layer, a convnet keeps different spatial positions’ representations separated by default. This separation enables more appropriate reuse of parameters, improving generalization (e.g. compared with a fully connected MLP) by ensuring that some parts of the hidden representation capturing some aspects of the data can remain unchanged when other aspects are changed. Additionally, it is important to be able to reuse parameters in all situations where the parameters are relevant, and not use parameters in positions where they are irrelevant, and this is where attention mechanisms can be very useful.\nWhile dividing information between different positions (for example time steps or spatial positions) is already very useful, it has been recognized from the earliest deep learning work on the notion of disentangling (Bengio, 2009; Glorot et al., 2011; Rifai et al., 2012; Mathieu et al., 2016; Achille & Soatto, 2018) that other features of the data could advantageously be kept well-separated, even over overlapping sets of positions. This has suggested the idea that a model can be decomposed into multiple components, which are often called modules, each operating on a different set of features. Modularity has been identified as an essential ingredient for generalization in machine learning (Ronco et al., 1997; Alet et al., 2018; Goyal et al., 2019). The motivating intuition is that if the relationship between the modules changes between training and evaluation, then a model which keeps these modules sufficiently separate but can adapt how they are combined could be more robust. It can even be robust to changes where the overall data distribution differs between training and evaluation. This has been studied in the causality literature through the notion of “Independent Mechanisms”\n(Peters et al., 2018; Parascandolo et al., 2018) or causal modules, which can be flexibly re-combined, re-used, and re-purposed.\nWhile modularity and independent mechanisms ideas are closely related, the latter has a special focus on the notion that mechanisms should have the ability to remain unchanged when unrelated aspects of the world are changed. In that sense it is a more specific idea which builds on the more general concept of modularity. While the study of independent mechanisms in the context of deep architectures is relatively recent (Goyal et al., 2019; Mittal et al., 2020), a few ideas are considered central. One is that mechanisms are separately parameterized (or dynamically parameterized, with the possibility of separation), which means that the function computed by a module remains the same even as other mechanisms need to be changed. Another central idea is specialization between mechanisms, which is the idea that mechanisms should seek to only model some parts of the world. One way to help accomplish this is by forcing the mechanisms to compete to explain different positions (in time or space), such that some mechanisms would not be used by the model on positions where they are less relevant.\nIn this work we explore how the idea of independent mechanisms can be beneficial in the Transformer architecture. 
Transformers (Vaswani et al., 2017) are based on information sharing across positions controlled dynamically by a soft-attention mechanism (Bahdanau et al., 2014), while still using a fully-connected MLP to process the extracted feature vectors (concatenated over a set of attention heads) at each position. An important way in which this improves over convnets is that if this attention becomes sufficiently sparse, then it gains the ability to keep information well-separated between different positions. At the same time, at each position, the Transformer stores a single monolithic hidden representation, over which it applies its entire set of parameters. For example, if we consider a generative model of images of animals in a field, then some of the parameters, like those describing how animals have symmetric eyes or a certain number of feet, are only relevant for the positions in the image where the animal is present. A normal Transformer, however, would apply the same parameters to the entire hidden representation at all spatial positions. Additionally, if sources of information need to be accessed over multiple positions, it has no way to keep that information well-separated between parts of the hidden representation, unless a large fraction of the parameters are set to exactly zero. In practice, models tend not to learn these sorts of highly sparse parameter matrices, as doing so is not necessary in order to fit the training set. Thus different underlying factors tend to be freely blended together rather than disentangled: we hypothesize and show empirically that this leads to deteriorated generalization when something about some of these factors changes.
Our newly proposed technique, which we call Transformers with Competitive Independent Mechanisms (TIM), seeks to address this limitation of the Transformer by dividing the hidden representation and parameters into multiple distinct mechanisms. These mechanisms perform self-attention (over input elements) separately, and information is exchanged sparingly between the mechanisms using attention. Thus the model is naturally compelled to keep multiple information signals well-separated, even within a single position. Moreover, only the parameters corresponding to an activated mechanism are called upon, focusing on one aspect of the hidden representation. The process of selectively activating some mechanisms and not others relies on competition between mechanisms, just like in recurrent independent mechanisms (RIMs) (Goyal et al., 2019). We hypothesize and show empirically that this provides an inductive bias encouraging the mechanisms to be more independent and specialized, and more robust to changes only affecting other mechanisms." }, { "heading": "2 TRANSFORMERS WITH COMPETITIVE INDEPENDENT MECHANISMS", "text": "" }, { "heading": "2.1 PRELIMINARIES", "text": "Multihead Self-attention sub-layer The attention mechanism can be formulated as querying a dictionary with key-value pairs (Bahdanau et al., 2014; Vaswani et al., 2017), e.g., Attention(Q, K, V) = softmax(QK^T / sqrt(d_model)) V, where d_model is the dimensionality of the hidden representations and Q (Query), K (Key), V (Value) are the hidden representations of the previous layer in the so-called self-attention sub-layers of the Transformer architecture. 
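Before turning to the multi-head variant below, here is a minimal PyTorch sketch of this scaled dot-product attention (a toy illustration, not the authors' code; following the text, we scale by the square root of the key feature dimension):

```python
import torch
import torch.nn.functional as F

def attention(Q, K, V):
    # Q: (..., n_queries, d), K: (..., n_keys, d), V: (..., n_keys, d_v)
    scores = Q @ K.transpose(-2, -1) / (K.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ V  # (..., n_queries, d_v)
```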
The multi-head variant of attention allows the model to jointly attend to information from different representation subspaces, and is defined as Multihead(Q, K, V) = Concat(head_1, ..., head_H) W^O, with the heads defined as head_k = Attention(Q W_k^Q, K W_k^K, V W_k^V), where W_k^Q ∈ R^{d_model × d_K}, W_k^K ∈ R^{d_model × d_K}, W_k^V ∈ R^{d_model × d_V}, and W^O ∈ R^{H d_V × d_model} are projection parameter matrices, H is the number of heads, and d_K and d_V are the dimensionalities of the Key and Value.
Group Linear Layer: It takes multiple hidden representations and applies a separately parameterized linear transformation to each. This operation can be efficiently implemented using batched matrix multiplications. We set the number of groups to n_s and define a weight tensor W ∈ R^{n_s × d_in × d_out}. If the input h is shaped as h ∈ R^{n_s × d_in}, then we can define the layer as GroupLinear(h, W, n_s) = [h_j W_j]_{j=1}^{n_s}." }, { "heading": "2.2 TIM ALGORITHM", "text": "We first lay out the parts of a TIM layer and then give more detailed steps in Algorithm 1. We then give a high-level description of how to turn a transformer layer into a TIM layer in a typical implementation (Section 2.3). An illustration of how independent mechanisms differ from heads is given in Figure 1." }, { "heading": "2.2.1 COMPETITION BETWEEN DIFFERENT MECHANISMS", "text": "Aside from having separate parameters and only exchanging information via inter-mechanism attention, we wanted to create a stronger inductive bias to encourage the mechanisms to specialize. To do this, we created a competition system in which each mechanism has a layer which outputs a single scalar value (as a function of the current layer’s representation), and these are passed through a softmax over the different mechanisms (this softmax is applied position-wise and separately for each layer). The value of this softmax is then used to weight how much each mechanism is allowed to update its representation after the self-attention. This competition score is computed as c = softmax(GroupLinear(h, W^c, n_s)), where we note that each mechanism has its own parameters for the layer (hence the use of a Group Linear layer instead of a normal linear layer). Thus the n_s modules have a per-step weighting for how much they are able to read during the later self-attention stage. As a result, if one mechanism wants to perform attention on a given position, it suppresses the other mechanisms at that position. We found that this often improved results and that these softmax scores are fairly interpretable as a measure of specialization. Exact equations for this step are given in Step 1 and used in Step 2 in Algorithm 1 in the appendix." }, { "heading": "2.2.2 EACH MECHANISM SHARES INFORMATION ACROSS TIME AND PROCESSES INFORMATION", "text": "This step allows each mechanism to have its own independent dynamics, which are themselves similar to a normal transformer layer. These independent dynamics allow each mechanism to read information from other time steps using attention and process that information using FFN layers. We modify the self-attention sub-layer and feed-forward sub-layers (FFN) to be mechanism-wise as well as position-wise, with separate parameters for each mechanism. Additionally, the layer-normalization is modified to be performed separately for each mechanism. The projections and FFN sub-layers can be modified by replacing the linear layers with group linear layers. 
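A minimal PyTorch sketch of the group linear layer and the mechanism-competition score defined above (shapes, names, and the initialization scale are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class GroupLinear(nn.Module):
    """Applies a separate linear map to each of n_s groups via a batched matmul."""
    def __init__(self, n_s, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_s, d_in, d_out) * d_in ** -0.5)

    def forward(self, h):
        # h: (T, B, n_s, d_in) -> (T, B, n_s, d_out); einsum batches over the groups.
        return torch.einsum('tbnd,nde->tbne', h, self.weight)

class MechanismCompetition(nn.Module):
    """One scalar score per mechanism, softmax-normalized across mechanisms."""
    def __init__(self, n_s, d_mech):
        super().__init__()
        self.score = GroupLinear(n_s, d_mech, 1)  # separate parameters per mechanism

    def forward(self, h):
        # h: (T, B, n_s, d_mech) -> competition weights c: (T, B, n_s, 1)
        return torch.softmax(self.score(h), dim=2)
```

The softmax over the mechanism axis (dim=2) is what makes the scores competitive: raising one mechanism's weight at a position necessarily lowers the others'.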
When performing the self-attention itself, the mechanisms behave the same as heads, and thus we can use the same type of multi-head attention process, so long as the total number of heads is divisible by the number of mechanisms. One notable property: if TIM consisted of only this part of the model (the independent dynamics), then each mechanism would be a completely independent transformer model with its own forward pass and its own parameters. Steps 2 and 4 in the appendix, Algorithm 1 give more detail on this step." }, { "heading": "2.2.3 ATTENTION IS USED TO COMMUNICATE INFORMATION BETWEEN DIFFERENT MECHANISMS", "text": "Although we allow each TIM to remain independent and process information independently, it is also important to allow the different mechanisms in TIM to share information with each other (in case the mechanisms are not truly fully independent). To do this we use a standard multi-head attention sub-layer to share information between the mechanisms, which is done in a position-wise fashion. We made this attention mechanism relatively small, with just 2 heads of 32 units each. This is because we want the different mechanisms to be as independent as possible, and thus only share small amounts of high-level information. This can be thought of as another attention layer, where we treat the different mechanisms as positions, and perform this attention in parallel over the different steps in the sequence. More details on this are given in Step 3 in the appendix’s Algorithm 1." }, { "heading": "2.3 IMPLEMENTING AND INTEGRATING TIM", "text": "The TIM layer is a drop-in replacement for a standard Transformer layer, and turning an existing Transformer layer into a TIM layer is surprisingly straightforward; the resulting layer can be used flexibly in a variety of models and architectures (both encoders and decoders). A simple strategy: if a normal hidden representation has shape (T, b, d_model), then the TIM hidden representation should be reshape-able to (T, b, n_s, d_model/n_s). First, each linear layer in the existing Transformer layer should be replaced by a group-wise linear layer implemented using batch matrix multiplication. Second, so long as the number of heads is divisible by the number of mechanisms, the self-attention does not need to be changed, since mechanisms behave interchangeably with heads in this part of the model. Third, the inter-mechanism communication can be added as a drop-in module into the Transformer layer. Finally, the competition layer is just a single layer with a softmax, which can easily be added.
Although TIM is a drop-in replacement for a normal Transformer layer, there are a few subtleties that must be considered for successful integration. First, if the total size of the hidden representation is kept the same, integrating TIM drastically reduces the total number of parameters in the model because all of the linear layers are replaced by grouped-linear layers (which can be thought of as having a block-sparse structure). This step by itself reduces the number of parameters by a factor of n_s, but a TIM layer also adds new parameters to the model through the addition of the Inter-mechanism Attention Sub-Layer and Mechanism-Competition Sub-Layers, although both of these are rather small. In practice a TIM layer usually reduces the number of parameters by about 30-40%, depending on the exact hyperparameters; a back-of-the-envelope version of this accounting is sketched below. 
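As referenced above, a worked illustration of this parameter accounting (hypothetical sizes, not the paper's exact configurations):

```python
d_model, n_s = 512, 2

# A dense d_model x d_model projection vs. its group-linear replacement,
# which holds n_s independent (d_model/n_s) x (d_model/n_s) blocks.
dense_params = d_model * d_model             # 262,144
group_params = n_s * (d_model // n_s) ** 2   # 131,072
print(dense_params / group_params)           # -> 2.0, i.e. a factor of n_s
```

The small inter-mechanism attention and competition sub-layers add some parameters back, which is consistent with the net 30-40% reduction quoted above.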
To compensate for this, in all of our experiments we increased the total hidden size to match the number of parameters of the original model, usually by about 20%.\nAdditionally, while we initially thought that it would make sense to replace every Transformer layer with a TIM layer, when we analyzed the mechanism-competition, we found that it was almost always completely flat on the first layer, which suggested to us that the first two layers as well as the last layer should be kept as normal Transformer layers." }, { "heading": "3 RELATED WORK", "text": "Specialization and Competition over heads in Transformers. Cui et al. (2019) proposed a mixed multi-head attention mechanism which forces some heads to learn specific patterns, such as attending to precedent/local tokens only. Clark et al. (2019) studied which positions attention heads focus on and found that some heads have specific patterns, such as attending locally. Vig et al. (2020) showed that the heads in a model of protein sequences are semantically meaningful. An et al. (2020)\nconsidered adding a repulsive force to the heads in Transformers to try to make them more specialized. In our view, this evidence for specialization over heads is complementary with our results.\nIndependent Mechanisms and Modularity in Transformers. We’re not aware of any work which breaks a Transformer’s hidden representation into multiple mechanisms with separate parameters which interact through attention, though some works hint at this direction. The Group Transformer (Park et al., 2020) replaces the fully-connected layers with group-linear layers and uses low-rank layers to pass information between the groups. The universal transformer (Dehghani et al., 2018) shared parameters between layers and updated using gating, and this gating could behave similarly to the competition that we propose but lacks the idea of having multiple mechanisms.\nIndependent Mechanisms in Recurrent Networks. The idea of independent mechanisms has seen a significant amount of focus in recurrent networks (Goyal et al., 2019). The idea is to parameterize the model as an ensemble of mechanisms, having their own dynamics, but sparingly interacting with each other using a bottleneck of attention. In the case of recurrent networks, dividing the hidden representation into mechanisms has the advantage that at a particular time-step, only a fraction of mechanisms can be active, and hence computation is sparse in time, where in the case of transformers, imposing the idea of independent mechanisms in some higher layers has the added advantage that computation can be sparse both in space (i.e., position ) as well as time." }, { "heading": "4 EXPERIMENTS", "text": "We seek to answer two questions in our experiments. First, do the mechanisms that we learn with TIM specialize in sensible and semantically meaningful ways? We analyze this both on toy datasets where we have clearly independent mechanisms by construction (Figure 2) and on large-scale realistic speech and NLP tasks (Figure 3 and Figure 4). Our second question is how using a model which learns these independent mechanisms leads to better quantitative performance, both on the original task and on transfer learning, which we demonstrate in Figure 1 and Table 2." 
}, { "heading": "4.1 IMAGE TRANSFORMER: EVIDENCE OF SPECIALIZATION", "text": "We integrated TIM into the Image Transformer, which is a generative model which generates an image pixel-by-pixel, with a small-scale variant of the GPT-2 architecture (Karpathy, 2020; Radford et al., 2019). We first considered a pedagogic task in which the dataset consists of two clearly independent mechanisms. Our synthetic task uses MNIST digits (LeCun & Burges, 1998) and CIFAR images Krizhevsky (2009) of small realistic images of animals and vehicles. Each example in our constructed dataset consists of an MNIST digit on its left-side and a CIFAR image on its right-side, with these two examples selected randomly. It is clear that two sides of the image are independent and have completely different types of content, and thus it is natural for each mechanism to specialize over a single side.\nWhen training with TIM on this dataset, we found that we were able to nearly exactly recover a competition pattern in which the mechanisms specialize over the two sides of the image (Fig. 2, middle). Intriguingly, this specialization does not appear at the very beginning of training, in which the mechanisms mostly specialize over the lightness or darkness of the pixels. However as training progresses, the two sides of the image become increasingly specialized to one mechanism or the other (Figure 2). We also experimented with the CIFAR-10 dataset, and found that integrating TIM led to superior test-set likelihoods. Moreover we visualized the competition pattern with TIM on CIFAR-10 and found a specialization between foreground and background regions in the images (Fig. 1, right)." }, { "heading": "4.2 SPEECH ENHANCEMENT", "text": "Speech enhancement aims to improve the quality of speech recordings. A speech signal captured in real environments, in fact, is often corrupted by noise and reverberation that might severely affect its intelligibility. Speech enhancement has long been studied in the research community (Jacob Benesty & Chen, 2015). Traditional approaches were based on signal processing techniques such as spectral-subtraction or Wiener filtering (Boll, 1979; Ephraim & Malah, 1984; Scalart & Filho, 1996). The idea behind these methods is to estimate the noise in non-speech segments and remove it from speech regions. End-to-end deep learning-based speech enhancement has turned out to significantly outperform traditional signal processing methods, and recently using Transformers\nhas led to promising performance (Kim et al., 2020). We believe that TIM fits well with this task because the traditional technique of decomposing the signal into speech and noisy parts and then analyzing these two signals separately embodies the desiderata of independent mechanisms.\nTable 1 (left) compares the performance achieved by TIM with other recent systems on the widelystudied Deep Noise Suppression (DNS) dataset (Reddy et al., 2020). DNS is a large corpus composed of roughly 441 hours of clean and noisy speech samples. The clean speech is artificially corrupted with noise sequences from the audioset database, which contains two million human-labeled clips drawn from YouTube videos and belong to about 600 audio events. Noise is added to the clean speech signal using a random signal-to-noise-ratio (SNR) ranging from 0 to 40 dB. 
We replaced all Transformer layers except for the first two and the last layer with TIM layers, increased the total number of hidden units and heads (by about 20%) to match the number of parameters of the baseline, and used two mechanisms (slightly worse, yet still better-than-baseline, results were achieved with n_s = 4). The systems are evaluated with the Perceptual Evaluation of Speech Quality (PESQ) score (Rix et al., 2001). To assess the generalization capability of TIM, we tested our model on the Voicebank test set as well (see Table 1, right). Voicebank (Thiemann et al., 2013), in fact, is characterized by noisy conditions different from those of the DNS dataset used for training.
The results, shown in Table 1, highlight that TIM slightly outperforms the recently-proposed PoCoNet (Isik et al., 2020) model, which uses additional data and has 8 times the parameters of TIM. To the best of our knowledge, TIM achieves the best PESQ performance so far published in the literature on the DNS dataset. Qualitatively, we found that the competition scores match our intuition. Indeed, the two mechanisms clearly specialize over speech and non-speech parts of the audio sequence, as shown in Figure 3. Moreover, we intriguingly found that this competition between mechanisms is consistent across layers, starts out with low confidence, and becomes increasingly confident in later layers. Compared to a standard Transformer, TIM shows superior generalization capabilities. This interesting feature can be appreciated in Table 1 (right), where we tested our model on a different dataset (Voicebank). In mismatched conditions, the competition mechanism seems to play a crucial role. This finding agrees with our intuition, according to which employing specialized and competing modules can make the model less affected by irrelevant changes of the input distribution." }, { "heading": "4.3 BERT PRE-TRAINING AND FINE-TUNING", "text": "BERT (Devlin et al., 2018) is one of the most widely used methods to learn the representation of natural language. The BERT model uses a multi-layer Transformer encoder and is trained with the masked language modeling task on a Web data corpus (Liu et al., 2019). The pre-trained contextual sentence representations have been shown to be effective in a large number of downstream tasks.
For BERT, we replaced all of the transformer layers except for the first two layers and the last layer with TIM layers (we also report a result where all layers are TIM layers, showing that it leads to worse performance). We used two mechanisms and evenly increased the number of hidden units and total number of heads across all layers to match the number of parameters in the baseline model.
Pre-training Following Devlin et al. (2018), we used English Wikipedia corpus and BookCorpus for pre-training. By concatenating these two datasets, we obtained a corpus with roughly 3.4 billion words in total. We trained all model variants with the same procedure and hyperparameters, which were tuned on the BERT baseline model. All models were run on 16 NVIDIA Tesla V100 GPUs.
Fine-tuning We used MNLI, MNLI-MM, QNLI, SST-2 and STS-B from the GLUE (General Language Understanding Evaluation) dataset (Wang et al., 2018) as the downstream tasks to evaluate the performance of the pre-trained models. Ideally, the features learned by BERT would remain useful on these distinct tasks, which have relatively small training sets.
Results The overall comparison results are shown in Table 2. 
We found that both TIM-NoComp and TIM-Comp achieve lower perplexities (masked language modeling loss) on the validation dataset compared to the two BERT baselines. We found generally better and more reliable (less variance between seeds) results in fine-tuning experiments with TIM. These empirical results show that our proposed TIM is a better model architecture in a wide range of natural language applications." }, { "heading": "4.4 DISCUSSION: RNN MODULARITY VS. TRANSFORMER MODULARITY", "text": "A single-layer RNN is already a fairly powerful model which can have strong priors to inform how to select mechanisms (or modules more generally). However, a single-layer Transformer is a rather weak model, as it can only base its representations on a single round of attention based upon the individual tokens and their position encoding. We’ve consistently found that the quality of the mechanism-competition is poor in the first layer of a Transformer network and that performance is substantially improved by making the early layers use ordinary Transformer layers rather than TIM layers. This is in contrast with what has been observed with RNNs, where improvements can be obtained by using multiple modules or multiple mechanisms even in a single-layer model." }, { "heading": "5 CONCLUSION", "text": "Scaling to extremely large Transformers with a very large number of hidden units for each position has become one of the dominant paradigms in applied machine learning. This work explores a new direction in the structure of the Transformer architecture which will become increasingly important as models become larger and researchers seek to model more complex phenomena. Evidence suggests that the Transformer’s success is a result of its use of attention to communicate information between positions, which allows for effective and precise transmission of information even over very long sequences (Kaplan et al., 2020). At the same time, each position within a Transformer is still represented with a single monolithic hidden representation, and a set of parameters which is applied over the entire hidden representation. Our newly proposed technique, TIM, has shown that it is possible to make the Transformer even more dynamic by breaking the hidden representation and layers into multiple mechanisms which interact via attention and have an inductive bias towards specialization. We show that these mechanisms specialize over distinct parts of the data and improve results across diverse types of data. These results suggest that there is room to improve the structural inductive biases in the Transformer and point towards an increasingly central area of future research as state-of-the-art Transformers, and the tasks they’re trained on, become larger and more diverse." }, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 IMAGE TRANSFORMER DETAILS
For the Image Transformer, we used a baseline with 6 Transformer layers. As in the other experiments, we kept the first 2 layers, as well as the last layer, as ordinary Transformer layers and used TIM layers elsewhere. We ran each experiment for 30 epochs with a batch size of 24, and otherwise used the same training hyperparameters as the minGPT repository (Karpathy, 2020). We used the Adam optimizer with warmup, with betas = (0.9, 0.95). Our baseline model had 8 heads (total per-layer) and a layer hidden size of 184. When using TIMs, we increased this (for all layers) to 10 heads and a hidden size of 200. 
This led to the baseline and the TIM model having roughly the same number of total parameters.
A.2 SPEECH ENHANCEMENT DETAILS
Datasets Neural speech enhancement systems are trained using a parallel corpus of noisy and clean examples, which are generated by artificially contaminating clean speech with disturbances such as additive noise and reverberation (Ravanelli & Omologo, 2015). The speech enhancement models considered in this work are trained with the DNS dataset (noisy, no reverb) (Reddy et al., 2020), which is a synthetic corpus recently made publicly available by Microsoft. This corpus is well suited to our study because it is large (441 hours) and contains a large variety of possible noises (from 600 different categories). To the best of our knowledge, it is the biggest open-source speech enhancement dataset. Moreover, it has been the subject of an international challenge on speech enhancement^1. This allowed us to compare TIM with the best systems submitted to this competition.
For evaluation, we used the test sets of the DNS and Voicebank datasets (Thiemann et al., 2013). The latter is adopted to study a transfer-learning scenario, where different datasets are used for training and evaluation purposes. Voicebank, in fact, is generated with noise sequences different from those contained in the DNS corpus. Since Voicebank is released at 48 kHz, the original raw waveforms were downsampled from 48 kHz to 16 kHz.
Model Architecture The proposed TIM is fed with noisy speech and estimates clean speech at the output. More precisely, we estimate the log-spectral magnitude of the clean signal. The Mean Squared Error (MSE) between the estimated and the clean log-spectral magnitudes is used as the cost function. The input waveform is transformed with the Short-Time Fourier Transform (STFT) based on 512 frequency points and a window length of 32 ms with 16 ms overlap (a sketch of this front-end is given at the end of this subsection).
Before adding the transformer layers, we employ four 1D convolutional layers that act as a pre-encoder module. This is done to replace positional encoding from the original transformers and inject relative location information into the frames in the sequence (Kim et al., 2020; Fu et al., 2020). The four convolutional layers have 1024, 512, 128, and 256 channels, respectively. The kernel size is 3. After the convolution, each layer applies layernorm followed by LeakyReLU. The Transformer part is composed of 8 encoder blocks with a hidden size of 512. In order to employ approximately the same number of parameters (i.e., 6 million), the baseline transformers used a hidden size of 256. We used 16 attention heads, a dropout rate of 0.1, and LeakyReLU activations. We kept the number of heads the same as in the baseline model. To follow the real-time processing restriction in the DNS challenge, a causal setting is adopted for all our models with access to 32 ms of future frames. Attention masks are also applied to the self-attention layers to prevent the use of future information.
Training We followed the exact same training procedure for the baseline model and the TIM model, with both trained for 50 epochs. We used the standard variant of the Adam optimizer with a batch size of 16. The initial learning rate was set to 0.0002 and halved when the validation score decreased for 5 epochs. We reported test set performance at the epoch with the best validation score, which in practice was near the end of training. 
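As referenced in the Model Architecture paragraph above, a minimal sketch of such an STFT log-magnitude front-end (the Hann window and the epsilon are our assumptions; only the 512-point FFT, 32 ms window, and 16 ms hop are stated in the text):

```python
import torch

def log_magnitude_stft(wav, sr=16000):
    """512-point STFT with a 32 ms window and 16 ms hop at 16 kHz,
    returning log-spectral magnitudes of shape (freq_bins, frames)."""
    win_len = int(0.032 * sr)   # 512 samples
    hop = int(0.016 * sr)       # 256 samples
    spec = torch.stft(wav, n_fft=512, hop_length=hop, win_length=win_len,
                      window=torch.hann_window(win_len), return_complex=True)
    return torch.log(spec.abs() + 1e-8)
```

The MSE objective described above would then compare these log-magnitudes for the estimated and clean signals.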
Both models train for about 50 hours on a single Nvidia V100 GPU.
A.3 BERT PRE-TRAINING AND FINE-TUNING DETAILS
BERT (Devlin et al., 2018) is one of the most widely used methods to learn the representation of natural language. The BERT model uses a multi-layer Transformer encoder and is trained with the masked language modeling task on a Web data corpus (Liu et al., 2019). The pre-trained contextual sentence representations have been shown to be effective in a large number of downstream tasks.
To validate our proposed architecture, we conduct experiments to compare TIM with the Transformer on the language pre-training task. For our model, we replace all of the transformer layers except for the first two layers and the last layer with TIM layers (we also report a result where all layers are TIM layers, showing that it leads to worse performance). We scaled up the dimensionality of the hidden nodes and the inner-layer of the FFN sub-layer to match the baseline parameter count; the number of mechanisms is set to 2 and the number of heads is set to 16. We mainly test two TIM variants, TIM without competition (TIM-NoComp) and TIM with competition (TIM-Comp).
For a fair comparison, we set one baseline as a 12-layer Transformer with 130M parameters (BERT-130M). The size of hidden nodes and the inner-layer of the FFN sub-layer are set to 768/4096, and the number of heads is set to 12. We also use the standard BERT-Base model (110M parameters) as another baseline.
Dataset Following Devlin et al. (2018), we use English Wikipedia corpus^2 and BookCorpus^3 for pre-training. By concatenating these two datasets, we obtain a corpus with roughly 3400M words in total. We follow several consecutive pre-processing steps: segmenting documents into sentences with Spacy^4, normalizing, lower-casing, and tokenizing the texts with the Moses decoder (Koehn et al., 2007), and finally, applying byte pair encoding (BPE) (Sennrich et al., 2016), setting the vocabulary size |V| to 32,678.
Optimization Following the standard settings used in many previous works (Devlin et al., 2018; Liu et al., 2019), we train the models for 1000k steps with a batch size of 256 and a maximum sequence length of 512. For all compared models, we set the masking probability p to 0.15. Following previous work, we replace 80% of the masked positions by [MASK], 10% by randomly sampled words, and keep the remaining 10% unchanged (a code sketch of this scheme appears at the end of this subsection). We choose the widely used Adam (Kingma & Ba, 2014) as the optimizer, and set the hyper-parameter β as (0.9, 0.98). The learning rate is set to 1e-4 with a 10k-step warm-up stage and then decays linearly to zero. We set the dropout probability to 0.1. All models are run on 8 NVIDIA Tesla V100 GPUs.
Fine-tuning We use the GLUE (General Language Understanding Evaluation) dataset (Wang et al., 2018) as the downstream tasks to evaluate the performance of the pre-trained models, reporting results on MNLI, MNLI-MM, QNLI, SST-2, and STS-B. As in pre-training, we use Adam as the optimizer and set the hyper-parameter β as (0.9, 0.98). Following previous works, we apply a hyper-parameter search (over β and learning rate) during the fine-tuning for each downstream task. 
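As referenced in the Optimization paragraph above, a minimal sketch of the 80/10/10 masking scheme (a toy version under our assumptions; it does not exclude special tokens, which a real pipeline would):

```python
import torch

def mask_tokens(tokens, mask_id, vocab_size, p=0.15):
    """tokens: LongTensor of token ids. Returns (masked_tokens, labels)."""
    labels = tokens.clone()
    selected = torch.rand_like(tokens, dtype=torch.float) < p
    labels[~selected] = -100   # -100 = ignore_index for PyTorch cross-entropy

    u = torch.rand_like(tokens, dtype=torch.float)
    # 80% of selected positions -> [MASK]
    tokens = torch.where(selected & (u < 0.8),
                         torch.full_like(tokens, mask_id), tokens)
    # next 10% of selected positions -> a random token; remaining 10% unchanged
    tokens = torch.where(selected & (u >= 0.8) & (u < 0.9),
                         torch.randint_like(tokens, vocab_size), tokens)
    return tokens, labels
```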
Each configuration was run five times with different random seeds, and the median and standard deviation over these five results on the development set were used as the performance of one configuration.
Results The overall comparison results are shown in Table 2. Both TIM-NoComp and TIM-Comp achieve lower perplexities (masked language modeling loss) on the validation dataset compared to the two BERT baselines. On the downstream tasks, the two TIM variants are also slightly better than the BERT baselines on all tasks. These empirical results show that our proposed TIM is a better model architecture in a wide range of natural language applications.
Similar to the previous analysis, we further study the competition patterns in the TIM-Comp model to investigate how the competitive module behaves.
2 https://dumps.wikimedia.org/enwiki
3 As the dataset BookCorpus (Zhu et al., 2015) is no longer freely distributed, we follow the suggestions from Devlin et al. (2018) to crawl from smashwords.com and collect BookCorpus by ourselves.
4 https://spacy.io" }, { "heading": "B DETAILED ALGORITHM DESCRIPTION", "text": "Algorithm 1: A single TIM Encoder-Layer
Hyperparameters: number of mechanisms n_s, key size d_k, value size d_v, number of heads for self-attention H, number of heads for inter-mechanism attention H_c. We set d_mech = d_model / n_s and d_ffn-m = d_ffn / n_s.
Input: a hidden representation h for a single example, of shape (T, bs, d_model).
Step 1: Compute mechanism competition.
W^c ∈ R^{n_s × d_mech × 1}
c = softmax(GroupLinear(h, W^c, n_s))
Step 2: Mechanism-wise self-attention sub-layer.
W_2^Q, W_2^K ∈ R^{n_s × d_mech × H·d_K}, W_2^V ∈ R^{n_s × d_mech × H·d_V}, W_2^O ∈ R^{n_s × H·d_V × d_mech}
Q = GroupLinear(h, W_2^Q, n_s), K = GroupLinear(h, W_2^K, n_s), V = GroupLinear(h, W_2^V, n_s)
M := Attention(Q, K, V, n_s·H)
M := GroupLinear(M, W_2^O, n_s)
h := norm(h + c ⊙ M, n_s)
Step 3: Inter-mechanism Attention Sub-Layer.
W_3^Q, W_3^K ∈ R^{n_s × d_mech × H_c·d_K}, W_3^V ∈ R^{n_s × d_mech × H_c·d_V}, W_3^O ∈ R^{n_s × H_c·d_V × d_mech}
Q = GroupLinear(h, W_3^Q, n_s), K = GroupLinear(h, W_3^K, n_s), V = GroupLinear(h, W_3^V, n_s)
Reshape Q, K, and V to (n_s, T·bs, H_c·d).
M := Attention(Q, K, V, H_c)
Reshape M to (T, bs, n_s·H_c·d_v).
M := GroupLinear(M, W_3^O, n_s)
h := norm(h + M, n_s)
Step 4: Mechanism-wise, Position-Wise FFN Sub-Layer.
W^(1) ∈ R^{n_s × d_mech × d_ffn-m}, W^(2) ∈ R^{n_s × d_ffn-m × d_mech}
F = GroupLinear(σ(GroupLinear(h, W^(1))), W^(2))
h := norm(h + F, n_s)" } ]
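To make Step 3 of Algorithm 1 above concrete, a minimal PyTorch sketch of the inter-mechanism attention (for simplicity it uses torch's MultiheadAttention at width d_mech with H_c heads, assuming d_mech is divisible by H_c; the paper's version instead projects to H_c = 2 heads of 32 units each):

```python
import torch
import torch.nn as nn

class InterMechanismAttention(nn.Module):
    """Mechanisms exchange information position-wise (Step 3 of Algorithm 1)."""
    def __init__(self, d_mech, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=d_mech, num_heads=n_heads)
        self.norm = nn.LayerNorm(d_mech)

    def forward(self, h):
        # h: (T, B, n_s, d_mech). Fold time and batch together and treat the
        # n_s mechanisms as the sequence axis that attention runs over.
        T, B, n_s, d = h.shape
        x = h.reshape(T * B, n_s, d).transpose(0, 1)   # (n_s, T*B, d_mech)
        m, _ = self.attn(x, x, x)                      # attention across mechanisms
        x = self.norm(x + m)                           # residual + mechanism-wise norm
        return x.transpose(0, 1).reshape(T, B, n_s, d)
```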
2020
null
SP:eeab784f22aaf84838d021cc4c93a8707389d002
[ "The paper proposes a self-supervised approach for learning environment-level representations for embodied agents. The idea is that agents collect images and their corresponding poses during a walk-through phase. The images are clustered into multiple \"zones\". The zones are divided into seen and unseen zones. Using contrastive learning, the model is trained to distinguish the features of an unseen zone from the rest of the zones. The paper shows this approach improves performance over a number of baselines for Area Coverage, Flee, and Object Coverage tasks." ]
We introduce environment predictive coding, a self-supervised approach to learn environment-level representations for embodied agents. In contrast to prior work on self-supervised learning for images, we aim to jointly encode a series of images gathered by an agent as it moves about in 3D environments. We learn these representations via a zone prediction task, where we intelligently mask out portions of an agent’s trajectory and predict them from the unmasked portions, conditioned on the agent’s camera poses. By learning such representations on a collection of videos, we demonstrate successful transfer to multiple downstream navigation-oriented tasks. Our experiments on the photorealistic 3D environments of Gibson and Matterport3D show that our method outperforms the state-of-the-art on challenging tasks with only a limited budget of experience.
[]
[ { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D Jackel", "Mathew Monfort", "Urs Muller", "Jiakai Zhang" ], "title": "End to end learning for self-driving cars", "venue": "arXiv preprint arXiv:1604.07316,", "year": 2016 }, { "authors": [ "Angel Chang", "Angela Dai", "Tom Funkhouser", "Matthias Nießner", "Manolis Savva", "Shuran Song", "Andy Zeng", "Yinda Zhang" ], "title": "Matterport3d: Learning from rgb-d data in indoor environments", "venue": "In Proceedings of the International Conference on 3D Vision (3DV),", "year": 2017 }, { "authors": [ "Matthew Chang", "Arjun Gupta", "Saurabh Gupta" ], "title": "Semantic visual navigation by watching youtube videos", "venue": "arXiv preprint arXiv:2006.10034,", "year": 2020 }, { "authors": [ "Devendra Singh Chaplot", "Dhiraj Gandhi", "Abhinav Gupta", "Ruslan Salakhutdinov" ], "title": "Object goal navigation using goal-oriented semantic exploration", "venue": "arXiv preprint arXiv:2007.00643,", "year": 2020 }, { "authors": [ "Devendra Singh Chaplot", "Saurabh Gupta", "Dhiraj Gandhi", "Abhinav Gupta", "Ruslan Salakhutdinov" ], "title": "Learning to explore using active neural mapping", "venue": "8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Devendra Singh Chaplot", "Ruslan Salakhutdinov", "Abhinav Gupta", "Saurabh Gupta" ], "title": "Neural topological slam for visual navigation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tao Chen", "Saurabh Gupta", "Abhinav Gupta" ], "title": "Learning exploration policies for navigation", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ricson Cheng", "Ziyan Wang", "Katerina Fragkiadaki" ], "title": "Geometry-aware recurrent neural networks for active visual recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sungjoon Choi", "Qian-Yi Zhou", "Vladlen Koltun" ], "title": "Robust reconstruction of indoor scenes", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Abhishek Das", "Samyak Datta", "Georgia Gkioxari", "Stefan Lee", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", 
"year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Kuan Fang", "Alexander Toshev", "Li Fei-Fei", "Silvio Savarese" ], "title": "Scene memory transformer for embodied agents in long-horizon tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Basura Fernando", "Hakan Bilen", "Efstratios Gavves", "Stephen Gould" ], "title": "Self-supervised video representation learning with odd-one-out networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Chuang Gan", "Yiwei Zhang", "Jiajun Wu", "Boqing Gong", "Joshua B Tenenbaum" ], "title": "Look, listen, and act: Towards audio-visual embodied navigation", "venue": null, "year": 1912 }, { "authors": [ "Alessandro Giusti", "Jérôme Guzzi", "Dan C Cireşan", "Fang-Lin He", "Juan P Rodrı́guez", "Flavio Fontana", "Matthias Faessler", "Christian Forster", "Jürgen Schmidhuber", "Gianni Di Caro" ], "title": "A machine learning approach to visual perception of forest trails for mobile robots", "venue": "IEEE Robotics and Automation Letters,", "year": 2016 }, { "authors": [ "Daniel Gordon", "Abhishek Kadian", "Devi Parikh", "Judy Hoffman", "Dhruv Batra" ], "title": "Splitnet: Sim2sim and task2task transfer for embodied visual navigation", "venue": null, "year": 2019 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Frederic Besse", "Yan Wu", "Hamza Merzic", "Aaron van den Oord" ], "title": "Shaping Belief States with Generative Environment Models for RL", "venue": null, "year": 2019 }, { "authors": [ "Daniel Guo", "Bernardo Avila Pires", "Bilal Piot", "Jean-bastien Grill", "Florent Altché", "Rémi Munos", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap latent-predictive representations for multitask reinforcement learning", "venue": null, "year": 2004 }, { "authors": [ "Saurabh Gupta", "James Davidson", "Sergey Levine", "Rahul Sukthankar", "Jitendra Malik" ], "title": "Cognitive mapping and planning for visual navigation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In Proceedings 
of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Memory-augmented dense predictive coding for video representation learning", "venue": "Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Joao F Henriques", "Andrea Vedaldi" ], "title": "Mapnet: An allocentric spatial memory for mapping environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Learning to look around: Intelligently exploring unseen environments for unknown tasks", "venue": "In Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Peter Karkus", "Xiao Ma", "David Hsu", "Leslie Pack Kaelbling", "Wee Sun Lee", "Tomás Lozano-Pérez" ], "title": "Differentiable algorithm networks for composable robot learning", "venue": null, "year": 2019 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Eric Kolve", "Roozbeh Mottaghi", "Winson Han", "Eli VanderBilt", "Luca Weihs", "Alvaro Herrasti", "Daniel Gordon", "Yuke Zhu", "Abhinav Gupta", "Ali Farhadi" ], "title": "AI2-THOR: An Interactive 3D Environment for Visual AI", "venue": null, "year": 2017 }, { "authors": [ "Ananya Kumar", "SM Eslami", "Danilo J Rezende", "Marta Garnelo", "Fabio Viola", "Edward Lockhart", "Murray Shanahan" ], "title": "Consistent generative query networks", "venue": null, "year": 2018 }, { "authors": [ "Xingyu Lin", "Harjatin Baweja", "George Kantor", "David Held" ], "title": "Adaptive auxiliary task weighting for reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Alena Lukasová" ], "title": "Hierarchical agglomerative clustering procedure", "venue": "Pattern Recognition,", "year": 1979 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J.
Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Learning to navigate in complex environments", "venue": "CoRR, abs/1611.03673,", "year": 2016 }, { "authors": [ "Medhini Narasimhan", "Erik Wijmans", "Xinlei Chen", "Trevor Darrell", "Dhruv Batra", "Devi Parikh", "Amanpreet Singh" ], "title": "Seeing the un-scene: Learning amodal semantic maps for room navigation", "venue": null, "year": 2007 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Emilio Parisotto", "Ruslan Salakhutdinov" ], "title": "Neural map: Structured memory for deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": null, "year": 2016 }, { "authors": [ "Santhosh K. Ramakrishnan", "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Emergence of exploratory look-around behaviors through active observation completion", "venue": "Science Robotics,", "year": 2019 }, { "authors": [ "Santhosh K. Ramakrishnan", "Ziad Al-Halah", "Kristen Grauman" ], "title": "Occupancy anticipation for efficient exploration and navigation, 2020a", "venue": null, "year": 2020 }, { "authors": [ "Santhosh K. Ramakrishnan", "Dinesh Jayaraman", "Kristen Grauman" ], "title": "An exploration of embodied visual exploration, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Nikolay Savinov", "Alexey Dosovitskiy", "Vladlen Koltun" ], "title": "Semi-parametric topological memory for navigation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik", "Devi Parikh", "Dhruv Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik" ], "title": "Habitat: A platform for embodied ai research", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alexander Sax", "Jeffrey O Zhang", "Bradley Emi", "Amir Zamir", "Silvio Savarese", "Leonidas Guibas", "Jitendra Malik" ], "title": "Learning to navigate using mid-level visual priors", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "William B Shen", "Danfei Xu", "Yuke Zhu", "Leonidas J Guibas", "Li Fei-Fei", "Silvio Savarese" ], "title": "Situational fusion of visual representation for visual navigation", "venue": null, "year": 2019 }, { "authors": [ "Shuran Song", "Fisher Yu", "Andy Zeng", "Angel X Chang", "Manolis Savva", "Thomas Funkhouser" ], "title": "Semantic scene completion from a single depth image", "venue": "Proceedings of 30th IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shuran Song", "Andy Zeng", "Angel X Chang", "Manolis Savva", "Silvio Savarese", "Thomas Funkhouser" ], 
"title": "Im2pano3d: Extrapolating 360 structure and semantics beyond the field of view", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Learning video representations using contrastive bidirectional transformer", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "VideoBert: A joint model for video and language representation learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 7464–7473,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Donglai Wei", "Joseph J Lim", "Andrew Zisserman", "William T Freeman" ], "title": "Learning and using the arrow of time", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Erik Wijmans", "Abhishek Kadian", "Ari Morcos", "Stefan Lee", "Irfan Essa", "Devi Parikh", "Manolis Savva", "Dhruv Batra" ], "title": "Dd-ppo: Learning near-perfect pointgoal navigators", "venue": null, "year": 2020 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Aviv Tamar", "Stuart Russell", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Bayesian relational memory for semantic visual navigation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Fei Xia", "Amir R Zamir", "Zhiyang He", "Alexander Sax", "Jitendra Malik", "Silvio Savarese. Gibson" ], "title": "env: Real-world perception for embodied agents", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp", "year": 2018 }, { "authors": [ "Fei Xia", "William B Shen", "Chengshu Li", "Priya Kasimbeg", "Micael Edmond Tchapmi", "Alexander Toshev", "Roberto Martı́n-Martı́n", "Silvio Savarese" ], "title": "Interactive gibson benchmark: A benchmark for interactive navigation in cluttered environments", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Joel Ye", "Dhruv Batra", "Erik Wijmans", "Abhishek Das" ], "title": "Auxiliary tasks speed up learning pointgoal", "venue": "Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Vaswani" ], "title": "2017), we define the attention mechanism used in the environment encoder and policy decoder. Given two inputs X ∈ Rn1×dx and Y ∈ Rn2×dy", "venue": "ATTENTION MECHANISM Following", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In visual navigation tasks, an intelligent embodied agent must move around a 3D environment using its stream of egocentric observations to sense objects and obstacles, typically without the benefit of a pre-computed map. Significant recent progress on this problem can be attributed to the availability of large-scale visually rich 3D datasets (Chang et al., 2017; Xia et al., 2018; Straub et al., 2019), developments in high-quality 3D simulators (Anderson et al., 2018b; Kolve et al., 2017; Savva et al., 2019a; Xia et al., 2020), and research on deep memory-based architectures that combine geometry and semantics for learning representations of the 3D world (Gupta et al., 2017; Henriques & Vedaldi, 2018; Chen et al., 2019; Fang et al., 2019; Chaplot et al., 2020b;c).\nDeep reinforcement learning approaches to visual navigation often suffer from sample inefficiency, overfitting, and instability in training. Recent contributions work towards overcoming these limitations for various navigation and planning tasks. The key ingredients are learning good image-level representations (Das et al., 2018; Gordon et al., 2019; Lin et al., 2019; Sax et al., 2020), and using modular architectures that combine high-level reasoning, planning, and low-level navigation (Gupta et al., 2017; Chaplot et al., 2020b; Gan et al., 2019; Ramakrishnan et al., 2020a).\nPrior work uses supervised image annotations (Mirowski et al., 2016; Das et al., 2018; Sax et al., 2020) and self-supervision (Gordon et al., 2019; Lin et al., 2019) to learn good image representations that are transferrable and improve sample efficiency for embodied tasks. While promising, such learned image representations only encode the scene in the nearby locality. However, embodied agents also need higher-level semantic and geometric representations of their history of observations, grounded in 3D space, in order to reason about the larger environment around them.\nTherefore, a key question remains: how should an agent moving through a visually rich 3D environment encode its series of egocentric observations? Prior navigation methods build environment-level representations of observation sequences via memory models such as recurrent neural networks (Wijmans et al., 2020), maps (Henriques & Vedaldi, 2018; Chen et al., 2019; Chaplot et al., 2020b), episodic memory (Fang et al., 2019), and topological graphs (Savinov et al., 2018; Chaplot et al., 2020c). However, these approaches typically use hand-coded representations such as occupancy maps (Chen et al., 2019; Chaplot et al., 2020b; Ramakrishnan et al., 2020a; Karkus et al., 2019; Gan et al., 2019) and semantic labels (Narasimhan et al., 2020; Chaplot et al., 2020a), or specialize them by learning end-to-end for solving a specific task (Wijmans et al., 2020; Henriques & Vedaldi, 2018; Parisotto & Salakhutdinov, 2018; Cheng et al., 2018; Fang et al., 2019).\n1\nIn this work, we introduce environment predictive coding (EPC), a self-supervised approach to learn flexible representations of 3D environments that are transferrable to a variety of navigation-oriented tasks. The key idea is to learn to encode a series of egocentric observations in a 3D environment so as to be predictive of visual content that the agent has not yet observed. For example, consider an agent that just entered the living room in an unfamiliar house and is searching for a refrigerator. It must be able to predict where the kitchen is and reason that it is likely to contain a refrigerator. 
The proposed EPC model aims to learn representations that capture these natural statistics of real-world environments in a self-supervised fashion, by watching videos recorded by other agents. See Fig. 1.\nTo this end, we devise a self-supervised zone prediction task in which the model learns environment embeddings by watching egocentric view sequences from other agents navigating in 3D environments in pre-collected videos. Specifically, we segment each video into zones of visually and geometrically connected views, while ensuring limited overlap across zones in the same video. Then, we randomly mask out zones, and predict the masked views conditioned on both the unmasked zones’ views and the masked zones’ camera poses. Intuitively, to perform this task successfully, the model needs to reason about the geometry and semantics of the environment to figure out what is missing. We devise a transformer-based model to infer the masked visual features. Our general strategy can be viewed as a context prediction task in sequential data (Devlin et al., 2018; Sun et al., 2019b; Han et al., 2019)—but, very differently, aimed at representing high-level semantic and geometric priors in 3D environments to aid embodied agents who act in them.\nThrough extensive experiments on Gibson and Matterport3D, we show that our method achieves good improvements on multiple navigation-oriented tasks compared to state-of-the-art models and baselines that learn image-level embeddings." }, { "heading": "2 RELATED WORK", "text": "Self-supervised visual representation learning: Prior work leverages self-supervision to learn image and video representations from large collections of unlabelled data. Image representations attempt proxy tasks such as inpainting (Pathak et al., 2016) and instance discrimination (Oord et al., 2018; Chen et al., 2020; He et al., 2020), while video representation learning leverages signals such as temporal consistency (Wei et al., 2018; Fernando et al., 2017; Kim et al., 2019) and contrastive predictions (Han et al., 2019; Sun et al., 2019a). The VideoBERT project (Sun et al., 2019a;b) jointly learns video and text representations from unannotated videos via filling in masked out information. Dense Predictive Coding (Han et al., 2019; 2020) learns video representations that capture the slow-moving semantics in videos. Whereas these methods focus on capturing human activity for video recognition, we aim to learn geometric and semantic cues in 3D spaces for embodied agents. Accordingly, unlike the existing video models (Sun et al., 2019a;b; Han et al., 2019), our approach is grounded in the 3D relationships between views.\nRepresentation learning via auxiliary tasks for RL: Reinforcement learning approaches often suffer from high sample complexity, sparse rewards, and unstable training. Prior work tackles these\nchallenges by using auxiliary tasks for learning image representations (Mirowski et al., 2016; Gordon et al., 2019; Lin et al., 2019; Shen et al., 2019; Ye et al., 2020). In contrast, we encode image sequences from embodied agents to obtain environment-level representations. Recent work also learns state representations via future prediction and implicit models (Ha & Schmidhuber, 2018; Eslami et al., 2018; Gregor et al., 2019; Hafner et al., 2019; Guo et al., 2020). In particular, neural rendering approaches achieve impressive reconstructions for arbitrary viewpoints (Eslami et al., 2018; Kumar et al., 2018). 
However, unlike our idea, they focus on pixelwise reconstruction, and their success has been limited to synthetically generated environments like DeepMind Lab (Beattie et al., 2016). In contrast to any of the above, we use egocentric videos to learn predictive feature encodings of photorealistic 3D environments to capture their naturally occurring regularities.
Scene completion: Past work in scene completion performs pixelwise reconstruction of 360 panoramas (Jayaraman & Grauman, 2018; Ramakrishnan et al., 2019), image inpainting (Pathak et al., 2016), voxelwise reconstructions of 3D structures and semantics (Song et al., 2017), and image-level extrapolation of depth and semantics (Song et al., 2018; Yang et al., 2019b). Recent work on visual navigation extrapolates maps of room-types (Wu et al., 2019; Narasimhan et al., 2020) and occupancy (Ramakrishnan et al., 2020a). While our approach is also motivated by anticipating unseen elements, we learn to extrapolate in a high-dimensional feature space (rather than pixels, voxels, or semantic categories) and in a self-supervised manner without relying on human annotations. Further, the proposed model learns from egocentric video sequences captured by other agents, without assuming access to detailed scans of the full 3D environment as in past work.
Learning image representations for navigation: Prior work exploits ImageNet pretraining (Gupta et al., 2017; Anderson et al., 2018a; Chen et al., 2019), mined object relations (Yang et al., 2019a), video (Chang et al., 2020), and annotated datasets from various image tasks (Sax et al., 2020; Chaplot et al., 2020c) to aid navigation. While these methods also consider representation learning in the context of navigation tasks, they are limited to learning image-level functions for classification and proximity prediction. In contrast, we learn predictive representations for sequences of observations conditioned on the camera poses." }, { "heading": "3 APPROACH", "text": "We propose environment predictive coding (EPC) to learn self-supervised environment-level representations (Sec. 3.1). To demonstrate the utility of these representations, we integrate them into a transformer-based navigation architecture and refine them for individual tasks (Sec. 3.2). As we will show in Sec. 4, our approach leads to both better performance and better sample efficiency compared to existing approaches." }, { "heading": "3.1 ENVIRONMENT PREDICTIVE CODING", "text": "Our hypothesis is that it is valuable for an embodied agent to learn a predictive coding of the environment. The agent must not just encode the individual views it observes, but also learn to leverage the encoded information to anticipate the unseen parts of the environment. Our key idea is that the environment embedding must be predictive of unobserved content, conditioned on the agent's camera pose. This equips an agent with the natural priors of 3D environments to quickly perform new tasks, like finding the refrigerator or covering more area.
We propose the proxy task of zone prediction to achieve this goal (see Fig. 2). For this task, we use a dataset of egocentric video walkthroughs collected in parallel from other agents deployed in various unseen environments (Fig. 2, top). For each video, we assume access to RGB-D, egomotion data, and camera intrinsics. 
Specifically, our current implementation uses egocentric camera trajectories from photorealistic scanned indoor environments (Gibson (Xia et al., 2018)) to sample the training videos; we leave leveraging in-the-wild consumer video as a challenge for future work.
We do not assume that the agents who generated those training videos were acting to address a particular navigation task. In particular, their behavior need not be tied to the downstream navigation-oriented tasks for which we test our learned representation. For example, a training video may show agents moving about to maximize their area coverage, whereas the encoder we learn is applicable to an array of navigation tasks (as we will demonstrate in Sec. 4). Furthermore, we assume that the environments seen in the videos are not accessible for interactive training. In practice, this means that we can collect data in parallel from different robots deployed in a large number of environments, without having to actually train our navigation policy on those environments. These assumptions are much weaker than those made by prior work on imitation learning and behavioral cloning that rely on task-specific data generated from experts (Bojarski et al., 2016; Giusti et al., 2016).
Our method works as follows. First, we automatically segment videos into "zones" which contain frames with significant view overlaps. We then perform the self-supervised zone prediction task on the segmented videos. Finally, we incorporate the learned environment encoder into an array of downstream navigation-oriented tasks. We explain each step in detail next.
Zone generation At a glance, one might first consider masking arbitrary individual frames in the training videos. However, doing so is inadequate for representation learning, since unmasked frames having high viewpoint overlap with the masked frame can make its prediction trivial. Instead, our approach masks zones of frames at once. We define a zone to be a set of frames in the video which share a significant overlap in their viewpoints. We also require that the frames across multiple zones share little to no overlap.
To generate these zones, we first cluster frames in the videos based on the amount of pairwise geometric overlap between views. We estimate the viewpoint overlap \psi(o_i, o_j) between two frames o_i, o_j by measuring their intersection in 3D point clouds obtained by backprojecting depth inputs into 3D space. See Appendix for more details. For a video of length L, we generate a distance matrix D \in \mathbb{R}^{L \times L} where D_{i,j} = 1 - \psi(o_i, o_j). We then perform hierarchical agglomerative clustering (Lukasová, 1979) to cluster the video frames into zones based on D (see Fig. 2, bottom left). While these zones naturally tend to overlap near their edges, they typically capture disjoint sets of content in the video. Note that the zones segment video trajectories, not floorplan maps, since we do not assume access to the full 3D environment.
Zone prediction task Having segmented the video into zones, we next present our EPC zone prediction task to learn environment embeddings (see Fig. 2). We randomly divide the video v into seen zones \{Z^v_{s,i}\}_{i=1}^{n} (cyan) and unseen zones \{Z^v_{u,i}\}_{i=1}^{m} (yellow), where a zone Z_i is a tuple of images and the corresponding camera poses, Z_i = \{(o_j, p_j)\}_{j=1}^{|Z_i|}. Given the seen zones, and the camera pose from an unseen zone p^v_{u,i}, we need to infer a feature encoding of the unseen zone Z^v_{u,i}. 
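Before turning to the prediction model, the zone-generation step described above can be made concrete with a short sketch. This is only an illustration under stated assumptions, not the authors' released code: the match threshold `tau`, the `average` linkage, the fixed number of clusters, and the symmetrization of the overlap \psi are all our choices, which the paper leaves unspecified. The backprojection follows Eqn. 5 of Appendix A.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial import cKDTree

def backproject(depth, K, R, t):
    """Lift an HxW depth map to a 3D point cloud in world coordinates (Eqn. 5)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, HW) pixels
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)              # camera-frame points
    return (R @ cam + t.reshape(3, 1)).T                               # world-frame points

def view_overlap(pts_i, pts_j, tau=0.05):
    """psi(o_i, o_j): fraction of points in pts_i with a match in pts_j within tau."""
    dists, _ = cKDTree(pts_j).query(pts_i, k=1)
    return float((dists < tau).mean())

def segment_zones(point_clouds, n_zones):
    """Cluster frames into zones by hierarchical agglomerative clustering on D."""
    L = len(point_clouds)
    D = np.zeros((L, L))
    for i in range(L):
        for j in range(i + 1, L):
            # symmetrized for clustering; the paper's psi is directional
            D[i, j] = D[j, i] = 1.0 - view_overlap(point_clouds[i], point_clouds[j])
    Z = linkage(D[np.triu_indices(L, k=1)], method="average")  # condensed distances
    return fcluster(Z, t=n_zones, criterion="maxclust")        # one zone label per frame
```

Clustering operates on the trajectory only, matching the note above that zones segment video trajectories rather than floorplan maps.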
To perform this task, we first extract visual features x from each RGB-D frame o in the video using pretrained CNNs (see Sec. 3.2). These features are concatenated with the corresponding pose p and projected using an MLP M to obtain the image-level embedding. The target features for the unseen zone Z^v_{u,i} are obtained as follows:

f^v_{u,i} = \frac{1}{|Z^v_{u,i}|} \sum_{[x,p] \in Z^v_{u,i}} M([x, p]).   (1)

The rationale behind the feature averaging is that we want to predict the high-level visual content of the zone, while ignoring viewpoint-specific variations within the zone.
We use a transformer-based encoder-decoder model to perform this task (Vaswani et al., 2017). Our model consists of an environment encoder and a zone decoder which infers the zone features (see Fig. 2, bottom). The environment encoder takes in the image-level embeddings M([x, p]) from the input zones, and performs multi-headed self-attention to generate the environment embeddings \mathcal{E}. The zone decoder attends to \mathcal{E} using the average camera pose from the unseen zone p^v_{u,i} and predicts the zone features as follows:

\hat{f}_{u,i} = \mathrm{ZoneDecoder}(\mathcal{E}, p^v_{u,i}).   (2)

We transform all poses in the input zones relative to p^v_{u,i} before encoding, which provides the model an egocentric view of the world. The environment encoder, zone decoder, and the projection function M are jointly trained using noise-contrastive estimation (Gutmann & Hyvärinen, 2010). We use \hat{f}_{u,i} as the anchor and f^v_{u,i} from Eqn. 1 as the positive. We sample negatives from other unseen zones in the same video and from all zones in other videos. The loss for the i-th unseen zone in video v is:

L^v_i = -\log \frac{\exp(\mathrm{sim}(\hat{f}_{u,i}, f^v_{u,i}))}{\sum_{j=1}^{m} \exp(\mathrm{sim}(\hat{f}_{u,i}, f^v_{u,j})) + \sum_{w \neq v, k} \exp(\mathrm{sim}(\hat{f}_{u,i}, f^w_{k}))},   (3)

where \mathrm{sim}(q, k) = \frac{q \cdot k}{|q||k|} \cdot \frac{1}{\tau} and \tau is a temperature hyperparameter. The idea is to predict zone representations that are closer to the ground truth, while being sufficiently different from the negative zones. Since the unseen zones have only limited overlap with the seen zones, the model needs to effectively reason about the geometric and semantic context in the seen zones to differentiate the positive from the negatives. We discourage the model from simply capturing video-specific textures and patterns by sampling negatives from within the same video." }, { "heading": "3.2 ENVIRONMENT EMBEDDINGS FOR EMBODIED AGENTS", "text": "Having introduced our approach to learn environment embeddings in a self-supervised fashion, we now briefly overview how these embeddings are used for agents performing navigation-oriented tasks. To this end, we integrate our pre-trained environment encoder into the Scene Memory Transformer (SMT) (Fang et al., 2019). Our choice of SMT is motivated by the recent successes of transformers in both NLP (Devlin et al., 2018) and vision (Sun et al., 2019b; Fang et al., 2019). However, our idea is potentially applicable to other forms of memory models as well.
We briefly overview the SMT architecture (see Fig. 3, center). It consists of a scene memory that stores visual features \{x_i\}_{i=0}^{t} and agent poses \{p_i\}_{i=0}^{t} generated from the observations seen during an episode. The environment encoder uses self-attention on the history of observations to generate a richer set of environment embeddings \{e_i\}_{i=1}^{t}. At a given time-step t+1, the policy decoder attends to the environment embeddings using the inputs o_{t+1}, which consist of the visual feature x and agent pose p at time t+1. The outputs of the policy decoder are used to sample an action a_{t+1} and estimate the value v_{t+1}. 
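Returning to the pre-training objective, a minimal sketch of the contrastive loss in Eqn. 3 is given below. It is illustrative rather than the authors' implementation: the temperature value and the way negatives are batched are our assumptions; only the cosine similarity scaled by 1/\tau comes directly from the definition below Eqn. 3.

```python
import torch
import torch.nn.functional as F

def zone_nce_loss(pred, pos, negs, temperature=0.1):
    """InfoNCE loss of Eqn. 3.
    pred: (d,)   predicted zone feature \hat{f}_{u,i} from the zone decoder
    pos:  (d,)   ground-truth zone feature f^v_{u,i} from Eqn. 1
    negs: (n, d) other unseen zones of the same video plus zones of other videos
    """
    cands = torch.cat([pos.unsqueeze(0), negs], dim=0)             # (1+n, d)
    logits = F.cosine_similarity(pred.unsqueeze(0), cands) / temperature
    target = torch.zeros(1, dtype=torch.long, device=pred.device)  # positive at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)            # -log softmax[0]
```

Cross-entropy against index 0 reproduces the negative log ratio of Eqn. 3, since the positive is placed first among the candidates.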
We detail each component in the Appendix.
To incorporate our EPC environment embeddings, we modify two key components from the original SMT model. First, and most importantly, we initialize the environment encoder with our pre-trained EPC (see Fig. 3, left). Second, we replace the end-to-end trained image encoders with MidLevel features that are known to be useful across a variety of embodied tasks (Sax et al., 2020) (see Fig. 3, right). We consider two visual modalities as inputs: RGB and depth. For RGB, we extract features from the pre-trained models in the max-coverage set proposed by Sax et al. (2020). These include surface normals, keypoints, semantic segmentation, and 2.5D segmentation. For depth, we extract features from pre-trained models that predict surface normals and keypoints from depth (Zamir et al., 2020). For training the model on a navigation task, we keep the visual features frozen, and only finetune the environment encoder, policy decoder, policy π, and value function V." }, { "heading": "4 EXPERIMENTS", "text": "We validate our pre-trained EPC environment embeddings for zone prediction (Sec. 4.1) and multiple downstream tasks that require an embodied agent to move intelligently through an unmapped environment (Sec. 4.2). We evaluate the sensitivity of self-supervised learning to noise in the video data (Sec. 4.3), and assess noise robustness of the learned policies on downstream tasks (Sec. 4.4).
EXPERIMENTAL SETUP AND TASKS We perform experiments on the Habitat simulator (Savva et al., 2019b) with Matterport3D (MP3D) (Chang et al., 2017) and Gibson (Xia et al., 2018), two challenging and photorealistic 3D datasets with ∼90 and 500 scanned real-world indoor environments, respectively. Our observation space consists of 171×128 RGB-D observations and odometry sensor readings that provide the relative agent pose p = (x, y, θ) w.r.t. the agent pose at t = 0. Our action space consists of: MOVE-FORWARD by 25cm, TURN-LEFT by 30°, and TURN-RIGHT by 30°. For all methods, we assume noise-free actuation and odometry for simplicity.
We use MP3D for interactive RL training, and reserve Gibson for evaluation. We use the default train/val/test split for MP3D (Savva et al., 2019b) for 1000-step episodes. For Gibson, which has smaller environments, we evaluate on the 14 validation environments for 500-step episodes. Following prior work (Ramakrishnan et al., 2020a; Chaplot et al., 2020b), we divide results on Gibson into small and large environments. We generate walkthroughs for self-supervised learning from 332 Gibson training environments. We train an SMT (scratch) agent to perform area-coverage on MP3D. It explores starting from multiple locations and gathers the RGB-D and odometer readings for 500 steps per video. Note that this agent only collects data, and is not used for downstream tasks. This results in ∼5000 videos, which we divide into an 80-20 train/val split. We evaluate our approach on three standard tasks from the literature:
1. Area coverage (Chen et al., 2019; Chaplot et al., 2020b; Ramakrishnan et al., 2020b): The agent is rewarded for maximizing the area covered (in m²) within a fixed time budget.
2. Flee (Gordon et al., 2019): The agent is rewarded for maximizing the flee distance (in m), i.e., the geodesic distance between its starting location and the terminal location, for fixed-length episodes.
3.
Object coverage (Fang et al., 2019; Ramakrishnan et al., 2020b): The agent is rewarded for maximizing the number of categories of objects covered during exploration (see Appendix). Since Gibson lacks extensive object annotations, we evaluate this task only on MP3D.
Together, these tasks capture different forms of geometric and semantic inference in 3D environments (e.g., area/object coverage encourage finding large open spaces/new objects, respectively).
BASELINES We compare to the following baselines:
Scratch baselines: We randomly initialize the visual encoders and policy and train them end-to-end for each task. Images are encoded using ResNet-18. Agent pose and past actions are encoded using FC layers. These are concatenated to obtain the features at each time step. We use three temporal aggregation schemes. Reactive (scratch) has no memory. RNN (scratch) uses a 2-layer LSTM as the temporal memory. SMT (scratch) uses a Scene Memory Transformer for aggregating observations (Fang et al., 2019).
SMT (MidLevel): extracts image features from pre-trained encoders that solve various mid-level perceptual tasks (Sax et al., 2020). This is an ablation of our model from Sec. 3.2 that uses the same image features, but randomly initializes the environment encoder. This SoTA image-level encoder is a critical baseline to show the impact of our proposed EPC environment-level encoder.
SMT (Video): Inspired by Dense Predictive Coding (Han et al., 2019), this baseline uses MidLevel features and pre-trains the environment encoder as a video-level model using the same training videos as our model. For pre-training, we randomly sample 25 consecutive frames as inputs and predict the average features corresponding to the next 15 frames. We query based on the time (not pose) and train the model using the NCE loss in Eqn. 3.
OccupancyMemory: This is similar to the SoTA Active Neural SLAM model (Chaplot et al., 2020b) that maximizes area coverage, but using ground-truth depth to build the map (instead of RGB) and a state-of-the-art pointnav agent (Wijmans et al., 2020) for low-level navigation (instead of a planner). It represents the environment as a top-down occupancy map.
All models are trained in PyTorch (Paszke et al., 2019) with DD-PPO (Wijmans et al., 2020) for 15M frames with 64 parallel processes and the Adam optimizer. See Appendix." }, { "heading": "4.1 ZONE PREDICTION PERFORMANCE", "text": "First we evaluate the EPC embedding quality in terms of zone prediction on the validation videos. We divide each video into m seen and n unseen zones and infer the features for each unseen zone, given its average camera pose. We rank the features from the n unseen zones based on their similarity with the inferred feature, and measure the top-1 retrieval accuracy. We evaluate with (m = 4, n = 2) and (m = 2, n = 4) splits. The larger the value of m, the easier the task, since more information is available as input. We also test two simple baselines. Nearest neighbors uses the query pose to retrieve the 50 closest frames in the input zones, and outputs their averaged features. Random masking uses a different proxy task to learn the environment representations, randomly masking out 10 consecutive frames in the video and predicting their averaged feature from the rest. EPC (bilinear) uses bilinear product similarity (Oord et al., 2018) instead of the ℓ2-norm below Eqn. 3. We report retrieval from only the unseen zones (w/o inputs) as well as the more challenging case where input zones are also candidates (w/ inputs).
Tab. 1 shows the results. The EPC (ℓ2-norm) model obtains superior retrieval performance in both settings. It retrieves the positive zones with high confidence (see Fig. 4 and Appendix). EPC's gain over random masking shows the value of the proposed zone generation step. Therefore, we select this model for downstream task transfer." }, { "heading": "4.2 DOWNSTREAM TASK PERFORMANCE", "text": "Now we transfer these features to downstream navigation tasks. Tab. 2 shows the results.
Area coverage (m²) | Flee (m) | Object coverage (#obj)
Method | Gibson-S | Gibson-L | MP3D | Gibson-S | Gibson-L | MP3D | MP3D-cat. | MP3D-inst.
Reactive (scratch) | 17.4 ± 0.2 | 22.8 ± 0.6 | 68.0 ± 1.3 | 1.9 ± 0.1 | 2.5 ± 0.3 | 5.1 ± 0.3 | 6.2 ± 0.0 | 19.0 ± 0.2
RNN (scratch) | 20.6 ± 0.5 | 28.6 ± 0.3 | 79.1 ± 2.1 | 2.3 ± 0.2 | 2.8 ± 0.4 | 5.9 ± 0.1 | 6.1 ± 0.0 | 18.6 ± 0.2
SMT (scratch) | 23.0 ± 0.7 | 32.3 ± 0.8 | 104.8 ± 2.3 | 3.3 ± 0.2 | 4.4 ± 0.4 | 6.9 ± 0.6 | 7.0 ± 0.1 | 23.2 ± 0.9
SMT (MidLevel) | 29.1 ± 0.1 | 47.2 ± 1.7 | 155.7 ± 2.0 | 4.2 ± 0.0 | 6.0 ± 0.4 | 10.6 ± 0.3 | 7.6 ± 0.1 | 26.8 ± 0.6
SMT (Video) | 28.7 ± 0.5 | 50.6 ± 2.6 | 129.7 ± 2.8 | 4.1 ± 0.0 | 5.0 ± 0.6 | 10.9 ± 0.5 | 7.3 ± 0.1 | 25.4 ± 1.0
OccupancyMemory | 29.4 ± 0.1 | 67.4 ± 0.9 | 155.6 ± 1.4 | 2.8 ± 0.0 | 7.0 ± 0.4 | 14.1 ± 0.6 | 7.8 ± 0.1 | 27.8 ± 0.4
Ours (EPC) | 29.9 ± 0.3 | 56.4 ± 2.1 | 165.6 ± 2.8 | 4.5 ± 0.1 | 7.1 ± 0.4 | 12.8 ± 0.6 | 8.6 ± 0.1 | 34.5 ± 0.8
Table 2: Downstream task performance at the end of the episode. Gibson-S/L means small/large. MP3D-cat./inst. means categories/instances. All methods are evaluated on three random seeds. See Appendix for performance vs. time step plots.
On both datasets, we observe the following ordering:
Reactive (scratch) < RNN (scratch) < SMT (scratch).   (4)
This is in line with results reported by Fang et al. (2019) and verifies our implementation of SMT. Using MidLevel features for SMT leads to significant gains in performance versus training image encoders from scratch.
Our environment-level pre-training provides substantial improvements compared to SMT (MidLevel), particularly for larger environments. Furthermore, SMT (Video)—the video-level pre-training strategy—often deteriorates performance compared to using only image-level pre-training. This highlights EPC's value in representing the underlying 3D spaces of the walkthroughs instead of treating them simply as video frames. EPC competes closely with and even slightly outperforms the state-of-the-art OccupancyMemory on these tasks, with a significant gain on the object coverage metrics. Thus, our model competes strongly with a task-specific representation model on the tasks that the latter was designed for, while outperforming it significantly on other tasks.
[Figure 5 plots: panels for area coverage, flee, and object coverage; y-axes show area covered (m²), flee distance (m), # categories, and # instances; x-axis shows # training experience (in million frames).]
Figure 5: Sample efficiency on Matterport3D val split. Our environment-level pre-training leads to 2-4× training sample efficiency when compared to SoTA image-level pre-training. See Appendix for Gibson plots.
Finally, Fig. 5 shows that EPC offers better sample efficiency than image-only pre-training: our method reaches the best performance of SMT (MidLevel) 2-4× faster. 
This confirms our hypothesis: transferring environment-level representations learned via contextual reasoning can help embodied agents learn faster compared to the current approach of transferring image-level encoders alone." }, { "heading": "4.3 SENSITIVITY ANALYSIS OF SELF-SUPERVISED LEARNING", "text": "We analyze the sensitivity of EPC to sensory noise in the videos, and to the exploration strategy used for video data collection. Specifically, we inject noise in the depth and pose data from the videos using existing noise models from Choi et al. (2015) and Ramakrishnan et al. (2020a). We also replace the video walkthroughs from the area-coverage agent with an equivalent amount of data collected by a simple heuristic used in prior work (Chen et al., 2019; Ramakrishnan et al., 2020b). It works as follows: move forward until colliding, then turn left or right by a random amount, then continue moving forward. We evaluate the impact of these changes on the downstream task performance.
See Tab. 3. Our approach EPC is reasonably robust to changes in the video data during SSL training. The performance remains stable when noise is injected into depth inputs. While it starts to decline on MP3D when we further inject noise into pose inputs, EPC still generally outperforms the random initialization of the environment encoder in SMT (MidLevel). Note that we do not employ any noise-correction mechanism, which could better limit this decline (Chaplot et al., 2020b; Ramakrishnan et al., 2020a). Finally, the performance is not significantly impacted when we use video data from a simple exploration heuristic, showing that EPC does not require a strong exploration policy for the agent that generates the self-supervised training videos, nor does it require a tight similarity between the tasks demonstrated in the videos and the downstream tasks." }, { "heading": "4.4 ROBUSTNESS OF LEARNED POLICIES TO SENSOR NOISE", "text": "In previous experiments, we assumed the availability of ground-truth depth and pose sensors for downstream tasks. Now, we relax these assumptions and re-evaluate all methods by injecting noise in the depth and pose sensors for downstream tasks (same noise models as Sec. 4.3), without any noise-correction. This is a common evaluation protocol for assessing noise robustness (Chen et al., 2019; Ramakrishnan et al., 2020b). We compare the top three methods on MP3D in Tab. 4 and provide the complete set of results in Appendix G. As expected, the performance declines slightly as we add noise to more sensors (depth, then pose). However, most learned approaches are reasonably stable. EPC outperforms all methods when all noise sources are added. OccupancyMemory declines rapidly in the absence of noise-correction due to accumulated errors in the map." }, { "heading": "5 CONCLUSIONS", "text": "We introduced Environment Predictive Coding, a self-supervised approach to learn environment-level representations for embodied agents. By training on video walkthroughs generated by other agents, our model learns to infer missing content through a zone-prediction task. When transferred to multiple downstream embodied agent tasks, the resulting embeddings lead to better performance and sample-efficiency compared to the current practice of transferring only image-level representations. In future work, we plan to extend our idea for goal-driven tasks like PointNav and ObjectNav." 
}, { "heading": "A ZONE GENERATION", "text": "As discussed in the main paper, we generate zones by first clustering frames in the video based on their geometric overlap. Here, we provide details on how this overlap is estimated. First, we project pixels in the image to 3D point-clouds using the camera intrinsics and the agent pose. Let Di, pi be the depth map and agent pose for frame i in the video. The agent’s pose in frame i can be expressed as pi = (Ri, ti), with Ri, ti representing the agent’s camera rotation and translation in the world coordinates. Let K ∈ R3×3 be the intrinsic camera matrix, which is assumed to be provided for each video. We then project each pixel xij in the depth map Di to the 3D point cloud as follows:\nwij = [ Ri ti 0 1 ] K−1xij , ∀j ∈ {1, ..., Si} (5)\nwhere Si is the total number of pixels in Di. By doing this operation for each pixel, we can obtain the point-cloud Wi corresponding to the depth map Di. To compute the geometric overlap between two frames i and j, we estimate the overlap in their point-clouds Wi and Wj . Specifically, for each point wi ∈ Wi, we retrieve the nearest neighbor from wj ∈ Wj and check whether the pairwise distance in 3D space is within a threshold τ : ||wi − wj ||2 < τ . If this condition is satisfied, then a match exists for wi. Then, we define the overlap fraction ψ(Di, Dj) the fraction of points in Wi which have a match in Wj . This overlap fraction is computed pairwise between all frames in the video, and hierarchical agglomerative clustering is performed using this similarity measure." }, { "heading": "B TASK DETAILS", "text": "For the object coverage task, to determine if an object is covered, we check if it is within 3m of the agent, present in the agent’s field of view, and if it is not occluded (Ramakrishnan et al., 2020b). We use a shaped reward function:\nRt = Ot −Ot−1 + 0.02(Ct − Ct−1), (6)\nwhere Ot, Ct are the number of object categories and 2D grid-cells visited by time t (similar to Fang et al. (2019))." }, { "heading": "C SCENE MEMORY TRANSFORMER", "text": "We provide more details about individual components of the Scene Memory Transformer Fang et al. (2019). As discussed in the main paper, the SMT model consists of a scene memory for storing the visual features {xi}ti=0 and agent poses {pi}ti=0 seen during an episode. The environment encoder uses self-attention on the scene memory to generate a richer set of environment embeddings {ei}ti=1. The policy decoder attends to the environment embeddings using the inputs ot+1, which consist of\nthe visual feature x, and agent pose p at time t + 1. The outputs of the policy decoder are used to sample an action at+1 and estimate the value vt+1. Next, we discuss the details of the individual components.\nSCENE MEMORY It stores the visual features derived from the input images and the agent poses at each time-step. Motivated by the ideas from Sax et al. (2020), we use mid-level features derived from various pre-trained CNNs for each input modality. In this work, we consider two input modalities: RGB, and depth. For RGB inputs, we extract features from the pre-trained models in the maxcoverage set proposed in Sax et al. (2020). These include surface normals, keypoints, semantic segmentation, and 2.5D segmentation. For depth inputs, we extract features from pre-trained models that predict surface normals and keypoints from depth (Zamir et al., 2020). 
For simplicity, we assume that the ground-truth pose is available to the agent in the form of (x_t, y_t, z_t, \theta_t) at each time-step, where \theta_t is the agent heading. While this can be relaxed by following ideas from state-of-the-art approaches to Neural SLAM (Chaplot et al., 2020b; Ramakrishnan et al., 2020a), we reserve this for future work as it is orthogonal to our primary contributions.
ATTENTION MECHANISM Following the notations from Vaswani et al. (2017), we define the attention mechanism used in the environment encoder and policy decoder. Given two inputs X \in \mathbb{R}^{n_1 \times d_x} and Y \in \mathbb{R}^{n_2 \times d_y}, the attention mechanism attends to Y using X as follows:

\mathrm{Attn}(X, Y) = \mathrm{softmax}\left(\frac{Q_X K_Y^T}{\sqrt{d_k}}\right) V_Y   (7)

where Q_X \in \mathbb{R}^{n_1 \times d_k}, K_Y \in \mathbb{R}^{n_2 \times d_k}, V_Y \in \mathbb{R}^{n_2 \times d_v} are the queries, keys, and values computed from X and Y as follows: Q_X = X W^q, K_Y = Y W^k, and V_Y = Y W^v. W^q, W^k, W^v are learned weight matrices. The multi-headed version of Attn generates multiple sets of queries, keys, and values to obtain the attended context C \in \mathbb{R}^{n_1 \times d_v}:

\mathrm{MHAttn}(X, Y) = \mathrm{FC}([\mathrm{Attn}_h(X, Y)]_{h=1}^{H}).   (8)

We use the transformer implementation from PyTorch (Paszke et al., 2019). Here, the multi-headed attention block builds on top of MHAttn by using residual connections, LayerNorm (LN) and fully connected (FC) layers to further encode the inputs:

\mathrm{MHAttnBlock}(X, Y) = \mathrm{LN}(\mathrm{MLP}(H) + H)   (9)

where H = \mathrm{LN}(\mathrm{MHAttn}(X, Y) + X), and MLP has 2 FC layers with ReLU activations. The environment encoder performs self-attention between the features stored in the scene memory M to obtain the environment encoding E:

E = \mathrm{EnvironmentEncoder}(M) = \mathrm{MHAttnBlock}(M, M).   (10)

The policy decoder attends to the environment encodings E using the current observation x_t, p_t:

\mathrm{PolicyDecoder}([x_t, p_t], E) = \mathrm{MHAttnBlock}(\mathrm{FC}([x_t, p_t]), E)   (11)

We transform the pose vectors \{p_i\}_{i=1}^{n} from the scene memory relative to the current agent pose p_t, as this allows the agent to maintain an egocentric view of past inputs (Fang et al., 2019)." }, { "heading": "D HYPERPARAMETERS", "text": "We detail the list of hyperparameter choices for different tasks and models in Tab. 5. For the random masking baseline in Tab. 1, we tried masking out 10, 20 and 50 frames and picked 10 frames based on the zone prediction performance. For SMT (Video), we choose 25 frames as inputs and 15 frames as output based on Dense Predictive Coding (Han et al., 2019)." }, { "heading": "E DOWNSTREAM TASK PERFORMANCE VS. EPISODE TIME", "text": "We show the downstream task performance as a function of time in Fig. 6. We evaluate each model with 3 different random seeds and report the mean and the 95% confidence interval in the plots.
[Figure 6 plots: panels for area coverage, flee, and object coverage on MP3D, and area coverage and flee on Gibson-S and Gibson-L; y-axes show area covered (m²), flee distance (m), # categories, and # instances; x-axis shows # episode steps.]
Figure 6: We highlight the downstream task performance as a function of episode time on both Matterport3D and Gibson." }, { "heading": "F SAMPLE EFFICIENCY CURVES ON GIBSON", "text": "We plot the Gibson validation performance as a function of training experience in Fig. 7.
[Figure 7 plots: panels for area coverage and flee; y-axes show area covered (m²) and flee distance (m); x-axis shows # training experience (in million frames).]
Figure 7: Sample efficiency on Gibson val split. Our environment-level pre-training leads to 2-4× training sample efficiency when compared to SoTA image-level pre-training. 
EPC achieves better sample efficiency through environment-level pre-training when compared to the image-level pre-training baseline SMT (MidLevel)." }, { "heading": "G COMPLETE ANALYSIS OF NOISE ROBUSTNESS IN DOWNSTREAM TASKS", "text": "In Tab. 4 of the main paper, we compared the noise robustness of the top three approaches on MP3D. Here, we present the complete set of results for all methods on Gibson and MP3D in Tab. 6." }, { "heading": "H EXPLORING SPATIAL CONTEXT FOR SELF-SUPERVISION IN EPC", "text": "Originally, our EPC proposal draws on context from large parts of the video (spanning several zones) to fill in the content for a missing zone (termed "EPC-global"). However, we can also leverage local spatial context spanning a limited set of frames. The SMT (Video) baseline exploits local temporal context spanning 25 + 15 frames to derive self-supervision. We therefore consider a local variant of EPC that performs spatial reasoning within a similar context, i.e., it takes 25 frames and their poses as inputs, and predicts the average feature for the next 15 frames conditioned on the pose. We term this variant "EPC-local". We compare the two EPC variants with SMT (Video) in Tab. 7.
As expected, both EPC variants outperform SMT (Video) by a large margin, validating the main hypothesis in EPC that spatial reasoning during self-supervision is critical. However, at first glance, it appears that EPC-global offers only a limited advantage over EPC-local. Our analysis reveals that EPC-global is bottlenecked by the averaging of zone features during self-supervision (Eqn. 1). Each zone typically contains anywhere from 5 to 305 frames (mean of 48 frames), and averaging them reduces the self-supervision available per video. To test this hypothesis, we make a simple change where we replace feature averaging with sampling, i.e., we sample a random frame from the masked zone as the prediction target in Eqn. 1. The performance of this new "sampling-based" zone representation is shown in Tab. 7 (denoted as "+ S"). As expected, removing the feature averaging improves both EPC variants. We see larger improvements for EPC-global, since many more frames were averaged over in this case (i.e., more information was lost). As noted, these two variants capture two types of contextual cues, local and global, which could be complementary. To test this, we combine the two losses during SSL training (EPC aug. + S). This model generally outperforms the individual methods, confirming our intuition that we can derive complementary cues from local and global context." } ]
2020
null
SP:a0417f78d102a7c5ae83d98abe990dc03e3405ec
[ "This paper proposes to use a rhetoric knowledge graph for rhetorical text generation. One of its key contributions is to construct a rhetoric knowledge graph by leveraging SOTA NER and relation classification models. To generate a rhetorical text, the new method starts with sending a keyword to the knowledge graph to retrieve the neighborhood of the keywords as its context words. Both the context words and the original query word are fed into a language model to generate the final word sequence. " ]
Embedding logical knowledge into text generation is a challenging NLP task. In this paper, we propose a knowledge enhanced text generation (KETG) framework, which incorporates both knowledge tuples and their associated text corpus to address logicality and diversity in text generation. Specifically, we validate our framework on rhetorical text generation using our newly built rhetoric knowledge graph. Experiments show that our framework outperforms baseline models such as Transformer and GPT-2 on rhetorical type control, semantic comprehensibility and diversity.
[]
[ { "authors": [ "Christoph Alt", "Marc Hübner", "Leonhard Hennig" ], "title": "Fine-tuning pre-trained transformer language models to distantly supervised relation extraction", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Thiago Castro Ferreira", "Chris van der Lee", "Emiel van Miltenburg", "Emiel Krahmer" ], "title": "Neural data-to-text generation: A comparison between pipeline and end-to-end architectures, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jian Guan", "Fei Huang", "Zhihao Zhao", "Xiaoyan Zhu", "Minlie Huang" ], "title": "A knowledge-enhanced pretraining model for commonsense story generation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhiheng Huang", "Wei Xu", "Kai Yu" ], "title": "Bidirectional lstm-crf models for sequence tagging", "venue": "arXiv preprint arXiv:1508.01991,", "year": 2015 }, { "authors": [ "Guillaume Lample", "Miguel Ballesteros", "Sandeep Subramanian", "Kazuya Kawakami", "Chris Dyer" ], "title": "Neural architectures for named entity recognition", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Jiwei Li", "Michel Galley", "Chris Brockett", "Jianfeng Gao", "Bill Dolan" ], "title": "A diversity-promoting objective function for neural conversation models", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Zhiqiang Liu", "Zuohui Fu", "Jie Cao", "Gerard de Melo", "Yik-Cheung Tam", "Cheng Niu", "Jie Zhou" ], "title": "Rhetorically controlled encoder-decoder for modern Chinese poetry generation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Christopher Manning", "Mihai Surdeanu", "John Bauer", "Jenny Finkel", "Steven Bethard", "David McClosky" ], "title": "The Stanford CoreNLP natural language processing toolkit", "venue": "In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations,", "year": 2014 }, { "authors": [ "Todor Mihaylov", "Anette Frank" ], "title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. 
", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "year": 2018 }, { "authors": [ "Malay Pramanick", "Ashim Gupta", "Pabitra Mitra" ], "title": "An LSTM-CRF based approach to token-level metaphor detection", "venue": "In Proceedings of the Workshop on Figurative Language Processing,", "year": 2018 }, { "authors": [ "Alec Radford" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Sunny Rai", "Shampa Chakraverty", "Devendra Tayal" ], "title": "Supervised metaphor detection using conditional random fields", "venue": "pp. 18–27,", "year": 2016 }, { "authors": [ "Jianlin Su" ], "title": "Lightweight information extraction model based on dgcnn and probability graph", "venue": null, "year": 2019 }, { "authors": [ "Mei Tu", "Yu Zhou", "Chengqing Zong" ], "title": "A novel translation framework based on rhetorical structure theory", "venue": "In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "year": 2013 }, { "authors": [ "Yiren Wang", "Hongzhao Huang", "Zhe Liu", "Yutong Pang", "Yongqiang Wang", "ChengXiang Zhai", "Fuchun Peng" ], "title": "Improving n-gram language models with pre-trained deep transformer", "venue": null, "year": 2019 }, { "authors": [ "Zhe Wang", "Wei He", "Hua Wu", "Haiyang Wu", "Wei Li", "Haifeng Wang", "Enhong Chen" ], "title": "Chinese poetry generation with planning based neural network", "venue": null, "year": 2016 }, { "authors": [ "Xiaoyuan Yi", "Maosong Sun", "Ruoyu Li", "Zonghan Yang" ], "title": "Chinese poetry generation with a working memory model", "venue": "In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Zhengyan Zhang", "Xu Han", "Zhiyuan Liu", "Xin Jiang", "Maosong Sun", "Qun Liu" ], "title": "Ernie: Enhanced language representation with informative entities", "venue": "Meeting of the Association for Computational Linguistics,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent pre-trained language models such as GPT-2 can capture clear semantic and syntactic features (Radford, 2018), performing well in machine translation and abstract generation tasks (Li et al., 2016; Wang et al., 2016). However, the application of language models in text generation still needs to be explored. The logic in text generation, especially literature creation, is always obscure, which means they are usually low-frequency, causing the difficulty of modeling by current language models. On the other hand, too much limits of prior information will lead to homogenization in generated texts. To address these issues, (Guan et al., 2020) proposes a knowledge-enhanced pretraining model for commonsense story generation, by transforming the commonsense triples into sentences using a template-based method. However, the template-based transformed sentences from commonsense triples for post-training are rather homogeneous.\nIn this paper, we introduce an innovative knowledge enhanced text generation (KETG) framework, which incorporates knowledge tuples and their associated sentences in training, such that the logic relation lying in the knowledge tuples can be effectively addressed. Regarding the sentences associated with the knowledge tuples, we may generate the sentences from the tuples by template-based method as in (Guan et al., 2020). However, incorporating real corpus sentences would be more beneficial as they generally exhibit more diversity than those generated from templates, if they are available.\nIn this way, the generation model can learn the both logicality and diversity in the knowledge tuples and sentences.\nWe validate our KETG framework on rhetorical text generation, which is an important and essential part in modern literature(Tu et al., 2013).\nRhetoric is quite obscure, requiring strong logical correlation, and a rhetoric knowledge graph with explicit logical information (rather than the commonsense knowledge graph) would be helpful to rhetorical text generation. Unfortunately, to the best of our knowledge, we are not aware of any rhetoric knowledge graph. Hence by using relation extraction methods, we build a rhetoric (specifically, here we refer to metaphor and personification) knowledge graph from a collection of Chinese poems and compositions. With the newly built rhetoric knowledge graph and the corpus from which the knowledge graph is extracted, we train a rhetorical text generation model. Both automatic and manual evaluations show that our KETG model outperforms baseline models on rhetorical type control, semantic comprehensibility and diversity. Experiments also illustrate that incorporating sentences by template-based method in training results in rather similar generated text as the template, while incorporating real corpus sentences brings more diversity in text generation.\nTo sum up ,the main contributions of this paper are summarized as follows:\n1. We propose a KETG framework, which includes both knowledge information and associated sentences in training to address logicality and diversity.\n2. We validate our KETG framework on rhetorical (metaphor and personification) text generation. Results show that our KETG framework can generate more reasonable and diverse rhetorical texts, and the rhetoric types can be controlled implicitly.\n3. To the best of our knowledge, we build the first Chinese rhetoric (metaphor and personification) graph with 35228 tuples." 
}, { "heading": "2 RELATED WORK", "text": "Language Model(LM) In order to use as much semantic information as possible, several research work has been conducted. In early stage, researchers focused on the feature-based method to express syntactic and semantic information in texts. However, this kind of method can not solve the problem of polysemy. To improve, (Peters et al., 2018) Elmo is proposed to capture complex word characteristics in texts. Meanwhile, in NLP tasks, massive texts are often unlabeled. To solve this, fine-tuning models are raised, which can learn ”common sense” from unlabeled texts. Both Bert and GPT-2 are representative models. (Wang et al., 2019; Ferreira et al., 2019) They have achieved good evaluation results in multiple NLP tasks, such as named entity recognition, Q&A, text classification and text generation.\nKnowledge Enhanced LM To mimic human’s writing manner, the most basic thing is to ensure that the generated text fluent and semantically understandable. Secondly, the common sense of humankind is also indispensable. Furthermore, aesthetics and logicality make language expressions more vivid, novel and apt. However, it’s hard to meet these requirements merely by language models. (Bowman et al., 2015) used common-sense knowledge base in natural language inference(NLI) and NLG. As mentioned in (Zhou et al., 2018), common sense knowledge can promote performance in dialogue generation. (Mihaylov & Frank, 2018) introduced a neural reading comprehension model that encodes external common sense knowledge as key-value memory. (Zhang et al., 2019) introduced a knowledge enhanced pre-trained language framework ERNIE, trying to increase the knowledge representation by masking semantic units such as words and entities. (Guan et al., 2020) proposes a knowledge-enhanced pretraining model for commonsense story generation. They post-train the model on knowledge-augmented data by transforming the commonsense triples into sentences.\nRhetorical Text Generation Rhetoric is an important and essential part in modern literature(Tu et al., 2013). It can express author’s passion and grace, improving the aesthetic merit of creations. (Liu et al., 2019) proposed a rhetorically controlled generation model for Chinese poetry generation to govern the rhetorical modes. Through a classifier inserted in the encoder, they can control the rhetorical modes of generated poems. However, it does not include knowledge graph and hence might generate illogical sentences, like ”Flakes of snow are flying like snow”, which appears to be a metaphor, but includes illogical ‘snow like snow’." }, { "heading": "3 OUR KETG FRAMEWORK", "text": "We propose an innovative KETG framework, to combine the knowledge information with text generation models, just like the external device to computer. The architecture could be used to combine different types of knowledge graph with text generation model.\nAs depicted in Figure 1, we query the keyword in knowledge graph firstly, getting a context vector containing knowledge information. Then, we concatenate the context knowledge vector and the keyword vector, input them together with associated sentence to the language model. In this way, we can highlight the topic in the sentence and potential logical relationship between the entities, forcing the model pay more attention to them. 
When generating text, given a topic word we obtain the context knowledge vector in the same way; it then serves as input to the trained model, which generates the whole sentence in an auto-regressive manner.
Compared with a single topic word, the expanded context knowledge vector also exploits the diversity of the knowledge graph, ensuring that the generated sentences are full of variety. It is worth mentioning that our framework retains the real corpus sentences rather than sentences generated from templates, so the generation model can learn the diversity of sentence structure.
In detail, we add [cls] at the beginning of the keyword vector and insert [mask] to separate it from the original sentence. We then concatenate them as the input of the text generation model.
Using the above approach, we integrate knowledge information into the text generation model naturally. With external knowledge, the generation model can produce more reasonable text while still capturing significant semantic and syntactic features." }, { "heading": "4 RHETORICAL TEXT GENERATION", "text": "Rhetoric is an essential element in literature. Among 8744 Chinese poems (Liu et al., 2019), 31.4% use metaphor and 18.5% use personification. We also collected 54949 exemplary sentences from well-known compositions; among them, 11989 use metaphor and 28718 use personification. Metaphor and personification are thus the dominant forms of rhetoric, so we build our rhetorical knowledge graph on these two types." }, { "heading": "4.1 RHETORICAL RELATION EXTRACTION", "text": "We use relation extraction algorithms (Rai et al., 2016; Alt et al., 2019) to build our rhetorical graph. Based on a BERT+CRF layer (Huang et al., 2015; Lample et al., 2016; Pramanick et al., 2018), the model is designed to handle NER (Named Entity Recognition) and relation classification jointly. In addition, we introduce a prior relation graph to filter NER results, which effectively improves the accuracy of the extraction results. To handle multiple entities in a sentence, we adopt a "semi-pointer semi-label" mechanism (Su, 2019)." }, { "heading": "4.2 CONSTRUCTING RHETORIC GRAPH", "text": "We build our rhetorical knowledge graph in three steps.
Firstly, we collect metaphor and personification sentences from well-known compositions. Based on coreference resolution rules, we use the Stanford CoreNLP tools (Manning et al., 2014) to extract each metaphor into a tuple of (noumenon, metaphor object, metaphor base), and each personification into a pair of (unhuman-subject, human-action/human-emotion). Using this method, we build a seed rhetorical knowledge graph from 8035 rhetorical sentences, which are then manually checked to ensure accuracy.
Secondly, we train a rhetorical classifier using this seed graph, adding 3432 negative examples to prevent over-fitting. The classifier's accuracy is 0.97 on metaphor and 0.75 on personification.
Finally, based on rules and the above classifier, we iteratively expand the data set and retrain the classifier to build a large rhetorical knowledge graph with 35228 tuples and 30970 nodes.
During construction, we found that rhetorical relationships are strongly logical, especially metaphor, and that a naive storage mechanism leads to serious logical errors at query time. 
For example, metaphorical relationships like (snowflake, falling, catkins) and (leaf, falling, snowflakes) would be stored as [snowflake]-feature-[float]-like-[catkin] and [leaf]-feature-[float]-like-[snowflake]. When searching for the noumenon "snowflake" in the graph, the result could then be the illogical [snowflake]-feature-[float]-like-[snowflake]. We design a graph storage mechanism to avoid such illogical results; the structure is shown in Figure 2. We use a triangle structure to store the noumenon, metaphor object and metaphor base. It is worth mentioning that we store the metaphor base as a node instead of an edge, because the types of metaphor base are complicated and varied, and storing it as an edge would greatly lower search efficiency. Personification is similar to metaphor but contains only two entities: [unhuman-subject]-Personification-[human-action/human-emotion]." }, { "heading": "4.3 GENERATING WITH RHETORIC GRAPH", "text": "Firstly, we query the keyword in the rhetorical graph to obtain a context vector that includes the corresponding rhetorical information. For metaphor, for example, the vector contains information about the metaphor object and metaphor base.
Then we concatenate the context vector and the keyword vector and feed them, together with the associated original sentence, to the text generation model, training it with the method in Figure 1. During generation, given a topic word and a rhetoric type, we obtain the context knowledge vector in the same way and generate the corresponding rhetorical sentence with the trained model.
In particular, we use Top-K decoding: when predicting the next word, we randomly select one of the 5 most probable candidates. This effectively mitigates word repetition in the generated text." }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 DATASET", "text": "We build a dataset from sentences of well-known compositions spanning various genres and themes. The corpus contains 56814 sentences in three categories: 17721 are metaphor, 20402 are personification, and the rest contain no rhetoric." }, { "heading": "5.2 EXPERIMENT SETTING", "text": "To demonstrate the effect of the knowledge graph, we embed our rhetorical graph into Transformer and GPT-2 and compare the results with vanilla Transformer and GPT-2 models.
The only prior work on rhetorical text generation we are aware of is (Liu et al., 2019), which proposes a rhetorically controlled encoder-decoder for modern Chinese poetry generation based on the seq2seq framework. They input the rhetorical type into the model explicitly to control the generated rhetorical type; however, without knowledge information, the problem of logical conflict remains. Moreover, they generate a poem line by line: when generating the i-th line, they need the previous i-1 lines and the topic keywords as input, whereas our framework focuses on text generation from a topic keyword. Due to these differences in generation approach, we list some experimental results of (Liu et al., 2019) for reference only.
We also carry out an extra experiment that trains the generation model on template-generated sentences (Guan et al., 2020). We first transform the rhetorical triples into template sentences and then train the generation model on these template sentences instead of real ones. The generated sentences are shown in Table 4; they are quite homogeneous.
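Before turning to evaluation, the Top-K decoding of Sec. 4.3 can be sketched as below. The paper's wording leaves open whether the choice among the top 5 tokens is uniform or probability-weighted; this sketch shows the standard probability-weighted variant, and the temperature parameter is our assumption.

```python
import torch
import torch.nn.functional as F

def sample_top_k(logits, k=5, temperature=1.0):
    """Sample the next token from the k most probable candidates (Sec. 4.3)."""
    topk_logits, topk_idx = torch.topk(logits, k, dim=-1)
    probs = F.softmax(topk_logits / temperature, dim=-1)  # renormalize over top-k
    choice = torch.multinomial(probs, num_samples=1)      # weighted random pick
    return topk_idx.gather(-1, choice)
```

Sampling among several high-probability candidates, instead of always taking the argmax, is what mitigates the repeated-word problem noted in Sec. 4.3.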
}, { "heading": "5.3 AUTOMATIC EVALUATION", "text": "Evaluation Metrics We adopt the perplexity score(PPL) and Rhetoric-F1 score to evaluate the generation performance. Perplexity score indicates the fluency of generated sentences. The Rhetoric-F1 score is introduced in (Liu et al., 2019), which can be used to measure the rhetorically controlled accuracy of the generated sentences.\nEvaluation Results Our GPT-2+KG obtains a lower perplexity, which suggests that the model is on a par with other models in generating grammatical sentences. When combined with rhetorical knowledge graph, the Rhetoric-F1 of both GPT-2 and Transformer achieve the level of explicit method AC model. It proves that our framework can effectively control the generated rhetorical type implicitly. Detailed results are given in Table 1." }, { "heading": "5.4 MANUAL EVALUATION", "text": "Evaluation Metrics Following previous work(Yi et al., 2018),we consider four criteria for human evaluation:\n• Correlation(C): Whether the generated sentence is related to specific keywords; • Fluency(F): Whether the generated sentence is fluent;\nEach criterion is scored ranging from 1 to 5. We have 180 groups of keywords, each group generates 5 different sentences. We calculate the scores by 5 people voting. That is, removing the highest score and the lowest score, taking the average score of the rest as the final result.\nEvaluation Results Table 2 shows the results of the human evaluation. In baseline models, it’s obviously that GPT-2 performs better than Transformer in all four scores. When combined with rhetorical knowledge graph, the fluency of generated sentences are as good as baseline models, even better for Transformer. On the other hand, the correlation score has improved in both models combined with KG, demonstrating that the knowledge information can be well learned in our framework.\nFarther more, when combined with KG, both the semantic comprehensibility and artistic aesthetics of GPT-2 are improved, while the scores of Transformer are slightly reduced. The reason is that the generated sentences of Transformer are always short and simple, too much keywords will limit expression.\nIn addition, the generated rhetorical type is also manually marked to help us analyze the rhetorical distribution of generated sentences. It can be seen that combined with rhetorical knowledge graph, the number of rhetorical sentences increases obviously. Details can be found in Figure 3.\nWe also find an attractive phenomenon, that 46 analogy1 sentences are generated by GPT-2+KG, which can demonstrate the association of a novel internal logic in our graph database. This will help us focus on rhetorical graph inference in the future." }, { "heading": "5.5 CASE STUDY", "text": "In order to further demonstrate how our framework combined with knowledge graph, we display the generated examples in Table 4. It can be seen that our framework learn knowledge information well, control the rhetorical type effectively at the same time. An additional case is shown in Figure 4, illustrating that our framework can take the advantage of knowledge graph, generating diverse texts." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we propose a innovative generation framework which can combine knowledge graph with text generation model effectively. In addition, we construct the first Chinese rhetoric graph and devise a graph storage mechanism to resolve the logic conflict problem during query. 
Experiments show that our method can control the rhetorical types in the generated texts, while making the texts more fluent and reasonable at the same time.\nExtra experiments show that by training on real corpus sentences rather than those generated from templates, the generation model can learn sentence-level diversity. However, the need to obtain both the tuples and the associated real corpus is a restrictive condition of our framework.\nIn future work, it would be very interesting to investigate additional kinds of rhetoric, such as parallelism, to further expand the rhetorical knowledge graph. Meanwhile, we expect to enhance the knowledge inference ability of the knowledge graph by enriching the attributes of its nodes." } ]
2020
KETG: A KNOWLEDGE ENHANCED TEXT GENERATION FRAMEWORK
SP:364842bf9376198df47a7323185d72cc73380d4d
[ "This paper combines combines submodular surrogates for sequential decision making with imitation learning. Specifically, it proposes to learn an acquisition function g by imitating an expert which is assumed to be following a greedy policy wrt a general submodular surrogate f. This is accomplished by regularizing g to encourage diminishing returns and monotonicity. The learning algorithm is a modified version of DAgger which is consistent with the expert and provably near-optimal utility. Results outperform baselines on various sequential decision making tasks." ]
Many sequential decision making tasks can be viewed as combinatorial optimization problems over a large number of actions. When the cost of evaluating an action is high, even a greedy algorithm, which iteratively picks the best action given the history, is prohibitive to run. In this paper, we aim to learn a greedy heuristic for sequentially selecting actions as a surrogate for invoking the expensive oracle when evaluating an action. In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates). We introduce a data-driven optimization framework based on the submodular-norm loss, a novel loss function that encourages the resulting objective to exhibit diminishing returns. Our framework outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove strong approximation guarantees for the learned greedy heuristic. Furthermore, our model is easily integrated with modern deep imitation learning pipelines for sequential prediction tasks. We demonstrate the performance of our algorithm on a variety of batched and sequential optimization tasks, including set cover, active learning, and data-driven protein engineering.
[ { "affiliations": [], "name": "SUBMODULAR REGULARIZATION" }, { "affiliations": [], "name": "Ayya Alieva" }, { "affiliations": [], "name": "Aiden Aceves" }, { "affiliations": [], "name": "Jialin Song" }, { "affiliations": [], "name": "Yisong Yue" }, { "affiliations": [], "name": "Yuxin Chen" } ]
[ { "authors": [ "E.C. Alley", "G. Khimulya", "S. Biswas", "M. AlQuraishi", "G.M. Church" ], "title": "Unified rational protein engineering with sequence-based deep representation learning", "venue": "Nat. Methods, 16(12):1315–1322,", "year": 2019 }, { "authors": [ "Jordan T. Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Francis Bach" ], "title": "Submodular functions: from discrete to continuous domains", "venue": "Mathematical Programming,", "year": 2019 }, { "authors": [ "Ashwinkumar Badanidiyuru", "Baharan Mirzasoleiman", "Amin Karbasi", "Andreas Krause" ], "title": "Streaming submodular maximization: Massive data summarization on the fly", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Maria-Florina Balcan", "Nicholas JA Harvey" ], "title": "Submodular functions: Learnability, structure, and optimization", "venue": "SIAM Journal on Computing,", "year": 2018 }, { "authors": [ "Mislav Balunovic", "Pavol Bielik", "Martin T Vechev" ], "title": "Learning to solve smt formulas", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Andrew An Bian", "Joachim M Buhmann", "Andreas Krause", "Sebastian Tschiatschek" ], "title": "Guarantees for greedy maximization of non-submodular functions with applications", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Andrew An Bian", "Baharan Mirzasoleiman", "Joachim Buhmann", "Andreas Krause" ], "title": "Guaranteed non-convex optimization: Submodular maximization over continuous domains", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Niv Buchbinder", "Moran Feldman", "Joseph Naor", "Roy Schwartz" ], "title": "Submodular maximization with cardinality constraints", "venue": "In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms,", "year": 2014 }, { "authors": [ "K. Chaloner", "I. Verdinelli" ], "title": "Bayesian experimental design: A review", "venue": "Statistical Science,", "year": 1995 }, { "authors": [ "Yuxin Chen", "S Hamed Hassani", "Amin Karbasi", "Andreas Krause" ], "title": "Sequential information maximization: When is greedy near-optimal", "venue": "In Conference on Learning Theory,", "year": 2015 }, { "authors": [ "Yuxin Chen", "Shervin Javdani", "Amin Karbasi", "James Andrew Bagnell", "Siddhartha Srinivasa", "Andreas Krause" ], "title": "Submodular surrogates for value of information", "venue": "In Proc. Conference on Artificial Intelligence (AAAI), January 2015b", "year": 2015 }, { "authors": [ "Yuxin Chen", "S. Hamed Hassani", "Andreas Krause" ], "title": "Near-optimal bayesian active learning with correlated and noisy tests", "venue": "In Proc. 
International Conference on Artificial Intelligence and Statistics (AISTATS), April 2017a", "year": 2017 }, { "authors": [ "Yuxin Chen", "Jean-Michel Renders", "Morteza Haghir Chehreghani", "Andreas Krause" ], "title": "Efficient online learning for optimizing value of information: Theory and application to interactive troubleshooting", "venue": "In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI 2017),", "year": 2017 }, { "authors": [ "Sonia Chernova", "Andrea L Thomaz" ], "title": "Robot learning from human teachers", "venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning,", "year": 2014 }, { "authors": [ "Sanjiban Choudhury", "Ashish Kapoor", "Gireeja Ranade", "Debadeepta Dey" ], "title": "Learning to gather information via imitation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Brian W Dolhansky", "Jeff A Bilmes" ], "title": "Deep submodular functions: Definitions and learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Khalid El-Arini", "Gaurav Veda", "Dafna Shahaf", "Carlos Guestrin" ], "title": "Turning down the noise in the blogosphere", "venue": "In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2009 }, { "authors": [ "Moran Feldman", "Ashkan Norouzi-Fard", "Ola Svensson", "Rico Zenklusen" ], "title": "The one-way communication complexity of submodular maximization with applications to streaming and robustness", "venue": "In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2020 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Maxime Gasse", "Didier Chételat", "Nicola Ferroni", "Laurent Charlin", "Andrea Lodi" ], "title": "Exact combinatorial optimization with graph convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Golovin", "Andreas Krause" ], "title": "Adaptive submodularity: Theory and applications in active learning and stochastic optimization", "venue": "Journal of Artificial Intelligence Research,", "year": 2011 }, { "authors": [ "He He", "Hal Daume III", "Jason M Eisner" ], "title": "Learning to search in branch and bound algorithms", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Gaurush Hiranandani", "Harvineet Singh", "Prakhar Gupta", "Iftikhar Ahamath Burhanuddin", "Zheng Wen", "Branislav Kveton" ], "title": "Cascading linear submodular bandits: Accounting for position bias and diversity in online learning to rank", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Thibaut Horel", "Yaron Singer" ], "title": "Maximization of approximately submodular functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Shervin Javdani", "Yuxin Chen", "Amin Karbasi", "Andreas Krause", "James Andrew Bagnell", "Siddhartha Srinivasa" ], "title": "Near-optimal bayesian active learning for decision making", "venue": "In In Proc. 
International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2014 }, { "authors": [ "Ehsan Kazemi", "Marko Mitrovic", "Morteza Zadimoghaddam", "Silvio Lattanzi", "Amin Karbasi" ], "title": "Submodular streaming in all its glory: Tight approximation, minimum memory and low adaptive complexity", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Elias Boutros Khalil", "Pierre Le Bodic", "Le Song", "George Nemhauser", "Bistra Dilkina" ], "title": "Learning to branch in mixed integer programming", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andreas Krause", "Daniel Golovin" ], "title": "Submodular function maximization", "venue": "Tractability,", "year": 2014 }, { "authors": [ "Andreas Krause", "Carlos Guestrin" ], "title": "Optimal value of information in graphical models", "venue": "JAIR, 35:557–591,", "year": 2009 }, { "authors": [ "Andreas Krause", "Ajit Singh", "Carlos Guestrin" ], "title": "Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Jure Leskovec", "Andreas Krause", "Carlos Guestrin", "Christos Faloutsos", "Jeanne VanBriesen", "Natalie Glance" ], "title": "Cost-effective outbreak detection in networks", "venue": "In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2007 }, { "authors": [ "Ching Lih Lim" ], "title": "A suite of greedy methods for set cover computation", "venue": null, "year": 2015 }, { "authors": [ "Hui Lin", "Jeff Bilmes" ], "title": "A class of submodular functions for document summarization", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Hui Lin", "Jeff Bilmes" ], "title": "Learning mixtures of submodular shells with application to document summarization", "venue": "In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence,", "year": 2012 }, { "authors": [ "Dennis V Lindley" ], "title": "On a measure of the information provided by an experiment", "venue": "The Annals of Mathematical Statistics,", "year": 1956 }, { "authors": [ "Ming Liu", "Wray Buntine", "Gholamreza Haffari" ], "title": "Learning how to actively learn: A deep imitation learning approach", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Marko Mitrovic", "Mark Bun", "Andreas Krause", "Amin Karbasi" ], "title": "Differentially private submodular maximization: data summarization in disguise", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "George L Nemhauser", "Laurence A Wolsey", "Marshall L Fisher" ], "title": "An analysis of approximations for maximizing submodular set functionsi", "venue": "Mathematical programming,", "year": 1978 }, { "authors": [ "Filip Radlinski", "Robert Kleinberg", "Thorsten Joachims" ], "title": "Learning diverse rankings with multi-armed bandits", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { 
"authors": [ "Roshan Rao", "Nicholas Bhattacharya", "Neil Thomas", "Yan Duan", "Xi Chen", "John Canny", "Pieter Abbeel", "Yun S. Song" ], "title": "Evaluating protein transfer learning with tape", "venue": "Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Stephane Ross", "Jiaji Zhou", "Yisong Yue", "Debadeepta Dey", "Drew Bagnell" ], "title": "Learning policies for contextual submodular prediction", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "M.C. Runge", "S.J. Converse", "J.E. Lyons" ], "title": "Which uncertainty? using expert elicitation and expected value of information to design an adaptive program", "venue": "Biological Conservation,", "year": 2011 }, { "authors": [ "Yash Satsangi", "Shimon Whiteson", "Frans A Oliehoek", "Matthijs TJ Spaan" ], "title": "Exploiting submodular value functions for scaling up active perception", "venue": "Autonomous Robots,", "year": 2018 }, { "authors": [ "Adish Singla", "Ilija Bogunovic", "Gábor Bartók", "Amin Karbasi", "Andreas Krause" ], "title": "Near-optimally teaching the crowd to classify", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Ruben Sipos", "Pannaga Shivaswamy", "Thorsten Joachims" ], "title": "Large-margin learning of submodular summarization models", "venue": "In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics,", "year": 2012 }, { "authors": [ "U. Sjbring", "L. Bjrck", "W. Kastern" ], "title": "Streptococcal protein G. Gene structure and protein binding properties", "venue": "J. Biol. Chem.,", "year": 1991 }, { "authors": [ "Jialin Song", "Ravi Lanka", "Albert Zhao", "Aadyot Bhatnagar", "Yisong Yue", "Masahiro Ono" ], "title": "Learning to search via retrospective imitation", "venue": "arXiv preprint arXiv:1804.00846,", "year": 2018 }, { "authors": [ "Jialin Song", "Ravi Lanka", "Yisong Yue", "Bistra Dilkina" ], "title": "A general large neighborhood search framework for solving integer linear programs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Matthew Streeter", "Daniel Golovin", "Andreas Krause" ], "title": "Online learning of assignments", "venue": "Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "C.Y. Wang", "P.M. Chang", "M.L. Ary", "B.D. Allen", "R.A. Chica", "S.L. Mayo", "B.D. 
Olafson" ], "title": "ProtaBank: A repository for protein design and engineering data", "venue": "Protein Sci.,", "year": 2019 }, { "authors": [ "Kai Wei", "Rishabh Iyer", "Jeff Bilmes" ], "title": "Submodularity in data subset selection and active learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Baosheng Yu", "Meng Fang", "Dacheng Tao" ], "title": "Linear submodular bandits with a knapsack constraint", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Yisong Yue", "Carlos Guestrin" ], "title": "Linear submodular bandits and their application to diversified retrieval", "venue": "Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "Yisong Yue", "Thorsten Joachims" ], "title": "Predicting diverse subsets using structural svms", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Konyushkova" ], "title": "Published as a conference paper at ICLR 2021 adding any one additional datapoint was too weak and thus the selection of the next best datapoint was too noisy. Since BADGE requires a neural network classifier/regressor, we could not use it as a baseline for Set Cover (Set Cover regression function is simply adding all elements in the superset)", "venue": null, "year": 2017 }, { "authors": [ "Konyushkova" ], "title": "2017) is not compatible with most of the tasks we considered here (for MNIST, yes if we use random forest classifiers; but for others not). Furthermore, Konyushkova et al. (2017) treated the problem under a classical supervised learning setting this is often not desirable, given that we are learning a policy from non i.i.d. data samples", "venue": null, "year": 2017 }, { "authors": [ "Alley" ], "title": "UniRep produces protein embeddings as a matrix of shape (length protein sequence, 1900), although we average together the embeddings only of positions being engineered to produce a consistent embedding of shape (1900,). We have implemented the active learning imitation learning algorithm", "venue": "(Rao et al.,", "year": 2019 }, { "authors": [ "Liu" ], "title": "Pseudocode for this method is presented in Algorithms 1 and 2 from the original work", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "dimensional convolution layer (128 filters, kernel size 3), before being flattened and applying two fully connected layers of 128 units each. When predicting protein fitness, dropout is applied with a probability of 0.5 and an additional dense layer is applied with one unit and linear activation. Both networks are trained using ADAM with a learning rate of 1e-3. The implementation of this part of the project", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In real-world automated decision making tasks we seek the optimal set of actions that jointly achieve the maximal utility. Many of such tasks — either deterministic/non-adaptive or stochastic/adaptive — can be viewed as combinatorial optimization problems over a large number of actions. As an example, consider the active learning problem where a learner seeks the maximally-informative set of training examples for learning a classifier. The utility of a training set could be measured by the mutual information (Lindley, 1956) between the training set and the remaining (unlabeled) data points, or by the expected reduction in generation error if the model is trained on the candidate training set. Similar problems arise in a number of other domains, such as experimental design (Chaloner and Verdinelli, 1995), document summarization (Lin and Bilmes, 2012), recommender system (Javdani et al., 2014), and policy making (Runge et al., 2011).\nIdentifying the optimal set of actions (e.g., optimal training sets, most informative experiments) amounts to evaluating the expected utility over a combinatorial number of candidate sets. When the underlying model class is complex and the evaluation of the utility function is expensive, these tasks are notoriously difficult to optimize (Krause and Guestrin, 2009). For a broad class of decision making problems whose optimization criterion is to maximize the decision-theoretic value of information (e.g., active learning and experimental design), it has been shown that it is possible to design surrogate objective functions that are (approximately) submodular while being aligned with the original objective at the optimal solutions (Javdani et al., 2014; Chen et al., 2015b; Choudhury et al., 2017). Here, the information gathering policies no longer aim to directly optimize the target objective value, but rather choose to follow a greedy trajectory governed by the surrogate function\nthat is much cheaper to evaluate. These insights have led to principled algorithms that enable significant gains in the efficiency of the decision making process, while enjoying strong performance guarantees that are competitive with the optimal policy.\nDespite the promising performance, a caveat for these “submodular surrogate”-based approaches is that it is often challenging to engineer such a surrogate objective without an ad-hoc design and analysis that requires trial-and-error (Chen et al., 2015b; Satsangi et al., 2018). Furthermore, for certain classes of surrogate functions, it is NP-hard to compute/evaluate the function value (Javdani et al., 2014). In such cases, even a greedy policy, which iteratively picks the best action given the (observed) history, can be prohibitively costly to design or run. Addressing this limitation requires more automated or systematic ways of designing (efficient) surrogate objective functions for decision making.\nOverview of main results. Inspired by contemporary work in data-driven decision making, we aim to learn a greedy heuristic for sequentially selecting actions. This heuristic acts as a surrogate for invoking the expensive oracle when evaluating an action. Our key insight is that many practical algorithms can be interpreted as greedy approaches that follow an (approximate) submodular surrogate objective. In particular, we focus on the class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via a submodular surrogate). 
We highlight some of the key results below:\n• Focusing on utility-based greedy policies, we introduce a data-driven optimization framework based on the “submodular-norm” loss, which is a novel loss function that encourages learning functions that exhibit “diminishing returns”. Our framework, called LEASURE (Learning with Submodular Regularization), outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove approximation guarantees for the resulting greedy heuristic.\n• We show that our approach can be easily integrated with modern imitation learning pipelines for sequential prediction tasks. We provide a rigorous analysis of the proposed algorithm and prove strong performance guarantees for the learned objective.\n• We demonstrate the performance of our approach on a variety of decision making tasks, including set cover, active learning for classification, and data-driven protein design. Our results suggest that, compared to standard learning-based baselines: (a) at training time, LEASURE requires significantly fewer oracle calls to learn the target objective (i.e., to minimize the approximation error against the oracle objective); and (b) at test time, LEASURE achieves superior performance on the corresponding optimization task (i.e., to minimize the regret for the original combinatorial optimization task). In particular, LEASURE has shown promising performance in the protein design task and will be incorporated into a real-world protein design workflow." }, { "heading": "2 RELATED WORK", "text": "Near-optimal decision making via submodular optimization. Submodularity is a property of a set function that has a strong relationship with diminishing returns, and the use of submodularity has wide applications from information gathering to document summarization (Leskovec et al., 2007; Krause et al., 2008; Lin and Bilmes, 2011; Krause and Golovin, 2014). The maximization of a submodular function has been an active area of study in various settings such as centralized (Nemhauser et al., 1978; Buchbinder et al., 2014; Mitrovic et al., 2017), streaming (Badanidiyuru et al., 2014; Kazemi et al., 2019; Feldman et al., 2020), continuous (Bian et al., 2017b; Bach, 2019) and approximate (Horel and Singer, 2016; Bian et al., 2017a). Variants of the greedy algorithm, which iteratively selects an element that maximizes the marginal gain, feature prominently in the algorithm design process. For example, in the case of maximizing a monotone submodular function subject to a cardinality constraint, it is shown that the greedy algorithm achieves an approximation ratio of (1− 1/e) of the optimal solution (Nemhauser et al., 1978). In applications where we need to make a sequence of decisions, such as information gathering, we usually need to adapt our future decisions based on past outcomes. Adaptive submodularity is the corresponding property where an adaptive greedy algorithm enjoys a similar guarantee for maximizing an adaptive submodular function (Golovin and Krause, 2011). Recent works have explored optimizing the value of information (Chen et al., 2015b) and Bayesian active learning (Javdani et al., 2014; Chen et al., 2017a) with this property. 
Another line of related work is the online setting (typically bandits), which is grounded in minimizing cumulative regret (Radlinski et al., 2008; Streeter et al., 2009; Yue and Guestrin, 2011; Ross et al., 2013; Yu et al., 2016; Hiranandani et al., 2020).\nLearning submodular functions. Early work focused on learning non-negative linear combinations of submodular basis functions (Yue and Joachims, 2008; El-Arini et al., 2009; Yue and Guestrin, 2011; Sipos et al., 2012), which was later generalized to mixtures of “submodular shells” (Lin and Bilmes, 2012). Deep submodular functions (Dolhansky and Bilmes, 2016) extend these ideas to more expressive compositional function classes by using sums of concave functions composed with modular functions. The theoretical question of the learnability of general submodular functions is analyzed in Balcan and Harvey (2018). Our goal is to encourage submodularity via regularization, rather than via hard constraints on the function class design.\nLearning to optimize via imitation learning. Rather than first learning a submodular function and then optimizing it, one can instead learn to directly make decisions (e.g., imitate the oracle greedy algorithm). This area builds upon imitation learning, which learns a policy (i.e., a mapping from states to actions) directly from examples provided by an expert (e.g., an expensive computational oracle, or a human instructor) (Chernova and Thomaz, 2014). Classic work on imitation learning (e.g., the Dataset Aggregation (DAgger) algorithm (Ross et al., 2011)) reduces the policy learning problem to the supervised learning setting, which has been extended to submodular optimization by imitating the greedy oracle method (Ross et al., 2013). More generally, learning to optimize has been applied generically to improve combinatorial optimization solvers for focused distributions of optimization problems (He et al., 2014; Song et al., 2018; Khalil et al., 2016; Balunovic et al., 2018; Gasse et al., 2019; Song et al., 2020). Our approach bridges learning to optimize and learning submodular functions, with a focus on learning surrogate utilities using submodular regularization.\nLearning active learning. Our approach is applicable to active learning, and so is related to work on learning active learning. The closest line of work learns a utility function as a surrogate for improvement in classifier accuracy (Konyushkova et al., 2017; Liu et al., 2018), which is then used as the decision criterion. However, prior work either used restricted function classes (Konyushkova et al., 2017), or very expressive function classes that can be hard to fit well (Liu et al., 2018). Our work can be viewed as a direct extension of this design philosophy, where we aim to reliably learn over expressive function classes using submodular regularization. Other related work does not directly learn an active learning criterion, instead encouraging sample diversity using submodularity (Wei et al., 2015) or the gradient signal from the classifier (Ash et al., 2020)." }, { "heading": "3 BACKGROUND AND PROBLEM STATEMENT", "text": "" }, { "heading": "3.1 DECISION MAKING VIA SUBMODULAR SURROGATES", "text": "Given a ground set of items V to pick from, let u : 2V → R be a set function that measures the value of any given subset1 A ⊆ V . For example, for experimental design, u(A) captures the utility of the output of the best experiment; for active learning u(A) captures the generalization error after training with set A.
We denote a policy π : 2V → V to be a partial mapping from the set/sequence of items already selected, to the next item to be picked. We use Π to denote our policy class. Each time a policy picks an item e ∈ V , it incurs a unit cost. Given the ground set V , the utility function u, and a budget k for selecting items, we seek the optimal policy π that achieves the maximal utility:\nπ∗ ∈ arg max_{π∈Π} u(Sπ,k). (1)\nSπ,k is the sequence of items picked by π: Sπ,i = Sπ,i−1 ∪ {π(Sπ,i−1)} for i > 0 and Sπ,0 = ∅. As we have discussed in the previous sections, many sequential decision making problems can be characterized as constrained monotone submodular maximization problems. In those scenarios u is:\n• Monotone: For any A ⊆ V and e ∈ V \\A, u(A) ≤ u(A ∪ {e}).\n• Submodular: For any A ⊆ B ⊆ V and e ∈ V \\B, u(A ∪ {e})− u(A) ≥ u(B ∪ {e})− u(B).\n1For simplicity, we focus on deterministic set functions in this section. Note that many of our results easily extend to the stochastic setting, by leveraging the theory of adaptive submodularity (Golovin and Krause, 2011).\nIn such cases, a myopic algorithm following the greedy trajectory of u admits a near-optimal policy. However, in many real-world applications, u is not monotone submodular. Then one strategy is to design a surrogate function f : 2V → R which is:\n• Globally aligning with u: For instance, f lies within a factor of u: f(A) ∈ [c1 · u(A), c2 · u(A)] for some constants c1, c2 and any set A ⊆ V; or within a small margin of u: f(A) ∈ [u(A) − ε, u(A) + ε] for a fixed ε > 0 and any set A ⊆ V;\n• Monotone submodular: Intuitively, a submodular surrogate function encourages selecting items that are beneficial in the long run, while ensuring that the decision maker does not miss out on any actions that are “surprisingly good” by following a myopic policy (i.e., future gains for any item are diminishing). Examples that fall into this category include machine teaching (Singla et al., 2014), active learning (Chen et al., 2015a), etc.\nWe argue that in real-world decision making scenarios—as validated later in Section 6—the decision maker is following a surrogate objective that aligns with the above characterization. In what follows, we will assume that such a surrogate function exists. Our goal is thus to learn from an expert policy that behaves greedily according to such surrogate functions." }, { "heading": "3.2 LEARNING TO MAKE DECISIONS", "text": "We focus on the regime where the expert policy is expensive to evaluate. Let g : 2V × V → R be the score function that quantifies the benefit of adding a new item to an existing subset of V . For the expert policy and submodular surrogate f discussed in Section 3.1, ∀A ⊆ V and e ∈ V:\ngexp(A, e) = f(A ∪ {e})− f(A).\nFor example, in the active learning case, gexp(A, e) could be the expert acquisition function that ranks the importance of labelling each unlabelled point, given the currently labelled subset. In the set cover case, gexp(A, e) could be the function that gives the score to each vertex and determines the next best vertex to add to the cover set. Given a loss function ℓ, our goal is to learn a score function ĝ that incurs the minimal expected loss when evaluated against the expert policy: ĝ = arg min_g E_{A,e}[ℓ(g(A, e), gexp(A, e))]. Subsequently, the utility achieved by the learned policy is u(Sπ̂,k), where for any given history A ⊆ V , π̂(A) ∈ arg max_{e∈V} ĝ(A, e)."
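To make the resulting decision rule concrete, the following minimal sketch rolls out the greedy policy induced by a score function; the names are ours, and score(A, e) stands for either the expensive expert score gexp or a cheap trained surrogate ĝ.

def greedy_rollout(ground_set, score, budget):
    # the policy iteratively picks the item e maximizing score(A, e)
    # given the history A of items selected so far
    history = []
    remaining = list(ground_set)
    for _ in range(budget):
        best = max(remaining, key=lambda e: score(history, e))
        history.append(best)
        remaining.remove(best)
    return history

Substituting a learned ĝ for gexp in this rollout is precisely what makes sequential selection tractable when the oracle is expensive to query.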
}, { "heading": "4 LEARNING WITH SUBMODULAR REGULARIZATION", "text": "To capture our intuition that a greedy expert policy tends to choose the most useful items, we introduce LEASURE, a novel regularizer that encourages the learned score function (and hence surrogate objective) to be submodular. We describe the algorithm below.\nGiven the groundset V , let f : 2V → R be any approximately submodular surrogate such that f(A) captures the “usefulness” of the set A. The goal of a trained policy is to learn a score function g : 2V × V → R that mimics gexp(A, x) = f(A ∪ {x}) − f(A), which is often prohibitively expensive to evaluate exactly. Then, given any such g, we can define a greedy policy π(A) = argmaxx∈Vg(A, x). With LEASURE, we aim to learn such function g that approximates g\nexp well while being inexpensive to evaluate at test time. Let Dreal = {(〈A, x〉, yexp = gexp(A, x))}m be the gathered tuple of expert scores for each set-element pair. If the set 2V × V was not too large, the LEASURE could be trained on the randomly collected tuples Dreal. However, 2V tends to be too large to explore, and generating ground truth labels could be very expensive. To leverage that, for a subset of set-element pairs in Dreal we generate a set of random supersets to form an unsupervised synthetic dataset of tuples Dsynth = {(〈A, x〉, 〈A′, x〉)|A A′, 〈A, x〉 ∈ Dreal}n where A′ denote a randomly selected superset of A. Define:\nLoss(g, gexp) = ∑\n〈A,x〉,yexp∈Dreal\n(yexp − g(A, x))2 + λ ∑\n(〈A,x〉,〈A′,x〉)∈Dsynth\nσ([g(A′, x)− g(A, x)]),\nwhere λ > 0 is the regularization parameter and σ is the sigmoid function. Intuitively, such regularization term will force the learned function g to be close to submodular, as it will lead to larger losses every time g(A′, x) > g(A, x). If we expect f to be monotonic, we also introduce a second regularizer ReLu(−g(A′, x)) which pushes the learned function to be positive. Combined, the loss\nfunction becomes (used in Line 11 in Algorithm 1): Loss(g, gexp) = ∑\n〈A,x〉,yexp∈Dreal\n(yexp − g(A, x))2 + λ ∑\n(〈A,x〉,〈A′,x〉)∈Dsynth\nσ([g(A′, x)− g(A, x)])\n+ γ ∑\n〈A′,x〉∈Dsynth\nReLu(−g(A′, x)),\nwhere γ is another regularization strength parameter. Such loss should push g to explore a set of approximately submodular, approximately monotonic functions. Thus, if f exhibits the submodular and monotonic behavior, g trained on this loss function should achieve a good local minima.\nWe next note that since 2V is too large to explore, instead of sampling random tuples for Dreal, we use modified DAgger. Then g can learn not only from the expert selections of 〈A, x〉, but it can also see the labels of the tuples the expert would not have chosen.\nAlgorithm 1 Learning to make decisions via Submodular Regularization (LEASURE) 1: Input: Ground set V , expert score function gexp, 2: regularization parameters λ, γ, DAgger constant β, the length of trajectories T . 3: initialize Dreal ← ∅ 4: initialize g to any function. 5: for i = 1 to N do 6: Let gi = gexp with probability β. 7: Sample a batch of T−step trajectories using πi(A) = xi = argmaxx∈Vgi(A, x). 8: Get dataset Di = {〈Ai, xi〉, gexp(Ai, xi)} of labeled tuples on actions taken by πi. 9: Dreal ← Dreal ⋃ Di. 10: Generate synthetic dataset Dsynth from Dreal. 11: Train gi+1 on Dreal and Dsynth using the loss function above. 12: Output: gN+1\nAlgorithm 1 above describes our approach. A trajectory in Line 7 is a sequence of iteratively chosen tuples, (〈∅, x1〉, 〈{x1}, x2〉, 〈{x1, x2}, x3〉..., 〈{x1, ..., xT−1}, xT 〉), collected using a mixed policy πi. 
Returning to Algorithm 1: in Line 8, expert feedback on the selected actions is collected to form Di. Note that in some settings, even collecting exact expert labels gexp at train time could be too expensive. In that case, gexp can be replaced with a less expensive, noisy approximate expert g̃exp ≈ gexp. In fact, all three of our experiments use noisy experts in one form or another." }, { "heading": "5 ANALYSIS", "text": "Estimating the expert's policy. We first consider the bound on the loss of the learned policy measured against the expert's policy. Since LEASURE can be viewed as a specialization of DAGGER (Ross et al., 2011) for learning a submodular function, it naturally inherits the performance guarantees from DAGGER, which show that the learned policy efficiently converges to the expert's policy. Concretely, the following result, which is adapted from the original DAgger analysis, shows that the learned policy is consistent with the expert policy and thus is a no-regret algorithm: Theorem 1 (Theorem 3.3, Ross et al. (2011)). Denote the loss of π̂ at history state H as l(H, π̂) := ℓ(g(H, π̂(H)), gexp(H, πexp(H))). Let dπ̂ be the average distribution of states if we follow π̂ for a finite number of steps. Furthermore, let Di be a set of m random trajectories sampled with πi at round i ∈ {1, . . . , N}, and ε̂N = min_π (1/N) ∑_{i=1}^{N} E_{Hi∼Di}[l(Hi, π̂)] be the training loss of the best policy on the sampled trajectories. If N is O(T² log(1/δ)) and m is O(1), then with probability at least 1 − δ there exists a π̂ among the N policies with E_{H∼dπ̂}[l(H, π̂)] ≤ ε̂N + O(1/T).\nApproximating the optimal policy. Note that the previous notion of regret corresponds to the average difference in score function between the learned policy and the expert policy. While this result shows that LEASURE is consistent with the expert, it does not directly address how well the learned policy performs in terms of the gained utility. We then provide a bound on the expected value of the learned policy, measured against the value of the optimal policy. For specific decision making tasks where the oracle follows an approximately submodular objective, our next result, which is proved in the appendix, shows that the learned policy behaves near-optimally.\nTheorem 2. Assume that the utility function u is monotone submodular. Furthermore, assume the expert policy πexp follows a surrogate objective f such that for all A ⊆ V , |f(A) − u(A)| < εE where εE > 0. Let ε̂N = min_π (1/N) ∑_{i=1}^{N} l(Hi, π̂) be the training loss of the best policy on the sampled trajectories. If N is O(T² log(1/δ)) then with probability at least 1 − δ, the expected utility achieved by running π̂ for k steps is\nE[u(Sπ̂,k)] ≥ (1 − 1/e)E[u(Sπ∗,k)] − k(εE + ∆max ε̂N) − O(1).\nA closely related work in approximate policy learning is by Ross et al. (2013), which also builds upon DAGGER to tackle policy learning for submodular optimization, via directly imitating the greedy oracle decision rather than learning a surrogate utility. One key difference is that their approach can only yield guarantees against an artificial benchmark (a set or list of simpler policies that each independently selects an item to add to the action set), whereas our theoretical guarantees are with respect to the optimal policy in our class." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we demonstrate the performance of LEASURE on three diverse sequential decision making tasks, namely set cover (SC), learning active learning (LAL), and protein engineering (PE).\nBaselines. 
We compare our approach to the Deep Submodular Function (DSF (Dolhansky and Bilmes, 2016)) and Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds (BADGE (Ash et al., 2020)). The DSF approach learns a submodular surrogate function f : 2V → R that produces a score for each set A ⊂ V . The architecture of the DSF forces the function f to be exactly submodular, as opposed to LEASURE, which is only encouraged to be submodular through a regularizer. However, the architecture and the training procedure of the DSF are quite restrictive, which does not allow the DSF to explore a large domain during training and restricts how expressive it can be compared to a standard neural network. Moreover, DSF is restricted to small V , and its number of parameters increases with the cardinality of V . That is not true for LEASURE, whose number of parameters grows with the dimensionality of the elements in V . This makes DSF useful for small datasets, but prohibitively expensive to use on larger problems. In fact, we could not compare LEASURE to DSF on the LAL or PE tasks, as it was not feasible to train DSF on these sets. For the LAL experiment, we also compare with a recent deep active learning approach (Ash et al., 2020). Finally, we want to add that LEASURE can be seamlessly integrated with any standard machine learning library, and since the architecture of the learned policy in LEASURE is not restrictive, any available optimization trick can be used to achieve better performance. In fact, existing ‘imitation learning’-based approaches for LAL, such as Liu et al. (2018), can be viewed as special cases of LEASURE (i.e. without regularization). On the other hand, DSF cannot be as easily implemented, and the standard libraries are not optimized for the DSF architecture." }, { "heading": "6.1 SET COVER", "text": "Before testing our approach on a real-world scenario, we showcase its performance on a simple submodular and monotone maximization problem. Set cover is a classical example: given a set of elements U = {1, 2, ..., n} (called the universe) and a collection of m sets S = {s1, .., sm} whose union equals the universe, the set cover problem is to identify the smallest sub-collection of S whose union equals the universe. Formulated as a policy learning problem, the goal is to learn a score function g : 2S × S → R such that for any Sl ⊂ S, x ∈ S,\ng(Sl, x) ≈ gexp(Sl, x) = |(∪_{s∈Sl} s) ∪ x| − |∪_{s∈Sl} s|.\nGiven g, we can then define a policy π : 2S → S as π(Sl) = arg max_{x∈S} g(Sl, x). During training, tuples {(Sl, x), gexp} are collected, and then g is trained on this set. We trained four different policies: a function g parametrized by a neural network with MSE(g, gexp) as the loss, a function g with the same MSE loss and just a monotonicity regularizer, a function g trained using both monotonicity and submodular regularizers (LEASURE), as well as the Deep Submodular Function baseline (Dolhansky and Bilmes, 2016). We use a modified Deepset architecture (Zaheer et al., 2017) for modeling the permutation-invariant score networks g in both the SC and the LAL tasks, and provide the details in Appendix B. Our dataset is a subset of the Mushroom dataset (Lim, 2015), consisting of 1000 sets. Each set contains 23 mushroom species, and there are a total of 119 species. The goal is to train a policy that builds the largest superset (union) from these sets.
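For reference, the expert score used in this task is just the marginal coverage gain defined above; a minimal sketch (names ours), where each set is represented as a Python set of species labels:

def set_cover_gexp(selected_sets, candidate_set):
    # g_exp(S_l, x) = |union(S_l) ∪ x| - |union(S_l)|
    covered = set().union(*selected_sets)
    return len(covered | candidate_set) - len(covered)

For set cover this oracle is cheap, but it stands in for the expensive oracles of the later tasks; the learned g is trained to reproduce it (or its noisy variant) from the collected tuples.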
We evaluate in two settings: Exact Set Cover, where we collect tuples {(Sl, x), gexp} for training, and Noisy Set Cover, where we have access only to {(Sl, x), g̃exp}, where g̃exp is a noisy score. The networks are trained on rollouts of length 20 (i.e. on sets {Sl : |Sl| ≤ 20}), and tested on rollouts of length up to 100. Figure 1 shows the set cover value as a function of the size of the superset. LEASURE significantly outperforms the other learned policies, although the Deep Submodular Function generalizes better to larger rollout lengths – LEASURE gets most of its set cover gains in the first 10-20 selected points, while the Deep Submodular Function continues to noticeably improve past the training rollout length. Note that in Figures 1a & 1b, the competing baselines all exhibit a “diminishing returns” effect, resulting in a concave-shaped value function. With a submodular-norm regularizer, LEASURE quickly identified the sets with large marginal gains. This observation aligns with our analysis in Section 5." }, { "heading": "6.2 LEARNING ACTIVE LEARNING ON FASHION MNIST", "text": "In this section we demonstrate the performance of LEASURE on a real-world task that is not submodular or monotone, but usually exhibits submodular and monotone behaviour.\nIn active learning, there is a partially labelled dataset S = {Sl, Su}, where Sl is labelled and Su is unlabelled, and a policy π : 2S → S. The labelled subset Sl can be used for inference from data (learning an image classifier, predicting unlabelled protein fitness, etc.). The goal of the policy is to select the smallest subset Sπ ⊂ Su to label such that the accuracy of supervised learning from Sπ ∪ Sl is maximized. Since selecting a subset is a prohibitively expensive combinatorial task, the policy is usually sequential. In particular, it selects points to add to Sπ one by one (or in batches) using some score function g(Sπ ∪ Sl, ·) : Su → R to score each point x ∈ Su, and then the policy labels the point with the largest score. If g were the first-order difference of a submodular function f, i.e. g(A, e) = f(A ∪ {e}) − f(A), then the policy would be near-optimal. Moreover, as discussed above, intuitively we expect g to have this property in most cases, since adding an extra point to a larger set usually has less effect than adding the same point to a smaller subset of the set.\nThe above motivates the use of LEASURE in active learning (Figure 2). In this experiment, the set S is the Fashion-MNIST dataset consisting of greyscale images from one of 10 clothing classes (Xiao et al., 2017). The goal was to learn a policy that greedily selects “the best” point x∗ ∈ Su to label, such that a neural network classifier trained on the labelled set Sl ∪ {x∗} produces the most accurate classification of the unlabelled images. In particular, we trained the above function g to predict the accuracy gain gexp from labelling a point. The accuracy gain gexp was measured by training the neural network classifier on both Sl and Sl ∪ {x} and then recording the difference in validation set classification accuracy. Since obtaining the exact gexp for each datapoint is very expensive, we instead collected noisy labels g̃exp ≈ gexp, obtained by training the classifier for only 10 epochs. The tuples {(Sl, x), g̃exp} were collected using DAgger with rollouts of length 30 (starting from a random batch of 20 images). For training, we used an initially unlabelled dataset with 60000 images, 2000 of which were set aside to use for evaluating validation accuracy. 
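The noisy expert label for a candidate point can be sketched as follows; this is illustrative only, with train_classifier and accuracy as hypothetical helpers standing in for the 10-epoch training loop and validation evaluation described above.

def noisy_gexp(labeled, candidate, val_set, epochs=10):
    # approximate the accuracy gain from labelling `candidate`:
    # train briefly with and without it, compare validation accuracy
    base_model = train_classifier(labeled, epochs=epochs)
    aug_model = train_classifier(labeled + [candidate], epochs=epochs)
    return accuracy(aug_model, val_set) - accuracy(base_model, val_set)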
We trained two neural networks to approximate g: an unregularized one, and one with a monotonicity and a submodularity regularizer (i.e. LEASURE). See Appendix B for details on the architecture and training procedure.\nThe trained policies were tested on a set of 8000 images, with an additional 2000 set aside for validation. At test time, we again started with a random batch of size 20 and then used each policy to sequentially select an additional 200 images to label (Figure 2). The reported test error rate was computed using the real gexp, i.e. with a classifier trained until the training loss reached a certain threshold. The experiment was benchmarked against the “random” policy that randomly picked the next point, the “uncertainty” policy that selected the next point by maximizing uncertainty, the “no regularizer” policy that used DAgger with MSE loss, and “BADGE” from Ash et al. (2020). See Appendix B for details. Even though LEASURE was trained on much shorter rollouts using very noisy labels, it still outperformed all other baselines. This confirms our intuition that the submodular regularizer allowed the learned score function g to find a local minimum that generalizes well out of sample." }, { "heading": "6.3 PROTEIN ENGINEERING", "text": "By employing a large protein engineering database containing mutation-function data (Wang et al., 2019), we demonstrate that LEASURE enables the learning of an optimal policy for imitating expert design of protein sequences (see the Appendix for a detailed discussion of datasets). As in Liu et al. (2018) we construct a fully data-driven expert which evaluates via 1-step roll-out the effect of labeling each candidate datum (in our case a protein mutant) with the objective of minimizing loss on a downstream regression task (predicting protein fitness).\nWhen training the policy to emulate the algorithmic expert via imitation learning, we represent each state as two merged representations (sketched below): (1) a fixed-dimensional representation of the protein being considered (as the last dense layer of the network described in Appendix C), and (2) a similar fixed-dimensional representation of the data already included in the training set (as a sum of their embeddings), including their average label value. At each step a random pool of data is drawn from the state space and the expert policy greedily selects a protein to label which minimizes the expected regression loss on the downstream regression task (prediction of protein fitness). Once the complete pool of data has been evaluated, the states are stored along with their associated preference score, taken as their ability to reduce the loss in the 1-step roll-out. Using these scores, the expert selects a protein sequence to add into the training set, and we retrain the model and use the updated model to predict a protein with the maximum fitness. This paired state-action data is used to train the policy model at the end of each episode, as described in Liu et al. (2018). As we observe in Figure 3a, this method trains a policy which performs nearly identically to this 1-step oracle expert.\nThe use of submodular regularization enables the learning of a policy which generalizes to a fundamentally different protein engineering task. In our experiments, LEASURE is trained to emulate a greedy oracle for maximizing the stability of protein G, a small bacterial protein used across a range of biotechnology applications (Sjöbring et al., 1991).
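A minimal sketch of this state construction (names ours; embed is a placeholder for the fixed-dimensional protein representation described in Appendix C, and the exact merging used in our implementation may differ in detail):

import numpy as np

def pe_state(candidate, labeled_proteins, labels, embed):
    # (1) fixed-dimensional embedding of the candidate protein
    cand = embed(candidate)
    # (2) summary of the current training set: sum of embeddings
    #     plus the average label value
    pool = np.sum([embed(p) for p in labeled_proteins], axis=0)
    avg_label = np.mean(labels)
    return np.concatenate([cand, pool, [avg_label]])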
We evaluate our results by applying the trained policy to select data for the task of predicting antibody binding to a small molecule. As is the case with all protein fitness landscapes, the evaluation dataset is highly imbalanced, with the vast majority of mutants conferring no improvement at all. Because data is expensive to label in biological settings (proteins must be synthesized, purified and tested), we are often limited in how many labels can feasibly be generated, and the discriminative power among the best results is often more important than among the worst. To construct a metric with real-world applicability we assess each model by systematically examining the median Kd of the next ten data points selected at each budget, from 10 to 110 total labels. This metric is used in recognition of the extreme ruggedness of protein engineering landscapes, wherein the vast majority of labels are of null fitness, and the ability to select rare useful labels for the next experimental cycle is of key importance.\nWe observe that LEASURE outperforms all evaluated baselines, and that the inclusion of submodular regularization is mandatory to its success (Figure 3a). A greedy active learner which labels the antibody mutation with the best predicted Kd (the smallest) performs approximately equivalently to selecting random labels. Use of dropout as an approximation of model uncertainty, as in Gal and Ghahramani (2016), improves upon these baselines, although significant improvement is not achieved until approximately 35 labels are added. In comparison, the results from LEASURE diverge from all others nearly immediately, and the best model, which uses a λ of 0.1, achieves a notable improvement in Kd: 5.81 µM vs. 7.27 µM achieved by entropy sampling. In support of the method's success, we note that the learned policy performs approximately as well as the greedy oracle which it emulates (Appendix Figure 7a). We observe that the results are robust within a range of possible λ values (Figure 3b and Appendix Figure 7b), and that without the use of submodular regularization training fails to produce a policy better than the selection of random labels. This is an important finding, as the method proposed by Liu et al. (2018), which corresponds to LEASURE without submodular regularization, has been shown to be a state-of-the-art method for imitation learning.\nBased on these empirical results, LEASURE demonstrates significant potential as a computational tool for real-world automated experimental design tasks: in particular, in the protein engineering task, LEASURE achieves the SOTA on the benchmark datasets considered in this work. While LEASURE does involve repeated retraining of the protein engineering network, we observe that it returns strong results even with a single step of training. Additionally, the networks that are employed are very simple (Appendix C). This allows for reasonable training time (36 hours) and nearly instantaneous inference. Given the considerable time and cost of protein engineering, these computational budgets are quite modest. Protein engineering is a time-consuming (months to years) and expensive undertaking (tens of thousands to millions of dollars). These projects usually strive to achieve the best possible results given a fixed budget. We have demonstrated in our work the ability to deliver significant improvements in protein potency for modest fixed budgets.
Although the cost savings of engineering and testing an individual protein (or label) vary significantly based on the system, ranging from tens to hundreds of dollars, we observe that to achieve a Kd of 8e-6 M, LEASURE delivers an approximate cost savings of 65%, or 40 fewer labels than the next best method. The sequential synthesis and evaluation of each of these labels would likely span several months and additionally incur several thousands of dollars of materials costs." }, { "heading": "7 CONCLUSION", "text": "In this paper, we introduce LEASURE, a data-driven decision making framework based on a novel submodular-regularized loss function. The algorithm was inspired by the recent development of submodular-surrogate-based near-optimal algorithms for sequential decision making. We have demonstrated LEASURE on a diverse set of decision making tasks. Our results suggest that LEASURE can be easily integrated with modern deep imitation learning pipelines, and that it is efficient to run, while still reaching the best performance among the competing baselines. In addition to demonstrating the strong empirical performance on several use cases, we believe our work also provides useful insights into the design and analysis of novel information acquisition heuristics where traditional ad-hoc approaches are not feasible.\nAcknowledgements. This research was supported in part by funding from NSF #1645832, NIH #T32GM112592, The Rosen Bioengineering Center, Raytheon, Beyond Limits, JPL, and UChicago CDAC via a JTFI AI + Science Grant. This work was additionally supported by NVIDIA corporation through the donation of the GPU hardware used in experiments." }, { "heading": "A PROOF FOR SECTION 5", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 2", "text": "Proof. The high-level idea is to first connect the total expected utility of the learned policy π̂ with the expected utility of the expert policy πexp, following the analysis in DAgger (Ross et al., 2011). Then, we will use the fact that πexp is greedy with respect to f, an approximation to the submodular utility function u, to bound the one-step gain of πexp against the k-step gain of running the optimal policy, and subsequently bound the total utility of the expert policy against the optimal policy. We eventually obtain the result of Theorem 2, detailed as follows.\nMore concretely, following Theorem 3.4 in DAgger, we obtain that\nE[u(Sπ̂,k)] ≥ E[u(Sπexp,k)] − ∆max k ε̂N − O(1).\nHere ∆max is the largest one-step deviation from πexp that π̂ can suffer. It is equivalent to the constant u in the DAgger paper (not to be confused with our utility function u). Since f is εE-close to the monotone submodular function u, we know that ∆max ≤ max_{A⊂V,|A|=k} f(A) ≤ max_{A⊂V,|A|=k} u(A) + εE, which is a constant once u is given. Next, since πexp is greedily optimizing an εE-approximation to the monotone submodular function u, we know that E[u(Sπexp,k)] ≥ (1 − 1/e)E[u(Sπ∗,k)] − k εE, following the proof of Theorem 5 in (Chen et al., 2017b).\nCombining both steps, we have that\nE[u(Sπ̂,k)] ≥ (1 − 1/e)E[u(Sπ∗,k)] − k(εE + ∆max ε̂N) − O(1),\nwhich completes the proof." }, { "heading": "B SUPPLEMENTAL DETAILS FOR THE SET COVER AND MNIST ACTIVE LEARNING EXPERIMENTS", "text": "We provide additional results for the set cover experiments, under the same experimental setup as Figures 1a and 1b. The subplots 4a and 4b show the mean square error of the learned policy g as a function of the size of Sl. We provide a zoomed-in version of 4b in Figure 4c. 
Figure 4c makes clear that training the neural network with only the monotonicity regularizer does not help it learn out of sample: the error rapidly increases as soon as the test rollout length exceeds the training rollout length.

In the Noisy Set Cover experiment (Figure 4a), the label of each element added to the superset was perturbed with N(0, 1) noise. As a result, the variance of the total noise is linear in the number of sets, so it is expected that the MSE grows with the number of sets: the policies cannot learn to predict random noise. While the stochastic MSE of LEASURE and of the no-regularizer policy are similar, LEASURE outperforms it in the number of elements added, which is what matters in practice (Figure 1). These two figures confirm our intuition that when the problem is not exactly submodular, LEASURE will still generalize better than no regularizer by learning to ignore small deviations from submodularity. Finally, it is also expected that DSF has a lower MSE than LEASURE when the label noise is very large, since Deep Submodular Functions are required to be submodular. When the stochasticity in the MSE becomes overwhelmingly large, that restrictive requirement becomes an advantage. However, when the MSE variance is not too large, the limited expressiveness and the difficulty of optimizing DSF make it lose its advantage compared to LEASURE.

For completeness, we also provide our architecture and parameter choices for both the set cover and the Learning Active Learning (LAL) on MNIST experiments. For set cover, the problem is too simple to require DAGGER (Ross et al., 2011); instead, the tuples are generated randomly. For active learning on MNIST, the tuples are indeed generated using Algorithm 1. For MNIST, we first preprocessed our dataset with PCA, retaining the number of components necessary to explain 80% of the variance on the training set (24 vectors). This was necessary to allow the comparison with DSF. For set cover, each element was a set v containing 23 elements v1, v2, ..., v23, where vi was an integer corresponding to the label of the species. As a neural network input, v was simply represented as the vector [v1, ..., v23].

Both set cover and MNIST used a modified DeepSets architecture (Zaheer et al., 2017) for the score networks, as follows: given a set A = {v0, ..., vk} ⊂ V and a datapoint v ∈ V, the score network g first preprocesses all inputs v0, ..., vk, v to obtain learned embeddings v̄0, ..., v̄k, v̄ (see Figure 5). Then, the elements in A are combined using the DeepSets architecture to produce a learned set embedding Ā. Finally, Ā and v̄ are concatenated, and a learned linear layer and a Leaky ReLU nonlinearity are applied to produce g(A, v) (see Figure 6). All dense layers have 64 neurons and a bias term. Using this DeepSets-like framework, we achieve permutation invariance of the elements in set A while also keeping the network expressive enough to learn a wide range of functions.

(Figure 5: learning the element representation. Figure 6: combining element representations using DeepSets.)

For both tasks, the score networks are trained using ADAM with a learning rate of 1e-3. The β parameter from Line 2 in LEASURE was picked, rather arbitrarily, to be 4/5; from experiments, the exact value of the parameter did not matter as long as it starts at 1/2 or above and decays towards almost 0 after N iterations. The λ and γ parameters were picked using a hyperparameter sweep in log space. A minimal sketch of the score network described above is given below.
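In the following PyTorch sketch, the 64-unit dense layers and the final linear-plus-Leaky-ReLU head follow the text, while the sum pooling and the module names are hedged assumptions on our part.

import torch
import torch.nn as nn

class ScoreNetwork(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # shared per-element embedding (Figure 5); sum pooling is assumed below
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.LeakyReLU())
        # scoring head applied to [set embedding, candidate embedding] (Figure 6)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 1), nn.LeakyReLU())

    def forward(self, A, v):
        # A: (k, in_dim) already-selected elements; v: (in_dim,) candidate element
        A_bar = self.phi(A).sum(dim=0)  # permutation-invariant DeepSets pooling
        v_bar = self.phi(v)
        return self.head(torch.cat([A_bar, v_bar], dim=-1))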
Consistent with our intuition, we found that the strength of these regularizers should reflect one's certainty that the task is submodular and/or monotone. For set cover, λ = 0.1 and γ = 0.5, while for active learning λ = 0.001 and γ = 0.001. Notice that the values are not comparable between different experiments: for MNIST Learning Active Learning (LAL), g_exp(A, v) ∈ [0, 1) outputs the accuracy gain of adding v to A and training a supervised model on it; for set cover, g_exp(A, v) ∈ {0, 1} outputs the number of new elements added to the set by adding v to A. For LAL, the values of g_exp are usually much smaller than 1, particularly for larger sets. Thus, the values of the two regularizers had to be smaller so that the model learns more than just the regularizer.

Finally, we discuss our baselines in the Fashion MNIST experiments. In Figure 2, we have four baselines: random, uncertainty, BADGE (Ash et al., 2020), and no regularizer. The no-regularizer baseline was trained identically to LEASURE, except for the absence of the submodularity and monotonicity regularizers. The no-regularizer baseline performed well on sets with up to 30 additional points, corresponding exactly to the length of the training rollouts; however, it failed to generalize beyond that. The submodular regularizer, on the other hand, allowed the learned score function to find a local minimum that generalized well out of sample. Finally, BADGE did not seem to perform well when the number of datapoints in the set was large, likely because the gradient signal from adding any one additional datapoint was too weak, and thus the selection of the next best datapoint was too noisy.

Some more details regarding BADGE (Ash et al., 2020): the authors do not learn a policy; instead, they use gradients of the classifier (the gradient embedding) to select a useful, diverse batch. Although BADGE was originally designed for a batch setting, the authors' main idea is still applicable to our case: they argued that the next datapoint(s) can be selected by looking at which fictitious labels would produce the largest gradients in the classifier network. Therefore, we replaced the kmeans++ algorithm the authors suggested with simply selecting the datapoint that corresponds to the largest gradient norm (a minimal sketch is given below). This algorithm has the advantage that it does not require a trained policy network. However, it provides no guarantees about the submodularity of the resulting policy, and, in our experiments, its performance degrades with the size of the set, for the reason noted above. Since BADGE requires a neural network classifier/regressor, we could not use it as a baseline for Set Cover (the Set Cover regression function simply adds all elements in the superset).

The no-regularizer baseline is similar to that of Konyushkova et al. (2017). However, the problem considered in Konyushkova et al. (2017) is not compatible with most of the tasks we considered here (for MNIST it is, if we use random forest classifiers, but for the others it is not). Furthermore, Konyushkova et al. (2017) treated the problem under a classical supervised learning setting; this is often not desirable, given that we are learning a policy from non-i.i.d. data samples.
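In the following PyTorch sketch of the gradient-norm selection rule, the classifier interface is a hedged assumption on our part, and, unlike the original BADGE, the norm is taken over gradients of all model parameters rather than the last-layer gradient embedding.

import torch

def select_by_gradient_norm(model, loss_fn, pool_x):
    # Pick the unlabeled point whose most likely (fictitious) label would
    # produce the largest gradient norm in the classifier network.
    norms = []
    for x in pool_x:
        model.zero_grad()
        logits = model(x.unsqueeze(0))
        y_hat = logits.argmax(dim=1)  # fictitious label
        loss_fn(logits, y_hat).backward()
        g2 = sum((p.grad ** 2).sum() for p in model.parameters()
                 if p.grad is not None)
        norms.append(float(g2.sqrt()))
    return int(torch.tensor(norms).argmax())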
" }, { "heading": "C SUPPLEMENTAL DETAILS FOR THE PROTEIN ENGINEERING EXPERIMENTS", "text": "Dataset Our datasets were identified in Protabank (Wang et al., 2019) for training of active learning policies and benchmarking of performance. In selecting datasets upon which to train our active learning models, several factors were considered. As the state space of possible protein variants for a typical engineering application is very large, size was our foremost criterion. Additionally, it is advantageous to use datasets which characterize mutations to all amino acids (as opposed to alanine scans), and those which include epistatic interactions. We also sought datasets with a high-quality, quantitative readout, such as calorimetry, fluorescence, or SPR data.

Protein Engineering Methods Embeddings of protein sequences were created using the TAPE repository (Rao et al., 2019) according to the UniRep system first proposed in Alley et al. (2019). UniRep produces protein embeddings as a matrix of shape (sequence length, 1900), although we average together only the embeddings of the positions being engineered to produce a fixed-size embedding of shape (1900,). We have implemented the active learning imitation learning algorithm proposed in Liu et al. (2018) to work with the protein embedding representations described above. Pseudocode for this method is presented in Algorithms 1 and 2 of the original work. As in Liu et al. (2018), our policy network consists of a single dense unit which acts sequentially on the pool of samples being considered to produce a preference score. Our downstream protein engineering network (which was used to compute the preference score of the expert policy) acts on the protein embeddings prepared using TAPE. The network consists of an attention layer, followed by a 1-dimensional convolution layer (128 filters, kernel size 3), before being flattened and passed through two fully connected layers of 128 units each. When predicting protein fitness, dropout is applied with a probability of 0.5 and an additional dense layer is applied with one unit and linear activation. Both networks are trained using ADAM with a learning rate of 1e-3. The implementation of this part of the project is nearly identical to Liu et al. (2018), only changing the data representation, the protein fitness network structure, and the values of K (30), B (100) and T (20), as listed in the appendix of our work. Beta is fixed at 0.5, although the method was shown to be robust to a range of values. At training time, 100 labels are randomly selected for evaluating the effect of the greedy oracle, and 10 data points are randomly selected to form the initial dataset for learning. The superset is appended to at each step of training the policy to maintain a size of 2× the labeled dataset. The training of a policy using these settings takes 36 hours on a modern multiprocessor computer equipped with an NVIDIA Titan V GPU. A minimal sketch of the protein fitness network described above follows.
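In the following PyTorch sketch, the layer sizes (128 filters of kernel size 3, two 128-unit dense layers, dropout of 0.5, and a single linear output unit) follow the text, while the attention formulation and tensor shapes are hedged assumptions on our part.

import torch
import torch.nn as nn

class FitnessNet(nn.Module):
    # Sketch only: the exact attention layer is not specified in the text;
    # a simple learned softmax gating over embedding dimensions is assumed.
    def __init__(self, embed_dim=1900):
        super().__init__()
        self.attn = nn.Linear(embed_dim, embed_dim)
        self.conv = nn.Conv1d(1, 128, kernel_size=3)
        self.fc = nn.Sequential(
            nn.Linear(128 * (embed_dim - 2), 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, 1),  # one unit, linear activation: predicted fitness
        )

    def forward(self, x):  # x: (batch, 1900) averaged UniRep embedding
        a = torch.softmax(self.attn(x), dim=-1)
        h = (a * x).unsqueeze(1)                 # (batch, 1, 1900)
        h = torch.relu(self.conv(h)).flatten(1)  # (batch, 128 * 1898)
        return self.fc(h)
" } ]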
2021
null
SP:61d83ed48f892bcb7d0488c9b918132b2623eea1
[ "The authors propose a learning methodology designed to offset detriments to algorithm performance that arise when instances are not i.i.d (independent and identically distributed), focusing on cases in continual learning (CL) given physiological signals. They designed a replay-based learning method that handles an instance buffer using Importance-guided Storage and Uncertainty-based Acquisition strategies. They apply their method on Class, Time and Domain types of CL, and they introduce t-Step Backward Weight Transfer and Lambda Backward Weight Transfer methods by which to evaluate their method. They conclude with two ablation studies to explore an explanation for their method’s performance and attempt to validate their hypotheses based on these studies." ]
Deep learning algorithms are known to experience destructive interference when instances violate the assumption of being independent and identically distributed (i.i.d.). This violation, however, is ubiquitous in clinical settings where data are streamed temporally and from a multitude of physiological sensors. To overcome this obstacle, we propose CLOPS, a replay-based continual learning strategy. In three continual learning scenarios based on three publicly-available datasets, we show that CLOPS can outperform the state-of-the-art methods, GEM and MIR. Moreover, we propose end-to-end trainable parameters, which we term task-instance parameters, that can be used to quantify task difficulty and similarity. This quantification yields insights into both network interpretability and clinical applications, where task difficulty is poorly quantified.
[]
[ { "authors": [ "Rahaf Aljundi", "Eugene Belilovsky", "Tinne Tuytelaars", "Laurent Charlin", "Massimo Caccia", "Min Lin", "Lucas Page-Caccia" ], "title": "Online continual learning with maximal interfered retrieval", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Richard A Caruana" ], "title": "Multitask connectionist learning", "venue": "Proceedings of the 1993 Connectionist Models Summer School. Citeseer,", "year": 1993 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "arXiv preprint arXiv:1812.00420,", "year": 2018 }, { "authors": [ "Yukun Chen", "Robert J Carroll", "Eugenia R McPeek Hinz", "Anushi Shah", "Anne E Eyler", "Joshua C Denny", "Hua Xu" ], "title": "Applying active learning to high-throughput phenotyping algorithms for electronic health records data", "venue": "Journal of the American Medical Informatics Association,", "year": 2013 }, { "authors": [ "Sebastian Farquhar", "Yarin Gal" ], "title": "Towards robust evaluations of continual learning", "venue": "arXiv preprint arXiv:1805.09733,", "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep Bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Wenbo Gong", "Sebastian Tschiatschek", "Richard Turner", "Sebastian Nowozin", "José Miguel Hernández-Lobato" ], "title": "Icebreaker: element-wise active information acquisition with bayesian deep latent gaussian model", "venue": null, "year": 1908 }, { "authors": [ "Awni Y Hannun", "Pranav Rajpurkar", "Masoumeh Haghpanahi", "Geoffrey H Tison", "Codie Bourn", "Mintu P Turakhia", "Andrew Y Ng" ], "title": "Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Neil Houlsby", "Ferenc Huszár", "Zoubin Ghahramani", "Máté Lengyel" ], "title": "Bayesian active learning for classification and preference learning", "venue": "arXiv preprint arXiv:1112.5745,", "year": 2011 }, { "authors": [ "David Isele", "Akansel Cosgun" ], "title": "Selective experience replay for lifelong learning", "venue": "In Thirty-second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Lu Shen", "H Lehman Li-wei", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G Mark" ], "title": "Mimic-III, a freely accessible critical care database", "venue": "Scientific Data,", "year": 2016 }, { "authors": [ "Dani Kiyasseh", "Tingting Zhu", "David A Clifton" ], "title": "Alps: Active 
learning via perturbations", "venue": "arXiv preprint arXiv:2004.09557,", "year": 2020 }, { "authors": [ "Matthias Lenga", "Heinrich Schulz", "Axel Saalbach" ], "title": "Continual learning for domain adaptation in chest x-ray classification", "venue": "arXiv preprint arXiv:2001.05922,", "year": 2020 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "German I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "E.A. Perez Alday", "A. Gu", "A. Shah", "C. Liu", "A. Sharma", "S. Seyedi", "A. Bahrami Rad", "M. Reyna", "G. Clifford" ], "title": "Classification of 12-lead ECGs: the PhysioNet - computing in cardiology", "venue": "PhysioNet,", "year": 2020 }, { "authors": [ "Joaquin Quionero-Candela", "Masashi Sugiyama", "Anton Schwaighofer", "Neil D Lawrence" ], "title": "Dataset shift in machine learning", "venue": null, "year": 2009 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy Lillicrap", "Gregory Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shreyas Saxena", "Oncel Tuzel", "Dennis DeCoste" ], "title": "Data parameters: A new family of parameters for learning a differentiable curriculum", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison, Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Daniel L Silver", "Robert E Mercer" ], "title": "The parallel transfer of task knowledge using dynamic learning rates based on a measure of relatedness", "venue": "In Learning to learn,", "year": 1996 }, { "authors": [ "Asim Smailagic", "Pedro Costa", "Hae Young Noh", "Devesh Walawalkar", "Kartik Khandelwal", "Adrian Galdran", "Mostafa Mirshekari", "Jonathon Fagert", "Susu Xu", "Pei Zhang" ], "title": "Medal: Accurate and robust deep active learning for medical image analysis", "venue": "In IEEE International Conference on Machine Learning and Applications,", "year": 2018 }, { "authors": [ "Asim Smailagic", "Pedro Costa", "Alex Gaudio", "Kartik Khandelwal", "Mostafa Mirshekari", "Jonathon Fagert", "Devesh Walawalkar", "Susu Xu", "Adrian Galdran", "Pei Zhang" ], "title": "O-medal: Online active deep learning for medical image analysis", "venue": "arXiv preprint arXiv:1908.10508,", "year": 2019 }, { "authors": [ "David Spiegelhalter" ], "title": "Should we trust algorithms? Harvard Data Science Review, 2(1), 1 2020", "venue": "doi: 10. 
1162/99608f92.cb91a35a. URL https://hdsr.mitpress.mit.edu/pub/56lnenzj", "year": 2020 }, { "authors": [ "Sebastian Thrun", "Joseph O’Sullivan" ], "title": "Discovering structure in multiple learning tasks: The tc algorithm", "venue": "In ICML,", "year": 1996 }, { "authors": [ "Gido M van de Ven", "Andreas S Tolias" ], "title": "Three scenarios for continual learning", "venue": "arXiv preprint arXiv:1904.07734,", "year": 2019 }, { "authors": [ "Guijin Wang", "Chenshuang Zhang", "Yongpan Liu", "Huazhong Yang", "Dapeng Fu", "Haiqing Wang", "Ping Zhang" ], "title": "A global and updatable ecg beat classification system based on recurrent neural networks and active learning", "venue": "Information Sciences,", "year": 2019 }, { "authors": [ "Jianwei Zheng", "Jianming Zhang", "Sidy Danioko", "Hai Yao", "Hangyuan Guo", "Cyril Rakovski" ], "title": "A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients", "venue": "Scientific Data,", "year": 2020 }, { "authors": [ "Zongwei Zhou", "Jae Shin", "Lei Zhang", "Suryakanth Gurudu", "Michael Gotway", "Jianming Liang" ], "title": "Finetuning convolutional neural networks for biomedical image analysis: actively and incrementally", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Xiaojin Jerry Zhu" ], "title": "Semi-supervised learning literature survey", "venue": "Technical report, University of Wisconsin-Madison, Department of Computer Sciences,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many deep learning algorithms operate under the assumption that instances are independent and identically-distributed (i.i.d.). The violation of this assumption can be detrimental to the training behaviour and performance of an algorithm. The assumption of independence can be violated, for example, when data are streamed temporally from a sensor. Introducing multiple sensors in a changing environment can introduce covariate shift, arguably the ‘Achilles heel’ of machine learning model deployment (Quionero-Candela et al., 2009).\nA plethora of realistic scenarios violate the i.i.d. assumption. This is particularly true in healthcare where the multitude of physiological sensors generate time-series recordings that may vary temporally (due to seasonal diseases; e.g. flu), across patients (due to different hospitals or hospital settings), and in their modality. Tackling the challenges posed by such scenarios is the focus of continual learning (CL) whereby a learner, when exposed to tasks in a sequential manner, is expected to perform well on current tasks without compromising performance on previously seen tasks. The outcome is a single algorithm that can reliably solve a multitude of tasks. However, most, if not all, research in this field has been limited to a small handful of imaging datasets (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019b;a). Although understandable from a benchmarking perspective, such research fails to explore the utility of continual learning methodologies in more realistic healthcare scenarios (Farquhar & Gal, 2018). To the best of our knowledge, we are the first to explore and propose a CL approach in the context of physiological signals. The dynamic and chaotic environment that characterizes healthcare necessitates the availability of algorithms that are dynamically reliable; those that can adapt to potential covariate shift without catastrophically forgetting how to perform tasks from the past. Such dynamic reliability implies that algorithms no longer needs to be retrained on data or tasks to which it has been exposed in the past, thus improving its data-efficiency. Secondly, algorithms that perform consistently well across a multitude of tasks are more trustworthy, a desirable trait sought by medical professionals (Spiegelhalter, 2020).\nOur Contributions. In this paper, we propose a replay-based continual learning methodology that is based on the following:\n1. Importance-guided storage: task-instance parameters, a scalar corresponding to each instance in each task, as informative signals for loss-weighting and buffer-storage.\n2. Uncertainty-based acquisition: an active learning inspired methodology that determines the degree of informativeness of an instance and thus acts as a buffer-acquisition mechanism." }, { "heading": "2 RELATED WORK", "text": "Continual learning (CL) approaches have resurfaced in recent years (Parisi et al., 2019). Those similar to ours comprise memory-based methods such as iCaRL (Rebuffi et al., 2017), CLEAR (Rolnick et al., 2019), GEM (Lopez-Paz & Ranzato, 2017), and aGEM (Chaudhry et al., 2018). In contrast to our work, the latter two methods naively populate their replay buffer with the last m examples observed for a particular task. Isele & Cosgun (2018) and Aljundi et al. (2019b) employ a more sophisticated buffer-storage strategy where a quadratic programming problem is solved in the absence of task boundaries. Aljundi et al. 
(2019a) introduce MIR, whereby instances are stored using reservoir sampling and sampled according to whether they incur the greatest change in loss if parameters were to be updated on the subsequent task. This approach is computationally expensive, requiring multiple forward and backward passes per batch. The application of CL in the medical domain is limited to that of Lenga et al. (2020), wherein existing methodologies are implemented on chest X-ray datasets. In contrast to previous research that independently investigates buffer-storage and acquisition strategies, we focus on a dual storage and acquisition strategy.

Active learning (AL) in healthcare has seen increased interest in recent years, with a review of methodologies provided by Settles (2009). For example, Gong et al. (2019) propose a Bayesian deep latent Gaussian model to acquire important features from electronic health record (EHR) data in MIMIC (Johnson et al., 2016) to improve mortality prediction. In dealing with EHR data, Chen et al. (2013) use the distance of unlabelled samples from the hyperplane in an SVM to acquire datapoints. Wang et al. (2019) implement an RNN to acquire ECG samples during training. Zhou et al. (2017) perform transfer learning in conjunction with a convolutional neural network to acquire biomedical images in an online manner. Smailagic et al. (2018; 2019) actively acquire unannotated medical images by measuring their distance in a latent space to images in the training set. Such similarity metrics, however, are sensitive to the amount of available labelled training data. Gal et al. (2017) adopt BALD (Houlsby et al., 2011) with Monte Carlo Dropout to acquire instances that maximize the Jensen-Shannon divergence (JSD) across MC samples. To the best of our knowledge, we are the first to employ AL-inspired acquisition functions in the context of CL." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 CONTINUAL LEARNING", "text": "In this work, we consider a learner, f_ω : x_T ∈ R^m → y_T ∈ R^c, parameterized by ω, that maps an m-dimensional input, x_T, to a c-dimensional output, y_T, where c is the number of classes, for each task T ∈ [1 . . . N]. This learner is exposed to new tasks in a sequential manner once previously-tackled tasks are mastered. In this paper, we formulate our tasks based on a modification of the three-tier categorization proposed by van de Ven & Tolias (2019). In our learning scenarios (see Fig. 1), a network is sequentially tasked with solving: a binary classification problem in response to data from mutually-exclusive pairs of classes (Class Incremental Learning, Class-IL); a multi-class classification problem in response to data collected at different times of the year, e.g., winter and summer (Time Incremental Learning, Time-IL); and a multi-class classification problem in response to inputs with a different modality (Domain Incremental Learning, Domain-IL). In the aforementioned cases, task identities are absent during both training and testing, and neural architectures are single-headed." }, { "heading": "4 METHODS", "text": "The two ideas underlying our proposal are the storage of instances into, and the acquisition of instances from, a buffer such that destructive interference is mitigated. We describe these in more detail below." }, { "heading": "4.1 IMPORTANCE-GUIDED BUFFER STORAGE", "text": "We aim to populate a buffer, D_B, of finite size, M, with instances from the current task that are considered important.
To quantify importance, we learn parameters, entitled task-instance parameters, β_iT, associated with each instance, x_iT, in each task, T. These parameters play a dual role." }, { "heading": "4.1.1 LOSS-WEIGHTING MECHANISM", "text": "For the current task, k, and its associated data, D_k, we incorporate β as a coefficient of the loss, L_ik, incurred for each instance, x_ik ∈ D_k. For a mini-batch of size, B, that consists of B_k instances from the current task, the objective function is shown in Eq. 1. We can learn the values of β_ik via gradient descent, with some learning rate, η, as shown in Eq. 2.

L = (1/B_k) Σ_{i=1}^{B_k} β_ik L_ik   (1)

β_ik ← β_ik − η ∂L/∂β_ik   (2)

Note that ∂L/∂β_ik = L_ik > 0. This suggests that instances that are hard to classify (↑ L_ik) will exhibit ↓ β_ik. From this perspective, β_ik can be viewed as a proxy for instance difficulty. However, as presented, β_ik → 0 as training progresses, an observation we confirmed empirically. Since β_ik is the coefficient of the loss, L_ik, this implies that the network will quickly be unable to learn from the data. To avoid this behaviour, we initialize β_ik = 1 in order to emulate a standard loss function and introduce a regularization term to penalize its undesirable and rapid decay toward zero. As a result, our modified objective function is:

L_current = (1/B_k) Σ_{i=1}^{B_k} [ β_ik L_ik + λ(β_ik − 1)² ]   (3)

When k > 1, we replay instances from previous tasks by using a replay buffer (see Sec. 4.2 for the replay mechanism). These replayed instances incur a loss L_ij ∀ j ∈ [1 . . . k − 1]. We decided not to weight these instances, in contrast to what we do for instances from the current task (see Appendix K).

L_replay = (1/(B − B_k)) Σ_{j=1}^{k−1} Σ_i L_ij   (4)

L = L_current + L_replay   (5)

A minimal sketch of this loss is given below.
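In the following PyTorch sketch of Eqs. 1-5, the tensor layout and the handling of an empty replay batch are hedged assumptions on our part; λ = 10 follows Sec. 5.5.

import torch

def clops_loss(losses_current, betas, losses_replay, lam=10.0):
    # losses_current: (B_k,) per-instance losses of the current task
    # betas: (B_k,) task-instance parameters, initialized to 1 (requires_grad=True)
    # losses_replay: (B - B_k,) unweighted losses of replayed instances (Eq. 4)
    l_current = (betas * losses_current + lam * (betas - 1) ** 2).mean()  # Eq. 3
    l_replay = losses_replay.mean() if losses_replay.numel() > 0 else 0.0
    return l_current + l_replay  # Eq. 5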
" }, { "heading": "4.1.2 BUFFER-STORAGE MECHANISM", "text": "We leverage β, as a proxy for instance difficulty, to store instances into the buffer. To describe the intuition behind this process, we illustrate, in Fig. 2, the trajectories of β_1k and β_2k associated with two instances, x_1k and x_2k, while training on the current task, k, for τ = 20 epochs. In selecting instances for storage into the buffer, we can 1) retrieve their corresponding β values at the conclusion of the task, i.e., at β(t = 20), 2) rank all instances based on these β values, and 3) acquire the top b fraction of instances. This approach, however, can lead to erroneous estimates of the relative difficulty of instances, as explained next.

In Fig. 2, we see that β_2k > β_1k for the majority of the training process, indicating that x_2k had been easier to classify than x_1k. The swap in the ranking of these β values that occurs towards the end of training, in addition to myopically looking at β(t = 20), would erroneously make us believe that the opposite was true. Such convergence or swapping of β values has also been observed by Saxena et al. (2019). As a result, the reliability of β as a proxy of instance difficulty is eroded.

To maintain the reliability of this proxy, we propose to track the β values after each training epoch, t, until the final epoch, τ, for the task at hand, and to calculate the area under these tracked values. We do so by using the trapezoidal rule, as shown in Eq. 6. We explored several variants of the storage function and found the proposed form to work best (see Appendix H). At t = τ, we rank the instances in descending order of s_ik (easy to hard), as we found this preferable to the opposite order (see Appendix I), select the top b fraction, and store them into the buffer, of which each task is allotted a fixed portion. The higher the value of the storage fraction, b, the more likely it is that the buffer will contain representative instances and thus mitigate forgetting; however, this comes at an increased computational cost.

s_ik = ∫_0^τ β_ik(t) dt ≈ Σ_{t=0}^{τ} ((β_ik(t + Δt) + β_ik(t))/2) Δt   (6)" }, { "heading": "4.2 UNCERTAINTY-BASED BUFFER ACQUISITION", "text": "The acquisition of instances that a learner is uncertain about is likely to benefit training (Zhu, 2005). This is the premise of uncertainty-based acquisition functions such as BALD (Houlsby et al., 2011; Gal & Ghahramani, 2016). We now outline how to exploit this premise for buffer acquisition.

At epochs, τ_MC, referred to as Monte Carlo (MC) epochs, each of the M instances, x ∼ D_B, is passed through the network and exposed to a stochastic binary dropout mask to generate an output, p(y|x, ω) ∈ R^C. This is repeated T times to form a matrix, G ∈ R^{M×T×C}. An acquisition function, such as BALD_MCD, is thus a function F : R^{M×T×C} → R^M.

BALD_MCD = JSD(p_1, p_2, . . . , p_T) = H(p(y|x)) − E_{p(ω|D_train)}[H(p(y|x, ω̂))]   (7)

where H(p(y|x)) represents the entropy of the network outputs averaged across the MC samples, and ω̂ ∼ p(ω|D_train), as in Gal & Ghahramani (2016). At sample epochs, τ_S, we rank instances in descending order of BALD_MCD and acquire the top a fraction from each task in the buffer. A higher value of this acquisition fraction, a, implies more instances are acquired. Although this may not guarantee an improvement in performance, it does guarantee increased training overhead. Nonetheless, the intuition is that by acquiring instances, from previous tasks, about which a network is most confused, it can be nudged to avoid destructive interference in a data-efficient manner. We outline the entire training procedure in Algorithms 1-4 in Appendix A. Minimal sketches of the storage score (Eq. 6) and the acquisition function (Eq. 7) are given below.
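In the following PyTorch sketches, the history layout, the use of torch.trapz for Eq. 6, and the repeated stochastic forward passes for Eq. 7 are hedged assumptions on our part.

import torch

def storage_scores(beta_history):
    # beta_history: (tau + 1, n_instances), beta values tracked after each epoch
    return torch.trapz(beta_history, dim=0)  # Eq. 6: area under each beta curve

def bald_mcd(model, x, T=20):
    # x: (M, ...) buffered instances; keeping the model in train mode leaves
    # dropout active, so each forward pass samples a stochastic binary mask
    model.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(T)])  # (T, M, C)
    mean_p = probs.mean(dim=0)
    entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=-1)
    expected_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
    return entropy - expected_entropy  # Eq. 7, one score per instance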
" }, { "heading": "5 EXPERIMENTAL DESIGN", "text": "" }, { "heading": "5.1 DATASETS", "text": "We conduct experiments1 in PyTorch (Paszke et al., 2019). Given our emphasis on healthcare, we evaluate our approach on three publicly-available datasets that include physiological time-series data, such as the electrocardiogram (ECG), alongside cardiac arrhythmia labels. We use D1 = Cardiology ECG (Hannun et al., 2019) (12-way), D2 = Chapman ECG (Zheng et al., 2020) (4-way), and D3 = PhysioNet 2020 ECG (Perez Alday et al., 2020) (9-way, multi-label). Further details regarding the datasets and network architecture can be found in Appendix C.

1Our code is available at: https://tinyurl.com/CLOPSSubmission" }, { "heading": "5.2 CONTINUAL LEARNING SCENARIOS", "text": "Here, we outline the three primary continual learning scenarios we use for our experiments. In Class-IL, D1 is split according to mutually-exclusive pairs of classes [0, 1], [2, 3], [4, 5], [6, 7], [8, 9], and [10, 11]. This scenario allows us to evaluate the sensitivity of a network to new classes. In Time-IL, D2 is split into three tasks: Term 1, Term 2, and Term 3, corresponding to mutually-exclusive times of the year during which patient data were collected. This scenario allows us to evaluate the effect of temporal non-stationarity on a network's performance. Lastly, in Domain-IL, D3 is split according to the 12 leads of an ECG; 12 different projections of the same electrical signal generated by the heart. This scenario allows us to evaluate how robust a network is to shifts in the input distribution." }, { "heading": "5.3 BASELINE METHODS", "text": "We compare our proposed method to the following. Multi-Task Learning (MTL) (Caruana, 1993) is a strategy whereby all datasets are assumed to be available at the same time and thus can be simultaneously used for training. Although this assumption may not hold in clinical settings due to the nature of data collection, privacy or memory constraints, it is nonetheless a strong baseline. Fine-tuning is a strategy that involves updating all parameters when training on subsequent tasks as they arrive, without explicitly dealing with catastrophic forgetting. We also adapt two replay-based methods for our scenarios. GEM (Lopez-Paz & Ranzato, 2017) solves a quadratic programming problem to generate parameter gradients that do not increase the loss incurred by replayed instances. MIR (Aljundi et al., 2019a) replays instances from a buffer that incur the greatest change in loss given a parameter pseudo-update. Details on how these methods were adapted can be found in Appendix C." }, { "heading": "5.4 EVALUATION METRICS", "text": "To evaluate our methods, we exploit metrics suggested by Lopez-Paz & Ranzato (2017), such as average AUC and Backward Weight Transfer (BWT). We also propose two additional evaluation metrics that provide us with a more fine-grained analysis of learning strategies (minimal sketches of both are given below). In what follows, R_j^i denotes the test performance on task j after training on the i-th task.

t-Step Backward Weight Transfer. To determine how performance changes 't steps into the future', we propose BWT_t, which evaluates the performance of the network on a previously-seen task after having trained on t tasks after it.

BWT_t = (1/(N − t)) Σ_{j=1}^{N−t} (R_j^{j+t} − R_j^j)   (8)

Lambda Backward Weight Transfer. We extend BWT_t to all time-steps, t, to generate BWT_λ. As a result, we can identify improvements in methodology at the task level.

BWT_λ = (1/(N − 1)) Σ_{j=1}^{N−1} [ (1/(N − j)) Σ_{t=1}^{N−j} (R_j^{j+t} − R_j^j) ]   (9)
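In the following NumPy sketch of Eqs. 8 and 9, R is assumed to be an N×N matrix with R[i, j] the test performance on task j after training on task i (0-indexed), following the convention of Lopez-Paz & Ranzato (2017).

import numpy as np

def bwt_t(R, t):
    # Eq. 8: average performance change on each task, t tasks later
    N = R.shape[0]
    return np.mean([R[j + t, j] - R[j, j] for j in range(N - t)])

def bwt_lambda(R):
    # Eq. 9: Eq. 8 averaged over all admissible step sizes per task
    N = R.shape[0]
    return np.mean([np.mean([R[j + t, j] - R[j, j] for t in range(1, N - j)])
                    for j in range(N - 1)])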
" }, { "heading": "5.5 HYPERPARAMETERS", "text": "Depending on the continual learning scenario, we chose τ = 20 or 40, as we found these values to achieve strong performance on the respective validation sets. We chose the MC epochs τ_MC = 40 + n and the sample epochs τ_S = 41 + n, where n ∈ N+, in order to sample data from the buffer at every epoch following the first task. The values must satisfy τ_S ≥ τ_MC > τ. For computational reasons, we chose the storage fraction b = 0.1 of the size of the training dataset and the acquisition fraction a = 0.25 of the number of samples per task in the buffer. To calculate the acquisition function, we chose the number of Monte Carlo samples, T = 20. We chose the regularization coefficient, λ = 10. We also explore the effect of changing these values on performance (see Appendices L and M)." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "6.1 CLASS-IL", "text": "Destructive interference is notorious amongst neural networks. In this section, we quantify such interference when learners are exposed to tasks involving novel classes. In Fig. 3a, we illustrate the AUC achieved on sequential binary classification tasks. We find that destructive interference is prevalent. For example, the network quickly forgets how to perform task [0−1] once exposed to data from task [2−3]. This can be seen as the AUC drops from ≈ 0.92 to ≈ 0.30. The final performance of the network for that particular task (AUC ≈ 0.78) is also lower than that maximally achieved. In Fig. 3b, we show that CLOPS alleviates this interference. This can be seen by the absence of significant drops in AUC and the higher final performance for all tasks relative to the fine-tuning strategy.

In Table 1, we compare the performance of the CL strategies in the Class-IL scenario. We find that CLOPS outperforms MTL (AUC = 0.796 vs. 0.701), which is a seemingly non-intuitive finding. We hypothesize that this finding is due to positive weight transfer brought about by a curriculum, wherein sequential tasks of different levels of difficulty can improve generalization performance (Bengio et al., 2009). We explore this hypothesis further in Sec. 6.5. We also find that CLOPS outperforms the state-of-the-art methods, GEM and MIR, in terms of generalization performance, and exhibits constructive interference. For example, CLOPS and MIR achieve an AUC = 0.796 and 0.753, respectively. Moreover, BWT = 0.053 and 0.009 for these two methods, respectively. Such a finding underscores the ability of CLOPS to deal with tasks involving novel classes. We also show that CLOPS is robust to task order (see Appendix F)." }, { "heading": "6.2 TIME-IL", "text": "Environmental changes within healthcare can introduce seasonal shift into datasets. In this section, we quantify the effect of such a shift on learners. In Fig. 4a, we illustrate the AUC achieved on tasks with seasonally-shifted data.

In this scenario, we find that CLOPS is capable of achieving forward weight transfer (FWT). For example, in Figs. 4a and 4b, CLOPS achieves an AUC ≈ 0.62 after one epoch of training on task Term 3, a value that the fine-tuning strategy only achieves after 20 epochs, signalling a 20-fold reduction in training time. We attribute this FWT to the loss-weighting role played by the task-instance parameters. By placing greater emphasis on more useful instances, the generalization performance of the network is improved. We also find that CLOPS exhibits reduced catastrophic forgetting relative to fine-tuning. For example, performance on tasks Term 1 and Term 2 is maintained at AUC > 0.90 when training on task Term 3. We do not observe this for the fine-tuning setup." }, { "heading": "6.3 DOMAIN-IL", "text": "So far, we have shown the potential of CLOPS to alleviate destructive interference and allow for forward weight transfer. In this section, and in Table 2, we illustrate the performance of the CL strategies in the Domain-IL scenario. We show that CLOPS outperforms state-of-the-art methods. For example, CLOPS and MIR achieve an AUC = 0.731 and 0.716, respectively. CLOPS is also better at mitigating destructive interference, as shown by BWT = (0.011) and (0.022), respectively. We provide an explanation for such performance by conducting ablation studies in the next section.

6.4 EFFECT OF TASK-INSTANCE PARAMETERS, β, AND ACQUISITION FUNCTION, α

To better understand the root cause of CLOPS' benefits, we conduct three ablation studies: 1) Random Storage, which dispenses with task-instance parameters and instead randomly stores instances into the buffer, 2) Random Acquisition, which dispenses with acquisition functions and instead randomly acquires instances from the buffer, and 3) Random Storage and Acquisition, which stores instances into, and acquires instances from, the buffer randomly. In Fig. 5, we illustrate the effect of these strategies on performance as we vary the storage fraction, b, and the acquisition fraction, a.

We find that β, as a loss-weighting mechanism, benefits generalization performance. For example, in Fig.
5 (red rectangle), at b = 1, a = 0.5, we show that simply including the loss-weighting mechanism increases AUC by ≈ 12%. We hypothesize that this mechanism is analogous to attention being placed on instance losses, and thus allows the network to learn which instances to exploit further. We also find that uncertainty-based acquisition functions offer significant improvements. In Fig. 5 (black rectangles), at a = 0.1, b = 0.5, we show that such acquisition increases AUC by ≈ 8%. We arrive at the same conclusion when evaluating backward weight transfer (see Appendix J).

6.5 VALIDATION OF INTERPRETATION OF TASK-INSTANCE PARAMETERS, β

We claimed that instances with lower values of β, and by extension, s, are relatively more difficult to classify. In this section, we aim to validate this intuition. In Fig. 6, we illustrate the distribution of s values corresponding to each task.

We find that tasks differ in their difficulty level. For example, task [6−7] is considered more difficult to solve than task [8−9], as evidenced by the lower distribution mean of the former relative to the latter (s ≈ 18.85 vs. 18.95). After extracting the two ECG recordings associated with the lowest and highest s values, we find that both belong to the same class, normal sinus rhythm. Upon closer inspection, the recording with the lower s value exhibits a feature known as ST-elevation. This feature, which is characterized by the elevation of the segment between the S and T waves (deflections) of the ECG signal relative to the baseline, is typically associated with heart attacks. Mapping an ECG recording with such an abnormal feature to the normal class would have been a source of confusion for the network. We provide additional qualitative evidence in Appendix G.

We also leverage s to learn a curriculum (Bengio et al., 2009). First, we fit a Gaussian, N(µ_T, σ_T²), to each of the distributions in Fig. 6. Using this information, we define the difficulty of task T as d_T = 1/µ_T and the similarity, S(j, k), between tasks j and k as shown in Eq. 10. In Fig. 7, we illustrate the resulting pairwise task similarity matrix.

S(j, k) = 1 − √( 1 − √(2σ_0σ_1 / (σ_0² + σ_1²)) · exp( −(µ_0 − µ_1)² / (4(σ_0² + σ_1²)) ) )   (10)

where the outer square-root term is the Hellinger distance, D_H, between the two fitted Gaussians.

We design a curriculum by first selecting the easiest task (↓ d_T) and then creating a chain of tasks that are similar to one another, as shown in Fig. 7. For an anti-curriculum, we start with the hardest task (↑ d_T). In Table 3, we illustrate the performance of various curricula and find that a curriculum exhibits higher constructive interference than a random one (BWT = 0.087 vs. 0.053). Such an outcome aligns well with the expectations of curriculum learning, thus helping to further validate the intuition underlying β. A minimal sketch of the similarity computation in Eq. 10 is given below.
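In the following NumPy sketch of Eq. 10, each task's s-value distribution is assumed to have been summarized by a fitted Gaussian N(µ, σ²); the function name is a hedged assumption on our part.

import numpy as np

def task_similarity(mu0, sigma0, mu1, sigma1):
    # Bhattacharyya coefficient of two 1-D Gaussians
    bc = np.sqrt(2 * sigma0 * sigma1 / (sigma0**2 + sigma1**2)) * \
         np.exp(-0.25 * (mu0 - mu1)**2 / (sigma0**2 + sigma1**2))
    d_h = np.sqrt(1 - bc)  # Hellinger distance D_H
    return 1 - d_h         # S(j, k) as in Eq. 10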
" }, { "heading": "7 DISCUSSION AND FUTURE WORK", "text": "In this paper, we introduce a replay-based method applied to physiological signals, entitled CLOPS, to mitigate destructive interference during continual learning. CLOPS consists of an importance-guided buffer-storage mechanism and an active-learning-inspired buffer-acquisition mechanism. We show that CLOPS outperforms the state-of-the-art methods, GEM and MIR, on both backward and forward weight transfer. Furthermore, we propose learnable parameters, as a proxy for the difficulty with which instances are classified, which can assist with quantifying task difficulty and improving network interpretability. We now elucidate future avenues worth exploring.

Extensions to Task Similarity. The notion of task similarity was explored by Thrun & O'Sullivan (1996); Silver & Mercer (1996). In this work, we proposed a definition of task similarity and used it to order the presentation of tasks. The exploration of more robust definitions, their validation through domain knowledge, and their exploitation for generalization is an exciting extension.

Predicting Destructive Interference. Destructive interference is often dealt with in a reactive manner. Predicting the degree of forgetting that a network may experience when trained sequentially could help alleviate this problem in a proactive manner." } ]
null
CLOPS: CONTINUAL LEARNING OF PHYSIOLOGICAL SIGNALS
SP:ceacad438130adfb746240e36dd32d14794b4291
[ "This paper presents SALD, a new type of implicit shape representation that, in addition to predicting the signed distance function, aligns the gradients of the distance function with that of the neural distance field. The resulting algorithm, for example, has improved approximation power and better preserves the sharp features than the ancestor SAL (sign agnostic learning). The formulation is such that the architecture can consume raw point clouds. " ]
Learning 3D geometry directly from raw data, such as point clouds, triangle soups, or unoriented meshes, is still a challenging task that feeds many downstream computer vision and graphics applications. In this paper, we introduce SALD: a method for learning implicit neural representations of shapes directly from raw data. We generalize sign agnostic learning (SAL) to include derivatives: given an unsigned distance function to the input raw data, we advocate a novel sign agnostic regression loss, incorporating both pointwise values and gradients of the unsigned distance function. Optimizing this loss leads to a signed implicit function solution, the zero level set of which is a high-quality and valid manifold approximation to the input 3D data. The motivation behind SALD is that incorporating derivatives in a regression loss leads to a lower sample complexity, and consequently better fitting. In addition, we provide empirical evidence, as well as theoretical motivation in 2D, that SAL enjoys a minimal surface property, favoring minimal-area solutions. More importantly, we are able to show that this property still holds for SALD, i.e., with derivatives included. We demonstrate the efficacy of SALD for shape space learning on two challenging datasets: ShapeNet (Chang et al., 2015), which contains inconsistent orientation and non-manifold meshes, and D-Faust (Bogo et al., 2017), which contains raw 3D scans (triangle soups). On both these datasets, we present state-of-the-art results.
[ { "affiliations": [], "name": "Matan Atzmon" }, { "affiliations": [], "name": "Yaron Lipman" } ]
[ { "authors": [ "Brett Allen", "Brian Curless", "Zoran Popović" ], "title": "Articulated body deformation from range scan data", "venue": "ACM Transactions on Graphics (TOG),", "year": 2002 }, { "authors": [ "Brett Allen", "Brian Curless", "Zoran Popović" ], "title": "The space of human body shapes: reconstruction and parameterization from range scans", "venue": "ACM transactions on graphics (TOG),", "year": 2003 }, { "authors": [ "Nina Amenta", "Marshall Bern", "Manolis Kamvysselis" ], "title": "A new voronoi-based surface reconstruction algorithm", "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques,", "year": 1998 }, { "authors": [ "Dragomir Anguelov", "Praveen Srinivasan", "Daphne Koller", "Sebastian Thrun", "Jim Rodgers", "James Davis" ], "title": "Scape: shape completion and animation of people", "venue": "In ACM SIGGRAPH 2005 Papers,", "year": 2005 }, { "authors": [ "Matan Atzmon", "Yaron Lipman" ], "title": "Sal: Sign agnostic learning of shapes from raw data", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Matan Atzmon", "Niv Haim", "Lior Yariv", "Ofer Israelov", "Haggai Maron", "Yaron Lipman" ], "title": "Controlling neural level sets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Atılım Günes Baydin", "Barak A Pearlmutter", "Alexey Andreyevich Radul", "Jeffrey Mark Siskind" ], "title": "Automatic differentiation in machine learning: a survey", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Heli Ben-Hamu", "Haggai Maron", "Itay Kezurer", "Gal Avineri", "Yaron Lipman" ], "title": "Multi-chart generative surface modeling", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Tolga Birdal", "Benjamin Busam", "Nassir Navab", "Slobodan Ilic", "Peter Sturm" ], "title": "Generic primitive detection in point clouds using novel minimal quadric fits", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Federica Bogo", "Javier Romero", "Gerard Pons-Moll", "Michael J Black" ], "title": "Dynamic faust: Registering human bodies in motion", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Angel X Chang", "Thomas Funkhouser", "Leonidas Guibas", "Pat Hanrahan", "Qixing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "arXiv preprint arXiv:1512.03012,", "year": 2015 }, { "authors": [ "Zhiqin Chen", "Hao Zhang" ], "title": "Learning implicit fields for generative shape modeling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhiqin Chen", "Andrea Tagliasacchi", "Hao Zhang" ], "title": "Bsp-net: Generating compact meshes via binary space partitioning", "venue": "Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Wojciech M Czarnecki", "Simon Osindero", "Max Jaderberg", "Grzegorz Swirszcz", "Razvan Pascanu" ], "title": "Sobolev training for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Boyang Deng", "Kyle Genova", "Soroosh Yazdani", "Sofien Bouaziz", "Geoffrey Hinton", "Andrea Tagliasacchi" ], "title": "Cvxnet: 
Learnable convex decomposition", "venue": null, "year": 2020 }, { "authors": [ "Theo Deprelle", "Thibault Groueix", "Matthew Fisher", "Vladimir Kim", "Bryan Russell", "Mathieu Aubry" ], "title": "Learning elementary structures for 3d shape generation and matching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Manfredo P Do Carmo" ], "title": "Differential Geometry of Curves and Surfaces: Revised and Updated Second Edition", "venue": null, "year": 2016 }, { "authors": [ "Kyle Genova", "Forrester Cole", "Daniel Vlasic", "Aaron Sarna", "William T Freeman", "Thomas Funkhouser" ], "title": "Learning shape templates with structured implicit functions", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Kyle Genova", "Forrester Cole", "Avneesh Sud", "Aaron Sarna", "Thomas Funkhouser" ], "title": "Local deep implicit functions for 3d shape", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Amos Gropp", "Lior Yariv", "Niv Haim", "Matan Atzmon", "Yaron Lipman" ], "title": "Implicit geometric regularization for learning shapes", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G Kim", "Bryan C Russell", "Mathieu Aubry" ], "title": "3dcoded: 3d correspondences by deep deformation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G Kim", "Bryan C Russell", "Mathieu Aubry" ], "title": "A papier-mâché approach to learning 3d surface generation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Xianfeng Gu", "Steven J Gortler", "Hugues Hoppe" ], "title": "Geometry images", "venue": "In Proceedings of the 29th annual conference on Computer graphics and interactive techniques,", "year": 2002 }, { "authors": [ "Rana Hanocka", "Gal Metzer", "Raja Giryes", "Daniel Cohen-Or" ], "title": "Point2mesh: A self-prior for deformable meshes", "venue": "arXiv preprint arXiv:2005.11084,", "year": 2020 }, { "authors": [ "Yue Jiang", "Dantong Ji", "Zhizhong Han", "Matthias Zwicker" ], "title": "Sdfdiff: Differentiable rendering of signed distance fields for 3d shape optimization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Lingxiao Li", "Minhyuk Sung", "Anastasia Dubrovina", "Li Yi", "Leonidas J Guibas" ], "title": "Supervised fitting of geometric primitives to 3d point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Or Litany", "Alex Bronstein", "Michael Bronstein", "Ameesh Makadia" ], "title": "Deformable shape completion with graph convolutional autoencoders", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shichen Liu", "Shunsuke Saito", "Weikai Chen", "Hao Li" ], "title": "Learning to infer implicit surfaces without 3d supervision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "William E 
Lorensen", "Harvey E Cline" ], "title": "Marching cubes: A high resolution 3d surface construction algorithm", "venue": "In ACM siggraph computer graphics,", "year": 1987 }, { "authors": [ "Haggai Maron", "Meirav Galun", "Noam Aigerman", "Miri Trope", "Nadav Dym", "Ersin Yumer", "Vladimir G Kim", "Yaron Lipman" ], "title": "Convolutional neural networks on surfaces via seamless toric covers", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Lars Mescheder", "Michael Oechsle", "Michael Niemeyer", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Occupancy networks: Learning 3d reconstruction in function space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Michael Niemeyer", "Lars Mescheder", "Michael Oechsle", "Andreas Geiger" ], "title": "Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard Newcombe", "Steven Lovegrove" ], "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shunsuke Saito", "Zeng Huang", "Ryota Natsume", "Shigeo Morishima", "Angjoo Kanazawa", "Hao Li" ], "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Ayan Sinha", "Jing Bai", "Karthik Ramani" ], "title": "Deep learning 3d shape surfaces using geometry images", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Matthew Tancik", "Pratul P. Srinivasan", "Ben Mildenhall", "Sara Fridovich-Keil", "Nithin Raghavan", "Utkarsh Singhal", "Ravi Ramamoorthi", "Jonathan T. 
Barron", "Ren Ng" ], "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Tolga Tasdizen", "J-P Tarel", "David B Cooper" ], "title": "Algebraic curves that work better", "venue": "In Proceedings", "year": 1999 }, { "authors": [ "Maxim Tatarchenko", "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Francis Williams", "Teseo Schneider", "Claudio Silva", "Denis Zorin", "Joan Bruna", "Daniele Panozzo" ], "title": "Deep geometric prior for surface reconstruction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Francis Williams", "Jerome Parent-Levesque", "Derek Nowrouzezahrai", "Daniele Panozzo", "Kwang Moo Yi", "Andrea Tagliasacchi" ], "title": "Voronoinet: General functional approximators with local support", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Lior Yariv", "Yoni Kasten", "Dror Moran", "Meirav Galun", "Matan Atzmon", "Basri Ronen", "Yaron Lipman" ], "title": "Multiview neural surface reconstruction by disentangling geometry and appearance", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Hong-Kai Zhao", "Stanley Osher", "Ronald Fedkiw" ], "title": "Fast surface reconstruction using the level set method", "venue": "In Proceedings IEEE Workshop on Variational and Level Set Methods in Computer Vision,", "year": 2001 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, neural networks (NN) have been used for representing and reconstructing 3D surfaces. Current NN-based 3D learning approaches differ in two aspects: the choice of surface representation, and the supervision method. Common representations of surfaces include using NN as parametric charts of surfaces (Groueix et al., 2018b; Williams et al., 2019); volumetric implicit function representation defined over regular grids (Wu et al., 2016; Tatarchenko et al., 2017; Jiang et al., 2020); and NN used directly as volumetric implicit functions (Park et al., 2019; Mescheder et al., 2019; Atzmon et al., 2019; Chen & Zhang, 2019), referred henceforth as implicit neural representations. Supervision methods include regression of known or approximated volumetric implicit representations (Park et al., 2019; Mescheder et al., 2019; Chen & Zhang, 2019), regression directly with raw 3D data (Atzmon & Lipman, 2020; Gropp et al., 2020; Atzmon & Lipman, 2020), and differentiable rendering using 2D data (i.e., images) supervision (Niemeyer et al., 2020; Liu et al., 2019; Saito et al., 2019; Yariv et al., 2020).\nThe goal of this paper is to introduce SALD, a method for learning implicit neural representations of surfaces directly from raw 3D data. The benefit in learning directly from raw data, e.g., nonoriented point clouds or triangle soups (e.g., Chang et al. (2015)) and raw scans (e.g., Bogo et al. (2017)), is avoiding the need for a ground truth signed distance representation of all train surfaces for supervision. This allows working with complex models with inconsistent normals and/or missing parts. In Figure 1 we show reconstructions of zero level sets of SALD learned implicit neural representations of car models from the ShapeNet dataset (Chang et al., 2015) with variational autoencoder; notice the high detail level and the interior, which would not have been possible with, e.g., previous data pre-processing techniques using renderings of visible parts (Park et al., 2019).\nOur approach improves upon the recent Sign Agnostic Learning (SAL) method (Atzmon & Lipman, 2020) and shows that incorporating derivatives in a sign agnostic manner provides a significant\nimprovement in surface approximation and detail. SAL is based on the observation that given an unsigned distance function h to some raw 3D data X ⊂ R3, a sign agnostic regression to h will introduce new local minima that are signed versions of h; in turn, these signed distance functions can be used as implicit representations of the underlying surface. In this paper we show how the sign agnostic regression loss can be extended to compare both function values h and derivatives ∇h, up to a sign.\nThe main motivation for performing NN regression with derivatives is that it reduces the sample complexity of the problem (Czarnecki et al., 2017), leading to better accuracy and generalization. For example, consider a one hidden layer NN of the form f(x) = max {ax, bx}+c. Prescribing two function samples at {−1, 1} are not sufficient for uniquely determining f , while adding derivative information at these points determines f uniquely.\nWe provide empirical evidence as well as theoretical motivation suggesting that both SAL and SALD possess the favorable minimal surface property (Zhao et al., 2001), that is, in areas of missing parts and holes they will prefer zero level sets with minimal area. 
We justify this property by proving that, in 2D, when restricted to the zero level-set (a curve in this case), the SAL and SALD losses would encourage a straight line solution connecting neighboring data points.
We have tested SALD on the dataset of man-made models, ShapeNet (Chang et al., 2015), and the human raw scan dataset, D-Faust (Bogo et al., 2017), and compared to state-of-the-art methods. In all cases we have used the raw input data X as is and considered the unsigned distance function to X, i.e., h_X, in the SALD loss to produce an approximate signed distance function in the form of a neural network. Comparing to state-of-the-art methods we find that SALD achieves superior results on ShapeNet. On the D-Faust dataset, when comparing to ground truth reconstructions we report state-of-the-art results, striking a balance between approximating details of the scans and avoiding overfitting noise and ghost geometry.
Summarizing the contributions of this paper:
• Introducing sign agnostic learning with derivatives.
• Identifying and providing a theoretical justification for the minimal surface property of sign agnostic learning in 2D.
• Training directly on raw data (end-to-end), including unoriented or not consistently oriented triangle soups and raw 3D scans." }, { "heading": "2 PREVIOUS WORK", "text": "Learning 3D shapes with neural networks and 3D supervision has shown great progress recently. We review related works, where we categorize the existing methods based on their choice of 3D surface representation.
Parametric representations. The most fundamental surface representation is an atlas, that is, a collection of parametric charts f : R^2 → R^3 with certain coverage and transition properties (Do Carmo, 2016). Groueix et al. (2018b) adapted this idea, using neural networks to represent a surface as a union of such charts; Williams et al. (2019) improved this construction by introducing better transitions between charts; Sinha et al. (2016) use geometry images (Gu et al., 2002) to represent an entire shape using a single chart; Maron et al. (2017) use global conformal parameterization for learning surface data; Ben-Hamu et al. (2018) use a collection of overlapping global conformal charts for a human-shape generative model. Hanocka et al. (2020) shrink-wrap a template mesh to fit a point cloud. The benefit of parametric representations is the ease of sampling the learned surface (i.e., a forward pass) and the ability to work directly with raw data (e.g., with a Chamfer loss); their main struggle is in producing charts that are collectively consistent, of low distortion, and covering the shape.
Implicit representations. Another approach for representing surfaces is as zero level sets of a function, called an implicit function. There are two popular methods to model implicit volumetric functions with neural networks: i) a convolutional neural network predicting scalar values over a predefined fixed volumetric structure (e.g., grid or octree) in space (Tatarchenko et al., 2017; Wu et al., 2016); and ii) a multilayer perceptron of the form f : R^3 → R defining a continuous volumetric function (Park et al., 2019; Mescheder et al., 2019; Chen & Zhang, 2019).
Currently, neural networks are trained to be implicit function representations with two types of supervision: (i) regression of samples taken from a known or pre-computed implicit function representation such as an occupancy function (Mescheder et al., 2019; Chen & Zhang, 2019) or a signed distance function (Park et al., 2019); and (ii) working with raw 3D supervision, by particle methods relating points on the level sets to the model parameters (Atzmon et al., 2019), using sign agnostic losses (Atzmon & Lipman, 2020), or supervision with PDEs defining signed distance functions (Gropp et al., 2020).
Primitives. Another type of representation is to learn shapes as compositions or unions of a family of primitives. Gradient information has been used to improve and facilitate fitting of invariant polynomial representations (Tasdizen et al., 1999; Birdal et al., 2019). Li et al. (2019) represent a shape using a parametric collection of primitives. Genova et al. (2019; 2020) use a collection of Gaussians and learn consistent shape decompositions. Chen et al. (2020) suggest a differentiable Binary Space Partitioning tree (BSP-tree) for representing shapes. Deprelle et al. (2019) combine point and chart representations to learn basic shape structures. Deng et al. (2020) represent a shape as a union of convex sets. Williams et al. (2020) learn sites of Voronoi cells for implicit shape representation.
Template fitting. Lastly, several methods learn 3D shapes of a certain class (e.g., humans) by learning the deformation from a template model. Classical methods use matching techniques and geometric loss minimization for non-rigid template matching (Allen et al., 2002; 2003; Anguelov et al., 2005). Groueix et al. (2018a) use an auto-encoder architecture and Chamfer distance to match target shapes. Litany et al. (2018) use a graph convolutional autoencoder to learn a deformable template for shape completion." }, { "heading": "3 METHOD", "text": "Given raw geometric input data X ⊂ R^3, e.g., a triangle soup, our goal is to find a multilayer perceptron (MLP) f : R^3 × R^m → R whose zero level-set,

S = \{ x \in \mathbb{R}^3 \mid f(x; \theta) = 0 \}   (1)

is a manifold surface that approximates X.
Sign agnostic learning. Similarly to SAL, our approach is to consider the (readily available) unsigned distance function to the raw input geometry,

h(y) = \min_{x \in X} \| y - x \|   (2)

and perform sign agnostic regression to get a signed version f of h. SAL uses a loss of the form

\mathrm{loss}(\theta) = \mathbb{E}_{x \sim D}\, \tau\big( f(x; \theta), h(x) \big) ,   (3)

where D is some probability distribution, e.g., a sum of Gaussians with centers uniformly sampled over the input geometry X, and τ is an unsigned similarity. That is, τ(a, b) measures the difference between scalars a, b ∈ R up to a sign. For example,

\tau(a, b) = \big|\, |a| - b \,\big|   (4)

is used in Atzmon & Lipman (2020). The key property of the sign agnostic loss in equation 3 is that, with proper weight initialization θ_0, it finds a new signed local minimum f which in absolute value is similar to h. In turn, the zero level set S of f is a valid manifold describing the data X.
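As a side note, the unsigned distance h of equation 2 is cheap to evaluate for point-cloud data. The following is a minimal sketch using a k-d tree (not the paper's CGAL-based implementation, which is described in Appendix A.2.1); the function names and the unit-sphere example are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def unsigned_distance_fn(X):
    """Return h(y) = min_{x in X} ||y - x|| for a point cloud X of shape (n, 3)."""
    tree = cKDTree(X)

    def h(Y):
        # Y: (m, 3) query points; returns (m,) unsigned distances.
        d, _ = tree.query(Y)
        return d

    return h

# Example: distances from random queries to a point cloud on the unit sphere.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
h = unsigned_distance_fn(X)
Y = rng.uniform(-1.5, 1.5, size=(5, 3))
print(h(Y))  # close to | ||y|| - 1 | for each query y
```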
Sign agnostic learning with derivatives. Our goal is to generalize the SAL loss (equation 3) to include derivative data of h and show that optimizing this loss provides implicit neural representations, S, that enjoy better approximation properties with respect to the underlying geometry X. Generalizing equation 3 requires designing an unsigned similarity measure τ for vector valued functions. The key observation is that equation 4 can be written as τ(a, b) = min{|a − b|, |a + b|}, a, b ∈ R, and can be generalized to vectors a, b ∈ R^d by

\tau(a, b) = \min \{ \| a - b \|, \| a + b \| \} .   (5)

We define the SALD loss:

\mathrm{loss}(\theta) = \mathbb{E}_{x \sim D}\, \tau\big( f(x; \theta), h(x) \big) + \lambda\, \mathbb{E}_{x \sim D'}\, \tau\big( \nabla_x f(x; \theta), \nabla_x h(x) \big)   (6)

where λ > 0 is a parameter, D′ is a probability distribution, e.g., it could be identical to D, or uniform over the input geometry X, and ∇_x f(x; θ), ∇_x h(x) are the gradients of f, h (resp.) with respect to their input x.
In Figure 2 we show the unsigned distance h to an L-shaped curve (left), and the level sets of the MLPs optimized with the SALD loss (middle) and the SAL loss (right); note that the SALD loss reconstructed the sharp features (i.e., corners) of the shape and the level sets of h, while the SAL loss smoothed them out; the implementation details of this experiment can be found in Appendix A.4.
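The SALD loss of equation 6 is straightforward to express in a deep learning framework. Below is a hedged PyTorch sketch (function and argument names are our own; the sampling of x, x′ and of ∇x h is assumed to happen elsewhere, e.g., as in Appendix A.2.1). Since h ≥ 0, the scalar form of equation 5 coincides with equation 4 for the value term.

```python
import torch

def sald_loss(f, x, h_x, x_prime, grad_h, lam=0.1):
    """A sketch of the SALD loss (equation 6).

    f: MLP mapping (n, 3) -> (n, 1); h_x: (n,) unsigned distances at x;
    x_prime: (n', 3) samples for the derivative term; grad_h: (n', 3)
    gradients of h at x_prime; lam: the weight lambda.
    """
    # Value term, tau(a, b) = | |a| - b |  (equation 4).
    value_term = (f(x).squeeze(-1).abs() - h_x).abs().mean()

    # Derivative term, tau(a, b) = min(||a - b||, ||a + b||)  (equation 5).
    x_prime = x_prime.clone().requires_grad_(True)
    y = f(x_prime)
    grad_f = torch.autograd.grad(y, x_prime, torch.ones_like(y),
                                 create_graph=True)[0]
    deriv_term = torch.minimum((grad_f - grad_h).norm(dim=-1),
                               (grad_f + grad_h).norm(dim=-1)).mean()
    return value_term + lam * deriv_term
```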
Minimal surface property. We show that the SAL and SALD losses possess a minimal surface property (Zhao et al., 2001), that is, they strive to minimize the surface area of missing parts. For example, Figure 4 shows the unsigned distance to a curve with a missing segment (left), and the zero level sets of MLPs optimized with the SALD loss (middle) and the SAL loss (right).
Note that in both cases the zero level set in the missing part area is the minimal length curve (i.e., a line) connecting the end points of that missing part. SALD also preserves sharp features of the rest of the shape. Figure A1 in the supplementary shows additional 2D experiments comparing to the Implicit Geometric Regularization (IGR) method (Gropp et al., 2020), which learns implicit representations by regularizing the gradient norm and does not possess the minimal surface property.
We will provide a theoretical justification for this property in the 2D case. We consider a geometry defined by two points in the plane, X = {x_1, x_2} ⊂ R^2, and possible solutions where the zero level set curve S connects x_1 and x_2. We prove that among a class of curves U connecting x_1 and x_2, the straight line minimizes the losses in equation 3 and equation 6 restricted to U, when assuming uniform distributions D, D′. We assume (without loss of generality) that x_1 = (0, 0)^T, x_2 = (ℓ, 0)^T and consider curves u ∈ U defined by u(s) = (s, t(s))^T, where s ∈ [0, ℓ], and t : R → R is some differentiable function such that t(0) = 0 = t(ℓ); see Figure 3. For the SALD loss we prove the claim for a slightly simplified agnostic loss motivated by the following lemma, proved in Appendix A.1:
Lemma 1. For any pair of unit vectors a, b: min{‖a − b‖, ‖a + b‖} ≥ |sin∠(a, b)|.
We consider τ(a, b) = |sin∠(a, b)| for the derivative part of the loss in equation 6, which is also sign agnostic.
Theorem 1. Let X = {x_1, x_2} ⊂ R^2, and let U be the family of curves connecting x_1 and x_2. Furthermore, let loss_SAL(u) and loss_SALD(u) denote the losses in equation 3 and equation 6 (resp.) when restricted to u with uniform distributions D, D′. Then in both cases the straight line, i.e., the curve u(s) = (s, 0), is the strict global minimizer of these losses.
Proof. The unsigned distance function is

h(u) = \begin{cases} \sqrt{s^2 + t^2} & s \in [0, \ell/2] \\ \sqrt{(s - \ell)^2 + t^2} & s \in (\ell/2, \ell] \end{cases} .

From symmetry it is enough to consider only the first half of the curve, i.e., s ∈ [0, ℓ/2). Then, the SAL loss, equation 3, restricted to the curve u (i.e., where f vanishes) takes the form

\mathrm{loss}_{\mathrm{SAL}}(u) = \int_0^{\ell/2} \tau(f(u; \theta), h(u))\, \| \dot{u} \| \, ds = \int_0^{\ell/2} \sqrt{s^2 + t^2}\, \sqrt{1 + \dot{t}^2} \, ds ,

where \sqrt{1 + \dot{t}^2}\, ds is the length element on the curve u, and τ(f(s, t; θ), h(s, t)) = |h(s, t)| = \sqrt{s^2 + t^2}, since f(s, t; θ) = 0 over the curve u. Plugging t(s) ≡ 0 into loss_SAL(u) we see that the curve u = (s, 0)^T, namely the straight line curve from x_1 to 0.5(x_1 + x_2), is a strict global minimizer of loss_SAL(u). A similar argument on s ∈ [ℓ/2, ℓ] proves the claim for the SAL case.
For the SALD case, we want to calculate τ(∇_x f(u; θ), ∇_x h(u)) restricted to the curve u; let a = ∇_x f(u; θ) and b = ∇_x h(u). First, b = (s^2 + t^2)^{−1/2} (s, t)^T. Second, a is normal to the curve u, therefore it is proportional to \dot{u}^⊥ = (−\dot{t}, 1)^T. Next, note that

|\sin \angle(a, b)| = \frac{\left| \det \begin{pmatrix} -\dot{t} & s \\ 1 & t \end{pmatrix} \right|}{\sqrt{1 + \dot{t}^2}\, \sqrt{s^2 + t^2}} = \frac{1}{\sqrt{1 + \dot{t}^2}} \left| \frac{d}{ds} \| (s, t) \| \right| ,

where the last equality can be checked by differentiating ‖(s, t)‖ w.r.t. s. Therefore,

\frac{\mathrm{loss}_{\mathrm{SALD}}(u) - \mathrm{loss}_{\mathrm{SAL}}(u)}{\lambda} = \int_0^{\ell/2} \tau(a, b)\, \| \dot{u} \| \, ds = \int_0^{\ell/2} \left| \frac{d}{ds} \| (s, t) \| \right| ds \geq \left\| \left( \frac{\ell}{2},\; t\!\left( \frac{\ell}{2} \right) \right) \right\| \geq \frac{\ell}{2} .

This bound is achieved for the curve u = (s, 0), which is also a minimizer of the SAL loss. The straight line also minimizes this version of the SALD loss since loss_SALD(u) = (loss_SALD(u) − loss_SAL(u)) + loss_SAL(u)." }, { "heading": "4 EXPERIMENTS", "text": "We tested SALD on the task of shape space learning from raw 3D data. We experimented with two different datasets: i) the ShapeNet dataset (Chang et al., 2015), containing synthetic 3D meshes; and ii) the D-Faust dataset (Bogo et al., 2017), containing raw 3D scans. Furthermore, we empirically test our sample complexity hypothesis (i.e., that incorporating derivatives improves sample complexity) by inspecting surface reconstruction accuracy for SAL and SALD when trained with fixed size sample sets.
Shape space learning architecture. Our method can be easily incorporated into existing shape space learning architectures: i) the Auto-Decoder (AD) suggested in Park et al. (2019); and ii) the modified Variational Auto-Encoder (VAE) used in Atzmon & Lipman (2020). For the VAE, the encoder is taken to be PointNet (Qi et al., 2017). For both options, the decoder is the implicit representation in equation 1, where f(x; θ) is taken to be an 8-layer MLP with 512 hidden units in each layer and Softplus activation. In addition, to enable sign agnostic learning we initialize the decoder weights, θ, using the geometric initialization from Atzmon & Lipman (2020). See Appendix A.2.4 for more details regarding the architecture. The point samples x, x′ for the empirical computation of the expectations in equation 6 are drawn according to the distributions D, D′ explained in Appendix A.2.1.
Baselines. The baseline methods selected for comparison cover both existing supervision methodologies: DeepSDF (Park et al., 2019) is chosen as a representative of the methods that require a pre-computed implicit representation for training. For methods that train directly on raw 3D data, we compare versus SAL (Atzmon & Lipman, 2020) and IGR (Gropp et al., 2020). See Appendix A.6 for a detailed description of the quantitative metrics used for evaluation.
4.1 SHAPENET
In this experiment we tested the ability of SALD to learn a shape space by training on challenging 3D data such as non-manifold/non-orientable meshes. We tested SALD with both AD and VAE architectures. In both settings, we set λ = 0.1 for the SALD loss.
We follow the evaluation protocol of DeepSDF (Park et al., 2019): using the same train/test splits, we train and evaluate our method on 5 different categories. Note that comparison versus IGR is omitted, as IGR requires consistently oriented normals for shape space learning, which are not available for ShapeNet, where many models have inconsistent triangle orientation.
Results. Table 1 and Figure 6 show quantitative and qualitative results (resp.) for the held-out test set, comparing SAL, DeepSDF and SALD. As can be read from the table and inspected in the figure, our method, when used with the same auto-decoder as in DeepSDF, compares favorably to DeepSDF's reconstruction performance on this data.
Qualitatively, the surfaces produced by SALD are smoother, and mostly have more accurate sharp features, than the surfaces generated by SAL and DeepSDF. Figure 1 shows typical train and test results from the Cars class with the VAE. Figure 5 shows a comparison between SALD shape space learning with the VAE and the AD in the reconstruction of a test car model (left). Note that the AD (middle) seems to produce more details of the test model than the VAE (right), e.g., the steering wheel and headlights. Figure 7 shows SALD (AD) generated shapes via latent space interpolation between two test models." }, { "heading": "4.2 D-FAUST", "text": "The D-Faust dataset (Bogo et al., 2017) contains raw scans (triangle soups) of 10 humans in multiple poses. There are approximately 41k scans in the dataset. Due to the low variety between adjacent scans, we sample each pose's scans at a ratio of 1 : 5. The leftmost column in Figure 8 shows examples of raw scans used for training. For evaluation we use the registrations provided with the dataset. Note that the registrations were not used for training. We tested SALD using the VAE architecture, with λ = 1.0 set for the SALD loss. We followed the evaluation protocol of Atzmon & Lipman (2020), using the same train/test split. Note that Atzmon & Lipman (2020) already conducted a comprehensive comparison of SAL versus DeepSDF and AtlasNet (Groueix et al., 2018b), establishing SAL as a state-of-the-art method for this dataset. Thus, we focus on comparison versus SAL and IGR.
Results. Table 2 and Figure 8 show quantitative and qualitative results (resp.); although SALD does not produce the best test quantitative results, it is roughly comparable in every measure to the better of the two baselines. That is, it produces details comparable to IGR while maintaining the minimal surface property as SAL does, without adding the undesired surface sheets that IGR does; see the figure for visual illustrations of these properties: the high level of details of SALD and IGR compared to SAL, and the extraneous parts added by IGR, avoided by SALD. These phenomena can also be seen quantitatively, e.g., in the reconstruction-to-registration loss of IGR. Figure 9 shows SALD generated shapes via latent space interpolation between two test scans. Notice the ability of SALD to generate novel mixed faces and body parts."
}, { "heading": "4.3 SAMPLE COMPLEXITY", "text": "1000 5000 10000 20000\n0.1\n0.5\n1\n2\nSample Size\nC ha\nm fe\nr d is\nta nc\ne\nSALD SAL\n1000 5000 10000 20000\n0.1\n0.5\nSample Size\nC ha\nm fe\nr d is\nta nc\ne\nSALD SAL\n1000 5000 10000 20000\n0.1\n0.5\n1\n2\nSALD SAL\nSample Size\nC ha\nm fe\nr d is\nta nc\ne\n(chair) (sofa) (table) In this experiment we test the sample complexity hypothesis: namely, whether regressing with derivatives improves shape reconstruction accuracy, under a fixed budget of point samples. This experiment considers 3 different shapes chosen randomly from the chair, sofa and table test sets of the ShapeNet dataset. For each shape, we prepared a fixed sample set of points {xi}mi=1, where m ∈ {1K, 5K, 10K, 20K}, together with the unsigned distance value and derivative {h(xi),∇xh(xi)}mi=1. The point samples are drawn according to distribution D as explained in Appendix A.2.1. We separately trained the SAL and SALD losses on the same sample data, using the same hyper-parameters, in two different scenarios: i) Individual shape reconstruction: optimizing the weights θ of a randomly initialized 8-layer MLP f(x; θ); and ii) latent shape reconstruction: given a trained auto-decoder network f(x, z; θ) (as in 4.1), we optimize solely the latent code z, keeping the weights θ fixed. Lastly, we computed the Chamfer distance between the learned shape S and the input geometry X . For the individual shape reconstruction, the inset figure shows the Chamfer distance, dC(S,X ), as a function of the sample size m. Figure 10 shows for each sample size, the learned sofa and table. Note that SALD demonstrates better approximation to the input geometry in comparison to SAL,\nin particular as the sample size gets smaller, and thus supporting the sample complexity hypothesis. When optimizing the latent code of a fully trained auto-decoder, the sample size has a little to no effect on the approximation quality of a test shape reconstruction. This can be explained by the fact that the auto-decoder is trained on the maximal sample size, and therefore provides a strong prior for the latent reconstruction. See the supplementary A.5, for the results. 4.4 LIMITATIONS\nFigure 11 shows typical failure cases of our method from the ShapeNet experiment described above. We mainly suffer from two types of failures: First, since inside and outside information is not known (and often not even well defined in ShapeNet models) SALD can add surface sheets closing what\nshould be open areas (e.g., the bottom side of the lamp, or holes in the chair). Second, thin structures can be missed (e.g., the electric cord of the lamp on the left). A useful strategy to sample thin structures is to make sure the sample frequency is inversely proportional to the distance to the medial axis Amenta et al. (1998), where an approximation can be made using curvature estimation. Furthermore, it is important to note that implicit representations of the type presented in equation 1 cannot model surfaces with boundaries and therefore cannot represent flat dimensionless surfaces with boundaries. A potential solution could be incorporating additional implicits to handle boundaries." }, { "heading": "5 CONCLUSIONS", "text": "We introduced SALD, a method for learning implicit neural representations from raw data. The method is based on a generalization of the sign agnostic learning idea to include derivative data. 
We demonstrated that the addition of a sign agnostic derivative term to the loss improves the approximation power of the resulting signed implicit neural network. In particular, we showed improvement in the level of detail and sharp features of the reconstructions. Furthermore, we identified the favorable minimal surface property of the SAL and SALD losses and provided a theoretical justification in 2D. Generalizing this theoretical analysis to 3D is marked as interesting future work.
We see two more possible avenues for future work: First, it is clear that there is room for further improvement in the approximation properties of implicit neural representations. Although the results on D-Faust are already close to the input quality, on ShapeNet we still see a gap between input models and their implicit neural representations; this challenge already exists in overfitting a large collection of diverse shapes in the training stage. Improvement can come from adding expressive power to the neural networks, or from further improving the training losses; adding derivatives as done in this paper is one step in that direction but does not solve the problem completely. Combining sign agnostic learning with the recent positional encoding method (Tancik et al., 2020) could also be an interesting future research avenue. Another interesting project is to combine the sign-agnostic losses with gradient regularization such as the one employed in IGR (Gropp et al., 2020). Second, it is interesting to think of applications or settings in which SALD can improve the current state-of-the-art. Generative 3D modeling, learning geometry with 2D supervision, or other types of partially observed scans such as depth images are all potentially fruitful options." }, { "heading": "ACKNOWLEDGMENTS", "text": "The research was supported by the European Research Council (ERC Consolidator Grant, "LiftMatch" 771136), the Israel Science Foundation (Grant No. 1830/17) and by a research grant from the Carolito Stiftung (WAIC)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 1", "text": "Lemma 1. For any pair of unit vectors a, b: min{‖a − b‖, ‖a + b‖} ≥ |sin∠(a, b)|.
Proof. Let a, b ∈ R^d be arbitrary unit norm vectors. Then,

\min \{ \| a - b \|, \| a + b \| \} = \left[ \min \{ 2 - 2\langle a, b \rangle,\; 2 + 2\langle a, b \rangle \} \right]^{1/2} = \sqrt{2}\, \left[ 1 - |\langle a, b \rangle| \right]^{1/2} = 2 \left[ \frac{1 - |\cos \angle(a, b)|}{2} \right]^{1/2} \geq |\sin \angle(a, b)| ,

where the last inequality can be proved by considering two cases: α ∈ [0, π/2] and α ∈ [π/2, π], where we denote α = ∠(a, b). In the first case, α ∈ [0, π/2], we have cos α ≥ 0, and in this case \sqrt{(1 - \cos\alpha)/2} = |\sin(\alpha/2)|. The inequality is proved by considering

2 \left| \sin \frac{\alpha}{2} \right| - |\sin \alpha| = 2 \sin \frac{\alpha}{2} - \sin \alpha = 2 \sin \frac{\alpha}{2} \left( 1 - \cos \frac{\alpha}{2} \right) \geq 0

for α ∈ [0, π/2]. For the case α ∈ [π/2, π] we have \sqrt{(1 + \cos\alpha)/2} = |\cos(\alpha/2)|. This case is proved by considering

2 \left| \cos \frac{\alpha}{2} \right| - |\sin \alpha| = 2 \cos \frac{\alpha}{2} - \sin \alpha = 2 \cos \frac{\alpha}{2} \left( 1 - \sin \frac{\alpha}{2} \right) \geq 0

for α ∈ [π/2, π].
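As a quick numerical sanity check of Lemma 1 (not part of the original appendix), one can verify the inequality on random unit vectors in R^3, where |sin∠(a, b)| equals the norm of the cross product ‖a × b‖; everything in this sketch is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100_000):
    a, b = rng.normal(size=(2, 3))
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    lhs = min(np.linalg.norm(a - b), np.linalg.norm(a + b))
    rhs = np.linalg.norm(np.cross(a, b))  # |sin(angle(a, b))| for unit vectors
    assert lhs >= rhs - 1e-12
print("Lemma 1 holds on all sampled pairs.")
```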
[Figure A1: Additional 2D reconstruction results; panels show the unsigned distance, SALD, and IGR (Gropp et al., 2020).]
A.2 IMPLEMENTATION DETAILS" }, { "heading": "A.2.1 DATA PREPARATION", "text": "Given some raw 3D data X, the SALD loss (see equation 6) is computed on points and corresponding unsigned distance derivatives, {h(x)}_{x∈D} and {∇_x h(x′)}_{x′∈D′} (resp.), sampled from some distributions D and D′. In this paper, we set D = D_1 ∪ D_2, where D_1 is chosen by uniformly sampling points {y} from X and placing two isotropic Gaussians, N(y, σ_1^2 I) and N(y, σ_2^2 I), for each y. The distribution parameter σ_1 depends on each point y and is set to the distance of the 50th closest point to y, whereas σ_2 is fixed to 0.3. D_2 is chosen by projecting D_1 onto S. The distribution D′ is set to uniform on X; note that on X, ∇_x h(x′) is a sub-differential, which is the convex hull of the two possible normal vectors (±n) at x′; as the sign-agnostic loss does not distinguish between the two normal choices, we arbitrarily use one of them in the loss. Computing the unsigned distance to X is done using the CGAL library (The CGAL Project, 2020). To speed up training, we precomputed, for each shape in the dataset, 500K samples of the form {h(x)}_{x∈D} and {∇_x h(x′)}_{x′∈D′}." }, { "heading": "A.2.2 GRADIENT COMPUTATION", "text": "The SALD loss requires incorporating the term ∇_x f(x; θ) in a differentiable manner. Our computation of ∇_x f(x; θ) is based on the forward mode of automatic differentiation (Baydin et al., 2017). Similarly to Gropp et al. (2020), ∇_x f(x; θ) is constructed as a network consisting of layers of the form

\nabla_x y^{\ell+1} = \mathrm{diag}\big( \sigma'\big( W^{\ell+1} y^{\ell} + b^{\ell+1} \big) \big)\, W^{\ell+1} \nabla_x y^{\ell}

where y^ℓ denotes the output of the ℓ-th layer in f(x; θ) and θ = (W^ℓ, b^ℓ) are the learnable parameters." }, { "heading": "A.2.3 TIMINGS AND NETWORK SIZE", "text": "In Figure A2, we report the timings and memory footprint of an 8-layer MLP with 512 hidden units. As the gradient calculation, ∇_x f(x; θ), is based on forward-mode automatic differentiation, in theory it should double the forward time. However, in practice we see that the gap increases as we increase the number of points for evaluation. For the D-Faust experiment (which uses the largest dataset in the paper), training was done with a batch of 64 shapes and a sample size of 922. It took around 1.5 days to complete 3000 epochs with 4 Nvidia V100 32GB GPUs. Note that for the VAE, the computational cost at test time is equivalent between SAL and SALD.
[Figure A2: Forward time in milliseconds (left) and network memory footprint in MB (right), reported for SALD and SAL on various sample sizes.]" }, { "heading": "A.2.4 ARCHITECTURE DETAILS", "text": "" }, { "heading": "VAE ARCHITECTURE", "text": "Our VAE architecture is based on the one used in Atzmon & Lipman (2020). The encoder g(X; θ_1), where X ∈ R^{N×3} is the input point cloud, is composed of DeepSets (Zaheer et al., 2017) and PointNet (Qi et al., 2017) layers. Each layer consists of

\mathrm{PFC}(d_{in}, d_{out}) : X \mapsto \nu\big( X W + \mathbf{1} b^T \big) \qquad \mathrm{PL}(d_{in}, 2 d_{in}) : Y \mapsto [\, Y,\; \max(Y)\, \mathbf{1} \,]

where [·, ·] is the concatenation operation, W ∈ R^{d_in × d_out} and b ∈ R^{d_out} are the layer weights and bias, and ν(·) is the pointwise non-linear ReLU activation function. Our encoder architecture is:

PFC(3, 128) → PFC(128, 128) → PL(128, 256) → PFC(256, 128) → PL(128, 256) → PFC(256, 128) → PL(128, 256) → PFC(256, 128) → PL(128, 256) → PFC(256, 256) → MaxPool → ×2 FC(256, 256),

where FC(d_in, d_out) : x ↦ ν(Wx + b) denotes a fully connected layer. The final two fully connected layers output vectors µ ∈ R^256 and η ∈ R^256, used for the parametrization of a multivariate Gaussian N(µ, diag(exp η)) used for sampling a latent vector z ∈ R^256. Our encoder architecture is similar to the one used in Mescheder et al. (2019).
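For concreteness, the PFC and PL layers above can be realized in a few lines of PyTorch; the following is a minimal sketch (class names, tensor layout, and the batch dimension are our own assumptions), not the authors' released code.

```python
import torch
import torch.nn as nn

class PFC(nn.Module):
    """Pointwise fully connected layer: X -> relu(X W + 1 b^T)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)

    def forward(self, X):  # X: (batch, n_points, d_in)
        return torch.relu(self.fc(X))

class PL(nn.Module):
    """PointNet layer: concatenate each point feature with the max-pooled
    global feature, Y -> [Y, max(Y) 1], doubling the feature dimension."""
    def forward(self, Y):  # Y: (batch, n_points, d_in)
        g = Y.max(dim=1, keepdim=True).values.expand_as(Y)
        return torch.cat([Y, g], dim=-1)  # (batch, n_points, 2 * d_in)
```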
Our decoder f([x, z]; θ_2) is a composition of 8 layers, where the first layer is FC(256 + 3, 512), the middle layers are FC(512, 512), and the final layer is Linear(512, 1). Notice that the input for the decoder is [x, z], where x ∈ R^3 and z is the latent vector. In addition, we add a skip connection from the input to the middle (fourth) layer. We chose the Softplus with β = 100 for the non-linear activation in the FC layers. For regularization of the latent z, we add the term 0.001 (‖µ‖_1 + ‖η + 1‖_1) to the training loss, similarly to Atzmon & Lipman (2020)." }, { "heading": "AUTO-DECODER ARCHITECTURE", "text": "We use an auto-decoder architecture, similar to the one suggested in Park et al. (2019). We define the latent vector z ∈ R^256. The decoder architecture is the same as the one described above for the VAE. For regularization of the latent z, we add the term 0.001 ‖z‖_2^2 to the loss, similarly to Park et al. (2019)." }, { "heading": "A.3 TRAINING DETAILS", "text": "We trained our networks using the ADAM (Kingma & Ba, 2014) optimizer, setting the batch size to 64. On each training step the SALD loss is evaluated on a random draw of 922 points out of the precomputed 500K samples. For the VAE, we set a fixed learning rate of 0.0005, whereas for the AD we scheduled the learning rate to start from 0.0005 and decrease by a factor of 0.5 every 500 epochs. All models were trained for 3000 epochs. Training was done on 4 Nvidia V-100 GPUs, using the PYTORCH deep learning framework (Paszke et al., 2017)." }, { "heading": "A.4 FIGURES 2 AND 4", "text": "For the two-dimensional experiments in Figures 2 and 4 we have used the same decoder as in the VAE architecture, with the only difference that the first layer is FC(2, 512) (no concatenation of a latent vector to the 2D input). We optimized using the ADAM (Kingma & Ba, 2014) optimizer, for 5000 epochs. The parameter λ in the SALD loss was set to 0.1.
[Figure A3: Latent reconstruction sample complexity experiment: Chamfer distance to the input as a function of the sample size (1K-20K), for SALD and SAL on the chair, sofa, and table shapes. Note the Chamfer distance of the latent reconstruction is oblivious to sample size.]" }, { "heading": "A.5 SAMPLE COMPLEXITY", "text": "Figures A3 and A4 show quantitative and qualitative results for auto-decoder latent test shape reconstruction on samples of sizes 1K, 5K, 10K, 20K. Note that the reconstruction is oblivious to the sample size. This is possibly due to the fact that the auto-decoder was trained with the maximal sample size.
Figure A4: Latent reconstruction sample complexity experiment: SAL is left, SALD is right. Note the latent reconstruction is oblivious to sample size." }, { "heading": "A.6 EVALUATION", "text": "Evaluation metrics. We use the following Chamfer distance metrics to measure similarity between shapes:

d_C(X_1, X_2) = \frac{1}{2} \big( d_C^{\rightarrow}(X_1, X_2) + d_C^{\rightarrow}(X_2, X_1) \big)   (7)

where

d_C^{\rightarrow}(X_1, X_2) = \frac{1}{|X_1|} \sum_{x_1 \in X_1} \min_{x_2 \in X_2} \| x_1 - x_2 \|   (8)

and the sets X_i are either point clouds or triangle soups. In addition, to measure similarity of the normals of triangle soups T_1, T_2, we define:

d_N(T_1, T_2) = \frac{1}{2} \big( d_N^{\rightarrow}(T_1, T_2) + d_N^{\rightarrow}(T_2, T_1) \big) ,   (9)

where

d_N^{\rightarrow}(T_1, T_2) = \frac{1}{|T_1|} \sum_{x_1 \in T_1} \angle(n(x_1), n(\hat{x}_1)) ,   (10)

where ∠(a, b) is the positive angle between vectors a, b ∈ R^3, n(x_1) denotes the face normal at a point x_1 in triangle soup T_1, and \hat{x}_1 is the projection of x_1 onto T_2.
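Given uniform point samples from each surface, equations 7-10 reduce to a few lines of code. The sketch below is only illustrative: it uses a k-d tree for the nearest-neighbor searches and substitutes the nearest sample point (with its normal) for the exact projection x̂_1 of equation 10, which is an approximation of this sketch, not of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_sided_chamfer(X1, X2):
    """d_C^->(X1, X2) of equation 8: mean nearest-neighbor distance, (n, 3) arrays."""
    d, _ = cKDTree(X2).query(X1)
    return d.mean()

def chamfer(X1, X2):
    """Symmetric Chamfer distance d_C of equation 7."""
    return 0.5 * (one_sided_chamfer(X1, X2) + one_sided_chamfer(X2, X1))

def one_sided_normal(X1, N1, X2, N2):
    """d_N^->(T1, T2) of equation 10, approximating the projection x_hat_1
    by the nearest sample point; N1, N2 hold unit normals per point."""
    _, idx = cKDTree(X2).query(X1)
    cos = np.einsum('ij,ij->i', N1, N2[idx])
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()
```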
Tables 1 and 2 in the main paper report a quantitative evaluation of our method, compared to other baselines. The meshing of the learned implicit representation was done using the MARCHING CUBES algorithm (Lorensen & Cline, 1987) on a uniform cubical grid of size 512^3. Computing the evaluation metrics d_C and d_N is done on a uniform sample of 30K points from the meshed surface." } ]
2021
SALD: SIGN AGNOSTIC LEARNING WITH DERIVATIVES
SP:3120ae529b5b2964470ad055d1f13989f192c961
[ "This work introduces SPRT-TANDEM an algorithm to train a sequential probability ratio test (SPRT) as a neural network. This network is then used to discriminate between two hypotheses as fast as possible (seeing the smallest number of observations in a sequence) while maintaining a certain level of accuracy. The main contribution of this work is to enable Wald's SPRT without actual knowledge of the ratio, learning a neural network to model it. " ]
Classifying sequential data as early and as accurately as possible is a challenging yet critical problem, especially when the sampling cost is high. One algorithm that achieves this goal is the sequential probability ratio test (SPRT), which is known as Bayes-optimal: it can keep the expected number of data samples as small as possible, given the desired error upper-bound. However, the original SPRT makes two critical assumptions that limit its application in real-world scenarios: (i) samples are independently and identically distributed, and (ii) the likelihood of the data being derived from each class can be calculated precisely. Here, we propose the SPRT-TANDEM, a deep neural network-based SPRT algorithm that overcomes the above two obstacles. The SPRT-TANDEM sequentially estimates the log-likelihood ratio of two alternative hypotheses by leveraging a novel Loss function for Log-Likelihood Ratio estimation (LLLR) while allowing correlations with up to N (∈ N) preceding samples. In tests on one original and two public video databases, Nosaic MNIST, UCF101, and SiW, the SPRT-TANDEM achieves statistically significantly better classification accuracy than other baseline classifiers, with a smaller number of data samples. The code and Nosaic MNIST are publicly available at https://github.com/TaikiMiyagawa/SPRT-TANDEM.
[ { "affiliations": [], "name": "Akinori F. Ebihara" }, { "affiliations": [], "name": "Taiki Miyagawa" }, { "affiliations": [], "name": "Kazuyuki Sakurai" }, { "affiliations": [], "name": "Hitoshi Imaoka" } ]
[ { "authors": [ "O. Vinyals", "P. Warden", "M. Wattenberg", "M. Wicke", "Y. Yu", "X. Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "P. Armitage" ], "title": "Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1950 }, { "authors": [ "F. Ashby" ], "title": "A biased random walk model for two choice reaction times", "venue": "Journal of Mathematical Psychology,", "year": 1983 }, { "authors": [ "A. Bagnall", "J. Lines", "J. Hills", "A. Bostrom" ], "title": "Time-series classification with cote: The collective of transformation-based ensembles", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2015 }, { "authors": [ "A. Bagnall" ], "title": "Time series classification with ensembles of elastic distance measures", "venue": "Data Mining and Knowledge Discovery,", "year": 2014 }, { "authors": [ "C.W. Baum", "V.V. Veeravalli" ], "title": "A sequential procedure for multihypothesis testing", "venue": "IEEE Transactions on Information Theory,", "year": 1994 }, { "authors": [ "N.H. Bingham", "G. Peskir" ], "title": "Optimal stopping and dynamic programming", "venue": null, "year": 2006 }, { "authors": [ "V. Campos", "B. Jou", "X.G. i Nieto", "J. Torres", "S.-F. Chang" ], "title": "Skip rnn: Learning to skip state updates in recurrent neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "J. Carreira", "A. Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "A. Czarnecki" ], "title": "Positronium properties", "venue": "arXiv preprint hep-ph/9911455,", "year": 1999 }, { "authors": [ "H.A. Dau", "E. Keogh", "K. Kamgar", "C.-C.M. Yeh", "Y. Zhu", "S. Gharghabi", "C.A. Ratanamahatana", "Yanping", "B. Hu", "N. Begum", "A. Bagnall", "A. Mueen", "G. Batista" ], "title": "The ucr time series classification", "venue": null, "year": 2018 }, { "authors": [ "K. Doya" ], "title": "Modulators of decision making", "venue": "Nat. Neurosci.,", "year": 2008 }, { "authors": [ "V.P. Dragalin", "A.G. Tartakovsky", "V.V. Veeravalli" ], "title": "Multihypothesis sequential probability ratio tests .i. asymptotic optimality", "venue": "IEEE Transactions on Information Theory,", "year": 1999 }, { "authors": [ "V.P. Dragalin", "A.G. Tartakovsky", "V.V. Veeravalli" ], "title": "Multihypothesis sequential probability ratio tests. ii. accurate asymptotic expansions for the expected sample size", "venue": "IEEE Transactions on Information Theory,", "year": 2000 }, { "authors": [ "J. Duchi", "E. Hazan", "Y. Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "W. Edwards" ], "title": "Optimal strategies for seeking information: Models for statistics, choice reaction times, and human information processing", "venue": "Journal of Mathematical Psychology,", "year": 1965 }, { "authors": [ "J.P. Gallivan", "C.S. Chapman", "D.M. Wolpert", "J.R. Flanagan" ], "title": "Decision-making in sensorimotor control", "venue": "Nat. Rev. Neurosci., 19(9):519–534,", "year": 2018 }, { "authors": [ "J.I. Gold", "M.N. Shadlen" ], "title": "The neural basis of decision making", "venue": "Annu. Rev. 
Neurosci.,", "year": 2007 }, { "authors": [ "A. Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "K. Hara", "H. Kataoka", "Y. Satoh" ], "title": "Learning spatio-temporal features with 3d residual networks for action recognition", "venue": "IEEE International Conference on Computer Vision Workshops (ICCVW),", "year": 2017 }, { "authors": [ "C.R. Harris", "K.J. Millman", "S.J. van der Walt", "R. Gommers", "P. Virtanen", "D. Cournapeau", "E. Wieser", "J. Taylor", "S. Berg", "N.J. Smith", "R. Kern", "M. Picus", "S. Hoyer", "M.H. van Kerkwijk", "M. Brett", "A. Haldane", "J.F. Del Río", "M. Wiebe", "P. Peterson", "P. Gérard-Marchant", "K. Sheppard", "T. Reddy", "W. Weckesser", "H. Abbasi", "C. Gohlke", "T.E. Oliphant" ], "title": "Array programming with NumPy", "venue": "Nature, 585(7825):357–362,", "year": 2020 }, { "authors": [ "M.M. Hu", "H. Sun", "N.J. Kasdin" ], "title": "Sequential generalized likelihood ratio test for planet detection with photon-counting mode", "venue": "Techniques and Instrumentation for Detection of Exoplanets IX,", "year": 2019 }, { "authors": [ "A. Irle", "N. Schmitz" ], "title": "On the optimality of the sprt for processes with continuous time parameter", "venue": "Statistics: A Journal of Theoretical and Applied Statistics,", "year": 1984 }, { "authors": [ "Y.-S. Jeong", "M.K. Jeong", "O.A. Omitaomu" ], "title": "Weighted dynamic time warping for time series classification", "venue": "Pattern Recognition,", "year": 2011 }, { "authors": [ "R. Johari", "P. Koomen", "L. Pekelis", "D. Walsh" ], "title": "Peeking at a/b tests: Why it matters, and what to do about it", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD", "year": 2017 }, { "authors": [ "N. Ju", "D. Hu", "A. Henderson", "L. Hong" ], "title": "A sequential test for selecting the better variant: Online a/b testing, adaptive allocation, and continuous monitoring", "venue": "In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining,", "year": 2019 }, { "authors": [ "T. Kanamori", "S. Hido", "M. Sugiyama" ], "title": "A least-squares approach to direct importance estimation", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "F. Karim", "S. Majumdar", "H. Darabi", "S. Chen" ], "title": "LSTM fully convolutional networks for time series classification", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "R. Kate" ], "title": "Using dynamic time warping distances as features for improved time series classification", "venue": "Data Mining and Knowledge Discovery,", "year": 2015 }, { "authors": [ "H. Khan", "L. Marcuse", "B. Yener" ], "title": "Deep density ratio estimation for change point detection", "venue": "arXiv preprint arXiv:1905.09876,", "year": 2019 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "S. Kira", "T. Yang", "M.N. Shadlen" ], "title": "A neural implementation of wald’s sequential probability rato test", "venue": "Sequential Analysis,", "year": 2015 }, { "authors": [ "T.L. Lai" ], "title": "Asymptotic optimality of invariant sequential probability ratio tests", "venue": "Ann. Statist.,", "year": 1981 }, { "authors": [ "K.W. Latimer", "J.L. Yates", "M.L. Meister", "A.C. Huk", "J.W. 
Pillow" ], "title": "Single-trial spike trains in parietal cortex reveal discrete steps during decision-making", "venue": null, "year": 2015 }, { "authors": [ "E.L. Lehmann", "J.P. Romano" ], "title": "Testing statistical hypotheses", "venue": "Springer Science & Business Media,", "year": 2006 }, { "authors": [ "J. Lines", "S. Taylor", "A. Bagnall" ], "title": "Hive-cote: The hierarchical vote collective of transformationbased ensembles for time series classification", "venue": "IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "V. Lotov" ], "title": "Asymptotic expansions in a sequential likelihood ratio test", "venue": "Theory of Probability & Its Applications,", "year": 1988 }, { "authors": [ "S.M. McClure", "D.I. Laibson", "G. Loewenstein", "J.D. Cohen" ], "title": "Separate neural systems value immediate and delayed monetary rewards", "venue": "Science,", "year": 2004 }, { "authors": [ "H. Nam", "M. Sugiyama" ], "title": "Direct density ratio estimation with convolutional neural networks with application in outlier detection", "venue": "IEICE TRANSACTIONS on Information and Systems,", "year": 2015 }, { "authors": [ "E. Nikishin", "P. Izmailov", "B. Athiwaratkun", "D. Podoprikhin", "T. Garipov", "P. Shvechikov", "D. Vetrov", "A.G. Wilson" ], "title": "Improving stability in deep reinforcement learning with weight averaging", "venue": "In Uncertainty in artificial intelligence workshop on uncertainty in Deep learning,", "year": 2018 }, { "authors": [ "G. Okazawa", "C.E. Hatch", "A. Mancoo", "C.K. Machens", "R. Kiani" ], "title": "The geometry of the representation of decision variable and stimulus difficulty in the parietal cortex", "venue": "bioRxiv,", "year": 2021 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin", "N. Gimelshein", "L. Antiga", "A. Desmaison", "A. Kopf", "E. Yang", "Z. DeVito", "M. Raison", "A. Tejani", "S. Chilamkurthy", "B. Steiner", "L. Fang", "J. Bai", "S. Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "J.D. Roitman", "M.N. Shadlen" ], "title": "Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task", "venue": "J. Neurosci.,", "year": 2002 }, { "authors": [ "P.H. Rudebeck", "M.E. Walton", "A.N. Smyth", "D.M. Bannerman", "M.F. Rushworth" ], "title": "Separate neural pathways process different decision costs", "venue": "Nat. Neurosci.,", "year": 2006 }, { "authors": [ "D.E. Rumelhart", "G.E. Hinton", "R.J. Williams" ], "title": "Learning representations by back-propagating errors", "venue": "Nature, 323:533–536,", "year": 1986 }, { "authors": [ "P. Schäfer", "U. Leser" ], "title": "Multivariate time series classification with weasel+ muse", "venue": "arXiv preprint arXiv:1711.11343,", "year": 2017 }, { "authors": [ "M.N. Shadlen", "R. Kiani", "W.T. Newsome", "J.I. Gold", "D.M. Wolpert", "A. Zylberberg", "J. Ditterich", "V. de Lafuente", "T. Yang", "J. Roitman" ], "title": "Comment on \"Single-trial spike trains in parietal cortex reveal discrete steps during decision-making", "venue": null, "year": 2016 }, { "authors": [ "J. Sochman", "J. Matas" ], "title": "Waldboost - learning for time constrained sequential detection", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "M. 
Stone" ], "title": "Models for choice-reaction", "venue": "time. Psychometrika,", "year": 1960 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "S. Nakajima", "H. Kashima", "P. von Bünau", "M. Kawanabe" ], "title": "Direct importance estimation for covariate shift adaptation", "venue": "Annals of the Institute of Statistical Mathematics,", "year": 2008 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "T. Kanamori" ], "title": "Density ratio estimation: A comprehensive review (statistical experiment and its related topics)", "venue": null, "year": 2010 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "T. Kanamori" ], "title": "Density ratio estimation in machine learning", "venue": null, "year": 2012 }, { "authors": [ "S.C. Tanaka", "K. Doya", "G. Okada", "K. Ueda", "Y. Okamoto", "S. Yamawaki" ], "title": "Prediction of immediate and future rewards differentially recruits cortico-basal ganglia", "venue": "loops. Nat. Neurosci.,", "year": 2004 }, { "authors": [ "A. Tartakovsky" ], "title": "Sequential methods in the theory of information systems (in Russian)", "venue": "Radio i Svyaz’,", "year": 1991 }, { "authors": [ "A. Tartakovsky" ], "title": "Asymptotically optimal sequential tests for nonhomogeneous processes", "venue": "Sequential Analysis, 17,", "year": 1999 }, { "authors": [ "A. Tartakovsky", "I. Nikiforov", "M. Basseville" ], "title": "Sequential Analysis: Hypothesis Testing and Changepoint Detection", "venue": "Chapman & Hall/CRC,", "year": 2014 }, { "authors": [ "V.V. Veeravalli", "C.W. Baum" ], "title": "Asymptotic efficiency of a sequential multihypothesis test", "venue": "IEEE Transactions on Information Theory,", "year": 1994 }, { "authors": [ "P. Viola", "M. Jones" ], "title": "Rapid object detection using a boosted cascade of simple features", "venue": "In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001,", "year": 2001 }, { "authors": [ "Z. Wang", "W. Yan", "T. Oates" ], "title": "Time series classification from scratch with deep neural networks: A strong baseline", "venue": "In 2017 International Joint Conference on Neural Networks (IJCNN),", "year": 2017 }, { "authors": [ "L. Wei", "E.J. Keogh" ], "title": "Semi-supervised time series classification", "venue": "In KDD ’06,", "year": 2006 }, { "authors": [ "X. Xiong", "F. De la Torre" ], "title": "Supervised descent method and its applications to face alignment", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2013 }, { "authors": [ "K. Yang", "C. Shahabi" ], "title": "An efficient k nearest neighbor search for multivariate time series", "venue": "Information and Computation,", "year": 2007 } ]
[ { "heading": "1 INTRODUCTION", "text": "The sequential probability ratio test, or SPRT, was originally invented by Abraham Wald, and an equivalent approach was also independently developed and used by Alan Turing in the 1940s (Good, 1979; Simpson, 2010; Wald, 1945). SPRT calculates the log-likelihood ratio (LLR) of two competing hypotheses and updates the LLR every time a new sample is acquired until the LLR reaches one of the two thresholds for alternative hypotheses (Figure 1). Wald and his colleagues proved that when sequential data are sampled from independently and identically distributed (i.i.d.) data, SPRT can minimize the required number of samples to achieve the desired upper-bounds of false positive and false negative rates comparably to the Neyman-Pearson test, known as the most powerful likelihood test (Wald & Wolfowitz, 1948) (see also Theorem (A.5) in Appendix A). Note that Wald used the i.i.d. assumption only for ensuring a finite decision time (i.e., LLR reaches a threshold within finite steps) and for facilitating LLR calculation: the non-i.i.d. property does not affect other aspects of the SPRT including the error upper bounds (Wald, 1947). More recently, Tartakovsky et al. verified that the non-i.i.d. SPRT is optimal or at least asymptotically optimal as the sample size increases (Tartakovsky et al., 2014), opening the possibility of potential applications of the SPRT to non-i.i.d. data series.\nAbout 70 years after Wald’s invention, neuroscientists found that neurons in the part of the primate brain called the lateral intraparietal cortex (LIP) showed neural activities reminiscent of the SPRT (Kira et al., 2015); when a monkey sequentially collects random pieces of evidence to make a binary choice, LIP neurons show activities proportional to the LLR. Importantly, the time of the decision can be predicted from when the neural activity reaches a fixed threshold, the same as the SPRT’s decision rule. Thus, the SPRT, the optimal sequential decision strategy, was re-discovered to be an\nalgorithm explaining primate brains’ computing strategy. It remains an open question, however, what algorithm will be used in the brain when the sequential evidence is correlated, non-i.i.d. series.\nThe SPRT is now used for several engineering applications (Cabri et al., 2018; Chen et al., 2017; Kulldorff et al., 2011). However, its i.i.d. assumption is too crude for it to be applied to other real-world scenarios, including time-series classification, where data are highly correlated, and key dynamic features for classification often extend across more than one data point, violating the i.i.d. assumption. Moreover, the LLR of alternative hypotheses needs to be calculated as precisely as possible, which is infeasible in many practical applications.\nIn this paper, we overcome the above difficulties by using an SPRT-based algorithm that Treats data series As an N-th orDEr Markov process (SPRT-TANDEM), aided by a sequential probability density ratio estimation based on deep neural networks. A novel Loss function for Log-Likelihood Ratio estimation (LLLR) efficiently estimates the density ratio that let the SPRT-TANDEM approach close to asymptotic Bayes-optimality (i.e., Appendix A.4). In other words, LLLR optimizes classification speed and accuracy at the same time. The SPRT-TANDEM can classify non-i.i.d. data series with user-defined model complexity by changing N(∈ N), the order of approximation, to define the number of past samples on which the given sample depends. 
By dynamically changing the number of samples used for classification, the SPRT-TANDEM can maintain high classification accuracy while minimizing the sample size as much as possible. Moreover, the SPRT-TANDEM enables a user to flexibly control the speed-accuracy tradeoff without additional training, making it applicable to various practical applications.
We test the SPRT-TANDEM on our new database, Nosaic MNIST (NMNIST), in addition to the publicly available UCF101 action recognition database (Soomro et al., 2012) and the Spoofing in the Wild (SiW) database (Liu et al., 2018). Two-way analysis of variance (ANOVA, (Fisher, 1925)) followed by a Tukey-Kramer multi-comparison test (Tukey, 1949; Kramer, 1956) shows that our proposed SPRT-TANDEM provides statistically significantly higher accuracy than other fixed-length and variable-length classifiers at a smaller number of data samples, making Wald's SPRT applicable even to non-i.i.d. data series. Our contribution is fivefold:
1. We invented a deep neural network-based algorithm, SPRT-TANDEM, which enables Wald's SPRT on arbitrary sequential data without knowing the true LLR.
2. The SPRT-TANDEM extends the SPRT to non-i.i.d. data series without knowing the true LLR.
3. With a novel loss, the LLLR, the SPRT-TANDEM sequentially estimates the LLR to optimize speed and accuracy simultaneously.
4. The SPRT-TANDEM can control the speed-accuracy tradeoff without additional training.
5. We introduce Nosaic MNIST, a novel early-classification database." }, { "heading": "2 RELATED WORK", "text": "The SPRT-TANDEM has multiple interdisciplinary intersections with other fields of research: Wald's classical SPRT, probability density estimation, neurophysiological decision making, and time-series classification. The comprehensive review is left to Appendix B, while in the following, we introduce the SPRT, probability density ratio estimation algorithms, and early classification of time series.
Sequential Probability Ratio Test (SPRT). The SPRT, denoted by δ*, is defined as the tuple of a decision rule and a stopping rule (Tartakovsky et al., 2014; Wald, 1947):
Definition 2.1. Sequential Probability Ratio Test (SPRT). Let λ_t be the LLR at time t, and let X^{(1,T)} be sequential data X^{(1,T)} := \{x^{(t)}\}_{t=1}^{T}. Given the absolute values of the lower and upper decision thresholds, a_0 ≥ 0 and a_1 ≥ 0, the SPRT, δ*, is defined as

\delta^* = (d^*, \tau^*) ,   (1)

where the decision rule d* and stopping time τ* are

d^*(X^{(1,T)}) = \begin{cases} 1 & \text{if } \lambda_{\tau^*} \geq a_1 \\ 0 & \text{if } \lambda_{\tau^*} \leq -a_0 \end{cases} ,   (2)

\tau^* = \inf \{ T \geq 0 \mid \lambda_T \notin (-a_0, a_1) \} .   (3)

We review the proof of optimality in Appendix A.4, while Figure 1 shows an intuitive explanation.
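For concreteness, Definition 2.1 amounts to a short decision loop over the incoming LLRs. The sketch below is our own illustration; the fallback when neither threshold is reached by the horizon T (a forced decision on the sign of the final LLR) is an assumption of this sketch, not part of the definition.

```python
def sprt(llrs, a0, a1):
    """Run the SPRT of Definition 2.1 on a stream of cumulative LLRs.

    llrs: lambda_1, ..., lambda_T (cumulative log-likelihood ratios);
    a0, a1: absolute values of the lower/upper decision thresholds.
    Returns (decision, stopping_time).
    """
    for t, llr in enumerate(llrs, start=1):
        if llr >= a1:
            return 1, t          # accept hypothesis y = 1
        if llr <= -a0:
            return 0, t          # accept hypothesis y = 0
    return int(llrs[-1] >= 0), len(llrs)  # forced decision at the horizon
```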
Probability density ratio estimation. Instead of estimating the numerator and denominator of a density ratio separately, probability density ratio estimation algorithms estimate the ratio as a whole, reducing the degrees of freedom for more precise estimation (Sugiyama et al., 2010; 2012). Two of the probability density ratio estimation algorithms that are closely related to our work are the probabilistic classification (Bickel et al., 2007; Cheng & Chu, 2004; Qin, 1998) and density fitting (Sugiyama et al., 2008; Tsuboi et al., 2009) algorithms. As we show in Section 4 and Appendix E, the SPRT-TANDEM sequentially estimates the LLR by combining the two algorithms.
Early classification of time series. To make decision time as short as possible, algorithms for early classification of time series can handle variable-length data (Mori et al., 2018; Mori et al., 2016; Xing et al., 2009; 2012) to minimize high sampling costs (e.g., medical diagnostics (Evans et al., 2015; Griffin & Moorman, 2001), or stock crisis identification (Ghalwash et al., 2014)). Leveraging deep neural networks is no exception in the early classification of time series (Dennis et al., 2018; Suzuki et al., 2018). Long short-term memory (LSTM) variants LSTM-s/LSTM-m impose monotonicity on the classification score and the inter-class margin, respectively, to speed up action detection (Ma et al., 2016). Early and Adaptive Recurrent Label ESTimator (EARLIEST) combines reinforcement learning and a recurrent neural network to decide when to classify and assign a class label (Hartvigsen et al., 2019)." }, { "heading": "3 PROPOSED ALGORITHM: SPRT-TANDEM", "text": "In this section, we propose the TANDEM formula, which provides the N-th order approximation of the LLR with respect to posterior probabilities. The i.i.d. assumption of Wald's SPRT greatly simplifies the LLR calculation at the expense of the precise temporal relationship between data samples. On the other hand, incorporating a long correlation among multiple data may improve the LLR estimation; however, calculating too long a correlation may potentially be detrimental in the following cases. First, if a class signature is significantly shorter than the correlation length in consideration, uninformative data samples are included in calculating the LLR, resulting in a late or wrong decision (Campos et al., 2018). Second, long correlations require backpropagation over a long range, which is prone to the vanishing gradient problem (Hochreiter et al., 2001). Thus, we relax the i.i.d. assumption by keeping only up to the N-th order correlation to calculate the LLR.
The TANDEM formula. Here, we introduce the TANDEM formula, which computes the approximated LLR, the decision value of the SPRT-TANDEM algorithm. The data series is approximated as an N-th order Markov process. For the complete derivation of the 0th (i.i.d.), 1st, and N-th order TANDEM formulas, see Appendix C. Given a maximum timestamp T ∈ N, let X^{(1,T)} and y be sequential data X^{(1,T)} := \{x^{(t)}\}_{t=1}^{T} and a class label y ∈ {1, 0}, respectively, where x^{(t)} ∈ R^{d_x} and d_x ∈ N. By using Bayes' rule with the N-th order Markov assumption, the joint LLR of the data at a timestamp t is written as follows:

\log \frac{p(x^{(1)}, x^{(2)}, \ldots, x^{(t)} \mid y = 1)}{p(x^{(1)}, x^{(2)}, \ldots, x^{(t)} \mid y = 0)} = \sum_{s=N+1}^{t} \log \frac{p(y = 1 \mid x^{(s-N)}, \ldots, x^{(s)})}{p(y = 0 \mid x^{(s-N)}, \ldots, x^{(s)})} - \sum_{s=N+2}^{t} \log \frac{p(y = 1 \mid x^{(s-N)}, \ldots, x^{(s-1)})}{p(y = 0 \mid x^{(s-N)}, \ldots, x^{(s-1)})} - \log \frac{p(y = 1)}{p(y = 0)}   (4)

(see Equations (84) and (85) in Appendix C for the full formula). Hereafter we use the terms k-let or multiplet to indicate the posterior probabilities, p(y | x^{(1)}, ..., x^{(k)}) = p(y | X^{(1,k)}), that consider correlations across k data points. The first two terms of the TANDEM formula (Equation (4)), the N+1-let and N-let terms, have opposite signs, working in "tandem" to adjust each other in computing the LLR. The third term is a prior (bias) term. In the experiment, we assume a flat prior, or a zero bias term, but a user may impose a non-flat prior to handle the biased distribution of a dataset.
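Given the posterior estimates, equation 4 reduces to a pair of cumulative sums over time. The following is a hedged NumPy sketch (array names and layout are illustrative assumptions; the posteriors would come from the network described below):

```python
import numpy as np

def tandem_llr(post_nplus1, post_n, prior=(0.5, 0.5)):
    """Evaluate the N-th order TANDEM formula (equation 4) at every t.

    post_nplus1: ((T - N), 2) posteriors (p(y=0|x^(s-N..s)), p(y=1|x^(s-N..s)))
        for s = N+1, ..., T (the "N+1-let" terms);
    post_n: ((T - N - 1), 2) posteriors conditioned on x^(s-N..s-1)
        for s = N+2, ..., T (the "N-let" terms).
    Returns the cumulative LLRs for t = N+1, ..., T.
    """
    lr1 = np.log(post_nplus1[:, 1]) - np.log(post_nplus1[:, 0])
    lr0 = np.log(post_n[:, 1]) - np.log(post_n[:, 0])
    bias = np.log(prior[1]) - np.log(prior[0])
    cum = np.cumsum(lr1)
    cum[1:] -= np.cumsum(lr0)  # the N-let sum starts one step later (s = N+2)
    return cum - bias
```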
The TANDEM formula can be interpreted as a realization of the probability matching approach of the probability density estimation, under an N -th order Markov assumption of data series.\nNeural network that calculates the SPRT-TANDEM formula. The SPRT-TANDEM is designed to explicitly calculate the N -th order TANDEM formula to realize sequential density ratio estimation, which is the critical difference between our SPRT-TANDEM network and other architecture based on convolutional neural networks (CNNs) and recurrent neural networks (RNN). Figure 2 illustrates a conceptual diagram explaining a generalized neural network structure, in accordance with the 1st-order TANDEM formula for simplicity. The network consists of a feature extractor and a temporal integrator (highlighted by red and blue boxes, respectively). They are arbitrary networks that a user can choose depending on classification problems or available computational resources. The feature extractor and temporal integrator are separately trained because we find that this achieves better performance than the end-to-end approach (also see Appendix D). The feature extractor outputs single-frame features (e.g., outputs from a global average pooling layer), which are the input vectors of the temporal integrator. The output vectors from the temporal integrator are transformed with a fully-connected layer into two-dimensional logits, which are then input to the softmax layer to obtain posterior probabilities. They are used to compute the LLR to run the SPRT (Equation (2)). Note that during the training phase of the feature extractor, the global average pooling layer is followed by a fully-connected layer for binary classification.\nHow to choose the hyperparameter N? By tuning the hyperparameter N , a user can efficiently boost the model performance depending on databases; in Section 5, we change N to visualize the model performance as a function of N . Here, we provide two ways to choose N . One is to choose N based on the specific time scale, a concept introduced in Appendix D, where we describe in detail how to guess on the best N depending on databases. The other is to use a hyperparameter tuning algorithm, such as Optuna, (Akiba et al., 2019) to choose N objectively. Optuna has multiple hyperparameter searching algorithms, the default of which is the Tree-structured Parzen Estimator (Bergstra et al., 2011). Note that tuning N is not computationally expensive, because N is only related to the temporal integrator, not the feature extractor. In fact, the temporal integrator’s training speed is much faster than that of the feature extractor: 9 mins/epoch vs. 10 hrs/epoch (N = 49, NVIDIA RTX2080Ti, SiW database)." }, { "heading": "4 LLLR AND MULTIPLET CROSS-ENTROPY LOSS", "text": "Given a maximum timestamp T ∈ N and dataset size M ∈ N, let S := {(X(1,T )i , yi)}Mi=1 be a sequential dataset. Training our network to calculate the TANDEM formula involves the following loss functions in combination: (i) the Loss for Log Likelihood Ratio estimation (LLLR), LLLR, and (ii) multiplet cross-entropy loss, Lmultiplet. The total loss, Ltotal is defined as\nLtotal = LLLR + Lmultiplet . (5)" }, { "heading": "4.1 LOSS FOR LOG-LIKELIHOOD RATIO ESTIMATION (LLLR).", "text": "The SPRT is Bayes-optimal as long as the true LLR is available; however, the true LLR is often inaccessible under real-world scenarios. 
To empirically estimate the LLR with the TANDEM formula, we propose the LLLR:\nLLLR = (1/MT) ∑_{i=1}^{M} ∑_{t=1}^{T} | yi − σ( log( p̂(x_i^(1), x_i^(2), ..., x_i^(t) | y = 1) / p̂(x_i^(1), x_i^(2), ..., x_i^(t) | y = 0) ) ) | , (6)\nwhere σ is the sigmoid function. We use p̂ to highlight a probability density estimated by a neural network. The LLLR minimizes the Kullback-Leibler divergence (Kullback & Leibler, 1951) between the estimated and the true densities, as we briefly discuss below. The full discussion is given in Appendix E due to the page limit.\nDensity fitting. First, we introduce KLIEP (Kullback-Leibler Importance Estimation Procedure, Sugiyama et al. (2008)), a density fitting approach to density ratio estimation (Sugiyama et al., 2010). KLIEP is an optimization problem of the Kullback-Leibler divergence between p(X|y = 1) and r̂(X)p(X|y = 0) with constraint conditions, where X and y are random variables corresponding to X_i^(1,t) and yi, and r̂(X) := p̂(X|y = 1)/p̂(X|y = 0) is the estimated density ratio. Formally,\nargmin_r̂ [ KL( p(X|y = 1) || r̂(X)p(X|y = 0) ) ] = argmin_r̂ [ − ∫ dX p(X|y = 1) log(r̂(X)) ] (7)\nwith the constraints 0 ≤ r̂(X) and ∫ dX r̂(X)p(X|y = 0) = 1. The first constraint ensures the positivity of the estimated density r̂(X)p(X|y = 0), while the second one is the normalization condition. Applying the empirical approximation, we obtain the final optimization problem:\nargmin_r̂ [ (1/M1) ∑_{i∈I1} − log r̂(X_i^(1,t)) ] , with r̂(X_i^(1,t)) ≥ 0 and (1/M0) ∑_{i∈I0} r̂(X_i^(1,t)) = 1 , (8)\nwhere I1 := {i ∈ [M ] | yi = 1}, I0 := {i ∈ [M ] | yi = 0}, M1 := |I1|, and M0 := |I0|.\nStabilization. The original KLIEP (8), however, is asymmetric with respect to p(X|y = 1) and p(X|y = 0). To recover the symmetry, we add (1/M0) ∑_{i∈I0} − log( r̂(X_i^(1,t))^{−1} ) to the objective and impose an additional constraint (1/M1) ∑_{i∈I1} r̂(X_i^(1,t))^{−1} = 1. Besides, the symmetrized objective still has unbounded gradients, which cause instability in the training. Therefore, we normalize the LLRs with the sigmoid function, obtaining the LLLR (6). We can also show that the constraints are effectively satisfied due to the sigmoid function. See Appendix E for the details.\nIn summary, we have shown that the LLLR minimizes the Kullback-Leibler divergence between the true and the estimated densities and further stabilizes the training by restricting the value of the LLR. Here we emphasize the contributions of the LLLR again. The LLLR enables stable LLR estimation and thus lets us perform the SPRT, the algorithm optimizing two objectives: stopping time and accuracy. In previous works (Mori et al., 2018; Hartvigsen et al., 2020), on the other hand, these two objectives are achieved with separate loss sub-functions.\nCompared to KLIEP, the proposed LLLR statistically significantly boosts the performance of the SPRT-TANDEM (Appendix E.4). Besides, an experiment on multivariate Gaussians with a simple toy model also shows that the LLLR minimizes errors between the estimated and the true density ratio (Appendix F)." }, { "heading": "4.2 MULTIPLET CROSS-ENTROPY LOSS.", "text": "To further facilitate training the neural network, we add binary cross-entropy losses, though the LLLR suffices to estimate the LLR. We call it the multiplet cross-entropy loss here, defined as\nLmultiplet := ∑_{k=1}^{N+1} Lk-let , (9)\nwhere\nLk-let := (1/(M(T − N))) ∑_{i=1}^{M} ∑_{t=k}^{T−(N+1−k)} ( − log p̂(yi | x_i^(t−k+1), ..., x_i^(t)) ) . (10)
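For concreteness, here is a minimal PyTorch-style sketch of the two losses (Equations (6), (9), and (10)) and their sum (Equation (5)). The tensor shapes and names, and the assumption that the k-let logits and LLR trajectories have already been produced by the temporal integrator, are ours; this is not the released implementation.

```python
import torch
import torch.nn.functional as F

def lllr_loss(llr, labels):
    """LLLR (Eq. 6): mean absolute gap between the binary label and the
    sigmoid-squashed estimated LLR, averaged over batch (M) and time (T).
    llr: (M, T) float tensor; labels: (M,) tensor with values in {0, 1}."""
    y = labels.float().unsqueeze(1)           # (M, 1), broadcast over time
    return torch.mean(torch.abs(y - torch.sigmoid(llr)))

def multiplet_ce_loss(klet_logits, labels):
    """Multiplet cross-entropy (Eqs. 9-10): cross-entropy summed over the
    k-let heads, k = 1, ..., N+1. klet_logits is a list of (M, T_k, 2)
    tensors whose time ranges follow Eq. (10)."""
    total = 0.0
    for logits in klet_logits:
        m, t_k, _ = logits.shape
        total = total + F.cross_entropy(
            logits.reshape(m * t_k, 2),
            labels.long().repeat_interleave(t_k))
    return total

def total_loss(llr, klet_logits, labels):
    """Eq. (5): L_total = L_LLR + L_multiplet."""
    return lllr_loss(llr, labels) + multiplet_ce_loss(klet_logits, labels)
```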
Minimizing the multiplet cross-entropy loss is equivalent to minimizing the Kullback-Leibler divergence between the estimated posterior k-let p̂(yi | x_i^(t−k+1), ..., x_i^(t)) and the true posterior p(yi | x_i^(t−k+1), ..., x_i^(t)) (shown in Appendix G), which is an objective consistent with the LLLR; thus the multiplet loss accelerates the training. Note also that the multiplet loss optimizes all the logits output from the temporal integrator, unlike the LLLR." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "In the following experiments, we use two quantities as evaluation criteria: (i) balanced accuracy, the arithmetic mean of the true positive and true negative rates, and (ii) mean hitting time, the average number of data samples used for classification. Note that the balanced accuracy is robust to class imbalance (Luque et al., 2019), and is equal to accuracy on balanced datasets.\nEvaluated public databases are NMNIST, UCF, and SiW. Training, validation, and test datasets are split and fixed throughout the experiment. We selected three early-classification models (LSTM-s (Ma et al., 2016), LSTM-m (Ma et al., 2016), and EARLIEST (Hartvigsen et al., 2019)) and one fixed-length classifier (3DResNet (Hara et al., 2017)) as baseline models. All the early-classification models share the same feature extractor as that of the SPRT-TANDEM for a fair comparison.\nHyperparameters of all the models are optimized with Optuna unless otherwise noted, so that no models are disadvantaged by the choice of hyperparameters. See Appendix H for the search spaces and fixed final parameters. After fixing hyperparameters, experiments are repeated with different random seeds to obtain statistics. In each of the training runs, we evaluate the validation set after each training epoch and save the weight parameters whenever the balanced accuracy on the validation set reaches a new maximum. The last saved weights are used as the model of that run. The model evaluation is performed on the test dataset.\nDuring the test stage of the SPRT-TANDEM, we used various values of the SPRT thresholds to obtain a range of balanced accuracy-mean hitting time combinations to plot a speed-accuracy tradeoff (SAT) curve. If all the samples in a video are used up, the thresholds are collapsed to a1 = a0 = 0 to force a decision.\nTo objectively compare all the models with various trial numbers, we conducted the two-way ANOVA followed by the Tukey-Kramer multi-comparison test to compute statistical significance. For the details of the statistical test, see Appendix I.\nWe show our experimental results below. Due to space limitations, we can only show representative results. For the full details, see Appendix J. For our computing infrastructure, see Appendix K.\nNosaic MNIST (Noise + mosaic MNIST) database. We introduce a novel dataset, NMNIST, whose videos are buried in noise at the first frame and gradually denoised toward the last, 20th frame (see Appendix L for example data). The motivation to create NMNIST instead of using a preexisting time-series database is as follows: for simple video databases such as Moving MNIST (MMNIST; Srivastava et al., 2015), each data sample contains so much information that well-trained classifiers can correctly classify a video with only one or two frames (see Appendix M for the results of the SPRT-TANDEM and LSTM-m on MMNIST).\nWe design a parity classification task, classifying the digits 0–9 into an odd or even class.
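To make the task concrete, the sketch below generates an NMNIST-like sequence and its parity label. The linear denoising schedule follows the footnote in Appendix D (the noise decay in NMNIST is linear), but the exact recipe of the released database is specified in Appendix L, so every detail here (noise distribution, reveal order, label mapping) should be read as an illustrative assumption, not the official generator.

```python
import numpy as np

def make_nosaic_sequence(digit_img, num_frames=20, seed=0):
    """Illustrative NMNIST-style generator: frame 1 is (almost) fully buried
    in noise, and a fixed fraction of the noise pixels is removed at every
    frame, so the digit is fully revealed by frame 20 (linear denoising).
    digit_img: (28, 28) grayscale array; returns (num_frames, 28, 28)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(0.0, 255.0, size=digit_img.shape)
    reveal_order = rng.permutation(digit_img.size)  # order in which noise pixels vanish
    frames = []
    for t in range(num_frames):
        noisy = np.ones(digit_img.size, dtype=bool)
        noisy[reveal_order[: digit_img.size * (t + 1) // num_frames]] = False
        frames.append(np.where(noisy.reshape(digit_img.shape), noise, digit_img))
    return np.stack(frames)

def parity_label(digit):
    """Parity task: we map odd digits to class 1 and even digits to class 0."""
    return digit % 2
```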
The training, validation, and test datasets contain 50,000, 10,000, and 10,000 videos with frames of size 28×28×1 (grayscale). Each pixel value is divided by 127.5 and then reduced by 1. The feature extractor of the SPRT-TANDEM is ResNet-110 (He et al., 2016a), with the final output reduced to 128 channels. The temporal integrator is a peephole-LSTM (Gers & Schmidhuber, 2000; Hochreiter & Schmidhuber, 1997), with hidden layers of 128 units. The total numbers of trainable parameters in the feature extractor and temporal integrator are 6.9M and 0.1M, respectively. We train 0th, 1st, 2nd, 3rd, 4th, 5th, 10th, and 19th order SPRT-TANDEM networks. LSTM-s / LSTM-m and EARLIEST use peephole-LSTM and LSTM, respectively, both with hidden layers of 128 units. 3DResNet has 101 layers with 128 final output channels so that the total number of trainable parameters (7.7M) is on the same order as that of the SPRT-TANDEM.\nFigure 3a and Table 1 show representative results of the experiment. Figure 3d shows example LLR trajectories calculated with the 10th order SPRT-TANDEM. The SPRT-TANDEM outperforms the other baseline algorithms by large margins at all mean hitting times. The best performing model is the 10th order TANDEM, which achieves statistically significantly higher balanced accuracy than the other algorithms (p-value < 0.001). Is the proposed algorithm’s superiority because the SPRT-TANDEM successfully estimates the true LLR to approach asymptotic Bayes optimality? We discuss potential interpretations of the experimental results in Appendix D.\nUCF101 action recognition database. To create a more challenging task, we selected two classes, handstand-pushups and handstand-walking, from the 101 classes in the UCF database. From a glimpse of one frame, the two classes are hard to distinguish. Thus, to correctly classify these classes, temporal information must be properly used. We resize each video’s duration to multiples of 50 frames and sample 50-frame clips with a stride of 25 frames as one datum. Training, validation, and test datasets contain 1026, 106, and 105 videos with frames of size 224× 224× 3, randomly cropped to 200 × 200 × 3 at training. The mean and variance of a frame are normalized to zero and one, respectively. The feature extractor of the SPRT-TANDEM is ResNet-50 (He et al., 2016b), with the final output reduced to 64 channels. The temporal integrator is a peephole-LSTM, with hidden layers of 64 units. The total numbers of trainable parameters in the feature extractor and temporal integrator are 26K and 33K, respectively. We train 0th, 1st, 2nd, 3rd, 5th, 10th, 19th, 24th, and 49th-order SPRT-TANDEM networks. LSTM-s / LSTM-m and EARLIEST use peephole-LSTM and LSTM, respectively, both with hidden layers of 64 units. 3DResNet has 50 layers with 64 final output channels so that the total number of trainable parameters (52K) is on the same order as that of the SPRT-TANDEM.\nFigure 3b and Table 2 show representative results of the experiment. The best performing model is the 10th order TANDEM, which achieves statistically significantly higher balanced accuracy than the other models (p-value < 0.001). The superiority of the higher-order TANDEM indicates that a classifier needs to integrate longer temporal information in order to distinguish the two classes (also see Appendix D).
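The SAT curves above are produced by sweeping the SPRT thresholds at test time, with the forced decision at the final frame described earlier (the thresholds collapse to a1 = a0 = 0). A minimal sketch of that evaluation loop follows; the function names and array conventions are ours.

```python
import numpy as np

def truncated_sprt(llr_traj, a0, a1):
    """Run the SPRT on one estimated LLR trajectory; if neither threshold is
    hit by the last frame, collapse the thresholds to a1 = a0 = 0 and force
    a decision. Returns (decision, hitting_time)."""
    for t, llr in enumerate(llr_traj, start=1):
        if llr >= a1:
            return 1, t
        if llr <= -a0:
            return 0, t
    return int(llr_traj[-1] >= 0), len(llr_traj)   # forced decision at the horizon

def sat_point(llr_trajs, labels, a0, a1):
    """One (mean hitting time, balanced accuracy) point of the SAT curve for
    thresholds (a0, a1); sweeping the thresholds traces the whole curve
    without any re-training."""
    decisions, taus = zip(*(truncated_sprt(traj, a0, a1) for traj in llr_trajs))
    d, y = np.asarray(decisions), np.asarray(labels)
    tpr = np.mean(d[y == 1] == 1)   # true positive rate
    tnr = np.mean(d[y == 0] == 0)   # true negative rate
    return float(np.mean(taus)), 0.5 * (tpr + tnr)
```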
Spoofing in the Wild (SiW) database. To test the SPRT-TANDEM in a more practical situation, we conducted experiments on the SiW database. We use a sliding window of 50 frames with a stride of 25 frames to sample data, which yields training, validation, and test datasets of 46,729, 4,968, and 43,878 videos of live or spoofed faces. Each frame is resized to 256× 256× 3 pixels and randomly cropped to 244 × 244 × 3 at training. The mean and variance of a frame are normalized to zero and one, respectively. The feature extractor of the SPRT-TANDEM is ResNet-152, with the final output reduced to 512 channels. The temporal integrator is a peephole-LSTM, with hidden layers of 512 units. The total numbers of trainable parameters in the feature extractor and temporal integrator are 3.7M and 2.1M, respectively. We train 0th, 1st, 2nd, 3rd, 5th, 10th, 19th, 24th, and 49th-order SPRT-TANDEM networks. LSTM-s / LSTM-m and EARLIEST use peephole-LSTM and LSTM, respectively, both with hidden layers of 512 units. 3DResNet has 101 layers with 512 final output channels so that the total number of trainable parameters (5.3M) is on the same order as that of the SPRT-TANDEM. Optuna is not applied due to the large database and network size.\nFigure 3c and Table 3 show representative results of the experiment. The best performing model is the 10th order TANDEM, which achieves statistically significantly higher balanced accuracy than the other models (p-value < 0.001). The superiority of the lower-order TANDEM indicates that each video frame contains a high amount of information necessary for the classification, imposing less need to collect a large number of frames (also see Appendix D).\nAblation study. To understand the contributions of the LLLR and Lmultiplet to the SAT curve, we conduct an ablation study. The 1st-order SPRT-TANDEM is trained with LLLR only, Lmultiplet only, and both LLLR and Lmultiplet. The hyperparameters of the three models are independently optimized using Optuna (see Appendix H). The evaluated database and model are NMNIST and the 1st-order SPRT-TANDEM, respectively. Figure 3e shows the three SAT curves. The result shows that LLLR leads to higher classification accuracy, whereas Lmultiplet enables faster classification. The best performance is obtained by using both LLLR and Lmultiplet. We also confirmed this tendency with the 19th order SPRT-TANDEM, as shown in Appendix N.\nSPRT vs. Neyman-Pearson test. As we discuss in Appendix A, the Neyman-Pearson test is the optimal likelihood ratio test with a fixed number of samples. On the other hand, the SPRT takes a flexible number of samples for earlier decisions. To experimentally test this prediction, we compare the SPRT-TANDEM and the corresponding Neyman-Pearson test. The Neyman-Pearson test classifies the entire data into two classes at each number of frames, using the estimated LLRs with threshold λ = 0. The results support the theoretical prediction, as shown in Figure 3f: the Neyman-Pearson test needs a larger number of samples than the SPRT-TANDEM." }, { "heading": "6 CONCLUSION", "text": "We presented the SPRT-TANDEM, a novel algorithm making Wald’s SPRT applicable to arbitrary data series without knowing the true LLR. Leveraging deep neural networks and the novel loss function, the LLLR, the SPRT-TANDEM minimizes the distance between the true LLR and the LLR sequentially estimated with the TANDEM formula, enabling simultaneous optimization of speed and accuracy. Tested on the three publicly available databases, the SPRT-TANDEM achieves statistically significantly higher accuracy than other existing algorithms with a smaller number of data points.
The SPRT-TANDEM enables a user to control the speed-accuracy tradeoff without additional training, opening up various potential applications where either high accuracy or high speed is required." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors thank the anonymous reviewers for their careful reading, which improved the manuscript. We would also like to thank Hirofumi Nakayama and Yuka Fujii for insightful discussions. Special thanks to Yuka for naming the proposed algorithm." }, { "heading": "AUTHOR CONTRIBUTIONS", "text": "A.F.E. conceived the study. A.F.E. and T.M. constructed the theory, conducted the experiments, and wrote the paper. T.M. organized the Python code for release. K.S. and H.I. supervised the study." }, { "heading": "CONTENTS", "text": "" }, { "heading": "1 Introduction", "text": "" }, { "heading": "2 Related work", "text": "" }, { "heading": "3 Proposed algorithm: SPRT-TANDEM", "text": "" }, { "heading": "4 LLLR and multiplet cross-entropy loss", "text": "4.1 Loss for Log-Likelihood Ratio estimation (LLLR)\n4.2 Multiplet cross-entropy loss" }, { "heading": "5 Experiments and results", "text": "" }, { "heading": "6 Conclusion", "text": "" }, { "heading": "A Theoretical aspects of the sequential probability ratio test", "text": "A.1 Preliminaries\nA.2 Definition and the tradeoff of false alarms and stopping time\nA.3 The Neyman-Pearson test and the SPRT\nA.4 The Optimality of the SPRT" }, { "heading": "B Supplementary review of the related work", "text": "" }, { "heading": "C Derivation of the TANDEM formula", "text": "" }, { "heading": "D Supplementary discussion", "text": "" }, { "heading": "E Loss for Log-Likelihood Ratio estimation (LLLR)", "text": "E.1 Density ratio estimation and KLIEP\nE.2 The symmetrized KLIEP loss\nE.3 The LLLR and density ratio estimation\nE.4 Preparatory experiment testing the effectiveness of the LLLR" }, { "heading": "F Probability density ratio estimation with the LLLR", "text": "F.1 Experimental settings\nF.2 Density estimation results" }, { "heading": "G Multiplet cross-entropy loss", "text": "" }, { "heading": "H Hyperparameter optimization", "text": "H.1 Nosaic MNIST (NMNIST)\nH.2 UCF101\nH.3 SiW" }, { "heading": "I Statistical test details", "text": "" }, { "heading": "J Details of the experiments in Section 5", "text": "" }, { "heading": "K Computing infrastructure", "text": "" }, { "heading": "L An example video of the Nosaic MNIST database.", "text": "" },
{ "heading": "M Supplementary experiment on Moving MNIST database", "text": "" }, { "heading": "N Supplementary ablation experiment", "text": "" }, { "heading": "APPENDIX", "text": "" }, { "heading": "A THEORETICAL ASPECTS OF THE SEQUENTIAL PROBABILITY RATIO TEST", "text": "In this section, we review the mathematical background of the SPRT following the discussion in Tartakovsky et al. (2014). First, we define the SPRT based on measure theory and introduce Stein’s lemma, which assures the termination of the SPRT. To define the optimality of the SPRT, we introduce two performance metrics that measure the false alarm rate and the expected stopping time, and discuss their tradeoff — the SPRT solves it. Throughout this analysis, we utilize two important approximations, the asymptotic approximation and the no-overshoot approximation, which play essential roles in simplifying our analysis. The asymptotic approximation assumes the upper and lower thresholds are infinitely far away from the origin, which is equivalent to making the most careful decision to reduce the error rate, at the expense of the stopping time. On the other hand, the no-overshoot approximation assumes that we can neglect the threshold overshoots of the likelihood ratio.\nNext, we show the superiority of the SPRT to the Neyman-Pearson test, using a simple Gaussian model. The Neyman-Pearson test is known to be optimal in the two-hypothesis testing problem and is often compared with the SPRT. Finally, we introduce several types of optimality conditions of the SPRT." }, { "heading": "A.1 PRELIMINARIES", "text": "Notations. Let (Ω,F , P ) be a probability space; Ω is a sample space, F ⊂ PΩ is a sigma-algebra of Ω, where PA denotes the power set of a set A, and P is a probability measure. Intuitively, Ω represents the set of all the elementary events under consideration, e.g., Ω = {all the possible elementary events such that \"a human is walking through a gate.\"}. F is defined as a set of subsets of Ω, and stands for all the possible combinations of the elementary events; e.g., F ∋ { \"Akinori is walking through the gate at the speed of 80 m/min,\" \"Taiki is walking through the gate at the speed of 77 m/min,\" or \"Nothing happened.\" }. P : F → [0, 1] is a probability measure, a function that is normalized and countably additive; i.e., P measures the probability that an event A ∈ F occurs. A random variable X is defined as a measurable function from Ω to a measurable space, practically Rd (d ∈ N); e.g., if ω(∈ Ω) is \"Taiki is walking through the gate with a big smile,\" then X(ω) may be 100 frames of color images with 128×128 pixels (d = 128 × 128 × 3 × 100), i.e., a video recorded with a camera attached at the top of the gate. The probability that a random variable X takes a value in a set S ⊂ Rd is defined as P (X ∈ S) := P (X−1(S)), where X−1 is the preimage of X. By definition of a measurable function, X−1(S) ∈ F for every Borel set S ⊂ Rd. Let {Ft}t≥0 be a filtration. By definition, {Ft}t≥0 is a non-decreasing sequence of sub-sigma-algebras of F ; i.e., Fs ⊂ Ft ⊂ F for all s and t such that 0 < s < t. Each element of the filtration can be interpreted as the information available at a given point t. (Ω,F , {Ft}t≥0, P ) is called a filtered probability space.\nAs in the main manuscript, let X(1,T ) := {x(t)}Tt=1 be sequential data sampled from the density p, where T ∈ N ∪ {∞}. For each t ∈ [T ], x(t) ∈ Rdx , where dx ∈ N is the dimensionality of the input data. In the i.i.d.
case, p(X(1,T )) = ∏T t=1 f(x\n(t)), where f is the density of x(1). For each time-series data X(1,T ), the associated label y takes the value 1 or 0; we focus on the binary classification, or equivalently the two-hypothesis testing throughout this paper. When y is a class label, p(X(1,T )|θ) is the likelihood density function. Note that X(1,T ) with label y is sampled according to density p(X(1,T )|y).\nOur goal is, given a sequence X(1,T ), to identify which one of the two densities p1 or p0 the sequence X(1,T ) is sampled from; formally, to test two hypotheses H1 : y = 1 and H0 : y = 0 given X(1,T ). The decision function or test of a stochastic process X(1,T ) is denoted by d(X(1,T )) : Ω→ {1, 0}. We can identify this definition with, for each realization of X(1,T ), d : Rdx×T → {1, 0}, i.e., X(1,T ) 7→ y, where y ∈ {1, 0}. Thus we write d instead of d(X(1,T )), for simplicity. The stopping time of X(1,T ) with respect to a filtration {Ft}t≥0 is defined as τ := τ(X(1,T )) : Ω → R≥0 such that {ω ∈ Ω|τ(ω) ≤ t} ∈ Ft. Accordingly, for fixed T ∈ N ∪ {∞} and y ∈ {1, 0}, {d = y} means the set of time-series data such that the decision function accepts the hypothesis Hi with a finite stopping time; more specifically, {d = y} = {ω ∈ Ω|d(X(1,T ))(ω) = y, τ(X(1,T ))(ω) <∞}. The decision rule δ is defined as the doublet (d, τ). Let ΛT := Λ(X(1,T )) := p(X(1,T )|y=1) p(X(1,T )|y=0) and\nλT := log ΛT be the likelihood ratio and the log-likelihood ratio of X(1,T ). In the i.i.d. case, ΛT = ∏T t=1 p(x(t)|y=1) p(x(t)|y=0) = ∏T t=1 Z (t), where p(X(1,T )|y) = ∏T t=1 p(x (t)|y) (y ∈ {1, 0}) and Z(t) := p(x (t)|y=1)\np(x(t)|y=0) ." }, { "heading": "A.2 DEFINITION AND THE TRADEOFF OF FALSE ALARMS AND STOPPING TIME", "text": "Let us overview the theoretical structure of the SPRT. In the following, we assume that the time-series data points are i.i.d. until otherwise stated.\nDefinition of the SPRT. The sequential probability ratio test (SPRT), denoted by δ∗ is defined as the doublet of the decision function and the stopping time.\nDefinition A.1. Sequential probability ratio test (SPRT) Let a0 = − logA0 ≥ 0 and a1 = logA1 ≥ 0 be (the absolute values of) a lower and an upper threshold respectively.\nδ∗ = (d∗, τ∗), (11)\nd∗(X(1,T )) = { 1 if λτ∗ ≥ a1 0 if λτ∗ ≤ −a0 ,\n(12)\nτ∗ = inf{T ≥ 0|λT /∈ (−a0, a1)} . (13)\nNote that d∗ and τ∗ implicitly depend on a stochastic process X(1,T ). In general, a doublet δ of a terminal decision function and a stopping time is called a decision rule or a hypothesis test.\nTermination. The i.i.d.-SPRT terminates with probability one and all the moments of the stopping time are finite, provided that the two hypotheses are distinguishable:" }, { "heading": "Lemma A.1. Stein’s lemma", "text": "Let (Ω,F , P ) be a probability space and {Y (t)}t≥1 be a sequence of i.i.d. random variables under P . Define τ := inf{T ≥ 1| ∑T t=1 Y\n(t) /∈ (−a0, a1)}. If P (Y (1)) 6= 1, the stopping time τ is exponentially bounded; i.e., there exist constants C > 0 and 0 < ρ < 1 such that P (τ > T ) ≤ CρT for all T ≥ 1. Therefore, P (τ <∞) = 1 and E[τk] <∞ for all k > 0.\nTwo performance metrics. Considering the two-hypothesis testing, we employ two kinds of performance metrics to evaluate the efficiency of decision rules from complementary points of view: the false alarm rate and the stopping time. The first kind of metrics is the operation characteristic, denoted by β(δ, y), and its related metrics. 
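Before formalizing these metrics, the following minimal sketch makes Definition A.1 and Stein's lemma concrete by simulating the i.i.d. SPRT on a simple two-hypothesis Gaussian model, for which the per-sample LLR is available in closed form. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_sprt(mu=0.5, sigma=1.0, a0=3.0, a1=3.0, y=1, seed=0, t_max=100_000):
    """Simulate the i.i.d. SPRT (Definition A.1) for H1: x ~ N(mu, sigma^2)
    vs. H0: x ~ N(0, sigma^2). The per-sample LLR is
    log N(x; mu, sigma) - log N(x; 0, sigma) = (mu / sigma**2) * (x - mu / 2).
    Returns (decision, stopping_time)."""
    rng = np.random.default_rng(seed)
    llr = 0.0
    for t in range(1, t_max + 1):
        x = y * mu + rng.normal(0.0, sigma)          # sample under the true label y
        llr += (mu / sigma**2) * (x - mu / 2.0)
        if llr >= a1:
            return 1, t                               # accept H1
        if llr <= -a0:
            return 0, t                               # accept H0
    return int(llr >= 0), t_max  # by Stein's lemma this line is almost never reached
```

Averaging the stopping times over many seeds gives a Monte-Carlo estimate of the mean hitting time Ey[τ∗], and the frequency of runs with d = 0 estimates the operation characteristic introduced next.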
The operation characteristic is the probability of the decision being 0 when the true label is y = y; formally," }, { "heading": "Definition A.2. Operation characteristic", "text": "The operation characteristic is the probability of accepting the hypothesis H0 as a function of y:\nβ(δ, y) := P (d = 0|y) . (14)\nUsing the operation characteristic, we can define four statistical measures based on the confusion matrix; namely, False Positive Rate (FPR), False Negative Rate (FNR), True Negative Rate (TNR), and True Positive Rate (TPR).\nFPR: α0(δ) := 1− β(δ, 0) = P (d = 1|y = 0) (15) FNR: α1(δ) := β(δ, 1) = P (d = 0|y = 1) (16) TNR: β(δ, 0) = 1− α0(δ) = 1− P (d = 1|y = 0) (17) TPR: 1− β(δ, 1) = 1− α1(δ) = 1− P (d = 0|y = 1) (18)\nNote that balanced accuracy is denoted by (1 + β(δ, 0)− β(δ, 1))/2 according to this notation. The second kind of metrics is the mean hitting time, and is defined as the expected stopping time of the decision rule:\nDefinition A.3. Mean hitting time The mean hitting time is the expected number of time-series data points that are necessary for testing a hypothesis when the true parameter value is y: Eyτ =∫\nΩ τdP (·|y). The mean hitting time is also referred to as the expected sample size of the average\nsample number.\nThere is a tradeoff between the false alarm rate and the mean hitting time. For example, the quickness may be sacrificed, if we use a decision rule δ that makes careful decisions, i.e., with the false alarm rate 1 − β(δ, 0) less than some small constant. On the other hand, if we use δ that makes quick decisions, then δ may make careless decisions, i.e., raise lots of false alarms because the amount of evidences is insufficient. At the end of this section, we show that the SPRT is optimal in the sense of this tradeoff.\nThe tradeoff of false alarms and stopping times for both i.i.d. and non-i.i.d. We formulate the tradeoff of the false alarm rate and the stopping time. We can derive the fundamental relation of the threshold to the operation characteristic in both i.i.d. and non-i.i.d. cases (Tartakovsky et al. (2014)):{\nα∗1 ≤ e−a0(1− α∗0) α∗0 ≤ e−a1(1− α∗1) ,\n(19)\nwhere we defined α∗y := αy(δ ∗) (y ∈ {1, 0}). These inequalities essentially represent the tradeoff of the false alarm rate and the stopping time. For example, as the thresholds ay (y ∈ {1, 0}) increase, the false alarm rate and the false rejection rate decrease, as (19) suggests, but the stopping time is likely to be larger, because more observations are needed to accumulate log-likelihood ratios to hit the larger thresholds.\nThe asymptotic approximation and the no-overshoot approximation. Equation 19 is an example of the tradeoff of the false alarm rate and the stopping time; further, we can derive another example in terms of the mean hitting time. Before that, we introduce two types of approximations that simplify our analysis.\nThe first one is the no-overshoot approximation. It assumes to ignore the threshold overshoots of the log-likelihood ratio at the decision time. This approximation is valid when the log-likelihood ratio of a single frame is sufficiently small compared to the gap of the thresholds, at least around the decision time. On the other hand, the second one is the asymptotic approximation, which assumes a0, a1 → ∞, being equivalent to sufficiently low false alarm rates and false rejection rates at the expense of the stopping time. 
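A practical reading of the exact bounds (19), before either approximation is applied, is that they prescribe thresholds meeting target error rates. The helper below is a minimal sketch of that recipe; the function name is ours, not from the paper.

```python
import math

def conservative_thresholds(alpha0_max, alpha1_max):
    """Threshold choice implied by the bounds (19): since
    alpha0* <= e^{-a1}(1 - alpha1*) <= e^{-a1} and
    alpha1* <= e^{-a0}(1 - alpha0*) <= e^{-a0},
    picking a1 = log(1/alpha0_max) and a0 = log(1/alpha1_max) keeps the
    false positive and false negative rates below the targets, possibly
    conservatively, because the factors (1 - alpha*) were bounded by 1."""
    return math.log(1.0 / alpha1_max), math.log(1.0 / alpha0_max)  # (a0, a1)

# Example: targeting 1% error on both sides gives a0 = a1 = log(100), about 4.61.
```

The no-overshoot and asymptotic approximations introduced above sharpen these inequalities into the approximate equalities that follow.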
These approximations drastically facilitate the theoretical analysis; in fact, the no-overshoot approximation alters (19) as follows (see Tartakovsky et al. (2014)):\nα∗1 ≈ e−a0(1− α∗0), α∗0 ≈ e−a1(1− α∗1) , (20)\nwhich is equivalent to\nα∗0 ≈ ea0 − 1\nea0+a1 − 1 , α∗1 ≈ ea1 − 1 ea0+a1 − 1\n(21) ⇐⇒ − a0 ≈ log (\nα∗1 1− α∗0\n) , a1 ≈ log ( 1− α∗1 α∗0 ) (22)\n⇐⇒ β∗(0) ≈ e a1 − 1 ea1 − e−a0 , β∗(1) ≈ e −a1 − 1 e−a1 − ea0 , (23)\nwhere β∗(y) := β(δ∗, y) (y ∈ {1, 0}). Further assuming the asymptotic approximation, we obtain\nα∗0 ≈ e−a1 , α∗1 ≈ e−a0 . (24)\nTherefore, as the threshold gap increases, the false alarm rate and the false rejection rate decrease exponentially, while the decision making becomes slow, as is shown in the following.\nMean hitting time without overshoots. Let Iy := Ey[Z(1)] (y ∈ {1, 0}) be the Kullback-Leibler divergence of f1 and f0. Iy is larger if the two densities are more distinguishable. Note that Iy 0 since Py(Z(1) = 0) 1, and thus the mean hitting times of the SPRT without overshoots are\nexpressed as\nE1[τ∗] = 1\nI1\n[ (1− α∗1) log(\n1− α∗1 α∗0 )− α∗1 log( 1− α∗0 α∗1 )\n] , (25)\nE0[τ∗] = 1\nI0\n[ (1− α∗0) log(\n1− α∗0 α∗1 )− α∗0 log( 1− α∗1 α∗0 )\n] (26)\nIn Tartakovsky et al. (2014). Introducing the function\nγ(x, y) := (1− x) log(1− x y )− x log(1− y x ) , (27)\nwe can simplify (25-26):\nE1[τ∗] = 1\nI1 γ(α∗1, α ∗ 0) (28)\nE0[τ∗] = 1\nI1 γ(α∗0, α ∗ 1) . (29)\n(25-26) shows the tradeoff as we mentioned above: the mean hitting time of positive (negative) data diverges if we are to set the false alarm (rejection) rate to be zero.\nThe tradeoff with overshoots. Introducing the overshoots explicitly, we can obtain the equality, instead of the inequality such as (19), that connects the the error rates and the thresholds. We first define the overshoots of the thresholds a0 and a1 at the stopping time as\nκ1(a0, a1) := λτ∗ − a1 on{λτ∗ ≥ a1} (30) κ0(a0, a1) := −(λτ∗ + a0) on{λτ∗ ≤ −a0}. (31)\nWe further define the expectation of the exponentiated overshoots as\ne1(a0, a1) := E1[e−κ1(a0,a1)|λτ∗ ≥ a1] (32) e0(a0, a1) := E0[e−κ0(a0,a1)|λτ∗ ≤ −a0] . (33)\nThen we can relate the thresholds to the error rates (without the no-overshoots approximation, Tartakovsky (1991)):\nα∗0 = e1(a0, a1)e a0 − e1(a0, a1)e0(a0, a1) ea1+a0 − e1(a0, a1)e0(a0, a1) , α∗1 = e0(a0, a1)e a1 − e1(a0, a1)e0(a0, a1) ea1+a0 − e1(a0, a1)e0(a0, a1) . (34)\nTo obtain more specific dependence on the thresholds ay (y ∈ {1, 0}), we adopt the asymptotic approximation. Let T0(a0) and T1(a1) be the one-sided stopping times, i.e., T0(a0) := inf{T ≥ 1|λT ≤ −a0} and T1(a1) := inf{T ≥ 1|λT ≥ a1}. We then define the associated overshoots as\nκ̃1(a1) := λT1 − a1 on{T1 <∞} , (35) κ̃0(a0) := −(λT0 + a0) on{T0 <∞} . (36)\nAccording to Lotov (1988), we can show that\nα∗0 ≈ ζ1e a0 − ζ1ζ0 ea0+a1 − ζ1ζ0 , α∗1 ≈ ζ0e a1 − ζ1ζ0 ea0+a1 − ζ1ζ0\n(37)\nunder the asymptotic approximation. Note that\nζy := lim ay→∞\nEy[e−κ̃y ] (y ∈ {1, 0}) (38)\nhave no dependence on the thresholds ay (y ∈ {1, 0}). Therefore we have obtained more precise dependence of the error rates on the thresholds than (24):\nTheorem A.1. The Asymptotic tradeoff with overshoots Assume that 0 < Iy < ∞ (y ∈ {1, 0}). Let ζy be given in (38). Then\nα∗0 = ζ1e −a1(1 + o(1)), α∗1 = ζ0e −a0(1 + o(1)) (a0, a1 −→∞) . (39)\nMean hitting time with overshoots. A more general form of the mean hitting time is provided in Tartakovsky (1991). 
We can show that\nE1τ∗ = 1\nI1\n[( 1− α∗1 )( a1 + E1[κ1|τ∗ = T ] ) − α∗1 ( a0 + E1[κ0|τ∗ = T0] )] (40)\nE0τ∗ = 1\nI0\n[( 1− α∗0 )( a0 + E0[κ0|τ∗ = T ] ) − α∗0 ( a1 + E0[κ1|τ∗ = T1] )] . (41)\nThe mean hitting times (40-41) explicitly depend on the overshoots, compared with (25-26). Let\nχy := lim ath→∞\nEy[κ̃y] (y ∈ {1, 0}) (42)\nbe the limiting average overshoots in the one-sided tests. Note that χy have no dependence on ay (y ∈ {1, 0}). The asymptotic mean hitting times with overshoots are\nE1τ∗ = 1\nI1 (a1 +χ1)+o(1), E0τ∗ =\n1 I0 (a0 +χ0)+o(1) (a0e −a1 → 0, a1e−a0 → 0) (43)\nAs expressed in Tartakovsky et al. (2014). Therefore, they have an asymptotically linear dependence on the thresholds." }, { "heading": "A.3 THE NEYMAN-PEARSON TEST AND THE SPRT", "text": "So far, we have discussed the tradeoff of the false alarm rate and the mean hitting time and several properties of the operation characteristic and the mean hitting time. Next, we compare the SPRT with the Neyman-Pearson test, which is well-known to be optimal in the classification of time-series with fixed sample lengths; in contrast, the SPRT is optimal in the early classification of time-series with indefinite sample lengths, as we show in the next section.\nWe show that the Neyman-Pearson test is optimal in the two-hypothesis testing problem or the binary classification of time-series. Nevertheless, we show that in the i.i.d. Gaussian model, the SPRT terminates earlier than the Neyman-Pearson test despite the same error rates.\nPreliminaries. Before defining the Neyman-Pearson test, we specify what the \"best\" test should be. There are three criteria, namely the most powerful test, Bayes test, and minimax test. To explain them in detail, we have to define the size and the power of the test. The significance level, or simply the size of test d is defined as1 α := P (d = 1|y = 0) . (44) It is also known as the false positive rate, the false alarm rate, or the false acceptance rate of the test. On the other hand, the power of the test d is given by\nγ := 1− β := P (d = 1|y = 1) . (45) γ is also called the true positive rate, the true acceptance rate. the recall, or the sensitivity. β is known as the false negative rate or the false rejection rate.\nNow, we can define the three criteria mentioned above. Definition A.4. Most powerful test The most powerful test d of significance level α(> 0) is defined as the test that for every other test d′ of significance level α, the power of d is greater than or equal to that of d′:\nP (d = 1|y = 1) ≥ P (d′ = 1|y = 1) . (46)" }, { "heading": "Definition A.5. Bayes test", "text": "" }, { "heading": "Let π0 := P (y = 0) and π1 := P (y = 1) = 1− π0 be the prior probabilities of hypotheses H0 and", "text": "H1, and ᾱ(d) be the average probability of error:\nᾱ(d) := ∑ i=1,0 πiαi(d) , (47)\n1P (d = 1|y = 0) is short for P ({ω ∈ Ω|d(X(1,T ))(ω) = 1}|y = 0) and is equivalent to PX(1,T )∼p(X(1,T )|y=1)[d(X\n(1,T )) = 1] (i.e., the probability of the decision being 1, where X(1,T ) is sampled from the density p(X(1,T )|y = 1) ).\nwhere αi(d) := P (d 6= i|y = i) is the false negative rate of the class i ∈ {1, 0}. A Bayes test, denoted by dB, for the priors is defined as the test that minimizes the average probability of error:\ndB := arginf d {ᾱ(d)} , (48)\nwhere the infimum is taken over all fixed-sample-size decision rules." }, { "heading": "Definition A.6. Minimax test", "text": "Let αmax(d) be the maximum error probability:\nαmax(d) := max i∈{1,0}\n{αi(d)} . 
(49)\nA minimax test, denoted by dM, is defined as the test that minimizes the maximum error probability:\nαmax(d M) = inf d {αmax(d)} , (50)\nwhere the infimum is taken over all fixed-sample-size tests.\nNote that a fixed-sample-size decision rule or non-sequential rule is the decision rule with a fixed stopping time T = N with probability one.\nDefinition and the optimality of the Neyman-Pearson test. Based on the above notions, we state the definition and the optimality of the Neyman-Pearson test. We see the most powerful test for the two-hypothesis testing problem is the Neyman-Pearson test; the theorem below is also the definition of the Neyman-Pearson test.\nTheorem A.2. Neyman-Pearson lemma Consider the two-hypothesis testing problem, i.e., the problem of testing two hypotheses Ho : P = P0 and H1 : P1, where P0 and P1 are two probability distributions with densities p0 and p1 with respect to some probability measure. The most powerful test is given by\ndNP(X(1,T )) := { 1 if Λ(X(1,T )) ≥ h(α) 0 otherwise ,\n(51)\nwhere Λ(X(1,T )) = p1(X (1,T ))\np0(X(1,T )) is the likelihood ratio and the threshold h(α) is defined as\nα0(d NP) ( ≡ P (dNP(X(1,T )) = 1|H0) = E0[dNP(X(1,T ))] ) = α (52)\nto ensure for the false positive rate to be the user-defined value α(> 0).\ndNP is referred to as the Neyman-Pearson test and is also optimal with respect to the Bayes and minimax criteria:\nTheorem A.3. Neyman-Pearson test is Bayes optimal Consider the two-hypothesis testing problem. Given a prior distribution πi (i ∈ {1, 0}) the Bayes test dB, which minimizes the average error probability ᾱ(d) = π0α0(d) + π1α1(d), is given by\ndB(X(1,T )) = { 1 (if Λ(X(1,T )) ≥ π0/π1) 0 (otherwise) .\n(53)\nThat is, the Bayesian test is given by the Neyman-Pearson test with the threshold π0/π1.\nTheorem A.4. Neyman-Pearson test is minimax optimal Consider the two-hypothesis testing problem. the minimax test dM, which minimizes the maximal error probability αmax(d) = max\ni∈{1,0} {αi(d)},\nis the Neyman-Pearson test with the threshold such that α0(dM) = α1(dM).\nThe proofs are given in Borovkov (1998) and Lehmann & Romano (2006).\nThe SPRT is more efficient. We have shown that the Neyman-Pearson test is optimal in the twohypothesis testing problem, in the sense that the Neyman-Pearson test is the most powerful, Bayes, and minimax test; nevertheless, we can show that the SPRT terminates faster than the Neyman-Pearson test even when these two show the same error rate.\nConsider the two-hypothesis testing problem for the i.i.d. Gaussian model: Hi : y = yi (i ∈ {1, 0}) x(t) = y + ξ(t) (t ≥ 1, y ∈ R1) ξ(t) ∼ N (0, σ2) (σ ≥ 0) ,\n(54)\nwhereN (0, σ2) denotes the Gaussian distribution with mean 0 and variance σ2. The Neyman-Pearson test has the form\ndNP(X(1,n(α0,α1))) = { 1 (if λn(α0,α1) ≥ h(α0, α1)) 0 (otherwise) .\n(55)\nThe sequence length n = n(α0, α1) and the threshold h = h(α0, α1) are defined so as for the false positie rate and the false negative rate to be equal to α0 and α1 respectively; i.e.,\nP (λn ≥ h|y = y0) = α0 , (56) P (λn < h|y = y1) = α1 . (57)\nWe can solve them for the i.i.d. Gaussian model (Tartakovsky et al. (2014)). To see the efficiency of the SPRT to the Neyman-Pearson test, we define\nE0(α0, α1) = E[τ∗|y = y0] n(α0, α1)\n(58)\nE1(α0, α1) = E[τ ∗ |y = y1] n(α0, α1) . (59)\nAssuming the overshoots are negligible, we obtain the following asymptotic efficiency (Tartakovsky et al. (2014)):\nlim max{α0,α1}→0\nEy(α0, α1) = 1\n4 (y ∈ {1, 0}) . 
(60)\nIn other words, under the no-overshoot and the asymptotic assumptions, the SPRT terminates four times earlier than the Neyman-Pearson test in expectation, despite the same false positive and negative rates." }, { "heading": "A.4 THE OPTIMALITY OF THE SPRT", "text": "Optimality in i.i.d. cases. The theorem below shows that the SPRT minimizes the expected hitting times in the class of decision rules that have bounded false positive and negative rates. Consider the two-hypothesis testing problem. We define the class of decision rules as C(α0, α1) = {δ s.t. P (d = 1|H0) ≤ α0, P (d = 0|H1) ≤ α1,E[τ |H0] <∞,E[τ |H1] <∞} . (61) Then the optimality theorem states: Theorem A.5. I.I.D. Optimality (Tartakovsky et al. (2014)) Let the time-series data points x(t), t = 1, 2, ... be i.i.d. with density f0 under H0 and with density f1 under H1, where f0 6≡ f1. Let α0 > 0 and α1 > 0 be fixed constants such that α0 + α1 < 1. If the thresholds −ao and a1 satisfies α∗0(a0, a1) = α0 and α ∗ 1(a0, a1) = α1, then the SPRT δ ∗ = (d∗, τ∗) satisfies\ninf δ=(d,τ)∈C(α0,α1)\n{ E[τ |H0] } = E[τ∗|H0] and inf\nδ=(d,τ)∈C(α0,α1)\n{ E[τ |H1] } = E[τ∗|H1] (62)\nA similar optimality holds for continuous-time processes (Irle & Schmitz (1984)). Therefore the SPRT terminates at the earliest stopping time in expectation of any other decision rules achieving the same or less error rates — the SPRT is optimal.\nTheorem A.5 tells us that given user-defined thresholds, the SPRT attains the optimal mean hitting time. Also, remember that the thresholds determine the error rates (e.g., Equation (24)). Therefore, the SPRT can minimize the required number of samples and achieve the desired upper-bounds of false positive and false negative rates.\nAsymptotic optimality in general non-i.i.d. cases. In most of the discussion above, we have assumed the time-series samples are i.i.d. For general non-i.i.d. distributions, we have the asymptotic optimality; i.e., the SPRT asymptotically minimizes the moments of the stopping time distribution (Tartakovsky et al. (2014)).\nBefore stating the theorem, we first define a type of convergence of random variables.\nDefinition A.7. r-quick convergence Let {x(t)}t≥1 be a stochastic process. Let T ({x(t)}t≥1) be the last entry time of the stochastic process {x(t)}t≥1 in the region ( ,∞) ∪ (−∞,− ), i.e.,\nT ({x(t)}t≥1) = sup t≥1 {t s.t. |x(t)| > }, sup{∅} := 0 . (63)\nThen, we say that the stochastic process {x(t)}t≥1 converges to zero r-quickly, or\nx(t) r−quickly−−−−−−→ t→∞ 0 , (64)\nfor some r > 0, if E[(T ({x(t)}t≥1))r] <∞ for every > 0 . (65)\nr-quick convergence ensures that the last entry time in the large-deviation region (T ({x(t)}t≥1)) is finite almost surely. The asymptotic optimality theorem is: Theorem A.6. Non-i.i.d. asymptotic optimality If there exist positive constants I0 and I1 and an increasing non-negative function ψ(t) such that\nλt ψ(t) P1−r−quickly−−−−−−−−−→ t→∞ I1 and λt ψ(t) P0−r−quickly−−−−−−−−−→ t→∞ −I0 , (66)\nwhere λt is defined in section A.1, then\nE[(τ∗)r|y = i] <∞ (i ∈ {1, 0}) for any finite a0 and a1. (67)\nMoreover, if the thresholds a0 and a1 are chosen to satisfy (19), a0 → log(1/α∗1), and a1 → log(1/α∗0) (ai →∞), then for all 0 < m ≤ r,\ninf δ∈C(α0,α1)\n{E[τm|y = y1]} − E[(τ∗)m|y = y1] −→ 0 (68)\ninf δ∈C(α0,α1)\n{E[τm|y = y0]} − E[(τ∗)m|y = y0] −→ 0 (69)\nas max{α0, α1} −→ 0 with | logα0/ logα1| −→ c, where c ∈ (0,∞)." }, { "heading": "B SUPPLEMENTARY REVIEW OF THE RELATED WORK", "text": "Primate’s decision making and parietal cortical neurons. 
The process of decision making involves multiple steps, such as evidence accumulation, reward prediction, risk evaluation, and action selection. We give a brief overview regarding mainly to neural activities of primate parietal lobe and their relationship to the evidence accumulation, instead of providing a comprehensive review of the decision making literature. Interested readers may refer to review articles, such as Doya (2008); Gallivan et al. (2018); Gold & Shadlen (2007).\nIn order to study neural correlates of decision making, Roitman & Shadlen (2002) used a random dot motion (RDM) task on non-human primates. They found that the neurons in the cortical area lateral intraparietal cortex, or LIP, gradually accumulated sensory evidence represented as increasing firing rate, toward one of the two thresholds corresponding to the two-alternative choices. Moreover, while a steeper increase of firing rates leads to an early decision of the animal, the final firing rates at the decision time is almost constant regardless of reaction time. Thus, at least in population-level LIP neurons are representing information very similar to that of the LLR in the SPRT algorithm (It is under active discussion whether the ramping activity is seen only in averaged population firing rate or both in population and single neuron level. See Latimer et al. (2015); Shadlen et al. (2016)). But also see Okazawa et al. (2021) for a recent finding that evidence accumulation is represented in a high-dimensional manifold of neural population. In any case, LIP neurons seem to represent accumulated evidence as their activity patterns.\nTo test whether the ramping activity is explained by Wald’s SPRT, Kira et al. (2015) used visual stimuli associated with reward likelihood: each stimulus indicates the answer of the binary choice task with a certain probability (e.g., if stimulus ’A’ is presented, choice 1 is the correct answer with 30% probability). LIP neurons’ activities in response to these randomly presented stimuli are proportional to LLR calculated from the associated likelihood of the stimuli, letting authors concluded that the activity of LIP neurons are best explained by SPRT than other alternative models. It remains unclear, however, what algorithm is used in the brain when stimuli are not randomly presented but temporary dependent.\nMore complex decision making involving risk evaluation such as ”delayed, large reward V.S. immediate, small reward” is thought to be guided by other regions including orbitofrontal cortex, dorsal striatum or dorsal prefrontal cortex (McClure et al. (2004); Rudebeck et al. (2006); Tanaka et al. (2004)).\nApplication of SPRT. Ever since Wald’s formulation, the sequential hypothesis testing was applied to study decision making and its reaction time (Stone (1960); Edwards (1965); Ashby (1983)). Several extensions to more general problem settings were also proposed. In order to test more than two hypotheses, multi-hypothesis SPRT (MSPRT) was introduced (Armitage (1950); Baum & Veeravalli (1994)), and shown to be asymptotically optimal (Dragalin et al. (1999; 2000); Veeravalli & Baum (1995)). The SPRT was also generalized for non-i.i.d. data (Lai (1981); Tartakovsky (1999)), and theoretically shown to be asymptotically optimal, given the known LLR (Dragalin et al. (1999; 2000)). Tartakovsky et al. (2014) provided a comprehensive review of these theoretical analyses, a part of whose reasoning we also follow to show optimality in Appendix A. 
The SPRT, and closely related, generalized LLR test, applied to solve several problems includes drug safety surveillance (Kulldorff et al. (2011)), exoplanet detection (Hu et al. (2019)), and the LLR test out of weak classifiers (WaldBoost, Sochman & Matas (2005)), to name a few. On an A/B test, Johari et al. (2017) tackled an important problem of inflating error rates at the sequential hypothesis testing. Ju et al. (2019) proposed an inputed Girshick test to determine a better variant.\nTime-series classification. Here, we use the term “Time-series” interchangeably to mention both continuous data or discrete data such as video frames.\nOne of the traditional approaches to univariate or multivariate time series classification is distancebased methods, such as dynamic time warping (Bagnall (2014); Jeong et al. (2011); Kate (2015)) or k-nearest neighbors (Dau et al. (2018); Wei & Keogh (2006); Yang & Shahabi (2007)). More recently, Collective Of Transformation-based Ensembles (COTE) and its variant, COTE with Hierarchical Vote system (HIVE-COTE) showed high classification performance at the expense of their high computational cost (Bagnall et al. (2015); Lines et al. (2016)). Word Extraction for time series\nclassification (WEASEL) and its variant, WEASEL+MUSE take a bag-of-pattern approach to utilize carefully designed feature vectors (Schäfer & Leser (2017)).\nThe advent of deep learning allows researchers to classify not only univariate/multivariate data, but also large-size, video data using convolutional neural networks (Hara et al. (2017); Carreira & Zisserman (2017); Karim et al. (2018); Wang et al. (2017)). Thanks to the increasing computation power and memory of modern processing units, each video data in a minibatch are designed to be sufficiently long in the time domain such that class signature can be contained. Video length of the training, validation, and test data are often assumed to be fixed; however, ensuring sufficient length for all data may compromise the classification speed (i.e., number of samples that used for classification). We extensively test this issue in Section 5." }, { "heading": "C DERIVATION OF THE TANDEM FORMULA", "text": "The derivations of the important formulas in Section 3 are provided below.\nThe 0th order (i.i.d.) TANDEM formula. We use the following probability ratio to identify if the input sequence {x(s)}ts=1 is derived from either hypothesis H1 : y = 1 or H0 : y = 0.\np(x(1), ..., x(t)|y = 1) p(x(1), ..., x(t)|y = 0) . (70)\nWe can rewrite it with the posterior. First, by repeatedly using the Bayes rule, we obtain\np(x(1), x(2), ..., x(t)|y) = p(x(t)|x(t−1), x(t−2), ..., x(1), y)p(x(t−1), x(t−2), ..., x(1)|y) = p(x(t)|x(t−1), x(t−2), ..., x(1), y) × p(x(t−1)|x(t−2), x(t−3), ..., x(1), y)p(x(t−2), x(t−3), ..., x(1), y) = ...\n...\n= p(x(t)|x(t−1), x(t−2), ..., x(1), y)p(x(t−1)|x(t−2), x(t−3), ..., x(1), y) . . . p(x(2)|x(1), y) . (71)\nWe use this formula hereafter. Let us assume that the process {x(s)}ts=1 is conditionally-independently and identically distributed (hereafter simply noted as i.i.d.), namely\np(x(1), x(2), ..., x(t)|y) = t∏\ns=1\np(x(s)|y) , (72)\nwhich yields the following LLR representation (\"0-th order Markov process\"):\np(x(t)|x(t−1), x(t−2), ..., x(1), y) = p(x(t)|y) . (73)\nThen\np(x(1), x(2), ..., x(t)|y) = p(x(t)|y)p(x(t−1)|y) . . . 
p(x(2)|y)p(x(1)|y)\n= t∏ s=1 [ p(x(s)|y) ] =\nt∏ s=1 [ p(y|x(s))p(x(s)) p(y) ] .\nHence p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) = t∏ s=1 [ p(y = 1|x(s)) p(y = 0|x(s)) ]( p(y = 0) p(y = 1) )t , (74)\nor\nlog ( p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) ) = t∑ s=1 log ( p(y = 1|x(s)) p(y = 0|x(s)) ) − tlog ( p(y = 1) p(y = 0) ) . (75)\nThe 1st-order TANDEM formula. So far, we have utilized the i.i.d. assumption (73) or (72). Now let us derive the probability ratio of the first-order Markov process, which assumes\np(x(t)|x(t−1), x(t−2), ..., x(1), y) = p(x(t)|x(t−1), y) . (76)\nApplying (76) to (71), we obtain p(x(1), x(2), ..., x(t)|y)\n= p(x(t)|x(t−1), y)p(x(t−1)|x(t−2), y) . . . p(x(2)|x(1), y)p(x(1)|y)\n= t∏ s=2 [ p(x(s)|x(s−1), y) ] p(x(1)|y)\n= t∏ s=2 [ p(y|x(s), x(s−1))p(x(s), x(s−1)) p(x(s−1), y) ] p(y|x(1))p(x(1)) p(y)\n= t∏ s=2 [ p(y|x(s), x(s−1))p(x(s), x(s−1)) p(y|x(s−1))p(x(s−1)) ] p(y|x(1))p(x(1)) p(y) , (77)\nfor t ≥ 2. Hence p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) =\nt∏ s=2 [ p(y = 1|x(s), x(s−1)) p(y = 0|x(s), x(s−1)) ] t∏ s=3 [ p(y = 0|x(s−1)) p(y = 1|x(s−1)) ] p(y = 0) p(y = 1) , (78)\nor\nlog ( p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) ) = t∑ s=2 log ( p(y = 1|x(s), x(s−1)) p(y = 0|x(s), x(s−1)) ) − t∑ s=3 log ( p(y = 1|x(s−1)) p(y = 0|x(s−1)) ) − log ( p(y = 1) p(y = 0) ) . (79)\nFor t = 1 and t = 2, the natural extensions are\nlog ( p(x(1)|y = 1) p(x(1)|y = 0) ) = log ( p(y = 1|x(1)) p(y = 0|x(1)) ) − log ( p(y = 1) p(y = 0) ) log ( p(x(1), x(2)|y = 1) p(x(1), x(2)|y = 0) ) = log ( p(y = 1|x(1), x(2)) p(y = 0|x(1), x(2)) ) − log ( p(y = 1) p(y = 0) ) .\n(80)\nThe N -th order TANDEM formula. Finally we extend the 1st order TANDEM formula so that it can calculate the general N -th order log-likelihood ratio. The N -th order Markov process is defined as p(x(t)|x(t−1), x(t−2), ..., x(1), y) = p(x(t)|x(t−1), ..., x(t−N), y) . (81) Therefore, for t ≥ N + 2\np(x(1), x(2), ..., x(t)|y) = p(x(t)|x(t−1), ..., x(t−N), y)p(x(t−1)|x(t−2), ..., x(t−N−1), y) . . . p(x(2)|x(1), y)p(x(1)|y)\n= t∏ s=N+1 [ p(x(s)|x(s−1), ..., x(s−N), y) ] p(x(N), x(N−1), ..., x(1)|y)\n= t∏ s=N+1 [ p(y|x(s), ..., x(s−N))p(x(s), ..., x(s−N)) p(x(s−1), ..., x(s−N), y) ] p(y|x(N), ..., x(1))p(x(N), ..., x(1)) p(y)\n= t∏ s=N+1 [ p(y|x(s), ..., x(s−N))p(x(s), ..., x(s−N)) p(y|x(s−1), ..., x(s−N))p(x(s−1), ..., x(s−N)) ] p(y|x(N), ..., x(1))p(x(N), ..., x(1)) p(y) .\n(82) Hence\np(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) =\nt∏ s=N+1 [ p(y = 1|x(s), ..., x(s−N)) p(y = 0|x(s), ..., x(s−N)) ] t∏ s=N+2 [ p(y = 0|x(s−1), ..., x(s−N)) p(y = 1|x(s−1), ..., x(s−N)) ] p(y = 0) p(y = 1) , (83)\nor\nlog ( p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) ) =\nt∑ s=N+1 log ( p(y = 1|x(s), ..., x(s−N)) p(y = 0|x(s), ..., x(s−N)) ) − t∑ s=N+2 log ( p(y = 1|x(s−1), ..., x(s−N)) p(y = 0|x(s−1), ..., x(s−N)) ) − log ( p(y = 1)\np(y = 0)\n) . (84)\nFor t < N + 2, we obtain\nlog ( p(x(1), x(2), ..., x(t)|y = 1) p(x(1), x(2), ..., x(t)|y = 0) ) = log ( p(y = 1|x(1), x(2), ..., x(t)) p(y = 0|x(1), x(2), ..., x(t)) ) − log ( p(y = 1) p(y = 0) ) . (85)" }, { "heading": "D SUPPLEMENTARY DISCUSSION", "text": "Why is the SPRT-TANDEM superior to other baselines? The potential drawbacks common to the LSTM-s/m and EARLIEST is that they incorporate long temporal correlation: it may lead to (1) the class signature length problem and (2) vanishing gradient problem, as we described in Section 3. 
(1) If a class signature is significantly shorter than the correlation length in consideration, uninformative data samples are included in calculating the log-likelihood ratio, resulting in a late or wrong decision. (2) long correlations require calculating a long-range of backpropagation, prone to the vanishing gradient problem.\nAn LSTM-s/m-specific drawback is similar to that of Neyman-Pearson test, in the sense that it fixes the number of samples before performance evaluations. On the other hand, the SPRT, and the SPRT-TANDEM, classify various lengths of samples: thus, the SPRT-TANDEM can achieve a smaller sampling number with high accuracy on average. Another potential drawback of LSTM-s/m is that their loss function explicitly imposes monotonicity to the scores. While the monotonicity is advantageous for quick decisions, it may sacrifice flexibility: the LSTM-s/m can hardly change its mind during a classification.\nEARLIEST, the reinforcement-learning based classifier, decides on the various length of samples. A potential EARLIEST-specific drawback is that deep reinforcement learning is known to be unstable (Nikishin et al. (2018); Kumar et al.).\nHow optimal is the SPRT-TANDEM? In practice, it is difficult to strictly satisfy the necessary conditions for the SPRT’s optimality (Theorem A.5 and A.6) because of experimental limitations.\nOne of our primary interests is to apply the SPRT, the provably optimal algorithm, to real-world datasets. A major concern about extending Wald’s SPRT is that we need to know the true likelihood ratio a priori to implement the SPRT. Thus we propose the SPRT-TANDEM with the help of machine learning and density ratio estimation to remove the concern, if not completely. However, some technical limitations still exist. Let us introduce two properties of the SPRT that can prevent the SPRT-TANDEM from approaching exact optimality.\nFirstly, the SPRT is assumed to terminate for all the LLR trajectories under consideration with probability one. The corresponding equation stating this assumption is Equation (61) and (66) under the i.i.d. and non-i.i.d. condition, respectively. Given that this assumption (and the other minor technical conditions in Theorem A.5 and A.6) is satisfied, the more precisely we estimate the LLRs, the more we approach the genuine SPRT implementation and thus its asymptotic Bayes optimality.\nSecondly, the non-i.i.d. SPRT is asymptotically optimal when the maximum number of samples allowed is not fixed (infinite horizon). On the other hand, our experiment truncates the SPRT (finite horizon) at the maximum timestamp, which depends on the datasets. Under the truncation, gradually collapsing thresholds are proven to give the optimal stopping (Tartakovsky et al. (2014)); however, the collapsing thresholds are obtained via backward induction (Bingham et al. (2006)), which is possible only after observing the full sequence. Thus, under the truncation, finding the optimal solutions in a strict sense critically limits practical applicability.\nThe truncation is just an experimental requirement and is not an essential assumption for the SPRTTANDEM. Under the infinite horizon settings, the LLRs is assumed to increase or decrease toward the thresholds (Theorem A.6) in order to ensure the asymptotic optimality. 
However, we observed that the estimated LLRs tend to be asymptotically flat, especially when N is large (Figures 10, 11, and 12); the estimated LLRs can therefore violate the assumption of Theorem A.6.

One potential reason for the flat LLRs is the TANDEM formula itself: the first and second terms of the formula have different signs. Thus, the resulting log-likelihood ratio is updated only when the difference between the two terms is non-zero. Because the first and second terms depend on N + 1 and N inputs, respectively, it is expected that the contribution of one additional input becomes relatively small as N grows. We are aware of this issue and have already started working on it as future work.

Nevertheless, the flat LLRs at least do not spoil the practical efficiency of the SPRT-TANDEM, as our experiments show. In fact, because we cannot know the true LLR of real-world datasets, it is not easy to discuss whether the assumption of increasing LLRs is valid on the three databases (NMNIST, UCF, and SiW) we tested. Numerical simulation may be possible, but it is out of our scope because our primary interest is to implement a practically usable SPRT under real-world scenarios.

The best order N of the SPRT-TANDEM. The order N is a hyperparameter, as we mentioned in Section 3, and thus needs to be tuned to attain the best performance. However, each dataset has its own temporal structure, and thus it is challenging to determine the best order a priori. In the following, we provide a rough estimation of the best order, which may give dramatic benefit to users and may lead to exciting future works.

Let us introduce a concept, the specific time scale, which is used in physics to analyze the qualitative behavior of a physical system. Here, we define the specific time scale of a physical system as a temporal interval in which the physical system develops dramatically. For example, suppose that the physical system under consideration is a small segment of spacetime in which an unstable particle, ortho-positronium (o-Ps), exists. In this case, a specific time scale can be defined as the lifetime of o-Ps, 0.14 µs (Czarnecki (1999)), because the o-Ps is likely to vanish in 0.14 × O(1) µs; at that point the physical system has changed completely. Note that the definition of the specific time scale is not unique for one physical system; it depends on the phenomena the researcher focuses on. Specific (time) scales are often found in the fundamental equations that describe physical systems. In the example above, the decay equation N(t) = A exp(−t/τ) contains the lifetime τ ∈ R itself. Here N(t) ∈ R is the expected number of o-Ps' at time t ∈ R, and A ∈ R is a constant.

Let us borrow the concept of the specific time scale to estimate the best order of the SPRT-TANDEM before training neural networks, though there is a gap in scale. In this case, we define the specific time scale of a dataset as the number of frames after which a typical video in the dataset shows a completely different scene. As discussed below, we claim that the specific time scale of a dataset is a good estimate of the best order of the SPRT-TANDEM, because correlations shorter than the specific time scale are insufficient to distinguish each class, while longer correlations may be contaminated with noise and keep redundant information.

First, we consider Nosaic MNIST (NMNIST). The specific time scale of NMNIST can be defined as the half-life² of the noise, i.e., the temporal interval necessary for half of the noise to disappear.
It is 10 frames by definition of NMNIST, and approximately matches the best order of the SPRT-TANDEM: in Figure 3, our experiment shows that the 10th-order SPRT-TANDEM (with 11-frame correlations) outperforms the other orders at the later timestamps, though we did not perform experiments with all possible orders. A potential underlying mechanism is that too-long correlations retain noisy information from earlier timestamps, causing degradation, while too-short correlations do not fully utilize the past information.

Next, we discuss the two classes in the UCF101 action recognition database, handstand pushups and handstand walking, which are used in our experiment. The specific time scale is ∼ 10 frames, for the following reasons. The first class, handstand pushups, has a specific time scale of one cycle of raising and lowering one's body, ∼ 50 frames (according to the shortest video in the class). The second class, handstand walking, has a specific time scale of one cycle of walking, i.e., two steps, ∼ 10 frames (according to the longest video in the class). Therefore, the specific time scale of UCF is ∼ 10, the smaller of the two, since we can see whether there is a class signature in a video within at most ∼ 10 frames. This specific time scale matches the best order of the SPRT-TANDEM according to Figure 3.

Finally, the specific time scale of SiW is ∼ 1 frame, since a single image suffices to distinguish a real person from a spoofing image, thanks to the reflection of the display, the texture of the photo, or movements specific to a live person.³ The best order in Figure 3 is ∼ 1, matching the specific time scale.

We comment on two potential future works related to estimating the best order of the SPRT-TANDEM. First, as our experiments include only short videos, it is an interesting future work to estimate the best order of the SPRT-TANDEM in super-long video classification, where gradient vanishing becomes a problem and likelihood estimation becomes more challenging. Second, it is an exciting future work to analyze the relation of the specific time scale to the best order when there are multiple time scales. For example, recall the discussion of locality above in this Appendix D: applying the SPRT-TANDEM to a dataset with distributed class signatures is challenging. Distributed class signatures may have two specific time scales: e.g., one is the mean length of the signatures, and the other is the mean interval between the signatures.

² This choice of words is, strictly speaking, not correct, because the noise decay in NMNIST is linear; the definition of half-life in physics assumes the decay to be exponential.
³ In fact, the feature extractor, which classified a single frame into the two classes, showed fairly high accuracy in our experiment, without temporal information.

The best threshold λτ∗ of the SPRT-TANDEM. In practice, a user can change the thresholds after deploying the SPRT-TANDEM algorithm once, and thereby control the speed-accuracy tradeoff. Computing the speed-accuracy-tradeoff curve is inexpensive and, importantly, requires no re-training; a sketch of this computation is given below. According to the speed-accuracy-tradeoff curve, a user can choose the desired accuracy and speed. Note that this flexible property is missing in most other deep neural networks: controlling speed usually means changing the network structure and training it all over again.
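To illustrate why no re-training is needed, here is a minimal NumPy sketch (illustrative, not the original implementation) of a speed-accuracy-tradeoff curve computed by sweeping a symmetric threshold over precomputed LLR trajectories:

```python
import numpy as np

def sat_curve(llrs, labels, thresholds):
    """llrs: (n_videos, T) array of LLR trajectories; labels: (n_videos,) in {0, 1}."""
    labels = np.asarray(labels)
    curve = []
    for thr in thresholds:
        hit_times, correct = [], []
        for llr, y in zip(llrs, labels):
            crossed = np.where(np.abs(llr) >= thr)[0]
            t = crossed[0] if crossed.size > 0 else len(llr) - 1  # forced decision at T
            hit_times.append(t + 1)
            correct.append(int(llr[t] >= 0) == y)  # sign of the LLR decides the class
        correct = np.array(correct)
        bacc = 0.5 * (correct[labels == 1].mean() + correct[labels == 0].mean())
        curve.append((float(np.mean(hit_times)), float(bacc)))
    return curve  # list of (mean hitting time, balanced accuracy) pairs
```

A user can then pick the threshold whose (speed, accuracy) pair best matches the application, entirely at deployment time.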
End-to-end v.s. separate training. The design of the SPRT-TANDEM does not hamper end-to-end training of the neural networks; the feature extractor and temporal integrator can readily be connected for thorough backpropagation calculation. However, in Section 5, we trained the feature extractor and temporal integrator separately: after training the feature extractor, its trainable parameters are fixed before training the temporal integrator. We decided to train the two networks separately because we found that this achieves better balanced accuracy and mean hitting time. Originally, we trained the network on the NMNIST database in an end-to-end manner, but the accuracy was far lower than the result reported in Section 5. We observed the same phenomenon when we trained the SPRT-TANDEM on our private video database containing 1-channel infrared videos. These observations might indicate that while separate training may lose information necessary for classification compared to the end-to-end approach, it helps the training of the temporal integrator by fixing the information at each data point. It will be interesting to study whether this is a common problem in early-classification algorithms and to find the right balance between end-to-end and separate training to benefit from both approaches.

Feedback to the field of neuroscience. Kira et al. (2015) experimentally showed that the SPRT could explain neural activities in area LIP of the macaque parietal lobe. They randomly presented a sequence of visual objects with associated reward probabilities. A natural question arises from here: what if the presented sequence is not random, but a time-dependent visual sequence? Will the neural activity be explained by our SPRT-TANDEM, or will the neurons utilize a completely different algorithm? Our research provides one driving hypothesis to lead the neuroscience community to a deeper understanding of the brain's decision-making system.

Usage of statistical tests. As of the writing of this manuscript, not all computer science papers use statistical tests to evaluate their experiments. However, in order to provide an objective comparison across proposed and existing models, running multiple validation trials with random seeds followed by a statistical test is helpful. Thus, the authors hope that our paper stimulates the field of computer science to utilize statistical tests more actively.

Ethical concern. The proposed method, SPRT-TANDEM, is a general algorithm applicable to a broad range of serial data, such as auditory signals or video frames. Thus, any ethical concerns entirely depend on the application and training database, not on our algorithm per se. For example, if the SPRT-TANDEM is applied to face spoofing detection, using faces of people of one particular racial or ethnic group as training data may lead to a bias toward or against people of other groups. However, this bias is a concern in machine learning in general, not specific to the SPRT-TANDEM.

Is the SPRT-TANDEM "too local"? In our experiments in Section 5, the SPRT-TANDEM with the maximum correlation allowed (i.e., 19th, 49th, and 49th order on the NMNIST, UCF, and SiW databases, respectively) does not necessarily reach the highest accuracy with a larger number of frames. Instead, depending on the database, a lower order of approximation, such as the 10th-order TANDEM, outperforms the other orders. In the SiW database, this observation is especially prominent: the model that records the highest balanced accuracy is the 2nd-order SPRT-TANDEM.
While this may indicate that our TANDEM formula with the "dropping correlation" strategy works well, as we expected, a remaining concern is that the SPRT may integrate overly local information. What if class signatures are far separated in time?

In such a case, the SPRT-TANDEM may fail to integrate the distributed class signatures for correct classification. On the other hand, the SPRT-TANDEM may be able to add the useful information of the class signatures to the LLR only when encountering the signatures (in other words, it may not add non-zero values to the LLR without seeing class signatures). The SPRT-TANDEM may thus be able to skip unnecessary data points without modification, or with a modification similar to SkipRNN (Campos et al. (2018)), which actively achieves this goal: by learning which data points are unnecessary, SkipRNN skips updating the internal state of the RNN in order to attend only to informative data. Similarly, we can modify the SPRT-TANDEM so that it learns to skip updating the LLR upon encountering uninformative data. This will be exciting future work, and the authors look forward to testing the SPRT-TANDEM on a challenging database with distributed class signatures.

A more challenging dataset: Nosaic MNIST-Hard (NMNIST-H). In the main text, we see that the accuracy of the SPRT-TANDEM saturates within a few timestamps. It is therefore worth testing the models on a dataset that requires more samples to reach good performance. We create a more challenging dataset, Nosaic MNIST-Hard: the MNIST handwritten digits are buried in heavier noise than in the original NMNIST (only 10 pixels/frame are revealed, while it is 40 pixels/frame for the original NMNIST). The resulting speed-accuracy tradeoff curves below show that the SPRT-TANDEM outperforms LSTM-s/m by more than the error-bar range, even on this more challenging dataset, which requires more timestamps to attain accuracy saturation." }, { "heading": "E LOSS FOR LOG-LIKELIHOOD RATIO ESTIMATION (LLLR)", "text": "In this section, we discuss a deep connection of the novel loss function, the LLLR,

L_{LLR} = \frac{1}{M} \sum_{i \in I_1} |1 - \sigma(\log \hat{r}(X_i))| + \frac{1}{M} \sum_{i \in I_0} \sigma(\log \hat{r}(X_i)) ,   (86)

to density ratio estimation (Sugiyama et al. (2012; 2010)). Here, X_i := \{x_i^{(t)} \in \mathbb{R}^{d_x}\}_{t=1}^{T} and y_i \in \{1, 0\} (i \in I := I_1 \cup I_0, T \in \mathbb{N}, d_x \in \mathbb{N}) are a sequence of samples and a label, respectively, where I, I_1, and I_0 are the index sets of the whole dataset, class 1, and class 0, respectively. \hat{r}(X_i) (i \in I) is the likelihood ratio of X_i. The hatted notation ( \hat{\cdot} ) means that the quantity is an estimate obtained with, e.g., a neural network on the training dataset \{(X_i, y_i)\}_{i \in I}. Note that we do not necessarily have to compute \hat{p}(X_i|y=1) and \hat{p}(X_i|y=0) separately to obtain the likelihood ratio \hat{r}(X_i) = \hat{p}(X_i|y=1)/\hat{p}(X_i|y=0); we can estimate \hat{r} directly, as explained in the following subsections.
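Since the experiments use TensorFlow 2.0, a minimal sketch of the LLLR (86) might look as follows (illustrative; it assumes the network outputs the estimated log-likelihood ratio log r̂(X_i) per sequence):

```python
import tensorflow as tf

def lllr_loss(log_ratio, labels):
    """Eq. (86): log_ratio = estimated log r̂(X_i), shape (M,); labels in {0, 1}."""
    s = tf.sigmoid(log_ratio)
    y = tf.cast(labels, s.dtype)
    # |1 - sigma(log r̂)| for class-1 samples, sigma(log r̂) for class-0 samples,
    # averaged over the whole batch of size M
    return tf.reduce_mean(y * tf.abs(1.0 - s) + (1.0 - y) * s)
```

Note that, unlike a plain negative log-likelihood, each per-sample term lies in [0, 1], which is the lower-boundedness property emphasized below.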
In the following, we first introduce KLIEP (the Kullback-Leibler Importance Estimation Procedure, Sugiyama et al. (2008)), which underlies the theoretical aspects of the LLLR. KLIEP was originally invented to estimate a density ratio without directly estimating the densities. The idea is to minimize the Kullback-Leibler divergence between the true density p(X|y=1) and the estimated density \hat{r}(X) p(X|y=0), where (X, y) is a sequential data-label pair defined on the same space as the (X_i, y_i)'s. Next, we introduce the symmetrized KLIEP, which cares not only about p(X|y=1) and \hat{r}(X) p(X|y=0), but also about p(X|y=0) and \hat{r}^{-1}(X) p(X|y=1), to remove the asymmetry inherent in the Kullback-Leibler divergence. Finally, we show the equivalence of the symmetrized KLIEP to the LLLR; specifically, we show that the LLLR minimizes the Kullback-Leibler divergence between the true and the estimated densities, and further stabilizes the training by restricting the value of the likelihood ratio." }, { "heading": "E.1 DENSITY RATIO ESTIMATION AND KLIEP", "text": "In this section, we briefly review density ratio estimation and introduce KLIEP.

Density estimation is the construction of the underlying probability densities based on observed datasets. Taking their ratio, we can naively estimate the density ratio; however, division by an estimated quantity is likely to amplify the estimation error (Sugiyama et al. (2012; 2010)). Density ratio estimation has been developed to circumvent this problem. We can categorize the methods into the following four: probabilistic classification, moment matching, density ratio fitting, and density fitting.

Probabilistic classification. The idea of probabilistic classification is that the posterior density p(y|X) is easier to estimate than the likelihood p(X|y). Notice that

\hat{r}(X) = \frac{\hat{p}(X|y=1)}{\hat{p}(X|y=0)} = \frac{\hat{p}(y=1|X)}{\hat{p}(y=0|X)} \frac{\hat{p}(y=0)}{\hat{p}(y=1)} = \frac{\hat{p}(y=1|X)}{\hat{p}(y=0|X)} \frac{M_0}{M_1} ,   (87)

where M_1 and M_0 denote the numbers of training data points with label 1 and 0, respectively. Thus, we can estimate the likelihood ratio from the estimated posterior ratio. The multiplet cross-entropy loss conducts density ratio estimation in this way.

Moment matching. The moment matching approach aims to match the moments of p(X|y=1) and \hat{r}(X) p(X|y=0), according to the fact that two distributions are identical if and only if all of their moments agree with each other.

Density ratio fitting. Without knowing the true densities, we can directly minimize the difference between the true and estimated ratios as follows:

\mathrm{argmin}_{\hat{r}} \left[ \int dX\, p(X|y=0) (\hat{r}(X) - r(X))^2 \right]   (88)
= \mathrm{argmin}_{\hat{r}} \left[ \int dX\, p(X|y=0)\, \hat{r}(X)^2 - 2 \int dX\, p(X|y=1)\, \hat{r}(X) \right]   (89)
\approx \mathrm{argmin}_{\hat{r}} \left[ \frac{1}{M_0} \sum_{i \in I_0} \hat{r}(X_i)^2 - \frac{2}{M_1} \sum_{i \in I_1} \hat{r}(X_i) \right] .   (90)

Here, we applied the empirical approximation. In addition, we restrict the value of \hat{r}(X): \hat{r}(X) \geq 0. Since (90) is not bounded below, we must add other terms or impose more constraints, as is done in the original paper (Kanamori et al. (2009)). This formulation of density ratio estimation is referred to as least-squares importance fitting (LSIF, Kanamori et al. (2009)).

Density fitting. Instead of the squared expectation, KLIEP minimizes the Kullback-Leibler divergence:

\mathrm{argmin}_{\hat{r}} \left[ \mathrm{KL}(p(X|y=1) \,\|\, \hat{r} p(X|y=0)) \right]   (91)
= \mathrm{argmin}_{\hat{r}} \left[ \int dX\, p(X|y=1) \log\left( \frac{p(X|y=1)}{\hat{r}(X)\, p(X|y=0)} \right) \right]   (92)
= \mathrm{argmin}_{\hat{r}} \left[ - \int dX\, p(X|y=1) \log(\hat{r}(X)) \right] .   (93)

We need to restrict \hat{r}:

\hat{r}(X) \geq 0   (94)
\int dX\, \hat{r}(X)\, p(X|y=0) = 1 ,   (95)

where the first inequality ensures the positivity of the probability ratio, while the second equation is the normalization condition. Applying the empirical approximation, we obtain the final objective and constraints:

\mathrm{argmin}_{\hat{r}} \left[ \frac{1}{M_1} \sum_{i \in I_1} - \log \hat{r}(X_i) \right]   (96)
\hat{r}(X) \geq 0   (97)
\frac{1}{M_0} \sum_{i \in I_0} \hat{r}(X_i) = 1 .   (98)

Several papers implement the algorithms mentioned above using deep neural networks. In Nam & Sugiyama (2015), LSIF is applied to outlier detection with a deep neural network implementation, whereas in Khan et al. (2019), KLIEP and its variant are applied to changepoint detection."
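A small NumPy sketch of the empirical KLIEP objective (96), with the normalization constraint (98) enforced by rescaling and the positivity constraint (97) by clipping (illustrative, not the original implementation):

```python
import numpy as np

def kliep_objective(r_hat_pos, r_hat_neg):
    """Empirical KLIEP objective, Eqs. (96)-(98).

    r_hat_pos: r̂(X_i) evaluated on class-1 samples (i in I1)
    r_hat_neg: r̂(X_i) evaluated on class-0 samples (i in I0)
    """
    z = np.mean(r_hat_neg)                        # constraint (98): (1/M0) sum r̂ = 1
    r_pos = np.clip(r_hat_pos / z, 1e-12, None)   # constraint (97): positivity
    return -np.mean(np.log(r_pos))                # objective (96)
```

Because the objective is an unbounded-below negative log, training can become unstable when r̂ drifts toward 0 or infinity, which motivates the symmetrized and bounded alternatives discussed next.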
}, { "heading": "E.2 THE SYMMETRIZED KLIEP LOSS", "text": "As shown above, KLIEP minimizes the Kullback-Leibler divergence; however, its asymmetry can cause instability of the training, and thus we introduce the symmetrized KLIEP loss. A similar idea was proposed in Khan et al. (2019) independently of our analysis.\nFirst, notice that KL(p(X|y = 1)||r̂p(X|y = 0)) = ∫ dXp(X|y = 1) log( p(X|y = 1)\nr̂(X)p(X|y = 0) ) (99) = − ∫ dXp(X|y = 1) log(r̂(X)) + const. (100)\nThe constant term is independent of the weight parameters of the network and thus negligible in the following discussion. Similarly,\nKL(p(X|y = 0)||r̂−1p(X|y = 1)) = − ∫ dXp(X|y = 1) log(r̂(X)−1) + const. (101)\nWe need to restrict the value of r̂ in order for p(X|y = 1) and p(X|y = 0) to be probability densities: 0 ≤ r̂(X)p(X|y = 0) (102)∫ dXr̂(X)p(X|y = 0) = 1 , (103)\nand 0 ≤ r̂(X)−1p(X|y = 1) (104)∫ dXr̂(X)−1p(X|y = 1) = 1 , (105)\nTherefore, we define the symmetrized KLIEP loss as\nLKLIEP :=\n∫ dX(−p(X|y = 1) log r̂(X))− ∫ dX(−p(X|y = 0) log r̂(X)) (106)\nwith the constraints (102)-(105). The estimated ratio function argminr̂(X)LKLIEP with the constraints minimizes KL(p(X|y = 1)||r̂(X)p(X|y = 0)) + KL(p(X|y = 0)||r̂−1p(X|y = 1))). According to the empirical approximation, they reduce to\nLKLIEP({Xi}Mi=1) ; 1\nM1 ∑ i∈I1 − log(r̂(Xi)) + 1 M0 ∑ i∈I0 − log(r̂(Xi)−1), (107)\n r̂(X) ≥ 0 (108) 1 M0 ∑ i∈I0 r̂(Xi) = 1 (109) 1\nM1 ∑ i∈I1 r̂(Xi) −1 = 1 . (110)" }, { "heading": "E.3 THE LLLR AND DENSITY RATIO ESTIMATION", "text": "Let us investigate the LLLR in connection with the symmetrized KLIEP loss.\nDivergence terms. First, we focus on the divergence terms in (107): 1\nM1 ∑ i∈I1 − log(r̂(Xi)) (111)\n1\nM0 ∑ i∈I0 − log(r̂(Xi)−1) . (112)\nAs shown above, decreasing (111) and (112) leads to minimizing the Kullback-Leibler divergence of p(X|y = 1) and r̂p(X|y = 0) and that of p(X|y = 0) and r̂−1p(X|y = 1) respectively. The counterparts in the LLLR are\nLLLR = 1\nM ∑ i∈I1 |1− σ(log r̂(Xi))| ↔ 1 M1 ∑ i∈I1 − log(r̂(Xi)) (113)\n+ 1\nM ∑ i∈I0 σ(log r̂(Xi)) ↔ 1 M0 ∑ i∈I0 − log(r̂(Xi)−1) , (114)\nbecause, on one hand, both terms in (113) ensures the likelihood ratio r̂ to be large for class 1, and, on the other hand, both terms in (114) ensures r̂ to be small for class 0. Therefore, minimizing LLLR is equivalent to decreasing both (111) and (112) and therefore to minimizing (108), i.e., the Kullback-Leibler divergences of the true and estimated densities.\nAgain, we emphasize that the LLLR is more stable, since LLLR is lower-bounded unlike the KLIEP loss.\nConstraints. Next, we show that the LLLR implicitly keeps r̂ not too large nor too small; specifically, with increasing R̂(Xi) := | log r̂(Xi)|, the gradient converges to zero before R̂(Xi) enters the region, e.g., R̂(Xi) & 1. Therefore the gradient descent converges before r̂(Xi) becomes too large or small. To show this, we first write the gradients explicitly:\n∇Wσ(log(r̂(Xi))) = σ′(log r̂(Xi)) · ∇W log r̂(Xi) (115)\nwhere W is the weight and σ is the sigmoid function. We see that with increasing R̂(Xi) = | log r̂(Xi)|, the factor\nσ′(log r̂(Xi)) (116)\nconverges to zero, because (116) ∼ 0 for too large or small r̂(Xi), e.g., for around R̂(Xi) & 1. 
Thus the gradient (115) vanishes before \hat{r}(X_i) becomes too large or too small, i.e., it keeps \hat{r}(X_i) moderate.

In conclusion, the LLLR minimizes the difference between the true (p(X|y=1) and p(X|y=0)) and the estimated (\hat{r}^{-1}(X) p(X|y=1) and \hat{r}(X) p(X|y=0)) densities in the sense of the Kullback-Leibler divergence, including the effective constraints.

THE TRIVIAL SOLUTION IN EQUATION (115). We show that a vanishing

\nabla_W \log \hat{r}(X_i)   (117)

in (115) corresponds to a trivial solution; i.e., we show that (117) = 0 forces the bottleneck feature vectors to be zero. Let us follow the notation of Table 4 and Figure 5. We particularly focus on the last components of the gradient \nabla_W \log \hat{r}(X_i), i.e., \nabla_{W^{(L)}_{ab}} \log \hat{r}(X_i) (a \in [d_{L-1}] and b \in [d_L] = \{0, 1\}). Specifically,

\nabla_{W^{(L)}_{ab}} \log \hat{r} = \nabla_{W^{(L)}_{ab}} \log \hat{p}(X|y=1) - \nabla_{W^{(L)}_{ab}} \log \hat{p}(X|y=0)
= \sum_{y=1,0} \left[ \frac{\partial g^{(L)}_y}{\partial W^{(L)}_{ab}} \frac{\partial \log \hat{p}(X|y=1)}{\partial g^{(L)}_y} - \frac{\partial g^{(L)}_y}{\partial W^{(L)}_{ab}} \frac{\partial \log \hat{p}(X|y=0)}{\partial g^{(L)}_y} \right] .   (118)

Since \partial \log \hat{p}_{y'} / \partial g^{(L)}_y = \delta_{yy'} - \hat{p}_y, where y, y' \in \{1, 0\} and \delta_{yy'} is the Kronecker delta, we see

\frac{\partial \log \hat{r}}{\partial g^{(L)}_y} = (\delta_{y1} - \hat{p}_y) - (\delta_{y0} - \hat{p}_y) = \begin{cases} 1 & (\text{if } y = 1) \\ -1 & (\text{if } y = 0) \end{cases}   (119)

\therefore \; (118) = \frac{\partial g^{(L)}_1}{\partial W^{(L)}_{ab}} \cdot 1 + \frac{\partial g^{(L)}_0}{\partial W^{(L)}_{ab}} \cdot (-1) = \delta_{1b} f^{(L-1)}_a - \delta_{0b} f^{(L-1)}_a .   (120)

Thus (117) = 0 \Longrightarrow (118) = 0 \Longleftrightarrow f^{(L-1)}_a = 0 (\forall a \in [d_{L-1}]), which is a trivial solution, because the bottleneck feature vector collapses to zero at convergence. Our experiments, however, show that our model does not tend to such a trivial solution; otherwise, the SPRT-TANDEM could not attain such high performance.
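The saturation of the factor (116) is easy to verify numerically; a short illustrative sketch:

```python
import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)  # sigma'(z)

# sigma'(log r̂) decays quickly as |log r̂| grows, damping the gradient (115)
for log_r in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(log_r, sigmoid_grad(log_r))  # 0.25, 0.197, 0.105, 6.6e-3, 4.5e-5
```

Already around |log r̂| of a few units, the multiplicative factor has shrunk by orders of magnitude, which is the implicit constraint keeping r̂ moderate.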
" }, { "heading": "E.4 PREPARATORY EXPERIMENT TESTING THE EFFECTIVENESS OF THE LLLR", "text": "To test whether the proposed LLLR could effectively train a neural network, we ran two preliminary experiments before the main manuscript. First, we compared training the proposed network architecture with L_{multiplet}, with and without the LLLR. Next, we compared training the network using L_{multiplet} with the LLLR against training with L_{multiplet} and the KLIEP loss, L_{KLIEP}, whose numerator and denominator were carefully bounded so that L_{KLIEP} did not diverge.

We tested the effectiveness of the LLLR on a 3D-ness detection task on a depth-from-motion (DfM) dataset.⁴ The DfM dataset was a small dataset containing 2320 and 2609 3D- and 2D-facial videos, respectively, each of which consisted of 10 frames. In each video, a face was filmed from various angles (Figure 6a), so that the dynamics of the facial features could be used to determine whether the face appearing in a video is a flat, 2D face, or has a 3D structure. The recording device was an iPhone7. Here, the feature extractor f_{w_{FE}}(x^{(t)}) was the LBP-AdaBoost algorithm (Viola & Jones (2001)) combined with the supervised descent method (Xiong & De la Torre (2013)), which took the t-th frame of the given video as input x^{(t)} and output facial feature points (Figure 6b). The feature points were output as a vector of length 152, consisting of the vertical and horizontal pixel positions of 76 facial feature points. The temporal integrator g_{w_{TI}}(x^{(t)}) is an LSTM whose number of hidden units equals that of the feature points. The 1st-order SPRT-TANDEM was evaluated on two hypotheses, y = 1: 2D face, and y = 0: 3D face. We assumed a flat prior, p(y = 1) = p(y = 0). The validation and test data were each 10% of the entire data, randomly selected at the beginning of training.

⁴ For the protection of personal information, this database cannot be made public.

We conducted a 10-fold cross-validation test to evaluate the effect of the LLLR. We compared the classification performance of the SPRT-TANDEM network using both the LLLR and L_{multiplet}, and using L_{multiplet} only. We also compared LLLR + L_{multiplet} and L_{KLIEP} + L_{multiplet}. To use L_{KLIEP} without making the loss diverge, we set the upper and lower bounds of the numerator and denominator of \hat{r} to 10^5 and 10^{-5}, respectively. Out of 100 training epochs, the results of the last 80 epochs were used to calculate test equal error rates (EERs). A two-way ANOVA with factors \"loss type\" and \"epoch\" was conducted to see whether the difference in loss function caused statistically significantly different EERs. We included the epoch as a factor in order to see whether the value of the EER reached a plateau in the last 80 epochs (i.e., statistically NOT significant). As we expected, EER values across training epochs were not significantly different (p = 0.17). On the other hand, the loss type caused statistically significant differences between the loss groups (i.e., LLLR + L_{multiplet}, L_{KLIEP} + L_{multiplet}, and L_{multiplet}; p < 0.001). The following Tukey-Kramer multi-comparison test showed that training with the LLLR statistically significantly reduced the EER compared to both L_{KLIEP} (p = 9.56 × 10^{-10}) and the LLLR-ablated loss (p = 9.56 × 10^{-10}). The result is plotted in Figure 7." }, { "heading": "F PROBABILITY DENSITY RATIO ESTIMATION WITH THE LLLR", "text": "Below we test whether the proposed LLLR can help a neural network estimate the true probability density ratio. Providing the ground-truth probability density ratio was difficult in the three databases used in the main text, because it was prohibitive to find the true probability distribution of the public databases containing real-world scenes. Thus, we create a toy model estimating the probability density ratio of two multivariate Gaussian distributions. Experimental results show that a multilayer perceptron (MLP) trained with the proposed LLLR achieves a smaller estimation error than an MLP trained with the cross-entropy (CE) loss." }, { "heading": "F.1 EXPERIMENTAL SETTINGS", "text": "Following Sugiyama et al. (2008), let p_0(x) be the d-dimensional Gaussian density with mean (2, 0, 0, ..., 0) and identity covariance, and p_1(x) be the d-dimensional Gaussian density with mean (0, 2, 0, ..., 0) and identity covariance.

The task for the neural network is to estimate the density ratio:

\hat{r}(x_i) = \frac{\hat{p}_1(x_i)}{\hat{p}_0(x_i)} .   (121)

Here, x is sampled from one of the two Gaussian distributions, p_0 or p_1, and is associated with class label y = 0 or y = 1, respectively. We compared the two loss functions, the CE loss and the LLLR:

L_{LLR} := \frac{1}{N} \sum_{i=1}^{N} |y_i - \sigma(\log \hat{r}_i)| ,   (122)

where \sigma is the sigmoid function.

A simple neural network consisting of a 3-layer fully-connected network with nonlinear activation (ReLU) is used for estimating \hat{r}(x).

The evaluation metric is the normalized mean squared error (NMSE, Sugiyama et al. (2008)):

\mathrm{NMSE} := \frac{1}{N} \sum_{i=1}^{N} \left( \frac{\hat{r}_i}{\sum_{j=1}^{N} \hat{r}_j} - \frac{r_i}{\sum_{j=1}^{N} r_j} \right)^2 .   (123)
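A compact NumPy sketch of this toy setup and the NMSE metric (123) (illustrative; the actual experiment trained a 3-layer MLP to produce r̂):

```python
import numpy as np

d, N = 10, 1000
rng = np.random.default_rng(0)
mu0, mu1 = np.zeros(d), np.zeros(d)
mu0[0], mu1[1] = 2.0, 2.0

x = rng.normal(size=(N, d)) + mu0             # samples drawn from p0
# For isotropic Gaussians the normalizers cancel in the ratio:
log_p1 = -0.5 * np.sum((x - mu1) ** 2, axis=1)
log_p0 = -0.5 * np.sum((x - mu0) ** 2, axis=1)
r_true = np.exp(log_p1 - log_p0)              # ground-truth ratio p1/p0

def nmse(r_hat, r_true):
    """Normalized mean squared error, Eq. (123)."""
    a = r_hat / r_hat.sum()
    b = r_true / r_true.sum()
    return np.mean((a - b) ** 2)
```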
" }, { "heading": "F.2 DENSITY ESTIMATION RESULTS", "text": "To calculate statistics, the MLP was trained either with the LLLR or the CE loss, repeated 40 times with different random initial variables. Figure 8 shows the mean NMSE, with the shading showing the standard error of the mean. Although training with the LLLR does not decrease the NMSE well in the first few thousand iterations, the NMSE reaches as low as 10^{-5} around 14000 iterations. In contrast, training with CE shows a steep decrease of the NMSE in the first 2000 iterations, but saturates after that. Thus, the proposed LLLR not only facilitates sequential binary hypothesis testing, but also facilitates the estimation of the true density ratio." }, { "heading": "G MULTIPLET CROSS-ENTROPY LOSS", "text": "In this section, we show that estimation of the true posterior is realized by minimizing the multiplet cross-entropy loss defined in Section 4, on the basis of the principle of maximum likelihood estimation.

First, let us consider the 1st order for simplicity. The multiplet cross-entropy loss ensures that the posterior \hat{p}(y|x^{(t)}) estimated by the network is close to the true posterior p(y|x^{(t)}). Consider the Kullback-Leibler divergence of \hat{p}(y|x^{(t)}) and p(y|x^{(t)}) for some x^{(t)} \in \mathbb{R}^{d_x} (t \in \mathbb{N}), where y \in \{0, 1\}:

\mathrm{argmin}_{\hat{p}}\; \mathbb{E}_{x^{(t)} \sim p(x^{(t)})} \left[ \mathrm{KL}(p(y|x^{(t)}) \,\|\, \hat{p}(y|x^{(t)})) \right]   (124)
= \mathrm{argmin}_{\hat{p}}\; \mathbb{E}_{(x^{(t)}, y) \sim p(x^{(t)}, y)} \left[ - \log \hat{p}(y|x^{(t)}) \right]   (125)
\approx \mathrm{argmin}_{\hat{p}}\; \frac{1}{M} \sum_{i=1}^{M} \left[ - \log \hat{p}(y_i|x^{(t)}_i) \right] .   (126)

Thus, the last line shows that a smaller singlet loss leads to a smaller Kullback-Leibler divergence; in other words, we can estimate the true posterior density by minimizing the multiplet loss, which is necessary to run the SPRT algorithm.

Similarly, we adopt the doublet cross-entropy to estimate the true posterior p(y|x^{(t)}, x^{(t+1)}):

\mathrm{argmin}_{\hat{p}}\; \mathbb{E}_{(x^{(t)}, x^{(t+1)}) \sim p(x^{(t)}, x^{(t+1)})} \left[ \mathrm{KL}(p(y|x^{(t)}, x^{(t+1)}) \,\|\, \hat{p}(y|x^{(t)}, x^{(t+1)})) \right]   (127)
\approx \mathrm{argmin}_{\hat{p}}\; \frac{1}{M} \sum_{i=1}^{M} \left[ - \log \hat{p}(y_i|x^{(t)}_i, x^{(t+1)}_i) \right] .   (128)

The crucial difference from the singlet loss is that the doublet loss involves the temporal correlation between x^{(t)} and x^{(t+1)}, which is necessary to implement the SPRT-TANDEM. Similar statements hold for the other orders." }, { "heading": "H HYPERPARAMETER OPTIMIZATION", "text": "We used Optuna, a hyperparameter optimization framework, to determine the hyperparameters. Hyperparameter search trials are followed by performance-evaluation trials with the fixed hyperparameter configuration. The evaluation criterion used by Optuna to find the best parameter combination is the balanced accuracy. For models that produce multiple balanced accuracy / mean hitting time combinations, we use the average of the balanced accuracy at every natural number of the mean hitting time (e.g., one frame, two frames)." }, { "heading": "H.1 NOSAIC MNIST (NMNIST)", "text": "SPRT-TANDEM: feature extractor. ResNet version 1 with 110 layers and 128 final output channels (total trainable parameters: 6.9M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-2}, 10^{-3}}
optimizer ∈ {Adam, Momentum, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}

Here, Adam, Momentum, and RMSprop are based on Kingma & Ba (2014), Rumelhart et al. (1986), and Graves (2013), respectively. The batch size and number of training epochs are fixed to 64 and 50, respectively. The best hyperparameter combination is summarized in Table 5.

One search trial takes approximately 5 hours on our computing infrastructure (see Appendix K).
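A minimal Optuna sketch of how such a categorical search space can be encoded (illustrative; `train_and_evaluate` is a hypothetical stand-in for the actual training and evaluation loop, which should return the balanced accuracy):

```python
import optuna

def train_and_evaluate(lr, opt, wd):
    # Hypothetical stand-in for training and evaluating the model;
    # returns a dummy balanced-accuracy score for illustration only.
    return 0.9 - abs(lr - 1e-3) - abs(wd - 1e-4)

def objective(trial):
    lr = trial.suggest_categorical("learning_rate", [1e-2, 1e-3])
    opt = trial.suggest_categorical("optimizer", ["Adam", "Momentum", "RMSprop"])
    wd = trial.suggest_categorical("weight_decay", [1e-3, 1e-4, 1e-5])
    return train_and_evaluate(lr, opt, wd)

study = optuna.create_study(direction="maximize")  # maximize balanced accuracy
study.optimize(objective, n_trials=30)
print(study.best_params)
```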
SPRT-TANDEM: temporal integrator. Peephole-LSTM with a hidden layer of size 128 (total trainable parameters: 0.1M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}}
batch size ∈ {256, 512, 1024}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}

Here, Adagrad is based on Duchi et al. (2011). The number of training epochs is fixed to 50. The number of search trials and the resulting best hyperparameter combination are summarized in Table 6.

One search trial takes approximately 3 hours on our computing infrastructure (see Appendix K).

LSTM-m / LSTM-s. Peephole-LSTM with a hidden layer of size 128 (total trainable parameters: 0.1M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}
lambda ∈ {0.01, 0.1, 1, 6, 10, 100}

where lambda is a parameter specific to LSTM-m / LSTM-s. The batch size and number of training epochs are fixed to 1024 and 100, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 7.

One search trial takes approximately 3 hours on our computing infrastructure (see Appendix K).

EARLIEST. LSTM with a hidden layer of size 128 (total trainable parameters: 0.1M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}

The batch size and number of training epochs are fixed to 15 and 2, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 8.

One search trial takes approximately 48 hours on our computing infrastructure (see Appendix K).

3DResNet. 3DResNet with 101 layers and 128 final output channels (total trainable parameters: 7.7M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-3}, 10^{-4}, 10^{-5}}
batch size ∈ {100, 200, 500}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}   (129)

The optimizer and number of training epochs are fixed to Adam and 50, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 9.

Ablation experiment. Peephole-LSTM with a hidden layer of size 128 (total trainable parameters: 0.1M) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}

The batch size and number of training epochs are fixed to 1024 and 100, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 10." }, { "heading": "H.2 UCF", "text": "SPRT-TANDEM: feature extractor. ResNet version 2 with 50 layers and 64 final output channels (total trainable parameters: 26K) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}   (130)

The batch size, optimizer, and number of training epochs are fixed to 512, Adam, and 100, respectively. The best hyperparameter combination is summarized in Table 11.

SPRT-TANDEM: temporal integrator. Peephole-LSTM with a hidden layer of size 64 (total trainable parameters: 33K) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}}
batch size ∈ {57, 114, 171, 342}
optimizer ∈ {Adam, RMSprop}
dropout ∈ {0.1, 0.2, 0.3, 0.4}

The weight decay and number of training epochs are fixed to 10^{-4} and 100, respectively. The number of search trials and the best hyperparameter combination are summarized in Table 12.

LSTM-m / LSTM-s. Peephole-LSTM with a hidden layer of size 64 (total trainable parameters: 33K) is used.
Hyperparameters are searched within the following space:

learning rate ∈ {10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}}
batch size ∈ {57, 114, 171, 342}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0.1, 0.2, 0.3, 0.4}
lambda ∈ {0.01, 0.1, 1, 6, 10, 100}

The number of training epochs is fixed to 100. The number of search trials and the resulting best hyperparameter combination are summarized in Table 13.

EARLIEST. LSTM with a hidden layer of size 64 (total trainable parameters: 33K) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}}
optimizer ∈ {Adam, Momentum, Adagrad, RMSprop}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}
dropout ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}

The batch size and number of training epochs are fixed to 1 and 30, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 14.

3DResNet. 3DResNet with 50 layers and 64 final output channels (total trainable parameters: 52K) is used. Hyperparameters are searched within the following space:

learning rate ∈ {10^{-3}, 10^{-4}, 10^{-5}}
weight decay ∈ {10^{-3}, 10^{-4}, 10^{-5}}   (131)

The batch size, optimizer, and number of training epochs are fixed to 19, Adam, and 50, respectively. The number of search trials and the resulting best hyperparameter combination are summarized in Table 15." }, { "heading": "H.3 SIW", "text": "The large database and network size prevented us from running multiple parameter-search trials on the SiW database. Thus, we manually selected the hyperparameters as follows.

SPRT-TANDEM: feature extractor. ResNet version 2 with 152 layers and 512 final output channels (total trainable parameters: 3.7M) is used. Table 16 shows the fixed parameter combination.

SPRT-TANDEM: temporal integrator. Peephole-LSTM with a hidden layer of size 512 (total trainable parameters: 2.1M) is used. Table 17 shows the fixed parameter combination.

LSTM-m / LSTM-s. Peephole-LSTM with a hidden layer of size 512 (total trainable parameters: 2.1M) is used. Table 18 shows the fixed parameter combination.

EARLIEST. LSTM with a hidden layer of size 512 (total trainable parameters: 2.1M) is used. Table 19 shows the fixed parameter combination.

3DResNet. 3DResNet with 101 layers and 512 final output channels (total trainable parameters: 5.3M) is used. Table 20 shows the fixed parameter combination." }, { "heading": "I STATISTICAL TEST DETAILS", "text": "The models we compared in the experiment have various numbers of trials due to differences in training time; some models were prohibitively expensive to run multiple times (for example, 3DResNet takes 20 hrs/epoch on the SiW database with an NVIDIA RTX 2080 Ti). In order to compare these models objectively, we conducted statistical tests: a two-way ANOVA⁶ followed by the Tukey-Kramer multi-comparison test. In these tests, small numbers of trials lead to reduced test statistics, making it difficult to claim significance, because the test statistic of the Tukey-Kramer method is proportional to 1/√(1/n + 1/m), where n and m are the trial numbers of the two models being compared. Nevertheless, the SPRT-TANDEM is statistically significantly better than the other baselines.
One intuitive interpretation of this result is that "the SPRT-TANDEM achieved accuracy high enough that only a few trials of the baselines were needed to claim significance." These statistical tests are standard practice in some research fields, such as biological science, in which variable trial numbers are inevitable in experiments.

All the statistical tests are executed with a customized MATLAB (2017) script. Here, the two factors for the ANOVA are (1) the model factor, which contains four members: the SPRT-TANDEM with the best-performing order on the given database, LSTM-m, EARLIEST, and 3DResNet, and (2) the phase factor, which contains two or three members: early and late phases (NMNIST, UCF), or early, mid, and late phases (SiW). The early, mid, and late phases are defined based on the number of frames used for classification. The actual number of frames is chosen so that the compared models can use as similar data samples as possible, and thus depends on the database. The SPRT-TANDEM, LSTM-m, and 3DResNet can be compared with the same number of samples used. However, EARLIEST cannot flexibly change the average number of samples (i.e., mean hitting time); thus, we include the results of EARLIEST in the groups with the closest possible number of data points.

For NMNIST, five frames and ten frames are used to calculate the statistics of the early and late phases, respectively, except that EARLIEST uses 4.37 and 19.66 frames on average in each phase. For UCF, 15 frames and 25 frames are used to calculate the statistics of the early and late phases, respectively, except that EARLIEST uses 2.01 and 2.09 frames on average in each phase.⁷ For SiW, 5, 15, and 25 frames are used to calculate the early, mid, and late phases, respectively, except that EARLIEST uses 1.19, 8.21, and 32.06 frames. The p-values are summarized in Tables 21-24. P-values with asterisks are statistically significant: one, two, and three asterisks show p < 0.05, p < 0.01, and p < 0.001, respectively.

⁶ Note that we also conducted a three-way ANOVA with model, phase, and database factors, achieving qualitatively the same result verifying the superiority of the SPRT-TANDEM over the other algorithms.
⁷ On UCF, EARLIEST does not use a large number of frames even when the hyperparameter lambda is set to a small value.
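Although the original analysis was done in MATLAB, an equivalent analysis can be sketched in Python with statsmodels (illustrative; the synthetic DataFrame below merely stands in for the real per-trial balanced accuracies):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
models = ["SPRT-TANDEM", "LSTM-m", "EARLIEST", "3DResNet"]
phases = ["early", "late"]
rows = [{"model": m, "phase": p, "bacc": rng.normal(0.9, 0.02)}
        for m in models for p in phases for _ in range(10)]  # 10 mock trials per cell
df = pd.DataFrame(rows)

fit = smf.ols("bacc ~ C(model) * C(phase)", data=df).fit()
print(anova_lm(fit, typ=2))                        # two-way ANOVA table
print(pairwise_tukeyhsd(df["bacc"], df["model"]))  # Tukey multi-comparison across models
```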
" }, { "heading": "J DETAILS OF THE EXPERIMENTS IN SECTION 5", "text": "Here we present the details of the experiments in Section 5. Figure 9 shows the SAT curves of all the models we use in the experiment. Figures 10, 11, and 12 show example LLR trajectories calculated on the NMNIST, UCF, and SiW databases, respectively. Tables 25 to 40 show the average balanced accuracy and standard error of the mean (SEM) at the corresponding numbers of frames used for classification." }, { "heading": "K COMPUTING INFRASTRUCTURE", "text": "All the experiments are conducted with custom Python scripts running on an NVIDIA GeForce RTX 2080 Ti, GTX 1080 Ti, or GTX 1080 graphics card. NumPy (Harris et al. (2020)) and SciPy (Virtanen et al. (2020)) are used for mathematical computations. We use TensorFlow 2.0.0 (Abadi et al. (2015)) as the machine learning framework, except when running baseline algorithms that are implemented with PyTorch (Paszke et al. (2019))." }, { "heading": "L AN EXAMPLE VIDEO OF THE NOSAIC MNIST DATABASE.", "text": "As we described in Section 5, Nosaic (Noise + mOSAIC) MNIST, NMNIST for short, contains videos with 20 frames of MNIST handwritten digits, buried in noise at the first frame and gradually denoised toward the last frame. The first frame has all 255-valued (white) pixels, except for 40 randomly selected mask pixels that reveal the original image. Another forty pixels are randomly selected at each of the subsequent timestamps, finally revealing the original image at the last, 20th frame. An example video is shown in Figure 13.
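A short NumPy sketch of this cumulative reveal schedule (illustrative; `digit` is assumed to be a 28×28 uint8 MNIST image):

```python
import numpy as np

def make_nosaic_video(digit, frames=20, reveal_per_frame=40, seed=0):
    """Nosaic video: start all-white, cumulatively reveal 40 random pixels per frame."""
    rng = np.random.default_rng(seed)
    h, w = digit.shape
    order = rng.permutation(h * w)            # random reveal order of pixel indices
    mask = np.zeros(h * w, dtype=bool)
    video = []
    for t in range(frames):
        mask[order[t * reveal_per_frame:(t + 1) * reveal_per_frame]] = True
        frame = np.full(h * w, 255, dtype=np.uint8)   # all-white background
        frame[mask] = digit.reshape(-1)[mask]         # revealed pixels show the digit
        video.append(frame.reshape(h, w))
    return np.stack(video)                    # shape (20, 28, 28)
```

Since 20 × 40 = 800 ≥ 784 = 28 × 28, the full digit is indeed revealed by the 20th frame.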
" }, { "heading": "M SUPPLEMENTARY EXPERIMENT ON MOVING MNIST DATABASE", "text": "Prior to the experiment on Nosaic MNIST, we conducted a preliminary experiment on the Moving MNIST (MMNIST) database. The 1st-, 2nd-, 3rd-, and 5th-order SPRT-TANDEM were compared to LSTM-m. The hyperparameters of each model were independently optimized with Optuna. The result plotted in Figure 14 showed that the balanced accuracy of the SPRT-TANDEM peaked and reached the plateau phase after only two or three frames. This indicated that each of the frames in MMNIST contained so much information that a well-trained classifier could classify a video easily. Thus, although our SPRT-TANDEM outperformed LSTM-m by a large margin, we decided to design our original database, Nosaic MNIST (NMNIST), for the early-classification task. NMNIST contains videos with noise-buried handwritten digits, gradually denoised toward the end of the videos, increasing the mean hitting time compared to MMNIST." }, { "heading": "N SUPPLEMENTARY ABLATION EXPERIMENT", "text": "In addition to the ablation experiment presented in Figure 3e, which is calculated with the 1st-order SPRT-TANDEM, we also conduct an experiment with the 19th-order SPRT-TANDEM. The result shown in Figure 15 is qualitatively in line with Figure 3e: the Lmultiplet has an advantage in the early phase with a few data samples, while the LLLR leads to a higher final balanced accuracy in the late phase; using both loss functions, the best SAT curves are obtained." }, { "heading": "SUPPLEMENTARY REFERENCES", "text": "M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

P. Armitage. Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis. Journal of the Royal Statistical Society. Series B (Methodological), 12(1):137–144, 1950.

F. Ashby. A biased random walk model for two choice reaction times. Journal of Mathematical Psychology, 27:277–297, 1983.

A. Bagnall, J. Lines, J. Hills, and A. Bostrom. Time-series classification with COTE: The collective of transformation-based ensembles. IEEE Transactions on Knowledge and Data Engineering, 27(9):2522–2535, 2015.

A. Bagnall. Time series classification with ensembles of elastic distance measures. Data Mining and Knowledge Discovery, 29, 06 2014.

C. W. Baum and V. V. Veeravalli. A sequential procedure for multihypothesis testing. IEEE Transactions on Information Theory, 40(6):1994–2007, Nov 1994.

N. H. Bingham and G. Peskir. Optimal stopping and dynamic programming, 2006.

A. Borovkov. Mathematical Statistics. Gordon and Breach Science Publishers, 1998.

V. Campos, B. Jou, X. G. i Nieto, J. Torres, and S.-F. Chang. Skip RNN: Learning to skip state updates in recurrent neural networks. In ICLR, 2018.

J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733, 2017.

A. Czarnecki. Positronium properties. arXiv preprint hep-ph/9911455, 1999.

H. A. Dau, E. Keogh, K. Kamgar, C.-C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana, Yanping, B. Hu, N. Begum, A. Bagnall, A. Mueen, and G. Batista. The UCR time series classification archive, October 2018.

K. Doya. Modulators of decision making. Nat. Neurosci., 11(4):410–416, Apr 2008.

V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli. Multihypothesis sequential probability ratio tests. I. Asymptotic optimality. IEEE Transactions on Information Theory, 45(7):2448–2461, Nov 1999.

V. P. Dragalin, A. G. Tartakovsky, and V. V. Veeravalli. Multihypothesis sequential probability ratio tests. II. Accurate asymptotic expansions for the expected sample size. IEEE Transactions on Information Theory, 46(4):1366–1383, July 2000.

J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

W. Edwards. Optimal strategies for seeking information: Models for statistics, choice reaction times, and human information processing. Journal of Mathematical Psychology, 2(2):312–329, 1965.

J. P. Gallivan, C. S. Chapman, D. M. Wolpert, and J. R. Flanagan. Decision-making in sensorimotor control. Nat. Rev. Neurosci., 19(9):519–534, 09 2018.

J. I. Gold and M. N. Shadlen. The neural basis of decision making. Annu. Rev. Neurosci., 30:535–574, 2007.

A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

K. Hara, H. Kataoka, and Y. Satoh. Learning spatio-temporal features with 3D residual networks for action recognition. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 3154–3160, 2017.

C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. Del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant. Array programming with NumPy. Nature, 585(7825):357–362, 09 2020.

M. M. Hu, H. Sun, and N. J. Kasdin. Sequential generalized likelihood ratio test for planet detection with photon-counting mode. In S. B. Shaklan (ed.), Techniques and Instrumentation for Detection of Exoplanets IX, volume 11117, pp. 492–498. International Society for Optics and Photonics, SPIE, 2019.

A. Irle and N. Schmitz. On the optimality of the SPRT for processes with continuous time parameter. Statistics: A Journal of Theoretical and Applied Statistics, 15(1):91–104, 1984.

Y.-S. Jeong, M. K. Jeong, and O. A. Omitaomu. Weighted dynamic time warping for time series classification. Pattern Recognition, 44(9):2231–2240, 2011. Computer Analysis of Images and Patterns.

R. Johari, P. Koomen, L. Pekelis, and D. Walsh. Peeking at A/B tests: Why it matters, and what to do about it. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, pp. 1517–1525, New York, NY, USA, 2017. Association for Computing Machinery.

N. Ju, D. Hu, A. Henderson, and L. Hong.
A sequential test for selecting the better variant: Online a/b testing, adaptive allocation, and continuous monitoring. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM ’19, pp. 492–500, New York, NY, USA, 2019. Association for Computing Machinery.\nT. Kanamori, S. Hido, and M. Sugiyama. A least-squares approach to direct importance estimation. Journal of Machine Learning Research, 10(Jul):1391–1445, 2009.\nF. Karim, S. Majumdar, H. Darabi, and S. Chen. LSTM fully convolutional networks for time series classification. IEEE Access, 6:1662–1669, 2018. ISSN 2169-3536. doi: 10.1109/ACCESS.2017. 2779939.\nR. Kate. Using dynamic time warping distances as features for improved time series classification. Data Mining and Knowledge Discovery, 30, 05 2015.\nH. Khan, L. Marcuse, and B. Yener. Deep density ratio estimation for change point detection. arXiv preprint arXiv:1905.09876, 2019.\nD. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.\nS. Kira, T. Yang, and M. N. Shadlen. A neural implementation of wald’s sequential probability rato test. Neuron, 85(4):861–873, February 2015.\nM. Kulldorff, R. L. Davis, M. Kolczak†, E. Lewis, T. Lieu, and R. Platt. A maximized sequential probability ratio test for drug and vaccine safety surveillance. Sequential Analysis, 30(1):58–78, 2011.\nA. Kumar, A. Gupta, and S. Levine. Discor: Corrective feedback in reinforcement learning via distribution correction. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 13.\nT. L. Lai. Asymptotic optimality of invariant sequential probability ratio tests. Ann. Statist., 9(2): 318–333, 03 1981.\nK. W. Latimer, J. L. Yates, M. L. Meister, A. C. Huk, and J. W. Pillow. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science, 349(6244):184–187, Jul 2015.\nE. L. Lehmann and J. P. Romano. Testing statistical hypotheses. Springer Science & Business Media, 2006.\nJ. Lines, S. Taylor, and A. Bagnall. Hive-cote: The hierarchical vote collective of transformationbased ensembles for time series classification. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 1041–1046, 2016.\nV. Lotov. Asymptotic expansions in a sequential likelihood ratio test. Theory of Probability & Its Applications, 32(1):57–67, 1988.\nMATLAB. version 9.3.0 (R2017b). The MathWorks Inc., Natick, Massachusetts, 2017.\nS. M. McClure, D. I. Laibson, G. Loewenstein, and J. D. Cohen. Separate neural systems value immediate and delayed monetary rewards. Science, 306(5695):503–507, Oct 2004.\nH. Nam and M. Sugiyama. Direct density ratio estimation with convolutional neural networks with application in outlier detection. IEICE TRANSACTIONS on Information and Systems, 98(5): 1073–1079, 2015.\nE. Nikishin, P. Izmailov, B. Athiwaratkun, D. Podoprikhin, T. Garipov, P. Shvechikov, D. Vetrov, and A. G. Wilson. Improving stability in deep reinforcement learning with weight averaging. In Uncertainty in artificial intelligence workshop on uncertainty in Deep learning, 2018.\nG. Okazawa, C. E. Hatch, A. Mancoo, C. K. Machens, and R. Kiani. The geometry of the representation of decision variable and stimulus difficulty in the parietal cortex. bioRxiv, 2021. doi: 10.1101/2021.01.04.425244. URL https://www.biorxiv.org/content/early/ 2021/01/04/2021.01.04.425244. 
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc., 2019.

J. D. Roitman and M. N. Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J. Neurosci., 22(21):9475–9489, Nov 2002.

P. H. Rudebeck, M. E. Walton, A. N. Smyth, D. M. Bannerman, and M. F. Rushworth. Separate neural pathways process different decision costs. Nat. Neurosci., 9(9):1161–1168, Sep 2006.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.

P. Schäfer and U. Leser. Multivariate time series classification with WEASEL+MUSE. arXiv preprint arXiv:1711.11343, 2017.

M. N. Shadlen, R. Kiani, W. T. Newsome, J. I. Gold, D. M. Wolpert, A. Zylberberg, J. Ditterich, V. de Lafuente, T. Yang, and J. Roitman. Comment on "Single-trial spike trains in parietal cortex reveal discrete steps during decision-making". Science, 351(6280):1406, Mar 2016.

J. Sochman and J. Matas. Waldboost - learning for time constrained sequential detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pp. 150–156 vol. 2, June 2005.

M. Stone. Models for choice-reaction time. Psychometrika, 25(3):251–260, September 1960.

M. Sugiyama, T. Suzuki, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe. Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60(4):699–746, 2008.

M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio estimation: A comprehensive review (statistical experiment and its related topics). 2010.

M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio estimation in machine learning. Cambridge University Press, 2012.

S. C. Tanaka, K. Doya, G. Okada, K. Ueda, Y. Okamoto, and S. Yamawaki. Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nat. Neurosci., 7(8):887–893, Aug 2004.

A. Tartakovsky. Sequential methods in the theory of information systems (in Russian). Radio i Svyaz', Moscow, 1991.

A. Tartakovsky. Asymptotically optimal sequential tests for nonhomogeneous processes. Sequential Analysis, 17, 04 1999.

A. Tartakovsky, I. Nikiforov, and M. Basseville. Sequential Analysis: Hypothesis Testing and Changepoint Detection. Chapman & Hall/CRC, 1st edition, 2014.

V. V. Veeravalli and C. W. Baum. Asymptotic efficiency of a sequential multihypothesis test. IEEE Transactions on Information Theory, 41(6):1994–1997, Nov 1995.

P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, volume 1, pp. I–I, Dec 2001.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. Jarrod Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. Carey, İ. Polat, Y. Feng, E. W. Moore, J.
VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272, 2020.

Z. Wang, W. Yan, and T. Oates. Time series classification from scratch with deep neural networks: A strong baseline. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 1578–1585, 2017.

L. Wei and E. J. Keogh. Semi-supervised time series classification. In KDD '06, 2006.

X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 532–539, June 2013.

K. Yang and C. Shahabi. An efficient k nearest neighbor search for multivariate time series. Information and Computation, 205(1):65–98, 2007." } ]
2021
null
SP:b64d32119a136b5957e85e52c3ab32c27d3c2f3f
[ "This paper aims to present a method that allows efficient learning in neural networks architecture that present optimization blocks. These blocks have the form of x_{i+1} = \\arg \\min_x F(x, x_i, \\theta), and can be thought of as a neural network layer. The addition of this block results in a complex optimization problem, since it presents a multi-level problem. The approach presented in this paper relies on adaptive stochastic search as a differentiable optimization procedure. The authors evaluate the proposed algorithm in a variety of applications, including structured prediction networks and control." ]
In this work we propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures. Specifically, for an objective function located at some layer in the network and parameterized by some network parameters, we employ adaptive stochastic search to perform optimization over its output. This operation is differentiable and does not obstruct the passing of gradients during backpropagation, thus enabling us to incorporate it as a component in end-to-end learning. We study the proposed optimization module’s properties and benchmark it against two existing alternatives on a synthetic energy-based structured prediction task, and further showcase its use in stochastic optimal control applications.
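As a rough illustration of the idea described in the abstract (a sketch under stated assumptions, not the authors' implementation), a Gaussian adaptive stochastic search step can be written so that every operation (reparameterized sampling, exponentiated-objective weighting, weighted averaging) is smooth, which is what allows gradients to pass through the optimization layer; the specific weighting below is one common choice and is assumed for illustration:

```python
import numpy as np

def softmax(v):
    v = v - v.max()          # numerical stability
    e = np.exp(v)
    return e / e.sum()

def adaptive_stochastic_search(objective, mu, sigma, iters=50, n=256, temp=1.0):
    """Gaussian adaptive stochastic search: smooth, sampling-based minimization."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        eps = rng.normal(size=(n, mu.size))
        x = mu + sigma * eps                       # reparameterized samples
        f = np.array([objective(xi) for xi in x])
        w = softmax(-f / temp)                     # exponentiated-objective weights
        mu = w @ x                                 # weighted-mean update
        sigma = np.sqrt(w @ (x - mu) ** 2) + 1e-6  # weighted-std update
    return mu

# Example: minimizing a simple quadratic recovers its minimizer near (3, 3)
print(adaptive_stochastic_search(lambda x: np.sum((x - 3.0) ** 2),
                                 np.zeros(2), np.ones(2)))
```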
[ { "affiliations": [], "name": "Ioannis Exarchos" }, { "affiliations": [], "name": "Marcus A. Pereira" }, { "affiliations": [], "name": "Ziyi Wang" }, { "affiliations": [], "name": "Evangelos A. Theodorou" } ]
[ { "authors": [ "Brandon Amos", "J Zico Kolter" ], "title": "Optnet: Differentiable optimization as a layer in neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Brandon Amos", "Denis Yarats" ], "title": "The differentiable cross-entropy method", "venue": "arXiv preprint arXiv:1909.12830,", "year": 2019 }, { "authors": [ "Brandon Amos", "Lei Xu", "J Zico Kolter" ], "title": "Input convex neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Brandon Amos", "Ivan Jimenez", "Jacob Sacks", "Byron Boots", "J Zico Kolter" ], "title": "Differentiable MPC for end-to-end planning and control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Brandon Amos", "Vladlen Koltun", "J Zico Kolter" ], "title": "The limited multi-label projection layer", "venue": "arXiv preprint arXiv:1906.08707,", "year": 2019 }, { "authors": [ "Se Yung Bae", "Junkee Jeon", "Hyeng Keun Koo" ], "title": "Continuous-time portfolio selection: A cursory survey", "venue": "Frontiers in Applied Mathematics and Statistics,", "year": 2020 }, { "authors": [ "Sergey Bartunov", "Jack W Rae", "Simon Osindero", "Timothy P Lillicrap" ], "title": "Meta-learning deep energy-based memory models", "venue": "arXiv preprint arXiv:1910.02720,", "year": 2019 }, { "authors": [ "David Belanger", "Andrew McCallum" ], "title": "Structured prediction energy networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "David Belanger", "Bishan Yang", "Andrew McCallum" ], "title": "End-to-end learning for structured prediction energy networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Homanga Bharadhwaj", "Kevin Xie", "Florian Shkurti" ], "title": "Model-predictive control via cross-entropy and gradient-based optimization", "venue": "arXiv preprint arXiv:2004.08763,", "year": 2020 }, { "authors": [ "Thomas Bird", "Julius Kunze", "David Barber" ], "title": "Stochastic variational optimization", "venue": "arXiv preprint arXiv:1809.04855,", "year": 2018 }, { "authors": [ "Richard Cheng", "Gábor Orosz", "Richard M Murray", "Joel W Burdick" ], "title": "End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Filipe de Avila Belbute-Peres", "Kevin Smith", "Kelsey Allen", "Josh Tenenbaum", "J. Zico Kolter" ], "title": "End-to-end differentiable physics for learning and control", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Justin Domke" ], "title": "Generic methods for optimization-based modeling", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2012 }, { "authors": [ "Ioannis Exarchos", "Evangelos A. 
Theodorou" ], "title": "Stochastic optimal control via forward and backward stochastic differential equations and importance sampling", "venue": "Systems & Control Letters,", "year": 2018 }, { "authors": [ "Ioannis Exarchos", "Evangelos Theodorou", "Panagiotis Tsiotras" ], "title": "Stochastic differential games: A sampling approach via FBSDEs", "venue": "Dynamic Games and Applications,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Jakob Foerster", "Richard Y Chen", "Maruan Al-Shedivat", "Shimon Whiteson", "Pieter Abbeel", "Igor Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 122–130. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Stephen Gould", "Basura Fernando", "Anoop Cherian", "Peter Anderson", "Rodrigo Santa Cruz", "Edison Guo" ], "title": "On differentiating parameterized argmin and argmax problems with application to bi-level optimization", "venue": "CoRR, abs/1607.05447,", "year": 2016 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Matthew J Johnson", "David K Duvenaud", "Alex Wiltschko", "Ryan P Adams", "Sandeep R Datta" ], "title": "Composing graphical models with neural networks for structured representations and fast inference", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "Luke Metz", "Ben Poole", "David Pfau", "Jascha Sohl-Dickstein" ], "title": "Unrolled generative adversarial networks", "venue": "arXiv preprint arXiv:1611.02163,", "year": 2016 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Etienne Pardoux", "Shige Peng" ], "title": "Adapted solution of a backward stochastic differential equation", "venue": "Systems & Control Letters,", "year": 1990 }, { "authors": [ "Marcus Pereira", "David D Fan", "Gabriel Nakajima An", "Evangelos Theodorou" ], "title": "MPC-inspired neural network policies for sequential decision making", "venue": "arXiv preprint arXiv:1802.05803,", "year": 2018 }, { "authors": [ "Marcus A Pereira", "Ziyi Wang", "Tianrong Chen", "Emily Reed", "Evangelos A Theodorou" ], "title": "Deep 2FBSDEs for systems with control multiplicative noise", "venue": "arXiv, pp. 
arXiv–1906,", "year": 2019 }, { "authors": [ "Marcus A Pereira", "Ziyi Wang", "Ioannis Exarchos", "Evangelos A Theodorou" ], "title": "Learning deep stochastic optimal control policies using forward-backward SDEs", "venue": "In Robotics: Science and Systems,", "year": 2019 }, { "authors": [ "Marcus A Pereira", "Ziyi Wang", "Ioannis Exarchos", "Evangelos A Theodorou" ], "title": "Safe optimal control using stochastic barrier functions and deep forward-backward SDEs", "venue": "arXiv preprint arXiv:2009.01196,", "year": 2020 }, { "authors": [ "J.A. Primbs" ], "title": "Portfolio optimization applications of stochastic receding horizon control", "venue": "American Control Conference,", "year": 2007 }, { "authors": [ "Maziar Raissi" ], "title": "Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations", "venue": "arXiv preprint arXiv:1804.07010,", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Chelsea Finn", "Sham M Kakade", "Sergey Levine" ], "title": "Meta-learning with implicit gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Reuven Y Rubinstein" ], "title": "Combinatorial optimization, cross-entropy, ants and rare events. In Stochastic optimization: algorithms and applications, pp. 303–363", "venue": null, "year": 2001 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "arXiv preprint arXiv:1807.05960,", "year": 2018 }, { "authors": [ "Aravind Srinivas", "Allan Jabri", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Universal planning networks", "venue": "arXiv preprint arXiv:1804.00645,", "year": 2018 }, { "authors": [ "Ben Taskar", "Vassil Chatalbashev", "Daphne Koller", "Carlos Guestrin" ], "title": "Learning structured prediction models: A large margin approach", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Enlu Zhou", "Jiaqiao Hu" ], "title": "Gradient-based adaptive stochastic search for non-differentiable optimization", "venue": "IEEE Transactions on Automatic Control,", "year": 2014 }, { "authors": [ "Exarchos", "Theodorou", "Pereira" ], "title": "2019b) deals with the problem of the Hamiltonian min operator by assuming a special structure of the problem. Specifically, they restrict the dynamics of eq. (13) to be affine in control, i.e., of the form f(x, u) = F (x) + G(x)u, and the cost in eq", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has experienced a drastic increase in the diversity of neural network architectures, both in terms of proposed structure, as well as in the repertoire of operations that define the interdependencies of its elements. With respect to the latter, a significant amount of attention has been devoted to incorporating optimization blocks or modules operating at some part of the network. This has been motivated by large number of applications, including meta-learning (Finn et al., 2017; Rusu et al., 2018; Bartunov et al., 2019), differentiable physics simulators (de Avila Belbute-Peres et al., 2018), classification (Amos et al., 2019), GANs (Metz et al., 2016), reinforcement learning with constraints, latent spaces, or safety (Amos & Kolter, 2017; Srinivas et al., 2018; Amos & Yarats, 2019; Cheng et al., 2019; Pereira et al., 2020), model predictive control (Amos et al., 2018; Pereira et al., 2018), as well as tasks relying on the use of energy networks (Belanger et al., 2017; Bartunov et al., 2019), among many others. Local1 optimization modules lead to nested optimization operations, as they interact with the global, end-to-end training of the network that contains them. Consider some component within the neural network architecture, e.g. a single layer, whose input and output are xi ∈ Rn and xi+1 ∈ Rm, respectively. Within that layer, the input and output are linked via the solution of the following optimization problem:\nxi+1 = arg min x F (x;xi, θ), (1)\nthat is, the output xi+1 is defined as the solution to an optimization problem for which the input xi remains temporarily fixed, i.e., acts as a parameter. Here, F (x;xi, θ) : Rm × Rn × Θ → R is a function possibly further parameterized by some subset of the neural network parameters θ ∈ Θ. Note that x here is an independent variable which is free to vary. The result of this optimization could potentially also be subject to a set of (input-dependent) constraints, though in this paper we\n∗Equal contribution. 1To distinguish between the optimization of the entire network as opposed to that of the optimization module, we frequently refer to the former as global or outer-loop optimization and to the latter as local or inner-loop optimization.\nwill consider only unconstrained optimization. It is also important to note that, depending on the problem, F can be a given function, or it can itself be represented by a multi-layer neural network (trained by the outer loop), in which case the aforementioned optimization layer consists of multiple sub-layers and is more accurately described as a module rather than a single layer. Examples of this type of optimization are structured prediction energy networks (e.g. Belanger et al. (2017)); another such example is Amos & Kolter (2017) which treats the case of convex F (·;xi, θ). In order to facilitate end-to-end learning over the entire network, the gradient of its loss function L with respect to θ will require during backpropagation passing the gradient of the module’s output xi+1 with respect to parameters θ and xi. 
Depending on the nature of the optimization problem under consideration, several procedures have been suggested; among them, particularly appealing is the case of convex optimization (Gould et al., 2016; Johnson et al., 2016; Amos et al., 2017; Amos & Kolter, 2017), in which the aforementioned gradients can be computed efficiently through an application of the implicit function theorem to a set of optimality conditions, such as the KKT conditions. In the case of non-convex functions however, obtaining such gradients is not as straight-forward; solutions involve either forming and solving a locally convex approximation of the problem, or unrolling gradient descent (Domke, 2012; Metz et al., 2016; Belanger et al., 2017; Finn et al., 2017; Srinivas et al., 2018; Rusu et al., 2018; Foerster et al., 2018; Amos et al., 2018). Unrolling gradient descent approximates the arg min operator with a fixed number of gradient descent iterations during the forward pass and interprets these as an unrolled compute graph that can be differentiated through during the backward pass. One drawback in using this unrolled gradient descent operation however is the fact that doing so can lead to over-fitting to the selected gradient descent hyper-parameters, such as learning rate and number of iterations. Recently, Amos & Yarats (2019) demonstrated promising results in alleviating this phenomenon by replacing these iterations of gradient descent by iterations of sampling-based optimization, in particular a differentiable approximation of the cross-entropy method. While still unrolling the graph created by the fixed number of iterations, they showed empirically that no over-fitting to the hyper-parameters occurred by performing inference on the trained network with altered inner-loop optimization hyper-parameters. Another significant bottleneck in all methods involving graph unrolling is the number of iterations, which has to be kept low to prevent a prohibitively large graph during backprop, to avoid issues in training.\nNote that in eq. (1) the variable of optimization is free to vary independently of the network. This is in contrast to many applications involving nested optimization, mainly in the field of meta-learning, in which the inner loop, rather than optimizing a free variable, performs adaptation to an initial value which is supplied to the inner loop by the outer part of the network. For example, MAML (Finn et al., 2017) performs the inner-loop adaptation θ → θ′, in which the starting point θ is not arbitrary (as x is in eq. (1)) but is supplied by the network. Thus, in the context of adaptation, unrolling the inner-loop graph during back-prop is generally necessary to trace the adaptation back to the particular network-supplied initial value. Two notable exceptions are first-order MAML (Finn et al., 2017; Nichol et al., 2018), which ignores second derivative terms, and implicit MAML (Rajeswaran et al., 2019), which relies on local curvature estimation.\nIn this paper we propose Non-convex Optimization Via Adaptive Stochastic Search (NOVAS), a module for differentiable, non-convex optimization. The backbone of this module is adaptive stochastic search (Zhou & Hu, 2014), a sampling-based method within the field of stochastic optimization. The contributions of our work are as follows: (A). We demonstrate that the NOVAS module does not over-fit to optimization hyper-parameters and offers improved speed and convergence rate over its alternative (Amos & Yarats, 2019). (B). 
If the inner-loop variable of optimization is free to vary (i.e., the problem fits the definition given by eq. (1)), we show that there is no need to unroll the graph during the back-propagation of gradients. The latter advantage is critical, as it drastically reduces the size of the overall end-to-end computation graph, thus facilitating improved ability to learn with higher convergence rates, improved speed, and reduced memory requirements. Furthermore, it allows us to use a higher number of inner-loop iterations. (C). If the inner-loop represents an adaptation to a network-supplied value as it is the case in meta-learning applications, NOVAS may still be used in lieu of the gradient descent rule (though unrolling the graph may be necessary here). Testing NOVAS in such a setting is left for future work. (D). We combine the NOVAS module with the framework of deep FBSDEs, a neural network-based approach to solving nonlinear partial differential equations (PDEs). This combination allows us to solve Hamilton-Jacobi-Bellman (HJB) PDEs of the most general form, i.e., those in which the min operator does not have a closed-form solution, a class of problems that was previously impossible to address due to the non-convexity of\nthe corresponding Hamiltonian. We validate the algorithm on a cart-pole task and demonstrate its scalability on a 101-dimensional continuous-time portfolio selection problem. The code is available at https://github.com/iexarchos/NOVAS.git" }, { "heading": "2 FURTHER BACKGROUND AND RELATED WORK", "text": "Relation to Differentiable Cross-Entropy: Particular importance should be given to Amos & Yarats (2019), since, to the best of our knowledge, it is the first to suggest sampling-based optimization instead of gradient descent, and features some similarities with our approach. The authors therein propose a differentiable approximation of the cross-entropy method (CEM) (Rubinstein, 2001; De Boer et al., 2005), called differentiable cross-entropy (DCEM). To obtain this approximation, they need to approximate CEM’s eliteness threshold operation, which is non-differentiable. This is done by solving an additional, convex optimization problem separately for each inner loop step (and separately for each sample of xi in the batch, resulting in a total of N ×M ×K additional convex optimization problems, with N : batch size, M : number of inner loop iterations, K: number of outer loop iterations, i.e. training epochs). After CEM has been locally approximated by DCEM, they replace the usual inner-loop gradient descent steps with DCEM steps, and the entire inner-loop optimization graph is unrolled during the backward pass. Our method differs from this approach in the following ways: 1. we employ the already differentiable adaptive stochastic search algorithm, thus not having to solve any additional optimization problem to obtain a differentiable approximation (speed improvement), while also showing some convergence rate improvements, and most importantly 2. In the case of inner-loop optimization over an independent variable (e.g., such as the problem defined by eq. (1)), we do not unroll the optimization graph, but instead pass the gradients only through the last inner-loop iteration. 
This drastically reduces its size during backpropagation, increasing speed, reducing memory requirements, and facilitating easier learning.\nSampling-based Optimization: Adaptive stochastic search (Zhou & Hu, 2014) is a sampling-based method within stochastic optimization that transforms the original optimization problem via a probabilistic approximation. The core concept behind this algorithm is approximating the gradient of the objective function by evaluating random perturbations around some nominal value of the independent variable, a concept that also appears under the name Stochastic Variational Optimization and shares many similarities with natural evolution strategies (Bird et al., 2018). Another comparable approach is CEM (Rubinstein, 2001; De Boer et al., 2005). In contrast to adaptive stochastic search, CEM is non-differentiable (due to the eliteness threshold) and the parameters are typically updated de novo in each iteration, rather than as a gradient descent update to the parameter values of the previous iteration. In the case of Gaussian distributions, the difference between CEM and adaptive stochastic search boils down to the following: in adaptive stochastic search, the mean gets updated by calculating the average of all sampled variable values weighted by a typically exponential mapping of their corresponding objective function values, whereas in CEM only the top-k performing values are used, and are weighted equally. Furthermore, this difference can be made even smaller if one replaces the exponential mapping in the former method with a differentiable (sigmoid) function that approximates the eliteness operation. More details are available in the Appendix.\nDeep Learning Approaches for PDEs and FBSDEs: There has been a recent surge in research and literature in applying deep learning to approximate solutions of high-dimensional PDEs. The transition from a PDE formulation to a trainable neural network is done via the concept of a system of Forward-Backward Stochastic Differential Equations (FBSDEs). Specifically, certain PDE solutions are linked to solutions of FBSDEs. Systems of FBSDEs can be interpreted as a stochastic equivalent to a two-point boundary value problem, and can be solved using a suitably defined deep neural network architecture. This is known in the literature as the deep FBSDE approach (Han et al., 2018; Raissi, 2018). While applied in high-dimensional PDEs, the aforementioned results have seen very limited applicability in the field of optimal control. Indeed, the HJB PDE in control theory has a much more complicated structure, and in its general form involves a min operator applied on its Hamiltonian term over the control input. Exploiting certain structures of system dynamics and cost functions that allowed for a closed-form expression for this operator, Exarchos & Theodorou (2018); Exarchos et al. (2018; 2019) developed a framework for control using FBSDEs, which was then translated to a deep neural network setting in Pereira et al. (2019b); Wang et al. (2019). In this work, we incorporate the NOVAS module inside deep FBSDE neural network architectures to account for PDEs lacking a closed-form expression for their min and/or max operators. Thus, we are able to\naddress the most general description of a HJB PDE in which the corresponding Hamiltonian is nonconvex. More information concerning the deep FBSDE framework can be found in the Appendix." 
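To illustrate the distinction just drawn between the two Gaussian mean updates, here is a schematic NumPy comparison (our own sketch; the function names and hyper-parameter values are illustrative, not from the paper) of a CEM-style step, which weights only the top-k "elite" samples equally, versus an adaptive-stochastic-search-style step, which weights all samples by an exponential shaping of their objective values:

```python
import numpy as np

def cem_mean_update(mu, sigma, objective, n_samples=100, k=10):
    """One CEM-style step: equal weights on the top-k ('elite') samples."""
    x = mu + sigma * np.random.randn(n_samples, mu.size)
    vals = np.array([objective(xi) for xi in x])
    elite = x[np.argsort(vals)[:k]]           # k lowest costs (minimization)
    return elite.mean(axis=0)                 # new mean computed de novo

def ass_mean_update(mu, sigma, objective, n_samples=100, kappa=5.0, alpha=1.0):
    """One adaptive-stochastic-search-style step: all samples contribute,
    weighted by an exponential mapping of their objective values."""
    x = mu + sigma * np.random.randn(n_samples, mu.size)
    vals = np.array([objective(xi) for xi in x])
    s = np.exp(-kappa * (vals - vals.min()))  # best sample gets weight 1
    w = s / s.sum()
    # Gradient-ascent-style update of the previous mean (cf. eq. (11)).
    return mu + alpha * (w[:, None] * (x - mu)).sum(axis=0)
```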
}, { "heading": "3 NON-CONVEX OPTIMIZATION VIA ADAPTIVE STOCHASTIC SEARCH", "text": "The cornerstone of our approach is a method within stochastic optimization called adaptive stochastic search (Zhou & Hu, 2014). Adaptive stochastic search addresses the general maximization2 problem\nx∗ ∈ arg max x∈X F (x), X ⊆ Rn, (2)\nwith X being non-empty and compact, and F : X → R a real-valued, non-convex, potentially discontinuous and non-differentiable function. Instead of dealing with this function that lacks desirable properties such as smoothness, adaptive stochastic search proposes the solution of a stochastic approximation of this problem in which x is drawn from a selected probability distribution f(x; ρ) of the exponential family with parameters ρ and solve\nρ∗ = arg max ρ\n∫ F (x)f(x; ρ)dx = Eρ [F (x)] .\nThis new objective function, due to its probabilistic nature, exhibits desirable properties for optimization. Algorithmically, this can be facilitated by introducing a natural log and a shape function S(·) : R → R+ which is continuous, non-decreasing, and with a non-negative lower bound (an example of such a function would be the exponential). Due to their properties, passing F (x) through S(·) and the log does not affect the optimal solution. The final optimization problem is then\nρ∗ = arg max ρ ln\n∫ S(F (x))f(x; ρ)dx = lnEρ [S(F (x))] . (3)\nTo address this optimization problem, one can sample candidate solutions x from f(x; ρ) in the solution space X , and then use a gradient ascent method on eq. (3) to update the parameter ρ. Depending on the chosen probability distribution for sampling x, a closed-form solution for the gradient of the above objective function with respect to ρ is available. Thus, while still being a sampling-based method at its core, adaptive stochastic search employs gradient ascent on a probabilistic mapping of the initial objective function. While any probability density function of the exponential family will work, in our work we sample x from a Gaussian distribution. The resulting update scheme is shown in Alg. 1. More details concerning its derivation are included in the Appendix.\nAs described in the introduction, the standard approach employed in the literature for non-convex inner-loop optimization is to apply an optimization procedure (either gradient descent or DCEM) for a fixed number of iterations during the forward pass and interpret these as an unrolled compute graph that can be differentiated through during the backward pass. In this work we argue that this needs to be done only in cases of adaptation (e.g., in meta-learning): the variable to be adapted is supplied to the inner-loop by the outer part of the network, and is adapted using a rule such as gradient descent with respect to the inner-loop objective function for a fixed number of steps. Crucially, the process is not initialized with an arbitrary initial value for the adapted variable but with the one that is supplied by the network; the backward pass which needs to pass through the optimization module also needs to flow through the input of the variable of optimization in the module. Thus, the variable’s values pre- and post-adaptation need to be linked via the unrolled computational graph of the adaptation. In contrast, in the case in which the variable of optimization is not an input to the layer, but rather is allowed to vary freely and independently of the outer part of the network (e.g. 
as described by problem (1)), such a process is not only unnecessary, but further leads to complications such as reduced network trainability and learning speed, as well as increased memory usage and computation time. Given that in this case the variable of optimization is initialized by what is typically no more than a random guess, there is no need to trace the gradients during backpropagation all the way back to that random guess.\nAfter fixing a number of iterations for optimization, say, n, a simple way to implement NOVAS in a non-unrolled fashion is to take n − 1 of the iterations off the graph and perform only the n-th on graph. This amounts essentially to obtaining a good initial guess solution and performing a single optimization step. The latter is enough to supply gradient information during back-prop as it relates to the relevant, optimized value x∗, rather than the intermediate steps forming the trajectory from its initial guess to its final optimal value. From a coding perspective, most automatic differentiation packages allow for localized deactivation of gradient information flow; in PyTorch (Paszke et al., 2019), this is as simple as wrapping the first n − 1 iterations in a "with torch.no_grad():" block.\n2 While presented for maximization, we deploy it for minimization by switching the sign of the objective function.\nAlgorithm 1: Non-convex Optimization Via Adaptive Stochastic Search (NOVAS)\nGiven: neural network architecture containing a NOVAS module, mini-batch dataset {X_j, Y_j}, j = 1, ..., J, NOVAS objective function F(·; x_i, θ).\nParameters: neural network parameters: number of layers L, layer transformations f_i, network parameters θ. NOVAS parameters: initial mean and standard deviation µ_0, σ_0, learning rate α, shape function S, number of samples M, number of iterations N, small positive number ε = 10^{-3} (sampling variance lower bound).\nSet x_1 ← {X_j}\nfor i = 1 to L do\n  if f_i is a NOVAS layer then\n    Set µ ← µ_0, σ ← σ_0\n    for n = 1 to N − 1 (off-graph operations) do\n      (µ, σ) ← NOVAS_Layer(µ, σ, α, S, M, N, ε, F)\n    end for\n    (µ, σ) ← NOVAS_Layer(µ, σ, α, S, M, N, ε, F) (on-graph operation)\n    x_{i+1} = µ\n  else\n    x_{i+1} = f_i(x_i; θ)\n  end if\nend for\n{Ŷ_j} = x_{L+1}\nCompute loss: L = MSE(Y_j, Ŷ_j)\nUpdate parameters: θ ← Adam(L, θ)\nNOVAS_Layer(µ, σ, α, S, M, N, ε, F):\nGenerate M samples x^m ∼ N(µ, σ^2), m = 1, ..., M\nfor m = 1 to M (vectorized operation) do\n  Evaluate F^m = F(x^m) for maximization or F^m = −F(x^m) for minimization\n  Normalize F^m = (F^m − min_m(F^m)) / (max_m(F^m) − min_m(F^m))\n  Apply shape function S^m = S(F^m)\n  Normalize S^m = S^m / Σ_{m=1}^{M} S^m\nend for\nUpdate µ = µ + α Σ_{m=1}^{M} S^m (x^m − µ), σ = sqrt(Σ_{m=1}^{M} S^m (x^m − µ)^2 + ε)\nWith regards to the proposed Alg. 1, we would like to mention the following: A. The forward pass of the given neural network containing the NOVAS module is a nested operation of layer transformations f_i given by Y_j = f_L(f_{L−1}(f_{L−2}(. . . f_1(X_j) . . .))), wherein f_i can be either a NOVAS layer or any standard neural network layer. Also, note that the above transformation applies to recurrent layers with the forward pass unrolled, wherein each f_i corresponds to a time step. B. For clarity of presentation, we outline the forward and backward (i.e. backprop) passes for a single sampled mini-batch from the training dataset. The user is free to choose a training strategy that best suits the problem and apply the proposed algorithm to every sampled mini-batch with any variant of stochastic gradient descent for the outer loop. C. 
The “Normalize F ” operation in NOVAS layer is optional, but may lead to some numerical improvements. Further algorithm implementation details are given in the Appendix. Finally, an important remark is necessary concerning the differentiability of the module. Nonconvex optimization problems do not necessarily have a unique optimum, and as a result, the arg min operator is a set-valued map and not differentiable in the classical sense; even when the\noptimum is unique almost everywhere in the parameter space it is possible for the optimal value to be discontinuous in the parameter. For non-global optimizers, an initialization that avoids wrong local optima is crucial. With respect to the latter, there is some evidence (see, e.g., Bharadhwaj et al. (2020)) that sampling-based optimization methods are more robust compared to gradient descent as they evaluate the objective function over an extended area of the input space and, given enough sampling variability for exploration, have thus more chances of escaping a narrow local optimum." }, { "heading": "4 APPLICATIONS", "text": "In this section we explore the properties of NOVAS and test its applicability in a few problems involving end-to-end learning. The first task is a Structured Prediction Energy Network (SPEN) learning task, which we adopted directly from Amos & Yarats (2019). We found it to be an ideal environment to test NOVAS against unrolled DCEM and unrolled gradient descent because it is simple, allows for fast training, and, being two-dimensional, one can visualize the results. We would like to stress that this is merely an example for illustrating various algorithm differences and behavior rather than a claim on state-of-the-art results in the domain of SPENs. We then address two optimal control problems by incorporating the NOVAS module in a deep FBSDE neural network architecture as shown in Fig. 3: the first problem is the cart-pole swing-up task, a low-dimensional problem that has been successfully addressed with already existing deep FBSDE approaches (Pereira et al., 2019b) that exploit the structure of dynamics and cost (dynamics are affine in control and the cost is quadratic in the control) in order to perform minimization of the Hamiltonian explicitly. This problem merely serves as a means to validate the NOVAS-FBSDE algorithm. The second problem demonstrates the establishment of a new state-of-the-art in solving high-dimensional HJB PDEs using the deep FBSDE method; specifically, we address a continuous-time portfolio optimization problem that leads to a general (i.e., without an explicit solution for the min operator) HJB PDE in 101 dimensions. This HJB PDE form could not be addressed by deep FBSDE methods previously." }, { "heading": "4.1 STRUCTURED PREDICTION ENERGY NETWORKS", "text": "The goal in energy-based learning is to estimate a conditional probability P(y|x) of an output y ∈ Y given an input x ∈ X using a parameterized energy function E(x, y; θ) : X ×Y ×Θ→ R, wherein θ ∈ Θ are the energy function’s trainable parameters. The conditional probability is approximated as P(y|x) ∝ exp(−E(y;x, θ)). Predictions can be made by minimizing the trained energy function with respect to y:\nŷ = arg min y E(x, y; θ). (4)\nInitially studied in the context of linear energy functions (Taskar et al., 2005; LeCun et al., 2006), the field recently adopted deep neural networks called structured prediction energy networks (SPENs) (Belanger & McCallum, 2016) to increase the complexity of learned energy functions. 
In particular, Belanger et al. (2017) suggested training SPENs in a supervised manner; unrolled gradient descent is used to obtain ŷ, which is then compared to the ground-truth y∗ by a suitable loss function. Mimicking the unrolled gradient descent suggested by Belanger et al. (2017), Amos & Yarats (2019) replaced the gradient descent operations with differentiable cross-entropy iterations, also using an unrolled computation graph during backpropagation. Here we adopt the same example as in Amos & Yarats (2019) to benchmark NOVAS against unrolled gradient descent and unrolled DCEM, compare their properties, and further show that unrolling the inner-loop graph is not necessary. We consider the simple regression task where ground-truth data are generated from f(x) = x sin(x), x ∈ [0, 2π], and we use a neural network with 4 hidden layers to approximate E. This problem belongs to the class of problems described by eq. (1) in the introduction: the variable of optimization, y, is not an input to the optimization module from the exterior part of the network. The energy function represented by the multi-layer neural network defines the objective function within the module (that is, it corresponds to F(·; x_i, θ) of eq. (1)). While in this case the entire network is within the module, the input could instead be features extracted from an anterior part of the network, e.g. through convolution. The results are shown in Figs. 1 and 2. As can be seen from Fig. 2(a), unrolled gradient descent converges to a very low loss value (which implies good regression performance), but the trained energy function does not reflect the ground-truth relationship, Fig. 1(a). This implies a "faulty" inner-loop optimization, which the energy network itself learns to compensate for. The result resembles ordinary regression more than energy-based structured prediction, since no useful structure is learned; furthermore, changing the inner-loop optimization parameters during inference (after training) leads to an operation that the energy network has not learned to compensate for, as seen in Fig. 2(c). The sampling-based methods of unrolled DCEM and NOVAS both alleviate this phenomenon by learning the correct energy landscape. Furthermore, as seen in Fig. 1(b) and (c), unrolling the graph is unnecessary, and avoiding it leads to a significant speed-up of 5x with respect to unrolled DCEM, Fig. 2(b). Interestingly, an additional benefit is that NOVAS seems to offer a greater inner-loop convergence rate than DCEM (Fig. 2(c)). Due to the simplicity of this example, there is no learning inhibition when using an unrolled graph, as seen from the comparison between NOVAS and unrolled NOVAS. However, this can be the case in more complex tasks and network architectures, as we shall see in Section 4.2. Further details are given in the Appendix." }, { "heading": "4.2 CONTROL USING FBSDES", "text": "" }, { "heading": "4.2.1 CART-POLE SWING-UP TASK", "text": "We first validate the NOVAS-FBSDE algorithm (neural network architecture seen in Fig. 3) by solving a task whose special structure (dynamics affine in control and cost function quadratic in control) allows for a closed-form solution for the min operator of the Hamiltonian. Because of this special structure, this task can be solved by already existing deep FBSDE approaches (e.g., Pereira et al. (2019b)). Here, we replace the explicit minimization solution with NOVAS. The results are shown in Fig. 4, and are in accordance with results obtained using explicit minimization (see Fig. 
6 in Pereira et al. (2019b)). Equations and implementation details are given in the Appendix." }, { "heading": "4.2.2 HIGH-DIMENSIONAL, CONTINUOUS-TIME PORTFOLIO SELECTION", "text": "We now demonstrate that augmenting the deep FBSDE method with NOVAS allows us to solve general, high-dimensional HJB PDEs by employing NOVAS-FBSDE on a stock portfolio optimization problem, defined as follows: we consider a market index I that consists of N = 100 stocks, and select a subset of M = 20 of those for trading. There is also a risk-less asset with a (relatively low) return rate. We may invest an initial wealth capital W among these 20 + 1 assets, and the goal is to control the percentage allocation among these assets over time such that the wealth outperforms the market index in probability. This optimal control formulation leads to a HJB PDE on a state space of 100 + 1 dimensions (100 stocks of the market plus the wealth process, which incorporates the risk-less asset dynamics). Volatility clearly dominates in such short-term horizons, so a successful trading strategy would be one that increases the odds of beating the market average compared to a random selection. Index-tracking and wealth-maximization have long been the subject of study from a controls perspective (Primbs, 2007; Bae et al., 2020), though with limited results due to the difficulty in dealing with such high uncertainties. Primbs (2007) investigates a low-dimensional variant of this problem (5 stocks, 3 traded) but does not enforce the constraint that allocation can only be positive (thus leading to negative investments, i.e. borrowing money from stocks), and tracks the index (thus paying a penalty also when the portfolio outperforms the index) due to the approach being restricted to consider only quadratic cost functions. We avoid these issues and enforce positive investments by applying a softmax on the control input, and use (softplus(I − W))^2 as the cost function to incentivize outperforming rather than tracking the index. We consider two alternative investment strategies as baselines: a constant and equal allocation among all traded assets, as well as random allocations. The results are shown in Fig. 5. As indicated by the violin plots, the FBSDE investment strategy outperforms in probability the two alternative strategies, and is the only one that outperforms the market index (by almost 5% on average) at the end of the planning horizon of one year. These statistics are obtained by running 128 market realizations with test-set volatility profiles (noise profiles not seen during training or validation). All equations and implementation details are provided in the Appendix. We note that solving this problem in an unrolled graph setting was not possible: either because it was impossible to facilitate learning when a high number of inner-loop iterations was applied (presumably due to the excessive total graph depth), or because of memory issues. Thus, eliminating the unrolled graph is absolutely critical in this case." }, { "heading": "5 CONCLUSIONS", "text": "In this paper we presented NOVAS, an optimization module for differentiable, non-convex inner-loop optimization that can be incorporated in end-to-end learning architectures involving nested optimizations. We demonstrated its advantages over alternative algorithms such as unrolled gradient descent and DCEM on a SPEN benchmark. 
We also showed that NOVAS allows us to expand the class of PDEs that can be addressed by the deep FBSDE method while resisting the curse of dimensionality, as demonstrated by the solution of a 101-dimensional HJB PDE associated with a portfolio optimization problem. NOVAS is a general-purpose differentiable non-convex optimization approach and thus, owing to its broad description and its generality, could be useful in a plethora of other applications involving nested optimization operations. We hope that the results of this work will inspire continued investigation." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported by AWS Machine Learning Research Awards and NASA Langley. We would also like to thank Brandon Amos for sharing his code for the example of Section 4.1 with us, which allowed us to reproduce his results and benchmark all algorithms." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NOVAS: DERIVATION AND IMPLEMENTATION DETAILS", "text": "Given the optimization problem defined in (3), we calculate the gradient with respect to the sampling distribution parameter ρ:\n∇ρ ln ∫ S(F(x)) f(x; ρ) dx = [∫ S(F(x)) ∇ρ f(x; ρ) dx] / [∫ S(F(x)) f(x; ρ) dx] (5)\n= [∫ S(F(x)) ∇ρ ln f(x; ρ) f(x; ρ) dx] / [∫ S(F(x)) f(x; ρ) dx] (6)\n= E[S(F(x)) ∇ρ ln f(x; ρ)] / E[S(F(x))], (7)\nwhere the second equality is obtained using the log trick. The exponential family distribution is characterized by the probability density function\nf(x; ρ) = h(x) exp(ρ^T T(x) − A(ρ)), (8)\nwhere h(x) is the base measure whose formula depends on the particular choice of distribution, T(x) is the vector of sufficient statistics, and A(ρ) = ln{ ∫ h(x) exp(ρ^T T(x)) dx } is the log partition. The gradient of the log density with respect to ρ can then be calculated as\n∇ρ ln f(x; ρ) = T(x) − ∇ρ A(ρ). (9)\nA Gaussian N(µ, Σ) is fully defined by its mean and covariance, but one can choose to optimize (3) only over the mean µ, and sample using a fixed covariance. This results in the following parameters of the distribution:\nT(x) = Σ^{−1/2} x, ∇ρ A(ρ) = Σ^{−1/2} µ, (10)\nand for ρ = µ the gradient can be calculated as\n∇µ ln E_{f(x;µ)}[S(F(x))] = E[S(F(x))(x − µ)] / E[S(F(x))]. (11)\nOne reason for choosing to optimize over the mean only is the simplicity of the resulting update expression, as well as numerical reasons, since applying gradient descent on the covariance matrix can lead to non-positive definiteness if the initial values or the learning rate are not chosen carefully. An intermediate solution between using a fixed covariance matrix and its gradient descent-based update law is assuming a diagonal covariance matrix and calculating each element of the diagonal via a simple weighted empirical estimator. The resulting update scheme is given in Alg. 1. Note that we also investigated using the full gradient descent-based update rule for the covariance, as well as the use of the Hessian of the objective function (3) and other techniques such as momentum, line search, trainable optimization hyper-parameter values, etc. to speed up convergence, but the results were inconclusive as to their additional benefit. We tested two different shape functions: S(y; κ) = exp(κy), as well as a shape function suggested by Zhou & Hu (2014), namely S(y; κ, γ) = (y − y_min) / (1 + exp(−κ(y − γ))), where y_min is the minimum of the sampled values and γ is the n-th largest value of the sorted y values. 
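Tying eq. (11) and the diagonal-covariance estimator above to Alg. 1, the following PyTorch sketch implements one NOVAS layer update for a minimization objective, using softmax shaping (the numerically stable form of the exponential shape function, as discussed next); the function name and the default hyper-parameter values are our own illustrative choices.

```python
import torch

def novas_layer(mu, sigma, objective, n_samples=100, alpha=1.0,
                kappa=5.0, eps=1e-3):
    """One NOVAS update (cf. Alg. 1): sample, shape, reweight mean/std.

    mu, sigma: (d,) tensors; objective: maps (M, d) -> (M,) costs to minimize.
    """
    x = mu + sigma * torch.randn(n_samples, mu.shape[0])  # x^m ~ N(mu, sigma^2)
    f = -objective(x)                                     # negate: minimization
    f = (f - f.min()) / (f.max() - f.min() + 1e-9)        # normalize to [0, 1]
    s = torch.softmax(kappa * f, dim=0)                   # exponential shaping
    mu_new = mu + alpha * (s[:, None] * (x - mu)).sum(dim=0)
    sigma_new = torch.sqrt((s[:, None] * (x - mu) ** 2).sum(dim=0) + eps)
    return mu_new, sigma_new
```

Running N − 1 such updates inside a torch.no_grad() context and keeping only the final one on the compute graph reproduces the non-unrolled scheme of Section 3.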
For the first choice it is numerically advantageous to include the normalization step in the function definition and replace the exponential with the softmax function. The latter choice is a differentiable function approximating the level/indicator function used in CEM. Though both shape functions exhibited similar performance, we noticed that the former was slightly faster, and the latter lead to slightly more accurate results in the SPEN example (but not in the FBSDE example, where they were equivalent). All parameter values such as κ can be made trainable parameters of the network, though we did not notice an improvement in doing so. In fact, the algorithm seems to be quite insensitive to its parameter values including the learning rate α, with the sole exception of σ which does indeed affect its output significantly." }, { "heading": "A.2 STOCHASTIC OPTIMAL CONTROL USING FBSDES", "text": "In this section we introduce the deep FBSDE framework for solving PDEs and show that combining NOVAS with the deep FBSDE allows us to extend the capabilities of the latter framework\n(Pereira et al., 2019b;a; Wang et al., 2019) in addressing Stochastic Optimal Control (SOC) problems. The mathematical formulation of a SOC problem leads to a nonlinear PDE, the HamiltonJacobi-Bellman PDE. This motivates algorithmic development for stochastic control that combine elements of PDE theory with deep learning. Recent encouraging results (Han et al., 2018; Raissi, 2018) in solving nonlinear PDEs within the deep learning community illustrate the scalability and numerical efficiency of neural networks. The transition from a PDE formulation to a trainable neural network is done via the concept of a system of Forward-Backward Stochastic Differential Equations (FBSDEs). Specifically, certain PDE solutions are linked to solutions of FBSDEs, which are the stochastic equivalent of a two-point boundary value problem and can be solved using a suitably defined neural network architecture. This is known in the literature as the deep FBSDE approach. In what follows, we will first define the SOC problem and present the corresponding HJB PDE, as well as its associated system of FBSDEs. The FBSDEs are then discretized over time and solved on a neural network graph.\nConsider a SOC problem with the goal of minimizing an expected cost functional subject to dynamics:\ninf u∈U [0,T ]\nJ ( u) = inf u∈U [0,T ] E [ φ ( x(T ) ) + ∫ T 0 l ( x(t), u(t) ) dt ] , (12)\ns.t. dx(t) = f ( x(t), u(t) ) dt+ Σ ( x(t), u(t) ) dw(t), x(0) = ξ, (13)\nwhere x ∈ Rn and u ∈ Rm are the state and control vectors respectively, f : Rn × Rm → Rn is a non-linear vector-valued drift function, Σ : Rn × Rm → Rn×v is the diffusion matrix, w ∈ Rv is vector of mutually independent Brownian motions, U is the set of all admissible controls and l : Rn × Rm → R and φ : Rn → R are the running and terminal cost functions respectively. Equation (13) is a controlled Itô drift-diffusion stochastic process.\nThrough the value function definition V ( x, t ) = infu∈U [0,T ] J ( u)|x0=x,t0=t and using Bellman’s principle of optimality, one can derive the Hamilton Jacobi Bellman PDE, given by\nVt + inf u∈U [0,T ]\n[ 1\n2 tr ( VxxΣΣ T ) + V Tx f ( x, u ) + l(x, u) ] = 0, V ( x, T ) = φ ( x ) , (14)\nwhere we drop explicit time dependencies for brevity, and use subscripts to indicate partial derivatives with respect to time and the state vector. 
The term inside the infimum operation is called the Hamiltonian:\nH ( x, u, Vx, VxxΣΣ T ) , 1 2 tr ( VxxΣΣ T ) + V Tx f ( x, u ) + l(x, u). (15)\nGiven that a solution u∗ to the minimization of H exists, the unique solution of (14) corresponds by virtue of the non-linear Feynman-Kac lemma (see for example Pardoux & Peng (1990)) to the following system of FBSDEs:\nx(t) = ξ + ∫ t 0 f ( x(t), u∗(t) ) dt+ ∫ t 0 Σ ( x(t), u∗(t) ) dwt, (FSDE) (16)\nV (x(t), t) = φ(x(T )) + ∫ T t l ( x(t), u∗(t) ) dt− ∫ T t V Tx (x(t), t)Σ ( x(t), u∗(t), t ) dw, (BSDE)\n(17)\nu∗(t) = arg min u\nH ( x(t), u, Vx(x(t), t), Vxx(x(t), t)Σ(x(t), u)Σ(x(t), u) T ) . (18)\nHere, V (x(t), t) denotes an evaluation of V (x, t) along a path of x(t), thus V (x(t), t) is a stochastic process (and similarly for Vx(x(t), t) and Vxx(x(t), t)). Note that x(t) evolves forward in time (due to its initial condition x(0) = ξ), whereas V (x(t), t) evolves backwards in time, due to its terminal condition φ(x(T )), thus leading to a system that is similar to a two-point boundary value problem. While we can easily simulate a forward process by sampling noise and then performing Euler integration, a simple backward integration of V (x(t), t) would result in it depending explicitly on future values of noise, which is not desirable for a non-anticipating process, i.e., a process that does not exploit knowledge on future noise values. Two remedies exist to mitigate this problem: either back-propagate the conditional expectation of V (x(t), t) (e.g., as in Exarchos & Theodorou (2018)), or forward-propagate V (x(t), t) starting from an initial condition guess, compare its terminal value V (x(T ), T ) to the terminal condition, and adjust the initial condition accordingly so that\nthe terminal condition is satisfied approximately. For this forward evolution of the BSDE, the above system is discretized in time as follows:\nxk+1 = xk + f(xk, u ∗ k)∆t+ Σ(xk, u ∗ k)∆wk, x0 = ξ, (FSDE) (19) Vk+1 = Vk − l(xk, u∗k)∆t+ V Tx,kΣ(xk, u∗k) ∆wk, V0 = ψ, (BSDE) (20) u∗k = arg min u H ( xk, u, Vx,k, Vxx,kΣ(xk, u)Σ(xk, u) T ) . (21)\nHere, ∆wk is drawn from N (0,∆t) and H is given by eq. (15). Note that for every sampled trajectory {xk}Kk=1 there is a corresponding trajectory {Vk}Kk=1. Under the deep FBSDE controller framework, V0 = ψ and Vx,0 are set to be trainable parameters of a deep neural network that approximates Vx ( x(t), t ) at every time step under forward-propagation, using an LSTM3. The terminal\nvalue of the propagated V ( x(t), t ) , namely V (x(T ), T ), is then compared to φ ( x(T ) ) to compute a loss function to train the network. Note that since the Hamiltonian can have any arbitrary non-linear dependence on the control, the resulting minimization problem (21) is generally non-covex and does not have a closed-form solution. Furthermore, it must be solved for each time step, and for utilization within the deep FBSDE controller framework, the non-convex optimizer must be differentiable to facilitate end-to-end learning. This makes NOVAS a good fit. The neural network architecture is shown in Fig. 3. Since the non-convex Hamiltonian minimization procedure is performed at every time step leading to a repeated use of NOVAS in the architecture, the ability to avoid unrolling the inner-loop computation graph is crucial." 
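As a concrete illustration of the time-discretized system (19)–(21) and of where NOVAS enters the deep FBSDE architecture, here is a schematic PyTorch sketch of a single forward step; fbsde_step, hamiltonian, novas_minimize, and the tensor shapes are placeholder assumptions rather than the paper's actual implementation.

```python
import torch

def fbsde_step(x_k, V_k, Vx_k, dt, f, Sigma, run_cost, hamiltonian,
               novas_minimize, u_dim):
    """One Euler step of the discretized FBSDE system, eqs. (19)-(21).

    x_k: (B, n) state; V_k: (B, 1) value; Vx_k: (B, n) value gradient.
    f, Sigma: drift (B, n) and diffusion (B, n, v); run_cost returns (B, 1).
    novas_minimize(obj, u_dim) approximately solves argmin_u obj(u) via NOVAS.
    """
    # (21): approximately solve argmin_u H(x_k, u, Vx_k, ...) with NOVAS.
    u = novas_minimize(lambda uu: hamiltonian(x_k, uu, Vx_k), u_dim)
    sig = Sigma(x_k, u)                                        # (B, n, v)
    dw = torch.randn(x_k.shape[0], sig.shape[-1]) * dt ** 0.5  # shared noise
    # (19): forward SDE step for the state.
    x_next = x_k + f(x_k, u) * dt + torch.einsum('bnv,bv->bn', sig, dw)
    # (20): forward propagation of the value along the same noise path.
    V_next = (V_k - run_cost(x_k, u) * dt
              + torch.einsum('bn,bnv,bv->b', Vx_k, sig, dw).unsqueeze(-1))
    return x_next, V_next, u
```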
}, { "heading": "A.3 FBSDE STOCHASTIC OPTIMAL CONTROL FOR AFFINE-QUADRATIC SYSTEMS", "text": "We now show how the previous state-of-the-art (Exarchos & Theodorou, 2018; Pereira et al., 2019b) deals with the problem of the Hamiltonian min operator by assuming a special structure of the problem. Specifically, they restrict the dynamics of eq. (13) to be affine in control, i.e., of the form f(x, u) = F (x) + G(x)u, and the cost in eq. (12) to be quadratic in control, i.e., l(x, u) = q(x)+uTRu. In this case, and if Σ(x, t) is not a function of u, one can perform explicit minimization of the Hamiltonian with respect to u in eq. (21) to find the optimal control:\nu∗ = −R−1GTVx. (22)\nThis is done by simply setting ∂H/∂u = 0 and solving for u. Substituted back into the HJB PDE, this yields a simplified expression without a min operator:\nVt + 1\n2 tr(VxxΣΣ\nT ) + V Tx F + q − 1\n2 V Tx GR −1GTVx = 0, V (x, T ) = φ(x).\nThus, for this restricted class of systems, the deep FBSDE neural neural network architecture does not require a numerical minimization operation over u at every time step, as in eq. (21). The cart-pole swing-up task of the next section is an example of a system that satisfies these restrictions. A similar closed-form solution exists for some cases of L1-optimal control (Exarchos et al., 2018), as well as some differential games (Exarchos et al., 2019). While simplifying the problem significantly, this approach comes with an important caveat: several dynamical systems do not have a control-affine structure, and penalizing control energy (uTRu) is not always meaningful in every setting." }, { "heading": "A.3.1 CART-POLE SWING-UP PROBLEM", "text": "We define the state vector to be X = [ x, θ, ẋ, θ̇ ]T , where x represents the cart-position, θ repre-\nsents the pendulum angular-position, ẋ represents the cart-velocity, and θ̇ represents the pendulum angular-velocity. Let u ∈ R be the control force applied to the cart. The deterministic equations of motion for the cart-pole system are,\nẍ = u+mp sin θ(lθ̇ + g cos θ)\nmc +mp sin θ\nθ̈ = −u cos θ −mplθ̇ cos θ sin θ\nl(mc +mp sin θ) 3In this work, we additionally use the same LSTM to predict a column of the Hessian Vxx ( x(t), t ) .\nFor our experiments, we consider the case where noise enters the velocity channels of the state. The stochastic dynamics therefore take the following form,\ndX = d xθẋ θ̇ = \nẋ\nθ̇\nmp sin θ(lθ̇ + g cos θ)\nmc +mp sin θ −mplθ̇ cos θ sin θ l(mc +mp sin θ)\n dt+ \n0 0 1\nmc +mp sin θ − cos θ\nl(mc +mp sin θ)\nu dt+ 0 00 0σ̃ 0\n0 σ̃\n[dw1dw2 ]\nThe task is to perform a swing-up i.e. starting from an initial state ofX = [ 0, 0, 0, 0 ]T at time t0 = 0,\nreach the target state of X = [ 0, π, 0, 0 ]T by the end of the time horizon t = T . We consider T = 1.5s with a time discretization step of ∆t = 0.02s. Notice that the dynamics are affine in control, and selecting the running cost to be l = uTRu, minimization of the Hamiltonian with respect to u assumes a closed-form solution, namely that of eq. (22). This fact allows us to replace the min operator in favor of this solution (Pereira et al., 2019b). Here, we test NOVAS by avoiding this replacement. We consider a running and terminal cost matrix of diag(Q) = [0.0, 10.0, 3.0, 0.5] and the control cost matrix of R = 0.1. The cart-pole parameters considered are mp = 0.01 kg, mc = 1.0 kg, l = 0.5m, which are the mass of the pendulum, mass of cart, and length of the pendulum, respectively. For the noise standard deviation, σ̃ = 0.5 was used. 
As far as the hyper-parameters for learning the deep FBSDE controller are concerned, we used a two-layer LSTM network as shown in Fig. 5(e) with hidden dimension of 16 in each layer, a batch size of 128, and trained the network using the Adam optimizer for 3500 iterations with a learning rate of 5e−3. For the NOVAS layer at every time step, we used 5 inner-loop iterations and 100 samples for both training and inference. A shape function of S = exp(·), initial µ = 0, and initial σ = 10 were used. With reference to parameters in Alg. 1, for this experiment we used 75 time steps, which means that the LSTM graph can be viewed as a 75 layered feed-forward network when unrolled. Additionally, at each time step we use a NOVAS Layer to compute the optimal control. Thus, the total number of network layers is L = 75 + 74 = 149 with fi’s being NOVAS Layer for i = 2, 4, 6, . . . ." }, { "heading": "A.3.2 PORTFOLIO OPTIMIZATION PROBLEM", "text": "We now consider a problem for which an explicit solution of the Hamiltonian min operator does not exist. Let N be the total number of stocks that make up an index I such that I = 1N ∑N i=1 Si, where Si is the stock price process of the i-th stock. Let M be the number of a fixed selection of traded stocks taken from those N stocks such that M < N . Furthermore, let u ∈ RM+1 be the control vector. The (N + 1) dimensional state vector is comprised N stock prices and a wealth process W . The dynamics of each stock price and wealth process are given by\nπk = [ softmax(u) ] k = euk∑M+1\nm=1 e um\n, (k = 1, 2, · · · ,M + 1) (23)\ndSi(t) = Si(t)µi dt+ Si(t) dηi ( where, i = 1, 2, · · · , N and dηi = N∑ j=1 σi,j dwj(t) ) (24)\ndW (t) = W (t) ( π1 r dt+ M+1∑ m=2 πm µm dt+ M+1∑ m=2 πm dηm ) (25)\nwhere πk is the fraction of wealth invested in the k-th traded stock, r is rate of return per period of the risk-free asset, µi is the rate of return of the ith stock. Here, σi,j denotes the standard deviation of noise terms entering the i−the stock process wherein i = j indicates the contribution of the process’ own noise as opposed to i 6= j, which indicates the interaction of noises between stocks (correlation). All wi’s are mutually independent standard Brownian motions. To obtain the σ’s, we used randomly generated synthetic covariance matrices which mimic real stock-market data. Note that the M traded stocks were randomly picked and were not constrained to be any specific subselection of the N stocks. Separate noise realizations were used during training, validation, and testing to ensure that the network does not over-fit to a particular noise profile.\nFor our experiments we use N = 100 stocks that make up the index I and M = 20 traded stocks. We used a scaled squared-softplus function as terminal cost, given by\nφ(x(T )) = q\n( 1\nβ · log\n( 1 + eβ(I(T )−W (T )) ))2 ,\nwith β = 10, q = 500, and no running cost, focusing on investment outperformance at the end of the planning horizon of one year. To simulate the stock dynamics we used a time discretization of dt = 1/52, which amounts to controls (and thus amounts invested) being applied on a weekly basis, for a total time of 1 year. The deep FBSDE-NOVAS hyperparameters were as follows: 16 neurons each in a two-layer LSTM network to predict the gradient of the value function at each time step, a batch size of 32, an initial learning rate set to 1e−2 and reduced by factor of 0.1 after 4000 and 4500 training iterations. Training was done using the Adam optimizer for a total of 5000 iterations. 
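The stock and wealth dynamics of eqs. (23)–(25) can be simulated with the following NumPy sketch; for illustration it assumes the traded assets are the first M stocks, whereas the paper picks them at random, and all names are placeholders.

```python
import numpy as np

def portfolio_step(S, W, u, mu, sigma, r, dt=1.0 / 52):
    """One Euler step of the stock and wealth dynamics, eqs. (23)-(25).

    S: (N,) stock prices; W: scalar wealth; u: (M+1,) raw controls;
    mu: (N,) stock return rates; sigma: (N, N) noise loadings; r: risk-free
    rate. Traded assets assumed to be stocks 0..M-1 purely for illustration.
    """
    e = np.exp(u - u.max())
    pi = e / e.sum()                            # (23): softmax allocation
    dw = np.sqrt(dt) * np.random.randn(len(S))  # independent Brownian increments
    deta = sigma @ dw                           # correlated noise d(eta)_i
    S_next = S + S * mu * dt + S * deta         # (24): stock price dynamics
    M = len(u) - 1
    dW = W * (pi[0] * r * dt + pi[1:] @ (mu[:M] * dt + deta[:M]))  # (25)
    return S_next, W + dW
```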
For NOVAS, we used 100 samples with 5 inner-loop iterations for training, and 200 samples with 50 inner-loop iterations for inference. The shape function used was S(x) = exp(x).

With reference to the parameters in Alg. 1, for this experiment we used 52 time steps, which means that the LSTM graph can be viewed as a 52-layer feed-forward network when unrolled. Additionally, at each time step (except for the last time step) we use a NOVAS layer to compute the optimal control. Thus, the total number of network layers is L = 52 + 51 = 103, with the fi's being NOVAS layers for i = 2, 4, 6, . . . ." }, { "heading": "A.3.3 LOSS FUNCTION FOR TRAINING DEEP FBSDE CONTROLLERS", "text": "The loss function used in our experiments to train the deep FBSDE controller with the NOVAS layer is as follows:

L = l₁ · Hδ( V(x_T, T) − V∗(x_T, T) ) + l₂ · Hδ( Vx(x_T, T) − V∗x(x_T, T) ) + l₃ · Hδ( Vxx(x_T, T) − V∗xx(x_T, T) ) + l₄ · ( V∗(x_T, T) )² + l₅ · ( V∗x(x_T, T) )² + l₆ · ( V∗xx(x_T, T) )²,

where

Hδ(a) = a², for |a| < δ;  δ(2|a| − δ), otherwise.

Here, x_T denotes x(T), and V(x_T, T), Vx(x_T, T), and Vxx(x_T, T) are the predicted value function, its predicted gradient, and the predicted last column of the Hessian matrix, respectively, at the terminal time step. The corresponding targets are obtained through the given terminal cost function φ(x(T)), so that V∗(x_T, T) = φ(x_T), V∗x(x_T, T) = φx(x_T), and V∗xx(x_T, T) = φxx(x_T). Each term is computed by averaging across the batch samples. Additionally, we may choose to add terms that directly minimize the targets. This is possible because gradients flow through the dynamics functions, and therefore the weights of the LSTM can influence what the terminal state x(T) will be.

For the cart-pole problem we used δ = 50 and [l₁, l₂, l₃, l₄, l₅, l₆] = [1, 1, 0, 1, 1, 0], and for the portfolio optimization problem we used δ = 50 and [l₁, l₂, l₃, l₄, l₅, l₆] = [1, 1, 1, 1, 0, 0]." }, { "heading": "A.3.4 HARDWARE CONFIGURATION AND RUN-TIMES", "text": "All experiments were run on an NVIDIA GeForce RTX 2080Ti graphics card with 12GB memory. The PyTorch (Paszke et al., 2019) implementation of the 101-dimensional portfolio optimization problem had a run-time of 2.5 hours." } ]
2,021
NOVAS: NON-CONVEX OPTIMIZATION VIA ADAPTIVE STOCHASTIC SEARCH FOR END-TO-END LEARNING AND CONTROL
SP:401998f890d05e3c22e89754ed6b64403e1a6ead
[ "This work studies the DNN-based spatiotemporal point process model. It points out the drawback of most existing DNN-based point process models: incapability to incorporate the spatio information. Although in statistics, the spatiotemporal point process is capable of capturing events in continuous space and time, such methods are computation expensive. The theoretical analysis is provided, and experimental comparisons are conducted on synthetic and real data. " ]
Learning the dynamics of spatiotemporal events is a fundamental problem. Neural point processes enhance the expressivity of point process models with deep neural networks. However, most existing methods only consider temporal dynamics without spatial modeling. We propose Deep Spatiotemporal Point Process (DeepSTPP), a deep dynamics model that integrates spatiotemporal point processes. Our method is flexible, efficient, and can accurately forecast irregularly sampled events over space and time. The key construction of our approach is the nonparametric space-time intensity function, governed by a latent process. The intensity function enjoys closed-form integration for the density. The latent process captures the uncertainty of the event sequence. We use amortized variational inference to infer the latent process with deep networks. Using synthetic datasets, we validate that our model can accurately learn the true intensity function. On real-world benchmark datasets, our model demonstrates superior performance over state-of-the-art baselines.
[ { "affiliations": [], "name": "Zihao Zhou" }, { "affiliations": [], "name": "Xingyi Yang" }, { "affiliations": [], "name": "Ryan Rossi" }, { "affiliations": [], "name": "Handong Zhao" }, { "affiliations": [], "name": "Rose Yu" } ]
[ { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "Brandon Amos", "Maximilian Nickel" ], "title": "Neural spatio-temporal point processes", "venue": "ICLR,", "year": 2021 }, { "authors": [ "Junyoung Chung", "Çaglar Gülçehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "CoRR, abs/1412.3555,", "year": 2014 }, { "authors": [ "Daryl J Daley", "David Vere-Jones" ], "title": "An introduction to the theory of point processes: volume II: general theory and structure", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "Gru-ode-bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nan Du", "Hanjun Dai", "Rakshit Trivedi", "Utkarsh Upadhyay", "Manuel Gomez-Rodriguez", "Le Song" ], "title": "Recurrent marked temporal point processes: Embedding event history to vector", "venue": "In KDD,", "year": 2016 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shen Fang", "Qi Zhang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Gstnet: Global spatialtemporal network for traffic flow prediction", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Chris Finlay", "Jörn-Henrik Jacobsen", "Levon Nurbekyan", "Adam M Oberman" ], "title": "How to train your neural ode", "venue": "arXiv preprint arXiv:2002.02798,", "year": 2020 }, { "authors": [ "Xu Geng", "Yaguang Li", "Leye Wang", "Lingyu Zhang", "Qiang Yang", "Jieping Ye", "Yan Liu" ], "title": "Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Amir Gholami", "Kurt Keutzer", "George Biros" ], "title": "Anode: Unconditionally accurate memory-efficient gradients for neural odes", "venue": "arXiv preprint arXiv:1902.10298,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Valerie Isham", "Mark Westcott" ], "title": "A self-correcting point process", "venue": "Stochastic processes and their applications,", "year": 1979 }, { "authors": [ "Junteng Jia", "Austin R Benson" ], "title": "Neural jump stochastic differential equations", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Patrick Kidger", "James Morrill", "James Foster", "Terry Lyons" ], "title": "Neural controlled differential equations for irregular time series", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": 
"Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Wenwei Liang", "Wei Zhang", "Xiaoling Wang" ], "title": "Deep sequential multi-task modeling for next check-in time and location prediction", "venue": "In International Conference on Database Systems for Advanced Applications,", "year": 2019 }, { "authors": [ "Hongyuan Mei", "Jason Eisner" ], "title": "The neural hawkes process: A neurally self-modulating multivariate point process", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "George O Mohler", "Martin B Short", "P Jeffrey Brantingham", "Frederic Paik Schoenberg", "George E Tita" ], "title": "Self-exciting point process modeling of crime", "venue": "Journal of the American Statistical Association,", "year": 2011 }, { "authors": [ "Jesper Moller", "Rasmus Plenge Waagepetersen" ], "title": "Statistical inference and simulation for spatial point processes", "venue": null, "year": 2003 }, { "authors": [ "Michael C Mozer", "Denis Kazakov", "Robert V Lindsey" ], "title": "Discrete event, continuous time rnns", "venue": null, "year": 2017 }, { "authors": [ "Yosihiko Ogata" ], "title": "On lewis’ simulation method for point processes", "venue": "IEEE transactions on information theory,", "year": 1981 }, { "authors": [ "Maya Okawa", "Tomoharu Iwata", "Takeshi Kurashima", "Yusuke Tanaka", "Hiroyuki Toda", "Naonori Ueda" ], "title": "Deep mixture point processes: Spatio-temporal event prediction with rich contextual information", "venue": "In KDD,", "year": 2019 }, { "authors": [ "Kira Rehfeld", "Norbert Marwan", "Jobst Heitzig", "Jürgen Kurths" ], "title": "Comparison of correlation analysis techniques for irregularly sampled time series", "venue": "Nonlinear Processes in Geophysics,", "year": 2011 }, { "authors": [ "Alex Reinhart" ], "title": "A review of self-exciting spatio-temporal point processes and their applications", "venue": "Statistical Science,", "year": 2018 }, { "authors": [ "Abolfazl Safikhani", "Camille Kamga", "Sandeep Mudigonda", "Sabiheh Sadat Faghih", "Bahman Moghimi" ], "title": "Spatio-temporal modeling of yellow taxi demands in new york city using generalized star models", "venue": "International Journal of Forecasting,", "year": 2018 }, { "authors": [ "Jin Shang", "Mingxuan Sun" ], "title": "Geometric hawkes processes with graph convolutional recurrent neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Oleksandr Shchur", "Ali Caner Türkmen", "Tim Januschowski", "Stephan Günnemann" ], "title": "Neural temporal point processes: A review", "venue": "arXiv preprint arXiv:2104.03528,", "year": 2021 }, { "authors": [ "Satya Narayan Shukla", "Benjamin Marlin" ], "title": "Interpolation-prediction networks for irregularly sampled time series", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Alejandro Veen", "Frederic P Schoenberg" ], "title": "Estimation of space–time branching process models in seismology using an em–type algorithm", "venue": "Journal of the American Statistical Association,", "year": 2008 }, { "authors": [ "Shuai Xiao", "Mehrdad Farajtabar", "Xiaojing Ye", "Junchi Yan", "Le Song", "Hongyuan Zha" ], "title": "Wasserstein learning of deep generative point process 
models", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Shi Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Guolei Yang", "Ying Cai", "Chandan K. Reddy" ], "title": "Recurrent spatio-temporal point process for checkin time prediction", "venue": "In CIKM,", "year": 2018 }, { "authors": [ "Huaxiu Yao", "Xianfeng Tang", "Hua Wei", "Guanjie Zheng", "Zhenhui Li" ], "title": "Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Qiang Zhang", "Aldo Lipani", "Omer Kirnap", "Emine Yilmaz" ], "title": "Self-attentive hawkes processes", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Liang Zhao", "Qian Sun", "Jieping Ye", "Feng Chen", "Chang-Tien Lu", "Naren Ramakrishnan" ], "title": "Multitask learning for spatio-temporal event forecasting", "venue": "In KDD,", "year": 2015 }, { "authors": [ "Shixiang Zhu", "Shuang Li", "Zhigang Peng", "Yao Xie" ], "title": "Imitation learning of neural spatio-temporal point processes", "venue": "arXiv preprint arXiv:1906.05467,", "year": 2019 }, { "authors": [ "Simiao Zuo", "Haoming Jiang", "Zichong Li", "Tuo Zhao", "Hongyuan Zha" ], "title": "Transformer hawkes process", "venue": "International Conference on Machine Learning (ICML),", "year": 2020 } ]
[ { "heading": "1. Introduction", "text": "Accurate modeling of spatiotemporal event dynamics is fundamentally important for disaster response (Veen and Schoenberg, 2008), logistic optimization (Safikhani et al., 2018) and social media analysis (Liang et al., 2019). Compared to other sequence data such as texts or time series, spatiotemporal events occur irregularly with uneven time and space intervals.\nDiscrete-time deep dynamics models such as recurrent neural networks (RNNs) (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) assume events to be evenly sampled. Interpolating an irregular sampled sequence into a regular sequence can introduce significant biases (Rehfeld et al., 2011). Furthermore, event sequences contain strong spatiotemporal dependencies. The rate of an event depends on the preceding events, as well as the events geographically correlated to it.\nSpatiotemporal point processes (STPP) (Daley and Vere-Jones, 2007; Reinhart et al., 2018) provides the statistical framework for modeling continuous-time event dynamics. As shown in Figure 1, given the history of events sequence, STPP estimates the intensity function that is evolv-\n© 2022 Z. Zhou, X. Yang, R. Rossi, H. Zhao & R. Yu.\ning in space and time. However, traditional statistical methods for estimating STPPs often require strong modeling assumptions, feature engineering, and can be computationally expensive.\nMachine learning community is observing a growing interest in continuous-time deep dynamics models that can handle irregular time intervals. For example, Neural ODE (Chen et al., 2018) parametrizes the hidden states in an RNN with an ODE. Shukla and Marlin (2018) uses a separate network to interpolates between reference time points. Neural temporal point process (TPP) (Mei and Eisner, 2017; Zhang et al., 2020; Zuo et al., 2020) is an exciting area that combines fundamental concepts from temporal point processes with deep learning to model continuous-time event sequences, see a recent review on neural TPP (Shchur et al., 2021). However, most of the existing models only focus on temporal dynamics without considering spatial modeling.\nIn the real world, while time is a unidirectional process (arrow of time), space extends in multiple directions. This fundamental difference from TPP makes it nontrivial to design a unified STPP model. The naive approach to approximate the intensity function by a deep neural network would lead to intractable integral computation for likelihood. Prior research such as Du et al. (2016) discretizes the space as “markers” and use marked TPP to classify the events. This approach cannot produce the space-time intensity function. Okawa et al. (2019) models the spatiotemporal density using a mixture of symmetric kernels, which ignores the unidirectional property of time. Chen et al. (2021) proposes to model temporal intensity and spatial density separately with neural ODE, which is computational expensive.\nWe propose a simple yet efficient approach to learn STPP. Our model, Deep Spatiotemporal Point Process (DeepSTPP) marries the principles of spatiotemporal point processes with deep learning. We take a non-parametric approach and model the space-time intensity function as mixture of kernels. The parameters of the intensity function are governed by a latent stochastic process no sampling which captures the uncertainty of the event sequence. The latent process is then inferred via amortized variational inference. That is, we draw a sample from the variational distribution for every event. 
We use a Transformer network to parametrize the variational distribution conditioned on the previous events.

Compared with existing approaches, our model is non-parametric and hence does not make assumptions on the parametric form of the distribution. Our approach learns the space-time intensity function jointly, without requiring separate models for the temporal intensity function and the spatial density as in Chen et al. (2021). Our model is probabilistic by nature and can describe various uncertainties in the data. More importantly, our model enjoys closed-form integration, making it feasible for processing large-scale event datasets. To summarize, our work makes the following key contributions:

• Deep Spatiotemporal Point Process. We propose a novel deep point process model for forecasting unevenly sampled spatiotemporal events. It integrates deep learning with spatiotemporal point processes to learn continuous space-time dynamics.

• Neural Latent Process. We model the space-time intensity function using a non-parametric approach, governed by a latent stochastic process. We use amortized variational inference to perform inference on the latent process conditioned on the previous events.

• Effectiveness. We demonstrate our model on many synthetic and real-world spatiotemporal event forecasting tasks, where it achieves superior performance in accuracy and efficiency. We also derive and implement efficient algorithms for simulating STPPs." }, { "heading": "2. Methodology", "text": "We first introduce the background on spatiotemporal point processes, and then describe our approach to learning the underlying spatiotemporal event dynamics." }, { "heading": "2.1. Background on Spatiotemporal Point Process", "text": "Spatiotemporal Point Process. A spatiotemporal point process (STPP) models the number of events N(S × (a, b]) that occur in the Cartesian product of the spatial domain S ⊆ R² and the time interval (a, b]. It is characterized by a non-negative space-time intensity function given the history Ht := {(s1, t1), . . . , (sn, tn)}, tn ≤ t:

λ∗(s, t) := lim_{∆s→0, ∆t→0} E[ N(B(s, ∆s) × (t, t+∆t]) | Ht ] / ( |B(s, ∆s)| ∆t )   (1)

which is the probability of finding an event in an infinitesimal time interval (t, t+∆t] and an infinitesimal spatial ball B(s, ∆s) centered at location s.

Example 1: Spatiotemporal Hawkes process (STH). The spatiotemporal Hawkes (or self-exciting) process assumes every past event has an additive, positive, decaying, and spatially local influence over future events. Such a pattern resembles neuronal firing and earthquakes. It is characterized by the following intensity function (Reinhart et al., 2018):

λ∗(s, t) := µ g0(s) + Σ_{i: ti<t} g1(t, ti) g2(s, si),   µ > 0   (2)

where g0(s) is the probability density of a distribution over S, g1 is the triggering kernel, often implemented as an exponential decay g1(∆t) := α exp(−β∆t), α, β > 0, and g2(s, si) is the density of a unimodal distribution over S centered at si.

Example 2: Spatiotemporal Self-Correcting process (STSC). The self-correcting spatiotemporal point process (Isham and Westcott, 1979) assumes that the background intensity increases with a varying speed at different locations, and that the arrival of each event reduces the intensity nearby. The STSC can model certain regular event sequences, such as an alternating home-to-work travel sequence.
It has the following intensity function:

λ∗(s, t) = µ exp( g0(s) β t − Σ_{i: ti<t} α g2(s, si) ),   α, β, µ > 0   (3)

Here g0(s) is the density of a distribution over S, and g2(s, si) is the density of a unimodal distribution over S centered at location si.

Maximum Likelihood Estimation. Given a history of n events Ht, the joint log-likelihood of the observed events for an STPP is as follows:

log p(Ht) = Σ_{i=1}^{n} log λ∗(si, ti) − ∫_S ∫_{0}^{t} λ∗(u, τ) du dτ   (4)

Here, the space-time intensity function λ∗(s, t) plays a central role. Maximum likelihood estimation seeks the λ∗(s, t) that optimizes Eqn. (4) on the data.

Predictive distribution. Denote the probability density function (PDF) of the STPP as f(s, t|Ht), which represents the conditional probability density that the next event will occur at location s and time t, given the history. The PDF is closely related to the intensity function:

f(s, t|Ht) = λ∗(s, t) (1 − F∗(t|Ht)) = λ∗(s, t) exp( −∫_S ∫_{tn}^{t} λ∗(u, τ) dτ du )   (5)

where F∗ is the conditional cumulative distribution function (CDF) of the next event time; see the derivations in Appendix A.1. This means the intensity function specifies the expected number of events in a region conditioned on the past.

The predicted time of the next event is the expected value of the predictive distribution over time, f⋆(t), over the entire spatial domain:

E[tn+1|Ht] = ∫_{tn}^{∞} t ∫_S f∗(s, t) ds dt = ∫_{tn}^{∞} t exp( −∫_{tn}^{t} λ∗(τ) dτ ) λ∗(t) dt

Similarly, the predicted location of the next event evaluates to:

E[sn+1|Ht] = ∫_S s ∫_{tn}^{∞} f∗(s, t) dt ds = ∫_{tn}^{∞} exp( −∫_{tn}^{t} λ∗(τ) dτ ) ∫_S s λ∗(s, t) ds dt

Unfortunately, Eqn. (4) is generally intractable: it requires either strong modeling assumptions or expensive Monte Carlo sampling. We propose the DeepSTPP model to simplify the learning." }, { "heading": "2.2. Deep Spatiotemporal Point Process (DeepSTPP)", "text": "We propose DeepSTPP, a simple and efficient approach for learning the space-time event dynamics. Our model (1) introduces a latent process to capture the uncertainty, (2) parametrizes the latent process with deep neural networks to increase model expressivity, and (3) approximates the intensity function with a set of spatial and temporal kernel functions.

Neural latent process. Given a sequence of n events, we wish to model the conditional density of observing the next event given the history, f(s, t|Ht). We introduce a latent process to capture the uncertainty of the event history and infer the latent process with amortized variational inference. The latent process dictates the parameters of the space-time intensity function. We sample from the latent process using the reparameterization trick (Kingma and Welling, 2013).

As shown in Figure 2, given the event sequence Ht = {(s1, t1), . . . , (sn, tn)}, tn ≤ t, we encode the entire sequence into a high-dimensional embedding and use positional encoding to encode the sequence order. To capture the stochasticity in the temporal dynamics, we introduce a latent process z = (z1, · · · , zn) for the entire sequence. We assume the latent process follows a multivariate Gaussian at each time step:

zi ∼ qϕ(zi|Ht) = N(µ, Diag(σ))   (6)

where the mean µ and covariance Diag(σ) are the outputs of the embedding neural network. In our implementation, we found using a Transformer (Vaswani et al., 2017) with sinusoidal positional encoding to be beneficial. The positions to be encoded are the normalized event times instead of the index numbers, to account for the unequal time intervals. Recently, Zuo et al. (2020)
also demonstrated that the Transformer enjoys better performance for learning the intensity in temporal point processes.

Non-parametric model. We take a non-parametric approach and model the space-time intensity function λ∗(s, t) as:

λ∗(s, t|z) = Σ_{i=1}^{n+J} wi ks(s, si; γi) kt(t, ti; βi)   (7)

Here wi(z), γi(z), and βi(z) are the per-event parameters, conditioned on the latent process. Specifically, wi represents the non-negative intensity magnitude, implemented with a softplus activation function. ks(·, ·) and kt(·, ·) are the spatial and temporal kernel functions, respectively. We parametrize both as exponential (RBF-type) kernels, with the spatial kernel normalized:

ks(s, si) = α⁻¹ exp( −γi ∥s − si∥ ),   kt(t, ti) = exp( −βi ∥t − ti∥ )   (8)

where the bandwidth parameter γi controls an event's influence over the spatial domain, the parameter βi is the decay rate that represents the event's influence over time, and α = ∫_S exp( −γi ∥s − si∥ ) ds is the normalization constant. We use decoder networks to generate the parameters {wi, γi, βi} from z separately, as shown in Figure 2. Each decoder is a 4-layer feed-forward network. We use a softplus activation function to ensure wi and γi are positive. The decay rate βi can be any real number, so an event can also have a constant or increasing triggering intensity over time.

In addition to the n historical events, we also randomly sample J representative points from the spatial domain to approximate the background intensity. This accounts for the influence of unobserved background events, with varying rates at different absolute locations; the inclusion of these representative points approximates this background distribution.

The model design in (7) enjoys closed-form integration, which gives the conditional PDF as:

f(s, t|Ht, z) = λ∗(s, t|z) exp( −Σ_{i=1}^{n+J} (wi/βi) [ kt(tn, ti) − kt(t, ti) ] )   (9)

See the derivation details in Appendix A.2. DeepSTPP circumvents the integration of the intensity function and enjoys fast inference when forecasting future events. In contrast, NSTPP (Chen et al., 2021) is relatively inefficient, as its ODE solver also requires additional numerical integration.

Parameter learning. Due to the latent process, the posterior becomes intractable. Instead, we use amortized inference by optimizing the evidence lower bound (ELBO) of the likelihood. In particular, given the event history Ht, the conditional log-likelihood of the next event satisfies:

log p(s, t|Ht) ≥ log pθ(s, t|Ht, z) − KL( qϕ(z|Ht) || p(z) )   (10)
             = log λ∗(s, t|z) − ∫_{tn}^{t} λ∗(τ) dτ − KL( q || p )   (11)

with z ∼ qϕ(z|Ht), where ϕ represents the parameters of the encoder network and θ the parameters of the decoder network. p(z) is the prior distribution, which we assume to be Gaussian, and KL(·||·) is the Kullback–Leibler divergence between two distributions. We optimize the objective in Eqn. (11) w.r.t. the parameters ϕ and θ using back-propagation." }, { "heading": "3. Related Work", "text": "Spatiotemporal Dynamics Learning. Modeling the spatiotemporal dynamics of a system in order to forecast the future is a fundamental task in many fields. Most work on spatiotemporal dynamics has focused on spatiotemporal data measured at regular space-time intervals, e.g., (Xingjian et al., 2015; Li et al., 2018; Yao et al., 2019; Fang et al., 2019; Geng et al., 2019). For discrete spatiotemporal events, statistical methods include space-time point processes; see (Moller and Waagepetersen, 2003; Mohler et al., 2011).
(Zhao et al., 2015) propose multi-task feature learning, whereas (Yang et al., 2018) propose an RNN-based model to predict spatiotemporal check-in events. These discrete-time models assume the data are sampled evenly and are thus unsuitable for our task.

Continuous-Time Sequence Models. Continuous-time sequence models provide an elegant approach for describing irregularly sampled time series. For example, (Chen et al., 2018; Jia and Benson, 2019; Dupont et al., 2019; Gholami et al., 2019; Finlay et al., 2020; Kidger et al., 2020; Norcliffe et al., 2021) assume the latent dynamics are continuous and can be modeled by an ODE. But for high-dimensional spatiotemporal processes, this approach can be computationally expensive. Che et al. (2018) and Shukla and Marlin (2018) modify the hidden states with exponential decay. GRU-ODE-Bayes, proposed by De Brouwer et al. (2019), introduces a continuous-time version of the GRU and a Bayesian update network capable of handling sporadic observations. However, Mozer et al. (2017) show that there is no significant benefit to using continuous-time RNNs for discrete event data. Special treatment is still needed for modeling unevenly sampled events.

Deep Point Process. Point processes are well studied in statistics (Moller and Waagepetersen, 2003; Daley and Vere-Jones, 2007; Reinhart et al., 2018). Deep point processes couple deep learning with point processes and have received considerable attention. For example, the neural Hawkes process applies RNNs to approximate the temporal intensity function (Du et al., 2016; Mei and Eisner, 2017; Xiao et al., 2017; Zhang et al., 2020), and (Zuo et al., 2020) employs Transformers. (Shang and Sun, 2019) integrates a graph convolution structure. However, all existing works focus on temporal point processes without spatial modeling. For datasets with spatial information, they discretize the space and treat locations as discrete “markers”. Okawa et al. (2019) extend Du et al. (2016) to spatiotemporal event prediction, but they only predict the density instead of the next location and time of the event. Zhu et al. (2019) parameterize the spatial kernel with a neural network embedding without considering the temporal sequence. Recently, Chen et al. (2021) proposed the neural spatiotemporal point process (NSTPP), which combines continuous-time neural networks with continuous-time normalizing flows to parameterize spatiotemporal point processes. However, this approach is computationally expensive, as it requires evaluating the ODE solver over multiple time steps." }, { "heading": "4. Experiments", "text": "We evaluate DeepSTPP for spatiotemporal prediction using both synthetic and real-world data.

Baselines. We compare DeepSTPP with the following state-of-the-art models:

• Spatiotemporal Hawkes Process (MLE) (Reinhart et al., 2018): it learns a spatiotemporal parametric intensity function using maximum likelihood estimation; see the derivation in Appendix A.3.

• Recurrent Marked Temporal Point Process (RMTPP) (Du et al., 2016): it uses a GRU to model the temporal intensity function. We modify this model to take spatial locations as marks.

• Neural Spatiotemporal Point Process (NSTPP) (Chen et al., 2021): a neural point process model that parameterizes the spatial PDF and temporal intensity with continuous-time normalizing flows. Specifically, we use the Jump CNF variant, as it is a better fit for Hawkes processes.

All models are implemented in PyTorch and trained using the Adam optimizer. We set the number of representative points to 100.
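Since the J representative points enter Eq. (7) exactly like observed events, the learned model is short to write down. The following is a minimal sketch (ours, not the released implementation) of the closed-form intensity and density of Eqns. (7)-(9); for brevity it treats the spatial kernel as unnormalized (the constant α of Eq. (8) only rescales the spatial part), and it assumes the decoder has already produced w, γ, and β for all n + J points:

import torch

def intensity_and_density(s, t, locs, times, w, gamma, beta, t_n):
    """Evaluate lambda*(s, t | z) of Eq. (7) and f(s, t | H_t, z) of Eq. (9).

    locs: (n+J, 2) event/representative locations; times: (n+J,) their times;
    w, gamma, beta: (n+J,) decoded parameters; t_n: time of the last event.
    """
    k_s = torch.exp(-gamma * torch.norm(s - locs, dim=-1))  # spatial kernel
    k_t = torch.exp(-beta * (t - times))                    # temporal kernel
    lam = (w * k_s * k_t).sum()                             # Eq. (7)
    # closed-form integral of the temporal intensity from t_n to t, as in Eq. (9)
    integral = (w / beta * (torch.exp(-beta * (t_n - times))
                            - torch.exp(-beta * (t - times)))).sum()
    return lam, lam * torch.exp(-integral)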
The details of the implementation are deferred to Appendix C.1. For the baselines, we use the authors' original repositories whenever possible.

Datasets. We simulated two types of STPPs: the spatiotemporal Hawkes process (STH) and the spatiotemporal self-correcting process (STSC). For both STPPs, we generate three synthetic datasets, each with a different parameter setting, denoted as DS1, DS2, and DS3 in the tables. We also derive and implement efficient algorithms for simulating STPPs based on Ogata's thinning algorithm (Ogata, 1981). We view the simulator construction as an independent contribution of this work. The details of the simulation can be found in Appendix B. We use two real-world spatiotemporal event datasets from NSTPP (Chen et al., 2021) to benchmark the performance.

• Earthquakes Japan: catalog earthquake data, including the location and time of all earthquakes in Japan from 1990 to 2020 with a magnitude of at least 2.5, from the U.S. Geological Survey. There are in total 1,050 sequences. The number of events per sequence ranges from 19 to 545.¹

• COVID-19: daily county-level COVID-19 case data in the state of New Jersey published by The New York Times. There are 1,650 sequences, and the number of events per sequence ranges from 7 to 305.

For both the synthetic and the real-world data, we partition long event sequences into non-overlapping subsequences according to a fixed time range T. The target is the last event, and the input is the rest of the events. The number of input events varies across subsequences. Each dataset is split into train/val/test sets with a ratio of 8:1:1. All results are the average of 3 runs.

¹ The statistics differ slightly from the original paper due to updates in the data source." }, { "heading": "4.1. Synthetic Experiment Results", "text": "For synthetic data, we know the ground-truth intensity function. We compare our method with the best possible estimator, the maximum likelihood estimator (MLE), as well as with the NSTPP model. The MLE is learned by optimizing the log-likelihood using the BFGS algorithm. RMTPP can only learn the temporal intensity and is thus not included in this comparison.

Predictive log-likelihood. Table 1 compares the predictive distributions over space and time. We report the log-likelihood (LL) of f(s, t|Ht) and the Hellinger distance (HD) between the predictive distributions and the ground truth, averaged over time. On both the STH and STSC datasets with different parameter settings, DeepSTPP outperforms the baseline NSTPP in terms of LL and HD. This shows that DeepSTPP can estimate the spatiotemporal intensity more accurately for point processes with unknown parameters.

Temporal intensity estimate. Table 2 shows the mean absolute percentage error (MAPE) between the models' estimated temporal intensity and the ground truth λ⋆(t) over a short sampled range. On the STH datasets, since the MLE has the correct parametric form, it is the theoretical optimum. Compared to the baselines, DeepSTPP generally obtains the same or lower MAPE. This shows that joint spatiotemporal modeling also improves the performance of temporal prediction.

Intensity visualization. Figure 3 visualizes the learned space-time intensity and the ground truth for STH and STSC, providing strong evidence that DeepSTPP can correctly learn the underlying dynamics of the spatiotemporal events. In particular, NSTPP has difficulty in modeling the complex dynamics of multimodal distributions such as the spatiotemporal Hawkes process.
NSTPP sometimes produces overly smooth intensity surfaces and loses most of the detail at the peaks. In contrast, our DeepSTPP can better fit the multimodal distribution through the form of kernel summation and obtains more accurate intensity functions.

Computational efficiency. Figure 4 provides the training run-time comparison between DeepSTPP and NSTPP for 100 epochs. To ensure a fair comparison, all experiments are conducted on a single GTX 1080 Ti with an Intel Core i7-4770 and 64 GB RAM. Our method is 100 times faster than NSTPP in training. This is mainly because our spatiotemporal kernel formulation has a closed form of integration, which bypasses complex and cumbersome numerical integration." }, { "heading": "4.2. Real-World Experiment Results", "text": "For the real-world data evaluation, we report the conditional spatial and temporal log-likelihoods, i.e., log f∗(s|t) and log f∗(t), of the final event given the input events. The total log-likelihood, log f∗(s, t), is the sum of the two values.

Predictive performance. As our model is probabilistic, we compare against the baseline models on the test predictive LL for space and time separately in Table 3. RMTPP can only produce a temporal intensity, so we only include its time likelihood. We observe that DeepSTPP outperforms NSTPP most of the time in terms of accuracy, while taking only half the time to train, as shown in Figure 4. Furthermore, we see that the STPP models (first three rows) achieve higher LL compared with modeling time alone (RMTPP). This suggests that joint spatiotemporal modeling additionally improves time prediction.

Ablation study. We conduct ablation studies on the model design. Our model assumes a global latent process z that governs the parameters {wi, βi, γi} with separate decoders. We examine alternative designs experimentally. (1) Shared decoder: we input the sampled z to one shared decoder and partition its output to generate the model parameters. (2) Separate processes: we assume that each of {wi, βi, γi} follows a separate latent process, using three sets of means and variances to sample them separately. (3) LSTM encoder: we replace the Transformer encoder with an LSTM module.

As shown in Table 4, we see that (1) the shared decoder decreases the number of parameters but reduces performance; (2) separate processes largely increase the number of parameters but have a negligible influence on test log-likelihood; and (3) changing the encoder from a Transformer to an LSTM also results in slightly worse performance. Therefore, we validate the design of DeepSTPP: all distribution parameters are governed by one single hidden stochastic process with separate decoders, and a Transformer is used as the encoder." }, { "heading": "5. Conclusion", "text": "We propose a family of deep dynamics models for irregularly sampled spatiotemporal events. Our model, Deep Spatiotemporal Point Process (DeepSTPP), integrates a principled spatiotemporal point process with deep neural networks. We derive a tractable inference procedure by modeling the space-time intensity function as a composition of kernel functions and a latent stochastic process. We infer the latent process with neural networks following the variational inference procedure.
Using synthetic data from the spatiotemporal Hawkes process and the self-correcting process, we show that our model can learn the spatiotemporal intensity accurately and efficiently. We demonstrate superior forecasting performance on many real-world benchmark spatiotemporal event datasets. Future work includes modeling the mutual-exciting structure in the intensity function, as well as modeling multiple heterogeneous spatiotemporal processes simultaneously." }, { "heading": "Appendix A. Model Details", "text": "" }, { "heading": "A.1. Spatiotemporal Point Process Derivation", "text": "Conditional Density. The intensity function and the probability density function of an STPP are related by

f(s, t|Ht) = λ∗(s, t) (1 − F∗(t|Ht)) = λ∗(s, t) exp( −∫_S ∫_{tn}^{t} λ∗(u, τ) dτ du ) = λ∗(s, t) exp( −∫_{tn}^{t} λ∗(τ) dτ )

The last equation uses the relation λ∗(s, t) = λ∗(t) f∗(s|t), according to Daley and Vere-Jones (2007), Chapter 2.3 (4). Here λ∗(t) is the temporal intensity and f∗(s|t) := f(s|t, Ht) is the spatial PDF that the next event will be at location s given time t. According to Daley and Vere-Jones (2007), Chapter 15.4, we can also view an STPP as a type of TPP with continuous (spatial) marks.

Likelihood. Given an STPP, the log-likelihood of observing a sequence Ht = {(s1, t1), (s2, t2), . . . , (sn, tn)}, tn ≤ t, is given by:

L(Htn) = log[ Π_{i=1}^{n} f(si, ti|Hti−1) · (1 − F∗(T|Htn)) ]
       = Σ_{i=1}^{n} [ log λ∗(si, ti) − ∫_{ti−1}^{ti} λ∗(τ) dτ ] + log(1 − F∗(T|Htn))
       = Σ_{i=1}^{n} log λ∗(si, ti) − ∫_S ∫_{0}^{tn} λ∗(s, τ) dτ ds − ∫_S ∫_{tn}^{T} λ∗(s, τ) dτ ds
       = Σ_{i=1}^{n} log λ∗(si, ti) − ∫_S ∫_{0}^{T} λ∗(s, τ) dτ ds
       = Σ_{i=1}^{n} log λ∗(ti) + Σ_{i=1}^{n} log f∗(si|ti) − ∫_{0}^{T} λ∗(τ) dτ

Inference. With a trained STPP and a sequence of history events, we can predict the next event time and location using their expectations, which evaluate to

E[tn+1|Htn] = ∫_{tn}^{∞} t ∫_S f(s, t|Htn) ds dt = ∫_{tn}^{∞} t exp( −∫_{tn}^{t} λ∗(τ) dτ ) ∫_S λ∗(s, t) ds dt
            = ∫_{tn}^{∞} t exp( −∫_{tn}^{t} λ∗(τ) dτ ) λ∗(t) dt   (12)

The predicted location of the next event is

E[sn+1|Htn] = ∫_{tn}^{∞} ∫_S s λ∗(s, t) exp( −∫_{tn}^{t} λ∗(τ) dτ ) ds dt
            = ∫_{tn}^{∞} exp( −∫_{tn}^{t} λ∗(τ) dτ ) ∫_S s λ∗(s, t) ds dt   (13)

Computational Complexity. It is worth noting that both learning and inference require the conditional intensity. If the conditional intensity has no analytic formula, then we need numerical integration over S, and evaluating the likelihood or either expectation requires at least a triple integral. Note that E[ti|Hti−1] and E[si|Hti−1] are actually sextuple integrals, but we can memorize all λ∗(s, t) from t = ti−1 to t ≫ ti−1 to avoid recomputing the intensities. However, memorization leads to high space complexity. As a result, we generally want to avoid an intractable conditional intensity in the model.

A.2. Deep Spatiotemporal Point Process (DeepSTPP) Derivation

PDF Derivation. The model design of DeepSTPP admits a closed-form formula for the PDF. First recall that

f∗(t) = λ∗(t) exp( −∫_{tn}^{t} λ∗(τ) dτ )

Also notice that f∗(s, t) = f∗(s|t) f∗(t), λ∗(s, t) = f∗(s|t) λ∗(t), and λ∗(t) = f∗(t)/(1 − F∗(t)). Therefore

f∗(s, t) = f∗(s|t) f∗(t) = f∗(s|t) λ∗(t) exp( −∫_{tn}^{t} λ∗(τ) dτ ) = λ∗(s, t) exp( −∫_{tn}^{t} λ∗(τ) dτ )

For DeepSTPP, the spatiotemporal intensity is

λ∗(s, t) = Σ_i wi exp(−βi(t − ti)) ks(s − si)

The temporal intensity is obtained by simply dropping ks, which integrates to one over S (so the spatial bandwidth does not matter):

λ∗(t) = Σ_i wi exp(−βi(t − ti))

Integrating λ∗(τ) yields

∫ λ∗(τ) dτ = −Σ_i (wi/βi) exp(−βi(τ − ti)) + C

Note that differentiating the exponential multiplies it by −βi, which cancels the 1/βi factor.
The definite integral is

∫_{tn}^{t} λ∗(τ) dτ = −Σ_i (wi/βi) [ exp(−βi(t − ti)) − exp(−βi(tn − ti)) ]

Replacing the integral in the original formula then yields

f∗(s, t) = λ∗(s, t) exp( −∫_{tn}^{t} λ∗(τ) dτ ) = λ∗(s, t) exp( Σ_i (wi/βi) [ exp(−βi(t − ti)) − exp(−βi(tn − ti)) ] )

With the temporal kernel function kt(t, ti) = exp(−βi(t − ti)), we reach the closed-form formula.

Inference. The expectation of the next event time is

E∗[ti] = ∫_{ti−1}^{∞} t f∗(t) dt = ∫_{ti−1}^{∞} t λ∗(t) exp( −∫_{ti−1}^{t} λ∗(τ) dτ ) dt

where the inner integral has a closed form, so only 1D numerical integration is required. Given the predicted time t̄i, the expectation of the location can be efficiently approximated by

E∗[si] ≈ E∗[si|t̄i] = Σ_{i′<i} α⁻¹ wi′ kt(t̄i, ti′) si′

where α = Σ_{i′<i} wi′ kt(t̄i, ti′) is a normalizing constant." }, { "heading": "A.3. Spatiotemporal Hawkes Process Derivation", "text": "Spatiotemporal Hawkes process (STHP). The spatiotemporal Hawkes (or self-exciting) process is one of the most well-known STPPs. It assumes every past event has an additive, positive, decaying, and spatially local influence over future events. Such a pattern resembles neuronal firing and earthquakes. The spatiotemporal Hawkes process is characterized by the following intensity function (Reinhart et al., 2018):

λ∗(s, t) := µ g0(s) + Σ_{i: ti<t} g1(t, ti) g2(s, si),   µ > 0   (14)

where g0(s) is the probability density of a distribution over S, g1 is the triggering kernel, often implemented as an exponential decay g1(∆t) := α exp(−β∆t), α, β > 0, and g2(s, si) is the density of a unimodal distribution over S centered at si.

Maximum Likelihood. For the spatiotemporal Hawkes process, we pre-specified the model kernels g0(s) and g2(s, sj) to be Gaussian:

g0(s) := (1/2π) |Σg0|^(−1/2) exp( −(1/2)(s − sµ) Σg0⁻¹ (s − sµ)ᵀ )   (15)
g2(s, sj) := (1/2π) |Σg2|^(−1/2) exp( −(1/2)(s − sj) Σg2⁻¹ (s − sj)ᵀ )   (16)

Specifically for the STHP, the second term in the STPP likelihood evaluates to

∫_{0}^{T} λ∗(τ) dτ = µT + α ∫_{0}^{T} ∫_{0}^{τ} e^{−β(τ−u)} dN(u) dτ
   (changing the order of integration: (0 ≤ u ≤ τ, 0 ≤ τ ≤ T) → (u ≤ τ ≤ T, 0 ≤ u ≤ T))
 = µT + α ∫_{0}^{T} ∫_{u}^{T} e^{−β(τ−u)} dτ dN(u)
 = µT − (α/β) ∫_{0}^{T} [ e^{−β(T−u)} − 1 ] dN(u)
 = µT − (α/β) Σ_{i=0}^{N} [ e^{−β(T−ti)} − 1 ]

Finally, the STHP log-likelihood is

L = Σ_{i=1}^{n} log λ∗(si, ti) − µT + (α/β) Σ_{i=0}^{N} [ e^{−β(T−ti)} − 1 ]

This model has 11 scalar parameters: 2 for sµ, 3 for Σg0, 3 for Σg2, and α, β, and µ. We directly estimate sµ as the mean of {si}, and then estimate the other 9 parameters by minimizing the negative log-likelihood using the BFGS algorithm. T in the likelihood function is treated as tn.

Inference. Based on the general formulas in Appendix A.1, and noting that for an STHP

∫_{ti−1}^{t} λ∗(τ) dτ = ∫_{0}^{t} λ∗(τ) dτ − ∫_{0}^{ti−1} λ∗(τ) dτ
 = µt − (α/β) Σ_{j=0}^{i−1} [ e^{−β(t−tj)} − 1 ] − ( µti−1 − (α/β) Σ_{j=0}^{i−1} [ e^{−β(ti−1−tj)} − 1 ] )
 = µ(t − ti−1) − (α/β) Σ_{j=0}^{i−1} [ e^{−β(t−ti−1+ti−1−tj)} − e^{−β(ti−1−tj)} ]
 = µ(t − ti−1) − (α/β) ( e^{−β(t−ti−1)} − 1 ) Σ_{j=0}^{i−1} e^{−β(ti−1−tj)}

and that

∫_S s µ g0(s) ds = µ sµ
∫_S s Σ_{i=0}^{n} g1(t, ti) g2(s, si) ds = Σ_{i=0}^{n} g1(t, ti) ∫_S s g2(s, si) ds = Σ_{i=0}^{n} g1(t, ti) si
∫_S s λ∗(s, t) ds = µ sµ + Σ_{i=0}^{n} g1(t, ti) si,

we have

E[ti|Hti−1] = ∫_{ti−1}^{∞} t ( µ + α Σ_{j=0}^{i−1} e^{−β(t−tj)} ) exp( (α/β)( e^{−β(t−ti−1)} − 1 ) Σ_{j=0}^{i−1} e^{−β(ti−1−tj)} − µ(t − ti−1) ) dt

and

E[si|Hti−1] = ∫_{ti−1}^{∞} ( µ sµ + α Σ_{j=0}^{i−1} e^{−β(t−tj)} sj ) exp( (α/β)( e^{−β(t−ti−1)} − 1 ) Σ_{j=0}^{i−1} e^{−β(ti−1−tj)} − µ(t − ti−1) ) dt

Both require only 1D numerical integration.

Spatiotemporal Self-Correcting process (STSCP). A lesser-known example is the self-correcting spatiotemporal point process (Isham and Westcott, 1979).
It assumes that the background intensity increases with a varying speed at different locations, and that the arrival of each event reduces the intensity nearby. The next event is likely to be in a high-intensity region with no recent events. The spatiotemporal self-correcting process is capable of modeling some regular event sequences, such as an alternating home-to-work travel sequence. It has the following intensity function:

λ∗(s, t) = µ exp( g0(s) β t − Σ_{i: ti<t} α g2(s, si) ),   α, β, µ > 0   (17)

Here g0(s) is the density of a distribution over S, and g2(s, si) is the density of a unimodal distribution over S centered at si." }, { "heading": "Appendix B. Simulation Details", "text": "In this appendix, we discuss a general algorithm for simulating any STPP and a specialized algorithm for simulating an STHP. Both are based on an algorithm for simulating any TPP." }, { "heading": "B.1. TPP Simulation", "text": "The most widely used technique to simulate a temporal point process is Ogata's modified thinning algorithm, shown in Algorithm 1 (Daley and Vere-Jones, 2007). It is a rejection technique: it samples points from a stationary Poisson process whose intensity is always higher than the ground-truth intensity, and then randomly discards some samples to get back to the ground-truth intensity.

The algorithm requires picking the forms of M∗(t) and L∗(t) such that

sup{ λ∗(t + ∆t) : ∆t ∈ [0, L∗(t)] } ≤ M∗(t).

In other words, M∗(t) is an upper bound on the actual intensity in [t, t + L∗(t)]. It is noteworthy that if M∗(t) is chosen to be too high, most sampled points will be rejected, leading to an inefficient simulation.

When simulating a process with decreasing inter-event intensity, such as the Hawkes process, M∗(t) and L∗(t) can simply be chosen as λ∗(t) and ∞. When simulating a process with increasing inter-event intensity, such as the self-correcting process, L∗(t) is often empirically chosen to be 2/λ∗(t), since the next event is very likely to arrive before twice the mean interval length at the beginning of the interval; M∗(t) is then λ∗(t + L∗(t)).

Algorithm 1: Ogata's Modified Thinning Algorithm for Simulating a TPP
1: Input: interval [0, T], model parameters
2: t ← 0, H ← ∅
3: while true do
4:   compute m ← M(t|H), l ← L(t|H)
5:   draw ∆t ∼ Exp(m) (exponential distribution with mean 1/m)
6:   if t + ∆t > T then
7:     return H
8:   end if
9:   if ∆t > l then
10:    t ← t + l
11:  else
12:    t ← t + ∆t
13:    compute λ ← λ∗(t)
14:    draw u ∼ Unif(0, 1)
15:    if λ/m > u then
16:      H ← H ∪ {t}
17:    end if
18:  end if
19: end while" }, { "heading": "B.2. STPP Simulation", "text": "It was mentioned in Section 2.1 that an STPP can be seen as attaching locations sampled from f∗(s|t) to the events generated by a TPP. Simulating an STPP therefore adds one step to Algorithm 1: sample a new location from f∗(s|t) after retaining a new event at t.

For a spatiotemporal self-correcting process, neither f∗(s, t) nor λ∗(t) has a closed form, so the process's spatial domain has to be discretized for simulation. λ∗(t) can be approximated by Σ_{s∈S} λ∗(s, t)/|S|, where S here denotes the set of discretized coordinates. L∗(t) and M∗(t) are chosen to be 2/λ∗(t) and λ∗(t + L∗(t)). Since f∗(s|t) is proportional to λ∗(s, t), sampling a location from f∗(s|t) is implemented as sampling from a multinomial distribution whose probability mass function is the normalized λ∗(s, t).
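For completeness, a direct Python transcription of Algorithm 1 might look as follows (a sketch with our own naming, where M, L, and lam stand for M∗(t), L∗(t), and λ∗(t) with the history made explicit):

import numpy as np

def ogata_thinning(T, M, L, lam, rng):
    """Simulate event times on [0, T] with Ogata's modified thinning.

    M(t, H): intensity upper bound over [t, t + L(t, H)];
    L(t, H): look-ahead window; lam(t, H): conditional intensity.
    """
    t, H = 0.0, []
    while True:
        m, l = M(t, H), L(t, H)
        dt = rng.exponential(1.0 / m)   # candidate waiting time from a rate-m Poisson process
        if t + dt > T:
            return H
        if dt > l:
            t += l                      # no valid candidate inside the window; slide forward
        else:
            t += dt
            if lam(t, H) / m > rng.uniform():  # accept with probability lambda*(t)/m
                H.append(t)

Attaching a location drawn from f∗(s|t) to every accepted t, as described above, turns this into the STPP simulator.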
STHP Simulation", "text": "To simulate a spatiotemporal Hawkes process with Gaussian kernel, we mainly followed an efficient procedure proposed by Zhuang (2004), that makes use of the clustering structure of the Hawkes process and thus does not require repeated calculations of λ∗(s, t).\nAlgorithm 2 Simulating spatiotemporal Hawkes process with Gaussian kernel\nα β µ Σg0 Σg2 ST-Hawkes DS1 .5 1 .2 [.2 0; 0 .2] [0.5 0; 0 0.5]\nDS2 .5 .6 .15 [5 0; 0 5] [.1 0; 0 .1] DS3 .3 2 1 [1 0; 0 1] [.1 0; 0 .1]\nST-Self Correcting DS1 .2 .2 1 [1 0; 0 1] [0.85 0; 0 0.85] DS2 .3 .2 1 [.4 0; 0 .4] [.3 0; 0 .3] DS3 .4 .2 1 [.25 0; 0 .25] [.2 0; 0 .2]" }, { "heading": "B.4. Parameter Settings", "text": "For the synthetic dataset, we pre-specified both the STSCP’s and the STHP’s kernels g0(s) and g2(s, sj) to be Gaussian:\ng0(s) := 1 2π |Σg0|− 1 2 exp ( −1 2 (s− [0, 0])Σ−1g0 (s− [0, 0]) T ) g2(s, sj) := 1\n2π |Σg2|− 1 2 exp ( −1 2 (s− sj)Σ−1g2 (s− sj) T ) The STSCP is defined on S = [0, 1]× [0, 1], while the STHP is defined on S = R2. The STSCP’s kernel functions are normalized according to their cumulative probability on S. Table 5 shows the simulation parameters. The STSCP’s spatial domain is discretized as an 101× 101 grid during the simulation." }, { "heading": "Appendix C. Experiment Details", "text": "In this section, we include experiment configurations and some additional experiment results." }, { "heading": "C.1. Model Setup Details", "text": "For a better understanding of DeepSTPP, we list out the detailed hyperparameter settings in Table 6. We use the same set of hyperparameters across all datasets." } ]
2,021
Neural Point Process for Learning Spatiotemporal Event Dynamics
SP:510133bddf8cd65c97348e4a8161009fc1d791e0
[ "The authors propose to search for activation functions with regularized evolution, an evolutionary algorithm proposed by Real et al. Various mutations are proposed that allow to investigate a larger search space than prior work. In particular, a mutation is added which adds trainable parameters to the activation function. The discovered activation functions are compared on three different architectures to several state-of-the-art activation functions." ]
Recent studies have shown that the choice of activation function can significantly affect the performance of deep learning networks. However, the benefits of novel activation functions have been inconsistent and task dependent, and therefore the rectified linear unit (ReLU) is still the most commonly used. This paper proposes a technique for customizing activation functions automatically, resulting in reliable improvements in performance. Evolutionary search is used to discover the general form of the function, and gradient descent to optimize its parameters for different parts of the network and over the learning process. Experiments with four different neural network architectures on the CIFAR-10 and CIFAR-100 image classification datasets show that this approach is effective. It discovers both general activation functions and specialized functions for different architectures, consistently improving accuracy over ReLU and other recently proposed activation functions by significant margins. The approach can therefore be used as an automated optimization step in applying deep learning to new tasks.
[]
[ { "authors": [ "M. Abadi", "P. Barham", "J. Chen", "Z. Chen", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "G. Irving", "M. Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th USENIX Symposium on Operating Systems Design and Implementation", "year": 2016 }, { "authors": [ "M. Basirat", "P.M. Roth" ], "title": "The quest for the golden activation function", "venue": null, "year": 2018 }, { "authors": [ "G. Bingham", "W. Macke", "R. Miikkulainen" ], "title": "Evolutionary optimization of deep learning activation functions", "venue": "In Genetic and Evolutionary Computation Conference (GECCO", "year": 2020 }, { "authors": [ "D.-A. Clevert", "T. Unterthiner", "S. Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "CoRR, abs/1511.07289,", "year": 2015 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "F.-F. Li" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "S. Elfwing", "E. Uchibe", "K. Doya" ], "title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "T. Elsken", "J.H. Metzen", "F. Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "F. Gomez", "R. Miikkulainen" ], "title": "Active guidance for a finless rocket using neuroevolution", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference,", "year": 2003 }, { "authors": [ "S. Gonzalez", "R. Miikkulainen" ], "title": "Improved training speed, accuracy, and data utilization through loss function optimization", "venue": null, "year": 1905 }, { "authors": [ "S. Gonzalez", "R. Miikkulainen" ], "title": "Evolving loss functions with multivariate taylor polynomial parameterizations", "venue": null, "year": 2002 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "D. Hendrycks", "K. Gimpel" ], "title": "Gaussian error linear units (gelus)", "venue": null, "year": 2016 }, { "authors": [ "G.E. Hinton", "N. Srivastava", "A. Krizhevsky", "I. Sutskever", "R.R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": null, "year": 2012 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "G. Klambauer", "T. Unterthiner", "A. Mayr", "S. Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "J.R. 
Koza" ], "title": "Genetic programming: on the programming of computers by means of natural selection, volume 1", "venue": "MIT press,", "year": 1992 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "J. Liang", "S. Gonzalez", "R. Miikkulainen" ], "title": "Population-based training for loss function optimization", "venue": null, "year": 2002 }, { "authors": [ "A.L. Maas", "A.Y. Hannun", "A.Y. Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In Proceedings of the 30th international conference on machine learning (ICML-13),", "year": 2013 }, { "authors": [ "D. Misra" ], "title": "Mish: A self regularized non-monotonic neural activation function", "venue": null, "year": 2019 }, { "authors": [ "V. Nair", "G.E. Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "C. Nwankpa", "W. Ijomah", "A. Gachagan", "S. Marshall" ], "title": "Activation functions: Comparison of trends in practice and research for deep learning", "venue": null, "year": 2018 }, { "authors": [ "P. Ramachandran", "B. Zoph", "Q.V. Le" ], "title": "Searching for activation functions", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "E. Real", "A. Aggarwal", "Y. Huang", "Q.V. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "J. Springenberg", "A. Dosovitskiy", "T. Brox", "M. Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In ICLR (workshop track),", "year": 2015 }, { "authors": [ "M. Tan", "Q. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "D. Thain", "T. Tannenbaum", "M. Livny" ], "title": "Distributed computing in practice: the condor experience", "venue": "Concurrency and computation: practice and experience,", "year": 2005 }, { "authors": [ "D. Whitley", "K. Mathias", "P. Fitzhorn" ], "title": "Delta-Coding: An iterative search strategy for genetic algorithms", "venue": "In Proceedings of the International Conference on Genetic Algorithms,", "year": 1991 }, { "authors": [ "He" ], "title": "discovered for a Preactivation ResNet of depth 56 (ResNet-v2-56). Figure 6 shows the performance of this function when paired with Preactivation ResNets of different depths. Unlike with the Wide ResNets, there is no clear increase or decrease in relative improvement over ReLU as depth increases. 
Impressively, ResNet-v2-164 with Softplus(ELU(x)) achieved test set accuracy 78.01, outperforming the accuracy of ResNet-v2-1001 with ReLU", "venue": null, "year": 2016 }, { "authors": [ "Real" ], "title": "2019), which is based on a strict sliding window of size P", "venue": null, "year": 2019 }, { "authors": [ "Clevert" ], "title": "GELU xΦ(x), with Φ(x) = P (X ≤ x), X ∼ N (0, 1), approximated as 0.5x(1 + tanh", "venue": null, "year": 2015 }, { "authors": [ "Hendrycks" ], "title": "HardSigmoid max{0,min{1, 0.2x+ 0.5}} Leaky ReLU x if x ≥ 0 else 0.01x", "venue": null, "year": 2016 }, { "authors": [ "Klambauer" ], "title": "2017) sigmoid (1 + e−x)−1 Softplus log(e + 1) Softsign x/(|x|+", "venue": "Swish", "year": 2017 }, { "authors": [ "He" ], "title": "PSwish x · σ(βx), where β is a per-channel learnable parameter Ramachandran et al", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rectified linear unit (ReLU(x) = max{x, 0}) is the most commonly used activation function in modern deep learning architectures (Nair & Hinton, 2010). When introduced, it offered substantial improvements over the previously popular tanh and sigmoid activation functions. Because ReLU is unbounded as x→∞, it is less susceptible to vanishing gradients than tanh and sigmoid are. It is also simple to calculate, which leads to faster training times.\nActivation function design continues to be an active area of research, and a number of novel activation functions have been introduced since ReLU, each with different properties (Nwankpa et al., 2018). In certain settings, these novel activation functions lead to substantial improvements in accuracy over ReLU, but the gains are often inconsistent across tasks. Because of this inconsistency, ReLU is still the most commonly used: it is reliable, even though it may be suboptimal.\nThe improvements and inconsistencies are due to a gradually evolving understanding of what makes an activation function effective. For example, Leaky ReLU (Maas et al., 2013) allows a small amount of gradient information to flow when the input is negative. It was introduced to prevent ReLU from creating dead neurons, i.e. those that are stuck at always outputting zero. On the other hand, the ELU activation function (Clevert et al., 2015) contains a negative saturation regime to control the forward propagated variance. These two very different activation functions have seemingly contradicting properties, yet each has proven more effective than ReLU in various tasks.\nThere are also often complex interactions between an activation function and other neural network design choices, adding to the difficulty of selecting an appropriate activation function for a given task. For example, Ramachandran et al. (2018) warned that the scale parameter in batch normalization (Ioffe & Szegedy, 2015) should be set when training with the Swish activation function; Hendrycks & Gimpel (2016) suggested using an optimizer with momentum when using GELU; Klambauer et al. (2017) introduced a modification of dropout (Hinton et al., 2012) called alpha dropout to be used with SELU. These results suggest that significant gains are possible by designing the activation function properly for a network and task, but that it is difficult to do so manually.\nThis paper presents an approach to automatic activation function design. The approach is inspired by genetic programming (Koza, 1992), which describes techniques for evolving computer programs to solve a particular task. In contrast with previous studies (Bingham et al., 2020; Ramachandran et al., 2018; Liu et al., 2020; Basirat & Roth, 2018), this paper focuses on automatically discovering activation functions that are parametric. Evolution discovers the general form of the function, while gradient descent optimizes the parameters of the function during training. The approach,\ncalled PANGAEA (Parametric ActivatioN functions Generated Automatically by an Evolutionary Algorithm), discovers general activation functions that improve performance overall over previously proposed functions. It also produces specialized functions for different architectures, such as Wide ResNet, ResNet, and Preactivation ResNet, that perform even better than the general functions, demonstrating its ability to customize activation functions to architectures." 
}, { "heading": "2 RELATED WORK", "text": "Prior work in automatic activation function discovery includes that of Ramachandran et al. (2018), who used reinforcement learning to design novel activation functions. They discovered multiple functions, but analyzed just one in depth: Swish(x) = x ·σ(x). Of the top eight functions discovered, only Swish and max{x, σ(x)} consistently outperformed ReLU across multiple tasks, suggesting that improvements are possible but often task specific.\nBingham et al. (2020) used evolution to discover novel activation functions. Whereas their functions had a fixed graph structure, PANGAEA utilizes a flexible search space that implements activation functions as arbitrary computation graphs. PANGAEA also includes more powerful mutation operations, and a function parameterization approach that makes it possible to further refine functions through gradient descent.\nLiu et al. (2020) evolved normalization-activation layers. They searched for a computation graph that replaced both batch normalization and ReLU in multiple neural networks. They argued that the inherent nonlinearity of the discovered layers precluded the need for any explicit activation function. However, experiments in this paper show that carefully designed parametric activation functions can in fact be a powerful augmentation to existing deep learning models.\nFinally, Basirat & Roth (2018) used a genetic algorithm to discover task-specific piecewise activation functions. They showed that different functions are optimal for different tasks. However, the discovered activation functions did not outperform ELiSH and HardELiSH, two hand-designed activation functions proposed in the same paper (Basirat & Roth, 2018). The larger search space in PANGAEA affords evolution extra flexibility in designing activation functions, while the trainable parameters give customizability to the network itself, leading to consistent, significant improvement." }, { "heading": "3 THE PANGAEA METHOD", "text": "" }, { "heading": "3.1 REPRESENTING AND MODIFYING ACTIVATION FUNCTIONS", "text": "Activation functions are represented as computation graphs in which each node is a unary or a binary operator (Table 1). The activation functions are implemented in TensorFlow (Abadi et al., 2016), and safe operator implementations are chosen when possible (e.g. the binary operator x1/x2 is implemented as tf.math.divide_no_nan, which returns 0 if x2 = 0). The operators in Table 1 were chosen to create a large and\nexpressive search space that contains activation functions unlikely to be discovered by hand. Operators that are periodic (e.g. sin(x)) and operators that contain repeated asymptotes were not included; in\npreliminary experiments they often caused training instability. All of the operators have domain R, making it possible to compose them arbitrarily.\nPANGAEA begins with an initial population of P random activation functions. Each function is either of the form f(x) = unary1(unary2(x)) or f(x) = binary(unary1(x),unary2(x)), as shown in Figure 1. Both forms are equally likely, and the unary and binary operators are also selected uniformly at random. Previous work has suggested that it is difficult to discover highperforming activation functions that have complicated computation graphs (Bingham et al., 2020). 
The computation graphs in Figure 1 thus represent the simplest non-trivial computation graphs with and without a binary operator.\nDuring the search, all ReLU activation functions in a given neural network are replaced with a candidate activation function. No other changes to the network or training setup are made. The network is trained on the dataset, and the activation function is assigned a fitness score equal to the network’s accuracy on the validation set.\nGiven a parent activation function, a child activation function is created by applying one of four possible mutations (Figure 2). Other possible evolutionary operators like crossover are not used in this paper. All mutations are equally likely with two special cases. If a remove mutation is selected for an activation function with just one node, a change mutation is applied instead. Additionally, if an activation function with greater than seven nodes is selected for mutation, the mutation is a remove mutation, in order to reduce bloat.\nInsert In an insert mutation, one operator in the search space is selected uniformly at random. This operator is placed on a random edge of a parent activation function graph. In Figure 2b, the unary operator Swish(x) is inserted at the edge connecting the output of tanh(x) to the input of x1 + x2. After mutating, the parent activation function (tanh(x) + |erf(x)|)2 produces the child activation function (Swish(tanh(x)) + |erf(x)|)2. If a binary operator is randomly chosen for the insertion, the incoming input value is assigned to the variable x1. If the operator is addition or subtraction, the input to x2 is set to 0. If the operator is multiplication, division, or exponentiation, the input to x2 is set to 1. Finally, if the operator is the maximum or minimum operator, the input to x2 is a copy of the input to x1. When a binary operator is inserted into a computation graph, the activation function computed remains unchanged. However, the structure of the computation graph is modified and can be further altered by future mutations.\nRemove In a remove mutation, one node is selected uniformly at random and deleted. The node’s input is rewired to its output. If the removed node is binary, one of the two inputs is chosen at random and is deleted. The other input is kept. In Figure 2c, the addition operator is removed from the parent activation function. The two inputs to addition, tanh(x) and |erf(x)|, cannot both be kept. By chance, tanh(x) is discarded, resulting in the child activation function |erf(x)|2.\nChange To perform a change mutation, one node in the computation graph is selected at random and replaced with another operator from the search space, also uniformly at random. Unary operators are always replaced with unary operators, and binary operators with binary operators. Figure 2d shows how changing addition to multiplication produces the activation function (tanh(x) · |erf(x)|)2.\nRegenerate In a regenerate mutation, every operator in the computation graph is replaced with another operator from the search space. As with change mutations, unary operators are replaced with unary operators, and binary operators with binary operators. Although every node in the graph is changed, the overall structure of the computation graph remains the same. Regenerate mutations are useful for increasing exploration, and are similar in principle to burst mutation and delta coding (Gomez & Miikkulainen, 2003; Whitley et al., 1991). 
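A compact sketch of two of these mutations (change and regenerate) on a tuple-encoded expression tree; insert and remove follow the same traversal pattern. The encoding is a simplified stand-in for the paper's computation graphs, not its actual data structure:

```python
import random

UNARY = ["tanh", "erf", "Swish", "SELU", "arcsinh"]
BINARY = ["add", "mul", "max"]

# A function is a tuple-encoded tree: ("x",) is the input,
# (op, child) is a unary node, and (op, left, right) is a binary node.

def all_nodes(tree):
    if tree[0] == "x":
        return []
    return [tree] + [n for child in tree[1:] for n in all_nodes(child)]

def change_mutation(tree, target=None):
    """Replace one randomly chosen operator with another of the same arity."""
    target = target or random.choice(all_nodes(tree))
    if tree[0] == "x":
        return tree
    op = tree[0]
    if tree is target:
        op = random.choice(UNARY if len(tree) == 2 else BINARY)
    return (op,) + tuple(change_mutation(c, target) for c in tree[1:])

def regenerate_mutation(tree):
    """Resample every operator while keeping the graph structure intact."""
    if tree[0] == "x":
        return tree
    op = random.choice(UNARY if len(tree) == 2 else BINARY)
    return (op,) + tuple(regenerate_mutation(c) for c in tree[1:])

# e.g. the parent (tanh(x) + |erf(x)|)^2 could be encoded roughly as
# ("square", ("add", ("tanh", ("x",)), ("abs", ("erf", ("x",)))))
```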
Figure 2e shows the child activation function −max{0, tanh(SELU(x))}, which is quite different from the parent function in Figure 2a.\nParameterization of Activation Functions After mutation (or random initialization), activation functions are parameterized (Figure 3). A value k ∈ {0, 1, 2, 3} is chosen uniformly at random, and k edges of the activation function graph are randomly selected. Multiplicative per-channel parameters are inserted at these edges and initialized to one. Whereas evolution is well suited for discovering the general form of the activation function in a discrete, structured search space, parameterization makes it possible to fine-tune the function using gradient descent. The function parameters are updated at every epoch during backpropagation, resulting in different activation functions in different stages of training. As the parameters are per-channel, the process creates different activation functions at different locations in the neural network. Thus, parameterization gives neural networks additional flexibility to customize activation functions." }, { "heading": "3.2 DISCOVERING ACTIVATION FUNCTIONS WITH EVOLUTION", "text": "Activation functions are discovered by regularized evolution (Real et al., 2019). Initially, P random activation functions are created, parameterized, and assigned fitness scores. To generate a new activation function, S functions are sampled with replacement from the current population. The function with the highest validation accuracy serves as the parent, and is mutated to create a child activation function. This function is parameterized and assigned a fitness score. The new activation function is then added to the population, and the oldest function in the population is removed, ensuring the population is always of size P . This process continues until C functions have been evaluated in total, and the top functions over the history of the search are returned as a result.\nAny activation function that achieves a fitness score less than a threshold V is discarded. These functions are not added to the population, but they do count towards the total number of C activation functions evaluated for each architecture. This quality control mechanism allows evolution to focus only on the most promising candidates.\nTo save computational resources during evolution, each activation function is evaluated by training a neural network for 100 epochs using a compressed learning rate schedule (Appendix B). After evolution is complete, the top 10 activation functions from the entire search are reranked. Each function receives an adjusted fitness score equal to the average validation accuracy from two independent 200-epoch training runs using the original learning rate schedule. The top three activation functions after reranking proceed to the final testing experiments.\nDuring evolution, it is possible that some activation functions achieve unusually high validation accuracy by chance. The 100-epoch compressed learning rate schedule may also have a minor effect on which activation functions are optimal compared to a full 200-epoch schedule. Reranking thus serves two purposes. Full training reduces bias from the compressed schedule, and averaging two such runs lessens the impact of activation functions that achieved high accuracy by chance." }, { "heading": "4 DATASETS AND ARCHITECTURES", "text": "The experiments in this paper focus primarily on the CIFAR-100 image classification dataset (Krizhevsky et al., 2009). 
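Before turning to the experimental setup, the search loop of Section 3.2 can be sketched as follows. The helper names, and the decision to keep drawing fresh random functions until P of them pass the threshold V, are assumptions about details the paper leaves open:

```python
import collections
import random

def regularized_evolution(random_fn, mutate, parameterize, evaluate,
                          P=64, S=16, C=1000, V=0.20):
    """Sketch of Sec. 3.2; evaluate() returns validation accuracy in [0, 1]."""
    population = collections.deque(maxlen=P)   # appending past P drops the oldest
    history, evaluated = [], 0
    while evaluated < C:
        if len(population) < P:                # fill the initial population
            candidate = parameterize(random_fn())
        else:
            sample = random.choices(list(population), k=S)  # S with replacement
            parent = max(sample, key=lambda it: it[0])[1]
            candidate = parameterize(mutate(parent))
        fitness = evaluate(candidate)
        evaluated += 1                          # discarded functions still count toward C
        if fitness >= V:                        # quality-control threshold
            population.append((fitness, candidate))
            history.append((fitness, candidate))
    return sorted(history, key=lambda it: it[0], reverse=True)[:10]  # to reranking
```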
This dataset is a more difficult version of the popular CIFAR-10 dataset, with 100 object categories instead of 10. Fifty images from each class were randomly selected from the training set to create a balanced validation set, resulting in a training/validation/test split of 45K/5K/10K images.\nTo demonstrate that PANGAEA can discover effective activation functions in various settings, it is evaluated with three different neural networks. The models were implemented in TensorFlow (Abadi et al., 2016), mirroring the original authors' training setup as closely as possible (Appendix B).\nWide Residual Network (WRN-10-4; Zagoruyko & Komodakis, 2016) has a depth of 10 and widening factor of four. Wide residual networks provide an interesting comparison because they are shallower and wider than many other popular architectures, while still achieving good results. WRN-10-4 was chosen because its CIFAR-100 accuracy is competitive, yet it trains relatively quickly.\nResidual Network (ResNet-v1-56; He et al., 2016a), with a depth of 56, provides an important contrast to WRN-10-4. It is significantly deeper and has a slightly different training setup, which may have an effect on the performance of different activation functions.\nPreactivation Residual Network (ResNet-v2-56; He et al., 2016b) has identical depth to ResNet-v1-56, but is a fundamentally different architecture. Activation functions are not part of the skip connections, as they are in ResNet-v1-56. Since information does not have to pass through an activation function, this structure makes it easier to train very deep architectures. PANGAEA should exploit this structure and discover different activation functions for ResNet-v2-56 and ResNet-v1-56.\n5 RESULTS\nOverview Separate evolution experiments were run to discover novel activation functions for each of the three architectures. Evolutionary parameters P = 64, S = 16, C = 1,000, and V = 20% were used since they were found to work well in preliminary experiments.\nFigure 4 visualizes progress in these experiments. For all three architectures, PANGAEA quickly discovered activation functions that outperform ReLU. It continued to make further progress, gradually discovering better activation functions, and did not plateau during the time allotted for the experiment. Each run took approximately 2,000 GPU hours on GeForce GTX 1080 GPUs (Appendix C).\nTable 2 shows the final test accuracy for the top specialized activation functions discovered by PANGAEA in each run. For comparison, the accuracy of the top general functions discovered in this process is also shown, as well as the accuracy of 28 baseline activation functions. In sum, PANGAEA discovered the best activation function for ResNet-v2-56, the top two activation functions for ResNet-v1-56, and the top three activation functions for WRN-10-4.\nSpecialized Activation Functions For all three architectures, there is at least one baseline activation function that outperforms ReLU by a statistically significant margin. This result already demonstrates the importance of activation function design, and suggests that the common practice of using ReLU by default is suboptimal.
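The paper does not state which statistical test underlies these significance claims; purely as an illustration, a Welch's t-test over per-run accuracies could be computed as below. The run values here are made up, not the paper's data:

```python
from scipy import stats

relu_runs = [71.44, 71.80, 70.98, 71.52, 71.21, 71.66, 72.01, 71.12, 71.35, 71.49]  # hypothetical
cand_runs = [73.23, 72.90, 73.55, 73.10, 73.41, 72.98, 73.27, 73.08, 73.36, 72.74]  # hypothetical

t, p = stats.ttest_ind(cand_runs, relu_runs, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4g}")
```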
The best baseline activation function is different for different architectures, reinforcing the importance of developing specialized activation functions.\nBecause PANGAEA uses validation accuracy from a single neural network to assign fitness scores to activation functions, there is selective pressure to discover functions that exploit the structure of the network. The functions thus become specialized to the architecture. They increase the performance of that architecture; however, they may not be as effective with other architectures. Specialized activation function accuracies are highlighted in gray in Table 2. To verify that the functions are customized to a specific architecture, the functions were cross-evaluated with other architectures.\nPANGAEA discovered two specialized activation functions for WRN-10-4 and one for ResNet-v1-56 that achieved statistically significant improvements in mean accuracy over all baseline activation functions. All three specialized activation functions evolved for ResNet-v2-56 significantly outperformed ReLU as well. These results strongly demonstrate the power of customizing activation functions to architectures.\nGeneral Activation Functions Although the best performance tends to come from specialization, it is also useful to discover activation functions that achieve high accuracy across multiple architectures. For instance, they could be used initially on a new architecture before spending compute on specialization. A powerful albeit computationally demanding approach would be to evolve general functions directly, by evaluating candidates on multiple architectures during evolution. However, it turns out that each specialized evolution run already generates a variety of functions, many of which are general.\nTo evaluate whether the PANGAEA runs discovered general functions as well, the top 10 functions from each run were combined into a pool of 30 candidate functions. Each candidate was assigned three fitness scores equal to the average validation accuracy from two independent training runs on each of the three architectures. Candidate functions that were Pareto-dominated, were functionally equivalent to one of the baseline activation functions, or had already been selected as a specialized activation function were discarded, leaving three Pareto-optimal general activation functions.\nThese functions indeed turned out to be effective as general activation functions: they all performed well on all architectures. One outperformed all baseline activation functions on WRN-10-4, while two functions on ResNet-v1-56 and three functions on ResNet-v2-56 outperformed 25 of the 28 baseline functions. However, specialized activation functions, i.e., those specifically evolved for each architecture, still tend to give the biggest improvements.\nShapes of Discovered Functions Many of the top discovered activation functions are compositions of multiple unary operators. These functions do not exist in the core unit search space of Ramachandran et al. (2018), which requires binary operators. They also do not exist in the S1 or S2 search spaces proposed by Bingham et al. (2020), which are too shallow. The design of the search space is therefore as important as the search algorithm itself. Previous search spaces that rely on repeated fixed building blocks only have limited representational power.
In contrast, PANGAEA utilizes a flexible search space that can represent activation functions in an arbitrary computation graph.\nFigure 5 shows examples of parametric activation functions discovered by PANGAEA. As training progresses, gradient descent makes small adjustments to the function parameters α, β, and γ, resulting in activation functions that change over time. This result suggests that it is advantageous to have one activation function in the early stages of training when the network learns rapidly, and a different activation function in the later stages of training when the network is focused on fine-tuning. The parameters α, β, and γ are also learned separately for the different channels, resulting in activation functions that vary with location in a neural network. Functions in deep layers (near the output) are more nonlinear than those in shallow layers (closer to the input), possibly reflecting a contrast between the need to form regularized embeddings and the need to form categorizations. In this manner, PANGAEA customizes the activation functions to both time and space for each architecture.\n6 ABLATIONS AND VARIATIONS\nTable 4: WRN-10-4 accuracy with different activation functions on CIFAR-100, shown as the median of ten runs, with mean ± sample standard deviation in parentheses. PANGAEA discovers better activation functions than random search and nonparametric evolution.\nPANGAEA:\nlog(σ(αx)) · arcsinh(x): 73.23 (73.16 ± 0.41)\nlog(σ(αx)) · β arcsinh(x): 73.22 (73.20 ± 0.37)\n−Swish(Swish(αx)): 72.38 (72.49 ± 0.55)\nRandom Search:\nαSwish(x): 72.80 (72.85 ± 0.25)\nSoftplus(x) · arctan(αx): 72.78 (72.81 ± 0.35)\nReLU(α arcsinh(β σ(x))) · SELU(γx): 72.63 (72.69 ± 0.21)\nNonparametric Evolution:\ncosh(1) · Swish(x): 72.81 (72.78 ± 0.24)\n(e^1 − 1) · Swish(x): 72.57 (72.52 ± 0.34)\nReLU(Swish(x)): 72.06 (72.04 ± 0.54)\nBaselines:\nReLU: 71.44 (71.46 ± 0.50)\nSwish: 72.27 (72.26 ± 0.28)\nEffect of Parameterization To understand the effect that parameterizing activation functions has on performance, the specialized functions (Table 2) were trained without their parameters. As Table 3 shows, when the parameters are removed, performance drops. The function log(σ(x)) is the only exception to this rule, but its high performance is not surprising, since it was previously discovered as a general activation function (Table 2). These results confirm that the learnable parameters contributed to the success of PANGAEA.\nSearch Strategy As additional baseline comparisons, two alternative search strategies were used to discover activation functions for WRN-10-4. First, a random search baseline was established by applying random mutations without regard to fitness values. This approach corresponds to setting the evolutionary parameters P = 1, S = 1, and V = 0%. Second, to understand the effects of function parameterization, a nonparametric evolution baseline was run. This setting is identical to PANGAEA, except functions are not parameterized (Figure 3). Otherwise, both baselines follow the same setup as PANGAEA, including evaluating C = 1,000 candidate functions and reranking the most promising ones (Section 3.2).\nTable 5: Specialized activation functions discovered for WRN-10-4, ResNet-v1-56, and ResNet-v2-56 are evaluated on larger versions of those architectures: WRN-16-8, ResNet-v1-110, and ResNet-v2-110, respectively. CIFAR-100 test accuracy is reported as the median of three runs, with mean ± sample standard deviation in parentheses.
Specialized activation functions successfully transfer to WRN-16-8 and ResNet-v2-110, outperforming ReLU.\nWRN-16-8:\nlog(σ(αx)) · arcsinh(x): 78.42 (78.34 ± 0.20)\nlog(σ(αx)) · β arcsinh(x): 78.38 (78.36 ± 0.17)\n−Swish(Swish(αx)): 77.90 (78.00 ± 0.35)\nReLU: 78.14 (78.15 ± 0.03)\nResNet-v1-110:\nαx − β log(σ(γx)): 70.88 (70.85 ± 0.50)\nαx − log(σ(βx)): 70.40 (70.34 ± 0.60)\nmax{Swish(x), 0}: 70.30 (70.36 ± 0.56)\nReLU: 71.15 (71.23 ± 0.25)\nResNet-v2-110:\nSoftplus(ELU(x)): 77.34 (77.14 ± 0.38)\nmin{log(σ(x)), α log(σ(βx))}: 76.99 (76.93 ± 0.19)\nSELU(Swish(x)): 77.04 (76.96 ± 0.14)\nReLU: 76.35 (76.34 ± 0.11)\nTable 4 shows the results of this experiment. Random search is able to discover good functions that outperform ReLU, but the functions are not as powerful as those discovered by PANGAEA. This result demonstrates the importance of fitness selection in evolutionary search. The functions discovered by nonparametric evolution similarly outperform ReLU but underperform PANGAEA. Interestingly, without parameterization, evolution is not as creative: two of the three functions discovered are merely Swish multiplied by a constant. Random search and nonparametric evolution both discovered good functions that improved accuracy, but PANGAEA achieves the best performance by combining the advantages of fitness selection and function parameterization.\nScaling Up PANGAEA discovered specialized activation functions for WRN-10-4, ResNet-v1-56, and ResNet-v2-56. Table 5 shows the performance of these activation functions when paired with the larger WRN-16-8, ResNet-v1-110, and ResNet-v2-110 architectures. Due to time constraints, ReLU is the only baseline activation function in these experiments.\nTwo of the three functions discovered for WRN-10-4 outperform ReLU with WRN-16-8, and all three functions discovered for ResNet-v2-56 outperform ReLU with ResNet-v2-110. Interestingly, ReLU achieves the highest accuracy for ResNet-v1-110, where activation functions are part of the skip connections, but not for ResNet-v2-110, where they are not. Thus, it is easier to achieve high performance with specialized activation functions on very deep architectures when they are not confounded by skip connections. Notably, ResNet-v2-110 with Softplus(ELU(x)) performs comparably to the much larger ResNet-v2-1001 with ReLU (77.34 vs. 77.29, as reported by He et al. (2016b)).\nEvolving novel activation functions can be computationally expensive. The results in Table 5 suggest that it is possible to reduce this cost by evolving activation functions for smaller architectures, and then using the discovered functions with larger architectures.\nAll-CNN-C Finally, to verify that PANGAEA is effective with different datasets and types of architectures, activation functions were evolved for the All-CNN-C (Springenberg et al., 2015) architecture on the CIFAR-10 dataset. All-CNN-C is quite distinct from the architectures considered above: it contains only convolutional layers, activation functions, and a global average pooling layer, but it does not have residual connections. As shown in Table 6, PANGAEA improves significantly over ReLU in this setting as well. The accuracy improvement from 88.47% to 92.80% corresponds to an impressive 37.55% reduction in the error rate (the 11.53% error rate drops to 7.20%, and 4.33/11.53 ≈ 0.3755). This experiment provides further evidence that PANGAEA can improve performance for different architectures and tasks."
}, { "heading": "7 FUTURE WORK", "text": "It is difficult to select an appropriate activation function for a given architecture because the activation function, network topology, and training setup interact in complex ways. It is especially promising that PANGAEA discovered activation functions that significantly outperformed the baselines, since the architectures and training setups were standard and developed with ReLU. A compelling research direction is to jointly optimize the architecture, training setup, and activation function.\nMore specifically, there has been significant recent research in automatically discovering the architecture of neural networks through gradient-based, reinforcement learning, or neuroevolutionary methods (Elsken et al., 2019; Wistuba et al., 2019; Real et al., 2019). In related work, evolution was used discover novel loss functions automatically (Gonzalez & Miikkulainen, 2019; 2020; Liang et al., 2020), outperforming the standard cross entropy loss. In the future, it may be possible to optimize many of these aspects of neural network design jointly. Just as new activation functions improve the accuracy of existing network architectures, it is likely that different architectures will be discovered when the activation function is not ReLU. One such example is EfficientNet (Tan & Le, 2019), which achieved state-of-the-art accuracy for ImageNet (Deng et al., 2009) using the Swish activation function (Ramachandran et al., 2018; Elfwing et al., 2018). Coevolution of activation functions, topologies, loss functions, and possibly other aspects of neural network design could allow taking advantage of interactions between them, leading to further improvements in the future." }, { "heading": "8 CONCLUSION", "text": "This paper introduced PANGAEA, a technique for automatically designing novel, high-performing, parametric activation functions. PANGAEA builds a synergy of two different optimization processes: evolutionary population-based search for the general form, and gradient descent-based fine-tuning of the parameters of the activation function. Compared to previous studies, the search space is extended to include deeper and more complex functional forms, including ones unlikely to be discovered by humans. The parameters are adapted during training and are different in different locations of the architecture, thus customizing the functions over both time and space. PANGAEA is able to discover general activation functions that perform well across architectures, and specialized functions taking advantage of a particular architecture, significantly outperforming previously proposed activation functions in both cases. It is thus a promising step towards automatic configuration of neural networks." }, { "heading": "A ADJUSTING ARCHITECTURE WIDTH AND DEPTH", "text": "To further investigate the effect of network size on the performance of novel activation functions, two specialized activation functions were paired with neural networks of different widths and depths. Due to time constraints, the results in this experiment are based on single training runs.\nWide Residual Networks The specialized activation function log(σ(αx)) · βarcsinh(x) was discovered for a Wide ResNet of depth 10 and width four (WRN-10-4). Figure 6 shows the performance of this function when paired with Wide ResNets of different depths and widths.\nFor all widths tested, log(σ(αx)) · βarcsinh(x) outperforms ReLU, albeit with diminishing returns as the width becomes large. 
This result implies that log(σ(αx)) · βarcsinh(x) gives the network more representational power than ReLU. As the width of the architecture is increased, the additional network parameters partially offset this advantage, explaining the decreasing relative improvement of log(σ(αx)) · βarcsinh(x) over ReLU. For a fixed architecture width of four, log(σ(αx)) · βarcsinh(x) outperforms ReLU only when the depth is 10 and 16. Surprisingly, as the depth is increased to 22 and beyond, the performance of log(σ(αx)) · βarcsinh(x) drops. This result suggests that log(σ(αx)) · βarcsinh(x) is specialized to shallow architectures.\nPreactivation Residual Networks The specialized activation function Softplus(ELU(x)) was discovered for a Preactivation ResNet of depth 56 (ResNet-v2-56). Figure 6 shows the performance of this function when paired with Preactivation ResNets of different depths. Unlike with the Wide ResNets, there is no clear increase or decrease in relative improvement over ReLU as depth increases. Impressively, ResNet-v2-164 with Softplus(ELU(x)) achieved test set accuracy 78.01, outperforming the accuracy of ResNet-v2-1001 with ReLU (77.29) as reported by He et al. (2016b)." }, { "heading": "B TRAINING DETAILS", "text": "Wide Residual Network (WRN-10-4) When measuring final performance after evolution, the standard WRN setup is used; all ReLU activations in WRN-10-4 are replaced with the evolved activation function, but no other changes to the architecture are made. The network is optimized using stochastic gradient descent with Nesterov momentum 0.9. The network is trained for 200 epochs; the initial learning rate is 0.1, and it is decreased by a factor of 0.2 after epochs 60, 120, and 160. Dropout probability is set to 0.3, and L2 regularization of 0.0005 is applied to the weights. Data augmentation includes featurewise center, featurewise standard deviation normalization, horizontal flip, and random 32× 32 crops of images padded with four pixels on all sides. This setup was chosen to mirror the original WRN setup (Zagoruyko & Komodakis, 2016) as closely as possible.\nDuring evolution of activation functions, the training is compressed to save time. The network is trained for only 100 epochs; the learning rate begins at 0.1 and is decreased by a factor of 0.2 after epochs 30, 60, and 80. Empirically, the accuracy achieved by this shorter schedule is sufficient to guide evolution; the computational cost saved by halving the time required to evaluate an activation function can then be used to search for additional activation functions.\nResidual Network (ResNet-v1-56) As with WRN-10-4, when measuring final performance with ResNet-v1-56, the only change to the architecture is replacing the ReLU activations with an evolved activation function. The network is optimized with stochastic gradient descent and momentum 0.9. Dropout is not used, and L2 regularization of 0.0001 is applied to the weights. In the original ResNet experiments (He et al., 2016a), an initial learning rate of 0.01 was used for 400 iterations before increasing it to 0.1, and further decreasing it by a factor of 0.1 after 32K and 48K iterations. An iteration represents a single forward and backward pass over one training batch, while an epoch consists of training over the entire training dataset. In this paper, the learning rate schedule is implemented by beginning with a learning rate of 0.01 for one epoch, increasing it to 0.1, and then decreasing it by a factor of 0.1 after epochs 91 and 137. 
(For example, (48K iterations / 45K training images) * batch size of 128≈ 137.) The network is trained for 200 epochs in total. Data augmentation includes a random horizontal flip and random 32× 32 crops of images padded with four pixels on all sides, as in the original setup (He et al., 2016a).\nWhen evolving activation functions for ResNet-v1-56, the learning rate schedule is again compressed. The network is trained for 100 epochs; the initial warmup learning rate of 0.01 still lasts one epoch, the learning rate increases to 0.1, and then decreases by a factor of 0.1 after epochs 46 and 68. When evolving activation functions, their relative performance is more important than the absolute accuracies they achieve. The shorter training schedule is therefore a cost-efficient way of discovering high-performing activation functions.\nPreactivation Residual Network (ResNet-v2-56) The full training setup, data augmentation, and compressed learning rate schedule used during evolution for ResNet-v2-56 are all identical to those for ResNet-v1-56 with one exception: with ResNet-v2-56, it is not necessary to warm up training with an initial learning rate of 0.01 (He et al., 2016b), so this step is skipped.\nAll-CNN-C When measuring final performance with All-CNN-C, the ReLU activation function is replaced with an evolved one, but the setup otherwise mirrors that of Springenberg et al. (2015) as closely as possible. The network is optimized with stochastic gradient descent and momentum 0.9. Dropout probability is 0.5, and L2 regularization of 0.001 is applied to the weights. The data augmentation involves featurewise centering and normalizing, random horizontal flips, and random 32× 32 crops of images padded with five pixels on all sides. The initial learning rate is set to 0.01, and it is decreased by a factor of 0.1 after epochs 200, 250, and 300. The network is trained for 350 epochs in total.\nDuring evolution of activation functions, the same training setup was used. It is not necessary to compress the learning rate schedule as was done with the residual networks because All-CNN-C trains more quickly.\nCIFAR-10 As with CIFAR-100, a balanced validation set was created for CIFAR-10 by randomly selecting 500 images from each class, resulting in a training/validation/test split of 45K/5K/10K images.\nC IMPLEMENTATION AND COMPUTE REQUIREMENTS\nHigh-performance computing in two clusters is utilized for the experiments. One cluster uses HTCondor (Thain et al., 2005) for scheduling jobs, while the other uses the Slurm workload manager. Training is executed on GeForce GTX 1080 GPUs on both clusters. When a job begins executing, a parent activation function is selected by sampling S = 16 functions from the P = 64 most recently evaluated activation functions. This is a minor difference from the original regularized evolution (Real et al., 2019), which is based on a strict sliding window of size P . This approach may give extra influence to some activation functions, depending on how quickly or slowly jobs are executed in each\nof the clusters. In practice the method is highly effective; it allows evolution to progress quickly by taking advantage of extra compute when demand on the clusters is low.\nIt is difficult to know ahead of time how computationally expensive the evolutionary search will be. Some activation functions immediately result in an undefined loss, causing training to end. In that case only a few seconds have been spent and another activation function can immediately be evaluated. 
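A sketch of the asynchronous parent selection just described, where history holds (accuracy, function) pairs in completion order; the helper name is an assumption, and this is the relaxed "most recently evaluated" window rather than the strict sliding window of Real et al. (2019):

```python
import random

def select_parent(history, P=64, S=16):
    """Sample S candidates from the P most recently evaluated functions and
    return the one with the highest validation accuracy."""
    window = history[-P:]                    # most recent evaluations, any order
    sample = random.choices(window, k=S)     # S samples, with replacement
    return max(sample, key=lambda it: it[0])[1]
```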
Other activation functions train successfully, but their complicated expressions result in longer-than-usual training times. In these experiments, evolution for WRN-10-4 took 2,314 GPU hours, evolution for ResNet-v1-56 took 1,594 GPU hours, and evolution for ResNet-v2-56 took 2,175 GPU hours. These numbers do not include costs for reranking and repeated runs in the final experiments. Although substantial, the computational cost is negligible compared to the cost of human labor in designing activation functions. Evolution of parametric activation functions requires minimal manual setup and delivers automatic improvements in accuracy." }, { "heading": "D BASELINE ACTIVATION FUNCTION DETAILS", "text": "Definitions of several of the baseline activation functions: GELU(x) = xΦ(x), where Φ(x) = P(X ≤ x) for X ∼ N(0, 1), approximated as 0.5x(1 + tanh[√(2/π)(x + 0.044715x³)]) (Hendrycks & Gimpel, 2016); HardSigmoid(x) = max{0, min{1, 0.2x + 0.5}}; Leaky ReLU(x) = x if x ≥ 0 else 0.01x (Maas et al., 2013); sigmoid(x) = (1 + e^{−x})^{−1}; Softplus(x) = log(e^x + 1); Softsign(x) = x/(|x| + 1); Swish(x) = x · σ(x) (Ramachandran et al., 2018; Elfwing et al., 2018); PSwish(x) = x · σ(βx), where β is a per-channel learnable parameter." } ]
2020
DISCOVERING PARAMETRIC ACTIVATION FUNCTIONS
SP:d23a1168bdf9f77e67f24b5062525cefd213a43e
[ "The paper “Efficient Competitive Self-Play Policy Optimization” introduces a new self-play scheme for solving zero-sum two-player games. It is suggested to train a population of N agents in parallel, where each agent is matched against the comparatively strongest opponent in the next round of training. As baselines, the paper considers self-play against the best, the latest and random snapshots from the training history of only a single agent." ]
Reinforcement learning from self-play has recently seen many successes. Self-play, where the agents compete with themselves, is often used to generate training data for iterative policy improvement. In previous work, heuristic rules are designed to choose an opponent for the current learner. Typical rules include choosing the latest agent, the best agent, or a random historical agent. However, these rules may be inefficient in practice and sometimes do not guarantee convergence even in the simplest matrix games. This paper proposes a new algorithmic framework for competitive self-play reinforcement learning in two-player zero-sum games. We recognize the fact that the Nash equilibrium coincides with the saddle point of the stochastic payoff function, which motivates us to borrow ideas from the classical saddle point optimization literature. Our method simultaneously trains several agents that intelligently take each other as opponents based on a simple adversarial rule derived from a principled perturbation-based saddle optimization method. We prove theoretically that our algorithm converges to an approximate equilibrium with high probability in convex-concave games under standard assumptions. Beyond the theory, we further show the empirical superiority of our method over baseline methods relying on the aforementioned opponent-selection heuristics in matrix games, grid-world soccer, Gomoku, and simulated robot sumo, with neural net policy function approximators.
[]
[ { "authors": [ "Maruan Al-Shedivat", "Trapit Bansal", "Yura Burda", "Ilya Sutskever", "Igor Mordatch", "Pieter Abbeel" ], "title": "Continuous adaptation via meta-learning in nonstationary and competitive environments", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kenneth Joseph Arrow", "Hirofumi Azawa", "Leonid Hurwicz", "Hirofumi Uzawa" ], "title": "Studies in linear and non-linear programming, volume 2", "venue": null, "year": 1958 }, { "authors": [ "Yu Bai", "Chi Jin" ], "title": "Provable self-play algorithms for competitive reinforcement learning", "venue": "arXiv preprint arXiv:2002.04017,", "year": 2020 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Trapit Bansal", "Jakub Pachocki", "Szymon Sidor", "Ilya Sutskever", "Igor Mordatch" ], "title": "Emergent complexity via multi-agent competition", "venue": "ICLR, 2017", "year": 2017 }, { "authors": [ "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemyslaw Debiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse" ], "title": "Dota 2 with large scale deep reinforcement learning", "venue": "arXiv preprint arXiv:1912.06680,", "year": 1912 }, { "authors": [ "Michael Bowling", "Manuela Veloso" ], "title": "Multiagent learning using a variable learning rate", "venue": "Artificial Intelligence,", "year": 2002 }, { "authors": [ "George W Brown" ], "title": "Iterative solution of games by fictitious play", "venue": "Activity analysis of production and allocation,", "year": 1951 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Superhuman ai for multiplayer poker", "venue": null, "year": 2019 }, { "authors": [ "Adrian Rivera Cardoso", "Jacob Abernethy", "He Wang", "Huan Xu" ], "title": "Competing against nash equilibria in adversarially changing zero-sum games", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Arpad E Elo" ], "title": "The rating of chessplayers, past and present", "venue": "Arco Pub.,", "year": 1978 }, { "authors": [ "Adam Gleave", "Michael Dennis", "Cody Wild", "Neel Kant", "Sergey Levine", "Stuart Russell" ], "title": "Adversarial policies: Attacking deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jean-Bastien Grill", "Florent Altché", "Yunhao Tang", "Thomas Hubert", "Michal Valko", "Ioannis Antonoglou", "Rémi Munos" ], "title": "Monte-carlo tree search as regularized policy optimization", "venue": null, "year": 2020 }, { "authors": [ "Jihun Hamm", "Yung-Kyun Noh" ], "title": "K-beam minimax: Efficient optimization for deep adversarial learning", "venue": null, "year": 2018 }, { "authors": [ "He He", "Jordan Boyd-Graber", "Kevin Kwok", "Hal Daumé III" ], "title": "Opponent modeling in deep reinforcement learning", "venue": "ICML, 2016", "year": 2016 }, { "authors": [ "Johannes Heinrich", "David Silver" ], "title": "Deep reinforcement learning from self-play in imperfectinformation", "venue": "games. 
arXiv:1603.01121,", "year": 2016 }, { "authors": [ "Junling Hu", "Michael P Wellman" ], "title": "Nash q-learning for general-sum stochastic games", "venue": "JMLR, 4 (Nov):1039–1069,", "year": 2003 }, { "authors": [ "M Jaderberg", "WM Czarnecki", "I Dunning", "L Marris", "G Lever", "AG Castaneda", "C Beattie", "NC Rabinowitz", "AS Morcos", "A Ruderman" ], "title": "Human-level performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Amir Jafari", "Amy Greenwald", "David Gondek", "Gunes Ercal" ], "title": "On no-regret learning, fictitious play, and nash equilibrium", "venue": null, "year": 2001 }, { "authors": [ "MJ Kallio", "Andrzej Ruszczynski" ], "title": "Perturbation methods for saddle point computation", "venue": null, "year": 1994 }, { "authors": [ "GM Korpelevich" ], "title": "The extragradient method for finding saddle points and other problems", "venue": "Matecon, 12:747–756,", "year": 1976 }, { "authors": [ "Marc Lanctot", "Vinicius Zambaldi", "Audrunas Gruslys", "Angeliki Lazaridou", "Karl Tuyls", "Julien Pérolat", "David Silver", "Thore Graepel" ], "title": "A unified game-theoretic approach to multiagent reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chung-Wei Lee", "Haipeng Luo", "Chen-Yu Wei", "Mengxiao Zhang" ], "title": "Linear last-iterate convergence for matrix games and stochastic games", "venue": "arXiv preprint arXiv:2006.09517,", "year": 2020 }, { "authors": [ "Michael L Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning", "venue": "In Machine Learning,", "year": 1994 }, { "authors": [ "Siqi Liu", "Guy Lever", "Josh Merel", "Saran Tunyasuvunakool", "Nicolas Heess", "Thore Graepel" ], "title": "Emergent coordination through competition", "venue": "ICLR,", "year": 2019 }, { "authors": [ "Panayotis Mertikopoulos", "Houssam Zenati", "Bruno Lecouat", "Chuan-Sheng Foo", "Vijay Chandrasekhar", "Georgios Piliouras" ], "title": "Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile", "venue": "ICLR, 2019", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "John Nash" ], "title": "Non-cooperative games", "venue": "Annals of mathematics,", "year": 1951 }, { "authors": [ "Angelia Nedić", "Asuman Ozdaglar" ], "title": "Subgradient methods for saddle-point problems", "venue": "Journal of optimization theory and applications,", "year": 2009 }, { "authors": [ "Ann Nowé", "Peter Vrancx", "Yann-Michaël De Hauwere" ], "title": "Game theory and multi-agent reinforcement learning", "venue": "Reinforcement Learning,", "year": 2012 }, { "authors": [ "Manish Prajapat", "Kamyar Azizzadenesheli", "Alexander Liniger", "Yisong Yue", "Anima Anandkumar" ], "title": "Competitive policy optimization", "venue": "arXiv preprint arXiv:2006.10611,", "year": 2020 }, { "authors": [ "Sasha Rakhlin", "Karthik Sridharan" ], "title": "Optimization, learning, and games with predictable sequences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "ICLR, 
2016", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Satinder Singh", "Michael Kearns", "Yishay Mansour" ], "title": "Nash convergence of gradient dynamics in general-sum games", "venue": null, "year": 2000 }, { "authors": [ "Richard Sutton", "Andrew Barto" ], "title": "reinforcement learning: an introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Kaiqing Zhang", "Sham M Kakade", "Tamer Başar", "Lin F Yang" ], "title": "Model-based multi-agent rl in zero-sum markov games with near-optimal sample complexity", "venue": "arXiv preprint arXiv:2007.07461,", "year": 2020 }, { "authors": [ "Martin Zinkevich", "Michael Johanson", "Michael Bowling", "Carmelo Piccione" ], "title": "Regret minimization in games with incomplete information", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "y = Y" ], "title": "Another example would be the proximal regions around x , y. In practice, Alg. 1 constructs the candidate sets from the population which needs to be adequately large and diverse to satisfy Assump", "venue": "Ruszczynski,", "year": 1994 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) from self-play has drawn tremendous attention over the past few years. Empirical successes have been observed in several challenging tasks, including Go (Silver et al., 2016; 2017; 2018), simulated hide-and-seek (Baker et al., 2020), simulated sumo wrestling (Bansal et al., 2017), Capture the Flag (Jaderberg et al., 2019), Dota 2 (Berner et al., 2019), StarCraft II (Vinyals et al., 2019), and poker (Brown & Sandholm, 2019), to name a few. During RL from selfplay, the learner collects training data by competing with an opponent selected from its past self or an agent population. Self-play presumably creates an auto-curriculum for the agents to learn at their own pace. At each iteration, the learner always faces an opponent that is comparably in strength to itself, allowing continuous improvement.\nThe way the opponents are selected often follows human-designed heuristic rules in prior work. For example, AlphaGo (Silver et al., 2016) always competes with the latest agent, while the later generation AlphaGo Zero (Silver et al., 2017) and AlphaZero (Silver et al., 2018) generate self-play data with the maintained best historical agent. In specific tasks, such as OpenAI’s sumo wrestling, competing against a randomly chosen historical agent leads to the emergence of more diverse behaviors (Bansal et al., 2017) and more stable training than against the latest agent (Al-Shedivat et al., 2018). In population-based training (Jaderberg et al., 2019; Liu et al., 2019) and AlphaStar (Vinyals et al., 2019), an elite or random agent is picked from the agent population as the opponent.\nUnfortunately, these rules may be inefficient and sometimes ineffective in practice since they do not necessarily enjoy last-iterate convergence to the “average-case optimal” solution even in tabular matrix games. In fact, in the simple Matching Pennies game, self-play with the latest agent fails to converge and falls into an oscillating behavior, as shown in Sec. 5.\nIn this paper, we develop an algorithm that adopts a principle-derived opponent-selection rule to alleviate some of the issues mentioned above. This requires clarifying first what the solution of\nself-play RL should be. From the game-theoretical perspective, Nash equilibrium is a fundamental solution concept that characterizes the desired “average-case optimal” strategies (policies). When each player assumes other players also play their equilibrium strategies, no one in the game can gain more by unilaterally deviating to another strategy. Nash, in his seminal work (Nash, 1951), has established the existence result of mixed-strategy Nash equilibrium of any finite game. Thus solving for a mixed-strategy Nash equilibrium is a reasonable goal of self-play RL.\nWe consider the particular case of two-player zero-sum games as the model for the competitive selfplay RL environments. In this case, the Nash equilibrium is the same as the (global) saddle point and as the solution of the minimax program minx∈X maxy∈Y f(x, y). We denote x, y as the strategy profiles (in RL terminology, policies) and f as the loss for x or utility/reward for y. A saddle point (x∗, y∗) ∈ X × Y , where X,Y are the sets of all possible mixed-strategies (stochastic policies) of the two players, satisfies the following key property\nf(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗), ∀x ∈ X,∀y ∈ Y. 
(1) Connections to the saddle problem and game theory inspire us to borrow ideas from the abundant literature for finding saddle points in the optimization field (Arrow et al., 1958; Korpelevich, 1976; Kallio & Ruszczynski, 1994; Nedić & Ozdaglar, 2009) and for finding equilibrium in the game theory field (Zinkevich et al., 2008; Brown, 1951; Singh et al., 2000). One particular class of method, i.e., the perturbation-based subgradient methods to find the saddles (Korpelevich, 1976; Kallio & Ruszczynski, 1994), is especially appealing. This class of method directly builds upon the inequality properties in Eq. 1, and has several advantages: (1) Unlike some algorithms that require knowledge of the game dynamics (Silver et al., 2016; 2017; Nowé et al., 2012), it requires only subgradients; thus, it is easy to be adapted to policy optimization with estimated policy gradients. (2) For convexconcave functions, it is guaranteed to converge in its last iterate instead of an average iterate, hence alleviates the need to compute any historical averages as in Brown (1951); Singh et al. (2000); Zinkevich et al. (2008), which can get complicated when neural nets are involved (Heinrich & Silver, 2016). (3) Most importantly, it prescribes a simple principled way to adversarially choose self-play opponents, which can be naturally instantiated with a concurrently-trained agent population.\nTo summarize, we apply ideas from the perturbation-based methods of classical saddle point optimization to the model-free self-play RL regime. This results in a novel population-based policy gradient method with a principled adversarial opponent-selection rule. Analogous to the standard model-free RL setting, we assume only “naive” players (Jafari et al., 2001) where the game dynamic is hidden and only rewards for their own actions are revealed. This enables broader applicability to problems with mismatched or unknown game dynamics than many existing algorithms (Silver et al., 2016; 2017; Nowé et al., 2012). In Sec. 4, we provide an approximate convergence theorem for convex-concave games as a sanity check. Sec. 5 shows extensive experiment results favoring our algorithm’s effectiveness in several games, including matrix games, grid-world soccer, a board game, and a challenging simulated robot sumo game. Our method demonstrates higher per-agent sample efficiency than baseline methods with alternative opponent-selection rules. Our trained agents also outperform the baseline agents on average in competitions." }, { "heading": "2 RELATED WORK", "text": "Reinforcement learning trains a single agent to maximize the expected return in an environment (Sutton & Barto, 2018). Multiagent reinforcement learning (MARL), of which two-agent is a special case, concerns multiple agents taking actions in the same environment (Littman, 1994). Self-play is a training paradigm to generate data for MARL and has led to great successes, achieving superhuman performance in several domains (Tesauro, 1995; Silver et al., 2016; Brown & Sandholm, 2019). Applying RL algorithms naively as independent learners in MARL sometimes produces strong agents (Tesauro, 1995) but not always. People have studied ways to extend RL algorithms specifically to MARL, e.g., minimax-Q (Littman, 1994), Nash-Q (Hu & Wellman, 2003), WoLF-PG (Bowling & Veloso, 2002), etc. 
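For instance, minimax-Q (Littman, 1994) replaces Q-learning's max backup with the value of a one-step zero-sum matrix game; in its tabular form the update is roughly

\[
Q(s, a, b) \leftarrow (1-\alpha)\, Q(s, a, b) + \alpha \Big[ r + \gamma \max_{\pi \in \Delta(A)} \min_{b'} \sum_{a'} \pi(a')\, Q(s', a', b') \Big],
\]

where \(\alpha\) is a learning rate and \(\Delta(A)\) is the simplex over the learner's actions.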
However, most of these methods are designed for tabular RL only and are therefore not readily applicable to continuous state-action spaces or complex policy functions where gradient-based policy optimization methods are preferred. Recently, Bai & Jin (2020), Lee et al. (2020) and Zhang et al. (2020) provide theoretical regret or convergence analyses under tabular or other restricted self-play settings, which complement our empirical effort.
There are algorithms developed from the game theory and online learning perspective (Lanctot et al., 2017; Nowé et al., 2012; Cardoso et al., 2019), notably tree search, Fictitious self-play (Brown, 1951), Regret minimization (Jafari et al., 2001; Zinkevich et al., 2008), and Mirror descent (Mertikopoulos et al., 2019; Rakhlin & Sridharan, 2013). Tree search such as minimax and alpha-beta pruning is particularly effective in small-state games. Monte Carlo Tree Search (MCTS) is also effective in Go (Silver et al., 2016). However, tree search requires learners to know (or at least learn) the game dynamics. The other methods typically require maintaining some historical quantities. In Fictitious play, the learner best-responds to a historical average opponent, and the average strategy converges. Similarly, the total historical regrets in all (information) states are maintained in (counterfactual) regret minimization (Zinkevich et al., 2008). Furthermore, most of those algorithms are designed only for discrete state-action games. Special care has to be taken with neural net function approximators (Heinrich & Silver, 2016). In contrast, our method does not require the complicated computation of averaging neural nets, and is readily applicable to continuous environments.
In two-player zero-sum games, the Nash equilibrium coincides with the saddle point. This enables the techniques developed for finding saddle points. While some saddle-point methods also rely on time averages (Nedić & Ozdaglar, 2009), a class of perturbation-based gradient methods is known to converge under mild convex-concave assumptions for deterministic functions (Kallio & Ruszczynski, 1994; Korpelevich, 1976; Hamm & Noh, 2018). We develop a sampling version of them for stochastic RL objectives, which leads to a more principled and effective way of choosing opponents in self-play. Our adversarial opponent-selection rule bears a resemblance to Gleave et al. (2019). However, our goal is to develop an effective self-play RL algorithm, while Gleave et al. (2019) aims at attacking deep self-play learned policies. A recent work by Prajapat et al. (2020) tackles the self-play policy optimization problem differently from ours by employing a bilinear approximation to the game. Finally, although the algorithm presented here builds upon policy gradients, the same framework may be extended to other RL algorithms such as MCTS thanks to a recent interpretation of MCTS as policy optimization (Grill et al., 2020). Our way of leveraging Eq. 1 in a population may potentially work beyond gradient-based RL, e.g., in training generative adversarial networks similarly to Hamm & Noh (2018), due to the same minimax formulation." }, { "heading": "3 METHOD", "text": "Classical game theory defines a two-player zero-sum game as a tuple (X, Y, f) where X, Y are the sets of possible strategies of Players 1 and 2 respectively, and f : X × Y → R is a mapping from a pair of strategies to a real-valued utility/reward for Player 2. The game is zero-sum (fully competitive), so Player 1’s reward is −f.
This is a special case of the Stochastic Game formulation for multiagent RL (Shapley, 1953), which itself is an extension of Markov Decision Processes (MDPs).
We consider mixed strategies induced by stochastic policies π_x and π_y. The policies can be parameterized functions, in which case X, Y are the sets of all possible policy parameters. Denote a_t as the action of Player 1 and b_t as the action of Player 2 at time t, and let T be the time limit of the game; then the stochastic payoff f writes as
f(x, y) = E_{a_t∼π_x, b_t∼π_y, s_{t+1}∼P(·|s_t,a_t,b_t)} [ Σ_{t=0}^{T} γ^t r(s_t, a_t, b_t) ]. (2)
The state sequence {s_t}_{t=0}^{T} follows a transition dynamic P(s_{t+1}|s_t, a_t, b_t). Actions are sampled according to the action distributions π_x(·|s_t) and π_y(·|s_t). And r(s_t, a_t, b_t) is the reward (payoff) for Player 2 at time t, determined jointly by the state and actions. We use the terms ‘agent’ and ‘player’ interchangeably. While we consider an agent pair (x, y) in this paper, in some cases (Silver et al., 2016), x = y can be enforced by sharing parameters if the game is impartial. The discounting factor γ weights between short- and long-term rewards and is optional.
Note that when one agent is fixed, taking y as an example, the problem x is facing reduces to an MDP if we define a new state transition dynamic P_new(s_{t+1}|s_t, a_t) = Σ_{b_t} P(s_{t+1}|s_t, a_t, b_t) π_y(b_t|s_t) and a new reward r_new(s_t, a_t) = Σ_{b_t} r(s_t, a_t, b_t) π_y(b_t|s_t). This leads to the naive gradient descent-ascent algorithm, which provably works in strictly convex-concave games (where f is strictly convex in x and strictly concave in y) under some assumptions (Arrow et al., 1958). However, in general, it does not enjoy last-iterate convergence to the Nash equilibrium. Even for simple games such as Matching Pennies and Rock Paper Scissors, as we shall see in our experiments, the naive algorithm generates cyclic sequences of x^k, y^k that orbit around the equilibrium. This motivates us to study the perturbation-based method, which converges under weaker assumptions.
Algorithm 1: Perturbation-based self-play policy optimization of an n-agent population.
Input: N: number of iterations; η_k: learning rates; m_k: sample sizes; n: population size; l: number of inner updates. Result: n pairs of policies.
1 Initialize (x^0_i, y^0_i), i = 1, 2, . . . , n;
2 for k = 0, 1, 2, . . . , N − 1 do
3   Evaluate f̂(x^k_i, y^k_j), ∀i, j ∈ 1 . . . n with Eq. 4 and sample size m_k;
4   for i = 1, . . . , n do
5     Construct candidate opponent sets C^k_{y_i} = {y^k_j : j = 1 . . . n} and C^k_{x_i} = {x^k_j : j = 1 . . . n};
6     Find perturbed v^k_i = argmax_{y∈C^k_{y_i}} f̂(x^k_i, y) and perturbed u^k_i = argmin_{x∈C^k_{x_i}} f̂(x, y^k_i);
7     Invoke a single-agent RL algorithm (e.g., A2C, PPO) on x^k_i for l times that:
8       Estimates policy gradients ĝ^k_{x_i} = ∇̂_x f(x^k_i, v^k_i) with sample size m_k (e.g., Eq. 5);
9       Updates the policy by x^{k+1}_i ← x^k_i − η_k ĝ^k_{x_i} (or RmsProp);
10    Invoke a single-agent RL algorithm (e.g., A2C, PPO) on y^k_i for l times that:
11      Estimates policy gradients ĝ^k_{y_i} = ∇̂_y f(u^k_i, y^k_i) with sample size m_k;
12      Updates the policy by y^{k+1}_i ← y^k_i + η_k ĝ^k_{y_i} (or RmsProp);
13 return {(x^N_i, y^N_i)}_{i=1}^{n};
Recall that the Nash equilibrium has to satisfy the saddle constraints in Eq. 1: f(x∗, y) ≤ f(x∗, y∗) ≤ f(x, y∗). The perturbation-based methods build upon this property (Nedić & Ozdaglar, 2009; Kallio & Ruszczynski, 1994; Korpelevich, 1976) and directly optimize for a solution that meets the constraints. They find perturbed points u of Player 1 and v of Player 2, and use the gradients at (x, v) and (u, y) to optimize x and y respectively.
Under some regularity assumptions, the gradient direction from a single perturbed point is adequate for proving convergence for (not strictly) convex-concave functions (Nedić & Ozdaglar, 2009). These methods can be easily extended to accommodate gradient-based policy optimization and the stochastic RL objective in Eq. 4.
We propose to find the perturbations from an agent population, resulting in the algorithm outlined in Alg. 1. The algorithm trains n pairs of agents simultaneously. At each round of training, we first run n² pairwise competitions as the evaluation step (Alg. 1 L3), costing n²m_k trajectories. To save sample complexity, we can use these rollouts to do one policy update as well. Then a simple adversarial rule (Eq. 3) is adopted in Alg. 1 L6 to choose the opponents adaptively. The intuition is that v^k_i and u^k_i are the most challenging opponents in the population for the current x_i and y_i.
v^k_i = argmax_{y∈C^k_{y_i}} f̂(x^k_i, y),  u^k_i = argmin_{x∈C^k_{x_i}} f̂(x, y^k_i). (3)
The perturbations v^k_i and u^k_i always satisfy f(x^k_i, v^k_i) ≥ f(u^k_i, y^k_i), since max_{y∈C^k_{y_i}} f̂(x^k_i, y) ≥ f̂(x^k_i, y^k_i) ≥ min_{x∈C^k_{x_i}} f̂(x, y^k_i). Then we run gradient descent on x^k_i with the perturbed v^k_i as opponent to minimize f(x^k_i, v^k_i), and run gradient ascent on y^k_i to maximize f(u^k_i, y^k_i). Intuitively, the duality gap between min_x max_y f(x, y) and max_y min_x f(x, y), approximated by f(x^k_i, v^k_i) − f(u^k_i, y^k_i), is reduced, leading (x^k_i, y^k_i) to converge to the saddle point (equilibrium).
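For illustration, a minimal Python sketch of one iteration of this loop could look as follows (our own reconstruction, not the authors’ code; evaluate, pg_update_min and pg_update_max are assumed helper interfaces standing in for Eq. 4 and the inner RL updates):

import numpy as np

def one_iteration(xs, ys, evaluate, pg_update_min, pg_update_max, l=10):
    # xs, ys: lists of the n policies of Players 1 and 2.
    n = len(xs)
    # Evaluation step (Alg. 1 L3): Monte Carlo estimates of f(x_i, y_j).
    F = np.array([[evaluate(xs[i], ys[j]) for j in range(n)] for i in range(n)])
    new_xs, new_ys = [], []
    for i in range(n):
        v = ys[int(np.argmax(F[i, :]))]  # strongest opponent for x_i (Eq. 3)
        u = xs[int(np.argmin(F[:, i]))]  # strongest opponent for y_i (Eq. 3)
        xi, yi = xs[i], ys[i]
        for _ in range(l):               # l inner policy-gradient updates
            xi = pg_update_min(xi, v)    # descend on f(x, v)
            yi = pg_update_max(u, yi)    # ascend on f(u, y)
        new_xs.append(xi)
        new_ys.append(yi)
    return new_xs, new_ys

The argmax over row i and argmin over column i of the pairwise payoff matrix F implement exactly the adversarial rule in Eq. 3.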
We build the candidate opponent sets in L5 of Alg. 1 simply as the concurrently-trained n-agent population. Specifically, C^k_{y_i} = {y^k_1, . . . , y^k_n} and C^k_{x_i} = {x^k_1, . . . , x^k_n}. This is due to the following considerations. An alternative source of candidates is fixed known agents such as a rule-based agent, which may not be available in practice. Another source is the extragradient methods (Korpelevich, 1976; Mertikopoulos et al., 2019), where extra gradient steps are taken on y before optimizing x. The extragradient method can be thought of as a local approximation to Eq. 3 with a neighborhood opponent set, and thus is related to our method. However, this method could be less efficient because the trajectory sample used in the extragradient steps is wasted, as it does not contribute to actually optimizing y. Yet another source is the past agents. This choice is motivated by Fictitious play and ensures that the current learner always defeats a past self. However, as we shall see in the experiments, self-play with a random past agent may learn slower than our method. We expect all agents in the population in our algorithm to be strong, and thus to provide stronger learning signals.
Finally, we use Monte Carlo estimation to compute the values and gradients of f. In the classical game theory setting where the game dynamics and payoffs are known, it is possible to compute the exact values and gradients of f. But in the model-free MARL setting, we have to collect roll-out trajectories to estimate both the function values through policy evaluation and the gradients through the Policy Gradient Theorem (Sutton & Barto, 2018). After collecting m independent trajectories {{(s^i_t, a^i_t, r^i_t)}_{t=0}^{T}}_{i=1}^{m}, we can estimate f(x, y) by
f̂(x, y) = (1/m) Σ_{i=1}^{m} Σ_{t=0}^{T} γ^t r^i_t. (4)
And given estimates Q̂_x(s, a; y) of the state-action value Q_x(s, a; y) (assuming an MDP with y as a fixed opponent of x), we construct an estimator for ∇_x f(x, y) (and similarly for ∇_y f given Q̂_y) by
∇̂_x f(x, y) ∝ (1/m) Σ_{i=1}^{m} Σ_{t=0}^{T} ∇_x log π_x(a^i_t|s^i_t) Q̂_x(s^i_t, a^i_t; y). (5)
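A minimal sketch of the two Monte Carlo estimators in Eq. 4-5 (illustrative only; trajectories are assumed to be lists of (state, action, reward) triples, score(s, a) is an assumed callable returning ∇ log π(a|s), and q_hat(s, a) an assumed Q̂ estimate):

import numpy as np

def value_estimate(trajectories, gamma=0.99):
    # Eq. 4: average discounted return over m independent rollouts.
    returns = [sum(gamma**t * r for t, (_, _, r) in enumerate(traj))
               for traj in trajectories]
    return float(np.mean(returns))

def policy_gradient_estimate(trajectories, score, q_hat):
    # Eq. 5 (up to its proportionality constant):
    # (1/m) sum_i sum_t grad-log-prob(a|s) * Q_hat(s, a).
    m = len(trajectories)
    total = 0.0
    for traj in trajectories:
        for s, a, _ in traj:
            total = total + score(s, a) * q_hat(s, a)
    return total / m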
" }, { "heading": "4 CONVERGENCE ANALYSIS", "text": "We establish an asymptotic convergence result in the Monte Carlo policy gradient setting in Thm. 2 for a variant of Alg. 1 under regularity assumptions. This variant sets l = 1 and uses vanilla SGD as the policy optimizer. We add a stop criterion f̂(x^k_i, v^k) − f̂(u^k, y^k_i) ≤ 3ε after Line 6, with an accuracy parameter ε. The full proof can be found in the appendix. Since the algorithm is symmetric between different pairs of agents in the population, we drop the subscript i for text clarity.
Assumption 1 (A1). X, Y ⊆ R^d are compact sets. As a consequence, there exists D s.t. ∀x1, x2 ∈ X, ‖x1 − x2‖1 ≤ D and ∀y1, y2 ∈ Y, ‖y1 − y2‖1 ≤ D. Assume C^k_y, C^k_x are compact subsets of X and Y. Further, assume f : X × Y → R is a bounded convex-concave function.
Theorem 1 (Convergence with exact gradients (Kallio & Ruszczynski, 1994)). Under A1, if a sequence (x^k, y^k) → (x̂, ŷ) ∧ f(x^k, v^k) − f(u^k, y^k) → 0 implies that (x̂, ŷ) is a saddle point, then Alg. 1 (replacing estimates with true values) produces a sequence {(x^k, y^k)}_{k=0}^{∞} convergent to a saddle.
The above case with exact sub-gradients is easy since both f and ∇f are deterministic. In the RL setting, we construct estimates for f(x, y) and ∇_x f, ∇_y f with samples. Intuitively, when the samples are large enough, we can bound the deviation between the true values and the estimates by concentration inequalities; then a proof outline similar to Kallio & Ruszczynski (1994) also goes through.
Thm. 2 requires an extra assumption on the boundedness of Q̂ and the gradients. By showing that the policy gradient estimates are approximate sub-/super-gradients of f, we are able to prove that the output (x^N_i, y^N_i) of Alg. 1 is an approximate Nash equilibrium with high probability.
Assumption 2 (A2). The Q value estimate Q̂ is unbiased and bounded by R, and the policy has bounded gradient ‖∇ log π_θ(a|s)‖∞ ≤ B.
Theorem 2 (Convergence with policy gradients). Under A1, A2, let the sample size at step k be m_k ≥ Ω((R²B²D²/ε²) log(d/(δ2^{−k}))) and the learning rate η_k = α(Ê_k − 2ε)/(‖ĝ^k_x‖² + ‖ĝ^k_y‖²) with 0 ≤ α ≤ 2; then with probability at least 1 − O(δ), the Monte Carlo version of Alg. 1 generates a sequence of points {(x^k, y^k)}_{k=0}^{∞} convergent to an O(ε)-approximate equilibrium (x̄, ȳ), that is, ∀x ∈ X, ∀y ∈ Y, f(x̄, y) − O(ε) ≤ f(x̄, ȳ) ≤ f(x, ȳ) + O(ε).
Discussion. The theorems require f to be convex in x and concave in y, but not strictly, which is a weaker assumption than Arrow et al. (1958). The purpose of this simple analysis is mainly a sanity check for correctness. It applies to the setting in Sec. 5.1 but not beyond, as the assumptions do not necessarily hold for neural networks. The sample size is chosen loosely as we are not aiming at a sharp finite sample complexity analysis. In practice, we can find suitable m_k (sample size) and η_k (learning rates) by experimentation, and adopt a modern RL algorithm with an advanced optimizer (e.g., PPO (Schulman et al., 2017) with RmsProp (Hinton et al.)) in place of the SGD updates." }, { "heading": "5 EXPERIMENTS", "text": "We empirically evaluate our algorithm in several games with distinct characteristics.
Compared methods. In matrix games, we compare to a naive mirror descent method, which is essentially Self-play with the latest agent, to verify convergence. In the rest of the environments, we compare the results of the following methods:
1. Self-play with the latest agent (Naive Mirror Descent). The learner always competes with the most recent agent. This is essentially the Gradient Descent Ascent method of Arrow-Hurwicz-Uzawa (Arrow et al., 1958), or naive mirror/alternating descent.
2. Self-play with the best past agent. The learner competes with the best historical agent maintained. The new agent replaces the maintained agent if it beats the existing one. This is the scheme in AlphaGo Zero and AlphaZero (Silver et al., 2017; 2018).
3. Self-play with a random past agent (Fictitious play). The learner competes against a randomly sampled historical opponent. This is the scheme in OpenAI sumo (Bansal et al., 2017; Al-Shedivat et al., 2018). It is similar to Fictitious play (Brown, 1951) since uniformly random sampling is equivalent to historical averaging. However, Fictitious play only guarantees convergence of the average-iterate but not the last-iterate agent.
4. OURS(n = 2, 4, 6, . . .). This is our algorithm with a population of n pairs of agents trained simultaneously, with each other as candidate opponents. The implementation can be distributed.
Evaluation protocols. We mainly measure the strength of agents by Elo scores (Elo, 1978). Pairwise competition results are gathered from a large tournament among all the checkpoint agents of all methods after training. Each competition has multiple matches to account for randomness. The Elo scores are computed by logistic regression, as Elo assumes the logistic relationship P(A wins) + 0.5 P(draw) = 1/(1 + 10^{(R_B−R_A)/400}). A 100 Elo difference corresponds to roughly a 64% win-rate. The initial agent’s Elo is calibrated to 0. Another way to measure strength is to compute the average rewards (win-rates) against other agents. We also report average rewards in the appendix.
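As an illustration of this Elo-fitting protocol (our own sketch; the paper does not specify its implementation), the logistic model above can be fit by gradient ascent on the likelihood of the pairwise results, treating draws as half-wins:

import numpy as np

def fit_elo(num_agents, matches, iters=2000, lr=10.0):
    # matches: list of (i, j, s) with s = score of agent i vs agent j
    # (1 win, 0.5 draw, 0 loss). Model:
    # P(i beats j) + 0.5 P(draw) = 1 / (1 + 10 ** ((R_j - R_i) / 400)).
    R = np.zeros(num_agents)
    for _ in range(iters):
        grad = np.zeros(num_agents)
        for i, j, s in matches:
            p = 1.0 / (1.0 + 10.0 ** ((R[j] - R[i]) / 400.0))
            grad[i] += s - p      # log-likelihood gradient (up to a constant)
            grad[j] -= s - p
        R += lr * grad
        R -= R[0]                 # calibrate the initial agent to Elo 0
    return R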
" }, { "heading": "5.1 MATRIX GAMES", "text": "We verified the last-iterate convergence to the Nash equilibrium in several classical two-player zero-sum matrix games. In comparison, the vanilla mirror descent/ascent is known to produce oscillating behaviors (Mertikopoulos et al., 2019). Payoff matrices (for both players, separated by commas), phase portraits, error curves, and our observations are shown in Tab. 1, 2, 3, 4 and Fig. 1, 2, 3, 4.
We studied two settings: (1) OURS(Exact Gradient), the full information setting, where the players know the payoff matrix and compute the exact gradients on action probabilities; (2) OURS(Policy Gradient), the reinforcement learning or bandit setting, where each player only receives the reward of its own action. The action probabilities were modeled by a probability vector p ∈ ∆2. We estimated the gradient w.r.t. p with the REINFORCE estimator (Williams, 1992) with sample size m_k = 1024, and applied SGD with a constant learning rate η_k = 0.03 and proximal projection onto ∆2. We trained n = 4 agents jointly for Alg. 1 and separately for the naive mirror descent under the same initialization." }, { "heading": "5.2 GRID-WORLD SOCCER GAME", "text": "We conducted experiments in a grid-world soccer game. Similar games were adopted in Littman (1994) and He et al. (2016). Two players compete in a 6 × 9 grid world, starting from random positions. The action space is {up, down, left, right, noop}. Once a player scores a goal, it gets positive reward 1.0, and the game ends. Up to T = 100 timesteps are allowed. The game ends with a draw if time runs out. The game has imperfect information, as the two players move simultaneously.
The policy and value functions were parameterized by simple one-layer networks, consisting of a one-hot encoding layer and a linear layer that outputs the action logits and values. The logits are transformed into probabilities via softmax. We used Advantage Actor-Critic (A2C) (Mnih et al., 2016) with Generalized Advantage Estimation (Schulman et al., 2016) and RmsProp (Hinton et al.) as the base RL algorithm. The hyper-parameters were N = 50, l = 10, m_k = 32 for Alg. 1. We kept track of the per-agent number of trajectories (episodes) each algorithm used for fair comparison. Other hyper-parameters are listed in the appendix. All methods were run multiple times to calculate the confidence intervals.
In Fig. 5, OURS(n = 2, 4, 6) all perform better than the others, achieving higher Elo scores after experiencing the same number of per-agent episodes. The other methods fail to beat the rule-based agent after 32000 episodes. Competing with a random past agent learns the slowest, suggesting that, though it may stabilize training and diversify behaviors (Bansal et al., 2017), the learning efficiency is not high because a large portion of samples is devoted to weak opponents. Within our method, the performance increases with a larger n, suggesting a larger population may help find better perturbations.
[Tables and figures for Sec. 5.1; each game is shown with its payoff matrix alongside phase portraits and error curves.]
Tab. 1: Matching Pennies, a classical game where two players simultaneously turn their pennies to heads or tails. If the pennies match, Player 2 (Row) wins one penny from Player 1 (Column); otherwise, Player 1 wins. (P_x(heads), P_y(heads)) = (1/2, 1/2) is the unique Nash equilibrium with game value 0.
        Heads   Tails
Heads   1,−1    −1,1
Tails   −1,1    1,−1
[Fig. 1: Matching Pennies. Three panel columns (Naive Mirror Descent; Ours (Exact Gradient); Ours (Policy Gradient)) with axes P1(heads) vs. P2(heads). (Top) The phase portraits. (Bottom) The squared L2 distance to the equilibrium over iterations. Four colors correspond to the 4 agents in the population with 4 initial points.]
Tab. 2: Skewed Matching Pennies.
        Heads   Tails
Heads   2,−2    0,0
Tails   −1,1    2,−2
Observation: In the leftmost column of Fig. 1, 2, the naive mirror descent does not converge pointwise; instead, it is trapped in a cyclic behavior. The trajectories of the probability of playing Heads orbit around the Nash, appearing as circles in the phase portrait. On the other hand, our method enjoys approximate last-iterate convergence with both exact and policy gradients.
[Fig. 2: Skewed Matching Pennies, same panel layout as Fig. 1. The unique Nash equilibrium is (P_x(heads), P_y(heads)) = (3/5, 2/5) with value 0.8.]
Tab. 3: Rock Paper Scissors.
          Rock    Paper   Scissors
Rock      0,0     −1,1    1,−1
Paper     1,−1    0,0     −1,1
Scissors  −1,1    1,−1    0,0
Observation: Similar observations occur in the Rock Paper Scissors game (Fig. 3). The naive method circles around the corresponding equilibrium points (3/5, 2/5) and (1/3, 1/3, 1/3), while our method converges with diminishing error.
[Fig. 3: Rock Paper Scissors, simplex plots over (P(Rock), P(Paper), P(Scissors)). (Top) Visualization of Player 1’s strategies (y0) of one of the agents in the population. (Bottom) The squared distance to the equilibrium.]
Tab. 4: Extended Matching Pennies.
      a       b       c
A     1,−1    −1,1    0.5,−0.5
B     −1,1    1,−1    −0.5,0.5
Observation: Our method has the benefit of producing diverse solutions when there exist multiple Nash equilibria. The solution for the row player is x = (1/2, 1/2), while any interpolation between (1/2, 1/2, 0) and (0, 1/3, 2/3) is an equilibrium column strategy. Depending on the initialization, agents in our method converge to different equilibria.
[Fig. 4: Visualization of the row player’s strategies on the simplex over (P(a), P(b), P(c)). (Left) Exact gradient; (Right) Policy gradient. The dashed line represents the possible equilibrium strategies. The four agents (in different colors) in the population trained by our algorithm (n = 4) converge differently.]
* In all three figures, bars show the 95% confidence intervals. We compare per-agent sample efficiency." }, { "heading": "5.3 GOMOKU BOARD GAME", "text": "We investigated the effectiveness of our method in the Gomoku game, which is also known as Renju or Five-in-a-Row. In our variant, two players place black or white stones on a 9-by-9 board in turn. The player who gets an unbroken row of five horizontally, vertically, or diagonally wins (reward 1). The game is a draw (reward 0) when no valid move remains. The game is sequential and has perfect information (a minimal sketch of the five-in-a-row win check appears at the end of this subsection).
This experiment involved much more complex neural networks than before. We adopted a 4-layer convolutional ReLU network (kernels (5, 5, 3, 1), channels (16, 32, 64, 1), all strides 1) for both the policy and value networks. Gomoku is hard to train from scratch with pure model-free RL without explicit tree search. Hence, we pre-trained the policy nets on expert data collected from renjuoffline.com. We downloaded roughly 130 thousand games and applied behavior cloning. The pre-trained networks were able to predict expert moves with ≈ 41% accuracy and achieve an average score of 0.93 (96% win and 4% loss) against a random-action player. We adopted A2C (Mnih et al., 2016) with GAE (Schulman et al., 2016) and RmsProp (Hinton et al.) with learning rate η_k = 0.001. Up to N = 40 iterations of Alg. 1 were run. The other hyper-parameters were the same as those in the soccer game.
In Fig. 6, all methods are able to improve upon the behavior cloning policies significantly. OURS(n = 2, 4, 6) demonstrate higher sample efficiency by achieving higher Elo ratings than the alternatives given the same amount of per-agent experience. This again suggests that the opponents are chosen more wisely, resulting in better policy improvements. Lastly, the more complex policy and value functions (multi-layer CNN) do not seem to undermine the advantage of our approach.
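To make the win condition above concrete, a minimal illustrative check for five-in-a-row on a 9×9 board (our own sketch, not from the paper) could look like this, with board[r][c] ∈ {0, 1, 2} for empty/black/white:

def five_in_a_row(board, player, n=9):
    # Scan every cell and every direction (right, down, down-right, down-left).
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(n):
        for c in range(n):
            for dr, dc in directions:
                count, rr, cc = 0, r, c
                while 0 <= rr < n and 0 <= cc < n and board[rr][cc] == player:
                    count += 1
                    if count == 5:
                        return True
                    rr += dr
                    cc += dc
    return False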
" }, { "heading": "5.4 ROBOSUMO ANTS", "text": "Our last experiment is based on the RoboSumo simulation environment in Al-Shedivat et al. (2018) and Bansal et al. (2017), where two Ants wrestle in an arena. This setting is particularly relevant to practical robotics research, as we believe success in this simulation could be transferred into the real world. The Ants move simultaneously, trying to force the opponent out of the arena or onto the floor. The physics simulator is MuJoCo (Todorov et al., 2012). The observation space and action space are continuous. This game is challenging since it involves a complex continuous control problem with sparse rewards. Following Al-Shedivat et al. (2018) and Bansal et al. (2017), we utilized PPO (Schulman et al., 2017) with GAE (Schulman et al., 2016) as the base RL algorithm, and used a 2-layer fully connected network with width 64 for function approximation. Hyper-parameters: N = 50, m_k = 500. In Al-Shedivat et al. (2018), a random past opponent is sampled in self-play, corresponding to the “Self-play w/ random past” baseline here. The agents are initialized by imitating the pre-trained agents of Al-Shedivat et al. (2018). We considered n = 4 and n = 8 for our method. From Fig. 7, we observe again that OURS(n = 4, 8) outperform the baseline methods by a statistical margin and that our method benefits from a larger population size." }, { "heading": "6 CONCLUSION", "text": "We propose a new algorithmic framework for competitive self-play policy optimization inspired by a perturbation subgradient method for saddle points. Our algorithm provably converges in convex-concave games and achieves better per-agent sample efficiency in several experiments. In the future, we hope to study a larger population size (should we have sufficient computing power) and the possibilities of model-based and off-policy self-play RL under our framework." }, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 ILLUSTRATIONS OF THE GAMES IN THE EXPERIMENTS
[Fig. 8: Illustration of the grid-world soccer game between players A and B.]
Observation space: tensor of shape [5], i.e., (x_A, y_A, x_B, y_B, A has ball). Action space: {up, down, left, right, noop}. Time limit: 50 moves. Terminal reward: +1 for the winning team, −1 for the losing team, 0 if timeout.
Fig. 9: Illustration of the Gomoku game (also known as Renju, five-in-a-row). We study the 9x9 board variant. Two players sequentially place black and white stones on the board. Black goes first. A player wins when he or she gets five stones in a row. In the case of this illustration, black wins because there are five consecutive black stones in the 5th row. Numbers in the stones indicate the order in which they are placed. [The figure shows a 9x9 board with columns A-I, rows 1-9, and 13 numbered stones.]
Observation space: tensor of shape [9, 9, 3], last dim 0: vacant, 1: black, 2: white. Action space: any valid location on the 9x9 board. Time limit: 41 moves per player. Terminal reward: +1 for the winning player, −1 for the losing player, 0 if timeout.
Fig. 10: Illustration of the RoboSumo Ants game. Two ants fight in the arena. The goal is to push the opponent out of the arena or down to the floor. Agent positions are initialized to be random at the start of the game. The game ends in a draw if the time limit is reached. In addition to the terminal ±1 reward, the environment comes with shaping rewards (motion bonus, closeness to opponent, etc.). In order to make the game zero-sum, we take the difference between the original rewards of the two ants.
Observation space: R^120. Action space: R^8. Time limit: 100 moves. Reward: r_t = r^{orig,y}_t − r^{orig,x}_t; terminal ±1 or 0.
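A minimal sketch of this zero-sum reward transform (illustrative only; the env interface and reward names are assumptions, not the authors’ code):

class ZeroSumRewardWrapper:
    # Wraps a two-player environment whose step() returns per-player shaped
    # rewards, and exposes the zero-sum reward r_t = r_y_orig - r_x_orig
    # from Player 2's (y's) perspective; Player 1 (x) receives -r_t.
    def __init__(self, env):
        self.env = env

    def step(self, action_x, action_y):
        obs, (r_x_orig, r_y_orig), done, info = self.env.step(action_x, action_y)
        r_t = r_y_orig - r_x_orig
        return obs, r_t, done, info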
A.2 HYPER-PARAMETERS
The hyper-parameters in different games are listed in Tab. 5.
Tab. 5: Hyper-parameters.
Hyper-param \ Game | Soccer | Gomoku | RoboSumo
Num. of iterations N | 50 | 40 | 50
Learning rate η_k | 0.1 | 0 → 0.001 in first 20 steps, then 0.001 | 3e-5 → 0 linearly
Value func learning rate | (Same as above.) | (Same as above.) | 9e-5
Sample size m_k | 32 | 32 | 500
Num. of inner updates l | 10 | 10 | 10
Env. time limit | 50 | 41 per-player | 100
Base RL algorithm | A2C | A2C | PPO, clip 0.2, mini-batch 512, epochs 3
Optimizer | RmsProp, α = 0.99 | RmsProp, α = 0.99 | RmsProp, α = 0.99
Max gradient norm | 1.0 | 1.0 | 0.1
GAE λ parameter | 0.95 | 0.95 | 0.98
Discounting factor γ | 0.97 | 0.97 | 0.995
Entropy bonus coef. | 0.01 | 0.01 | 0
Policy function | Sequential[ OneHot[5832], Linear[5832,5], Softmax, CategoricalDist ] | Sequential[ Conv[c16,k5,p2], ReLU, Conv[c32,k5,p2], ReLU, Conv[c64,k3,p1], ReLU, Conv[c1,k1], Spatial Softmax, CategoricalDist ] | Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,8], TanH, GaussianDist ] (TanH ensures the mean of the Gaussian is between -1 and 1; the density is corrected.)
Value function | Sequential[ OneHot[5832], Linear[5832,1] ] | Shares the 3 Conv layers with the policy, with an additional head: global average and Linear[64,1] | Sequential[ Linear[120,64], TanH, Linear[64,64], TanH, Linear[64,1] ]
A.3 ADDITIONAL RESULTS
Win-rates (or average rewards). Here we report additional results in terms of the average win-rates, or equivalently the average rewards through the linear transform win-rate = 0.5 + 0.5 reward, in Tab. 6 and 7. Since we treat each (x_i, y_i) pair as one agent, the values are the average of f(x_i, ·) and f(·, y_i) in the first table. The one-sided f(·, y_i) win-rates are in the second table. Mean and 95% confidence intervals are estimated from multiple runs. Exact numbers of runs are in the captions
Since an agent consists of an (x, y) pair, the win-rate is averaged on x and y, i.e., win(col vs row) = f(x\nrow,ycol)−f(xcol,yrow) 2 × 0.5 + 0.5.\nThe lower the better within each column; The higher the better within each row.\n(a) Soccer Soccer Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6) Self-play latest - 0.533± 0.044 0.382± 0.082 0.662± 0.054 0.691± 0.029 0.713± 0.032 Self-play best 0.467± 0.044 - 0.293± 0.059 0.582± 0.042 0.618± 0.031 0.661± 0.030 Self-play rand 0.618± 0.082 0.707± 0.059 - 0.808± 0.039 0.838± 0.028 0.844± 0.043 Ours (n=2) 0.338± 0.054 0.418± 0.042 0.192± 0.039 - 0.549± 0.022 0.535± 0.022 Ours (n=4) 0.309± 0.029 0.382± 0.031 0.162± 0.028 0.451± 0.022 - 0.495± 0.023 Ours (n=6) 0.287± 0.032 0.339± 0.030 0.156± 0.043 0.465± 0.022 0.505± 0.023 - Last-iter average 0.357± 0.028 0.428± 0.028 0.202± 0.023 0.532± 0.023 0.608± 0.018 0.585± 0.022 Overall average 0.632± 0.017 0.676± 0.014 0.506± 0.020 0.749± 0.009 0.775± 0.006 0.776± 0.008\n(b) Gomoku Gomoku Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6) Self-play latest - 0.523± 0.026 0.462± 0.032 0.551± 0.024 0.571± 0.018 0.576± 0.017 Self-play best 0.477± 0.026 - 0.433± 0.031 0.532± 0.024 0.551± 0.018 0.560± 0.020 Self-play rand 0.538± 0.032 0.567± 0.031 - 0.599± 0.027 0.588± 0.022 0.638± 0.020 Ours (n=2) 0.449± 0.024 0.468± 0.024 0.401± 0.027 - 0.528± 0.015 0.545± 0.017 Ours (n=4) 0.429± 0.018 0.449± 0.018 0.412± 0.022 0.472± 0.015 - 0.512± 0.013 Ours (n=6) 0.424± 0.017 0.440± 0.020 0.362± 0.020 0.455± 0.017 0.488± 0.013 - Last-iter average 0.455± 0.010 0.479± 0.011 0.407± 0.012 0.509± 0.010 0.537± 0.008 0.560± 0.008 Overall average 0.541± 0.004 0.561± 0.004 0.499± 0.005 0.583± 0.004 0.599± 0.003 0.615± 0.003\n(c) RoboSumo RoboSumo Self-play latest Self-play best Self-play rand Ours (n=4) Ours (n=8) Self-play latest - 0.502± 0.012 0.493± 0.013 0.511± 0.011 0.510± 0.010 Self-play best 0.498± 0.012 - 0.506± 0.014 0.514± 0.008 0.512± 0.010 Self-play rand 0.507± 0.013 0.494± 0.014 - 0.508± 0.011 0.515± 0.011 Ours (n=4) 0.489± 0.011 0.486± 0.008 0.492± 0.011 - 0.516± 0.008 Ours (n=8) 0.490± 0.010 0.488± 0.010 0.485± 0.011 0.484± 0.008 - Last-iter average 0.494± 0.006 0.491± 0.005 0.492± 0.006 0.500± 0.005 0.514± 0.005 Overall average 0.531± 0.004 0.527± 0.004 0.530± 0.004 0.539± 0.003 0.545± 0.003\nTraining time. Thanks to the easiness of parallelization, the proposed algorithm enjoys good scalability. We can either distribute the n agents into n processes to run concurrently, or make the rollouts parallel. Our implementation took the later approach. In the most time-consuming RoboSumo Ants experiment, with 30 Intel Xeon CPUs, the baseline methods took approximately 2.4h, while Ours (n=4) took 10.83h to train (×4.5 times), and Ours (n=8) took 20.75h (×8.6 times). Note that, Ours (n) trains n agents simultaneously. If we train n agents with the baseline methods by repeating the experiment n times, the time would be 2.4n hours, which is comparable to Ours (n).\nChance of selecting the agent itself as opponent. One big difference between our method and the compared baselines is the ability to select opponents adversarially from the population. Consider the agent pair (xi, yi). When training xi, our method finds the strongest opponent (that incurs the largest loss on xi) from the population, whereas the baselines always choose (possibly past versions of) yi. Since the candidate set contains yi, the “fall-back” case is to use yi as opponent in our method. 
We report the frequency that yi is chosen as opponent for xi (and xi for yi likewise). This gives a sense of how often our method falls back to the baseline method. From Tab. 8, we can observe that, as n grows larger, the chance of fall-back is decreased. This is understandable since a larger population means larger candidate sets and a larger chance to find good perturbations." }, { "heading": "B PROOFS", "text": "We adopt the following variant of Alg. 1 in our asymptotic convergence analysis. For clarity, we investigate the learning process of one agent in the population and drop the i index. Ckx and C k y are\nTab. 7: Average one-sided win-rates (∈ [0, 1]) between the last-iterate (final) agents trained by different algorithms. The win-rate is one-sided, i.e., win(ycol vs xrow) = f(xrow, ycol) × 0.5 + 0.5. The lower the better within each column; The higher the better within each row.\n(a) Soccer row x \\ col y Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6) Self-play latest 0.536± 0.054 0.564± 0.079 0.378± 0.103 0.674± 0.080 0.728± 0.039 0.733± 0.048 Self-play best 0.497± 0.065 0.450± 0.064 0.306± 0.106 0.583± 0.056 0.601± 0.039 0.642± 0.050 Self-play rand 0.614± 0.163 0.719± 0.090 0.481± 0.102 0.796± 0.071 0.816± 0.039 0.824± 0.062 Ours (n=2) 0.350± 0.051 0.419± 0.057 0.181± 0.049 0.451± 0.037 0.525± 0.031 0.553± 0.034 Ours (n=4) 0.346± 0.046 0.365± 0.047 0.140± 0.034 0.427± 0.034 0.491± 0.020 0.494± 0.033 Ours (n=6) 0.308± 0.042 0.319± 0.052 0.136± 0.050 0.483± 0.043 0.505± 0.030 0.515± 0.032 Last-iter average 0.381± 0.033 0.422± 0.036 0.188± 0.028 0.525± 0.029 0.601± 0.021 0.587± 0.026 Overall average 0.654± 0.017 0.665± 0.016 0.502± 0.021 0.745± 0.010 0.771± 0.006 0.775± 0.009\n(b) Gomoku row x \\ col y Self-play latest Self-play best Self-play rand Ours (n=2) Ours (n=4) Ours (n=6) Self-play latest 0.481± 0.031 0.540± 0.038 0.488± 0.050 0.594± 0.041 0.571± 0.026 0.586± 0.030 Self-play best 0.494± 0.033 0.531± 0.030 0.471± 0.049 0.597± 0.040 0.562± 0.024 0.572± 0.028 Self-play rand 0.565± 0.036 0.605± 0.036 0.572± 0.051 0.668± 0.040 0.617± 0.027 0.647± 0.029 Ours (n=2) 0.491± 0.031 0.533± 0.033 0.470± 0.040 0.568± 0.035 0.571± 0.022 0.552± 0.025 Ours (n=4) 0.428± 0.022 0.461± 0.024 0.440± 0.035 0.515± 0.029 0.491± 0.017 0.503± 0.020 Ours (n=6) 0.435± 0.021 0.453± 0.026 0.370± 0.028 0.462± 0.025 0.479± 0.018 0.467± 0.017 Last-iter average 0.472± 0.012 0.506± 0.014 0.438± 0.017 0.549± 0.016 0.550± 0.011 0.564± 0.012 Overall average 0.548± 0.005 0.585± 0.005 0.536± 0.007 0.631± 0.006 0.608± 0.004 0.617± 0.004\n(c) RoboSumo row x \\ col y Self-play latest Self-play best Self-play rand Ours (n=4) Ours (n=8) Self-play latest 0.516± 0.022 0.494± 0.020 0.491± 0.023 0.502± 0.017 0.511± 0.016 Self-play best 0.489± 0.018 0.504± 0.023 0.503± 0.022 0.506± 0.014 0.509± 0.014 Self-play rand 0.505± 0.021 0.491± 0.026 0.494± 0.026 0.518± 0.017 0.516± 0.014 Ours (n=4) 0.480± 0.018 0.479± 0.012 0.502± 0.016 0.496± 0.009 0.517± 0.012 Ours (n=8) 0.491± 0.012 0.484± 0.016 0.485± 0.016 0.486± 0.012 0.491± 0.012 Last-iter average 0.489± 0.008 0.485± 0.008 0.495± 0.009 0.500± 0.007 0.514± 0.007 Overall average 0.528± 0.004 0.521± 0.004 0.530± 0.005 0.534± 0.003 0.544± 0.003\nTab. 8: Average frequency of using the agent itself as opponent, in the Soccer and Gomoku experiments. The frequency is calculated by counting over all agents and iterations. 
The ± shows the standard deviations estimated by 3 runs with different random seeds.
Method | Ours (n = 2) | Ours (n = 4) | Ours (n = 6)
Frequency of self (Soccer) | 0.4983 ± 0.0085 | 0.2533 ± 0.0072 | 0.1650 ± 0.0082
Frequency of self (Gomoku) | 0.5063 ± 0.0153 | 0.2312 ± 0.0111 | 0.1549 ± 0.0103
not set simply as the population for the sake of the proof. Alternatively, we pose some assumptions. Setting them to the population as in the main text may approximately satisfy the assumptions.
Algorithm 2: Simplified perturbation-based self-play policy optimization of one agent.
Input: η_k: learning rates, m_k: sample sizes; Result: pair of policies (x, y);
1 Initialize x^0, y^0;
2 for k = 0, 1, 2, . . . , ∞ do
3   Construct candidate opponent sets C^k_y and C^k_x;
4   Find perturbed v^k = argmax_{y∈C^k_y} f̂(x^k, y) and perturbed u^k = argmin_{x∈C^k_x} f̂(x, y^k), where the evaluation is done with Eq. 4 and sample size m_k;
5   Compute the estimated duality gap Ê_k = f̂(x^k, v^k) − f̂(u^k, y^k);
6   if Ê_k ≤ 3ε then
7     return (x^k, y^k)
8   Estimate policy gradients ĝ^k_x = ∇̂_x f(x^k, v^k) and ĝ^k_y = ∇̂_y f(u^k, y^k) w/ Eq. 5 and sample size m_k;
9   Update policy parameters with x^{k+1} ← x^k − η_k ĝ^k_x and y^{k+1} ← y^k + η_k ĝ^k_y;
B.1 PROOF OF THEOREM 1
We restate the assumptions and the theorem here more clearly for reference.
Assumption B.1. X, Y ⊆ R^d (d > 1) are compact sets. As a consequence, there exists D ≥ 1, s.t., ∀x1, x2 ∈ X, ‖x1 − x2‖1 ≤ D and ∀y1, y2 ∈ Y, ‖y1 − y2‖1 ≤ D. Further, assume f : X × Y → R is a bounded convex-concave function.
Assumption B.2. C^k_y, C^k_x are compact subsets of X and Y. Assume that a sequence (x^k, y^k) → (x̂, ŷ) ∧ f(x^k, v^k) − f(u^k, y^k) → 0 for some v^k ∈ C^k_y and u^k ∈ C^k_x implies that (x̂, ŷ) is a saddle point.
Theorem 1 (Convergence with exact gradients (Kallio & Ruszczynski, 1994)). Under Assump. B.1, B.2, let the learning rate satisfy
η_k < E_k / (‖g^k_x‖² + ‖g^k_y‖²);
then Alg. 2 (when replacing all estimates with true values) produces a sequence of points {(x^k, y^k)}_{k=0}^{∞} convergent to a saddle point.
Assump. B.1 is standard, and is true if f is based on a payoff table and X, Y are probability simplices as in matrix games, or if f is quadratic and X, Y are unit-norm vectors. Assump. B.2 is about the regularity of the candidate opponent sets. This is true if C^k_y, C^k_x are compact and f(x^k, v^k) − f(u^k, y^k) = 0 only at a saddle point (u^k, v^k) ∈ C^k_y × C^k_x. A trivial example would be C^k_x = X, C^k_y = Y. Another example would be the proximal regions around x^k, y^k. In practice, Alg. 1 constructs the candidate sets from the population, which needs to be adequately large and diverse to satisfy Assump. B.2 approximately.
The proof is due to Kallio & Ruszczynski (1994), which we paraphrase here.
Proof. We shall prove that one iteration of Alg. 2 decreases the distance between the current (x^k, y^k) and the optimal (x∗, y∗). Expand the squared distance,
‖x^{k+1} − x∗‖² = ‖x^k − η_k g^k_x − x∗‖² = ‖x^k − x∗‖² − 2η_k ⟨g^k_x, x^k − x∗⟩ + η_k² ‖g^k_x‖². (6)
From Assump. B.1, convexity of f(x, y) in x gives
⟨g^k_x, x^k − x∗⟩ ≥ f(x^k, v^k) − f(x∗, v^k), (7)
which yields
‖x^{k+1} − x∗‖² ≤ ‖x^k − x∗‖² − 2η_k (f(x^k, v^k) − f(x∗, v^k)) + η_k² ‖g^k_x‖². (8)
Similarly for y^k, concavity of f(x, y) in y gives
‖y^{k+1} − y∗‖² ≤ ‖y^k − y∗‖² + 2η_k (f(u^k, y^k) − f(u^k, y∗)) + η_k² ‖g^k_y‖². (9)
Sum the two and notice that the saddle point condition implies
f(x∗, v^k) ≤ f(x∗, y∗) ≤ f(u^k, y∗); (10)
we have
W_{k+1} := ‖x^{k+1} − x∗‖² + ‖y^{k+1} − y∗‖² ≤ ‖x^k − x∗‖² + ‖y^k − y∗‖² − 2η_k (f(x^k, v^k) − f(x∗, v^k) − f(u^k, y^k) + f(u^k, y∗)) + η_k² (‖g^k_x‖² + ‖g^k_y‖²) ≤ W_k − 2η_k E_k + η_k² (‖g^k_x‖² + ‖g^k_y‖²). (11)
If the learning rate satisfies η_k < E_k / (‖g^k_x‖² + ‖g^k_y‖²), the sequence {W_k}_{k=0}^{∞} is strictly decreasing unless E_k = 0. Since W_k is bounded below by 0, E_k → 0. Following from Assump. B.2, the convergent point lim_{k→∞} (x^k, y^k) = (x∗, y∗) is a saddle point.
B.2 PROOF OF THEOREM 2
We restate the additional Assump. B.3 and the theorem here for reference. Assump. B.2 is replaced by the following approximate version B.4.
Assumption B.3. The total return is bounded by R, i.e., |Σ_t γ^t r_t| ≤ R. The Q value estimator Q̂ is unbiased and bounded by R (|Q̂| ≤ R). And the policy has bounded gradient max{‖∇ log π_θ(a|s)‖∞, 1} ≤ B in terms of the L∞ norm.
Assumption B.4. C^k_y, C^k_x are compact subsets of X and Y. Assume at iteration k, for some (x̂, ŷ) ∈ X × Y, that ∀(u, v) ∈ C^k_x × C^k_y, f(x̂, v) − ε ≤ f(x̂, ŷ) ≤ f(u, ŷ) + ε implies ∀(u, v) ∈ X × Y, f(x̂, v) − ε ≤ f(x̂, ŷ) ≤ f(u, ŷ) + ε, namely, (x̂, ŷ) is an ε-approximate saddle point.
Theorem 2 (Convergence with policy gradients). Under Assump. B.1, B.3, B.4, let the sample size at step k be
m_k ≥ (2R²B²D²/ε²) log(2d/(δ2^{−k}))
and, with 0 ≤ α ≤ 2, let the learning rate
η_k = α (Ê_k − 2ε) / (‖ĝ^k_x‖² + ‖ĝ^k_y‖²).
Then with probability at least 1 − O(δ), the Monte Carlo version of Alg. 2 generates a sequence of points {(x^k, y^k)}_{k=0}^{∞} convergent to an O(ε)-approximate equilibrium (x̄, ȳ). That is,
∀x ∈ X, ∀y ∈ Y, f(x̄, y) − O(ε) ≤ f(x̄, ȳ) ≤ f(x, ȳ) + O(ε).
In the stochastic game (or reinforcement learning) setting, we construct estimates for f(x, y) (Eq. 4) and the policy gradients ∇_x f, ∇_y f (Eq. 5) with samples. Intuitively speaking, when the samples are large enough, we can bound the deviation between the true values and the estimates by concentration inequalities; then a similar proof outline also goes through.
Let us first define the concept of ε-subgradient for convex functions and ε-supergradient for concave functions. Then we calculate how many samples are needed for accurate gradient estimation in Lemma 3 with high probability. With Lemma 3, we will be able to show that the Monte Carlo policy gradient estimates are good enough to be ε-subgradients when the sample size is large, in Lemma 4.
Definition 1. An ε-subgradient of a convex function h : R^d → R at x is g ∈ R^d that satisfies
∀x′, h(x′) − h(x) ≥ ⟨g, x′ − x⟩ − ε.
Similarly, an ε-supergradient of a concave function h : R^d → R at x is g ∈ R^d that satisfies
∀x′, h(x′) − h(x) ≤ ⟨g, x′ − x⟩ + ε.
Lemma 3 (Policy gradient sample size). Consider x or y alone and treat the problem as an MDP. Suppose Assump. B.3 is satisfied. Then with independently collected
m ≥ (2R²B²/ε²) log(2d/δ)
trajectories {{(s^i_t, a^i_t, Q̂^i_t)}_{t=0}^{T}}_{i=1}^{m}, the policy gradient estimate
∇̂f = (1/m) Σ_{i,t} ∇ log π_θ(a^i_t|s^i_t) Q̂^i_t
is ε-close to the true gradient ∇f with high probability, namely, Pr(‖∇̂f − ∇f‖∞ ≤ ε) ≥ 1 − δ.
Proof. It directly follows from Hoeffding’s inequality and the union bound, since the range of each sample point is bounded by RB and, by the policy gradient theorem, E∇̂f = ∇f.
Lemma 4 (Policy gradients are sub-/super-gradients). Under Assump. B.1, the policy gradient estimate ∇̂_x f in Lemma 3 is an εD-subgradient of f at x, i.e., for all x′ ∈ X,
f(x′, y) − f(x, y) ≥ ⟨∇̂_x f, x′ − x⟩ − εD
with probability ≥ 1 − δ. (And ∇̂_y f is an εD-supergradient for y.)
Proof.
Apply the telescoping trick; by convexity,
f(x′, y) − f(x, y) ≥ ⟨∇_x f, x′ − x⟩ = ⟨∇̂_x f, x′ − x⟩ + ⟨∇_x f − ∇̂_x f, x′ − x⟩. (12)
With the sample size in Lemma 3, it holds that max_i |∇̂_x f − ∇_x f|_i ≤ ε with probability ≥ 1 − δ. Hence, by Hölder’s inequality, the last term satisfies
⟨∇_x f − ∇̂_x f, x′ − x⟩ ≥ −⟨|∇̂_x f − ∇_x f|, |x′ − x|⟩ ≥ −‖∇̂_x f − ∇_x f‖∞ ‖x′ − x‖1 ≥ −εD, (13)
which together give f(x′, y) − f(x, y) ≥ ⟨∇̂_x f, x′ − x⟩ − εD.
The proof of ∇̂_y f being an εD-supergradient for y is similar, hence omitted.
Similarly, for accurate function value evaluation, we have the following lemma on the sample size, which directly follows from Hoeffding’s inequality.
Lemma 5 (Evaluation sample size). Suppose Assump. B.3 holds. Then with independently collected m ≥ (2R²/ε²) log(2/δ) trajectories {{(s^i_t, a^i_t, r^i_t)}_{t=0}^{T}}_{i=1}^{m}, the value estimate f̂ = (1/m) Σ_{i,t} γ^t r^i_t is ε-close to the true value f with high probability, namely, Pr(‖f̂ − f‖∞ ≤ ε) ≥ 1 − δ.
Now we prove our main theorem, which guarantees that the output of Alg. 2 is an approximate Nash with high probability. This is done by using Lemma 4 in place of the exact convexity condition to analyze the relationship between W_k and W_{k+1}, using Lemma 5 to bound the error of policy evaluation, and analyzing the stop condition carefully.
Proof. (Theorem 2.)
Suppose (x∗, y∗) is one saddle point of f. We shall prove that one iteration of Alg. 2 sufficiently decreases the squared distance between the current (x^k, y^k) and (x∗, y∗), defined as W_k := ‖x^k − x∗‖² + ‖y^k − y∗‖².
Relation between W_k and W_{k+1}: Note that
‖x^{k+1} − x∗‖² = ‖x^k − η_k ĝ^k_x − x∗‖² = ‖x^k − x∗‖² − 2η_k ⟨ĝ^k_x, x^k − x∗⟩ + η_k² ‖ĝ^k_x‖². (14)
By Lemma 4, the gradient estimate ĝ^k_x with sample size m_k is an ε-subgradient in x with probability at least 1 − δ/2^k, i.e.,
⟨ĝ^k_x, x^k − x∗⟩ ≥ f(x^k, v^k) − f(x∗, v^k) − ε. (15)
Plugging back into Eq. 14, we get
‖x^{k+1} − x∗‖² ≤ ‖x^k − x∗‖² − 2η_k (f(x^k, v^k) − f(x∗, v^k) − ε) + η_k² ‖ĝ^k_x‖². (16)
Similarly for y^k, since ĝ^k_y is a supergradient by Lemma 4,
‖y^{k+1} − y∗‖² ≤ ‖y^k − y∗‖² + 2η_k (f(u^k, y^k) − f(u^k, y∗) + ε) + η_k² ‖ĝ^k_y‖². (17)
Sum the two inequalities above, and notice that the saddle point condition implies
f(x∗, v^k) ≤ f(x∗, y∗) ≤ f(u^k, y∗);
we have that the following inequality holds with probability 1 − 2δ/2^k:
W_{k+1} = ‖x^{k+1} − x∗‖² + ‖y^{k+1} − y∗‖² ≤ ‖x^k − x∗‖² + ‖y^k − y∗‖² − 2η_k (f(x^k, v^k) − f(x∗, v^k) − f(u^k, y^k) + f(u^k, y∗) − 2ε) + η_k² (‖ĝ^k_x‖² + ‖ĝ^k_y‖²) ≤ W_k − 2η_k (E_k − 2ε) + η_k² (‖ĝ^k_x‖² + ‖ĝ^k_y‖²). (18)
Accurate estimation of E_k: In Eq. 18, the second term involves E_k, which is unknown to the algorithm. Recall that E_k(u^k, v^k) = f(x^k, v^k) − f(u^k, y^k), and the empirical estimate Ê_k = f̂(x^k, v^k) − f̂(u^k, y^k) in Alg. 2 Line 5.
By Lemma 5, when the sample size m_k is chosen as in Theorem 2, with probability 1 − 2δ/(d2^k),
|f̂(x^k, v^k) − f(x^k, v^k)| ≤ ε/(BD) ≤ ε and |f̂(u^k, y^k) − f(u^k, y^k)| ≤ ε/(BD) ≤ ε.
Thus Ê_k is 2ε-accurate, because
Ê_k − 2ε = f̂(x^k, v^k) − ε − (f̂(u^k, y^k) + ε) ≤ E_k ≤ f̂(x^k, v^k) + ε − (f̂(u^k, y^k) − ε) = Ê_k + 2ε. (19)
Case (1). Stop condition in Alg. 2 Line 6: If there does not exist (u, v) ∈ C^k_x × C^k_y such that Ê_k(u, v) > 3ε, meaning ∀(u, v) ∈ C^k_x × C^k_y, Ê_k ≤ 3ε, we can conclude that
E_k = f(x^k, v) − f(u, y^k) ≤ Ê_k + 2ε ≤ 3ε + 2ε = 5ε (20)
with probability at least 1 − 2δ/(d2^k) ≥ 1 − 2δ/2^k.
Setting u = x^k and v = y^k respectively in the above inequality, we obtain ∀(u, v) ∈ C^k_x × C^k_y,
f(x^k, v) − 5ε ≤ f(x^k, y^k) ≤ f(u, y^k) + 5ε. (21)
Following from Assump. B.4, this implies ∀(u, v) ∈ X × Y, f(x^k, v) − 5ε ≤ f(x^k, y^k) ≤ f(u, y^k) + 5ε, which suggests that (x^k, y^k) is an approximate saddle point (equilibrium).
On the other hand, we want to bound the failure probability.
Define the events
F(g) := “|ĝ − g| ≤ ε holds” for all g ∈ {g^0_x, g^0_y, f(x^0, v^0), f(u^0, y^0), . . . , g^k_y, f(x^k, y^k), . . .}. By De Morgan’s law and the union bound,
Pr[all MC estimates till step k are accurate] = Pr[⋂_{l=0}^{k} F(g^l_x) ∩ F(g^l_y) ∩ F(f(x^l, v^l)) ∩ F(f(u^l, y^l))] = 1 − Pr[⋃_{l=0}^{k} ¬F(g^l_x) ∪ . . . ∪ ¬F(f(u^l, y^l))] ≥ 1 − O(Σ_{l=0}^{∞} δ/2^l) ≥ 1 − O(δ). (22)
This means that inaccurate MC estimation (failure) occurs with small probability O(δ). The purpose of the increasing m_k w.r.t. k is to handle the union bound and the geometric series here. So, when the algorithm stops, it returns (x̄, ȳ) = (x^k, y^k) as a 5ε-approximate solution to the saddle point (equilibrium) with high probability.
Case (2). Sufficient decrease of W_k: Otherwise, if the stop condition is not triggered, we have picked u^k, v^k such that Ê_k > 3ε. With probability 1 − 2δ, E_k > Ê_k − 2ε ≥ ε. With the learning rate η_k chosen as in the theorem statement, W_k strictly decreases by at least
W_k − W_{k+1} > α(2 − α) ε² / (‖ĝ^k_x‖² + ‖ĝ^k_y‖²) ≥ α(2 − α) ε² / (2R²B²) > 0. (23)
Since W_k is bounded below by 0, by the monotone convergence theorem, there exists a finite k such that W_0 ≤ k α(2 − α) ε² / (2R²B²), at which point no (u, v) with Ê_k(u, v) > 3ε can be found. In this case, ∀(u, v) ∈ C^k_x × C^k_y, Ê_k(u, v) ≤ 3ε, which is exactly the stop condition in Case (1). This means the algorithm will eventually stop, and the proof is complete.
Remark 1. The sample size is chosen very loosely. More efficient ways to find perturbations (e.g., best-arm identification), to better characterize or cover the policy class, and to better utilize trajectories (especially off-policy evaluation w/ importance sampling) can potentially reduce the sample complexity. In practice, we found that on-policy methods which do not reuse past experience, such as A2C and PPO, work well enough.
Remark 2. Assump. B.4 is a rather strong assumption on the candidate opponent sets. In theory, we can construct an ε-covering of f to satisfy the assumption. In practice, as in the population-based training of Alg. 1, this assumption can be roughly met if n is large or diverse enough. We found that a relatively small population with randomly initialized agents already brought noticeable benefit.
Remark 3. The proof requires a variable learning rate η_k. However, the intuition is that the learning rate needs to be small, as we did in our experiments." } ]
2020
null
SP:be01b10daaf670341722afb0c2d8570156ba7b53
[ "The paper proposes an architecture (ensemble of networks) aiming at being robust against black-box attacks, based on the idea that crafting an adversarial example able to fool enough individual networks such that the majority vote changes is a more difficult task. The paper presents ways of training such ensembles and provides several sets of experiments showing the advantage of the approach. It also contains an observation on \"non-transferability\", counting how many co-networks are fooled when only one is targetted by the blackbox attack. It turns out that this amount is lower for the proposed scheme. " ]
While machine learning models today can achieve high accuracies on classification tasks, they can be deceived by minor imperceptible distortions to the data. These are known as adversarial attacks and can be lethal in the black-box setting, which does not require knowledge of the target model type or its parameters. Binary neural networks that have sign activation and are trained with gradient descent have been shown to be harder to attack than conventional sigmoid activation networks, but their improvements are marginal. We instead train sign activation networks with a novel gradient-free stochastic coordinate descent algorithm and propose an ensemble of such networks as a defense model. We evaluate the robustness of our model (a hard problem in itself) on image, text, and medical ECG data and find it to be more robust than ensembles of binary, full precision, and convolutional neural networks, and more robust than random forests, while attaining comparable clean test accuracy. In order to explain our model's robustness, we show that an adversary targeting a single network in our ensemble fails to attack (and is thus non-transferable to) other networks in the ensemble. Thus a datapoint requires a large distortion to fool the majority of networks in our ensemble and is likely to be detected in advance. This property of non-transferability arises naturally from the non-convexity of sign activation networks and randomization in our gradient-free training algorithm without any adversarial defense effort.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "R Bousseljot", "D Kreiseler", "A Schnabel" ], "title": "Nutzung der ekg-signaldatenbank cardiodat der ptb über das internet", "venue": "Biomedizinische Technik/Biomedical Engineering,", "year": 1995 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Huangxun Chen", "Chenyu Huang", "Qianyi Huang", "Qian Zhang", "Wei Wang" ], "title": "Ecgadv: Generating adversarial electrocardiogram to misguide arrhythmia classification system", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "J Chen", "MI Jordan", "MJ Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2020 }, { "authors": [ "Jinghui Chen", "Quanquan Gu" ], "title": "Rays: A ray searching method for hard-label adversarial attack", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Sajad Darabi", "Mouloud Belbahri", "Matthieu Courbariaux", "Vahid Partovi Nia" ], "title": "Bnn+: Improved binary network training", "venue": "arXiv preprint arXiv:1812.11800,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Angus Galloway", "Graham W Taylor", "Medhat Moussa" ], "title": "Attacking binarized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lukas Geiger", "Plumerai Team" ], "title": "Larq: An open-source library for training binarized neural networks", "venue": "Journal of Open Source Software,", "year": 2020 }, { "authors": [ "Amin Ghiasi", "Ali Shafahi", "Tom Goldstein" ], "title": "Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates", "venue": "arXiv preprint arXiv:2003.08937,", "year": 2020 }, { "authors": [ "Justin Gilmer", "Nicolas Ford", "Nicholas Carlini", "Ekin Cubuk" ], "title": "Adversarial examples are a natural consequence of test error in noise", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ary L Goldberger", "Luis AN Amaral", "Leon Glass", "Jeffrey M Hausdorff", "Plamen Ch Ivanov", "Roger G Mark", "Joseph E Mietus", "George B Moody", "Chung-Kang Peng", "H Eugene Stanley" ], "title": "Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic", "venue": "signals. 
circulation,", "year": 2000 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Xintian Han", "Yuxuan Hu", "Luca Foschini", "Larry Chinitz", "Lior Jankelson", "Rajesh Ranganath" ], "title": "Deep learning models for electrocardiograms are susceptible to adversarial attack", "venue": "Nature Medicine,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Di Jin", "Zhijing Jin", "Joey Tianyi Zhou", "Peter Szolovits" ], "title": "Is bert really robust? a strong baseline for natural language attack on text classification and entailment", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Alex Kantchelian", "J Doug Tygar", "Anthony Joseph" ], "title": "Evasion and hardening of tree ensemble classifiers", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Sanjay Kariyappa", "Moinuddin K Qureshi" ], "title": "Improving adversarial robustness of ensembles with diversity training", "venue": "arXiv preprint arXiv:1901.09981,", "year": 2019 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L Dill", "Kyle Julian", "Mykel J Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "arXiv preprint arXiv:1408.5882,", "year": 2014 }, { "authors": [ "A Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Joseph Lilleberg", "Yun Zhu", "Yanqing Zhang" ], "title": "Support vector machines and word2vec for text classification with semantic features", "venue": "In 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI* CC),", "year": 2015 }, { "authors": [ "Ling Liu", "Wenqi Wei", "Ka-Ho Chow", "Margaret Loper", "Emre Gursoy", "Stacey Truex", "Yanzhao Wu" ], "title": "Deep neural network ensembles against deception: Ensemble diversity, accuracy and robustness", "venue": "IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems (MASS),", "year": 2019 }, { "authors": [ "Xuanqing Liu", "Minhao Cheng", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Towards robust neural networks via random self-ensemble", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zechun Liu", "Baoyuan Wu", 
"Wenhan Luo", "Xin Yang", "Wei Liu", "Kwang-Ting Cheng" ], "title": "Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Maria-Irina Nicolae", "Mathieu Sinn", "Minh Ngoc Tran", "Beat Buesser", "Ambrish Rawat", "Martin Wistuba", "Valentina Zantedeschi", "Nathalie Baracaldo", "Bryant Chen", "Heiko Ludwig" ], "title": "Adversarial robustness toolbox", "venue": "arXiv preprint arXiv:1807.01069,", "year": 2018 }, { "authors": [ "Priyadarshini Panda", "Indranil Chakraborty", "Kaushik Roy" ], "title": "Discretization based solutions for secure machine learning against adversarial attacks", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Improving adversarial robustness via promoting ensemble diversity", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European Symposium on Security and Privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John C Duchi", "Percy Liang" ], "title": "Adversarial training can hurt generalization", "venue": "In Identifying and Understanding Deep Learning Phenomena ICML Workshop,", "year": 2019 }, { "authors": [ "Antônio H Ribeiro", "Manoel Horta Ribeiro", "Gabriela MM Paixão", "Derick M Oliveira", "Paulo R Gomes", "Jéssica A Canazart", "Milton PS Ferreira", "Carl R Andersson", "Peter W Macfarlane", "Meira Wagner Jr." 
], "title": "Automatic diagnosis of the 12-lead ecg using a deep neural network", "venue": "Nature communications,", "year": 2020 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifiable distributional robustness with principled adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Johannes Stallkamp", "Marc Schlipsing", "Jan Salmen", "Christian Igel" ], "title": "The German Traffic Sign Recognition Benchmark: A multi-class classification competition", "venue": "In IEEE International Joint Conference on Neural Networks,", "year": 2011 }, { "authors": [ "Thilo Strauss", "Markus Hanselmann", "Andrej Junginger", "Holger Ulmer" ], "title": "Ensemble methods as a defense to adversarial perturbations against deep neural networks", "venue": "arXiv preprint arXiv:1709.03423,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick Drew McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lily Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Luca Daniel", "Duane Boning", "Inderjit Dhillon" ], "title": "Towards fast computation of certified robustness for relu networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-Tao Xia", "James Bailey", "Xingjun Ma" ], "title": "Skip connections matter: On the transferability of adversarial examples generated with resnets", "venue": "arXiv preprint arXiv:2002.05990,", "year": 2020 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Meiyan Xie", "Yunzhe Xue", "Usman Roshan" ], "title": "Stochastic coordinate descent for 0/1 loss and its sensitivity to adversarial attacks", "venue": "In Proceedings of 18th IEEE International Conference on Machine Learning and Applications - ICMLA 2019,", "year": 2019 }, { "authors": [ "Yunzhe Xue", "Meiyan Xie", "Usman Roshan" ], "title": "Towards adversarial robustness with 01 loss neural networks", "venue": "In IEEE International Conference on Machine Learning and Applications,", "year": 2020 }, { "authors": [ "Yunzhe Xue", "Meiyan Xie", "Usman Roshan" ], "title": "On the transferability of adversarial examples between convex and 01 loss models", "venue": "In IEEE International Conference on Machine Learning and Applications,", "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": null, "text": "While machine learning models today can achieve high accuracies on classification tasks, they can be deceived by minor imperceptible distortions to the data. These are known as adversarial attacks and can be lethal in the black-box setting which does not require knowledge of the target model type or its parameters. Binary neural networks that have sign activation and are trained with gradient descent have been shown to be harder to attack than conventional sigmoid activation networks but their improvements are marginal. We instead train sign activation networks with a novel gradient-free stochastic coordinate descent algorithm and propose an ensemble of such networks as a defense model. We evaluate the robustness of our model (a hard problem in itself) on image, text, and medical ECG data and find it to be more robust than ensembles of binary, full precision, and convolutional neural networks, and than random forests while attaining comparable clean test accuracy. In order to explain our model’s robustness we show that an adversary targeting a single network in our ensemble fails to attack (and thus non-transferable to) other networks in the ensemble. Thus a datapoint requires a large distortion to fool the majority of networks in our ensemble and is likely to be detected in advance. This property of non-transferability arises naturally from the non-convexity of sign activation networks and randomization in our gradient-free training algorithm without any adversarial defense effort." }, { "heading": "1 INTRODUCTION", "text": "State of the art machine learning algorithms can achieve high accuracies in classification tasks but misclassify minor perturbations in the data known as as adversarial attacks Goodfellow et al. (2015); Papernot et al. (2016b); Kurakin et al. (2016); Carlini & Wagner (2017); Brendel et al. (2018). Adversarial examples have been shown to transfer across models which makes it possible to perform transfer-based (substitute model) black box attacks Papernot et al. (2016a). To counter adversarial attacks many defense methods been proposed with adversarial training being the most popular Szegedy et al. (2014); Tramèr et al. (2018). However this tends to lower accuracy on clean test data that has no perturbations Raghunathan et al. (2019); Zhang et al. (2019) and can still be attacked with better transfer based methods Wu et al. (2020); Xie et al. (2019a); Dong et al. (2019). Many previously proposed defenses have also been shown to be vulnerable Carlini & Wagner (2017); Athalye et al. (2018); Ghiasi et al. (2020) thus leaving adversarial robustness an open problem in machine learning.\nA more lethal and practical attack than substitute model training is a boundary based one that requires only the prediction of the model Brendel et al. (2018). These attacks are aimed at finding the minimum distortion to an image such that it will fool a classifier. This is in fact an NP-hard problem for ReLu activated neural networks Katz et al. (2017); Sinha et al. (2018) and tree ensemble classifiers Kantchelian et al. (2016). Even approximating the minimum distortion for ReLu activated neural networks is NP-hard Weng et al. (2018). Boundary based black box attacks such as HopSkipJump Chen et al., Boundary Attack Brendel et al. (2018) and RayS Chen & Gu (2020) give an upper bound on the minimum adversarial distortion.\nBinary neural networks that have sign activation and binary weights were originally proposed as lightweight models. 
These are trained with gradient descent by approximating the sign activation. Recent work has shown that they tend to be more adversarially robust than full precision networks but the improvements are marginal (see Tables 4 and 5 in Galloway et al. (2018) and Table 8 in Panda et al. (2019)).

In this paper we propose a gradient-free stochastic coordinate descent algorithm for training sign activation networks with and without binary weights, similar to recent work Xue et al. (2020a;b); Xie et al. (2019b). While our original intention was to study the accuracy of a sign activation network trained directly without any approximation, we make an interesting finding on the adversarial robustness of our model. We find that ensembling our model gives a high minimum distortion (as measured by HopSkipJump) compared to full precision, binary, and convolutional neural networks. We explain this phenomenon by measuring the transferability between networks in an ensemble.

In summary we make the following observations in our paper:

• Our single hidden layer sign activation network has higher minimum distortion than ensembles of full precision and binary neural networks, than random forests that have the advantage of bootstrapping and random feature selection, and than ensembles of convolutional networks that have the advantage of convolutions and several layers.

• Our model’s robustness stems from the non-transferability of adversarial examples between networks in our ensemble, and its robustness increases as we add more networks to the ensemble.

• Substitute model black box attacks require a much greater distortion to bring our model to zero adversarial accuracy compared to ensembles of full precision and binary networks.

• Text classification black box attacks are less effective on our model than on convolutional networks, random forests, and ensembles of full precision and binary networks.

• In a medical diagnosis setting, attacks on ECG data targeting our model have higher distortions and are visually distinguishable compared to attacks on ensembles of full precision and convolutional networks, and on random forests." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 GRADIENT-FREE STOCHASTIC COORDINATE DESCENT", "text": "Suppose we are given binary class data $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$ for $i = 0 \ldots n-1$. Consider the objective function of a single hidden layer neural network with sign activation and 01 loss given below. We employ a stochastic coordinate descent algorithm shown in Algorithm 1 (similar to recent work Xue et al. (2020a;b); Xie et al. (2019b)) to minimize this objective.

$$\operatorname*{argmin}_{W, W_0, w, w_0} \; \frac{1}{2n} \sum_i \Big(1 - \operatorname{sign}\big(y_i \, (w^T \operatorname{sign}(W^T x_i + W_0) + w_0)\big)\Big) \qquad (1)$$

We can train sign activation networks with and without binary weights using our SCD training procedure above. In the case of binary weights we don’t need a learning rate. We apply GPU parallelism to simultaneously update features and other heuristics to speed up runtimes (with additional details given in the Supplementary Material)." }, { "heading": "2.2 IMPLEMENTATION, TEST ACCURACY, AND RUNTIME", "text": "We implement our training procedure in Python, numpy, and Pytorch Paszke et al. (2019) and make our code freely available from https://github.com/zero-one-loss/scd_github. We train three types of sign activation networks with our algorithm: (1) SCD01: 01-loss in the final node, (2) SCDCE: cross-entropy loss in the final node, and (3) SCDCEBNN: cross-entropy in the final node with binary weights throughout the model. Since sign activation is non-convex and our training starts from a different random initialization each time, we run it 100 times and output the majority vote.

Algorithm 1 Stochastic coordinate descent for single hidden layer network
Procedure:
1. Initialize all network weights W, w to random values from the Normal distribution N(0, 1).
2. Set network thresholds W0 to the median projection value on their corresponding weight vectors and w0 to the projection value that minimizes our network objective.
while i < epochs do
1. Randomly sample a batch of data equally from each class. (We set this to 75% of the training data in image and text data experiments and 25% in the ECG data.)
2. Perform coordinate descent separately, first on the final node w and then on a randomly selected hidden node u (a random column from the hidden layer weight matrix W).
3. Suppose we are performing coordinate descent on node w. We select a random set of features (coordinates) from w called F. For each feature $w_i \in F$ we add/subtract a learning rate $\eta$ and then determine the $w_0$ that optimizes the loss (done in parallel on a GPU). We consider all possible values $w_0 = (w^T x_i + w^T x_{i+1})/2$ for $i = 0 \ldots n-2$ and select the one that minimizes the loss (also performed in parallel on a GPU).
4. After making the update above we evaluate the loss on the full dataset (performed on a GPU for parallel speedups) and accept the change if it improves the loss.
end while
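To make the coordinate update in step 3 concrete, the following is a minimal NumPy sketch of one pass over the output node (w, w0). It is our own illustration of Algorithm 1, not the released code, and the defaults for eta and the number of sampled coordinates are placeholders; the update for a hidden node proceeds analogously on a column of W.

```python
import numpy as np

def zero_one_loss(pred, y):
    # Fraction of misclassified points; labels are in {-1, +1}
    # (np.sign may emit 0 on exact ties, which simply counts as an error).
    return np.mean(pred != y)

def scd_step_output_node(X, y, W, W0, w, w0, eta=0.1, n_feats=16, rng=None):
    """One stochastic coordinate descent pass over the output node (w, w0),
    following step 3 of Algorithm 1: perturb a few random coordinates of w
    by +/- eta, re-optimize the bias over the candidate thresholds induced
    by the data, and accept a change only if the 01 loss improves."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H = np.sign(X @ W + W0)                        # hidden layer outputs, (n, k)
    best = zero_one_loss(np.sign(H @ w + w0), y)
    for i in rng.choice(len(w), size=min(n_feats, len(w)), replace=False):
        for delta in (eta, -eta):
            w_try = w.copy()
            w_try[i] += delta
            proj = H @ w_try                       # projections of all points
            ps = np.sort(proj)
            thresholds = (ps[:-1] + ps[1:]) / 2.0  # midpoints of consecutive projections
            for t in thresholds:                   # the paper scans these in parallel on a GPU
                loss = zero_one_loss(np.sign(proj - t), y)
                if loss < best:
                    best, w, w0 = loss, w_try, -t  # decision is sign(w.h + w0) with w0 = -t
    return w, w0, best
```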
To illustrate our real runtimes and clean test accuracies we compare our models with a single hidden layer of 20 nodes to the equivalent network with sigmoid activation and logistic loss (denoted as MLP) and the binary neural network (denoted as BNN) Hubara et al. (2016). We used the MLPClassifier in scikit-learn Pedregosa et al. (2011) to implement MLP, and the Larq library Geiger & Team (2020) with the approx approximation to the sign activation. This has been shown to achieve a higher test accuracy than the original straight through estimator (STE) of the sign activation Liu et al. (2018b).

We perform 1000 iterations of SCD01 and SCDCE and 10000 of SCDCEBNN. In Table 1 we show the runtimes of a single run of all models on CIFAR10 Krizhevsky (2009) (32×32×3, 10K train, 2K test), CelebA facial attributes black hair vs brown hair Liu et al. (2015) (96×96×3, 1K train, 1K test), GTSRB street sign recognition 60 vs 120 speed limit signs Stallkamp et al. (2011) (48×48×3, 2816 train, 900 test), and ImageNet class 0 vs. 1 Russakovsky et al. (2015) (256×256×3, 2580 train, 100 test). Our training runtimes are comparable to those of gradient descent in MLP and BNN and thus practically usable. We can trivially parallelize training an ensemble by doing multiple runs on CPU and GPU cores at the same time. We also show test accuracies of 100-vote ensembles of all models and find our model accuracies to be comparable to MLP and BNN." }, { "heading": "3 RESULTS", "text": "Going forward we compare the adversarial robustness of ensembles of our three models SCD01, SCDCE, and SCDCEBNN, their full precision and binary gradient descent trained equivalent counterparts MLP and BNN, two convolutional neural networks: LeNet LeCun et al. (1998) and ResNet50 He et al. (2016), and random forests Breiman (2001) (denoted as RF). For each model we use the majority vote output of 100 votes, each with different initial parameters, except for ResNet50 where we use 10 votes. In random forest we use an ensemble of 100 trees.
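All ensembles above are combined by a plain (unweighted) majority vote; a minimal sketch, assuming each member exposes a scikit-learn-style predict() returning labels in {-1, +1}:

```python
import numpy as np

def majority_vote(models, X):
    """Unweighted majority vote of an ensemble.
    Ties (possible with an even number of voters) are broken toward +1."""
    votes = np.stack([m.predict(X) for m in models])   # (n_models, n_points)
    return np.where(votes.sum(axis=0) >= 0, 1, -1)
```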
We use a single hidden layer of 20 nodes in our three models and in MLP and BNN throughout the paper. The convolutional networks and random forest are not a fair comparison to our model, since it has fewer parameters and does not perform bootstrapping or random feature selection as random forest does. We include them nevertheless since convolutional neural networks serve as state of the art references and random forest serves as an alternative ensemble method." }, { "heading": "3.1 ADVERSARIAL DISTORTION ON IMAGE DATA", "text": "The minimum distortion required to make a datapoint adversarial is an indicator of a model’s adversarial and even corruption/general robustness Gilmer et al. (2019). We consider 10 randomly selected datapoints from the CIFAR10 benchmark Krizhevsky (2009) and report their minimal adversarial distortion as given by HopSkipJump Chen et al. (2020), Boundary Attack Brendel et al. (2018) and RayS Chen & Gu (2020).

We use the HopSkipJump and Boundary Attack implementations in the IBM Adversarial Robustness Toolbox (ART) Nicolae et al. (2018). In order to obtain as accurate an estimate as possible we run both methods 10 times each, with an initial pool size of 1000 random datapoints and maximum iterations of 100, and report the minimum value. For a single datapoint this typically takes several hours to finish and thus we are able to report the distortion of only 10 random points in this study. We use the RayS implementation from their GitHub site https://github.com/uclaml/RayS and run it with default parameters of 40,000 queries to obtain a distortion estimate.
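A minimal sketch of how such a distortion estimate can be obtained with ART is shown below. The parameter names follow ART's HopSkipJump and BlackBoxClassifier interfaces at the time of writing and should be checked against the installed version; the wrapper function and its defaults are ours.

```python
import numpy as np
from art.attacks.evasion import HopSkipJump
from art.estimators.classification import BlackBoxClassifier

def min_l2_distortion(predict_fn, x, n_restarts=10):
    """Minimum L2 adversarial distortion of one image x under a label-only
    (decision-based) model. predict_fn maps a batch of inputs to one-hot
    class scores and can wrap any majority-vote ensemble."""
    clf = BlackBoxClassifier(predict_fn=predict_fn, input_shape=x.shape,
                             nb_classes=2, clip_values=(0.0, 1.0))
    best = np.inf
    for _ in range(n_restarts):                    # 10 restarts, keep the minimum
        attack = HopSkipJump(classifier=clf, targeted=False, norm=2,
                             max_iter=100, init_size=1000)
        x_adv = attack.generate(x=x[None, ...])[0]
        best = min(best, float(np.linalg.norm(x_adv - x)))
    return best
```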
In the first row of Table 2 we show the clean test accuracy of all models on CIFAR10 class 0 vs. 1. The convolutional networks LeNet and ResNet50 have higher accuracies since they have the advantage of convolutions. In the following three rows of Table 2 we see the minimum adversarial distortion of the models as estimated by the three boundary attack methods. We were unable to attack some models with Boundary Attack and RayS due to time constraints and mark them as NA. We see that HopSkipJump gives the lowest distortion for each model except for SCD01 and SCDCE, where it is comparable to RayS.

Amongst the HopSkipJump distortions our sign activation trained models have the highest adversarial distortion, with the binary weights cross-entropy variant as the winner. All other neural networks lag far behind and have distortion even lower than random forest. Even though BNN also has sign activations, its distortions are similar to MLP, possibly due to its approximation of the sign activation and gradient descent search. If we use the straight through estimator and swish approximations Darabi et al. (2018) the distortions remain similar to what we report here.

To further validate the distortions above we run HopSkipJump on SCD01, MLP, LeNet, and RF with 10 maximum iterations on the first 100 CIFAR10 test datapoints. We used a fixed image as the initial one in these experiments. In Table 3 we see that SCD01 distortions are the highest and the relative ranking is the same as we saw for the 10 images above with 100 maximum iterations of HopSkipJump.

In Table 4 below we show HopSkipJump distortions (min of 10 runs with 100 max iterations each) on a single random image from the CelebA, GTSRB, and ImageNet datasets. We find our SCD models to have a higher distortion on both CelebA and GTSRB but comparable to MLP on ImageNet.

To illustrate our model’s scalability we show HopSkipJump distortion values for our SCD01 model with different numbers of hidden nodes." }, { "heading": "3.2 TRANSFERABILITY WITHIN ENSEMBLES AND EFFECT OF ENSEMBLE SIZE", "text": "To understand the above phenomenon we estimate the probability that an adversarial example targeting a single model in the ensemble will also be adversarial to other models in the ensemble. We can estimate this by first performing a HopSkipJump attack on each model in the ensemble separately. Let $x'_i$ be the adversary obtained by targeting model $m_i$ in the ensemble. Let $k_i$ be the number of models in the ensemble that are also misclassified by the adversary $x'_i$ (thus transferable). We sum $k_i$ for $i = 0 \ldots n-1$ and divide by 9900, which is the maximum value of this sum (obtained when the adversary attacks all models in the ensemble, excluding the target of course).

We average this probability over Images 0 through 7 for each method. In Table 6 we see that this probability is lowest for our models and highest for MLP and BNN. The fact that this probability is very low for our models indicates that for several of the networks in our ensemble the adversary targeting a fixed network does not transfer to the other ones. The low transferability of our models indicates that a greater distortion is required for an image to be adversarial.

In fact, as we see in Figure 1, the robustness of our models increases as we increase the ensemble size, to a much larger degree than for ensembles of MLP and BNN, and than for RF. We use ensemble sizes of 100 in this study but the figure suggests that increasing our ensemble size is likely to further increase robustness.
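For concreteness, the per-datapoint transfer estimate just described can be sketched as follows. The attack function and the scalar predict() interface are placeholders of ours for the per-model HopSkipJump attack and the individual networks.

```python
def transfer_probability(models, attack_fn, x, y):
    """Per-datapoint transfer estimate: attack each of the n models
    separately, count how many of the other models the resulting adversary
    x'_i also fools (k_i), and normalize by n*(n-1), which equals 9900 for
    n = 100. attack_fn(model, x) returns the adversarial point for one model."""
    n = len(models)
    k_sum = 0
    for i, target in enumerate(models):
        x_adv = attack_fn(target, x)               # adversary x'_i targeting m_i
        k_sum += sum(int(other.predict(x_adv) != y)
                     for j, other in enumerate(models) if j != i)
    return k_sum / (n * (n - 1))
```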
" }, { "heading": "3.3 SUBSTITUTE MODEL BLACK-BOX ATTACKS", "text": "Our model’s high distortion in CIFAR10 is reflected in substitute model black box attacks on this dataset Papernot et al. (2016a). We train a two hidden layer neural network, with 200 nodes in each layer, as the substitute, using the standard adversarial augmentation algorithm Papernot et al. (2017) (described fully in the Supplementary Material). In Figure 2 we see that our models require a much higher distortion than their gradient descent trained equivalents MLP and BNN in order to reach zero percent adversarial accuracy. We also see that all models attacked with random Gaussian noise of the same distortion added to the test examples are barely affected, thus showing the effectiveness of the black box adversarial examples." }, { "heading": "3.4 TEXT BLACK-BOX ATTACKS", "text": "The TextFooler Jin et al. (2020) method is designed to find syntactically and semantically similar adversarial documents by replacing important words with similar ones until the document is misclassified. We apply this to all ensemble models on four document classification datasets: Internet Movie Database (25K train, 25K test, mean words per document: 215) and Yelp (560K train, 38K test, mean words per document: 152) positive and negative reviews (IMDB and Yelp), sentence classification of positive and negative sentiments (9K train, 1K test, mean words per document: 20, denoted as MR), and sentence-level classification of news items in World and Sports categories (120K train, 7.6K test, mean words per document: 43, denoted as AG) Jin et al. (2020).

WordCNN stacks the word vectors Pennington et al. (2014) of each word in a document into a matrix to treat it as a 2D image Kim (2014). In the other models, which take feature vectors as inputs, we consider the averaged word vector of all words in a document Lilleberg et al. (2015). For all models we use 200 dimensional Glove word embeddings pre-trained on 6 billion tokens from Wikipedia and Gigawords Pennington et al. (2014). This gives a lower clean test accuracy than WordCNN but one still above an acceptable level in practice.

In Table 7 we see that ensembles of our models give the highest adversarial accuracy on all four datasets and require the greatest number of queries. If a smaller limit were placed on the allowed queries (for example imposed by the system being attacked) we can expect a higher adversarial accuracy for our models. Here we show ensembles of 8 votes for each model. If we increase the ensemble size to 100 we find that the adversarial accuracy of our models, BNN, and RF drops slightly but their relative difference remains the same." }, { "heading": "3.5 ECG BLACK-BOX ATTACKS", "text": "ECG time-series data is increasingly being used in automatic diagnosis by machine learning systems Ribeiro et al. (2020). Tailored adversarial attacks have recently been proposed Han et al. (2020); Chen et al. (2020), but HopSkipJump can also be used to produce adversarial ECG examples. To illustrate this and evaluate our model’s robustness on this data we consider the PTB Diagnostic ECG dataset Bousseljot et al. (1995); Goldberger et al. (2000) available from the URL https://www.kaggle.com/shayanfazeli/heartbeat. We randomly split this dataset into an 80:20 train test split (yielding 13096 train and 1456 test points).

We train 100-model ensembles of SCD01, SCDCE, and MLP. We also train an ensemble of 10 convolutional neural networks (CNN) with 1D convolutional kernels, random forest (RF) with 100 trees, and a 10-model ensemble of BNN (as opposed to a 100-model ensemble, which is slow to attack and did not show a better distortion on selected datapoints). Each of our CNNs has the following structure: 64 1x16 Conv1D kernels → MaxPool 1x4 → 128 1x16 Conv1D kernels → MaxPool 1x4 → 256 1x16 Conv1D kernels → MaxPool 1x2 → FullyConnected → Output.
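A minimal PyTorch sketch of this architecture is given below. The ReLU nonlinearities, the padding, and the input length used in the usage line are our assumptions; the text above does not state them.

```python
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    """The 1D CNN baseline described above: three Conv1D/MaxPool stages with
    64, 128 and 256 kernels of width 16, then a fully connected output."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=16, padding=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=16, padding=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(128, 256, kernel_size=16, padding=8), nn.ReLU(), nn.MaxPool1d(2),
        )
        # LazyLinear infers the flattened feature size from the first batch.
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):                  # x: (batch, 1, signal_length)
        return self.classifier(self.features(x))

# e.g., logits = ECGConvNet()(torch.randn(8, 1, 187))  ->  shape (8, 2)
```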
In Table 8 we see that the clean test accuracy of our model is slightly lower than that of the gradient-descent trained models and random forest. We picked 37 random datapoints from the test set and attacked all models on these points. We attack each point 10 times and report the minimum, with the same parameters as in our CIFAR10 attacks described above.

In the second row of Table 8 we show the average min L2 distortion and find SCD01 to have the highest one. The L2 difference between SCD01 and the next best RF (after SCDCE) turns out to be statistically significant with a p-value of .008.

In Figure 3 we visualize an original ECG sample and its adversarial versions targeting SCD01, CNN, and RF. The SCD01 adversary is rigid and has many more bumps compared to the CNN and RF adversaries and is thus likely to be detected by an observer or a system that checks for smoothness (which we expect to see, as in the original sample)." }, { "heading": "3.6 DISCUSSION", "text": "Using ensembles of neural networks and promoting diverse ensembles has been previously proposed as a defense against adversarial attacks. Studies using ensembles with different initializations (as we do), bootstrapping, and Gaussian noise have shown robustness, but only in the white box setting Strauss et al. (2017) (which is somewhat unrealistic since it assumes the attacker has full knowledge of the model and its parameters). Other studies combine the loss of all models in the classifier and add a regularizer that promotes diversity.

For example we could try to maximize the angle between gradients of models in the ensemble Kariyappa & Qureshi (2019) to make them misaligned. In their diversity training they use a Gaussian noise augmented dataset, which raises concerns about the effectiveness of their method since augmentation alone has been shown to be effective in ensemble training Strauss et al. (2017). Another study maximizes diversity between classes Pang et al. (2019) and thus does not apply to our work here, which focuses on binary classes only. Even for multiple classes their method is computationally expensive as it uses a joint loss function. Other methods inject noise into models in the ensemble Liu et al. (2018a) but their evaluation is only in the white box setting. Various measures for ensemble diversity have been previously proposed for deep networks Liu et al. (2019) and evaluated in the white-box setting.

We can apply all of the above diversity training methods to our ensemble of sign networks. Our work, however, is not explicitly aimed at enhancing diversity. As we show, it is naturally diverse, and we conjecture this is due to the non-convexity of sign activation and our randomized training method. Even sigmoid activation networks have a non-convex search space but we can imagine that sign activation gives a greater degree of freedom. This can easily be seen in the case of a linear classifier with logistic or hinge loss vs. 01 loss Xue et al. (2020b).

Our model accuracy is not on par with convolutional networks, understandably due to the lack of convolutions in our networks. But it is close to sigmoid activated networks and random forests, and better than binary neural networks in most cases. It is possible to extend our training procedure to allow for convolutions and this may increase accuracy, making our model comparable to convolutional networks and much more robust.

It is hard to make a general claim of robustness with only 100 images from CIFAR10. We would need to show more images from CIFAR10 and other image benchmarks as well, but our preliminary experiments on CelebA, GTSRB and ImageNet (shown in Table 4) suggest higher distortion on other image data as well. Due to computational limitations we are unable to show more image data here, but instead we take another route to show the generality of our results. We show that our model is robust even to text classification black box attacks and to attacks on ECG data. Both of these are outside the domain of images and our model’s robustness there suggests a greater generalization. Future work entails extending our training to sign activated convolutions and multi-class networks." }, { "heading": "3.7 CONCLUSION", "text": "We show that our ensemble of gradient-free sign activation networks is harder to attack than ensembles of several other networks and random forests on images, text, and medical data." } ]
2020
null
SP:c582c4634f7e343732bab5e9cc7024efbf6d88d0
[ "The goal of this study is the 1-step prediction of flow rate in flow networks. They first define a “spatial-temporal induction effect (STI)” and claim it to be the universal property of flow networks. Their main contribution is their proposed “flow neural network” which is based on the STI effect and a combination of GCN and GRU architectures. According to authors, the novelty of their work lies in the fact that they consider the spatiotemporal features of the flow network simultaneously, whereas the previous works only consider them separately. " ]
This paper presents and investigates a novel and timely application domain for deep learning: sub-second traffic flow modelling in IP networks. Traffic flows are the most fundamental components in an IP based networking system. The accurate modelling of the generative patterns of these flows is crucial for many practical network applications. However, the high nonlinearity and dynamics of both the traffic and network conditions make this task challenging, particularly at sub-second time granularity. In this paper, we cast this problem as a representation learning task to model the intricate patterns in data traffic according to the IP network structure and working mechanism. Accordingly, we propose a customized Flow Neural Network, which works in a self-supervised way to extract the domain-specific data correlations. We report state-of-the-art performance on both synthetic and realistic traffic patterns across multiple practical network applications, which provides a good testament to the strength of our approach.
[]
[ { "authors": [ "Mohammad Al-Fares", "Alexander Loukissas", "Amin Vahdat" ], "title": "A scalable, commodity data center network architecture", "venue": "ACM SIGCOMM computer communication review,", "year": 2008 }, { "authors": [ "Theophilus Benson", "Aditya Akella", "David A. Maltz" ], "title": "Network traffic characteristics of data centers in the wild", "venue": "IMC, pp", "year": 2010 }, { "authors": [ "Theophilus Benson", "Ashok Anand", "Aditya Akella", "Ming Zhang" ], "title": "Microte: fine grained traffic engineering for data centers", "venue": "IMACM CoNEXTC, pp", "year": 2011 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": null, "year": 2002 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "NIPS Deep Learning Workshop,", "year": 2014 }, { "authors": [ "Zhiyong Cui", "Kristian Henrickson", "Ruimin Ke", "Yinhai Wang" ], "title": "Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting", "venue": "IEEE Transactions on Intelligent Transportation Systems,", "year": 2020 }, { "authors": [ "Doganalp Ergenc", "Onur Ertan" ], "title": "On network traffic forecasting using autoregressive models", "venue": "arXiv preprint arXiv: 1912.12220v1,", "year": 2019 }, { "authors": [ "Fayez Gebali" ], "title": "Modeling Network Traffic, pp. 445–492", "venue": "ISBN 978-3-319-15657-6", "year": 2015 }, { "authors": [ "Robert Geirhos", "Jorn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A. Wichmann" ], "title": "Shortcut learning in deep neural networks", "venue": "arXiv preprint arXiv:2004.07780v3,", "year": 2020 }, { "authors": [ "Albert Greenberg", "James R. Hamilton", "Navendu Jain" ], "title": "Vl2: A scalable and flexible data center", "venue": null, "year": 2009 }, { "authors": [ "Shengnan Guo", "Youfang Lin", "Ning Feng", "Chao Song", "Huaiyu Wan" ], "title": "Attention based spatialtemporal graph convolutional networks for traffic flow forecasting", "venue": null, "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jurgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "C. Hunt" ], "title": "TCP/IP Network Administration", "venue": "OReilly and Associates: Sebastopol, CA.,", "year": 1992 }, { "authors": [ "Rob J Hyndman", "Yeasmin Khandakar" ], "title": "Automatic time series forecasting: The forecast package for r", "venue": "Journal of Statistical Software,", "year": 2008 }, { "authors": [ "Marcus Kalander", "Min Zhou", "Chengzhi Zhang", "Hanling Yi", "Lujia Pan" ], "title": "Spatio-temporal hybrid graph convolutional network for traffic forecasting in telecommunication", "venue": null, "year": 2009 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: a method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": null, "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. 
Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "NIPS, pp", "year": 2012 }, { "authors": [ "Kai Lei", "Meng Qin", "Bo Bai", "Gong Zhang", "Min Yang" ], "title": "Gcn-gan: A non-linear temporal link prediction model for weighted dynamic networks", "venue": "IEEE INFOCOMM,", "year": 2019 }, { "authors": [ "Will E. Leland", "Murad S. Taqqu", "Walter Willinger", "Daniel V. Wilson" ], "title": "On the self-similar nature of ethernet traffic", "venue": "IEEE/ACM Transactions on Networking,", "year": 1994 }, { "authors": [ "Ziqian Lin", "Jie Feng", "Ziyang Lu", "Yong Li", "Depeng Jin" ], "title": "Deepstn+: Context-aware spatialtemporal neural network for crowd flow prediction in metropolis", "venue": null, "year": 2019 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of symmetric elements", "venue": null, "year": 2020 }, { "authors": [ "Alberto Mozo", "Bruno Ordozgoiti", "Sandra Gomez-Canaval" ], "title": "Forecasting short-term data center network traffic load with convolutional neural networks", "venue": "PLOS One,", "year": 2018 }, { "authors": [ "Kanthi Nagaraj", "Dinesh Bharadia", "Hongzi Mao", "Sandeep Chinchali", "Mohammad Alizadeh", "Sachin Katti" ], "title": "Numfabric: Fast and flexible bandwidth allocation in datacenters", "venue": "ACM SIGCOMM,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748v2,", "year": 2019 }, { "authors": [ "Nicholas G. Polson", "Vadim O. Sokolov" ], "title": "Deep learning for short-term traffic flow prediction", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2017 }, { "authors": [ "Krzysztof Rusek", "Jos Suarez-Varela", "Paul Almasan", "Pere Barlet-Ros", "Albert Cabellos-Aparicio" ], "title": "Routenet: Leveraging graph neural networks for network modeling and optimization in sdn", "venue": "arXiv preprint arXiv:1910.01508,", "year": 1910 }, { "authors": [ "Andrew W. 
Senior", "Richard Evans", "John Jumper", "James Kirkpatrick", "Laurent Sifre", "Green Tim" ], "title": "Improved protein structure prediction using potentials from deep learning", "venue": "Nature, pp", "year": 2020 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Jasmine Hsu", "Sergey Levine" ], "title": "Time-contrastive networks: Selfsupervised learning from multi-view observation", "venue": null, "year": 2018 }, { "authors": [ "David Silver", "Julian Schrittwiseser", "Karen Simonyan", "Antonoglou Ioannis", "Aja Huang" ], "title": "Mastering the game of go without human knowledge", "venue": "Nature, pp", "year": 2017 }, { "authors": [ "Minjie Wang", "Da Zheng", "Zihao Ye", "Quan Gan", "Mufei Li", "Xiang Song", "Jinjing Zhou", "Chao Ma", "Lingfan Yu", "Yu Gai", "Tianjun Xiao", "Tong He", "George Karypis", "Jinyang Li", "Zheng Zhang" ], "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Xiaozhe Wang", "Kate Smith-Miles", "Rob J Hyndman" ], "title": "Characteristic-based clustering for time series data", "venue": "Data Mining and Knowledge Discovery,", "year": 2006 }, { "authors": [ "Shihan Xiao", "Dongdong He", "Zhibo Gong" ], "title": "Deep-q: Traffic-driven qos inference using deep generative network", "venue": "Proceedings of the 2018 Workshop on Network Meets AI & ML,", "year": 2018 }, { "authors": [ "Mang Ye", "Xu Zhang", "Pong C. Yuen", "Shih-Fu Chang" ], "title": "Unsupervised embedding learning via invariant and spreading instance feature", "venue": null, "year": 2019 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Learning (DL) has gained substantial popularity in light of its applicability to real-world tasks across computer vision, natural language processing (Goodfellow et al., 2016), protein structure prediction (Senior et al., 2020) and challenging games such as Go (Silver et al., 2017). Typically, the data for these learning tasks takes the form of either grids, sequences, graphs or their combinations. The tremendous efforts on customizing neural network structures (Krizhevsky et al., 2012; Kiros et al., 2015; Hochreiter & Schmidhuber, 1997) and learning strategies (Sermanet et al., 2018; Oord et al., 2019) to explore the data-specific properties underpin the success of modern DL in these domains. Following the same design philosophy, we wish to capitalize on these advancements to develop a customized neural network and self-supervised learning strategy to tackle the crucial and timely challenge of traffic flow modelling in IP networks." }, { "heading": "1.1 TRAFFIC FLOW MODELLING IN IP NETWORKS", "text": "An IP network is a communication network that uses Internet Protocol (IP) to send and receive messages between one or more devices such as computers, mobile phones. The messages could be general application data such as video, emails or control signals of any connected devices. When sending the messages from a source to a destination, the source device encapsulates the bit chunks of encoded messages into a set of IP packets. The packets then travel through communications links and routers or switches in a given routing path sequentially, thus forming the traffic flows in an IP network (Hunt, 1992). As one of the most commonly used global networks, the IP network provides the majority of such data transmission services to support today’s Internet applications such as video streaming, voice-over-IP, and Internet of Things. Therefore, a good understanding of the behaviorial patterns of the underlying traffic flows plays a crucial role in network planning, traffic management, as well as optimizing Quality of Service (QoS, e.g., transmission rate, delay). This challenge is termed as traffic flow modelling and is fundamental to IP networking research and practice. However, the high nonlinearity, randomness and complicated self similarity (Leland et al., 1994) of these traffic thwart extensive traditional analytical and learning models, particularly at fine-grained time scales, such as traffic flow modelling at a sub-second level.\nConsider the illustrative example in Fig. 1, which depicts multiple packet flows with shared forwarding nodes and links in their routing paths. The sender of each flows streams data packets to the\nreceiver at a dynamic sending rate, which is determined according to many factors such as its rate demand, existing traffic loads, available link bandwidth, and etc. The packets usually experience various delays on the journey due to actions such as forwarding processing, link transmission, packet queueing. For example, when the sum rate of Sender 2 and 3 exceeds 10 Gbps, the router R2–R4 will hold off and cache the arriving packets in their buffers until the links from R2 to Receiver 1 become free, causing what is known as the queueing delay. The extent of these delays depends on multiple factors, including the amount of traffic going on, the capacity of the router’s output queue, link bandwidth etc. The random establishment, interaction and termination of massive flow connections give rise to network dynamics. 
This illustrates the complexity of traffic flow modelling in IP networks even for this simple example. The challenge is exacerbated when the traffic loads are running at over 100 Gbps and in networks of significantly larger size in practice." }, { "heading": "1.2 MOTIVATING FLOWNN BASED TRAFFIC FLOW MODELLING", "text": "A flow pattern can be defined as anything that follows a trend and exhibits some kind of regularity, e.g., distribution, periodicity, etc. The modelling of traffic flow patterns can be done mathematically or by the use of data-driven learning algorithms. We argue that developing a customized FlowNN in the context of IP traffic flow modelling is important in two aspects: 1) improving the performance of supported network applications through the accurate modelling of the behavioral patterns of traffic flows in IP networks, particularly at a sub-second time scale; 2) providing an exciting new “playground” and neural network model for the DL community to solve real-world-motivated research challenges by deeply combining its structure and working mechanisms. Next, we make the following two clarifications.

Why not use traditional mathematical models? The past decades have seen numerous traffic models proposed to mathematically model the traffic characteristics of networks (Gebali, 2015). For example, extensive studies use the Poisson model to characterize the traffic by assuming the arrival pattern between two successive packets follows a Poisson process. Considering the heavy-tailed distribution and burstiness of data-center traffic, recent work in Benson et al. (2010) models the traffic arrival pattern as a log-normal process. To capture the temporal patterns and make predictions accordingly, Seasonal Autoregressive Integrated Moving Average (SARIMA) is exploited in (Ergenc & Ertan, 2019) to model the traffic time series. These analytical models may generate outputs that are easier to interpret, but they are bound to specific working circumstances and assumptions. More importantly, these statistical models function at coarse time scales of hours and assume relatively smoother traffic patterns. However, as reported in many practical traffic measurements, e.g., Benson et al. (2010; 2011); Greenberg et al. (2009), most flows last less than 1 minute. This implies that tasks requiring traffic models at finer-grained time scales are beyond the capability of these traditional models.

Fig. 2 plots the traffic traces we collected from a practical backbone network–WIDE¹, which shows the realistic traffic patterns when the packet flows are sampled at two different time scales. The long time-scale plot in Fig. 2b shows a clear “tide-effect” associated with daily human activities. By contrast, the traffic traces in Fig. 2a become noisier and it is difficult to recognize obvious patterns when they are counted by millisecond.

¹http://mawi.wide.ad.jp/~agurim/index.html

Why not use existing neural network models? When put in the context of data-driven learning, the traffic flow modelling problem can be reduced to a representation learning task. If treating the traffic flows as general spatio-temporal data, extensive existing neural networks fit such a task, including Convolutional Neural Nets (CNN, (Mozo et al., 2018)), Graph Neural Nets (GNN, (Rusek et al., 2019)), Recurrent Neural Nets (RNN) as well as their variants and combinations (e.g., STHGCN (Kalander et al., 2020), STGCN (Yu et al., 2018), and Xiao et al. (2018); Polson & Sokolov (2017); Cui et al. (2020); Guo et al.
(2019); Lin et al. (2019)). The customized designs for data-specific properties underlie the success of these existing models, such as the convolutional operation to capture the spatially local correlations in CNNs and the aggregation operation to extract the adjacent link correlations in GNNs. As a human-engineered industrial system with a clear system structure and working mechanism, the IP network creates domain-specific spatio-temporal data correlations, which are difficult for the incumbent spatio-temporal models to capture without modification. One of the most important differences is that the spatial data in IP networks is not only correlated to other spatial data at the same point in time, but also able to directly influence the future realizations of correlated locations in strict order (i.e., the Spatio-Temporal Induction effect we will disclose later). Moreover, these existing studies only target a coarse-grained timescale above minutes or even hours. Models at a sub-second granularity, such as FlowNN, require deeply combining the spatio-temporal data trends with the system's structural knowledge and working mechanism." }, { "heading": "1.3 OUR CONTRIBUTIONS", "text": "We claim two critical contributions: 1) we formulate the crucial traffic flow modelling problem in IP networks as a representation learning task in deep learning, and develop a customized neural network–FlowNN and the associated Induction Operation to extract the domain-specific spatio-temporal data correlations in IP traffic flows. To the best of our knowledge, this is the first work to design a customized neural network and learning strategy by deeply combining the IP network structure and working mechanism. The Induction Operation also makes this the first data-driven learning model able to infer the data features at a millisecond granularity, which is usually treated as the ‘noise’ region by existing coarse-grained models; 2) we report state-of-the-art performance over the baselines in different types of practical network applications, which provides a good testament to our model." }, { "heading": "2 SPATIO-TEMPORAL INDUCTION", "text": "By stacking the networking feature time series sampled at all the nodes that a flow passes through, the IP traffic flow data can be organized in the form of a high-dimensional tensor time series, as shown in Fig. 3a. The feature (denoted as $x^t_{f,n}$) could be the average flow rate (i.e., the amount of packet bits received in each unit measurement time) at each node, or the average per-hop packet delay, etc. The routing path herein constitutes the most significant attribute of the generative process of each IP traffic flow. This creates many peculiar properties in such flow data. For example, for a flow with a routing path [1→4→12] in Fig. 3a, the current data at node 4 originated from the history data at its predecessor node 1, but delayed by at least² the link delay ∆t. These data will also flow to its successor node after certain delays. This shows that the flow state at a node is physically driven³ by the past flow state at its predecessor node. Therefore, such time-resolved flow data not only tells us who is related to whom but also when and in which order relations occur. This forms the S-shaped data correlation pattern, as exemplified in Fig. 3a.

²Packet processing and queueing will also impose extra delay.

³The exotic competition from other co-located flows also drives the flow evolution in part. This can be included by appending the aggregated coflows as an auxiliary feature.
An accurate modelling of the flow data requires attention to such domain-specific data properties, which are absent in the majority of existing learning models, if not all of them.

Fig. 3b plots a sample of flow traces at two neighboring path nodes from the WIDE dataset. We can observe that an excess of the data rate at node 1 over node 2 for some duration (e.g., T1) will always induce a subsequent duration T2 in which the data rate at node 2 is greater than that at node 1. Moreover, the cumulative amounts of data bits the two nodes forward in these two durations are almost the same, as indicated by the rate difference between the two nodes at the bottom of Fig. 3b. This illustrates that there is a stronger correlation among the data in these two durations, and the future realizations at T2 are subject to the constraint of the states at T1 by local flow conservation⁴ between the two nodes.

⁴Flow conservation is the network property that, except at the source and sink, the total incoming flow of a node is the same as its outgoing flow, or the incoming flow of its successor node in the routing path, if no packet is lost.

In analogy to the concept of Electromagnetic Induction⁵ in Physics, in what follows we introduce the Spatio-Temporal Induction (STI) effect in IP traffic flows. Accordingly, a learning model is proposed to model the network traffic flows at a fine-grained timescale.

⁵Electromagnetic induction is the production of electric currents across a conductor in a changing magnetic field, which determines a relationship between electricity and magnetism.

Definition 1. Spatio-Temporal Induction is the production of the temporal evolutions of a flow from the history spatial patterns at its correlated locations.

The STI builds a physical relationship between the spatial and temporal patterns of IP flow data. This provides a more accurate interpretation of the underlying data generating process than the trending information manifested by the data itself. Such an induction effect is created by the IP network structure and working mechanism, and is preserved when the flow data is sampled at any timescale in practice.

Next, we develop an Induction Operation to concretely capture the S-shaped data correlation in IP flow data and propose what we call FlowNN to generate the desired traffic model for IP networks.
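To make the durations T1 and T2 in Fig. 3b concrete, such induction windows can be identified by tracking the sign of the rate difference between two neighboring path nodes. The following is a minimal sketch of this identification step; it is our own illustration and all names are ours, not code from the paper.

```python
import numpy as np

def induction_windows(rate_node1, rate_node2, eps=0.0):
    """Split a flow trace into alternating durations like T1/T2 in Fig. 3b
    by the sign of the rate difference between two neighboring path nodes;
    a new window starts wherever that sign flips."""
    diff = np.asarray(rate_node1) - np.asarray(rate_node2)
    sign = np.sign(np.where(np.abs(diff) <= eps, 0.0, diff))
    cuts = np.flatnonzero(np.diff(sign) != 0) + 1   # indices where the sign changes
    return np.split(np.arange(len(diff)), cuts)     # list of index windows
```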
" }, { "heading": "3 FLOW NEURAL NETWORK", "text": "As aforementioned, after the packet flows leave the source devices, they may experience delays from many factors (e.g., link transmission, packet processing, queueing, etc.) during the journey throughout the routing path. Accordingly, each node in the path will record a transformed time-series view of the same source flow due to the transmission and forwarding actions of the network. This implies the existence of similarity among these time series since they are from the same source flow. Such path-wide similarity reflects the shared forwarding policies all path nodes follow, as well as the original behaviors of the source flows at the sending side, such as the dynamics of user demands. On the other hand, the flow patterns recorded at each node may end up being different from the patterns at the sending side, depending on the experienced networking actions.

With all these in mind, we attend to such similarities and differences by two key blocks–Path-level Coding and Node-level Induction–and build the framework of FlowNN as in Fig. 4.

Path-level Coding: This block constructs an encoding network to extract the patterns that can represent the state of the whole path. Such path-level patterns capture the similar part of the behaviors within the time series received from all path nodes and will be the base from which to derive the evolved patterns at each path node. Technically, we tested both the very recent Siamese net (Maron et al., 2020; Ye et al., 2019) and GNN based encoders (Rusek et al., 2019) to extract the path-level patterns. Surprisingly, they do not outperform direct encoding by a Gated Recurrent Unit (GRU) with all raw node data as inputs. The reason is that the raw data from the correlated locations conveys direct guidance for the near-future realization at a node. The operations in the above models, e.g., SumPooling or node aggregation, actually destroy such data correlations and introduce more ‘noise’. More details are provided in later experiments. Consequently, in the following, we directly apply a GRU to encode all the raw inputs.

Node-level Induction: This block encodes the individual patterns at different nodes conditioned on the path-level patterns from the source flows. Looking into how flows are induced in Fig. 3b, we can observe explicit data correlations subject to local flow conservation among the correlated data sequences. Next, we propose the Induction Operation and the associated Contrastive Induction Learning to extract such data correlations.

As illustrated in Fig. 3b, the induction process operates on any two consecutive durations like T1 and T2, which can be identified through the rate differences between two neighboring nodes. The induction is performed separately for each path node, as shown in Fig. 4. For ease of exposition, we denote the data sequences at all path nodes in T1 and the data sequence at the induced node in T2 as S and T, respectively. Then, the Induction Operation performs the conditional induction function f(T|S) to capture the correlations between the source sequence S and the target sequence T. The function f(T|S) can be approximated by a specialized Contrastive Induction Learning in a self-supervised manner as follows.

Contrastive Induction Learning: As discussed in Path-level Coding, we first take the outputs of path-level feature patterns, $\{h^t_S\}_{t \in T_1}$, as the initial coding states of the source sequence S. A recurrent encoder (e.g., GRU⁶ (Chung et al., 2014)) is then applied to generate the context c of S with $\{h^t_S\}_{t \in T_1}$ as inputs. Conditioned on the context c, a GRU decoder can then be used to produce the state codes, $\{\hat{h}^t_T\}_{t \in T_2}$, for the target sequence T. With the state codes of both S and T at hand, the key of the induction function f(T|S) is to force the learned state codes of S and T to express the data correlations subject to local flow conservation. Inspired by Contrastive Predictive Coding (CPC) (Oord et al., 2019), we learn the mappings from S to T in a way that maximally preserves the mutual information of the source and target sequences, defined as follows:

$$I(T; c) = \sum_{T, c} p(T, c) \log \frac{p(T|c)}{p(T)} \qquad (1)$$

where p(·) is the probability function.

⁶Note that the encoder is not limited to GRU; any type of recurrent model can be used. More recent recurrent models stacked with modules, e.g., attention blocks, could help improve results further.
In analogy to CPC, we rely on a contrastive loss function to train the model. Following the practice of contrastive learning (Chen et al., 2020; Oord et al., 2019), we first construct a training batch of $N_b$ sequence samples, $\Phi = \{T^*_1, T_2, \cdots, T_{N_b}\}$. $T^*_1$ is the positive sample drawn from the conditional distribution p(T|c) for the target sequence T, and the other $N_b - 1$ sequences are negative samples drawn from the original distribution p(T). Accordingly, we define the following contrastive induction loss $\mathcal{L}_I$ to optimize the mutual information in Equation 1:\n$\mathcal{L}_I = -\mathbb{E}_{\Phi}\left[\log \frac{f(T^*_1, c)}{\sum_{T_j\in\Phi} f(T_j, c)}\right]$ (2)\nwhere f(·) is a score function proportional to the ratio p(T|c)/p(T). As done in CPC, this can be technically implemented with a simple log-bilinear model:\n$f(T_j, c) = \sum_{t\in T_2} \exp\big((h^t_{T_j})^{\top} \hat{h}^t_{T^*_1}\big)$ (3)\nwhere $(h^t_{T})^{\top}$ is the transpose of the state coding $h^t_T$ from the initial embedding layer in Fig. 4.\nThe induction loss $\mathcal{L}_I$ defines a categorical cross-entropy of classifying the positive sample correctly. Optimizing $\mathcal{L}_I$ results in high similarity between the true embedding codes $h^t_{T^*_1}$ and the induced state codes $\hat{h}^t_{T^*_1}$. The above induction pattern is learned by directly comparing the target coding with the codings of randomly sampled negative samples in the latent space in a self-supervised manner. Such a process requires no direct labels in the output space, which makes it easier to generalize the induced knowledge to diverse applications in IP networks.
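As a minimal illustration, the contrastive induction loss of Eqs. (2)-(3) can be written in a few lines of PyTorch. The tensor shapes and variable names below are our own assumptions (the positive sequence is placed at index 0 of the candidate batch); this is a sketch of the loss, not the full FlowNN training code.
```python
import torch
import torch.nn.functional as F

def contrastive_induction_loss(h_candidates, h_induced):
    """Contrastive induction loss of Eqs. (2)-(3).

    h_candidates: (Nb, L, d) embedding codes h_T of the Nb candidate target
                  sequences over the T2 window; index 0 is the positive T*_1.
    h_induced:    (L, d) induced state codes h_hat produced by the decoder.
    """
    # log f(T_j, c) = logsumexp_t <h_{T_j}^t, h_hat^t>   (log of Eq. 3)
    sims = torch.einsum('nld,ld->nl', h_candidates, h_induced)  # (Nb, L)
    log_f = torch.logsumexp(sims, dim=1)                        # (Nb,)
    # Eq. (2): categorical cross-entropy with the positive sample at index 0
    target = torch.zeros(1, dtype=torch.long, device=log_f.device)
    return F.cross_entropy(log_f.unsqueeze(0), target)
```
Using logsumexp rather than summing exponentials directly keeps the score of Eq. (3) numerically stable without changing the value of the loss.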
" }, { "heading": "4 EXPERIMENTS", "text": "We validate the effectiveness of FlowNN on two crucial networking application tasks in IP networks–flow rate prediction and end-to-end delay inference.\nDatasets. We conduct experiments on two publicly available flow datasets–WIDE and NumFabric7. WIDE contains realistic traffic traces collected from a real-world network environment, and NumFabric contains synthetic traffic widely applied in recent networking research (Nagaraj et al., 2016; Lei et al., 2019). The network topology includes 4 core nodes, 8 leaf nodes and 8 × 16 hosts organized in accordance with the Fat-Tree network architecture (Al-Fares et al., 2008). We sampled the flow rates of 50 flows, as well as the associated aggregated flow rates at each node, at a timescale of 1 ms for a total length of 18432 ms. Each flow traverses a routing path with 5 nodes. We found that flow tensor time series of such length are enough to learn a stable flow pattern at a sub-second granularity.\n7The data can be generated by running the simulation code released in https://knagaraj@bitbucket.org/knagaraj/numfabric.git\nBaselines. We compare the proposed FlowNN with the following baselines: 1) Seasonal AutoRegressive Integrated Moving Average (SARIMA) (Hyndman & Khandakar, 2008; Wang et al., 2006); 2) GRU (Chung et al., 2014); 3) multiple GRUs (multiGRU); 4) STHGCN (Kalander et al., 2020); and 5) STGCN (Yu et al., 2018). In particular, GRU encodes and predicts each time-series sequence stand-alone, without any reference to the information from other spatial nodes. In contrast, multiGRU uses separate GRUs to encode each time-series sequence but predicts with a Fully-connected Neural Network (FNN) by jointly concatenating the codings of sequences from all spatial nodes. STHGCN is a recent work for networking traffic analysis, which uses a graph convolutional network (Kipf & Welling, 2016) and GRU to predict the spatial states of all nodes at each time step. Finally, STGCN is a widely applied model for transportation traffic analysis, which is built upon spatio-temporal graph convolution.\nImplementation Details. We use Adam (Kingma & Ba, 2014) with a learning rate of 1e−4. All hidden dimensions are set to 128 and the layer size for GRUs is 3. We train on sampled time-series windows of length 256. A batch of $N_b = 10$ is used to draw the samples for the contrastive induction loss in Equation 2. The initial embedding layer in FlowNN is performed for each node's data by separate FNNs. We first pre-train FlowNN with the contrastive induction loss until convergence and then fine-tune the model with the MSE loss (Equation 4) for different application tasks.\nEvaluation metrics. The following metrics are adopted for evaluating the quality of a prediction ŷ against the ground truth y.\nMean squared error: $MSE = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$ (4)\nRelative absolute error: $RAE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$ (5)\nCorrelation coefficient: $Corr = \frac{\sum_{i=1}^{n}(y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2 \sum_{i=1}^{n}(\hat{y}_i - \bar{\hat{y}})^2}}$ (6)\nwhere $\bar{y}$ and $\bar{\hat{y}}$ are the mean values of y and ŷ." }, { "heading": "4.1 PATH-WIDE VIEWS MATTER FOR IP TRAFFIC FLOW MODELLING!", "text": "We first use the one-step-ahead prediction task to demonstrate that the path-wide views of FlowNN are more informative than the widely used single-node or graph-level views in the literature.\nFig. 5a shows the validation MSE loss across training epochs for the one-step-ahead prediction task. The Naive solution directly uses the previous observation as the next-step prediction. The results of Naive and SARIMA provide a good testament to the prediction difficulty for traditional non-DL solutions in a millisecond-granularity networking prediction task. We can see that the loss of SARIMA is still as high as 80% of the Naive solution. However, the losses of all DL-based solutions are less than 20% of Naive. This shows the powerful advantage of the deep learning principle in difficult tasks. Table 1 shows the test performances of different methods. Specifically, GRU holds a single-node view and learns the time series of each node separately, and its achieved loss is only 22.5% of SARIMA's. By contrast, the other DL solutions harvest the information from all path nodes to predict (i.e., the path-wide views), and the loss is remarkably reduced further against GRU. As discussed in Path-level Coding, STGCN and STHGCN inherit the graph aggregation operation; their performances are better than GRU, but inferior to multiGRU. Such findings discourage the spatial aggregation operation of GCNs, as the data correlation exploited by GCN deviates a lot from the truly S-shaped correlation in IP flow data. Finally, the proposed FlowNN outperforms all the baselines.\nFig. 5b compares the predictions by different models against the ground truth. We can observe that FlowNN and multiGRU capture the trend of the ground truth accurately. Benefiting from the Induction Operation, FlowNN avoids the over-reliance on the far-distant history exhibited by the vanilla GRU and its variants, and shows the fastest response to the dynamic changes among the traffic flows." }, { "heading": "4.2 WIDE APPLICABILITY ON DIFFERENT APPLICATION TASKS", "text": "Multi-step-ahead predictions: We first examine the effectiveness of FlowNN on the multi-step-ahead prediction task. In this task, we first pretrain FlowNN based on the contrastive induction loss in Equation 2 until convergence.
Following the practice of the recent work on short-term (second-level granularity) traffic forecasting in (Mozo et al., 2018), we then take the output of the pre-trained FlowNN and finetune an FNN-based Readout layer in Fig. 4 to infer the mean of the multi-step-ahead predictions. The mean realizations of the multiple steps ahead are crucial to many short-term networking planning and control tasks in IP networks. Fig. 6 compares the test performances of different DL models on the NumFabric dataset. The test results show that FlowNN outperforms all these recent baselines on all metrics.\nEnd-to-end delay inference: In today's IP networks, it is important to guarantee that the provided QoS meets the subscribed service agreement, such as an end-to-end transmission delay of less than 50 ms for online gaming. However, it is difficult for human experts to build an accurate delay model in practical networks (Xiao et al., 2018; Rusek et al., 2019), since many factors may influence the delay, including dynamic traffic demands, packet processing, queueing, etc. In this task, we apply the learned traffic flow model from FlowNN to perform traffic-driven delay inference. Specifically, we apply the same FlowNN model pretrained as in the multi-step-ahead prediction task, and finetune a new FNN-based Readout layer to infer the next-step delay. Table 2 shows the test performances of different models. Although pretrained on a dataset of traffic flow rates, FlowNN still achieves the best results on the task of inferring data with a different physical meaning. This shows the robustness of FlowNN across different tasks." }, { "heading": "4.3 GENERALITY TEST ON OUT-OF-DISTRIBUTION DATASET", "text": "In this subsection, we test the model generality when a FlowNN model, pretrained with the contrastive induction loss on one dataset, say NumFabric, is used in an environment that is different from the one it was trained in. This is performed by testing on the one-step-ahead prediction task with an Out-Of-Distribution dataset (i.e., a cross-dataset test). Specifically, we take the FlowNN model pretrained on the NumFabric dataset and finetune the FNN-based Readout layer to test its prediction performance on the WIDE dataset. As a comparison, we also finetune the Readout layers of multiGRU and STHGCN already trained on NumFabric to test their performances on WIDE. From the test results shown in Table 3, we can observe that FlowNN achieves the best performances, except that its RAE is slightly higher. Moreover, the performance of FlowNN in the cross-dataset test is even better than the results of the other baselines achieved in the same-dataset test (except multiGRU). This shows the good generality and robustness of FlowNN.\nDiscussions: We observe that multiGRU, to a certain extent, also works well in the above experiments, although it is inferior to FlowNN. This can be explained by shortcut learning, as disclosed in (Geirhos et al., 2020). As illustrated by the data correlation in Section 2, the near-future realization of a node is highly correlated with the path-wide values at the last step. This creates a shortcut for multiGRU: a prediction that directly weights the last-step path-wide values will capture this kind of data correlation, although it cannot further extract the effect of local flow conservation along the routing path as FlowNN does."
}, { "heading": "5 CONCLUSION", "text": "In this paper, we formulated the crucial traffic flow modelling problem in IP networks, and develop a customized neural network–FlowNN and the associated Induction Operation to extract the domain-specific spatio-temporal data correlations in IP traffic flows. This study makes the pioneering work to design customized neural network and learning strategy by deeply combining the IP network structure and working mechanism. We reported the state-of-the-art performances for multiple practical networking application tasks, which demonstrates the strength of our approach. As a new ‘playground’ for both networking and deep learning communities, the research of network intelligence is still in its infancy. We hope this work will inspire more innovations in this field in future." } ]
2,020
null
SP:d9610d460905f545ccdd7524b9efc049ecdc0f25
[ "The paper introduces a new Markov chain Monte-Carlo (MCMC) algorithm to obtain and track the posterior distribution over unknown parameters in a non-linear system. Despite its simple elegance, i.e., the introduction of a data-driven _temporal forgetting factor_ into the usual Metropolis-Hastings algorithm, the approach is, to my knowledge, novel. Its discovery seems to be the result of the intersection between fields: system identification and Bayesian sampling techniques, leading to new bridges. " ]
Although the Bayesian paradigm provides a rigorous framework to estimate the full probability distribution over unknown parameters, its online implementation can be challenging due to heavy computational costs. This paper proposes Adaptive Recursive Markov Chain Monte Carlo (ARMCMC), which estimates the full probability density of model parameters while alleviating the shortcomings of conventional online approaches. These shortcomings include: being solely able to account for Gaussian noise, being applicable only to systems with the linear-in-the-parameters (LIP) constraint, or having persistent excitation (PE) requirements. In ARMCMC, we propose a variable jump distribution, which depends on a temporal forgetting factor. This allows one to adjust the trade-off between exploitation and exploration, depending on whether there is an abrupt change in the parameters being estimated. We prove that ARMCMC requires fewer samples to achieve the same precision and reliability compared to conventional MCMC approaches. We demonstrate our approach on two challenging benchmarks: the estimation of parameters in a soft bending actuator and in the Hunt-Crossley dynamic model. Our method shows at least 70% improvement in parameter point estimation accuracy and approximately 55% reduction in tracking error of the value of interest compared to recursive least squares and conventional MCMC.
[]
[ { "authors": [ "Niki Abolhassani", "Rajni Patel", "Mehrdad Moallem" ], "title": "Needle insertion into soft tissue: A survey", "venue": "Medical Engineering and Physics,", "year": 2007 }, { "authors": [ "Pedram Agand", "Mahdi Aliyari Shoorehdeli" ], "title": "Adaptive model learning of neural networks with uub stability for robot dynamic estimation", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Pedram Agand", "Hamid D Taghirad", "Ali Khaki-Sedigh" ], "title": "Particle filters for non-gaussian huntcrossley model of environment in bilateral teleoperation", "venue": "In 4th International Conference on Robotics and Mechatronics (ICROM),", "year": 2016 }, { "authors": [ "S Bhasin", "K Dupree", "PM Patre", "WE Dixon" ], "title": "Neural network control of a robot interacting with an uncertain hunt-crossley viscoelastic environment", "venue": "In Dynamic Systems and Control Conference,", "year": 2008 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition", "venue": "Machine Learning,", "year": 2006 }, { "authors": [ "Steve Brooks", "Andrew Gelman", "Galin Jones", "Xiao-Li Meng" ], "title": "Handbook of markov chain monte carlo", "venue": "CRC press,", "year": 2011 }, { "authors": [ "André S Carvalho", "Jorge M Martins" ], "title": "Exact restitution and generalizations for the hunt–crossley contact model", "venue": "Mechanism and Machine Theory,", "year": 2019 }, { "authors": [ "Noah J Cowan", "Ken Goldberg", "Gregory S Chirikjian", "Gabor Fichtinger", "Ron Alterovitz", "Kyle B Reed", "Vinutha Kallem", "Wooram Park", "Sarthak Misra", "Allison M Okamura" ], "title": "Robotic needle steering: Design, modeling, planning, and image guidance", "venue": "In Surgical Robotics,", "year": 2011 }, { "authors": [ "Nicola Diolaiti", "Claudio Melchiorri", "Stefano Stramigioli" ], "title": "Contact impedance estimation for robotic systems", "venue": "IEEE Transactions on Robotics,", "year": 2005 }, { "authors": [ "Peter J Green" ], "title": "Reversible jump markov chain monte carlo computation and bayesian model determination", "venue": null, "year": 1995 }, { "authors": [ "Peter L Green" ], "title": "Bayesian system identification of a nonlinear dynamical system using a novel variant of simulated annealing", "venue": "Mechanical Systems and Signal Processing,", "year": 2015 }, { "authors": [ "Amir Haddadi", "Keyvan Hashtrudi-Zaad" ], "title": "Real-time identification of hunt-crossley dynamic models of contact environments", "venue": "IEEE transactions on robotics,", "year": 2012 }, { "authors": [ "KH Hunt", "FRE Crossley" ], "title": "Coefficient of restitution interpreted as damping in vibroimpact", "venue": "Journal of applied mechanics,", "year": 1975 }, { "authors": [ "Dominik Joho", "Gian Diego Tipaldi", "Nikolas Engelhard", "Cyrill Stachniss", "Wolfram Burgard" ], "title": "Nonparametric bayesian models for unsupervised scene analysis and reconstruction. 
", "venue": "Robotics: Science and Systems", "year": 2013 }, { "authors": [ "Shima Khatibisepehr", "Biao Huang", "Swanand Khare" ], "title": "Design of inferential sensors in the process industry: A review of bayesian methods", "venue": "Journal of Process Control,", "year": 2013 }, { "authors": [ "Scott Kuindersma", "Roderic Grupen", "Andrew Barto" ], "title": "Variational bayesian optimization for runtime risk-sensitive control", "venue": "Robotics: Science and Systems VIII,", "year": 2012 }, { "authors": [ "Tomasz Kuśmierczyk", "Joseph Sakaya", "Arto Klami" ], "title": "Variational bayesian decision-making for continuous utilities", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Brett Ninness", "Soren Henriksen" ], "title": "Bayesian system identification via markov chain monte carlo techniques", "venue": null, "year": 2010 }, { "authors": [ "Ali Noormohammadi-Asl", "Hamid D Taghirad" ], "title": "Multi-goal motion planning using traveling salesman problem in belief space", "venue": "Information Sciences,", "year": 2019 }, { "authors": [ "Kaur Aare Saar", "Fabio Giardina", "Fumiya Iida" ], "title": "Model-free design optimization of a hopping robot and its comparison with a human designer", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Roberto Tempo", "Giuseppe Calafiore", "Fabrizio Dabbene" ], "title": "Randomized algorithms for analysis and control of uncertain systems: with applications", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Felipe Tobar" ], "title": "Bayesian nonparametric spectral estimation", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Beilun Wang", "Arshdeep Sekhon", "Yanjun Qi" ], "title": "A fast and scalable joint estimator for integrating additional knowledge in learning multiple related sparse Gaussian graphical models", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tao Wang", "Yunce Zhang", "Zheng Chen", "Shiqiang Zhu" ], "title": "Parameter identification and model-based nonlinear robust control of fluidic soft bending actuators", "venue": "IEEE/ASME Transactions on Mechatronics,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Bayesian methods are powerful tools to not only obtain a numerical estimate of a parameter but also to give a measure of confidence (Kuśmierczyk et al., 2019; Bishop, 2006; Joho et al., 2013). In particular, Bayesian inferences calculate the probability distribution of parameters rather than a point estimate, which is prevalent in frequentist paradigms (Tobar, 2018). One of the main advantages of probabilistic frameworks is that they enable decision making under uncertainty (NoormohammadiAsl & Taghirad, 2019). In addition, knowledge fusion is significantly facilitated in probabilistic frameworks; different sources of data or observations can be combined according to their level of certainty in a principled manner (Agand & Shoorehdeli, 2019). Nonetheless, Bayesian inferences require high computational effort for obtaining the whole probability distribution and prior general knowledge on noise distribution before estimation.\nOne of the most effective methods for Bayesian inferences is the Markov Chain Monte Carlo (MCMC) methods. In the field of system identification, MCMC variants such as the one recently proposed by Green (2015) are mostly focused on offline system identification. This is partly due to computational challenges which prevent real-time use (Kuindersma et al., 2012). The standard MCMC algorithm is not suitable for model variation since different candidates do not share the same parameter set. Green (1995) first introduced reversible jump Markov chain Monte Carlo (RJMCMC) as a method to address the model selection problem. In this method, an extra pseudo random variable is defined to handle dimension mismatch. There are further extensions of MCMC in the literature, however, an online implication of it has yet to be reported.\nMotion filtering and force prediction of robotic manipulators are important fields of study with interesting challenges suitable for Bayesian inferences to address (Saar et al., 2018). Here, measurements are inherently noisy, which is not desirable for control purposes. Likewise, inaccuracy, inaccessibility, and costs are typical challenges that make force measurement not ideal for practical use (Agand et al., 2016). Different environmental identification methods have been proposed in the literature\nfor linear and Gaussian noise (Wang et al., 2018); however, in cases of nonlinear models like HuntCrossley that does not have Gaussian noise (e.g. impulsive disorder), there is no optimal solution for the identification problem. Diolaiti et al. (2005) proposed a double-stage bootstrapped method for online identification of the Hunt-Crossley model, which is sensitive to parameter initial conditions. Carvalho & Martins (2019) proposed a method to determine the damping term in the Hunt-Crossley model. A neural network-based approach was introduced to control the contact/non-contact HuntCrossley model in (Bhasin et al., 2008)\nThis paper proposes a new technique, Adaptive Recursive Markov Chain Monte Carlo (ARMCMC), to address address certain weaknesses of traditional online identification methods, such as only being appllicable to systems Linear in Parameter (LIP), having Persistent Excitation (PE) requirements, and assuming Gaussian noise. ARMCMC is an online method that takes advantage of the previous posterior distribution, given that there is no sudden change in the parameter distribution. 
To achieve this, we define a new variable jump distribution that accounts for the degree of model mismatch using a temporal forgetting factor. The temporal forgetting factor is computed from a model mismatch index and determines whether ARMCMC employs modification or reinforcement to either restart or refine parameter distribution. As this factor is a function of the observed data rather than a simple user-defined constant, it can effectively adapt to the underlying dynamics of the system. We demonstrate our method using two different examples: soft bending actuator and Hunt-Crossley model and show favorable performance compared to state-of-the-art baselines.\nThe rest of this paper is organized as follows: In Sec. 2, introductory context about the Bayesian approach and MCMC is presented. Sec 3 is devoted to presenting the proposed ARMCMC approach in a step-by-step algorithm. Simulation results on a soft bending actuator with empirical results on a reality-based model of a soft contact environment capturing a Hunt-Crossley dynamic are presented in Sec. 4. Lastly, the final remarks and future directions are concluded in Sec 5." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 PROBLEM STATEMENT", "text": "In the Bayesian paradigm, estimates of parameters are given in the form of the posterior probability density function (pdf); this pdf can be continuously updated as new data points are received. Consider the following general model:\nY = F (X, θ) + ν, (1)\nwhere Y , X , θ, and ν are concurrent output, input, model parameters and noise vector, respectively. To calculate the posterior probability, the observed data along with a prior distribution are combined via Bayes’ rule (Khatibisepehr et al., 2013). The data includes input/output data pairs (X,Y ). We will be applying updates to the posterior pdf using batches of data points; hence, it will be convenient to partition the data as follows:\nDt = {(X,Y )tm , (X,Y )tm+2, · · · , (X,Y )tm+Ns+1}, (2) where Ns = Ts/T is the number of data points in each data pack with T, Ts being the data and algorithm sampling times, respectively. This partitioning is convenient for online applications, as Dt−1 should have been previously collected so that the algorithm can be executed from tm to tm + Ns+1 or algorithm time step t. Ultimately, inferences are completed at tm+Ns+2. Fig. 1 illustrates the timeline for the data and the algorithm. It is worth mentioning that the computation can be done in parallel by rendering the task of the adjacent algorithm step (e.g. phase A of algorithm t, phase B of algorithm t− 1 and phase C of algorithm t− 2 can all be done simultaneously) According to Bayes’ rule and assuming data points are independent and identically distributed ( i.i.d) in equation 1, we have\nP (θt|[Dt−1, Dt]) = P ( Dt|θt, Dt−1 ) P (θt|Dt−1)∫\nP ( D1|θt, Dt−1 ) P (θt|Dt−1)dθt , (3)\nwhere θt denotes the parameters at current time step. P (θt|Dt−1) is the prior distribution over parameters, which is also the posterior distribution at the previous algorithm sampling time. P ( Dt|θt, Dt−1 ) is the likelihood function which is obtained by the one-step-ahead prediction:\nŶ t|t−1 = F (Dt−1, θt), (4)\nwhere Ŷ t|t−1 is the prediction of the output in (1). If the model in (4) is valid, then the difference between the real output and predicted should be measurement noise, (i.e., Y t|t−1 − Ŷ t|t−1 = ν). 
Therefore, the model parameter may be updated as follows:\nP ( Dt|θt, Dt−1 ) = tm+Ns+1∏ tm+1 Pν ( Y t|t−1 − Ŷ t|t−1 ) , (5)\nwhere Pν is the probability distribution of noise. Note that there is no restriction on the type of noise probability distribution.\nRemark 1: As it was mentioned before, there is no need to know the exact probability distribution of noise. This probability distribution can be simply substituted with a Gaussian distribution, if one has minimal knowledge of the mean and variance of the data which can be easily obtained with preprocessing (Bishop, 2006)." }, { "heading": "2.2 MARKOV CHAIN MONTE CARLO", "text": "MCMC is often employed to compute the posterior pdf numerically. The multidimensional integral in (3) is approximated by samples drawn from the posterior pdf. The samples are first drawn from a different distribution called proposal distribution, denoted q(.), which can be sampled easier compared to the posterior. Brooks et al. (2011) discuss different types of MCMC implementations which may employ various proposal distributions and corresponding acceptance criteria. The main steps of the Metropolis-Hastings algorithm are listed as follows (Ninness & Henriksen, 2010):\n1. Set initial guess θ0 while P (θ0|Y ) > 0 for iteration k = 1, 2. Draw candidate parameter θcnd, at iteration k, from the proposal distribution, q(θcnd|θk−1) 3. Compute the acceptance probability,\nα(θcnd|θk−1) = min {\n1, P (θcnd|D)q(θk−1|θcnd) P (θk−1|D)q(θcnd|θk−1)\n} , (6)\n4. Generate a uniform random number γ in [0, 1], 5. ‘Accept’ candidate if γ ≤ α and ‘ignore’ it if γ > α, 6. Set iteration to k + 1 and go to step 2." }, { "heading": "2.3 PRECISION AND RELIABILITY", "text": "Two important notions in probabilistic framework to compare results are precision ( ) and reliability (δ). The former represents the proximity of a sample to the ground truth, and the latter represents the probability that an accepted sample lies within of the ground truth.\nLemma: Let Pk be k samples from MCMC, and E(Pk) denote its expected value. According to Chernoff bound (Tempo et al., 2012), given , δ ∈ [0, 1], if the number of samples (k) satisfies\nk ≥ 1 2 2 log( 2 1− δ ), (7)\nthen Pr { {Pk − E(Pk)} ≤ } ≥ δ.\nAlgorithm 1 ARMCMC Assumptions: 1) roughly noise mean (µν) 2) roughly noise variance (σν) 3) desired precision and reliability ( 0, δ0) 4) desired threshold for model mismatch (ζth) Goal: Online calculation of parameters posterior distribution given the consecutive t-th pack of data (P (θt|Dt)) Initialization: Prior knowledge for θ01 , n=0 Consider desire precision and reliability ( , δ) repeat\nPut t0 = n ∗Ns + 1 from (2), n++ Add new data pack to dataset Dt Model mismatch index: ζt from (10) if ζt < ζth then\nReinforcement: set prior knowledge equal to the latest posterior of previous pack Temporal forgetting factor: λt from (9)\nelse Modification: set prior knowledge θn1 Temporal forgetting factor: λt = 0 end if Set minimum iteration kmin from (12) for k = 1 to kmax do\nProposal distribution: • draw λk ∼ U(0, 1) • Variable jump distribution: qtk(.) from (8) Draw θt∗k ∼ qtk(.) Acceptance rate: α(.) from (6) Draw γ ∼ U(0, 1) if γ ≤ α then\n‘Accept’ the proposal end if\nend for Wait to build Dtm+Ns+1t0 (algorithm sample time)\nuntil No data is obtained" }, { "heading": "3 ARMCMC ALGORITHM", "text": "At each time interval, ARMCMC recursively estimates the posterior distribution by drawing samples. 
" }, { "heading": "2.3 PRECISION AND RELIABILITY", "text": "Two important notions in the probabilistic framework for comparing results are precision ($\epsilon$) and reliability ($\delta$). The former represents the proximity of a sample to the ground truth, and the latter represents the probability that an accepted sample lies within $\epsilon$ of the ground truth.\nLemma: Let $P_k$ be k samples from MCMC, and $E(P_k)$ denote their expected value. According to the Chernoff bound (Tempo et al., 2012), given $\epsilon, \delta \in [0, 1]$, if the number of samples (k) satisfies\n$k \geq \frac{1}{2\epsilon^2}\log\left(\frac{2}{1-\delta}\right)$, (7)\nthen $\Pr\{|P_k - E(P_k)| \leq \epsilon\} \geq \delta$.\nAlgorithm 1 ARMCMC\nAssumptions: 1) rough noise mean ($\mu_\nu$) 2) rough noise variance ($\sigma_\nu$) 3) desired precision and reliability ($\epsilon_0, \delta_0$) 4) desired threshold for model mismatch ($\zeta_{th}$)\nGoal: Online calculation of the parameters' posterior distribution given the consecutive t-th pack of data ($P(\theta^t|D^t)$)\nInitialization: Prior knowledge for $\theta^0_1$, n = 0\nConsider the desired precision and reliability ($\epsilon$, $\delta$)\nrepeat\n  Put $t_0 = n N_s + 1$ from (2), n++\n  Add the new data pack to the dataset $D^t$\n  Model mismatch index: $\zeta^t$ from (10)\n  if $\zeta^t < \zeta_{th}$ then\n    Reinforcement: set the prior knowledge equal to the latest posterior of the previous pack\n    Temporal forgetting factor: $\lambda^t$ from (9)\n  else\n    Modification: set the prior knowledge $\theta^n_1$\n    Temporal forgetting factor: $\lambda^t = 0$\n  end if\n  Set the minimum iteration $k_{min}$ from (12)\n  for k = 1 to $k_{max}$ do\n    Proposal distribution:\n    • draw $\lambda_k \sim U(0, 1)$\n    • Variable jump distribution: $q^t_k(\cdot)$ from (8)\n    Draw $\theta^{t*}_k \sim q^t_k(\cdot)$\n    Acceptance rate: $\alpha(\cdot)$ from (6)\n    Draw $\gamma \sim U(0, 1)$\n    if $\gamma \leq \alpha$ then\n      ‘Accept’ the proposal\n    end if\n  end for\n  Wait to build $D^{t_m+N_s+1}_{t_0}$ (algorithm sample time)\nuntil no data is obtained" }, { "heading": "3 ARMCMC ALGORITHM", "text": "At each time interval, ARMCMC recursively estimates the posterior distribution by drawing samples. The number of samples drawn is constrained by the desired precision and reliability, and by the real-time requirement. On the other hand, the maximum number of data points in each data pack, $N_s$, is limited by the frequency of model variation, and the minimum is confined by the shortest time required for the algorithm to remain real-time. We propose a variable jump distribution that enables both enriching and exploring. This necessitates the definition of the temporal forgetting factor as a measure that reflects the current underlying dynamics of the data. In other words, this parameter indicates the validity of the previous model for the current data. We also prove that ARMCMC achieves the same precision and reliability with fewer samples compared to traditional MCMC. Algorithm 1 summarizes ARMCMC." }, { "heading": "3.1 VARIABLE JUMP DISTRIBUTION", "text": "We propose a variable jump distribution (also known as a proposal distribution) to achieve faster convergence, thereby enabling real-time parameter estimation:\n$q^t_k(\theta^t|\theta^t_{k-1}) = \begin{cases} P(\theta^{t-1}|D^{t-1}) & \lambda_k \leq \lambda^t \\ N(\mu_D, \sigma_\nu) & \lambda_k > \lambda^t \end{cases}$, (8)\nwhere $\theta^t_{k-1}$ is the (k − 1)-th parameter sample given by the t-th data pack throughout the MCMC evaluation. Averaging the second half of these samples constructs $\theta^t$. $P(\theta^{t-1}|D^{t-1})$ is the posterior distribution of the parameters at the previous algorithm time step, and $N(\mu_D, \sigma_\nu)$ is a Gaussian distribution with $\mu_D, \sigma_\nu$ computed from the sample-based mean and variance of $D^{t-1}$.\nThe hyperparameter $\lambda^t$ (temporal forgetting factor) is an adaptive threshold for the t-th pack that takes inspiration from the forgetting factor in classical system identification terminology; it regulates how previous knowledge affects the posterior distribution. Smaller values of $\lambda^t$ intuitively mean that there may be a large sudden change in θ, and thus more exploration is needed. Conversely, larger values of $\lambda^t$ are appropriate when θ is changing slowly, and thus previous knowledge should be exploited. As more data is obtained, better precision and reliability can be achieved." }, { "heading": "3.2 TEMPORAL FORGETTING FACTOR", "text": "Depending on whether the distribution of the parameter θ has changed significantly, a new sample can be drawn according to the modification or the reinforcement mode. Reinforcement is employed to make the identified probability distribution more precise when it is not undergoing a sudden change. Modification is employed otherwise, to re-identify the distribution ”from scratch”. Therefore, we define a model mismatch index, denoted $\zeta^t$, such that when it surpasses a predefined threshold ($\zeta^t > \zeta_{th}$), modification is applied. Otherwise, if $\zeta^t \leq \zeta_{th}$, then $\zeta^t$ is used to determine $\lambda^t$ as follows:\n$\lambda^t = e^{-|\mu_\nu - \zeta^t|}$, (9)\nwhere $\mu_\nu$ is an estimate of the noise mean, obtained by calculating the expected value in relation (1). Note that employing modification is equivalent to setting $\lambda^t = 0$. The model mismatch index $\zeta^t$ itself is calculated by averaging the errors of the previous model given the current data:\n$\zeta^t = \frac{1}{N_s}\sum_{n=1}^{N_s}\Big(y^t_n - E_{\theta\in\theta^{t-1}}\big(F(D^t(n), \theta)\big)\Big), \quad \zeta^0 = \infty$ (10)\nRemark 2: The model mismatch index accounts for all sources of uncertainty in the system. To calculate $\zeta_{th}$, one needs to precalculate the persisting error between the designated model and the measured data. In other words, $\zeta_{th}$ is basically an upper bound on the unmodeled dynamics, disturbances, noise, and any other source of uncertainty in the system.
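A minimal sketch of Eqs. (8)-(10) is given below. The helper names and the use of a pool of retained samples to represent the previous posterior are our own illustrative assumptions; y_pred_mean stands for the expectation term inside Eq. (10).
```python
import numpy as np

def model_mismatch(y, y_pred_mean):
    """Model mismatch index of Eq. (10): the average one-step prediction
    error of the previous model over the current data pack."""
    return np.mean(np.asarray(y) - np.asarray(y_pred_mean))

def temporal_forgetting_factor(zeta_t, mu_nu, zeta_th):
    """Eq. (9) combined with the modification rule: lambda^t = 0 when the
    mismatch exceeds the threshold, exp(-|mu_nu - zeta^t|) otherwise."""
    return 0.0 if zeta_t > zeta_th else np.exp(-abs(mu_nu - zeta_t))

def variable_jump_sample(lam_t, prev_posterior_samples, mu_D, sigma_nu):
    """Variable jump proposal of Eq. (8): with probability lambda^t reuse a
    sample of the previous posterior, otherwise draw from N(mu_D, sigma_nu)."""
    if np.random.rand() <= lam_t:                      # lambda_k <= lambda^t
        idx = np.random.randint(len(prev_posterior_samples))
        return prev_posterior_samples[idx]
    return np.random.normal(mu_D, sigma_nu)            # exploration branch
```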
In addition, each pair in the algorithm time sample is weighted based on its temporal distance to concurrent time. Therefore Eq. (5) is modified as\nlog ( P (·) ) = tm+Ns+1∑ tm+1 logPν(e t),\net = ( Y tn − F t(Dt−1(n), θt) ) e−ρ(Ns−n),\n(11)\nwhere ρ ∈ [0, 1] is a design parameter that reflects the volatility of the model parameters, and et = [et1, ..., e t n, ..., e t Ns\n]. For systems with fast-paced parameters, ρ should take larger values. We are trying to solve the Bayesian optimization problem that gives us the pdf of the model parameters (θ), when given the data pairs in the presence of uncertainty." }, { "heading": "3.3 MINIMUM REQUIRED EVALUATION", "text": "Theorem 1. Let and δ be the desired precision and reliability. . Furthermore, it can be assumed that the initial sample has enough number of evaluations (as in (7)). To satisfy the inequality in Eq. (7), the minimum number of samples k in ARMCMC is calculated using this implicit relation:\nkmin = 1\n2 2 log(\n2\nλt(1− δ) + 2(1− λt)e−2 2(1−λt)kmin ). (12)\nProof. Samples from previous pdf: According to the variable jump distribution in (8), given k samples, the expected number of samples drawn from the previous posterior probability (P (θ|Dt)) is λtk. By assumption, the algorithm has already drawn at least k samples in the previous algorithm time-step. Consequently, by (7), the expected number of samples with distances less than from E(Pk) drawn from a previous distribution is at least λkδ.\nSamples from Gaussian: By (8), there are k0 = (1−λt)k samples drawn in expectation. According to (13), we have Pr { {Pk − E(Pk)} ≤ } ≥ δ0, where δ0 is given by rearranging (7):\nδ0 = 1− 2e−2 2k0 . (13)\nThus, the expected number of samples with distances less than from E(Pk) are at least δ0(1−λt)k. Overall reliability: The total expected number of samples with distances less than from E(Pk) is the summation of the two parts mentioned above. Hence it is obtained through dividing by k:\nδ1 = (λtkδ) + (δ0(1− λt)k)\nk (14)\nGiven the new obtained reliability, which is greater than the desired one, helps us decrease the number of evaluations. For the sake of illustration, Fig. 2 presents the minimum required number of evaluations with respect to λ for different precisions and reliabilities. As it can be seen, the MCMC is equal to ARMCM if λ is always set to one. The number of evaluations in ARMCMC mitigates as the validity of the previous model increases." }, { "heading": "4 RESULTS", "text": "In this section, we demonstrate the priority of the proposed approach given two different examples. First, we employ the proposed method to identify the soft bending actuator model and compare the results with a Recursive Least Squares (RLS). In the second example, we evaluate it on the Hunt-Crossley model given reality-based data and compare it with a simple MCMC and RLS." }, { "heading": "4.1 SIMULATION RESULTS", "text": "For this part, we consider the dynamic model of a fluid soft bending actuator. The dynamic is given by the following relation: (Wang et al., 2019)\nα̈ = q1(p− patm)− q2α̇− q3α uc sign(ps − p) √ |ps − p| =q4ṗ+ q5ṗp, ud = 0\nud sign(p− patm) √ |p− patm| = q6ṗ+ q7ṗp, uc = 0,\n(15)\nwhere α is the angle of the actuator, uc, ud are the control inputs, and p, ps, patm are the current, compressor and atmosphere pressure respectively. For this example, we assume q1 = 1408.50, q2 = 132.28, q3 = 3319.40 are known and patm = 101.3kpa, ps = 800kpa. We are trying to identify the four other parameters (q4, ..., q7). 
" }, { "heading": "4 RESULTS", "text": "In this section, we demonstrate the superiority of the proposed approach on two different examples. First, we employ the proposed method to identify the soft bending actuator model and compare the results with Recursive Least Squares (RLS). In the second example, we evaluate it on the Hunt-Crossley model given reality-based data and compare it with a simple MCMC and RLS." }, { "heading": "4.1 SIMULATION RESULTS", "text": "For this part, we consider the dynamic model of a fluidic soft bending actuator. The dynamics are given by the following relations (Wang et al., 2019):\n$\ddot{\alpha} = q_1(p - p_{atm}) - q_2\dot{\alpha} - q_3\alpha$, $\quad u_c\,\mathrm{sign}(p_s - p)\sqrt{|p_s - p|} = q_4\dot{p} + q_5\dot{p}p$ if $u_d = 0$, $\quad u_d\,\mathrm{sign}(p - p_{atm})\sqrt{|p - p_{atm}|} = q_6\dot{p} + q_7\dot{p}p$ if $u_c = 0$, (15)\nwhere α is the angle of the actuator, $u_c, u_d$ are the control inputs, and $p, p_s, p_{atm}$ are the current, compressor, and atmosphere pressures, respectively. For this example, we assume $q_1 = 1408.50$, $q_2 = 132.28$, $q_3 = 3319.40$ are known, and $p_{atm} = 101.3$ kPa, $p_s = 800$ kPa. We are trying to identify the four other parameters ($q_4, ..., q_7$). To this end, we assume the hybrid model below:\n$u\,\mathrm{sign}(p_s - p)\sqrt{|p_s - p|} = \theta_1\dot{p} + \theta_2\dot{p}p, \quad u = \{u_c, u_d\}$ (16)\nAs the range of these parameters is small, we scale the input vector by a factor of $10^7$ for RLS. Given the inputs ($u_c, u_d$) and the outputs ($p, \dot{p}$), we want to identify the parameters and estimate the current angle of the actuator, knowing its initial position at the origin. The data sample time is T = 1 ms and each data pack includes 100 samples, which results in an algorithm sample time equal to $T_s$ = 0.1 sec. The point estimate obtained by taking the mode during the modification phase and the median during the reinforcement phase of ARMCMC is denoted AR-MAPS. The point estimate results for the parameter estimation are shown in Fig. 3a. The true parameters are $q_4 = -2.14 \times 10^{-4}$, $q_5 = 6.12 \times 10^{-9}$, $q_6 = -9.76 \times 10^{-5}$, $q_7 = -1.90 \times 10^{-9}$. The second norms of the parameter errors are 0.0235 and $6.0053 \times 10^{-7}$ for $\theta_1, \theta_2$ with RLS, and 0.0089 and $1.1840 \times 10^{-7}$ with AR-MAPS, respectively. Moreover, the estimation of the angle is plotted in Fig. 3b." }, { "heading": "4.2 EMPIRICAL RESULTS", "text": "In this section, we demonstrate ARMCMC by identifying the parameters of the Hunt-Crossley model, which represents an environment involving a needle contacting soft material. The needle is mounted as an end-effector on a high-precision translational robot, which switches between two modes: free motion and contact. Due to abrupt changes in the model parameters when contact is established or lost, online estimation of the force is extremely challenging." }, { "heading": "4.2.1 CONTACT DYNAMIC MODEL", "text": "Consider the dynamics of contact as described by the Hunt-Crossley model, which is more consistent with the physics of contact than classical linear models such as Kelvin-Voigt (Haddadi & Hashtrudi-Zaad, 2012). In order to overcome the shortcomings of linear models, Hunt & Crossley (1975) proposed the following hybrid/nonlinear model:\n$f_e(x(t)) = \begin{cases} K_e x^p(t) + B_e x^p(t)\dot{x}(t) & x(t) \geq 0 \\ 0 & x(t) < 0 \end{cases}$, (17)\nin which $K_e$ and $B_e$ denote the nonlinear elastic and viscous force coefficients, respectively. The parameter p is typically between 1 and 2, depending on the material and the geometric properties of the contact. Also, $x(t), \dot{x}(t), f_e$ are the current position and velocity (as inputs X) and the contact force (as output Y in (1)) of a needle near or inside the soft material, with $x \geq 0$ representing the needle being inside. This needle can move freely in open space or penetrate the soft material; the forces on this needle are modeled using the Hunt-Crossley model. The practical problem we consider is to estimate the force at the tip of the needle by identifying the model parameters. $K_e, B_e, p$ are the three unknown parameters (θ in Eq. (1)) that need to be estimated. An online estimate of the environment force plays a pivotal role in stable interaction between robotic manipulators and unknown environments.
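For illustration, the hybrid force law of Eq. (17) translates directly into code. The vectorized helper below is a sketch under our own naming assumptions (clipping negative penetrations avoids evaluating a non-integer power of a negative number).
```python
import numpy as np

def hunt_crossley_force(x, x_dot, K_e, B_e, p):
    """Hunt-Crossley contact force of Eq. (17): zero in free motion
    (x < 0), K_e * x^p + B_e * x^p * x_dot during contact (x >= 0)."""
    x = np.asarray(x, dtype=float)
    x_dot = np.asarray(x_dot, dtype=float)
    xp = np.clip(x, 0.0, None) ** p   # x^p, only meaningful inside the material
    return np.where(x >= 0.0, K_e * xp + B_e * xp * x_dot, 0.0)
```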
To make the parameters ready for the regression problem, we have\n$\log(f_e) = \log(K_e x_s^p + B_e \dot{x}_s x_s^p)$, $\quad \log(f_e) = p\log(x_s) + \log(K_e + B_e \dot{x}_s)$. (18)\nWe will use the RLS method proposed in (Haddadi & Hashtrudi-Zaad, 2012) as a baseline of comparison, where it is necessary to make the assumption that $B_e/K_e\,\dot{x}_s \ll 1$. It should be noticed that the entries of the vector of parameters (θ) in the following relation are not independent, which may lead to divergence. With this assumption, we have\n$\log(1 + B_e/K_e\,\dot{x}_s) \approx B_e/K_e\,\dot{x}_s$, $\quad \log(f_e) = p\log(x_s) + \log(K_e) + B_e/K_e\,\dot{x}_s$, (19)\n$\phi = [1, \dot{x}_s, \ln(x_s)]$, $\quad \theta = [\log(K_e),\ B_e/K_e,\ p]^T$. (20)" }, { "heading": "4.2.2 SETUP", "text": "The data structure is the same as in the previous simulation. Prior knowledge of all three parameters ($K_e, B_e, p$) is initialized to N(1, 0.1) (a normal distribution with µ = 1 and σ = 0.1). Moreover, as more data is collected, the spread of the posterior pdf decreases. A bit after 5 seconds, the needle goes outside of the soft material and experiences zero force; this is equivalent to all parameters being set to zero. A color-based visualization of the probability distributions over time is shown for the three parameters in Fig. 4a. During the period of time when the whole space is blue (zero probability density), there is no contact and the parameter values are equal to zero.\nSince we are taking a Bayesian approach, we are able to estimate the entire posterior pdf. However, for the sake of illustration, point estimates are computed using the AR-MAPS method, which is shown in Fig. 4b for the time-varying parameters $\theta_1 = K_e$, $\theta_2 = B_e$, $\theta_3 = p$. During the times when the RLS results chatter due to the use of saturation (without it, the results would have diverged), the needle is transitioning from inside the soft material to outside, or vice versa. In addition, due to the assumption in (19), performance can deteriorate even when there is no mode transition. Furthermore, in the RLS approach, the estimated parameters suddenly diverge during free motion, since the regression vectors are linearly dependent. In contrast, with the Bayesian approach, this issue can be easily resolved. The result of ARMCMC is presented in Fig. 5, which shows the force estimation with the two different identification approaches. This probability of interest can be easily obtained from the parameter density at one's disposal." }, { "heading": "4.2.3 QUANTITATIVE COMPARISON", "text": "Quantitative details comparing a naive point estimate of the ARMCMC algorithm obtained by averaging the particles (AR-APS) and the RLS method are listed in Table 1. This reveals more than a 70% improvement in the precision of all model parameters throughout the study using the Mean Absolute Error (MAE) criterion, and also more than a 55% improvement in the second norm of the force estimation error. Among the parameters, the viscous coefficient ($B_e$) has the largest error in the RLS method, since it is underestimated due to the restrictive assumption in Eq. (19). The AR-MAPS approach further improves the performance of the parameter identification and the force estimation.\nWe also compare ARMCMC to MCMC. For the algorithm to run in real-time, MCMC requires more time to converge. For this example, with λ = 0.7, the value of $k_{min}$ is 15000 for MCMC but only 6000 for ARMCMC (more than two times faster), with $\epsilon = 0.01$, $\delta = 0.9$. Two approaches that can be used to fix this drawback are to reduce the number of samples, which results in worse precision and reliability compared to ARMCMC, or to increase the algorithm sample time, which would cause more delay in the estimation result and slower responses to changes in the parameters." }, { "heading": "5 CONCLUSIONS", "text": "This paper presented an algorithm for the online identification of the full probability distribution of model parameters in a Bayesian paradigm using an adaptive recursive MCMC. 
Due to the abrupt changes of model parameters, such as when contact with a soft environment is established or lost, conventional approaches suffer from low performance. Empirical results on the Hunt-Crossley model, as a nonlinear hybrid dynamic model, were compared with a well-known conventional identification process and revealed the proficiency of the proposed algorithm. The proposed method provides a systematic strategy for handling abrupt changes, which relaxes the pre-requirement conditions on the parameters. As future work, we will consider deploying a fully probabilistic framework from identification to control and decision making, to exploit the full potential of Bayesian optimization. Additionally, employing a method to compensate for the delay will be taken into consideration." }, { "heading": "A APPENDIX", "text": "According to Abolhassani et al. (2007), a nonlinear hybrid model based on a reality-based soft environment is considered as follows:\n$f_e = f_{st}(x, t, t_p) + f_{fr}(x, \dot{x}) + f_{ct}(x, t, t_p)$, (21)\nwhere x is the needle tip position and $t_p$ is the latest time of puncture. The initial position of the environment is assumed to be at the origin. The stiffness force ($f_{st}$) belongs to the pre-puncture phase, while the friction ($f_{fr}$) and cutting ($f_{ct}$) forces belong to the post-puncture phase. The stiffness force is modeled using the nonlinear Hunt-Crossley model:\n$f_{st}(x, t, t_p) = \begin{cases} 0 & x < 0 \\ K_e x^p(t) & 0 \leq x \leq x_1,\ t < t_p \\ 0 & x > x_2,\ t \geq t_p \end{cases}$ (22)\nwhere $K_e, p$ are the same parameters defined in (17). The maximum depth that the soft environment yields before the puncture and its position after it are denoted by $x_1, x_2$, respectively ($0 < x_2 < x_1$). In this study, the needle can insert up to 16.65 mm and 10.21 mm before and after penetration, respectively. The friction model is inspired by the modified Karnopp model:\n$f_{fr}(x, \dot{x}) = \begin{cases} C_n\,\mathrm{sgn}(\dot{x}) + B_e x^p \dot{x} & \dot{x} \leq -\Delta v/2 \\ \max(D_n, F_a) & -\Delta v/2 < \dot{x} \leq 0 \\ \max(D_p, F_a) & 0 < \dot{x} < \Delta v/2 \\ C_p\,\mathrm{sgn}(\dot{x}) + B_e x^p \dot{x} & \dot{x} \geq \Delta v/2 \end{cases}$ (23)\nwhere $C_n = -11.96 \times 10^{-3}$ and $C_p = 10.57 \times 10^{-3}$ are the negative and positive values of dynamic friction, $D_n = -0.01823$ and $D_p = 0.01845$ are the negative and positive values of static friction, and $B_e, p$ are the same as in Eq. (17). The relative velocity between the needle and tissue is denoted by $\dot{x}$, $\Delta v/2 = 0.005$ is the value below which the velocity is considered to be zero, and $F_a$ is the sum of the non-frictional applied forces. The cutting force is considered a static force, constant for a given tissue and needle geometry if the soft tissue is not inhomogeneous or anisotropic (Cowan et al., 2011):\n$f_{ct}(x, t, t_p) = \begin{cases} 0 & x \leq x_1,\ t < t_p \\ 0.94 & x > x_2,\ t \geq t_p \end{cases}$. (24)\nAccording to the previous relations, the system is considered a hybrid model covering both free motion and the in-contact environment. The manipulator is a translational mechanism with friction, slip, and a hysteresis loop in the actuator. To present the superiority of the proposed algorithm, the results are compared with the RLS method presented in (Haddadi & Hashtrudi-Zaad, 2012). To prevent the RLS results from diverging during model mismatch sequences, saturation is applied to the outputs of the identifier." } ]
2,020
null
SP:c3995e4d2f6dcf282fa8312606a43471c82f629f
[ "Authors present “mixture of experts” type of method to solve a clustering with unsupervised learning problem. Method is called as Mixture of Contrastive Experts (MiCE) which uses contrastive learning as a base module and combines it with latent mixture models. Authors develop a scalable algorithm for MiCE and empirically evaluate the proposed method for image clustering. " ]
We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model. Motivated by the mixture of experts, MiCE employs a gating function to partition an unlabeled dataset into subsets according to the latent semantics and multiple experts to discriminate distinct subsets of instances assigned to them in a contrastive learning manner. To solve the nontrivial inference and learning problems caused by the latent variables, we further develop a scalable variant of the Expectation-Maximization (EM) algorithm for MiCE and provide proof of the convergence. Empirically, we evaluate the clustering performance of MiCE on four widely adopted natural image datasets. MiCE achieves significantly better results than various previous methods and a strong contrastive learning baseline.
[ { "affiliations": [], "name": "IMAGE CLUSTERING" }, { "affiliations": [], "name": "Tsung Wei Tsai" }, { "affiliations": [], "name": "Chongxuan Li" }, { "affiliations": [], "name": "Jun Zhu" } ]
[ { "authors": [ "Philip Bachman", "Devon R. Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2006 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Shlomo E. Chazan", "Sharon Gannot", "Jacob Goldberger" ], "title": "Deep clustering based on a mixture of autoencoders", "venue": "IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP),", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Li Chongxuan", "Max Welling", "Jun Zhu", "Bo Zhang" ], "title": "Graphical generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2011 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2020 }, { "authors": [ "Luke Nicholas Darlow", "Amos Storkey" ], "title": "Dhog: Deep hierarchical object grouping", "venue": "arXiv preprint arXiv:2003.08821,", "year": 2020 }, { "authors": [ "Arthur P Dempster", "Nan M Laird", "Donald B Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1977 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-jia Li", "Kai Li", "Fei-fei Li" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Inderjit S Dhillon", "Dharmendra S Modha" ], "title": "Concept decompositions for large sparse text data using clustering", "venue": "Machine Learning,", "year": 2001 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Alexey Dosovitskiy", "Philipp Fischer", "Tobias Jost Springenberg", "Martin 
Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with exemplar convolutional neural networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2015 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2010 }, { "authors": [ "Philip Haeusser", "Johannes Plapp", "Vladimir Golkov", "Elie Aljalbout", "Daniel Cremers" ], "title": "Associative deep clustering: Training a classification network with no labels", "venue": "In German Conference on Pattern Recognition (GCPR),", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Kurt Hornik", "Ingo Feinerer", "Martin Kober" ], "title": "Spherical k-means clustering", "venue": "Journal of Statistical Software,", "year": 2012 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Jiabo Huang", "Shaogang Gong", "Xiatian Zhu" ], "title": "Deep semantic clustering by partition confidence maximisation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural Computation,", "year": 1991 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": "arXiv preprint arXiv:1611.05148,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], 
"title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "arXiv preprint arXiv:1901.09005,", "year": 2019 }, { "authors": [ "Andreas Kopf", "Vincent Fortuin", "Vignesh Ram Somnath", "Manfred Claassen" ], "title": "Mixture-of-experts variational autoencoder for clustering and generating from similarity-based representations", "venue": null, "year": 1910 }, { "authors": [ "Andreas Krause", "Pietro Perona", "Ryan G Gomes" ], "title": "Discriminative clustering by regularized information maximization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2010 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Junnan Li", "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven CH Hoi" ], "title": "Prototypical contrastive learning of unsupervised representations", "venue": "arXiv preprint arXiv:2005.04966,", "year": 2020 }, { "authors": [ "Xiaopeng Li", "Zhourong Chen", "K.M. Leonard Poon", "L. Nevin Zhang" ], "title": "Learning latent superstructures in variational autoencoders for deep multidimensional clustering", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Stuart Lloyd" ], "title": "Least squares quantization in pcm", "venue": "IEEE transactions on information theory,", "year": 1982 }, { "authors": [ "Zhuang Ma", "Michael Collins" ], "title": "Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency", "venue": "arXiv preprint arXiv:1809.01812,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2008 }, { "authors": [ "Geoffrey J McLachlan", "David Peel" ], "title": "Finite mixture models", "venue": null, "year": 2004 }, { "authors": [ "Sudipto Mukherjee", "Himanshu Asnani", "Eugene Lin", "Sreeram Kannan" ], "title": "Clustergan: Latent space clustering in generative adversarial networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Chuang Niu", "Jun Zhang", "Ge Wang", "Jimin Liang" ], "title": "Gatcluster: Self-supervised gaussian-attention network for image clustering", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Tianyu Pang", "Chao Du", "Jun Zhu" ], "title": "Max-mahalanobis linear discriminant analysis networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax crossentropy loss for adversarial robustness", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Jeff Donahue", "Trevor Darrell", "A. 
Alexei Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "David Sculley" ], "title": "Web-scale k-means clustering", "venue": "In Proceedings of the 19th International Conference on World Wide Web (WWW), pp", "year": 2010 }, { "authors": [ "Guy Shiran", "Daphna Weinshall" ], "title": "Multi-modal deep clustering: Unsupervised partitioning of images", "venue": "arXiv preprint arXiv:1912.02678,", "year": 2019 }, { "authors": [ "Padhraic Smyth" ], "title": "Model selection for probabilistic clustering using cross-validated likelihood", "venue": "Statistics and computing,", "year": 2000 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Tsung Wei Tsai", "Chongxuan Li", "Jun Zhu" ], "title": "Countering noisy labels by learning from auxiliary clean labels", "venue": "arXiv preprint arXiv:1905.13305,", "year": 2019 }, { "authors": [ "Wouter Van Gansbeke", "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Luc Van Gool" ], "title": "Scan: Learning to classify images without labels", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2010 }, { "authors": [ "CF Jeff Wu" ], "title": "On the convergence properties of the em algorithm", "venue": "The Annals of statistics,", "year": 1983 }, { "authors": [ "Jianlong Wu", "Keyu Long", "Fei Wang", "Chen Qian", "Cheng Li", "Zhouchen Lin", "Hongbin Zha" ], "title": "Deep comprehensive correlation mining for image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": 
"Unsupervised deep embedding for clustering analysis", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Bo Yang", "Xiao Fu", "Nicholas D Sidiropoulos", "Mingyi Hong" ], "title": "Towards k-means-friendly spaces: Simultaneous deep learning and clustering", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Linxiao Yang", "Ngai-Man Cheung", "Jiaying Li", "Jun Fang" ], "title": "Deep clustering by gaussian mixture variational autoencoders with graph embedding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Mang Ye", "Xu Zhang", "Pong C Yuen", "Shih-Fu Chang" ], "title": "Unsupervised embedding learning via invariant and spreading instance feature", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Lihi Zelnik-Manor", "Pietro Perona" ], "title": "Self-tuning spectral clustering", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2004 }, { "authors": [ "Dejiao Zhang", "Yifan Sun", "Brian Eriksson", "Laura Balzano" ], "title": "Deep unsupervised clustering using mixture of autoencoders", "venue": "arXiv: Learning,", "year": 2017 }, { "authors": [ "Richard Zhang", "Phillip Isola", "A. Alexei Efros" ], "title": "Colorful image colorization", "venue": "European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Junbo Zhao", "Michael Mathieu", "Ross Goroshin", "Yann Lecun" ], "title": "Stacked what-where auto-encoders", "venue": "arXiv preprint arXiv:1506.02351,", "year": 2015 }, { "authors": [ "Zhihua Zhou" ], "title": "A brief introduction to weakly supervised learning", "venue": "National Science Review,", "year": 2018 }, { "authors": [ "Pang" ], "title": "For a dataset with K ground-truth classes, we will generate K centers which are all `2-normalized in the R space. Please kindly note that the algorithm requires K", "venue": null, "year": 2020 }, { "authors": [ "2019 Ji et al", "Shiran", "2019 Weinshall", "Darlow", "2020). Storkey" ], "title": "For CIFAR-10, CIFAR-100, and STL-10, all the training and test images are jointly utilized, and the 20 superclasses of CIFAR-100 are used instead of the fine labels. The 15 classes of dog images are selected from the ILSVRC2012 1K (Deng et al., 2009) dataset and resized to 96× 96× 3 to form the ImageNet-Dog dataset (Chang", "venue": null, "year": 2009 }, { "authors": [ "Ji" ], "title": "Note that the numbers of the clusters are known in advance as in Chang et al", "venue": "Shiran & Weinshall", "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised clustering is a fundamental task that aims to partition data into distinct groups of similar ones without explicit human labels. Deep clustering methods (Xie et al., 2016; Wu et al., 2019) exploit the representations learned by neural networks and have made large progress on high-dimensional data recently. Often, such methods learn the representations for clustering by reconstructing data in a deterministic (Ghasedi Dizaji et al., 2017) or probabilistic manner (Jiang et al., 2016), or maximizing certain mutual information (Hu et al., 2017; Ji et al., 2019) (see Sec. 2 for the related work). Despite the recent advances, the representations learned by existing methods may not be discriminative enough to capture the semantic similarity between images.\nThe instance discrimination task (Wu et al., 2018; He et al., 2020) in contrastive learning has shown promise in pre-training representations transferable to downstream tasks through fine-tuning. Given that the literature (Shiran & Weinshall, 2019; Niu et al., 2020) shows improved representations can lead to better clustering results, we hypothesize that instance discrimination can improve the performance as well. A straightforward approach is to learn a classical clustering model, e.g. spherical k-means (Dhillon & Modha, 2001), directly on the representations pre-trained by the task. Such a two-stage baseline can achieve excellent clustering results (please refer to Tab. 1). However, because of the independence of the two stages, the baseline may not fully explore the semantic structures of the data when learning the representations and lead to a sub-optimal solution for clustering.\nTo this end, we propose Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering method that utilizes the instance discrimination task as a stepping stone to improve clustering. In particular, to capture the semantic structure explicitly, we formulate a mixture of conditional models by introducing latent variables to represent cluster labels of the images, which is inspired by the mixture of experts (MoE) formulation. In MiCE, each of the conditional models, also called an expert, learns to discriminate a subset of instances, while an input-dependent gating function partitions the dataset into subsets according to the latent semantics by allocating weights among experts. Further, we develop a scalable variant of the Expectation-Maximization (EM) algorithm (Dempster et al.,\n∗Corresponding author. 1Code is available at: https://github.com/TsungWeiTsai/MiCE\n1977) for the nontrivial inference and learning problems. In the E-step, we obtain the approximate inference of the posterior distribution of the latent variables given the observed data. In the M-step, we maximize the evidence lower bound (ELBO) of the log conditional likelihood with respect to all parameters. Theoretically, we show that the ELBO is bounded and the proposed EM algorithm leads to the convergence of ELBO. Moreover, we carefully discuss the algorithmic relation between MiCE and the two-stage baseline and show that the latter is a special instance of the former in a certain extreme case.\nCompared with existing clustering methods, MiCE has the following advantages. (i) Methodologically unified: MiCE conjoins the benefits of both the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model within a unified probabilistic framework. 
(ii) Free from regularization: MiCE trained by EM optimizes a single objective function, which does not require auxiliary loss or regularization terms. (iii) Empirically effective: Evaluated on four widely adopted natural image datasets, MiCE achieves significantly better results than a strong contrastive baseline and extensive prior clustering methods on several benchmarks without any form of pre-training." }, { "heading": "2 RELATED WORK", "text": "Deep clustering. Inspired by the success of deep learning, many researchers propose to learn the representations and cluster assignments simultaneously (Xie et al., 2016; Yang et al., 2016; 2017) based on data reconstruction (Xie et al., 2016; Yang et al., 2017), pairwise relationship among instances (Chang et al., 2017; Haeusser et al., 2018; Wu et al., 2019), multi-task learning (Shiran & Weinshall, 2019; Niu et al., 2020), etc. The joint training framework often ends up optimizing a weighted average of multiple loss functions. However, given that the validation dataset is barely provided, tuning the weights between the losses may be impractical (Ghasedi Dizaji et al., 2017).\nRecently, several methods also explore probabilistic modeling, and they introduce latent variables to represent the underlying classes. On one hand, deep generative approaches (Jiang et al., 2016; Dilokthanakul et al., 2016; Chongxuan et al., 2018; Mukherjee et al., 2019; Yang et al., 2019) attempt to capture the data generation process with a mixture of Gaussian prior on latent representations. However, the imposed assumptions can be violated in many cases, and capturing the true data distribution is challenging but may not be helpful to the clustering (Krause et al., 2010). On the other hand, discriminative approaches (Hu et al., 2017; Ji et al., 2019; Darlow & Storkey, 2020) directly model the mapping from the inputs to the cluster labels and maximize a form of mutual information, which often yields superior cluster accuracy. Despite the simplicity, the discriminative approaches discard the instance-specific details that can benefit clustering via improving the representations.\nBesides, MIXAE (Zhang et al., 2017), DAMIC (Chazan et al., 2019), and MoE-Sim-VAE (Kopf et al., 2019) combine the mixture of experts (MoE) formulation (Jacobs et al., 1991) with the data reconstruction task. However, either pre-training, regularization, or an extra clustering loss is required.\nContrastive learning. To learn discriminative representations, contrastive learning (Wu et al., 2018; Oord et al., 2018; He et al., 2020; Tian et al., 2019; Chen et al., 2020) incorporates various contrastive loss functions with different pretext tasks such as colorization (Zhang et al., 2016), context autoencoding (Pathak et al., 2016), and instance discrimination (Dosovitskiy et al., 2015; Wu et al., 2018). The pre-trained representations often achieve promising results on downstream tasks, e.g., depth prediction, object detection (Ren et al., 2015; He et al., 2017), and image classification (Kolesnikov et al., 2019), after fine-tuning with human labels. In particular, InstDisc (Wu et al., 2018) learns from instance-level discrimination using NCE (Gutmann & Hyvärinen, 2010), and maintains a memory bank to compute the loss function efficiently. MoCo replaces the memory bank with a queue and maintains an EMA of the student network as the teacher network to encourage consistent representations. A concurrent work called PCL (Li et al., 2020) also explores the semantic structures in contrastive learning. 
They add an auxiliary cluster-style objective function on top of MoCo's original objective, which differs from our method significantly. PCL requires an auxiliary k-means (Lloyd, 1982) algorithm to obtain the posterior estimates and the prototypes. Moreover, their aim of clustering is to induce transferable embeddings instead of discovering groups of data that correspond to underlying semantic classes." }, { "heading": "3 PRELIMINARY", "text": "We introduce the contrastive learning methods based on the instance discrimination task (Wu et al., 2018; Ye et al., 2019; He et al., 2020; Chen et al., 2020), with a particular focus on the recent state-of-the-art method, MoCo (He et al., 2020). Let X = {x_n}_{n=1}^N be a set of images without the ground-truth labels, and each datapoint x_n is assigned a unique surrogate label y_n ∈ {1, 2, ..., N} such that y_n ≠ y_j for all j ≠ n (the value of the surrogate label can be regarded as the index of the image). To learn representations in an unsupervised manner, instance discrimination considers a discriminative classifier that maps the given image to its surrogate label. Suppose that we have two encoder networks fθ and fθ′ that generate ℓ2-normalized embeddings v_{y_n} ∈ R^d and f_n ∈ R^d, respectively, given the image x_n with the surrogate label y_n. We show the parameters of the networks in the subscript, and images are transformed by a stochastic data augmentation module before passing to the networks (please see Appendix D). We can model the probability classifier with:
p(Y|X) = ∏_{n=1}^N p(y_n|x_n) = ∏_{n=1}^N exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ), (1)
where τ is the temperature hyper-parameter controlling the concentration level (Hinton et al., 2015). Due to the summation over the entire dataset in the denominator, obtaining the maximum likelihood estimate (MLE) of the parameters can be computationally prohibitive (Ma & Collins, 2018).
The recent contrastive learning methods mainly differ in: (1) the contrastive loss used to learn the network parameters, including NCE (Wu et al., 2018), InfoNCE (Oord et al., 2018), and the margin loss (Schroff et al., 2015); (2) the choice of the two encoder networks based on deep neural networks (DNNs), in which θ′ can be an identical (Ye et al., 2019; Chen et al., 2020), distinct (Tian et al., 2019), or an exponential moving average (EMA) (He et al., 2020) version of θ.
In particular, MoCo (He et al., 2020) learns by minimizing the InfoNCE loss:
− log [ exp(v_{y_n}^⊤ f_n / τ) / ( exp(v_{y_n}^⊤ f_n / τ) + Σ_{i=1}^ν exp(q_i^⊤ f_n / τ) ) ], (2)
where q ∈ R^{ν×d} is a queue of size ν ≤ N storing previous embeddings from fθ′. MoCo adopts the EMA approach to avoid rapidly changing embeddings in the queue, which would adversely impact the performance (He et al., 2020). For convenience, we refer to fθ and fθ′ as the student and teacher network, respectively (Tarvainen & Valpola, 2017; Tsai et al., 2019). In the following, we propose a unified latent mixture model based on contrastive learning to tackle the clustering task." }, { "heading": "4 MIXTURE OF CONTRASTIVE EXPERTS", "text": "Unsupervised clustering aims to partition a dataset X with N observations into K clusters. We introduce the latent variable z_n ∈ {1, 2, ..., K} to be the cluster label of the image x_n and naturally extend Eq. (1) to Mixture of Contrastive Experts (MiCE):
p(Y, Z|X) = ∏_{n=1}^N ∏_{k=1}^K p(y_n, z_n = k|x_n)^{1(z_n = k)} = ∏_{n=1}^N ∏_{k=1}^K [ p(z_n = k|x_n) p(y_n|x_n, z_n = k) ]^{1(z_n = k)}, (3)
where 1(·) is an indicator function. The formulation explicitly introduces a mixture model to capture the latent semantic structures, which is inspired by the mixture of experts (MoE) framework (Jacobs et al., 1991).
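To make Eqs. (1)–(3) concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' released code); all tensors are random stand-ins for encoder outputs, and for brevity every expert shares the same instance-discrimination score here, whereas MiCE gives each expert its own embeddings (see below):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the encoder outputs in Eqs. (1)-(3).
N, K, d, tau = 8, 3, 16, 1.0
f = F.normalize(torch.randn(N, d), dim=-1)        # student embeddings f_n
v = F.normalize(torch.randn(N, d), dim=-1)        # teacher embeddings v_{y_n}
gate_logits = torch.randn(N, K)                   # placeholder gating scores

# Eq. (1): instance-discrimination classifier p(y_n | x_n).
logits = f @ v.t() / tau                          # (N, N) similarity matrix
p_y_given_x = F.softmax(logits, dim=-1)           # row n is p(. | x_n)

# Eq. (3): mixture over K latent experts; marginalizing z recovers p(y | x).
p_z_given_x = F.softmax(gate_logits, dim=-1)                   # gating p(z | x)
p_y_given_xz = p_y_given_x.unsqueeze(1).expand(N, K, N)        # p(y | x, z=k)
p_y_x = (p_z_given_x.unsqueeze(-1) * p_y_given_xz).sum(dim=1)  # marginal p(y | x)
```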
In Eq. (3), p(y_n|x_n, z_n) is one of the experts that learn to discriminate a subset of instances, and p(z_n|x_n) is a gating function that partitions the dataset into subsets according to the latent semantics by routing the given input to one or a few experts. With a divide-and-conquer principle, the experts are often highly specialized in particular images that share similar semantics, which improves the learning efficiency. Notably, MiCE is generic to the choice of the underlying contrastive methods (Wu et al., 2018; He et al., 2020; Chen et al., 2020), while in this paper, we focus on an instance based on MoCo. Also, please see Fig. 1 for an illustration of MiCE with three experts.
In contrast to the original MoE used in the supervised settings (Jacobs et al., 1991), our experts learn from instance-wise discrimination instead of human labels. In addition, both gating and expert parts of MiCE are based on DNNs to fit the high-dimensional data. In the following, we will elaborate on how we parameterize the gating function and the experts to fit the clustering task. For simplicity, we omit the parameters in all probability distributions in this section.
Gating function. The gating function organizes the instance discrimination task into K simpler subtasks by weighting the experts based on the semantics of the input image. We define gψ as an encoder network that outputs an embedding for each input image. We denote the output vector for image x_n as g_n ∈ R^d. The gating function is then parameterized as:
p(z_n|x_n) = exp(ω_{z_n}^⊤ g_n / κ) / Σ_{k=1}^K exp(ω_k^⊤ g_n / κ), (4)
where κ is the temperature, and ω = {ω_k}_{k=1}^K represent the gating prototypes. All prototypes and image embeddings are ℓ2-normalized in the R^d space. Hence, the gating function performs a soft partitioning of the dataset based on the cosine similarity between the image embeddings and the gating prototypes. We can view it as a prototype-based discriminative clustering module, whereas we obtain the cluster labels using posterior inference to consider additional information in the experts.
Experts. In MiCE, every expert learns to solve the instance discrimination subtask arranged by the gating function. We define the expert in terms of the unnormalized model Φ(·) following Wu et al. (2018); He et al. (2020). Therefore, the probability of the image x_n being recognized as the y_n-th one by the z_n-th expert is formulated as follows:
p(y_n|x_n, z_n) = Φ(x_n, y_n, z_n) / Z(x_n, z_n), (5)
where Z(x_n, z_n) = Σ_{i=1}^N Φ(x_n, y_i, z_n) is a normalization constant that is often computationally intractable.
Similar to MoCo, we have the student network fθ that maps the image x_n into K continuous embeddings f_n = {f_{n,k}}_{k=1}^K ∈ R^{K×d}. Likewise, the teacher network fθ′ outputs v_{y_n} = {v_{y_n,k}}_{k=1}^K ∈ R^{K×d} given x_n. To be specific, f_{n,z_n} ∈ R^d and v_{y_n,z_n} ∈ R^d are the student embedding and the teacher embedding for image x_n under the z_n-th expert, respectively. We then parameterize the unnormalized model as:
Φ(x_n, y_n, z_n) = exp( v_{y_n,z_n}^⊤ (f_{n,z_n} + µ_{z_n}) / τ ), (6)
where τ is the temperature and µ = {µ_k}_{k=1}^K represent the cluster prototypes for the experts.
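A sketch of how Eqs. (4)–(6) can be computed, assuming toy stand-in tensors for the outputs of the gating network gψ, the student/teacher networks, and the two sets of prototypes (names are illustrative):

```python
import torch
import torch.nn.functional as F

N, K, d, tau, kappa = 8, 3, 16, 1.0, 1.0
g = F.normalize(torch.randn(N, d), dim=-1)        # gating embeddings g_n
omega = F.normalize(torch.randn(K, d), dim=-1)    # gating prototypes
f = F.normalize(torch.randn(N, K, d), dim=-1)     # student embeddings per expert
v = F.normalize(torch.randn(N, K, d), dim=-1)     # teacher embeddings per expert
mu = F.normalize(torch.randn(K, d), dim=-1)       # expert (cluster) prototypes

# Eq. (4): p(z_n | x_n) from cosine similarity to the gating prototypes.
p_z = F.softmax(g @ omega.t() / kappa, dim=-1)    # (N, K)

# Eq. (6): Phi(x_n, y_n, z_n) combines an instance-level term (v . f) and a
# class-level term (v . mu) inside one exponent.
log_phi = torch.einsum('nkd,nkd->nk', v, f + mu.unsqueeze(0)) / tau  # (N, K)
phi = log_phi.exp()
```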
In Eq. (6), the first instance-wise dot product explores the instance-level information to induce discriminative representations within each expert. The second instance-prototype dot product incorporates the class-level information into representation learning, encouraging a clear cluster structure around the prototype. Overall, the learned embeddings are therefore encoded with semantic structures while being discriminative enough to represent the instances. Eq. (6) is built upon MoCo with the EMA approach, while in principle, many other potential solutions exist to define the experts, which are left for future studies. Besides, the parameters θ and ψ are partially shared; please refer to Appendix D for more details on the architecture." }, { "heading": "5 INFERENCE AND LEARNING", "text": "We first discuss the evidence lower bound (ELBO), the single objective used in MiCE, in Sec. 5.1. Then, we present a scalable variant of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) to deal with the non-trivial inference and learning of MiCE in Sec. 5.2. Lastly, in Sec. 5.3, we show that a naïve two-stage baseline, in which we run a spherical k-means algorithm on the embeddings learned by MoCo, is a special case of MiCE." }, { "heading": "5.1 EVIDENCE LOWER BOUND (ELBO)", "text": "The parameters to update include the parameters θ, ψ of the student and gating network, respectively, and the expert prototypes µ = {µ_k}_{k=1}^K. The learning objective of MiCE is to maximize the evidence lower bound (ELBO) of the log conditional likelihood of the entire dataset. The ELBO of the datapoint n is given by:
log p(y_n|x_n) ≥ L(θ, ψ, µ; x_n, y_n) := E_{q(z_n|x_n, y_n)}[log p(y_n|x_n, z_n; θ, µ)] − D_KL(q(z_n|x_n, y_n) ‖ p(z_n|x_n; ψ)), (7)
where q(z_n|x_n, y_n) is a variational distribution to infer the latent cluster label given the observed data. The first term in Eq. (7) encourages q(z_n|x_n, y_n) to be high for the experts that are good at discriminating the input images. Intuitively, it can relieve the potential degeneracy issue (Caron et al., 2018; Ji et al., 2019), where all images are assigned to the same cluster. This is because a degenerate posterior puts the pressure of discriminating all images on a single expert, which may result in a looser ELBO. The second term in Eq. (7) is the Kullback–Leibler divergence between the variational distribution and the distribution defined by the gating function. With this term, the gating function is refined during training and considers the capability of the experts when partitioning data. Notably, MiCE does not rely on auxiliary loss or regularization terms as many prior methods (Haeusser et al., 2018; Shiran & Weinshall, 2019; Wu et al., 2019; Niu et al., 2020) do." }, { "heading": "5.2 EM ALGORITHM", "text": "E-step. Inferring the posterior distribution of latent variables given the observations is an important step to apply MiCE to clustering. According to Bayes' theorem, the posterior distribution given the current estimate of the model parameters is:
p(z_n|x_n, y_n; θ, ψ, µ) = p(z_n|x_n; ψ) p(y_n|x_n, z_n; θ, µ) / Σ_{k=1}^K p(k|x_n; ψ) p(y_n|x_n, k; θ, µ). (8)
Compared with the gating function p(z_n|x_n; ψ), the posterior provides better estimates of the latent variables by incorporating the supplementary information of the experts. However, we cannot tractably compute the posterior distribution because of the normalization constants Z(x_n, z_n; θ, µ). In fact, given the image x_n and the cluster label z_n, Z(x_n, z_n; θ, µ) sums over the entire dataset, which is prohibitive for large-scale image datasets. We present a simple and analytically tractable estimator to approximate them.
Specifically, we maintain a queue q ∈ R^{ν×K×d} that stores ν previous outputs of the teacher network, following MoCo closely. Formally, the estimator Ẑ(·) is:
Ẑ(x_n, z_n; θ, µ) = exp( v_{y_n,z_n}^⊤ (f_{n,z_n} + µ_{z_n}) / τ ) + Σ_{i=1}^ν exp( q_{i,z_n}^⊤ (f_{n,z_n} + µ_{z_n}) / τ ). (9)
The estimator is biased, but its bias decreases as ν increases, and we can get a sufficient amount of embeddings from the queue q efficiently (even though the bias does not vanish due to the use of the queue, we find that the approximation works well empirically). With Eq. (9), we approximate the posterior as:
q(z_n|x_n, y_n; θ, ψ, µ) = [ p(z_n|x_n; ψ) Φ(x_n, y_n, z_n; θ, µ) / Ẑ(x_n, z_n; θ, µ) ] / [ Σ_{k=1}^K p(k|x_n; ψ) Φ(x_n, y_n, k; θ, µ) / Ẑ(x_n, k; θ, µ) ]. (10)
M-step. We leverage stochastic gradient ascent to optimize the ELBO with respect to the network parameters θ, ψ and the expert prototypes µ. We approximate the normalization constants that appear in the ELBO in analogy to the E-step, formulated as follows:
L̃(θ, ψ, µ; x_n, y_n) = E_{q(z_n|x_n, y_n; θ, ψ, µ)}[ log( Φ(x_n, y_n, z_n; θ, µ) / Ẑ(x_n, z_n; θ, µ) ) ] − D_KL(q(z_n|x_n, y_n; θ, ψ, µ) ‖ p(z_n|x_n; ψ)). (11)
Sampling a mini-batch B of datapoints, we can construct an efficient stochastic estimator of the ELBO over the full dataset to learn θ, ψ and µ:
L(θ, ψ, µ; X, Y) ≈ (N / |B|) Σ_{n ∈ B} L̃(θ, ψ, µ; x_n, y_n). (12)
The update of the prototypes requires additional care, as discussed in many clustering methods (Sculley, 2010; Xie et al., 2016; Yang et al., 2017; Shiran & Weinshall, 2019). Some of them carefully adjust the learning rate of each prototype separately (Sculley, 2010; Yang et al., 2017), which can be very different from the one used for the network parameters. Since evaluating different learning rate schemes on the validation dataset is often infeasible in unsupervised clustering, we employ alternative strategies which are free from using per-prototype learning rates in MiCE.
As for the expert prototypes, we observe that using only the stochastic update can lead to bad local optima. Therefore, at the end of each training epoch, we apply an additional analytical update derived from the ELBO as follows:
µ̂_k = Σ_{n: ẑ_n = k} v_{y_n,k}, µ_k = µ̂_k / ‖µ̂_k‖_2, ∀k, (13)
where ẑ_n = argmax_k q(k|x_n, y_n; θ, ψ, µ) for all n is the hard assignment of the cluster label. Please refer to Appendix A.2 for the detailed derivation. Intuitively, the analytical update in Eq. (13) considers all the teacher embeddings assigned to the k-th cluster, instead of only the ones in a mini-batch, to avoid bad local optima.
Besides, we fix the gating prototypes ω to a set of pre-defined embeddings to stabilize the training process. However, using randomly initialized prototypes may cause unnecessary difficulties in partitioning the dataset if some of them are crowded together. We address the potential issue by using the means of a Max-Mahalanobis distribution (MMD) (Pang et al., 2018), which is a special case of the mixture of Gaussian distribution. The untrainable means in MMD provide the optimal inter-cluster dispersion (Pang et al., 2020) that stabilizes the gating outputs. We provide the algorithm of MMD in Appendix B and a systematic ablation study in Tab. 3 to investigate the effect of the updates on ω and µ. Lastly, we provide the formal proof of the convergence of the EM algorithm in Appendix A.4." }
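A minimal sketch of the E-step approximation (Eqs. (9)–(10)) with a queue of ν teacher embeddings, together with the end-of-epoch analytical prototype update (Eq. (13)); all tensors below are toy placeholders, not outputs of the actual networks:

```python
import torch
import torch.nn.functional as F

N, K, d, nu, tau = 8, 3, 16, 64, 1.0
f = F.normalize(torch.randn(N, K, d), dim=-1)       # student embeddings
v = F.normalize(torch.randn(N, K, d), dim=-1)       # teacher embeddings
mu = F.normalize(torch.randn(K, d), dim=-1)         # expert prototypes
queue = F.normalize(torch.randn(nu, K, d), dim=-1)  # past teacher outputs
p_z = torch.full((N, K), 1.0 / K)                   # gating output p(z | x)

s = f + mu.unsqueeze(0)                                  # (N, K, d)
pos = torch.einsum('nkd,nkd->nk', v, s) / tau            # positive logit, Eq. (9)
neg = torch.einsum('qkd,nkd->nqk', queue, s) / tau       # queue logits, Eq. (9)
log_Z_hat = torch.logsumexp(torch.cat([pos.unsqueeze(1), neg], dim=1), dim=1)

# Eq. (10): approximate posterior q(z_n | x_n, y_n).
q = F.softmax(p_z.log() + pos - log_Z_hat, dim=-1)

# Eq. (13): hard-assign, then average teacher embeddings per cluster.
z_hat = q.argmax(dim=-1)
for k in range(K):
    members = v[z_hat == k, k]                           # v_{y_n,k} assigned to k
    if len(members) > 0:
        mu[k] = F.normalize(members.sum(dim=0), dim=-1)
```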
, { "heading": "5.3 RELATIONS TO A TWO-STAGE BASELINE", "text": "The combination of a contrastive learning method and a clustering method is a natural baseline of MiCE. Our analysis reveals that MiCE is the general form of the two-stage baseline in which we learn the image embeddings with MoCo (He et al., 2020) and subsequently run a spherical k-means algorithm (Dhillon & Modha, 2001) to obtain the cluster labels.
On one hand, in the extreme case where κ → ∞ (Assumption A3), the student embeddings f_{n,k} and teacher embeddings v_{y_n,k} are identical for different k (Assumption A4), and the class-level information in Eq. (6) is omitted (Assumption A5), we arrive at the same Softmax classifier (Eq. (1)) and the InfoNCE loss (Eq. (2)) used by MoCo as a special case of our method. On the other hand, of particular relevance to the analytical update on the expert prototypes (Eq. (13)) is the spherical k-means algorithm (Dhillon & Modha, 2001) that leverages the cosine similarity to cluster ℓ2-normalized data (Hornik et al., 2012). In addition to Assumptions A3 and A4, if we assume the unnormalized model is perfectly self-normalized (Assumption A2), using the hard assignment to get the cluster labels together with the analytical update turns out to be a single-iteration spherical k-means algorithm on the teacher embeddings. Please refer to Appendix C for a detailed derivation.
The performance of the baseline is limited by the independence of the representation learning stage and the clustering stage. In contrast, MiCE provides a unified framework to align the representation learning and clustering objectives in a principled manner. See a comprehensive comparison in Tab. 1." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we present experimental results to demonstrate the effectiveness of MiCE. We compare MiCE with extensive prior clustering methods and the contrastive-learning-based two-stage baseline on four widely adopted benchmarking datasets for clustering, including STL-10 (Coates et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-Dog (Chang et al., 2017). The experiment settings follow the literature closely (Chang et al., 2017; Wu et al., 2019; Ji et al., 2019; Shiran & Weinshall, 2019; Darlow & Storkey, 2020), and the numbers of the clusters are known in advance. The statistics of the datasets are summarized in Tab. 2. We adopt three common metrics to evaluate the clustering performance, namely normalized mutual information (NMI), cluster accuracy (ACC), and adjusted rand index (ARI). All the metrics are presented in percentage (%). We use a 34-layer ResNet (ResNet-34) (He et al., 2016) as the backbone for MiCE and MoCo following the recent methods (Ji et al., 2019; Shiran & Weinshall, 2019) for fair comparisons. We set both temperatures τ and κ as 1.0, and the batch size as 256. The datasets, network, hyper-parameters, and training settings are discussed in detail in Appendix D." }, { "heading": "6.1 MAIN CLUSTERING RESULTS", "text": "Comparison with existing deep clustering methods. As shown in Tab. 1, MiCE outperforms the previous clustering approaches by a significant margin on all datasets. The comparison highlights the importance of exploring the discriminative representations and the semantic structures of the dataset.
Comparison with the two-stage baseline. Compared to the straightforward combination of MoCo and spherical k-means, MiCE explores the semantic structures of the dataset that improve the clustering performance. From Tab.
1, we can see that MiCE consistently outperforms the baseline in terms of the mean performance, which agrees with the analysis in Sec. 5.3. Specifically, regarding ACC, we improve upon the strong baseline by 8.7%, 2.7%, and 8.2% on CIFAR-10, CIFAR-100, and ImageNet-Dog, respectively. Taking the measurement variance into consideration, our performance overlaps with MoCo only on STL-10. We conjecture that the small data size may limit the performance as each expert learns from a subset of data. Nevertheless, the comparison manifests the significance of aligning the representation learning and clustering objectives in a unified framework, and we believe that MiCE points out a promising direction for future studies in clustering.
Visualization of the learned embeddings. We visualize the image embeddings produced by the gating network using t-SNE (Maaten & Hinton, 2008) in Fig. 2. Different colors denote the different ground-truth class labels. At the beginning, the embeddings from distinct classes are indistinguishable. MiCE progressively refines its estimates and ends up with embeddings that show a clear cluster structure. The learned clusters align with the ground-truth semantics well, which verifies the effectiveness of our method. Additional visualizations and the comparisons with MoCo are in Appendix E." }, { "heading": "6.2 ABLATION STUDIES", "text": "Simplified model (Tab. 3 (left)). We investigate the gating function and the unnormalized model to understand the contributions of different components. Using a simpler latent variable model often deteriorates the performance. (1) With a uniform prior, the experts would take extra effort to become specialized in a set of images with shared semantics. (2 & 3) The teacher embedding v_{y_n} is pushed to be close to all expert prototypes at the same time. It may be difficult for the simplified expert to encode the latent semantics while being discriminative. (4) The performance drop shows that the class-level information is essential for the image embeddings to capture the semantic structures of the dataset, even though the learned representations are still discriminative between instances. Without the term, the learned embeddings are mixed up and scattered over the embedding space without a clear cluster structure.
Prototypes update rules (Tab. 3 (right)). We also conduct ablation studies to gain insights into the different ways of handling the gating and expert prototypes. We see that (a) without Eq. (13), we may be stuck in bad local optima. As mentioned in Sec. 5.2, a possible reason is that we are using the same learning rate for all network parameters and prototypes (Sculley, 2010; Yang et al., 2017), but tuning separate learning rates for each prototype is impractical for unsupervised clustering. Hence, we derive the analytical update to tackle the issue. As for (b), it shows that the current gradient update rule avoids the potential inconsistency between the expert prototypes and the teacher embeddings during the mini-batch training. Lastly, as discussed in Sec. 5.2, compared to using (c) uniformly initialized gating prototypes projected onto the unit sphere, utilizing the means of MMD slightly improves performance. This also bypasses the potential learning rate issue that may appear in (d)."
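For reference, the three metrics used throughout Sec. 6 (NMI, ACC, ARI) can be computed as below; this is a standard implementation of the metrics as described, not the authors' evaluation script. NMI and ARI come from scikit-learn, and ACC uses the optimal cluster-to-class matching via the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def cluster_accuracy(y_true, y_pred):
    """ACC: best one-to-one cluster-to-class mapping (Hungarian algorithm)."""
    K = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((K, K), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)   # maximize matched counts
    return cost[row, col].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])         # permuted labels, perfect clustering
print(cluster_accuracy(y_true, y_pred))               # 1.0
print(normalized_mutual_info_score(y_true, y_pred))   # 1.0
print(adjusted_rand_score(y_true, y_pred))            # 1.0
```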
}, { "heading": "7 CONCLUSION", "text": "We present a principled probabilistic clustering method that conjoins the benefits of the discriminative representations learned by contrastive learning and the semantic structures introduced by the latent mixture model in a unified framework. With a divide-and-conquer principle, MiCE comprises an input-dependent gating function that distributes subtasks to one or a few specialized experts, and K experts that discriminate the subset of images based on instance-level and class-level information. To address the challenging inference and learning problems, we present a scalable variant of Expectation-Maximization (EM) algorithm, which maximizes the ELBO and is free from any other loss or regularization terms. Moreover, we show that MoCo with spherical k-means, one of the two-stage baselines, is a special case of MiCE under various assumptions. Empirically, MiCE outperforms extensive prior methods and the strong two-stage baseline by a significant margin on several benchmarking datasets.\nFor future work, one may explore different learning pretext tasks that potentially fit the clustering task, other than the instance discrimination one. Also, it would be an interesting and important future work to include dataset with a larger amount clusters, such as ImageNet. Besides, being able to obtain semantically meaningful clusters could be beneficial to weakly supervised settings (Zhou, 2018) where quality labels are scarce." }, { "heading": "8 ACKNOWLEDGEMENTS", "text": "The authors would like to thank Tianyu Pang and Zihao Wang for the discussion and the reviewers for the valuable suggestions. This work was supported by NSFC Projects (Nos. 62061136001, 61620106010, 62076145), Beijing NSF Project (No. JQ19016), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, and the NVIDIA NVAIL Program with GPU/DGX Acceleration. C. Li was supported by the fellowship of China postdoctoral Science Foundation (2020M680572), and the fellowship of China national postdoctoral program for innovative talents (BX20190172) and Shuimu Tsinghua Scholar." }, { "heading": "B ALGORITHM FOR GENERATING THE CENTERS OF MMD", "text": "We provide the algorithm on generating the centers of the Max-Mahalanobis distribution (MMD) in Algorithm 2. The gating prototypes ω are fix to these centers during training. The algorithm closely follows the one proposed by Pang et al. (2020). For a dataset with K ground-truth classes, we will generate K centers which are all `2-normalized in the Rd space. Please kindly note that the algorithm requires K ≤ (d+ 1) (Pang et al., 2020).\nAlgorithm 2: Algorithm to craft the gating prototypes ω as the centers of MMD Input: The dimension of each gating prototypes d and the number of the clusters K. Initialization: We initialize the first prototype ω1 with the first unit basis vector e1 ∈ Rd . Rest\nof the prototypes ωi, i 6= 1, are initialized with the zero vector 0d ∈ Rd 1 for i = 2 to K do 2 for j = 1 to i− 1 do 3 ωij = −[1/(K − 1) + ω>i ωj ]/ωjj 4 end 5 ωii = √ 1− ‖ωi‖22 6 end Return: The gating prototypes ω = {ωi}Ki=1." }, { "heading": "C RELATIONS TO THE TWO-STAGE BASELINE", "text": "C.1 CONTRASTIVE LEARNING\nAssumption A3. The gating temperature κ→∞ such that for all k, the prior\np(zn|xn) = 1\nK , ∀n.\nAssumption A4. There is only a single output layer for both student network fθ and teacher network fθ′ respectively. 
, { "heading": "C RELATIONS TO THE TWO-STAGE BASELINE", "text": "C.1 CONTRASTIVE LEARNING
Assumption A3. The gating temperature κ → ∞, so that the prior is uniform: p(z_n = k|x_n) = 1/K for all n and k.
Assumption A4. There is only a single output layer for the student network fθ and the teacher network fθ′, respectively. The resulting embeddings are used across all K experts; to be specific, for all possible cases, Φ(x_n, y_n, k) = exp( v_{y_n}^⊤ (f_n + µ_k) / τ ), where f_n = fθ(x_n) ∈ R^d and v_{y_n} = fθ′(x_n) ∈ R^d.
Assumption A5. The unnormalized model Φ(·) considers only the instance-level information, such that for all possible cases, Φ(x_n, y_n, z_n) = exp( v_{y_n,z_n}^⊤ f_{n,z_n} / τ ).
Theorem 2. If Assumptions A3–A5 hold, the overall output of MiCE becomes p(y_n|x_n) = exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ).
Proof. Under the assumptions, the expert satisfies p(y_n|x_n, z_n) = Φ(x_n, y_n, z_n) / Z(x_n, z_n) = exp( v_{y_n}^⊤ (f_n + µ_{z_n}) / τ ) / Σ_{i=1}^N exp( v_i^⊤ (f_n + µ_{z_n}) / τ ) (Assumption A4) = exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ) (Assumption A5). The overall output of MiCE is then p(y_n|x_n) = Σ_{k=1}^K p(z_n = k|x_n) p(y_n|x_n, z_n = k) = Σ_{k=1}^K (1/K) p(y_n|x_n, z_n = k) (Assumption A3) = exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ).
The simplified model shown in Theorem 2 is essentially the non-parametric Softmax classifier used by MoCo and InstDisc, which is also related to some recent contrastive learning methods (Ye et al., 2019; Bachman et al., 2019). Note that InstDisc adopts a slightly different implementation in which the teacher network is identical to the student network and the loss function is based on NCE (Gutmann & Hyvärinen, 2010). For detailed comparisons, please refer to He et al. (2020).
Lemma 1. If Assumptions A3–A5 hold, the posterior is uniformly distributed.
Proof. p(z_n|x_n, y_n) = [ p(z_n|x_n) Φ(x_n, y_n, z_n) / Z(x_n, z_n) ] / [ Σ_{k=1}^K p(k|x_n) Φ(x_n, y_n, k) / Z(x_n, k) ] = [ Φ(x_n, y_n, z_n) / Z(x_n, z_n) ] / [ Σ_{k=1}^K Φ(x_n, y_n, k) / Z(x_n, k) ] (Assumption A3). By Theorem 2, Φ(x_n, y_n, k) / Z(x_n, k) = exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ) for every k, so the ratio equals 1/K.
Assumption A6. The normalization constant is computationally tractable, such that we can have the variational distribution being identical to the posterior distribution.
Theorem 3. Given that A3–A6 hold, p(y_n|x_n) = exp(v_{y_n}^⊤ f_n / τ) / Σ_{i=1}^N exp(v_i^⊤ f_n / τ), and the tractable version of the ELBO is identical to the form of the InfoNCE (Oord et al., 2018) loss used by MoCo.
Proof. L̃(θ, ψ, µ; x_n, y_n) = E_{q(z_n|x_n, y_n)}[ log( Φ(x_n, y_n, z_n) / Ẑ(x_n, z_n) ) ] − D_KL(q(z_n|x_n, y_n) ‖ p(z_n|x_n)) = log( Φ(x_n, y_n, z_n) / Ẑ(x_n, z_n) ) (Lemma 1) = log[ exp(v_{y_n}^⊤ f_n / τ) / ( exp(v_{y_n}^⊤ f_n / τ) + Σ_{i=1}^ν exp(q_i^⊤ f_n / τ) ) ] (Theorem 2).
According to Theorems 2 and 3, we see that MoCo can be viewed as a special case of the proposed MiCE. In other words, they are able to learn the same representations under the same experimental setting if the above assumptions are made.
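Before the formal derivation in C.2 below, a minimal sketch of the spherical k-means iteration (Dhillon & Modha, 2001) that Theorem 4 recovers from the hard assignment and the analytical update; the data here are random toy embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(100, 16))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # l2-normalized embeddings
mu = V[:3].copy()                               # initialize 3 prototypes

for _ in range(10):
    z = (V @ mu.T).argmax(axis=1)               # assign by cosine similarity
    for k in range(mu.shape[0]):
        if (z == k).any():
            m = V[z == k].sum(axis=0)
            mu[k] = m / np.linalg.norm(m)       # project mean back to the sphere
```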
C.2 SPHERICAL k-MEANS
Theorem 4. If Assumptions A1–A4 hold, the analytical update on the expert prototypes is equivalent to a single-iteration spherical k-means algorithm on the teacher embeddings.
Proof. Under the assumptions, the variational distribution reduces to q(z_n|x_n, y_n) = Φ(x_n, y_n, z_n) / Σ_{k=1}^K Φ(x_n, y_n, k) (Assumptions A2 and A3) = exp( v_{y_n,z_n}^⊤ f_{n,z_n} / τ + v_{y_n,z_n}^⊤ µ_{z_n} / τ ) / Σ_{k=1}^K exp( v_{y_n,k}^⊤ f_{n,k} / τ + v_{y_n,k}^⊤ µ_k / τ ) = exp( v_{y_n}^⊤ f_n / τ + v_{y_n}^⊤ µ_{z_n} / τ ) / Σ_{k=1}^K exp( v_{y_n}^⊤ f_n / τ + v_{y_n}^⊤ µ_k / τ ) (Assumption A4) = exp( v_{y_n}^⊤ µ_{z_n} / τ ) / Σ_{k=1}^K exp( v_{y_n}^⊤ µ_k / τ ), such that the hard cluster assignment is determined by the cosine similarity between the teacher embedding and the expert prototypes: q̂(z_n|x_n, y_n) = 1 if z_n = argmax_k q(k|x_n, y_n) = argmax_k v_{y_n}^⊤ µ_k, and 0 otherwise.
With the simplified expert model, we follow the previous discussion on Eq. (14) and get the Lagrangian of the objective function: argmax_{µ_k} λ(1 − µ_k^⊤ µ_k) + Σ_{n=1}^N q̂(z_n = k|x_n, y_n) v_{y_n}^⊤ µ_k / τ. The analytical solution is therefore µ̂_k := Σ_{n=1}^N q̂(z_n = k|x_n, y_n) v_{y_n}, µ_k = µ̂_k / ‖µ̂_k‖, where the cluster assignment step and the prototype update rule are the same as the spherical k-means." }, { "heading": "D EXPERIMENT SETTINGS", "text": "We mainly compare with the methods that are trained from scratch without using a pre-trained model, and the experiment settings follow the literature closely (Chang et al., 2017; Wu et al., 2019; Ji et al., 2019; Shiran & Weinshall, 2019; Darlow & Storkey, 2020). For CIFAR-10, CIFAR-100, and STL-10, all the training and test images are jointly utilized, and the 20 superclasses of CIFAR-100 are used instead of the fine labels. The 15 classes of dog images are selected from the ILSVRC2012 1K (Deng et al., 2009) dataset and resized to 96 × 96 × 3 to form the ImageNet-Dog dataset (Chang et al., 2017; Wu et al., 2019). Note that the numbers of the clusters are known in advance as in Chang et al. (2017); Ji et al. (2019); Wu et al. (2019); Shiran & Weinshall (2019). The statistics of the datasets are summarized in Tab. 2. We adopt three common metrics to evaluate the clustering performance, namely normalized mutual information (NMI), cluster accuracy (ACC), and adjusted rand index (ARI). All the metrics are presented in percentage (%).
Regarding the network architecture, MiCE mainly uses a ResNet-34 (He et al., 2016) as the backbone, following the recent methods (Ji et al., 2019; Shiran & Weinshall, 2019) for fair comparisons. For the gating network, the output layer is replaced by a single fully connected layer that generates an ℓ2-normalized embedding in R^128. As for the student network, it uses the same ResNet-34 backbone as the gating network and includes K more fully connected layers which map the images to K embeddings. Therefore, the parameters of the student and gating networks are shared except for the output layers. The teacher network fθ′ is the exponential moving average (EMA) version of the student network fθ, which stabilizes the learning process (He et al., 2020; Tarvainen & Valpola, 2017). The update rule follows θ′ ← mθ′ + (1 − m)θ with m ∈ [0, 1) being the smoothing coefficient. In practice, we let m = 0.999 following MoCo. Since the images of CIFAR-10 and CIFAR-100 are smaller than ImageNet images, following (Chen et al., 2020), we replace the first 7x7 Conv of stride 2 with a 3x3 Conv of stride 1 for all experiments on CIFAR-10 and CIFAR-100. The first max-pooling operation is removed as well (Wu et al., 2018; Chen et al., 2020; Ye et al., 2019). Please kindly note that if the first max-pooling operation is not removed, MiCE can still achieve 83.6% ACC on CIFAR-10. For fair comparisons, MoCo also uses a ResNet-34 with a 128-dimensional output and follows the same hyper-parameter settings as MiCE.
As it is often infeasible to tune the hyper-parameters with a validation dataset in real-world clustering tasks (Ghasedi Dizaji et al., 2017), we set both temperatures τ and κ as 1.0. The queue size ν is set to 12800 for STL-10 because of the smaller data size and 16384 for the other three datasets. The data augmentation follows MoCo closely. Specifically, before passing into any of the embedding networks, images are randomly resized and cropped to the same size, followed by random gray-scale, random color jittering, and random horizontal flip. For a fair comparison, MoCo in the two-stage baseline also uses a ResNet-34 backbone.
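As a concrete reference for the EMA rule θ′ ← mθ′ + (1 − m)θ described above, a minimal sketch with m = 0.999; the two linear modules below are placeholders, not the actual ResNet-34:

```python
import torch

@torch.no_grad()
def ema_update(student, teacher, m=0.999):
    # theta' <- m * theta' + (1 - m) * theta, applied parameter-wise.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1.0 - m)

student = torch.nn.Linear(8, 4)
teacher = torch.nn.Linear(8, 4)
teacher.load_state_dict(student.state_dict())   # start from the same weights
ema_update(student, teacher)
```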
For all datasets, we use a batch size equal to 256. Note that the data augmentation strategy is critical to contrastive learning methods and MiCE, and we follow the one used by MoCo for fairness.
In terms of the optimization details, we use stochastic gradient descent (SGD) as our optimizer on the negative ELBO. We set the SGD weight decay as 0.0001 and the SGD momentum as 0.9 (He et al., 2020). The learning rate is initialized as 1.0 and is multiplied by 0.1 at three different epochs. For different datasets, the number of training epochs is different, to accommodate the data size and keep a similar and reasonable training time. For CIFAR-10/100, we train for 1000 epochs in total and multiply the learning rate by 0.1 at 480, 640, and 800 epochs. For STL-10, the total epochs are 6000 and the learning rate is multiplied by 0.1 at 3000, 4000, and 5000 epochs. Lastly, for ImageNet-Dog, the total epochs are 3000 and the learning rate is multiplied by 0.1 at 1500, 2000, and 2500 epochs. Also, the learning rate for the expert prototypes is the same as the one for the network parameters. All the experiments are trained on a single GPU.
For all experiment settings in the main text, we follow the recent methods (Wu et al., 2019; Ji et al., 2019; Shiran & Weinshall, 2019) closely, where the models are trained from scratch. The setting is different from some of the methods, including VaDE (Jiang et al., 2016), DGG (Yang et al., 2019), and LTVAE (Li et al., 2019), in two aspects. Firstly, we do not use any form of pre-trained model. Secondly, we focus on a purely unsupervised setting. In contrast, VaDE (Jiang et al., 2016) and DGG (Yang et al., 2019) use a supervised pre-trained model on ImageNet for STL-10. For fairness, in the original submission, we compare to many previous methods that use the same settings." }, { "heading": "E ADDITIONAL EXPERIMENTS AND VISUALIZATIONS", "text": "Posterior distribution and cluster predictions. From the left side of Fig. 3, we can see that in the initial stage (epoch 1), MiCE is not yet certain about the cluster labels of any given images. At the end of the training (epoch 1000), the major values of the approximated posterior distribution fall in the [0, 0.1) interval, indicating that the model is confident that images do not belong to those clusters.
After training, the learned model is able to generate sparse posterior distributions. The predicted cluster labels are also balanced across different clusters, which is shown on the right side of Fig. 3.
Visualization of embeddings of MiCE and MoCo. We present the t-SNE visualization of the embeddings learned by MiCE and MoCo in Fig. 4 and Fig. 5 to investigate whether the cluster structure and the latent semantics are captured by the models. The two figures differ in the way we color the datapoints.
In Fig. 4, different colors represent different cluster labels predicted by the clustering methods. We get the predicted cluster labels of MiCE based on the hard assignments using the posterior distributions. The cluster labels for MoCo are the outputs of the spherical k-means algorithm. MiCE can learn a distinct structure at the end of the training where each expert would mainly be responsible for one cluster. By comparing Fig. 4 (c) and (f), we can see that the boundaries between the clusters learned by spherical k-means do not match the structure learned by MoCo well. The divergence is caused by the independence of representation learning and clustering. MiCE solves the issue with a unified framework.
In Fig.
5, the datapoints are colored according to the ground-truth class labels that are unknown to the models. In Fig. 5(c), the cluster structure is highly aligned with the underlying semantics. Most of the clusters are filled with images from the same classes, while some difficult ones lie mostly on the boundaries. This verifies that the gating network learns to divide the dataset based on the latent semantics and allocates each of the images to one or a few experts. Please kindly note that we use the embeddings from the gating network instead of the student network for simplicity, since the embeddings are from the same output head. In contrast, the cluster structure learned by MoCo does not align with the semantics well. For all the above t-SNE visualizations, we set the perplexity hyper-parameter to 200.
Training time. We present the training time of MiCE and MoCo on CIFAR-10. It takes around 17 and 30 hours to train MoCo and MiCE for 1000 epochs, respectively. For all four datasets, experiments are conducted on a single GPU (NVIDIA GeForce GTX 1080 Ti).
Extra ablation studies on updating expert prototypes. A bad specification of the expert prototypes can lead to a bad result if it is not properly handled. Specifically, in Tab. 3 (a), we see that the ACC on CIFAR-10 is only 21.3%. We empirically verify two principled methods that solve the issue: (1) extra end-of-epoch training only on µ with stochastic gradient ascent, or (2) using Eq. (13).
The first method is used as follows. At the end of each epoch, we update µ while fixing the network parameters until the pre-defined convergence criteria are met. The convergence can be determined based on either the norm of the prototypes or the change of the objective function. To control the training time, we stop the update at the current epoch once we iterate through the entire dataset 20 times. We discover that with additional training, we can achieve 42.3% ACC. The result shows that with a proper update on µ, the proposed model can identify semantic clusters with stochastic gradient ascent alone. However, it requires more than 10 times the training time (around 394 hours).
The above discussions manifest the benefits of using Eq. (13), which is derived based on the same objective function. With the prototypical update, we can achieve similar results with minimal computational overhead. On average, we can achieve 42.2% ACC on CIFAR-100 as shown in Sec. 6.
Comparing MiCE to SCAN (Van Gansbeke et al., 2020). We provide additional experiment results to compare with SCAN (Van Gansbeke et al., 2020), which uses unsupervised pre-training, under a comparable experiment setting. As SCAN adopts a three-step training procedure, we think that it will provide additional insights to compare MiCE to SCAN at the steps after the pre-training step. The detailed results on CIFAR-10 are presented in Tab. 4.
Firstly, we focus on SCAN with two steps of training. MiCE outperforms SCAN with two steps of training. SCAN obtains 78.7% and 81.8% on CIFAR-10 with the SimCLR augmentation and RandAugment, respectively. In contrast, MiCE can get 83.4% even though we use a weaker augmentation strategy following MoCo (He et al., 2020) and InstDisc (Wu et al., 2018).
Since SCAN involves pre-training using SimCLR (Chen et al., 2020), it gains an additional advantage when directly compared to other methods without pre-training. Thus, we fine-tune the MiCE model with the same training protocol described in Appendix D (except the learning rate can be smaller).
We discover that MiCE can obtain higher results and outperforms SCAN, as shown in the second block of Tab. 4. To be specific, in the fine-tuning stage, we load the network parameters from the pre-trained model and use a smaller initial learning rate of 0.1, with all the other settings remaining the same. We show the augmentation strategy in parentheses if we use a different one from the pre-training stage. For the RandAugment (Cubuk et al., 2020), we follow the implementation (and hyper-parameters) based on the public code provided by SCAN. As mentioned in Van Gansbeke et al. (2020), the self-labeling stage requires a shift in the augmentation; otherwise, it will lead to a degenerate solution. In contrast, MiCE with pre-training is not prone to degeneracy and can get comparable or better performance than SCAN regardless of the choice of the augmentation strategy." }, { "heading": "F ADDITIONAL DISCUSSIONS", "text": "The number of the experts. In the cases where the number of the experts L differs from the number of ground-truth clusters K, the model will partition the datasets into L subsets instead of K. Even though the number of experts is currently tied with K, it is not a drawback and does not prevent us from applying it to common clustering settings. Also, MiCE does not use additional knowledge compared to the baseline methods. If the ground-truth K is not known, we may treat K as a hyperparameter and decide K following the methods described in Smyth (2000); McLachlan & Peel (2004), which is worth investigating in the future.
Overclustering. The overclustering technique (Ji et al., 2019) is orthogonal to our methods and can be applied with minor adaptations. However, it may require additional hyper-parameter tuning to ensure overclustering improves the results. From the supplementary file provided by IIC (Ji et al., 2019), we see that the numbers of clusters (for overclustering) are set differently for different datasets.
Although overclustering is an interesting technique, we would like to highlight the simplicity of the current version of MiCE.
Similarity between the two sets of prototypes. We do not expect the two sets of prototypes to be similar even though their dimensions are the same. In fact, they have different aims: the gating ones aim to divide the datasets into simpler subtasks for the experts, while the expert ones help the expert to solve the instance discrimination subtasks by introducing the class-level information. Therefore, the derived ELBO objective does not encourage the two sets of prototypes to be similar or to maintain a clear correspondence between them.
Empirically, we calculate the cosine similarity between any pair of µ and ω. The absolute values we get are less than 0.25, which shows that they are not similar. If we force them to be similar during training, the performance may be negatively affected due to the lack of flexibility." } ]
2021
null
SP:6357b56f8b11f6eb3ccd152460b4aff5ab9ff6d4
[ "The paper proposed the method of neural PDE as an improvement of neural ODE. In specific, neural PDE considers both the layer and the hidden dimension as continuous variables of the PDE. The new part of neural PDE compared to neural ODE is essentially solving PDE inverse problems (learning PDE from data) in the computational mathematics and engineering community, and the way of learning PDE (by embedding the PDE and initial condition into the loss function via automatic differentiation) is the physics-informed neural network (PINN) proposed in [Raissi et al., JCP, 2019]. The experiments show that compared to neural ODE, neural PDE achieves comparable accuracy but with less forward-pass inference time; but these experiments are not convincing enough." ]
Neural ordinary differential equations (neural ODEs) introduced an approach to approximate a neural network as a system of ODEs by considering its layer as a continuous variable and discretizing its hidden dimension. While having several good characteristics, neural ODEs are known to be numerically unstable and slow in solving their integral problems, resulting in errors and/or heavy computation in the forward-pass inference. In this work, we present a novel partial differential equation (PDE)-based approach that removes the necessity of solving integral problems and considers both the layer and the hidden dimension as continuous variables. Owing to the recent advancement of learning PDEs, the presented novel concept, called PR-Net, can be implemented. Our method shows comparable (or better) accuracy and robustness in much shorter forward-pass inference time for various datasets and tasks in comparison with neural ODEs and Isometric MobileNet V3. Given the efficient nature of PR-Net, it is suitable for deployment in resource-scarce environments, e.g., as a replacement for MobileNet.
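For context on the "learning PDEs" mechanism the abstract and the summary above refer to, here is a minimal, illustrative sketch of physics-informed training (Raissi et al., 2019), in which autograd supplies the PDE derivatives for a residual loss; the heat equation and the MLP below are stand-in examples, not the paper's actual PR-Net model:

```python
import torch
import torch.nn as nn

# An MLP u(t, x) is trained so that u_t - alpha * u_xx ~ 0; autograd provides
# the derivatives, and data / initial-condition terms would be added in practice.
u = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))
alpha = 0.1

def pde_residual(t, x):
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    out = u(torch.stack([t, x], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(out.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - alpha * u_xx

t, x = torch.rand(256), torch.rand(256)   # random collocation points
loss = pde_residual(t, x).pow(2).mean()   # + data / initial-condition terms
loss.backward()
```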
[]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "Feature purification: How adversarial training performs robust deep learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Aharon Azulay", "Yair Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Karianne J Bergen", "Paul A Johnson", "V Maarten", "Gregory C Beroza" ], "title": "Machine learning for data-driven discovery in solid earth geoscience", "venue": null, "year": 2019 }, { "authors": [ "Lukas Bossard", "M. Guillaumin", "L. Gool" ], "title": "Food-101 - mining discriminative components with random forests", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Michael B Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In NeurIPS", "year": 2018 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Marco Ciccone", "Marco Gallieri", "Jonathan Masci", "Christian Osendorfer", "Faustino Gomez" ], "title": "Naisnet: Stable deep networks from non-autonomous differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "M. Cimpoi", "S. Maji", "I. Kokkinos", "S. Mohamed", "A. 
Vedaldi" ], "title": "Describing textures in the wild", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Miles Cranmer", "Sam Greydanus", "Stephan Hoyer", "Peter Battaglia", "David Spergel", "Shirley Ho" ], "title": "Lagrangian neural networks", "venue": "In ICLR Deep Differential Equations Workshop,", "year": 2020 }, { "authors": [ "Talgat Daulbaev", "Alexandr Katrutsa", "Larisa Markeeva", "Julia Gusak", "Andrzej Cichocki", "Ivan Oseledets" ], "title": "Interpolated Adjoint Method for Neural ODEs", "venue": null, "year": 2003 }, { "authors": [ "Emmanuel de Bezenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep learning for physical processes: Incorporating prior scientific knowledge", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial robustness as a prior for learned representations, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling CNNs with simple transformations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Chris Finlay", "Jörn-Henrik Jacobsen", "Levon Nurbekyan", "Adam M Oberman" ], "title": "How to train your neural ode: the world of jacobian and kinetic regularization", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Amir Gholami", "Kurt Keutzer", "George Biros" ], "title": "Anode: Unconditionally accurate memory-efficient gradients for neural odes", "venue": "arXiv preprint arXiv:1902.10298,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Samuel Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Lian-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan", "Quoc V Le", "Hartwig Adam" ], "title": "Searching for mobilenetv3", "venue": null, "year": 2019 }, { "authors": [ "Hwajoon Kim" ], "title": "The solution of the heat equation without boundary conditions", "venue": "Dynamic Systems and Applications, 27:653–662,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Kookjin Lee", "Eric J Parish" ], "title": "Parameterized neural ordinary differential equations: Applications to computational physics problems", "venue": "arXiv preprint arXiv:2010.14685,", "year": 2020 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "PDE-net: Learning PDEs from data", "venue": null, "year": 2018 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to 
adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Subhransu Maji", "Esa Rahtu", "Juho Kannala", "Matthew Blaschko", "Andrea Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": null, "year": 2013 }, { "authors": [ "A. Mas-Colell" ], "title": "Cooperative Equilibrium, pp. 95–102", "venue": "Palgrave Macmillan UK, London,", "year": 1989 }, { "authors": [ "Stefano Massaroli", "Michael Poli", "Jinkyoo Park", "Atsushi Yamashita", "Hajime Asama" ], "title": "Dissecting neural odes, 2020", "venue": null, "year": 2020 }, { "authors": [ "Wei Peng", "W. Zhou", "Jun Zhang", "Wenbing Yao" ], "title": "Accelerating physics-informed neural network training with prior dictionaries", "venue": "ArXiv, abs/2004.08151,", "year": 2020 }, { "authors": [ "Hans Pinckaers", "Geert Litjens" ], "title": "Neural ordinary differential equations for semantic segmentation of individual colon glands, 2019", "venue": null, "year": 2019 }, { "authors": [ "Maziar Raissi" ], "title": "Deep hidden physics models: Deep learning of nonlinear partial differential equations", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Markus Reichstein", "Gustau Camps-Valls", "Bjorn Stevens", "Martin Jung", "Joachim Denzler", "Nuno Carvalhais" ], "title": "Deep learning and process understanding for data-driven earth system science", "venue": null, "year": 2019 }, { "authors": [ "David Rolnick", "Priya L Donti", "Lynn H Kaack", "Kelly Kochanski", "Alexandre Lacoste", "Kris Sankaran", "Andrew Slavin Ross", "Nikola Milojevic-Dupont", "Natasha Jaques", "Anna Waldman-Brown" ], "title": "Tackling climate change with machine learning", "venue": "arXiv preprint arXiv:1906.05433,", "year": 2019 }, { "authors": [ "Lars Ruthotto", "Eldad Haber" ], "title": "Deep neural networks motivated by partial differential equations", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2019 }, { "authors": [ "Hadi Salman", "Andrew Ilyas", "Logan Engstrom", "Ashish Kapoor", "Aleksander Madry" ], "title": "Do adversarially robust imagenet models transfer better", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A Riedmiller", "Raia Hadsell", "Peter Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": null, "year": 2018 }, { "authors": [ "Mark Sandler", "Jonathan Baccash", "Andrey Zhmoginov", "Andrew Howard" ], "title": "Non-discriminative data or weak model? on the relative importance of data and model resolution", "venue": "In ICCV Workshops,", "year": 2019 }, { "authors": [ "Xu Shen", "Xinmei Tian", "Anfeng He", "Shaoyan Sun", "Dacheng Tao" ], "title": "Transform-invariant convolutional neural networks for image classification and search", "venue": "In MM,", "year": 2016 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "L. 
Yang", "P. Luo", "C.C. Loy", "X. Tang" ], "title": "A large-scale car dataset for fine-grained categorization and verification", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Symplectic ode-net: Learning hamiltonian dynamics with control", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Juntang Zhuang", "Nicha Dvornek", "Xiaoxiao Li", "Sekhar Tatikonda", "Xenophon Papademetris", "James Duncan" ], "title": "Adaptive checkpoint adjoint method for gradient estimation in neural ode", "venue": "In ICML,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "It had been discovered that interpreting neural networks as differential equations is possible by several independent research groups (Weinan, 2017; Ruthotto & Haber, 2019; Lu et al., 2018; Ciccone et al., 2018; Chen et al., 2018; Gholami et al., 2019). Among them, the seminal neural ordinary differential equation (neural ODE) research work, which considers the general architecture in Figure 1 (a), is to learn a neural network approximating ∂h(t)∂t , where h(t) is a hidden vector at layer (or time) t (Chen et al., 2018). As such, a neural network is described by a system of ODEs, each ODE of which describes a dynamics of a hidden element. While neural ODEs have many good characteristics, they also have limitations, which are listed as follows:\nPros. Neural ODEs can interpret t as a continuous variable and we can have hidden vectors at any layer (or time) l by h(l) = h(0) + ∫ l 0 o(h(t), t;θo) dt, where o(h(t), t;θo) = ∂h(t) ∂t is a neural network parameterized by θo. Pros. Neural ODEs sometimes have smaller numbers of parameters than those of other conven-\ntional neural network designs, e.g., (Pinckaers & Litjens, 2019). Cons. Neural ODEs, which use an adaptive step-size ODE solver, sometimes show numerical\ninstability (i.e., the underflow error of the step-size) or their forward-pass inference can take a long time (i.e., too many steps) in solving integral problems, e.g, a forward-pass time of 37.6 seconds of ODE-Net vs. 9.8 seconds of PR-Net in Table 2. Several countermeasures have been proposed but it is unavoidable to solve integral problems (Zhuang et al., 2020; Finlay et al., 2020; Daulbaev et al., 2020).\nTo tackle the limitation, we propose the concept of partial differential equation-regularized neural network (PR-Net) to directly learn a hidden element, denoted h(d, t) at layer (or time) t ∈ [0, T ] and dimension d ∈ Rm. Under general contexts, a PDE consists of i) an initial condition at t = 0, ii) a boundary condition at a boundary location of the spatial domain Rm, and iii) a governing equation describing ∂h(d,t)∂t . As such, learning a PDE from data can be reduced to a regression-like problem to predict h(d, t) that meets its initial/boundary conditions and governing equation.\nIn training our proposed PR-Net, h(0) is provided by an earlier feature extraction layer, which is the same as neural ODEs. However, an appropriate governing equation is unknown for downstream\nmachine learning tasks. Therefore, we propose to train a regression model for predicting h(d, t) and its governing equation simultaneously (see Figure 1 (b)). In other words, neural ODEs directly learn a governing equation (i.e., ∂h(t)∂t ), whereas PR-Net learns a governing equation in conjunction with a regression model that conforms with the learned governing equation. The key advantage in our approach is that we can eliminate the necessity of solving integral problems — in neural ODEs, where we learn a governing equation only, solving integral problems is mandatory.\nSuch forward and inverse problems (i.e., solving PDEs for h(d, t) and identifying governing equations, respectively) arise in many important computational science problems and there have been many efforts applying machine learning/deep learning techniques to those problems (e.g., in earth science (Reichstein et al., 2019; Bergen et al., 2019) and climate science (Rolnick et al., 2019)). 
Recently, physics-informed or physics-aware approaches (Battaglia et al., 2016; Chang et al., 2017; de Bezenac et al., 2018; Raissi et al., 2019; Sanchez-Gonzalez et al., 2018; Long et al., 2018) have demonstrated that designing neural networks to incorporate prior scientific knowledge (e.g., by enforcing physical laws described in governing equations (Raissi et al., 2019)) greatly helps to avoid over-fitting and to improve the generalizability of the neural networks. There also exist several approaches that incorporate various ideas of classical mechanics in designing neural-ODE-type networks (Greydanus et al., 2019; Chen et al., 2020; Cranmer et al., 2020; Zhong et al., 2020; Lee & Parish, 2020). However, all these works are interested in solving either forward or inverse problems, whereas we solve the two different problem types at the same time for downstream tasks. The most similar existing work to ours is (Long et al., 2018). However, this work studied scientific PDEs and does not consider t as a continuous variable but uses a set of discretized points of t.

Compared to previous approaches, the proposed method has the distinct feature that forward and inverse problems are solved simultaneously with a continuous variable t. Due to this unique feature, the method can be applied to general machine learning downstream tasks, where we do not have a priori knowledge of governing equations, such as image classification. Our proposed PR-Net has the following characteristics:

Pros. PR-Net trains a regression model that outputs a scalar element h(d, t) (without solving any integral problems), and we can consider both d and t as continuous variables. Therefore, it is possible to construct flexible hidden dimension vectors.
Pros. PR-Net does not require solving integral problems. As such, there is no numerical instability, and its forward-pass time is much shorter than that of neural ODEs.
Pros. By learning a governing equation, we can regularize the overall behavior of PR-Net.
Cons. PR-Net sometimes requires a larger number of parameters than neural ODEs or conventional neural networks." }, { "heading": "2 PARTIAL DIFFERENTIAL EQUATIONS", "text": "The key difference between ODEs and PDEs is that PDEs can have derivatives with respect to multiple variables, whereas ODEs have only one such variable’s derivative. Therefore, our PDE-based method interprets both the layer of a neural network and the dimension of a hidden vector as continuous variables, which cannot be done in neural ODEs. In our context, h(d, t) means a hidden scalar element at layer t ∈ R and dimension d ∈ Rm, e.g., m = 1 if h(t) is a vector, m = 3 if h(t) is a convolutional feature map, and so on.

Figure 2: A neural network predicts solution values at d, t given initial conditions, denoted h(d, 0) for various d, and a governing equation.

Table 1: Two types of PDE problems related to our work
Type            | Data                                   | What to infer
Forward Problem | Initial condition; Governing equation  | Solution h(d, t)
Inverse Problem | Solution h(d, t); Initial condition    | Governing equation

In this section, we first introduce the forward and inverse problems of PDEs in general contexts (see Table 1). Then, we extend them to design our proposed method in deep-learning contexts.
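To make the contrast with neural ODEs concrete before formalizing the PDE problems, the following is a minimal sketch of the two inference styles: a neural ODE must numerically integrate $o(h(t), t; \theta_o)$ to reach layer l, whereas a PR-Net-style regression network is evaluated once per queried (d, t). The architectures, the tensor shapes, and the use of the torchdiffeq package are illustrative assumptions, not the paper's exact setup.

```python
import torch
from torchdiffeq import odeint  # assumed available; solver package by the neural ODE authors

hidden = 16

# Neural ODE: o(h(t), t; theta_o) approximates dh/dt; inference runs a solver.
class ODEFunc(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(hidden + 1, 64),
                                       torch.nn.Tanh(),
                                       torch.nn.Linear(64, hidden))

    def forward(self, t, h):
        t_col = t.reshape(1, 1).expand(h.shape[0], 1)  # append time as a feature
        return self.net(torch.cat([h, t_col], dim=-1))

h0 = torch.randn(8, hidden)                      # initial condition h(0)
h_l_ode = odeint(ODEFunc(), h0, torch.tensor([0.0, 1.0]))[-1]  # h(l) at l = 1

# PR-Net style: f(h(0), d, t) regresses the hidden element directly; no solver.
f = torch.nn.Sequential(torch.nn.Linear(hidden + 2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))

def prnet_h(h0, d, t):
    """One forward pass yields h(d, t) for any continuous d and t."""
    dt = torch.tensor([[d, t]]).expand(h0.shape[0], 2)
    return f(torch.cat([h0, dt], dim=-1)).squeeze(-1)

h_dl = prnet_h(h0, d=3.0, t=1.0)                 # hidden element at (d, t) = (3, 1)
```

The absence of the solver call in the second half is exactly what removes the numerical-instability and step-count concerns discussed above.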
}, { "heading": "2.1 FORWARD PROBLEM OF PDES IN GENERAL CONTEXTS", "text": "The forward PDE problem in general contexts is to find a solution h(d, t), where d is in a spatial domain Rm and t is in a time domain [0, T ], given i) an initial condition h(d, 0), ii) a boundary condition h(dbc, t), where dbc is a boundary location of the spatial domain Rm, and iii) a governing equation g (Raissi et al., 2019) We note that the boundary condition can be missing in some cases (Kim, 2018). The governing equation is typically in the following form with particular choices of αi,j (Raissi, 2018; Peng et al., 2020):\ng(d, t;h) def = ht − ( α0,0 + α1,0h+ α2,0h 2 + α3,0h 3\n+ α0,1hd + α1,1hhd + α2,1h 2hd + α3,1h 3hd\n+ α0,2hdd + α1,2hhdd + α2,2h 2hdd + α3,2h 3hdd\n+ α0,3hddd + α1,3hhddd + α2,3h 2hddd + α3,3h 3hddd ) ,\n(1)\nwhere ht = ∂h(d,t) ∂t , hd = ∂h(d,t) ∂d , hdd = ∂2h(d,t) ∂d2 , and hddd = ∂3h(d,t) ∂d3 . We also note that g is always zero in all PDEs, i.e., g(d, t;h) = 0.\nIn many cases, it is hard to solve the forward problem and hence general purpose PDE solvers do not exist. Nevertheless, one can use the following optimization to train a neural network f(d, t;θ) to approximate the solution function h(d, t) as shown in Figure 2 (Raissi et al., 2019):\nargmin θ LI + LB + LG, (2)\nLI def =\n1\nNI ∑ d ( f(d, 0;θ)− h(d, 0) )2 , (3)\nLB def =\n1\nNB ∑ (dbc,t) ( f(dbc, t;θ)− h(dbc, t) )2 , (4)\nLG def =\n1\nNG ∑ (d,t) g(d, t; f,θ)2, (5)\nwhere NI , NB , NG are the numbers of training samples, LI is to train θ for the initial condition, LB is for the boundary condition, and LG is for the governing equation. Because the governing equation is always zero, we simply minimize its squared term. Note that i) ft, fd, fdd, fddd can be easily constructed using the automatic differentiation implemented in TensorFlow or PyTorch, and ii) we only need h(d, 0), h(dbc, t), which are known a priori, to train the parameters θ." }, { "heading": "2.2 INVERSE PROBLEM OF PDES IN GENERAL CONTEXTS", "text": "The inverse problem is to find a governing equation given i) an initial condition h(d, 0) and ii) a solution function h(d, t) (Raissi, 2018). It learns αi,j in Eq. 1 with the following loss (if possible, they use reference solutions as well):\nargmin αi,j\n1\nNG ∑ (d,t) g(d, t;h)2.\nGiven a solution function h and its partial derivative terms, we train αi,j by minimizing the objective loss. Note that we know h in this case. Therefore, the objective loss is defined with h rather than with f , unlike Eq. 5.\nThe optimal solution of αi,j is not unique sometimes. However, we note that no trivial solutions, e.g., αi,j = 0 for all i, j, exist for the inverse problem." }, { "heading": "3 PDE-REGULARIZED NEURAL NETWORKS", "text": "Our goal in this work is to replace a system of ODEs (cf. Figure 1 (a)) with a PDE. Assuming that a target task-specific PDE is known a priori, given an initial condition h(0) extracted by the feature extractor from a sample x, a forward problem can be solved via the method described in Section 2.1. However, a target task-specific PDE is not known a priori in general, and thus, the governing equation should be learned from data via solving the inverse problem. Unfortunately, the solution function h(d, t)) is not also known a priori in our setting. Therefore, we make an assumption on the governing equation that it consists of the most common partial derivative terms (cf. Eq. 
" }, { "heading": "3 PDE-REGULARIZED NEURAL NETWORKS", "text": "Our goal in this work is to replace a system of ODEs (cf. Figure 1 (a)) with a PDE. Assuming that a target task-specific PDE is known a priori, given an initial condition h(0) extracted by the feature extractor from a sample x, a forward problem can be solved via the method described in Section 2.1. However, a target task-specific PDE is not known a priori in general, and thus the governing equation should be learned from data by solving the inverse problem. Unfortunately, the solution function h(d, t) is also not known a priori in our setting. Therefore, we make the assumption that the governing equation consists of the most common partial derivative terms (cf. Eq. 1), and then we propose to solve the forward and the inverse problems alternately: to train θ, we fix the governing equation g (more precisely, αi,j for all i, j), and to train αi,j for all i, j, we fix θ.

How to Solve Forward Problem. We customize the method presented in Section 2.1 by i) adding a task-specific loss, e.g., cross-entropy loss for image classification, ii) parameterizing the neural network f by the initial condition h(0), and iii) dropping the boundary condition. Let f(h(0), d, t; θ) be our neural network to approximate h(d, t) given the varying initial condition h(0).¹ The definition of the governing equation is also extended to g(d, t; f, h(0), θ). We use the following loss definition to train θ:

$$\operatorname{argmin}_{\theta} \; L_T + \hat{L}_I + \hat{L}_G, \quad (6)$$
$$\hat{L}_I \stackrel{\text{def}}{=} \frac{1}{N_X} \sum_{x \in X} \Big( \frac{1}{\dim(h)} \sum_{d} \big( f(h(0), d, 0; \theta) - h(d, 0) \big)^2 \Big), \quad (7)$$
$$\hat{L}_G \stackrel{\text{def}}{=} \frac{1}{N_X} \sum_{x \in X} \Big( \frac{1}{N_H} \sum_{(d, t) \in H} g(d, t; f, h(0), \theta)^2 \Big), \quad (8)$$

where $L_T$ is a task-specific loss, X is a training set, and H is a set of (d, t) pairs, where d ∈ R≥0, t ∈ R≥0, with which we construct the hidden vector that will be used for downstream tasks, denoted by $h^{task}$ (see Figure 3).

We query f(h(0), d, t; θ) with the (d, t) pairs in H to construct $h^{task}$. One more important point to note is that, in order to better construct $h^{task}$, we can train even the pairs in H as follows: $\operatorname{argmin}_{(d,t) \in H} L_T$ (line 7 in Alg. 1). Thus, the elements of $h^{task}$ can be collected from different dimensions and layers. A similar approach to optimize the end time of the integral was attempted for neural ODEs in (Massaroli et al., 2020).

How to Solve Inverse Problem. After fixing θ, we train αi,j for all i, j by using the following L1-regularized loss with a coefficient w:

$$\operatorname{argmin}_{\alpha_{i,j}} \; \hat{L}_G + R_G, \quad (9)$$
$$R_G \stackrel{\text{def}}{=} w \sum_{i,j} |\alpha_{i,j}|. \quad (10)$$

We minimize the sum of $|\alpha_{i,j}|$ to induce a sparse governing equation, both according to Occam’s razor and because the governing equations of many PDEs are sparse. This optimization allows us to choose a sparse solution among many possible governing equations. In many cases, therefore, our regularized inverse problem can be uniquely solved.

¹ Therefore, one can consider that our neural network f approximates a general solution rather than a particular solution. A general solution means a solution of a PDE with no specified initial conditions, and a particular solution means a solution of a PDE given an initial condition. Both neural ODEs and PR-Net approximate general solutions because initial conditions are varied.

Training Algorithm. Our overall training algorithm is in Alg. 1. We alternately train θ, (d, t) ∈ H, and αi,j for all i, j. The forward problem to train θ becomes a well-posed problem (i.e., its solution always exists and is unique) if the neural network f is analytical or, equivalently, uniformly Lipschitz continuous (Chen et al., 2018). Many neural network operators are analytical, such as softplus, fully-connected, and exponential. Under the mild condition of analytical neural networks, therefore, well-posedness can be fulfilled. The inverse problem can also be uniquely solved in many cases due to the sparseness requirement. As a result, our proposed training algorithm can converge to a cooperative equilibrium. Note that θ, (d, t) ∈ H, and αi,j for all i, j cooperate to minimize $L_T + \hat{L}_I + \hat{L}_G + R_G$. Therefore, the proposed training method can be seen as a cooperative game (Mas-Colell, 1989).
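As a rough illustration of the alternating scheme in Alg. 1, the following sketch interleaves the three updates (θ, the (d, t) pairs in H, and the coefficients αi,j); the stand-in architectures, optimizers, truncated residual, and update order are assumptions for illustration only, and the initial-condition loss $\hat{L}_I$ is omitted since the toy setup has no h(d, 0) targets.

```python
import torch
import torch.nn.functional as F

# Minimal stand-ins; all shapes, architectures, and values are assumptions.
feature_extractor = torch.nn.Linear(32, 8)                 # x -> h(0)
f_net = torch.nn.Sequential(torch.nn.Linear(8 + 2, 64),    # (h(0), d, t) -> h(d, t)
                            torch.nn.Tanh(), torch.nn.Linear(64, 1))
classifier = torch.nn.Linear(16, 10)                       # h_task -> logits

H = torch.rand(16, 2, requires_grad=True)                  # learnable (d, t) pairs
alpha = torch.rand(2, 2, requires_grad=True)               # truncated Eq. 1 coefficients

def h_task(h0, pairs):
    """Query f at every (d, t) pair to build the task vector h_task."""
    inp = torch.cat([h0.unsqueeze(0).expand(pairs.shape[0], -1), pairs], dim=-1)
    return f_net(inp).squeeze(-1)                          # shape (|H|,)

def governing_residual(h0):
    """Mean squared residual of g = h_t - (a00 + a10*h + a01*h_d + a11*h*h_d),
    a deliberately truncated form of Eq. 1."""
    pairs = H.detach().clone().requires_grad_(True)
    h = h_task(h0, pairs)
    grads = torch.autograd.grad(h.sum(), pairs, create_graph=True)[0]
    h_d, h_t = grads[:, 0], grads[:, 1]
    g = h_t - (alpha[0, 0] + alpha[1, 0] * h + alpha[0, 1] * h_d
               + alpha[1, 1] * h * h_d)
    return (g ** 2).mean()

opt_theta = torch.optim.Adam(list(feature_extractor.parameters())
                             + list(f_net.parameters())
                             + list(classifier.parameters()), lr=1e-3)
opt_H = torch.optim.Adam([H], lr=1e-4)
opt_alpha = torch.optim.Adam([alpha], lr=1e-4)
w = 1e-3                                                   # L1 weight in Eq. 10

x, y = torch.randn(32), torch.tensor([3])                  # one toy example
for epoch in range(10):
    h0 = feature_extractor(x)
    # 1) Forward problem: update theta against L_T + L_G_hat (Eq. 6);
    #    stale gradients on H and alpha are zeroed before their own steps.
    loss = F.cross_entropy(classifier(h_task(h0, H)).unsqueeze(0), y) \
           + governing_residual(h0)
    opt_theta.zero_grad(); loss.backward(); opt_theta.step()
    # 2) Refine the (d, t) pairs in H against the task loss (line 7 of Alg. 1).
    h0 = feature_extractor(x).detach()
    loss_H = F.cross_entropy(classifier(h_task(h0, H)).unsqueeze(0), y)
    opt_H.zero_grad(); loss_H.backward(); opt_H.step()
    # 3) Inverse problem: update alpha with L_G_hat + w * |alpha|_1 (Eqs. 9-10).
    loss_a = governing_residual(h0) + w * alpha.abs().sum()
    opt_alpha.zero_grad(); loss_a.backward(); opt_alpha.step()
```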
After finishing the training process, αi,j, for all i, j, are no longer needed (because θ already conforms with the learned governing equation at this point) and can be discarded during testing.

For complicated downstream tasks, training for $L_T$ should be done earlier than the others (line 5). Then, we carefully update the PDE parameters (line 6), and the other training procedures follow. The proposed sequence in Alg. 1 produces the best outcomes in our experiments. However, this sequence can be varied for other datasets or downstream tasks.

Complexity Analyses. The adjoint sensitivity method of neural ODEs enables a space complexity of O(1) while calculating gradients. However, its forward-pass inference time is $O(\frac{1}{s})$, where s is the (average) step-size of the underlying ODE solver. Because s can sometimes be very small, forward-pass inference can take a long time.

Our PR-Net uses the standard backpropagation method to train, and its gradient computation complexity is the same as that of conventional neural networks. In addition, the forward-pass inference time is O(1), given a fixed network f, because we do not solve integral problems." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we introduce our experimental evaluations with various datasets and tasks. All experiments were conducted in the following software and hardware environments: UBUNTU 18.04 LTS, PYTHON 3.6.6, NUMPY 1.18.5, SCIPY 1.5, MATPLOTLIB 3.3.1, PYTORCH 1.2.0, CUDA 10.0, NVIDIA Driver 417.22, an i9 CPU, and an NVIDIA RTX TITAN. In Section J of the Appendix, we summarize detailed dataset information and additional experiments." }, { "heading": "4.1 IMAGE CLASSIFICATION WITH MNIST AND SVHN", "text": "We reuse the convolutional neural network, called ODE-Net, from the work by Chen et al. (2018) to classify MNIST and SVHN and replace its ODE part with our proposed PDE, denoted PR-Net in Table 2. See the Appendix for the architecture and the hyperparameters of the network f in PR-Net for this experiment. We reuse their code and strictly follow their experimental environments.

The detailed results are summarized in Table 2. We compare with ResNet, RK-Net, and ODE-Net. In ResNet, we have a downsampling layer followed by 6 standard residual blocks (He et al., 2016). For RK-Net and ODE-Net, we replace the residual blocks with an ODE, but they differ in the choice of ODE solvers. RK-Net uses the fourth-order Runge–Kutta method and ODE-Net uses the adaptive Dormand–Prince method for their forward-pass inference — both of them are trained with the adjoint sensitivity method, which is a standard backward-pass gradient computation method. Our PR-Net, which does not require solving integral problems, shows the best performance in all aspects for MNIST. In particular, PR-Net shows much better efficiency than ResNet, considering their numbers of parameters, i.e., 0.60M for ResNet and 0.21M for PR-Net. Comparing ODE-Net and PR-Net on inference time, our method shows much faster performance, i.e., 24.8355 seconds for ODE-Net vs. 6.5023 seconds for PR-Net to classify a batch of 1,000 images. Considering its short inference time, we can say that its efficiency on SVHN is still better than that of ODE-Net. One interesting point is that using the fourth-order Runge–Kutta method in RK-Net produces better accuracy and inference time than ODE-Net in our experiments, which is slightly different from the original neural ODE paper (Chen et al., 2018). We tested more hyperparameters for them." 
}, { "heading": "4.2 IMAGE CLASSIFICATION WITH TINY IMAGENET", "text": "We use one more convolutional neural network to test with Tiny ImageNet. Tiny ImageNet is the modified subset of ImageNet with downscaled image resolution 64× 64. It consists of 200 different classes with 100,000 training images and 10,000 validation images. Our baseline model is Isometric MobileNet V3 (Sandler et al., 2019). For the efficient nature of ODE-Net and PR-Net, we consider that the resource-scarce environments, for which MobileNet was designed, are one of their best application areas. The isometric architecture of Isometric MobileNet V3 maintains constant resolution throughout all layers. Therefore, pooling layers are not needed and computation efficiency is high, according to their experiments. In addition, neural ODEs require an isometric architecture, i.e., the dimensionality of h(t), t ≥ 0, cannot be varied. In our PR-Net, we do not have such restrictions. For fair comparison, however, we have decided to use Isometric MobileNet V3. We replace some of its MobileNet V3 blocks with ODEs or PDEs, denoted ODE-Net and PR-Net in Table 3, respectively. We train our models from scratch without using any pretrained network, with a synchronous training setup.\nTable 3 summarizes their results. We report both of the top-1 and the top-5 accuracy, which is a common practice for (Tiny) ImageNet. In general, our PR-Net shows the best accuracy. PR-Net achieves an top-1 accuracy of 0.6157 with 4.56M parameters. The full Isometric MobileNet V3 marks an top-1 accuracy of 0.6578 with 20M parameters and the reduced Isometric MobileNet V3 with 4.30M parameters shows an top-1 accuracy of 0.6076. Considering the large difference on the number of parameters, PR-Net’s efficiency is high. In particular, it outperforms others in the top-5 accuracy by non-trivial margins, e.g., 0.7911 of ODE-Net vs. 0.8115 of Isometric MobileNet V3 vs. 0.8357 of PR-Net. In addition, PR-Net shows faster forward-pass inference time in comparison with ODE-Net. The inference time is to classify a batch of 1,000 images." }, { "heading": "4.3 EXPERIMENTS ON ROBUSTNESS WITH TINY IMAGENET", "text": "To check the efficacy of learning a governing equation, we conduct three more additional experiments with Tiny ImageNet: i) out-of-distribution image classification, ii) adversarial attacks, and iii) transfer learning to other image datasets. In the first and second experiments, we apply many augmentation/perturbation techniques to generate out-of-distribution/adversarial images and check how each model responses to them. Being inspired by the observations that robust models are better transferred to other datasets (Engstrom et al., 2019a; Allen-Zhu & Li, 2020; Salman et al., 2020), in the third experiment, we check the transfer learning accuracy to other image datasets. According to our hypothesis, PR-Net which knows the governing equation for classifying Tiny ImageNet should show better robustness than others (as seen in Figure 4 for a scientific PDE problem in Appendix).\nNeural networks are typically vulnerable to out-of-distribution and adversarial samples (Shen et al., 2016; Azulay & Weiss, 2019; Engstrom et al., 2019b). As being more fitted to training data, they typically show lower robustness to out-of-distribution and adversarial samples. However, PR-Net’s processing them should follow its learned governing equation. 
Therefore, one way to understand learning a governing equation is as a form of regularization that prevents overfitting and implants knowledge governing the classification process.

Out-of-Distribution Image Classification. We use four image augmentation methods: i) adding Gaussian noise of N(0, 0.1), ii) cropping a center area of size 56 × 56 and resizing to the original size, iii) rotating by 30 degrees in a random direction, and iv) perturbing colors by randomly jittering the brightness, contrast, saturation, and hue with a strength coefficient of 0.2. All of these are popular out-of-distribution augmentation methods (Shen et al., 2016; Azulay & Weiss, 2019; Engstrom et al., 2019b).

Our PR-Net shows the best accuracy (i.e., robustness) in all cases. In comparison with ODE-Net, it shows much better robustness, e.g., 0.3812 for ODE-Net vs. 0.4429 for PR-Net for the color jittering augmentation. One interesting point is that all methods are more vulnerable to the random rotation and the color jittering augmentations than to the other two augmentations.

Adversarial Attack Robustness. It is well known that neural networks are vulnerable to adversarial attacks. Because the governing equation regularizes PR-Net’s behavior, it can be robust to unknown adversarial samples. We use FGSM (Goodfellow et al., 2015) and PGD (Madry et al., 2018) to find adversarial samples, and the robustness to them is reported in Table 4. We generate adversarial samples with various settings of the key parameter ε that controls the degree of adversarial perturbation. The configuration that doubles the number of channels used in each layer, denoted as “Width Multiplier 2”, showed better performance in Table 3, and we use only this configuration for the adversarial attack and the following transfer learning experiments. For all attacks except FGSM (ε = 3/255) and PGD (ε = 3/255), PR-Net shows the best robustness, as shown in Table 4. The gap between PR-Net and the other baselines is significant for most PGD settings.

Transfer Learning. As reported in (Engstrom et al., 2019a; Allen-Zhu & Li, 2020; Salman et al., 2020), robust models tend to produce feature maps more suitable for transfer learning than regular models. In this regard, we checked the transferability of the PR-Net pretrained on Tiny ImageNet to other datasets: CIFAR100 (Krizhevsky, 2009), CIFAR10 (Krizhevsky, 2009), FGVC Aircraft (Maji et al., 2013), Food-101 (Bossard et al., 2014), DTD (Cimpoi et al., 2014), and Cars (Yang et al., 2015). As shown in Table 5, PR-Net shows the best transfer learning accuracy in all cases except Cars. The improvements over M.Net V3 and ODE-Net are significant for Aircraft and DTD." }, { "heading": "5 DISCUSSIONS & CONCLUSIONS", "text": "It has recently become popular to design neural networks based on differential equations. In most cases, ODEs are used to approximate neural networks. In this work, on the other hand, we presented a PDE-based approach to designing neural networks. Our method simultaneously learns a regression model and a governing equation that conform with each other. Therefore, the internal processing mechanism of the learned regression model should follow the learned governing equation. One can consider this mechanism a way of implanting domain knowledge into the regression model. The main challenge in our problem definition is that we need to discover a governing equation from data while training a regression model. 
Therefore, we adopt a method that jointly trains the regression model and the governing equation.

To show the efficacy, we conducted five experiments: i) MNIST/SVHN classification, ii) Tiny ImageNet classification, iii) classification with out-of-distribution samples, iv) adversarial attack robustness, and v) transfer learning. Our method shows the best accuracy and robustness (or close to the best) in all cases except SVHN. In particular, the challenging robustness experiments empirically demonstrate why learning an appropriate governing equation is important.

One limitation of this method is that it is sometimes hard to achieve a good trade-off among all the different loss and regularization terms. Our method intrinsically involves various terms, and we found that it is important to tune hyperparameters (especially the various coefficients and learning rates) in order to achieve reliable performance. In particular, αi,j, for all i, j, are important for learning reliable governing equations. Because the trained network f is greatly influenced by the governing equation, hyperparameters should be tuned to learn meaningful equations. We also plan to study the proposed concept for many other classification/regression tasks." } ]
2020
null
SP:8079cb72ef8db9b5ab9275770ade605746840832
[ "This work analyses the interaction between data-augmentation strategies such as MixUp and model ensembles with regards to calibration performance. The authors note how strategies such as mixup and label smoothing, which reduce a single model's over-confidence, lead to degradation in calibration performance when such models are combined as an ensemble. Specifically, all techniques, taken individually, improve calibration by reducing overconfidence. However, in combination they lead to under-confident models and, therefore, worse calibration. Based on this analysis, the author's provide a simple technique which yields SOTA calibration performance on CIFAR-10, CIFAR-10-C, CIFAR-100 and CIFAR-100-C and ImageNet. The authors propose to dynamically enable and disable MixUp based on whether the model is over/under confident on a particular class, as judged on a validation dataset. " ]
Ensemble methods which average over multiple neural network predictions are a simple approach to improve a model’s calibration and robustness. Similarly, data augmentation techniques, which encode prior information in the form of invariant feature transformations, are effective for improving calibration and robustness. In this paper, we show a surprising pathology: combining ensembles and data augmentation can harm model calibration. This leads to a trade-off in practice, whereby improved accuracy by combining the two techniques comes at the expense of calibration. On the other hand, selecting only one of the techniques ensures good uncertainty estimates at the expense of accuracy. We investigate this pathology and identify a compounding under-confidence among methods which marginalize over sets of weights and data augmentation techniques which soften labels. Finally, we propose a simple correction, achieving the best of both worlds with significant accuracy and calibration gains over using only ensembles or data augmentation individually. Applying the correction produces a new state-of-the-art in uncertainty calibration across CIFAR-10, CIFAR-100, and ImageNet.¹
[ { "affiliations": [], "name": "Yeming Wen" }, { "affiliations": [], "name": "Ghassen Jerfel" }, { "affiliations": [], "name": "Rafael Muller" }, { "affiliations": [], "name": "Michael W. Dusenberry" }, { "affiliations": [], "name": "Jasper Snoek" }, { "affiliations": [], "name": "Balaji Lakshminarayanan" }, { "affiliations": [], "name": "Dustin Tran" } ]
[ { "authors": [ "Chirag Agarwal", "Sara Hooker" ], "title": "Estimating example difficulty using variance of gradients", "venue": "arXiv preprint arXiv:2008.11600,", "year": 2020 }, { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V. Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space. arXiv: Computer Vision and Pattern Recognition, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Ekin Dogus Cubuk", "Barret Zoph", "Dandelion Mané", "V. Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Morris H. DeGroot", "Stephen E. Fienberg" ], "title": "The Comparison and Evaluation of Forecasters", "venue": "The Statistician,", "year": 1983 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Fei-Fei Li" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Thomas G. Dietterich" ], "title": "Ensemble methods in machine learning", "venue": "In Multiple Classifier Systems,", "year": 2000 }, { "authors": [ "Michael W Dusenberry", "Dustin Tran", "Edward Choi", "Jonas Kemp", "Jeremy Nixon", "Ghassen Jerfel", "Katherine Heller", "Andrew M Dai" ], "title": "Analyzing the role of model uncertainty for electronic health records", "venue": null, "year": 1906 }, { "authors": [ "Michael W. Dusenberry", "Ghassen Jerfel", "Yeming Wen", "Yi-an Ma", "Jasper Snoek", "Katherine Heller", "Balaji Lakshminarayanan", "Dustin Tran" ], "title": "Efficient and scalable Bayesian neural nets with rank-1 factors", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep Ensembles: A Loss Landscape Perspective", "venue": "arXiv preprint arXiv:1912.02757,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On Calibration of Modern Neural Networks", "venue": "In International Conference on Machine Learning (ICML), volume cs.LG. 
Cornell University Library,", "year": 2017 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Augmenting data with mixup for sentence classification: An empirical study", "venue": "arXiv preprint arXiv:1905.08941,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Dustin Tran", "Alex Irpan", "Timothy Lillicrap", "James Davidson" ], "title": "Reliable uncertainty estimates in deep neural networks using noise contrastive priors", "venue": "arXiv preprint arXiv:1807.09289,", "year": 2018 }, { "authors": [ "Lars Kai Hansen", "Péter Salamon" ], "title": "Neural network ensembles", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 1990 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D. Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple data processing method to improve robustness and uncertainty", "venue": null, "year": 1912 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q. Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E Hopcroft", "Kilian Q Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": "arXiv preprint arXiv:1704.00109,", "year": 2017 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Anders Krogh", "Jesper Vedelsby" ], "title": "Neural network ensembles, cross validation, and active learning", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Puneet Mangla", "Vedant Singh", "Shreyas Jayant Havaldar", "Vineeth N. Balasubramanian" ], "title": "Varmixup: Exploiting the latent space for robust training and inference", "venue": null, "year": 2003 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory F. 
Cooper", "Milos Hauskrecht" ], "title": "Obtaining Well Calibrated Probabilities Using Bayesian Binning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Jeremy Nixon", "Mike Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring Calibration in Deep Learning. arXiv:1904.01685 [cs, stat], April 2019", "venue": "URL http://arxiv", "year": 1904 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Jun Zhu" ], "title": "Mixup inference: Better exploiting mixup to defend adversarial attacks", "venue": "ArXiv, abs/1909.11515,", "year": 2020 }, { "authors": [ "Michael P. Perrone", "Leon N. Cooper" ], "title": "When networks disagree: Ensemble methods for hybrid neural networks", "venue": null, "year": 1992 }, { "authors": [ "Yao Qin", "Xuezhi Wang", "Alex Beutel", "Ed Huai hsin Chi" ], "title": "Improving uncertainty estimates through the relationship with adversarial robustness", "venue": null, "year": 2006 }, { "authors": [ "Rahul Rahaman", "Alexandre H Thiery" ], "title": "Uncertainty quantification and deep ensembles", "venue": "arXiv preprint arXiv:2007.08792,", "year": 2020 }, { "authors": [ "Ryne Roady", "T. Hayes", "Christopher Kanan" ], "title": "Improved robustness to open set inputs via tempered mixup", "venue": "ArXiv, abs/2009.04659,", "year": 2020 }, { "authors": [ "Alejandro Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "FitNets: Hints for thin deep", "venue": "nets. CoRR,", "year": 2015 }, { "authors": [ "David Ruppert" ], "title": "Efficient estimations from a slowly convergent Robbins-Monro process", "venue": "Technical report, Cornell University Operations Research and Industrial Engineering,", "year": 1988 }, { "authors": [ "Takuya Shimada", "Shoichiro Yamaguchi", "Kohei Hayashi", "Sosuke Kobayashi" ], "title": "Data interpolating prediction: Alternative interpretation of mixup", "venue": "ArXiv, abs/1906.08412,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A. 
Bilmes", "Tanmoy Bhattacharya", "Sarah Ellen Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Mariya Toneva", "Alessandro Sordoni", "Remi Tachet des Combes", "Adam Trischler", "Yoshua Bengio", "Geoffrey J Gordon" ], "title": "An empirical study of example forgetting during deep neural network learning", "venue": "arXiv preprint arXiv:1812.05159,", "year": 2018 }, { "authors": [ "Juozas Vaicenavicius", "D. Widmann", "Carl R. Andersson", "F. Lindsten", "J. Roll", "Thomas Bo Schön" ], "title": "Evaluating model calibration in classification", "venue": null, "year": 2019 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David Lopez-Paz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yeming Wen", "Dustin Tran", "Jimmy Ba" ], "title": "BatchEnsemble: An alternative approach to efficient ensemble and lifelong learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Florian Wenzel", "Jasper Snoek", "Dustin Tran", "Rodolphe Jenatton" ], "title": "Hyperparameter ensembles for robustness and uncertainty quantification", "venue": "In Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "D. Widmann", "F. Lindsten", "D. Zachariah" ], "title": "Calibration tests in multi-class classification: A unifying framework", "venue": "ArXiv, abs/1910.11385,", "year": 2019 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk", "venue": "minimization. ArXiv,", "year": 2018 }, { "authors": [ "Huang" ], "title": "If a training method requires validation dataset", "venue": null, "year": 2016 }, { "authors": [ "Ovadia" ], "title": "2019) benchmarked a number of methods on CIFAR-10 corruption", "venue": null, "year": 2019 }, { "authors": [ "F METRICS" ], "title": "OTHER THAN ECE ECE is the standard metric in calibration, but it is a biased estimate of true calibration (Vaicenavicius et al., 2019). Heavily relying on ECE metric might lead to inconsistent conclusion. In this section, we computed the calibration error with recently proposed calibration estimator which reduces bias in ECE, including debiased calibration estimator (Kumar et al., 2019) (DCE) and SKCE (Widmann", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many success stories in deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014) are in restricted settings where predictions are only made for inputs similar to the training distribution. In real-world scenarios, neural networks can face truly novel data points during inference, and in these settings it can be valuable to have good estimates of the model’s uncertainty. For example, in healthcare, reliable uncertainty estimates can prevent over-confident decisions for rare or novel patient conditions (Dusenberry et al., 2019). We highlight two recent trends obtaining state-of-the-art in uncertainty and robustness benchmarks.\nEnsemble methods are a simple approach to improve a model’s calibration and robustness (Lakshminarayanan et al., 2017). The same network architecture but optimized with different initializations can converge to different functional solutions, leading to decorrelated prediction errors. By averaging predictions, ensembles can rule out individual mistakes (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Additional work has gone into efficient ensembles such as MC-dropout (Gal and Ghahramani, 2016), BatchEnsemble, and its variants (Wen et al., 2020; Dusenberry et al., 2020; Wenzel et al., 2020). These methods significantly improve calibration and robustness while adding few parameters to the original model.\nData augmentation is an approach which is orthogonal to ensembles in principle, encoding additional priors in the form of invariant feature transformations. Intuitively, data augmentation enables the model to train on more data, encouraging the model to capture certain invariances with respect to its inputs and outputs; data augmentation may also produce data that may be closer to an out-ofdistribution target task. It has been a key factor driving state-of-the-art: for example, Mixup (Zhang et al., 2018; Thulasidasan et al., 2019a), AugMix (Hendrycks et al., 2020), and test-time data augmentation (Ashukha et al., 2020).\nA common wisdom in the community suggests that ensembles and data augmentation should naturally combine. For example, the majority of uncertainty models in vision with strong performance are\n1Contact: ywen@utexas.edu. Code: https://github.com/google/edward2/tree/master/ experimental/marginalization_mixup.\nbuilt upon baselines leveraging standard data augmentation (He et al., 2016; Hendrycks et al., 2020) (e.g., random flips, cropping); Hafner et al. (2018) cast data augmentation as an explicit prior for Bayesian neural networks, treating it as beneficial when ensembling; and Hendrycks et al. (2020) highlights further improved results in AugMix when combined with Deep Ensembles (Hansen and Salamon, 1990; Krogh and Vedelsby, 1995). However, we find the complementary benefits between data augmentations and ensembels are not universally true. Section 3.1 illustrates the poor calibration of combining ensembles (MC-dropout, BatchEnsemble and Deep Ensembles) and Mixup on CIFAR: the model outputs excessive low confidence. Motivated by this pathology, in this paper, we investigate in more detail why this happens and propose a method to resolve it.\nContributions. In contrast to prior work, which finds individually that ensembles and Mixup improve calibration, we find that combining ensembles and Mixup consistently degrades calibration performance across three ensembling techniques. 
From a detailed analysis, we identify a compounding under-confidence, where the soft labels in Mixup introduce a negative confidence bias that hinders its combination with ensembles. We further find this to be true for other label-based strategies such as label smoothing. Finally, we propose CAMixup to correct this bias, pairing well with ensembles. CAMixup produces new state-of-the-art calibration on both CIFAR-10/100 (e.g., 0.4% and 2.3% on CIFAR-10 and CIFAR-10-C), building on Wide ResNet 28-10 for competitive accuracy (e.g., 97.5% and 89.8%), and on ImageNet (1.5%), building on ResNet-50 for competitive accuracy (77.4%)." }, { "heading": "2 BACKGROUND ON CALIBRATION, ENSEMBLES AND DATA AUGMENTATION", "text": "" }, { "heading": "2.1 CALIBRATION", "text": "Uncertainty estimation is critical, but ground truth is difficult to obtain for measuring performance. Fortunately, calibration error, which assesses how well a model reliably forecasts its predictions over a population, helps address this. Let $(\hat{Y}, \hat{P})$ denote the class prediction and associated confidence (predicted probability) of a classifier.

Expected Calibration Error (ECE): One notion of miscalibration is the expected difference between confidence and accuracy (Naeini et al., 2015): $\mathbb{E}_{\hat{P}}\big[\, |\mathbb{P}(\hat{Y} = Y \mid \hat{P} = p) - p| \,\big]$. ECE approximates this by binning the predictions in [0, 1] under M equally-spaced intervals and then taking a weighted average of each bin’s accuracy/confidence difference. Let $B_m$ be the set of examples in the m-th bin whose predicted confidence falls into the interval $(\frac{m-1}{M}, \frac{m}{M}]$. The bin $B_m$’s accuracy and confidence are:

$$\mathrm{Acc}(B_m) = \frac{1}{|B_m|} \sum_{x_i \in B_m} \mathbb{1}(\hat{y}_i = y_i), \qquad \mathrm{Conf}(B_m) = \frac{1}{|B_m|} \sum_{x_i \in B_m} \hat{p}_i, \quad (1)$$

where $\hat{y}_i$ and $y_i$ are the predicted and true labels and $\hat{p}_i$ is the confidence for example $x_i$. Given n examples, $\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \big| \mathrm{Acc}(B_m) - \mathrm{Conf}(B_m) \big|$." }, { "heading": "2.2 ENSEMBLES", "text": "Aggregating the predictions of multiple models into an ensemble is a well-established strategy to improve generalization (Hansen and Salamon, 1990; Perrone and Cooper, 1992; Dietterich, 2000).

BatchEnsemble: BatchEnsemble takes a network architecture and shares its parameters across ensemble members, adding only a rank-1 perturbation for each layer in order to decorrelate member predictions (Wen et al., 2020). For a given layer, define the shared weight matrix among K ensemble members as $W \in \mathbb{R}^{m \times d}$. A tuple of trainable vectors $r_k \in \mathbb{R}^m$ and $s_k \in \mathbb{R}^d$ is associated with each ensemble member k. The new weight matrix for each ensemble member in BatchEnsemble is

$$W'_k = W \circ F_k, \quad \text{where } F_k = r_k s_k^\top \in \mathbb{R}^{m \times d}, \quad (2)$$

where ◦ denotes the element-wise product. Applying rank-1 perturbations via r and s adds few additional parameters to the overall model. We use an ensemble size of 4 in all experiments.

MC-Dropout: Gal and Ghahramani (2016) interpret Dropout (Srivastava et al., 2014) as an ensemble model, leading to its application for uncertainty estimates by sampling multiple dropout masks at test time in order to ensemble its predictions. We use an ensemble size of 20 in all experiments.

Deep Ensembles: Composing an ensemble of models, each trained with a different random initialization, provides diverse predictions (Fort et al., 2019) which have been shown to outperform strong baselines on uncertainty estimation tasks (Lakshminarayanan et al., 2017). We use an ensemble size of 4 in all experiments.

In this work, we focus on the interaction between data augmentation strategies and BatchEnsemble, MC-Dropout, and deep ensembles. 
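As a concrete reference for the metric of Section 2.1 before moving on, a minimal NumPy sketch of the binned ECE estimate in Eq. 1 follows; the 15-bin default and the random toy inputs are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, num_bins=15):
    """Binned ECE: sum_m (|B_m| / n) * |Acc(B_m) - Conf(B_m)| (Eq. 1)."""
    confidences = probs.max(axis=1)                  # hat{p}_i
    predictions = probs.argmax(axis=1)               # hat{y}_i
    accuracies = (predictions == labels).astype(float)

    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # bin ((m-1)/M, m/M]
        if in_bin.any():
            acc_bin = accuracies[in_bin].mean()      # Acc(B_m)
            conf_bin = confidences[in_bin].mean()    # Conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc_bin - conf_bin)
    return ece

# Toy usage with random predictions (illustrative only).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
print(expected_calibration_error(probs, labels))
```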
Other popular ensembling approaches leverage weight averaging, such as Polyak-Ruppert (Ruppert, 1988), checkpointing (Huang et al., 2017), and stochastic weight averaging (Izmailov et al., 2018), to collect multiple sets of weights during training and aggregate them to make predictions with only a single set." }, { "heading": "2.3 DATA AUGMENTATION", "text": "Data augmentation encourages a model to make invariant predictions under desired transformations, which can greatly improve generalization performance. For example, in computer vision, random left-right flipping and cropping are de-facto approaches (He et al., 2016). We highlight two state-of-the-art techniques which we study.

Mixup: Mixup (Zhang et al., 2018) manipulates both the features and the labels in order to encourage linearly interpolating predictions. Given an example $(x_i, y_i)$, Mixup applies

$$\tilde{x}_i = \lambda x_i + (1 - \lambda) x_j, \qquad \tilde{y}_i = \lambda y_i + (1 - \lambda) y_j. \quad (3)$$

Here, $x_j$ is sampled from the training dataset (taken from the minibatch), and $\lambda \sim \mathrm{Beta}(a, a)$ for a fixed hyperparameter a > 0.

Mixup was shown to be effective for generalization and calibration of deep neural networks (Zhang et al., 2018; Thulasidasan et al., 2019b). Recent work has investigated why Mixup improves generalization (Guo et al., 2018; Shimada et al., 2019) and adversarial robustness (Beckham et al., 2019; Pang et al., 2020; Mangla et al., 2020). Given Mixup’s simplicity, many extensions have been proposed with further improvements (Yun et al., 2019; Berthelot et al., 2019; Verma et al., 2019; Roady et al., 2020; Chou et al., 2020).

AugMix: Searching or sampling over a set of data augmentation operations can lead to significant improvements in both generalization error and calibration (Cubuk et al., 2019b;a). AugMix (Hendrycks et al., 2020) applies a sum of augmentations, each with random weighting, with a Jensen-Shannon consistency loss to encourage similarity across the augmentations. AugMix achieves state-of-the-art calibration across in- and out-of-distribution tasks. Let O be the set of data augmentation operations and k be the number of AugMix iterations. AugMix samples $w_1, \ldots, w_k \sim \mathrm{Dirichlet}(a, \ldots, a)$ for a fixed hyperparameter a > 0 and $op_1, \ldots, op_k$ from O. Given an interpolation parameter m, sampled from $\mathrm{Beta}(a, a)$, the augmented input $\tilde{x}_{augmix}$ is:

$$\tilde{x}_{augmix} = m\, x_{orig} + (1 - m)\, x_{aug}, \qquad x_{aug} = \sum_{i=1}^{k} w_i\, op_i(x_{orig}). \quad (4)$$" }, { "heading": "3 MIXUP-ENSEMBLE PATHOLOGY", "text": "We seek to understand the effect of data augmentations on ensembles. In particular, we hope to verify the hypothesis of compounding improvements when combining the seemingly orthogonal techniques of data augmentation and ensembles. To our surprise, we find that augmentation techniques can be detrimental to ensemble calibration." }, { "heading": "3.1 THE SURPRISING MISCALIBRATION OF ENSEMBLES WITH MIXUP", "text": "Ensembles are among the best-known and simplest approaches to improving calibration (Ovadia et al., 2019; Lakshminarayanan et al., 2017), and Thulasidasan et al. (2019b) showed that Mixup improves calibration in a single network. Motivated by this, Fig. 1 applies Mixup to each ensemble member on CIFAR-10/CIFAR-100 with WideResNet 28-10 (Zagoruyko and Komodakis, 2016). Here, we searched over Mixup’s optimal hyperparameter α (Eq. 3) and found that α = 1 gives the best result, which corroborates the finding in Zhang et al. (2018). All data points in Fig. 1 are averaged over 5 random seeds.
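To ground Eq. 3, the following minimal PyTorch sketch builds a Mixup batch in the way used throughout these experiments; pairing examples by shuffling the minibatch and the one-hot label representation are standard choices but assumptions here.

```python
import torch

def mixup_batch(x, y, num_classes, a=1.0):
    """Mixup (Eq. 3): convex combinations of inputs and one-hot labels."""
    lam = torch.distributions.Beta(a, a).sample()        # lambda ~ Beta(a, a)
    perm = torch.randperm(x.size(0))                     # pair each x_i with an x_j
    y_one_hot = torch.nn.functional.one_hot(y, num_classes).float()
    x_tilde = lam * x + (1 - lam) * x[perm]              # soft inputs
    y_tilde = lam * y_one_hot + (1 - lam) * y_one_hot[perm]  # soft labels
    return x_tilde, y_tilde

# Toy usage: a = 1, the best value found in the hyperparameter search above.
x = torch.randn(128, 3, 32, 32)
y = torch.randint(0, 10, (128,))
x_tilde, y_tilde = mixup_batch(x, y, num_classes=10, a=1.0)
```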
" }, { "heading": "3 MIXUP-ENSEMBLE PATHOLOGY", "text": "We seek to understand the effect of data augmentations on ensembles. In particular, we hope to verify the hypothesis of compounding improvements when combining the seemingly orthogonal techniques of data augmentation and ensembles. To our surprise, we find that augmentation techniques can be detrimental to ensemble calibration." }, { "heading": "3.1 THE SURPRISING MISCALIBRATION OF ENSEMBLES WITH MIXUP", "text": "Ensembles are among the best-known and simplest approaches to improving calibration (Ovadia et al., 2019; Lakshminarayanan et al., 2017), and Thulasidasan et al. (2019b) showed that Mixup improves calibration in a single network. Motivated by this, Fig. 1 applies Mixup to each ensemble member on CIFAR-10/CIFAR-100 with WideResNet 28-10 (Zagoruyko and Komodakis, 2016). Here, we searched over Mixup's hyperparameter α (Eq. 3) and found that α = 1 gives the best result, which corroborates the finding in Zhang et al. (2018). All data points in Fig. 1 are averaged over 5 random seeds.

Figs. 1a and 1b demonstrate improved test accuracy (Red (ensembles without Mixup) to Blue (ensembles with Mixup)). However, if we shift focus to Figs. 1c and 1d's calibration error, it is evident that combining Mixup with ensembles leads to worse calibration (Red to Blue). This is counterintuitive, as we would expect Mixup, which improves calibration of individual models (Thulasidasan et al., 2019a), to also improve the calibration of their ensemble. Fig. 1 confirms this pattern across BatchEnsemble (BE), MC-dropout (MC), and deep ensembles (DE). This pathology also occurs on ImageNet, as seen in Table 1.

Why do Mixup ensembles degrade calibration? To investigate this in more detail, Fig. 2 plots a variant of reliability diagrams (DeGroot and Fienberg, 1983) on BatchEnsemble. We bin the predictions into M = 15 equally spaced intervals based on their confidence (softmax probabilities) and compute the difference between the average accuracy and the average confidence, as in Eq. 1, for each bin. Fig. 2 tracks this difference over varying confidence levels. A positive difference (Acc − Conf) implies under-confidence with respect to the true frequencies; negative implies over-confidence; and zero implies perfect calibration.

The backbone model in Fig. 2 is BatchEnsemble with an ensemble size of 4 (we found the results consistent for MC-Dropout and Deep-Ensemble as well). The figure presents 4 methods: Single: vanilla WideResNet 28-10; MixupSingle: WideResNet 28-10 trained with Mixup; BatchEnsemble: vanilla BatchEnsemble WideResNet 28-10; MixupBE: BatchEnsemble WideResNet 28-10 trained with Mixup. Fig. 2 shows that only models trained with Mixup have positive (Acc − Conf) values on the test set, which suggests that Mixup encourages under-confidence. The Mixup ensemble's under-confidence is also greater in magnitude than that of the individual Mixup models. This suggests that Mixup ensembles suffer from compounding under-confidence, leading to worse calibration for the ensemble than for the individual Mixup models. This is contrary to our intuition that ensembles always improve calibration.

To further visualize this issue, Appendix C's Fig. 8 investigates the confidence (softmax probabilities) surface of deep ensembles and Mixup when trained on a toy dataset consisting of 5 clusters, each with a different radius. We ensemble over 4 independently trained copies of 3-layer MLPs. The deep ensemble's predictive confidence is plotted over the entire input data space in Fig. 8c. The resulting predictions are extremely confident except at the decision boundaries. The deep ensemble even displays high confidence in the area nearest to the origin, where lower confidence would be expected. On the other hand, Fig. 8d shows that Mixup-Ensemble is only confident in a very constrained area around the training clusters, leading to an overall under-confident classifier, which confirms our postulation of compounding under-confidence.
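For reference, the binning computation behind Fig. 2 (and behind ECE, via Eq. 1) can be sketched in a few lines of NumPy; this is our own illustration, assuming confidences, predictions, and labels are 1-D NumPy arrays.

import numpy as np

def binned_acc_conf(confidences, predictions, labels, num_bins=15):
    """Per-bin Acc(B_m) and Conf(B_m) from Eq. 1; their |difference|, weighted by
    bin mass, sums to ECE, and Acc - Conf per bin is the quantity plotted in Fig. 2."""
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece, acc_minus_conf = 0.0, np.full(num_bins, np.nan)
    for m in range(num_bins):
        in_bin = (confidences > bins[m]) & (confidences <= bins[m + 1])
        if in_bin.sum() == 0:
            continue  # empty bins contribute nothing
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        conf = confidences[in_bin].mean()
        acc_minus_conf[m] = acc - conf
        ece += in_bin.mean() * abs(acc - conf)
    return ece, acc_minus_conf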
" }, { "heading": "3.2 IS THE PATHOLOGY SPECIFIC TO MIXUP?", "text": "At the core of the issue is that Mixup conflates data uncertainty (uncertainty inherent to the data-generating process) with model uncertainty. Soft labels can correct for over-confidence in single models, which have no other recourse to improve uncertainty estimates. However, when combined with ensembles, which incorporate model uncertainty, this correction may be unnecessary. Because image classification benchmarks tend to be deterministic, soft labels encourage predictions on training data to be less confident about their true targets even if they are correct. We validate this hypothesis by showing it also applies to label smoothing.

Label Smoothing: Like Mixup, label smoothing applies soft labels: it smoothens decision boundaries by multiplying a data point's true class probability by (1 − α), with probability α spread equally across the other classes. Using the same experimental setup as before, we apply increasing levels of label smoothing to ensembles of WideResNet 28-10 models trained on CIFAR-10. Fig. 3 demonstrates the harmful effect of label smoothing on CIFAR-10 ECE, particularly when aggressive (coeff ≥ 0.2). In concurrent work, Qin et al. (2020) found that label smoothing plus ensembling leads to worse calibration, and showed that adjusting model confidence successfully corrects the compounding under-confidence." }, { "heading": "4 CONFIDENCE ADJUSTED MIXUP ENSEMBLES (CAMIXUP)", "text": "In this section, we aim to fix the compounding under-confidence issue when combining Mixup and ensembles, without sacrificing the improved accuracy on both in- and out-of-distribution data." }, { "heading": "4.1 CLASS BASED CAMIXUP", "text": "Mixup encourages model under-confidence, as shown in Fig. 2. Notice that Mixup assigns a uniform hyperparameter α to all examples in the training set. To improve Mixup, we start from the intuition that in classification, some classes tend to be more difficult to predict than others. This can be confirmed by Fig. 4a, which provides examples of per-class test accuracy. Ideally, we prefer our model to be confident when predicting easy classes such as cars and ships. For harder classes like cats and dogs, the model is encouraged to be less confident to achieve better calibration.

Therefore, instead of a uniform Mixup hyperparameter for all classes, we propose to adjust the Mixup hyperparameter of each class by the difference between its accuracy and confidence. CAMixup's intuition is that we want to apply Mixup to hard classes, on which models tend to be over-confident. On easy classes, we impose the standard data augmentation without Mixup. This partially prevents Mixup models from being over-confident on difficult classes while maintaining their good calibration on out-of-distribution inputs.2

2 We focus on classification, where classes form a natural grouping of easy to hard examples. However, the same idea can be applied to any metadata over which we'd like to balance uncertainty estimates, e.g., gender and age groups.

Denote the accuracy and confidence of class $i$ as $\mathrm{Acc}(C_i)$ and $\mathrm{Conf}(C_i)$, defined as $\mathrm{Acc}(C_i) = \frac{1}{|C_i|} \sum_{x_j \in C_i} \mathbf{1}(\hat{y}_j = i)$ and $\mathrm{Conf}(C_i) = \frac{1}{|C_i|} \sum_{x_j \in C_i} \hat{p}_j$. We adjust Mixup's $\lambda$ in Eq. 3 based on the sign of $\mathrm{Acc}(C_i) - \mathrm{Conf}(C_i)$:

$$\lambda_i = \begin{cases} 0 & \mathrm{Acc}(C_i) > \mathrm{Conf}(C_i) \\ \lambda & \mathrm{Acc}(C_i) \leq \mathrm{Conf}(C_i). \end{cases} \quad (5)$$

If the model is already under-confident on class $i$ ($\mathrm{Acc}(C_i) > \mathrm{Conf}(C_i)$), Mixup is not applied to examples in the class, and $\lambda_i = 0$. However, if $\mathrm{Acc}(C_i) \leq \mathrm{Conf}(C_i)$, the model is over-confident on this class, and Mixup is applied to reduce model confidence. We compute the accuracy and confidence on a validation dataset after each training epoch.
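A minimal PyTorch sketch of this per-class update follows (our own illustration; model and val_loader are assumed, and we return a boolean mask rather than the coefficients themselves).

import torch

@torch.no_grad()
def camixup_class_mask(model, val_loader, num_classes):
    """Recompute Eq. 5 after each epoch: apply Mixup to class i next epoch
    iff the model is over-confident on it, i.e. Acc(C_i) <= Conf(C_i)."""
    acc = torch.zeros(num_classes)
    conf = torch.zeros(num_classes)
    count = torch.zeros(num_classes)
    for x, y in val_loader:
        probs = torch.softmax(model(x), dim=-1)
        p_hat, y_hat = probs.max(dim=-1)
        for i in range(num_classes):
            mask = y == i
            count[i] += mask.sum().item()
            acc[i] += (y_hat[mask] == i).sum().item()
            conf[i] += p_hat[mask].sum().item()
    acc, conf = acc / count.clamp(min=1), conf / count.clamp(min=1)
    return acc <= conf  # True -> lambda_i = lambda; False -> lambda_i = 0

During the next epoch, an example with label y is mixed with the sampled λ of Eq. 3 if the mask is True for y, and left unmixed (λ = 0) otherwise.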
Notice that λi is dynamically updated at the end of each epoch. To understand which classes are more often assigned the Mixup operation, Fig. 4 counts the number of times that λi > 0 throughout training. The maximum possible count is the total number of training epochs, which is 250 for the BatchEnsemble model. We find that CAMixup rarely applies Mixup to easy classes such as cars and ships: the count is less than 10% of the total epochs. For harder classes like cats and dogs, CAMixup applies the Mixup operation in almost every epoch, accounting for more than 80% of total epochs. In summary, Fig. 4 shows that CAMixup reduces model confidence on difficult classes and encourages model confidence on easy classes, leading to better overall calibration. Appendix D.1's Fig. 9a also shows that CAMixup effectively shifts the confidence to the lower region.

Fig. 5 presents results of CAMixup on the CIFAR-10 and CIFAR-100 test sets, where we compare the effect of Mixup and CAMixup on different ensembling strategies (BatchEnsemble, MC Dropout, DeepEnsemble). Adding Mixup to ensembles improves accuracy but worsens ECE. Adding CAMixup to ensembles significantly improves the accuracy of ensembles in all cases. More importantly, the calibration results in Figs. 5c and 5d show that CAMixup ensembles are significantly better calibrated than Mixup ensembles; for instance, CAMixup reduces ECE by more than 5X for BatchEnsemble over Mixup. We observe a minor decrease in test accuracy (at most 0.2%) when comparing CAMixup ensembles with Mixup ensembles, but we believe that this is a worthwhile trade-off given the significant improvement in test ECE.

Table 1: BatchEnsemble with ensemble size 4 on ImageNet.

Method          ACC    ECE
BatchEnsemble   77.0   2.0%
MixupBE         77.5   2.1%
CAMixupBE       77.4   1.5%

Table 1 presents similar experiments applied to ResNet-50 on ImageNet, using BatchEnsemble as the base ensembling strategy. These results are state of the art to the best of our knowledge: Dusenberry et al. (2020) report 1.7% ECE with Rank-1 Bayesian neural nets and 3.0% with Deep Ensembles; Thulasidasan et al. (2019a) report 3.2% for ResNet-50 with Mixup, 2.9% for ResNet-50 with an entropy-regularized loss, and 1.8% for ResNet-50 with label smoothing." }, { "heading": "4.2 PERFORMANCE UNDER DISTRIBUTION SHIFT", "text": "Here, we assess model resilience to covariate shift by evaluating on the CIFAR-10-C and CIFAR-100-C benchmarks (C stands for corruptions) proposed by Hendrycks and Dietterich (2019a), which apply 15 types of corruptions, each with 5 levels of intensity. We evaluate the performance of CAMixup vs. Mixup when applied to different ensembles, and report the ECE averaged across corruption types and intensities.

Fig. 6a shows that Mixup improves accuracy on the corrupted dataset because of its strong regularization effect. However, models tend to be over-confident as one moves further from the original distribution (higher corruption intensities), so encouraging under-confidence is not an issue there. This explains why Mixup ensembles maintain low ECE on out-of-distribution test data in Fig. 6b.

Fig. 6b also shows that CAMixup's calibration on out-of-distribution data (CIFAR-10-C) is on par with Mixup ensembles. We observe the same result on CIFAR-100-C (Appendix D.1's Fig. 9). Thus, we successfully improve model calibration on in-distribution datasets without sacrificing calibration on out-of-distribution datasets.
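The evaluation loop for this benchmark is a small wrapper around the ECE computation; below is a NumPy-style sketch (ours), where load_corrupted is a hypothetical loader returning one corrupted test split, model_predict returns (confidences, predictions) as arrays, and binned_acc_conf is the helper sketched in Section 3.1.

import numpy as np

def mean_corruption_ece(model_predict, load_corrupted, corruption_names, num_bins=15):
    """Average ECE over the 15 corruption types and 5 intensity levels."""
    eces = []
    for name in corruption_names:
        for severity in range(1, 6):
            x, y = load_corrupted(name, severity)
            conf, pred = model_predict(x)
            ece, _ = binned_acc_conf(conf, pred, y, num_bins)
            eces.append(ece)
    return float(np.mean(eces))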
" }, { "heading": "5 COMPOUNDING THE BENEFITS OF CAMIXUP WITH AUGMIX ENSEMBLES", "text": "We have investigated why certain data augmentation schemes may not provide complementary benefits to ensembling. We proposed class-adjusted Mixup (CAMixup), which improves both accuracy and ECE over vanilla ensembles. We believe that the insights from our work will allow the community and practitioners to compound state-of-the-art performance. We provide two concrete examples." }, { "heading": "5.1 AUGMIX", "text": "We show how CAMixup can compound performance over ensembles of models trained with AugMix, which were shown by Hendrycks et al. (2020) to achieve state-of-the-art accuracy and calibration on both clean and corrupted benchmarks. We primarily focus on improving BatchEnsemble, and we investigate whether adding better data augmentation schemes closes the gap between memory-efficient ensembles (BatchEnsemble) and independent deep ensembles.

As discussed in Section 2.3, AugMix only uses label-preserving transformations. Therefore AugMix provides complementary benefits to ensembles (and CAMixup). This is consistent with the calibration improvements reported in the literature for ensemble methods, which apply standard data augmentations such as random flips that likewise do not smoothen labels.

We consider a combination of AugMix and Mixup, as it allows the model to encounter both diverse label-preserving augmentations and soft labels under a linearly interpolating regime. The combination AugMixup (AugMix + Mixup) can be written as

$$x = \lambda\, \mathrm{AugMix}(x_1) + (1 - \lambda)\, \mathrm{AugMix}(x_2), \qquad y = \lambda y_1 + (1 - \lambda) y_2. \quad (6)$$

Consistent with the earlier results on Mixup, Table 2 shows that combining AugMixup with BatchEnsemble improves accuracy but worsens ECE, leading to under-confidence on in-distribution data (Appendix D.2's Fig. 10). With our proposed fix, CAMixup, the combination AugCAMixup (AugMix + CAMixup) improves calibration while retaining the highest accuracy for ensembles. Fig. 7 shows detailed results on CIFAR-10-C and CIFAR-100-C. Similar to Mixup, AugMixup improves calibration under shift but worsens calibration on in-distribution data. However, our proposed AugCAMixup improves the accuracy and calibration of ensembles on both clean and corrupted data.

To the best of our knowledge, these results are state-of-the-art in the literature: Dusenberry et al. (2020) report 0.8% ECE and 1.8% ECE for CIFAR-10 and CIFAR-100 along with 8% and 11.7% ECE for corruptions; Guo et al. (2017) report 0.54% and 2.3% ECE for the smaller Wide ResNet 32 on CIFAR-10 and CIFAR-100 with temperature scaling (93% and 72% accuracy), and Ovadia et al. (2019) demonstrated that temperature scaling does not extend to distribution shift." }, { "heading": "5.2 TEMPERATURE SCALING", "text": "In concurrent work, Rahaman and Thiery (2020) consider the interplay between data augmentation and ensembling on calibration. They also find that Mixup ensembles can be under-confident, and propose temperature scaling as a solution. The core finding is the same, but the works differ in slight ways: we further this analysis by showing that the compounding under-confidence extends to other techniques applying soft labels, such as label smoothing, and we propose CAMixup as a solution. Post-hoc calibration techniques like temperature scaling are complementary to our proposal and do not address the core conflation issue with Mixup. Corroborating findings of Ovadia et al. (2019), Appendix G shows that combining CAMixup and temperature scaling can further improve test calibration error; it does not improve out-of-distribution calibration. Another concurrent work showed that calibrated ensemble members do not always lead to calibrated ensemble predictions (Anonymous, 2021).
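For completeness, the post-hoc baseline is simple to state in code; below is a minimal sketch of temperature scaling in the spirit of Guo et al. (2017) (our own illustration), which fits a single temperature T on held-out validation logits by minimizing the negative log-likelihood.

import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Learn a single temperature T > 0 on held-out (logits, labels) pairs."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) keeps T positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Calibrated test-time probabilities are then softmax(logits / T).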
" }, { "heading": "6 CONCLUSION", "text": "Contrary to existing wisdom in the literature, we find that combining ensembles and Mixup consistently degrades calibration performance across three ensembling techniques. From a detailed analysis, we identify a compounding under-confidence, where Mixup's soft labels (and, more broadly, label-based augmentation strategies) introduce a negative confidence bias that hinders their combination with ensembles. To correct this, we propose CAMixup, which applies Mixup to only those classes on which the model tends to be over-confident, modulated throughout training. CAMixup combines well with state-of-the-art methods. It produces new state-of-the-art calibration across CIFAR-10, CIFAR-100, and ImageNet while obtaining competitive accuracy. Appendix H points out potential future work and limitations of CAMixup." }, { "heading": "A DATASET DETAILS", "text": "CIFAR & CIFAR-C: We consider two CIFAR datasets, CIFAR-10 and CIFAR-100 (Krizhevsky, 2009). Each consists of a training set of size 50K and a test set of size 10K. They are natural images with 32x32 pixels. Each class has 5,000 and 500 training images on CIFAR-10 and CIFAR-100 respectively. In our experiments, we follow the standard data pre-processing schemes, including zero-padding with 4 pixels on each side, random crop, and horizontal flip (Romero et al., 2015; Huang et al., 2016; Srivastava et al., 2015). If a training method, such as CAMixup, requires a validation dataset, we set aside a separate 2,500 images from the 50K training images as the validation set.

It is important to test whether models are well calibrated under distribution shift. The CIFAR-10 corruption dataset (Hendrycks and Dietterich, 2019a) is designed to accomplish this. The dataset consists of 15 types of corruptions to the images. Each corruption type has 5 intensities. Thus, in total CIFAR-10C has 75 corrupted datasets. Notice that the corrupted dataset is used as a test set without training on it. Ovadia et al. (2019) benchmarked a number of methods on CIFAR-10 corruption. Similarly, we can apply the same corruptions to the CIFAR-100 dataset to obtain CIFAR-100C.

ImageNet & ImageNet-C: We used the ILSVRC 2012 classification dataset (Deng et al., 2009), which consists of a total of 1.2 million training images, 50,000 validation images and 150,000 testing images. Images span 1,000 classes. We follow the data augmentation scheme in He et al. (2016), such as random crop and random flip, to preprocess the training images. At test time, we apply a 224x224 center crop to images. Similarly to CIFAR-C, we apply 15 corruption types with 5 intensities each to obtain ImageNet-C (Hendrycks and Dietterich, 2019b)." }, { "heading": "B HYPERPARAMETERS IN SECTION 3", "text": "We kept the same set of hyperparameters as the BatchEnsemble model in Wen et al. (2020). All hyperparameters can be found in Table 3. The most sensitive hyperparameters we found are whether to use ensemble batch norm, which applies a separate batch norm layer for each ensemble member, and the value of random_sign_init, which controls the standard deviation of the Gaussian-distributed initialization of s and r. We kept BatchEnsemble CIFAR-10 the same as Wen et al. (2020), which does not deploy ensemble batch norm. We enable ensemble batch norm on CIFAR-100 and ImageNet. This allows us to use a larger standard deviation in the initialization. The random_sign_init is −0.5 on CIFAR-10 and −0.75 on both CIFAR-100 and ImageNet.
In the code, we use a negative value to denote the standard deviation of the Gaussian distribution (a positive value instead initializes with a Bernoulli distribution under that probability). In our case, we only use negative random_sign_init values, which means we only consider Gaussian-distributed initialization in this work." }, { "heading": "C EXCESSIVE UNDER-CONFIDENCE ON SYNTHETIC DATA", "text": "To further understand the confidence surface of Mixup + ensembles, we provide a visualization in Fig. 8. We trained on a synthetic dataset consisting of 5 clusters, each with a different radius. We ensemble over 4 independently trained copies of 3-layer MLPs. We plot the softmax probability surfaces of the Mixup-Single model, Deep-Ensemble, and Mixup-Ensemble. The softmax probabilities represent the model confidence. Fig. 8c shows that Deep-Ensemble predictions are extremely confident except at the decision boundaries. Fig. 8b displays lower confidence than Deep-Ensemble. This is beneficial in the single-model context, because single deep neural networks tend to be over-confident and Mixup can partially correct this bias. On the other hand, Fig. 8d shows that Mixup-Ensemble is only confident in a very constrained area around the training clusters, leading to an overall under-confident classifier, which confirms our postulation of compounding under-confidence." }, { "heading": "D MORE CALIBRATION RESULTS OF MIXUP-BATCHENSEMBLE", "text": "In Section 3.1, we demonstrated that combining Mixup and ensembles leads to worse calibration on the test set. In this appendix section, we complement the above conclusion with an analysis on corrupted datasets and with data-augmentation techniques like AugMix.

D.1 SUPPLEMENTARY RESULTS ON CAMIXUP

In this section, we provide supplementary results on CAMixup. Fig. 2 shows that combining Mixup and BatchEnsemble leads to excessive under-confidence. In Fig. 9a, we show that our proposed CAMixup fixes this issue by correcting the confidence bias. This explains why CAMixup achieves better calibration on the in-distribution test set. As demonstrated in Section 4.2, Mixup improves model out-of-distribution performance because of its strong regularization effect. We showed that our proposed CAMixup inherits Mixup's improvement on CIFAR-10-C. Fig. 9b and Fig. 9c show that this conclusion seamlessly transfers to CIFAR-100-C. We also supplement Fig. 5 with Table 4 and Table 5, illustrating the detailed numbers.

D.2 SUPPLEMENTARY RESULTS ON AUGMIX

In Section 3.1, we showed that Mixup cannot be combined with ensembles without sacrificing in-distribution calibration. As discussed in Section 2.3, AugMix only uses label-preserving transformations and does not modify the labels. Intuitively, it does not reduce model confidence. We support this intuition with Fig. 10, which shows that AugMix does not lead to under-confidence. Therefore it can be combined with ensembles without any calibration issue.

In Table 2, we showed that combining AugMix and Mixup leads to worse calibration due to under-confidence, although AugMix by itself does not cause it. To gain insight beyond scalar summaries, we also provide a reliability diagram analysis. In Figure 10, we show that the under-confidence issue of AugMixup (AugMix + Mixup) still exists. This suggests that applying CAMixup to AugMix can correct the under-confidence bias, as shown in Fig. 10a and Fig. 10b. Our proposed CAMixup allows us to compound the performance of ensembles and data augmentation to achieve the best possible performance."
}, { "heading": "E DEEP ENSEMBLES WITH MIXUP", "text": "In Section 4, we showed that CAMixup improves Mixup BatchEnsemble calibration on the test set without undermining its calibration under distribution shift. In this section, we show that the improvement can also be observed on deep ensembles. In Fig. 11, we show that the under-confidence bias we observed for Mixup + BatchEnsemble also exists for Mixup + deep ensembles, with an even more pronounced trend. Beyond the commonly used ECE measure, we also explore other calibration measures. These further confirm our under-confidence intuition. We provide a brief explanation of how to calculate ACE, SCE and TACE.

ACE is the same as ECE except for the binning scheme: rather than dividing the confidence range into equally spaced bins, ACE chooses an adaptive scheme which spaces the bin intervals so that each contains an equal number of predictions. SCE is the same as ECE except that it accounts for all classes in the calibration measure, rather than just the class with maximum probability. Softmax predictions include many infinitesimal probabilities, and these tiny predictions can wash out the calibration score; to address this, TACE sets a threshold so that only predictions with large predictive probability are included.

We present the results of Mixup, CAMixup, AugMix, AugMixup and AugCAMixup on deep ensembles in Table 6. We notice that the improvement of CAMixup on deep ensembles is smaller than its improvement on BatchEnsemble. We postulate that this is because Mixup + deep ensembles is much worse calibrated than Mixup + BatchEnsemble. For example, AugMixup + deep ensembles achieves 2.71% and 6.86% ECE on CIFAR-10 and CIFAR-100. Meanwhile, AugMixup + BatchEnsemble achieves 1.71% and 4.19%. Thus, even though CAMixup improves the calibration of Mixup + deep ensembles, it still cannot beat AugMix + deep ensembles. As a result, when we say we close the calibration gap between BatchEnsemble and deep ensembles, we are comparing AugCAMixup BatchEnsemble (BatchEnsemble + CAMixup + AugMix) to AugMix deep ensembles. This is because AugMix deep ensembles achieve the best calibration among all variants we tried. How to completely fix the under-confidence in deep ensembles is a natural extension of this work. Since we focus on bridging the calibration gap between BatchEnsemble and deep ensembles, we leave a complete fix for deep ensembles to future work." }, { "heading": "F METRICS OTHER THAN ECE", "text": "ECE is the standard metric in calibration, but it is a biased estimate of true calibration (Vaicenavicius et al., 2019). Relying heavily on the ECE metric might therefore lead to inconsistent conclusions. In this section, we compute the calibration error with recently proposed estimators which reduce the bias of ECE: the debiased calibration estimator (DCE) (Kumar et al., 2019) and SKCE (Widmann et al., 2019). Fig. 12 shows that our conclusions in the main text are also supported by these two recently proposed calibration estimators. In particular, the improvement of the proposed CAMixup over Mixup on the test set is even larger than what ECE reflects in Fig. 5. Table 7 lists the specific numbers used in Fig. 12." }, { "heading": "G CAMIXUP WITH TEMPERATURE SCALING", "text": "See Fig. 13." }, { "heading": "H LIMITATIONS AND FUTURE WORK", "text": "We describe limitations of our work, signalling areas for future research. One limitation of CAMixup is that all examples in the same class still share the same Mixup coefficient.
This leaves room for developing more fine-grained adaptive Mixup mechanisms, such as adapting the Mixup coefficient per example. This relates to an open research question: how do you measure the training difficulty of a data point given a deep network? (Toneva et al., 2018; Agarwal and Hooker, 2020) Another limitation, shown in Appendix E, is that CAMixup still cannot fully fix the miscalibration of Mixup + deep ensembles, since Mixup + deep ensembles leads to even worse calibration than Mixup + BatchEnsemble. This raises a harder question which CAMixup cannot completely solve; understanding why Mixup is worse on deep ensembles, and how to address it, is left to future work. Next, we determine whether to use Mixup based on the reliability (Mean Accuracy − Mean Confidence) of each class on a validation set. One concern is that CAMixup might not scale well to a large number of classes; fortunately, we showed that it works on problems with up to 1000 classes (ImageNet). Additionally, Mixup has been most successful in the vision domain, hence our focus, with preliminary success on tabular data and natural language processing (Zhang et al., 2018; Guo et al., 2019). Assessing whether CAMixup and ensembling techniques translate to text is an interesting area.

Algorithm 1 Forgetting Count Based CAMixup

initialize prevacc_i = 0 for i ∈ D
initialize forgetting counts T[i] = 0 for i ∈ D
initialize MixupCoeff[i] = 0 for i ∈ D
while training do
    B ∼ D  # sample a minibatch
    apply Mixup on B based on MixupCoeff
    for example_i ∈ B do
        compute acc_i
        if prevacc_i > acc_i then
            T[i] = T[i] + 1
        end if
        prevacc_i = acc_i
    end for
    gradient update classifier on B
    rank = sort(T)
    threshold = rank[|D| // 2]
    for example_i ∈ B do
        if T[i] > threshold then
            MixupCoeff[i] = a
        else
            MixupCoeff[i] = 0
        end if
    end for
end while

We took a first step in developing a more fine-grained adaptive Mixup mechanism. Recall that class-based CAMixup calculates the reliability (Accuracy − Confidence) at the end of each epoch and then decides whether to apply Mixup to each class (illustrated in Fig. 4). This requires extra computation on a validation dataset, and it assigns a uniform Mixup coefficient within each class. By leveraging the recently developed forgetting count (Toneva et al., 2018), we can instead adjust the Mixup coefficient of each example based on its forgetting count. The intuition is that a high forgetting count indicates the model tends to forget that example; to achieve better calibration, we should place low confidence on it. The forgetting-count-based CAMixup algorithm is presented in Algorithm 1. In summary, we first calculate the forgetting count for each training example and take the median of these counts as the threshold. Then, CAMixup applies Mixup to the training examples whose forgetting counts are higher than the median.

We provide preliminary results on CIFAR-10 in Fig. 14, which demonstrate that forgetting-count-based CAMixup outperforms class-based CAMixup on most metrics across BatchEnsemble and MC-dropout. One exception is that it underperforms on test calibration with MC-dropout. We could not observe the same improvement on CIFAR-100. We postulate that the forgetting count is not as reliable on CIFAR-100 as it is on CIFAR-10, leading to the inconsistent results. We leave the question of how to improve forgetting-count-based CAMixup on CIFAR-100 to future work." } ]
2021
null
SP:4f6e5411e0d5a017100c74a3842fed4ff323d883
[ "This paper mainly answers a fundamental question: what is the role of depth in convolutional networks? Specifically, the authors present an empirical analysis of the impact of depth on generalization in CNNs. Experiments on CIFAR10 and ImageNet32 demonstrate that test performance worsens beyond a critical depth. My detailed comments are as follows." ]
Over-parameterization is a recent topic of much interest in the machine learning community. While over-parameterized neural networks are capable of perfectly fitting (interpolating) training data, these networks often perform well on test data, thereby contradicting classical learning theory. Recent work provided an explanation for this phenomenon by introducing the double descent curve, showing that increasing model capacity past the interpolation threshold can lead to a decrease in test error. In line with this, it was recently shown empirically and theoretically that increasing neural network capacity through width leads to double descent. In this work, we analyze the effect of increasing depth on test performance. In contrast to what is observed for increasing width, we demonstrate through a variety of classification experiments on CIFAR10 and ImageNet32 using ResNets and fully-convolutional networks that test performance worsens beyond a critical depth. We posit an explanation for this phenomenon by drawing intuition from the principle of minimum norm solutions in linear networks.
[]
[ { "authors": [ "Sanjeev Arora", "Simon S. Du", "Wei Hu", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Ji Xu" ], "title": "Two models of double descent for weak features", "venue": null, "year": 2019 }, { "authors": [ "K Bibas", "Y. Fogel", "M. Feder" ], "title": "A new look at an old problem: A universal learning approach to linear regression", "venue": null, "year": 1905 }, { "authors": [ "Patryk Chrabaszcz", "Ilya Loshchilov", "Frank Hutter" ], "title": "A downsampled variant of imagenet as an alternative to the cifar datasets", "venue": "arXiv preprint arXiv:1707.08819,", "year": 2017 }, { "authors": [ "Heinz Werner Engl", "Martin Hanke", "Andreas Neubauer" ], "title": "Regularization of Inverse Problems, volume 375", "venue": "Springer Science & Business Media,", "year": 1996 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The Elements of Statistical Learning, volume 1", "venue": null, "year": 2001 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J Tibshirani" ], "title": "Surprises in highdimensional ridgeless least squares interpolation", "venue": "arXiv preprint arXiv:1903.08560,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference in Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis, University of Toronto,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet Classification with Deep Convolutional Neural Networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Partha P. 
Mitra" ], "title": "Understanding overfitting peaks in generalization error: Analytical risk curves for l2 and l1 penalized interpolation", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 1906 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference in Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Benham Neyshabur" ], "title": "Towards learning convolutions from scratch", "venue": "arXiv preprint arXiv:2007.13657,", "year": 2020 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "Optimization landscape and expressivity of deep cnns", "venue": "In International Conference in Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Adityanarayanan Radhakrishnan", "Mikhail Belkin", "Caroline Uhler" ], "title": "Memorization in overparameterized autoencoders", "venue": "In ICML Workshop on Identifying and Understanding Deep Learning Phenomena,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shenjie Wang", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson", "Rich Caruana" ], "title": "Do deep convolutional nets really need to be deep and convolutional", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Roman Vershynin" ], "title": "High-Dimensional Probability: An Introduction with Applications in Data Science, volume 1", "venue": null, "year": 2018 }, { "authors": [ "Lechao Xiao", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Samuel Schoenholz", "Jeffrey Pennington" ], "title": "Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Lechao Xiao", "Jeffrey Pennington", "Samuel Schoenholz" ], "title": "Disentangling trainability and generalization in deep neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Bing Xu", "Naiyan Wang", "Tianqi Chen", "Mu Li" ], "title": "Empirical Evaluation of Rectified Activations in Convolution Network", "venue": null, "year": 2015 }, { "authors": [ "Zitong Yang", "Yaodong Yu", "Chong You", "Jacob Steinhardt", "Yi Ma" ], "title": "Rethinking Bias-Variance Trade-off for Generalization of Neural Networks", "venue": "In International Conference in Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Ke Ye", "Lek-Heng Lim" ], "title": "Every matrix is a product of toeplitz matrices", "venue": "Foundations of Computational Mathematics,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Yoram Singer" ], "title": "Identity crisis: Memorization and generalization under extreme overparameterization", "venue": "In International Conference on Learning Representations 
(ICLR),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Traditional statistical learning theory argues that over-parameterized models will overfit training data and thus generalize poorly to unseen data (Hastie et al., 2001). This is explained through the bias-variance tradeoff; as model complexity increases, so will variance, and thus more complex models will generalize poorly. Modern deep learning models, however, have been able to achieve state-of-the-art test accuracy by using an increasing number of parameters (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016). In fact, while over-parameterized neural networks have enough capacity to interpolate randomly labeled training data (Zhang et al., 2017), in practice training often leads to interpolating solutions that generalize well.

To reconcile this apparent conflict, Belkin et al. (2019a) proposed the double descent risk curve, where beyond the interpolation threshold, the risk decreases as model complexity increases. In neural networks, model complexity has thus far mainly been analyzed by varying network width. Indeed, in line with double descent, Yang et al. (2020); Nakkiran et al. (2020); Belkin et al. (2019a) demonstrated that increasing width beyond the interpolation threshold while holding depth constant can decrease test loss.

However, model complexity in neural networks can also be increased through depth. In this work, we study the effect of depth on test performance while holding network width constant. In particular, we focus on analyzing the effect of increasing depth in convolutional networks. These networks form the core of state-of-the-art models used for image classification and serve as a prime example of a network with layer constraints. In this paper we answer the following question: What is the role of depth in convolutional networks?

In contrast to what has been shown for increasing model complexity through width, we demonstrate that the test performance of convolutional networks worsens when increasing network depth beyond a critical point, suggesting that double descent does not happen through depth. Figure 1 demonstrates the difference between increasing width and depth in ResNets (He et al., 2016) trained on CIFAR10. In particular, Figure 1a shows that increasing width leads to a decrease in test error even when training accuracy is 100%. This effect is captured by the double descent curve. On the other hand, Figure 1b demonstrates that training ResNets of increasing depth but fixed width leads to an increase in test error. Since network depth is a form of model complexity, this behavior contradicts what is expected based on double descent. It is therefore critical to carefully analyze and understand this phenomenon.

The main contributions of our work are as follows:

1. We conduct a range of experiments in the classification setting on CIFAR10 and ImageNet32 using ResNets, fully-convolutional networks, and convolutional neural tangent kernels, and consistently demonstrate that test performance worsens beyond a critical depth (Section 3). In particular, in several settings, we observe that the test accuracy of convolutional networks is even worse than that of fully connected networks as depth increases.

2. To gain intuition for this phenomenon we analyze linear neural networks. We demonstrate that increasing depth in linear neural networks with layer constraints (e.g. convolutional networks or Toeplitz networks) leads to a decrease in the Frobenius norm and stable rank of the resulting linear operator.
This implies that increasing depth leads to poor generalization in settings where solutions of lower Frobenius norm (e.g. solutions learned by linear fully connected networks) do not generalize (Section 4).

3. Against conventional wisdom, our findings indicate that increasing depth does not always lead to better generalization. Namely, our results provide evidence that the driving force behind the success of deep learning is not the depth of the models, but rather their width." }, { "heading": "2 RELATED WORK", "text": "We begin with a discussion of recent works analyzing the role of depth in convolutional networks (CNNs). Yang et al. (2020) study the bias-variance decomposition of deep CNNs and show that as depth increases, bias decreases and variance increases. This work observes that generally the magnitude of bias is greater than that of variance, and thus overall risk decreases. However, the focus of their analysis on depth is not on the interpolating regime. In fact, they posit that it is possible for deeper networks to have increased risk. We extend their experimental methodology for training ResNets and demonstrate that, indeed, deeper networks have increased risk.

Neyshabur (2020) studied the role of convolutions, but focuses on the benefit of sparsity in weight sharing. Their work analyzed the effect of depth on fully-convolutional networks, but only considered models of two depths. Urban et al. (2017) analyzed the role of depth in student-teacher CNNs, specifically by training shallow CNNs to fit the logits of an ensemble of deep CNNs. This differs from our goal of understanding the effect of depth on CNNs trained from scratch on CIFAR10; furthermore, the ensemble of CNNs they consider has only eight convolutional layers, which is much smaller than the deep ResNets we consider in our experiments.

Xiao et al. (2018) provides initial evidence that the performance of a CNN may degrade with depth; however, it is unclear whether this phenomenon is universal across CNNs used in practice or simply an artifact of their specific initialization designed to train deep CNNs. In fact, Xiao et al. (2020) establish that the convolutional neural tangent kernel (CNTK) solution approaches that of the neural tangent kernel (NTK) as depth increases. In our work, we analyze the generalization of the CNTK as a function of depth in Section 3.3. We show that as depth increases, test error monotonically decreases and then increases. Lastly, Xiao et al. (2018) Figure 4a and Xiao et al. (2020) Figure 2a,b provide examples of accuracy worsening with increasing depth in CNNs, but we demonstrate this phenomenon systematically across a number of settings.

Other works have aimed to understand the role of depth in CNNs by characterizing implicit regularization in over-parameterized deep CNNs. Radhakrishnan et al. (2019) characterized the inductive bias of over-parameterized autoencoders and demonstrated that with sufficient depth, these networks become locally contractive around training examples. Zhang et al. (2020) similarly studied the role of depth in autoencoders in the more restrictive setting of a single training example. Nguyen & Hein (2018) studied optimization in deep CNNs and showed that increasing depth increases representational power, while increasing width smooths the optimization landscape.
While each of these works identified forms of implicit regularization which occur with depth in CNNs, they did not provide an explicit connection to generalization in CNNs used for classification, which is the focus of our work.

On the other hand, previous works studying generalization via double descent have primarily focused on over-parameterization through increasing width. In particular, Belkin et al. (2019a) and Nakkiran et al. (2020) demonstrated that double descent occurs when increasing the width of neural networks trained on MNIST (LeCun et al., 1998) and CIFAR10 respectively. Several works demonstrated double descent theoretically (Hastie et al., 2019; Belkin et al., 2019b; Mitra, 2019; Muthukumar et al., 2020; Bibas et al., 2019; Bartlett et al., 2020), but analyzed linear or shallow non-linear models with an increasing number of features. Our work performs a similar empirical analysis to Nakkiran et al. (2020), but on the impact of depth instead of width in CNNs, thereby identifying contrasting behaviors between the two different ways of increasing model complexity." }, { "heading": "3 EMPIRICAL EVIDENCE IN NON-LINEAR CLASSIFIERS", "text": "We now present our main set of experiments demonstrating that the test accuracy of convolutional networks decreases when increasing depth past a critical threshold. We begin with a demonstration of this phenomenon for fully-convolutional networks applied to CIFAR10 and ImageNet32. We then demonstrate that this phenomenon holds also for ResNets applied to CIFAR10. Lastly, we show that this phenomenon occurs for the convolutional neural tangent kernel (CNTK) on subsets of CIFAR10. Our training methodology is outlined in Appendix C." }, { "heading": "3.1 IMAGE CLASSIFICATION WITH FULLY-CONVOLUTIONAL NETWORKS", "text": "To understand the role of depth in convolutional networks, we begin with a simplified model of a convolutional network, which we call the Fully-Conv Net. The architecture of a Fully-Conv Net of depth d and width w for a classification problem with c classes is depicted in Figure 9 of the Appendix and consists of the following layers:

• A convolutional layer with stride 1, 3 input filters, and w output filters, followed by batch norm (Ioffe & Szegedy, 2015) and a LeakyReLU activation (Xu et al., 2015).

• d − 1 convolutional layers with stride 1, w input filters, and w output filters, each followed by batch norm and LeakyReLU activation.

• 1 convolutional layer with stride 1, w input filters, and c output filters. This is followed by an average pool of each of the output filters to produce a c-dimensional prediction.

Crucially, this network depends only on convolutional layers, a nonlinear activation, and batch norm; it does not depend on other components commonly found in deep learning architectures such as residual connections, dropout, downsampling, or fully connected layers. We note that this model is not designed to necessarily perform well, but rather to isolate and understand the effect of increasing the number of convolutional layers.
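A minimal PyTorch sketch of this architecture follows (our own rendering of the description above; the 3 × 3 kernel size with padding 1 is an assumption consistent with stride-1 convolutions that preserve spatial size).

import torch.nn as nn

def fully_conv_net(depth, width, num_classes):
    """Fully-Conv Net: depth stride-1 conv layers (each with batch norm and
    LeakyReLU), a final conv to num_classes filters, and a global average pool."""
    layers = [nn.Conv2d(3, width, kernel_size=3, stride=1, padding=1),
              nn.BatchNorm2d(width), nn.LeakyReLU()]
    for _ in range(depth - 1):
        layers += [nn.Conv2d(width, width, kernel_size=3, stride=1, padding=1),
                   nn.BatchNorm2d(width), nn.LeakyReLU()]
    layers += [nn.Conv2d(width, num_classes, kernel_size=3, stride=1, padding=1),
               nn.AdaptiveAvgPool2d(1), nn.Flatten()]  # average pool each output filter
    return nn.Sequential(*layers)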
We trained the Fully-Conv Net on 2, 5, and 10 classes from CIFAR10 (Krizhevsky, 2009). All experiments were performed using 5 random seeds to reduce the impact of random initialization. Models were trained using Adam (Kingma & Ba, 2015) with learning rate 10−4 for 2000 epochs, and we selected the model with the best training accuracy over the course of training. We used the Cross Entropy loss, and down-sampled images to 16 × 16 resolution to reduce the computational burden. See Appendix C for a list of all classes used. The resulting train and test accuracies are shown in Figure 2. As expected, as depth increases, training accuracy becomes 100%. However, beyond a critical depth threshold, the test accuracy begins to degrade sharply. Furthermore, the value of this critical depth appears to increase as the number of training classes increases.

In addition to CIFAR10, we also applied the Fully-Conv Net to subsets of ImageNet32 (Chrabaszcz et al., 2017), which is ImageNet downsampled to size 32 × 32. We again trained on 2, 5, and 10 classes, using the same training procedure as for CIFAR10. Training and test accuracies for ImageNet32 are shown in Figure 3. Again, we observe that as depth increases past a critical value, test performance degrades.

Remarks. When training to classify between 2 and 5 classes, the test accuracy continues to decrease even when increasing depth past the interpolation threshold, i.e. even after achieving 100% training accuracy. This is in contrast to double descent, where increasing model complexity beyond the interpolation threshold leads to an increase in test accuracy. Interestingly, as depth increases, the test accuracy approaches that of a fully connected network. While the Fully-Conv Nets were before or at the interpolation threshold for the 10 class setting in Figures 2 and 3, Figure 4 demonstrates that a similar decrease in test accuracy occurs also after the interpolation threshold for wider models which can interpolate the data." }, { "heading": "3.2 IMAGE CLASSIFICATION WITH MODERN DEEP LEARNING MODELS", "text": "To understand the effect of increasing depth in modern machine learning models, we analyzed variants of ResNet trained on CIFAR10. ResNet-18 and ResNet-34 consist of 4 stages, which are connected by downsampling operations. Each stage is composed of a number of basic blocks, each of which is two layers with a residual connection. There is a convolutional layer before the first stage, and a fully connected layer after the last stage. ResNet-18 uses 2 basic blocks in each stage, while ResNet-34 uses (3, 4, 6, 3) blocks in the stages respectively. By varying the number of blocks in each stage, we constructed a variety of ResNet models of different depths; in particular, by choosing $(n_1, n_2, n_3, n_4)$ blocks in the stages, we can construct a ResNet model of depth $2 + 2(n_1 + n_2 + n_3 + n_4)$. The width w of a model is defined to be the number of filters in the first stage; there are then (w, 2w, 4w, 8w) filters in the respective stages. See Figure 10 of the Appendix for a diagram. We trained models up to depth 50; see Appendix C for a more detailed description of the models used.
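As an illustration, variable-depth models of this form can be built with torchvision's ResNet constructor (a sketch under our assumptions: torchvision's ImageNet-style stem differs slightly from the CIFAR variant, and its base width is fixed at 64, whereas our experiments also vary the width).

from torchvision.models.resnet import ResNet, BasicBlock

def make_resnet(blocks_per_stage, num_classes=10):
    """ResNet of depth 2 + 2 * sum(blocks_per_stage); (2, 2, 2, 2) gives
    ResNet-18 and (3, 4, 6, 3) gives ResNet-34."""
    return ResNet(BasicBlock, list(blocks_per_stage), num_classes=num_classes)

model = make_resnet((3, 3, 3, 3))  # depth 2 + 2 * 12 = 26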
In Appendix D, we provide the training and test accuracies for the ResNets presented in this work. We also demonstrate that increasing depth in later blocks of ResNet leads to a more drastic increase in test error than increasing depth in earlier blocks." }, { "heading": "3.3 IMAGE CLASSIFICATION WITH THE CONVOLUTIONAL NEURAL TANGENT KERNEL", "text": "To remove the effect of additional factors in modern neural networks such as random initialization, width, batch normalization, and down-sampling, we turn to the setting of infinite width neural networks. With proper scaling as width approaches infinity, the solution given by training neural networks is given by a solution to a corresponding kernel regression problem, where the kernel is referred to as the neural tangent kernel, or NTK (Jacot et al., 2018). Arora et al. (2019) computed the NTK for convolutional networks (CNTK); we use their code for the experiments in this section.\nWe analyze the effect of depth on generalization of the CNTK. Since we are simply running kernel regression, our predictor perfectly interpolates the training data (i.e. 100% training accuracy and 0 training error). In Figure 6, we compute the CNTK with a training set of roughly 250 examples each of CIFAR10 planes and trucks. We then calculate the test loss and accuracy on a test set of roughly 250 planes and trucks. We observe that test error is monotonically decreasing up to a critical depth, after which it is monotonically increasing. This provides further evidence that such a phenomenon occurs more generally in convolutional networks. In Appendix D.6, we present\nadditional empirical evidence that the generalization of the CNTK worsens past a critical depth by considering classification problems with a varying number of classes." }, { "heading": "4 THE ROLE OF DEPTH IN LINEAR NEURAL NETWORKS", "text": "In the previous section, we observed that past a critical depth threshold the performance of CNNs degrades with depth. For Fully-Conv networks, as depth increased, test performance appeared to approach that of a fully connected network. In this section, we aim to better understand this phenomenon, and in particular determine whether the performance of CNNs of increasing depth does indeed approach that of a fully-connected network. We turn to the setting of linear neural networks to simplify this analysis. Linear neural networks are useful for analyzing deep learning phenomena since they represent linear operators but have non-convex optimization landscapes. Furthermore, the solution learned by a single layer fully connected network is well understood as it is simply the minimum Frobenius norm solution Engl et al. (1996)1.\nIn this section, we conduct experiments on linear CNNs of increasing depth in both the classification and autoencoding settings. We show that linear CNNs of increasing depth consistently produce solutions of decreasing Frobenius norm, thus approaching the solution learned by a single layer fully connected network. To connect this to generalization, we provide a specific example of a classification setting where the solution learned by a fully-connected network, and thus linear CNNs of large depth, generalizes poorly, yet shallow CNNs generalize well. We propose that a similar mechanism could also explain the decrease in test accuracy beyond a critical threshold in the nonlinear setting." }, { "heading": "4.1 PRELIMINARIES", "text": "Linear networks are models of the form f(x) = Wd · · ·W1x, where Wi ∈ Rki×ki−1 is a weight matrix. 
CNNs are a special type of a constrained neural network, where each weight matrix Wi belongs to a subspace Si ⊂ Rki×ki−1 of dimension ri. Since the product of linear operators is linear, these neural networks represent linear functions. In particular, let W = Wd · · ·W1 represent the operator of a linear network. We will analyze the Frobenius norm (square root of the sum of squares of the entries) of W and the stable rank (a surrogate for the rank) of W in our analysis. Definition 1. (Vershynin, 2018, Chapter 7) Given a matrix W ∈ Rd1×d2 , the stable rank of W is given by:\n‖W‖2F ‖W‖2 =\n∑ i σ 2 i\nσ21 ;\nwhere {σi} denote the singular values of W .\nComputing the Linear Operator Efficiently. Given a linear neural network, it is possible to compute the linear operator by first constructing the matrix representation for each layer and then multiplying Radhakrishnan et al. (2019). Instead, we use the following simpler approach for producing the operator. Given a network implementing a map from an s × s image to Rd, we reshape the s2 × s2 identity matrix as an s2 × s × s tensor (so the batch size is s2) and feed this through the network. We then reshape the output matrix to be of size d× s2 to obtain the resulting operator." }, { "heading": "4.2 LINEAR CONVOLUTIONAL CLASSIFIERS", "text": "The following experiment provides an example where fully connected networks do not generalize, and in which linear convolutional classifiers of increasing depth perform similarly to a fully connected network. Consider a toy dataset of 6 × 6 color images as shown in Figure 7a. We use 2 training examples to represent two classes: Class label 1 has a blue pixel in the upper left hand corner and class label −1 has a red pixel in the upper left hand corner. We then construct 200 test examples with 100 having a red pixel and 100 having a blue pixel in a randomly selected location in the lower right 3× 3 quadrant of the square.\n1This assumes zero initialization for the single layer network. When the network has 1 output, the minimum Frobenius norm solution is the minimum `2 norm solution.\nWhile simple, this classification setting is useful for comparing the performance of CNNs and fully connected networks. It is set up to be trivial to solve using the convolution operation, but is such that the minimum Frobenius norm solution would not be able to generalize. Indeed, as demonstrated in Figure 7b, a linear fully convolutional network with 5 layers and 32 filters per layer is able to consistently get 100% test accuracy across 5 random seeds. On the other hand, performing linear regression to learn the minimum Frobenius norm solution yields a 50% test accuracy.\nIn Figure 7b, we see that increasing the depth of the fully convolutional network leads to a degradation in test accuracy. In particular, the average test accuracy across 5 random seeds is approximately 50% for networks of depth 20 or larger, which all obtain 100% training accuracy. In Figure 7c, we compare the Frobenius norm of the corresponding linear operator across depths and see that the Frobenius norm decreases with increasing depth. This simple example demonstrates that the performance of CNNs of increasing depth approaches that of a fully connected network, which does not generalize for many image classification tasks. In Appendix D.7, we present an additional experiment with more training samples that again demonstrates a similar phenomenon." 
}, { "heading": "4.3 LINEAR AUTOENCODERS", "text": "Similar to the case of linear convolutional classifiers, we now demonstrate that increasing depth in linear convolutional autoencoders leads to a decrease in Frobenius norm and stable rank. As done in Radhakrishnan et al. (2019); Zhang et al. (2020), we begin with the simple example of a linear convolutional autoencoder trained on a single image. When there is only a single layer, the authors in Radhakrishnan et al. (2019); Zhang et al. (2020) prove that the solution must be full rank due to the sparsity and weight sharing in a convolutional layer. This is demonstrated in Figure 8d,e where the depth 1 solution has large stable rank.\nIn Figure 8a,b,d,e, we demonstrate that increasing depth in linear convolutional autoencoders leads to solutions of decreasing Frobenius norm and stable rank. In particular, we observe that the norm of the trained networks decreases to approximately that of the minimum Frobenius norm solution, which is a projection onto the training examples Radhakrishnan et al. (2019); Zhang et al. (2020).\nWhile the previous experiments have considered networks with convolutional layers, in Figure 8c, we present an example of a linear autoencoder with alternate layer constraints (Toeplitz layers) for which increasing depth again decreases the operator’s Frobenius norm and stable rank. The key similarity between this layer constraint and that of a convolutional layer is that both increase representational power through depth. Indeed, the authors in Ye & Lim (2016) proved that every matrix can be decomposed as a product of Toeplitz matrices, and thus, neural networks with Toeplitz layers are expressive through depth. We note that showing that every matrix can be written as a product of convolutional layers remains open, but a simple parameter counting argument as given in Radhakrishnan et al. (2019) implies that representational power increases with depth. This experiment thus\nprovides empirical evidence that the phenomenon of decreasing norm with increasing depth occurs generally for networks with layer constraints that are more expressive through depth." }, { "heading": "5 DISCUSSION", "text": "In this work, we presented an empirical analysis of the impact of depth on generalization in CNNs. We first demonstrated that in modern non-linear CNNs, increasing depth past a critical threshold led to a decrease in test accuracy. This result is in stark contrast to the role of width in CNNs, as explained by double descent. Furthermore, for Fully-Conv Nets, we observed that increasing depth led to performance comparable to that of fully connected networks.\nTo better understand this phenomenon, we analyzed the operators learned by linear CNNs and demonstrated that increasing depth in these networks led to a decrease in Frobenius norm and stable rank of the learned operator. Moreover, as depth increased, we observed that the norm of the operator approached that of the minimum Frobenius norm solution, which is the solution learned by fully connected network. As demonstrated by our example in Section 4, in settings where the minimum Frobenius norm solution does not generalize, we observe that deep convolutional networks do not generalize. 
Understanding whether the norm of the functions learned by deep non-linear CNNs approaches that of functions learned by non-linear fully connected networks is an important direction of future work that could explain the poor generalization of deep CNNs observed in this work.

Throughout this work, we consistently observed that increasing depth in CNNs beyond the interpolation threshold led to a decrease in test accuracy. Hence, our findings imply that practitioners should decrease depth in these settings to obtain better test performance. An interesting direction for future work is to understand where the critical depth threshold occurs as a function of width and number of classes. Importantly, if, as initial evidence in this paper suggests, the critical depth is correlated with the number of classes, then practitioners should use shallow, wide convolutional networks for problems with few classes." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A FULLY CONVOLUTIONAL NETWORK ARCHITECTURE" }, { "heading": "B RESNET ARCHITECTURE", "text": "" }, { "heading": "C EXPERIMENTAL DETAILS", "text": "All models were trained on an NVIDIA TITAN RTX GPU using the PyTorch library.

An anonymized repository with the code used for this paper can be found here: https://anonymous.4open.science/r/ebf33ffa-565e-408d-ae5f-12d91f942000/" }, { "heading": "Model Depth Blocks per Stage", "text": "" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "D.1 ADDITIONAL RESNET PLOTS", "text": "In Figure 12, we plot the train and test losses of all ResNet models used (for widths 16, 32, 64). Additionally, in Figure 13 we plot the accuracies of all ResNet models." }, { "heading": "D.2 EFFECT OF DEPTH IN MODELS WITH SMALL WIDTHS", "text": "The only model in which test loss continues to increase is the width 8 model. We argue this is because the width 8 model is not sufficiently over-parameterized; in fact, in Figure 14, we see that the width 8 model is unable to reach zero training loss, while all the other models are after sufficient depth." }, { "heading": "D.3 EFFECT OF DOWNSAMPLING", "text": "In Figure 15 we compare a ResNet model where we increase the number of blocks in the first stage versus a model where we increase the number of blocks in the third stage. We observe that the model where the third stage blocks are increased performs worse. This is likely because adding a block in a later stage, after downsampling, increases the effective depth of the model more than adding a block in an earlier stage." }, { "heading": "D.4 EFFECT OF NUMBER OF SAMPLES", "text": "Figure 16 shows the test losses when training the width 32 ResNet model on 500 samples per class (1/10 of CIFAR10). The number of training epochs is increased accordingly. We observe that test loss increases as depth increases, showing that this phenomenon is robust to changes in sample size." }, { "heading": "D.5 EFFECT OF KERNEL SIZE", "text": "Another form of overparameterization is increasing the kernel size for convolutional filters. In Figure 17, we train ResNet-10 and ResNet-18 of width 32 and varying kernel sizes, and observe that as kernel size increases, test loss increases. This is consistent with our proposed explanation based on expressivity, since increasing kernel size increases representational power independent of depth." }, { "heading": "D.6 ADDITIONAL CNTK EXPERIMENTS", "text": "We also train the CNTK on subsets of CIFAR10 of varying number of classes.
We use 100 train and 100 test examples per class, and train on 2 classes (birds and deer), 5 classes (cats, dogs, horses, birds, and deer), and 10 classes (all of CIFAR10). The test losses and accuracies are shown in Figure 18. Again, we see that generalization is unimodal, with test loss decreasing until a critical depth and increasing afterwards, which is in agreement with our main CNTK experiment in Figure 6.

We note that training the CNTK for large depths is computationally prohibitive. The runtime scales quadratically in the number of training samples; furthermore, training the depth 500 CNTK on 1 GPU for 500 train and 500 test samples took approximately 2 days." }, { "heading": "D.7 ADDITIONAL LINEAR CONVOLUTIONAL NETWORK EXPERIMENTS", "text": "[Figure omitted: panel (a) plots test accuracy against depth, panel (b) plots the norm of the operator against depth.]

Figure 19: An additional toy example demonstrating that increasing depth in linear convolutional networks leads to operators of decreasing ℓ2 norm, which manifests as a decrease in test accuracy. Instead of having only 1 training sample of each class, we now sample 4 from each class randomly from the upper left quadrant of a 6 × 6 square. Our network uses 64 filters per layer, with a kernel size of 3, and zero padding. (a) The training and test performance of linear convolutional networks of varying depth across 3 random seeds. The test accuracy of the minimum ℓ2 norm solution for this problem is shown as a dashed black line. (b) The ℓ2 norm of the operator with varying depth. The norm of the minimum ℓ2 norm solution for this problem is shown as a dashed black line." }, { "heading": "D.8 TEST LOSSES FOR FULLY CONVOLUTIONAL EXPERIMENTS", "text": "[Figure omitted: test loss against depth for (a) 2 classes, (b) 5 classes, (c) 10 classes.]

Figure 20: Test losses for models in Figure 2 (on CIFAR10). The error for the convolutional networks is in blue and that of fully connected networks is in red." } ]
2020
DO DEEPER CONVOLUTIONAL NETWORKS PERFORM BETTER?
SP:975e5116fe8c4160a6e0c875044d95ee569208a9
[ "This paper aims to evaluate the performance of seven automated labeling algorithms in terms of accuracy. The authors conducted a set of experiments on six datasets from different domains under two typical settings where 10% and 50%of labels in the datasets are available. Experimental results show that the algorithms label spreading with KNN perform better in the aggregated results, the active learning algorithms QBC and query instance uncertainty sample perform better when 10% of labels available." ]
The lack of labeled data is a major problem in both research and industrial settings since obtaining labels is often an expensive and time-consuming activity. In the past years, several machine learning algorithms were developed to assist and perform automated labeling in partially labeled datasets. While many of these algorithms are available in open-source packages, there is no research that investigates how these algorithms compare to each other in different types of datasets and with different percentages of available labels. To address this problem, this paper empirically evaluates and compares seven algorithms for automated labeling in terms of accuracy. We investigate how these algorithms perform in six different and well-known datasets with three different types of data, images, texts, and numerical values. We evaluate these algorithms under two different experimental conditions, with 10% and 50% labels of available labels in the dataset. Each algorithm, in each dataset for each experimental condition, is evaluated independently ten times with different random seeds. The results are analyzed and the algorithms are compared utilizing a Bayesian Bradley-Terry model. The results indicate that while the algorithms label spreading with K-nearest neighbors perform better in the aggregated results, the active learning algorithms query by instance QBC and query instance uncertainty sample perform better when there is only 10% of labels available. These results can help machine learning practitioners in choosing optimal machine learning algorithms to label their data.
[]
[ { "authors": [ "Ralph Allan Bradley", "Milton E Terry" ], "title": "Rank analysis of incomplete block designs: I. the method of paired comparisons", "venue": null, "year": 1952 }, { "authors": [ "Bob Carpenter", "Andrew Gelman", "Matthew D Hoffman", "Daniel Lee", "Ben Goodrich", "Michael Betancourt", "Marcus Brubaker", "Jiqiang Guo", "Peter Li", "Allen Riddell" ], "title": "Stan: A probabilistic programming language", "venue": "Journal of statistical software,", "year": 2017 }, { "authors": [ "Manuela Cattelan" ], "title": "Models for paired comparison data: A review with emphasis on dependent data", "venue": "Statistical Science,", "year": 2012 }, { "authors": [ "Tat-Seng Chua", "Jinhui Tang", "Richang Hong", "Haojie Li", "Zhiping Luo", "Yantao Zheng" ], "title": "Nus-wide: a real-world web image database from national university of singapore", "venue": "In Proceedings of the ACM international conference on image and video retrieval,", "year": 2009 }, { "authors": [ "Yoav Freund", "H Sebastian Seung", "Eli Shamir", "Naftali Tishby" ], "title": "Selective sampling using the query by committee algorithm", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Matthew D Hoffman", "Andrew Gelman" ], "title": "The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "Jonathan J. Hull" ], "title": "A database for handwritten text recognition research", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1994 }, { "authors": [ "Ajay J Joshi", "Fatih Porikli", "Nikolaos Papanikolopoulos" ], "title": "Multi-class active learning for image classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "David D Lewis", "Jason Catlett" ], "title": "Heterogeneous uncertainty sampling for supervised learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "Sachin Pawar", "Nitin Ramrakhiyani", "Swapnil Hingmire", "Girish K Palshikar" ], "title": "Topics and label propagation: Best of both worlds for weakly supervised text classification", "venue": "In International Conference on Intelligent Text Processing and Computational Linguistics,", "year": 2016 }, { "authors": [ "Burr Settles" ], "title": "Active learning. 
morgan claypool", "venue": "Synthesis Lectures on AI and ML,", "year": 2012 }, { "authors": [ "H Sebastian Seung", "Manfred Opper", "Haim Sompolinsky" ], "title": "Query by committee", "venue": "In Proceedings of the fifth annual workshop on Computational learning theory,", "year": 1992 }, { "authors": [ "Jinhui Tang", "Richang Hong", "Shuicheng Yan", "Tat-Seng Chua", "Guo-Jun Qi", "Ramesh Jain" ], "title": "Image annotation by k nn-sparse graph-based label propagation over noisily tagged web images", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2011 }, { "authors": [ "Heather L Turner", "Jacob van Etten", "David Firth", "Ioannis Kosmidis" ], "title": "Modelling rankings in r: The plackettluce package", "venue": "Computational Statistics,", "year": 2020 }, { "authors": [ "Zheng-Jun Zha", "Tao Mei", "Jingdong Wang", "Zengfu Wang", "Xian-Sheng Hua" ], "title": "Graph-based semisupervised learning with multiple labels", "venue": "Journal of Visual Communication and Image Representation,", "year": 2009 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani" ], "title": "Learning from labeled and unlabeled data with label propagation", "venue": null, "year": 2002 }, { "authors": [ "Xiaojin Jerry Zhu" ], "title": "Semi-supervised learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Supervised learning is the most commonly used machine learning paradigms. There are problems with supervised learning and machine learning in general. The first problem is that machine learning requires huge amounts of data. Secondly, supervised learning needs labels in the data. In a case study performed with industry, several labeling issues were found (Anonymous, 2020a).\nA recent systematic literature review was conducted to see what type of machine learning algorithms exist to make the labeling easier. A recent systematic literature review investigated the use of Semisupervised learning and Active learning for automatic labeling of data (Anonymous, 2020b). From those results the authors concluded which active and semi-supervised learning algorithms were the most popular and which datatypes they can be used on. However, even if there has been work done on active and semi-supervised learning, these learning paradigms are still very new for many companies and consequentially seldomly used.\nUtilizing a simulation study we evaluated seven semi-supervised and active learning algorithms on six datasets of different types, numerical, text and image data. Implementing a Bayesian Bradley Terry model we ranked the algorithms according to accuracy and effort.\nThe contribution of this paper is to provide a taxonomy of automatic labeling algorithms and an empirical evaluation of algorithms in the taxonomy evaluated across two dimensions: Performance, how accurate the algorithm is, and Effort, how much manual work has to be done from the data scientist.\nThe remainder of this paper is organized as follows. In the upcoming section we provide the an overview about semi-supervised and active learning algorithms and how they work. In section 3 we will describe our study, how we preformed the simulations, what datasets and source code we used,\nand what kind of metrics we used to evaluate performance, effort and applicability. In section 4 we provide the results from the simulation study and finally, we will interpret the results and conclude the paper in section 5." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 ACTIVE LEARNING", "text": "Suppose a large unlabeled dataset is to be used for training a classification algorithm. Active Learning (AL),poses query strategies on the data and selects points to be labeled according to a measure of informativeness called a Query Strategy. After the instances has been labeled with the help of the oracle, the machine learning algorithm is trained with this newly labeled data. If the learner thinks that the accuracy of the algorithm is too low and that the accuracy can be improved, the learner will request new and or replace some of the old labels. The algorithm will then be re-trained and evaluated once again. This procedure will continue iterative until some other stopping criteria has been reached. As a reference on AL, the reader is recommended to look at other sources such as (Settles, 2012). We shall now present the query strategies that we used in this text.\nUncertainty Sampling is according to (Anonymous, 2020b) the most commonly used active learning strategy. The idea of this approach is query the instances that we are the least certain about and then label these. Uncertainty sampling strategies are very commonly used and work especially well for probabilistic algorithms such as logistic regression according to (Lewis & Catlett, 1994). 
(Lewis & Catlett, 1994) concluded that uncertainty sampling has the ability to outperform random sampling by evaluating and comparing the two on a text classification dataset, and (Joshi et al., 2009) concluded the same on image data by comparing accuracy scores of two uncertainty-sampling based methods and random sampling.

Query-by-Committee (QBC) means that we train a committee of classifiers and then query the instance on which the committee disagrees most. We add the newly labeled instance to the labeled training data, retrain the algorithm on the new training set, and repeat this procedure. What is important here is the way we measure disagreement. Some ways to measure disagreement are through entropy, vote-entropy and KL divergence (Settles, 2012). QBC is relatively straightforward to implement and is applicable to any basic machine learning model. (Seung et al., 1992) and (Freund et al., 1997) were the first to formulate QBC. In Seung et al. (1992) they use Monte Carlo simulation to show that QBC can outperform random sampling.

Random sampling is when the learner chooses to query the instances randomly and not according to any strategy. If a learner does not choose the query strategy carefully with respect to the data and machine learning algorithm, then active learning might not outperform choosing the instances randomly." }, { "heading": "2.2 SEMI-SUPERVISED LEARNING", "text": "Semi-supervised machine learning is a class of machine learning algorithms that utilizes both labeled and unlabeled data. Semi-supervised algorithms are trained on both the unlabeled and the labeled data, and in some cases they even outperform supervised classifiers. For more information on semi-supervised learning we refer the reader to (Zhu, 2005). According to (Anonymous, 2020b) the second most popular semi-supervised learning algorithms are the graph-based algorithms. The idea of these algorithms is to build a graph from the training data. These graphs contain both labeled and unlabeled instances. Let each pair (xi, yi) and (xj, yj) represent a vertex and its corresponding label. Let the edge weight wij represent the weight of the edge between vertex i and vertex j. The larger wij becomes, the more similar the labels of both vertices are. The question is then how to compute the weight wij. Two examples of graph-based methods are Label Propagation and Label Spreading (Zha et al., 2009).

Label propagation was first introduced in (Zhu & Ghahramani, 2002) and presented as follows. Given labeled and unlabeled data, define the weight matrix wij. The probabilistic transition matrix T is defined as the probability of jumping from vertex j to vertex i:

Tij := P(j → i) = wij / Σ_{k=1}^{l+u} wkj.

The matrix Y is called the label matrix, and its ith row represents the label probability distribution of vertex xi. The label propagation algorithm consists of the following steps:

1. All nodes propagate for one step: Y ← TY.

2. Row-normalize Y.

3. Clamp the labeled data.

Repeat Steps 1–3 until Y converges. (Zhu & Ghahramani, 2002) evaluates the label propagation algorithm on both synthetic data and real-world classification data (Hull, 1994), by comparing its error rates to that of kNN with k = 1. The results show that label propagation can outperform kNN when the number of labeled instances is greater than 40.
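Both label propagation and the closely related label spreading (introduced next) ship with scikit-learn, which is also how Section 3.1 implements them. A minimal sketch on hypothetical toy data, using the k-NN weighting with k = 7 (and α = 0.2 for spreading) from Section 3.1:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

# Hypothetical toy data: 200 points, 3 classes; -1 marks unlabeled instances.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y_true = rng.integers(0, 3, size=200)
y = np.where(rng.random(200) < 0.1, y_true, -1)  # keep ~10% of the labels

prop = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y)
spread = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2).fit(X, y)
pseudo_labels = spread.transduction_  # inferred labels for every instance
```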
Label propagation algorithms have been used and evaluated in image annotation (Tang et al., 2011; Chua et al., 2009) and text classification (Pawar et al., 2016).

Label Spreading was first introduced in (Zhou et al., 2004). Given a partially labeled dataset with c different labels, let F be the set of all matrices of size n × c with non-negative entries, and let F ∈ F. Each entry Fij in F depends on how we label xi. We have that

yi = argmax_{j≤c} Fij.

Define a matrix Y ∈ F such that Yij = 1 if yi = j, and Yij = 0 otherwise.

The label spreading algorithm is:

1. Define the affinity matrix W with Wij = wij if i ≠ j, and Wii = 0.

2. Define the matrix S = D^{−1/2} W D^{−1/2}, where D is a diagonal matrix with Dii = Σk Wik.

3. Iterate F(t + 1) = αSF(t) + (1 − α)Y until convergence, with α ∈ (0, 1).

4. Label xi as yi = argmax_{j≤c} F*ij, where F* is the limit of the sequence {F(t)}.

(Zhou et al., 2004) evaluates the label spreading algorithm on a toy dataset, images in the form of handwritten digits, and text classification, and concludes that it outperforms the baseline models kNN with k = 1 and SVM with an RBF kernel." }, { "heading": "2.3 THE BRADLEY TERRY MODEL", "text": "The Bradley-Terry model (Bradley & Terry, 1952; Cattelan, 2012) is one of the most commonly used models for the analysis of paired comparison data between two objects i and j for i, j = 1, ..., n. The comparison can be done by several subjects s = 1, ..., S, and the total number of possible paired comparisons is equal to n(n − 1)/2. Let ys = (ys,1,2, ..., ys,n−1,n) be the vector of outcomes of all paired comparisons; we will assume that the outcomes are independent.

Let µi ∈ R, i = 1, 2, ..., n denote a latent “strength” of the algorithm being compared. If the paired comparison can have only two outcomes and ties are randomly resolved, the probability of i beating j can be represented by

P[i beats j] = e^{µi} / (e^{µi} + e^{µj}).

Reducing the expression to a logistic regression (Bradley & Terry, 1952):

P(i over j) = logit⁻¹(µi − µj).

By estimating the strength latent variable µ, we can infer the probability of one algorithm beating the other and use this information to rank the algorithms." }, { "heading": "3 RESEARCH METHOD", "text": "In this section we present the details about the datasets that we used for our simulations, the experimental conditions, and the used algorithms.

The goal of this study is to show in detail how machine learning algorithms can be used to help with data labeling and to provide an in-depth comparison of how these different algorithms perform on different types of data. To achieve this we performed an empirical evaluation of seven different active learning and semi-supervised learning algorithms and evaluated them on six datasets under different conditions.

The main research questions that we use to evaluate the machine learning algorithms are the following.

• RQ1: How can we rank different active learning and semi-supervised learning algorithms in terms of accuracy?

• RQ2: How does the ranking of these algorithms change with the amount of manual labeling effort spent prior to applying these methods?" }, { "heading": "3.1 SIMULATIONS", "text": "As recognized in (Anonymous, 2020b), co-training/multi-view learning algorithms are the most popular, but they are based on the assumption that we can view an instance from multiple views. Graph-based algorithms are the second most common type of semi-supervised learning algorithm.
Uncertainty sampling methods are very popular active learning query strategies, followed by QBC.

Furthermore, we have included two different graph-based algorithms, Label Spreading and Label Propagation. Both methods are easy to implement using Python:

• Label Spreading using k-NN is implemented with wij = kNN, k = 7, α = 0.2 (Pyt, b).
• Label Spreading using RBF is implemented with wij = exp(−γ|xi − xj|²), γ = 20, α = 0.2.
• Label Propagation using k-NN is implemented with wij = kNN, k = 7 (Pyt, a).
• Label Propagation using RBF is implemented with wij = exp(−γ|xi − xj|²), γ = 20.
• Random Sampling, Uncertainty Sampling and QBC: Each dataset was randomly split into a training and a test set, and into an unlabeled and a labeled set. 80% of the data was allocated for training and 20% was allocated for testing. As a stopping criterion we chose to stop after 50 instances had been queried.

We chose six benchmark datasets to be used in our experiments: two numerical datasets, two text datasets and two image datasets. Due to the size of some datasets and the limited time and computational resources available, we had to reduce the number of images used in our experiments. However, we made sure we used the same ratio for the classes to get a fair estimate.

• Image data:
– Cifar-10: This dataset originally contains 60000 32x32 colored images that can be divided into ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck (cif).
– Digits: This dataset contains 1797 samples of 8x8 images containing one digit each. There are ten classes that represent which digit is contained in each image (dig).
• Text data:
– Fake and true news: This is a dataset containing 44594 instances and 5 features. The features are “title”, the title of the news article; “text”, the text of the article; “subject”, the article subject; and a column representing the label classes, “False” or “Truthful”. From this dataset we only extracted the “text” column and used it as a feature to predict the labels. The dataset can be downloaded from Kaggle (fak).
– 20news: This dataset contains 18846 instances divided into 20 classes that describe the 20 different types of news (20n).
• Numerical data:
– Iris: This dataset is a classic example for multi-class classification. It contains 150 instances across three classes (iri).
– Wine: The wine dataset is also a classic example of multi-class classification. It contains 178 instances across three classes (win).

For each dataset we ran each iteration ten times with different random seeds. Furthermore, the only parameter that we change is the number of labeled instances. To answer RQ2 we have to vary the amount of instances in the dataset that are already labeled. In our experiments we choose 10% to represent a small amount of manual effort required and 50% for a large amount of effort required. From each iteration we logged the F1-score to measure the accuracy of our predicted labels." }, { "heading": "4 RESULTS", "text": "From the simulations a dataset of 840 instances was collected. To analyze this data, we first rank each algorithm in each of the ten iterations of each dataset in each experimental condition. This data is then expanded into paired comparisons for use in the Bradley-Terry model (Turner et al., 2020; Cattelan, 2012). In this model, y is a binary variable that indicates which algorithm beats the other:

y ∼ Bernoulli(p), p = logit⁻¹(µ_algo1 − µ_algo0), µi ∼ Normal(0, 5).

The same model is used to analyze both research questions.
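As a purely illustrative aside, the same logit likelihood can be sketched in plain Python as a MAP point estimate under the Normal(0, 5) prior; this is not the Bayesian posterior sampling actually used (described next), and all comparison arrays below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit

# Hypothetical paired-comparison data: y[m] = 1 iff algo1[m] beat algo0[m].
n_algos = 7
algo1 = np.array([0, 1, 2, 3, 4, 5, 6, 0, 1])
algo0 = np.array([1, 2, 3, 4, 5, 6, 0, 2, 3])
y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1])

def neg_log_posterior(mu):
    p = expit(mu[algo1] - mu[algo0])      # p = logit^-1(mu_1 - mu_0)
    log_lik = y * np.log(p) + (1 - y) * np.log(1 - p)
    log_prior = -0.5 * (mu / 5.0) ** 2    # Normal(0, 5) prior, up to a constant
    return -(log_lik.sum() + log_prior.sum())

res = minimize(neg_log_posterior, np.zeros(n_algos))
ranking = np.argsort(-res.x)              # strongest algorithm first
```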
The model is written in Stan (Carpenter et al., 2017), which implements the No-U-Turn Hamiltonian Monte Carlo sampler (Hoffman & Gelman, 2014). We utilize the following configuration: 4 chains, a warm-up of 200 iterations and a total of 2000 iterations. The data transformation, tables, plots and the assessment of the convergence of the chains are conducted in R together with the package rstan and the collection of packages tidyverse.

The prior distributions of the µi parameters are adjusted to be weakly-informative distributions. The presented model estimates the posterior distribution of the latent strength parameters µi. In turn, sampling and ranking over the posterior distribution of the strength parameters allows us to obtain a posterior distribution of the ranks." }, { "heading": "4.1 AGGREGATED RESULTS", "text": "The descriptive statistics of the seven algorithms are summarized in Table 1. Table 1 contains the mean, standard deviation, median as well as the 5% and 95% quantiles for each method. Figure 1 provides descriptive statistics in the form of a boxplot.

Based on the Bradley-Terry model described above, the strength parameter is computed for each algorithm. Figure 2 illustrates the distribution of the strength parameters along with their High Posterior Density (HPD) intervals. To rank the algorithms we sample over the posterior distribution of the strength parameters 1000 times. The median ranks and their corresponding variances are displayed in Table 2." }, { "heading": "4.2 MANUAL EFFORT", "text": "The descriptive statistics of the seven algorithms are located in Table 3. Table 3 contains the mean, standard deviation, median as well as the 5% and 95% quantiles for each method. Figure 3 provides descriptive statistics in the form of two boxplots, one for 10% and one for 50% labels.

Based on the Bradley-Terry model described above, the strength parameter is computed for each algorithm. Figure 4a and Figure 4b illustrate the distribution of the strength parameters along with their High Posterior Density intervals for 10% and 50% labels respectively. To rank the algorithms we sample over the posterior distribution of the strength parameters 1000 times. The median ranks and their corresponding variances are displayed in Table 4 and Table 5 for 10% and 50% labels respectively.

[Figure omitted: (a) the HPD intervals of the estimated strength parameters of the algorithms with 10% available labels; (b) the HPD intervals of the estimated strength parameters of the algorithms with 50% available labels.]

Table 5: Ranking of the algorithms with 50% of available labels

Models | Median Rank | Variance of the Rank
LabelSpreadingKNN | 1 | 0.004
UncertaintySampling | 2 | 0.353
QBC | 3 | 0.359
RandomSampling | 4 | 0.161
LabelSpreadingRBF | 5 | 0.003
LabelPropagationKNN | 6 | 0.234
LabelPropagationRBF | 7 | 0.234" }, { "heading": "5 CONCLUSION", "text": "According to Table 2, Label Spreading using kNN is the highest ranking algorithm, followed by uncertainty sampling, QBC and then random sampling. The uncertainty intervals of the posterior distribution are shown in Figure 2.
The large overlap between the top three algorithms' strength parameters indicates the uncertainty in the ranking between them (which can also be observed in the large variance of each rank).

According to Table 4, the highest ranking algorithm when having access to 10% available labels is uncertainty sampling, followed by QBC, label spreading using kNN, and random sampling. When having access to 50% labels, the highest ranking algorithm is label spreading using kNN, followed by uncertainty sampling, QBC, and random sampling, according to Table 5. The uncertainty intervals of the posterior distribution are shown in Figures 4a and 4b for 10% and 50% respectively. The overlap between the top algorithms' strength parameters indicates the uncertainty in their estimates; this can also be observed in their variance.

The goal of this study is to provide a detailed overview of which machine learning algorithm should be used for automatic labeling of data in industrial contexts. Based on the results, the top four algorithms are label spreading using kNN, uncertainty sampling, QBC and random sampling. For the aggregated results, as well as when having access to 50% labeled data, the highest ranking algorithm is label spreading using kNN. However, when 10% of labels are available, uncertainty sampling ranks highest, followed by QBC. Thus this paper contributes to assisting machine learning practitioners in choosing the optimal machine learning algorithm for automatic labeling. In future work, simulations will include more datasets to provide a better understanding of how well the algorithms perform on different types of data." }, { "heading": "ACKNOWLEDGMENT", "text": "This work was partially supported by the Wallenberg AI Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation." } ]
2020
null
SP:4063187f00775058a7d47814b0062648d88f0b8d
[ "In this work, an imitation learning (AL) approach is proposed to imitate multiple active learning algorithms, in order to take their advantages to learn a better active learning algorithm. The main idea is to treat the active learning algorithms as experts and utilize the DAGGER algorithm for imitation learning. The proposed approach is evaluated on MNIST, fashion-MNIST, and Kuzushiji-MNIST, showing that the learned active learner outperforms baseline active learners, meanwhile is transferrable to other datasets." ]
Active learning (AL) prioritizes the labeling of the most informative data samples. However, the performance of AL heuristics depends on the structure of the underlying classifier model and the data. We propose an imitation learning scheme that imitates the selection of the best expert heuristic at each stage of the AL cycle in a batch-mode pool-based setting. We use DAGGER to train the policy on a dataset and later apply it to datasets from similar domains. With multiple AL heuristics as experts, the policy is able to reflect the choices of the best AL heuristics given the current state of the AL process. Our experiment on well-known datasets show that we both outperform state of the art imitation learners and heuristics.
[]
[ { "authors": [ "Jordan T. Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "In International Conference on Learning Representations (ICLR), Virtual Conference, Formerly Addis Ababa Ethiopia,", "year": 2020 }, { "authors": [ "Philip Bachman", "Alessandro Sordoni", "Adam Trischler" ], "title": "Learning algorithms for active learning", "venue": "In 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Loı̈c Barrault", "Ondřej Bojar", "Marta R. Costa-jussà", "Christian Federmann", "Mark Fishel", "Yvette Graham", "Barry Haddow", "Matthias Huck", "Philipp Koehn", "Shervin Malmasi", "Christof Monz", "Mathias Müller", "Santanu Pal", "Matt Post", "Marcos Zampieri" ], "title": "Findings of the 2019 conference on machine translation", "venue": "In 4th Conference on Machine Translation,", "year": 2019 }, { "authors": [ "William H Beluch", "Tim Genewein", "Andreas Nürnberger", "Jan M Köhler" ], "title": "The power of ensembles for active learning in image classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Hong-Min Chu", "Hsuan-Tien Lin" ], "title": "Can active learning experience be transferred", "venue": "IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature, 2018", "venue": null, "year": 2018 }, { "authors": [ "Gabriella Contardo", "Ludovic Denoyer", "Thierry Artières" ], "title": "A Meta-Learning Approach to OneStep Active-Learning", "venue": "In International Workshop on Automatic Selection, Configuration and Composition of Machine Learning Algorithms,", "year": 2017 }, { "authors": [ "Yang Fan", "Fei Tian", "Tao Qin", "Xiang-Yang Li", "Tie-Yan Liu" ], "title": "Learning to teach", "venue": "arXiv preprint arXiv:1805.03643,", "year": 2018 }, { "authors": [ "Meng Fang", "Yuan Li", "Trevor Cohn" ], "title": "Learning how to active learn: A deep reinforcement learning approach", "venue": "In Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In 33rd International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In 34th International Conference on Machine Learning (ICML), Sydney,", "year": 2017 }, { "authors": [ "Lukas Hahn", "Lutz Roese-Koerner", "Peet Cremer", "Urs Zimmermann", "Ori Maoz", "Anton Kummert" ], "title": "On the robustness of active learning", "venue": "In 5th Global Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lukas Hahn", "Lutz Roese-Koerner", "Peet Cremer", "Urs Zimmermann", "Ori Maoz", "Anton Kummert" ], "title": "On the robustness of active learning", "venue": "In 5th Global Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. 
Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "HM Sajjad Hossain", "MD Abdullah Al Haiz Khan", "Nirmalya Roy" ], "title": "Deactive: Scaling activity recognition with active deep learning", "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,", "year": 2018 }, { "authors": [ "Wei-Ning Hsu", "Hsuan-Tien Lin" ], "title": "Active learning by learning", "venue": "In 29th AAAI Conference on Artificial Intelligence (AAAI),", "year": 2015 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Minghan Li", "Xialei Liu", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Learning to rank for active learning: A listwise approach, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xin Li", "Yuhong Guo" ], "title": "Adaptive active learning for image classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2013 }, { "authors": [ "Ming Liu", "Wray Buntine", "Gholamreza Haffari" ], "title": "Learning how to actively learn: A deep imitation learning approach", "venue": "In 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Dwarikanath Mahapatra", "Behzad Bozorgtabar", "Jean-Philippe Thiran", "Mauricio Reyes" ], "title": "Efficient active learning for image classification and segmentation using a sample selection and conditional generative adversarial network", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2018 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Andriy Mnih", "Danilo J. 
Rezende" ], "title": "Variational inference for monte carlo objectives", "venue": "In 33rd International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Meta-learning for batch mode active learning", "venue": "In 6th Intl. Conf. on Learning Representations,", "year": 2018 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In 14th International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2011 }, { "authors": [ "Dan Roth", "Kevin Small" ], "title": "Margin-based active learning for structured output spaces", "venue": "Machine Learning: ECML", "year": 2006 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Burr Settles", "Mark Craven", "Soumya Ray" ], "title": "Multiple-instance active learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2008 }, { "authors": [ "Samarth Sinha", "Sayna Ebrahimi", "Trevor Darrell" ], "title": "Variational adversarial active learning", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "LMA Tonnaer" ], "title": "Active learning in vae latent space", "venue": "Eindhoven University of Technology,", "year": 2017 }, { "authors": [ "Athanasios Voulodimos", "Nikolaos Doulamis", "Anastasios Doulamis", "Eftychios Protopapadakis" ], "title": "Deep learning for computer vision: A brief review", "venue": "Comp. Intelligence and Neuroscience,", "year": 2018 }, { "authors": [ "D. Wang", "Y. Shang" ], "title": "A new active labeling method for deep learning", "venue": "In 2014 International Joint Conference on Neural Networks (IJCNN),", "year": 2014 }, { "authors": [ "Mark Woodward", "Chelsea Finn" ], "title": "Active one-shot learning", "venue": "In Advances in Neural Information Processing Systems Workshops,", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The high performance of deep learning on various tasks from computer vision (Voulodimos et al., 2018) to natural language processing (NLP) (Barrault et al., 2019) also comes with disadvantages. One of their main drawbacks is the large amount of labeled training data they require. Obtaining such data is expensive and time-consuming and often requires domain expertise.\nActive Learning (AL) is an iterative process where during every iteration an oracle (e.g. a human) is asked to label the most informative unlabeled data sample(s). In pool-based AL all data samples are available (while most of them are unlabeled). In batch-mode pool-based AL, we select unlabeled data samples from the pool in acquisition batches greater than 1. Batch-mode AL decreases the number of AL iterations required and makes it easier for an oracle to label the data samples (Settles, 2009). As a selection criteria we usually need to quantify how informative a label for a particular sample is. Well-known criteria include heuristics such as model uncertainty (Gal et al., 2017; Roth & Small, 2006; Wang & Shang, 2014; Ash et al., 2020), data diversity (Sener & Savarese, 2018), query-by-committee (Beluch et al., 2018), and expected model change (Settles et al., 2008). As ideally we label the most informative data samples at each iteration, the performance of a machine learning model trained on a labeled subset of the available data selected by an AL strategy is better than that of a model that is trained on a randomly sampled subset of the data.\nBesides the above mentioned, in the recent past several other data-driven AL approaches emerged. Some are modelling the data distributions (Mahapatra et al., 2018; Sinha et al., 2019; Tonnaer, 2017; Hossain et al., 2018) as a pre-processing step, or similarly use metric-based meta-learning (Ravi & Larochelle, 2018; Contardo et al., 2017) as a clustering algorithm. Others focus on the heuristics and predict the best suitable one using a multi-armed bandits approach (Hsu & Lin, 2015). Recent approaches that use reinforcement learning (RL) directly learn strategies from data (Woodward & Finn, 2018; Bachman et al., 2017; Fang et al., 2017). Instead of pre-processing data or dealing with the selection of a suitable heuristic they aim to learn an optimal selection sequence on a given task.\nHowever, these pure RL approaches not only require a huge amount of samples they also do not resort to existing knowledge, such as potentially available AL heuristics. Moreover, training the RL agents is usually very time-intensive as they are trained from scratch. Hence, imitation learning (IL) helps in settings where very few labeled training data and a potent algorithmic expert are available. IL aims to train, i.e., clone, a policy to transfer the expert to the related few data problem. While IL mitigates some of the previously mentioned issues of RL, current approaches are still limited with respect to their algorithmic expert and their acquisition size (including that of Liu et al. (2018)), i.e., some only pick one sample per iteration, and were so far only evaluated on NLP tasks.\nWe propose an batch-mode AL approach that enables larger acquisition sizes and that allows to make use of a more diverse set of experts from different heuristic families, i.e., uncertainty, diversity,\nexpected model-change, and query-by-committee. 
Our policy extends previous work (see Section 2) by learning at which stage of the AL cycle which of the available strategies performs best. We use Dataset Aggregation (DAGGER) to train a robust policy and apply it to other problems from similar domains (see Section 3). We show that we can (1) train a policy on image datasets such as MNIST, Fashion-MNIST, Kuzushiji-MNIST, and CIFAR-10, (2) transfer the policy between them, and (3) transfer the policy between different classifier architectures (see Section 4)." }, { "heading": "2 RELATED WORK", "text": "Next to the AL approaches for traditional ML models (Settles, 2009) also ones that are applicable to deep learning have been proposed (Gal et al., 2017; Sener & Savarese, 2018; Beluch et al., 2018; Settles et al., 2008; Ash et al., 2020). Below we discuss AL strategies that are trained on data.\nGenerative Models. Explicitly modeled data distributions capture the informativeness that can be used to select samples based on diversity. Sinha et al. (2019) propose a pool-based semi-supervised AL where a discriminator discriminates between labeled and unlabeled samples using the latent representations of a variational autoencoder. The representations are used to pick data points that are most diverse and representative (Tonnaer, 2017). Mirza & Osindero (2014) use a conditional generative adversarial network to generate samples with different characteristics from which the most informative are selected using the uncertainty measured by a Bayesian neural network (Kendall & Gal, 2017; Mahapatra et al., 2018). Such approaches are similar to ours (as they capture dataset properties) but instead we model the dataset implicitly and infer a selection heuristic via imitation.\nMetric Learning. Metric learners such as Ravi & Larochelle (2018) use a set of statistics calculated from the clusters of un-/labeled samples in a Prototypical Network’s (Snell et al., 2017) embedding space, or learn to rank (Li et al., 2020) large batches. Such statistics use distances (e.g. Euclidean distance) or are otherwise converted into class probabilities. Two MLPs predict either a quality or diversity query selection using backpropagation and the REINFORCE gradient (Mnih & Rezende, 2016). However, while they rely on statistics over the classifier’s embedding and explicitly learn two strategies (quality and diversity) we use a richer state and are not constrained to specific strategies.\nReinforcement Learning (RL). The AL cycle can be modeled as a sequential decision making problem. Woodward & Finn (2018) propose a stream-based AL agent based on memory-augmented neural networks where an LSTM-based agent learns to decide whether to predict a class label or to query the oracle. Matching Networks (Bachman et al., 2017) extensions allow for pool-based AL. Fang et al. (2017) use Deep Q-Learning in a stream-based AL scenario for sentence segmentation. In contrast to them we consider batch-mode AL with acquisition sizes ≥ 1, and work on a poolinstead of a stream-settings. While Bachman et al. (2017) propose a strategy to extend the RL-based approaches to a pool setting, they do still not work on batches. Instead, we allow batches of arbitrary acquisition sizes. Fan et al. (2018) propose a meta-learning approach that trains a student-teacher pair via RL. The teacher also optimizes data teaching by selecting labeled samples from a minibatch that lets the student learn faster. 
In contrast, our method learns to selects samples from an unlabeled pool, i.e., in a missing target scenario. The analogy of teacher-student is related, however, the objective, method and available (meta-)data to learn a good teacher (policy) are different.\nMulti-armed Bandit (MAB). Baram et al. (2004) treat the online selection of AL heuristics from an ensemble as the choice in a multi-armed bandit problem. COMB uses the known EXP4 algorithm to solve it, and ranks AL heuristics according to a semi-supervised maximum entropy criterion (Classification Entropy Maximization) over the samples in the pool. Building on this Hsu & Lin (2015) learn to select an AL strategy for an SVM-classifier, and use importance-weighted accuracy extension to EXP4 that better estimates each AL heuristics’ performance improvement, as an unbiased estimator for the test accuracy. Furthermore, they reformulate the MAB setting so that the heuristics are the bandits and the algorithm selects the one with the largest performance improvement, in contrast to COMB’s formulation where unlabeled samples are the bandits. Chu & Lin (2016) extend Hsu & Lin (2015) to a setting where the selection of AL heuristics is done through a linear weighting, aggregating experience over multiple datasets. They adapt the semi-supervised reward scheme from Hsu & Lin (2015) to work with their deterministic queries. In our own work, we instead learn a unified AL policy instead of selecting from a set of available heuristics. This allows our policy to learn interpolation between batches of samples proposed by single heuristics and furthermore, to exploit the classifier’s internal state, so that it is especially suited for deep learning models.\nImitation Learning (IL). Liu et al. (2018) propose a neural network that learns an AL strategy based on the classifier’s loss on a validation set using Dataset Aggregation (DAGGER) (Ross et al., 2011). One of their key limitations is that only a single sample is labeled during every acquisition. As the DL model is trained from scratch after every acquisition this results in a very slow active learning process and expensive expert-time is requested less efficiently (Kirsch et al., 2019; Sener & Savarese, 2018). Hence, we extend this work for batch-mode AL using a top-k-like loss function, and select more samples to increase the suitability to deep learning and its efficiency (as we do not retrain after each sample). We also incorporate recent ideas (Ash et al., 2020) to extend the state and imitate multiple AL heuristics. This is computationally more efficient and leads to better results." }, { "heading": "3 IALE: IMITATING AN ENSEMBLE OF ACTIVE LEARNERS", "text": "IALE learns an AL sampling strategy for similar tasks from multiple experts in a pool-based setting. We train a policy with data consisting of states (i.e., that includes an encoding of the labeled data samples) and best expert actions (i.e., samples selected for labeling) collected over the AL cycles. The policy is then used on a similar (but different) task. To see states that are unlikely to be produced by the experts, DAGGER (Ross et al., 2011) collects a large set of states and actions over AL iterations. The policy network is trained on all the previous states and actions after each iteration." }, { "heading": "3.1 BACKGROUND", "text": "In pool-based AL we train a model M on a dataset D by iteratively labeling data samples. Initially, M is trained on a small amount of labeled data Dlab randomly sampled from the dataset. 
The rest of the data is considered the unlabeled data pool Dpool, i.e., D = Dlab ∪ Dpool. From that point onwards, during the AL iterations a subset Dsel is selected from Dpool by using an acquisition function a(M, Dpool). The data is labeled and then removed from Dpool and added to Dlab. The size of Dsel is based on the acquisition size acq (>1 for batch-mode AL). The AL cycle continues until a labeling budget of B is reached. M is retrained after each acquisition to evaluate the performance boost with respect to the increased labeled dataset only (and not the additional training time).

The acquisition function a is a heuristic that uses the trained model M to decide which of the data samples in Dpool are most informative. For deep AL, popular heuristics include uncertainty-based MC-Dropout (Gal et al., 2017), query-by-committee-based Ensembles (Beluch et al., 2018), data-diversity-based CoreSet (Sener & Savarese, 2018), gradient-based BADGE (Ash et al., 2020) and soft-max-based Confidence- or Entropy-sampling (Wang & Shang, 2014).

MC-Dropout uses a Monte-Carlo inference scheme based on a dropout layer to approximate the model's predictive uncertainty (Gal & Ghahramani, 2016). The heuristic (Gal et al., 2017) then uses these values to select the most uncertain samples. Ensembles (Beluch et al., 2018) model predictive uncertainty using a committee of N classifiers initialized with different random seeds. However, while at inference time we need to run only N forward-passes per sample (compared to MC-Dropout performing two dozen or more Monte-Carlo passes), the training of N − 1 additional deep models can become prohibitively expensive in many use-cases. CoreSet (Sener & Savarese, 2018) aims to select diverse samples by solving the k-centers problem on the classifier's embeddings. This involves minimizing the distance between each of the unlabeled data samples and its nearest labeled sample. BADGE uses the magnitudes of the gradients (computed with pseudo labels) in a batch to select samples by uncertainty, and the gradient directions together with a k-means++ clustering to select samples by diversity. Soft-max-based heuristics (Confidence- and Entropy-sampling) use predictive uncertainty and are computationally lightweight at lower AL performance (Gal & Ghahramani, 2016; Ash et al., 2020) (Confidence selects the samples with the lowest class probability and Entropy the ones with the largest entropy of their probability distribution)." }, { "heading": "3.2 LEARNING MULTIPLE EXPERTS", "text": "Instead of using specific heuristics, we propose to learn the acquisition function using a policy network. Once the policy is trained on a source dataset it can be applied to different target datasets. Figure 1 sketches the idea. The policy network π is a Multi-Layer Perceptron (MLP) trained to predict the usefulness of labeling samples from the unlabeled data pool Dpool for training the model M, similar to an AL acquisition function. As input the policy network takes the current state, consisting of the model M's embeddings of pool data next to other elements, extending similar work (Contardo et al., 2017; Konyushkova et al., 2017; Liu et al., 2018), see below. π then outputs the action to be taken at that step. Action here refers to an AL acquisition, i.e., which of the unlabeled data samples should be labeled and added to the training data. π learns the best actions from a set of experts E which predict the best actions for a given AL state.
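To make the cycle of Section 3.1 concrete, the following schematic Python sketch shows where the acquisition function sits; `train_model`, `heuristic` and `oracle_label` are stand-in stubs (not functions from the paper or any library), and the learned policy π simply takes the place of `heuristic`:

```python
import random

def train_model(labeled):       # retrain M from scratch on D_lab
    return {"n_seen": len(labeled)}

def heuristic(model, pool, k):  # acquisition function a(M, D_pool);
    return random.sample(pool, k)  # random stand-in for uncertainty/diversity

def oracle_label(x):            # the oracle provides the missing label
    return (x, 0)

def al_cycle(data, acq=10, budget=100, seed_size=20):
    labeled = [oracle_label(x) for x in data[:seed_size]]
    pool = data[seed_size:]
    model = train_model(labeled)
    while len(labeled) < budget:
        selected = heuristic(model, pool, acq)        # D_sel, |D_sel| = acq
        labeled += [oracle_label(x) for x in selected]
        pool = [x for x in pool if x not in selected]
        model = train_model(labeled)                  # retrain each round
    return model, labeled

model, labeled = al_cycle(list(range(500)))
```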
For training the policy, a subset Dsub of the pool dataset of size n is used at each active learning iteration instead of the whole pool dataset.\nStates. As π uses the state information to make decisions, a state s should be maximally compact but still unique, i.e., different situations should have different state encodings. Our state encoding uses two types of information: (1) model-dependent parameters (that describe the state from the perspective of the model M and the already labeled samples Dlab), and (2) AL-cycle-dependent parameters (that describe the elements of the samples Dsub that we can choose to label). Together, these parameters form a minimal description of a current state.\nWe use the following parameters to describe the model-dependent aspects:\n• The mean of the already labeled data samples $\mu(M_e(D_{lab}))$: the embedding Me of a sample by M is the output of the final layer (i.e., the layer before the soft-max layer in the case of a classification model), see Figure 1. The size of this representation is independent of the (growing) size of Dlab and thus will not become a computational bottleneck.\n• The ground-truth empirical distribution of class labels\n$\vec{e}_{D_{lab}} = \left( \frac{\sum_{y \in D_{lab}} \mathbb{1}[y == 0]}{|D_{lab}|}, \ldots, \frac{\sum_{y \in D_{lab}} \mathbb{1}[y == i]}{|D_{lab}|} \right)$,\nwhich is a normalized vector of length i, i.e., the number of classes, with the percentage of occurrence per class, using the labels of the already acquired data samples.\n• M's predicted empirical distribution of class labels for the labeled data $\vec{e}_{M(D_{lab})}$ (i.e., a normalized vector as above but with predicted class labels instead of the ground truth).\nThe rationale for including both the ground-truth and the predicted empirical distribution is to enable the policy to base its decisions on the model M's prediction errors. In other words, when the model makes mistakes on already labeled samples of a class (that were part of the model's training), it likely needs more samples from that class to correct the erroneous predictions.\nThe AL-cycle-dependent parameters describe the n data samples in Dsub that we evaluate in the current iteration. For each data sample xi ∈ Dsub we calculate M's embedding Me(xi) in the same embedding space as for the already labeled samples of our model-dependent parameters. We also predict each sample's label M(xi). Similarly, we capture the expected model change via per-sample gradient information g(Me(xi)) in the embedding space for Dsub, using the method proposed by Ash et al. (2020). Hence, we consider the gradients of the loss at the embedding layer, given unlabeled samples with proxy labels, both as predictive of model uncertainty and of expected model change. The proxy label ŷ is the most likely prediction M(xi) (determined by an argmax operation in the soft-max layer). The magnitude of the loss gradient at the embedding layer then describes both the model's uncertainty and its expected change.\nThis information enables the policy to learn to select samples (1) where the model is uncertain (i.e., where it predicts the wrong labels), (2) where the model might gain the most information (i.e., the loss is high), and (3) to learn to select more diverse samples from less well represented classes using the label statistics. Hence, we describe a state s as follows:\n$s := \left[ \mu(M_e(D_{lab})),\; \vec{e}_{D_{lab}},\; \vec{e}_{M(D_{lab})},\; \begin{pmatrix} M_e(x_0) \\ \vdots \\ M_e(x_n) \end{pmatrix},\; \begin{pmatrix} M(x_0) \\ \vdots \\ M(x_n) \end{pmatrix},\; \begin{pmatrix} g(M_e(x_0)) \\ \vdots \\ g(M_e(x_n)) \end{pmatrix} \right]$ (1)\nActions. The action of the MLP is a desirability score $\rho_i$ for each unlabeled sample from Dsub, i.e., $\rho_i := \pi(s_i)$.
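To make Eq. 1 concrete, the following PyTorch sketch assembles one state vector per candidate sample. Here, embed, predict_probs and grad_embed are assumed helper callables exposing M's embeddings, soft-max outputs and BADGE-style gradient magnitudes at the embedding layer; the names are illustrative, not taken from the released code.
import torch

def build_states(embed, predict_probs, grad_embed, x_lab, y_lab, x_sub, n_classes):
    emb_lab = embed(x_lab)                      # (|Dlab|, d): Me over Dlab
    mu = emb_lab.mean(dim=0)                    # mean labeled embedding
    e_true = torch.bincount(y_lab, minlength=n_classes).float() / len(y_lab)
    y_hat = predict_probs(x_lab).argmax(dim=1)  # M's predictions on Dlab
    e_pred = torch.bincount(y_hat, minlength=n_classes).float() / len(y_lab)
    shared = torch.cat([mu, e_true, e_pred])    # model-dependent part
    emb_sub = embed(x_sub)                      # (n, d): Me over Dsub
    probs_sub = predict_probs(x_sub)            # (n, n_classes): M(x_i)
    g_sub = grad_embed(x_sub)                   # (n, d): g(Me(x_i)), proxy labels
    per_sample = torch.cat([emb_sub, probs_sub, g_sub], dim=1)
    # One state vector s_i per candidate: shared part repeated + per-sample part.
    return torch.cat([shared.expand(len(x_sub), -1), per_sample], dim=1)
Feeding these rows through π then yields the desirability scores, e.g., rho = policy(states).squeeze(1).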
We choose the samples to be labeled based on this score, i.e., we choose the top-k ranked values, where k = acq, resulting in a binary selection vector $\vec{v} = \text{top-k}(\rho_0, \cdots, \rho_n)$ with $\sum_{i=0}^{n} \vec{v}_i = acq$. The ground-truth actions provided by the experts are binary vectors of length k, where a 1 at index i means that xi should be selected for labeling (and the indices of those samples that should not be labeled are 0). From here we can use a binary cross-entropy loss to update π's weights:\n$L(\rho, \vec{t}) = -\sum_{i=0}^{n} \left[ \vec{t}_i \log(\rho_i) + (1 - \vec{t}_i) \log(1 - \rho_i) \right]$, (2)\nwhere $\vec{t}$ is the target vector provided by the best expert (similar to a greedy multi-armed bandit approach (Hsu & Lin, 2015)). This brings π's output closer to the suggestion of the best expert.\nOur IL-based approach uses the experts to turn AL into a supervised learning problem, i.e., the action of the best expert becomes the label for the current state s. Our choice of AL heuristics for the set of experts E includes particular types but is arbitrarily extendable. Using MC-Dropout, Ensemble, CoreSet, BADGE, Confidence or Entropy allows us to only minimally modify the classifier model M. π aims to learn certain derived properties from the state, such as model uncertainty or similarity. These measures are based on M's predictions and embeddings.\nOur hypothesis is that π learns to imitate the most suitable heuristic for each phase of the AL cycle, i.e., starting by relying on one type of heuristic to select samples in the beginning and later switching to fine-tuning with a different one (see also Section 4.3). This is in line with previous research that combines uncertainty- and density-based heuristics and learns an adaptive combination framework that weights them over the course of training (Li & Guo, 2013).\n3.3 POLICY TRAINING\nAlgorithm 1: Imitating Active Learner Ensembles
 1  Input: data D, labeled validation data Dval, classifier M, budget B, experts E, acquisition size acq, subset size n, probability p, states S, actions A, random policy π (acq ≥ 1, n = 100).
 2  for e = 1 ... episodes_max do
 3      Dlab, Dpool ← split(D)
 5      while |Dlab| < B do
 6          M ← initAndTrain(M, Dlab)
 7          Dsub ← sample(Dpool, n)
 8          e* ← bestExpert(E, M, Dsub, Dval)
 9          Dsel ← e*.SelectQuery(M, Dsub, acq)
10          S, A ← toState(Dsub, Dlab), toAction(Dsel)
11          if Rnd(0, 1) ≥ p then
12              // We may choose π's selection
13              Dsel ← π.SelectQuery(M, Dsub, acq)
14          Dlab ← Dlab ∪ Dsel
15          Dpool ← Dpool \ Dsel
16      Update policy using {S, A}
Our policy training builds on the intuition behind DAGGER, which is a well-known algorithm for IL that aims to train a policy by iteratively growing a dataset for supervised learning. The key idea is that the dataset includes the states that are likely to be visited over the course of solving a problem (in other words, those state and action encodings that would have been visited if we followed a hard-coded AL strategy). To this end, it is common when using DAGGER to determine a policy's next state by either following the current policy or an available expert (Ross et al., 2011). We thus grow a list of state and action pairs, and randomly either choose expert or policy selections as the action.\nEach episode of the IL cycle lasts until the AL labeling budget is reached; this is repeated for episodes_max episodes. We aggregate the states and actions over all episodes, and continually train the policy on the pairs.\nWe use DAGGER to further randomize the exploration of D.
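The following PyTorch sketch condenses one iteration of this scheme: the policy (a sketch of the three-layer MLP described in Section 4.1, under our own naming) is updated with the binary cross-entropy loss of Eq. 2 against the best expert's selection, and a DAGGER coin flip decides whose selection determines the next state. Here, expert_action is assumed to be the binary target vector t provided by bestExpert.
import random
import torch
import torch.nn as nn

def make_policy(state_dim):
    # Three dense layers; the first two with 128 neurons and ReLUs, the last
    # with a single sigmoid output neuron (cf. Section 4.1).
    return nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, 1), nn.Sigmoid())

def policy_training_step(policy, optimizer, states, expert_action, acq, p=0.5):
    # states: (n, state_dim) rows s_i for Dsub; expert_action: binary vector t
    # of length n with exactly acq ones (the best expert's selection).
    rho = policy(states).squeeze(1)               # desirability scores in [0, 1]
    loss = nn.functional.binary_cross_entropy(rho, expert_action.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # gradient step on Eq. 2
    if random.random() >= p:                      # DAGGER coin flip (line 11)
        chosen = torch.topk(rho.detach(), k=acq).indices  # policy's top-k
    else:
        chosen = expert_action.nonzero(as_tuple=True)[0]  # follow the expert
    return chosen                                 # indices of Dsel within Dsub
Note two simplifications relative to Algorithm 1: the gradient step is taken immediately for brevity, whereas Algorithm 1 first aggregates the state-action pairs over the episode and retrains π afterwards (line 16); and, as in Algorithm 1, the update itself is always driven by the best expert's action, while the coin flip only determines which selection shapes the next state.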
Instead of always following the best expert's advice, we randomly follow the policy's prediction, and thus enrich the possible states.\nOur IL approach for training π is given in Algorithm 1. At each AL cycle, we randomly sample a subset Dsub of n = 100 samples from the unlabeled pool Dpool (line 7). We find the best expert e* from the set of experts E (line 8) by extending the training dataset with each expert's selection (from Dsub) and training one classifier per expert. This means that each expert constructs one batch according to its heuristic, e.g., a batch composition could maximize model change, and queries the oracle for labels. We choose the best expert by comparing the resulting classifiers' accuracies on the labeled validation dataset. We next set its acquisition as this iteration's chosen target and store the state and action for the policy training (line 10). According to DAGGER, we then flip a coin and, depending on the probability p (line 11), either use the policy or the best expert to increase Dlab for the next iteration (line 14). After each episode we retrain π on the state and action pairs (line 16)." }, { "heading": "4 EXPERIMENTS", "text": "We first describe our experimental setup (Section 4.1). Next, we describe how we trained our policy (Section 4.2) and evaluate our approach by transferring it to test datasets, i.e., to FMNIST and KMNIST (Section 4.3). Finally, we end with a discussion of our ablation studies and the limitations of our approach (Section 4.4). The source code is publicly available at https://github.com/crispchris/iale and can be used to reproduce our experimental results." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets. We use the image classification datasets MNIST (LeCun et al., 1998), Fashion-MNIST (FMNIST) (Xiao et al., 2017), and Kuzushiji-MNIST (KMNIST) (Clanuwat et al., 2018) for our evaluation. They all consist of 70,000 grey-scale images (28×28 px) in total for 10 classes. MNIST contains the handwritten digits 0–9, FMNIST contains images of clothing (i.e., bags, shoes, etc.), and KMNIST consists of Hiragana characters, see Figure 2.\nTo evaluate IALE we train a policy π and run it (3 repetitions) on unseen datasets along with the baselines. The similarity between FMNIST and MNIST (which has previously been shown (Nalisnick et al., 2019)) and the difficulty of FMNIST (it has been shown to be a demanding dataset for AL methods (Hahn et al., 2019b)) make these datasets a perfect combination to evaluate IALE. Please find more results for transferring π's AL strategy learned from MNIST to CIFAR-10 in Appendix A.3.3.\nArchitectures of classifier M. We use the same model that has been used in previous research to evaluate AL on MNIST (Gal & Ghahramani, 2016). Our model has two convolutional layers, followed by a max pooling and a dense layer. We add dropout layers after the convolution and dense layers and use ReLU activations. A soft-max layer allows for classification. We also provide additional results for all the methods with a ResNet-18 (He et al., 2016) and with a two-layer MLP with a dense layer (256 neurons) followed by a soft-max layer (Ash et al., 2020) in Appendix A.3.2.
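For concreteness, a PyTorch sketch of such a classifier is given below; the exact channel counts, kernel sizes and dropout probabilities are our assumptions in the style of Gal & Ghahramani (2016) and are not copied from the released code. The 128-neuron dense layer doubles as the embedding Me used by the state and by CoreSet (cf. Appendix A.1.1).
import torch
import torch.nn as nn

class ClassifierM(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(),   # 28x28 -> 24x24
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),  # 24x24 -> 20x20
            nn.MaxPool2d(2), nn.Dropout(0.25))            # 20x20 -> 10x10
        self.embedding = nn.Sequential(                   # Me: 128-d embedding
            nn.Flatten(), nn.Linear(64 * 10 * 10, 128), nn.ReLU(), nn.Dropout(0.5))
        self.head = nn.Linear(128, n_classes)             # soft-max in the loss

    def forward(self, x):
        e = self.embedding(self.features(x))  # Me(x), exposed for the policy
        return self.head(e), e
The logits feed the soft-max heuristics and the classification loss, while e serves as Me(x) in the state of Eq. 1.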
Architecture of policy π. Our policy model π uses an MLP with three dense layers. The first two dense layers have 128 neurons each and are followed by ReLU activations, whereas the final layer has only one neuron; its output is passed through a sigmoid function to constrain it to the range [0, 1] before it is further processed by an aggregating top-k operation.\nBaselines. For the evaluation of the performance of our AL method, we implemented different well-known AL approaches from the literature: Random Sampling, MC-Dropout, Ensemble, CoreSet, BADGE, Confidence-sampling, Entropy-sampling and ALIL (which we adapted to work on image classification tasks). For a more detailed description of the baselines please see Appendix A.1.1." }, { "heading": "4.2 POLICY TRAINING", "text": "We use the MNIST dataset as our source dataset, on which we train our policy for 100 episodes, with each episode containing data from an AL cycle. The initial amount of labeled training data is 20 samples (class-balanced). At each step of the active learning process, 10 samples are labeled and added to the training data until a labeling budget B of 1,000 is reached. We use the AL heuristics MC-Dropout, Ensemble, CoreSet, BADGE, Confidence and Entropy as experts, and use Dval with 100 labeled samples to score the acquisitions of the experts. The pool dataset is sub-sampled with n = 100 at each AL iteration. We choose p = 0.5 for DAGGER for means of comparison with the baselines (based on preliminary experiments; see Appendix A.2.1 on exploration-exploitation). We train the policy's MLP on the current and growing list of state and action pairs using the binary cross-entropy loss from Equation 2 and use the Adam optimizer (Kingma & Ba, 2015) for 30 epochs with a learning rate of 1e−3, β1 = 0.9, β2 = 0.999, ε = 1e−8, and without any weight decay.\nFigure 3a shows the results of our method in comparison to all the baseline approaches on the MNIST dataset, on which the policy was trained. Our method consistently outperforms all the other methods, or is at least on par with them (towards the end, when enough representative samples are labeled). IALE performs better acquisitions than Ensemble and MC-Dropout for the important first half of the labeling budget, where it matters the most. Moreover, IALE is faster (see Appendix A.3.3). While MC-Dropout requires 20 forward passes to decide which samples it acquires, and Ensembles N = 5 forward passes, one for each model, our approach requires only 2 inferences for only n = 100 samples plus the labeled pool. Confidence-sampling performs similarly to the two more complex methods, even though it uses only the simple soft-max probabilities. While Entropy beats random sampling, it is still not competitive. BADGE performs similarly to random sampling, which is due to the small acquisition size of 10 (the better performance of BADGE was reported with much larger acquisition sizes of 100 to 10,000 in Ash et al. (2020), as its mix of uncertainty and diversity heuristics benefits from these). The same applies to CoreSet; however, here it performs worst on average over all experiments. This finding is in line with previous research (Sinha et al., 2019; Hahn et al., 2019a) and can be attributed to a weakness of the utilized p-norm distance metric regarding high-dimensional data, called the distance concentration phenomenon. The accuracy of ALIL on MNIST is similar to CoreSet; however, ALIL is designed to add only one sample to the training data at a time (no batch mode).\nConstruction of acquisition. We compare IALE's chosen samples with the ones chosen by the baselines, see Figure 4.
We show this overlap in relation to the baselines in percent, and plot the values over the 100 AL cycles using the MNIST dataset (more results can be found in Appendix A.2.2). We plot second-order polynomials, fit to the percentages (given as dots) over 100 acquisitions of size 10. π mostly imitates uncertainty-based heuristics, i.e., the soft-max heuristics and MC-Dropout, and the uncertainty-/diversity-heuristic BADGE (close behind). Interestingly, Ensemble overlaps mostly at the beginning. CoreSet has the lowest overlap. Note that IALE's acquisitions are built from combinations of the heuristics (instead of single votes). The percentages do not sum up to 1 as the experts are independent and may also overlap with each other.\n4.3 POLICY TRANSFER\nWe investigate how π works on a dataset different from the one it has been trained on. Hence, we train π on the source dataset MNIST as in Section 4.2 and use it for the AL problem on FMNIST and KMNIST. For the AL we again use an initial class-balanced labeled training dataset of 20 samples and add 10 samples per AL acquisition cycle until we reach a labeling budget of 1,000 samples. All the baselines are evaluated along with our method for comparison.\nFigures 3b and 3c show the performance of IALE along with the baselines on FMNIST and KMNIST. IALE consistently outperforms the baselines on both datasets. We can see that it learns a combined and improved policy that outperforms the individual experts consistently and sometimes even by large margins. On FMNIST, IALE is the only method that actually beats Random Sampling (similar findings have previously been reported by Hahn et al. (2019b)). IALE is consistently 1–3% better than Random Sampling on FMNIST; on the harder KMNIST dataset, IALE is even 7–9% ahead. The baselines give a mixed picture. ALIL does not achieve competitive performance on any task and actually never beats a random sampling strategy. We also see unstable performance for MC-Dropout and Ensemble, which generally perform similarly well. The simple soft-max heuristics Entropy and Confidence fail on FMNIST. CoreSet lags far behind, especially on KMNIST. BADGE always performs like random sampling, due to the aforementioned problematic acquisition size. Please find additional experiments in Appendix A.3, including a wider range of classifier models (MLP, CNN, and ResNet-18) and datasets (CIFAR-10)." }, { "heading": "4.4 ABLATION STUDIES", "text": "Hyperparameters. Two important parameters are the acquisition size acq and the size of Dsub. Figures 5a and 5b show results for acq from 1 to 40 and sizes of Dsub between n = 10 and n = 10,000. As expected, IALE performs best at acq = 1 and worst at acq = 40 if n is unchanged, because n limits the available choices, i.e., bad samples have to be chosen. Increasing n to 1,000 alleviates this issue. However, there is an upper limit to the size of Dsub after which performance deteriorates again, see Figure 5b. This could be because random sub-sampling actually simplifies the selection of diverse, uncertain samples. The lower limit becomes apparent again when n is smaller than 10 times acq, with n = acq essentially being random sampling. From our observations, n should be 10–100 times acq. The small differences within this value range suggest that our method is suitable for larger acquisition sizes in batch-mode AL, as its performance is not affected much.\nVarying experts.
To investigate the influence of the experts, we leave out some types of experts: we categorize them into 4 groups, i.e., uncertainty (McdropEns), soft-max uncertainty (EntrConf), diversity (Coreset) and hybrid (Badge), and leave one subset out. We fully train each method on MNIST with B = 1,000 and an acquisition size of 10, and present the results of the evaluation on KMNIST in Figure 5c (more results, including the ablation of state elements, can be found in Appendices A.4.2 and A.4.3). We see that most combinations perform well compared to the baselines. However, leaving out the uncertainty or soft-max uncertainty experts can decrease performance. Even though training time is longer with MC-Dropout, the gain in performance can be worth it. In contrast, the soft-max uncertainty-based heuristics are computationally cheap and yield good policies." }, { "heading": "5 CONCLUSION", "text": "We proposed a novel imitation learning approach for active learning. Our method learns to imitate the behavior of different active learning heuristics, such as uncertainty-, diversity-, model-change- and query-by-committee-based heuristics, on one initial dataset and model, and transfers the obtained knowledge to work on other (types of) datasets and models (that share an embedding space). Our policy network is a simple MLP that learns to imitate the experts based on embeddings of the dataset samples. Our experiments on well-known datasets show that we outperform the state of the art consistently (despite being a batch-mode AL approach). An ablation study and an analysis of the influence of certain hyper-parameters also show the limitations of our approach. Future work will investigate the relationship between acquisition sizes and sub-pools, analyze how the state (embedding) enables π to transfer its active learning strategy, and potentially use artificial data to find optimal AL strategies suitable for specific active learning scenarios." }, { "heading": "A APPENDIX", "text": "In this section we provide an extension of the experiments section (Section 4) and feature additional results that support a more complete evaluation of IALE. We adhere to the same section structure." }, { "heading": "A.1 EXPERIMENTAL SETUP", "text": "" }, { "heading": "A.1.1 BASELINES", "text": "In the following, we give a short explanation of the baselines and experts that we used in our experiments:\n1. Random Sampling randomly samples data points from the unlabeled pool.\n2. MC-Dropout (Gal et al., 2017) approximates the sample uncertainty of the model by repeatedly computing inferences for each sample, i.e., 20 times, with dropout enabled in the classification model.\n3. Ensemble (Beluch et al., 2018) trains an ensemble of 5 classifiers with different weight initializations. The uncertainty of the samples is quantified by the disagreement between the model predictions.\n4. CoreSet (Sener & Savarese, 2018) solves the k-center problem using the pool embeddings of the last dense layer (128 neurons) before the soft-max output to pick samples for labeling.\n5. BADGE (Ash et al., 2020) uses the gradient of the loss (given pseudo labels), both its magnitude and direction, for k-means++ clustering, to select uncertain and diverse samples from a batch.\n6. Confidence-sampling (Wang & Shang, 2014) selects samples with the lowest class probability of the soft-max predictions.\n7. Entropy-sampling (Wang & Shang, 2014) calculates the entropy of the soft-max class probabilities and then selects the samples with the largest entropy, i.e., where the model is least certain.\n8. ALIL (Liu et al., 2018): we modify ALIL's implementation (initially intended for NLP tasks) to work on image classification tasks. Due to the high runtime costs of running ALIL (as the acquisition size is 1), we perform the training of ALIL for 20 episodes. We trained the ALIL policy network with a labeling budget B of 1,000 and an up-scaled policy network comparable to that of our method, along with a similar M as we use to evaluate the other AL approaches. We left the coin-toss parameter at 0.5, and the k parameter for sequential selections from a random subset of Dpool at 10.\nWe use the variation-ratio metric (Gal et al., 2017) to quantify and select the data samples for labeling from the uncertainty obtained from the MC-Dropout and Ensemble heuristics. The variation-ratio metric is given by its Bayesian definition (Gal et al., 2017) for a data sample x ∈ Dpool in Equation 3 and for an ensemble expert (Beluch et al., 2018) in Equation 4:\n$\text{variation-ratio}(x) = 1 - \max_y p(y|x, D)$ (3)\n$= 1 - \frac{m}{N}$, (4)\nwhere m is the number of occurrences of the mode and N is the number of forward passes or the number of models in the ensemble.
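As a small illustration, the variation ratio of Equations 3 and 4 can be computed from repeated stochastic predictions as follows; predict_fn is an assumed callable that returns class scores with dropout enabled (or, for an ensemble, one member's output per call).
import torch

def variation_ratio(predict_fn, x, n_passes=20):
    # Collect N class predictions per sample, e.g., via MC-Dropout passes.
    preds = torch.stack([predict_fn(x).argmax(dim=1) for _ in range(n_passes)])
    # m: count of the modal (most frequent) prediction for each sample.
    mode_counts = torch.stack([torch.bincount(col).max() for col in preds.t()])
    return 1.0 - mode_counts.float() / n_passes   # Eq. 4: 1 - m/N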
" }, { "heading": "A.2 POLICY TRAINING", "text": "" }, { "heading": "A.2.1 EXPLORATION-EXPLOITATION IN DAGGER", "text": "DAGGER uses a hyper-parameter p that determines how likely it is that π predicts the next action, thereby setting the next state, instead of using the best expert from E. In this preliminary study we compare the influence of either fixing p to 0.5 or using an exponential decay parameterized by the number of the current episode epi: $1 - 0.9^{epi}$. We train the policy on MNIST for 100 episodes with a labeling budget of 1,000 and an acquisition size of 10 (as before). Our result is that the fixed policy outperforms the exponential one by a small margin when the policy is transferred to a dataset other than the one it was trained on, which is in line with previous findings (Liu et al., 2018).\nA balanced (i.e., fixed) ratio does not emphasize one over the other, whereas an exponential decay quickly relies on the policy for selecting new states of the dataset, and thus it trains on too few optimal states over the AL cycle." }, { "heading": "A.2.2 OVERLAP RATIOS", "text": "We additionally show the FMNIST and KMNIST overlap ratios in Fig. 6. The policy chooses about half of the samples differently from any single baseline. The overlap is lower on FMNIST, where our method is the only one that beats random sampling. CoreSet overlaps least often and performs worst." }, { "heading": "A.3 POLICY TRANSFER", "text": "In this section we provide further studies on how our method performs when applied to unseen scenarios. These include (1) training the policy on source datasets other than MNIST, (2) applying the policy to classifiers other than the one it was trained with, and (3) using different datasets and classifier models in the training and the application of the policy.\nWe aim to show that π learns a (relatively) task-agnostic AL strategy that works at inference time and outperforms the baselines, as long as the state can be expressed similarly to the one that has been used during training.\nA.3.1 VARYING SOURCE DATASETS\nOur policy does not perform well merely by coincidence, i.e., because one specific source dataset happens to be most suitable.
In Figure 7, we show two additional policies that were trained on FMNIST or KMNIST (with unchanged hyper-parameter settings). We see comparable performance, which indicates that IALE actually learns to actively learn and does not just memorize the source datasets, as it makes no difference which datasets are chosen as the source and the target dataset." }, { "heading": "A.3.2 CLASSIFIER ARCHITECTURES.", "text": "Our method is not bound to a specific classifier architecture. To show this, we evaluate our approach with MLP and ResNet-18 classifiers. We use the same hyper-parameters as with the CNN and only switch out the underlying classifier. All results for the experiments are given in Figure 8. We see the robustness of π over fundamentally different classifier architectures (2 to 18 layers). The deviations for ResNet-18 are very large due to the very deep architecture and the modest amount of training data. We use median filtering in Figures 8g, 8h, and 8i.\nThese experiments show that π can learn AL strategies for both very small and very deep architectures and still outperform the baselines. Even though the strongest baselines, i.e., CoreSet and MC-Dropout, come close to our method in accuracy, they are less versatile and require more computational resources, which is especially noticeable on deeper architectures." }, { "heading": "A.3.3 GENERALIZATION", "text": "To show that π learns active learning independently of task and classifier, we conduct experiments where we mix both the source datasets and the classifiers, as only the state retains the same formulation. Since these states are the same for CNN and ResNet-18 classifiers (independent of the datasets), we can transfer a policy trained using a ResNet-18 classifier to a CNN classifier and vice versa.\nWe report the results for applying π (trained on ResNet-18 and MNIST) to a CNN and all the datasets in Figure 9. IALE always performs at the top. These results convincingly show that our method learns a model- and task-agnostic active learning strategy that transfers knowledge between datasets and even between classifier architectures.\nWe conclude the transfer studies by applying a pre-trained π to a ResNet-18 classifier on the CIFAR-10 (Krizhevsky, 2009) (color) image classification dataset. Each image in CIFAR-10 is 32×32 pixels and has 3 color channels. The transfer of π is possible as the classifier's state is both invariant to the models and independent of the number of color channels of the image dataset and the input size. For our experiments we use two different π: π1 was trained using ResNet-18 and MNIST (IALE Resnet) and π2 was trained using the CNN and MNIST (IALE CNN). We train the classifiers with an acquisition size of 10 until the labeling budget of 10,000 is reached.\nThe results are presented in Figure 10. As the acquisition size of 10 results in noisy curves, we report the raw learning curves (Figure 10a) and median-filtered learning curves (Figure 10b). We report the interesting segment of the learning curve in more detail (filtered) in Figure 10c. The results generally show the feasibility of transferring π to both different classifiers and datasets. IALE is on par with or better than Random, and the other baselines are either on par with or worse than Random (some of them considerably).\nMoreover, besides IALE's better accuracy compared to all other methods, its run time is highly competitive.
Per experiment iteration (with 10,000 samples, acquired over 998 steps of size 10, each step including a complete re-training of ResNet-18 for 100 epochs), the run times for IALE are 10:17:31 (h:m:s) on a high-performance GPU (NVIDIA V100) (9:45:12 for Random sampling), compared to 49:15:23 for Ensembles, 14:23:18 for MC-Dropout and 11:58:47 for BADGE. Only Conf (10:12:01) and Entropy (10:05:21) are quicker (however, they both also perform worse than Random).\nWhile more experiments are certainly required to further substantiate these initial claims of generalizability to more diverse types of datasets, these findings are already very promising." }, { "heading": "A.4 ABLATION STUDIES", "text": "" }, { "heading": "A.4.1 HYPERPARAMETERS.", "text": "We report fine-grained steps of acquisition sizes in Figure 11a, with values between 1 and 10, plus 20 and 40, for |Dsub| of 100. Overall, a clear difference is not visible below 10 samples. For enhanced readability, we show a magnified section of the varied acquisition sizes and |Dsub| in Fig. 11b, which clearly shows the benefits of tuning |Dsub| to a value suitable for the acquisition size.\nAcquisition sizes for baselines: We additionally compare the baseline active learning methods with our approach, as these exhibit different performance at different acquisition sizes, see Fig. 12. We have included comparisons with acquisition sizes of either 1 or 100 (1 or 3 repetitions). For our method, for an acquisition size of 1 we chose |Dsub| = 100 and for an acquisition size of 100 we chose |Dsub| = 2,000. While the results show that IALE outperforms the baselines, they also highlight the large effect that the acquisition size has on some of the baseline methods. For instance, CoreSet constructs better set covers with larger batches, and BADGE increases its accuracy by constructing a representative sampling as well. At the same time, the uncertainty-based methods, apart from Entropy, remain unaffected.\nA.4.2 VARYING EXPERTS\nWe present more results for variations of the sets of experts in Figure 13. We leave out some types of experts, and train the policies with the unchanged hyper-parameters and the CNN classifier. The results for all three datasets show that the generally high performance of IALE holds for the leave-one-out sets of experts, with the full set of experts being consistently among the best-performing policies.\nA.4.3 VARYING STATE ELEMENTS\nNext, we study the state more closely. For unlabeled samples, the state contains two types of representations of predictive uncertainty: the statistics of the predicted labels M(xn) and the gradient representations g(Me(xn)). In this study, we focus on leaving out one or the other. To get the full picture, we again train sets of experts for the reduced states.\nIn Figure 14 we see that dropping the gradients generally decreases performance (bottom row), while dropping the predicted labels M(xn) affects performance very little (top row). However, the influence of different sets of experts is more important. We cannot see that a particular set of states and experts generally outperforms the others consistently (while the negative effect of leaving out g(Me(xn)) is consistently visible). Overall, we find that using as many experts as available, combined with a full state, both performs well and works reliably. Even though training a policy this way does not guarantee the best performance, it always performs among the group of best policies."
}, { "heading": "A.4.4 ADDITIONAL DATASETS", "text": "We additionally run experiment on harder datasets, i.e., SVHN and CIFAR-100 in Fig. 15.\nSVHN: We train a Resnet18 on SVHN with an acquisition size of 1,000 and a labeling budget of 16,000. We initially label 1,000 samples and evaluate over 5 repetitions. We show both the average results with variance in Fig. 15a and the smoothed averages for improved visibility in Fig. 15b. While the results exhibit some variance, we can clearly see that IALE performs best (and is the only AL methods that is consistently able to beat a random sampling.\nCIFAR-100: Next, we evaluate datasets with a larger number of classes by generalizing the composition of the state (removing prediction and empirical class distribution). We train two policies, one with a CNN classifier on MNIST and the second with a Resnet18 classifier on CIFAR-10. Then, we apply the resulting policies in the following experiment. We train a Resnet18 on the very hard CIFAR-100 dataset, with an acquisition size of 1,000 and a labeling budget of 20000, and initially label 1,000 samples. We evaluate over 5 repetitions. Fewer samples or less powerful network architectures tend to fail to converge in our experiments. The results in Fig. 15c show that both policies, i.e., transferred from CNN/MNIST and from Restnet18/CIFAR10 with an arbitrary number of classes, perform best among all heuristics, and beat random sampling." } ]
2020
IALE: IMITATING ACTIVE LEARNER ENSEMBLES
SP:651166f4bdf2eb56689f790d3c697a43be974521
[ "This work suggests a variant of ensembling that is more compute-efficient. Specifically, it involves forking an ensemble only in the late stage of training, and forming this ensemble via a \"low-dimentional\" family. That is, instead of maintaining independent networks, maintain only \"low-rank\"-style perturbations of the base network (for various instanciations of \"low-rank\")." ]
The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD). Here, we show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning. At the end of learning, we obtain back a single model by taking a spatial average in weight space. To avoid incurring increased computational costs, we investigate a family of low-dimensional late-phase weight models which interact multiplicatively with the remaining parameters. Our results show that augmenting standard models with late-phase weights improves generalization in established benchmarks such as CIFAR-10/100, ImageNet and enwik8. These findings are complemented with a theoretical analysis of a noisy quadratic problem which provides a simplified picture of the late phases of neural network learning.
[ { "affiliations": [], "name": "Johannes von Oswald" }, { "affiliations": [], "name": "Seijin Kobayashi" }, { "affiliations": [], "name": "João Sacramento" }, { "affiliations": [], "name": "Alexander Meulemans" }, { "affiliations": [], "name": "Christian Henning" }, { "affiliations": [], "name": "Benjamin F. Grewe" } ]
[ { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: implicit acceleration by overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jimmy Ba", "Geoffrey E Hinton", "Volodymyr Mnih", "Joel Z Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Carlo Baldassi", "Fabrizio Pittorino", "Riccardo Zecchina" ], "title": "Shaping the learning landscape in neural networks around wide flat minima", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010,", "year": 2010 }, { "authors": [ "Kevin S. Brown", "James P. Sethna" ], "title": "Statistical mechanical approaches to models with many poorly known parameters", "venue": "Physical Review E,", "year": 2003 }, { "authors": [ "Pratik Chaudhari", "Stefano Soatto" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks", "venue": "In Information Theory and Applications Workshop (ITA)", "year": 2018 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-SGD: Biasing gradient descent into wide valleys", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Terrance DeVries", "Graham W. Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sebastian Flennerhag", "Andrei A. Rusu", "Razvan Pascanu", "Francesco Visin", "Hujun Yin", "Raia Hadsell" ], "title": "Meta-learning with warped gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep ensembles: a loss landscape perspective", "venue": "arXiv preprint arXiv:1912.02757,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "David J. Schwab", "Ari S. 
Morcos" ], "title": "Training batchnorm and only batchnorm: on the expressive power of random features in CNNs", "venue": "arXiv preprint arXiv:2003.00152,", "year": 2020 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of DNNs", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Daniel Golovin", "Benjamin Solnik", "Subhodeep Moitra", "Greg Kochanski", "John Karro", "D. Sculley" ], "title": "Google vizier: a service for black-box optimization", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Dongyoon Han", "Jiwhan Kim", "Junmo Kim" ], "title": "Deep pyramidal residual networks", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: surpassing human-level performance on ImageNet classification", "venue": "arXiv preprint arXiv:1502.01852,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Geoffrey E Hinton", "David C Plaut" ], "title": "Using fast weights to deblur old memories", "venue": "In Proceedings of the ninth annual conference of the Cognitive Science Society,", "year": 1987 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: train 1, get M for free", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "arXiv preprint arXiv:1608.06993,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Kiyosi Itô" ], "title": "On stochastic differential equations", "venue": "Number 4. American Mathematical Soc.,", "year": 1951 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Siddhant M. Jayakumar", "Wojciech M. 
Czarnecki", "Jacob Menick", "Jonathan Schwarz", "Jack Rae", "Simon Osindero", "Yee Whye Teh", "Tim Harley", "Razvan Pascanu" ], "title": "Multiplicative interactions and where to find them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Richard Jordan", "David Kinderlehrer", "Felix Otto" ], "title": "The variational formulation of the Fokker–Planck equation", "venue": "SIAM Journal on Mathematical Analysis,", "year": 1998 }, { "authors": [ "Christos Kaplanis", "Murray Shanahan", "Claudia Clopath" ], "title": "Continual reinforcement learning with complex synapses", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: a method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Andrey Nikolaevich Kolmogorov" ], "title": "On analytical methods in probability theory", "venue": "Math. Ann,", "year": 1931 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Stefan Lee", "Senthil Purushwalkam", "Michael Cogswell", "David Crandall", "Dhruv Batra" ], "title": "Why M heads are better than one: training a diverse ensemble of deep networks", "venue": "arXiv preprint:", "year": 2015 }, { "authors": [ "Pascal Leimer", "Michael Herzog", "Walter Senn" ], "title": "Synaptic weight decay with selective consolidation enables fast learning without catastrophic forgetting", "venue": null, "year": 2019 }, { "authors": [ "Guan-Horng Liu", "Evangelos A Theodorou" ], "title": "Deep learning theory review: An optimal control and dynamical systems perspective", "venue": "arXiv preprint arXiv:1908.10920,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: stochastic gradient descent with warm restarts", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "David J.C. 
MacKay" ], "title": "A practical Bayesian framework for backpropagation networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for Bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "James Martens" ], "title": "Second-order optimization for neural networks", "venue": "PhD thesis, University of Toronto,", "year": 2016 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "arXiv preprint arXiv:1707.05589,", "year": 2017 }, { "authors": [ "Pramod Kaushik Mudrakarta", "Mark Sandler", "Andrey Zhmoginov", "Andrew Howard" ], "title": "K for the price of 1: parameter-efficient multi-task and transfer learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: a basic course", "venue": null, "year": 2004 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: visual reasoning with a general conditioning layer", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Fabrizio Pittorino", "Carlo Lucibello", "Christoph Feinauer", "Enrico M. Malatesta", "Gabriele Perugini", "Carlo Baldassi", "Matteo Negri", "Elizaveta Demyanenko", "Riccardo Zecchina" ], "title": "Entropic gradient descent algorithms and wide flat minima", "venue": "arXiv preprint arXiv:2006.07897,", "year": 2020 }, { "authors": [ "Boris T Polyak", "Anatoli B Juditsky" ], "title": "Acceleration of stochastic approximation by averaging", "venue": "SIAM Journal on Control and Optimization,", "year": 1992 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Learning multiple visual domains with residual adapters", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "Fei-Fei Li" ], "title": "ImageNet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Levent Sagun", "Utku Evci", "V. 
Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the Hessian of over-parametrized neural networks", "venue": "arXiv preprint arXiv:1706.04454,", "year": 2018 }, { "authors": [ "Pedro Savarese", "Michael Maire" ], "title": "Learning implicitly recurrent CNNs through parameter sharing", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tom Schaul", "Sixin Zhang", "Yann LeCun" ], "title": "No more pesky learning rates", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning to control fast-weight memories: an alternative to dynamic recurrent networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Johannes von Oswald", "Christian Henning", "João Sacramento", "Benjamin F. Grewe" ], "title": "Continual learning with hypernetworks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Joshua J. Waterfall", "Fergal P. Casey", "Ryan N. Gutenkunst", "Kevin S. Brown", "Christopher R. Myers", "Piet W. Brouwer", "Veit Elser", "James P. Sethna" ], "title": "Sloppy-model universality class and the Vandermonde matrix", "venue": "Physical Review Letters,", "year": 2006 }, { "authors": [ "Max Welling", "Yee Whye Teh" ], "title": "Bayesian learning via stochastic gradient Langevin dynamics", "venue": "In International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Yeming Wen", "Dustin Tran", "Jimmy Ba" ], "title": "BatchEnsemble: an alternative approach to efficient ensemble and lifelong learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yuhuai Wu", "Mengye Ren", "Renjie Liao", "Roger Grosse" ], "title": "Understanding short-horizon bias in stochastic meta-optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sho Yaida" ], "title": "Fluctuation-dissipation relations for stochastic gradient descent", "venue": "arXiv preprint arXiv:1810.00004,", "year": 2018 }, { "authors": [ "Yoshihiro Yamada", "Masakazu Iwamura", "Takuya Akiba", "Koichi Kise" ], "title": "Shakedrop regularization for deep residual learning", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Fisher Yu", "Yinda Zhang", "Shuran Song", "Ari Seff", "Jianxiong Xiao" ], "title": "LSUN: construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Guodong Zhang", "Lala Li", "Zachary Nado", "James Martens", "Sushant Sachdeva", "George Dahl", "Chris Shallue", "Roger B Grosse" ], "title": "Which algorithmic choices matter at which batch sizes? 
Insights from a noisy quadratic model", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Michael Zhang", "James Lucas", "Jimmy Ba", "Geoffrey E Hinton" ], "title": "Lookahead Optimizer: k steps forward, 1 step back", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Sixin Zhang", "Anna E Choromanska", "Yann LeCun" ], "title": "Deep learning with elastic averaging SGD", "venue": "In Advances in Neural information Processing Systems,", "year": 2015 }, { "authors": [ "Zhanxing Zhu", "Jingfeng Wu", "Bing Yu", "Lei Wu", "Jinwen Ma" ], "title": "The anisotropic noise in stochastic gradient descent: its behavior of escaping from sharp minima and regularization effects", "venue": "arXiv preprint arXiv:1803.00195,", "year": 2018 }, { "authors": [ "Luisa Zintgraf", "Kyriacos Shiarli", "Vitaly Kurin", "Katja Hofmann", "Shimon Whiteson" ], "title": "Fast context adaptation via meta-learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Izmailov" ], "title": "2018), according to which an initial learning rate", "venue": null, "year": 2018 }, { "authors": [ "He" ], "title": "2016) we augment our training datasets using random crops (with a 4-pixel padding for CIFAR) and random horizontal flips. The ImageNet training dataset is augmented with random horizontal flips, as well as random cropping of size 224, while a centered cropping of size", "venue": null, "year": 2018 }, { "authors": [ "Wu et al", "Zhang" ], "title": "2019a;b). For the simple loss landscape of the NQP, there are three main strategies to improve the expected loss after convergence: (i) increase the mini-batch size B (Zhang et al., 2019a), (ii) use more members K in an ensemble (c.f. Section C.3 and (iii) decrease the learning rate", "venue": "batch training in deep neural networks (Schaul et al.,", "year": 2013 }, { "authors": [ "∼ N" ], "title": "Following Liu & Theodorou (2019) and Chaudhari & Soatto (2018), the corresponding continuous-time dynamics are: dxt = −F (xt)dt+", "venue": null, "year": 2018 }, { "authors": [ "Zhu" ], "title": "2018) highlighted this trace quantitiy in equation 23 as a measurement of the escaping efficiency out of poor minima. However, we assume that we are in the final valley of convergence (emphasized by this convex NQP), so now this interpretation does not hold and the quantity should be considered as a proxy measurement of the width of the steady-state parameter", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks trained with SGD generalize remarkably well on a wide range of problems. A classic technique to further improve generalization is to ensemble many such models (Lakshminarayanan et al., 2017). At test time, the predictions made by each model are combined, usually through a simple average. Although largely successful, this technique is costly both during learning and inference. This has prompted the development of ensembling methods with reduced complexity, for example by collecting models along an optimization path generated by SGD (Huang et al., 2017), by performing interpolations in weight space (Garipov et al., 2018), or by tying a subset of the weights over the ensemble (Lee et al., 2015; Wen et al., 2020).\nAn alternative line of work explores the use of ensembles to guide the optimization of a single model (Zhang et al., 2015; Pittorino et al., 2020). We join these efforts and develop a method that fine-tunes the behavior of SGD using late-phase weights: late in training, we replicate a subset of the weights of a neural network and randomly initialize them in a small neighborhood. Together with the stochasticity inherent to SGD, this initialization encourages the late-phase weights to explore the loss landscape. As the late-phase weights explore, the shared weights accumulate gradients. After training we collapse this implicit ensemble into a single model by averaging in weight space.\nBuilding upon recent work on ensembles with shared parameters (Wen et al., 2020) we explore a family of late-phase weight models involving multiplicative interactions (Jayakumar et al., 2020). We focus on low-dimensional late-phase models that can be ensembled with negligible overhead. Our experiments reveal that replicating the ubiquitous batch normalization layers (Ioffe & Szegedy, 2015) is a surprisingly simple and effective strategy for improving generalization1. Furthermore, we find that late-phase weights can be combined with stochastic weight averaging (Izmailov et al., 2018), a complementary method that has been shown to greatly improve generalization.\n1We provide code to reproduce our experiments at https://github.com/seijin-kobayashi/ late-phase-weights" }, { "heading": "2 METHODS AND MODELS", "text": "" }, { "heading": "2.1 LEARNING WITH LATE-PHASE WEIGHTS", "text": "Late-phase weights. To apply our learning algorithm to a given neural network model fw we first specify its weights w in terms of two components, base and late-phase (θ and φ, resp.). The two components interact according to a weight interaction function w = h(θ, φ). Base weights are learned throughout the entire training session, and until time step T0 both θ and φ are learned and treated on equal grounds. At time step T0, a hyperparameter of our algorithm, we introduce K late-phase components Φ = {φk}Kk=1, that are learned together with θ until the end. This procedure yields a late-phase ensemble of K neural networks with parameter sharing: reusing the base weights θ, each late-phase weight φk defines a model with parameters wk = h(θ, φk).\nLate-phase weight averaging at test time. Our ensemble defined by the K late-phase weight configurations in Φ is kept only during learning. At test time, we discard the ensemble and obtain a single model by averaging over the K late-phase weight components. That is, given some input pattern x, we generate a prediction y(x) using the averaged model, computed once after learning:\ny(x) = fw(x), w ≡ h ( θ, 1\nK K∑ k=1 φk\n) . 
Hence, the complexity of inference is independent of K, and equivalent to that of the original model.\nLate-phase weight initialization. We initialize our late-phase weights from a reference base weight. We first learn a base parameter $\phi_0$ from time step t = 0 until $T_0$, treating $\phi_0$ as any other base parameter in θ. Then, at time $t = T_0$, each configuration $\phi_k$ is initialized in the vicinity of $\phi_0$. We explore perturbing $\phi_0$ using a symmetric Gaussian noise model,\n$\phi_k = \phi_0 + \frac{\sigma_0}{Z(\phi_0)} \epsilon_k,$ (2)\nwhere $\epsilon_k$ is a standard normal variate of appropriate dimension and $\sigma_0$ is a hyperparameter controlling the noise amplitude. We allow for a $\phi_0$-dependent normalization factor, which we set so as to ensure layerwise scale-invariance, which helps in finding a single $\sigma_0$ that governs the initialization of the entire network. More concretely, for a given neural network layer l with weights $\phi_0^{(l)}$ of dimension $D^{(l)}$, we choose $Z(\phi_0^{(l)}) = \sqrt{D^{(l)}} / \|\phi_0^{(l)}\|$.\nOur perturbative initialization (Eq. 2) is motivated by ongoing studies of the nonconvex, high-dimensional loss functions that arise in deep learning. Empirical results and theoretical analyses of simplified models point to the existence of dense clusters of connected solutions with a locally-flat geometry (Hochreiter & Schmidhuber, 1997a) that are accessible by SGD (Huang et al., 2017; Garipov et al., 2018; Baldassi et al., 2020). Indeed, the eigenspectrum of the loss Hessian evaluated at weight configurations found by SGD reveals a large number of directions of low curvature (Keskar et al., 2017; Chaudhari et al., 2019; Sagun et al., 2018). For not yet completely understood reasons, this appears to be a recurring phenomenon in overparameterized nonlinear problems (Brown & Sethna, 2003; Waterfall et al., 2006).\nBased on these observations, we assume that the initial parameter configuration $\phi_0$ can be perturbed in a late phase of learning without leading to mode hopping across the different models $w_k$. While mode coverage is usually a sought-after property when learning neural network ensembles (Fort et al., 2020), here it would preclude us from taking the averaged model at the end of learning (Eq. 1).\nStochastic learning algorithm. Having decomposed our weights into base and late-phase components, we now present a stochastic algorithm which learns both θ and Φ. Our algorithm works in the standard stochastic (minibatch) neural network optimization setting (Bottou, 2010). Given a loss function $\mathcal{L}(\mathcal{D}, w) = \frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} \mathcal{L}(x, w)$ to be minimized with respect to the weights w on a set of data D, at every round we randomly sample a subset M from D and optimize instead the stochastic loss $\mathcal{L}(\mathcal{M}, w)$. However, in contrast to the standard setting, in late stages of learning ($t > T_0$) we simultaneously optimize K parameterizations $\mathcal{W} := \{w_k \mid w_k = h(\theta, \phi_k)\}_{k=1}^{K}$, instead of one.\nWe proceed by iterating over W. At each step k, we sample a minibatch Mk and immediately update the late-phase weights φk, while accumulating gradients over the shared base weights θ. Such gradient accumulation has been previously used when learning ensembles (Lee et al., 2015; Wen et al., 2020) and multi-task models (Rebuffi et al., 2017) with shared base parameters. A single iteration is finally concluded by changing the base weights in the direction opposite to the accumulated gradient. We scale the accumulated gradient by $\gamma_\theta$; setting $\gamma_\theta = 1/K$ recovers the original step size in θ, but other choices are possible.
In particular, we find that a large γθ of unit size is in practice often tolerated, resulting in accelerated learning.\nAlgorithm 1: Late-phase learning Require: Base weights θ, late-phase weight set Φ, dataset D, gradient scale factor γθ, loss L Require: Training iteration t > T0 for 1 ≤ k ≤ K do Mk ← Sample minibatch from D ∆θk ← ∇θ L(Mk, θ, φk) φk ← Uφ(φk,∇φk L(Mk, θ, φk))\nθ ← Uθ(θ, γθ ∑K k=1 ∆θk) We summarize an iteration of our method in Algorithm 1, where the loss L(M, θ, φ) is now seen as a function of θ and φ. We opt for a general presentation using unspecified gradient-based update operators Uφ and Uθ. These operators can be set to optimizers of choice. For instance, our method might benefit from additional noise injection onto parameter updates (Welling & Teh, 2011). Furthermore, late-phase optimizers need not coincide with the optimizer used in the early phase. In our work we typically set Uφ and Uθ to a single step of SGD with Nesterov momentum (Nesterov, 2004), and explore Adam (Kingma & Ba, 2015) and plain SGD in a smaller set of experiments." }, { "heading": "2.2 LATE-PHASE WEIGHT MODELS", "text": "As detailed next, we consider a number of distinct late-phase weight models in our experiments. In particular, we explore weight interaction functions h in which late-phase weights have low dimensionality, to avoid a large increase in complexity with the ensemble size K. To counteract this reduced dimensionality, we make extensive use of multiplicative base-late weight interactions. This design choice is motivated by the large expressive power of multiplicative interactions despite low dimensionality, which has been demonstrated in a wide range of settings (Jayakumar et al., 2020).\nLate-phase batch normalization layers. Batch normalization layers (BatchNorm; Ioffe & Szegedy, 2015) are a staple of current deep neural network models. Besides standardizing the activity of the layer they are applied to, BatchNorm units introduce a learnable multiplicative (scale) parameter γ and an additive (shift) parameter β. While being low-dimensional, these additional parameters have large expressive power: it has been shown that learning only γ and β keeping the remaining weights frozen can lead to significantly lower loss than when learning random subsets of other weights of matching dimensionality (Frankle et al., 2020; Mudrakarta et al., 2019).\nWe take the scale and shift parameters of BatchNorm layers as our first choice of late-phase weights; the base weights are the remaining parameters of the model. Batch statistics are also individually estimated for each model in W . This late-phase weight parameterization is motivated by (i) the expressive power of γ and β discussed above, and by (ii) practical considerations, as BatchNorm layers are generally already present in feedforward neural network models, and are otherwise easy to implement efficiently.\nMore concretely, let us consider an affine transformation layer l which maps an input vector r(l−1) to θ(l)w r(l−1) + θ (l) b , where the early-phase weight matrix θ (l) w and bias vector θ (l) b are already standardized using the respective batch statistics. For this standard layer, our model introduces a multiplicative interaction between base and late-phase weights, diag(γ(l)) θ(l)w , and an additive interaction between base and late-phase bias parameters, θ(l)b + β (l).\nLate-phase rank-1 matrix weights. 
We also study a closely related late-phase weight model, where existing weight matrices – the base components, as before – are multiplied elementwise by rank-1 matrices (Wen et al., 2020). For a given affine layer l, we define a late-phase weight matrix with resort to a pair of learnable vectors, φ(l) = u(l) v(l) T . Taking the Hadamard product with the base weight matrix yields the effective weights W (l) = φ(l) ◦ θ(l).\nWith this parameterization, we recover the ensemble proposed by Wen et al. (2020), except that here it is generated late in training using our perturbative initialization (Eq. 2). Unlike BatchNorm layers, which include the shift parameter, rank-1 late-phase weights interact in a purely multiplicative manner with base weights. We study this model since it is easy to implement on neural networks which do not feature BatchNorm layers, such as standard long short-term memories (LSTMs; Hochreiter & Schmidhuber, 1997b).\nHypernetworks with late-phase weight embeddings. Additionally, we generalize the late-phase weight models described above using hypernetworks (Ha et al., 2017). A hypernetwork generates the parameters w of a given target neural network fw based on a weight embedding. In our framework, we can use a hypernetwork to implement the interaction function w = h(θ, φ) directly, with parameters θ corresponding to base weights and embeddings φ to late-phase weights.\nWe experiment with linear hypernetworks and use the same hypernetwork to produce the weights of multiple layers, following Savarese & Maire (2019); Ha et al. (2017); von Oswald et al. (2020). In this scheme, the weight embedding input specifies the target layer whose parameters are being generated. More specifically, the weight matrix for some layer l belonging to a group of layers g which share a hypernetwork is given byW (g,l) = θ(g) φ(g,l), where θ(g) and φ(g,l) are appropriatelysized tensors. Sharing θ(g) over a layer group g allows countering an increase in the overall number of parameters. We parameterize our hypernetworks such that the weight embedding vectors φ(g,l) are small, and therefore cheap to ensemble.\nLate-phase classification layers. Finally, inspired by Lee et al. (2015), in classification experiments we take the weights of the last linear layer as late-phase weights by default. In modern neural network architectures these layers do not usually comprise large numbers of parameters, and our architecture explorations indicated that it is typically beneficial to ensemble them. We therefore include W (L) in our late-phase weights φ, where W (L) denotes the weights of the final layer L." }, { "heading": "3 RESULTS", "text": "" }, { "heading": "3.1 NOISY QUADRATIC PROBLEM ANALYSIS", "text": "Before turning to real-world learning problems, we first focus on a simplified stochastic optimization setup which can be analytically studied. We consider the noisy quadratic problem (NQP; Schaul et al., 2013; Martens, 2016; Wu et al., 2018; Zhang et al., 2019a;b), where the goal is to minimize the scalar loss\nL = 1 2 (w − w∗ + )T H (w − w∗ + ) (3)\nwith respect to w ∈ Rn. In the equation above, w∗ denotes the target weight vector, which is randomly shifted by a noise variable assumed to follow a Gaussian distribution N (0,Σ). The (constant) Hessian matrix H controls the curvature of the problem.\nDespite the simplicity of Eq. 3, the NQP captures a surprising number of empirically-observed aspects of neural network learning (Zhang et al., 2019a). 
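Before the motivation that follows, it may help to see the NQP in simulation. The snippet below is an illustrative sketch, not the paper's experimental code; it assumes the diagonal setup H_ii = 1/i and Sigma = H^{-1} used later for Fig. 1, and it reproduces the qualitative fact that the steady-state loss of SGD on Eq. 3 is controlled by the learning rate and the minibatch size.

```python
# Illustrative NQP (Eq. 3) simulation with w* = 0 and a diagonal Hessian.
import numpy as np

rng = np.random.default_rng(0)
n = 100
h = 1.0 / np.arange(1, n + 1)           # diagonal of H (H_ii = 1/i)
sigma2 = 1.0 / h                        # diagonal of Sigma = H^{-1}

def run_nqp(eta, B, steps=50_000):
    w = np.ones(n)                      # start away from the optimum w* = 0
    tail = []
    for t in range(steps):
        eps = rng.normal(0.0, np.sqrt(sigma2 / B))   # minibatch noise eps/sqrt(B)
        w -= eta * h * (w + eps)        # SGD step on 0.5*(w + eps)^T H (w + eps)
        if t > steps // 2:              # average the noiseless loss after burn-in
            tail.append(0.5 * np.sum(h * w ** 2))
    return float(np.mean(tail))

for eta, B in [(0.3, 1), (0.3, 10), (0.03, 1)]:
    print(f"eta={eta}, B={B}: steady-state loss ~ {run_nqp(eta, B):.4f}")
```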
Here, we motivate its study as a model of late stages of learning, by Taylor expanding the loss around a minimum w∗. Thus, for a sufficiently late initialization time T0 (and small σ0) the NQP is particularly well suited to study our algorithm.\nThere are three main strategies to improve the expected NQP loss after convergence: (i) increase the minibatch size B, (ii) use more members K in an ensemble, and (iii) decrease the learning rate η (Zhang et al., 2019a). Our Algorithm 1 combines the first two\nstrategies in a non-trivial manner. First, the gradients for base weights θ are averaged during the inner loop over all ensemble members, corresponding to a minibatch-size rescaling by K. Second, we\nintroduce K ensemble members, to be averaged in weight space, that only differ in their late-phase weights φ.\nIn Appendix C, we show analytically that this combination of an increased effective minibatch size for θ and introducingK ensemble members for φ is successful, resulting in a scaling of the expected loss after convergence by 1K . This analysis holds for general Σ and H , and for both scalar and hypernetwork multiplicative late-phase weights. Hence, our approach combines the benefits of an increased effective minibatch size and of ensembling, while yielding a single model after training.\nWe present a numerical validation of this theoretical result in Fig. 1. Our model includes a multiplicative late-phase weight, wk = θ φk with φk ∈ R and θ ∈ Rn. We simulate a standard instance of the NQP, with diagonal Hessian Hii = 1/i and Σ = H−1 (cf. Zhang et al., 2019a), and report the average loss after convergence. Hyperparameters are given in Appendix C. As predicted by the theory, the loss falls as ∼ 1/K with increasing ensemble size K, and our algorithm performs on par with a full ensemble of K models trained independently with gradient descent." }, { "heading": "3.2 CIFAR-10/100 EXPERIMENTS", "text": "To test the applicability of our method to more realistic problems, we next augment standard neural network models with late-phase weights and examine their performance on the CIFAR-10 and CIFAR-100 image classification benchmarks (Krizhevsky, 2009). We use standard data preprocessing methods (cf. Appendix A) and train our models for 200 epochs from random initializations, except when noted otherwise. All evaluated methods are trained using the same amount of data.\nBesides SGD (with Nesterov momentum), we also investigate stochastic weight averaging (SWA; Izmailov et al., 2018), a recent reincarnation of Polyak averaging (Polyak & Juditsky, 1992) that can strongly improve neural network generalization. For completeness, we present pseudocode for SWA in Algorithm 2 and SGD with Nesterov momentum in Algorithm 3 (cf. Appendix A). When learning neural networks with late-phase weights we set Uφ and Uθ to one step of SGD (or SGD wrapped inside SWA).\nWe compare our method to dropout (Srivastava et al., 2014), a popular regularization method that can improve generalization in neural networks. Like our approach, dropout produces a single model at the end of training. We also consider its Monte Carlo variant (MC-dropout; Gal & Ghahramani, 2016), and the recently proposed BatchEnsemble (Wen et al., 2020). This method generates an ensemble using rank-1 matrices as described in Section 2.2. 
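For readers who want a concrete picture of such shared-base, rank-1 ensembles, here is a sketch of a single linear layer in this style. The class and attribute names are illustrative assumptions, not code from the paper or from BatchEnsemble; the layer exposes one forward pass per ensemble member and a collapsed test-time weight in the spirit of Eq. 1.

```python
# Sketch of a linear layer with K rank-1 late-phase factors: W_k = (u_k v_k^T) * theta.
import torch
import torch.nn as nn

class Rank1LatePhaseLinear(nn.Module):
    def __init__(self, d_in, d_out, K, sigma0=0.1):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)  # base
        # Rank-1 factors initialized near 1, so every W_k starts close to theta.
        self.u = nn.Parameter(1.0 + sigma0 * torch.randn(K, d_out))
        self.v = nn.Parameter(1.0 + sigma0 * torch.randn(K, d_in))

    def forward(self, x, k):
        # Effective weight of member k: (u_k v_k^T) elementwise-multiplied into theta.
        w_k = (self.u[k].unsqueeze(1) * self.v[k].unsqueeze(0)) * self.theta
        return x @ w_k.t()

    def averaged_weight(self):
        # Test-time collapse: average the late-phase factors (Eq. 1 applied to u, v).
        u_bar, v_bar = self.u.mean(0), self.v.mean(0)
        return (u_bar.unsqueeze(1) * v_bar.unsqueeze(0)) * self.theta

layer = Rank1LatePhaseLinear(16, 4, K=10)
x = torch.randn(32, 16)
y_member = layer(x, k=3)                   # forward through one ensemble member
y_mean = x @ layer.averaged_weight().t()   # forward through the collapsed model
print(y_member.shape, y_mean.shape)
```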
Predictions still need to be averaged over multiple models, but this averaging step can be parallelized on modern hardware.\nAdditionally, we report single-seed results obtained with an ensemble of K independently-trained models (a deep ensemble, Lakshminarayanan et al., 2017). Deep ensembles provide a strong baseline, at the expense of large computational and memory costs. Therefore, they are not directly comparable to the other methods considered here, and serve the purpose of an upper baseline.\n(Table 1 fragment, CIFAR-10 test accuracy in %: Late-phase (SWA) 96.81±0.07; Deep ensemble (SGD) 96.91; Deep ensemble (LPBN, SGD) 96.99.)\nBy contrast, augmenting the architectures considered here with late-phase weights results in negligible additional costs during learning (with the exception of hypernetworks, which require additional tensor products) and none during testing. In principle, a set of independently-trained models yielded by our algorithm can therefore even be used as the basis of a deep ensemble, when the memory and compute budget allows for one. We present proof-of-concept experiments exploring this option.\nThroughout our CIFAR-10/100 experiments we set K = 10, use a fast base gradient scale factor of γθ = 1, set our late-phase initialization hyperparameters to T0 = 120 (measured henceforth in epochs; T0 = 100 for SWA), and do not use initialization noise, σ0 = 0. These hyperparameters were tuned manually once on CIFAR-100 and then kept fixed unless otherwise noted. We use standard learning rate scheduling, optimized for SGD and SWA on the base model (cf. Appendices A and B). Last-layer weights are included by default in our late-phase weight set Φ.\nCIFAR-10. For CIFAR-10 we focus on the WRN architecture, a high-performance residual network (WRN; Zagoruyko & Komodakis, 2016) which features BatchNorm layers. Taking advantage of this, we implement a late-phase weight model consisting of BatchNorm shift and scale parameters.\nAll algorithms achieve a training error close to zero (cf. Appendix B). The resulting predictive accuracies are shown in Table 1. We find that augmenting the WRN 28-10 (a standard WRN configuration) with BatchNorm late-phase weights leads to a systematic improvement in generalization, reducing the gap with a deep ensemble of K = 10 models. Initializing our ensemble from the onset (T0 = 0) fails to meet the performance of the base model, reaching only 95.68 ± 0.23% (cf. Table 12 in Appendix B).\nWe also investigate initializing a late-phase (full) deep ensemble at T0 = 120. This results in a test set accuracy of 96.32±0.09%, in between late-phase BatchNorm weights and no late-phase weights at all. This speaks to the data-efficiency of our low-dimensional late-phase ensembles, which can be trained with as little data as a single model, besides being memory efficient.\nIn addition, we consider a larger instance of the WRN model (the WRN 28-14), trained for 300 epochs using cutout data augmentation (DeVries & Taylor, 2017), as well as a small convolutional neural network without skip connections, cf. Table 3. When late-phase weights are employed in combination with SWA, we observe significant accuracy gains on the WRN 28-14. Thus, our late-phase weights impose an implicit regularization that is effective on models with many weights. Similarly, we observe larger gains when training on a random subset of CIFAR-10 with only 10^4 examples (cf. Appendix B).\nCIFAR-100. We next turn to the CIFAR-100 dataset, which has 10-fold fewer examples per class and more room for improvement.
We study the WRN 28-10, as well as the larger WRN 28-14 variant (using cutout data augmentation as before) and a PyramidNet (Han et al., 2017) with ShakeDrop regularization (Yamada et al., 2019). The latter are trained for 300 epochs.\nPredictive accuracy is again highest for our neural networks with late-phase weights, trained with SGD or SWA, cf. Table 2. We observe that the simplest BatchNorm late-phase weight model reaches the highest accuracy, with late-phase hypernetwork weight embeddings yielding essentially no improvements. Once again, the setting of T0 = 0 (onset ensemble learning) fails to match base model performance, finishing at 80.26± 0.42% test accuracy. As for CIFAR-10, a late-phase full deep ensemble only reached intermediate improvements, at 82.17±0.15% test accuracy. Furthermore, a gap towards deep ensembles persists. This suggests that covering different modes of the loss (Fort et al., 2020) can provide benefits that cannot be captured by ensembling models in a small neighborhood.\nThe final averaged solutions found with late-phase weights are strong base models to build a deep ensemble of independently-trained networks. The fact that our algorithm yields a single model allows further pushing the upper bound of what can be achieved when unrestricted full ensemble training is possible. This improvement comes at no cost compared to a standard deep ensemble.\nWe train additional neural network architectures restricting our experiments to the BatchNorm latephase weight model, which can be readily implemented without architectural modifications. Again, learning with late-phase weights yields a consistent improvement over the baseline, cf. Table 3.\nNotably, SWA can achieve high predictive accuracy with a large constant learning rate (Izmailov et al., 2018). We reproduce these previous results and show that they improve when learning with late-phase weights, cf. Fig. 2. Substantial progress is made both when entering the latephase learning period and when activating SWA.\nOut-of-distribution (OOD) generalization. Deep ensembles are an effective technique for improving the behavior of neural networks in OOD data (Lakshminarayanan et al., 2017). We ask whether our implicit ensembles modeled during late-phase learning could confer a similar advantage to our final averaged model.\nAdditionally, we evaluate the performance of a late-phase weight ensemble obtained with large initialization noise\nσ0 = 0.5 (at T0 = 100), skipping the final weight averaging step. This requires integrating predictions over K late-phase ensemble members at test time, y(x) = 1K ∑K k=1 y(x,wk). Unlike standard deep ensembles, training this ensemble is still as cheap as training a single model.\nWe draw novel images from a collection of datasets (SVHN, Netzer et al. (2011); LSUN, Yu et al. (2015); Tiny ImageNet; CIFAR-10) and present them to a WRN 28-10 trained on CIFAR-100. We use Shannon’s entropy (Cover & Thomas, 2006) to measure the uncertainty in the output predictive distribution, which should be high for OOD and low for CIFAR-100 data. Overall performance is summarized using the area under the receiver operating characteristics curve (AUROC), averaged over all datasets. 
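A minimal sketch of this evaluation protocol is given below, assuming a trained classifier and PyTorch-style data loaders (`model`, `in_loader`, and `ood_loader` are placeholders, not objects from the paper); the entropy of the softmax output serves as the uncertainty score, and scikit-learn computes the AUROC.

```python
# Sketch of entropy-based OOD scoring summarized by AUROC; placeholders assumed.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def predictive_entropy(model, loader, device="cpu"):
    scores = []
    for x, *_ in loader:
        p = F.softmax(model(x.to(device)), dim=1)
        h = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1)   # Shannon entropy
        scores.append(h.cpu())
    return torch.cat(scores)

def ood_auroc(model, in_loader, ood_loader):
    h_in = predictive_entropy(model, in_loader)     # should be low in-distribution
    h_ood = predictive_entropy(model, ood_loader)   # should be high on OOD data
    labels = torch.cat([torch.zeros_like(h_in), torch.ones_like(h_ood)])
    scores = torch.cat([h_in, h_ood])
    return roc_auc_score(labels.numpy(), scores.numpy())
```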
We report per-dataset results in Appendix B (Table 16) alongside experiments measuring robustness to corruptions in the input data (Hendrycks & Dietterich, 2019).\nWe compare our results to alternative methods with strong uncertainty representation: MC-dropout (Gal & Ghahramani, 2016), SWA-Gaussian (SWAG; Maddox et al., 2019) and BatchEnsemble (Wen et al., 2020). All three methods require integrating predictions over an ensemble at test time.\nWe find that learning with late-phase weights increases prediction uncertainty in OOD data, allowing for a significantly better separation between in and out-of-distribution examples, cf. Table 4. The\nOOD performance of late-phase BatchNorm weights compares favorably to the alternative methods including deep ensembles, even when using a single weight-averaged model, while maintaining high predictive accuracy. Remarkably, keeping the late-phase BatchNorm ensemble at test time allows reaching the highest OOD performance throughout. Paired with non-zero initialization noise σ0 > 0 (cf. Appendix B), this method results in the best OOD performance.\nDespite our improved performance on both predictive accuracy (with late-phase BatchNorm) and OOD discrimination (with late-phase BatchNorm and hypernetwork embeddings), the test set negative log-likelihood (NLL; often used to assess predictive uncertainty, Guo et al., 2017) is surprisingly slightly worse for our solutions. This is aligned with the finding that SWA does not always significantly reduce NLL, even though predictive accuracy increases (Maddox et al., 2019).\nFlatness. Why do our networks generalize better? Approximate Bayesian inference suggests that flat minima generalize better than sharp minima (Hochreiter & Schmidhuber, 1997a; MacKay, 1992). Due to symme-\ntries that are present in neural networks there is some debate surrounding this argument (Dinh et al., 2017), but current evidence seems favorable (Jiang et al., 2020).\nWe hypothesize that sharing base weights over K late-phase weight configurations can implicitly lead to flatter solutions. To investigate whether our algorithm finds flatter minima, we examine a simple flatness score that correlates well with generalization (Pittorino et al., 2020; Jiang et al., 2020). Concretely, we add multiplicative Gaussian noise zi ∼ N (0, w2i σ2z) to each weight wi and then measure the change in the loss δL = Ez[L(w + z) − L(w)]. Our final weight configurations are indeed in flatter regions of weight space according to this measure: δL increases more slowly with σz for the WRN 28-10 models that are learned with BatchNorm late-phase weights, Fig. 3." }, { "heading": "3.3 IMAGENET EXPERIMENTS", "text": "To investigate whether our gains translate to large-scale learning problems, we train deep residual networks (He et al., 2016) and a densely-connected convolutional network (DenseNet; Huang et al., 2018) on the ImageNet dataset (Russakovsky et al., 2015). We start from pretrained models and contrast BatchNorm late-phase weight learning to fine-tuning with SGD for 20 epochs, with γθ = 1/K\nand σ0 = 0 (cf. Appendix A). For simplicity we do not include last-layer weights in Φ.\nFine-tuning with late-phase weights improves the final top-1 validation accuracy of this pretrained model significantly with only minor training, as seen in Table 5. These results serve as a proof-ofconcept that existing models can be further improved, taking our late-phase initialization T0 as the time the previous experimenter stopped training. 
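To sketch what this pretrained-model workflow might look like in practice, the snippet below spawns BatchNorm late-phase weights on a torchvision ResNet-50 and collapses them again via Eq. 1. It is an illustration only: the helper names are invented, the training loop (Algorithm 1) and the per-member batch statistics are omitted, and `pretrained=True` refers to the older torchvision API.

```python
# Sketch: BatchNorm late-phase weights on a pretrained model (spawn and collapse).
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(pretrained=True)
bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]

K, sigma0 = 10, 0.0                      # sigma0 = 0 as in the ImageNet runs
late_phase = [
    [(bn.weight.detach().clone() + sigma0 * torch.randn_like(bn.weight),
      bn.bias.detach().clone() + sigma0 * torch.randn_like(bn.bias))
     for bn in bn_layers]
    for _ in range(K)
]

def load_member(k):
    """Make the network compute with the k-th late-phase (gamma, beta) copies."""
    for bn, (g, b) in zip(bn_layers, late_phase[k]):
        bn.weight.data.copy_(g)
        bn.bias.data.copy_(b)

def collapse():
    """Eq. 1: average the K late-phase copies into a single model for testing."""
    for i, bn in enumerate(bn_layers):
        bn.weight.data.copy_(
            torch.stack([late_phase[k][i][0] for k in range(K)]).mean(0))
        bn.bias.data.copy_(
            torch.stack([late_phase[k][i][1] for k in range(K)]).mean(0))
```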
In Appendix B, we present additional CIFAR-100 experiments where we apply late-phase learning starting at the suboptimal end-of-training T0 = 200, to mimic the pretrained condition." }, { "heading": "3.4 LSTM LANGUAGE MODELING EXPERIMENTS", "text": "Finally, we conduct experiments on the language modeling benchmark enwik8. To show that the benefits of late-phase weights extend to recurrent neural networks, we augment a standard LSTM with multiplicative late-phase weights consisting of rank-1 matrices (Wen et al., 2020, cf. Section 2.2).\nOverfitting is a major issue when training LSTMs. Recent studies have shown that by leveraging vast amounts of computation and smart black-box optimizers (Golovin et al., 2017), properly regularized LSTMs can outperform previously published state-of-the-art models (Melis et al., 2017). To avoid this issue, we train models where the number of parameters (∼1.56M) is drastically smaller than the number of training data points (90M), such that we\ndo not observe any overfitting. Thus, we do not apply any regularization. This helps minimize the effects of hyperparameter tuning. Our only hyperparameter is the learning rate (0.001 here), which we tune via grid search to maximize base model performance.\nWe train our LSTM with 500 units for 50 epochs, optimizing every weight with Adam (Kingma & Ba, 2015). We apply a multiplicative rank-1 matrix elementwise to the recurrent weight matrix. Interestingly, merely adding the multiplicative parameters to the LSTM (Base) accelerates training and leads to better training and test set performance (measured in bits per character, BPC) with no additional changes to the optimizer (Base + Rank1, Table 6). Further improvements can be achieved with our late-phase weights. We generate K = 10 late-phase weight components at epoch 30 with σ0 = 0.35 and set γθ = 1. Additionally, we find that SWA (starting at epoch 40) substantially improves all scores, with smaller gains on the models with multiplicative weights." }, { "heading": "4 RELATED WORK", "text": "Our late-phase weights define an ensemble with the special property that every model shares the same base weights. Such parameter sharing is an established method for ensembling neural networks while controlling for the memory and time complexity of learning (Lee et al., 2015). In designing our late-phase weight models, we draw directly from recent work which proposes sharing a set of base parameters over K rank-1 matrices (Wen et al., 2020) or K heads (Lee et al., 2015).\nThe elastic averaging SGD algorithm learns K neural networks in parallel, coupled through an additional central model (EASGD; Zhang et al., 2015). Like our algorithm, EASGD often yields solutions which generalize better than those found by standard SGD (Pittorino et al., 2020). Our latephase weight learning is intimately related to EASGD, as we optimize the performance of a central model through an ensemble. However, thanks to parameter sharing and late-phase ensembling, we do not find the need to introduce a coupling term to our loss function. Additionally, as we replicate a small number of parameters only, the complexity of our algorithm is greatly reduced in comparison to EASGD, which requires learning a full ensemble of models.\nSplitting the weights of a neural network into a set of fast and slow components which vary on different timescales is a classic technique (Hinton & Plaut, 1987; Schmidhuber, 1992) that has proven useful in a wide range of problems. 
This list includes applications to few-shot learning (Munkhdalai & Yu, 2017; Nichol et al., 2018; Perez et al., 2018; Zintgraf et al., 2019; Flennerhag et al., 2020), optimization (Zhang et al., 2019b; Chaudhari et al., 2019), improving recurrent neural networks (Ba et al., 2016; Ha et al., 2017), and continual learning with biologically-realistic synapses (Kaplanis et al., 2018; Leimer et al., 2019), to name a few. Although there is no explicit separation of timescales in our weight components, the update accumulation in θ as φk varies (cf. Algorithm 1) suggests interpreting the base θ as slow weights and the late-phase Φ as fast weights.\nThis accumulation is reminiscent of a recent meta-learning algorithm (Zintgraf et al., 2019), which first separates parameters into task-shared and task-specific, and then differentiates through a sequence of accumulated updates performed over the task-specific parameters (Finn et al., 2017). Continuing with the fast-slow weight analogy, our averaging over fast weights at the end of learning (Eq. 1) could be thought of as a synaptic consolidation step which integrates the fast weight components onto a slow, persistent form of memory." }, { "heading": "5 CONCLUSION", "text": "We proposed to replicate and learn in parallel a subset of weights in a late phase of neural network learning. These late-phase weights define an ensemble of models which share every other weight. We studied convolutional neural networks, a common recurrent neural network, and a simple quadratic problem. Surprisingly, across these cases, we found that a small number of appropriately chosen such weights can quickly guide SGD towards solutions that generalize well. Most of our experiments relied on BatchNorm late-phase weights, making our method easy to implement in a wide range of existing models, including pretrained ones. We expect future work to uncover new effective late-phase weight models." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the Swiss National Science Foundation (B.F.G. CRSII5-173721 and 315230 189251), ETH project funding (B.F.G. ETH-20 19-01), the Human Frontiers Science Program (RGY0072/2019) and funding from the Swiss Data Science Center (B.F.G, C17-18, J.v.O. P18-03). João Sacramento was supported by an Ambizione grant (PZ00P3 186027) from the Swiss National Science Foundation. We would like to thank Nicolas Zucchet, Simon Schug, Xu He, Ângelo Cardoso and Angelika Steger for feedback, Mark van Rossum for discussions on flat minima, Simone Surace for his detailed feedback on Appendix C, and Asier Mujika for providing very useful starter code for our LSTM experiments." }, { "heading": "A ADDITIONAL IMPLEMENTATION DETAILS", "text": "Hypernetwork model. The base neural network architecture we use when parameterizing our weights using a hypernetwork is identical to the WRN 28-10 described by Zagoruyko & Komodakis (2016). Our hypernetwork implementation closely follows Savarese & Maire (2019), who studied high-performing linear hypernetwork architectures for WRNs. We do not use dropout or biases in the convolutional layers. The parameters of every convolutional layer are hypernetwork-generated, with one hypernetwork per layer group (Table 7). The remaining parameters, namely those of BatchNorm units and final linear layer weights, are non-hypernetwork-generated.\nFollowing Savarese & Maire (2019) we turn off weight decay for the model embeddings and initialize these parameters with a random pseudo-orthogonal initialization over layers. 
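As a structural sketch of this shared linear hypernetwork (again an illustration, with invented names and arbitrary shapes rather than the exact WRN configuration of Table 7), one layer group can be written as follows: a single parameter tensor theta per group produces each layer's convolution kernel from a small per-layer embedding phi, and only the embeddings would need to be replicated at T0.

```python
# Sketch of a shared linear hypernetwork: W^(g,l) = theta^(g) phi^(g,l).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupHypernetwork(nn.Module):
    def __init__(self, n_layers, emb_dim, out_shape):
        super().__init__()
        n_out = 1
        for s in out_shape:
            n_out *= s
        self.out_shape = out_shape
        self.theta = nn.Parameter(torch.randn(n_out, emb_dim) / emb_dim ** 0.5)
        # One small embedding per layer in the group; these play the role of the
        # late-phase weights phi and are cheap to replicate K times.
        self.phi = nn.Parameter(torch.randn(n_layers, emb_dim))

    def kernel(self, layer_idx):
        return (self.theta @ self.phi[layer_idx]).view(self.out_shape)

hyper = GroupHypernetwork(n_layers=4, emb_dim=16, out_shape=(64, 64, 3, 3))
x = torch.randn(2, 64, 8, 8)
for l in range(4):                       # run the group's 4 generated conv layers
    x = F.relu(F.conv2d(x, hyper.kernel(l), padding=1))
print(x.shape)                           # torch.Size([2, 64, 8, 8])
```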
The hypernetwork parameters are initialized using a standard Kaiming initialization (He et al., 2015).\nSmall ConvNet model. We train a slight modification of the classic LeNet-5 (Lecun et al., 1998) for 200 epochs on CIFAR-10. Both convolutional and fully-connected layers are left unchanged, but we use rectified linear units on the hidden layers. Furthermore, after each such activation, BatchNorm units are inserted. We optimize the model with SGD and use late-phase BatchNorm weights, with T0 = 50 and σ0 = 0.5. For simplicity of implementation, we do not include the last linear layer in the late-phase weight set Φ.\nOptimization. We optimize the cross-entropy loss, using either SGD with Nesterov momentum (0.9) or SGD with Nesterov momentum (0.9) wrapped inside SWA. LSTM: Our LSTM experiments use Adam with constant learning rate 0.001, batch size 128, and no regularizers such as weight decay or dropout. WRN-28-10: For our WRN experiments on the CIFAR datasets we use the learning rate annealing schedule of Izmailov et al. (2018), according to which an initial learning rate of 0.1 is linearly decreased at every epoch from the end of the 100th epoch (80th for SWA) to the end of the 180th epoch (144th for SWA; SWA is activated at epoch 160), when a final value of 0.001 (0.05 for SWA) is reached. Our optimizers use Nesterov momentum (set to 0.9), a batch size of 128 and weight decay (set to 0.0005). On CIFAR-100 (SGD) we set the weight decay of late-phase weights proportional to the ensemble size, 0.0005K. WRN-28-14: The WRN 28-14 models are trained for 300 epochs on CIFAR-100. The learning rate is initialized at 0.1, then annealed to 0.05 from the 80th epoch to the 240th epoch. SWA is activated at epoch 160. All other hyperparameters are identical to those of WRN 28-10. ConvNet: Same as for the WRN 28-10 model, except that we anneal the learning rate until the 160th epoch.\nBatch normalization units. Whenever we use SWA, we follow Izmailov et al. (2018) and perform a full pass over the training set to re-estimate BatchNorm unit statistics before testing. This correction is required since the online BatchNorm mean and variance estimates track the activations produced with the raw (non-averaged) weights during training, while the averaged solution is the one used when predicting at test time.\nData augmentation and preprocessing. On both CIFAR and ImageNet datasets, all images are normalized channelwise by subtracting the mean and dividing by the standard deviation; both statistics are computed on the training dataset. The same transformation is then applied when testing, including to OOD data. Following a standard procedure (e.g., Zagoruyko & Komodakis, 2016; He et al., 2016) we augment our training datasets using random crops (with a 4-pixel padding for CIFAR) and random horizontal flips. The ImageNet training dataset is augmented with random horizontal flips, as well as random cropping of size 224, while a centered cropping of size 224 was used on the test set. Our OOD datasets are resized to fit whenever necessary; we used the resized images made available by Lee et al. (2018).\nImageNet experiments. The pretrained model for the ImageNet experiment is obtained from torchvision’s models subpackage. We fine-tune the model for 20 additional epochs on ImageNet.\nWe use a multistep learning rate scheduler, starting at 0.001 then decreasing at the 10th epoch to 0.0001. We use SGD with momentum (set to 0.9) and weight decay (set to 0.0001) as our optimizer, with a batch size of 256. 
We use σ0 = 0 and K = 10 for our late-phase model.\nCode forks. Our hypernetwork implementation was inspired by the code made publicly available by Savarese & Maire (2019). Our implementation of SWA was adapted from the code accompanying the work of Izmailov et al. (2018), now available on the torchcontrib Python package. The SWAG method was evaluated directly using the code provided by the authors (Maddox et al., 2019). We used the same base WRN model as Maddox et al. (2019), which can be retrieved from https://github.com/ meliketoy/wide-resnet.pytorch.\nLSTM All experiments are conducted using the Tensorflow Python framework (Abadi et al., 2016). All base weights are initialized uniform in [−0.01, 0.01] whereas the initial rank-1 matrix weights are centered around 1 i.e. [1−0.01, 1+0.01] to mitigate strong difference in initialization compared to the base model. We use the Tensorflow default values (β1 = 0.9, β2 = 0., = 10−8) for the Adam optimiser. We perform a grid search over σ0 ∈ [0, 0.5] (in steps of size 0.05) for our LSTM experiments (fixing K = 10 and varying T0 ∈ {0, 30}) and obtain the values reported in the main text, T0 = 30 and σ0 = 0.35." }, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "Pretrained CIFAR-100. We apply our method to a standard WRN 28-10 pretrained on CIFAR100 (i.e., we set T0 = 200) and train for an additional 20 epochs. At the beginning of the finetuning, the learning rate is reset to 0.01, then annealed linearly to 0.001 for 10 epochs. It is then held constant for the remainder of the fine-tuning process. We observe that augmenting with BatchNorm late-phase weights yields an improved predictive accuracy compared to additional fine-tuning with SGD (Base), cf. Table 9. Both methods improve over the initial baseline (Initial), including the base model. This can be explained by the optimization restart and the accompanying spike in the learning rate introduced by our scheduler (Loshchilov & Hutter, 2017).\nImportantly, we find that fine-tuning only BatchNorm late-phase weights while keeping all other weights fixed does not even match the Base control. Together with the finding that the optimal latephase weight initialization time is at T ∗0 = 120 (when learning for 200 epochs), this result speaks to the importance of jointly optimizing both base and late-phase weights through our Algorithm 1.\nGradient accumulation control. Here we show that the improved generalization we report in the main text is not merely due to gradient accumulation over larger batches. We take our base WRN 28- 10 model (without late-phase weights) and start accumulating gradients over K = 10 minibatches at T0 = 120, experimenting both with γθ = 1/K and γθ = 1. The models are trained with SGD using otherwise standard optimization settings. Both controls fail to improve (even match) the performance of the base model trained without any gradient accumulation.\nSensitivity to T0, K and σ0. We present a hyperparameter exploration on the CIFAR-100 dataset using BatchNorm late-phase weights in Tables 8, 11 and 12. We find that our algorithm is largely\nrobust to σ0 when T0 can be set to its optimal value, which is at 60% of training. See also Figure 6 for a visualisation of the same data, specifically the change in mean AUROC score and test set accuracy when changing T0. This result holds also on CIFAR-10, cf. Table 12. When starting from a pretrained condition (T0 = 200), finite σ0 leads to a significant improvement in performance, cf. Table 11. 
We therefore report results obtained with σ0 = 0 for every CIFAR and ImageNet experiment in the main text. The exception to this is the non-averaged (ensemble) late-phase BatchNorm model presented in Table 4, which was optimized for best OOD performance (corresponding to σ0 = 0.5).\nTable 12: CIFAR-10 and CIFAR-100 test set accuracy (%) depending on different late phase timing T0 for WRN 28-10, SGD. Mean ± std. over 5 seeds.\nT0 CIFAR-10 CIFAR-100\n0 95.68±0.23 74.38±0.71 40 96.34±0.08 79.69±0.11 60 96.42±0.10 80.53±0.21 80 96.50±0.11 81.72±0.18 100 96.45±0.08 82.48±0.21 120 96.48±0.20 82.87±0.22 140 96.26±0.17 82.53±0.21 160 96.23±0.11 81.41±0.31 180 96.25±0.23 81.43±0.27 200 96.16±0.12 81.35±0.16\nRelated work. Here we provide details for the training setups of alternative methods we compare against in the main text. For the results reported for dropout (Srivastava et al., 2014) and MC-dropout (Gal & Ghahramani, 2016), we simply train a WRN 28-10 on CIFAR-100 with the exact same configuration as for our base model, see above, but include dropout layers as usually done (Zagoruyko & Komodakis, 2016) after the first convolution in each residual block. For a scan over the dropout probability p in this setup, see Table 13. p = 0.2 is reported in the main text - for CIFAR-100 and CIFAR10. Note that p was only tuned for CIFAR-100.\nFor the reported results of BatchEnsemble (Wen et al., 2020), we simply execute the code provided by the authors at https://github.com/ google/uncertainty-baselineswith their fine-tuned configuration for CIFAR-10/100. Notably, the authors use a different setup than followed in this manuscript. First, the WRN 28-10 is trained for 250 epochs (we allow for this increased budget exceptionally for BatchEnsemble), with a multi-step learning rate annealing at [80, 160, 180] with a learning rate decay factor of 0.2. Second, a weight decay of 3× 10−4 is used.\nFor the results reported for SWAG (Maddox et al., 2019), we use the code provided by the authors at https://github.com/wjmaddox/swa_gaussian, and the proposed fine-tuned configuration which coincides with the configuration used to obtain all CIFAR-100 results reported in this manuscript, except for BatchEnsembles (see above). We report results for SWAG after training on 200 epochs for fair comparison.\nTraining losses. We provide the final achieved training losses for the base model and when augmenting it with BatchNorm late-phase weights on Table 14, for both CIFAR-10 and CIFAR-100. Using a fast gradient accumulation scale factor of γθ = 1 leads to a higher training loss on CIFAR-100 than that of the standard model, but we found this setting crucial to achieve the largest improvement on test set generalization.\nCIFAR-10 with a reduced training set. Here we evaluate the performance of our method on a reduced training set of CIFAR-10. We randomly pick 10000 training data out of the 50000 available, and use this new set to train different models. After training, the models are evaluated on the standard CIFAR-10 test set. Results are shown in Table 15.\nDetailed OOD results and mean corruption error (mCE) experiments. In order to test the robustness of late-phase weights against input data corruption, we used the corruptions and dataset proposed by Hendrycks & Dietterich (2019), freely available at https://github.com/ hendrycks/robustness. 
The authors propose 15 noise sources, such as random Gaussian noise, spatter, or contrast changes, to deform the input data, and report the model test set accuracy on the corrupted dataset under 5 severity levels (noise strengths). For each noise source, its corruption error is computed by averaging the prediction error over the severity levels. The average of the corruption errors of all 15 noises gives us the Mean Corruption Error (mCE). See Table 16 for the mCE computed on the corrupted CIFAR-100 dataset.\nTraining run time. Here we compare the training run time of our method with the baseline. The result was computed in Python 3.7, using the automatic differentiation and GPU acceleration package PyTorch (version 1.4.0). We used the standard datasets (including training and test splits) as provided by the torchvision package unless stated otherwise. We used a single NVIDIA GeForce 2080 Ti GPU for the experiment. Results are presented in Table 17." }, { "heading": "C THEORETICAL ANALYSIS OF THE NOISY QUADRATIC PROBLEM", "text": "In this section, we consider a noisy quadratic problem (NQP) that can be theoretically analyzed and that captures important characteristics of the stochasticity of a minibatch-based optimizer (Schaul et al., 2013; Martens, 2016; Wu et al., 2018; Zhang et al., 2019a;b). The NQP performs a second-order Taylor expansion of the loss function around the optimum $w^*$ and models the minibatch noise as a random translation of the optimum, while keeping the curvature $H$ the same. This gives us the following minibatch loss:\n$\hat{L} = \frac{1}{2}\left(w - w^* + \frac{1}{\sqrt{B}}\epsilon\right)^T H \left(w - w^* + \frac{1}{\sqrt{B}}\epsilon\right)$ (4)\nwith $\epsilon \sim \mathcal{N}(0, \Sigma)$ and $B$ the minibatch size. Note that we use boldface notation for vectors in this analysis for notational clarity. The NQP can be seen as an approximation of the loss function in the final phase of learning, where we initialize the late-phase ensemble. Despite its apparent simplicity, it remains a challenging optimization problem that has important similarities with stochastic minibatch training in deep neural networks (Schaul et al., 2013; Martens, 2016; Wu et al., 2018; Zhang et al., 2019a;b). For the simple loss landscape of the NQP, there are three main strategies to improve the expected loss after convergence: (i) increase the minibatch size $B$ (Zhang et al., 2019a), (ii) use more members $K$ in an ensemble (cf. Section C.3), and (iii) decrease the learning rate $\eta$ (Schaul et al., 2013; Martens, 2016; Wu et al., 2018; Zhang et al., 2019a;b). Late-phase weight training combines the first two strategies in a non-trivial manner by (i) averaging over the base-weight gradients for all ensemble members and (ii) averaging the late-phase weights in parameter space to obtain a mean model. The goal of this theoretical analysis is to show that the expected loss after convergence scales inversely with the number of late-phase ensemble members $K$, which indicates that this non-trivial combination of the two strategies is successful.\nTo model the multiplicative weight interaction between late-phase weights and base weights, we use linear hypernetworks of arbitrary dimension. The linear hypernetworks parameterize the weights as $w = \theta e$, with $\theta \in \mathbb{R}^{n \times d}$ the hypernetwork parameters and $e \in \mathbb{R}^d$ the embedding vector. The embedding vectors $e$ are used as late-phase weights ($\phi$ in the main manuscript) to create a late-phase ensemble with $K$ members, while using a shared hypernetwork $\theta$ as base weights: $w_k = \theta e_k$.
Ultimately, we are interested in the expected risk of the mean model at steady state:\n$\mathbb{E}[L^{(ss)}] = \mathbb{E}_{\rho_{ss}}\left[\frac{1}{2}(\bar{w} - w^*)^T H (\bar{w} - w^*)\right]$ (5)\nwith $\bar{w} \triangleq \frac{1}{K}\sum_k \theta e_k = \theta\,\frac{1}{K}\sum_k e_k \triangleq \theta\bar{e}$, and $\rho_{ss}$ the steady-state distribution of the parameters. Note that we cannot put $w^* = 0$ without loss of generality, because the overparameterization of the hypernetworks makes the optimization problem nonlinear.\nWe start with investigating the discrete time dynamics induced by late-phase learning, after which we derive the corresponding continuous time dynamics, to be able to use the rich stochastic dynamical systems literature for analyzing the resulting nonlinear stochastic dynamical system.\nC.1 DISCRETE TIME DYNAMICS\nAs we want to investigate the multiplicative interaction between the shared and late-phase parameters, we substitute $w = \theta e$ into equation 4, instead of computing a new Taylor approximation in the hypernetwork parameter space. Let us take $t$ as the index for the outer loop (updating $\theta$) and $k$ the index for the ensemble member. Then we have the following stochastic minibatch loss:\n$\hat{L}^{(t,k)} = \frac{1}{2}\big(\theta^{(t)} e_k^{(t)} - w^* + \frac{1}{\sqrt{B}}\epsilon^{(t,k)}\big)^T H \big(\theta^{(t)} e_k^{(t)} - w^* + \frac{1}{\sqrt{B}}\epsilon^{(t,k)}\big)$, (6)\nwhich gives rise to the following parameter updates using late-phase learning with learning rate $\eta$ and minibatch size $B$:\n$\theta^{(t+1)} = \theta^{(t)} - \eta\,\frac{1}{K}\sum_k H\big(\theta^{(t)} e_k^{(t)} - w^*\big)\,e_k^{(t)T} + \frac{\eta}{\sqrt{B}}\,\frac{1}{K}\sum_k H \epsilon^{(t,k)}\,e_k^{(t)T}$ (7)\n$e_k^{(t+1)} = e_k^{(t)} - \eta\,\theta^{(t)T} H\big(\theta^{(t)} e_k^{(t)} - w^*\big) + \frac{\eta}{\sqrt{B}}\,\theta^{(t)T} H \epsilon^{(t,k)}$ (8)\nThe above discrete time dynamics are nonlinear, giving rise to a non-Gaussian parameter distribution $\rho$. Hence, it is not possible to characterize these dynamics by the moment-propagating equations of the first and second moment, as done in Zhang et al. (2019a;b); Schaul et al. (2013) and Wu et al. (2018), without having full access to the parameter distribution $\rho$. Furthermore, because of the hypernetwork parameterization, we cannot decouple the system of equations, even if $H$ and $\Sigma$ are diagonal, which is a common approach in the literature. Therefore, we investigate the corresponding continuous time dynamics, such that we can use the rich literature on stochastic dynamical systems.\nC.2 CONTINUOUS TIME DYNAMICS\nFirst, let us define some compact notations for the various parameters:\n$e_t \triangleq [e_1^{(t)T} \dots e_K^{(t)T}]^T$ (9)\n$E_t \triangleq [e_1^{(t)} \dots e_K^{(t)}]$ (10)\n$\theta_t \triangleq \mathrm{vec}(\theta_t)$ (11)\n$x_t \triangleq [\theta_t^T, e_t^T]^T$ (12)\n$\epsilon_t \triangleq [\epsilon^{(t,1)T} \dots \epsilon^{(t,K)T}]^T$, (13)\nwhere $\mathrm{vec}(\theta)$ concatenates the columns of $\theta$ into a vector. Then the discrete time dynamics (equation 7 and equation 8) can be rewritten as:\n$x_{t+1} = x_t - \eta F(x_t) + \frac{\eta}{\sqrt{B}}\,G(x_t)\,\epsilon_t$ (15)\nwith\n$F(x_t) \triangleq \begin{bmatrix} \frac{1}{K}\sum_k \big(e_k^{(t)} \otimes H\big)\big(\theta_t e_k^{(t)} - w^*\big) \\ \big(I \otimes (\theta_t^T H \theta_t)\big)\,e_t - \mathbf{1} \otimes (\theta_t^T H w^*) \end{bmatrix}$ (16)\n$G(x_t) \triangleq \begin{bmatrix} \frac{1}{K} E_t \otimes H \\ I \otimes (\theta_t^T H) \end{bmatrix}$ (17)\nwith $\otimes$ the Kronecker product, $I$ an identity matrix of the appropriate size, and $\mathbf{1}$ a vector full of ones of the appropriate size. As a linear transformation of Gaussian variables remains a Gaussian variable, we can rewrite equation 15 as follows:\n$x_{t+1} = x_t - \eta F(x_t) + \frac{\eta}{\sqrt{B}}\,D(x_t)\,\zeta_t$ (19)\nwith $D(x_t) \triangleq \big(G(x_t)(I \otimes \Sigma)G(x_t)^T\big)^{0.5}$ and $\zeta \sim \mathcal{N}(0, I)$. Following Liu & Theodorou (2019) and Chaudhari & Soatto (2018), the corresponding continuous-time dynamics are:\n$dx_t = -F(x_t)\,dt + \sqrt{2\beta^{-1}}\,D(x_t)\,dW_t$ (20)\nwith $W_t$ Brownian motion and $\beta \triangleq 2B/\eta$ the inverse temperature. Note that $\sqrt{\eta}$ is incorporated into the noise covariance, such that the correct limit to stochastic continuous time dynamics can be made (Liu & Theodorou, 2019; Chaudhari & Soatto, 2018; but see Yaida, 2018).
For computing the expected loss $\mathbb{E}[L_t]$ of the mean model, we need the stochastic dynamics of this loss. Using the Itô lemma (Itô, 1951; Liu & Theodorou, 2019), which is an extension of the chain rule of ordinary calculus to the stochastic setting, we get\n$dL(x_t) = \left[-\nabla L(x_t)^T F(x_t) + \frac{1}{2}\mathrm{Tr}\big[\tilde{D} H_L \tilde{D}\big]\right] dt + \left[\nabla L(x_t)^T \tilde{D}\right] dW_t$ (21)\nwith $\tilde{D} \triangleq \sqrt{2\beta^{-1}}\,D(x_t)$ for notational simplicity and $H_L$ the Hessian of $L$ w.r.t. $x_t$. As we are interested in the expected risk (equation 5), we can take the expectation of equation 21 over the parameter distribution $\rho_t(x)$ to get the dynamics of the first moment of the loss (also known as the backward Kolmogorov equation (Kolmogorov, 1931)):\n$d\,\mathbb{E}_{\rho_t}\big[L(x_t)\big] = \mathbb{E}_{\rho_t}\left[-\nabla L(x_t)^T F(x_t) + \frac{1}{2}\mathrm{Tr}\big[\tilde{D}^2 H_L\big]\right] dt$ (22)\nIn order to obtain the dynamics of the parameter distribution, the Fokker-Planck equation can be used (Jordan et al., 1998). However, due to the nonlinear nature of the stochastic dynamical system, the distribution is non-Gaussian and it is not possible (to our best knowledge) to obtain an analytical solution for equation 22. Nevertheless, we can still gain important insights by investigating the steady state of equation 22. After convergence, the left-hand side (LHS) is expected to be zero. Hence, we have that\n$\mathbb{E}_{\rho_{ss}}\big[\nabla L(x_{ss})^T F(x_{ss})\big] = \frac{1}{2}\,\mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}[\tilde{D}^2 H_L]\big]$ (23)\nThe remainder of our argument is structured as follows. First, we will show that the left-hand side (LHS) of equation 23 is the expectation of an approximation of a weighted norm of the gradient $\nabla L$, after which we will connect this norm to the loss $L$ of the mean model. Second, we will investigate the RHS to show that late-phase learning with ensembles lowers the expected risk of the NQP at steady state. For clarity and ease of notation, we will drop the $ss$ subscripts. The gradient of the mean-model loss is given by:\n$\nabla L(x) = \begin{bmatrix} (\bar{e} \otimes H)(\theta\bar{e} - w^*) \\ \frac{1}{K}\,\mathbf{1} \otimes \big(\theta^T H(\theta\bar{e} - w^*)\big) \end{bmatrix}$ (24)\nBy introducing $\Delta e_k \triangleq e_k - \bar{e}$ and using that $\sum_k \Delta e_k = 0$, we can rewrite $F(x)$ as:\n$F(x) = \begin{bmatrix} I & 0 \\ 0 & KI \end{bmatrix} \nabla L(x) + \begin{bmatrix} (\Gamma \otimes H)\,\theta \\ \big(I \otimes (\theta^T H \theta)\big)\,\Delta e \end{bmatrix}$ (25)\nwith $\Gamma \triangleq \frac{1}{K}\sum_k \Delta e_k \Delta e_k^T$ and $\Delta e^T \triangleq [\Delta e_1^T \dots \Delta e_K^T]$. We see that $F$ is an approximation of the gradient $\nabla L$ where the lower block of $\nabla L$ is scaled by $K$. Importantly, the lower block of the second element of the RHS of equation 25 (the approximation error) will disappear when taking the inner product with $\nabla L$, and the upper block is not influenced by the number of ensemble members $K$, which we will need later. The LHS of equation 23 can now be rewritten as:\n$\mathbb{E}_{\rho_{ss}}\big[\nabla L(x)^T F(x)\big] = \mathbb{E}_{\rho_{ss}}\big[\nabla L(x)^T M \nabla L(x)\big] + \mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}\big[(H\theta\Gamma)^T H(\theta\bar{e} - w^*)\,\bar{e}^T\big]\big]$ (26)\nwith $M$ the diagonal matrix of equation 25 (first element of the RHS). The first term of the RHS of equation 26 is the expectation of a weighted squared norm of $\nabla L$, while the second term is an approximation error due to the covariance of $\Delta e_k$. Hence, we see that the LHS of equation 23 can be seen as an approximation of a weighted norm of the gradient $\nabla L$. By investigating the term $\nabla L(x)^T M \nabla L(x)$ further, we show that it is closely connected to the loss $L$:\n$\nabla L(x)^T M \nabla L(x) = (\bar{w} - w^*)^T \big(\bar{e}^T\bar{e}\,H^2 + H\theta\theta^T H\big)(\bar{w} - w^*)$ (27)\nWhen comparing to the mean-model loss $L = (\bar{w} - w^*)^T H (\bar{w} - w^*)$, we see that the two are tightly connected, both using a weighted distance measure between $\bar{w}$ and $w^*$, with only a different weighting.
Taking everything together, we see that we can take the LHS of equation 23 (and hence also the RHS) as a rough proxy for the expected risk under the steady-state distribution (equation 5), which will be important to investigate the influence of the number of ensemble members on the expected risk. Zhu et al. (2018) highlighted this trace quantity in equation 23 as a measurement of the escaping efficiency out of poor minima. However, we assume that we are in the final valley of convergence (emphasized by this convex NQP), so now this interpretation does not hold and the quantity should be considered as a proxy measurement of the width of the steady-state parameter distribution around the minimum. The trace quantity has $H_L$ and $D(x_{ss})^2$ as main elements, which we structure in block matrices below (for clarity and ease of notation, we drop the subscripts $ss$):\n$H_L = \begin{bmatrix} (\bar{e}\bar{e}^T) \otimes H & \frac{1}{K}\mathbf{1}^T \otimes Q^T \\ \frac{1}{K}\mathbf{1} \otimes Q & \frac{1}{K^2}\mathbf{1} \otimes (\theta^T H \theta) \end{bmatrix}$ (28)\n$D(x)^2 = G(I \otimes \Sigma)G^T = \begin{bmatrix} \frac{1}{K^2}(EE^T) \otimes (H\Sigma H) & \frac{1}{K} E \otimes (H\Sigma H\theta) \\ \frac{1}{K} E^T \otimes (\theta^T H\Sigma H) & I \otimes (\theta^T H\Sigma H\theta) \end{bmatrix}$ (29)\nwith $\mathbf{1}$ a matrix or vector of the appropriate size full of ones, $\bar{e} \triangleq \frac{1}{K}\sum_k e_k$, and the rows of $Q \in \mathbb{R}^{d \times nd}$ given by:\n$Q_{i,:} \triangleq \theta^T \big((\bar{e}\delta_i^T + \delta_i\bar{e}^T) \otimes H\big) - \delta_i^T \otimes (w^{*T} H)$, (30)\nwith $\delta_i$ the $i$-th column of an appropriately sized identity matrix. After some intermediate calculations and rearranging of terms, we reach the following expression for the RHS of equation 23:\n$\frac{1}{2}\mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}[\tilde{D}^2 H_L]\big] = \frac{1}{K\beta}\Big(\mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}\big[\tilde{E}^2\bar{e}\bar{e}^T\big]\big]\,\mathrm{Tr}\big[H\Sigma H^2\big] + \mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}\big[\bar{e} \otimes (H\Sigma H\theta Q)\big] + \mathrm{Tr}\big[\big(\bar{e}^T \otimes (\theta^T H\Sigma H)\big) Q^T\big] + \mathrm{Tr}\big[\theta^T H\Sigma H\theta\,\theta^T H\theta\big]\big]\Big)$, (31)\nwith $\tilde{E}^2 \triangleq \frac{1}{K}\sum_k e_k e_k^T = \frac{1}{K}EE^T$. Note that everything between the big brackets on the RHS is independent of $K$ in expectation. Hence, we see that the RHS of equation 23 scales inversely with $K$, exactly as in the case of full ensembles (see Section C.3). Importantly, the approximation errors in equation 25 are independent of $K$; hence, the $\frac{1}{K}$ scaling found in equation 31 translates to a $\frac{1}{K}$ scaling of the expected risk of the NQP, following the above argumentation. Hence, we see that the non-trivial combination of (i) averaging over the base-weight gradients for all ensemble members and (ii) averaging the late-phase weights $e_k$ in parameter space to obtain a mean model succeeds in scaling the expected loss after convergence inversely with $K$.\nC.3 NQP WITH FULL ENSEMBLES\nAs a comparison for the above theoretical results, we also analyze the NQP that uses an ensemble of $K$ full weight configurations $w_k$ to get a mean model $\bar{w}$, instead of shared weights $\theta$ and ensemble-member-specific weights $\phi_k$. For the case of linear models, the averaging in weight space to obtain a mean model is equivalent to the averaging of the predictions over the ensemble, which is conventionally done using ensembles. Without loss of generality, we can take $w^* = 0$ (corresponding to a simple reparameterization of $w$). Using equation 4, this results in the following parameter updates for the ensemble members:\n$w_k^{(t+1)} = (I - \eta H)\,w_k^{(t)} + \frac{\eta}{\sqrt{B}}\,H\epsilon^{(t,k)}$ (32)\nThe mean model $\bar{w} \triangleq \frac{1}{K}\sum_k w_k$ has the following corresponding discrete dynamics:\n$\bar{w}^{(t+1)} = (I - \eta H)\,\bar{w}^{(t)} + \frac{\eta}{K\sqrt{B}}\,H\sum_k \epsilon^{(t,k)}$ (33)\nExact moment propagating equations. As this is a discrete linear system with Gaussian noise, the resulting parameter distributions will also be Gaussian, and can be fully characterized by the mean and covariance of the parameters.
Taking the expectation and variance of equation 33 results in:\n$\mathbb{E}\big[\bar{w}^{(t+1)}\big] = (I - \eta H)\,\mathbb{E}\big[\bar{w}^{(t)}\big]$ (34)\n$\mathbb{C}\big[\bar{w}^{(t+1)}\big] = (I - \eta H)\,\mathbb{C}\big[\bar{w}^{(t)}\big]\,(I - \eta H) + \frac{\eta^2}{KB}\,H\Sigma H$ (35)\nwith $\Sigma$ the covariance matrix of $\epsilon$. For an appropriate $\eta$, the above equations converge to the following fixed points at steady state:\n$\mathbb{E}_{\rho_{ss}}\big[\bar{w}\big] = 0$ (36)\n$\mathrm{vec}\big(\mathbb{C}_{\rho_{ss}}[\bar{w}]\big) = \frac{\eta^2}{KB}\big(I - (I - \eta H) \otimes (I - \eta H)\big)^{-1}\mathrm{vec}\big(H\Sigma H\big)$ (37)\nWe see that the steady-state covariance of $\bar{w}$, and hence of the risk $L$, scales with $\frac{1}{K}$ ($\mathbb{E}_{\rho_{ss}}[L] = \mathbb{E}_{\rho_{ss}}[\bar{w}^T H \bar{w}] = \mathrm{Tr}\big[H\,\mathbb{C}_{\rho_{ss}}[\bar{w}]\big]$). The expected risk $\mathbb{E}_{\rho_{ss}}[L]$ obtained with computationally expensive full ensembles can be seen as a lower limit that we try to reach with the economical ensembles of shared weights $\theta$ and late-phase weights $\phi_k$. Note that for the NQP, increasing the batch size $B$ has a similar influence as increasing the number of ensemble members $K$, as can be seen in equation 37.\nContinuous time stochastic dynamics. We can also do a similar continuous time analysis as in Section C.2 for the case of full ensembles, to better relate it to the results of late-phase learning with shared parameters. Following the same approach, we get the following expression for the trace term:\n$\frac{1}{2}\mathbb{E}_{\rho_{ss}}\big[\mathrm{Tr}[\tilde{D}^2 H_L]\big] = \mathrm{Tr}\left[\frac{1}{\beta}\big(I \otimes (H\Sigma H)\big)\,\frac{1}{K^2}\big(\mathbf{1} \otimes H\big)\right]$ (38)\n$= \frac{1}{K\beta}\,\mathrm{Tr}\big[H\Sigma H^2\big]$ (39)\nWhen comparing to equation 31, we see that the economical ensembles with shared parameters reach the same $\frac{1}{K}$ scaling as a result of ensembling; however, some extra terms that vanish asymptotically for big $K$ appear as a result of the interplay between shared and late-phase parameters.\nExperimental details for Fig. 1. We take the model $w = \theta\phi$ (i.e., $K = 1$) as our baseline, since this overparameterization could already result in accelerated learning (Arora et al., 2018). Our parameters are randomly initialized and scaled such that $\bar{w}$ has a fixed distance to $w^*$ of 1. Since the NQP mimics a late phase of learning we set $T_0 = 0$. We study a problem of dimension $n = 100$ and train the model with gradient descent (without momentum).\nTo validate the theoretical results, we show in Fig. 1 that the steady state reached by our method scales inversely with $K$, similarly to an ensemble of independently-trained models. We run experiments with $K \in \{2, 5, 10, 15, 20, 25\}$ and train every configuration for $2 \times 10^7$ iterations until convergence. We average over the last $10^4$ weight updates and over 5 different random seeds." } ]
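The Fig. 1 protocol described at the end of the appendix above can be sketched in a few lines of NumPy. The following is an illustrative re-implementation, not the original experiment code: it uses far fewer iterations and an assumed learning rate, but by the analysis above the steady-state loss of the mean model w = theta * phi_k (shared theta, K scalar late-phase weights) should fall roughly as 1/K.

```python
# Illustrative check of the ~1/K steady-state scaling for the late-phase NQP.
import numpy as np

rng = np.random.default_rng(0)
n = 100
h = 1.0 / np.arange(1, n + 1)             # H_ii = 1/i
sigma = np.sqrt(1.0 / h)                  # Sigma = H^{-1}

def run(K, eta=0.05, B=1, steps=50_000):
    theta = rng.normal(size=n)
    theta /= np.linalg.norm(theta)        # start at distance 1 from w* = 0
    phis = np.ones(K)                     # scalar late-phase weights, T0 = 0
    tail = []
    for t in range(steps):
        grad_theta = np.zeros(n)
        for k in range(K):                # inner loop of Algorithm 1
            eps = rng.normal(0.0, sigma / np.sqrt(B))
            g = h * (theta * phis[k] + eps)       # dL/dw for member k
            grad_theta += g * phis[k]             # chain rule: dL/d theta
            phis[k] -= eta * np.dot(g, theta)     # chain rule: dL/d phi_k
        theta -= eta * grad_theta / K             # gamma_theta = 1/K
        if t > steps - 5_000:
            w_bar = theta * phis.mean()           # mean model (Eq. 1)
            tail.append(0.5 * np.sum(h * w_bar ** 2))
    return float(np.mean(tail))

for K in (1, 2, 5, 10):
    print(f"K={K:2d}: steady-state loss ~ {run(K):.5f}")
```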
2021
null
SP:6adf73371c97da34bca974dbffb5b7dd211b9e44
[ "To address the ad hoc team play, the authors propose a residual term of Q function, which additionally considers the states of nearby agents. A novel MARA loss is introduced to the residual term as a regularization to achieve the reward assignment implicitly. The proposed CollaQ could be easily built on QMIX and trained end-to-end. CollaQ outperforms other baselines on various tasks with the ad hoc team play setting. " ]
Recent advances in multi-agent reinforcement learning (MARL) have achieved super-human performance in games like Quake 3 and Dota 2. Unfortunately, these techniques require orders-of-magnitude more training rounds than humans and may not generalize to slightly altered environments or new agent configurations (i.e., ad hoc team play). In this work, we propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge and supports ad hoc team play. We first formulate multi-agent collaboration as a joint optimization on reward assignment and show that under certain conditions, each agent has a decentralized Q-function that is approximately optimal and can be decomposed into two terms: the self-term that only relies on the agent’s own state, and the interactive term that is related to states of nearby agents, often observed by the current agent. The two terms are jointly trained using regular DQN, regulated with a Multi-Agent Reward Attribution (MARA) loss that ensures both terms retain their semantics. CollaQ is evaluated on various StarCraft maps, outperforming existing state-of-the-art techniques (i.e., QMIX, QTRAN, and VDN) by improving the win rate by 40% with the same number of environment steps. In the more challenging ad hoc team play setting (i.e., reweight/add/remove units without re-training or finetuning), CollaQ outperforms previous SoTA by over 30%.
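A hedged sketch of the decomposition this abstract describes is given below in PyTorch. The architecture, the dimensions, and the way the MARA-style regularizer is formed (blanking out the nearby-agent part of the observation and pushing the interactive head toward zero) are assumptions made for illustration; they are one plausible reading of the abstract, not the paper's exact design.

```python
# Sketch: per-agent Q split into a self term and an interactive term, with a
# MARA-style regularizer on the interactive term; details are assumptions.
import torch
import torch.nn as nn

class DecomposedQ(nn.Module):
    def __init__(self, d_self, d_nearby, n_actions, hidden=64):
        super().__init__()
        self.q_self = nn.Sequential(                 # depends on own state only
            nn.Linear(d_self, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
        self.q_inter = nn.Sequential(                # depends on own + nearby states
            nn.Linear(d_self + d_nearby, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, s_self, s_nearby):
        return self.q_self(s_self) + self.q_inter(
            torch.cat([s_self, s_nearby], dim=-1))

q = DecomposedQ(d_self=10, d_nearby=20, n_actions=5)
s_self, s_nearby = torch.randn(4, 10), torch.randn(4, 20)

q_total = q(s_self, s_nearby)                        # full Q for TD learning
# MARA-style term: the interactive head should contribute nothing when the
# nearby-agent part of the observation is zeroed out.
q_inter_alone = q.q_inter(torch.cat([s_self, torch.zeros(4, 20)], dim=-1))
mara_loss = (q_inter_alone ** 2).mean()
print(q_total.shape, mara_loss.item())
```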
[ { "affiliations": [], "name": "REWARD ATTRI" } ]
[ { "authors": [ "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemysław Dębiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse" ], "title": "Dota 2 with large scale deep reinforcement learning", "venue": "arXiv preprint arXiv:1912.06680,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Humanlevel performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Mikayel Samvelyan", "Tabish Rashid", "Christian Schroeder de Witt", "Gregory Farquhar", "Nantas Nardelli", "Tim GJ Rudner", "Chia-Man Hung", "Philip HS Torr", "Jakob Foerster", "Shimon Whiteson" ], "title": "The starcraft multi-agent challenge", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": null, "year": 1909 }, { "authors": [ "Peter Stone", "Gal A Kaminka", "Sarit Kraus", "Jeffrey S Rosenschein" ], "title": "Ad hoc autonomous agent teams: Collaboration without pre-coordination", "venue": "In Twenty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Samuel Barrett", "Peter Stone", "Sarit Kraus" ], "title": "Empirical evaluation of ad hoc teamwork in the pursuit domain", "venue": "In AAMAS,", "year": 2011 }, { "authors": [ "Samuel Barrett", "Peter Stone" ], "title": "Cooperating with unknown teammates in complex domains: A robot soccer case study of ad hoc teamwork", "venue": "In Twenty-ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "Marc Lanctot", "Vinicius Zambaldi", "Audrunas Gruslys", "Angeliki Lazaridou", "Karl Tuyls", "Julien Pérolat", "David Silver", "Thore Graepel" ], "title": "A unified game-theoretic approach to multiagent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hengyuan Hu", "Adam Lerer", "Alex Peysakhovich", "Jakob Foerster" ], "title": " other-play\" for zero-shot coordination", "venue": "arXiv preprint arXiv:2003.02979,", "year": 2020 }, { "authors": [ "Devin Schwab", "Yifeng Zhu", "Manuela Veloso" ], "title": "Zero shot transfer learning for robot soccer", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 2070–2072. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Qian Long", "Zihan Zhou", "Abhibav Gupta", "Fei Fang", "Yi Wu", "Xiaolong Wang" ], "title": "Evolutionary population curriculum for scaling multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:2003.10423,", "year": 2020 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. cooperative agents", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinicius Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning", "venue": "arXiv preprint arXiv:1706.05296,", "year": 2017 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder De Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "Kyunghwan Son", "Daewoo Kim", "Wan Ju Kang", "David Earl Hostallero", "Yung Yi" ], "title": "Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Michael L Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "Lucian Bu", "Robert Babu", "Bart De Schutter" ], "title": "A comprehensive survey of multiagent reinforcement learning", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews),", "year": 2008 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Ryan Lowe", "Yi I Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Shariq Iqbal", "Fei Sha" ], "title": "Actor-attention-critic for multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1810.02912,", "year": 2018 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando De Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Igor Mordatch", "Pieter Abbeel" ], "title": "Emergence of grounded compositional language in multi-agent populations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Rohan Chitnis", "Shubham Tulsiani", "Saurabh Gupta", "Abhinav Gupta" ], "title": "Efficient bimanual manipulation using learned task schemas", "venue": "arXiv preprint arXiv:1909.13874,", "year": 2019 }, { "authors": [ "Eugene Vinitsky", "Aboudy 
Kreidieh", "Luc Le Flem", "Nishant Kheterpal", "Kathy Jang", "Cathy Wu", "Fangyu Wu", "Richard Liaw", "Eric Liang", "Alexandre M Bayen" ], "title": "Benchmarks for reinforcement learning in mixed-autonomy traffic", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Joel Z Leibo", "Vinicius Zambaldi", "Marc Lanctot", "Janusz Marecki", "Thore Graepel" ], "title": "Multi-agent reinforcement learning in sequential social dilemmas", "venue": "In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems,", "year": 2017 }, { "authors": [ "Michael Bowling", "Peter McCracken" ], "title": "Coordination and adaptation in impromptu teams", "venue": "In AAAI,", "year": 2005 }, { "authors": [ "Matthew Hausknecht", "Prannoy Mupparaju", "Sandeep Subramanian", "Shivaram Kalyanakrishnan", "Peter Stone" ], "title": "Half field offense: An environment for multiagent learning and ad hoc teamwork", "venue": "In AAMAS Adaptive Learning Agents (ALA) Workshop", "year": 2016 }, { "authors": [ "Samuel Barrett", "Peter Stone", "Sarit Kraus", "Avi Rosenfeld" ], "title": "Learning teammate models for ad hoc teamwork", "venue": "In AAMAS Adaptive Learning Agents (ALA) Workshop,", "year": 2012 }, { "authors": [ "Doran Chakraborty", "Peter Stone" ], "title": "Cooperating with a markovian ad hoc teammate", "venue": "In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems,", "year": 2013 }, { "authors": [ "Mark Woodward", "Chelsea Finn", "Karol Hausman" ], "title": "Learning to interactively learn and assist", "venue": "arXiv preprint arXiv:1906.10187,", "year": 2019 }, { "authors": [ "Maruan Al-Shedivat", "Trapit Bansal", "Yuri Burda", "Ilya Sutskever", "Igor Mordatch", "Pieter Abbeel" ], "title": "Continuous adaptation via meta-learning in nonstationary and competitive environments", "venue": "arXiv preprint arXiv:1710.03641,", "year": 2017 }, { "authors": [ "Shihui Li", "Yi Wu", "Xinyue Cui", "Honghua Dong", "Fei Fang", "Stuart Russell" ], "title": "Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "He He", "Jordan Boyd-Graber", "Kevin Kwok", "Hal Daumé III" ], "title": "Opponent modeling in deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Macheng Shen", "Jonathan P How" ], "title": "Robust opponent modeling via adversarial ensemble reinforcement learning in asymmetric imperfect-information games", "venue": null, "year": 1909 }, { "authors": [ "Jack Serrino", "Max Kleiman-Weiner", "David C Parkes", "Josh Tenenbaum" ], "title": "Finding friend and foe in multi-agent games", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Superhuman ai for multiplayer poker", "venue": "Science, 365(6456):", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Rodrigo Canaan", "Xianbo Gao", "Julian Togelius", "Andy Nealen", "Stefan Menzel" ], "title": "Generating and adapting to diverse ad-hoc cooperation agents in hanab", "venue": "arXiv preprint 
arXiv:2004.13710,", "year": 2020 }, { "authors": [ "Nicolas Carion", "Nicolas Usunier", "Gabriel Synnaeve", "Alessandro Lazaric" ], "title": "A structured prediction approach for generalization in cooperative multi-agent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jiachen Yang", "Alireza Nakhaei", "David Isele", "Kikuo Fujimura", "Hongyuan Zha" ], "title": "Cm3: Cooperative multi-goal multi-stage multi-agent reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Weixun Wang", "Tianpei Yang Yong Liu", "Jianye Hao", "Xiaotian Hao", "Yujing Hu", "Yingfeng Chen", "Changjie Fan", "Yang Gao" ], "title": "Action semantics network: Considering the effects of actions in multiagent systems", "venue": "arXiv preprint arXiv:1907.11461,", "year": 2019 }, { "authors": [ "Weixun Wang", "Tianpei Yang", "Yong Liu", "Jianye Hao", "Xiaotian Hao", "Yujing Hu", "Yingfeng Chen", "Changjie Fan", "Yang Gao" ], "title": "From few to more: Large-scale dynamic multiagent curriculum learning", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Anatol Rapoport" ], "title": "Prisoner’s dilemma—recollections and observations. In Game Theory as a Theory of a Conflict Resolution, pages 17–34", "venue": null, "year": 1974 }, { "authors": [ "Paul AM Van Lange", "Jeff Joireman", "Craig D Parks", "Eric Van Dijk" ], "title": "The psychology of social dilemmas: A review", "venue": "Organizational Behavior and Human Decision Processes,", "year": 2013 }, { "authors": [ "Tuomas W Sandholm", "Robert H Crites" ], "title": "Multiagent reinforcement learning in the iterated prisoner’s", "venue": "dilemma. Biosystems,", "year": 1996 }, { "authors": [ "Enrique Munoz de Cote", "Alessandro Lazaric", "Marcello Restelli" ], "title": "Learning to cooperate in multiagent social dilemmas", "venue": "In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems,", "year": 2006 }, { "authors": [ "Michael Wunder", "Michael L Littman", "Monica Babes" ], "title": "Classes of multiagent q-learning dynamics with epsilon-greedy exploration", "venue": "In Proceedings of the 27th International Conference on Machine Learning", "year": 2010 }, { "authors": [ "Natasha Jaques", "Angeliki Lazaridou", "Edward Hughes", "Caglar Gulcehre", "Pedro Ortega", "DJ Strouse", "Joel Z Leibo", "Nando De Freitas" ], "title": "Social influence as intrinsic motivation for multi-agent deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Richard S Sutton" ], "title": "Temporal credit assignment in reinforcement learning", "venue": null, "year": 1985 }, { "authors": [ "Duc Thien Nguyen", "Akshat Kumar", "Hoong Chuin Lau" ], "title": "Credit assignment for collective multiagent rl with global rewards", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sam Devlin", "Logan Yliniemi", "Daniel Kudenko", "Kagan Tumer" ], "title": "Potential-based difference rewards for multiagent reinforcement learning", "venue": "In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems,", "year": 2014 }, { "authors": [ "Sam Michael Devlin", "Daniel Kudenko" ], "title": "Dynamic potential-based reward shaping", "venue": "In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, multi-agent deep reinforcement learning (MARL) has drawn increasing interest from the research community. MARL algorithms have shown super-human level performance in various games like Dota 2 (Berner et al., 2019), Quake 3 Arena (Jaderberg et al., 2019), and StarCraft (Samvelyan et al., 2019). However, the algorithms (Schulman et al., 2017; Mnih et al., 2013) are far less sample efficient than humans. For example, in Hide and Seek (Baker et al., 2019), it takes agents 2.69− 8.62 million episodes to learn a simple strategy of door blocking, while it only takes human several rounds to learn this behavior. One of the key reasons for the slow learning is that the number of joint states grows exponentially with the number of agents.\nMoreover, many real-world situations require agents to adapt to new configurations of teams. This can be modeled as ad hoc multi-agent reinforcement learning (Stone et al., 2010) (Ad-hoc MARL) settings, in which agents must adapt to different team sizes and configurations at test time. In contrast to the MARL setting where agents can learn a fixed and team-dependent policy, in the Ad-hoc MARL setting agents must assess and adapt to the capabilities of others to behave optimally. Existing work in ad hoc team play either require sophisticated online learning at test time (Barrett et al., 2011) or prior knowledge about teammate behaviors (Barrett and Stone, 2015). As a result, they do not generalize to complex real-world scenarios. Most existing works either focus on improving generalization towards different opponent strategies (Lanctot et al., 2017; Hu et al., 2020) or simple ad-hoc setting like varying number of test-time teammates (Schwab et al., 2018; Long et al., 2020). We consider a more general setting where test-time teammates may have different capabilities. The need to reason about different team configurations in the Ad-hoc MARL results in an additional exponential increase (Stone et al., 2010) in representational complexity comparing to the MARL setting.\nIn the situation of collaboration, one way to address the complexity of the ad hoc team play setting is to explicitly model and address how agents collaborate. In this paper, one key observation is that when collaborating with different agents, an agent changes their behavior because she realizes that the team could function better if she focuses on some of the rewards while leaving other rewards to other teammates. Inspired by this principle, we formulate multi-agent collaboration as a joint\noptimization over an implicit reward assignment among agents. Because the rewards are assigned differently for different team configurations, the behavior of an agent changes and adaptation follows.\nWhile solving this optimization directly requires centralization at test time, we make an interesting theoretical finding that each agent has a decentralized policy that is (1) approximately optimal for the joint optimization, and (2) only depends on the local configuration of other agents. This enables us to learn a direct mapping from states of nearby agents (or “observation” of agent i) to its Q-function using deep neural network. Furthermore, this finding also suggests that the Q-function of agent i should be decomposed into two terms: Qalonei that only depends on agent i’s own state si, andQ collab i that depends on nearby agents but vanishes if no other agents nearby. 
To enforce these semantics, we regularize $Q^{\text{collab}}_i(s_i, \cdot) = 0$ during training via a novel Multi-Agent Reward Attribution (MARA) loss. The resulting algorithm, Collaborative Q-learning (CollaQ), achieves a 40% improvement in win rates over state-of-the-art techniques for the StarCraft multi-agent challenge. We show that (1) the MARA loss is critical for strong performance and (2) both $Q^{\text{alone}}$ and $Q^{\text{collab}}$ are interpretable via visualization. Furthermore, CollaQ agents can achieve ad hoc team play without retraining or fine-tuning. We propose three tasks to evaluate ad hoc team play performance: at test time, (a) assign a new VIP unit whose survival matters, (b) swap different units in and out, and (c) add or remove units. Results show that CollaQ outperforms baselines by an average of 30% in all these settings.
Related Works. The most straightforward way to train agents for such a MARL task is to learn each agent's value function $Q_i$ independently (IQL) (Tan, 1993). However, the environment becomes non-stationary from the perspective of an individual agent, so this performs poorly in practice. Recent works, e.g., VDN (Sunehag et al., 2017), QMIX (Rashid et al., 2018), and QTRAN (Son et al., 2019), adopt centralized training with decentralized execution to address this problem. They propose to write the joint value function as $Q^\pi(s, \mathbf{a}) = \phi(s, Q_1(o_1, a_1), \ldots, Q_K(o_K, a_K))$, where the formulation of $\phi$ differs across methods. These methods successfully use centralized training to alleviate the non-stationarity issue. However, none of them generalizes well to ad hoc team play, since the learned $Q_i$ functions depend heavily on the presence of the other agents." }, { "heading": "2 COLLABORATIVE MULTI-AGENT REWARD ASSIGNMENT", "text": "Basic Setting. A multi-agent extension of the Markov Decision Process, called collaborative partially observable Markov Games (Littman, 1994), is defined by a set of states $S$ describing the possible configurations of all $K$ agents, a set of possible actions $A_1, \ldots, A_K$, and a set of possible observations $O_1, \ldots, O_K$. At every step, each agent $i$ chooses its action $a_i$ according to a stochastic policy $\pi_i: O_i \times A_i \to [0, 1]$. The joint action $\mathbf{a}$ produces the next state via a transition function $P: S \times A_1 \times \cdots \times A_K \to S$. All agents share the same reward $r: S \times A_1 \times \cdots \times A_K \to \mathbb{R}$, with a joint value function $Q^\pi = \mathbb{E}_{s_{t+1:\infty}, \mathbf{a}_{t+1:\infty}}[R_t \mid s_t, \mathbf{a}_t]$, where $R_t = \sum_{j=0}^{\infty} \gamma^j r_{t+j}$ is the discounted return.
In Sec. 2.1, we first model multi-agent collaboration as a joint optimization over reward assignment: instead of acting on the joint state $s$, each agent $i$ acts independently on its own state $s_i$, following its own optimal value $V_i$, which is a function of the perceived reward assignment $r_i$. While the optimal perceived reward assignment $r_i^*(s)$ depends on the joint state of all agents and requires centralization, in Sec. 2.2 we prove that there exists an approximately optimal assignment $\hat{r}_i$ that only depends on the local observation $s^{\text{local}}_i$ of agent $i$, thus enabling decentralized execution. Lastly, in Sec. 2.3, we distill these theoretical insights into a practical algorithm, CollaQ, by directly learning the compositional mapping $s^{\text{local}}_i \mapsto \hat{r}_i \mapsto V_i$ in an end-to-end fashion, while keeping the decomposed structure of self state and local observations." }, { "heading": "2.1 BASIC ASSUMPTION", "text": "A naive approach to multi-agent collaboration is to estimate a joint value function $V_{\text{joint}} := V_{\text{joint}}(s_1, s_2, \ldots, s_K)$,
and find the best action for agent $i$ to maximize $V_{\text{joint}}$ given the current joint state $s = (s_1, s_2, \ldots, s_K)$. However, this has three fundamental drawbacks: (1) $V_{\text{joint}}$ generally requires an exponential number of samples to learn; (2) evaluating it requires full observation of the states of all agents, which rules out decentralized execution, a key desideratum of multi-agent RL; and (3) for any environment/team change (e.g., teaming with different agents), $V_{\text{joint}}$ must be relearned for all agents, rendering ad hoc team play impossible.
CollaQ addresses these three issues with a novel theoretical framework that decouples the interactions between agents. Instead of using a $V_{\text{joint}}$ that bundles all agent interactions together, we consider the underlying mechanism by which agents interact: in a fully collaborative setting, agent $i$ takes actions towards a state not only because that state is rewarding to agent $i$, but also because it is more rewarding to agent $i$ than to the other agents in the team, from agent $i$'s point of view. This is the concept of the perceived reward of agent $i$. Each agent then acts independently following its own value function $V_i$, which is the optimal solution to the Bellman equation conditioned on the assigned perceived reward, and is a function of it. This naturally leads to collaboration.
We build a mathematical framework to model such behaviors. Specifically, we make the following assumption on the behavior of each agent:
Assumption 1. Each agent $i$ has a perceived reward assignment $r_i \in \mathbb{R}_+^{|S_i||A_i|}$ that may depend on the joint state $s = (s_1, \ldots, s_K)$. Agent $i$ acts according to its own state $s_i$ and its individual optimal value $V_i = V_i(s_i; r_i)$ (and the associated $Q_i(s_i, a_i; r_i)$), which is a function of $r_i$.
Note that the perceived reward assignment $r_i \in \mathbb{R}_+^{|S_i||A_i|}$ is a non-negative vector containing the assigned scalar reward at each state-action pair (hence its length is $|S_i||A_i|$). We may equivalently write it as a function $r_i(x, a): S_i \times A_i \to \mathbb{R}$, where $x \in S_i$ and $a \in A_i$. Here $x$ is a dummy variable that runs through all states of agent $i$, while $s_i$ refers to its current state.
Given the perceived reward assignments $\{r_i\}$, the values and actions of the agents become decoupled. Due to the fully collaborative nature of the task, a natural choice of $\{r_i\}$ is the optimal solution of the following objective $J(r_1, r_2, \ldots, r_K)$, where $r_e$ is the external reward of the environment, $w_i \ge 0$ is the preference of agent $i$, and $\circ$ is the Hadamard (element-wise) product:
$$J(r_1, \ldots, r_K) := \sum_{i=1}^{K} V_i(s_i; r_i) \quad \text{s.t.} \quad \sum_{i=1}^{K} w_i \circ r_i \le r_e \qquad (1)$$
The constraint ensures that the objective has a bounded solution; without it, we could take each perceived reward $r_i$ to $+\infty$, since each value function $V_i(s_i; r_i)$ increases monotonically with respect to $r_i$. Intuitively, Eqn. 1 means that we "assign" the external reward $r_e$ optimally to the $K$ agents as perceived rewards, so that their overall values are highest.
In the sparse-reward case, $r_e(x, a) = 0$ for most state-action pairs $(x, a)$; by Eqn. 1, the perceived reward then satisfies $r_i(x, a) = 0$ for every agent $i$ at those pairs. We therefore focus only on the nonzero entries of each $r_i$. Define $M$ to be the number of state-action pairs with positive reward: $M = \sum_{x, a_i} \mathbf{1}\{r_i(x, a_i) > 0\}$. Discarding zero entries, we can regard each $r_i$ as an $M$-dimensional vector. Finally, we define the reward matrix $R = [r_1, \ldots, r_K] \in \mathbb{R}^{M \times K}$. Clarification on Rewards.
There are two kinds of rewards here: the external reward $r_e$ and the perceived reward $r_i$ of each agent. $r_e$ is defined as the environmental reward shared by all agents: $r_e: S \times A_1 \times \cdots \times A_K \to \mathbb{R}$. Given this external reward and a specific reward assignment, each agent receives a perceived reward $r_i$ that drives its behavior. If the reward assignment is properly defined/optimized, then all agents acting on their perceived rewards jointly optimize (maximize) the shared external reward.
2.2 LEARN TO PREDICT THE OPTIMAL ASSIGNED REWARD $r_i^*(s)$
The optimal reward assignment $R^*$ of Eqn. 1, as well as its $i$-th column $r_i^*$, is a function of the joint state $s = (s_1, s_2, \ldots, s_K)$. Once the optimization is done, each agent can obtain its best action $a_i^* = \arg\max_{a_i} Q_i(s_i, a_i; r_i^*(s))$ independently from the reconstructed Q-function.
The formulation $V_i(s_i; r_i)$ avoids learning the value function over statistically infeasible joint states, $V_i(s)$. Since an agent acts solely based on $r_i$, ad hoc team play becomes possible if the correct $r_i$ is assigned. However, issues remain. First, since each $V_i$ is a convex function of $r_i$, maximizing Eqn. 1 amounts to maximizing a sum of convex functions under linear constraints, which is computationally hard. Furthermore, to obtain actions for each agent, we would need to solve Eqn. 1 at every step, which still requires centralization at test time and prevents decentralized execution.
To overcome the optimization complexity and enable decentralized execution, we consider learning a direct mapping from the joint state $s$ to the optimally assigned reward $r_i^*(s)$. However, since $s$ is a joint state, learning such a mapping can be as hard as modeling $V_i(s)$.
Fortunately, $V_i(s_i; r_i(s))$ is not an arbitrary function, but an optimal value function that satisfies the Bellman equation. Thanks to this structure, we can find an approximate assignment $\hat{r}_i$ for each agent $i$ such that $\hat{r}_i$ only depends on a local observation $s^{\text{local}}_i$, the states of nearby agents observed by agent $i$: $\hat{r}_i(s) = \hat{r}_i(s^{\text{local}}_i)$. At the same time, these approximate reward assignments $\{\hat{r}_i\}$ are approximately optimal for the joint optimization (Eqn. 1), with bounded error:
Theorem 1. For all $i \in \{1, \ldots, K\}$ and all $s_i \in S_i$, there exists a reward assignment $\hat{r}_i$ that (1) only depends on $s^{\text{local}}_i$, and (2) is the $i$-th column of a feasible global reward assignment $\hat{R}$ such that
$$J(\hat{R}) \ge J(R^*) - (\gamma^C + \gamma^D) R_{\max} M K, \qquad (2)$$
where $C$ and $D$ are constants related to the distances between agents/rewards (details in Appendix).
Since $\hat{r}_i$ only depends on the local observation of agent $i$ (i.e., the agent's own state $s_i$ and the states of nearby agents), it enables decentralized execution: for each agent $i$, the local observation suffices to act near-optimally.
Limitation. One limitation of Theorem 1 is that the optimality gap of $\hat{r}_i$ depends heavily on the size of $s^{\text{local}}_i$. If the local observation of agent $i$ covers more agents, the gap is smaller, but the cost of learning such a mapping is higher, since it has more input states and becomes higher-dimensional. In practice, we found that letting the observation $o_i$ of agent $i$ cover $s^{\text{local}}_i$ works sufficiently well, as shown in the experiments (Sec. 4)." }, { "heading": "2.3 COLLABORATIVE Q-LEARNING (COLLAQ)", "text": "While Theorem 1 shows the existence of a perceived reward $\hat{r}_i = \hat{r}_i(s^{\text{local}}_i)$ with good properties, learning $\hat{r}_i(s^{\text{local}}_i)$ is not a trivial task.
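To make the assignment objective concrete, here is a toy numerical sketch of Eqn. 1 (our own illustration, not the paper's code). It assumes a drastically simplified linear value model $V_i(s_i; r_i) \approx \sum_x \gamma^{|s_i - x|} r_i(x)$ and uniform preferences $w_i = 1$, so the constraint reduces to $\sum_i r_i(x) \le r_e(x)$ per reward site; the positions and rewards are made up:

```python
import numpy as np

# Toy instance of Eqn. 1 under a linear value approximation
# (hypothetical positions and rewards; not the paper's code).
gamma = 0.9
agent_pos = np.array([0, 3, 7])        # K = 3 agents on a line
site_pos = np.array([1, 4, 8])         # M = 3 sparse reward sites
r_e = np.array([1.0, 2.0, 1.5])        # external reward per site

# Discounted usefulness of site x to agent i: gamma^{|s_i - x|}.
disc = gamma ** np.abs(agent_pos[:, None] - site_pos[None, :])  # (K, M)

# With V_i linear in r_i, the optimum of Eqn. 1 gives each site's
# full external reward to the agent with the largest discount.
best = disc.argmax(axis=0)
R = np.zeros_like(disc)                # R[i, x] = r_i(x)
R[best, np.arange(site_pos.size)] = r_e
print("assignment R:\n", R)
print("objective J =", float((disc * R).sum()))
```

Under the true, nonlinear optimal values $V_i$, this per-step assignment becomes a hard optimization problem, which is why the method below learns $\hat{r}_i$ implicitly instead of recovering it explicitly.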
Learning $\hat{r}_i$ in a supervised manner would require (close to) optimal assignments as labels, which in turn requires solving Eqn. 1. Instead, we resort to end-to-end learning of $Q_i$ for each agent $i$, with a decomposition structure inspired by the theory above.
To see this, we expand the Q-function of agent $i$, $Q_i = Q_i(s_i, a_i; \hat{r}_i)$, with respect to its perceived reward, using a Taylor expansion at the ground-zero reward $r_{0i} = r_i(s_i)$, the perceived reward when only agent $i$ is present in the environment:
$$Q_i(s_i, a_i; \hat{r}_i) = \underbrace{Q_i(s_i, a_i; r_{0i})}_{Q^{\text{alone}}(s_i, a_i)} + \underbrace{\nabla_r Q_i(s_i, a_i; r_{0i}) \cdot (\hat{r}_i - r_{0i}) + O(\|\hat{r}_i - r_{0i}\|^2)}_{Q^{\text{collab}}(s^{\text{local}}_i, a_i)} \qquad (3)$$
Here $Q_i(s_i, a_i; r_{0i})$ is the alone policy of agent $i$; we name it $Q^{\text{alone}}$ since it operates as if no other agents existed. The second term, called $Q^{\text{collab}}$, models the interaction among agents via the perceived reward $\hat{r}_i$. Both $Q^{\text{alone}}$ and $Q^{\text{collab}}$ are neural networks. Thanks to Theorem 1, we only need to feed the local observation $o_i := s^{\text{local}}_i$ of agent $i$, which contains the observations of $W < K$ nearby agents (Fig. 1), to obtain an approximately optimal $Q_i$. The overall $Q_i$ is then computed by a simple addition (here $o^{\text{alone}}_i := s_i$ is the individual state of agent $i$):
$$Q_i(o_i, a_i) = Q^{\text{alone}}_i(o^{\text{alone}}_i, a_i) + Q^{\text{collab}}_i(o_i, a_i) \qquad (4)$$
Multi-Agent Reward Attribution (MARA) Loss. With a simple addition, the decomposition into $Q^{\text{alone}}_i$ and $Q^{\text{collab}}_i$ is not unique: indeed, we could add any constant to $Q^{\text{alone}}$ and subtract that constant from $Q^{\text{collab}}$ to yield the same overall $Q_i$. However, Eqn. 3 implies an additional constraint: if $o_i = o^{\text{alone}}_i$, then $\hat{r}_i = r_{0i}$ and $Q^{\text{collab}}(o^{\text{alone}}_i, a_i) \equiv 0$, which eliminates this ambiguity. For this reason, we add the Multi-Agent Reward Attribution (MARA) loss.
Overall Training Paradigm. For agent $i$, we use standard DQN training augmented with the MARA loss. Defining the target Q-value $y = \mathbb{E}_{s' \sim \varepsilon}[r + \gamma \max_{a'} Q_i(o', a') \mid s, a]$, the overall training objective is:
$$L = \mathbb{E}_{s_i, a_i \sim \rho(\cdot)}\Big[\underbrace{(y - Q_i(o_i, a_i))^2}_{\text{DQN objective}} + \alpha \underbrace{\big(Q^{\text{collab}}_i(o^{\text{alone}}_i, a_i)\big)^2}_{\text{MARA objective}}\Big] \qquad (5)$$
where the hyperparameter $\alpha$ controls the relative importance of the MARA objective against the DQN objective. We observe that training is much more stable with the MARA loss. We use a soft-constraint version of the MARA loss. To train multiple agents together, we follow QMIX and feed the outputs $\{Q_i\}$ into a top network, training end-to-end in a centralized fashion.
CollaQ also has an advantage over standard Q-learning: since $Q^{\text{alone}}_i$ only takes $o^{\text{alone}}_i$, whose dimension is independent of the number of agents, this term can be learned exponentially faster than $Q^{\text{collab}}_i$. As a result, CollaQ enjoys much faster learning, as shown in Fig. 5, Fig. 6, and Fig. 7.
Attention-based Architecture. Fig. 1 illustrates the overall architecture. For agent $i$, $o^{\text{alone}}_i := s_i$ is sent to the left tower to obtain $Q^{\text{alone}}$, while the local observation $o_i := s^{\text{local}}_i$ is sent to the right tower to obtain $Q^{\text{collab}}$. We use an attention architecture between $o^{\text{alone}}_i$ and the states of other agents in the field of view of agent $i$, because the observation $o_i$ can be spatially large and cover agents whose states contribute little to agent $i$'s action; the effective $s^{\text{local}}_i$ is smaller than $o_i$. Our architecture is similar to EPC (Long et al., 2020), except that we use a transformer architecture (stacking multiple attention layers).
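Below is a minimal PyTorch-style sketch of the two-tower decomposition (Eqn. 4) and the loss (Eqn. 5). The layer sizes, the single dot-product attention layer (the paper stacks several), and the convention that an all-zero row of `o_others` encodes "no visible agent" are our illustrative assumptions, not the authors' exact implementation; in the full method the per-agent outputs are additionally mixed by a QMIX-style top network.

```python
import torch
import torch.nn as nn

class CollaQAgent(nn.Module):
    """Two-tower Q-network: Q_i = Q_alone(o_alone) + Q_collab(o_i)."""
    def __init__(self, self_dim, obs_dim, n_actions, hidden=64):
        super().__init__()
        # Left tower: only sees the agent's own state o_alone.
        self.q_alone = nn.Sequential(
            nn.Linear(self_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))
        # Right tower: attention of the self state over nearby agents.
        self.q_proj = nn.Linear(self_dim, hidden)
        self.kv_proj = nn.Linear(obs_dim, hidden)
        self.q_collab = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, o_alone, o_others):
        # o_alone: (B, self_dim); o_others: (B, W, obs_dim).
        q_a = self.q_alone(o_alone)
        q = self.q_proj(o_alone).unsqueeze(1)                 # (B, 1, H)
        kv = self.kv_proj(o_others)                           # (B, W, H)
        attn = torch.softmax(q @ kv.transpose(1, 2)
                             / kv.shape[-1] ** 0.5, dim=-1)   # (B, 1, W)
        q_c = self.q_collab((attn @ kv).squeeze(1))           # (B, A)
        return q_a + q_c, q_c

def collaq_loss(agent, o_alone, o_others, action, target_q, alpha=1.0):
    """DQN TD loss plus the MARA regularizer of Eqn. 5 (sketch)."""
    q_all, _ = agent(o_alone, o_others)
    q_taken = q_all.gather(1, action.unsqueeze(1)).squeeze(1)
    td = (target_q - q_taken).pow(2).mean()
    # MARA term: Q_collab should vanish when no other agent is visible,
    # approximated here by feeding an empty (all-zero) neighborhood.
    _, q_c_empty = agent(o_alone, torch.zeros_like(o_others))
    return td + alpha * q_c_empty.pow(2).mean()
```

For shapes: with `agent = CollaQAgent(self_dim=10, obs_dim=8, n_actions=5)`, calling `agent(torch.randn(4, 10), torch.randn(4, 3, 8))` returns per-action Q-values of shape `(4, 5)`.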
As shown in the experiments (Sec. 4), this attention-based design helps improve performance in various StarCraft settings.
Intuition of CollaQ and Connection to the Theory. The intuition behind CollaQ and the MARA loss is that when an agent cannot see any others (i.e., other agents have no influence on it), its Q-value $Q_i$ should equal the individual Q-value $Q^{\text{alone}}_i$. This can be restated in two equivalent ways: (1) the problem decomposes well into local sub-problems; (2) the existence of other agents does not influence the Q-value of the agent in question. The resulting MARA loss eliminates the remaining ambiguity. The semantics of $Q^{\text{alone}}_i$ and $Q_i$ are shown in Fig. 3.
This intuition connects to the theory: Theorem 1 shows that under mild assumptions, the CollaQ objective can be viewed as a sub-optimal solution to an optimization problem over reward assignment, so each component of CollaQ and the MARA loss is well-justified. Although the problem defined in Eqn. 1 is hard to optimize, the empirical success of CollaQ demonstrates its effectiveness to some extent; the theory serves primarily as inspiration for the practical algorithm. We leave the analysis of the gap between exact optimization and CollaQ to future work." }, { "heading": "3 EXPERIMENTS ON RESOURCE COLLECTION", "text": "In this section, we demonstrate the effectiveness of CollaQ in a toy gridworld environment where the states are fully observable. We also visualize the trained policies $Q_i$ and $Q^{\text{alone}}_i$.
Ad hoc Resource Collection. We demonstrate CollaQ in a toy example where multiple agents collaboratively collect resources from a grid world to maximize the aggregated team reward. In this setup, the same type of resource can return different rewards depending on the type of agent that collects it. The reward setup is randomly initialized at the beginning of each episode and is visible to all agents.
The game ends when all resources are collected. An agent is the expert for a certain resource if it receives the highest reward in the team for collecting it. Consequently, to maximize the shared team reward, the optimal strategy is to let each expert collect the corresponding resource.
For testing, we devise the following reward setup (a code sketch of this setup follows below). The resources are apples and lemons, and there are $N$ agents. For picking a lemon, agent 1 receives the highest reward for the team, agent 2 the second highest, and so on. For apples, the reward assignment is reversed (agent $N$ gets the highest, agent $N-1$ the second highest, ...). This specific reward setup is excluded from the training environments, which makes it a very hard ad hoc team play task: at test time, the agents must demonstrate behaviors completely different from training to achieve a higher team reward.
[Displaced Figure 3 caption fragment: "... since they are both the expert for the nearest resources; in a) and c), $Q^{\text{collab}}_i$ alters the decision of collecting the lemon for the red agent, since it has a lower reward for lemons than the yellow agent, and similarly for the yellow agent."]
The left panel of Fig. 2 shows the training reward and the right panel shows ad hoc team play. We train with 5 agents in this setting. CollaQ outperforms IQL in both training and testing. In this example, random actions already work reasonably well, so any improvement over them is substantial.
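To make the test-time reward setup above concrete, the sketch below builds the reward table it induces; the numeric values are our own illustrative choices, since only the rankings are specified in the text:

```python
import numpy as np

# Test-time reward setup for N agents and two resources (Sec. 3):
# for lemons, agent 1 earns the most and agent N the least; for
# apples the ranking is reversed. Values here are illustrative.
N = 5
lemon = np.linspace(10, 2, N)               # agent 1 highest ... agent N lowest
apple = lemon[::-1]                         # reversed ranking
reward = np.stack([lemon, apple], axis=1)   # (N agents, 2 resources)
print(reward)
```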
Visualization of $Q^{\text{alone}}_i$ and $Q_i$. In Fig. 3, we visualize the trained $Q^{\text{alone}}_i$ and $Q_i$ (the overall policy of agent $i$) to show how $Q^{\text{collab}}_i$ affects the behavior of each agent. The policies $Q^{\text{alone}}_i$ and $Q_i$ learned by CollaQ are both meaningful: $Q^{\text{alone}}_i$ is the simple strategy of collecting the nearest resource (the optimal policy when the agent is the only one acting in the environment), and $Q_i$ is the optimal policy described above.
The leftmost column in Fig. 3 shows the reward setup for different agents collecting different resources (e.g., the red agent gets 4 points for collecting a lemon and 10 points for collecting an apple). The red agent specializes in collecting apples and the yellow agent in collecting lemons. In a), $Q^{\text{alone}}_i$ directs both agents to collect the nearest resource; however, neither agent is the expert for its nearest resource. Therefore, $Q^{\text{collab}}_i$ alters the decision of $Q^{\text{alone}}_i$, directing $Q_i$ towards the resources with the highest return. The same behavior is observed in c) with a different resource placement. b) shows the scenario where both agents are the experts for their nearest resources; here $Q^{\text{collab}}_i$ reinforces the decision of $Q^{\text{alone}}_i$, making $Q_i$ point to the same resource as $Q^{\text{alone}}_i$." }, { "heading": "4 EXPERIMENTS ON STARCRAFT MULTI-AGENT CHALLENGE", "text": "The StarCraft multi-agent challenge (Samvelyan et al., 2019) is a widely used benchmark for MARL evaluation. The task is to control a team of units (each unit is controlled by an agent) to defeat the team controlled by built-in AIs. While this task has been studied extensively in previous works, the performance of agents trained by SoTA methods (e.g., QMIX) deteriorates under a slight modification of the environment setup in which the agent IDs are changed. The SoTA methods severely overfit to the precise environment and thus cannot generalize well to ad hoc team play. In contrast, CollaQ shows better performance in the presence of random agent IDs, generalizes significantly better to more diverse test environments (e.g., adding/swapping/removing a unit at test time), and is more robust in ad hoc team play." }, { "heading": "4.1 ISSUES IN THE CURRENT BENCHMARK", "text": "In the default StarCraft multi-agent environment, the ID of each agent never changes. A trained agent can therefore memorize what to do based on its ID instead of figuring out the role of its unit dynamically during play. As illustrated in Fig. 4, if we randomly shuffle the agent IDs at test time, the performance of QMIX degrades substantially. In some cases (e.g., 8m_vs_9m), the win rate drops from 95% to 50%, deteriorating by more than 40%. These results show that QMIX relies on extra information (the order of agents) for generalization; consequently, the resulting agents overfit to the exact setting, making them less robust in ad hoc team play. Introducing randomly shuffled agent IDs at training time addresses this issue for QMIX, as illustrated in Fig. 4." }, { "heading": "4.2 STARCRAFT MULTI-AGENT CHALLENGE WITH RANDOM AGENT IDS", "text": "Since using random IDs facilitates the learning of different roles, we perform an extensive empirical study under this setting. We show that CollaQ outperforms existing approaches on multiple StarCraft maps. We use the hard scenarios (e.g., 27m_vs_30m, MMM2 and 2c_vs_64zg), since they are largely unsolved by previous methods; maps like 10m_vs_11m, 5m_vs_6m and 8m_vs_9m are considered medium difficulty. For completeness, we also provide a performance comparison under the regular setting in Appendix D, Fig. 10.
As shown in Fig. 5, CollaQ outperforms multiple baselines (QMIX, QTRAN, VDN, and IQL) by around 30% in win rate on multiple hard scenarios. With the attention model, the performance is even stronger.
Trained CollaQ agents demonstrate interesting behaviors. On MMM2: (1) the Medivac dropship only heals the unit under attack, and (2) damaged units move backward to avoid focused fire from the opponent, while healthy units move forward to absorb fire. In comparison, QMIX only learns (1), and it is not obvious that (2) was learned. On 2c_vs_64zg, CollaQ learns to focus fire on one side of the attack to clear one of the corridors, and also to retreat along that corridor while attacking; agents trained by QMIX do not. See Appendix D for more video snapshots." }, { "heading": "4.3 AD HOC TEAM WORK", "text": "We now demonstrate that, in addition to handling random IDs, CollaQ is robust to changes of agent configuration and/or priority at test time, i.e., ad hoc team play.
Different VIP agent. In this setting, the team receives an additional reward if the VIP agent is alive after winning the battle. The VIP agent is randomly selected from agents 1 to $N-1$ during training; at test time, agent $N$ becomes the VIP, a setup never seen in training. Fig. 6 shows the VIP survival rate at test time: CollaQ outperforms QMIX by 10%-32%. We also observe that CollaQ learns the behavior of protecting the VIP: when the team is about to win, the VIP agent is covered by other agents to avoid being attacked. Such behavior is not clearly exhibited by QMIX when the same objective is presented.
Swap / Add / Remove different units. We also test ad hoc team play in three harder settings: swapping agent types, and adding or removing one agent at test time. From Fig. 7, we can see that CollaQ generalizes better to the ad hoc test setting. Note that to deal with a changing number of agents at test time, all methods (QMIX, QTRAN, VDN, IQL, and CollaQ) are augmented with attention-based neural architectures for a fair comparison. CollaQ outperforms QMIX, the second best method, by 9.21% on swapping, 14.69% on removing, and 8.28% on adding agents." }, { "heading": "4.4 ABLATION STUDY", "text": "We further verify CollaQ with an ablation study. First, we show that CollaQ outperforms a baseline (SumTwoNets) that simply sums two networks, each taking the agent's full observation as input. SumTwoNets does not distinguish between $Q^{\text{alone}}$ (which only takes $s_i$ as input) and $Q^{\text{collab}}$ (which respects the condition $Q^{\text{collab}}(s_i, \cdot) = 0$). Second, we show that the MARA loss is indeed critical for the performance of CollaQ.
We compare our method with SumTwoNets trained with QMIX for each agent; the baseline has a parameter count similar to CollaQ's. As shown in Fig. 8, CollaQ improves win rates over SumTwoNets by 17%-47% on hard scenarios. We also study the importance of the MARA loss by removing it from CollaQ: using the MARA loss boosts performance by 14%-39% on hard scenarios, consistent with the decomposition proposed in Sec. 2.3." }, { "heading": "5 RELATED WORK", "text": "Multi-agent reinforcement learning (MARL) has been studied since the 1990s (Tan, 1993; Littman, 1994; Bu et al., 2008).
Recent progress in deep reinforcement learning has given rise to an increasing effort to design general-purpose deep MARL algorithms (including COMA (Foerster et al., 2018), MADDPG (Lowe et al., 2017), MAPPO (Berner et al., 2019), PBT (Jaderberg et al., 2019), MAAC (Iqbal and Sha, 2018), etc.) for complex multi-agent games. We utilize the Q-learning framework and consider collaborative tasks in strategic games. Other works focus on different aspects of the collaborative MARL setting, such as learning to communicate (Foerster et al., 2016; Sukhbaatar et al., 2016; Mordatch and Abbeel, 2018), robotic manipulation (Chitnis et al., 2019), traffic control (Vinitsky et al., 2018), and social dilemmas (Leibo et al., 2017).
The problem of ad hoc team play in multi-agent cooperative games was raised in the early 2000s (Bowling and McCracken, 2005; Stone et al., 2010) and has mostly been studied in the robotic soccer domain (Hausknecht et al., 2016). Most works (Barrett and Stone, 2015; Barrett et al., 2012; Chakraborty and Stone, 2013; Woodward et al., 2019) either require sophisticated online learning at test time or strong domain knowledge of possible teammates, which poses significant limitations in complex real-world situations. In contrast, our framework achieves zero-shot generalization and requires little change to the existing MARL training pipeline. Some works consider a much simplified ad hoc teamwork setting with a varying number of test-time homogeneous agents (Schwab et al., 2018; Long et al., 2020), while our method handles more general scenarios.
Previous work on generalization/robustness in MARL typically considers a competitive setting and aims to learn policies that generalize to different test-time opponents. Popular techniques include meta-learning for adaptation (Al-Shedivat et al., 2017), adversarial training (Li et al., 2019), Bayesian inference (He et al., 2016; Shen and How, 2019; Serrino et al., 2019), symmetry breaking (Hu et al., 2020), learning Nash equilibrium strategies (Lanctot et al., 2017; Brown and Sandholm, 2019), and population-based training (Vinyals et al., 2019; Long et al., 2020; Canaan et al., 2020). Population-based algorithms use ad hoc team play as a training component, with the overall goal of improving opponent generalization, whereas we consider zero-shot generalization to different teammates at test time. Our work is also related to hierarchical approaches for multi-agent collaborative tasks (Shu and Tian, 2019; Carion et al., 2019; Yang et al., 2020), which train a centralized manager to assign subtasks to individual workers and can generalize to new workers at test time. However, all these works assume known worker types or policies, which is infeasible for complex tasks; our method makes none of these assumptions and can easily be trained end-to-end.
There has also been effort on decomposing the observation space through individual networks. ASN (Wang et al., 2019) decomposes the observation space of each agent, trying to capture the semantic meaning of actions; DyAN (Wang et al., 2020) adopts a similar architecture in a curriculum domain; EPC (Long et al., 2020) uses attention between individual agents to make the network structure invariant to the number of agents. While the network structure of CollaQ shares some similarity with the works mentioned above, the semantic meaning of each component is different.
CollaQ models the interaction between agents using an alone network and an attention-based collaborative network: the former models the self-interested solution, while the latter models the influence of other agents on the agent in question.
Several papers also discuss social dilemmas in the multi-agent setting (Leibo et al., 2017; Rapoport, 1974; Van Lange et al., 2013), and several reinforcement learning works address problems such as the prisoner's dilemma (Sandholm and Crites, 1996; de Cote et al., 2006; Wunder et al., 2010). In our setting, however, all agents share the same environmental reward, so the optimal solution for all agents is to jointly optimize this shared reward. SSD (Jaques et al., 2019) gives an agent an extra intrinsic reward when its action strongly influences others; CollaQ does not use any intrinsic reward.
Lastly, our mathematical formulation is related to the credit assignment problem in RL (Sutton, 1985; Foerster et al., 2018; Nguyen et al., 2018); some reward-shaping literature also falls into this category (Devlin et al., 2014; Devlin and Kudenko, 2012). Our approach, however, does not compute any explicit reward assignment: we distill the theoretical insight into a simple yet effective learning objective." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose CollaQ, which models multi-agent RL as a dynamic reward assignment problem. We show that under certain conditions there exist decentralized policies for each agent that are approximately optimal with respect to the team goal. CollaQ then learns these policies through an end-to-end training framework, using the Q-function decomposition suggested by the theoretical analysis. CollaQ is tested on the complex, practical StarCraft Multi-Agent Challenge and surpasses the previous SoTA by 40% in win rate on various maps and by 30% in several ad hoc team play settings. We believe the idea of multi-agent reward assignment used in CollaQ can be an effective strategy for ad hoc MARL." }, { "heading": "A COLLABORATIVE Q DETAILS", "text": "We derive the gradients and provide training details for Eqn. 5.
Gradient for Training Objective. Taking derivatives with respect to $\theta^a_n$ and $\theta^c_n$ in Eqn. 5, we arrive at the following gradients:
$$\nabla_{\theta^a_n} L_n(\theta^a_n, \theta^c_n) = \mathbb{E}_{s_i, a \sim \rho(\cdot), r_i;\, s' \sim \varepsilon}\Big[\big(r + \gamma \max_{a'} Q_i(s', a', r_i; \theta^a_{n-1}, \theta^c_{n-1}) - Q_i(o_i, a, r_i; \theta^a_n, \theta^c_n)\big)\, \nabla_{\theta^a_n} Q^a_i(s_i, a, r_i; \theta^a_n)\Big] \qquad (6a)$$
$$\nabla_{\theta^c_n} L_n(\theta^a_n, \theta^c_n) = \mathbb{E}_{s_i, a \sim \rho(\cdot), r_i;\, s' \sim \varepsilon}\Big[\big(r + \gamma \max_{a'} Q_i(s', a', r_i; \theta^a_{n-1}, \theta^c_{n-1}) - Q_i(o_i, a, r_i; \theta^a_n, \theta^c_n)\big)\, \nabla_{\theta^c_n} Q^c_i(o_i, a, r_i; \theta^c_n) - \alpha\, Q^c_i(s_i, a, r_i; \theta^c_n)\, \nabla_{\theta^c_n} Q^c_i(s_i, a, r_i; \theta^c_n)\Big] \qquad (6b)$$
Soft CollaQ. In the actual implementation, we use a soft-constraint version of CollaQ: we subtract $Q^{\text{collab}}(o^{\text{alone}}_i, a_i)$ from Eqn. 4. The Q-value decomposition now becomes:
$$Q_i(o_i, a_i) = Q^{\text{alone}}_i(o^{\text{alone}}_i, a_i) + Q^{\text{collab}}_i(o_i, a_i) - Q^{\text{collab}}_i(o^{\text{alone}}_i, a_i) \qquad (7)$$
The optimization objective is kept the same as in Eqn. 5. This helps reduce variance in all settings in both resource collection and the StarCraft multi-agent challenge. We sometimes also replace $Q^{\text{collab}}(o^{\text{alone}}_i, a_i)$ in Eqn. 7 by its target to further stabilize training.
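Tying Appendix A back to code: the sketch below shows how the Soft CollaQ decomposition (Eqn. 7) could modify the forward pass of the illustrative `CollaQAgent` sketched in Sec. 2.3. As before, this is an assumption-laden illustration (the all-zero neighborhood standing in for $o^{\text{alone}}_i$, and the optional target-network copy), not the authors' implementation.

```python
import torch

def soft_collaq_q(agent, o_alone, o_others, target_agent=None):
    """Soft CollaQ (Eqn. 7): Q = Q_alone + Q_collab(o_i) - Q_collab(o_alone)."""
    q_all, _ = agent(o_alone, o_others)            # Q_alone + Q_collab(o_i)
    # The subtracted term uses an empty neighborhood; optionally take it
    # from a frozen target copy of the network to stabilize training.
    ref = target_agent if target_agent is not None else agent
    _, q_c_empty = ref(o_alone, torch.zeros_like(o_others))
    return q_all - q_c_empty
```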
" }, { "heading": "B ENVIRONMENT SETUP AND TRAINING DETAILS", "text": "Resource Collection. We set the discount factor to 0.992 and use the RMSprop optimizer with a learning rate of 4e-5. $\epsilon$-greedy exploration is used, with $\epsilon$ annealed linearly from 1.0 to 0.01 over 100k steps. We use a batch size of 128 and update the target every 10k steps. For the temperature parameter $\alpha$, we set it to 1. We run all experiments 3 times and plot the mean/std in all figures.
StarCraft Multi-Agent Challenge. We set the discount factor to 0.99 and use the RMSprop optimizer with a learning rate of 5e-4. $\epsilon$-greedy exploration is used, with $\epsilon$ annealed linearly from 1.0 to 0.05 over 50k steps. We use a batch size of 32 and update the target every 200 episodes. For the temperature parameter $\alpha$, we set it to 0.1 for 27m_vs_30m and to 1 for all other maps.
All experiments on StarCraft II use the default reward and observation settings of the SMAC benchmark. For ad hoc team play with a different VIP, an additional reward of 100 is added to the original reward of 200 for winning the game if the VIP agent is alive at the end of the episode.
For swapping agent types, we design the maps 3s1z_vs_16zg, 1s3z_vs_16zg and 2s2z_vs_16zg (s stands for stalker, z for zealot, and zg for zergling). We use the first two maps for training and the third one for testing. For adding units, we use 27m_vs_30m for training and 28m_vs_30m for testing (m stands for marine). For removing units, we use 29m_vs_30m for training and 28m_vs_30m for testing.
We run all experiments 4 times and plot the mean/std in all figures." }, { "heading": "C DETAILED RESULTS FOR RESOURCE COLLECTION", "text": "We compare CollaQ with QMIX, and CollaQ with the attention-based model, in the resource collection setting. As shown in Fig. 9, QMIX does not perform well; it is even worse than random actions. Adding the attention-based model introduces larger variance, so performance degrades by 10.66 in training but improves by 2.13 in ad hoc team play." }, { "heading": "D DETAILED RESULTS FOR STARCRAFT MULTI-AGENT CHALLENGE", "text": "We provide the win rates for CollaQ and QMIX in the environments without random agent IDs on three maps; Fig. 10 shows the results for both methods.
We show the exact win rates for all maps and settings mentioned in the StarCraft Multi-Agent Challenge. From Tab. 1, we can clearly see that CollaQ improves over the previous SoTA by a large margin.
We also examine the margin in winning scenarios, measured as the number of units that survive after winning the battle. The experiments are repeated over 128 random seeds. CollaQ surpasses QMIX by over 2 surviving units on average (Tab. 2), which is a substantial gain.
In a simple ad hoc team play setting, we assign a new VIP agent, whose survival matters, at test time. Results in Tab. 3 show that at test time the VIP agent under CollaQ has a substantially higher survival rate than under QMIX.
We also test CollaQ in a harder ad hoc team play setting: swapping/adding/removing agents at test time. Tab. 4 summarizes the results; CollaQ outperforms QMIX by a large margin.
E VIDEOS AND VISUALIZATIONS OF STARCRAFT MULTI-AGENT CHALLENGE
We extract several video frames from the replays of CollaQ's agents for better visualization, and additionally provide the full replays of QMIX and CollaQ. CollaQ's agents demonstrate interesting behaviors such as healing the agents under attack, dragging unhealthy agents back, and protecting the VIP agent (in the ad hoc team play setting with a different VIP agent). The visualizations and videos are available at https://sites.google.com/view/collaq-starcraft" }, { "heading": "F PROOF AND LEMMAS", "text": "Lemma 1. If $a_1' \ge a_1$, then $0 \le \max(a_1', a_2) - \max(a_1, a_2) \le a_1' - a_1$.
Proof. Note that $\max(a_1, a_2) = \frac{a_1 + a_2}{2} + \left|\frac{a_1 - a_2}{2}\right|$.
So we have:
$$\max(a_1', a_2) - \max(a_1, a_2) = \frac{a_1' - a_1}{2} + \left|\frac{a_1' - a_2}{2}\right| - \left|\frac{a_1 - a_2}{2}\right| \le \frac{a_1' - a_1}{2} + \left|\frac{a_1 - a_1'}{2}\right| = a_1' - a_1 \qquad (8)$$
F.1 LEMMAS
Lemma 2. For a Markov Decision Process with finite horizon $H$ and discount factor $\gamma < 1$, for all $i \in \{1, \ldots, K\}$, all $r_1, r_2 \in \mathbb{R}^M$, and all $s_i \in S_i$, we have:
$$|V_i(s_i; r_1) - V_i(s_i; r_2)| \le \sum_{x, a} \gamma^{|s_i - x|} |r_1(x, a) - r_2(x, a)| \qquad (9)$$
where $|s_i - x|$ is the number of steps needed to move from $s_i$ to $x$.
Proof. By the definition of the optimal value function $V_i$ for agent $i$, it satisfies the following Bellman equation:
$$V_i(x_h; r_i) = \max_{a_h} \left( r_i(x_h, a_h) + \gamma\, \mathbb{E}_{x_{h+1} \mid x_h, a_h}[V_i(x_{h+1})] \right) \qquad (10)$$
Note that to avoid confusion between the agents' initial states $s = \{s_1, \ldots, s_K\}$ and the reward at a state-action pair, we write the latter as $(x, a)$. For a terminal node $x_H$, which exists since the MDP has finite horizon $H$, $V_i(x_H) = r_i(x_H)$. The current state $s_i$ is at step 0 (i.e., $x_0 = s_i$).
We first consider the case where $r_1$ and $r_2$ differ only at a single state-action pair $(x_h^0, a_h^0)$ for $h \le H$. Without loss of generality, assume $r_1(x_h^0, a_h^0) > r_2(x_h^0, a_h^0)$.
By the definition of a finite-horizon MDP, $V_i(x_{h'}; r_1) = V_i(x_{h'}; r_2)$ for $h' > h$. By the property of the max function (Lemma 1), we have:
$$0 \le V_i(x_h^0; r_1) - V_i(x_h^0; r_2) \le r_1(x_h^0, a_h^0) - r_2(x_h^0, a_h^0) \qquad (11)$$
Since $p(x_h^0 \mid x_{h-1}, a_{h-1}) \le 1$ for any $(x_{h-1}, a_{h-1})$ at step $h-1$, we have:
$$0 \le \gamma \left[ \mathbb{E}_{x_h \mid x_{h-1}, a_{h-1}}[V_i(x_h; r_1)] - \mathbb{E}_{x_h \mid x_{h-1}, a_{h-1}}[V_i(x_h; r_2)] \right] \qquad (12)$$
$$\le \gamma \left[ r_1(x_h^0, a_h^0) - r_2(x_h^0, a_h^0) \right] \qquad (13)$$
Applying Lemma 1, and noting that all other rewards are unchanged, we have:
$$0 \le V_i(x_{h-1}; r_1) - V_i(x_{h-1}; r_2) \le \gamma \left[ r_1(x_h^0, a_h^0) - r_2(x_h^0, a_h^0) \right] \qquad (14)$$
Iterating this argument, we finally obtain:
$$0 \le V_i(s_i; r_1) - V_i(s_i; r_2) \le \gamma^h \left[ r_1(x_h^0, a_h^0) - r_2(x_h^0, a_h^0) \right] \qquad (15)$$
The case $r_1(x_h^0, a_h^0) < r_2(x_h^0, a_h^0)$ is analogous; therefore:
$$|V_i(s_i; r_1) - V_i(s_i; r_2)| \le \gamma^h |r_1(x_h^0, a_h^0) - r_2(x_h^0, a_h^0)| \qquad (16)$$
where $h = |x_h^0 - s_i|$ is the distance between $s_i$ and $x_h^0$. Now consider general $r_1 \ne r_2$. We can design a path $\{r_t\}$ from $r_1$ to $r_2$ such that each step changes only one distinct reward entry. Each $(x, a)$ pair then occurs at most once, and we have:
$$|V_i(s_i; r_1) - V_i(s_i; r_2)| \le \sum_t |V_i(s_i; r_{t-1}) - V_i(s_i; r_t)| \qquad (17)$$
$$\le \sum_{x, a} \gamma^{|x - s_i|} |r_1(x, a) - r_2(x, a)| \qquad (18)$$
F.2 THM. 1
First we prove the following lemma:
Lemma 3. For any reward assignment $r_i$ for agent $i$ in the optimization problem (Eqn. 1) and a local reward set $M^{\text{local}}_i \supseteq \{x : |x - s_i| \le C\}$, if we construct $\tilde{r}_i$ as follows:
$$\tilde{r}_i(x, a) = \begin{cases} r_i(x, a) & x \in M^{\text{local}}_i \\ 0 & x \notin M^{\text{local}}_i \end{cases} \qquad (19)$$
then we have:
$$|V_i(s_i; r_i) - V_i(s_i; \tilde{r}_i)| \le \gamma^C R_{\max} M \qquad (20)$$
where $M$ is the total number of sparse reward sites and $R_{\max}$ is the maximal reward that can be assigned at each reward site $x$ while satisfying the constraint $\phi(r_1(x, a), r_2(x, a), \ldots, r_K(x, a)) \le 0$.
Proof. By Lemma 2, we know that
$$|V_i(s_i; r_i) - V_i(s_i; \tilde{r}_i)| \le \sum_{x \notin M^{\text{local}}_i,\, a} \gamma^{|x - s_i|} |r_i(x, a) - \tilde{r}_i(x, a)| \qquad (21)$$
$$\le \gamma^C \sum_{x \notin M^{\text{local}}_i,\, a} |r_i(x, a)| \qquad (22)$$
$$\le \gamma^C R_{\max} M \qquad (23)$$
Note that "sparse reward site" is important here; otherwise there could be exponentially many sites $x \notin M^{\text{local}}_i$ and Eqn. 23 would become vacuous.
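As an aside, the elementary inequality of Lemma 1, which underpins Lemmas 2 and 3, can be checked numerically; this is an illustration only, not part of the proofs:

```python
import numpy as np

# Check Lemma 1: for a1' >= a1,
#   0 <= max(a1', a2) - max(a1, a2) <= a1' - a1.
rng = np.random.default_rng(0)
a1 = rng.normal(size=100_000)
a1p = a1 + rng.uniform(0.0, 5.0, size=a1.shape)   # ensures a1' >= a1
a2 = rng.normal(size=a1.shape)
gap = np.maximum(a1p, a2) - np.maximum(a1, a2)
assert np.all(gap >= 0.0)
assert np.all(gap <= (a1p - a1) + 1e-12)
print("Lemma 1 bounds hold on all samples.")
```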
Then we prove the theorem.
Proof. Given a constant $C$, for each agent $i$ we define the vicinity reward sites $B_i(C) := \{x : |x - s_i| \le C\}$.
Given agent $i$ and its local "buddies" $s^{\text{local}}_i$ (a subset of agent indices), we construct the corresponding reward site set $M^{\text{local}}_i$:
$$M^{\text{local}}_i = \bigcup_{s_j \in s^{\text{local}}_i} B_j(C) \qquad (24)$$
Define the remote agents $s^{\text{remote}}_i = s \setminus s^{\text{local}}_i$ as all agents that do not belong to $s^{\text{local}}_i$, and define the distance $D$ between $M^{\text{local}}_i$ and $s^{\text{remote}}_i$:
$$D = \min_{x \in M^{\text{local}}_i} \min_{s_j \in s^{\text{remote}}_i} |x - s_j| \qquad (25)$$
Intuitively, the larger $D$ is, the larger the distance between the relevant reward sites and the remote agents, and the tighter the bound. There is a trade-off between $C$ and $D$: the larger the vicinity, the more $M^{\text{local}}_i$ expands and the smaller $D$ becomes.
Given this setting, we construct a few reward assignments (see Fig. 11), given the current agent states $s = \{s_1, s_2, \ldots, s_K\}$. For brevity, we write $R[M, s]$ for the submatrix of $R$ restricted to reward sites $M$ and agent set $s$.
• The optimal solution $R^*$ of Eqn. 1.
• The perturbed optimal solution $\tilde{R}^*$, obtained by pushing the reward assignment of $[M^{\text{local}}_i, s^{\text{remote}}_i]$ in $R^*$ to $[M^{\text{local}}_i, s^{\text{local}}_i]$.
• From $\tilde{R}^*$, we obtain $\tilde{R}^*_0$ by setting the region $[M^{\text{remote}}_i, s^{\text{local}}_i]$ to zero.
• The local optimal solution $R^*_{\text{local}}$, which only depends on $s^{\text{local}}_i$; it is obtained by setting $[:, s^{\text{remote}}_i]$ to zero and optimizing Eqn. 1.
• From $R^*_{\text{local}}$, we obtain $R^*_{\text{local}(0)}$ by setting $[M^{\text{remote}}_i, s^{\text{local}}_i]$ to zero.
It is easy to show that all of these reward assignments are feasible solutions of Eqn. 1: if the original solution is feasible, then setting some reward assignments to zero also yields a feasible solution, by the property of the constraint $\phi$.
For simplicity, we define $J_{\text{local}}$ as the partial objective summed over $s_j \in s^{\text{local}}_i$, and similarly $J_{\text{remote}}$.
We can show the following relationship between these solutions:
$$J_{\text{remote}}(\tilde{R}^*) \ge J_{\text{remote}}(R^*) - \gamma^D R_{\max} M K \qquad (26)$$
This is because each reward-assignment move costs at most $\gamma^D R_{\max}$ by Lemma 2, and there are at most $MK$ such moves.
On the other hand, for each $s_j \in s^{\text{local}}_i$, since $M^{\text{local}}_i \supseteq B_j(C)$, Lemma 3 gives:
$$V_j(R^*_{\text{local}(0)}) \ge V_j(R^*_{\text{local}}) - \gamma^C R_{\max} M \qquad (27)$$
and similarly:
$$V_j(\tilde{R}^*_0) \ge V_j(\tilde{R}^*) - \gamma^C R_{\max} M \qquad (28)$$
Now we construct a new solution $\hat{R}$ by combining $R^*_{\text{local}(0)}[:, s^{\text{local}}_i]$ with $\tilde{R}^*_0[:, s^{\text{remote}}_i]$. This is still a feasible solution, since in both $R^*_{\text{local}(0)}$ and $\tilde{R}^*_0$ the top-right and bottom-left submatrices are zero, and its objective remains good:
$$J(\hat{R}) = J_{\text{local}}(R^*_{\text{local}(0)}) + J_{\text{remote}}(\tilde{R}^*_0) \qquad (29)$$
$$\overset{(1)}{\ge} J_{\text{local}}(R^*_{\text{local}}) - \gamma^C R_{\max} M K + J_{\text{remote}}(\tilde{R}^*_0) \qquad (30)$$
$$\overset{(2)}{\ge} J_{\text{local}}(\tilde{R}^*) + J_{\text{remote}}(\tilde{R}^*_0) - \gamma^C R_{\max} M K \qquad (31)$$
$$\overset{(3)}{\ge} J_{\text{local}}(R^*) + J_{\text{remote}}(\tilde{R}^*_0) - \gamma^C R_{\max} M K \qquad (32)$$
$$\overset{(4)}{=} J_{\text{local}}(R^*) + J_{\text{remote}}(\tilde{R}^*) - \gamma^C R_{\max} M K \qquad (33)$$
$$\overset{(5)}{\ge} J_{\text{local}}(R^*) + J_{\text{remote}}(R^*) - R_{\max} M K (\gamma^C + \gamma^D) \qquad (34)$$
$$\overset{(6)}{=} J(R^*) - R_{\max} M K (\gamma^C + \gamma^D) \qquad (35)$$
Here (1) follows from Eqn. 27; (2) from the optimality of $R^*_{\text{local}}$ (under looser constraints); (3) from the fact that $\tilde{R}^*$ is obtained by adding the rewards released from $s^{\text{remote}}_i$ to $s^{\text{local}}_i$; (4) from the fact that $\tilde{R}^*_0$ and $\tilde{R}^*$ have the same remote components; (5) from Eqn. 26; and (6) from the definitions of $J_{\text{local}}$ and $J_{\text{remote}}$.
Therefore we obtain $\hat{r}_i = [\hat{R}]_i$, which only depends on $s^{\text{local}}_i$. Moreover, the solution $\hat{R}$ is close to the optimal $R^*$, with gap $(\gamma^C + \gamma^D) R_{\max} M K$." } ]
2020
null
SP:85843d0456fb7791c3edfc1f81dec00be5abc41f
[ "The paper introduces a framework to statistically test whether a given model is individually fair or not. In particular, given a model, a distance metric over individuals, and a data point z, the authors propose an algorithm that finds a new data point z' such that z' is similar to z but their corresponding losses are different under the model -- if the model is not individually fair. They provide experimental results to show how their proposed method can detect unfairness in practice." ]
As we rely on machine learning (ML) models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g. gender and racial biases) has come to the fore of the public’s attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models against a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically-principled way: form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study.
[ { "affiliations": [], "name": "Subha Maity" }, { "affiliations": [], "name": "Songkai Xue" } ]
[ { "authors": [ "Alekh Agarwal", "Alina Beygelzimer", "Miroslav Dudík", "John Langford", "Hanna Wallach" ], "title": "A Reductions Approach to Fair Classification", "venue": "[cs],", "year": 2018 }, { "authors": [ "Rachel K.E. Bellamy", "Kuntal Dey", "Michael Hind", "Samuel C. Hoffman", "Stephanie Houde", "Kalapriya Kannan", "Pranay Lohia", "Jacquelyn Martino", "Sameep Mehta", "Aleksandra Mojsilovic", "Seema Nagar", "Karthikeyan Natesan Ramamurthy", "John Richards", "Diptikalyan Saha", "Prasanna Sattigeri", "Moninder Singh", "Kush R. Varshney", "Yunfeng Zhang" ], "title": "AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias", "venue": "[cs],", "year": 1943 }, { "authors": [ "Richard Berk", "Hoda Heidari", "Shahin Jabbari", "Michael Kearns", "Aaron Roth" ], "title": "Fairness in Criminal Justice Risk Assessments: The State of the Art", "venue": "[stat],", "year": 2017 }, { "authors": [ "Jose Blanchet", "Karthyek R.A. Murthy" ], "title": "Quantifying Distributional Model Risk via Optimal Transport", "venue": null, "year": 2016 }, { "authors": [ "Jeffrey Dastin" ], "title": "Amazon scraps secret AI recruiting tool that showed bias against women", "venue": null, "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "John Duchi", "Peter Glynn", "Hongseok Namkoong" ], "title": "Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach", "venue": "[stat],", "year": 2016 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Rich Zemel" ], "title": "Fairness Through Awareness", "venue": "[cs],", "year": 2011 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": null, "year": 2014 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nathan Srebro" ], "title": "Equality of Opportunity in Supervised Learning", "venue": "[cs],", "year": 2016 }, { "authors": [ "Tatsunori B. Hashimoto", "Megha Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness Without Demographics in Repeated Loss Minimization", "venue": null, "year": 2018 }, { "authors": [ "Michael T Heath" ], "title": "Scientific Computing: An Introductory Survey, Revised Second Edition", "venue": null, "year": 2018 }, { "authors": [ "Christina Ilvento" ], "title": "Metric Learning for Individual Fairness", "venue": null, "year": 2019 }, { "authors": [ "Matt J. Kusner", "Joshua R. 
Loftus", "Chris Russell", "Ricardo Silva" ], "title": "Counterfactual Fairness", "venue": null, "year": 2018 }, { "authors": [ "Jeff Larson", "Surya Mattu", "Lauren Kirchner", "Julia Angwin" ], "title": "How we analyzed the compas recidivism", "venue": "ProPublica", "year": 2016 }, { "authors": [ "Debarghya Mukherjee", "Mikhail Yurochkin", "Moulinath Banerjee", "Yuekai Sun" ], "title": "Two simple ways to learn individual fairness metrics from data", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ya’acov Ritov", "Yuekai Sun", "Ruofei Zhao" ], "title": "On conditional parity as a notion of non-discrimination in machine learning", "venue": null, "year": 2017 }, { "authors": [ "Alexey Romanov", "Maria De-Arteaga", "Hanna Wallach", "Jennifer Chayes", "Christian Borgs", "Alexandra Chouldechova", "Sahin Geyik", "Krishnaram Kenthapadi", "Anna Rumshisky", "Adam Tauman Kalai" ], "title": "What’s in a name? reducing bias in bios without access to protected attributes", "venue": "arXiv preprint arXiv:1904.05233,", "year": 2019 }, { "authors": [ "Neil Vigdor" ], "title": "Apple Card Investigated After Gender Discrimination Complaints", "venue": null, "year": 2019 }, { "authors": [ "Hanchen Wang", "Nina Grgic-Hlaca", "Preethi Lahoti", "Krishna P. Gummadi", "Adrian Weller" ], "title": "An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision", "venue": "[cs],", "year": 2019 }, { "authors": [ "Songkai Xue", "Mikhail Yurochkin", "Yuekai Sun" ], "title": "Auditing ML Models for Individual Bias and Unfairness", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Mikhail Yurochkin", "Amanda Bower", "Yuekai Sun" ], "title": "Training individually fair ML models with sensitive subspace robustness", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "P (Ŷ = c|Y = c" ], "title": "Group fairness LetG be the protected attribute taking values in {0, 1}. The average odds difference (AOD) (Bellamy et al., 2018) for group G", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The problem of bias in machine learning systems is at the forefront of contemporary ML research. Numerous media outlets have scrutinized machine learning systems deployed in practice for violations of basic societal equality principles (Angwin et al., 2016; Dastin, 2018; Vigdor, 2019). In response researchers developed many formal definitions of algorithmic fairness along with algorithms for enforcing these definitions in ML models (Dwork et al., 2011; Hardt et al., 2016; Berk et al., 2017; Kusner et al., 2018; Ritov et al., 2017; Yurochkin et al., 2020). Despite the flurry of ML fairness research, the basic question of assessing fairness of a given ML model in a statistically principled way remains largely unexplored.\nIn this paper we propose a statistically principled approach to assessing individual fairness (Dwork et al., 2011) of ML models. One of the main benefits of our approach is it allows the investigator to calibrate the method; i.e. it allows the investigator to prescribe a Type I error rate. Passing a test that has a guaranteed small Type I error rate is the usual standard of proof in scientific investigations because it guarantees the results are reproducible (to a certain degree). This is also highly desirable in detecting bias in ML models because it allows us to certify whether an ML model will behave fairly at test time. Our method for auditing ML models abides by this standard.\nThere are two main challenges associated with developing a hypothesis test for individual fairness. First, how to formalize the notion of individual fairness in an interpretable null hypothesis? Second, how to devise a test statistic and calibrate it so that auditors can control the Type I error rate? In this paper we propose a test motivated by the relation between individual fairness and adversarial robustness (Yurochkin et al., 2020). At a high-level, our approach consists of two parts:\n1. generating unfair examples: by unfair example we mean an example that is similar to a training example, but treated differently by the ML models. Such examples are similar to adversarial examples (Goodfellow et al., 2014), except they are only allowed to differ from a training example in certain protected or sensitive ways.\n2. summarizing the behavior of the ML model on unfair examples: We propose a loss-ratio based approach that is not only scale-free, but also interpretable. For classification problems, we propose a variation of our test based on the error rates ratio." }, { "heading": "1.1 RELATED WORK", "text": "At a high level, our approach is to use the difference between the empirical risk and the distributionally robust risk as a test statistic. The distributionally robust risk is the maximum risk of the ML model on similar training examples. Here similarity is measured by a fair metric that encodes our intuition of which inputs should be treated similarly by the ML model. We note that DRO has been extensively studied in the recent literature (Duchi et al., 2016; Blanchet & Murthy, 2016; Hashimoto et al., 2018), however outside of the fairness context with the exception of Yurochkin et al. (2020); Xue et al. (2020). Yurochkin et al. (2020) focus on training fair or robust ML models instead of auditing ML models.\nXue et al. (2020) also use the difference between the empirical and distributionally robust risks as a test statistic, but their test is only applicable to ML problems with finite feature spaces. 
This limitation severely restricts the applicability of their test. Our test, on the other hand, is suitable for ML problems with continuous feature spaces. We note that the technical exposition in Xue et al. (2020) is dependent on the finite feature space assumption; in this work we develop a novel perspective on the problem that allows us to handle continuous feature spaces." }, { "heading": "2 GRADIENT FLOW FOR FINDING UNFAIR EXAMPLES", "text": "In this section, we describe a gradient flow-based approach to finding unfair examples that form the basis of our suite of inferential tools. Imagine an auditor assessing whether an ML model is fair or not. The auditor aims to detect violations of individual fairness in the ML model. Recall Dwork et al. (2011)’s definition of individual fairness. Let X ⊂ Rd and Y ⊂ Rd be the input and output spaces respectively, and f : X → Y be an ML model to audit. The ML model f is said to be individually fair if\ndy(f(x1), f(x2)) ≤ Lfair dx(x1, x2) for all x1, x2 ∈ X (2.1)\nfor some Lipschitz constant Lfair > 0. Here dx and dy are metrics on X and Y respectively. Intuitively, an individually fair ML model treats similar samples similarly, and the fair metric dx encodes our intuition of which samples should be treated similarly. We should point out that dx(x1, x2) being small does not imply x1 and x2 are similar in all aspects. Even if dx(x1, x2) is small, x1 and x2 may differ substantially in certain attributes, e.g., protected/sensitive attributes.\nBefore moving on, we comment on the choice of the fair metric dx. This metric is picked by the auditor and reflects the auditor’s intuition about what is fair and what is unfair for the ML task at hand. It can be provided by a subject expert (this is Dwork et al. (2011)’s original recommendation) or learned from data (a recent approach advocated by Ilvento (2019); Wang et al. (2019); Mukherjee et al. (2020)). Section 4 provides details of picking a fair metric in our empirical studies.\nTo motivate our approach, we recall the distributionally robust optimization (DRO) approach to training individually fair ML models (Yurochkin et al., 2020). Let f : X → Y be an ML model and `(f(x), y) : Z → R+ be any smooth loss (e.g. cross-entropy loss). To search for differential treatment in the ML model, Yurochkin et al. (2020) solve the optimization problem\nmax P :W (P,Pn)≤ε ∫ Z `(f(x), y)dP (z), (2.2)\nwhere W is the Wasserstein distance on probability distributions on the feature space induced by the fair metric, Pn is the empirical distribution of the training data, and ε is a moving budget that ensures the adversarial examples are close to the (original) training examples in the fair metric. Formally, this search for differential treatment checks for violations of distributionally robust fairness.\nDefinition 2.1 (distributionally robust fairness (DRF) (Yurochkin et al., 2020)). An ML model h : X → Y is (ε, δ)-distributionally robustly fair (DRF) WRT the fair metric dx iff\nsupP :W (P,Pn)≤ε ∫ Z `(z, h)dP (z)− ∫ Z `(z, h)dPn(z) ≤ δ. (2.3)\nThe optimization problem (2.2) is an infinite-dimensional problem, but its dual is more tractable. Blanchet & Murthy (2016) show that the dual of (2.2) is\nmax P :W (P,Pn)≤ε EP [`(f(x), y)] = min λ≥0 {λε+ EPn [`cλ(x, y)]}, (2.4)\n`cλ(xi, yi) , max x∈X {`(f(x), yi)− λd2x(x, xi)}. (2.5)\nIn practice, since (2.5) is highly non-convex in general, auditors use gradient-based optimization algorithms to solve it and terminate the algorithm after a few iterations.
As a result, one cannot guarantee optimality of the solution. However, optimality is required to establish convergence guarantees for DRO algorithms. This issue is typically ignored in practice when developing training algorithms, e.g. as in Yurochkin et al. (2020), but it should be treated with care when considering the limiting distribution of the related quantities required to calibrate a test. We note that Xue et al. (2020) needed the discrete feature space assumption due to this concern: when the feature space is discrete, it is possible to solve (2.5) optimally by simply comparing the objective value at all points of the sample space. In this paper we adapt theory to practice, i.e. we analyze the limiting distribution of (2.5) optimized for a fixed number of gradient steps.\nThe effects of early termination can be characterized by a continuous-time approximation of the adversarial dynamics, which we call the gradient flow attack. Given a sample (x0, y0), the gradient flow attack solves a continuous-time ordinary differential equation (ODE){\nẊ(t) = ∇x{`(f(X(t)), y0)− λd2x(X(t), x0)}, X(0) = x0,\n(2.6)\nover time t ≥ 0. For fixed penalty parameter λ and stopping time T > 0, Φ : X × Y → X is the unfair map\nΦ(x0, y0) , X(T ). (2.7)\nHere the map Φ is well-defined as long as g(x) , ∇x{`(f(x), y0)− λd2x(x, x0)} is Lipschitz, i.e., ‖g(x1)− g(x2)‖2 ≤ L‖x1 − x2‖2 for some L > 0. Under this assumption, the autonomous Cauchy problem (2.6) has a unique solution and thus Φ : X × Y → X is a one-to-one function. We call Φ an unfair map because it maps samples in the data to similar areas of the sample space on which the ML model performs poorly. We note that the data in this case is an audit dataset chosen by the auditor to evaluate individual fairness of the given model. The audit data does not need to be picked carefully and could simply be an iid sample (e.g. testing data). The unfair map plays a key role because it allows us to identify areas of the sample space where the model violates individual fairness, even if the audit samples themselves reveal no such violations.\nIn the rest of the paper, we define the test statistic in terms of the unfair map instead of the optimal point of (2.5). This has two main benefits:\n1. computational tractability: evaluating the unfair map is computationally tractable because integrating initial value problems (IVPs) is a well-developed area of scientific computing (Heath, 2018). Auditors may appeal to any globally stable method for solving IVPs to evaluate the unfair map. 2. reproducibility: the non-convex nature of (2.5) means the actual output of any attempt at solving (2.5) is highly dependent on the algorithm and the initial iterate. By defining the test statistic algorithmically, we avoid ambiguity in the algorithm and initial iterate, thereby ensuring reproducibility.\nOf course, the tractability and reproducibility of the resulting statistical tests come at a cost: power. Because we are not exactly maximizing (2.5), the ability of the test statistic to detect violations of individual fairness is limited by the ability of (2.7) to find (unfair) adversarial examples." }, { "heading": "2.1 EVALUATING THE TEST STATISTIC", "text": "Solving IVPs is a well-studied problem in scientific computing, and there are many methods for doing so. For our purposes, it is possible to use any globally stable method to evaluate the unfair map. One simple method is the forward Euler method with a sufficiently small step size. Let
0 = t0 < t1 < · · · < tN = T be a partition of [0, T ], and denote the step size by ηk = tk − tk−1 for k = 1, . . . , N . Initialized at x(0) = x0, the forward Euler method repeats the iterations\nx(k) = x(k−1) + ηk · ∇x{`(f(x(k−1)), y0)− λd2x(x(k−1), x0)} (2.8)\nfor k = 1, . . . , N . The validity of this discretization of the forward Euler method is guaranteed by the following uniform bound.\nTheorem 2.2 (Global stability of forward Euler method). Consider the solution path {X(t)}0≤t≤T given by (2.6) and the sequence {x(k)}Nk=0 given by (2.8). Let the maximal step size be h = max{η1, . . . , ηN}. Suppose that ‖Jg(x)g(x)‖∞ ≤ m, where g(x) = ∇x{`(f(x), y0)− λd2x(x, x0)} and Jg is the Jacobian matrix of g. Then we have\nmax k=1,...,N ‖X(tk)− x(k)‖2 ≤ (hm√d / 2L)(eLT − 1). (2.9)\nTheorem 2.2 indicates that the global approximation error (2.9) decreases linearly with h, the maximal step size. Therefore, by taking h small enough, the value of the unfair map Φ can be approximated by x(N) with arbitrarily small error." }, { "heading": "3 TESTING INDIVIDUAL FAIRNESS OF AN ML MODEL", "text": "Although gradient flows are good ways of finding unfair examples, they do not provide an interpretable summary of the ML model outputs. In this section, we propose a loss-ratio based approach to measuring unfairness with unfair examples. Given a sample point (x0, y0) ∈ Z , the gradient flow attack (2.6) always increases the regularized loss in (2.5), that is,\n`(f(x0), y0) ≤ `(f(X(T )), y0)− λd2x(X(T ), x0) ≤ `(f(X(T )), y0). (3.1)\nTherefore the unfair map Φ : Z → X always increases the loss value of the original sample. In other words, the ratio satisfies\n`(f(Φ(x, y)), y) / `(f(x), y) ≥ 1 for all (x, y) ∈ Z. (3.2)\nRecall that the unfair map Φ moves a sample point to similar points characterized by the fair metric dx. The fair metric dx reflects the auditor’s particular concern of individual fairness, so the original sample (x, y) and the mapped sample (Φ(x, y), y) should be treated similarly. If there is no bias/unfairness in the ML model, then we expect the ratio `(f(Φ(x, y)), y)/`(f(x), y) to be close to 1. With this intuition, to test if the ML model is individually fair or not, the auditor considers the hypothesis testing problem\nH0 : EP [ `(f(Φ(x, y)), y) / `(f(x), y) ] ≤ δ versus H1 : EP [ `(f(Φ(x, y)), y) / `(f(x), y) ] > δ, (3.3)\nwhere P is the true data generating process and δ > 1 is a constant specified by the auditor. Using the ratio of losses in (3.3) has two main benefits:\n1. scale-free: multiplying the loss function by a constant factor does not change the interpretation of the null hypothesis. 2. The test is interpretable: the tolerance δ is the maximum loss differential above which we consider an ML model unfair. In applications where the loss can be interpreted as a measure of the negative impact of an ML model, there may be legal precedents on acceptable levels of differential impact. In our computational results, we set δ according to the four-fifths rule in US labor law. Please see Section 3.2 for further discussion regarding δ and interpretability.\nWe note that using the ratio of losses as the test statistic is not without its drawbacks. If the loss is small but non-zero, then the variance of the test statistic is inflated and the test loses power, but Type I error rate control is maintained."
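To make the discretisation above concrete, here is a minimal PyTorch sketch of the forward Euler attack (2.8). It is an illustration under stated assumptions, not the authors' released implementation: the names unfair_map, loss_fn, fair_dist2, lam, eta and n_steps are ours, and the sketch assumes a differentiable fair metric and an objective that evaluates to a scalar (e.g. a single audit example).

```python
import torch

def unfair_map(model, loss_fn, fair_dist2, x0, y0, lam=50.0, n_steps=100, eta=0.01):
    """Forward Euler discretisation (2.8) of the gradient flow attack (2.6).

    Ascends the penalised objective  loss(f(x), y0) - lam * d_x^2(x, x0)
    from x(0) = x0, taking n_steps steps of size eta.
    """
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        obj = loss_fn(model(x), y0) - lam * fair_dist2(x, x0)  # scalar objective
        grad, = torch.autograd.grad(obj, x)                    # gradient of the penalised loss
        x = (x + eta * grad).detach().requires_grad_(True)     # one Euler step
    return x.detach()                                          # approximates Phi(x0, y0)
```

For a batch of audit points one can sum the objective over examples; the per-example flows stay decoupled because each penalty term ties x_i only to its own starting point.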
}, { "heading": "3.1 THE AUDIT VALUE", "text": "The auditor collects a set of audit data {(xi, yi)}ni=1 and computes the empirical mean and variance of the ratio `(f(Φ(x, y)), y)/`(f(x), y),\nSn = 1\nn n∑ i=1 `(f(Φ(xi, yi)), yi) `(f(xi), yi) and V 2n = 1 n− 1 n∑ i=1 ( `(f(Φ(xi, yi)), yi) `(f(xi), yi) − Sn )2 , (3.4)\nby solving gradient flow attack (2.6). The first two empirical moments, Sn and V 2n , are sufficient for the auditor to form confidence intervals and perform hypothesis testing for the population mean EP [`(f(Φ(x, y)), y)/`(f(x), y)], the audit value. Theorem 3.1 (Asymptotic distribution). Assume that ∇x{`(f(x), y) − λd2x(x, y)} is Lipschitz in x, and `(f(Φ(x, y)), y)/`(f(x), y) has finite first and second moments. If the ML model f is independent of {(xi, yi)}ni=1, then\n√ nV −1n ( Sn − EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n]) d→ N (0, 1) (3.5)\nin distribution, as n→∞.\nThe first inferential task is to provide confidence intervals for the audit value. The two-sided equal-tailed confidence interval for the audit value EP [`(f(Φ(x, y)), y)/`(f(x), y)] with asymptotic coverage probability 1− α is\nCItwo-sided = [ Sn −\nz1−α/2√ n Vn, Sn + z1−α/2√ n Vn\n] , (3.6)\nwhere zq is the q-th quantile of normal distribution N (0, 1). Corollary 3.2 (Asymptotic coverage of two-sided confidence interval). Under the assumptions in Theorem 3.1,\nlim n→∞\nP ( EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n] ∈ [ Sn −\nz1−α/2√ n Vn, Sn + z1−α/2√ n Vn\n]) = 1− α. (3.7)\nThe second inferential task is to test restrictions on the audit value, that is, considering the hypothesis testing problem (3.3). Similar to the two-sided confidence interval (3.6), we may also have one-sided confidence interval for the audit value EP [`(f(Φ(x, y)), y)/`(f(x), y)] with asymptotic coverage probability 1− α, i.e.,\nCIone-sided = [ Sn −\nz1−α√ n Vn,∞\n) . (3.8)\nThe one-sided confidence interval (3.8) allows us to test simple restrictions of the form EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n] ≤ δ (3.9)\nby checking whether δ falls in the (1−α)-level confidence region. By the duality between confidence intervals and hypothesis tests, this test has asymptotic Type I error rate at most α. With this intuition, a valid test is:\nReject H0 if Tn > δ, Tn , Sn − z1−α√ n Vn, (3.10)\nwhere Tn is the test statistic.\nCorollary 3.3 (Asymptotic validity of test). Under the assumptions in Theorem 3.1,\n1. if EP [`(f(Φ(x, y)), y)/`(f(x), y)] ≤ δ, we have limn→∞ P(Tn > δ) ≤ α; 2. if EP [`(f(Φ(x, y)), y)/`(f(x), y)] > δ, we have limn→∞ P(Tn > δ) = 1." }, { "heading": "3.2 TEST INTERPRETABILITY, TEST TOLERANCE AND AN ALTERNATIVE FORMULATION", "text": "To utilize our test, a auditor should set a tolerance (or threshold) parameter δ. It is important that a auditor can understand and interpret the meaning of the threshold they choose. Appropriate choice of δ can vary based on the application, however here we consider a general choice motivated by the US Equal Employment Opportunity Commission’s four-fifths rule, which states “selection rate for any race, sex, or ethnic group [must be at least] four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate”.1 Rephrasing this rule in the context of the loss ratio, we consider the following: the largest permissible loss increase on an individual should be at most five-fourth (5/4) of its original loss. 
This corresponds to the null hypothesis threshold δ = 1.25.\nThe aforementioned wording of the four-fifths rule is based on the demographic parity group fairness definition; however, it can be generalized to other group fairness definitions as follows: “performance of the model on any protected group must be at least four-fifths of the best performance across groups”. Depending on what we mean by performance, we can obtain other group fairness definitions, such as accuracy parity, when auditing an ML classification system. In our test, we use the loss ratio because the loss is a general measure of the performance of an ML system. However, in the context of supervised learning, the loss is often a mathematically convenient proxy for the ultimate quantity of interest, the error rate. For classification problems, it is possible to modify our test statistic based on the ratio of error rates instead of losses (maintaining δ = 1.25 according to the five-fourths rule).\nLet `0,1 be the 0-1 loss. Naively, we could consider the mean of the ratio `0,1(f(Φ(x, y)), y)/`0,1(f(x), y) as a test statistic, but this is problematic because the ratio is not well-defined when the classifier correctly classifies x. To avoid this issue, we propose considering the ratio of means (instead of the mean of the ratio) as a test statistic. Formally, we wish to test the hypothesis\nH0 : EP [`0,1(f(Φ(x, y)), y)] / EP [`0,1(f(x), y)] ≤ δ versus H1 : EP [`0,1(f(Φ(x, y)), y)] / EP [`0,1(f(x), y)] > δ. (3.11)\nWe emphasize that the gradient flow attack is still performed with respect to a smooth loss function; we merely use the 0-1 loss function to evaluate the accuracy of the classifier on the original and adversarial examples.\nThe auditor collects a set of audit data {(xi, yi)}ni=1 and computes the ratio of empirical risks\nS̃n = (1/n) ∑ni=1 `0,1(f(Φ(xi, yi)), yi) / (1/n) ∑ni=1 `0,1(f(xi), yi) and Ṽn , (1/n) ∑ni=1 vi v>i , where vi = ( `0,1(f(Φ(xi, yi)), yi), `0,1(f(xi), yi) )>, (3.12)\nby performing the gradient flow attack (2.6). Let An and Bn be the numerator and denominator of S̃n. Parallel to the intuition of (3.10), here the proposed test is:\nReject H0 if T̃n > δ, T̃n , S̃n − (z1−α/B2n) √( A2n(Ṽn)2,2 +B2n(Ṽn)1,1 − 2AnBn(Ṽn)1,2 ) / n, (3.13)\nwhere T̃n is the test statistic. Please see Appendix C for the asymptotic normality theorem and Type I error guarantees." }, { "heading": "4 INDIVIDUAL FAIRNESS TESTING IN PRACTICE", "text": "In our experiments we first verify our methodology in simulations and then present a case study of testing individual fairness on the Adult dataset (Dua & Graff, 2017).\nAn auditor performing the testing would need to make three important choices: the fair metric dx(·, ·), the testing threshold δ to have a concrete null hypothesis, and the level of significance (the maximum allowed Type I error of hypothesis testing, i.e. the p-value cutoff) to decide whether the null (the classifier is fair) should be rejected. The fair metric can be provided by a subject expert, as considered in our simulation studies, or estimated from data using fair metric learning techniques proposed in the literature, as we do in the Adult experiment. Following the discussion in Section 3.2 we set δ = 1.25. For the significance level, the typical choice in many sciences utilizing statistical inference is 0.05, which we follow in our experiments; however, this is not a universal rule and should be adjusted in practice when needed.\n1Uniform Guidelines on Employment Selection Procedures, 29 C.F.R. §1607.4(D) (2015)."
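As a companion to the two decision rules, here is a minimal NumPy/SciPy sketch of (3.10) and (3.13); the function names are ours. For the error-rate test we plug in the empirical covariance of the pair of 0-1 losses as the delta-method variance estimate, which is our reading of Ṽn in (3.12).

```python
import numpy as np
from scipy.stats import norm

def loss_ratio_test(ratios, delta=1.25, alpha=0.05):
    """Test (3.10): reject H0 when the one-sided lower bound T_n exceeds delta."""
    r = np.asarray(ratios, dtype=float)   # r_i = loss on Phi(x_i, y_i) / loss on x_i
    n = len(r)
    S_n, V_n = r.mean(), r.std(ddof=1)
    T_n = S_n - norm.ppf(1 - alpha) * V_n / np.sqrt(n)
    return T_n, bool(T_n > delta)

def error_ratio_test(err_adv, err_orig, delta=1.25, alpha=0.05):
    """Test (3.13) on the 0-1 losses of attacked vs. original audit points."""
    a = np.asarray(err_adv, dtype=float)   # 0-1 loss on the attacked points
    b = np.asarray(err_orig, dtype=float)  # 0-1 loss on the original points
    n = len(a)
    A, B = a.mean(), b.mean()
    V = np.cov(a, b)                       # 2x2 empirical covariance of (a, b)
    se = np.sqrt(A**2 * V[1, 1] + B**2 * V[0, 0] - 2 * A * B * V[0, 1]) / (B**2 * np.sqrt(n))
    T_tilde = A / B - norm.ppf(1 - alpha) * se
    return T_tilde, bool(T_tilde > delta)
```

With delta = 1.25 and alpha = 0.05 these correspond to the settings used in Section 4.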
}, { "heading": "4.1 STUDYING TEST PROPERTIES WITH SIMULATIONS", "text": "We first investigate the ability of our test to identify an unfair classifier, explore robustness to fair metric misspecification verifying Theorem A.1, and discuss implications of the choice of null hypothesis parameter δ. We simulate a 2-dimensional binary classification dataset with two subgroups of observations that differ only in the first coordinate (we provide additional details and data visualization in Appendix E). One of the subgroups is underrepresented in the training data yielding a corresponding logistic regression classifier that overfits the majority subgroup and consequently differentiates data points (i.e. “individuals”) along both coordinates. Recall that a pair of points that only differ in the first coordinate are considered similar by the problem design, i.e. their fair distance is 0, and prediction for such points should be the same to satisfy individual fairness.\nFair metric Our expert knowledge of the problem allow to specify a fair metric d2x((x1, x2), (x ′ 1, x ′ 2)) = (x2 − x′2)2. Evidently, an individually fair logistic regression should have coefficient of the first coordinate θ1 = 0, while intercept and θ2 can be arbitrary. The more θ1 differs from 0, the larger is the individual fairness violation. In Figure 1 (left) we visualize the heatmap of the test statistic (3.10) over a grid of (θ1, θ2) values (intercept is estimated from the data for each coefficients pair). Recall that when this value exceeds δ = 1.25 our test rejects the null (fairness) hypothesis (red heatmap area). Our test well-aligns with the intuitive interpretation of the problem, i.e. test statistic increases as θ1 deviates from 0 and is independent of the θ2 value.\nMetric misspecification We also consider fair metric misspecification in the center and right heatmaps of Figure 1. Here the discounted movement direction in the metric is rotated, i.e. d2x((x1, x2), (x ′ 1, x ′ 2)) = (sin\n2 β)(x1− x′1)2 + (cos2 β)(x2− x′2)2 for β = 5◦ (center) and β = 10◦ (right). We see that the test statistic starts to reject fairness of the models with larger θ2 magnitudes due to misspecification of the metric, however it remains robust in terms of identifying θ1 = 0 as the fair model.\nNull hypothesis threshold Finally we assess the null hypothesis choice δ = 1.25. We saw that the test permits (approximately) θ1 < 1.5 — whether this causes minor or severe individual fairness violations depends on the problem at hand. A auditor that has access to an expert knowledge for defining the fair metric and desires stricter individual fairness guarantees may consider smaller values of δ. In this simulated example, we see that as δ approaches 1, the test constructed with the correct fair metric (Figure 1 left) will reject all models with θ1 6= 0, while permitting any θ2 values." }, { "heading": "4.2 REAL DATA CASE-STUDY", "text": "We present a scenario of how our test can be utilized in practice. To this end, we consider income classification problem using Adult dataset (Dua & Graff, 2017). The goal is to predict if a person is earning over $50k per year using features such as education, hours worked per week, etc. (we exclude race and gender form the predictor variables; please see Appendix F and code in the supplementary materials for data processing details).\nLearning the fair metric In lieu of an expert knowledge to define a fair metric, we utilize technique by Yurochkin et al. (2020) to learn a fair metric from data. 
They proposed a fair metric of the form d2x(x, x′) = 〈x− x′, P (x− x′)〉, where P is the projection matrix orthogonal to a “sensitive” subspace. Similar to their Adult experiment, we learn this subspace by fitting two logistic regression classifiers to predict gender and race, and taking the span of the coefficient vectors (i.e. the vectors orthogonal to the decision boundaries) as the sensitive subspace. The intuition behind this metric is that this subspace captures variation in the data pertaining to racial and gender differences. A fair metric should treat individuals that only differ in gender and/or race as similar; therefore, it assigns 0 distance to any pair of individuals that only differ by a vector in the sensitive subspace (similar to the fair metric we used in simulations, which discounts any variation along the first coordinate). Our hypothesis test is an audit procedure performed at test time, so we learn the fair metric using test data to examine the fairness of several methods that only have access to an independent train set to learn their decision function.\nResults We perform testing of 4 classifiers: a baseline NN, the group fairness Reductions (Agarwal et al., 2018) algorithm, the individual fairness SenSR (Yurochkin et al., 2020) algorithm, and a basic Project algorithm that pre-processes the data by projecting out the “sensitive” subspace. For the SenSR fair metric and Project we use training data to learn the “sensitive” subspace. All methods are trained to account for class imbalance in the data, and we report test balanced accuracy as a performance measure following prior studies of this dataset (Yurochkin et al., 2020; Romanov et al., 2019). Results of the 10 experiment repetitions are summarized in Table 1 (see Table 3 in Appendix F.6 for the standard deviations). We compare group fairness using the average odds difference (AOD) (Bellamy et al., 2018) for gender and race. The significance level for null hypothesis rejection is 0.05 and δ = 1.25 (see Appendix F and the code for details regarding the algorithms and comparison metrics).\nBaseline exhibits clear violations of both individual (Tn, T̃n ≫ 1.25 and the rejection proportion is 1) and group fairness (both AODs are far from 0). Simple projection pre-processing improved individual fairness; however, the null is still rejected in the majority of experiment repetitions (the balanced accuracy improvement is accidental). A more sophisticated individual fairness algorithm, SenSR, does perform well according to our test, with a test statistic close to 1 (the ideal value), and the test fails to reject individual fairness of SenSR every time. Lastly, we examine the trade-off between individual and group fairness. Enforcing group fairness with Reductions leads to the best AOD values; however, it worsens individual fairness (compared to the baseline) as measured by the test statistic. On the contrary, enforcing individual fairness with SenSR also improves group fairness metrics, however at the cost of the lowest balanced accuracy. We present a similar study of the COMPAS dataset in Appendix G. Results there follow the same pattern, with the exception of Reductions slightly improving individual fairness in comparison to the baseline, but still being rejected by our test in all experiment repetitions.\nThe loss-ratio test Tn and the error-rate ratio test T̃n give almost identical results. The only difference is that the loss-ratio test rejected Project in 9 out of 10 trials, while the error-rate ratio test did so in 8 out of 10 trials.\nSetting stopping time T .
Recall that the test statistic Tn depends on the number of steps T of the gradient flow attack (2.6). Corollary 3.3 guarantees Type I error control for any T , i.e. it controls the error of rejecting a fair classifier regardless of the stopping time choice. Theoretical guarantees for the Type II error, i.e. failing to reject an unfair classifier, are hard to provide in general (one needs to know the expected value of the loss ratio for a given T and a specific model). In practice, we recommend running the gradient flow attack long enough (based on the available computation budget) to guarantee a small Type II error. In our Adult experiment we set T = 500. In Figure 2 (note the log-log scale) we present an empirical study of the test statistic Tn as a function of the stopping time T . We see that our test fails to reject SenSR, the classifier we found individually fair, for any value of T , verifying our Type I error guarantees in practice. Rejection of the unfair classifiers requires sufficiently large T , supporting our recommendation for Type II error control in practice." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "We developed a suite of inferential tools for detecting and measuring individual bias in ML models. The tools require access to the gradients/parameters of the ML model, so they are most suitable for internal investigators. We hope our tools can help auditors verify individual fairness of ML models and help researchers benchmark individual fairness algorithms. Future work on learning flexible individual fairness metrics from data will expand the applicability range of our test.\nWe demonstrated the utility of our tools by using them to reveal the gender and racial biases in an income prediction model. In our experiments, we discovered that enforcing group fairness may incur individual bias. In other words, the algorithm may sacrifice individual fairness in order to preserve parity of certain metrics across groups. For example, one of the earliest methods for enforcing group fairness explicitly treated examples from the majority and minority groups differently (Hardt et al., 2016). We conjecture that even the more modern methods for enforcing group fairness could be forcibly balancing outcomes among demographic groups, leading to instances where similar individuals in different demographic groups are treated differently. The possible trade-off between individual and group fairness warrants further investigation, but is beyond the scope of this paper." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This paper is based upon work supported by the National Science Foundation (NSF) under grants no. 1830247 and 1916271. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the NSF." }, { "heading": "A ROBUSTNESS OF TEST STATISTIC TO THE CHOICE OF FAIR METRIC", "text": "Since the fair metric dx is picked by an expert’s domain knowledge or in a data-driven way, the auditor may face the issue of fair metric misspecification. Fortunately, the test statistic (3.10) is robust to small changes in the fair metric. Let d1, d2 : X ×X → R+ be two different fair metrics on X . Let Φ1,Φ2 : X × Y → X be the unfair maps induced by d1, d2.
We start by stating the following assumptions:\n(A1) ∇x{`(f(x), y)− λd21(x, y)} is L-Lipschitz in x with respect to ‖ · ‖2; (A2) supx,x′∈X ‖x− x′‖2 , D <∞; (A3) `(f(x), y) is L0-Lipschitz in x with respect to ‖ · ‖2, and lower bounded by c > 0; (A4) supx,x′∈X ‖∇xd21(x, x′)−∇xd22(x, x′)‖2 ≤ Dδd for some constant δd ≥ 0.\nAssumption A1 is always assumed for the existence and uniqueness of ODE solution. Assumption A2, that the feature space X is bounded, and the first part of Assumption A3 are standard in the DRO literature. The second part of Assumption A3 is to avoid singularity in computing loss ratio. Assumption A4 is worthy of being discussed. The constant δd in A4 characterizes the level of fair metric misspecification. Moreover, Assumption A4 is mild under Assumption A2. For example, let\nd21(x, x ′) = 〈x− x′,Σexact(x− x′)〉 and d22(x, x′) = 〈x− x′,Σmis(x− x′)〉 (A.1)\nbe the exact and misspecified fair metric respectively. Then sup\nx,x′∈X ‖∇xd21(x, x′)−∇xd22(x, x′)‖2 = sup x,x′∈X ‖2Σexact(x− x′)− 2Σmis(x− x′)‖2 (A.2)\n≤ sup x,x′∈X 2‖Σexact − Σmis‖2 · ‖x− x′‖2 (A.3)\n≤ D · 2‖Σexact − Σmis‖2. (A.4) The level of fair metric misspecification δd vanishes as long as Σmis estimates Σexact consistently. Theorem A.1 (Robustness of test statistic). Suppose that the support of data distribution P satisfies for any (x0, y0) ∈ supp(P ), the solution path of ODE (2.6) corresponding to (x0, y0) and d1 (or d2) lies in X . Under Assumptions A1 – A4, we have∣∣∣∣`(f(Φ1(x0, y0)), y0)`(f(x0), y0) − `(f(Φ2(x0, y0)), y0)`(f(x0), y0) ∣∣∣∣ ≤ √ λδd L L0De LT c (A.5) for any (x0, y0) ∈ supp(P ).\nThe first assumption in Theorem A.1 is mild since we always perform early termination in gradient flow attack.\nIn the literature, the fair metric can be learned from an additional dataset different from the training, test, and audit dataset. In this case, the constant δd, which characterizes the goodness-of-fit of the estimated fair metric to the exact fair metric, shrinks to 0 as n→∞. Then Theorem A.1 provides two key insights.\nFirst, as long as δd tends to 0, we ultimately test the same null hypothesis since∣∣∣∣EP [`(f(Φ1(x0, y0)), y0)`(f(x0), y0) ] − EP [ `(f(Φ2(x0, y0)), y0) `(f(x0), y0) ]∣∣∣∣ ≤ √ λδd L L0De LT c → 0 (A.6) as δd → 0. Second, the error of test statistic induced by the misspecification of fair metric is negligible as long as δd = o( 1n ). This is due to the fact that the fluctuations of test statistic are Op( 1√ n ), so √ δd must vanish faster than O( 1√ n ) to not affect the test statistic asymptotically." }, { "heading": "B PROOFS", "text": "" }, { "heading": "B.1 PROOF OF THEOREM IN SECTION 2", "text": "Proof of Theorem 2.2. Let X(t) = (X(1)(t), . . . , X(d)(t))>. For i = 1, . . . , d and k = 1, . . . , N , we have\nX(i)(tk) = X (i)(tk−1) + ηkẊ (i)(tk−1) + 1\n2 η2kẌ (i)(t̃ (i) k−1) (B.1)\nfor some t̃(i)k−1 ∈ [tk−1, tk]. Compactly, we have\nX(tk) = X(tk−1) + ηkẊ(tk−1) + 1\n2 η2k\n( Ẍ(1)(t̃\n(1) k−1), . . . , Ẍ (d)(t̃ (d) k−1)\n)> . (B.2)\nFor k = 1, . . . , N , we let\nTk , X(tk)−X(tk−1)\nηk − g(X(tk−1)) (B.3)\n= 1\nηk\n( X(tk)−X(tk−1)− ηkẊ(tk−1) ) . (B.4)\nNote that ‖Ẍ(t)‖∞ = ‖Jg(X(t))g(X(t))‖∞ ≤ m, (B.5)\nand ηk ≤ h, we have\n‖Tk‖2 = 1\n2 ηk ∥∥∥∥(Ẍ(1)(t̃(1)k−1), . . . , Ẍ(d)(t̃(d)k−1))>∥∥∥∥ 2\n(B.6)\n≤ 1 2 ηk √ d ∥∥∥∥(Ẍ(1)(t̃(1)k−1), . . . , Ẍ(d)(t̃(d)k−1))>∥∥∥∥ ∞\n(B.7)\n≤ hm √ d\n2 . (B.8)\nLet ek = X(tk)− x(k) for k = 1, . . . , N , we have ek = X(tk−1)− x(k−1) + ηk ( g(X(tk−1))− g(x(k−1)) ) + ηkTk (B.9)\n= ek−1 + ηk ( g(X(tk−1))− g(x(k−1)) ) + ηkTk. 
(B.10)\nSince g is L-Lipschitz, we have\n‖ek‖2 ≤ ‖ek−1‖2 + ηkL‖ek−1‖2 + ηk hm √ d\n2 . (B.11)\nThen,\n‖ek‖2 + hm √ d\n2L ≤ (1 + Lηk)\n( ‖ek−1‖2 + hm √ d\n2L\n) (B.12)\n≤ eLηk ( ‖ek−1‖2 + hm √ d\n2L\n) . (B.13)\nFor k = 1, . . . , N ,\n‖ek‖2 + hm √ d\n2L ≤ eL(η1+···+ηk)hm\n√ d\n2L ≤ eLT hm\n√ d\n2L . (B.14)\nTherefore,\nmax k=1,...,N ‖X(tk)− x(k)‖2 = max k=1,...,N\n‖ek‖ ≤ hm √ d\n2L (eLT − 1). (B.15)" }, { "heading": "B.2 PROOF OF THEOREMS AND COROLLARIES IN SECTION 3", "text": "Proof of Theorem 3.1. By central limit theorem (CLT),\n√ n ( VarP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n])− 12 ( Sn − EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n]) d→ N (0, 1) (B.16)\nSince\nV 2n p→ VarP\n[ `(f(Φ(x, y)), y)\n`(f(x), y)\n] , (B.17)\nby Slutsky’s theorem, we conclude that\n√ nV −1n ( Sn − EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n]) d→ N (0, 1). (B.18)\nProof of Corollary 3.2. By asymptotic distribution given by Theorem 3.1, P ( EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n] ∈ [ Sn −\nz1−α/2√ n Vn, Sn + z1−α/2√ n Vn\n]) (B.19)\n=P ( zα/2 ≤ √ nV −1n ( Sn − EP [ `(f(Φ(x, y)), y)\n`(f(x), y)\n]) ≤ z1−α/2 ) → 1− α (B.20)\nas n→∞. Proof of Corollary 3.3. Let τ = EP [`(f(Φ(x, y)), y)/`(f(x), y)]. By asymptotic distribution given by Theorem 3.1,\nP(Tn > δ) = 1− P ( Sn −\nz1−α√ n Vn ≤ δ\n) (B.21)\n= 1− P (√ nV −1n (Sn − τ) ≤ z1−α + √ nV −1n (δ − τ) ) (B.22)\n→ { 0, if τ < δ α, if τ = δ 1, if τ > δ\n(B.23)\nas n→∞." }, { "heading": "B.3 PROOF OF THEOREM IN APPENDIX A", "text": "Proof of Theorem A.1. For any (x0, y0) ∈ Z , let {X1(t)}0≤t≤T solve{ Ẋ1(t) = ∇x{`(f(X1(t), y0))− λd21(X1(t), x0)}, X1(0) = x0,\n(B.24)\nand {X2(t)}0≤t≤T solve{ Ẋ2(t) = ∇x{`(f(X2(t), y0))− λd22(X1(t), x0)}, X2(0) = x0.\n(B.25)\nConsider\ny(t) = ‖X1(t)−X2(t)‖22 + λD2δ\nL , (B.26)\nwe have\ny(0) = λD2δ\nL , y(t) ≥ 0, (B.27)\nand\nẏ(t) = 2〈X1(t)−X2(t), Ẋ1(t)− Ẋ2(t)〉 (B.28) ≤ 2‖X1(t)−X2(t)‖2 · ‖Ẋ1(t)− Ẋ2(t)‖2 (B.29) ≤ 2‖X1(t)−X2(t)‖2 · {L‖X1(t)−X2(t)‖2 + λDδ} (B.30)\n≤ 2L { ‖X1(t)−X2(t)‖22 + λD2δ\nL\n} (B.31)\n= 2L · y(t). (B.32)\nBy Gronwall’s inequality, y(T ) ≤ e2LT y(0), (B.33)\nthat is,\n‖X1(T )−X2(T )‖22 ≤ λD2δ\nL (e2LT − 1), (B.34)\nwhich implies ‖Φ1(x0, y0)− Φ2(x0, y0)‖2 = ‖X1(T )−X2(T )‖2 ≤ √ λδ\nL DeLT . (B.35)\nBy Assumption A3, we have∣∣∣∣`(f(Φ1(x0, y0)), y0)`(f(x0), y0) − `(f(Φ2(x0, y0)), y0)`(f(x0), y0) ∣∣∣∣ ≤ √ λδ L L0De LT c . (B.36)" }, { "heading": "C ASYMPTOTIC NORMALITY AND ASYMPTOTIC VALIDITY OF THE ERROR-RATES RATIO TEST", "text": "The auditor collects a set of audit data {(xi, yi)}ni=1 and computes the ratio of empirical risks\nS̃n = 1 n\n∑n i=1 `0,1(f(Φ(xi, yi)), yi)\n1 n ∑n i=1 `0,1(f(xi), yi)\nand Ṽn , 1\nn n∑ i=1 [ `0,1(f(Φ(xi, yi)), yi) `0,1(f(xi), yi) ] [ `0,1(f(Φ(xi, yi)), yi) `0,1(f(xi), yi) ]> (C.1)\nby performing the gradient flow attack (2.6). Let An and Bn be the numerator and denominator of S̃n.\nFirst we derive the limiting distribution of a calibrated test statistic for the error-rates ratio test. Theorem C.1 (Asymptotic distribution). Assume that∇x{`(f(x), y)− λd2x(x, y)} is Lipschitz in x. If the ML model f is independent of {(xi, yi)}ni=1, then\n√ nB2n√\nA2n(Ṽn)2,2 +B 2 n(Ṽn)1,1 − 2AnBn(Ṽn)1,2\n( S̃n −\nEP [`0,1(f(Φ(x, y)), y)] EP [`0,1(f(x), y)]\n) d→ N (0, 1)\n(C.2) in distribution, as n→∞.\nType I error rate control is formalized in the following: Corollary C.2 (Asymptotic validity of test). Under the assumptions in Theorem C.1,\n1. if EP [`0,1(f(Φ(x, y)), y)]/EP [`0,1(f(x), y)] ≤ δ, we have limn→∞ P(T̃n > δ) ≤ α; 2. if EP [`0,1(f(Φ(x, y)), y)]/EP [`0,1(f(x), y)] > δ, we have limn→∞ P(T̃n > δ) = 1.\nProof of Theorem C.1. 
By the central limit theorem (CLT),\n√n ( (An, Bn)> − (µx, µy)> ) d→ N (0,Σ), (C.3)\nfor finite µx, µy, and Σ with finite entries. Let g(x, y) = x/y; then ∇g(µx, µy) = µ−2y (µy,−µx)>. By the continuous mapping theorem, we have\n√n ( g(An, Bn)− g(µx, µy) ) d→ N (0,∇g(µx, µy)>Σ∇g(µx, µy)), (C.4)\nwhich implies\n√n ( S̃n − EP [`0,1(f(Φ(x, y)), y)] / EP [`0,1(f(x), y)] ) d→ N ( 0, (µ2xΣ2,2 + µ2yΣ1,1 − 2µxµyΣ1,2) / µ4y ), (C.5)\nor\n( √n µ2y / √(µ2xΣ2,2 + µ2yΣ1,1 − 2µxµyΣ1,2) ) ( S̃n − EP [`0,1(f(Φ(x, y)), y)] / EP [`0,1(f(x), y)] ) d→ N (0, 1). (C.6)\nSince An p→ µx, Bn p→ µy and Ṽn p→ Σ, we therefore conclude by Slutsky’s theorem that\n( √n B2n / √(A2n(Ṽn)2,2 +B2n(Ṽn)1,1 − 2AnBn(Ṽn)1,2) ) ( S̃n − EP [`0,1(f(Φ(x, y)), y)] / EP [`0,1(f(x), y)] ) d→ N (0, 1).\nProof of Corollary C.2. Let τ = EP [`0,1(f(Φ(x, y)), y)]/EP [`0,1(f(x), y)]. By the asymptotic distribution given by Theorem C.1,\nP(T̃n > δ) = 1− P( T̃n ≤ δ ) → 0 if τ < δ; α if τ = δ; 1 if τ > δ,\nas n→∞.\nD IMPLEMENTATION OF THE PROPOSED TEST\nAlgorithm 1 provides a step-by-step procedure for calculating the lower bound. For a choice of δ (the threshold for the null hypothesis, see equation 3.3), at a level of significance 0.05, we reject the null hypothesis if the lower bound is greater than δ.\nAlgorithm 1 Individual fairness testing. Input: ML model f ; loss `; data {(Xi, Yi)}ni=1; fair distance dx; regularization parameter λ; number of steps T ; and step sizes {εt}Tt=1. Require: f provides class probabilities; εt is decreasing.\nfor i = 1, . . . , n do\nInitialize X′i ← Xi\nfor t = 1, . . . , T do X′i ← X′i + εt∇{`(f(X′i), Yi)− λdx(X′i, Xi)} end for\nri ← `(f(X′i), Yi) / `(f(Xi), Yi)\nend for\nOutput: lower bound = mean(r)− (1.645/√n) ∗ std(r)" }, { "heading": "E SUPPLEMENTARY DETAILS FOR SIMULATIONS", "text": "Here we provide further details for the experiment with simulated data." }, { "heading": "E.1 DATA", "text": "We considered one group variable G with two labels. The 2-dimensional features were generated with the idea that the two groups differ in the first coordinate. We present the detailed model for generating the data:\nGi ∼ iid Bernoulli(0.1), Xi ∼ N ( (1−Gi)(−1.5, 0)> +Gi(1.5, 0)>, (0.25)2I2 ), Yi = 1{ ( (1−Gi)(−0.2,−0.01)> +Gi(0.2,−0.01)> )>Xi +N (0, 10−4) > 0 } for i = 1, . . . , 400. (E.1)\nThe data is plotted in Figure 3. As seen in the figure, the feature vectors for the two groups mainly differ in the first coordinate. So the discounted movement direction is (1, 0)>." }, { "heading": "E.2 CLASSIFIERS", "text": "For comparison purposes we consider logistic models of the form fb,w1,w2(x) = expit( b+ w1X(1) + w2X(2) ), (E.2)\nwhere expit(x) , ex/(1 + ex) and the weights are chosen as (w1, w2) ∈ {−4,−3.6, . . . , 4}2. For a given (w1, w2) the bias b is chosen as\nb(w1, w2) , arg minb∈R ∑400i=1 `(fb,w1,w2(Xi), Yi),\nwhere ` is the logistic loss." }, { "heading": "E.3 LOWER BOUND", "text": "To calculate the lower bounds we use Algorithm 1 with the following choices: the choices of ` and f are provided in the previous subsection, and the choice of fair distance is provided in Section 4. We choose regularizer λ = 100, number of steps T = 400, and step sizes εt = 0.02/t2/3.
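As a companion to Appendix E.1, here is a minimal NumPy sketch of the simulated data model (E.1); the function name, argument names, and explicit seed are ours.

```python
import numpy as np

def simulate_audit_data(n=400, p_minority=0.1, seed=0):
    """Draws (X_i, Y_i, G_i) from the model (E.1)."""
    rng = np.random.default_rng(seed)
    g = rng.binomial(1, p_minority, size=n)                      # G_i ~ Bernoulli(0.1)
    mean = np.where(g[:, None] == 1, [1.5, 0.0], [-1.5, 0.0])    # group-dependent mean
    x = mean + 0.25 * rng.standard_normal((n, 2))                # X_i ~ N(mean, 0.25^2 I_2)
    w = np.where(g[:, None] == 1, [0.2, -0.01], [-0.2, -0.01])   # group-dependent weights
    noise = 1e-2 * rng.standard_normal(n)                        # N(0, 10^-4) label noise
    y = ((x * w).sum(axis=1) + noise > 0).astype(int)
    return x, y, g
```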
}, { "heading": "F ADDITIONAL ADULT EXPERIMENT DETAILS", "text": "" }, { "heading": "F.1 DATA PREPROCESSING", "text": "The continuous features in Adult are: Age, fnlwgt, capital-gain, capital-loss, hours-per-week, and education-num. The categorical features are: work-class, education, marital-status, occupation, relationship, race, sex, native-country. The detailed descriptions can be found in Dua & Graff (2017). We remove fnlwgt, education, native-country from the features. race and sex are considered as protected attributes and they are not included in feature vectors for classification. race is treated as binary attribute: White and non-White. We remove any data-point with missing entry and end up having 45222 data-points." }, { "heading": "F.2 FAIR METRIC", "text": "To learn the sensitive subspace, we perform logistic regression for race and sex on other features, and use the weight vectors as the vectors spanning the sensitive subspace (H). The fair metric is then obtained as\nd2x(x1, x2) = ‖(I −ΠH)x1 − x2‖22." }, { "heading": "F.3 HYPERPARAMETERS AND TRAINING", "text": "For each model, 10 random train/test splits of the dataset is used, where we use 80% data for training purpose. All compared methods are adjusted to account for class imbalance during training." }, { "heading": "F.3.1 BASELINE AND PROJECT", "text": "Baseline is the obtained by fitting 2 layer fully connected neural network with 50 neurons for the hidden layer. It doesn’t enforce any fairness for the model. Project also uses similar architecture, except a pre-processing layer for projecting out sensitive subspace from features. So, Project model is a simple and naive way to enforce fairness. For both the models same parameters are involved: learning_rate for step size for Adam optimizer, batch_size for mini-batch size at training time, and num_steps for number of training steps to be performed. We present the choice of hyperparameters in Table 2" }, { "heading": "F.3.2 SENSR", "text": "Codes for SenSR (Yurochkin et al., 2020) is provided with submission with a demonstration for fitting the model, where the choice of hyperparameters are provided." }, { "heading": "F.3.3 REDUCTION", "text": "We provide codes for reduction (Agarwal et al., 2018) approach. We also provide a demonstration for fitting reduction model with the choice of hyperparameters for this experiment. The codes can also be found in https://github.com/fairlearn/fairlearn. We used Equalized Odds fairness constraint (Hardt et al., 2016) with constraints violation tolerance parameter = 0.03." }, { "heading": "F.4 LOWER BOUND AND TESTING", "text": "To calculate the lower bounds we use Algorithm 1. The loss ` is the logistic loss. Test data is provided as an input, whereas the fair metric is also learnt from the test data. For each of the models we choose regularizer λ = 50, number of steps T = 500 and step size t = 0.01." }, { "heading": "F.5 COMPARISON METRICS", "text": "Performance Let C be the set of classes. Let Y and Ŷ be the observed and predicted label for a data-point, respectively. The balanced accuracy is defined as\nBalanced Acc = 1 |C| ∑ c∈C P (Ŷ = c|Y = c)\nGroup fairness LetG be the protected attribute taking values in {0, 1}. 
The average odds difference (AOD) (Bellamy et al., 2018) for group G is defined as\nAODG = (1/2)[ (P (Ŷ = 1|Y = 1, G = 1)− P (Ŷ = 1|Y = 1, G = 0)) + (P (Ŷ = 1|Y = 0, G = 1)− P (Ŷ = 1|Y = 0, G = 0)) ]." }, { "heading": "F.6 FULL TABLE", "text": "In Table 3 we present extended results of the Adult experiment with standard deviations computed from the 10 experiment repetitions.\nG COMPAS EXPERIMENT\nIn the COMPAS recidivism prediction dataset (Larson et al., 2016) the task is to predict whether a criminal defendant would recidivate within two years. We consider race (Caucasian or not Caucasian) and sex (binary) as the sensitive attributes. The features in COMPAS are: sex, race, priors_count, age_cat=25 to 45, age_cat=Greater than 45, age_cat=Less than 25, and c_charge_degree=F. priors_count is standardized.\nAs before, we perform testing on four classifiers: a baseline NN, the group fairness Reductions (Agarwal et al., 2018) algorithm, the individual fairness SenSR (Yurochkin et al., 2020) algorithm, and a basic Project algorithm that pre-processes the data by projecting out the “sensitive” subspace. Baseline and Project have the same architecture and parameters as in the experiment with the Adult dataset. For the SenSR fair metric and Project we use training data to learn the “sensitive” subspace. Further details on the choice of parameters are provided in the code. For Reductions we used the Equalized Odds fairness constraint (Hardt et al., 2016) with constraint violation tolerance parameter ε = 0.16.\nAll methods are trained to account for class imbalance in the data and we report test balanced accuracy as a performance measure. Results of the 10 experiment repetitions are summarized in Table 4. We compare group fairness using the average odds difference (AOD) (Bellamy et al., 2018) for gender and race. The significance level for the null hypothesis rejection is 0.05 and δ = 1.25.\nBaseline exhibits clear violations of both individual (the test rejects with proportion 1) and group fairness (both AODs are large in absolute magnitude). The Reductions method achieves significant group fairness improvements, but is individually unfair. Simple pre-processing is more effective (compared to the Adult experiment) with a rejection proportion of 0.2. SenSR is the most effective, and our test fails to reject its individual fairness in all experiment repetitions. Examining the trade-off between individual and group fairness, we see that both SenSR and Reductions improve all fairness metrics in comparison to the baseline. However, the improvement of individual fairness with Reductions is marginal. SenSR provides a sizeable improvement in gender AOD, but only a marginal improvement in race AOD." } ]
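Both comparison metrics are straightforward to compute from held-out predictions; here is a short sketch (names ours), assuming binary labels, predictions, and group indicators stored as NumPy arrays:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of the per-class accuracies P(Yhat = c | Y = c)."""
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)])

def average_odds_difference(y_true, y_pred, g):
    """AOD: average gap in TPR and FPR between groups g = 1 and g = 0."""
    def rate(y, grp):  # P(Yhat = 1 | Y = y, G = grp)
        mask = (y_true == y) & (g == grp)
        return np.mean(y_pred[mask] == 1)
    return 0.5 * ((rate(1, 1) - rate(1, 0)) + (rate(0, 1) - rate(0, 0)))
```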
2021
null
SP:e7bd23e8d01a469909890d06581882da634a3e0f
[ "This paper proposes a simple model-free method to estimate the generalization performance of deep neural architectures based on their early training losses. The proposed method uses the sum of training losses during training to estimate the performance and is motivated by recent empirical and theoretical results. The experimental results show that the proposed estimator outperforms the existing methods that predict the performance ranking among architectures." ]
Reliable yet efficient evaluation of the generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS). Traditional approaches face a variety of limitations: training each architecture to completion is prohibitively expensive, early stopping estimates may correlate poorly with fully trained performance, and model-based estimators require large training sets. Instead, motivated by recent results linking training speed and generalisation under stochastic gradient descent, we propose to estimate the final test performance based on the sum of training losses. Our estimator is inspired by the marginal likelihood, which is used for Bayesian model selection. Our model-free estimator is simple, efficient, and cheap to implement, and does not require hyperparameter tuning or surrogate training before deployment. We demonstrate empirically that our estimator consistently outperforms other baselines under various settings and can achieve a rank correlation of 0.95 with final test accuracy on the NAS-Bench201 dataset within 50 epochs.
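For intuition, here is a minimal sketch of the sum-of-training-losses score this abstract describes; the function and argument names are ours, and the windowed variant is only our reading of the SoTL-E estimator mentioned in the references below.

```python
import numpy as np

def sotl_score(train_losses, last_k=None):
    """Sum of training losses observed so far; a lower score predicts better generalisation.

    train_losses: per-epoch (or per-minibatch) training losses of one architecture.
    last_k: if given, sum only the most recent k entries (an SoTL-E-style window; assumption).
    """
    losses = np.asarray(train_losses, dtype=float)
    return losses[-last_k:].sum() if last_k else losses.sum()

# Illustrative use: rank candidate architectures after a short training budget.
# scores = {arch: sotl_score(history[arch], last_k=1) for arch in history}
# best = min(scores, key=scores.get)
```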
[]
[ { "authors": [ "Bowen Baker", "Otkrist Gupta", "Ramesh Raskar", "Nikhil Naik" ], "title": "Accelerating neural architecture search using performance prediction", "venue": "arXiv preprint arXiv:1705.10823,", "year": 2017 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of machine learning research,", "year": 2012 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2011 }, { "authors": [ "Yuan Cao", "Quanquan Gu" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Tobias Domhan", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "arXiv preprint arXiv:1703.11008,", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": null, "year": 2018 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "BOHB: Robust and efficient hyperparameter optimization at scale", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Simon Bartels", "Philipp Hennig", "Frank Hutter" ], "title": "Fast Bayesian optimization of machine learning hyperparameters on large datasets", "venue": null, "year": 2016 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Learning curve prediction with bayesian neural networks. 
2016b", "venue": null, "year": 2016 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": null, "year": 2019 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": null, "year": 2016 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Tongliang Liu", "Gábor Lugosi", "Gergely Neu", "Dacheng Tao" ], "title": "Algorithmic stability and hypothesis complexity", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Clare Lyle", "Lisa Schut", "Binxin Ru", "Mark van der Wilk", "Yarin Gal" ], "title": "A Bayesian perspective on training speed and model selection", "venue": "Thirty-fourth Conference on Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "David JC MacKay" ], "title": "Bayesian methods for adaptive models", "venue": "PhD thesis, California Institute of Technology,", "year": 1992 }, { "authors": [ "David A McAllester" ], "title": "Pac-bayesian model averaging", "venue": "In Proceedings of the twelfth annual conference on Computational learning theory, pp", "year": 1999 }, { "authors": [ "Jishnu Mukhoti", "Viveka Kulharia", "Amartya Sanyal", "Stuart Golodetz", "Philip HS Torr", "Puneet K Dokania" ], "title": "Calibrating deep neural networks using focal loss", "venue": "arXiv preprint arXiv:2002.09437,", "year": 2020 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "Sixth Indian Conference on Computer Vision, Graphics & Image Processing,", "year": 2008 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc V Le", "Alexey Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Albert Shaw", "Wei Wei", "Weiyang Liu", "Le Song", "Bo Dai" ], "title": "Meta architecture search", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", 
"venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": null, "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: Stochastic neural architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "Pcdarts: Partial channel connections for memory-efficient differentiable architecture", "venue": null, "year": 1907 }, { "authors": [ "Antoine Yang", "Pedro M. Esperança", "Fabio M. Carlucci" ], "title": "NAS evaluation is frustratingly hard", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "NASBench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kaicheng Yu", "Christian Sciuto", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Arber Zela", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Towards automated deep learning: Efficient joint neural architecture and hyperparameter search", "venue": "arXiv preprint arXiv:1807.06906,", "year": 2018 }, { "authors": [ "Arber Zela", "Julien Siems", "Frank Hutter" ], "title": "Nas-bench-1shot1: Benchmarking and dissecting oneshot neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven CH Hoi" ], "title": "Theory-inspired path-regularized differential network architecture search", "venue": "arXiv preprint arXiv:2006.16537,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Tan", "Le" ], "title": "2019) on top of the conventional training and validation loss/accuracies", "venue": null, "year": 2019 }, { "authors": [ "Val Acc" ], "title": "Random Search (Bergstra & Bengio, 2012) is a very simple yet competitive NAS search strategy (Dong & Yang, 2020). We also combined our estimator, SoTL-E, at training epoch T = 50 with Random Search to perform NAS. 
We compare it against the baselines using the final validation accuracy at T = 200, denoted as Val Acc (T=200), and the early-stop validation accuracy at T = 50", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reliably estimating the generalisation performance of a proposed architecture is crucial to the success of Neural Architecture Search (NAS) but has always been a major bottleneck in NAS algorithms (Elsken et al., 2018). The traditional approach of training each architecture for a large number of epochs and evaluating it on validation data (full evaluation) provides a reliable performance measure, but requires prohibitively high computational resources on the order of thousands of GPU days (Zoph & Le, 2017; Real et al., 2017; Zoph et al., 2018; Real et al., 2019; Elsken et al., 2018). This motivates the development of methods for speeding up performance estimation to make NAS practical for limited computing budgets. A popular simple approach is early-stopping which offers a low-fidelity approximation of generalisation performance by training for fewer epochs (Li et al., 2016; Falkner et al., 2018; Li & Talwalkar, 2019). However, if we stop the training early at a small number of epochs and evaluate the model on validation data, the relative performance ranking may not correlate well with the performance ranking of the full evaluation (Zela et al., 2018). Another line of work focuses on learning curve extrapolation (Domhan et al., 2015; Klein et al., 2016b; Baker et al., 2017), which trains a surrogate model to predict the final generalisation performance based on the initial learning curve and/or meta-features of the architecture. However, the training of the surrogate often requires hundreds of fully evaluated architectures to achieve satisfactory extrapolation performance and the hyper-parameters of the surrogate also need to be optimised. Alternatively, the idea of weight sharing is adopted in one-shot NAS methods to speed up evaluation (Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b). Despite leading to significant cost-saving, weight sharing heavily underestimates the true performance of good architectures and is unreliable in predicting the relative ranking among architectures (Yang et al., 2020; Yu et al., 2020).\nIn view of the above limitations, we propose a simple model-free method which provides a reliable yet computationally cheap estimation of the generalisation performance ranking of architectures: the Sum over Training Losses (SoTL). Our method harnesses the training losses of the commonly-used SGD optimiser during training, and is motivated by recent empirical and theoretical results linking training speed and generalisation (Hardt et al., 2016; Lyle et al., 2020). We ground our method in the Bayesian update setting, where we show that the SoTL estimator computes a lower bound to the\nmodel evidence, a quantity with sound theoretical justification for model selection (MacKay, 1992). We show empirically that our estimator can outperform a number of strong existing approaches to predict the relative performance ranking among architectures, while speeding up different NAS approaches significantly." }, { "heading": "2 METHOD", "text": "We propose a simple metric that estimates the generalisation performance of a deep neural network model via the Sum of its Training Losses (SoTL). After training a deep neural network whose prediction is fθ(·) for T epochs1, we sum the training losses collected so far:\nSoTL = T∑ t=1\n[ 1\nB B∑ i=1 l ( fθt,i(Xi),yi\n)] (1)\nwhere l is the training loss of a mini-batch (Xi,yi) at epoch t and B is the number of training steps within an epoch. 
If we use the first few epochs as the burn-in phase for θt,i to converge to a certain distribution P(θ) and start the sum from epoch t = T − E + 1 instead of t = 1, we obtain a variant, SoTL-E. In the case where E = 1, we start the sum at t = T and our estimator corresponds to the sum over training losses within epoch t = T. We discuss SoTL’s theoretical interpretation based on Bayesian marginal likelihood and training speed in Section 3, and empirically show that SoTL, despite its simple form, can reliably estimate the generalisation performance of neural architectures in Section 5.\nIf the sum over training losses is a useful indicator for the generalisation performance, one might expect the sum over validation losses to be a similarly effective performance estimator. The sum over validation losses (SoVL) lacks the link to the Bayesian model evidence, and so its theoretical motivation is different from our SoTL. Instead, the validation loss sum can be viewed as performing a bias-variance trade-off; the parameters at epoch t can be viewed as a potentially high-variance sample from a noisy SGD trajectory, and so summation reduces the resulting variance in the validation loss estimate at the expense of incorporating some bias due to the relative ranking of models’ test performance changing during training. We show in Section 5 that SoTL clearly outperforms SoVL in estimating the true test performance." }, { "heading": "3 THEORETICAL MOTIVATION", "text": "The SoTL metric is a direct measure of training speed and draws inspiration from two lines of work: the first is a Bayesian perspective that connects training speed with the marginal likelihood in the model selection setting, and the second is the link between training speed and generalisation (Hardt et al., 2016). In this section, we will summarize recent results that demonstrate the connection between SoTL and generalisation, and further show that in Bayesian updating regimes, the SoTL metric corresponds to an estimate of a lower bound on the model’s marginal likelihood, under certain assumptions." }, { "heading": "3.1 TRAINING SPEED AND THE MARGINAL LIKELIHOOD", "text": "We motivate the SoTL estimator by a connection to the model evidence, also called the marginal likelihood, which is the basis for Bayesian model selection. The model evidence quantifies how likely a dataset D is to have been generated by a model, and so can be used to update a prior belief distribution over which model from a given set is most likely to have generated D. Given a model with parameters θ, prior π(θ), and likelihood P(D|θ) for a training data set D = {D1, . . . , Dn} with data points Di = (xi, yi), the (log) marginal likelihood is expressed as follows:\n$$\log P(D) = \log \mathbb{E}_{\pi(\theta)}[P(D \mid \theta)] \iff \log P(D) = \sum_{i=1}^{n} \log P(D_i \mid D_{<i}) = \sum_{i=1}^{n} \log \left[ \mathbb{E}_{P(\theta \mid D_{<i})}[P(D_i \mid \theta)] \right]$$\nInterpreting the negative log posterior predictive probability −log P(Di|D<i) of each data point as a ‘loss’ function, the log evidence then corresponds to the area under a training loss curve, where each training step would be computed by sampling a data point Di, taking the log expected likelihood under the current posterior P(θ|D<i) as the current loss, and then updating the posterior by incorporating the new sampled data point: D<i+1 := D<i ∪ {Di}. One can therefore interpret the marginal likelihood as a measure of training speed in a Bayesian updating procedure. 
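The Bayesian-updating view above can be made explicit in a conjugate model where the posterior predictive is available in closed form. Below is a minimal sketch, under the assumption of Bayesian linear regression with a Gaussian prior and known noise variance, that accumulates log P(Di|D<i) through sequential posterior updates, so the returned value is the log evidence computed as an area under a prequential "training loss" curve:

```python
# Minimal sketch (ours): prequential log evidence for conjugate Bayesian
# linear regression, accumulating log P(D_i | D_<i) over sequential updates.
import numpy as np
from scipy.stats import norm

def prequential_log_evidence(Phi, y, prior_var=1.0, noise_var=0.1):
    d = Phi.shape[1]
    S = prior_var * np.eye(d)   # posterior covariance over weights (starts at prior)
    m = np.zeros(d)             # posterior mean over weights (starts at prior)
    log_evidence = 0.0
    for phi, yi in zip(Phi, y):
        # The posterior predictive of the next point is Gaussian in this model.
        pred_mean = phi @ m
        pred_var = phi @ S @ phi + noise_var
        log_evidence += norm.logpdf(yi, loc=pred_mean, scale=np.sqrt(pred_var))
        # Rank-one Bayesian update: D_<i+1 := D_<i U {D_i}.
        k = S @ phi / pred_var
        m = m + k * (yi - pred_mean)
        S = S - np.outer(k, phi @ S)
    return log_evidence
```

Comparing this quantity for two different feature embeddings (as in Appendix B) would reproduce the kind of model ranking the marginal likelihood provides.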
In the setting where we cannot compute the posterior analytically and only samples θ̂ from the posterior over parameters are available, we obtain an unbiased estimator of a lower bound L(D) = \sum_{i=1}^{n} \mathbb{E}_{P(\theta \mid D_{<i})}[\log P(D_i \mid \theta)] on the marginal likelihood by Jensen’s inequality, which again corresponds to minimizing a sum over training losses:\n$$\sum_{i=1}^{n} \log P(D_i \mid \hat{\theta}_i) \approx \sum_{i=1}^{n} \mathbb{E}_{P(\theta \mid D_{<i})}[\log P(D_i \mid \theta)] \leq \sum_{i=1}^{n} \log \left[ \mathbb{E}_{P(\theta \mid D_{<i})}[P(D_i \mid \theta)] \right] = \log P(D)$$\nwith ≈ denoting equality in expectation. A full analysis of the Bayesian setting is outside the scope of this work. We refer the reader to (Lyle et al., 2020) for more details of the properties of this estimator in Bayesian models. Although the NAS setting does not yield the same interpretation of SoTL as model evidence estimation, we argue that the SoTL metric is still plausibly useful for model selection. Just as the marginal likelihood measures the utility of updates based on early data points in predicting later data points, the SoTL of a model trained with SGD will be lower for models whose mini-batch gradient descent updates improve the loss of later mini-batches seen during optimisation. We refer the reader to Appendix B for a demonstration of the SoTL metric in the Bayesian linear regression setting. We emphasize that the Bayesian connection thus justifies the sum over training losses as a tool for model selection, but not the training loss from a single parameter update." }, { "heading": "3.2 TRAINING SPEED AND GENERALISATION", "text": "Independent of the accuracy of SoTL in estimating the Bayesian model evidence, it is also possible to motivate our method by its relationship with training speed: models which achieve low training loss quickly will have low SoTL. There are both empirical and theoretical lines of work that illustrate a deep connection between training speed and generalisation. On the theoretical front, we find that models which train quickly can attain lower generalisation bounds. Training speed and generalisation can be related via stability-based generalisation bounds (Hardt et al., 2016; Liu et al., 2017), which characterize the dependence of the solution found by a learning algorithm on its training data. In networks of sufficient width, (Arora et al., 2019) propose a neural-tangent-kernel-based data complexity measure which bounds both the convergence rate of SGD and the generalisation error of the model obtained by optimisation. A similar generalisation bound and complexity measure is obtained by (Cao & Gu, 2019).\nWhile theoretical work has largely focused on ranking bounds on the test error, current results do not provide guarantees on consistency between the ranking of different models’ test set performance and their generalisation bounds. The empirical work of (Jiang* et al., 2020) demonstrates that many complexity measures are uncorrelated or negatively correlated with the relative performance of models on their test data; notably, however, a particular measure of training speed, the number of steps required to reach a cross-entropy loss of 0.1, was highly correlated with the test set performance ranking of different models. The connection between training speed and generalisation is also observed by (Zhang et al., 2016), who find that models trained on true labels converge faster than models trained on random labels, and attain better generalisation performance." }, { "heading": "4 RELATED WORK", "text": "Various approaches have been developed to speed up architecture performance estimation, thus improving the efficiency of NAS. 
Low-fidelity estimation methods accelerate NAS by using the validation accuracy obtained after training architectures for fewer epochs (namely early-stopping) (Li et al., 2016; Falkner et al., 2018; Zoph et al., 2018; Zela et al., 2018), training a down-scaled model with fewer cells during the search phase (Zoph et al., 2018; Real et al., 2019) or training on a subset of the data (Klein et al., 2016a). However, low-fidelity estimates underestimate the true performance of the architecture and can change the relative ranking among architectures (Elsken et al., 2018). This undesirable effect on relative ranking is more prominent when the cheap approximation set-up is too dissimilar to the full evaluation (Zela et al., 2018). As shown in our Fig. 2 below, the validation accuracy at early epochs of training suffers from low rank correlation with the final test performance.\nAnother way to cheaply estimate architecture performance is to train a regression model to extrapolate the learning curve from what is observed in the initial phase of training. Regression model choices that have been explored include Gaussian processes with a tailored kernel function (Domhan et al., 2015), an ensemble of parametric functions (Domhan et al., 2015), a Bayesian neural network (Klein et al., 2016b) and more recently a ν-support vector machine regressor (ν-SVR) (Baker et al., 2017) which achieves state-of-the-art prediction performance. Although these model-based methods can often predict the performance ranking better than their model-free early-stopping counterparts, they require a relatively large amount of fully evaluated architecture data (e.g. 100 fully evaluated architectures in (Baker et al., 2017)) to train the regression surrogate properly and optimise the model hyperparameters in order to achieve good prediction performance. The high computational cost of collecting the training set makes such model-based methods less favourable for NAS unless the practitioner has already evaluated hundreds of architectures on the target task. Moreover, both low-fidelity estimates and learning curve extrapolation estimators are empirically developed and lack theoretical motivation.\nFinally, one-shot NAS methods employ weight sharing to reduce computational costs (Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b). Under the one-shot setting, all architectures are considered as subgraphs of a supergraph. Only the weights of the supergraph are trained, while the architectures (subgraphs) inherit the corresponding weights from the supergraph. Weight sharing removes the need for retraining each architecture during the search and thus achieves a significant speed-up. However, the weight-sharing ranking among architectures often correlates very poorly with the true performance ranking (Yang et al., 2020; Yu et al., 2020; Zela et al., 2020), meaning architectures chosen by one-shot NAS are likely to be sub-optimal when evaluated independently (Zela et al., 2020). Moreover, one-shot methods are often outperformed by sample-based NAS methods (Dong & Yang, 2020; Zela et al., 2020).\nApart from the above-mentioned performance estimators used in NAS, many complexity measures have been proposed to analyse the generalisation performance of deep neural networks. (Jiang* et al., 2020) provides a rigorous empirical analysis of over 40 such measures. 
This investigation finds that sharpness-based measures (McAllester, 1999; Keskar et al., 2016; Neyshabur et al., 2017; Dziugaite & Roy, 2017) (including PAC-Bayesian bounds) provide good correlation with test set performance, but their estimation requires adding randomly generated perturbations to the network parameters, and the magnitude of the perturbations needs to be carefully optimised with additional training, making them unsuitable performance estimators for NAS. Optimisation-based complexity measures also perform well in predicting generalisation. Specifically, the number of steps required to reach a loss of 0.1, as mentioned in Section 3.2, is closely related to our approach as both quantities measure the training speed of architectures. To our knowledge though, this measure has never been used in the NAS context before." }, { "heading": "5 EXPERIMENTS", "text": "In this section we compare the following measures. Note T denotes the intermediate training epoch, which is smaller than the final epoch number Tend > T: Our proposed estimator Sum of training losses over all preceding epochs (SoTL), which sums the training losses of an architecture from epoch t = 0 to the current epoch t = T, and its variant Sum of training losses over the most recent E epochs (SoTL-E), which uses the sum of the training losses from epoch t = T − E + 1 to t = T. Sum of validation losses over all preceding epochs (SoVL) computes the sum of the validation losses of a neural architecture from epoch t = 0 to the current epoch t = T. Validation accuracy at an early epoch (VAccES) corresponds to the early-stopping practice whereby the user assumes the validation accuracy of an architecture at an early epoch t = T < Tend is a good estimator of its final test performance at epoch t = Tend. The Learning curve extrapolation (LcSVR) method is the state-of-the-art extrapolation method proposed in (Baker et al., 2017), which uses a trained ν-SVR to predict the final validation accuracy of an architecture. The inputs for the SVR regression model comprise architecture meta-features (e.g. number of parameters and depth of the architecture), training hyperparameters (e.g. initial learning rate, mini-batch size and weight decay), and learning curve features up to epoch t = T (e.g. the validation accuracies up to epoch t = T, the 1st-order and 2nd-order differences of the validation curve up to epoch t = T). In our experiments, we train the SVR on data of 200 randomly sampled architectures and, following the practice in (Baker et al., 2017), we optimise the SVR hyperparameters via random search using 3-fold cross-validation. We also compare against two baselines on the DARTS search space: the training losses at each mini-batch (TLmini) and the variant of VAccES, VAccES(EMA), whereby the exponential moving average of the weights (Tan & Le, 2019) is used during validation to improve validation accuracy.\nThe datasets we used to compare these performance estimators are:\n• NASBench-201 (Dong & Yang, 2020): the dataset contains information of 15,625 different neural architectures, each of which is trained with SGD optimiser for 200 epochs (Tend = 200) and evaluated on 3 different datasets: CIFAR10, CIFAR100, IMAGENET-16-120. 
The NASBench-201 datasets can be used to benchmark almost all up-to-date NAS search strategies.\n• RandWiredNN: we produced this dataset by generating 552 randomly wired neural architectures from the random graph generators proposed in (Xie et al., 2019a) and evaluating the architecture performance on the FLOWERS102 dataset (Nilsback & Zisserman, 2008). We explored 69 sets of hyperparameter values for the random graph generators and, for each set of hyperparameter values, we sampled 8 randomly wired neural networks from the generator. All the architectures are trained with SGD optimiser for 250 epochs (Tend = 250). This dataset allows us to evaluate the performance of our simple estimator on model selection for the random graph generator in Section 5.3.\n• DARTS: we produce this dataset by randomly sampling 100 architectures from the search space used in DARTS (Liu et al., 2019) and evaluating them on CIFAR10. This search space is more general than that of NASBench-201 and widely adopted in NAS (Zoph et al., 2018; Liu et al., 2019; Chen et al., 2019; Xie et al., 2019b; Xu et al., 2019; Real et al., 2019; Li & Talwalkar, 2020; Pham et al., 2018; Shaw et al., 2019; Zhou et al., 2020). We experiment with different evaluation set-ups in Section 5.2 and use this dataset to assess the stability/robustness of our estimator as well as to make comparisons to TLmini and VAcc(EMA).\nMore details on the three datasets are provided in Appendix A. In NAS, the relative performance ranking among different models matters more than the exact test performance of the models. Thus, we evaluate different performance estimators by comparing their rank correlation with the model’s true/final test accuracy. We adopt Spearman’s rank correlation following (Ying et al., 2019; Dong & Yang, 2020). We flip the sign of SoTL/SoTL-E/SoVL/TLmini (which we want to minimise) to compare to the Spearman’s rank correlation of the other methods (which we want to maximise). We test different summation window sizes in Appendix C and find E = 1 consistently gives the best results. Thus, we set E = 1 as the default choice for our SoTL-E estimator in the following experiments. Note SoTL-E with E = 1 corresponds to the sum of training losses over all the batches in one single epoch. All experiments were conducted on a 36-core 2.3GHz Intel Xeon processor with 512 GB RAM." }, { "heading": "5.1 TRAINING LOSS VS VALIDATION LOSS", "text": "We perform a simple sanity check against the validation loss on NASBench-201 datasets. Specifically, we compare our proposed estimators, SoTL and SoTL-E, against two equivalent variants of validation loss-based estimators: SoVL and Sum of validation losses over the most recent epoch (SoVL-E with E = 1). For each image dataset, we randomly sample 5000 different neural network architectures from the search space and compute the rank correlation between the true test accuracies (at T = 200) of these architectures and their corresponding SoTL/SoTL-E as well as SoVL/SoVL-E up to epoch T. The results in Fig. 1 show that our proposed estimators SoTL and SoTL-E clearly outperform their validation counterparts.\nAnother intriguing observation is that the rank correlation performance of SoVL-E drops significantly in the later phase of the training (after around 100 epochs for CIFAR10 and 150 epochs for CIFAR100) and the final test loss, TestL (T=200), also correlates poorly with final test accuracy. 
This implies that the validation/test losses can become an unreliable indicator for the validation/test accuracy on certain datasets; as training proceeds, the validation accuracy keeps improving but the validation losses could stagnate at a relatively high level or even start to rise (Mukhoti et al., 2020; Soudry et al., 2018). This is because, while the neural network can make more correct classifications on validation points (which depend on the argmax of the logits) over the training epochs, it also gets more and more confident on the correctly classified training data, and thus the weight norm and the maximum of the logits keep increasing. This can make the network overconfident on the misclassified validation data and cause the corresponding validation loss to rise, thus offsetting or even outweighing the gain due to improved prediction performance (Soudry et al., 2018). Training loss won’t suffer from this problem (Appendix D). While SoTL-E struggles to distinguish architectures once their training losses have converged to approximately zero, this contributes to a much smaller drop in estimation performance of SoTL-E compared to that of SoVL-E and only happens near the very late phase of training (after 150 epochs), which will hardly be reached if we want efficient NAS using as few training epochs as possible. Therefore, the possibility of network overconfidence under misclassification is another reason for our use of training losses instead of the validation losses." }, { "heading": "5.2 COMPARISON AGAINST OTHER BASELINES", "text": "We now compare our estimators SoTL and SoTL-E against the other baselines mentioned at the start of Section 5. The results on both NASBench-201 and RandWiredNN datasets are shown in Fig. 2. Our proposed estimator SoTL-E, despite its simple form and cheap computation, outperforms all other methods under evaluation for T < 100 for all architecture/image datasets. Although the validation accuracy (VAccES) at T ≥ 150 can reach similar rank correlation, this is less interesting for applications like NAS where we want to speed up the evaluation as much as possible and thus use as few training epochs as possible. The learning curve extrapolation method, LcSVR, is competitive. However, the method requires data from hundreds of fully trained architectures (we follow (Baker et al., 2017) and train the SVR on 200 architectures) to train the regression surrogate. Lots of computational resources are needed to obtain such training data.\nWe further verify the robustness of our estimator across the different training set-ups adopted in (Liu et al., 2019) on the DARTS dataset. Specifically, we evaluated on architectures of different sizes (8 cells and 20 cells) as well as different training set-ups (initial learning rate, learning rate scheduler and batch size). The results in Fig. 3 show that our estimator again outperforms the competing methods. Note here the curve of TLmini corresponds to the average rank correlation with final test accuracy achieved by the mini-batch training loss over the epoch. The clear performance gain of our SoTL estimator over TLmini supports our claim that it is the sum of training losses, which carries the theoretical interpretation explained in Section 3, instead of the training loss at a single mini-batch, that serves as a good estimator of generalisation performance. Further, the results of VAcc(EMA) show that the EMA technique, which smooths and improves the accuracies during validation, does not necessarily improve the rank correlation of validation accuracy with the final test performance."
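For reference, the evaluation protocol used throughout these comparisons can be sketched in a few lines (our illustration with hypothetical variable names; the sign flip mirrors the convention stated at the start of Section 5 for loss-based estimators):

```python
# Minimal sketch of the rank-correlation evaluation: one score per
# architecture from an estimator, compared against final test accuracies.
from scipy.stats import spearmanr

def rank_corr(estimator_scores, final_test_accs, lower_is_better=True):
    scores = [-s for s in estimator_scores] if lower_is_better else list(estimator_scores)
    rho, _ = spearmanr(scores, final_test_accs)
    return rho
```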
}, { "heading": "5.3 ARCHITECTURE GENERATOR SELECTION", "text": "For the RandWiredNN dataset, we use 69 different hyperparameter values for the random graph generator which generates the randomly wired neural architecture. Here we would like to investigate whether our estimator can be used in place of the true test accuracy to select among different hyperparameter values. For each graph generator hyperparameter value, we sample 8 neural architectures with different wiring. The mean and standard error of both the true test accuracies and SoTL-E scores over the 8 samples are presented in Fig. 4. Our estimator can well predict the relative performance ranking among different hyperparameters (Rank correlation≥ 0.85) based on as few as 10 epochs of training. The rank correlation between our estimator and the final test accuracy improves as we use the training loss in later epochs." }, { "heading": "5.4 SPEED UP NAS", "text": "Similar to early stopping, our method is model-free and can significantly speed up the architecture performance evaluation by using information from early training epochs. In this section, we incorporate our estimator, SoTL-E, at T = 50 into several NAS search strategies: Regularised Evolution (Real et al., 2019) (top row in Fig. 5), TPE (Bergstra et al., 2011) (bottom row in Fig. 5) and Random Search (Bergstra & Bengio, 2012) (Appendix E) and performance architecture search on NASBench-201 datasets. We compare this against the other two benchmarks which use the final validation accuracy at T = 200, denoted as Val Acc (T=200) and the early-stop validation accuracy at T = 50, denoted as Val Acc (T=50), respectively to evaluate the architecture’s generalisation\n2We follow (Baker et al., 2017) and train the SVR on 200 architectures.\nperformance. All the NAS search strategies start their search from 10 random initial data and are repeated for 20 seeds. The mean and standard error results over the search time are shown in Fig. 5. By using our estimator, the NAS search strategies can find architectures with lower test error given the same time budget or identify the top performing architectures using much less runtime as compared to using final or early-stopping validation accuracy. Also the gain of using our estimator is more significant for NAS methods performing both exploitation and exploration (RE and TPE) than that doing pure exploration (Random Search in Appendix E)." }, { "heading": "6 CONCLUSION", "text": "We propose a simple yet reliable method for estimating the generalisation performance of neural architectures based on its early training losses. Our estimator enables significant speed-up for performance estimation in NAS while outperforming other efficient estimators in terms of rank correlation with the true test performance. More importantly, our estimator has theoretical interpretation based on training speed and Bayesian marginal likelihood, both of which have strong links with generalisation. We believe our estimator can be a very useful tool for achieving efficient NAS." }, { "heading": "A DATASETS DESCRIPTION", "text": "The datasets we experiment with are:\n• NASBench-201 (Dong & Yang, 2020): the dataset contains information of 15,625 different neural architectures, each of which is trained with SGD optimiser and evaluated on 3 different datasets: CIFAR10, CIFA100, IMAGENET-16-120 for 3 random initialisation seeds. 
The training accuracy/loss and validation accuracy/loss after every training epoch, as well as architecture meta-information such as the number of parameters and FLOPs, are all accessible from the dataset. The search space of the NASBench-201 dataset is a 4-node cell and applicable to almost all up-to-date NAS algorithms. The dataset is available at https://github.com/D-X-Y/NAS-Bench-201.\n• RandWiredNN: we produce this dataset by generating 552 randomly wired neural architectures from the random graph generators proposed in (Xie et al., 2019a) and evaluating their performance on the image dataset FLOWERS102 (Nilsback & Zisserman, 2008). We explore 69 sets of hyperparameter values for the random graph generators and, for each set of hyperparameter values, we sample 8 randomly wired neural networks from the generator. A randomly wired neural network comprises 3 cells connected in sequence and each cell is a 32-node random graph. The wiring/connection within the graph is generated with one of the three classic random graph models in graph theory: the Erdos-Renyi (ER), Barabasi-Albert (BA) and Watts-Strogatz (WS) models. Each random graph model has 1 or 2 hyperparameters which decide the generative distribution over edge/node connection in the graph. All the architectures are trained with SGD optimiser for 250 epochs and the other training set-ups follow those in (Liu et al., 2019). This dataset allows us to evaluate the performance of our simple estimator on hyperparameter/model selection for the random graph generator. We will release this dataset after paper publication.\n• DARTS: we produce this dataset by randomly sampling 100 architectures from the search space used in DARTS (Liu et al., 2019) and evaluating them on CIFAR10. This search space comprises a cell of 7 nodes. An architecture from this search space is formed by stacking the cell 8 or 20 times. Specifically, the first two nodes in cell k are the input nodes which equal the outputs of cell k−2 and cell k−1 respectively. The last node in cell k is the output node which gives a depthwise concatenation of all the intermediate nodes. The remaining four intermediate nodes are operation nodes that can take one out of eight operation choices. This search space is larger and more general than that of NASBench-201, and is also widely adopted in NAS (Zoph et al., 2018; Liu et al., 2019; Chen et al., 2019; Xie et al., 2019b; Xu et al., 2019; Real et al., 2019; Li & Talwalkar, 2020; Pham et al., 2018; Shaw et al., 2019; Zhou et al., 2020). In Section 5.2, we experiment with the three different evaluation set-ups used in (Liu et al., 2019):\n1. Search phase: We stack 8 cells to form the architecture and train the architecture for 150 epochs on CIFAR10 with a batch size of 128. We use the SGD optimiser with an initial learning rate of 0.05 and a cosine-annealing schedule, momentum of 0.9 and weight decay of 3 × 10^{-4};\n2. Retraining phase for CIFAR10: We stack 20 cells to form the architecture and train the architecture for 150 epochs on CIFAR10 with a batch size of 96. We use the SGD optimiser with an initial learning rate of 0.025 and a cosine-annealing schedule, momentum of 0.9 and weight decay of 3 × 10^{-4};\n3. Retraining phase for ImageNet: We stack 20 cells to form the architecture and train the architecture for 150 epochs on CIFAR10 with a batch size of 128. 
We use the SGD optimiser with an initial learning rate of 0.1 and a step-decay schedule (decayed by a factor of 0.97 after each epoch), momentum of 0.9 and weight decay of 3 × 10^{-4}.\nFor this dataset, we also record the training loss for each mini-batch and an alternative validation accuracy value evaluated using the exponential moving average (EMA) of the network weights (Tan & Le, 2019), on top of the conventional training and validation losses/accuracies. The mini-batch training loss is used to verify our claim that it is the sum of training losses, which has a nice theoretical interpretation, rather than the individual training loss, that gives good correlation with the generalisation performance of the architectures. The EMA version of the validation accuracy is used to check whether a smoothed and improved version of the early-stopped validation accuracy will have better correlation with the final true test performance." }, { "heading": "B EXAMPLE ON BAYESIAN LINEAR REGRESSION", "text": "We illustrate how the SoTL metric corresponds to a lower bound on the marginal likelihood that can be used for model selection in a simple Bayesian linear regression setting. We consider an idealised data set (X, y) with X ∈ R^{n×(n+1)} and y ∈ R^n, with X of the form X = (x_i)_{i=1}^n = ((y_i + ε_i^0, 0, . . . , ε_i, . . . , 0))_{i=1}^n, and ε_i ∼ N(0, 1). We wish to compare two Bayesian linear regression models M1 and M2, each of which uses one of two different feature embeddings: φ1 and φ2, where φ1(x) = x is the identity and φ2(x) = x^⊤ e_1 = (y + ε^0) retains only the single dimension that is correlated with the target, removing the noisy components of the input. The model which uses φ2 will have less opportunity to overfit to its training data, and will therefore generalise better than the model which uses φ1; similarly, it will also have a higher marginal likelihood. We demonstrate empirically in Fig. 6 that the SoTL estimator computed on the iterative posterior updates of the Bayesian linear regression models also exhibits this relative ranking, and illustrate how the SoTL relates to the lower bound described in Section 3." }, { "heading": "C EFFECT OF SUMMATION WINDOW E", "text": "As shown in Fig. 1, summing the training losses over the E most recent epochs (SoTL-E) can achieve higher rank correlation with the true test accuracy than summing over all the previous T epochs (SoTL), especially early on in training. We grid-search different summation window sizes E = 1, 10, . . . , 70 to investigate the effect of E and observe consistently across all 3 image datasets that a smaller window size gives higher rank correlation during the early training phase and all E values converge to the same maximum rank correlation (Fig. 7).\nWe further verify this observation by performing the same experiments on the DARTS dataset, for which we have saved the mini-batch training losses and thus can compute the sum of training losses for less than one epoch, E < 1. For example, E = 0.3 corresponds to the sum of training losses over the first 30% of the mini-batches/optimisation steps in the epoch. The results in Fig. 8 show again that E = 1 is the optimal choice, although a smaller summation window in general leads to better performance than large window sizes at the very early part of the training. Thus, we recommend E = 1 as the default choice for our SoTL-E estimator. Note SoTL-E with E = 1 corresponds to the sum of training losses over all the batches in one single epoch." 
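A minimal sketch of the SoTL-E variant studied in Appendix C, reusing the (num_epochs, B) loss-array layout assumed earlier:

```python
import numpy as np

def sotl_e(losses: np.ndarray, T: int, E: int = 1) -> float:
    """SoTL-E: sum of per-epoch average training losses over epochs
    T-E+1, ..., T (epochs are 1-indexed as in the paper)."""
    window = losses[T - E:T]  # rows T-E, ..., T-1 in 0-indexed storage
    return float(window.mean(axis=1).sum())
```

With E = 1 this reduces, up to the constant factor 1/B which does not affect rankings, to the sum of mini-batch training losses within epoch T.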
}, { "heading": "D TRAINING LOSSES VS VALIDATION LOSSES", "text": "D.1 EXAMPLE SHOWING TRAINING LOSS IS BETTER CORRELATED WITH VALIDATION ACCURACY THAN VALIDATION LOSS\nWe sample three example architectures from the NASBench-201 dataset and plot their losses and validation accuracies on CIFAR100 over the training epochs T . The relative ranking for the validation accuracy is: Arch A (0.70) > Arch B (0.67) > Arch C (0.64), which corresponds perfectly (negatively) with the relatively ranking for the training loss: Arch A (0.05) < Arch B (0.31) < Arch C (0.69). Namely, the best performing architecture also has the lowest final training epoch loss. However, the ranking among their validation losses is poorly/wrongly correlated with that of validation accuracy; the worst-performing architecture has the lowest final validation losses but the\nbest-performing architecture has the highest validation losses. Moreover, in all three examples, especially the better-performing ones, the validation loss stagnates at a relatively high value while the\nvalidation accuracy continues to rise. The training loss doesn’t have this problem and it decreases while the validation accuracy increases. This confirms the observation we made in Section 5.2 that the validation loss will become an unreliable predictor for the final validation accuracy as well as the generalisation performance of the architecture as the training proceeds due to overconfident misclassification.\nD.2 COMPARISON WITH SUM OVER VALIDATION ACCURACY" }, { "heading": "101 102", "text": "D.3 OVERFITTING ON CIFAR10 AND CIFAR100\nIn Figure 2 in Section 5.2, the rank correlation achieved by SoTL-E on CIFAR10 and CIFAR100 will drop slighted after around T = 150 epochs but similar trend is not observed for IMAGENET16-120. We hypothesise that this is due to the fact that many architectures converge to very small training losses on CIFAR10 and CIFAR100 in the later training phase, making it more difficult to distinguish these good architectures based on their later-epoch training losses. But this doesn’t happen on IMAGENET-16-120 because it’s a more challenging dataset. We test this by visualising the training loss curves of all 5000 architectures in Figure 11a where the solid line and error bar correspond to the mean and standard error respectively. We also plot out the number of architectures with training losses below 0.1 3 in Figure 11b. It is evident that CIFAR10 and CIFAR100 both see an increasing number of overfitted architectures as the training proceeds whereas all architectures still have high training losses on IMAGENET-16-120 at end of the training T = 200 with none of them overfits. Thus, our hypothesis is confirmed. In addition, similar observation is also shared in (Jiang* et al., 2020) where the authors find the number of optimisation iterations required to reach loss equals 0.1 correlates well with generalisation but the number of iterations required going from loss equals 0.1 to loss equals 0.01 doesn’t.\n3the threshold 0.1 is chosen following the threshold for optimisation-based measures in (Jiang* et al., 2020)" }, { "heading": "E ADDITIONAL NAS EXPERIMENTS", "text": "In this work, we incorporate our estimator, SoTL-E, at T = 50 into three NAS search strategies: Regularised Evolution (Real et al., 2019), TPE (Bergstra et al., 2011) and Random Search (Bergstra & Bengio, 2012) and performance architecture search on NASBench-201 datasets. 
We modify the implementation available at https://github.com/automl/nas_benchmarks for these three methods.\nRandom Search (Bergstra & Bengio, 2012) is a very simple yet competitive NAS search strategy (Dong & Yang, 2020). We also combined our estimator, SoTL-E, at training epoch T = 50 with Random Search to perform NAS. We compare it against the baselines using the final validation accuracy at T = 200, denoted as Val Acc (T=200), and the early-stop validation accuracy at T = 50, denoted as Val Acc (T=50). Other experimental set-ups follow Section 5.4. The results over running hours on all three image tasks are shown in Figure 12. The use of our estimator clearly leads to faster convergence as compared to the use of the final validation accuracy, i.e. Val Acc (T=200). Moreover, our estimator also outperforms the early-stop validation accuracy, Val Acc (T=50), on the two more challenging image tasks, CIFAR100 and IMAGENET-16-120, and is on par with it on CIFAR10. The performance gain of using our estimator or the early-stopped validation accuracy is relatively less significant in the case of Random Search compared to the cases of Regularised Evolution and TPE. For example, given a budget of 150 hours on CIFAR100, Regularised Evolution and TPE when combined with our estimator can find an architecture with a test error around or below 0.26, but Random Search only finds an architecture with a test error of around 0.27. This is due to the fact that Random Search is purely explorative while Regularised Evolution and TPE both trade off exploration and exploitation during their search; our estimator, by efficiently estimating the final generalisation performance of the architectures, enables better exploitation. Therefore, we recommend that users deploy our proposed estimator with search strategies which involve some degree of exploitation to maximise the potential gain." } ]
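To illustrate how the estimator plugs into a search strategy, here is a minimal sketch of Random Search driven by SoTL-E at T = 50. The helpers sample_architecture and train_one_epoch are hypothetical placeholders, assumed to sample a candidate from the search space and to return the summed training loss of one epoch, respectively:

```python
# Minimal sketch (hypothetical helpers): Random Search ranked by SoTL-E.
def random_search_with_sotl_e(sample_architecture, train_one_epoch,
                              n_candidates=100, T=50):
    best_arch, best_score = None, float("inf")
    for _ in range(n_candidates):
        arch = sample_architecture()
        # Train for only T epochs; record the per-epoch summed training loss.
        epoch_losses = [train_one_epoch(arch) for _ in range(T)]
        score = epoch_losses[-1]  # SoTL-E with E = 1: last epoch's summed loss
        if score < best_score:
            best_arch, best_score = arch, score
    return best_arch
```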
2020
null
SP:ddf5fcf80d3a1d2c18cf4432d29c0eda32dbbef3
[ "This paper outlines a method for forecasting and parameter estimation when you have a partial physics model (possibly with unknown parameters) and time series data. This is a hybrid approach where the data-driven (deep learning) approach only learns the parts not accounted for by the physical model. A key feature is being able to decompose the problem in such a way that the data-driven model only models what cannot be captured by the physical model. The parameters of these two models must be fit jointly so that the physical model's parameters are more correct. They prove existence and uniqueness for this decomposition. " ]
Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard physical modeling-based approaches tend to be over-simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for errors of the physical model. The learning problem is carefully formulated such that the physical model explains as much of the data as possible, while the data-driven component only describes information that cannot be captured by the physical model, no more, no less. This not only guarantees the existence and uniqueness of this decomposition, but also ensures interpretability and benefits generalization. Experiments conducted on three important use cases, each representative of a different family of phenomena, i.e. reaction-diffusion equations, wave equations and the non-linear damped pendulum, show that APHYNITY can efficiently leverage approximate physical models to accurately forecast the evolution of the system and correctly identify relevant physical parameters.
[ { "affiliations": [], "name": "Yin ∗Vincent" }, { "affiliations": [], "name": "Le Guen" }, { "affiliations": [], "name": "Ayed Nicolas Thome" }, { "affiliations": [], "name": "Patrick Gallinari" } ]
[ { "authors": [ "Ibrahim Ayed", "Nicolas Cedilnik", "Patrick Gallinari", "Maxime Sermesant" ], "title": "Ep-net: Learning cardiac electrophysiology models for physiology-based constraints in data-driven predictions", "venue": "Functional Imaging and Modeling of the Heart - 10th International Conference,", "year": 2019 }, { "authors": [ "Ibrahim Ayed", "Emmanuel de Bézenac", "Arthur Pajot", "Julien Brajard", "Patrick Gallinari" ], "title": "Learning dynamical systems from partial observations", "venue": "arXiv preprint arXiv:1902.11136,", "year": 2019 }, { "authors": [ "Philipp Becker", "Harit Pandya", "Gregor Gebhardt", "Cheng Zhao", "James Taylor", "Gerhard Neumann" ], "title": "Recurrent kalman networks: Factorized inference in high-dimensional deep feature spaces", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Dimitri P. Bertsekas" ], "title": "Constrained Optimization and Lagrange Multiplier Methods (Optimization and Neural Computation Series)", "venue": "Athena Scientific,", "year": 1996 }, { "authors": [ "Steven L. Brunton", "Joshua L. Proctor", "J. Nathan Kutz" ], "title": "Discovering governing equations from data by sparse identification of nonlinear dynamical systems", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K. Duvenaud" ], "title": "Neural ordinary differential equations. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2018 }, { "authors": [ "Wen-Hua Chen" ], "title": "Disturbance observer based control for nonlinear systems", "venue": "IEEE/ASME transactions on mechatronics,", "year": 2004 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Edward Choi", "Mohammad Taha Bahadori", "Jimeng Sun", "Joshua Kulas", "Andy Schuetz", "Walter Stewart" ], "title": "RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Philippe Courtier", "J-N Thépaut", "Anthony Hollingsworth" ], "title": "A strategy for operational implementation of 4d-var, using an incremental approach", "venue": "Quarterly Journal of the Royal Meteorological Society,", "year": 1994 }, { "authors": [ "Emmanuel de Bézenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep learning for physical processes: Incorporating prior scientific knowledge", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jérémie Donà", "Jean-Yves Franceschi", "Sylvain Lamprier", "Patrick Gallinari" ], "title": "Pde-driven spatiotemporal disentanglement", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "John R Dormand", "Peter J Prince" ], "title": "A family of embedded runge-kutta formulae", "venue": "Journal of computational and applied mathematics,", "year": 1980 }, { "authors": [ "P. Gentine", "M. Pritchard", "S. Rasp", "G. Reinaudi", "G. 
Yacalis" ], "title": "Could machine learning break the convection parameterization deadlock", "venue": "Geophysical Research Letters,", "year": 2018 }, { "authors": [ "Samuel Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Gordon G Johnson" ], "title": "A nonconvex set which has the unique nearest point property", "venue": "Journal of Approximation Theory,", "year": 1987 }, { "authors": [ "Rudolph Emil Kalman" ], "title": "A new approach to linear filtering and prediction problems", "venue": null, "year": 1960 }, { "authors": [ "Gene A. Klaasen", "William C. Troy" ], "title": "Stationary wave solutions of a system of reaction-diffusion equations derived from the fitzhugh–nagumo equations", "venue": "SIAM Journal on Applied Mathematics,", "year": 1984 }, { "authors": [ "William Large", "Stephen Yeager" ], "title": "Diurnal to decadal global forcing for ocean and sea-ice models: The data sets and flux climatologies", "venue": null, "year": 2004 }, { "authors": [ "Vincent Le Guen", "Nicolas Thome" ], "title": "Disentangling physical dynamics from unknown factors for unsupervised video prediction", "venue": "In Computer Vision and Pattern Recognition (CVPR)", "year": 2020 }, { "authors": [ "Shihua Li", "Jun Yang", "Wen-Hua Chen", "Xisong Chen" ], "title": "Disturbance observer-based control: methods and applications", "venue": "CRC press,", "year": 2014 }, { "authors": [ "Yun Long", "Xueyuan She", "Saibal Mukhopadhyay" ], "title": "Hybridnet: integrating model-based and data-driven learning to predict evolution of dynamical systems", "venue": "Conference on Robot Learning (CoRL),", "year": 2018 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "PDE-Net: Learning PDEs from data", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Viraj Mehta", "Ian Char", "Willie Neiswanger", "Youngseog Chung", "Jeff Schneider" ], "title": "Neural dynamical systems", "venue": "ICLR 2020 Deep Differential Equations Workshop,", "year": 2020 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Boris N. 
Oreshkin", "Dmitri Carpov", "Nicolas Chapados", "Yoshua Bengio" ], "title": "N-BEATS: Neural basis expansion analysis for interpretable time series forecasting", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Pascal Pernot", "Fabien Cailliez" ], "title": "A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy", "venue": "AIChE Journal,", "year": 2017 }, { "authors": [ "Dimitris C Psichogios", "Lyle H Ungar" ], "title": "A hybrid neural network-first principles approach to process modeling", "venue": "AIChE Journal,", "year": 1992 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George Em Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Markus Reichstein", "Gustau Camps-Valls", "Bjorn Stevens", "Martin Jung", "Joachim Denzler", "Nuno Carvalhais", "Prabhat" ], "title": "Deep learning and process understanding for data-driven Earth system science", "venue": null, "year": 2019 }, { "authors": [ "R Rico-Martinez", "JS Anderson", "IG Kevrekidis" ], "title": "Continuous-time nonlinear signal processing: a neural network based approach for gray box identification", "venue": "In Proceedings of IEEE Workshop on Neural Networks for Signal Processing,", "year": 1994 }, { "authors": [ "David Rolnick", "Priya L Donti", "Lynn H Kaack", "Kelly Kochanski", "Alexandre Lacoste", "Kris Sankaran", "Andrew Slavin Ross", "Nikola Milojevic-Dupont", "Natasha Jaques", "Anna Waldman-Brown" ], "title": "Tackling climate change with machine learning", "venue": "In NeurIPS 2019 workshop on Climate Change with Machine Learning,", "year": 2019 }, { "authors": [ "Priyabrata Saha", "Saurabh Dash", "Saibal Mukhopadhyay" ], "title": "PHICNet: Physics-incorporated convolutional recurrent neural networks for modeling dynamical systems", "venue": "arXiv preprint arXiv:2004.06243,", "year": 2020 }, { "authors": [ "Sungyong Seo", "Chuizheng Meng", "Yan Liu" ], "title": "Physics-aware difference graph networks for sparselyobserved dynamics", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Xingjian Shi", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional LSTM network: A machine learning approach for precipitation nowcasting. 
In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2015 }, { "authors": [ "Justin Sirignano", "Konstantinos Spiliopoulos" ], "title": "Dgm: A deep learning algorithm for solving partial differential equations", "venue": "Journal of computational physics,", "year": 2018 }, { "authors": [ "Michael L Thompson", "Mark A Kramer" ], "title": "Modeling chemical processes using prior knowledge and neural networks", "venue": "AIChE Journal,", "year": 1994 }, { "authors": [ "Jean-François Toubeau", "Jérémie Bottieau", "François Vallée", "Zacharie De Grève" ], "title": "Deep learning-based multivariate probabilistic forecasting for short-term scheduling in power markets", "venue": "IEEE Transactions on Power Systems,", "year": 2018 }, { "authors": [ "Benjamin Ummenhofer", "Lukas Prantl", "Nils Thuerey", "Vladlen Koltun" ], "title": "Lagrangian fluid simulation with continuous convolutions", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Qi Wang", "Feng Li", "Yi Tang", "Yan Xu" ], "title": "Integrating model-driven and data-driven methods for power system frequency stability assessment and control", "venue": "IEEE Transactions on Power Systems,", "year": 2019 }, { "authors": [ "Yunbo Wang", "Zhifeng Gao", "Mingsheng Long", "Jianmin Wang", "Philip S. Yu" ], "title": "PredRNN++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modeling and forecasting complex dynamical systems is a major challenge in domains such as environment and climate (Rolnick et al., 2019), health science (Choi et al., 2016), and in many industrial applications (Toubeau et al., 2018). Model Based (MB) approaches typically rely on partial or ordinary differential equations (PDE/ODE) and stem from a deep understanding of the underlying physical phenomena. Machine learning (ML) and deep learning methods are more prior agnostic yet have become state-of-the-art for several spatio-temporal prediction tasks (Shi et al., 2015; Wang et al., 2018; Oreshkin et al., 2020; Donà et al., 2020), and connections have been drawn between deep architectures and numerical ODE solvers, e.g. neural ODEs (Chen et al., 2018; Ayed et al., 2019b). However, modeling complex physical dynamics is still beyond the scope of pure ML methods, which often cannot properly extrapolate to new conditions as MB approaches do.\nCombining the MB and ML paradigms is an emerging trend to develop the interplay between the two paradigms. For example, Brunton et al. (2016); Long et al. (2018b) learn the explicit form of PDEs directly from data, Raissi et al. (2019); Sirignano & Spiliopoulos (2018) use NNs as implicit methods for solving PDEs, Seo et al. (2020) learn spatial differences with a graph network, Ummenhofer et al. (2020) introduce continuous convolutions for fluid simulations, de Bézenac et al. (2018) learn the\n∗Equal contribution, authors sorted by reverse alphabetical order.\nvelocity field of an advection-diffusion system, Greydanus et al. (2019); Chen et al. (2020) enforce conservation laws in the network architecture or in the loss function.\nThe large majority of aforementioned MB/ML hybrid approaches assume that the physical model adequately describes the observed dynamics. This assumption is, however, commonly violated in practice. This may be due to various factors, e.g. idealized assumptions and difficulty to explain processes from first principles (Gentine et al., 2018), computational constraints prescribing a fine grain modeling of the system (Ayed et al., 2019a), unknown external factors, forces and sources which are present (Large & Yeager, 2004). In this paper, we aim at leveraging prior dynamical ODE/PDE knowledge in situations where this physical model is incomplete, i.e. unable to represent the whole complexity of observed data. To handle this case, we introduce a principled learning framework to Augment incomplete PHYsical models for ideNtIfying and forecasTing complex dYnamics (APHYNITY). The rationale of APHYNITY, illustrated in Figure 1 on the pendulum problem, is to augment the physical model when—and only when—it falls short.\nDesigning a general method for combining MB and ML approaches is still a widely open problem, and a clear problem formulation for the latter is lacking (Reichstein et al., 2019). Our contributions towards these goals are the following:\n• We introduce a simple yet principled framework for combining both approaches. We decompose the data into a physical and a data-driven term such that the data-driven component only models information that cannot be captured by the physical model. 
We provide existence and uniqueness guarantees (Section 3.1) for the decomposition given mild conditions, and show that this formulation ensures interpretability and benefits generalization.\n• We propose a trajectory-based training formulation (Section 3.2) along with an adaptive optimization scheme (Section 3.3) enabling end-to-end learning for both physical and deep learning components. This allows APHYNITY to automatically adjust the complexity of the neural network to different approximation levels of the physical model, paving the way to flexible learned hybrid models.\n• We demonstrate the generality of the approach on three use cases (reaction-diffusion, wave equations and the pendulum) representative of different PDE families (parabolic, hyperbolic), having a wide spectrum of application domains, e.g. acoustics, electromagnetism, chemistry, biology, physics (Section 4). We show that APHYNITY is able to achieve performances close to complete physical models by augmenting incomplete ones, both in terms of forecasting accuracy and physical parameter identification. Moreover, APHYNITY can also be successfully extended to the partially observable setting (see discussion in Section 5)." }, { "heading": "2 RELATED WORK", "text": "Correction in data assimilation Prediction under approximate physical models has been tackled by traditional statistical calibration techniques, which often rely on Bayesian methods (Pernot & Cailliez, 2017). Data assimilation techniques, e.g. the Kalman filter (Kalman, 1960; Becker et al., 2019), 4D-var (Courtier et al., 1994), prediction errors are modeled probabilistically and a correction using observed data is applied after each prediction step. Similar residual correction procedures are commonly used in robotics and optimal control (Chen, 2004; Li et al., 2014). However, these sequential (two-stage) procedures prevent the cooperation between prediction and correction. Besides, in model-based reinforcement learning, model deficiencies are typically handled by considering only short-term rollouts (Janner et al., 2019) or by model predictive control (Nagabandi et al., 2018). The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parametrized dynamics. It does so while ensuring optimal cooperation between the prior model and the augmentation.\nAugmented physical models Combining physical models with machine learning (gray-box or hybrid modeling) was first explored from the 1990’s: Psichogios & Ungar (1992); Thompson & Kramer (1994); Rico-Martinez et al. (1994) use neural networks to predict the unknown parameters of physical models. The challenge of proper MB/ML cooperation was already raised as a limitation of gray-box approaches but not addressed. Moreover these methods were evaluated on specific applications with a residual targeted to the form of the equation. In the last few years, there has been a renewed interest in deep hybrid models bridging data assimilation techniques and machine learning to identify complex PDE parameters using cautiously constrained forward model (Long et al., 2018b; de Bézenac et al., 2018), as discussed in introduction. Recently, some approaches have specifically targetted the MB/ML cooperation. HybridNet (Long et al., 2018a) and PhICNet (Saha et al., 2020) both use data-driven networks to learn additive perturbations or source terms to a given PDE. 
The former considers the favorable context where the perturbations can be accessed, and the latter the special case of additive noise on the input. Wang et al. (2019); Mehta et al. (2020) propose several empirical fusion strategies with deep neural networks but lack theoretical groundings. PhyDNet (Le Guen & Thome, 2020) tackles augmentation in partially-observed settings, but with specific recurrent architectures dedicated to video prediction. Crucially, all the aforementioned approaches do not address the issues of uniqueness of the decomposition or of proper cooperation for correct parameter identification. Besides, we found experimentally that this vanilla cooperation is inferior to the APHYNITY learning scheme in terms of forecasting and parameter identification performances (see experiments in Section 4.2)." }, { "heading": "3 THE APHYNITY MODEL", "text": "In the following, we study dynamics driven by an equation of the form: dXt dt = F (Xt) (1)\ndefined over a finite time interval [0, T ], where the state X is either vector-valued, i.e. we have Xt ∈ Rd for every t, (pendulum equations in Section 4), or Xt is a d-dimensional vector field over a spatial domain Ω ⊂ Rk, with k ∈ {2, 3}, i.e. Xt(x) ∈ Rd for every (t, x) ∈ [0, T ] × Ω (reaction-diffusion and wave equations in Section 4). We suppose that we have access to a set of observed trajectories D = {X· : [0, T ] → A | ∀t ∈ [0, T ], dXt/dt = F (Xt)}, where A is the set of X values (either Rd or vector field). In our case, the unknown F has A as domain and we only assume that F ∈ F , with (F , ‖ · ‖) a normed vector space." }, { "heading": "3.1 DECOMPOSING DYNAMICS INTO PHYSICAL AND AUGMENTED TERMS", "text": "As introduced in Section 1, we consider the common situation where incomplete information is available on the dynamics, under the form of a family of ODEs or PDEs characterized by their temporal evolution Fp ∈ Fp ⊂ F . The APHYNITY framework leverages the knowledge of Fp while mitigating the approximations induced by this simplified model through the combination of physical and data-driven components. F being a vector space, we can write:\nF = Fp + Fa\nwhere Fp ∈ Fp encodes the incomplete physical knowledge and Fa ∈ F is the data-driven augmentation term complementing Fp. The incomplete physical prior is supposed to belong to a known family, but the physical parameters (e.g. propagation speed for the wave equation) are unknown and need to be estimated from data. Both Fp and Fa parameters are estimated by fitting the trajectories from D. The decomposition F = Fp + Fa is in general not unique. For example, all the dynamics could be captured by the Fa component. This decomposition is thus ill-defined, which hampers the interpretability and the extrapolation abilities of the model. In other words, one wants the estimated parameters of Fp to be as close as possible to the true parameter values of the physical model and Fa to play only a complementary role w.r.t Fp, so as to model only the information that cannot be captured by the physical prior. For example, when F ∈ Fp, the data can be fully described by the physical model, and in this case it is sensible to desire Fa to be nullified; this is of central importance in a setting where one wishes to identify physical quantities, and for the model to generalize and extrapolate to new conditions. 
In a more general setting where the physical model is incomplete, the action of Fa on the dynamics, as measured through its norm, should be as small as possible.\nThis general idea is embedded in the following optimization problem:\nmin Fp∈Fp,Fa∈F ‖Fa‖ subject to ∀X ∈ D,∀t, dXt dt = (Fp + Fa)(Xt) (2)\nThe originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parametrized dynamics. It does so while ensuring optimal cooperation between the prior model and the augmentation.\nA first key question is whether the minimum in Eq. (2) is indeed well-defined, in other words whether there exists indeed a decomposition with a minimal norm Fa. The answer actually depends on the geometry of Fp, and is formulated in the following proposition proven in Appendix B: Proposition 1 (Existence of a minimizing pair). If Fp is a proximinal set1, there exists a decomposition minimizing Eq. (2).\nProximinality is a mild condition which, as shown through the proof of the proposition, cannot be weakened. It is a property verified by any boundedly compact set. In particular, it is true for closed subsets of finite dimensional spaces. However, if only existence is guaranteed, while forecasts would be expected to be accurate, non-uniqueness of the decomposition would hamper the interpretability of Fp and this would mean that the identified physical parameters are not uniquely determined.\nIt is then natural to ask under which conditions solving problem Eq. (2) leads to a unique decomposition into a physical and a data-driven component. The following result provides guarantees on the existence and uniqueness of the decomposition under mild conditions. The proof is given in Appendix B: Proposition 2 (Uniqueness of the minimizing pair). If Fp is a Chebyshev set1, Eq. (2) admits a unique minimizer. The Fp in this minimizer pair is the metric projection of the unknown F onto Fp.\nThe Chebyshev assumption condition is strictly stronger than proximinality but is still quite mild and necessary. Indeed, in practice, many sets of interest are Chebyshev, including all closed convex spaces in strict normed spaces and, if F = L2, Fp can be any closed convex set, including all finite dimensional subspaces. In particular, all examples considered in the experiments are Chebyshev sets.\nPropositions 1 and 2 provide, under mild conditions, the theoretical guarantees for the APHYNITY formulation to infer the correct MB/ML decomposition, thus enabling both recovering the proper physical parameters and accurate forecasting." }, { "heading": "3.2 SOLVING APHYNITY WITH DEEP NEURAL NETWORKS", "text": "In the following, both terms of the decomposition are parametrized and are denoted as F θpp and F θap . Solving APHYNITY then consists in estimating the parameters θp and θa. θp are the physical parameters and are typically low-dimensional, e.g. 2 or 3 in our experiments for the considered physical models. For Fa, we need sufficiently expressive models able to optimize over all F : we\n1A proximinal set is one from which every point of the space has at least one nearest point. A Chebyshev set is one from which every point of the space has a unique nearest point. 
More details in Appendix A.

thus use deep neural networks, which have shown promising performances for the approximation of differential equations (Raissi et al., 2019; Ayed et al., 2019b).

When learning the parameters of $F_p^{\theta_p}$ and $F_a^{\theta_a}$, we have access to a finite dataset of trajectories discretized with a given temporal resolution $\Delta t$: $\mathcal{D}_{\text{train}} = \{(X^{(i)}_{k\Delta t})_{0 \le k \le \lfloor T/\Delta t \rfloor}\}_{1 \le i \le N}$. Solving Eq. (2) requires estimating the state derivative $\mathrm{d}X_t/\mathrm{d}t$ appearing in the constraint term. One solution is to approximate this derivative using e.g. finite differences, as in (Brunton et al., 2016; Greydanus et al., 2019; Cranmer et al., 2020). This numerical scheme requires high space and time resolutions in the observation space in order to get reliable gradient estimates. Furthermore, it is often unstable, leading to explosive numerical errors, as discussed in Appendix D. We propose instead to solve Eq. (2) using an integral trajectory-based approach: we compute $\tilde{X}^{(i)}_{k\Delta t}$ from an initial state $X^{(i)}_0$ using the current $F_p^{\theta_p} + F_a^{\theta_a}$ dynamics, then enforce the constraint $\tilde{X}^{(i)}_{k\Delta t} = X^{(i)}_{k\Delta t}$. This leads to our final objective function on $(\theta_p, \theta_a)$:

$$\min_{\theta_p, \theta_a} \|F_a^{\theta_a}\| \quad \text{subject to} \quad \forall i, \forall k, \ \tilde{X}^{(i)}_{k\Delta t} = X^{(i)}_{k\Delta t} \qquad (3)$$

where $\tilde{X}^{(i)}_{k\Delta t}$ is the approximate solution of the integral $X^{(i)}_0 + \int_0^{k\Delta t} (F_p^{\theta_p} + F_a^{\theta_a})(X_s)\,\mathrm{d}s$, obtained by a differentiable ODE solver.

In our setting, where we consider situations for which $F_p^{\theta_p}$ only partially describes the physical phenomenon, this coupled MB + ML formulation leads to different parameter estimates than using the MB formulation alone, as analyzed more thoroughly in Appendix C. Interestingly, our experiments show that using this formulation also leads to a better identification of the physical parameters $\theta_p$ than when fitting the simplified physical model $F_p^{\theta_p}$ alone (Section 4). With only incomplete knowledge of the physics, the $\theta_p$ estimator will be biased by the additional dynamics which need to be fitted to the data. Appendix F also confirms that the integral formulation gives better forecasting results and a more stable behavior than supervising over finite difference approximations of the derivatives." }, { "heading": "3.3 ADAPTIVELY CONSTRAINED OPTIMIZATION", "text": "The formulation in Eq. (3) involves constraints which are difficult to enforce exactly in practice. We considered a variant of the method of multipliers (Bertsekas, 1996) which uses a sequence of Lagrangian relaxations $\mathcal{L}_{\lambda_j}(\theta_p, \theta_a)$:

$$\mathcal{L}_{\lambda_j}(\theta_p, \theta_a) = \|F_a^{\theta_a}\| + \lambda_j \cdot \mathcal{L}_{\text{traj}}(\theta_p, \theta_a) \qquad (4)$$

where $\mathcal{L}_{\text{traj}}(\theta_p, \theta_a) = \sum_{i=1}^{N} \sum_{h=1}^{T/\Delta t} \|X^{(i)}_{h\Delta t} - \tilde{X}^{(i)}_{h\Delta t}\|$.

Algorithm 1: APHYNITY
  Initialization: $\lambda_0 \ge 0$, $\tau_1 > 0$, $\tau_2 > 0$
  for epoch = 1 : $N_{\text{epochs}}$ do
    for iter = 1 : $N_{\text{iter}}$ do
      for batch = 1 : $B$ do
        $\theta_{j+1} = \theta_j - \tau_1 \nabla \left[ \lambda_j \mathcal{L}_{\text{traj}}(\theta_j) + \|F_a\| \right]$
    $\lambda_{j+1} = \lambda_j + \tau_2 \mathcal{L}_{\text{traj}}(\theta_{j+1})$

This method needs an increasing sequence $(\lambda_j)_j$ such that the successive minima of $\mathcal{L}_{\lambda_j}$ converge to a solution (at least a local one) of the constrained problem Eq. (3). We select $(\lambda_j)_j$ using an iterative strategy: starting from a value $\lambda_0$, we iterate, minimizing $\mathcal{L}_{\lambda_j}$ by gradient descent (convergence to a local minimum is not necessary; a few steps are often sufficient for a successful optimization), then updating $\lambda_j$ with $\lambda_{j+1} = \lambda_j + \tau_2 \mathcal{L}_{\text{traj}}(\theta_{j+1})$, where $\tau_2$ is a chosen hyper-parameter and $\theta = (\theta_p, \theta_a)$. This procedure is summarized in Algorithm 1. This adaptive iterative procedure allows us to obtain stable and robust results, in a reproducible fashion, as shown in the experiments."
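To make Eq. (3)-(4) and Algorithm 1 concrete, here is a minimal PyTorch-style sketch of one training epoch. All function and variable names are hypothetical; it hand-rolls a fixed-step RK4 integrator for brevity, whereas the experiments in Appendix E use torchdiffeq's differentiable solvers, and it averages rather than sums the per-state norms.

```python
import torch

def rk4_step(f, x, dt):
    # One step of the 4th-order Runge-Kutta scheme used by all solver-based models.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(f, x0, n_steps, dt):
    # Integrate dX/dt = f(X) from x0; gradients flow through every step.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rk4_step(f, xs[-1], dt))
    return torch.stack(xs[1:], dim=1)            # (batch, n_steps, ...)

def aphynity_epoch(f_p, f_a, loader, opt, lam, tau_2, dt):
    # One outer iteration of Algorithm 1: minimize the Lagrangian in theta,
    # then increase the multiplier lambda proportionally to the residual L_traj.
    f = lambda x: f_p(x) + f_a(x)                # decomposition F = Fp + Fa
    for x_true in loader:                        # x_true: (batch, T, ...)
        x_pred = rollout(f, x_true[:, 0], x_true.shape[1] - 1, dt)
        l_traj = (x_true[:, 1:] - x_pred).flatten(2).norm(dim=2).mean()
        norm_fa = f_a(x_true.flatten(0, 1)).flatten(1).norm(dim=1).mean()
        loss = norm_fa + lam * l_traj            # Lagrangian relaxation, Eq. (4)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return lam + tau_2 * float(l_traj.detach())  # adaptive multiplier update
```

Here `opt` would jointly cover $(\theta_p, \theta_a)$, e.g. `torch.optim.Adam(list(f_p.parameters()) + list(f_a.parameters()), lr=tau_1)`.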
}, { "heading": "4 EXPERIMENTAL VALIDATION", "text": "We validate our approach on 3 classes of challenging physical dynamics: reaction-diffusion, wave propagation, and the damped pendulum, representative of various application domains such as chemistry, biology or ecology (for reaction-diffusion) and earth physic, acoustic, electromagnetism or\n2Convergence to a local minimum isn’t necessary, a few steps are often sufficient for a successful optimization.\neven neuro-biology (for waves equations). The two first dynamics are described by PDEs and thus in practice should be learned from very high-dimensional vectors, discretized from the original compact domain. This makes the learning much more difficult than from the one-dimensional pendulum case. For each problem, we investigate the cooperation between physical models of increasing complexity encoding incomplete knowledge of the dynamics (denoted Incomplete physics in the following) and data-driven models. We show the relevance of APHYNITY (denoted APHYNITY models) both in terms of forecasting accuracy and physical parameter identification." }, { "heading": "4.1 EXPERIMENTAL SETTING", "text": "We describe the three families of equations studied in the experiments. In all experiments,F = L2(A) where A is the set of all admissible states for each problem, and the L2 norm is computed on Dtrain by: ‖F‖2 ≈ ∑ i,k ‖F (X (i) k∆t)‖2. All considered sets of physical functionalsFp are closed and convex in F and thus are Chebyshev. In order to enable the evaluation on both prediction and parameter identification, all our experiments are conducted on simulated datasets with known model parameters. Each dataset has been simulated using an appropriate high-precision integration scheme for the corresponding equation. All solver-based models take the first state X0 as input and predict the remaining time-steps by integrating F through the same differentiable generic and common ODE solver (4th order Runge-Kutta)3. Implementation details and architectures are given in Appendix E.\nReaction-diffusion equations We consider a 2D FitzHugh-Nagumo type model (Klaasen & Troy, 1984). The system is driven by the PDE ∂u∂t = a∆u + Ru(u, v; k), ∂v ∂t = b∆v + Rv(u, v) where a and b are respectively the diffusion coefficients of u and v, ∆ is the Laplace operator. The local reaction terms are Ru(u, v; k) = u − u3 − k − v,Rv(u, v) = u − v. The state is X = (u, v) and is defined over a compact rectangular domain Ω with periodic boundary conditions. The considered physical models are: • Param PDE (a, b), with unknown (a, b) diffusion terms and without reaction terms: Fp = {F a,bp : (u, v) 7→ (a∆u, b∆v) | a ≥ amin > 0, b ≥ bmin > 0}; • Param PDE (a, b, k), the full PDE with unknown parameters: Fp = {F a,b,kp : (u, v) 7→ (a∆u+Ru(u, v; k), b∆v +Rv(u, v) | a ≥ amin > 0, b ≥ bmin > 0, k ≥ kmin > 0}.\nDamped wave equations We investigate the damped-wave PDE: ∂ 2w ∂t2 − c 2∆w + k ∂w∂t = 0 where k is the damping coefficient. The state is X = (w, ∂w∂t ) and we consider a compact spatial domain Ω with Neumann homogeneous boundary conditions. Note that this damping differs from the pendulum, as its effect is global. 
Our physical models are: • Param PDE $(c)$, without damping term: $\mathcal{F}_p = \{F_p^c : (u, v) \mapsto (v, c^2\Delta u) \mid c \in [\epsilon, +\infty)$ with $\epsilon > 0\}$; • Param PDE $(c, k)$: $\mathcal{F}_p = \{F_p^{c,k} : (u, v) \mapsto (v, c^2\Delta u - kv) \mid c, k \in [\epsilon, +\infty)$ with $\epsilon > 0\}$.

Damped pendulum The evolution follows the ODE $\mathrm{d}^2\theta/\mathrm{d}t^2 + \omega_0^2 \sin\theta + \alpha\,\mathrm{d}\theta/\mathrm{d}t = 0$, where $\theta(t)$ is the angle, $\omega_0$ the proper pulsation ($T_0$ the period) and $\alpha$ the damping coefficient. With state $X = (\theta, \mathrm{d}\theta/\mathrm{d}t)$, the ODE is $F_p^{\omega_0, \alpha} : X \mapsto (\mathrm{d}\theta/\mathrm{d}t,\ -\omega_0^2 \sin\theta - \alpha\,\mathrm{d}\theta/\mathrm{d}t)$. Our physical models are: • Hamiltonian (Greydanus et al., 2019), a conservative approximation, with $\mathcal{F}_p = \{F_p^H : (u, v) \mapsto (\partial_y H(u, v), -\partial_x H(u, v)) \mid H \in H^1(\mathbb{R}^2)\}$, where $H^1(\mathbb{R}^2)$ is the first-order Sobolev space; • Param ODE $(\omega_0)$, the frictionless pendulum: $\mathcal{F}_p = \{F_p^{\omega_0, \alpha=0} \mid \omega_0 \in [\epsilon, +\infty)$ with $\epsilon > 0\}$; • Param ODE $(\omega_0, \alpha)$, the full pendulum equation: $\mathcal{F}_p = \{F_p^{\omega_0, \alpha} \mid \omega_0, \alpha \in [\epsilon, +\infty)$ with $\epsilon > 0\}$.

Baselines As purely data-driven baselines, we use Neural ODE (Chen et al., 2018) for the three problems, and PredRNN++ (Wang et al., 2018, for reaction-diffusion only), which are competitive models for datasets generated by differential equations and for spatio-temporal data. As MB/ML methods, in the ablation studies (see Appendix F), we compare, for all problems, to the vanilla MB/ML cooperation scheme found in (Wang et al., 2019; Mehta et al., 2020). We also show results for True PDE/ODE, which corresponds to the equation used for data simulation (this does not lead to zero error, due to the difference between the simulation and training integration schemes). For the pendulum, we compare to Hamiltonian neural networks (Greydanus et al., 2019; Toth et al., 2020) and to the deep Galerkin method (DGM, Sirignano & Spiliopoulos, 2018). See additional details in Appendix E." }, { "heading": "4.2 RESULTS", "text": "We analyze and discuss below the results obtained for the three kinds of dynamics. We successively examine different evaluation and quality criteria. The conclusions are consistent for the three problems, which allows us to highlight clear trends for all of them.

Forecasting accuracy The data-driven models do not perform well compared to True PDE/ODE (all values are test errors expressed as log MSE): -4.6 for PredRNN++ vs. -9.17 for reaction-diffusion, -2.51 vs. -5.24 for the wave equation, and -2.84 vs. -8.44 for the pendulum in Table 1. The Deep Galerkin method for the pendulum with complete physics, DGM $(\omega_0, \alpha)$, being constrained by the equation, outperforms Neural ODE but is far inferior to APHYNITY models. In the incomplete physics case, DGM $(\omega_0)$ fails to compensate for the missing information. The incomplete physical models, Param PDE $(a, b)$ for reaction-diffusion, Param PDE $(c)$ for the wave equation, and the Param ODE $(\omega_0)$ and Hamiltonian models for the damped pendulum, have even poorer performances than purely data-driven ones, as can be expected since they ignore important dynamical components, e.g. friction in the pendulum case. Using APHYNITY with these imperfect physical models greatly improves forecasting accuracy in all cases, significantly outperforming purely data-driven models, and reaching results often close to the accuracy of the true ODE, when APHYNITY and the true ODE models are integrated with the same numerical scheme (which is different from the one used for data generation, hence the non-null errors even for the true equations), e.g. -5.92 vs.
-5.24 for the wave equation in Table 1. This clearly highlights the capacity of our approach to augment incomplete physical models with a learned data-driven component.

Physical parameter estimation Confirming the phenomenon mentioned in the introduction and detailed in Appendix C, incomplete physical models can lead to bad estimates of the relevant physical parameters: errors of up to 67.6% and 10.4% for the parameters of the reaction-diffusion and wave equations respectively, and of more than 13% for the pendulum parameters in Table 1. APHYNITY is able to significantly improve physical parameter identification: 2.3% error for the reaction-diffusion, 0.3% for the wave equation, and 4% for the pendulum. This validates the fact that augmenting a simple physical model to compensate for its approximations is not only beneficial for prediction, but also helps to limit errors in parameter identification when dynamical models do not fit the data well. This is crucial for the interpretability and explainability of the estimates.

Ablation study We conduct ablation studies to validate the importance of the APHYNITY augmentation compared to a naive strategy consisting in learning $F = F_p + F_a$ without taking care of the quality of the decomposition, as done in (Wang et al., 2019; Mehta et al., 2020). Results shown in Table 1 of Appendix F show a consistent gain for APHYNITY across the three use cases and all physical models: for instance, for Param PDE $(a, b)$ in reaction-diffusion, both forecasting performance (log MSE = -5.10 vs. -4.56) and parameter identification (error = 2.33% vs. 6.39%) improve. Other ablation results provided in Appendix F show the relevance of the trajectory-based approach described in Section 3.2 (vs. supervising over finite difference approximations of the derivative $F$).

Flexibility When applied to complete physical models, APHYNITY does not degrade accuracy, contrary to a vanilla cooperation scheme (see ablations in Appendix F). This is due to the least action principle of our approach: when the physical knowledge is sufficient for properly predicting the observed dynamics, the model learns to ignore the data-driven augmentation. This is shown by the norm of the trained neural net component $F_a$, which is reported in the last column of Table 1: as expected, $\|F_a\|$ diminishes as the complexity of the corresponding physical model increases, and, relative to incomplete models, the norm becomes very small for complete physical models (for example, in the pendulum experiments, we have $\|F_a\| = 8.5$ for the APHYNITY model, to be compared with 132 and 623 for the incomplete models). Thus, we see that the norm of $F_a$ is a good indication of how imperfect the physical model $F_p$ is. It highlights the flexibility of APHYNITY to successfully adapt to very different levels of prior knowledge.
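Since this diagnostic only requires evaluating the trained $F_a$ on observed states, it is cheap to compute after training. A minimal sketch (hypothetical names), mirroring the empirical norm $\|F\|^2 \approx \sum_{i,k}\|F(X^{(i)}_{k\Delta t})\|^2$ of Section 4.1 up to normalization:

```python
import torch

@torch.no_grad()
def augmentation_norm(f_a, loader):
    # Root-mean-square norm of Fa over all states in the dataset: a proxy for
    # how much of the dynamics the physical prior Fp leaves unexplained.
    total, count = 0.0, 0
    for x in loader:                  # x: (batch, T, ...) trajectories
        states = x.flatten(0, 1)      # pool all observed states
        total += f_a(states).flatten(1).pow(2).sum(dim=1).sum().item()
        count += states.shape[0]
    return (total / count) ** 0.5
```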
Note also that APHYNITY sometimes slightly improves over the true ODE, as it compensates the error introduced by different numerical integration methods for data simulation and training (see Appendix E).\nQualitative visualizations Results in Figure 2 for reaction-diffusion show that the incomplete diffusion parametric PDE in Figure 2(a) is unable to properly match ground truth simulations: the\nbehavior of the two components in Figure 2(a) is reduced to simple independent diffusions due to the lack of interaction terms between u and v. By using APHYNITY in Figure 2(b), the correlation between the two components appears together with the formation of Turing patterns, which is very similar to the ground truth. This confirms that Fa can learn the reaction terms and improve prediction quality. In Figure 3, we see for the wave equation that the data-driven Neural ODE model fails at approximating dw/dt as the forecast horizon increases: it misses crucial details for the second component dw/dt which makes the forecast diverge from the ground truth. APHYNITY incorporates a Laplacian term as well as the data-driven Fa thus capturing the damping phenomenon and succeeding in maintaining physically sound results for long term forecasts, unlike Neural ODE.\nExtension to non-stationary dynamics We provide additional results in Appendix G to tackle datasets where physical parameters of the equations vary in each sequence. To this end, we design an encoder able to perform parameter estimation for each sequence. Results show that APHYNITY accommodates well to this setting, with similar trends as those reported in this section.\nAdditional illustrations We give further visual illustrations to demonstrate how the estimation of parameters in incomplete physical models is improved with APHYNITY. For the reaction-diffusion equation, we show that the incomplete parametric PDE underestimates both diffusion coefficients. The difference is visually recognizable between the poorly estimated diffusion (Figure 4(a)) and the true one (Figure 4(c)) while APHYNITY gives a fairly good estimation of those diffusion parameters as shown in Figure 4(b).\n(a) a = 0.33 × 10−3, b = 0.94 × 10−3, diffusion estimated with Param PDE (a, b) (b) a = 0.97 × 10−3, b = 4.75 × 10−3, diffusion estimated with APHYNITY Param PDE (a, b) (c) a = 1.0×10−3, b = 5.0×10−3, true diffusion\nFigure 4: Diffusion predictions using coefficient learned with (a) incomplete physical model Param PDE (a, b) and (b) APHYNITY-augmented Param PDE(a, b), compared with the (c) true diffusion" }, { "heading": "5 CONCLUSION", "text": "In this work, we introduce the APHYNITY framework that can efficiently augment approximate physical models with deep data-driven networks, performing similarly to models for which the underlying dynamics are entirely known. We exhibit the superiority of APHYNITY over data-driven, incomplete physics, and state-of-the-art approaches combining ML and MB methods, both in terms of forecasting and parameter identification on three various classes of physical systems. Besides, APHYNITY is flexible enough to adapt to different approximation levels of prior physical knowledge.\nAn appealing perspective is the applicability of APHYNITY on partially-observable settings, such as video prediction. Besides, we hope that the APHYNITY framework will open up the way to the design of a wide range of more flexible MB/ML models, e.g. in climate science, robotics or reinforcement learning. 
In particular, analyzing the theoretical decomposition properties in a partially-observed setting is an important direction for future work." }, { "heading": "ACKNOWLEDGEMENTS", "text": "Funding (P. Gallinari): Chaires de recherche et d'enseignement en intelligence artificielle (Chaires IA), DL4Clim project." }, { "heading": "A REMINDER ON PROXIMINAL AND CHEBYSHEV SETS", "text": "We begin by giving a definition of proximinal and Chebyshev sets, taken from (Fletcher & Moors, 2014):

Definition 1. A proximinal set of a normed space $(E, \|\cdot\|)$ is a subset $C \subset E$ such that every $x \in E$ admits at least one nearest point in $C$.

Definition 2. A Chebyshev set of a normed space $(E, \|\cdot\|)$ is a subset $C \subset E$ such that every $x \in E$ admits a unique nearest point in $C$.

Proximinality reduces to a compactness condition in finite-dimensional spaces. In general, it is a weaker one: boundedly compact sets verify this property, for example.

In Euclidean spaces, Chebyshev sets are simply the closed convex subsets. Whether all Chebyshev sets are closed convex sets in infinite-dimensional Hilbert spaces is still an open question. In general, there exist examples of non-convex Chebyshev sets, a famous one being presented in (Johnson, 1987) for a non-complete inner-product space.

Given the importance of this topic in approximation theory, finding necessary conditions for a set to be Chebyshev and studying the properties of those sets have been the subject of many efforts. Some of those properties are summarized below:

• The metric projection on a boundedly compact Chebyshev set is continuous.
• If the norm is strict, every closed convex space, in particular any finite-dimensional subspace, is Chebyshev.
• In a Hilbert space, every closed convex set is Chebyshev." }, { "heading": "B PROOF OF PROPOSITIONS 1 AND 2", "text": "We prove the following result, which implies both propositions in the article:

Proposition 3. The optimization problem

$$\min_{F_p \in \mathcal{F}_p, F_a \in \mathcal{F}} \|F_a\| \quad \text{subject to} \quad \forall X \in \mathcal{D}, \forall t,\ \frac{\mathrm{d}X_t}{\mathrm{d}t} = (F_p + F_a)(X_t) \qquad (5)$$

is equivalent to a metric projection onto $\mathcal{F}_p$. If $\mathcal{F}_p$ is proximinal, Eq. (5) admits a minimizing pair. If $\mathcal{F}_p$ is Chebyshev, Eq. (5) admits a unique minimizing pair, in which $F_p$ is the metric projection.

Proof. The idea is to reconstruct the full functional from the trajectories of $\mathcal{D}$. By definition, $A$ is the set of points reached by trajectories in $\mathcal{D}$, so that

$$A = \{x \in \mathbb{R}^d \mid \exists X_\cdot \in \mathcal{D}, \exists t,\ X_t = x\}$$

Then let us define a function $F_\mathcal{D}$ in the following way: for $a \in A$, we can find $X_\cdot \in \mathcal{D}$ and $t_0$ such that $X_{t_0} = a$. Differentiating $X$ at $t_0$, which is possible by definition of $\mathcal{D}$, we take

$$F_\mathcal{D}(a) = \left.\frac{\mathrm{d}X_t}{\mathrm{d}t}\right|_{t=t_0}$$

For any $(F_p, F_a)$ satisfying the constraint in Eq. (5), we then have $(F_p + F_a)(a) = \mathrm{d}X_t/\mathrm{d}t|_{t_0} = F_\mathcal{D}(a)$ for all $a \in A$. Conversely, any pair $(F_p, F_a) \in \mathcal{F}_p \times \mathcal{F}$ with $F_p + F_a = F_\mathcal{D}$ verifies the constraint.

Thus we have the equivalence between Eq. (5) and the metric projection formulated as

$$\underset{F_p \in \mathcal{F}_p}{\text{minimize}}\ \|F_\mathcal{D} - F_p\| \qquad (6)$$

If $\mathcal{F}_p$ is proximinal, the projection problem admits a solution, which we denote $F_p^\star$. Taking $F_a^\star = F_\mathcal{D} - F_p^\star$, we have $F_p^\star + F_a^\star = F_\mathcal{D}$, so that $(F_p^\star, F_a^\star)$ verifies the constraint of Eq. (2). Moreover, if some $(F_p, F_a)$ satisfies the constraint of Eq. (2), we have $F_p + F_a = F_\mathcal{D}$ by what was shown above, and $\|F_a\| = \|F_\mathcal{D} - F_p\| \ge \|F_\mathcal{D} - F_p^\star\|$ by definition of $F_p^\star$. This shows that $(F_p^\star, F_a^\star)$ is minimal.

Moreover, if $\mathcal{F}_p$ is a Chebyshev set, by uniqueness of the projection, if $F_p \ne F_p^\star$ then $\|F_a\| > \|F_a^\star\|$. Thus the minimal pair is unique." }, { "heading": "C PARAMETER ESTIMATION IN INCOMPLETE PHYSICAL MODELS", "text": "Classically, when a set $\mathcal{F}_p \subset \mathcal{F}$ summarising the most important properties of a system is available, this gives a simplified model of the true dynamics, and the adopted problem is then to fit the trajectories using this model as well as possible, solving:

$$\underset{F_p \in \mathcal{F}_p}{\text{minimize}}\ \mathbb{E}_{X \sim \mathcal{D}}\,L(\tilde{X}_{X_0}, X) \quad \text{subject to} \quad \forall g \in I,\ \tilde{X}^g_0 = g \ \text{and}\ \forall t,\ \frac{\mathrm{d}\tilde{X}^g_t}{\mathrm{d}t} = F_p(\tilde{X}^g_t) \qquad (7)$$

where $L$ is a discrepancy measure between trajectories. Recall that $\tilde{X}_{X_0}$ is the resulting trajectory of an ODE solver taking $X_0$ as initial condition. In other words, we try to find a function $F_p$ which gives trajectories as close as possible to the ones from the dataset. While estimation of the function becomes easier, there is then a residual part which is left unexplained, and this can be a non-negligible issue in at least two ways:

• When $F \notin \mathcal{F}_p$, the loss is strictly positive at the minimum. This means that reducing the space of functions $\mathcal{F}_p$ makes us lose in terms of accuracy.
• The obtained function $F_p$ might not even be the most meaningful function from $\mathcal{F}_p$, as it would try to capture phenomena which are not explainable with functions in $\mathcal{F}_p$, thus giving the wrong bias to the calculated function. For example, if one is considering a dampened periodic trajectory where only the period can be learned in $\mathcal{F}_p$ but not the dampening, the estimated period will account for the dampening and will thus be biased.

This is confirmed in the paper in Section 4: the incomplete physical models augmented with APHYNITY obtain different, and experimentally better, physical identification results than the physical models alone.

Let us compare our approach with this one on the linearized damped pendulum to show how estimates of physical parameters can differ. The equation is the following:

$$\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} + \omega_0^2\theta + \alpha\frac{\mathrm{d}\theta}{\mathrm{d}t} = 0$$

We take the same notations as in the article and parametrize the simplified physical models as:

$$F_p^a : X \mapsto \left(\frac{\mathrm{d}\theta}{\mathrm{d}t},\ -a\theta\right)$$

where $a > 0$ corresponds to $\omega_0^2$. The corresponding solution for an initial state $X_0$, which we denote $X^a$, can then be written explicitly as:

$$\theta^a_t = \theta_0 \cos(\sqrt{a}\,t)$$

Let us consider damped pendulum solutions $X$ written as:

$$\theta_t = \theta_0 e^{-t} \cos t$$

which corresponds to:

$$F : X \mapsto \left(\frac{\mathrm{d}\theta}{\mathrm{d}t},\ -2\left(\theta + \frac{\mathrm{d}\theta}{\mathrm{d}t}\right)\right)$$

It is then easy to see that the estimate of $a$ with the physical model alone can be obtained by minimizing:

$$\int_0^T |e^{-t}\cos t - \cos(\sqrt{a}\,t)|^2\,\mathrm{d}t$$

This expression depends on $T$; thus, depending on the chosen time interval and on the way the integral is discretized, it will almost always give biased estimates. In other words, the estimated value of $a$ will not give us the desired solution $t \mapsto \cos t$. On the other hand, for a given $a$, in the APHYNITY framework, the residual must be equal to:

$$F_r^a : X \mapsto \left(0,\ (a-2)\theta - 2\frac{\mathrm{d}\theta}{\mathrm{d}t}\right)$$

in order to satisfy the fitting constraint. Here $a$ corresponds to $1 + \omega_0^2$, not to $\omega_0^2$ as in the simplified case. Minimizing its norm, we obtain $a = 2$, which gives us the desired solution:

$$\theta_t = \theta_0 e^{-t} \cos t$$

with the right period."
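To complement this analytic example, here is a small standalone numerical check (an assumed setup, not the authors' code): the trajectory-fitting estimate of $a$ drifts with the horizon $T$, while minimizing the residual norm $\|(a-2)\theta - 2\,\mathrm{d}\theta/\mathrm{d}t\|$ over a grid of states, where $\theta$ and $\mathrm{d}\theta/\mathrm{d}t$ vary independently, pins $a = 2$.

```python
import numpy as np

a_grid = np.linspace(0.25, 4.0, 1501)

def fit_alone(T, n=4000):
    # argmin_a of int_0^T |e^{-t} cos t - cos(sqrt(a) t)|^2 dt (the objective above)
    t = np.linspace(0.0, T, n)
    target = np.exp(-t) * np.cos(t)
    errs = [np.trapz((target - np.cos(np.sqrt(a) * t)) ** 2, t) for a in a_grid]
    return a_grid[int(np.argmin(errs))]

def residual_norm_estimate(n=201):
    # argmin_a of ||(a-2) theta - 2 omega||^2 over a symmetric state grid,
    # where theta and omega = dtheta/dt are independent state coordinates.
    theta, omega = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    errs = [np.mean(((a - 2.0) * theta - 2.0 * omega) ** 2) for a in a_grid]
    return a_grid[int(np.argmin(errs))]

print([fit_alone(T) for T in (5.0, 10.0, 20.0)])  # horizon-dependent, biased
print(residual_norm_estimate())                    # 2.0, up to grid resolution
```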
}, { "heading": "D DISCUSSION ON SUPERVISION OVER DERIVATIVES", "text": "In order to find the appropriate decomposition (Fp, Fa), we use a trajectory-based error by solving:\nminimize Fp ∈ Fp, Fa ∈ F\n‖Fa‖\nsubject to ∀g ∈ I, X̃g0 = g and ∀t, dX̃gt dt = (Fp + Fa)(X̃ g t ),\n∀X ∈ D, L(X, X̃X0) = 0\n(8)\nIn the continuous setting where the data is available at all times t, this problem is in fact equivalent to the following one:\nminimize Fp ∈ Fp\nEX∼D ∫ ∥∥∥∥dXtdt − Fp(Xt) ∥∥∥∥ (9) where the supervision is done directly over derivatives, obtained through finite-difference schemes. This echoes the proof in Section B of the Appendix where F can be reconstructed from the continuous data.\nHowever, in practice, data is only available at discrete times with a certain time resolution. While Eq. (9) is indeed equivalent to Eq. (8) in the continuous setting, in the practical discrete one, the way error propagates is not anymore: For Eq. (8) it is controlled over integrated trajectories while for Eq. (9) the supervision is over the approximate derivatives of the trajectories from the dataset. We argue that the trajectory-based approach is more flexible and more robust for the following reasons:\n• In Eq. (8), if Fa is appropriately parameterized, it is possible to perfectly fit the data trajectories at the sampled points. • The use of finite differences schemes to estimate F as is done in Eq. (9) necessarily induces\na non-zero discretization error. • This discretization error is explosive in terms of divergence from the true trajectories.\nThis last point is quite important, especially when time sampling is sparse (even though we do observe this adverse effect empirically in our experiments with relatively finely time-sampled trajectories). The following gives a heuristical reasoning as to why this is the case. Let F̃ = F + be the function estimated from the sampled points with an error such that ‖ ‖∞ ≤ α. Denoting X̃ the corresponding trajectory generated by F̃ , we then have, for all X ∈ D:\n∀t, d(X − X̃)t dt = F (Xt)− F (X̃t)− (X̃t)\nIntegrating over [0, T ] and using the triangular inequality as well as the mean value inequality, supposing that F has uniformly bounded spatial derivatives:\n∀t ∈ [0, T ], ‖(X − X̃)t‖ ≤ ‖∇F‖∞ ∫ t\n0\n‖Xs − X̃s‖+ αt\nwhich, using a variant of the Grönwall lemma, gives us the inequality:\n∀t ∈ [0, T ], ‖Xt − X̃t‖ ≤ α\n‖∇F‖∞ (exp(‖∇F‖∞t)− 1)\nWhen α tends to 0, we recover the true trajectories X . However, as α is bounded away from 0 by the available temporal resolution, this inequality gives a rough estimate of the way X̃ diverges from them, and it can be an equality in many cases. This exponential behaviour explains our choice of a trajectory-based optimization.\nE IMPLEMENTATION DETAILS\nWe describe here the three use cases studied in the paper for validating APHYNITY. All experiments are implemented with PyTorch (Paszke et al., 2019) and the differentiable ODE solvers with the adjoint method implemented in torchdiffeq.5" }, { "heading": "E.1 REACTION-DIFFUSION EQUATIONS", "text": "The system is driven by a FitzHugh-Nagumo type PDE (Klaasen & Troy, 1984)\n∂u ∂t = a∆u+Ru(u, v; k), ∂v ∂t = b∆v +Rv(u, v)\nwhere a and b are respectively the diffusion coefficients of u and v, ∆ is the Laplace operator. The local reaction terms are Ru(u, v; k) = u− u3 − k − v,Rv(u, v) = u− v. The state X = (u, v) is defined over a compact rectangular domain Ω = [−1, 1]2 with periodic boundary conditions. Ω is spatially discretized with a 32 × 32 2D uniform square mesh grid. 
The periodic boundary condition is implemented with circular padding around the borders. ∆ is systematically estimated with a 3× 3 discrete Laplace operator.\nDataset Starting from a randomly sampled initial state Xinit ∈ [0, 1]2×32×32, we generate states by integrating the true PDE with fixed a, b, and k in a dataset (a = 1×10−3, b = 5×10−3, k = 5×10−3). We firstly simulate high time-resolution (δtsim = 0.001) sequences with explicit finite difference method. We then extract states every δtdata = 0.1 to construct our low time-resolution datasets.\nWe set the time of random initial state to t = −0.5 and the time horizon to t = 2.5. 1920 sequences are generated, with 1600 for training/validation and 320 for test. We take the state at t = 0 as X0 and predict the sequence until the horizon (equivalent to 25 time steps) in all reaction-diffusion experiments. Note that the sub-sequence with t < 0 are reserved for the extensive experiments in Appendix G.1.\nNeural network architectures Our Fa here is a 3-layer convolution network (ConvNet). The two input channels are (u, v) and two output ones are (∂u∂t , ∂v ∂t ). The purely data-driven Neural ODE uses such ConvNet as its F . The detailed architecture is provided in Table 2. The estimated physical parameters θp in Fp are simply a trainable vector (a, b) ∈ R2+ or (a, b, k) ∈ R3+.\n5https://github.com/rtqichen/torchdiffeq\nOptimization hyperparameters We choose to apply the same hyperparameters for all the reactiondiffusion experiments: Niter = 1, λ0 = 1, τ1 = 1× 10−3, τ2 = 1× 103." }, { "heading": "E.2 WAVE EQUATIONS", "text": "The damped wave equation is defined by\n∂2w\n∂t2 − c2∆w + k∂w ∂t = 0\nwhere c is the wave speed and k is the damping coefficient. The state is X = (w, ∂w∂t ).\nWe consider a compact spatial domain Ω represented as a 64× 64 grid and discretize the Laplacian operator similarly. ∆ is implemented using a 5× 5 discrete Laplace operator in simulation whereas in the experiment is a 3× 3 Laplace operator. Null Neumann boundary condition are imposed for generation.\nDataset δt was set to 0.001 to respect Courant number and provide stable integration. The simulation was integrated using a 4th order finite difference Runge-Kutta scheme for 300 steps from an initial Gaussian state, i.e for all sequence at t = 0, we have:\nw(x, y, t = 0) = C × exp (x−x0) 2+(y−y0) 2 σ2 (10)\nThe amplitude C is fixed to 1, and (x0, y0) = (32, 32) to make the Gaussian curve centered for all sequences. However, σ is different for each sequence and uniformly sampled in [10, 100]. The same δt was used for train and test. All initial conditions are Gaussian with varying amplitudes. 250 sequences are generated, 200 are used for training while 50 are reserved as a test set. In the main paper setting, c = 330 and k = 50. As with the reaction diffusion case, the algorithm takes as input a state Xt0 = (w, dw dt )(t0) and predicts all states from t0 + δt up to t0 + 25δt.\nNeural network architectures The neural network for Fa is a 3-layer convolution neural network with the same architecture as in Table 2. For Fp, the parameter(s) to be estimated is either a scalar c ∈ R+ or a vector (c, k) ∈ R2+. Similarly, Neural ODE networks are build as presented in Table 2.\nOptimization hyperparameters We use the same hyperparameters for the experiments: Niter = 3, λ0 = 1, τ1 = 1× 10−4, τ2 = 1× 102." 
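Table 2 itself is not reproduced in this text, so the following is a plausible PyTorch stand-in for the 3-layer ConvNet used for $F_a$ (the hidden width is an assumption); the circular padding shown here matches the periodic reaction-diffusion domain of E.1.

```python
import torch.nn as nn

class AugmentationConvNet(nn.Sequential):
    # 2 input channels (u, v) -> 2 output channels (du/dt, dv/dt)
    def __init__(self, hidden=16):
        super().__init__(
            nn.Conv2d(2, hidden, 3, padding=1, padding_mode="circular"),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, padding_mode="circular"),
            nn.ReLU(),
            nn.Conv2d(hidden, 2, 3, padding=1, padding_mode="circular"),
        )
```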
}, { "heading": "E.3 DAMPED PENDULUM", "text": "We consider the non-linear damped pendulum problem, governed by the ODE\nd2θ dt2 + ω20 sin θ + α dθ dt = 0\nwhere θ(t) is the angle, ω0 = 2πT0 is the proper pulsation (T0 being the period) and α is the damping coefficient. With the state X = (θ, dθdt ), the ODE can be written as dXt dt = F (Xt) with F : X 7→ (dθdt ,−ω 2 0 sin θ − αdθdt ).\nDataset For each train / validation / test split, we simulate a dataset with 25 trajectories of 40 timesteps (time interval [0, 20], timestep δt = 0.5) with fixed ODE coefficients (T0 = 12, α = 0.2) and varying initial conditions. The simulation integrator is Dormand-Prince Runge-Kutta method of order (4)5 (DOPRI5, Dormand & Prince, 1980). We also add a small amount of white gaussian noise (σ = 0.01) to the state. Note that our pendulum dataset is much more challenging than the ideal frictionless pendulum considered in Greydanus et al. (2019).\nNeural network architectures We detail in Table 3 the neural architectures used for the damped pendulum experiments. All data-driven augmentations for approximating the mapping Xt 7→ F (Xt) are implemented by multi-layer perceptrons (MLP) with 3 layers of 200 neurons and ReLU activation functions (except at the last layer: linear activation). The Hamiltonian (Greydanus et al., 2019; Toth et al., 2020) is implemented by a MLP that takes the state Xt and outputs a scalar estimation of the Hamiltonian H of the system: the derivative is then computed by an in-graph gradient of H with respect to the input: F (Xt) = ( ∂H ∂(dθ/ dt) ,− ∂H dθ ) .\nOptimization hyperparameters The hyperparameters of the APHYNITY optimization algorithm (Niter, λ0, τ1, τ2) were cross-validated on the validation set and are shown in Table 4. All models were trained with a maximum number of 5000 steps with early stopping." }, { "heading": "F ABLATION STUDY", "text": "We conduct ablation studies to show the effectiveness of APHYNITY’s adaptive optimization and trajectory-based learning scheme." }, { "heading": "F.1 ABLATION TO VANILLA MB/ML COOPERATION", "text": "In Table 5, we consider the ablation case with the vanilla augmentation scheme found in Le Guen & Thome (2020); Wang et al. (2019); Mehta et al. (2020), which does not present any proper decomposition guarantee. We observe that the APHYNITY cooperation scheme outperforms this vanilla scheme in all case, both in terms of forecasting performances (e.g. log MSE= -0.35 vs. -3.97 for the Hamiltonian in the pendulum case) and parameter identification (e.g. Err Param=8.4% vs. 2.3 for Param PDE (a, b for reaction-diffusion). It confirms the crucial benefits of APHYNITY’s principled decomposition scheme." }, { "heading": "F.2 DETAILED ABLATION STUDY", "text": "We conduct also two other ablations in Table 6:\n• derivative supervision: in which Fp + Fa is trained with supervision over approximated derivatives on ground truth trajectory, as performed in Greydanus et al. (2019); Cranmer et al. (2020). More precisely, APHYNITY’s Ltraj is here replaced with Lderiv = ‖dXtdt − F (Xt)‖ as in Eq. (9), where dXtdt is approximated by finite differences on Xt. • non-adaptive optim.: in which we train APHYNITY by minimizing ‖Fa‖ without the adaptive optimization of λ shown in Algorithm 1. This case is equivalent to λ = 1, τ2 = 0.\nWe highlight the importance to use a principled adaptive optimization algorithm (APHYNITY algorithm described in paper) compared to a non-adpative optimization: for example in the reactiondiffusion case, log MSE= -4.55 vs. 
-5.10 for Param PDE (a, b). Finally, when the supervision occurs on the derivative, both forecasting and parameter identification results are systematically lower than with APHYNITY’s trajectory based approach: for example, log MSE=-1.16 vs. -4.64 for Param PDE (c) in the wave equation. It confirms the good properties of the APHYNITY training scheme." }, { "heading": "G ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "G.1 REACTION-DIFFUSION SYSTEMS WITH VARYING DIFFUSION PARAMETERS", "text": "In Table 7, we observe that combining data-driven and physical components outperforms the pure data-driven one. When applying APHYNITY to Param PDE (a, b), the prediction precision is significantly improved (log MSE: -1.32 vs. -4.32) with a and b respectively reduced from 55.6% and 54.1% to 11.8% and 18.7%. For complete physics cases, the parameter estimations are also improved for Param PDE (a, b, k) by reducing over 60% of the error of b (3.10 vs. 1.23) and 10% to 20% of the errors of a and k (resp. 1.55/0.59 vs. 1.29/0.39).\nThe extensive results reflect the same conclusion as shown in the main article: APHYNITY improves the prediction precision and parameter estimation. The same decreasing tendency of ‖Fa‖ is also confirmed." }, { "heading": "G.2 ADDITIONAL RESULTS FOR THE WAVE EQUATION", "text": "We conduct an experiment where each sequence is generated with a different wave celerity. This dataset is challenging because both c and the initial conditions vary across the sequences. For each simulated sequence, an initial condition is sampled as described previously, along with a wave celerity c also sampled uniformly in [300, 400]. Finally our initial state is integrated with the same Runge-Kutta scheme. 200 of such sequences are generated for training while 50 are kept for testing.\nFor this experiment, we also use a ConvNet encoder to estimate the wave speed c from 5 consecutive reserved states (w, ∂w∂t ). The architecture of the encoder E is the same as in Table 2 but with 10 input channels. Here also, k is fixed for all sequences and k = 50. The hyper-parameters used in these experiments are the same than described in the Section E.2.\nThe results when multiple wave speeds c are in the dataset are consistent with the one present when only one is considered. Indeed, while prediction performances are slightly hindered, the parameter estimation remains consistent for both c and k. This extension provides elements attesting for the robustness and adaptability of our method to more complex settings. Finally the purely data-driven Neural-ODE fails to cope with the increasing difficulty." }, { "heading": "G.3 DAMPED PENDULUM WITH VARYING PARAMETERS", "text": "To extend the experiments conducted in the paper (section 4) with fixed parameters (T0 = 6, α = 0.2) and varying initial conditions, we evaluate APHYNITY on a much more challenging dataset where we vary both the parameters (T0, α) and the initial conditions between trajectories.\nWe simulate 500/50/50 trajectories for the train/valid/test sets integrated with DOPRI5. For each trajectory, the period T0 (resp. the damping coefficient α) are sampled uniformly in the range [3, 10] (resp. [0, 0.5]).\nWe train models that take the first 20 steps as input and predict the next 20 steps. To account for the varying ODE parameters between sequences, we use an encoder that estimates the parameters based\non the first 20 timesteps. In practice, we use a recurrent encoder composed of 1 layer of 128 GRU units. 
The output of the encoder is fed as additional input to the data-driven augmentation models and to an MLP with final softplus activations to estimate the physical parameters when necessary (ω0 ∈ R+ for Param ODE (ω0), (ω0, α) ∈ R2+ for Param ODE (ω0, α)). In this varying ODE context, we also compare to the state-of-the-art univariate time series forecasting method N-Beats (Oreshkin et al., 2020).\nResults shown in Table 9 are consistent with those presented in the paper. Pure data-driven models Neural ODE (Chen et al., 2018) and N-Beats (Oreshkin et al., 2020) fail to properly extrapolate the pendulum dynamics. Incomplete physical models (Hamiltonian and ParamODE (ω0)) are even worse since they do not account for friction. Augmenting them with APHYNITY significantly and consistently improves forecasting results and parameter identification." } ]
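A sketch of this per-sequence conditioning (hypothetical names; layer sizes follow the text): a single-layer GRU with 128 units reads the first 20 observed steps, and a softplus head outputs strictly positive physical parameters that are then fed to the parametric prior.

```python
import torch
import torch.nn as nn

class ParamEncoder(nn.Module):
    def __init__(self, state_dim=2, hidden=128, n_params=2):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, n_params), nn.Softplus())

    def forward(self, x_obs):              # x_obs: (batch, 20, state_dim)
        _, h = self.gru(x_obs)             # h: (num_layers=1, batch, hidden)
        return self.head(h[-1])            # (batch, n_params), strictly positive

def pendulum_fp(x, params):
    # Param ODE (omega_0, alpha) with per-sequence parameters from the encoder
    omega0, alpha = params[:, :1], params[:, 1:]
    theta, dtheta = x[:, :1], x[:, 1:]
    return torch.cat([dtheta, -omega0 ** 2 * torch.sin(theta) - alpha * dtheta], 1)
```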
2021
Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting
SP:839dcc82412b1e77aa5e3f267ef421dae1bc0cfc
[ "This paper proposes an effective method for managing power grid topology to increase efficiency. They use Transformer attention over a Graph Neural Network as the basic architecture, then propose a hierarchical technique in which the upper level learns to output goal network topologies, which are then implemented by a lower-level policy or a rule-based algorithm. An ablation study reveals that one of the most important components of the algorithm is using an \"afterstate\" representation, which learns a value function for the state after the agent changes the topology, but before the network is affected by random external factors, including supply and demand. " ]
Safe and reliable electricity transmission in power grids is crucial for modern society. It is thus quite natural that there has been a growing interest in the automatic management of power grids, exemplified by the Learning to Run a Power Network Challenge (L2RPN), modeling the problem as a reinforcement learning (RL) task. However, it is highly challenging to manage a real-world scale power grid, mostly due to the massive scale of its state and action space. In this paper, we present an off-policy actor-critic approach that effectively tackles the unique challenges in power grid management by RL, adopting the hierarchical policy together with the afterstate representation. Our agent ranked first in the latest challenge (L2RPN WCCI 2020), being able to avoid disastrous situations while maintaining the highest level of operational efficiency in every test scenario. This paper provides a formal description of the algorithmic aspect of our approach, as well as further experimental studies on diverse power grids.
[ { "affiliations": [], "name": "Deunsol Yoon" }, { "affiliations": [], "name": "Sunghoon Hong" }, { "affiliations": [], "name": "Byung-Jun Lee" }, { "affiliations": [], "name": "Kee-Eung Kim" } ]
[ { "authors": [ "Lucas Agussurja", "Shih-Fen Cheng", "Hoong Chuin Lau" ], "title": "A state aggregation approach for stochastic multiperiod last-mile ride-sharing problems", "venue": "Transp. Sci.,", "year": 2019 }, { "authors": [ "M. Alhazmi", "P. Dehghanian", "S. Wang", "B. Shinde" ], "title": "Power grid optimal topology control considering correlations of system uncertainties", "venue": "Technical Conference (I CPS),", "year": 2019 }, { "authors": [ "Andrew G. Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete Event Dynamic Systems,", "year": 2003 }, { "authors": [ "Peter Dayan", "Geoffrey E. Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in Neural Information Processing Systems 5, [NIPS Conference],", "year": 1992 }, { "authors": [ "Payman Dehghanian", "Yaping Wang", "Gurunath Gurrala", "Erick Moreno-Centeno", "Mladen Kezunovic" ], "title": "Flexible implementation of power system corrective topology control", "venue": "Electric Power Systems Research, 128:79–89,", "year": 2015 }, { "authors": [ "A.L. Dimeas", "N.D. Hatziargyriou" ], "title": "Multi-agent reinforcement learning for microgrids", "venue": "In IEEE PES General Meeting, pp", "year": 2010 }, { "authors": [ "J. Duan", "D. Shi", "R. Diao", "H. Li", "Z. Wang", "B. Zhang", "D. Bian", "Z. Yi" ], "title": "Deep-reinforcementlearning-based autonomous voltage control for power grid operations", "venue": "IEEE Transactions on Power Systems,", "year": 2020 }, { "authors": [ "D. Ernst", "M. Glavic", "L. Wehenkel" ], "title": "Power systems stability control: reinforcement learning framework", "venue": "IEEE Transactions on Power Systems,", "year": 2004 }, { "authors": [ "E.B. Fisher", "R.P. O’Neill", "M.C. Ferris" ], "title": "Optimal transmission switching", "venue": "IEEE Transactions on Power Systems,", "year": 2008 }, { "authors": [ "J.D. Fuller", "R. Ramasra", "A. Cha" ], "title": "Fast heuristics for transmission-line switching", "venue": "IEEE Transactions on Power Systems,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sangwook Han" ], "title": "Control Method of Buses and Lines Using Reinforcement Learning for Short Circuit", "venue": "Current Reduction. Sustainability,", "year": 2020 }, { "authors": [ "Haochen Hua", "Yuchao Qin", "Chuantong Hao", "Junwei Cao" ], "title": "Optimal energy management strategies for energy Internet via deep reinforcement learning approach", "venue": "Applied Energy,", "year": 2019 }, { "authors": [ "Q. Huang", "R. Huang", "W. Hao", "J. Tan", "R. Fan", "Z. Huang" ], "title": "Adaptive power system emergency control using deep reinforcement learning", "venue": "IEEE Transactions on Smart Grid,", "year": 2020 }, { "authors": [ "Jiechuan Jiang", "Chen Dun", "Tiejun Huang", "Zongqing Lu" ], "title": "Graph convolutional reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Adrian Kelly", "Aidan O’Sullivan", "Patrick de Mars", "Antoine Marot" ], "title": "Reinforcement learning for electricity network", "venue": "operation. ArXiv,", "year": 2020 }, { "authors": [ "A. Khodaei", "M. 
Shahidehpour" ], "title": "Transmission switching in security-constrained unit commitment", "venue": "IEEE Transactions on Power Systems,", "year": 1937 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Tu Lan", "Jiajun Duan", "Bei Zhang", "Di Shi", "Zhiwei Wang", "Ruisheng Diao", "Xiaohu Zhang" ], "title": "Aibased autonomous line flow control via topology adjustment for maximizing time-series atcs", "venue": null, "year": 1911 }, { "authors": [ "Andrew Levy", "Robert Platt", "Kate Saenko" ], "title": "Hierarchical reinforcement learning with hindsight", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A. Marot", "B. Donnot", "S. Tazi", "P. Panciatici" ], "title": "Expert system for topological remedial action discovery in smart grids", "venue": "IET Conference Proceedings,", "year": 2018 }, { "authors": [ "Antoine Marot", "Benjamin Donnot", "Camilo Romero", "Luca Veyrin-Forrer", "Marvin Lerousseau", "Balthazar Donon", "Isabelle Guyon" ], "title": "Learning to run a power network challenge for training topology controllers", "venue": "The Power Systems Computation Conference,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "Proceedings of The 33th International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang (Shane) Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang (Shane) Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": "Proceedings of the Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Emilio Parisotto", "H Francis Song", "Jack W Rae", "Razvan Pascanu", "Caglar Gulcehre", "Siddhant M Jayakumar", "Max Jaderberg", "Raphael Lopez Kaufman", "Aidan Clark", "Seb Noury" ], "title": "Stabilizing transformers for reinforcement learning", "venue": "In Proceedings of The 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ronald Parr", "Stuart Russell" ], "title": "Reinforcement learning with hierarchies of machines", "venue": "In Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10,", "year": 1998 }, { "authors": [ "Warren B. 
Powell" ], "title": "Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics)", "venue": null, "year": 2007 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Sanket Shah", "Meghna Lowalekar", "Pradeep Varakantham" ], "title": "Neural approximate dynamic programming for on-demand ride-pooling", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "David Silver", "Aja Huang", "Chris J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of Go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "Satinder Singh", "Dimitri Bertsekas" ], "title": "Reinforcement learning for dynamic channel allocation in cellular telephone systems", "venue": "In Proceedings of the 9th International Conference on Neural Information Processing Systems,", "year": 1996 }, { "authors": [ "Medha Subramanian", "Jan Viebahn", "Simon Tindemans", "Benjamin Donnot", "Antoine Marot" ], "title": "Exploring grid topology reconfiguration using a simple deep reinforcement learning approach", "venue": "CoRR, abs/2011.13465,", "year": 2020 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "A Bradford Book,", "year": 2018 }, { "authors": [ "Richard S. Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "Marlin W. Ulmer", "Barrett W. Thomas", "Dirk C. Mattfeld" ], "title": "Preemptive depot returns for dynamic same-day delivery", "venue": "EURO Journal on Transportation and Logistics,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "A.N. Venkat", "I.A. Hiskens", "J.B. Rawlings", "S.J. 
Wright" ], "title": "Distributed mpc strategies with application to power system automatic generation control", "venue": "IEEE Transactions on Control Systems Technology,", "year": 2008 }, { "authors": [ "Tingwu Wang", "Renjie Liao", "Jimmy Ba", "Sanja Fidler" ], "title": "Nervenet: Learning structured policy with graph neural networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In Proceedings of the 33th International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vinícius Flores Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David P. Reichert", "Timothy P. Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter W. Battaglia" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tianren Zhang", "Shangqi Guo", "Tian Tan", "Xiaolin Hu", "Feng Chen" ], "title": "Generating adjacencyconstrained subgoals in hierarchical reinforcement learning", "venue": "NeurIPS", "year": 2020 }, { "authors": [ "Z. Zhang", "D. Zhang", "R.C. Qiu" ], "title": "Deep reinforcement learning for power system applications: An overview", "venue": "CSEE Journal of Power and Energy Systems,", "year": 2020 }, { "authors": [ "C. Zhao", "U. Topcu", "N. Li", "S. Low" ], "title": "Design and stability of load-side primary frequency control in power systems", "venue": "IEEE Transactions on Automatic Control,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "The power grid, an interconnected network for delivering electricity from producers to consumers, has become an essential component of modern society. For a safe and reliable transmission of electricity, it is constantly monitored and managed by human experts in the control room. Therefore, there has been growing interest in automatically controlling and managing the power grid. As we make the transition to sustainable power sources such as solar, wind, and hydro (Rolnick et al., 2019), power grid management is becoming a very complex task beyond human expertise, calling for data-driven optimization.\nYet, automatic control of a large-scale power grid is a challenging task since it requires complex yet reliable decision-making. While most approaches have focused on controlling the generation or the load of electricity (Venkat et al., 2008; Zhao et al., 2014; Huang et al., 2020), managing the power grid through the topology control (changing the connection of power lines and bus assignments in substations) would be the ultimate goal. By reconfiguring the topology of the power grid, it can reroute the flow of electricity, which enables the transmission of electricity from the producers to consumers efficiently and thus prevent surplus production. There are preliminary studies of the grid topology control in the power systems literature (Fisher et al., 2008; Khodaei & Shahidehpour, 2010), but due to its large, combinatorial, and non-linear nature, these methods do not provide a practical solution to be deployed to the real-world.\nOn the other hand, deep Reinforcement Learning (RL) has shown significant progress in complex sequential decision-making tasks, such as Go (Silver et al., 2016) and arcade video games (Mnih et al., 2015), purely from data. RL is also perceived as a promising candidate to address the challenges of power grid management (Ernst et al., 2004; Dimeas & Hatziargyriou, 2010; Duan et al., 2020; Zhang et al., 2020; Hua et al., 2019). In this regard, we present Semi-Markov\n∗ : Equal contribution\nAfterstate Actor-Critic (SMAAC), an RL algorithm that effectively tackles the challenges in power grid management.\nOne of the main challenges in RL for the real-world scale power grid management lies in its massive state and action space. We address the problem by adopting a goal-conditioned hierarchical policy with the afterstate representation. First, we represent state-action pairs as afterstates (Sutton & Barto, 2018), the state after the agent has made its decision but before the environment has responded, to efficiently cover the large state-action space. The afterstate representation can be much more succinct than the state-action pair representation when multiple state-action pairs are leading to an identical afterstate. For example, in the case of controlling the topology of the power grid, a pair of a current topology and an action of topology modification can be represented as a reconfigured topology, since the topology is deterministically reconfigured by the action. Then the next state is determined by random external factors, such as the change of power demands in load. Second, we extend this idea to a hierarchical framework, where the high-level policy produces a desirable topology under the current situation, and the low-level policy takes care of figuring out an appropriate sequence of primitive topology changes. 
Our algorithm ranked first in the latest international competition on training RL agents to manage power grids, Learning To Run a Power Network (L2RPN) WCCI 2020. In this paper, we further evaluate our approach using Grid2Op, the open-source power grid simulation platform used in the competition, by training and testing the agent on 3 different sizes of power grids. We show that the agent significantly outperforms all of the baselines in all grids except for the small grid, where the task was easy for all algorithms." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 GRID2OP ENVIRONMENT", "text": "We briefly overview Grid2Op, the open-source simulation platform for power grid operation used in the L2RPN WCCI 2020 challenge. Grid2Op models realistic concepts found in real-world operations and is used to test advanced control algorithms that follow real-world power system operational constraints and distributions (Kelly et al., 2020).\nThe power grid is essentially a graph composed of nodes corresponding to substations that are connected to loads, generators, and power lines. The generator produces electricity, the load consumes electricity, and the power line transmits electricity between substations. The substation can be regarded as a router in the network, which determines where to transmit electricity. Grid2Op considers 2 conductors per substation, known as the double busbar system. This means that the elements connected to a substation, i.e. loads, generators, and power lines, can be assigned to one of the two busbars, and the power travels only over the elements on the same busbar. Thus, each substation can be regarded as being split into two nodes.\nThe state of the power grid consists of various features such as the topology configuration (the connectivity of each power line and the bus assignment in each substation), as well as the amount of power provided by each generator, required by each load, transmitted in each line, and so on. The power supplied by generators and demanded by loads changes over time, and the power transmitted in lines also changes according to the current topology configuration together with supply and demand. In addition, each line has its own capacity to transmit electricity and can be automatically disconnected when there is an overflow of electricity.\nThe agent can apply actions on substations and lines to manage the power grid. The action on a substation, called bus assignment, assigns the elements in the substation to a busbar. The action on a line, called line switch, disconnects a line (both ends of the line are assigned to neither bus) or reconnects a disconnected line. The agent is allowed to perform one line switch or one bus assignment action per step, and cannot successively perform actions on the same line or substation.\nThe power grid is simulated for a given period, typically for several days at a 5-minute interval. The simulation can terminate prematurely when the agent fails to manage the grid, i.e. (1) the amount of power required by loads is not delivered, which can happen if there are too many disconnected lines, or (2) a disconnected subgraph is formed as a result of applying an action. This is reflected in the failure penalty when measuring the performance of the agent, given by the number of remaining simulation time steps upon termination. Another important performance metric is the power loss penalty, given by the amount of power lost during transmission due to resistive losses. Thus, the goal of the agent is to operate the power grid both safely and efficiently by minimizing the failure penalty and the power loss penalty.
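To make the double-busbar action space above concrete, the following minimal sketch (plain Python; the helper names are ours, not Grid2Op's API) enumerates the bus assignments available at a single substation and counts the resulting actions; a substation with k connected elements admits 2^k assignments.

```python
from itertools import product

def bus_assignments(num_elements):
    """All bus assignments of one double-busbar substation: each connected
    element (line end, generator, or load) goes to busbar 0 or busbar 1."""
    return list(product((0, 1), repeat=num_elements))

def num_bus_actions(substation_sizes):
    """Total number of bus-assignment actions: sum_i 2**Sub(i)."""
    return sum(2 ** k for k in substation_sizes)

# A substation with 3 elements admits 2**3 = 8 possible assignments.
assert len(bus_assignments(3)) == 8
```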
Figure 1 illustrates how the actions affect the state of the power grid, using the bus assignment action as an example. The simulator provides 3 different sizes of power grids: (1) IEEE-5, the power grid with 5 substations, (2) IEEE-14, the power grid with 14 substations, and (3) L2RPN WCCI 2020, the power grid with 36 substations. See Appendix A.1 for more details on the environment." }, { "heading": "2.2 AFTERSTATES IN RL", "text": "Grid2Op provides a natural framework to use RL for operating power grids: we assume a Markov decision process (MDP) defined by (S, A, p, r, γ) to represent the RL task, where S is the state space, A is the action space, p(s_{t+1}|s_t, a_t) is the (unknown) state transition probability, r_t = r(s_t, a_t) ∈ R is the immediate reward, and γ ∈ (0, 1) is the discount factor. We assume learning a stochastic policy π(a_t|s_t), which is a probability distribution over actions conditioned on states. The state and action value functions under π are V^π(s) = E_π[Σ_{l≥0} γ^l r_{t+l} | s_t = s] and Q^π(s, a) = E_π[Σ_{l≥0} γ^l r_{t+l} | s_t = s, a_t = a], respectively.\nAs shown in Figure 1 in the previous section, the transition in Grid2Op comprises two steps: the topological change that results directly from the action, and then the rest of the state changes that arise from exogenous events. This motivates the use of the afterstate (Sutton & Barto, 2018), also known as the post-decision state in Approximate Dynamic Programming (ADP) (Powell, 2007), which refers to the state after the agent has made its decision but before the arrival of new information.\nLet us define the state S as (T, X), where T is the part of the state that is deterministically changed by an action, and X is the part that is independent of, or only indirectly affected by, an action. Following the modeling in Powell (2007), the transition is decomposed into two parts using f^A and f^E:\ns_t^{a_t} = [τ_{t+1}, x_t] = f^A([τ_t, x_t], a_t),   s_{t+1} = [τ_{t+1}, x_{t+1}] = f^E(s_t^{a_t}, e_{t+1}),   (1)\nwhere τ_{t+1}, the deterministic part of s_{t+1}, is given by the function f^A(s_t, a_t), and x_{t+1}, the stochastic part, is given by the function f^E(s_t^{a_t}, e_{t+1}), where e_{t+1} is the source of the randomness in the transition, sampled from some unknown distribution p_E. Note that e_{t+1} itself can be included as a part of x_{t+1}.\nUsing the afterstate has a number of advantages. For example, if the state and the action spaces are very large but the set of unique afterstates is relatively small, learning the value function of afterstates is much more efficient. The value of an afterstate s^a under policy π is defined as V^π(s^a) = E_π[Σ_{l≥0} γ^l r_{t+l} | s^a = f^A(s_t, a_t)], and its recursive form can be written as:\nV^π(s_t^{a_t}) = E_{e_{t+1}∼p_E, a_{t+1}∼π}[ r(s_t, a_t) + γ V^π(f^A(s_{t+1}, a_{t+1})) | s_{t+1} = f^E(s_t^{a_t}, e_{t+1}) ].   (2)\nThe optimal afterstate value function and the optimal policy can be obtained by iteratively alternating between policy evaluation by Eq. (2) and policy improvement:\nπ_new(s_t) = argmax_{a_t} [ V^{π_old}(f^A(s_t, a_t)) ].   (3)\nNote that we cannot gain much from the afterstate representation when using individual power grid operations as actions, since each of them results in a unique change of the grid topology. However, the afterstate becomes very powerful when we consider sequences of grid operations as the action space, since permutations of a sequence result in identical changes of the final topology.
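The two-step decomposition of Eq. (1) can be sketched as follows (a schematic illustration only: `apply_topology_action` and `update_features` are hypothetical placeholders for the deterministic topology change and the power-flow response of Grid2Op).

```python
def f_A(state, action):
    """Deterministic step: the afterstate keeps the new topology tau'
    together with the not-yet-updated features x (Eq. 1)."""
    tau, x = state
    tau_next = apply_topology_action(tau, action)  # hypothetical helper
    return (tau_next, x)                           # afterstate s_t^{a_t}

def f_E(afterstate, e):
    """Stochastic step: exogenous events e (e.g. demand changes) update x."""
    tau_next, x = afterstate
    x_next = update_features(tau_next, x, e)       # hypothetical helper
    return (tau_next, x_next)                      # next state s_{t+1}
```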
" }, { "heading": "3 APPROACH", "text": "We first present the state space, the action space, and the reward function modeled in our approach. Then we briefly explain the unique challenge in Grid2Op and describe our approach to tackle it. Finally, we describe the overall architecture of the RL agent." }, { "heading": "3.1 MODELING STATES, ACTIONS AND REWARDS", "text": "State We define the state S in the Grid2Op environment as (T, X), where T is the set of topology configurations (deterministically changed by the action) and X consists of various features such as power demands and supplies (independent of the action), the power transmitted in each line (indirectly affected by the action), and so on. Details about the state features used in this work are provided in Appendix A.1.\nAction We only consider bus assignment actions in our agent: we assume that it is desirable to have as many lines connected as possible, since overflow is less likely to occur when there are many routes for power delivery. Thus, for line switch actions, we simply follow the rule of always reconnecting power lines whenever they get disconnected due to overflow.\nLet us define the number of substations as N_sub and the number of elements in the i-th substation as Sub(i). Each line end, generator, and load in a substation can be assigned to one of two busbars, so the total number of actions is |A| = Σ_{i=0}^{N_sub} 2^{Sub(i)} (i.e. each action chooses one of the substations and performs a bus assignment therein). Following the approach taken by the winner of the previous challenge, L2RPN 2019 (Lan et al., 2019), we make our agent act (i.e. intervene) only in hazardous situations. The condition for being hazardous is determined by the existence of a line in which the power flow is larger than a threshold hyperparameter. This naturally yields a semi-MDP setting for RL (Sutton et al., 1999).\nReward We define the reward at intermediate time steps to be the efficiency of the power grid, defined by the ratio of the total load to the total production, i.e. load_t/prod_t. Note that if the ratio becomes greater than 1, the episode terminates with a large failure penalty since the production does not meet the demand.
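As a minimal sketch (with our own helper names, not Grid2Op's API), the reward and the hazard condition described above amount to:

```python
DELTA_H = 0.9  # hazard threshold on the line usage rate (rho)

def reward(total_load, total_prod):
    """Grid efficiency: ratio of total load to total production.
    A ratio above 1 means production does not meet demand (failure)."""
    return total_load / total_prod

def is_hazardous(rhos):
    """The agent intervenes only when some line flow exceeds the threshold."""
    return any(rho > DELTA_H for rho in rhos)
```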
" }, { "heading": "3.2 ACTOR-CRITIC ALGORITHM WITH AFTERSTATES", "text": "The main challenge of the Grid2Op environment is the large state and action spaces. For the power grid with 36 substations used in the L2RPN WCCI 2020 competition, there are about 70,000 actions that yield unique changes to the topology. We address this problem by adopting the actor-critic architecture, where the policy and the value function are represented by function approximators. In addition, we use the afterstate representation to capture the many state-action pairs that lead to an identical afterstate, leveraging the transition structure shown in Figure 1. For notational simplicity, all derivations in this section assume the MDP setting, which is extended to the semi-MDP setting in the next section.\nWe use function approximators for the afterstate value function V_ψ(s_t^{a_t}) and the policy π_θ(a_t|s_t), parameterized by ψ and θ respectively. The actor is trained to maximize J_π and the critic to minimize L_V:\nJ_π(θ) = E_{s_t∼D, a_t∼π_θ(·|s_t)}[ V_ψ(f^A(s_t, a_t)) ],   (4)\nL_V(ψ) = E_{(s_t^{a_t}, s_{t+1})∼D}[ ( V_ψ(s_t^{a_t}) − r(s_t, a_t) − γ E_{a_{t+1}∼π_θ(·|s_{t+1})}[ V_ψ(f^A(s_{t+1}, a_{t+1})) ] )^2 ],   (5)\nwhere the replay buffer D stores the transition tuple [s_t, s_t^{a_t}, r(s_t, a_t), s_{t+1}] for off-policy learning. The actor and the critic are trained using Soft Actor-Critic (SAC) (Haarnoja et al., 2018). Note that the critic learns a value function over an afterstate with a reconfigured topology, rather than over a state-action pair, which is more succinct. Although the above equation defines a state-value critic, we can still train off-policy since it is essentially an action-value critic (i.e. an afterstate is defined by a state and an action).\nFurthermore, we aim to apply the gradient estimator through a reparameterization trick similar to Haarnoja et al. (2018), since it is known to have lower variance than the likelihood-ratio gradient estimator, resulting in stable learning. In order to update the actor via the reparameterization trick, the transition f^A must be differentiable, but it is not straightforward to define f^A, which maps from bus assignment actions to topology configurations, as a differentiable formula. In the next section, we mitigate this problem by re-defining the action space.
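A minimal PyTorch-style sketch of Eqs. (4)-(5) follows (the `policy`, `V`, and `f_A` callables are hypothetical, and the SAC entropy terms are omitted):

```python
import torch

def critic_loss(V, f_A, policy, batch, gamma):
    """Eq. (5): regress V(afterstate) onto r + gamma * V(f_A(s', a'))."""
    s, s_after, r, s_next = batch
    with torch.no_grad():
        a_next = policy(s_next).sample()
        target = r + gamma * V(f_A(s_next, a_next))
    return ((V(s_after) - target) ** 2).mean()

def actor_loss(V, f_A, policy, s):
    """Eq. (4): ascend V(f_A(s, a)); in the hierarchical version below, f_A
    simply copies the goal, which makes the reparameterized gradient valid."""
    a = policy(s).rsample()  # reparameterization trick
    return -V(f_A(s, a)).mean()
```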
" }, { "heading": "3.3 EXTENSION TO GOAL-CONDITIONED HIERARCHICAL FRAMEWORK", "text": "It is very challenging to take exploratory actions in the Grid2Op environment: if the agent takes random actions, the power grid fails within a few time steps. For example, an agent with a random policy mostly fails in less than 10 time steps, whereas an agent with the no-op policy (naively maintaining the initial grid topology throughout) survives approximately 500 time steps on average. Thus, it is very difficult for the agent to explore diverse grid topology configurations that are significantly different from the initial ones, and thereby a random exploration policy (e.g. ε-greedy) often gets stuck at bad local optima that execute only one or two actions. Therefore, more structured exploration is a key to successful training.\nTo this end, we extend the afterstate actor-critic algorithm to a two-level hierarchical decision model by defining the goal topology configuration as the high-level action. Specifically, we define the high-level actions as goal topology configurations g ∈ {0, 1}^n, where n = Σ_{i=0}^{N_sub} Sub(i), which are learned by the high-level policy π^h. This leads to the temporally extended afterstate representation, given by s_t^{g_t} = [τ_{t+d} = g_t, x_t] = f^A([τ_t, x_t], g_t), where t denotes the time a hazard occurs and d denotes the time interval until the next hazard occurs. Note that we can now take full advantage of the afterstate representation, since the equivalence of the many different sequences of primitive actions (i.e. individual bus assignment actions) that lead to the identical topology is now captured by the goal topology configuration.\nIn addition, exploration with goal topologies is more effective than with primitive actions, since the policy only needs to focus on where to go, i.e. the desirable topology under the current situation, without needing to care about how to get there, i.e. figuring out a suitable primitive action sequence that would yield the goal topology, with the help of an appropriate low-level policy. Finally, we can now use the reparameterization trick for the actor update in a straightforward manner, since the result of f^A is merely a copy of the action g_t.\nThe replay buffer D stores the transition tuple [s_t, g_t, r_{t:t+d}, s_{t+d}], where r_{t:t+d} = Σ_{t'=t}^{t+d} γ^{t'−t} r_{t'}. The high-level policy can be trained through the objective functions of the actor and the critic, written as:\nJ_π(θ) = E_{g_t∼π^h_θ}[ V_ψ([g_t, x_t]) ],   (6)\nJ_V(ψ) = E_D[ ( V_ψ(s_t^{g_t}) − r_{t:t+d} − γ^d E_{g_{t+d}∼π^h_θ}[ V_ψ([g_{t+d}, x_{t+d}]) ] )^2 ].   (7)\nAs for the low-level policy, it is relatively simple to find the action sequence that changes the current topology into the goal topology: we just need to identify the set of substations that require changes in their bus assignment and make the appropriate reassignments therein. Thus, we take a rule-based approach for the low-level policy, a_t = π^l_rule(s_t, g_t), where the rule determines the order of substations in which to execute bus assignment actions. For example, we could impose a priority on substations such that the substations with the least room in their capacity make their bus reassignment first, because they are the ones requiring the most urgent interventions. In the experiments section, we compare the results of various rules, including a learning-based approach." }, { "heading": "3.4 IMPLEMENTATION", "text": "In order to leverage the interconnection structure of the power grid, we apply graph neural networks (GNN) (Scarselli et al., 2008). As illustrated in Figure 2, given the power grid with n substations, we reshape x_t in the state s_t = [τ_t, x_t], given as a flat vector in Grid2Op, into (M, x̃_t), where M ∈ {0, 1}^{n×n} is the adjacency matrix and x̃_t ∈ R^{n×k} is the node matrix with k features. We adopt the transformer (Vaswani et al., 2017) as the GNN block, where the adjacency matrix M is used for masking out the attention weights of nodes, following the architecture proposed by Parisotto et al. (2020); a minimal sketch is given below. The actor and the critic share the lower layers, consisting of GNN blocks and linear layers. Additionally, we add the entropy of the policy to the objective functions of the actor and the critic, following the SAC formulation. Details of the architecture are provided in Appendix A.2.
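The GNN block can be sketched as a standard transformer layer whose attention is masked by the grid adjacency (a simplified sketch, not the authors' exact architecture; dimensions and layer counts are illustrative):

```python
import torch
import torch.nn as nn

class MaskedGNNBlock(nn.Module):
    """Transformer layer in which a node (substation) attends only to
    itself and its neighbors in the adjacency matrix M."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                nn.Linear(dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, h, adj):
        # attn_mask is True where attention is *disallowed*
        eye = torch.eye(adj.size(-1), device=adj.device)
        mask = (adj + eye) == 0
        a, _ = self.attn(h, h, h, attn_mask=mask)
        h = self.norm1(h + a)
        return self.norm2(h + self.ff(h))
```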
" }, { "heading": "4 RELATED WORKS", "text": "Topology control of the power grid through line switching has been previously studied in the power systems literature. Fisher et al. (2008) and Khodaei & Shahidehpour (2010) solve the optimal transmission switching problem by mixed-integer programming. Since then, several heuristics have been introduced to tackle the computational cost (Fuller et al., 2012; Dehghanian et al., 2015; Alhazmi et al., 2019). Recently, Marot et al. (2018) explored bus assignment, which is more complex than line switching, and presented an algorithm based on expert knowledge, showing the utility of bus assignment. Their algorithm can find remedial bus assignment actions that revert overflow with a high probability of success and acceptable computational time. Han (2020) explores bus and line separation to solve the problem of short-circuit current reduction in power systems through RL. Marot et al. (2020) model power grid management through line switching and bus assignment as an RL task and release an open-source simulation called Grid2Op for power grid management over multi-step time horizons. Additionally, they held the international power grid management competition, the L2RPN 2019 challenge, where IEEE-14 was chosen as the competition environment, and Subramanian et al. (2020) present a simple deep RL approach for IEEE-14.\nThe winner of the L2RPN 2019 challenge (Lan et al., 2019) tackles the problem through pre-training and guided exploration. They collect massive datasets from the simulator, which can restore particular states, and pre-train an agent to generate a good initial policy. For exploration in the large action space, they use guided exploration instead of random exploration, where the agent simulates the top few actions with high action values before performing its action in the environment at every time step. They also design the agent to act only in hazardous situations, and they train it using dueling Deep Q-Networks (DQN) (Wang et al., 2016) and a prioritized replay buffer (Schaul et al., 2016).\nThe afterstate representation has been applied to resource allocation and dynamic routing problems. Singh & Bertsekas (1996) formulate the dynamic channel allocation problem in cellular networks as a dynamic programming problem using the afterstate value function. More recently, there has been research on utilizing the afterstate representation combined with ADP in dynamic vehicle routing problems (Agussurja et al., 2019; Ulmer et al., 2019). Shah et al. (2020) also apply an afterstate-based deep RL method to a ride-pool matching problem. The hierarchical framework has long held the promise of tackling complex RL tasks (Dayan & Hinton, 1992; Parr & Russell, 1998; Barto & Mahadevan, 2003), and one of the prevailing approaches, the goal-conditioned hierarchical framework, has recently achieved significant success in various tasks, such as simulated and real-world quadrupedal manipulation (Nachum et al., 2018a; 2020) and complex navigation (Levy et al., 2019; Zhang et al., 2020). However, to the best of our knowledge, none of these works combines the afterstate representation with a hierarchical framework.\nGNNs have been widely used in deep RL to directly tackle graph-structured problems or to represent the interactions between entities in a state. You et al. (2018) formulate goal-directed graph generation as an MDP and solve the problem of designing molecular structures with specific desired properties through an RL algorithm. Wang et al. (2018) apply GNNs to continuous control by modeling controllable joints as the nodes of a graph and the physical dependencies between joints as edges, to capture the underlying graph structure. Zambaldi et al. (2019) adopt GNNs for a navigation and planning task where complex relational reasoning is required to represent pairwise interactions between objects in a state, and Jiang et al. (2020) adopt them for learning cooperation in multi-agent environments by modeling agents as the nodes of a graph in which they communicate through a GNN." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Our experiments are conducted on the 3 power grids provided by Grid2Op: IEEE-5 (smallest), IEEE-14, and L2RPN WCCI 2020 (largest, used in the challenge). Details of each grid are provided in Table 1. Each grid has a set of scenarios, and each scenario specifies the variations in the simulation, such as the power supplies and demands at each time step. The length of each scenario is 864 time steps, which corresponds to 3 days at a 5-minute time resolution.\nSince Grid2Op is relatively new to the research community, there are few RL methods applied to grid topology control.
Therefore, we implement 3 baselines for performance comparison to verify the effectiveness of our method: (1) DDQN (Dueling DQN) has a similar architecture to the last winner of the challenge and learns the action-value function over the primitive action space. (2) SAC is similar to DDQN but utilizes maximum-entropy exploration following the SAC algorithm. (3) SMAAC\\AS is SMAAC without the afterstate representation, where we use an action-value critic Q^π(s, g). Thus, DDQN and SAC assume the MDP setting with primitive actions, while SMAAC\\AS assumes the goal-conditioned semi-MDP setting but without the afterstate representation. We additionally compare our approach with the 3rd-placed participant on the L2RPN WCCI 2020 grid, (4) YZM, the only agent with publicly available code.1 This agent heuristically selects 596 actions among the primitive actions in advance and trains the agent with the reduced action space using Asynchronous Advantage Actor-Critic (Mnih et al., 2016). YZM additionally trains a backup agent with another set of actions and invokes the backup agent when the base agent could lead to overflow or termination, by using the simulation function.2\n1Their algorithm is designed specifically for the L2RPN WCCI 2020 grid and cannot be applied to other grids.\nFor a fair comparison, all baselines except for YZM encode the input state through the same GNN architecture, and the agents get activated only in hazardous situations. The details of the implementation are provided in Appendix A.3 and the code is provided at https://github.com/sunghoonhong/SMAAC." }, { "heading": "5.2 RESULTS", "text": "Figure 3 shows the total average scaled score of evaluation rollouts on the set of 10 validation scenarios during training: the scores are scaled to the range [-100, 100], with the return of the no-op agent scaled and translated to 0, indicating how much better the agent manages the power grid than the no-op agent in terms of safety and power efficiency. Each algorithm was trained and evaluated for 3 runs, over which the scores are averaged.\nAs shown in Figure 3, all algorithms easily solve the smallest grid (IEEE-5). In the medium (IEEE-14) and the large (L2RPN WCCI 2020) grids, both DDQN and SAC perform poorly. DDQN performs slightly better than the no-op agent in the medium grid and worse than the no-op agent in the largest grid. Exploring with primitive actions is extremely difficult since most actions can lead to disastrous termination, and thereby the agent cannot find grid topologies other than the initial one. This causes DDQN to get stuck at bad local optima, not much better than the no-op agent. SAC performs slightly better than DDQN in the larger grids. This is due to the sophisticated optimization scheme in SAC, which has been shown to help in a number of other RL benchmark tasks. However, in Grid2Op, its performance was barely better than the no-op agent due to the same challenge faced by DDQN.\nPerhaps surprisingly, the performance of SMAAC\\AS is no better than using primitive actions, although the hierarchical decision model encourages deviating from the initial topology. Without the afterstate representation, the critic was not able to learn a good action-value function due to the massive state and action spaces. YZM uniquely leverages the simulation function and can show good performance from the beginning. However, exploring primitive actions is still hard even with the reduced set of actions, and it can be observed that YZM struggles to improve its performance.\n2The simulation function is a predefined method in Grid2Op, which returns a next state given an action through an approximate simulation.
The performance on the test scenarios is provided in Table 2. We provide a qualitative analysis of how each agent behaves differently and how SMAAC remedies a hazardous power grid with a detailed example in Appendix A.4.\nOn the contrary, our method learns significantly faster and outperforms all the baselines, effectively combining the benefits of the hierarchical decision model and the afterstate representation. Finally, Table 3 shows the leaderboard of the L2RPN WCCI 2020 challenge.\n5.3 LOW-LEVEL RULE DESIGN\nIn this section, we examine how the low-level policy affects the overall performance. (1) FIXED gives priority to substations randomly; the priorities are predefined and fixed during training. We implement this low-level agent to find out whether our high-level agent can manage the power network even with a poor low-level agent. (2) CAPA gives high priority to substations with lines under high utilization of their capacity, which applies actions to the substations that require urgent care (a minimal sketch is given below). (3) DESC imposes a priority on large substations, i.e. those with many connected elements. A change to a large substation can be seen as making a large change to the overall topology with a single action. (4) OPTI optimizes the execution order by training, making the actor additionally output N_sub values that represent the priorities of the substations. All rules achieve similar performance with overlapping confidence intervals, except for FIXED.\nAs shown in Figure 4, CAPA in particular converges fast compared to OPTI and DESC; hence we use this low-level agent in Section 5.2. We assume that most of the rules achieve similar final performances because SMAAC is resilient to suboptimal low-level rules. By generating subgoals that include a subset of the intended topology reconfiguration, the high-level policy can adapt to suboptimal low-level rules to form an optimal policy overall. However, as the result of FIXED suggests, a very poorly designed low-level policy can lead to instability and degrade the performance.
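For illustration, the CAPA ordering can be sketched as follows (hypothetical substation objects with a `slice` into the topology list and a set of incident `lines`; `goal` and `topo` are plain Python lists here):

```python
def capa_next_substation(goal, topo, line_rho, substations):
    """CAPA rule: among substations whose current bus assignment differs
    from the goal, act first on the one whose incident lines are closest
    to their capacity (highest rho), i.e. the most urgent one."""
    differing = [s for s in substations
                 if goal[s.slice] != topo[s.slice]]
    if not differing:
        return None  # goal topology already reached
    return max(differing, key=lambda s: max(line_rho[l] for l in s.lines))
```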
" }, { "heading": "6 CONCLUSION", "text": "In this paper, we presented SMAAC, a deep RL approach demonstrated to be very effective for power grid management. SMAAC is an actor-critic algorithm that combines the afterstate representation with a hierarchical decision model. This is very important for power grid management as modeled by Grid2Op, where actions are too primitive for effective exploration and many permutations of action sequences lead to identical changes in the power grid topology. Besides, naive exploration with primitive actions is subject to immediate failure due to the unique nature of power grid management. We empirically demonstrated that the presented method significantly outperforms several baselines on real-world-scale power grids, and ranked first in the latest international competition, the L2RPN WCCI 2020 challenge. Our work shows the possibility of an intelligent agent that automatically operates the power grid for several days without expert help." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019M3F2A1072238 and NRF-2019R1A2C1087634), and the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2019-0-00075, IITP No. 2020-0-00940 and IITP No. 2017-0-01779 XAI)." }, { "heading": "A APPENDIX", "text": "A.1 ENVIRONMENT DETAIL\nGrid2Op provides a simulation of power grid operation in real time over several days at a 5-minute time resolution. There are 3 power grids, IEEE-5, IEEE-14, and L2RPN WCCI 2020 (a subgraph of IEEE-118), each of a different size.\nState Space The state of the power grid consists of 12 features, presented in Table 4. We use 5 features provided by the environment and 1 feature defined by us, which we consider sufficient to represent the current state of the grid: active power, rho, topology configuration, time step overflow, maintenance, and hazard. The maintenance feature is a boolean vector representing whether a line is in maintenance, and the hazard feature is also a boolean vector, representing whether the electricity flow of a line is larger than a predefined threshold δ_h. We use 0.9 for the threshold, which is the same threshold we use for the hazardous state. Further details are provided at grid2op.readthedocs.io/en/latest/observation.html\nAction Space The agent can apply actions on substations and lines to manage the power grid. The action on a substation, called bus assignment, assigns the elements in the substation to a busbar. The action on a line, called line switch, disconnects a line (both ends of the line are assigned to neither bus) or reconnects a disconnected line. Let us define the number of lines in the power grid as N_line, the number of substations as N_sub, and the number of elements in the i-th substation as Sub(i). Then at each time step the agent selects an action a_t from the action space A, where |A| = N_line + 2^2 × N_line + Σ_{i=0}^{N_sub} 2^{Sub(i)}.3\n3Both ends of a line can be assigned to one of two buses in the substation, so 2^2 × N_line is the number of reconnection actions. Each line end, generator, and load in a substation can also be assigned to one of two buses in the substation, so the number of bus switching actions is Σ_{i=0}^{N_sub} 2^{Sub(i)}.\nRule There are some rules that make the task more realistic and challenging in Grid2Op. Lines can be automatically disconnected due to an overflow of current, i.e. if more current flows than a line can hold for 3 time steps, the line is automatically disconnected. There is a cooldown time for each component, i.e. the agent cannot apply its action to the same component successively, and the component is reactivated 3 time steps later. There is a stochastic event called maintenance that happens intermittently. During maintenance, a line is disconnected by force and cannot be reconnected.\nScore The performance of an agent can be evaluated by a score which consists of a power loss penalty, a failure penalty, and a redispatching penalty. It is defined as:\nScore = Σ_{t=0}^{t_over} (prod_t − load_t) + Σ_{t=t_over}^{t_end} penalty + Σ_{t=0}^{t_over} redispatch_t,   (8)\nwhere t_over is the time step at which game over occurs, prod_t is the total amount of power supplied by all generators, and load_t is the total amount of power demanded by loads. prod_t − load_t stands for the total amount of power loss. The failure penalty is given by the sum of a large constant penalty over the remaining simulation time steps upon termination. The redispatching penalty is incurred when the agent performs a redispatching action, which controls generators to produce more or less electricity. Since redispatching actions always incur this additional penalty, we do not consider them, so the redispatching penalty is always 0 in our case. Therefore, the goal of the agent is to operate the power grid both safely and efficiently by minimizing the failure penalty and the power loss penalty.
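Eq. (8) can be computed as in the following sketch (the failure-penalty constant is a hyperparameter of the benchmark; for our agent the redispatching term is always zero):

```python
def score(prods, loads, redispatches, t_over, t_end, fail_penalty):
    """Power-loss term up to game over, plus a constant penalty for every
    remaining time step, plus the (here unused) redispatching term."""
    power_loss = sum(prods[t] - loads[t] for t in range(t_over + 1))
    failure = sum(fail_penalty for _ in range(t_over, t_end + 1))
    redispatch = sum(redispatches[t] for t in range(t_over + 1))
    return power_loss + failure + redispatch
```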
A.2 MODEL ARCHITECTURE\nGiven the state s = [τ, x], we reshape x into (M, x̃), where M ∈ {0, 1}^{n×n} is the adjacency matrix and x̃ ∈ R^{n×k} is the node matrix with k features. The shared layers consist of L_s GNN blocks that compute the node embedding of the input graph through transformer layers. Given the input node matrix x̃, a linear layer with ReLU activation increases the input dimension k to the embedding dimension k_s, mapping x̃ ∈ R^{n×k} to H^0 ∈ R^{n×k_s}. After the linear layer, L_s transformer layers follow. The input of the transformer block at the l-th block is the embedding from the previous layer H^{l−1} and the adjacency matrix M, i.e. H^l = Transformer(H^{l−1}, M).\nThe actor's head consists of L_a transformer blocks and 2 linear layers. Given the final node embedding H^{L_s} from the shared layers, the transformer layers in the actor's head take it as input and output a node embedding H^{L_a} ∈ R^{n×k_a}. The first linear layer transforms the 2-D node embedding H^{L_a} into a vector node embedding in R^n by reducing the embedding dimension k_a to 1, which is then concatenated with the current topology τ to form the state s, and the next linear layer outputs the mean and standard deviation of a normal distribution. We sample continuous values g' ∈ R^n from the normal distribution followed by a tanh non-linearity, and the desirable topology g ∈ {0, 1}^n is constructed by assigning 1 to the values in g' larger than a predefined topology threshold δ_τ and 0 otherwise. We empirically find that an agent without the threshold has difficulty learning in the large grid; however, an appropriate threshold helps stable learning and fast convergence.\nThe critic's head has a similar structure except for the linear layers. There are L_c GNN blocks in the critic that also take H^{L_s} as input and output H^{L_c} ∈ R^{n×k_c}. After transforming it into a vector node embedding in R^n by a linear layer, g' is concatenated to H^{L_c}, and the following two linear layers take it as input and output a scalar value. The overall architecture is shown in Figure 2.\nAlthough the agent observes all substations, we reduce the goal dimension n to ñ by restricting the controllable substations. As we mentioned in subsection 3.1, we do not consider disconnection or reconnection. Therefore, the agent only controls substations that have more than 2 elements, since for a substation with 2 elements there are only two possible cases: elements on the same bus (connection) or elements on different buses (disconnection). For the L2RPN WCCI 2020 grid, the agent acts on substations that have more than 5 elements, for fast convergence. As a result, the goal dimension is reduced from 21 to 16 in IEEE-5, from 57 to 42 in IEEE-14, and from 177 to 79 in L2RPN WCCI 2020.\nA.3 IMPLEMENTATION DETAILS\nFor all models, we use 6 state features, active power, rho, topology configuration, time step overflow, maintenance, and hazard, where δ_h = 0.9. All agents act only when there is a line whose rho, the ratio between the current flow and the thermal limit (the capacity of the line), is larger than δ_h = 0.9. Additionally, the last 6 states of the history are stacked to represent the input state, since the difference between the states at time steps t and t+1 is not significant in Grid2Op. Since the first decimal place of the reward does not change significantly (load_t/prod_t varies from 0.85 to 0.99 most of the time), we transform it as (load_t/prod_t × 10 − 9) × 0.1 to use the second decimal place, as sketched below.
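In code, this rescaling reads (a one-line sketch):

```python
def scaled_reward(load, prod):
    """Map load/prod (typically in [0.85, 0.99]) so that its second
    decimal place drives the learning signal."""
    return (load / prod * 10 - 9) * 0.1
```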
Adam optimizer (Kingma & Ba, 2015) is used for training, with a learning rate of 5e−5 and a batch size of 128. We perform a grid search to find the best hyperparameters for each model.\nSMAAC SMAAC is our proposed model, which learns the afterstate value function on the goal space, namely the topology configuration space. We use L_s = 6 GNN blocks with embedding dimension k_s = 64/128 for the shared layers. For the actor's head, we use L_a = 3 GNN blocks with embedding dimension k_a = 64/128. For the critic's head, we use L_c = 1 GNN block with embedding dimension k_c = 64/128, followed by linear layers with (k_c + ñ)/4 hidden units. We use δ_τ = 0/0.1/0.15. In practice, the reward for the high-level policy in Equation 6 is not discounted, r_{t:t+d} = Σ_{t'=t}^{t+d} r_{t'}, and γ is used instead of γ^d, following Nachum et al. (2018b). For the competition, we use k_s = 128, k_c = 128, k_a = 128, δ_τ = 0.35, and τ is used as an extra input feature for the shared layers.\nSMAAC\\AS SMAAC\\AS is a baseline which learns the action-value function of the desired relative change in the hierarchical framework. The overall architecture is similar to SMAAC, but the critic takes both the desired relative change and the topology configuration to learn on state-action pairs. We use δ_τ = 0/0.1/0.15.\nSAC SAC is a baseline which learns the action-value function on the primitive bus assignment action space. We use L_s = 6 GNN blocks with embedding dimension k_s = 64/128 for the shared layers. For the actor's head, we use L_a = 3 GNN blocks with embedding dimension k_a = 64/128, and we use a softmax to output a categorical distribution, while utilizing a relaxed categorical distribution in training. For the critic's head, we use L_c = 1 GNN block with embedding dimension k_c = 32/64/128, followed by concatenation with the one-hot encoded action and one linear layer.\nDDQN DDQN is a baseline which learns the action-value function on the primitive bus assignment action space. It does not have a separate actor; instead, the critic outputs |A| action-values. We use L_s = 6 GNN blocks with embedding dimension k_s = 64/128 for the embedding layers. Following them, the critic has L_c = 1 GNN block with embedding dimension k_s = 64/128. It then utilizes the technique used by dueling DQN, namely computing action-values through both a value network and an advantage network.
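Finally, the semi-MDP transition collection used by SMAAC's high-level policy can be sketched as follows (a hypothetical `env` interface; as stated above, the option reward is an undiscounted sum, and a single γ is used in the critic target instead of γ^d):

```python
def collect_option_transition(env, state, goal, low_level_rule):
    """Run the low-level rule from the hazard at time t until the next
    hazard at t+d (or termination) and return the semi-MDP transition."""
    total_r, done = 0.0, False
    while not done:
        action = low_level_rule(state, goal)  # no-op once goal is reached
        state, r, done = env.step(action)     # hypothetical signature
        total_r += r                          # undiscounted option reward
        if env.is_hazardous():                # next hazard: option ends
            break
    return (goal, total_r, state, done)       # stored as [s_t, g_t, r_{t:t+d}, s_{t+d}]
```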
A.4 QUALITATIVE ANALYSIS\nIn this section, we present a qualitative analysis based on example behaviors of the agents. Figure 5 indirectly shows how each agent behaves in 2 grids. In the medium grid, after our agent reaches a certain topology, it keeps staying in that topology. We speculate that during training the agent finds the optimal topology, in which electricity is distributed evenly at all times, so no further actions are required. This is reasonable in Grid2Op, where a single action can potentially destroy the grid. On the other hand, it shows different behavior in the large grid. It changes the topology configuration in diverse ways to revert hazardous situations, since the agent cannot find a topology such as the one in the medium grid.\nSAC, which shows the best performance among the baselines, only changes a small part of the topology configuration, executing only one or two actions in both grids, since the initial topology is a strong local optimum in Grid2Op. DDQN shows similar behavior, but performs worse than SAC since it changes the initial grid more. Likewise, SMAAC\\AS, which changes the topology the most, shows the worst performance. It is extremely difficult to find a better topology than the initial one in Grid2Op without effective exploration. Efficient learning together with effective exploration is the key to successful management.\nWe further examine how SMAAC learned to revert a hazardous state back to a safe state. As shown in Figure 6, line 4-5 (between substations 4 and 5) is in a hazardous situation at time step t. At time step t+1, our agent makes a bus assignment change in substation 12 by assigning line 12-13 to the yellow busbar. As a result, some of the electricity that flows in line 4-5 moves to line 4-3 to meet the load demand in substation 13, where electricity is supplied only from line 8-13 due to the action at t+1. Then the last action at t+2, which changes substation 3, further disperses electricity away from substation 4. In the end, a more balanced distribution is achieved.\nA.5 ACTIVATION OF THE AGENT\nAs we mentioned in subsection 3.1, the agent acts only in hazardous situations, i.e. when there is a line whose usage rate (the ratio between the current flow and the thermal limit) is larger than the threshold hyperparameter δ_h. Note that a usage rate larger than 1.0 implies that a line is overflowed.\nTable 5 shows how the final performance changes according to δ_h in the test scenarios. If δ_h is too high, e.g. δ_h = 1.1, the agent may not be able to recover from hazardous situations, and shows relatively worse performance. On the other hand, the agent with δ_h = 0.8 faces more diverse situations, requiring far more samples to reach the performance of the other agents with higher δ_h." } ]
2021
SEMI-MARKOV AFTERSTATE ACTOR-CRITIC
SP:2d1b5b2da4802fb7f229112fb841bc194ba47204
[ "This work studies optimization dynamics for neural network models that are scaling invariant with respect to parameters. A general formulation of optimization algorithms is considered, covering many widely used algorithms like SGD and Adam. The projected dynamics (to the unit sphere) is studied, and the effective learning rate and update direction on the unit sphere are derived. Focusing on the projected dynamics, the equivalence is built between SGD and a type of \"Adam\". Then, different factors in the Adam dynamics that can potentially influence the optimization performance are identified, and empirically studied. " ]
Batch Normalization (BN) is a prominent deep learning technique. In spite of its apparent simplicity, its implications over optimization are yet to be fully understood. While previous studies mostly focus on the interaction between BN and stochastic gradient descent (SGD), we develop a geometric perspective which allows us to precisely characterize the relation between BN and Adam. More precisely, we leverage the radial invariance of groups of parameters, such as filters for convolutional neural networks, to translate the optimization steps on the L2 unit hypersphere. This formulation and the associated geometric interpretation shed new light on the training dynamics. Firstly, we use it to derive the first effective learning rate expression of Adam. Then we show that, in the presence of BN layers, performing SGD alone is actually equivalent to a variant of Adam constrained to the unit hypersphere. Finally, our analysis outlines phenomena that previous variants of Adam act on and we experimentally validate their importance in the optimization process.
[]
[ { "authors": [ "Sanjeev Arora", "Zhiyuan Li", "Kaifeng Lyu" ], "title": "Theoretical analysis of auto rate-tuning by batch normalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Nils Bjorck", "Carla P Gomes", "Bart Selman", "Kilian Q Weinberger" ], "title": "Understanding batch normalization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Avrim Blum", "Ronald L Rivest" ], "title": "Training a 3-node neural network is np-complete", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 1989 }, { "authors": [ "Yongqiang Cai", "Qianxiao Li", "Zuowei Shen" ], "title": "A quantitative analysis of the effect of batch normalization on gradient descent", "venue": "In 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Minhyung Cho", "Jaehyung Lee" ], "title": "Riemannian approach to batch normalization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2011 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via hessian eigenvalue density", "venue": "In 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Fix your classifier: the marginal value of training the last weight layer", "venue": "arXiv preprint arXiv:1801.04540,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In 32nd International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny", "venue": null, "year": 2009 }, { "authors": [ "Zhiyuan Li", "Sanjeev Arora" ], "title": "An exponential learning rate schedule for deep learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Xiangru Lian", "Ji Liu" ], "title": "Revisit batch normalization: New understanding and refinement via composition optimization", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Weiyang Liu", "Yan-Ming Zhang", "Xingguo Li", "Zhiding Yu", "Bo Dai", "Tuo Zhao", "Le Song" ], "title": "Deep hyperspherical learning", "venue": "In Advances in Neural Information Processing S ystems 
(NeurIPS),", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research (JMLR),", "year": 2018 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In 30th International Conference on Machine Learning (ICML),", "year": 2013 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Twan van Laarhoven" ], "title": "L2 regularization versus batch and weight normalization, 2017", "venue": "arXiv preprint arXiv:1706.05350", "year": 2017 }, { "authors": [ "Guodong Zhang", "Chaoqi Wang", "Bowen Xu", "Roger Grosse" ], "title": "Three mechanisms of weight decay regularization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 } ]
[ { "heading": "A SPHERICAL ANALYSIS OF ADAM WITH BATCH NORMALIZATION", "text": "Anonymous authors Paper under double-blind review\nBatch Normalization (BN) is a prominent deep learning technique. In spite of its apparent simplicity, its implications over optimization are yet to be fully understood. While previous studies mostly focus on the interaction between BN and stochastic gradient descent (SGD), we develop a geometric perspective which allows us to precisely characterize the relation between BN and Adam. More precisely, we leverage the radial invariance of groups of parameters, such as filters for convolutional neural networks, to translate the optimization steps on the L2 unit hypersphere. This formulation and the associated geometric interpretation shed new light on the training dynamics. Firstly, we use it to derive the first effective learning rate expression of Adam. Then we show that, in the presence of BN layers, performing SGD alone is actually equivalent to a variant of Adam constrained to the unit hypersphere. Finally, our analysis outlines phenomena that previous variants of Adam act on and we experimentally validate their importance in the optimization process.\n1 INTRODUCTION\nThe optimization process of deep neural networks is still poorly understood. Their training involves minimizing a high-dimensional non-convex function, which has been proved to be a NP-hard problem (Blum & Rivest, 1989). Yet, elementary gradient-based methods show good results in practice. To improve the quality of reached minima, numerous methods have stemmed in the last years and become common practices. One of the most prominent is Batch Normalization (BN) (Ioffe & Szegedy, 2015), which improves significantly both the optimization stability and the prediction performance; it is now used in most deep learning architectures. However, the interaction of BN with optimization and its link to regularization remain open research topics. Previous studies highlighted mechanisms of the interaction between BN and SGD, both empirically (Santurkar et al., 2018) and theoretically (Arora et al., 2019; Bjorck\net al., 2018; Hoffer et al., 2018b). None of them studied the interaction between BN and one of the most common adaptive schemes for Neural Networks (NN), Adam (Kingma & Ba, 2015), except van Laarhoven (2017), which tackled it only in the asymptotic regime. In this work, we provide an extensive analysis of the relation between BN and Adam during the whole training procedure.\nOne of the key effects of BN is to make NNs invariant to positive scalings of groups of parameters. The core idea of this paper is precisely to focus on these groups of radially-invariant parameters and analyze their optimization projected on the L2 unit hypersphere (see Fig. 1), which is topologically equivalent to the quotient manifold of the parameter space by the scaling action. One could directly optimize parameters on the hypersphere as Cho & Lee (2017), yet, most optimization methods are still performed successfully in the original parameter space. Here we propose to study an optimization scheme for a given group of radially-invariant parameters through its image scheme on the unit hypersphere. This geometric perspective sheds light on the interaction between normalization layers and Adam, and also outlines an interesting link between standard SGD and a variant of Adam adapted and constrained to the unit hypersphere: AdamG (Cho & Lee, 2017). 
We believe this kind of analysis\nis an important step towards a better understanding of the effect of BN on NN optimization. Please note that, although our discussion and experiments focus on BN, our analysis could be applied to any radially-invariant model.\nThe paper is organized as follows. In Section 2, we introduce our spherical framework to study the optimization of radially-invariant models. We also define a generic optimization scheme that encompasses methods such as SGD with momentum (SGD-M) and Adam. We then derive its image step on the unit hypersphere, leading to definitions and expressions of effective learning rate and effective learning direction. This new definition is explicit and has a clear interpretation, whereas the definition of van Laarhoven (2017) is asymptotic and the definitions of Arora et al. (2019) and of Hoffer et al. (2018b) are variational. In Section 3, we leverage the tools of our spherical framework to demonstrate that in presence of BN layers, SGD has an adaptive behaviour. Formally, we show that SGD is equivalent to AdamG, a variant of Adam adapted and constrained to the hypersphere, without momentum. In Section 4, we analyze the effective learning direction for Adam. The spherical framework highlights phenomena that previous variants of Adam (Loshchilov & Hutter, 2017; Cho & Lee, 2017) act on. We perform an empirical study of these phenomena and show that they play a significant role in the training of convolutional neural networks (CNNs). In Section 5, these results are put in perspective with related work.\nOur main contributions are the following: • A framework to analyze and compare order-1 optimization schemes of radially-invariant models; • The first explicit expression of the effective learning rate for Adam; • The demonstration that, in the presence of BN layers, standard SGD has an adaptive behaviour; • The identification and study of geometrical phenomena that occur with Adam and impact significantly the training of CNNs with BN." }, { "heading": "2 SPHERICAL FRAMEWORK AND EFFECTIVE LEARNING RATE", "text": "In this section, we provide background on radial invariance and introduce a generic optimization scheme.\nProjecting the scheme update on the unit hypersphere leads to the formal definitions of effective learning rate and learning direction. This geometric perspective leads to the first explicit expression of the effective learning rate for Adam. The main notations are summarized in Figure 1." }, { "heading": "2.1 RADIAL INVARIANCE", "text": "We consider a family of parametric functions φx : Rin → Rout parameterized by a group of radiallyinvariant parameters x ∈ Rdr {0}, i.e., ∀ρ> 0, φρx =φx (possible other parameters of φx are omitted for clarity), a dataset D ⊂ Rin ×Rout, a loss function ` : Rout ×Rout → R and a training loss function L : Rd → R defined as:\nL(x) def= 1 |D| ∑ (s,t)∈D `(φx(s), t). (1)\nIt verifies: ∀ρ > 0, L(ρx) = L(x). In the context of NNs, the group of radially-invariant parameters x can be the parameters of a single neuron in a linear layer or the parameters of a whole filter in a convolutional layer, followed by BN (see Appendix A for details, and Appendix B for the application to other normalization schemes such as InstanceNorm (Ulyanov et al., 2016), LayerNorm (Ba et al., 2016) or GroupNorm (Wu & He, 2018)).\nThe quotient of the parameter space by the equivalence relation associated to radial invariance is topologically equivalent to a sphere. 
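As a quick numerical check of Lemma 1, the following sketch (our own minimal stand-in: a single linear neuron followed by batch normalization without affine parameters, mirroring the filter-plus-BN groups discussed above) verifies that such a loss is radially invariant and that its gradient is tangential and (−1)-homogeneous.

```python
import torch

torch.manual_seed(0)

def bn_neuron_loss(x, S, t):
    """Toy radially-invariant loss: one linear neuron followed by batch
    normalization (no affine part), then a mean squared error.
    Scaling x by any rho > 0 leaves the loss unchanged."""
    z = S @ x                          # pre-activations, shape (n,)
    z = (z - z.mean()) / z.std()       # BN makes the loss scale-free in x
    return ((z - t) ** 2).mean()

d, n = 10, 128
S, t = torch.randn(n, d), torch.randn(n)
x = torch.randn(d, requires_grad=True)
rho = 3.7

loss = bn_neuron_loss(x, S, t)
g = torch.autograd.grad(loss, x)[0]

# Radial invariance: L(rho * x) == L(x)
print(bn_neuron_loss(rho * x.detach(), S, t).item(), loss.item())

# Lemma 1: <grad L(x), x> = 0  and  grad L(x) = rho * grad L(rho * x)
print(torch.dot(g, x).item())                                  # ~ 0
x2 = (rho * x.detach()).requires_grad_(True)
g2 = torch.autograd.grad(bn_neuron_loss(x2, S, t), x2)[0]
print((rho * g2 - g).norm().item())                            # ~ 0
```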
We consider here the L2 sphere Sd−1 = {u ∈ Rd/‖u‖2 = 1} whose canonical metric corresponds to angles: dS(u1,u2) = arccos(〈u1,u2〉). This choice of metric is relevant to study NNs since filters in CNNs or neurons in MLPs are applied through scalar product to input data. Besides, normalization in BN layers is also performed using the L2 norm.\nOur framework relies on the decomposition of vectors into radial and tangential components. During optimization, we write the radially-invariant parameters at step k ≥ 0 as xk = rkuk where rk = ‖xk‖ and uk = xk/‖xk‖. For any quantity qk ∈ Rd at step k, we write q⊥k = qk−〈qk,uk〉uk its tangential component relatively to the current direction uk.\nThe following lemma states that the gradient of a radially-invariant loss function is tangential and −1 homogeneous: Lemma 1 (Gradient of a function with radial invariance). If L : Rd → R is radially invariant and almost everywhere differentiable, then, for all ρ > 0 and all x ∈ Rd where L is differentiable:\n〈∇L(x),x〉 = 0 and ∇L(x) = ρ∇L(ρx). (2)" }, { "heading": "2.2 GENERIC OPTIMIZATION SCHEME", "text": "There is a large body of literature on optimization schemes (Sutskever et al., 2013; Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2015; Loshchilov & Hutter, 2019). We focus here on two of the most popular ones, namely SGD and Adam (Kingma & Ba, 2015). Yet, to establish general results that may apply to a variety of other schemes, we introduce here a generic optimization update:\nxk+1 = xk − ηkak bk, (3) ak = βak−1 +∇L(xk) + λxk, (4)\nwhere xk ∈ Rd is the group of radially-invariant parameters at iteration k, L is the group’s loss estimated on a batch of input data, ak ∈ Rd is a momentum, bk ∈ Rd is a division vector that can depend on the trajectory (xi,∇L(xi))i∈J0,kK, ηk ∈ R is the scheduled trajectory-independent learning rate, denotes the Hadamard element-wise division, β is the momentum parameter, and λ is the L2-regularization parameter. We show how it encompasses several known optimization schemes.\nStochastic gradient descent (SGD) has proven to be an effective optimization method in deep learning. It can include L2 regularization (also called weight decay) and momentum. Its updates are:\nxk+1 = xk − ηkmk, (5) mk = βmk−1 +∇L(xk) + λxk, (6)\nwhere mk is the momentum, β is the momentum parameter, and λ is the L2-regularization parameter. It corresponds to our generic scheme (Eqs. 3-4) with ak = mk and bk = [1 · · · 1]>. Adam is likely the most common adaptive scheme for NNs. Its updates are:\nxk+1 = xk − ηk mk 1− βk+11 √ vk 1− βk+12 + , (7)\nmk = β1mk−1+(1− β1)(∇L(xk) + λxk), vk = β2vk−1 + (1− β2)(∇L(xk) + λxk)2, (8)\nwhere mk is the momentum with parameter β1, vk is the second-order moment with parameter β2, and prevents division by zero. (Here and in the following, the square and the square root of a vector are to be understood as element-wise.) It corresponds to our generic scheme (Eqs. 3-4) with β=β1 and:\nak = mk\n1− β1 , bk = 1− βk+11 1− β1\n√ vk\n1− βk+12 + . (9)" }, { "heading": "2.3 IMAGE OPTIMIZATION ON THE HYPERSPHERE", "text": "The radial invariance implies that the radial part of the parameter update x does not change the function φx encoded by the model, nor does it change the loss L(x). The goal of training is to find the best possible function encodable by the network. Due to radial invariance, the parameter space projected on the unit hypersphere is topologically closer to the functional space of the network than the full parameter space. 
It hints that looking at optimization behaviour on the unit hypersphere might be interesting. Thus, we need to separate the quantities that can (tangential part) and cannot (radial part) change the model function. Theorem 2 formulates the spherical decomposition (Eqs. 3-4) in simple terms. It relates the update of radially-invariant parameters in the parameter space Rd and their update on Sd−1 through an exponential map. Theorem 2 (Image step on Sd−1). The update of a group of radially-invariant parameters xk at step k corresponds to an update of its projection uk on Sd−1 through an exponential map at uk with velocity ηekc ⊥ k , at order 3:\nuk+1 = Expuk\n( − [ 1 +O (( ηek‖c⊥k ‖ )2)] ηekc ⊥ k ) , (10)\nwhere Expuk is the exponential map on Sd−1, and with\nck def = rkak\nbk\nd−1/2‖bk‖ , ηek def =\nηk\nr2kd −1/2‖bk‖\n( 1− ηk〈ck,uk〉\nr2kd −1/2‖bk‖\n)−1 . (11)\nMore precisely:\nuk+1 = uk − ηekc⊥k√\n1 + (ηek‖c⊥k ‖)2 . (12)\nThe proof is given in Appendix C.1.1 and the theorem is illustrated in the case of SGD in Figure 1. Note that with typical values in CNN training we have 1− ηk〈ck,uk〉\nr2kd −1/2‖bk‖\n> 0, which is a property\nneeded for the proof. Another hypothesis is that steps on the hypersphere are shorter than π. These hypotheses are discussed and empirically verified in Appendix C.1.2." }, { "heading": "2.4 EFFECTIVE LEARNING RATE FOR ADAM", "text": "In Theorem 2, the normalized parameters update in Eq. 10 can be read uk+1 ≈ Expuk ( −ηekc⊥k ) , where ηek and c ⊥ k can then be respectively interpreted as the learning rate and the direction of an optimization step constrained to Sd−1 since ak is the momentum and, with Lemma 1, the quantity rkak in ck can be seen as a momentum on the hypersphere. Due to the radial invariance, only the change of parameter on the unit hypersphere corresponds to a change of model function. Hence we can interpret ηek and c ⊥ k as effective learning rate and effective learning direction. In other words, these quantities correspond to the learning rate and direction on the hypersphere that reproduce the function update of the optimization step.\nUsing Theorem 2, we can derive actual effective learning rates for any optimization scheme that fits our generic framework. These expressions, summarized in Table 1 are explicit and have a clear interpretation, in contrast to learning rates in (van Laarhoven, 2017), which are approximate and asymptotic, and in (Hoffer et al., 2018a; Arora et al., 2019), which are variational and restricted to SGD without momentum only.\nIn particular, we provide the first explicit expression of the effective learning rate for Adam:\nηek = ηk rνk\n( 1− ηk〈ck,uk〉\nrνk\n)−1 (13)\nwhere νk = rkd−1/2‖bk‖ is homogeneous to the norm of a gradient on the hypersphere and can be related to an second-order moment on the hypersphere (see Appendix.C.1.3 for details). This notation also simplifies the in-depth analysis in Section 4, allowing a better interpretation of formulas.\nThe expression of the effective learning rate of Adam, i.e., the amplitude of the step taken on the hypersphere, reveals a dependence on the dimension d (through ν) of the considered group of radiallyinvariant parameters. In the case of an MLP or CNN that stacks layers with neurons or filters of different dimensions, the learning rate is thus tuned differently from one layer to another.\nWe can also see that for all schemes the learning rate is tuned by the dynamics of radiuses rk, which follow:\nrk+1 rk =\n( 1− ηk〈ck,uk〉\nr2kd −1/2‖bk‖\n)√ 1 + (ηek‖c⊥k ‖)2. 
(14)\nIn contrast to previous studies (Arora et al., 2019; van Laarhoven, 2017), this result demonstrates that for momentum methods, 〈ck,uk〉, which involves accumulated gradients terms in the momentum as well as L2 regularization, tunes the learning rate (cf. Fig.1)." }, { "heading": "3 SGD IS A VARIATION OF ADAM ON THE HYPERSPHERE", "text": "We leverage the tools introduced in the spherical framework to find a scheme constrained to the hypersphere that is equivalent to SGD. It shows that for radially-invariant models, SGD is actually an adaptive optimization method. Formally SGD is equivalent to a version of AdamG, a variation of Adam adapted and constrained to the unit hypersphere, without momentum." }, { "heading": "3.1 EQUIVALENCE BETWEEN TWO OPTIMIZATION SCHEMES", "text": "Due to the radial invariance, the functional space of the model is encoded by Sd−1. In other words, two schemes with the same sequence of groups of radially-invariant parameters on the hypersphere (uk)k≥0 encode the same sequence of model functions. Two optimization schemes S and S̃ are equivalent iff ∀k ≥ 0,uk = ũk. By using Eq. 12, we obtain the following lemma, which is useful to prove the equivalence of two given optimization schemes: Lemma 3 (Sufficient condition for the equivalence of optimization schemes).{\nu0 = ũ0 ∀k ≥ 0, ηek = η̃ek, c⊥k = c̃⊥k ⇒ ∀k ≥ 0,uk = ũk. (15)" }, { "heading": "3.2 A HYPERSPHERE-CONSTRAINED SCHEME EQUIVALENT TO SGD", "text": "We now study, within our spherical framework, SGD with L2 regularization, i.e., the update xk+1 = xk − ηk(∇L(xk)− λkxk). From the effective learning rate expression, we know that SGD yields an adaptive behaviour because it is scheduled by the radius dynamic, which depends on gradients. In fact, the tools in our framework allow us to find a scheme constrained to the unit hypersphere that is equivalent to SGD: AdamG (Cho & Lee, 2017). More precisely, it is AdamG with a null momentum factor β1 = 0, an non-null initial second-order moment v0, an offset of the scalar second-order moment k + 1 → k and the absence of the bias correction term 1 − βk+12 .Dubbed AdamG* this scheme reads:\n(AdamG*) : x̂k+1 = xk − ηk∇L(xk)√vk , xk+1 = x̂k+1 ‖x̂k+1‖ ,\nvk+1 = βvk + ‖∇L(xk)‖2. Starting from SGD, we first use Lemma 3 to find an equivalence scheme with simpler radius dynamic. We resolve this radius dynamic with a Taylor expansion at order 2 in (ηk‖∇L(uk)‖)2/r2k. A second use of Lemma 3 finally leads to the following scheme equivalence in Theorem (see proof in Appendix C.1.4). If we call « equivalent at order 2 in the step » a scheme equivalence that holds when we use for rk an expression that satisfies the radius dynamic with a Taylor expansion at order 2 we have the following theorem: Theorem 4 (SGD equivalent scheme on the unit hypersphere). For any λ > 0, η > 0, r0 > 0, we have the following equivalence when using the radius dynamic at order 2 in (ηk‖∇L(uk)‖)2/r2k:\n(SGD) x0 = r0u0 λk = λ ηk = η is scheme-equivalent at order 2 in step with (AdamG*) x0 = u0 β = (1− ηλ)4 ηk = (2β) −1/2\nv0 = r 4 0(2η 2β1/2)−1.\nThis result is unexpected because SGD, which is not adaptive by itself, is equivalent to a second order moment adaptive method The scheduling performed by the radius dynamics actually replicates the effect of dividing the learning rate by the second-order moment of the gradient norm: vk. 
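This equivalence can also be checked numerically. The sketch below uses a Rayleigh quotient as a toy radially-invariant loss, with hyper-parameter values of our own choosing; the agreement it reports is only approximate, up to the order-2 Taylor expansion of the radius dynamics.

```python
import torch

torch.manual_seed(0)
d = 10
M = torch.randn(d, d)
A = M @ M.t() + torch.eye(d)          # fixed PSD matrix

def grad_L(x):
    """Gradient of the radially-invariant Rayleigh quotient x'Ax / x'x."""
    x = x.detach().requires_grad_(True)
    return torch.autograd.grad((x @ A @ x) / (x @ x), x)[0]

eta, lam, r0, steps = 1e-2, 1e-3, 2.0, 500
u0 = torch.randn(d)
u0 = u0 / u0.norm()

# Plain SGD with L2 regularization, projected on the sphere at the end
x = r0 * u0.clone()
for _ in range(steps):
    x = x - eta * (grad_L(x) + lam * x)
u_sgd = x / x.norm()

# AdamG* with the hyper-parameters prescribed by Theorem 4
beta = (1.0 - eta * lam) ** 4
eta_k = (2.0 * beta) ** -0.5
v = r0 ** 4 / (2.0 * eta ** 2 * beta ** 0.5)
u = u0.clone()
for _ in range(steps):
    g = grad_L(u)
    u_hat = u - eta_k * g / v ** 0.5
    u = u_hat / u_hat.norm()
    v = beta * v + g.norm().item() ** 2

# Residual should stay small relative to a random pair of unit vectors,
# reflecting the order-2 approximation of the radius dynamics.
print((u_sgd - u).norm().item())
```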
First, the only assumption for this equivalence is to neglect the approximation in the Taylor expansion at order 2 of the radius which is highly verified in practice (order of magnitude of 1e− 4 isee Appendix C.1.5). Second, with standard values of the hyper-parameters : learning rate η < 1 and weight decay λ < 1, we have β ≤ 1 which corresponds to a standard value for a moment factor. Interestingly, the L2 regularization parameter λ controls the memory of the past gradients norm. If β = 1 (with λ = 0) there is no attenuation, each gradient norm has the same contribution in the order of two moments. If λ 6= 0, there is a decay factor (β < 1) on past gradients norm in the order 2 moment." }, { "heading": "4 GEOMETRIC PHENOMENA IN ADAM", "text": "Our framework with its geometrical interpretation reveals intriguing behaviors occurring in Adam. The unit hypersphere is enough to represent the functional space encoded by the network. From the perspective of manifold optimization, the optimization direction would only depend on the trajectory on that manifold. In the case of Adam, the effective direction not only depends on the trajectory on the hypersphere but also on the deformed gradients and additional radial terms. These terms are thus likely to play a role in Adam optimization.\nIn order to understand their role, we describe these geometrical phenomena in Section 4.1. Interestingly, previous variants of Adam, AdamW (Loshchilov & Hutter, 2017) and AdamG (Cho & Lee, 2017) are related to these phenomena. To study empirically their importance, we consider in Section 4.2 variants of Adam that first provide a direction intrinsic to the unit hypersphere, without deformation of the gradients, and then where radial terms are decoupled from the direction. The empirical study of these variants over a variety of datasets and architectures suggests that these behaviors do play a significant role in CNNs training with BN." }, { "heading": "4.1 IDENTIFICATION OF GEOMETRICAL PHENOMENA IN ADAM", "text": "Here, we perform an in-depth analysis of the effective learning direction of Adam.\n(a) Deformed gradients. Considering the quantities defined for a generic scheme in Eq. 11, bk has a deformation effect on ak, due to the Hadamard division by bk\nd−1/2‖bk‖ , and a scheduling effect\nd−1/2‖bk‖ on the effective learning rate. In the case where the momentum factor is null β1 = 0, the direction of the update at step k is ∇L(uk) bk\nd−1/2‖bk‖ (Eq. 11) and the deformation bk d−1/2‖bk‖ may\npush the direction of the update outside the tangent space of Sd−1 at uk, whereas the gradient itself lies in the tangent space. This deformation is in fact not isotropic: the displacement of the gradient from the tangent space depends on the position of uk on the sphere. We illustrate this anisotropy in Fig. 2(b).\n(b) Additional radial terms. In the momentum on the sphere ck, quantities that are radial (resp. orthogonal) at a point on the sphere may not be radial (resp. orthogonal) at another point. To clarify the contribution of ck in the effective learning direction c⊥k , we perform the following decomposition (cf. Appendix D.1):\nck = (c grad k + λr 2 kc L2 k )\nbk\nd−1/2‖bk‖ with: (16)\ncgradk def = ∇L(uk) + k−1∑ i=0 βk−i rk ri ∇L(ui) and cL2k def = uk + k−1∑ i=0 βk−i ri rk ui. (17)\n1. Contribution of cgradk . At step k, the contribution of each past gradient corresponds to the orthogonal part ∇L(ui)− 〈∇L(ui),uk〉uk. It impacts the effective learning direction depending on its orientation relatively to uk. 
Two past points, although equally distant from uk on the sphere and with equal gradient amplitude may thus contribute differently in c⊥k due to their orientation (cf. Fig. 2(c)). 2. Contribution of cL2k . Naturally, the current point uk does not contribute to the effective learning direction c⊥k , unlike the history of points in ∑k−1 i=0 β k−i ri rk ui, which does. This dependency can be avoided if we decouple the L2 regularization, in which case we do not accumulate L2 terms in the momentum. This shows that the decoupling proposed in AdamW (Loshchilov & Hutter, 2019) actually removes the contribution of L2 regularization in the effective learning direction.\n(c) The radius ratio rkri present in both c grad k and c L2 k (in inverse proportion) impacts the effective learning direction c⊥k : it can differ for identical sequences (ui)i≤k on the sphere but with distinct radius histories (ri)i≤k. Since the radius is closely related to the effective learning rate, it means that the effective learning direction c⊥k is adjusted according to the learning rates history.\nNote that AdamG (Cho & Lee, 2017), by constraining the optimization to the unit hypersphere and thus removing L2 regularization, neutralizes all the above phenomena. However, this method has no scheduling effect allowed by the radius dynamics (cf. Eq.14) since it is kept constant during training." }, { "heading": "4.2 EMPIRICAL STUDY", "text": "To study empirically the importance of the identified geometric phenomena, we perform an ablation study: we compare the performance (accuracy and training loss speed) of Adam and variants that neutralize each of them. We recall that AdamW neutralizes (b2) and that AdamG neutralizes all of above phenomena but loses the scheduling effect identified in Eq. 14. To complete our analysis, we use geometrical tools to design variations of Adam which neutralizes sequentially each phenomenon while preserving the natural scheduling effect in Theorem 2. We neutralize (a) by replacing the element-wise second-order moment, (b1) and (b2) by transporting the momentum from a current point to the new one, (c) by re-scaling the momentum at step k. The details are in Appendix. D.2. The final scheme reads:\nxk+1 = xk − ηk mk\n1− βk+11 /\n√ vk\n1− βk+12 + , (18)\nmk = β1 rk−1 rk Γukuk−1(mk−1)+(1− β1)(∇L(xk) + λxk), (19)\nvk = β2 r2k−1 r2k vk−1 + (1− β2)d−1‖∇L(xk) + λxk‖2, (20)\nwhere Γukuk−1 is the hypersphere canonical transport from uk−1 to uk. Implementation details are in Appendix D.3.\nProtocol. For evaluation, we conduct experiments on two architectures: VGG16 (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016) – more precisely ResNet20, a simple variant designed for small images (He et al., 2016), and ResNet18, a popular variant for image classification. We consider three datasets: SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky et al., 2009).\nSince our goal is to evaluate the significance of phenomena on radially-invariant parameters, i.e., the convolution filters followed by BN, we only apply variants of Adam including AdamG and AdamW on convolution layers. For comparison consistency, we keep standard Adam on the remaining parameters. We also use a fixed grid hyperparameter search budget and frequency for each method and each architecture (see Appendix D.3 for details).\nResults. In Table 2 we report quantitative results of Adam variants across architectures and datasets. In addition, we compare the evolution of the training loss in Fig. 3. 
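For reference before turning to the results, a minimal sketch of the update in Eqs. (18)–(20) is given below. The parallel transport is the standard closed form on the sphere, and the treatment of any residual radial component of the momentum is simplified relative to the actual implementation details in Appendix D.3.

```python
import torch

def transport(u_prev, u_new, w):
    """Standard parallel transport of a tangent vector w along the sphere
    geodesic from the unit vector u_prev to the unit vector u_new."""
    c = (u_prev + u_new) / (1.0 + torch.dot(u_prev, u_new))
    return w - torch.dot(w, u_new) * c

def adam_wo_abc_step(x, m, v, u_prev, r_prev, k, grad, eta, lam,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of Eqs. (18)-(20): scalar second moment (neutralizes (a)),
    transported momentum (neutralizes (b)), radius rescaling ((c))."""
    r = x.norm()
    u = x / r
    g = grad + lam * x
    m = beta1 * (r_prev / r) * transport(u_prev, u, m) + (1 - beta1) * g
    v = beta2 * (r_prev / r) ** 2 * v + (1 - beta2) * g.pow(2).mean()
    m_hat = m / (1 - beta1 ** (k + 1))
    v_hat = v / (1 - beta2 ** (k + 1))
    x = x - eta * m_hat / (v_hat.sqrt() + eps)
    return x, m, v, u, r
```

The first call should be seeded with m = 0, v = 0, u_prev = x0/‖x0‖ and r_prev = ‖x0‖; the three rescaling ingredients can then be switched off one by one to reproduce the ablations discussed next.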
We observe that each phenomenon displays a specific trade-off between generalization (accuracy on the test set) and training speed, as following. Neutralizing (a) has little effect on the speed over Adam, yet achieves better accuracy. Although it slows down training, neutralizing (ab) leads to minima with the overall best accuracy on test set. Note that AdamW† neutralizes (b2) with its decoupling and is the fastest method, but finds minima with overall worst generalization properties. By constraining the optimization to the hypersphere, AdamG† speeds up training over the other variants. Finally, neutralizing (c) with Adam\nw/o (abc) brings a slight acceleration, though reaches lower accuracy than Adam w/o (ab). We can see that the revealed geometrical phenomena impact substantially training of BN-equipped CNNs." }, { "heading": "5 RELATED WORK", "text": "Understanding Batch Normalization. Albeit conceptually simple, BN has been shown to have complex implications over optimization. The argument of Internal Covariate Shift reduction (Ioffe & Szegedy, 2015) has been challenged and shown to be secondary to smoothing of optimization landscape (Santurkar et al., 2018; Ghorbani et al., 2019) or its modification by creating a different objective function (Lian & Liu, 2019), or enabling of high learning rates through improved conditioning (Bjorck et al., 2018). Arora et al. (2019) demonstrate that (S)GD with BN is robust to the choice of the learning rate, with guaranteed asymptotic convergence, while a similar finding for GD with BN is made by Cai et al. (2019).\nInvariances in neural networks. Cho & Lee (2017) propose optimizing over the Grassmann manifold using Riemannian GD. Liu et al. (2017) project weights and activations on the unit hypersphere and compute a function of the angle between them instead of inner products, and subsequently generalize these operators by scaling the angle (Liu et al., 2018). In (Li & Arora, 2020) the radial invariance is leveraged to prove that weight decay (WD) can be replaced by an exponential learning-rate scheduling for SGD with or without momentum. Arora et al. (2019) investigate the radial invariance and show that radius dynamics depends on the past gradients, offering an adaptive behavior to the learning rate. Here we go further and show that SGD projected on the unit hypersphere corresponds to Adam constrained to the hypersphere, and we give an accurate definition of this adaptive behavior.\nEffective learning rate. Due to its scale invariance, BN can adaptively adjust the learning rate (van Laarhoven, 2017; Cho & Lee, 2017; Arora et al., 2019; Li & Arora, 2020). van Laarhoven (2017) shows that in BN-equipped networks, WD increases the effective learning rate by reducing the norm of the weights. Conversely, without WD, the norm grows unbounded (Soudry et al., 2018), decreasing the effective learning rate. Zhang et al. (2019) brings additional evidence supporting hypothesis in van Laarhoven (2017), while Hoffer et al. (2018a) finds an exact formulation of the effective learning rate for SGD in normalized networks. In contrast with prior work, we find generic definitions of the effective learning rate with exact expressions for SGD and Adam." }, { "heading": "6 CONCLUSION", "text": "The spherical framework introduced in this study provides a powerful tool to analyse Adam optimization scheme through its projection on the L2 unit hypersphere. 
It allows us to give a precise definition and expression of the effective learning rate for Adam, to relate SGD to a variant of Adam, and to identify geometric phenomena which empirically impact training. The framework also sheds light on existing variations of Adam, such as L2-regularization decoupling. This approach could be extended to other invariances in CNNs, such as filter permutation." } ]
null
null
SP:23124b43054b8f3b0cf5860a1fa0728f7edf8e63
[ "This paper tries to solve the curse-of-dimensionality problem of KSD and corresponding mode-collapse problem of SVGD by projecting both the input and output of test function onto 1D slices. By doing so, the paper proposes the new discrepancies called SSD and maxSKSD, and a new variant of SVGD called S-SVGD. Experiments on goodness-of-fit test (synthetic high-dim Gaussian & RBM) and model learning (ICA on synthetic data & amortized SVGD on MNIST) are reported in the main body of the paper." ]
Kernelized Stein discrepancy (KSD), though being extensively used in goodness-offit tests and model learning, suffers from the curse-of-dimensionality. We address this issue by proposing the sliced Stein discrepancy and its scalable and kernelized variants, which employ kernel-based test functions defined on the optimal one-dimensional projections. When applied to goodness-of-fit tests, extensive experiments show the proposed discrepancy significantly outperforms KSD and various baselines in high dimensions. For model learning, we show its advantages over existing Stein discrepancy baselines by training independent component analysis models with different discrepancies. We further propose a novel particle inference method called sliced Stein variational gradient descent (S-SVGD) which alleviates the mode-collapse issue of SVGD in training variational autoencoders.
[ { "affiliations": [], "name": "SLICED KERNELIZED" }, { "affiliations": [], "name": "STEIN DISCREPANCY" }, { "affiliations": [], "name": "Wenbo Gong" }, { "affiliations": [], "name": "Yingzhen Li" }, { "affiliations": [], "name": "José Miguel Hernández-Lobato" } ]
[ { "authors": [ "Miguel A Arcones", "Evarist Gine" ], "title": "On the bootstrap of u and v statistics", "venue": "The Annals of Statistics,", "year": 1992 }, { "authors": [ "Adi Ben-Israel" ], "title": "The change-of-variables formula using matrix volume", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 1999 }, { "authors": [ "Ronald N Bracewell" ], "title": "Strip integration in radio astronomy", "venue": "Australian Journal of Physics,", "year": 1956 }, { "authors": [ "Claudio Carmeli", "Ernesto De Vito", "Alessandro Toigo", "Veronica" ], "title": "Umanitá. Vector valued reproducing kernel hilbert spaces and universality", "venue": "Analysis and Applications,", "year": 2010 }, { "authors": [ "Ciwan Ceylan", "Michael U Gutmann" ], "title": "Conditional noise-contrastive estimation of unnormalised models", "venue": "arXiv preprint arXiv:1806.03664,", "year": 2018 }, { "authors": [ "Peng Chen", "Omar Ghattas" ], "title": "Projected Stein variational gradient descent", "venue": "arXiv preprint arXiv:2002.03469,", "year": 2020 }, { "authors": [ "Tianqi Chen", "Emily Fox", "Carlos Guestrin" ], "title": "Stochastic gradient hamiltonian monte carlo", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Kacper Chwialkowski", "Heiko Strathmann", "Arthur Gretton" ], "title": "A kernel test of goodness of fit", "venue": "JMLR: Workshop and Conference Proceedings,", "year": 2016 }, { "authors": [ "Arthur P Dempster", "Nan M Laird", "Donald B Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1977 }, { "authors": [ "Ishan Deshpande", "Ziyu Zhang", "Alexander G Schwing" ], "title": "Generative modeling using the sliced Wasserstein distance", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Ishan Deshpande", "Yuan-Ting Hu", "Ruoyu Sun", "Ayis Pyrros", "Nasir Siddiqui", "Sanmi Koyejo", "Zhizhen Zhao", "David Forsyth", "Alexander G Schwing" ], "title": "Max-sliced Wasserstein distance and its use for gans", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. 
ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Yihao Feng", "Dilin Wang", "Qiang Liu" ], "title": "Learning to draw samples with amortized Stein variational gradient descent", "venue": "arXiv preprint arXiv:1707.06626,", "year": 2017 }, { "authors": [ "Jackson Gorham", "Lester Mackey" ], "title": "Measuring sample quality with Stein’s method", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2015 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jorn-Henrik Jacobsen", "David Duvenaud", "Richard Zemel" ], "title": "Cutting out the middle-man: Training and evaluating energy-based models without sampling", "venue": "arXiv preprint arXiv:2002.05616,", "year": 2020 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Wassily Hoeffding" ], "title": "A class of statistics with asymptotically normal distribution", "venue": "In Breakthroughs in Statistics,", "year": 1992 }, { "authors": [ "Jonathan Huggins", "Lester Mackey" ], "title": "Random feature Stein discrepancies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marie Huskova", "Paul Janssen" ], "title": "Consistency of the generalized bootstrap for degenerate u-statistics", "venue": "The Annals of Statistics,", "year": 1993 }, { "authors": [ "Michael F Hutchinson" ], "title": "A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines", "venue": "Communications in Statistics-Simulation and Computation,", "year": 1990 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Wittawat Jitkrittum", "Wenkai Xu", "Zoltán Szabó", "Kenji Fukumizu", "Arthur Gretton" ], "title": "A linear-time kernel goodness-of-fit test", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Soheil Kolouri", "Yang Zou", "Gustavo K Rohde" ], "title": "Sliced Wasserstein kernels for probability distributions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Soheil Kolouri", "Kimia Nadjahi", "Umut Simsekli", "Roland Badeau", "Gustavo Rohde" ], "title": "Generalized sliced Wasserstein distances", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qiang Liu", "Yihao Feng" ], "title": "Two methods for wild variational inference", "venue": "arXiv preprint arXiv:1612.00081,", "year": 2016 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Stein variational gradient descent: A general purpose Bayesian inference algorithm", "venue": "In Advances in neural information 
processing systems,", "year": 2016 }, { "authors": [ "Qiang Liu", "Jason Lee", "Michael Jordan" ], "title": "A kernelized Stein discrepancy for goodness-of-fit tests", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Yuchen Pu", "Zhe Gan", "Ricardo Henao", "Chunyuan Li", "Shaobo Han", "Lawrence Carin" ], "title": "VAE learning via Stein variational gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Rajesh Ranganath", "Dustin Tran", "Jaan Altosaar", "David Blei" ], "title": "Operator variational inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Robert J Serfling" ], "title": "Approximation theorems of mathematical statistics, volume 162", "venue": null, "year": 2009 }, { "authors": [ "Raghav Singhal", "Xintian Han", "Saad Lahlou", "Rajesh Ranganath" ], "title": "Kernelized complete conditional Stein discrepancy", "venue": "arXiv preprint arXiv:1904.04478,", "year": 2019 }, { "authors": [ "Yang Song", "Sahaj Garg", "Jiaxin Shi", "Stefano Ermon" ], "title": "Sliced score matching: A scalable approach to density and score estimation", "venue": null, "year": 1905 }, { "authors": [ "Bharath K Sriperumbudur", "Kenji Fukumizu", "Arthur Gretton", "Bernhard Schölkopf", "Gert RG Lanckriet" ], "title": "On integral probability metrics,\\phi-divergences and binary classification", "venue": "arXiv preprint arXiv:0901.2698,", "year": 2009 }, { "authors": [ "Charles Stein", "Persi Diaconis", "Susan Holmes", "Gesine Reinert" ], "title": "Use of exchangeable pairs in the analysis of simulations", "venue": "In Stein’s Method,", "year": 2004 }, { "authors": [ "Charles Stein" ], "title": "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables", "venue": "In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory. The Regents of the University of California,", "year": 1972 }, { "authors": [ "Dilin Wang", "Zhe Zeng", "Qiang Liu" ], "title": "Stein variational message passing for continuous graphical models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yuhuai Wu", "Yuri Burda", "Ruslan Salakhutdinov", "Roger Grosse" ], "title": "On the quantitative analysis", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Discrepancy measures for quantifying differences between two probability distributions play key roles in statistics and machine learning. Among many existing discrepancy measures, Stein discrepancy (SD) is unique in that it only requires samples from one distribution and the score function (i.e. the gradient up to a multiplicative constant) from the other (Gorham & Mackey, 2015). SD, a special case of integral probability metric (IPM) (Sriperumbudur et al., 2009), requires finding an optimal test function within a given function family. This optimum is analytic when a reproducing kernel Hilbert space (RKHS) is used as the test function family, and the corresponding SD is named kernelized Stein discrepancy (KSD) (Liu et al., 2016; Chwialkowski et al., 2016). Variants of SDs have been widely used in both Goodness-of-fit (GOF) tests (Liu et al., 2016; Chwialkowski et al., 2016) and model learning (Liu & Feng, 2016; Grathwohl et al., 2020; Hu et al., 2018; Liu & Wang, 2016).\nAlthough theoretically elegant, KSD, especially with RBF kernel, suffers from the ”curseof-dimensionality” issue, which leads to significant deterioration of test power in GOF tests (Chwialkowski et al., 2016; Huggins & Mackey, 2018) and mode collapse in particle inference (Zhuo et al., 2017; Wang et al., 2018). A few attempts have been made to address this problem, however, they either are limited to specific applications with strong assumptions (Zhuo et al., 2017; Chen & Ghattas, 2020; Wang et al., 2018) or require significant approximations (Singhal et al., 2019). As an alternative, in this work we present our solution to this issue by adopting the idea of “slicing”. Here the key idea is to project the score function and test inputs onto multiple one dimensional slicing directions, resulting in a variant of SD that only requires to work with one-dimensional inputs for the test functions. Specifically, our contributions are as follows.\n• We propose a novel theoretically validated family of discrepancies called sliced Stein discrepancy (SSD), along with its scalable variant called max sliced kernelized Stein discrepancy (maxSKSD) using kernel tricks and the optimal test directions.\n• A GOF test is derived based on an unbiased estimator of maxSKSD with optimal test directions. MaxSKSD achieves superior performance on benchmark problems and restricted Boltzmann machine models (Liu et al., 2016; Huggins & Mackey, 2018).\n∗Work done at Microsoft Research Cambridge\n• We evaluate the maxSKSD in model learning by two schemes. First, we train an independent component analysis (ICA) model in high dimensions by directly minimising maxSKSD, which results in faster convergence compared to baselines (Grathwohl et al., 2020). Further, we propose a particle inference algorithm based on maxSKSD called the sliced Stein variational gradient descent (S-SVGD) as a novel variant of the original SVGD (Liu & Wang, 2016). It alleviates the posterior collapse of SVGD when applied to training variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014)." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 KERNELIZED STEIN DISCREPANCY", "text": "For two probability distributions p and q supported on X ⊆ RD with continuous differentiable densities p(x) and q(x), we define the score sp(x) = ∇x log p(x) and sq(x) accordingly. For a test function f : X → RD, the Stein operator is defined as\nApf(x) = sp(x)T f(x) +∇Txf(x). 
(1) For a function f0 : RD → R, the Stein class Fq of q is defined as the set of functions satisfying Stein’s identity (Stein et al., 1972): Eq[sq(x)f0(x) +∇xf0(x)] = 0. This can be generalized to a vector function f : RD → RD where f = [f1(x), . . . , fD(x)]T by letting fi belongs to the Stein class of q for each i ∈ D. Then the Stein discrepancy (Liu et al., 2016; Gorham & Mackey, 2015) is defined as\nD(q, p) = sup f∈Fq Eq[Apf(x)] = sup f∈Fq Eq[(sp(x)− sq(x))T f(x)]. (2)\nWhen Fq is sufficiently rich, and q vanishes at the boundary of X , the supremum is obtained at f∗(x) ∝ sp(x)− sq(x) with some mild regularity conditions on f (Hu et al., 2018). Thus, the Stein discrepancy focuses on the score difference of p and q. Kernelized Stein discrepancy (KSD) (Liu et al., 2016; Chwialkowski et al., 2016) restricts the test functions to be in a D-dimensional RKHS HD with kernel k to obtain an analytic form. By defining up(x,x′) = sp(x)Tsp(x′)k(x,x′) + sp(x) T∇x′k(x,x′) + sp(x′)T∇xk(x,x′) + Tr(∇x,x′k(x,x′)) the analytic form of KSD is:\nD2(q, p) = ( sup\nf∈HD,||f ||HD≤1 Eq[Apf(x)]\n)2 = Eq(x)q(x′)[up(x,x′)]. (3)" }, { "heading": "2.2 STEIN VARIATIONAL GRADIENT DESCENT", "text": "Although SD and KSD can be directly minimized for variational inference (VI) (Ranganath et al., 2016; Liu & Feng, 2016; Feng et al., 2017), Liu & Wang (2016) alternatively proposed a novel particle inference algorithm called Stein variational gradient descent (SVGD). It applies a sequence of deterministic transformations to a set of points such that each of mappings maximally decreases the Kullback-Leibler (KL) divergence from the particles’ underlying distribution q to the target p.\nTo be specific, we define the mapping T (x) : RD → RD as T (x) = x+ φ(x) whereφ characterises the perturbations. The result from Liu & Wang (2016) shows that the optimal perturbation inside the RKHS is exactly the optimal test function in KSD. Lemma 1. (Liu & Wang, 2016) Let T (x) = x + φ(x) and q[T ](z) be the density of z = T (x) when x ∼ q(x). If the perturbation φ is in the RKHSHD and ||φ||HD ≤ D(q, p), then the steepest descent directions φ∗q,p is φ∗q,p(·) = Eq[∇x log p(x)k(x, ·) +∇xk(x, ·)] (4) and ∇ KL[q[T ]||p]| =0 = −D2(q, p).\nThe first term in Eq.(4) is called drift, which drives the particles towards a mode of p. The second term controls the repulsive force, which spreads the particles around the mode. When particles stop moving, the KL decrease magnitude D2(q, p) is 0, which means the KSD is zero and p = q a.e." }, { "heading": "3 SLICED KERNELIZED STEIN DISCREPANCY", "text": "We propose the sliced Stein discrepancy (SSD) and kernelized version named maxSKSD. Theoretically, we prove their correctness as discrepancy measures. Methodology-wise, we apply maxSKSD to GOF tests, and develop two ways for model learning." }, { "heading": "3.1 SLICED STEIN DISCREPANCY", "text": "Before moving to the details, we give a brief overview of the intuition on how to tackle the curse-offimensionality issue of SD (The right figure of Figure 1). For detailed explanation, refer to appendix B.1. This issue of Stein discrepancy (Eq.2) comes from two sources: the score function sp(x) and the test function f(x) defined on X ⊂ RD. First, we notice that comparing sp and sq is equivalent to comparing projected score srp = s T p r and s r q for all r ∈ SD−1 on an hyper-sphere (Green square in Figure 1 (Right)). This operation reduces the test function’s output from RD to R (Green circle in Figure 1 (Right)). However, its input dimension is not affected. 
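As a reference point for what follows, the empirical SVGD update of Eq. (4) with an RBF kernel and the median-heuristic bandwidth (a common default, not mandated above) can be sketched as below; this is the high-dimensional baseline whose test-function input we now set out to reduce.

```python
import torch

def svgd_step(X, score_p, step=0.1, h=None):
    """One empirical SVGD update (Eq. 4) with the RBF kernel
    k(x, y) = exp(-||x - y||^2 / h) and the median heuristic for h.
    X: (n, d) particles; score_p: maps (n, d) -> (n, d) scores of p."""
    n = X.shape[0]
    sq = torch.cdist(X, X) ** 2
    if h is None:
        h = (sq.median() / torch.log(torch.tensor(n + 1.0))).clamp(min=1e-8)
    K = torch.exp(-sq / h)                 # K[i, j] = k(x_i, x_j)
    drift = K @ score_p(X)                 # sum_j k(x_j, x_i) s_p(x_j)
    # sum_j grad_{x_j} k(x_j, x_i), written out for the RBF kernel
    repulse = (2.0 / h) * (K.sum(1, keepdim=True) * X - K @ X)
    return X + step * (drift + repulse) / n

# Example: push badly initialised particles towards a standard normal
X = torch.randn(200, 2) * 3.0 + 5.0
for _ in range(500):
    X = svgd_step(X, lambda X: -X)         # s_p(x) = -x for N(0, I)
print(X.mean(0), X.var(0))                 # ~ 0 and ~ 1 per dimension
```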
Reducing the input dimension of test functions is non-trivial, as directly removing input dimensions results in the test power decrease. This is because less information is accessed by the test function (see examples in appendix B.1). Our solution to this problem uses Radon transform which is inspired by CT-scans. It projects the original test function f(x) in Stein discrepancy (Eq. 2) (as an RD → R mapping) to a group of R → R functions along a set of directions (g ∈ SD−1). Then, this group of functions are used as the new test functions to define the proposed discrepancy. The invertibility of Radon transform ensures that testing with input in the original space RD is equivalent to the test using a group of low dimensional functions with input in R. Thus, the above two steps not only reduce the dimensions of the test function’s output and input, but also maintain the validity of the resulting discrepancy as each step is either equivalent or invertible.\nIn detail, assume two distributions p and q supported on RD with differentiable densities p(x) and q(x), and define the test functions f(·; r, g) : RD → R such that f(x; r, g) = frg ◦ hg(x) = frg(x\nTg), where hg(·) is the inner product with g and frg : R→ R. One should note that the r and g in f(·; r, g) should not just be treated as parameters in a test function f . In fact, they are more like the index to indicate that for each pair of r, g, we need a new f(·; r, g), i.e. new frg, which is completely independent to other test functions. The proposed sliced Stein discrepancy (SSD), defined using two uniform distributions pr(r) and pg(g) over the hypersphere SD−1, is given by the following, with frg ∈ Fq meaning f(·; r, g) ∈ Fq:\nS(q, p) = Epr,pg\n[ sup\nfrg∈Fq Eq[srp(x)frg(xTg) + rTg∇xT gfrg(xTg)]\n] . (5)\nWe verify the proposed SSD is a valid discrepancy measure, namely, S(q, p) = 0 iff. q = p a.e. Theorem 1. (SSD Validity) If assumptions 1-4 in appendix A are satisfied, then for two probability distributions p and q, S(q, p) ≥ 0, and S(q, p) = 0 if and only if p = q a.e.\nDespite this attractive theoretical result, SSD is difficult to compute in practice. Specifically, the expectations over r and g can be approximated by Monte Carlo but this typically requires a very\nlarge number of samples in high dimensions (Deshpande et al., 2019). We propose to relax such limitations by using only a finite number of slicing directions r from an orthogonal basis Or of RD, e.g. the standard basis of one-hot vectors, and the corresponding optimal test direction gr for each r. We call this variant maxSSD, which is defined as follows and validated in Corollary 1.1:\nSmax(q, p) = ∑ r∈Or sup frgr∈Fq,gr∈SD−1 Eq[srp(x)frgr (xTgr) + rTgr∇xT grfrgr (xTgr)]. (6)\nCorollary 1.1. (maxSSD) Assume the conditions in Theorem 1, then Smax(q, p) = 0 iff. p = q a.e." }, { "heading": "3.2 CLOSED FORM SOLUTION WITH THE KERNEL TRICK", "text": "The optimal test function given r and g is intractable without further assumptions on the test function families. This introduces another scalability issue as optimizing these test functions explicitly can be time consuming. Fortunately, we can apply the kernel trick to obtain its analytic form. Assume for each test function frg ∈ Hrg, where Hrg is a scalar-valued RKHS equipped with kernel k(x,x′; r, g) = krg(xTg,x′Tg) that satisfies assumption 5 in appendix A and frg(xTg) = 〈frg, krg(xTg, ·)〉Hrg . 
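To keep the construction concrete before kernelizing it, the integrand of Eq. (5) for a single slice pair (r, g) can be sketched as follows; the 1-D test function (a tanh) is purely illustrative, and under the null p = q the sample average of this integrand should vanish by Stein's identity.

```python
import torch

def ssd_integrand(X, score_p, r, g, f=torch.tanh):
    """Evaluate s_p^r(x) f(x^T g) + (r^T g) f'(x^T g) from Eq. (5)
    at the rows of X, for one slice pair (r, g) on the unit sphere
    and a scalar 1-D test function f."""
    s_r = score_p(X) @ r                        # projected scores, (n,)
    t = (X @ g).detach().requires_grad_(True)   # 1-D projected inputs
    ft = f(t)
    dft = torch.autograd.grad(ft.sum(), t)[0]   # f'(x^T g)
    return s_r * ft.detach() + torch.dot(r, g) * dft

# Under the null p = q = N(0, I) the average is ~ 0 for any (r, g)
d = 5
r = torch.eye(d)[0]                             # one basis direction
g = torch.randn(d)
g = g / g.norm()
X = torch.randn(10000, d)
print(ssd_integrand(X, lambda X: -X, r, g).mean())
```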
We define the following quantities:\nξp,r,g(x, ·) = srp(x)krg(xTg, ·) + rTg∇xT gkrg(xTg, ·), (7) hp,r,g(x,y) = s r p(x)krg(x Tg,yTg)srp(y) + r Tgsrp(y)∇xT gkrg(xTg,yTg)+\nrTgsrp(x)∇yT gkrg(xTg,yTg) + (rTg)2∇2xT g,yT gkrg(xTg,yTg). (8) The following theorem describes the optimal test function inside SSD (Eq.(5)) and maxSSD (Eq.(6)). Theorem 2. (Closed form solution) If Eq[hp,r,g(x,x)] <∞, then\nD2rg(q, p) = || sup frg∈Hrg,||frg||≤1 Eq[srp(x)frg(xTg) + rTg∇xT gfrg(xTg)]||2\n= ||Eq[ξp,r,g(x)]||2Hrg = Eq(x)q(x′)[hp,r,g(x,x′)]. (9)\nNext, we propose the kernelized version of SSD with orthogonal basis Or, called SKSD. Theorem 3. (SKSD as a discrepancy) For two probability distributions p and q, given assumptions 1,2 and 5 in appendix A and Eq[hp,r,g(x,x)] <∞ for all r and g, we define SKSD as\nSKo(q, p) = ∑ r∈Or ∫ SD−1 pg(g)D 2 rg(q, p)dg, (10)\nwhich is equal to 0 if and only if p = q a.e.\nFollowing the same idea of maxSSD (Eq.6), it suffices to use optimal slice direction gr for each r ∈ Or, resulting in a slicing matrix G ∈ SD×(D−1). We name this discrepancy as maxSKSD, or maxSKSD-g when we need to distinguish it from another variant described later. Corollary 3.1. (maxSKSD) Assume the conditions in Theorem 3 are satisfied. Then\nSKmax(q, p) = ∑ r∈Or sup gr D2rgr (q, p) (11)\nis equal to 0 if and only if p = q a.e.\nFigure 1 (Left) clarifies the connections between the mentioned discrepancies. We emphasise that using a single projection g in maxSKSD may be insufficient when no single projected feature xTg is informative enough to describe the difference between p and q. Instead, in maxSKSD, for each score projection r ∈ Or, we have a corresponding gr. One can also use the optimal r to replace the summation over Or, which provides additional benefits in certain GOF tests. We call this discrepancy maxSKSD-rg, and its validity can be proved accordingly. Interestingly, in appendix G, we show under certain scenarios maxSKSD-g can have inferior performance due to the noisy information provided by the redundant dimensions. Further, we show that such limitation can be efficiently addressed by using maxSKSD-rg.\nKernel choice and optimalG RBF kernel with median heuristics is a common choice. However, better kernels, e.g. deep kernels which evaluate a given kernel on the transformed input φ(x), might be preferred. It is non-trivial to directly use such kernel on SKSD or maxSKSD. We propose an adapted form of Eq.(10) to incorporate such kernel and maintain its validity. We include the details in appendix D and leave the experiments for future work.\nThe quality of sliced directionG is crucial for the performance of both maxSKSD-g or maxSKSD-rg. Indeed, it represents the projection directions that two distributions differ the most. The closed-form solutions of G is not analytic in general, in practice, finding the optimal G involves solving other difficult optimizations as well (projection r and test function frg). For the scope of this work, we obtained G by optimizing maxSKSD-g or maxSKSD-rg using standard gradient optimization, e.g. Adam, with random initialization. Still in some special cases (e.g. p, q are full-factorized), analytic solutions of optimalG exists, which is further discussed in appendix E." }, { "heading": "3.3 APPLICATION OF MAXSKSD", "text": "Goodness-of-fit Test Assume the optimal test directions gr ∈ G are available, maxSKSD (Eq.(11)) can then be estimated using U-statistics (Hoeffding, 1992; Serfling, 2009). Given i.i.d. 
samples {xi}Ni=1 ∼ q, we have an unbiased minimum variance estimator:\nSK ∧ max(q, p) = 1 N(N − 1) ∑ r∈Or ∑ 1≤i 6=j≤N hp,r,gr (xi,xj). (12)\nThe asymptotic behavior of the estimator is analyzed in appendix F.1. We use bootstrap (Liu et al., 2016; Huskova & Janssen, 1993; Arcones & Gine, 1992) to determine the threshold for rejecting the null hypothesis as indicated in algorithm 1. The bootstrap samples can be calculated by\nSK ∧∗ m = ∑\n1≤i 6=j≤N\n(wmi − 1\nN )(wmj −\n1 N ) ∑ r∈Or hp,r,gr (xi,xj) (13)\nwhere (wm1 , . . . , w m N ) M m=1 are random weights drawn from multinomial distributions Multi(N, 1N , . . . , 1 N ).\nAlgorithm 1: GOF Test with maxSKSD U-statistics Input :Samples {xi}Ni=1 ∼ q(x), score function sp(x), Orthogonal basis Or, optimal test\ndirection gr for each r ∈ Or, kernel function krg , significant level α, and bootstrap sample size M .\nHypothesis :H0: p = q v.s. H1: q 6= p Compute SK ∧ max(q, p) using U-statistic Eq.(12); Generate M bootstrap samples {SK ∧∗\nm}Mm=1 using Eq.(13); Reject null hypothesis H0 if the proportion SK ∧∗ m > SK ∧ max(q, p) is less than α.\nModel Learning The proposed maxSKSD can be applied to model learning in two ways. First, it can be directly used as a training objective, in such case q is the data distribution and p is the model to be learned, and the learning algorithm performs minp SKmax(q, p). The second model learning scheme is to leverage the particle inference for latent variables and train the model parameters using an EM-like (Dempster et al., 1977) algorithm. Similar to the relation between SVGD and KSD, we can derive a corresponding particle inference algorithm based on maxSKSD, called sliced-SVGD (S-SVGD). In short, we define a specific form of the perturbation as φ(x) = [φgi(x Tgi), . . . , φgD (x TgD)]\nT and modify the proofs of Lemma 1 accordingly. The resulting S-SVGD algorithm uses kernels defined on one dimensional projected samples, which sidesteps the vanishing repulsive force problem of SVGD in high dimensions (Zhuo et al., 2017; Wang et al., 2018). We illustrate this in Figure 2 by estimating the variance\nof a standard Gaussian with the particles obtained by SVGD or S-SVGD (see appendix J.1). We see that as the dimension increases, SVGD severely under-estimates the variance of p, while the S-SVGD remains robust. Furthermore, its validity is justified since in such case the KL gradient equals to maxSKSD which is a valid discrepancy. Readers are referred to appendix F.2 for the derivations. We also give an analysis of their memory and computational cost for both GOF and model learning in appendix H." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 GOODNESS OF FIT TEST", "text": "We evaluate maxSKSD (Eq.(11)) for GOF tests in high dimensional problems. First, we demonstrate its robustness to the increasing dimensionality using the Gaussian GOF benchmarks (Jitkrittum et al., 2017; Huggins & Mackey, 2018; Chwialkowski et al., 2016). Next, we show the advantage of our method for GOF tests on 50-dim Restricted Boltzmann Machine (RBM) (Liu et al., 2016; Huggins & Mackey, 2018; Jitkrittum et al., 2017). 
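A self-contained sketch of the U-statistic of Eq. (12) and the bootstrap of Eq. (13) could look as follows; the RBF kernel on the projected inputs, the fixed bandwidth, and the naive choice gr = r in the example are illustrative assumptions rather than the tuned directions used in the experiments below.

```python
import numpy as np

def h_prg(X, score, r, g, h=1.0):
    """Gram matrix of h_{p,r,g} (Eq. 8) with the RBF kernel
    k(a, b) = exp(-(a - b)^2 / h) evaluated on the projections x^T g.
    X: (n, d) samples from q; score: (n, d) scores of p at X."""
    sr = score @ r                             # projected scores s_p^r, (n,)
    t = X @ g                                  # projected inputs x^T g, (n,)
    diff = t[:, None] - t[None, :]
    K = np.exp(-diff ** 2 / h)
    dK_x = -2.0 * diff / h * K                 # d k / d(x^T g)
    dK_y = -dK_x                               # d k / d(y^T g)
    d2K = (2.0 / h - 4.0 * diff ** 2 / h ** 2) * K
    rg = float(r @ g)
    return (sr[:, None] * K * sr[None, :] + rg * sr[None, :] * dK_x
            + rg * sr[:, None] * dK_y + rg ** 2 * d2K)

def gof_test(X, score, R, G, alpha=0.05, m=1000, seed=0):
    """MaxSKSD U-statistic (Eq. 12) plus the multinomial bootstrap of
    Eq. (13); returns the statistic and whether H0: p = q is rejected."""
    n = X.shape[0]
    H = sum(h_prg(X, score, r, g) for r, g in zip(R, G))
    np.fill_diagonal(H, 0.0)                   # U-statistic excludes i = j
    stat = H.sum() / (n * (n - 1))
    rng = np.random.default_rng(seed)
    w = rng.multinomial(n, np.ones(n) / n, size=m) / n - 1.0 / n
    boot = np.einsum('mi,ij,mj->m', w, H, w)
    return stat, float((boot >= stat).mean()) < alpha

# Example with H0 true: q = p = N(0, I), whose score is s_p(x) = -x
d, n = 5, 500
X = np.random.default_rng(1).normal(size=(n, d))
R = np.eye(d)                                  # one-hot basis for r
G = np.eye(d)                                  # naive choice g_r = r
print(gof_test(X, -X, R, G))                   # rejects roughly alpha of the time
```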
We included in the comparison extensive baseline test statistics for the GOF tests: Gaussian or Cauchy random Fourier features (RFF) (Rahimi & Recht, 2008), KSD with RBF kernel (Liu et al., 2016; Chwialkowski et al., 2016), finite set Stein discrepancy (FSSD) with random or optimized test locations (Jitkrittum et al., 2017), random feature Stein discrepancy (RFSD) with L2 SechExp and L1 IMQ kernels (Huggins & Mackey, 2018), and maximum mean discrepancy (MMD) (Gretton et al., 2012) with RBF kernel. Notice that we use gradient descent to obtain the test directions gr (and potentially the slicing directions r) for Eq.(11)." }, { "heading": "4.1.1 GOF TESTS WITH HIGH DIMENSIONAL GAUSSIAN BENCHMARKS", "text": "We conduct 4 different benchmark tests with p = N(0, I): (1) Null test: q = p; (2) Laplace: q(x) = ∏_{d=1}^D Lap(x_d|0, 1/√2) with mean and variance matched to p; (3) Multivariate-t: q is a fully factorized multivariate-t with 5 degrees of freedom, 0 mean and scale 1; to match the variances of p and q, we change the variance of p to 5/(5−2); (4) Diffusion: q(x) = N(0, Σ1), where Σ1 equals the identity except that the variance of the first dimension is 0.3. For the testing setup, we set the significance level α = 0.05. For FSSD and RFSD, we use the open-sourced code from the original publications. We only consider maxSKSD-g here as it already performs nearly optimally. We refer to appendix I.1 for details.

Figure 3 shows the GOF test performances and the corresponding discrepancy values. In summary, the proposed maxSKSD outperforms the baselines in all tests: the result is robust to increasing dimensions, and the discrepancy values match the expected behaviours.

Null The left-most column in Figure 3 shows that all methods behave as expected, with the rejection rate close to the significance level, except for RFSD with the L2 SechExp kernel. All the discrepancy values oscillate around 0, with KSD being the least stable.

Laplace and Multivariate-t The two middle columns of Figure 3 show that maxSKSD-g achieves a nearly perfect rejection rate consistently as the dimension increases, while the test power of all baselines decreases significantly. For the discrepancy values, similar to the KL divergence between q and p, maxSKSD-g increases linearly with the dimension due to the independence assumptions.

Diffusion This is a more challenging setting since p and q differ in only one of their marginal distributions, which can easily be buried in high dimensions. As shown in the rightmost column of Figure 3, all methods fail in high dimensions except maxSKSD-g, which still consistently achieves optimal performance. For the discrepancy values, we expect a positive constant due to the single marginal difference between p and q. Only maxSKSD-g behaves as expected as the problem dimension increases. The decreasing value at the beginning is probably due to the difficulty of finding the optimal direction g in high dimensions when the training set is small.

4.1.2 RBM GOF TEST

We demonstrate the power of maxSKSD for GOF tests on RBMs, now also including results for maxSKSD-rg. We follow the test setups in Liu et al. (2016); Jitkrittum et al. (2017); Huggins & Mackey (2018), where different amounts of noise are injected into the weights to form the alternative hypothesis q. The samples are drawn using block Gibbs samplers. Refer to appendix I.2 for details. Figure 4 shows that maxSKSD-based methods dominate the baselines, especially with maxSKSD-rg significantly outperforming the others.
At perturbation level 0.01, maxSKSD-rg achieves a 0.96 rejection rate, while all others are below 0.5. This result shows the advantage of optimizing the slicing directions r." }, { "heading": "4.2 MODEL LEARNING", "text": "We evaluate the efficiency of maxSKSD-based algorithms in training machine learning models. First, we use independent component analysis (ICA), which is often used as a benchmark for evaluating training methods for energy-based models (Gutmann & Hyvärinen, 2010; Hyvärinen, 2005; Ceylan & Gutmann, 2018). Our approach trains the ICA model by directly minimizing maxSKSD. Next, we evaluate the proposed S-SVGD particle inference algorithm, combined with amortization (Feng et al., 2017; Pu et al., 2017), in the training of a variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) on binarized MNIST. Appendix J.5 also shows superior results for S-SVGD when training a Bayesian neural network (BNN) on UCI datasets (Dua & Graff, 2017)." }, { "heading": "4.2.1 ICA", "text": "ICA consists of a simple generative process z ∼ Lap(0, 1) and x = Wz, where the model parameters form a non-singular matrix W ∈ RD×D. The log density for x is log p(x) = log pz(W−1x) + C, where the normalization constant C can be ignored when training with Stein discrepancies. We train the models on data sampled from a randomly initialized ICA model and evaluate the corresponding test log-likelihoods. We compare maxSKSD with KSD and the state-of-the-art LSD (Grathwohl et al., 2020). For more details on the setup, we refer the reader to appendix J.2.

Table 1 shows that both maxSKSD and LSD are robust to increasing dimensions, with maxSKSD being better when D is very large. Also, at D = 200, maxSKSD converges significantly faster than LSD (see Figure 10 in appendix J.3). This faster convergence is due to the closed-form solution for the optimal test functions, whereas LSD requires adversarial training. While KSD is also kernel-based, it suffers from the curse-of-dimensionality and fails to train the model properly for D > 20. Instead, the proposed maxSKSD successfully avoids the problems of KSD with high-dimensional data.

Table 3: Label entropy and accuracy for imputed images.

Method Entropy Accuracy
Vanilla VAE 0.297 0.718
SVGD VAE 0.538 0.691
S-SVGD VAE 0.542 0.728" }, { "heading": "4.2.2 AMORTIZED SVGD", "text": "Finally, we consider training VAEs with implicit encoders on dynamically binarized MNIST. The decoder is trained as in vanilla VAEs, but the encoder is trained by amortization (Feng et al., 2017; Pu et al., 2017), which minimizes the mean squared error between the initial samples from the encoder and the modified samples driven by the SVGD/S-SVGD dynamics (Algorithm 3 in appendix J.4).

We report performance in terms of test log-likelihood (LL). Furthermore, we consider an imputation task: we remove the pixels in the lower half of the image and impute the missing values using (approximate) posterior sampling from the VAE models. The performance is measured in terms of imputation diversity and correctness, using label entropy and accuracy. For fair comparison, we do not tune the coefficient of the repulsive force. We refer to appendix J.4 for details.

Table 2 reports the average test LL. We observe that S-SVGD is much more robust to increasing latent dimensions than SVGD. Specifically, with D = 16, SVGD performs best, with S-SVGD performing only slightly worse. However, when the dimension increases, the LL of SVGD drops significantly.
At D = 64, a common choice of latent dimensionality, SVGD even performs significantly worse than the vanilla VAE. S-SVGD, on the other hand, is much more robust. Note that the purpose of this experiment is to compare their robustness rather than to achieve state-of-the-art performance. Still, performance could easily be boosted, e.g. by running more S-SVGD steps before each encoder update; we leave this for future work.

For the imputation task, we compute the label entropy and accuracy of the imputed images (Table 3; a sketch of these metrics is given below). We observe that S-SVGD has higher label entropy than the vanilla VAE and better accuracy than SVGD. This means that both S-SVGD and SVGD capture the multi-modal nature of the posterior, unlike the uni-modal Gaussian distribution. However, high label entropy by itself may not be a good indicator of the quality of the learned posterior. Consider a counter-example in which the imputed images are diverse but do not look like any digit; this would also give high label entropy even though the quality of the posterior is poor. Thus, we use the accuracy to indicate the “correctness” of the imputed images, with higher label accuracy meaning the imputed images are close to the original image. Taken together, a good model should give high label entropy along with high label accuracy. We observe that S-SVGD produces more diverse imputed images with high imputation accuracy." }, { "heading": "4.3 SUMMARY OF THE EXPERIMENTS IN APPENDIX", "text": "We present further empirical results on GOF tests and model learning in the appendix to demonstrate the advantages of the proposed maxSKSD. As a summary glance of the results:

• In appendix G, we analyse the potential limitations of maxSKSD-g and show that they can be mitigated by maxSKSD-rg, i.e. optimising the slicing direction r;

• In appendix I.3, we successfully apply maxSKSD to selecting the step size for stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014);

• In appendix J.5, we show that the proposed S-SVGD approach outperforms the original SVGD on Bayesian neural network regression tasks." }, { "heading": "5 RELATED WORK", "text": "Stein Discrepancy: SD (Gorham & Mackey, 2015) and KSD (Liu et al., 2016; Chwialkowski et al., 2016) were originally proposed for GOF tests. Since then, research progress has been made to improve these two discrepancies. For SD, LSD (Grathwohl et al., 2020; Hu et al., 2018) was proposed to increase the capacity of test functions using neural networks with L2 regularization. On the other hand, FSSD (Jitkrittum et al., 2017) and RFSD (Huggins & Mackey, 2018) aim to reduce the computation cost of KSD from $O(n^2)$ to $O(n)$, where n is the number of samples. Still, the curse-of-dimensionality issue remains to be addressed in KSD, and the only attempt so far (to the best of our knowledge) is the kernelized complete conditional Stein discrepancy (KCC-SD) (Singhal et al., 2019), which shares our idea of avoiding kernel evaluations on high dimensional inputs, but through comparing conditional distributions. KCC-SD requires sampling from $q(x_d \mid x_{-d})$, which often needs significant approximations in practice due to its intractability. This makes KCC-SD less suited for GOF testing due to poor estimation quality in high dimensions.
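As a concrete reading of the imputation metrics of Section 4.2.2, the label entropy and accuracy for the imputations of a single image could be computed as below. This is a hypothetical sketch: whether entropy is computed per image and then averaged, and the classifier producing `pred_labels`, are assumptions.

```python
import numpy as np

def imputation_metrics(pred_labels, true_label, n_classes=10):
    """Diversity and correctness of the imputations of one image: entropy of
    the predicted-label histogram, and the fraction matching the true label."""
    pred_labels = np.asarray(pred_labels)
    p = np.bincount(pred_labels, minlength=n_classes) / len(pred_labels)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    accuracy = np.mean(pred_labels == true_label)
    return entropy, accuracy
```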
Our approach, in contrast, does not require such conditional sampling, and the corresponding estimator is well-behaved asymptotically.

Wasserstein Distance and Score Matching: The sliced Wasserstein distance (SWD) (Kolouri et al., 2016) and sliced score matching (SSM) (Song et al., 2019) also use the “slicing” idea. However, their motivation is to address computational issues rather than statistical difficulties in high dimensions. SWD leverages the closed-form solution of the 1D Wasserstein distance by projecting distributions onto 1D slices. SSM uses Hutchinson's trick (Hutchinson, 1990) to approximate the trace of the Hessian.

Particle Inference: Zhuo et al. (2017) and Wang et al. (2018) proposed message passing SVGD to tackle the well-known mode collapse problem of SVGD using local kernels in the graphical model. However, our work differs significantly in both theory and applications. Theoretically, the discrepancy behind their work is only valid if p and q have the same Markov blanket structure (refer to Section 3 in Wang et al. (2018) for a detailed discussion). Thus, unlike our method, no GOF test or practical inference algorithm can be derived for generic cases. Empirically, the Markov blanket structure information is often unavailable, whereas our method only requires projections that can easily be obtained by optimization. Projected SVGD (pSVGD) (Chen & Ghattas, 2020) is a very recent attempt that updates the particles in an adaptively constructed low dimensional space, resulting in a biased inference algorithm. The major difference compared to S-SVGD is that our work still updates the particles in the original space, with the kernel evaluated on 1D projections. Furthermore, S-SVGD can theoretically recover the correct target distribution. There are no real-world experiments in Chen & Ghattas (2020), and a stable implementation of pSVGD is non-trivial, so we did not consider pSVGD when selecting the baselines." }, { "heading": "6 CONCLUSION", "text": "We proposed the sliced Stein discrepancy (SSD), as well as its scalable and kernelized version maxSKSD, to address the curse-of-dimensionality issues of the Stein discrepancy. The key idea is to project the score function onto one-dimensional slices and to define (kernel-based) test functions on one-dimensional projections. We also theoretically prove their validity as discrepancy measures. We conduct extensive experiments, including GOF tests and model learning, to show maxSKSD's improved performance and robustness in high dimensions. There are three exciting avenues for future research. First, although validated by our theoretical study in appendix D, practical approaches to incorporating deep kernels into SSD remain an open question. Second, the performance of maxSKSD crucially depends on the optimal projection direction, so better optimization methods to efficiently construct this direction are needed. Lastly, we believe “slicing” is a promising direction for kernel design to increase robustness to high dimensional problems in general. For example, MMD could easily be extended to high dimensional two-sample tests using this kernel design trick." } ]
null
null
SP:3f164a85f782ec9beeb00b19638f98d0cb6a6265
[ "The paper proposes to use a GAN framework to generate the realistic neuronal calcium signals, enabling to scale-up the neuronal population activity data. The solution is based on WAVEGAN architecture with Wasserstein distance to train on calcium fluorescent signals. The experiments are performed in comparison to artificial calcium signals with known ground-truth closely resembles the underlying data distribution. The accuracy of the approach, robustness of generated signals from the model are evaluated." ]
Calcium imaging has become a powerful and popular technique to monitor the activity of large populations of neurons in vivo. However, for ethical considerations and despite recent technical developments, recordings are still constrained to a limited number of trials and animals. This limits the amount of data available from individual experiments and hinders the development of analysis techniques and models for more realistic sizes of neuronal populations. The ability to artificially synthesize realistic neuronal calcium signals could greatly alleviate this problem by scaling up the number of trials. Here, we propose a Generative Adversarial Network (GAN) model to generate realistic calcium signals as seen in neuronal somata with calcium imaging. To this end, we propose CalciumGAN, a model based on the WaveGAN architecture and train it on calcium fluorescent signals with the Wasserstein distance. We test the model on artificial data with known ground-truth and show that the distribution of the generated signals closely resembles the underlying data distribution. Then, we train the model on real calcium traces recorded from the primary visual cortex of behaving mice and confirm that the deconvolved spike trains match the statistics of the recorded data. Together, these results demonstrate that our model can successfully generate realistic calcium traces, thereby providing the means to augment existing datasets of neuronal activity for enhanced data exploration and modelling.
[]
[ { "authors": [ "Michael J Berridge", "Peter Lipp", "Martin D Bootman" ], "title": "The versatility and universality of calcium signalling", "venue": "Nature reviews Molecular cell biology,", "year": 2000 }, { "authors": [ "Piotr Bojanowski", "Armand Joulin", "David Lopez-Paz", "Arthur Szlam" ], "title": "Optimizing the latent space of generative networks", "venue": "arXiv preprint arXiv:1707.05776,", "year": 2017 }, { "authors": [ "Emery N Brown", "Robert E Kass", "Partha P Mitra" ], "title": "Multiple neural spike train data analysis: state-of-the-art and future challenges", "venue": "Nature neuroscience,", "year": 2004 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language gans falling short", "venue": "arXiv preprint arXiv:1811.02549,", "year": 2018 }, { "authors": [ "Peter Dayan", "Laurence F Abbott" ], "title": "Theoretical neuroscience: computational and mathematical modeling of neural systems", "venue": null, "year": 2001 }, { "authors": [ "M. Denker", "A. Yegenoglu", "S. Grün" ], "title": "Collaborative HPC-enabled workflows on the HBP Collaboratory using the Elephant framework", "venue": "Neuroinformatics", "year": 2018 }, { "authors": [ "Chris Donahue", "Julian McAuley", "Miller Puckette" ], "title": "Adversarial audio synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Johannes Friedrich", "Pengcheng Zhou", "Liam Paninski" ], "title": "Fast online deconvolution of calcium imaging data", "venue": "PLoS computational biology,", "year": 2017 }, { "authors": [ "Flavio Fröhlich" ], "title": "Chapter 11 - optical measurements and perturbations", "venue": "In Flavio Fröhlich (ed.), Network Neuroscience,", "year": 2016 }, { "authors": [ "Aidan N Gomez", "Sicong Huang", "Ivan Zhang", "Bryan M Li", "Muhammad Osama", "Lukasz Kaiser" ], "title": "Unsupervised cipher cracking using discrete gans", "venue": "arXiv preprint arXiv:1801.04883,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kenneth D Harris", "Rodrigo Quian Quiroga", "Jeremy Freeman", "Spencer L Smith" ], "title": "Improving data quality in neuronal population recordings", "venue": "Nature neuroscience,", "year": 2016 }, { "authors": [ "Julia U Henschke", "Evelyn Dylda", "Danai Katsanevaki", "Nathalie Dupuy", "Stephen P Currie", "Theoklitos Amvrosiadis", "Janelle MP Pakan", "Nathalie L Rochefort" ], "title": "Reward association enhances stimulus-specific representations in primary visual cortex", "venue": "Current Biology,", "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { 
"authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Dmitry R Lyamzin", "Jakob H Macke", "Nicholas A Lesica" ], "title": "Modeling population spike trains with specified time-varying spike rates, trial-to-trial variability, and pairwise signal and noise correlations", "venue": "Frontiers in computational neuroscience,", "year": 2010 }, { "authors": [ "Jakob H Macke", "Philipp Berens", "Alexander S Ecker", "Andreas S Tolias", "Matthias Bethge" ], "title": "Generating spike trains with specified correlation coefficients", "venue": "Neural computation,", "year": 2009 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables, 2016", "venue": null, "year": 2016 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh" ], "title": "Mixed precision training", "venue": "arXiv preprint arXiv:1710.03740,", "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Manuel Molano-Mazon", "Arno Onken", "Eugenio Piasini", "Stefano Panzeri" ], "title": "Synthesizing realistic neural population activity patterns using generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Augustus Odena", "Vincent Dumoulin", "Chris Olah" ], "title": "Deconvolution and checkerboard artifacts. Distill, 2016", "venue": "doi: 10.23915/distill.00003. URL http://distill.pub/2016/ deconv-checkerboard", "year": 2016 }, { "authors": [ "Janelle MP Pakan", "Stephen P Currie", "Lukas Fischer", "Nathalie L Rochefort" ], "title": "The impact of visual cues, reward, and motor feedback on the representation of behaviorally relevant spatial locations in primary visual cortex", "venue": "Cell reports,", "year": 2018 }, { "authors": [ "Poornima Ramesh", "Mohamad Atayi", "Jakob H Macke" ], "title": "Adversarial training of neural encoding models on population spike trains", "venue": null, "year": 2019 }, { "authors": [ "Hernan Gonzalo Rey", "Carlos Pedreira", "Rodrigo Quian Quiroga" ], "title": "Past, present and future of spike sorting techniques", "venue": "Brain research bulletin,", "year": 2015 }, { "authors": [ "MCW van Rossum" ], "title": "A novel spike distance", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "Shreya Saxena", "John P. 
Cunningham" ], "title": "Towards the neural population doctrine, 2019", "venue": null, "year": 2019 }, { "authors": [ "Elad Schneidman", "Michael J Berry", "Ronen Segev", "William Bialek" ], "title": "Weak pairwise correlations imply strongly correlated network states in a neural population", "venue": null, "year": 2006 }, { "authors": [ "Benjamin Staude", "Stefan Rotter", "Sonja Grün" ], "title": "Cubic: cumulant based inference of higher-order correlations in massively parallel spike trains", "venue": "Journal of computational neuroscience,", "year": 2010 }, { "authors": [ "Ian H. Stevenson", "Konrad P. Kording" ], "title": "How advances in neural recording affect data analysis", "venue": "Nature Neuroscience,", "year": 2011 }, { "authors": [ "Carsen Stringer", "Michalis Michaelos", "Marius Pachitariu" ], "title": "High precision coding in visual cortex. bioRxiv, 2019. doi: 10.1101/679324", "venue": "URL https://www.biorxiv.org/content/ early/2019/11/04/679324", "year": 2019 }, { "authors": [ "Gašper Tkačik", "Olivier Marre", "Dario Amodei", "Elad Schneidman", "William Bialek", "Michael J Berry II" ], "title": "Searching for collective behavior in a large network of sensory neurons", "venue": "PLoS Comput Biol,", "year": 2014 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Yizhe Zhang", "Zhe Gan", "Kai Fan", "Zhi Chen", "Ricardo Henao", "Dinghan Shen", "Lawrence Carin" ], "title": "Adversarial feature matching for text generation", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ability to record accurate neuronal activities from behaving animals is essential for the study of information processing in the brain. Electrophysiological recording, which measures the rate of change in voltage by microelectrodes inserted in the cell membrane of a neuron, has high temporal resolution and is considered the most accurate method to measure spike activities (Dayan & Abbott, 2001). However, this method is not without shortcomings (Harris et al., 2016). For instance, a single microelectrode can only detect activity from few neurons in close proximity, and extensive pre-processing is required to infer single-unit activity from a multi-unit signal. Disentangling circuit computations in neuronal populations of a large scale remains a difficult task (Rey et al., 2015). On the other hand, calcium imaging monitors the calcium influx in the cell as a proxy of an action potential (Berridge et al., 2000). Contrary to electrophysiological recordings, this technique yields data with high spatial resolution and low temporal resolution (Grienberger & Konnerth, 2012), and has become a powerful imaging technique to monitor large neuronal populations. With the advancements in these recording technologies, it has become increasingly easier to obtain high-quality neuronal activity data in vivo from live animals. However, due to ethical considerations, the acquired datasets are often limited by the number of trials or the duration of each trial on a live animal. This poses a problem for assessing analysis techniques that take into account higher-order correlations (Brown et al., 2004; Staude et al., 2010; Stevenson & Kording, 2011; Saxena & Cunningham, 2019). Even for linear decoders, the number of trials can be more important for determining coding accuracy than the number of neurons (Stringer et al., 2019).\nGenerative models of neuronal activity hold the promise of alleviating the above problem by enabling the synthesis of an unlimited number of realistic samples for assessing advanced analysis methods. Popular modelling approaches such as the maximum entropy framework (Schneidman et al., 2006; Tkačik et al., 2014) and the latent variable model (Macke et al., 2009; Lyamzin et al., 2010) have shown ample success in modelling spiking activities, though many of these models re-\nquire strong assumptions on the data and cannot generalize to different cortical areas. To this end, GANs have shown tremendous success in synthesizing data across a vast variety of domains and data-types (Karras et al., 2017; Gomez et al., 2018; Donahue et al., 2019), and are good candidates for modelling neuronal activities. Spike-GAN (Molano-Mazon et al., 2018) demonstrated that GANs can model neural spikes that accurately match the statistics of real recorded spiking behaviour from a small number of neurons. Moreover, the discriminator in Spike-GAN is able to learn to detect which population activity pattern is the relevant feature, and this can provide insights into how a population of neurons encodes information. Ramesh et al. (2019) trained a conditional GAN (Mirza & Osindero, 2014), conditioned on the stimulus, to generate multivariate binary spike trains. 
They fitted the generative model to data recorded in the V1 area of macaque visual cortex, and the GAN-generated spike trains were able to capture the firing rate and pairwise correlation statistics better than the dichotomized Gaussian model (Macke et al., 2009) and a deep supervised convolution model.

Nevertheless, the aforementioned deep generative models operate on spike trains, which are discrete in nature, and back-propagation on discrete data remains a difficult task (Caccia et al., 2018). For instance, Ramesh et al. (2019) used the REINFORCE gradient estimate (Williams, 1992) to train the generator in order to perform back-propagation on discrete data. Still, gradient estimation with the REINFORCE approach yields large variance, which is known to be challenging for optimization (Maddison et al., 2016; Zhang et al., 2017). In addition, generating and training on binary spike trains directly introduces uncertainty, as the generator has to learn the deconvolution process as well, making it an even more difficult task.

In this work, we investigate the possibility of synthesising continuous calcium fluorescent signals using the GAN framework, as a method to scale up or augment the amount of population activity data. In addition, modelling the calcium signals directly has several advantages: (a) when synthesising binary spike trains directly, the generator needs to learn the deconvolution process as well, introducing additional uncertainty that is not present for calcium signals; (b) calcium imaging signals inherently carry more information about the neuronal activities than binary spike trains; (c) based on calcium signals with known ground-truth, calcium deconvolution algorithms can be evaluated. Hence, we devised a workflow to synthesize and evaluate calcium imaging signals, and validated the method on artificial data with known ground-truth as well as on real two-photon calcium (Ca2+) imaging data recorded from the primary visual cortex of a behaving mouse (Pakan et al., 2018; Henschke et al., 2020)." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 NETWORK ARCHITECTURE", "text": "The original GAN framework, introduced in Goodfellow et al. (2014), plays a min-max game where the generator G attempts to generate convincing samples from the latent space Z, and the discriminator D learns to distinguish between generated samples and real samples X. In this work, we use the WGAN-GP (Gulrajani et al., 2017) formulation of the loss function, without the need to incorporate any information about the neural activities into the training objective:

$$\mathcal{L}_D = \mathbb{E}_{z \sim Z}[D(G(z))] - \mathbb{E}_{x \sim X}[D(x)] + \lambda\, \mathbb{E}_{\tilde{x} \sim \tilde{X}}\big[(\|\nabla_{\tilde{x}} D(\tilde{x})\|_2 - 1)^2\big] \quad (1)$$

where $\lambda$ denotes the gradient penalty coefficient and $\tilde{x} = \epsilon x + (1 - \epsilon)\hat{x}$, $\epsilon \in [0, 1]$, are samples taken between the real and generated data distributions.

For learning calcium signal generation, we adapted the WaveGAN architecture (Donahue et al., 2019), which has shown promising results in audio signal generation. In the generator, we used 1-dimensional transposed convolution layers to up-sample the input noise. We added Layer Normalization (Ioffe & Szegedy, 2015) between each convolution and activation layer, in order to stabilize training as well as to make the operation compatible with the WGAN-GP framework. To improve the model learning performance and stability, the calcium signals were scaled to the range between 0 and 1 by normalizing with the maximum value of the calcium signal in the data.
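As a concrete reference for Eq. (1), the critic loss with gradient penalty might be sketched as below. This is only an illustrative PyTorch-style rendering: the function names, the (batch, time, neurons) tensor layout, and the default penalty coefficient are assumptions, and the paper's own implementation may differ.

```python
import torch

def critic_loss(D, G, x_real, z, lam=10.0):
    """WGAN-GP critic loss of Eq. (1): Wasserstein term plus a gradient
    penalty evaluated on random interpolates between real and fake signals."""
    x_fake = G(z)
    eps = torch.rand(x_real.size(0), 1, 1, device=x_real.device)  # epsilon in [0, 1]
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return D(x_fake).mean() - D(x_real).mean() + lam * gp
```

The generator is then trained on the usual Wasserstein objective, $-\mathbb{E}_{z \sim Z}[D(G(z))]$.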
In line with this scaling, we chose sigmoid activation in the output layer of the generator, and re-scaled the generated signals to their original range before inferring their spike trains.

The architecture of the discriminator in our model is largely a mirror of the generator, with the exception of the removal of Layer Normalization, and instead of up-sampling the input with transposed convolutions, we used simple convolution layers. Samples generated using transposed convolutions often exhibit the “checkerboard” artifacts described by Odena et al. (2016), where the output exhibits repeated patterns (usually very subtle to the eye) due to a filter being applied unevenly to the receptive field. In the context of signal generation, the discriminator could exploit this periodic artifact pattern and learn a naive policy to reject generated samples. Donahue et al. (2019) proposed the Phase Shuffle mechanism in the discriminator to address this issue. The Phase Shuffle layer randomly shifts the activations after each convolution layer by an offset within [−n, n], in order to distort the periodic pattern. Hence, the resulting samples constitute a more challenging task for the discriminator. Figure A.4 shows a simple illustration of the Phase Shuffle operation. In our network, we incorporated the Phase Shuffle operation, as well as using a kernel size that is divisible by the stride size, as suggested in Odena et al. (2016). We apply the Phase Shuffle operation after each convolution layer, which has led to a noticeable improvement in the generated samples. Table A.1 shows the exact architecture of our model." }, { "heading": "2.2 MODEL PIPELINE", "text": "We devised a consistent model analysis pipeline to evaluate the quality of samples generated by the model, as well as its ability to generalize, in the context of neuronal population spiking activities. The complete model analysis pipeline is shown in Figure A.2.

As calcium imaging is largely used as a proxy to monitor spiking activities, we decided to evaluate and present the inferred spike trains instead of the raw calcium signals. We used the Online Active Set method to Infer Spikes (OASIS) AR1 deconvolution algorithm (Friedrich et al., 2017) to infer spiking activities from calcium fluorescent signals. We apply OASIS to both the training data and the generated data to ensure that any potential bias in the deconvolution process applies equally to the two sets of data. We then trained both the generator and discriminator with the WGAN-GP framework (Gulrajani et al., 2017), with 5 discriminator update steps for each generator update step. We used the Adam optimizer (Kingma & Ba, 2014) to optimize both networks, with a learning rate of $10^{-4}$, $\beta_1 = 0.9$ and $\beta_2 = 0.9999$. To speed up the training process, we incorporated Mixed Precision training (Micikevicius et al., 2017) in our codebase. The exact hyper-parameters used in this work can be found in Table A.2.

After inferring the spike trains from the generated calcium signals, we measure the spike train statistics and similarities using the Electrophysiology Analysis Toolkit (Denker et al., 2018).
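For concreteness, the Phase Shuffle operation described above could be implemented roughly as below. This is a hypothetical sketch: drawing a single random offset per call and using reflection padding follow common WaveGAN-style implementations, and the (batch, time, channels) layout is an assumption.

```python
import torch
import torch.nn.functional as F

def phase_shuffle(x, n=10):
    """Randomly shift activations along the temporal axis by an offset in
    [-n, n], filling the vacated entries by reflection padding."""
    shift = int(torch.randint(-n, n + 1, (1,)))
    if shift == 0:
        return x
    x = x.transpose(1, 2)                                    # (batch, channels, time)
    if shift > 0:
        x = F.pad(x, (shift, 0), mode='reflect')[..., :-shift]
    else:
        x = F.pad(x, (0, -shift), mode='reflect')[..., -shift:]
    return x.transpose(1, 2)
```

Because the offset is resampled at every forward pass, the discriminator cannot rely on a fixed temporal alignment of the periodic transposed-convolution artifacts.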
Following some of the previous works on spike generation (Macke et al., 2009; Molano-Mazon et al., 2018; Ramesh et al., 2019), we evaluate the performance of our model with the following statistics and similarities: (a) mean firing rate, for evaluating single-neuron statistics; (b) pairwise Pearson correlation coefficient, for evaluating pairwise statistics; (c) pairwise van-Rossum distance (Rossum, 2001), for evaluating general spike train similarity. Importantly, we evaluate these quantities across the whole population for each neuron or neuron pair and each short time interval (100 ms), and compare the resulting distributions over these quantities obtained from the training data as well as the generated data. We therefore validate the whole spatiotemporal first- and second-order statistics as well as general spike train similarities." }, { "heading": "2.3 DATA", "text": "" }, { "heading": "2.3.1 DICHOTOMIZED GAUSSIAN ARTIFICIAL DATA", "text": "In order to verify that CalciumGAN is able to learn the underlying distribution and statistics of the training data, we generated our own ground-truth dataset with pre-defined mean and covariance using the dichotomized Gaussian (DG) model (Macke et al., 2009). The model uses a multivariate normal distribution to generate latent continuous random variables, which are then thresholded to generate binary variables representing spike trains. The DG model has the mean vector and covariance matrix as free parameters. To generate data from this model, we used the sample means and sample covariances obtained from real recorded data (see Section 2.3.2). In alignment with the recorded data, we generated correlated spike trains for N = 102 neurons with a duration of 899 seconds at 24Hz, hence a matrix with shape (21576, 102). In order to obtain calcium-like signals c from spike trains s of length T, we convolved the generated spike trains with a calcium response kernel and added noise, as described in Friedrich et al. (2017):

$$\tilde{s}_t = g\,\tilde{s}_{t-1} + s_t, \quad 1 \le t \le T \quad (2)$$
$$c = b + \tilde{s} + \sigma u, \quad u \sim \mathcal{N}(0, 1) \quad (3)$$

where $\tilde{s}$ denotes the filtered spike train, g parametrises the impulse response filter, b is the baseline value of the signal, and σ is the noise standard deviation. In our work, we set g = 0.95, σ = 0.3 and b = 0. We scale the signal range to the unit interval. The data is then segmented using a sliding window along the time dimension with a stride of 2 and a window size of T = 2048 (around 85 seconds in experiment time). We apply the segmentation procedure to both the signal and the spike data, resulting in two matrices with shape (9754, 2048, 102). Examples of signals and spikes generated from the DG model can be found in Figure A.1a." }, { "heading": "2.3.2 TWO-PHOTON CALCIUM IMAGING RECORDED DATA", "text": "Next, we used two-photon calcium imaging data recorded in the primary visual cortex of behaving mice. The data were collected with the same setup as specified in Pakan et al. (2018) and Henschke et al. (2020). Head-fixed mice were placed on a cylindrical treadmill and navigated a virtual corridor rendered on two monitors that covered the majority of their visual field. A lick spout was placed in front of the mice, where a water drop would be made available as a reward if the mouse licked at the correct location within the virtual environment. Hence, the mice would learn to utilize both the visual information and the self-motion feedback in order to maximize the rewards. Neuronal activity was monitored from the same primary visual cortex populations over multiple consecutive behavioural sessions.
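Returning briefly to the artificial data of Section 2.3.1, the generation process of Eqs. (2)–(3) could be sketched as below. This is a hypothetical NumPy sketch: the function names are our own, and fitting the DG parameters to match the recorded binary statistics is simplified to directly passing a mean vector and covariance matrix.

```python
import numpy as np

def dg_spikes(mu, cov, T):
    """Correlated binary spike trains from a dichotomized Gaussian: sample
    latent Gaussian variables and threshold them at zero."""
    u = np.random.multivariate_normal(mu, cov, size=T)   # (T, N) latents
    return (u > 0).astype(np.float32)

def spikes_to_calcium(s, g=0.95, sigma=0.3, b=0.0):
    """Eqs. (2)-(3): AR(1)-filter the spike trains, add baseline and noise."""
    c = s.astype(np.float64).copy()
    for t in range(1, len(s)):
        c[t] = g * c[t - 1] + s[t]                       # Eq. (2)
    return b + c + sigma * np.random.randn(*s.shape)     # Eq. (3)
```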
The basic characteristics of the recorded data are shown in Table A.3. We first experimented with calcium imaging data recorded on the 4th day of the experiment, when the mice were familiar with the virtual environment and the given task. In this particular recording, neurons were labelled with GCaMP6f, N = 102 neurons were recorded at a sampling rate of 24Hz, and the mouse performed 204 trials in 898.2 seconds (raw data shape (21556, 102)). Since GAN models require a significant amount of training data, information about the trial and the position of the mouse in the virtual environment was not used in this work." }, { "heading": "3 RESULTS", "text": "We propose CalciumGAN as a generative model to synthesize realistic calcium traces as imaged from neuronal populations. To validate our model, we used artificial data with known ground-truth as well as real data recorded from the primary visual cortex of behaving mice. We used the WGAN-GP training objective (Section 2.1) to train both the generator and discriminator. We also experimented with the objective functions of the original GAN (Goodfellow et al., 2014) and LSGAN (Mao et al., 2017); in our experiments, the WGAN-GP formulation had the best training performance and stability." }, { "heading": "3.1 SYNTHETIC DATA MIMICKING DICHOTOMIZED GAUSSIAN DATA", "text": "We first fit our model to the artificial dataset sampled from the DG distribution. We trained the model for 400 epochs with 8,754 samples and held out 1,000 samples for evaluation. Since we defined the model from which we generated the training dataset, we can validate the statistics of the dataset generated by CalciumGAN against the known ground-truth directly. Examples of generated signals and their inferred spikes can be found in Figure A.1b.

Here, we compare both the trend and the variation of the generated data statistics with the DG data. We estimated the mean firing rates and the covariances of data generated by CalciumGAN and compared them to the DG ones (Figure 1). We plotted the values of 5 samples for each neuron and neuron pair, and sorted them by their mean in ascending order. The variation of the firing rate across samples matched that of the ground-truth data. The majority of the neuron pairs have low correlation, a characteristic which was also found in the generated data. The neuron pairs with highly positive and highly negative covariance also show greater variation across samples." }, { "heading": "3.2 SYNTHETIC DATA MIMICKING RECORDED DATA", "text": "After validating our model on data with known ground-truth, we applied CalciumGAN to two-photon calcium imaging data recorded in the primary visual cortex of mice performing a virtual reality task. We applied the OASIS deconvolution algorithm to infer the spike activities from the recorded calcium signals, and performed the same normalization and segmentation steps as described in Section 2.3.1. Figure 2a shows examples of the recorded calcium signals and inferred spike trains. There are multiple challenges for both the generator and the discriminator in learning from the calcium imaging signals. Since the data were segmented with a sliding window and the trial information was not used, some samples might contain abnormal signal activity, such as a peak
Real and synthetic activity from less active neurons might be more difficult for the discriminator to distinguish due to the absence of prominent spiking characteristics.\nSimilar to the DG analysis, we trained the model for 400 epochs, with 8,754 training samples, and 1,000 samples were held out for evaluation. Note that since we are not taking the trial and position of the mice in the virtual environment into consideration when training the model, the generated data and the evaluation data do not have a one-to-one mapping.\nWe first inspect the generated data and the deconvolved spike trains visually. The calcium signals and inferred spike trains of randomly selected neurons from a randomly selected sample are shown in Figure 2b. Both the synthetic raw traces as well as the inferred spikes visually match the characteristics of the recorded ones.\nWe then compared the spiking characteristics across the whole population. Figure 3 shows the inferred spike trains of the complete 102 neurons population from a randomly selected sample of the real and the synthetic data, with the distribution histogram plotted on the x and y axis. The synthetic data mimicks the firing patterns across neurons and across time remarkably well with occasional small deviations in the rates at particular temporal intervals. Notably, the samples are clearly not identical meaning that the network did not just replicate the training set data.\nIn order to examine if CalciumGAN is able to capture the first and second order statistics of the recorded data, we measured the mean firing rate, pairwise correlation, and van-Rossum distance (see Figure 4). The randomly selected neurons shown in Figure 4a have very distinct firing rate distributions, and CalciumGAN is able to model all of them relatively well, with KL divergence of\n0.31 and 0.16 with respect to the recorded firing rate over 1000 samples. We show the pairwise van-Rossum distance of the same neuron between recorded and generated data across 45 samples in Figure 4c as sorted heatmaps. Less active neurons, such as neuron 75, have a low distance value across samples, mainly due to the scarcity of firing events. Conversely, a high frequency neuron, such as neuron 27, exhibits a clear trend of lower distance values in the diagonal of the heatmap, implying the existence of a pair of recorded and generated sample that are similar. In order to ensure that the data generated by our model capture the underlying distribution of the training data, we also compute the KL divergence between the distributions of the above-mentioned metrics (see Figure 5). Note that we measure the pairwise distance of the same neuron across 50 samples in Figure 4c, whereas in Figure 5c, we measure pairwise van-Rossum distance of each neuron with respect to other neurons within the same sample. We also fitted the DG model to the recorded data\nand measure the same statistics on the DG generated spike trains as a baseline. Table 1 shows the mean KL divergence of the generated data from CalciumgGAN, CalciumGAN with Phase Shuffle disabled (see Appendix A.2) and the DG model.\nThe results we presented above were trained on recordings collected from a mouse that was already familiar with the specific task. However, we were also interested in our model’s capability to learn from neuronal activities that are more stochastic and potentially less correlated. 
To this end, we trained CalciumGAN on data recorded on the first day of the experiment (average firing rate of 58.07Hz on day 1 versus 35.83Hz on day 4, see Table A.3). Appendix A.3 shows the generated samples and the statistics of the inferred spike trains. The generated data were able to reflect the first- and second-order statistics of the recorded data, with mean KL divergences of 0.32, 0.06 and 0.51 for the mean firing rate, pairwise correlation and van-Rossum distance, respectively. Overall, CalciumGAN was able to capture the statistics and underlying distribution of real calcium imaging data acquired in the primary visual cortex of awake, behaving mice." }, { "heading": "4 DISCUSSION", "text": "Despite the recent advancement and popularity of calcium imaging of neuronal activity in vivo, the number of trials and the duration of imaging sessions in animal experiments are limited due to ethical and practical considerations. This work provides a readily applicable tool to fit a GAN to calcium signals, enabling the generation of more data that matches the statistics of the provided data.

We demonstrated that the GAN framework is capable of synthesizing realistic calcium fluorescent signals similar to those imaged in the somata of neuronal populations of behaving animals. To achieve this, we adapted the WaveGAN (Donahue et al., 2019) architecture with the Wasserstein distance training objective. We generated artificial neuronal activities using a dichotomized Gaussian model, showing that CalciumGAN is able to learn the underlying distribution of the data. We then fitted our model to imaging data from the primary visual cortex of a behaving mouse. Importantly, we showed that the statistics of the synthetic spike trains match the statistics of the recorded data, without the need to incorporate any information about the neuronal activities into the model or the objective function.

We would like to highlight one potential bias in this work. To infer spike trains from the real and synthetic calcium traces, we used the OASIS deconvolution algorithm by Friedrich et al. (2017), a method with great real-time deconvolution performance and an existing Python implementation by the authors (Friedrich, 2017). Speed was a crucial characteristic for evaluating a large number of trials. Nonetheless, we found that this advantage often came at the cost of performance in the form of clearly missed spikes (cf. Figure 2). However, we stress that these shortcomings apply to both the real data and the synthetic data in exactly the same way. In the end, we use the inferred spikes as a way to validate the plausibility of the synthesized traces. The comparison is fair as long as real and synthetic deconvolutions are subject to the same biases.

As work on deep generative models continues to develop and expand, there is a limitless number of possibilities to explore at the intersection of the GAN framework and neural coding. One potential future direction for this work is to provide a meaningful interpretation of the latent generator representation. In many image generation tasks with GANs (Bojanowski et al., 2017; Karras et al., 2017), it has been shown that the output image can be modified or targeted by interpolating the latent variable that is fed to the generator. Similarly, one could potentially gain fine control over the generated calcium signals by exploring the synthetic signals produced after interpolating samples in the latent space.
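Such interpolation could be as simple as a linear path between two latent codes, as in the hypothetical sketch below (the generator and the latent layout are assumptions):

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linear path between two latent codes; feeding each intermediate code
    to the generator traces a morph between two synthetic recordings."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - alphas) * z_a[None, :] + alphas * z_b[None, :]
```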
In this way, one could generate calcium imaging data that resemble the neuronal activities of an animal performing a particular novel task. Another interesting research direction would be to use a GAN to learn the relationship between different neuronal populations, or to reveal changes in the activity of the same neuronal population across different training phases of an animal learning a behavioural task. This could be achieved by using, for instance, CycleGAN (Zhu et al., 2017), an unsupervised learning model that can learn the mapping between two distributions without paired data, as a potential model architecture." }, { "heading": "A APPENDIX", "text": "A.1 CALCIUMGAN PIPELINE

In order to train and evaluate our GAN model, we first have to pre-process the calcium signals so that they have a standardized format. For calcium imaging data of N neurons with a recorded length of L, we receive a raw data shape of (L, N). We then use a sliding window of size T to segment the data along the time dimension into M segments (see Figure A.3), resulting in a matrix with shape (M, T, N). To improve the network training performance, we scale the raw calcium signals x to the range [0, 1] before we train our generative model:

$$x_{[0,1]} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad (4)$$

We use $a_{[0,1]}$ to denote a datum $a$ scaled to the range [0, 1].

After the above pre-processing step, we train CalciumGAN in mini-batches and store 1,000 samples for evaluation. Since we evaluate our model performance in terms of spike activities, we need a deconvolution algorithm to infer the spike trains from calcium signals. In this work, we used the OASIS deconvolution algorithm (Friedrich et al., 2017) for its fast online deconvolution performance. Prior to inferring the spiking activities from the generated signals $\hat{x}_{[0,1]}$, we first have to scale the signals back to the same range as the raw calcium signals:

$$\hat{x} = \hat{x}_{[0,1]}(x_{\max} - x_{\min}) + x_{\min} \quad (5)$$

We inferred the spike trains from the generated signals as well as from the real recorded data with OASIS, in order to ensure that the possible biases of the deconvolution algorithm are the same for both datasets.

A.2 PHASE SHUFFLE

In order to reduce the effect of the “checkerboard” artifact, we adapted the Phase Shuffle mechanism (see Section 2.1) in the discriminator. In this section we examine the effectiveness of Phase Shuffle in terms of the visual quality of the generated traces as well as its effect on the inferred spike trains. A common characteristic of calcium indicators when an action potential occurs is a sharp onset followed by a slow decay in the signal (Fröhlich, 2016). In Figure A.5, we can see that this characteristic of the calcium traces was more prominent when Phase Shuffle was enabled. We believe that such differences in generation quality exist mainly because of the repetitive patterns in the transposed convolution layers (Odena et al., 2016), since the discriminator can simply distinguish generated samples from real samples by learning whether such patterns exist. As the Phase Shuffle mechanism randomly shifts the temporal dimension (by up to 10 units in our experiment), it forces the discriminator to learn from other features in the data instead of the “shortcut” provided by the (undesired) nature of transposed convolutions.

Moreover, not only did Phase Shuffle affect the visual quality of the generated samples, it also impacted the spike train statistics.
The traces generated without Phase Shuffle lack these spiking characteristics, which makes it more difficult for the deconvolution algorithm to register a spike in the data, thus reducing the accuracy of the inferred spike trains. When comparing the KL divergence of the spike train statistics, the samples generated without Phase Shuffle yield worse results across all three statistics (see Table 1), especially for the mean firing rate.

A.3 DAY 1 RECORDINGS

The following figures show the generated data and spike train statistics of CalciumGAN trained on the calcium imaging recordings collected on the first day of the experiment." } ]
2020
SYNTHESISING REALISTIC CALCIUM TRACES OF NEURONAL POPULATIONS USING GAN
SP:3e360ec6c3c576d09fc38169789f9df9dada9bea
[ "Most model-based RL algorithms learn dynamics models that predicts the next timestep. However, because of model-bias, frequency of timesteps, and objective timescales, the dynamics models can accumulate errors and limited by timescales. The authors propose subjective-timescale model (STM) that instead of predicting the next timesteps they find the \"surprising\" subsequences of the trajectories and learn temporal-skipping dynamics models over them. The paper shows the improvement over single-step prediction baselines in a first-person navigation domain." ]
In model-based learning, an agent’s model is commonly defined over transitions between consecutive states of an environment even though planning often requires reasoning over multi-step timescales, with intermediate states either unnecessary, or worse, accumulating prediction error. In contrast, intelligent behaviour in biological organisms is characterised by the ability to plan over varying temporal scales depending on the context. Inspired by the recent works on human time perception, we devise a novel approach to learning a transition dynamics model, based on the sequences of episodic memories that define the agent’s subjective timescale – over which it learns world dynamics and over which future planning is performed. We implement this in the framework of active inference and demonstrate that the resulting subjective-timescale model (STM) can systematically vary the temporal extent of its predictions while preserving the same computational efficiency. Additionally, we show that STM predictions are more likely to introduce future salient events (for example new objects coming into view), incentivising exploration of new areas of the environment. As a result, STM produces more informative action-conditioned roll-outs that assist the agent in making better decisions. We validate significant improvement in our STM agent’s performance in the Animal-AI environment against a baseline system, trained using the environment’s objective-timescale dynamics.
[]
[ { "authors": [ "Martı́n Abadi", "Ashish Agarwal", "Paul Barham", "Eugene Brevdo", "Zhifeng Chen", "Craig Citro", "Greg S. Corrado", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Ian Goodfellow" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Pieter Abbeel", "Morgan Quigley", "Andrew Y. Ng" ], "title": "Using inaccurate models in reinforcement learning", "venue": "In ICML ’06,", "year": 2006 }, { "authors": [ "J. Andrew Bagnell", "Jeff G. Schneider" ], "title": "Autonomous helicopter control using reinforcement learning policy search methods", "venue": "Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164),", "year": 2001 }, { "authors": [ "Benjamin Beyret", "José Hernández-Orallo", "Lucy G Cheke", "Marta Halina", "Murray Shanahan", "Matthew Crosby" ], "title": "The Animal-AI Environment: Training and testing animal-like artificial", "venue": "cognition. ArXiv,", "year": 2019 }, { "authors": [ "Matthew M Botvinick", "Sam Ritter", "Jane X. Wang", "Zeb Kurth-Nelson", "Demis Hassabis" ], "title": "Reinforcement learning, fast and slow", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Catalin V. Buhusi", "Warren H. Meck" ], "title": "What makes us tick? functional and neural mechanisms of interval timing", "venue": "Nature Reviews Neuroscience,", "year": 2005 }, { "authors": [ "L. Cheke", "N. Clayton" ], "title": "Eurasian jays (garrulus glandarius) overcome their current desires to anticipate two distinct future needs and plan for them appropriately", "venue": "Biology Letters,", "year": 2011 }, { "authors": [ "Nicola S. Clayton", "Timothy J. Bussey", "Anthony Dickinson" ], "title": "Can animals recall the past and plan for the future", "venue": "Nature Reviews Neuroscience,", "year": 2003 }, { "authors": [ "Matthew Crosby" ], "title": "Building thinking machines by solving animal cognition tasks", "venue": "Minds and Machines,", "year": 2020 }, { "authors": [ "Matthew Crosby", "Benjamin Beyret", "Murray Shanahan", "José Hernández-Orallo", "Lucy Cheke", "Marta Halina" ], "title": "The Animal-AI testbed and competition", "venue": "Proceedings of Machine Learning Research,", "year": 2020 }, { "authors": [ "Marc Peter Deisenroth", "Carl E. Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Stefan Depeweg", "José Miguel Hernández-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Learning and policy search in stochastic dynamical systems with bayesian neural", "venue": "networks. ArXiv,", "year": 2017 }, { "authors": [ "Ben Deverett", "Ryan Faulkner", "Meire Fortunato", "Greg Wayne", "Joel Z. Leibo" ], "title": "Interval timing in deep reinforcement learning", "venue": "agents. ArXiv,", "year": 2019 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Rémi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner", "venue": "architectures. 
ArXiv,", "year": 2018 }, { "authors": [ "Vladimir Feinberg", "Alvin Wan", "Ion Stoica", "Michael I. Jordan", "Joseph E. Gonzalez", "Sergey Levine" ], "title": "Model-based value expansion for efficient model-free reinforcement", "venue": null, "year": 2018 }, { "authors": [ "Zafeirios Fountas", "Noor Sajid", "Pedro AM Mediano", "Karl Friston" ], "title": "Deep active inference agents using monte-carlo methods", "venue": "Accepted to NeurIPS,", "year": 2020 }, { "authors": [ "Zafeirios Fountas", "Anastasia Sylaidi", "Kyriacos Nikiforou", "Anil K. Seth", "Murray Shanahan", "Warrick Roseboom" ], "title": "A predictive processing model of episodic memory and time perception. bioRxiv, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Karl J. Friston" ], "title": "A free energy principle for a particular physics", "venue": "arXiv: Neurons and Cognition,", "year": 2019 }, { "authors": [ "Karl J. Friston", "James Kilner", "Lee Harrison" ], "title": "A free energy principle for the brain", "venue": "Journal of Physiology-Paris,", "year": 2006 }, { "authors": [ "Karl J. Friston", "Francesco Rigoli", "Dimitri Ognibene", "Christoph Mathys", "Thomas H.B. FitzGerald", "Giovanni Pezzulo" ], "title": "Active inference and epistemic value", "venue": "Cognitive Neuroscience,", "year": 2015 }, { "authors": [ "Karl J. Friston", "Thomas H.B. FitzGerald", "F. Rigoli", "P. Schwartenbeck", "J. O’Doherty", "G. Pezzulo" ], "title": "Active inference and learning", "venue": "Neuroscience and Biobehavioral Reviews,", "year": 2016 }, { "authors": [ "Karl J. Friston", "Thomas H.B. FitzGerald", "F. Rigoli", "P. Schwartenbeck", "G. Pezzulo" ], "title": "Active inference: A process theory", "venue": "Neural Computation,", "year": 2017 }, { "authors": [ "Karl J. Friston", "Thomas H.B. FitzGerald", "Francesco Rigoli", "Philipp Schwartenbeck", "Giovanni Pezzulo" ], "title": "Active inference: A process theory", "venue": "Neural Computation,", "year": 2017 }, { "authors": [ "Karl J. Friston", "Marco Lin", "Christopher D. Frith", "Giovanni Pezzulo", "J. Allan Hobson", "Sasha Ondobaka" ], "title": "Active inference, curiosity and insight", "venue": "Neural Computation,", "year": 2017 }, { "authors": [ "Karl J. Friston", "Richard E Rosch", "Thomas Parr", "Cathy J. Price", "Howard Bowman" ], "title": "Deep temporal models and active inference", "venue": "Neuroscience and Biobehavioral Reviews,", "year": 2017 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Samuel J. Gershman", "Nathaniel D. Daw" ], "title": "Reinforcement learning and episodic memory in humans and animals: An integrative framework", "venue": "Annual Review of Psychology,", "year": 2017 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette" ], "title": "Hybrid computing using a neural network with dynamic external memory", "venue": null, "year": 2016 }, { "authors": [ "Andrea Greve", "Elisa Cooper", "Alexander Kaula", "Michael C. Anderson", "Richard N.A. Henson" ], "title": "Does prediction error drive one-shot declarative learning", "venue": "Journal of Memory and Language,", "year": 2017 }, { "authors": [ "David R Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jessica B. 
Hamrick" ], "title": "Analogues of mental simulation and imagination in deep learning", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Steven Hansen", "Pablo Sprechmann", "Alexander Pritzel", "Andr’e Barreto", "Charles Blundell" ], "title": "Fast deep reinforcement learning using online adjustments from the past", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Demis Hassabis", "Dharshan Kumaran", "Seralynne D. Vann", "Eleanor A. Maguire" ], "title": "Patients with hippocampal amnesia cannot imagine new experiences", "venue": "Proceedings of the National Academy of Sciences,", "year": 2007 }, { "authors": [ "Thomas T. Hills" ], "title": "Towards a unified theory of animal event timing", "venue": null, "year": 2003 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Chia-Chun Hung", "T. Lillicrap", "Josh Abramson", "Yan Wu", "M. Mirza", "F. Carnevale", "Arun Ahuja", "G. Wayne" ], "title": "Optimizing agent behavior over long time scales by transporting value", "venue": "Nature Communications,", "year": 2019 }, { "authors": [ "Anthony I Jang", "Matthew R. Nassar", "Daniel G. Dillon", "Michael J. Frank" ], "title": "Positive reward prediction errors strengthen incidental memory encoding", "venue": "bioRxiv,", "year": 2018 }, { "authors": [ "Hyunwoo Jung", "Moonsu Han", "Minki Kang", "Sung Ju Hwang" ], "title": "Learning what to remember: Long-term episodic memory networks for learning from streaming", "venue": "data. ArXiv,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Ofir Nachum", "Aurko Roy", "Samy Bengio" ], "title": "Learning to remember rare", "venue": "events. ArXiv,", "year": 2017 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H. Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Ryan Sepassi", "George Tucker", "Henryk Michalewski" ], "title": "Model-based reinforcement learning for atari", "venue": "ArXiv, abs/1903.00374,", "year": 2020 }, { "authors": [ "Gabriel Kalweit", "Joschka Boedecker" ], "title": "Uncertainty-driven imagination for continuous deep reinforcement learning", "venue": "CoRL,", "year": 2017 }, { "authors": [ "Ken Kansky", "Tom Silver", "David A. Mély", "Mohamed Eldawy", "Miguel Lázaro-Gredilla", "Xinghua Lou", "N. Dorfman", "Szymon Sidor", "Scott Phoenix", "Dileep George" ], "title": "Schema networks: Zero-shot transfer with a generative causal model of intuitive", "venue": "physics. ArXiv,", "year": 2017 }, { "authors": [ "Nan Rosemary Ke", "Amanpreet Singh", "Ahmed Touati", "Anirudh Goyal", "Yoshua Bengio", "Devi Parikh", "Dhruv Batra" ], "title": "Learning dynamics model in reinforcement learning by incorporating the long term future", "venue": null, "year": 1903 }, { "authors": [ "Jonathan Ko", "Daniel J. 
Klein", "Dieter Fox", "Dirk Hähnel" ], "title": "Gaussian processes and reinforcement learning for identification and control of an autonomous blimp", "venue": "Proceedings 2007 IEEE International Conference on Robotics and Automation,", "year": 2007 }, { "authors": [ "Vikash Kumar", "Emanuel Todorov", "Sergey Levine" ], "title": "Optimal control with learned local models: Application to dexterous manipulation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy", "venue": "optimization. ArXiv,", "year": 2018 }, { "authors": [ "Máté Lengyel", "Peter Dayan" ], "title": "Hippocampal contributions to control: The third way", "venue": "In NIPS,", "year": 2007 }, { "authors": [ "Sergey Levine", "Pieter Abbeel" ], "title": "Learning neural network policies with guided policy search under unknown dynamics", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Sergey Levine", "Pieter Abbeel" ], "title": "Learning neural network policies with guided policy search under unknown dynamics", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Johannes B. Mahr", "Gergely Csibra" ], "title": "Why do we remember? the communicative function of episodic memory", "venue": "The Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Rowan McAllister", "Carl Edward Rasmussen" ], "title": "Improving pilco with bayesian neural network dynamics models", "venue": null, "year": 2016 }, { "authors": [ "Kourken Michaelian" ], "title": "Mental time travel: Episodic memory and our knowledge of the personal past", "venue": null, "year": 2016 }, { "authors": [ "Beren Millidge" ], "title": "Deep active inference as variational policy", "venue": "gradients. ArXiv,", "year": 2019 }, { "authors": [ "Nikhil Mishra", "Pieter Abbeel", "Igor Mordatch" ], "title": "Prediction and control with temporal segment models. ArXiv", "venue": null, "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. Riedmiller" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518:529–533,", "year": 2015 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S. Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Junhyuk Oh", "Valliappa Chockalingam", "Satinder P. Singh", "Honglak Lee" ], "title": "Control of memory, active perception, and action in minecraft", "venue": null, "year": 2016 }, { "authors": [ "Sébastien Racanière", "Theophane Weber", "David P. Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Warrick Roseboom", "Z. Fountas", "Kyriacos Nikiforou", "David Bhowmik", "M. Shanahan", "A. 
Seth" ], "title": "Activity in perceptual classification networks as a basis for human subjective time perception", "venue": "Nature Communications,", "year": 2019 }, { "authors": [ "Nina Rouhani", "Kenneth A. Norman", "Yael Niv" ], "title": "Dissociable effects of surprising rewards on learning and memory", "venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition,", "year": 2018 }, { "authors": [ "Noor Sajid", "Philip J. Ball", "Karl J. Friston" ], "title": "Active inference: demystified and compared", "venue": "arXiv: Artificial Intelligence,", "year": 2019 }, { "authors": [ "D. Schacter", "D. Addis", "D. Hassabis", "V.C. Martı́n", "R.N. Spreng", "K. Szpunar" ], "title": "The future of memory: Remembering", "venue": "imagining, and the brain. Neuron,", "year": 2012 }, { "authors": [ "Daniel L. Schacter", "Donna Rose Addis", "Randy L. Buckner" ], "title": "Remembering the past to imagine the future: the prospective brain", "venue": "Nature Reviews Neuroscience,", "year": 2007 }, { "authors": [ "Maxine T Sherman", "Zafeirios Fountas", "Anil K Seth", "Warrick Roseboom" ], "title": "Accumulation of salient perceptual events predicts subjective time", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "D. Silver", "Julian Schrittwieser", "K. Simonyan", "Ioannis Antonoglou", "Aja Huang", "A. Guez", "T. Hubert", "L. Baker" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Kai Ueltzhöffer" ], "title": "Deep active inference", "venue": "Biological Cybernetics,", "year": 2018 }, { "authors": [ "Manuel Watter", "Jost Tobias Springenberg", "Joschka Boedecker", "Martin A. Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Guangxiang Zhu", "Zichuan Lin", "Guangwen Yang", "Chongjie Zhang" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Furthermore", "following Fountas" ], "title": "2020a), we define the MCTS upper confidence bound as, U(s, a) = G̃(s, a) + cexplore", "venue": null, "year": 2020 }, { "authors": [ "Fountas" ], "title": "2020a)), and to encourage better object-centric representations. In particular, on-line learning has three major issues associated with it. First, training is performed on correlated data points, which is generally considered to be detrimental for training neural networks (Schaul et al., 2016)", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "An agent endowed with a model of its environment has the ability to predict the consequences of its actions and perform planning into the future before deciding on its next move. Models can allow agents to simulate the possible action-conditioned futures from their current state, even if the state was never visited during learning. As a result, model-based approaches can provide agents with better generalization abilities across both states and tasks in an environment, compared to their model-free counterparts (Racanière et al., 2017; Mishra et al., 2017).\nThe most popular framework for developing agents with internal models is model-based reinforcement learning (RL). Model-based RL has seen great progress in recent years, with a number of proposed architectures attempting to improve both the quality and the usage of these models (Kaiser et al., 2020; Racanière et al., 2017; Kansky et al., 2017; Hamrick, 2019). Nevertheless, learning internal models affords a number of unsolved problems. The central one of them is model-bias, in which the imperfections of the learned model result in unwanted over-optimism and sequential error accumulation for long-term predictions (Deisenroth & Rasmussen, 2011). Long-term predictions are additionally computationally expensive in environments with slow temporal dynamics, given that all intermediary states must be predicted. Moreover, slow world dynamics1 can inhibit the learning of dependencies between temporally-distant events, which can be crucial for environments with sparse rewards. Finally, the temporal extent of future predictions is limited to the objective timescale of the environment over which the transition dynamics has been learned. This leaves little room for flexible and context-dependent planning over varying timescales which is characteristic to animals and humans (Clayton et al., 2003; Cheke & Clayton, 2011; Buhusi & Meck, 2005).\nThe final issue exemplifies the disadvantage of the classical view on internal models, in which they are considered to capture the ground-truth transition dynamics of the environment. Furthermore,\n1Worlds with small change in state given an action\nin more complex environments with first-person observations, this perspective does not take into account the apparent subjectivity of first-person experiences. In particular, the agent’s learned representations of the environment’s transition dynamics implicitly include information about time. Little work has been done to address the concept of time perception in model-based agents (Deverett et al., 2019). Empirical evidence from the studies of human and animal cognition suggests that intelligent biological organisms do not perceive time precisely and do not possess an explicit clock mechanism responsible for keeping track of time (Roseboom et al., 2019; Sherman et al., 2020; Hills, 2003). For instance, humans tend to perceive time slower in environments rich in perceptual content (e.g. busy city), and faster in environments with little perceptual change (e.g. empty field). The mechanisms of subjective time perception still remain unknown; however, recent computational models based on episodic memory were able to closely model the deviations of human time perception from veridical perception (Fountas et al., 2020b).\nInspired by this account, in this work we propose subjective-timescale model (STM), an alternative approach to learning a transition dynamics model, by replacing the objective timescale with a subjective one. 
The latter represents the timescale by which an agent perceives events in an environment and predicts future states, and it is defined by the sequences of episodic memories. These memories are accumulated on the basis of saliency (i.e. how poorly an event was predicted by the agent's transition model), which attempts to mimic the way humans perceive time, resulting in the agent's ability to plan over varying timescales and construct novel future scenarios.
We employ active inference as the agent's underlying cognitive framework. Active inference is an emerging framework within computational neuroscience, which attempts to unify perception and action under the single objective of minimising the free-energy functional. Similar to model-based RL, an active inference agent relies almost entirely on the characteristics and the quality of its internal model to make decisions. Thus, it is naturally susceptible to the previously mentioned problems associated with imperfect, objective-timescale models. The selection of active inference for the purposes of this paper is motivated by its biological plausibility as a normative framework for understanding intelligent behaviour (Friston et al., 2017a; 2006), which is in line with the general theme of this work. Furthermore, being rooted in variational inference, the free energy objective generates a distinct separation between the information-theoretic quantities that correspond to the different components of the agent's model, which is crucial for defining the memory formation criterion.
We demonstrate that the resulting characteristics of STM allow the agent to automatically perform both short- and long-term planning using the same computational resources and without any explicit mechanism for adjusting the temporal extent of its predictions. Furthermore, for long-term predictions STM systematically performs temporal jumps (skipping intermediary steps), thus providing more informative future predictions and reducing the detrimental effects of one-step prediction error accumulation. Lastly, being trained on salient events, STM much more frequently imagines futures that contain epistemically-surprising events, which incentivises exploratory behaviour." }, { "heading": "2 RELATED WORK", "text": "Model-based RL. Internal models are extensively studied in the field of model-based RL. Using linear models to explicitly model transition dynamics has achieved impressive results in robotics (Levine & Abbeel, 2014a; Watter et al., 2015; Bagnell & Schneider, 2001; Abbeel et al., 2006; Levine & Abbeel, 2014b; Levine et al., 2016; Kumar et al., 2016). In general, however, their application is limited to low-dimensional domains and relatively simple environment dynamics. Similarly, Gaussian Processes (GPs) have been used (Deisenroth & Rasmussen, 2011; Ko et al., 2007). Their probabilistic nature allows for state uncertainty estimation, which can be incorporated in the planning module to make more cautious predictions; however, GPs struggle to scale to high-dimensional data. An alternative and recently more prevalent method for parametrising transition models is to use neural networks. These are particularly attractive due to their recent proven success in a variety of domains, including deep model-free RL (Silver et al., 2017), their ability to deal with high-dimensional data, and the existence of methods for uncertainty quantification (Blundell et al., 2015; Gal & Ghahramani, 2016).
Different deep learning architectures have been utilised, including fully-connected neural networks (Nagabandi et al., 2018; Feinberg et al., 2018; Kurutach et al., 2018) and autoregressive models (Ha & Schmidhuber, 2018; Racanière et al., 2017; Ke et al., 2019), showing promising results in environments with relatively high-dimensional state spaces. In particular, autoregressive architectures, such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), are capable of modelling non-Markovian environments and of learning temporal dependencies. Nevertheless, LSTMs are still limited in their ability to learn relations between temporally-distant events, which is exacerbated in environments where little change occurs given an action.
Uncertainty quantification using ensemble methods (Kalweit & Boedecker, 2017; Clavera et al., 2020; Buckman et al., 2018) or Bayesian neural networks (McAllister & Rasmussen, 2016; Depeweg et al., 2017) has been proposed to tackle model bias and sequential error accumulation. Other works have focused on techniques to create more accurate long-term predictions. Mishra et al. (2017) used a segment-based approach to predict entire trajectories at once in an attempt to avoid one-step prediction error accumulation. Ke et al. (2019) used an autoregressive model and introduced a regularising auxiliary cost with respect to the encodings of future observations, thus forcing the latent states to carry useful information for long-horizon predictions. In contrast, the work presented in this paper re-focuses the objective from attempting to create better parametrisation techniques or mitigation methods to simply transforming the timescale over which the dynamics of an environment is learned. As will be seen, our approach can lead to more accurate and efficient long-term predictions without compromising the agent's ability to plan over short time-horizons.
Episodic Memory. In neuroscience, episodic memory is used to describe autobiographical memories that link a collection of first-person sensory experiences at a specific time and place (Tulving, 1972). Past studies in the field suggest that episodic memory plays an important role in human learning (Mahr & Csibra, 2017), and may serve a wide range of potential functional purposes, such as the construction of novel future scenarios (Schacter et al., 2007; 2012; Hassabis et al., 2007), mental time-travel (Michaelian, 2016) or assisting in the formation of new semantic memories (Greenberg & Verfaellie, 2010). A recent computational model of episodic memory (Fountas et al., 2020b) also relates it to the human ability to estimate time durations.
The application of episodic memory in reinforcement learning has been somewhat limited. Some works have employed simple forms of memory to improve the performance of a deep model-free RL agent via experience replay (Mnih et al., 2015; Espeholt et al., 2018; Schaul et al., 2016). However, these methods do not incorporate information about associative or temporal dependencies between the memories (Hansen et al., 2018). Read-write memory banks have also been implemented alongside gradient-based systems (memory-augmented neural networks) for assisting in learning and prediction (Graves et al., 2014; 2016; Hung et al., 2019; Oh et al., 2016; Jung et al., 2018). Further, episodic memory has been used for non-parametric Q-function approximation (Blundell et al., 2016; Pritzel et al., 2017; Hansen et al., 2018; Zhu et al., 2020).
It has also been proposed to be used directly for control, as a faster and more efficient alternative to model-based and model-free approaches in RL, such as instance-based control (Lengyel & Dayan, 2007; Botvinick et al., 2019; Gershman & Daw, 2017) and one-shot learning (Kaiser et al., 2017). In contrast, our paper considers a novel way of using episodic memories – defining the agent's subjective timescale of the environment and training a transition dynamics model over the sequences of these memories.
Active Inference. Until now, most of the work on active inference has been done in low-dimensional and discrete state spaces (Friston et al., 2015; 2017b;c;d). Recently, however, there has been a rising interest in scaling active inference and applying it to environments with continuous and/or large state spaces (Fountas et al., 2020a; Tschantz et al., 2019; Çatal et al., 2019; Millidge, 2019; Ueltzhöffer, 2018). Although these works used deep learning techniques, their generative models have so far been designed to be Markovian and trained over the objective timescale of the environment." }, { "heading": "3 BASELINE ARCHITECTURE", "text": "We take the deep active inference system devised by Fountas et al. (2020a) as the starting point, with a few architectural and operational modifications. The generative model of this baseline agent is defined as $p(o_{1:t}, s_{1:t}, a_{1:t}; \theta)$, where $s_t$ denotes latent states at time $t$, $o_t$ (visual) observations, $a_t$ actions, and $\theta = \{\theta_o, \theta_s\}$ the parameters of the model. $s_t$ is assumed to be Gaussian-distributed with a diagonal covariance, $o_t$ follows a Bernoulli distribution, and $a_{1:t}$ categorical distributions. For a single time step, as illustrated in Figure 1A, this generative model includes two factors, a transition model $p(s_t \mid s_{t-1}, a_{t-1}; \theta_s)$ and a latent state decoder $p(o_t \mid s_t; \theta_o)$, parametrised by feed-forward neural networks with parameters $\theta_s$ and $\theta_o$, respectively. We modify the transition model from the original study to predict the change in state, rather than the full state2.
The agent also possesses two inference networks, which are trained using amortized inference: a habitual network $q(a_t; \phi_a)$ and an observation encoder $q(s_t; \phi_s)$, parametrised by $\phi_a$ and $\phi_s$, respectively. The habitual network acts as a model-free component of the system, learning to map inferred states directly to actions. Following Fountas et al. (2020a), the variational free energy for an arbitrary time-step $t$ is defined as:
$$
\begin{aligned}
\mathcal{F}_t = &-\mathbb{E}_{q(s_t)}\big[\log p(o_t \mid s_t; \theta_o)\big] && (1a)\\
&+ D_{\mathrm{KL}}\big[q(s_t; \phi_s) \,\|\, p(s_t \mid s_{t-1}, a_{t-1}; \theta_s)\big] && (1b)\\
&+ \mathbb{E}_{q(s_t)}\big[D_{\mathrm{KL}}[q(a_t; \phi_a) \,\|\, p(a_t)]\big] && (1c)
\end{aligned}
$$
where $p(a) = \sum_{\pi : a_1 = a} p(\pi)$ is the summed probability of all policies beginning with action $a$. All the divergence terms are computable in closed form, given the assumption of Gaussian- and Bernoulli-distributed variables. Finally, the expected free energy (EFE) of the generative model up to some time horizon $T$ can be defined as:
$$
G(\pi) = \sum_{\tau=t}^{T} G(\pi, \tau) = \sum_{\tau=t}^{T} \mathbb{E}_{\tilde{q}}\big[\log q(s_\tau, \theta \mid \pi) - \log p(o_\tau, s_\tau, \theta \mid \pi)\big], \qquad (2)
$$
where $\tilde{q} = q(o_\tau, s_\tau, \theta \mid \pi)$ and $p(o_\tau, s_\tau, \theta \mid \pi) = p(o_\tau \mid \pi)\, q(s_\tau \mid o_\tau, \pi)\, p(\theta \mid s_\tau, o_\tau, \pi)$. To make expression 2 computationally feasible, it is decomposed such that
$$
\begin{aligned}
G(\pi, \tau) = &-\mathbb{E}_{q(\theta \mid \pi) q(s_\tau \mid \theta, \pi) q(o_\tau \mid s_\tau, \theta, \pi)}\big[\log p(o_\tau \mid \pi)\big]\\
&+ \mathbb{E}_{q(\theta \mid \pi)}\big[\mathbb{E}_{q(o_\tau \mid \theta, \pi)} H(s_\tau \mid o_\tau, \pi) - H(s_\tau \mid \pi)\big]\\
&+ \mathbb{E}_{q(\theta \mid \pi) q(s_\tau \mid \theta, \pi)}\big[H(o_\tau \mid s_\tau, \theta, \pi)\big]\\
&- \mathbb{E}_{q(s_\tau \mid \pi)}\big[H(o_\tau \mid s_\tau, \pi)\big],
\end{aligned} \qquad (3)
$$
where expectations can be taken by performing sequential sampling of $\theta$, $s_\tau$ and $o_\tau$, and entropies are calculated in closed form using standard formulas for Bernoulli and Gaussian distributions.
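To make these closed-form terms concrete, the following minimal PyTorch sketch computes the three components of Eq. (1) for a diagonal-Gaussian state posterior and transition prior and a Bernoulli decoder. It is an illustrative sketch only: the tensor names, shapes and the small numerical-stability constants are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F  # torch functional, not the free energy

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL[N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p)))]."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

def free_energy(o, o_logits, mu_q, logvar_q, mu_prior, logvar_prior,
                q_a_logits, p_a_probs):
    # (1a): expected negative log-likelihood under the Bernoulli decoder.
    nll = F.binary_cross_entropy_with_logits(
        o_logits, o, reduction="none").flatten(1).sum(-1)
    # (1b): KL between the state posterior q(s_t) and the transition prior.
    kl_state = gaussian_kl(mu_q, logvar_q, mu_prior, logvar_prior)
    # (1c): KL between the habitual (categorical) policy and the action prior.
    q_a = torch.softmax(q_a_logits, dim=-1)
    kl_action = (q_a * (q_a.add(1e-8).log() - p_a_probs.add(1e-8).log())).sum(-1)
    return nll + kl_state + kl_action  # F_t, one value per batch element
```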
Network parameters, $\theta$, are sampled using Monte Carlo (MC) dropout (Gal & Ghahramani, 2016).
The system also makes use of a top-down attention mechanism by introducing a variable $\omega$, which modulates uncertainty about hidden states, promoting latent state disentanglement and more efficient learning. Specifically, the latent state distribution is defined as a Gaussian such that $s \sim \mathcal{N}(s; \mu, \Sigma/\omega)$, where $\mu$ and $\Sigma$ are the mean and the diagonal covariance, and $\omega$ is a decreasing logistic function of the divergence $D_{\mathrm{KL}}[q(a; \phi_a) \,\|\, p(a)]$. Finally, action selection is aided with Monte Carlo tree search (MCTS), ensuring a more efficient trajectory search. Specifically, MCTS generates a weighted tree that is used to sample policies from the current timestep, where the weights refer to the agent's estimation of the EFE given a state-action pair, $\tilde{G}(s, a)$. The nodes of the tree are predicted via the transition model, $p(s_t \mid s_{t-1}, a_{t-1}; \theta_s)$. At the end of the search, MCTS is used to construct the action prior, $p(a_i) = N(s, a_i) / \sum_j N(s, a_j)$, where $N(s, a)$ is the number of times action $a$ has been taken from state $s$.
The baseline agent is trained with prioritised experience replay (PER) (Schaul et al., 2016) to mitigate the detrimental consequences of on-line learning (which was used in the original paper), and to encourage better object-centric representations. The details of the baseline implementation and training with PER can be found in Appendices B.1 and B.2, respectively." }, { "heading": "4 SUBJECTIVE-TIMESCALE MODEL", "text": "We introduce the subjective-timescale model (STM), which records sequences of episodic memories over which a new transition model is trained. As such, the system consists of a memory accumulation system to selectively record salient events, a simple action heuristic to summarise sequences of actions between memories, and an autoregressive transition model.
2 This has largely become common practice in the field of model-based RL (Nagabandi et al., 2018), improving algorithm efficiency and accuracy, especially in environments with slow temporal dynamics.
We define a ground-truth sequence as a sequence of all states experienced in an environment during a single episode, $S_g = \{s_0, s_1, s_2, \dots, s_T\}$, and an S-sequence (subjective sequence) as a sequence of states selectively picked by our system, over which the new transition model is learned, $S_e = \{s_{\tau_1}, s_{\tau_2}, s_{\tau_3}, \dots, s_{\tau_N}\}$. Each unit in an S-sequence is called an episodic memory and consists of a set of sufficient statistics, $s = \{\mu_s, \sigma_s\}$, where $\mu_s$ and $\sigma_s$ are the mean and variance vectors of a Gaussian-distributed state $s$, respectively. Additionally, each episodic memory contains a reference to its preceding (parent) episodic memory and all actions until the next one. The process of recording S-sequences is called memory accumulation." }, { "heading": "4.1 MEMORY ACCUMULATION", "text": "Previous work on time perception and episodic memory (Fountas et al., 2020b) employed the saliency of an event, i.e. the generative model's prediction error, as the memory formation criterion. The selection of this criterion is informed by experimental evidence from neuroscience on episodic memory (Greve et al., 2017; Jang et al., 2018; Rouhani et al., 2018).
Inspired by this account, our memory accumulation system employs the free energy of the objective-timescale transition model3 (Eq. 1b) as a measure of event saliency, and forms memories when a pre-defined threshold is exceeded.
To train STM, an active inference agent moves in the environment under the pre-trained generative model described in Section 3. During this process, each transition is evaluated based on the objective transition model free energy, $D_{\mathrm{KL}}[q(s_t; \phi_s) \,\|\, p(s_t \mid s_{t-1}, a_{t-1}; \theta_s)]$, which represents the degree of surprise experienced by the transition model upon taking an action. If the value of the free energy exceeds a pre-defined threshold, $\epsilon$, a memory is formed and placed into an S-sequence. At the end of each episode, the recorded S-sequence is saved for later use.
We can categorise the transitions that cause higher values of the transition model free energy into two main groups: epistemic surprise and model-imperfection surprise. The former refers to transitions that the model could not have predicted accurately due to the lack of information about the current state of the environment (e.g. objects coming into view). The latter refers to the main bulk of these high prediction-error transitions and stems from the inherent imperfections of the learned dynamics. Specifically, less frequently-occurring observations with richer combinatorial structure systematically result in higher compounded transition model errors, given that these are characteristic of more complex scenes. As will become apparent, the presence of these two categories in the recorded S-sequences results in the model's ability to vary its prediction timescale based on the perceptual context and to systematically imagine future salient events.
A transition dynamics model is necessarily trained with respect to the actions that an agent took to reach subsequent states. However, STM records memories over an arbitrary number of steps, thus leaving action sequences of variable length. For the purposes of this paper, we implement a simple heuristic to summarise the agent's trajectories, which is enough to provide STM with the necessary information to learn action-conditioned predictions. We do this by estimating the angle between the agent's initial position and its final position at the time-step of the subsequent memory. Full details of this heuristic can be found in Appendix B.4.
3 Components of the total free energy correspond to a measure of belief update for each of the networks and therefore, loosely speaking, quantify the prediction error generated by each of the respective system constituents: the autoencoder (Eqs. 1a, 1b), the objective-timescale transition model (Eq. 1b), and the habitual network (Eq. 1c)." }, { "heading": "4.2 TRANSITION DYNAMICS MODEL", "text": "As mentioned, S-sequences are characterised by the presence of epistemically-surprising and salient events squeezed together in the recorded episodes. As a result, training on these sequences is more conducive to learning temporal dependencies between important states. For this reason, we train an LSTM model over the S-sequences, which utilises internal memory states to store information about preceding inputs.
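Before specifying this LSTM, the memory-accumulation loop of Section 4.1 that produces its training data can be summarised in a short Python sketch. Everything here is a simplifying assumption rather than the released implementation: `env`, `encoder`, `transition` and `policy` are hypothetical stand-ins (with `env.step` assumed to return an observation and a done flag), and `EPSILON` plays the role of the threshold $\epsilon$.

```python
import torch

EPSILON = 5.0  # saliency threshold on the transition free energy (Eq. 1b)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form diagonal-Gaussian KL, summed over state dimensions."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * (logvar_p - logvar_q
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0).sum()

def record_s_sequence(env, encoder, transition, policy, max_steps=500):
    """Roll one episode; keep only transitions whose saliency exceeds EPSILON."""
    s_sequence, actions_since_memory = [], []
    obs = env.reset()
    mu, logvar = encoder(obs)                           # posterior over s_{t-1}
    for _ in range(max_steps):
        action = policy(mu)
        obs, done = env.step(action)
        mu_next, logvar_next = encoder(obs)             # posterior q(s_t)
        mu_prior, logvar_prior = transition(mu, action)  # prior p(s_t | s_{t-1}, a)
        saliency = gaussian_kl(mu_next, logvar_next, mu_prior, logvar_prior)
        actions_since_memory.append(action)
        if saliency > EPSILON:                          # salient -> episodic memory
            s_sequence.append({"mu": mu_next.detach(),
                               "logvar": logvar_next.detach(),
                               "actions": list(actions_since_memory)})
            actions_since_memory.clear()
        mu, logvar = mu_next, logvar_next
        if done:
            break
    return s_sequence
```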
In our architecture, an LSTM calculates the hidden state $h_\tau$ using a deterministic mapping,
$$
h_\tau = f_{\theta_h}(s_\tau, a_\tau, h_{\tau-1}) = \sigma(x_\tau W_h + h_{\tau-1} U_h + b_h), \qquad (4)
$$
where $s_\tau$ and $a_\tau$ are the latent state and action taken at subjective time $\tau$, respectively, $x_\tau$ is the concatenation of $s_\tau$ and $a_\tau$, and $\theta_h = \{W_h, U_h, b_h\}$ are the deterministic LSTM model parameters. Importantly, the function $f_{\theta_h}$ is deterministic and serves only to encode information about preceding steps into the hidden state of the LSTM. This hidden state $h_\tau$ is then mapped to a latent state $s_{\tau+1}$ at the next subjective time $\tau + 1$ via a feed-forward neural network with random-variable parameters, $\theta_{hs}$, using $p(s_{\tau+1} \mid h_\tau; \theta_{hs})$ with MC dropout. The parameters of both networks are trained via backpropagation with a loss function defined as
$$
\mathcal{L} = \frac{1}{T} \sum_{\tau=1}^{T} D_{\mathrm{KL}}\big[q(s_{\tau+1}; \phi_s) \,\|\, p(s_{\tau+1} \mid f_{\theta_h}(s_\tau, a_\tau, h_{\tau-1}); \theta_{hs})\big] \qquad (5)
$$
The new generative model of observations is shown in Figure 1B. Because the mapping of the LSTM is deterministic, the formulation of the variational free energy remains intact, with the exception of the second term, which now includes the state prediction produced by the network $p(s_\tau \mid h_{\tau-1}; \theta_{hs})$ conditioned on the hidden state of the LSTM,
$$
\begin{aligned}
\mathcal{F}_\tau = &-\mathbb{E}_{q(s_\tau)}\big[\log p(o_\tau \mid s_\tau; \theta_o)\big]\\
&+ D_{\mathrm{KL}}\big[q(s_\tau; \phi_s) \,\|\, p(s_\tau \mid h_{\tau-1}; \theta_{hs})\big]\\
&+ \mathbb{E}_{q(s_\tau)}\big[D_{\mathrm{KL}}[q(a_\tau; \phi_a) \,\|\, p(a_\tau)]\big]
\end{aligned} \qquad (6)
$$
Architectural and training details of the model can be found in Appendix B.3. The source code will be made available after the review process." }, { "heading": "5 EXPERIMENTS", "text": "The Animal-AI (AAI) environment is a virtual testbed that provides an open-ended sandbox training environment for interacting with a 3D environment from first-person observations (Crosby et al., 2020; Crosby, 2020). In AAI, an agent is tasked with reaching a green sphere given a particular setup that may include intermediary rewards (yellow spheres), terminal negative rewards (red spheres), obstacles (e.g. walls), etc. For the purposes of this work, we use a sparsely populated configuration with single green, red, and yellow spheres, in which a successful agent would be forced to perform both short- and long-distance planning, as well as more extensive exploration of the environment." }, { "heading": "5.1 EXPERIMENTAL RESULTS", "text": "We tested the STM agent using 100,000 steps in randomly-generated environments (max episode length of 500) against the baseline system with two different planning procedures – MCTS and model-predictive control (MPC). In contrast to MCTS, the MPC agent re-evaluates its plan after every action. Figure 3 summarises the experimental results. Our STM-MCTS agent outperforms the baseline systems, acquiring more reward within the 100,000 steps. In particular, we note that the STM-MCTS agent showed a significant improvement over the Baseline-MCTS. Similarly, we show that the STM-MCTS model retrieves more cumulative reward than the Baseline-MPC agent, which uses a computationally expensive planning procedure. Specifically, our agent achieves a higher cumulative reward in less than half the time, ∼6 hours compared to ∼14 hours." }, { "heading": "5.2 ROLL-OUT INSPECTION", "text": "Inspecting prediction roll-outs produced by the STM-based system provides great insight into its practical benefits for the agent's performance. Specifically, our agent is capable of varying the temporal extent of its predictions and imagining future salient events."
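Such roll-outs can be generated by repeatedly applying the transition model of Section 4.2. The sketch below is a hedged illustration of that loop in PyTorch: the module layout (an `nn.LSTMCell` playing the role of $f_{\theta_h}$ and a dropout head for $p(s_{\tau+1} \mid h_\tau; \theta_{hs})$), the dimensions and all names are assumptions made for the example, not the released implementation.

```python
import torch
import torch.nn as nn

class STMTransition(nn.Module):
    def __init__(self, state_dim=10, action_dim=3, hidden_dim=256, p_drop=0.1):
        super().__init__()
        self.lstm = nn.LSTMCell(state_dim + action_dim, hidden_dim)  # ~ f_{theta_h}
        self.head = nn.Sequential(                                   # ~ p(s_{tau+1} | h_tau)
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Dropout(p_drop),        # kept active at test time (MC dropout)
            nn.Linear(hidden_dim, 2 * state_dim))

    def step(self, s, a_onehot, hc=None):
        h, c = self.lstm(torch.cat([s, a_onehot], dim=-1), hc)
        mu, logvar = self.head(h).chunk(2, dim=-1)
        return mu, logvar, (h, c)

def rollout(model, s0, actions):
    """Imagine a subjective-timescale trajectory from latent state s0."""
    model.train()   # keep dropout on, so a new theta_hs is implicitly sampled
    hc, s, states = None, s0, []
    for a in actions:   # each action summarises the segment to the next memory
        mu, logvar, hc = model.step(s, a, hc)
        s = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)
        states.append(s)
    return states
```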
}, { "heading": "5.2.1 VARYING PREDICTION TIMESCALE", "text": "Much like human perception of time changes depending on the perceptual content of the surroundings, our agent varies the prediction timescale depending on the context it finds itself in. Specifically, in the AAI environment the complexity of any given observation is primarily driven by the presence of objects, which may appear in different sizes, colours, and configurations. As a result, our agent consistently predicts farther into the future in the absence of any nearby objects, and slows its timescale, predicting at finer temporal rate, when the objects are close.\nPractically, this has several important implications. First, performing temporal jumps and skipping unnecessary intermediary steps affords greater computational efficiency, and reduces the detrimental effects of sequential error accumulation, as can be seen in Figure 4. Second, while STM is able to predict far ahead, its inherent flexibility to predict over varying timescales does not compromise the agent’s performance when the states of interest are close. Thus, a separate mechanism for adjusting how far into the future an agent should plan is not necessary and is implicitly handled by our model. Third, STM allows the agent to make more informed decisions in an environment, as it tends to populate the roll-outs with salient observations of the short- and long-term futures depending on the context. As a result, STM effectively re-focuses the central purpose of a transition model from\nmost accurately modeling the ground-truth dynamics of an environment to predicting states more informative with respect to the affordances of the environment." }, { "heading": "5.2.2 IMAGINING SURPRISING EVENTS", "text": "As mentioned, S-sequences frequently include epistemically-surprising transitions, which, in the context of the AAI environment, constitute events where objects come into view. As a result, STM is significantly more likely to include roll-outs with new objects appearing in the frame, in contrast to the baseline that employs the objective-timescale transition model.\nThe ability of the STM to imagine novel and salient future events encourages exploratory behaviour, which is distinct from the active inference agent’s intrinsic exploratory motivations. We again stress that although the predicted futures may be inaccurate with respect to the ground-truth positions of the objects, they are nevertheless more informative with respect to the agent’s potential affordances in the environment. This is in stark contrast with the objective-timescale model, which imagines futures in the absence of any objects. As a result, the STM agent is less prone to get stuck in a sub-optimal state, which was commonly observed in the baseline system, and is more inclined to explore the environment beyond its current position." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We proposed STM, a novel approach to learning a transition dynamics model with the use of sequences of episodic memories, which define an agent’s more useful, subjective timescale. STM showed significant improvement against the baseline agent’s performance in the AAI environment. Inspired by the problems of inaccurate and inefficient long-term predictions in model-based RL and the recent neuroscience literature on episodic memory and human time perception, we merged ideas from the different fields into one new technique of learning a forward model. 
We further emphasised two important characteristics of the newly-devised model – its ability to vary the temporal extent of future predictions and to predict future salient events. The application of our technique is not limited to active inference, and it can be adapted for use in other model-based frameworks.
Future work may explore more generalised approaches to action summarisation and dynamic thresholding for memory formation. Another enticing direction of research is to investigate the feasibility of having a single transition model that slowly transitions from training on an objective timescale to training on a subjective timescale, as memory formation goes on." }, { "heading": "A PRELIMINARIES", "text": "A.1 ACTIVE INFERENCE
Active inference is a corollary of the free-energy principle applied to action (Friston et al., 2016; Friston, 2019; Sajid et al., 2019). In this framework, an agent embedded in an environment aims to do two things: (i) minimise surprisal from the observations of the environment under the agent's internal model of this environment, and (ii) perform actions so as to minimise the expected surprisal in the future. More formally, an agent is equipped with a generative model $p(o_t, s_t; \theta)$, where $o_t$ is the agent's observation at time $t$, $s_t$ is the hidden state of the environment, and $\theta$ denotes the parameters of the generative model. The agent's surprise at time $t$ is defined as the negative log-likelihood, $-\log p(o_t; \theta)$. We can upper-bound this intractable expression using variational inference by introducing an approximate posterior distribution, $q(s_t)$, over $s_t$, such that:
$$
-\log p(o_t; \theta) \le \mathbb{E}_{q(s_t)}\big[\log q(s_t) - \log p(o_t, s_t; \theta)\big] = \mathcal{F}, \qquad (7)
$$
where $\mathcal{F}$ is the variational free energy. The minimisation of this quantity realises objective (i) and is performed by optimising the parameters of the generative model, $\theta$. It is also equivalent to the maximisation of model evidence, which intuitively implies that the agent aims to perfect its generative model at explaining the sensory observations from the environment. To realise objective (ii), the agent must select actions that lead to the lowest expected surprise in the future, which can be calculated using the expected free energy (EFE), $G$:
$$
G(\pi, \tau) = \mathbb{E}_{p(o_\tau \mid s_\tau)}\Big[\underbrace{\mathbb{E}_{q(s_\tau \mid \pi)}\big[\log q(s_\tau \mid \pi) - \log p(o_\tau, s_\tau \mid \pi)\big]}_{\text{variational free energy, } \mathcal{F}}\Big], \qquad (8)
$$
where $\tau > t$ and $\pi = \{a_t, a_{t+1}, \dots, a_{\tau-1}\}$ is a sequence of actions (policy) between the present time $t$ and the future time $\tau$. The free-energy-minimising system must, therefore, imagine the future observations given a policy and calculate the expected free energy conditioned on taking this policy. Then, actions that lead to lower values of the EFE are chosen with higher probability, as opposed to actions that lead to higher values of the EFE, such that:
$$
p(\pi) = \sigma(-\gamma G(\pi)), \qquad (9)
$$
where $G(\pi) = \sum_{\tau > t} G(\pi, \tau)$, $\gamma$ is the temperature parameter, $\sigma(\cdot)$ denotes a softmax function, and $t$ is the present timestep." }, { "heading": "B ARCHITECTURAL DETAILS AND TRAINING", "text": "B.1 BASELINE IMPLEMENTATION
As mentioned, each component of the generative and inference models is parametrised by feed-forward neural networks (including fully-connected, convolutional and transpose-convolutional layers), whose architectural details can be found in Figure 6. The latent bottleneck of the autoencoder, $s$, was of size 10. The hyperparameters of the top-down attention mechanism were: $a = 2$, $b = 0.5$, $c = 0.1$, and $d = 5$, chosen to match those in Fountas et al. (2020a).
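As a small numerical aside on the policy-selection rule of Eq. (9), the sketch below evaluates $p(\pi) = \sigma(-\gamma G(\pi))$ for a few invented expected-free-energy values; the max-subtraction is a standard numerical-stability trick on our part, not something stated in the paper.

```python
import numpy as np

def policy_probs(G, gamma=1.0):
    """p(pi) = softmax(-gamma * G(pi)), computed stably."""
    z = -gamma * np.asarray(G, dtype=np.float64)
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

G = [2.0, 0.5, 3.5]                   # expected free energies of three policies
print(policy_probs(G, gamma=2.0))     # the second (lowest-EFE) policy dominates
```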
Similarly, we restricted the action space to just 3 actions – forward, left, right. For testing, we optimised the MCTS parameters of the baseline agent, setting the exploration hyperparameter $c_{\mathrm{explore}} = 0.1$ (see Eq. 10) and performing 30 simulation loops, each with a depth of 1. The networks were trained using separate optimisers for stability reasons. The habitual and transition networks were trained with a learning rate of 0.0001; the autoencoder's optimiser used a learning rate of 0.001. The batch size was set to 50 and the model was trained for 750k iterations under a green observational prior. All of the networks were implemented using Tensorflow v2.2 (Abadi et al., 2015). Tests were performed in Animal-AI v2.0.1 (Beyret et al., 2019).
Furthermore, following Fountas et al. (2020a), we define the MCTS upper confidence bound as
$$
U(s, a) = \tilde{G}(s, a) + c_{\mathrm{explore}} \cdot Q_{\phi_a}(a \mid s) \cdot \frac{1}{N(s, a) + 1} \qquad (10)
$$
As discussed, each network was trained with its corresponding loss function; these are the constituent parts of the total variational free energy. In particular, the autoencoder was trained using Eqs. 1a and 1b, the transition model using Eq. 1b, and the habitual network using Eq. 1c.
Furthermore, following the training procedure from Fountas et al. (2020a), we stabilise the convergence of the autoencoder by modifying the loss function to:
$$
\begin{aligned}
\mathcal{L}_{\mathrm{autoencoder}} = &-\mathbb{E}_{q(s_t)}\big[\log p(o_t \mid s_t; \theta_o)\big]\\
&+ \gamma D_{\mathrm{KL}}\big[q(s_t; \phi_s) \,\|\, p(s_t \mid s_{t-1}, a_{t-1}; \theta_s)\big]\\
&+ (1 - \gamma) D_{\mathrm{KL}}\big[q(s_t; \phi_s) \,\|\, \mathcal{N}(0, I)\big],
\end{aligned} \qquad (11)
$$
where $\gamma$ is a hyperparameter that gradually increases from 0 to 0.8 during training.
B.2 PRIORITISED EXPERIENCE REPLAY
As part of the baseline system's training procedure, we utilise prioritised experience replay (PER) (Schaul et al., 2016) to mitigate the detrimental effects of on-line learning (which was used in the original paper by Fountas et al. (2020a)), and to encourage better object-centric representations.
In particular, on-line learning has three major issues associated with it. First, training is performed on correlated data points, which is generally considered to be detrimental for training neural networks (Schaul et al., 2016). Second, observations that are rarely encountered in an environment are discarded in on-line learning and are used for training only when visited again. These are likely to be the observations for which there is most room for improvement. Instead, the agent will often be training on already well-predicted transitions that it happens to visit often. Finally, an on-line learning agent is constrained by its current position in an environment to sample new data and thus has very limited control over the content of its training batches.
Furthermore, as mentioned in Section 4.1, in the Animal-AI environment rare observations are those that include objects; yet, objects are a central component of this environment – the only way to interact, get rewards and, importantly, the only means of minimising the free energy optimally. To encourage our agent to learn better object representations, we employ PER with the objective-timescale transition model free energy as the priority metric. As discussed, observations with higher values of this metric tend to constitute more complex scenes, which include objects – the only source of complexity in the AAI. See Figure 7 for qualitative evidence of this trend.
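A minimal sketch of such a prioritised buffer, with the transition free energy (Eq. 1b) as the priority, is given below. The buffer layout and the exponent alpha are simplifying assumptions on our part; see Schaul et al. (2016) for the full PER scheme with importance-sampling corrections.

```python
import numpy as np

class FreeEnergyPER:
    """Replay buffer that samples transitions proportionally to their saliency."""
    def __init__(self, capacity=100_000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, free_energy):
        if len(self.data) >= self.capacity:          # drop the oldest entry
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((free_energy + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng=np.random):
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx], idx

    def update(self, idx, new_free_energies):        # refresh priorities after training
        for i, fe in zip(idx, new_free_energies):
            self.priorities[i] = (fe + 1e-6) ** self.alpha
```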
The use of PER resulted in a considerable improvement in the baseline's performance and a better ability to reconstruct observations with objects (see Figure 8).
B.3 STM IMPLEMENTATION
The STM introduces two additional components: an STM habitual network and an STM transition model. The habitual network was trained using the same training procedure as described in Appendix B.1. The transition model was trained with a batch size of 15 and a learning rate of 0.0005. Each batch consisted of zero-padded S-sequences of length 50. We use a Masking layer to ignore the zero-padded parts of the sequences in the computational graph. The training was stopped at 200k training iterations. For testing STM-MCTS in Section 5.1, we optimise the MCTS parameters, setting $c_{\mathrm{explore}} = 0.1$ and performing 15 simulation loops, each with a depth of 3. The threshold on the objective-timescale transition model free energy, $\epsilon$, was manually set to 5 after inspection of the buffer and value distribution.
B.4 ACTION HEURISTIC
To train the STM transition model in the Animal-AI environment, we implement a simple heuristic that is used to summarise the sequence of actions taken by the agent from one memory to reach the next one. A sequence of actions, $A = \{a_{\tau_1}, a_{\tau_1+1}, \dots, a_{\tau_1+(N-1)}\}$, takes the agent from a recorded memory $s_{\tau_1}$ to memory $s_{\tau_2}$, where the time between these states is $\tau_2 - \tau_1 = N$, and $a \in \{a_{\mathrm{forward}}, a_{\mathrm{right}}, a_{\mathrm{left}}\}$. We employ polar coordinates relative to the agent's initial position in Cartesian coordinates at time $\tau_1$ and perform iterative updates of its position after every action until the time-step of the next episodic memory, $\tau_2$, is reached. Given the agent's orientation in the environment, $\theta$, the next position of the agent is calculated using
$$
p_{t+1} = p_t + \begin{bmatrix} \sin\theta \\ \cos\theta \end{bmatrix}, \quad \text{where } p_t = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \text{ at } t = \tau_1. \qquad (12)
$$
Finally, we retrieve the angle $\phi$, which describes the direction in which the agent has travelled with respect to its initial position and orientation. This angle is used to decide on the action that summarises the trajectory, using
$$
a = \begin{cases} a_{\mathrm{forward}} & |\phi| \le 22.5^\circ \\ a_{\mathrm{right}} & 22.5^\circ < \phi < 180^\circ \\ a_{\mathrm{left}} & -22.5^\circ > \phi \ge -180^\circ. \end{cases}
$$
Although this heuristic provided satisfactory results, trajectory encoding is one of the most limiting parts of the STM and is a promising direction for further research." }, { "heading": "C ADDITIONAL RESULTS", "text": "We provide additional results of the STM prediction roll-outs:
a) Figure 10: random roll-outs generated by the system. These diverse roll-outs demonstrate that STM is able to: i) make correct action-conditioned predictions, ii) speed up its prediction timescale when objects are far away, iii) slow down the prediction timescale when objects are nearby.
b) Figure 11: STM consistently imagines objects coming into view. The observations produced by the model are entirely plausible given the path the agent is taking and the context it finds itself in. This indicates that STM does indeed produce semantically meaningful predictions. It is pertinent to note that the roll-outs comply with the physics of the environment, which is crucial, as it potentially refutes the hypothesis that these imagined objects were predicted at random.
c) Figure 12: roll-outs produced by the objective-timescale model using the same starting states as in Figure 11. These roll-outs are in stark contrast to those produced by STM, exemplifying the baseline's inability to imagine objects that are not present in the initial frame." } ]
2020
null
SP:074d113e06bfa79b8a5314560ef0b6669278abd5
[ "This paper presents a denoising-based method for randomized smoothing that converts a base classifier into a smoothed one with p-robustness to adversarial examples. It considers a practical setting where the retraining/finetuning of the base classifier is largely inapplicable (e.g. the commercial classification service with only API provided to users). To do this, it adopts a recently proposed methodology termed denoised smoothing [1] by prepending a custom-trained denoiser to the pretrained classifier. The major novelty of this work lies at the proposed denoising method using learned score function. The new denoising method only requires training one score network and is readily applicable to defend various $l_p$ adversaries, which is a key feature not available in [1]. The experiments show the proposed method outperforms the previous denoising-based approach, and is sometimes on par with the white-box approach [2] that manipulates the classifier. " ]
Randomized smoothing with various noise distributions is a promising approach to protect classifiers from $\ell_p$ adversarial attacks. However, it requires an ensemble of classifiers trained with different noise types and magnitudes, which is computationally expensive. In this work, we present an efficient method for randomized smoothing that does not require any re-training of classifiers. We build upon denoised smoothing, which prepends a denoiser to the pre-trained classifier. We investigate two approaches to the image denoising problem for randomized smoothing and show that using the score function suits both. Moreover, we present an efficient algorithm that can scale to randomized smoothing and can be applied regardless of noise type or level. To validate, we demonstrate the effectiveness of our methods through extensive experiments on CIFAR-10 and ImageNet, under various $\ell_p$ adversaries.
[]
[ { "authors": [ "Muhammad Asim", "Ali Ahmed", "Paul Hand" ], "title": "Invertible generative models for inverse problems: mitigating representation error and dataset bias", "venue": null, "year": 1905 }, { "authors": [ "Muhammad Asim", "Ali Ahmed", "Paul Hand" ], "title": "Invertible generative models for inverse problems: mitigating representation error and dataset bias, 2020. URL https://openreview.net/ forum?id=BJgkbyHKDS", "venue": null, "year": 2020 }, { "authors": [ "Anish Athalye", "Nicholas Carlini" ], "title": "On the robustness of the cvpr 2018 white-box adversarial example defenses", "venue": "arXiv preprint arXiv:1804.03286,", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "N. Carlini", "D. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Ruediger Ehlers" ], "title": "Formal verification of piece-wise linear feed-forward neural networks", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2017 }, { "authors": [ "Matteo Fischetti", "Jason Jo" ], "title": "Deep neural networks and mixed integer linear optimization. Constraints", "venue": null, "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": null, "year": 2014 }, { "authors": [ "Kyong Hwan Jin", "Michael T McCann", "Emmanuel Froustey", "Michael Unser" ], "title": "Deep convolutional neural network for inverse problems in imaging", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Matt Jordan", "Justin Lewis", "Alexandros G Dimakis" ], "title": "Provable certificates for adversarial examples: Fitting a ball in the union of polytopes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L Dill", "Kyle Julian", "Mykel J Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Durk P Kingma", "Yann L. 
Cun" ], "title": "Regularized estimation of image statistics by score matching", "venue": "Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Guang-He Lee", "Yang Yuan", "Shiyu Chang", "Tommi Jaakkola" ], "title": "Tight certificates of adversarial robustness for randomly smoothed classifiers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Certified adversarial robustness with additive noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Xiaolin Hu", "Jun Zhu" ], "title": "Defense against adversarial attacks using high-level representation guided denoiser", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alessio Lomuscio", "Lalit Maganti" ], "title": "An approach to reachability analysis for feed-forward relu neural networks", "venue": "arXiv preprint arXiv:1706.07351,", "year": 2017 }, { "authors": [ "Mengyin Lu", "Matthew Stephens" ], "title": "Empirical bayes estimation of normal means, accounting for uncertainty in estimated standard errors, 2019", "venue": null, "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Dongyu Meng", "Hao Chen" ], "title": "Magnet: a two-pronged defense against adversarial examples", "venue": "In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security,", "year": 2017 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Herbert Robbins" ], "title": "An empirical bayes approach to statistics", "venue": "In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics,", "year": 1956 }, { "authors": [ "Hadi Salman", "Jerry Li", "Ilya Razenshteyn", "Pengchuan Zhang", "Huan Zhang", "Sebastien Bubeck", "Greg Yang" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hadi Salman", "Greg Yang", "Huan Zhang", "Cho-Jui Hsieh", "Pengchuan Zhang" ], "title": "A convex relaxation barrier to tight 
robustness verification of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hadi Salman", "Mingjie Sun", "Greg Yang", "Ashish Kapoor", "J. Zico Kolter" ], "title": "Denoised smoothing: A provable defense for pretrained classifiers, 2020", "venue": null, "year": 2020 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Saeed Saremi", "Rupesh Srivastava" ], "title": "Provable robust classification via learned smoothed densities, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved techniques for training score-based generative models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Yang Song", "Sahaj Garg", "Jiaxin Shi", "Stefano Ermon" ], "title": "Sliced score matching: A scalable approach to density and score estimation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Ying Tai", "Jian Yang", "Xiaoming Liu", "Chunyan Xu" ], "title": "Memnet: A persistent memory network for image restoration", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2020 }, { "authors": [ "Vincent Tjeng", "Kai Y. 
Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Jay Whang", "Qi Lei", "Alexandros G Dimakis" ], "title": "Compressed sensing with invertible generative models and dependent noise", "venue": "arXiv preprint arXiv:2003.08089,", "year": 2020 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Greg Yang", "Tony Duan", "Edward Hu", "Hadi Salman", "Ilya Razenshteyn", "Jerry Li" ], "title": "Randomized smoothing of all shapes and sizes", "venue": "arXiv preprint arXiv:2002.08118,", "year": 2020 }, { "authors": [ "Runtian Zhai", "Chen Dan", "Di He", "Huan Zhang", "Boqing Gong", "Pradeep Ravikumar", "Cho-Jui Hsieh", "Liwei Wang" ], "title": "Macer: Attack-free and scalable robust training via maximizing certified radius", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dinghuai Zhang", "Mao Ye", "Chengyue Gong", "Zhanxing Zhu", "Qiang Liu" ], "title": "Black-box certification with randomized smoothing: A functional optimization based framework", "venue": "arXiv preprint arXiv:2002.09169,", "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 1901 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Yunjin Chen", "Deyu Meng", "Lei Zhang" ], "title": "Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising", "venue": "IEEE Transactions on Image Processing,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The deep image classifiers are susceptible to deliberate noises as known as adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2017). Even though many works proposed heuristics that can annul or mitigate adversarial attacks, most of them were broken by stronger attacks (Athalye et al., 2018; Athalye & Carlini, 2018). The vulnerability of empirical defenses had led the researchers to scrutinize on certified defenses, which ensure the models to have constant output within the allowed set around given input. Unfortunately, many provable defenses are not feasible to large-scale neural networks because of their constraints on the architecture.\nOn the other hand, randomized smoothing is a practical method that does not restrain the choice of neural networks. The randomized smoothing converts any base classifier to a smoothed classifier by making predictions over randomly perturbed samples. Then the smoothed classifiers are guaranteed to have a `p certified radius, which is theoretically derived by the noise type used for smoothing. Since Cohen et al. (2019) derived tight `2 certified radius for Gaussian randomized smoothing, sequential works studied the certification bounds for various distributions (Teng et al., 2020; Yang et al., 2020). As base classifiers are required to predict randomly perturbed samples, natural classifiers are not sufficient for randomized smoothing. Therefore, many works proposed training ensemble of base classifiers accustomed for randomized smoothing. However, since each trained classifier only applies to specific noise distribution and level, it is expensive to protect against various `p adversaries and robustness strength.\nIn this work, we tackle the inefficiency of training random-ensemble of base classifiers by using one universal image denoiser to the pre-trained classifier. The idea of using denoiser for randomized smoothing was first introduced by Salman et al. (2020) and is refer to denoised smoothing. One step further, we study general image denoising problem for randomized smoothing with two different approaches: 1) direct training of image denoiser, and 2) solve the optimization problem by using a generative model to project to the learned data manifold. Then, we show that the score function, which is the gradient of log-density, is crucial for both approaches. We exploit multi-scale denoising score matching (Song & Ermon, 2019) for score estimation, and propose an efficient algorithm simulated annealing for image denoising. Remark that we only require one score network to certify various noise distributions and levels. We provide experimentations on ImageNet and CIFAR-10 datasets to show the efficacy of our methods. Specifically, our denoisers perform better than original denoised smoothing, while can be applied to various noise types without any re-training. Further-\nmore, we compare with the random-ensemble based method, which we refer to white-box smoothing, and show that our method works are comparable to them. In sum, we list our contributions:\n• We propose novel score-based image denoisers for randomized smoothing. • We improve denoised smoothing, which was originally proposed by Salman et al. (2020)\nand generalize to other distributions without training any neural networks." 
}, { "heading": "2 RANDOMIZED SMOOTHING AND DENOISED SMOOTHING", "text": "" }, { "heading": "2.1 BACKGROUNDS ON RANDOMIZED SMOOTHING", "text": "Let f : Rd → Y be a classifier and q be a distribution on Rd. Then the randomized smoothing with q is a method that converts the base classifier f to the associated smoothed classifier g, where g(x) returns the class which is most likely to be predicted by the base classifier f when x is perturbed by a random noise sampled from q, i.e.,\ng(x) = arg max c∈Y Pr u∼q(u)\n[ f(x + u) = c ] . (1)\nThe noise distribution is usually a symmetric log-concave distribution, i.e. q(u) = exp(−φ(u)) for some even and convex φ. Note that to control the robustness/accuracy tradeoff, we embed the noise level λ to q, then we have qλ(u) = exp(−φ(uλ )). We mix the notations q and qλ throughout the paper.\nRobustness guarantee for smoothed classifiers Suppose an adversary can perturb the input x inside the allowed set B, which is usually an `p ball centered at x. For the case when B is `2 ball and q is Gaussian distribution N (0, σ2I), g(x) is robust within the radius\nR = σ\n2\n( Φ−1(p1)− Φ−1(p2) ) (2)\nwhere Φ is inverse cumulative distribution function, and p1 = maxc Pr[f(x + u) = c] and p2 = maxc6=g(x) Pr[f(x + u) = c]. Cohen et al. (2019) first derived the certified radius by using Neyman-Pearson lemma, and later Salman et al. (2019a) showed alternative derivation using the Lipschitz property of smoothed classifier. Furthermore when q is a centered Laplace distribution, the robustness certificate for `1 radius was derived by Teng et al. (2020). Later, the proof methods are generalized to various distributions (may not be log-concave) that can certify various `p radius (Yang et al., 2020). Remark that the robustness guarantee depends on the noise distribution qλ and the performance of base classifier f under random perturbation with qλ." }, { "heading": "2.2 RANDOMIZED SMOOTHING VIA IMAGE DENOISING", "text": "Even though the randomized smoothing can convert any classifier to a provably robust classifier, the smoothed classifier from natural classifiers are below the standard as they are not capable of predicting randomly perturbed samples. Many previous studies focused on training classifiers accustomed to randomized smoothing, which spans from noisy data augmentation (Cohen et al., 2019; Li et al., 2019) to its variants such as adversarial training (Salman et al., 2019a) or stability training (Lee et al., 2019; Zhai et al., 2019). However, such methods are computationally expensive and require a massive number of classifiers per noise types and levels.\nThe idea of prepending denoiser to the classifier was first introduced by Salman et al. (2020). By training denoiser Dθ : Rd → Rd, the smoothed classifier converted from f ◦ Dθ outperforms ’no-denoiser’ baseline. They proposed training denoisers with mean squared error (MSE) loss or classification (CLF) loss, or combining both methods. Formally, they are\nLMSE(θ) = Ex∼p,u∼q[‖Dθ(x + u)− x‖2], (3) LCLF(θ) = Ex∼p,u∼q[LCE(F (Dθ(x + u)), f(x))]. (4)\nwhere LCE is the cross-entropy loss and F is soft version of hard classifier f . They showed that training with CLF loss makes perform better than denoiser with only MSE loss. Alternatively, Saremi & Srivastava (2020) trained neural empirical bayes estimator that can refine the white noise. Nonetheless, those methods still suffer from expensive training of numerous denoisers with respect to each noise types and levels." 
}, { "heading": "3 SCORE-BASED IMAGE DENOISING", "text": "" }, { "heading": "3.1 FORMULATION OF IMAGE DENOISING PROBLEM", "text": "The image denoising is an example of linear inverse problem, which can be formulated as following: given an observation y = x + u with u ∼ q(u) finds x̂(y) that is close to original x. Let x ∼ p(x) then the distribution of y is pq(y) = ∫ p(y,x)dx = ∫ p(y|x)p(x)dx = ∫ q(y − x)p(x)dx = (p ∗ q)(y).\nOne-step denoiser Like equation 3, the most common approach to achieve denoiser is to train denoising autoencoder (DAE) Dθ with MSE loss (Zhang et al., 2017; ?). Suppose q is a Gaussian distributionN (0, σ2I) and let the distribution of y by pσ2 . Then the following proposition (Robbins, 1956; Lu & Stephens, 2019; Saremi & Hyvarinen, 2020) reveals the relationship between the optimal denoiser Dθ∗ and pσ2 . Proposition 3.1. Assume θ∗ ∈ arg minθ LMSE(θ), then the following equation holds:\nDθ∗(y) = y + σ2∇y log pσ2(y) (5)\nThe proof of proposition 3.1 is in Appendix A. Let us define the score function of density p(x) by ∇x log p(x), then the optimal DAE can be obtained by estimating the score of pσ2 . Let sθ(·;σ) be score network that estimates score of smoothed density pσ2 . Then the denoiser from sθ is given by\nx̂(y) = y + σ2sθ(y;σ). (6)\nRemark that it is only valid when q is Gaussian distribution.\nMulti-step denoiser Consider the maximum a posteriori (MAP) estimator that maximizes the conditional distribution p(x|y). Formally the MAP loss is given by,\narg min x LMAP(x;y) = arg min x\n− log p(x|y) (7)\n= arg min x\n− log p(x)− log p(y|x) + log p(y) (8)\n= arg min x\n− log p(x)− log q(y − x) (9)\n= arg min x\n− log p(x) + φ(y − x). (10)\nNote that we simply remove density term p(y) and rewrite with q. Lastly, we rewrite q with φ. Since the density p(x) is usually intractable for high-dimensional dataset, one may use approximation to make the MAP loss tractable. Many recent works focused on using cutting edge generative models such as generative adversarial network (GAN) or invertible neural networks to approximate p(x) in equation 9 (Ulyanov et al., 2018; Whang et al., 2020; Asim et al., 2020). However, GAN suffer from mode collapse, and invertible neural networks require extremely long steps to reach local minima, which are not sufficient for randomized smoothing.\nInstead, we aim to approximate the gradient of LMAP by the score of Gaussian smooth densities. Let the approximate MAP loss with σ̃ by\nLMAP,σ̃(x;y) = − log pσ̃2(x) + φ(y − x). (11) Then we can approximate the gradient of LMAP,σ̃(x;y) by score network and perform gradient descent initialized with x0 = y as following:\nxt+1 = xt − α∇xtLMAP,σ̃(x;y) ≈ xt + α(sθ(xt; σ̃) +∇xtφ(y − xt)). (12) Remark that the proposed method can be applied to any log-concave noise distributions. Following theorem shows the recovery guarantee of our methods when q is a Gaussian distribution. Theorem 3.2. Let x∗ be local optimum of p(x), and y = x∗ + u where u ∼ N (0, σ2I). Assume − log p is µ-strongly convex within the neighborhood Br(x) = {z : ‖z − x‖ ≤ r}. Then, the gradient descent method on approximate loss LMAP,σ̃2(x;y) initialized by x0 = y converges to its local minima x̂(y; σ̃) ∈ arg minLMAP,σ̃2(x;y) that satisfies:\nE‖x̂(y; σ̃)− x∗‖2 ≤ σ √ d(1 + µσ̃2)\n1 + µσ̃2 + µσ2 + σ̃ √ d (13)\nThe proof of theorem 3.2 is in Appendix A. Remark that the upper bound in equation 13 increases as σ increases, which shows that the recovery becomes harder as σ becomes larger. 
}, { "heading": "3.2 EFFICIENT IMAGE DENOISING WITH SIMULATED ANNEALING", "text": "By Theorem 3.2, a small σ̃ makes the error bound tight, but the score approximation is inaccurate in the early steps; conversely, when σ̃ is large, the error bound is too loose. To arbitrate this tradeoff, and to make the method scalable, we propose simulated annealing for score-based image denoising. Let {σi}Li=1 be a decreasing sequence of noise levels; simulated annealing then runs T steps of approximate gradient descent for each σi. The procedure is given in Algorithm 1.
Algorithm 1 Simulated Annealing for denoising
Require: y, {σi}Li=1, α, T
1: initialize x0 = y
2: for i ← 1 : L do
3:   αi ← α · σi2/σ̃2
4:   for t ← 1 : T do
5:     xt+1 ← xt + αi (sθ(xt; σi) + ∇xt φ(y − xt))
6:   end for
7:   x0 ← xT
8: end for
9: return xT
Note that Song & Ermon (2019; 2020) used annealed Langevin dynamics for generative modeling. Our approach is similar to theirs, but we consider the image denoising problem instead. Also note that Kingma & LeCun (2010) trained a score network for image denoising, but they used primitive neural networks for which exact score matching was possible." }, { "heading": "3.3 SCORE ESTIMATION VIA SCORE MATCHING", "text": "Score estimation has been studied in various contexts such as generative modeling (Song et al., 2020; Song & Ermon, 2019) and reinforcement learning (Sutton et al., 2000). Score matching trains a score network sθ(x) to estimate the score. The original score matching objective is given by
E_{x∼p(x)} [ tr(∇x sθ(x)) + (1/2)‖sθ(x)‖22 ]. (14)
However, due to the heavy computation of tr(∇sθ(x)), and since we are only interested in the scores of smoothed densities, we use a different approach.
Denoising Score Matching Denoising score matching learns the score of smoothed densities. More concretely, the score network sθ estimates the score of the density pσ2(y) = ∫ N (y; x, σ2I) p(x) dx. The objective was proved to be equivalent to the following (Vincent, 2011):
E_{y∼qσ2(y|x), x∼p(x)}[‖sθ(y;σ) − ∇y log qσ2(y|x)‖22]. (15)
Remark that the optimal score network satisfies sθ∗(x;σ) = ∇ log pσ2(x) for each σ, and as σ → 0, sθ∗(x;σ) → ∇ log p(x).
Multi-Scale Denoising Score Matching Recently, training a score network with multi-scale denoising score matching has been proposed (Song & Ermon, 2019). Multi-scale denoising score matching trains one score network over various noise magnitudes. Given a sequence of noise levels {σi}Li=1, where each σi is the standard deviation of a centered Gaussian distribution, rewriting the denoising score matching objective for each σi gives
L(θ; σi) = (1/2) E_{x∼p, y∼N(x, σi2 I)} [ ‖sθ(y; σi) + (y − x)/σi2‖22 ]. (16)
The total loss is then
L(θ; {σi}Li=1) = (1/L) ∑i σi2 L(θ; σi); (17)
note that each loss is weighted by σi2, which keeps the losses of the different noise levels at the same order of magnitude. It is worth noting that our method is unsupervised and classifier-free.
Here we highlight some advantages of multi-scale denoising score matching. First, by learning various noise magnitudes at once, it suffices to train only one neural network for image denoising; therefore, we can perform randomized smoothing regardless of the noise level. Second, the noise makes the support of the smoothed density the whole space, making score estimation more consistent. Moreover, a large amount of noise fills the low-density regions, which helps estimate the score of non-Gaussian or off-the-manifold samples. Empirically, we found that multi-scale learning helps the denoising performance. See Appendix C for details."
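For concreteness, here is a minimal PyTorch-style sketch of the multi-scale denoising score matching loss (Eqs. 16-17); it is our illustration under stated assumptions, not the authors' code. For Gaussian perturbation the target ∇y log qσ2(y|x) equals −(y − x)/σ2, and in practice one often samples a single noise level per example rather than summing over all L levels.
```python
import torch

def multiscale_dsm_loss(score_net, x, sigmas):
    """Multi-scale denoising score matching (Eqs. 16-17) on a batch x of
    images; each level's loss is weighted by sigma^2 so that all noise
    levels contribute at the same order of magnitude."""
    total = 0.0
    for sigma in sigmas:
        y = x + sigma * torch.randn_like(x)      # y ~ N(x, sigma^2 I)
        target = -(y - x) / sigma**2             # grad_y log q_{sigma^2}(y|x)
        diff = score_net(y, sigma) - target
        per_example = 0.5 * diff.flatten(1).pow(2).sum(dim=1)   # Eq. 16
        total = total + sigma**2 * per_example.mean()
    return total / len(sigmas)                   # Eq. 17
```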
}, { "heading": "4 EXPERIMENTS", "text": "We study the performance of our proposed denoisers applied to randomized smoothing. We experiment on the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) datasets. For comparison, we measure the certified accuracy at R, which is the fraction of the test set for which the smoothed classifier predicts correctly and certifies robustness at an ℓp radius bigger than R. Due to computational constraints, we conduct our experiments with N = 10,000 samples and failure probability α = 0.001. Otherwise, we follow the same experimental procedures as Cohen et al. (2019). We first illustrate the perceptual performance of our proposed denoisers." }, { "heading": "4.1 VISUAL PERFORMANCE OF PROPOSED DENOISERS", "text": "We demonstrate the visual performance of our denoisers. For an image sampled from the ImageNet dataset, we perturb the image with Gaussian noise (σ = 1.0); the denoised images from the one-step and multi-step methods differ. Note that the result from the one-step denoiser is blurrier, while the multi-step denoiser produces sharper edges. We refer to Appendix D for more examples on CIFAR-10 and ImageNet under various noise types." }, { "heading": "4.2 CERTIFICATION WITH ONE-STEP DENOISER", "text": "We evaluate the performance of the one-step denoiser for Gaussian randomized smoothing. We compare with 1) white-box smoothing, the canonical approach that trains base classifiers with Gaussian data augmentation (Cohen et al., 2019), and 2) denoised smoothing with the denoisers trained by Salman et al. (2020). As Salman et al. (2020) trained denoisers with various methods, we compare against their best values. Note that they assumed query access and full access, which are distinguished by how much information about the base classifier is provided. Remark that our method is 'no access': we do not need any information about the classifier. For all experiments on denoised smoothing, we use the same ResNet110 classifier for CIFAR-10 and the PyTorch pretrained ResNet50 classifier for ImageNet. In addition, as our method is agnostic to the base classifier, we find that using a stronger classifier results in better certified accuracy. See Appendix C for additional experiments.
CIFAR-10 The results for CIFAR-10 are shown in Table 1. Remark that even without using the classifier loss, our method outperforms the query-access baseline and is slightly better than the full-access baseline. The results are also comparable to white-box smoothing, which is an upper bound for our framework. We suspect two reasons for the performance boost: the use of a better architecture and the effect of multi-scale training. We conducted additional experiments on the effect of multi-scale training and found that it helps score estimation and therefore helps denoised smoothing; the results are in Appendix C. However, note that using the classifier loss helps certification at large radii, because images denoised from large-magnitude noise are too blurry for conventional classifiers to predict.
ImageNet The results for ImageNet are shown in Table 2. Note that our method outperforms the previous denoised smoothing baselines. We attribute this to the same reasons as on CIFAR-10. However, there is still a large gap between denoised smoothing and white-box smoothing, which is due to the difficulty of learning the score function of high-resolution images."
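As a minimal sketch of how the reported metric is computed (our illustration with assumed data structures, not the authors' evaluation code), certified accuracy at radius R simply aggregates per-example certification outputs:
```python
def certified_accuracy(results, R):
    """Fraction of the test set that is correctly classified AND certified
    robust at an l_p radius > R. `results` holds (prediction, radius, label)
    triples produced by a certification routine such as CERTIFY of
    Cohen et al. (2019); abstentions can be encoded with radius = -1."""
    hits = sum(1 for pred, radius, label in results
               if pred == label and radius > R)
    return hits / len(results)
```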
}, { "heading": "4.3 CERTIFICATION WITH MULTI-STEP DENOISER", "text": "We demonstrate the effectiveness of our multi-step denoiser for denoised smoothing with various noise types. As a baseline, we compare with white-box smoothing, i.e., training classifiers with noisy data augmentation. We experiment on Gaussian noise (Cohen et al., 2019), Laplace noise (Teng et al., 2020), and uniform noise (Yang et al., 2020) for both CIFAR-10 and ImageNet. For all experiments, we use ResNet110 classifiers for CIFAR-10 and ResNet50 classifiers for ImageNet; see Appendix B for more details. Importantly, all experiments are done with a single score network for each of CIFAR-10 and ImageNet.
Note that for CIFAR-10, denoised smoothing with our denoiser is slightly worse than white-box smoothing except for the uniform distribution. As Yang et al. (2020) reported, the uniform distribution is well suited to convolutional neural networks, so white-box smoothing achieves higher performance there. For ImageNet, we found that score estimation is difficult, and the denoising algorithm takes too long to certify with a large number of samples. Nonetheless, our approach remains competitive with white-box smoothing." }, { "heading": "5 RELATED WORKS", "text": "" }, { "heading": "5.1 DEFENSE AGAINST ADVERSARIAL ATTACKS", "text": "Empirical defense methods Empirical defenses include erasing adversarial perturbations and making models predict well in the presence of adversarial examples. The former are similar to our approach in that they use a trained denoiser (Meng & Chen, 2017; Liao et al., 2018) or project adversarial examples onto the learned data manifold using generative models (Song et al., 2017; Samangouei et al., 2018). However, all these methods have been broken by adaptive attacks (Athalye et al., 2018; Athalye & Carlini, 2018; Tramer et al., 2020), while our method has provable robustness. The latter are referred to as adversarial training (Madry et al., 2017; Kurakin et al., 2016; Zhang et al., 2019), which augments training with adversarial examples. Although adversarial training methods show strong empirical robustness against various adversarial attacks, they remain vulnerable to undiscovered attacks.
Certified defense methods provide provable guarantees that the classifier's prediction remains unchanged within a neighborhood of an input. They are based on certification methods that are either exact or conservative. Exact certification methods use Satisfiability Modulo Theories solvers (Katz et al., 2017; Ehlers, 2017) or mixed-integer linear programming (Fischetti & Jo, 2018; Tjeng et al., 2019; Lomuscio & Maganti, 2017). However, these methods carry a heavy computational burden and depend on the architecture of the neural network. In contrast, conservative methods are based on Lipschitz bounds of the neural network, which is more computationally efficient (Jordan et al., 2019; Salman et al., 2019b; Wong & Kolter, 2018; Raghunathan et al., 2018). However, the above methods do not scale to practical neural networks.
Instead, randomized smoothing, which is the Weierstrass transform of a classifier, has been shown to be scalable and architecture-independent.
Randomized smoothing was first presented with a guarantee derived from a differential privacy perspective (Lecuyer et al., 2019). Li et al. (2019) then showed tighter certificates using α-divergence minimization between the original and smoothed distributions. Recently, Cohen et al. (2019) proposed the tightest ℓ2 robustness guarantee with the Gaussian distribution. Furthermore, a series of works derived certification bounds for various ℓp adversaries, including the ℓ1 adversary (Teng et al., 2020), ℓ∞ (Zhang et al., 2020), and ℓ0 (Lee et al., 2019; Levine & Feizi). Later, Yang et al. (2020) presented generic proof methods for certification based on Wulff crystal theory.
Even though randomized smoothing does not constrain the base classifier, to achieve non-trivial robustness, several works have proposed custom training methods for randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019a; 2020; Yang et al., 2020). Alternatively, Lecuyer et al. (2019) trained denoising autoencoders to help scale PixelDP to practical neural networks. Our work is based on Salman et al. (2020), with some improvements and generalizations. Note that 1) our approach does not require any information about the base classifier, and 2) we propose a general image denoising method that does not require training denoisers per noise type or level." }, { "heading": "5.2 IMAGE DENOISING", "text": "Image denoising has advanced substantially with deep neural networks (Zhang et al., 2017; Jin et al., 2017; Tai et al., 2017). Moreover, inverse imaging using generative models has been studied. Ulyanov et al. (2018) showed that a GAN can act as an image prior and be used for various inverse imaging problems. On the other hand, Asim et al. (2019) claimed that GANs suffer from mode collapse and are biased toward their training dataset, which is insufficient for general image denoising. Instead, several studies (Asim et al., 2019; Whang et al., 2020) showed that invertible neural networks such as Glow (Kingma & Dhariwal, 2018) or RealNVP (Dinh et al., 2016) can be used as a deep image prior for various inverse imaging applications. Our work is based on the score function, whose use for inverse imaging is less studied. Note that Kingma & LeCun (2010) used regularized score matching for image denoising, but their neural network is primitive, and regularized score matching is hard to scale to practical neural networks." }, { "heading": "6 CONCLUSION", "text": "In this work, we presented score-based image denoising methods for randomized smoothing. Our method does not require any re-training of classifiers and trains only one score network that can be used to denoise any noise type and level. We empirically found that our denoiser performs better than conventional image denoisers and denoisers trained with the classification loss, while being comparable to the random-ensemble approach.
We believe that current randomized smoothing is theoretically well-designed but needs to be scalable to be deployed in real-world applications. From that perspective, our approach is a good starting point that can endow any classifier with robustness without re-training. However, the difficulty of estimating the score function of high-dimensional data remains to be addressed. We believe that using better architectures or devising faster optimization algorithms might help." } ]
2,020
EFFICIENT RANDOMIZED SMOOTHING BY DENOISING WITH LEARNED SCORE FUNCTION
SP:e79752ff486049e2e9ec9f588aa918ca2399a5e2
[ "The paper presents a linear time and space attention mechanism based on random features to approximate the softmax. The paper is clearly written and easy to follow. The results are convincing: not chasing SOTA, but comparing to sensible baselines, namely [Baevski & Auli 2019] for language modeling on Wikitext-103, and [Vaswani et al. 2017] for machine translation on WMT14 EN-DE/EN-FR and IWSLT14 DE-EN." ]
Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA’s efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints.
[ { "affiliations": [], "name": "Hao Peng" }, { "affiliations": [], "name": "Nikolaos Pappas" }, { "affiliations": [], "name": "Dani Yogatama" }, { "affiliations": [], "name": "Roy Schwartz" }, { "affiliations": [], "name": "Noah A. Smith" }, { "affiliations": [], "name": "Lingpeng Kong" }, { "affiliations": [], "name": "♣DeepMind ♦Allen" } ]
[ { "authors": [ "Joshua Ainslie", "Santiago Ontanon", "Chris Alberti", "Vaclav Cvicek", "Zachary Fisher", "Philip Pham", "Anirudh Ravula", "Sumit Sanghai", "Qifan Wang", "Li Yang" ], "title": "ETC: Encoding long and structured inputs in transformers", "venue": "In Proc. of EMNLP,", "year": 2020 }, { "authors": [ "Maximilian Alber", "Pieter-Jan Kindermans", "Kristof Schütt", "Klaus-Robert Müller", "Fei Sha" ], "title": "An empirical study on the properties of random bases for kernel methods", "venue": "In Proc. of NeurIPS,", "year": 2017 }, { "authors": [ "Haim Avron", "Vikas Sindhwani", "Jiyan Yang", "Michael W. Mahoney" ], "title": "Quasi-Monte Carlo feature maps for shift-invariant kernels", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Haim Avron", "L. Kenneth Clarkson", "P. David", "Woodruff" ], "title": "Faster kernel ridge regression using sketching and preconditioning", "venue": "SIAM J. Matrix Analysis Applications,", "year": 2017 }, { "authors": [ "Jimmy Ba", "Geoffrey E Hinton", "Volodymyr Mnih", "Joel Z Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "In Proc. of NeurIPS,", "year": 2016 }, { "authors": [ "Alexei Baevski", "Michael Auli" ], "title": "Adaptive input representations for neural language modeling", "venue": "In Proc. of ICLR,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Proc. of ICLR,", "year": 2015 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv: 2004.05150,", "year": 2020 }, { "authors": [ "S. Bochner" ], "title": "Harmonic Analysis and the Theory of Probability", "venue": null, "year": 1955 }, { "authors": [ "Ondřej Bojar", "Christian Buck", "Christian Federmann", "Barry Haddow", "Philipp Koehn", "Johannes Leveling", "Christof Monz", "Pavel Pecina", "Matt Post", "Herve Saint-Amand", "Radu Soricut", "Lucia Specia", "Aleš Tamchyna" ], "title": "Findings of the 2014 workshop on statistical machine translation", "venue": "In Proc. of WMT,", "year": 2014 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": "arXiv: 2005.14165,", "year": 2020 }, { "authors": [ "Mauro Cettolo", "Jan Niehues", "Sebastian Stüker", "Luisa Bentivogli", "Marcello Federico" ], "title": "Report on the 11th IWSLT evaluation campaign", "venue": "In Proc. of IWSLT,", "year": 2014 }, { "authors": [ "Kehai Chen", "Rui Wang", "Masao Utiyama", "Eiichiro Sumita" ], "title": "Recurrent positional embedding for neural machine translation", "venue": "In Proc. of EMNLP,", "year": 2019 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": null, "year": 1904 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder–decoder for statistical machine translation", "venue": "In Proc. of EMNLP,", "year": 2014 }, { "authors": [ "Youngmin Cho", "Lawrence K. Saul" ], "title": "Kernel methods for deep learning", "venue": "In Proc. 
of NeurIPS,", "year": 2009 }, { "authors": [ "Krzysztof Marcin Choromanski", "Valerii Likhosherstov", "David Dohan", "Xingyou Song", "Andreea Gane", "Tamas Sarlos", "Peter Hawkins", "Jared Quincy Davis", "Afroz Mohiuddin", "Lukasz Kaiser", "David Benjamin Belanger", "Lucy J Colwell", "Adrian Weller" ], "title": "Rethinking attention with performers", "venue": "In Proc. of ICLR,", "year": 2021 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (ELUs)", "venue": "In Proc. of ICLR,", "year": 2016 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In Proc. of ICLR,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proc. of NAACL,", "year": 2019 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier", "Marc’Aurelio Ranzato" ], "title": "Classical structured prediction losses for sequence to sequence learning", "venue": "In Proc. of NAACL,", "year": 2018 }, { "authors": [ "Yingbo Gao", "Christian Herold", "Weiyue Wang", "Hermann Ney" ], "title": "Exploring kernel functions in the softmax layer for contextual word classification", "venue": "In International Workshop on Spoken Language Translation,", "year": 2019 }, { "authors": [ "Jie Hao", "Xing Wang", "Baosong Yang", "Longyue Wang", "Jinfeng Zhang", "Zhaopeng Tu" ], "title": "Modeling recurrence for transformer", "venue": "In Proc. of NAACL,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NeurIPs Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Jonathan Ho", "Nal Kalchbrenner", "Dirk Weissenborn", "Tim Salimans" ], "title": "Axial attention in multidimensional transformers", "venue": null, "year": 1912 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Thomas Hofmann", "Bernhard Schölkopf", "Alexander J. Smola" ], "title": "Kernel methods in machine learning", "venue": "Annals of Statistics,", "year": 2008 }, { "authors": [ "Neil Houlsby", "Andrei Giurgiu", "Stanislaw Jastrzebski", "Bruna Morrone", "Quentin De Laroussilhe", "Andrea Gesmundo", "Mona Attariyan", "Sylvain Gelly" ], "title": "Parameter-efficient transfer learning for NLP", "venue": "In Proc. of ICML,", "year": 2019 }, { "authors": [ "Jungo Kasai", "Nikolaos Pappas", "Hao Peng", "James Cross", "Noah A. Smith" ], "title": "Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation", "venue": "In Proc. of ICLR,", "year": 2021 }, { "authors": [ "Angelos Katharopoulos", "Apoorv Vyas", "Nikolaos Pappas", "Francois Fleuret" ], "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "venue": "In Proc. 
of ICML,", "year": 2020 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proc. of ICLR,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proc. of ICLR,", "year": 2014 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In Proc. of ICLR,", "year": 2020 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In Proc. of ICML,", "year": 2019 }, { "authors": [ "Shiyang Li", "Xiaoyong Jin", "Yao Xuan", "Xiyou Zhou", "Wenhu Chen", "Yu-Xiang Wang", "Xifeng Yan" ], "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "venue": "In Proc. of NeurIPS,", "year": 2019 }, { "authors": [ "Peter J. Liu", "Mohammad Saleh", "Etienne Pot", "Ben Goodrich", "Ryan Sepassi", "Lukasz Kaiser", "Noam Shazeer" ], "title": "Generating wikipedia by summarizing long sequences", "venue": "In Proc. of ICLR,", "year": 2018 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proc. of ACL,", "year": 2011 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In Proc. of ICLR,", "year": 2017 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and Optimizing LSTM Language Models", "venue": "In Proc. of ICLR,", "year": 2018 }, { "authors": [ "Thomas Miconi", "Kenneth Stanley", "Jeff Clune" ], "title": "Differentiable plasticity: training plastic neural networks with backpropagation", "venue": "In Proc. of ICML,", "year": 2018 }, { "authors": [ "Lesly Miculicich", "Dhananjay Ram", "Nikolaos Pappas", "James Henderson" ], "title": "Document-level neural machine translation with hierarchical attention networks", "venue": "In Proc. of EMNLP,", "year": 2018 }, { "authors": [ "Abdelrahman Mohamed", "Dmytro Okhonko", "Luke Zettlemoyer" ], "title": "Transformers with convolutional context for ASR", "venue": null, "year": 1904 }, { "authors": [ "Nikita Nangia", "Samuel Bowman" ], "title": "ListOps: A diagnostic dataset for latent tree learning", "venue": "In Proc. of NAACL Student Research Workshop,", "year": 2018 }, { "authors": [ "Junier Oliva", "William Neiswanger", "Barnabas Poczos", "Eric Xing", "Hy Trac", "Shirley Ho", "Jeff Schneider" ], "title": "Fast function to function regression", "venue": "In Proc. of AISTATS,", "year": 2015 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "In Proc. of WMT,", "year": 2018 }, { "authors": [ "Emilio Parisotto", "H. Francis Song", "Jack W. Rae", "Razvan Pascanu", "Caglar Gulcehre", "Siddhant M. Jayakumar", "Max Jaderberg", "Raphael Lopez Kaufman", "Aidan Clark", "Seb Noury", "Matthew M. Botvinick", "Nicolas Heess", "Raia Hadsell" ], "title": "Stabilizing transformers for reinforcement learning", "venue": "In Proc. of ICML,", "year": 2020 }, { "authors": [ "Hao Peng", "Roy Schwartz", "Sam Thomson", "Noah A. Smith" ], "title": "Rational recurrences", "venue": "In Proc. 
of EMNLP,", "year": 2018 }, { "authors": [ "Matt Post" ], "title": "A call for clarity in reporting", "venue": "BLEU scores. In Proc. of WMT,", "year": 2018 }, { "authors": [ "Jiezhong Qiu", "Hao Ma", "Omer Levy", "Wen-tau Yih", "Sinong Wang", "Jie Tang" ], "title": "Blockwise selfattention for long document understanding", "venue": "In Findings of EMNLP,", "year": 2020 }, { "authors": [ "Dragomir R. Radev", "Pradeep Muthukrishnan", "Vahed Qazvinian" ], "title": "The ACL Anthology network", "venue": "In Proc. of the Workshop on Text and Citation Analysis for Scholarly Digital Libraries,", "year": 2009 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jack W. Rae", "Anna Potapenko", "Siddhant M. Jayakumar", "Chloe Hillier", "Timothy P. Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": "In Proc. of ICLR,", "year": 2020 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Proc. of NeurIPS,", "year": 2007 }, { "authors": [ "Ankit Singh Rawat", "Jiecao Chen", "Felix Xinnan X Yu", "Ananda Theertha Suresh", "Sanjiv Kumar" ], "title": "Sampled softmax with random Fourier features", "venue": "In Proc. of NeurIPS,", "year": 2019 }, { "authors": [ "Aurko Roy", "Mohammad Taghi Saffar", "David Grangier", "Ashish Vaswani" ], "title": "Efficient content-based sparse attention with routing transformers", "venue": null, "year": 2003 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "J. Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "J. Schmidhuber" ], "title": "Reducing the ratio between learning complexity and number of time varying variables in fully recurrent nets", "venue": "In Proc. of ICANN,", "year": 1993 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proc. of ACL,", "year": 2016 }, { "authors": [ "Sheng Shen", "Zhen Dong", "Jiayu Ye", "Linjian Ma", "Zhewei Yao", "Amir Gholami", "Michael W. Mahoney", "Kurt Keutzer" ], "title": "Q-BERT: Hessian based ultra low precision quantization of BERT", "venue": "In Proc. of AAAI,", "year": 2020 }, { "authors": [ "Sainbayar Sukhbaatar", "Edouard Grave", "Piotr Bojanowski", "Armand Joulin" ], "title": "Adaptive attention span in transformers", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "Yitong Sun" ], "title": "Random Features Methods in Supervised Learning", "venue": "PhD thesis, The University of Michigan,", "year": 2019 }, { "authors": [ "Yi Tay", "Dara Bahri", "Donald Metzler", "Da-Cheng Juan", "Zhe Zhao", "Che Zheng" ], "title": "Synthesizer: Rethinking self-attention in transformer models", "venue": "arXiv: 2005.00743,", "year": 2020 }, { "authors": [ "Yi Tay", "Dara Bahri", "Liu Yang", "Don Metzler", "Da-Cheng Juan" ], "title": "Sparse sinkhorn attention", "venue": "In Proc. 
of ICML,", "year": 2020 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey", "venue": "arXiv: 2009.06732,", "year": 2020 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Samira Abnar", "Yikang Shen", "Dara Bahri", "Philip Pham", "Jinfeng Rao", "Liu Yang", "Sebastian Ruder", "Donald Metzler" ], "title": "Long range arena: A benchmark for efficient transformers", "venue": "In Proc. of ICLR,", "year": 2021 }, { "authors": [ "Yao-Hung Hubert Tsai", "Shaojie Bai", "Makoto Yamada", "Louis-Philippe Morency", "Ruslan Salakhutdinov" ], "title": "Transformer dissection: An unified understanding for transformer’s attention via the lens of kernel", "venue": "In Proc. of EMNLP,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proc. of NeurIPS,", "year": 2017 }, { "authors": [ "Sinong Wang", "Belinda Z. Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": null, "year": 2006 }, { "authors": [ "Ronald J. Williams", "David Zipser" ], "title": "A learning algorithm for continually running fully recurrent neural networks", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In Proc. of ICLR,", "year": 2019 }, { "authors": [ "Zhanghao Wu", "Zhijian Liu", "Ji Lin", "Yujun Lin", "Song Han" ], "title": "Lite transformer with long-short range attention", "venue": "In Proc. of ICLR,", "year": 2020 }, { "authors": [ "Weiqiu You", "Simeng Sun", "Mohit Iyyer" ], "title": "Hard-coded Gaussian attention for neural machine translation", "venue": "In Proc. of ACL,", "year": 2020 }, { "authors": [ "Felix Xinnan X Yu", "Ananda Theertha Suresh", "Krzysztof M Choromanski", "Daniel N Holtmann-Rice", "Sanjiv Kumar" ], "title": "Orthogonal random features", "venue": "In Proc. of NeurIPS,", "year": 2016 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontanon", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang", "Amr Ahmed" ], "title": "Big bird: Transformers for longer sequences", "venue": null, "year": 2007 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transformer architectures (Vaswani et al., 2017) have achieved tremendous success on a variety of sequence modeling tasks (Ott et al., 2018; Radford et al., 2018; Parmar et al., 2018; Devlin et al., 2019; Parisotto et al., 2020, inter alia). Under the hood, the key component is attention (Bahdanau et al., 2015), which models pairwise interactions of the inputs, regardless of their distances from each other. This comes with quadratic time and memory costs, making the transformers computationally expensive, especially for long sequences. A large body of research has been devoted to improving their time and memory efficiency (Tay et al., 2020c). Although better asymptotic complexity and prominent gains for long sequences have been achieved (Lee et al., 2019; Child et al., 2019; Beltagy et al., 2020, inter alia), in practice, many existing approaches are less well-suited for moderatelength ones: the additional computation steps required by some approaches can overshadow the time and memory they save (Kitaev et al., 2020; Wang et al., 2020; Roy et al., 2020, inter alia).\nThis work proposes random feature attention (RFA), an efficient attention variant that scales linearly in sequence length in terms of time and space, and achieves practical gains for both long and moderate length sequences. RFA builds on a kernel perspective of softmax (Rawat et al., 2019). Using the well-established random feature maps (Rahimi & Recht, 2007; Avron et al., 2016; §2), RFA approximates the dot-then-exponentiate function with a kernel trick (Hofmann et al., 2008): exp(x · y) ≈ φ(x) · φ(y). Inspired by its connections to gated recurrent neural networks (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) and fast weights (Schmidhuber, 1992), we further augment RFA with an optional gating mechanism, offering a straightforward way of learning with recency bias when locality is desired.\n∗The majority of this work was done while these authors were at DeepMind.\nRFA and its gated variant (§3) can be used as a drop-in substitute for the canonical softmax attention, and increase the number of parameters by less than 0.1%. We explore its applications in transformers on language modeling, machine translation, and long text classification (§4). Our experiments show that RFA achieves comparable performance to vanilla transformer baselines in all tasks, while outperforming a recent related approach (Katharopoulos et al., 2020). The gating mechanism proves particularly useful in language modeling: the gated variant of RFA outperforms the transformer baseline on WikiText-103. RFA shines in decoding, even for shorter sequences. In our head-to-head comparison on machine translation benchmarks, RFA decodes around 2× faster than a transformer baseline, without accuracy loss. Comparisons to several recent efficient transformer variants on three long text classification datasets show that RFA is competitive in terms of both accuracy and efficiency. Our analysis (§5) shows that more significant time and memory efficiency improvements can be achieved for longer sequences: 12× decoding speedup with less than 10% of the memory for 2,048-length outputs." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 ATTENTION IN SEQUENCE MODELING", "text": "The attention mechanism (Bahdanau et al., 2015) has been widely used in many sequence modeling tasks. Its dot-product variant is the key building block for the state-of-the-art transformer architectures (Vaswani et al., 2017). 
Let {qt}Nt=1 denote a sequence of N query vectors that attend to sequences of M key and value vectors. At each timestep, the attention linearly combines the values, weighted by the outputs of a softmax:
attn(qt, {ki}, {vi}) = ∑i [ exp(qt · ki/τ) / ∑j exp(qt · kj/τ) ] vi⊤. (1)
τ is the temperature hyperparameter determining how “flat” the softmax is (Hinton et al., 2015).1
Calculating attention for a single query takes O(M) time and space. For the full sequence of N queries the space amounts to O(MN). When the computation cannot be parallelized across the queries, e.g., in autoregressive decoding, the time complexity is quadratic in the sequence length." }, { "heading": "2.2 RANDOM FEATURE METHODS", "text": "The theoretical backbone of this work is the unbiased estimation of the Gaussian kernel by Rahimi & Recht (2007). Based on Bochner’s theorem (Bochner, 1955), Rahimi & Recht (2007) proposed random Fourier features to approximate a desired shift-invariant kernel. The method nonlinearly transforms a pair of vectors x and y using a random feature map φ; the inner product between φ(x) and φ(y) approximates the kernel evaluation on x and y. More precisely: Theorem 1 (Rahimi & Recht, 2007). Let φ : Rd → R2D be the nonlinear transformation
φ(x) = √(1/D) [ sin(w1 · x), . . . , sin(wD · x), cos(w1 · x), . . . , cos(wD · x) ]⊤. (2)
When the d-dimensional random vectors wi are independently sampled from N (0, σ2Id),
E_{wi}[φ(x) · φ(y)] = exp(−‖x − y‖2 / 2σ2). (3)
The variance of the estimate is inversely proportional to D (Appendix A.2; Yu et al., 2016).
Random feature methods proved successful in speeding up kernel methods (Oliva et al., 2015; Avron et al., 2017; Sun, 2019, inter alia), and more recently were used to efficiently approximate softmax (Rawat et al., 2019). In §3.1, we use them to derive an unbiased estimate of exp(〈· , ·〉) and then an efficient approximation to softmax attention." }, { "heading": "3 MODEL", "text": "This section presents RFA (§3.1) and its gated variant (§3.2). In §3.3 we lay out several design choices and relate RFA to prior works. We close by practically analyzing RFA’s complexity (§3.4).
1 M = N in self-attention; they may differ, e.g., in the cross attention of a sequence-to-sequence model." }, { "heading": "3.1 RANDOM FEATURE ATTENTION", "text": "RFA builds on an unbiased estimate of exp(〈· , ·〉) from Theorem 1, which we begin with:
exp(x · y/σ2) = exp(‖x‖2/2σ2 + ‖y‖2/2σ2) exp(−‖x − y‖2/2σ2) ≈ exp(‖x‖2/2σ2 + ‖y‖2/2σ2) φ(x) · φ(y). (4)
The last line does not have any nonlinear interaction between φ(x) and φ(y), allowing for a linear time/space approximation to attention. For clarity we assume the queries and keys are unit vectors.2
attn(qt, {ki}, {vi}) = ∑i [ exp(qt · ki/σ2) / ∑j exp(qt · kj/σ2) ] vi⊤ ≈ ∑i [ φ(qt)⊤φ(ki) / ∑j φ(qt) · φ(kj) ] vi⊤ = [ φ(qt)⊤ ∑i φ(ki) ⊗ vi ] / [ φ(qt) · ∑j φ(kj) ] = RFA(qt, {ki}, {vi}). (5)
⊗ denotes the outer product between vectors, and σ2 corresponds to the temperature term τ in Eq. 1. RFA can be used as a drop-in replacement for softmax attention.
(a) The input is revealed in full to cross attention and encoder self-attention. Here RFA calculates attention using Eq. 5. (b) In causal attention RFA attends only to the prefix.3 This allows for a recurrent computation. The tuple (St ∈ R2D×d, zt ∈ R2D) is used as the “hidden state” at time step t to keep track of the history, similar to those in RNNs. Then RFA(qt, {ki}i≤t, {vi}i≤t) = φ(qt)⊤St / (φ(qt) · zt), where
St = St−1 + φ(kt) ⊗ vt, zt = zt−1 + φ(kt). (6)
2D denotes the size of φ(·). Appendix A.1 summarizes the computation procedure of RFA, and Figure 1 compares it against the softmax attention. Appendix A.3 derives causal RFA in detail.
Analogously to the softmax attention, RFA has its multiheaded variant (Vaswani et al., 2017). In our experiments we use causal RFA in a transformer language model (§4.1), and both cross and causal RFA in the decoder of a sequence-to-sequence machine translation model.
2 This can be achieved by ℓ2-normalizing the queries and keys. See §3.3 for a related discussion. 3 It is also sometimes called “decoder self-attention” or “autoregressive attention.”"
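To make the recurrence concrete, here is a minimal NumPy sketch of causal RFA (Eqs. 5-6) using the random feature map of Eq. 2; it is our illustration, not the released implementation. It assumes ℓ2-normalized queries and keys and a fixed isotropic σ absorbed into the inputs, rather than the learned per-dimension σ of Eq. 8. Cross attention (Eq. 5) is the same computation with S and z accumulated once over the full key/value sequence.
```python
import numpy as np

def phi(x, W):
    """Random feature map of Eq. 2; W is (D, d) with rows w_i ~ N(0, I)."""
    proj = W @ x
    return np.concatenate([np.sin(proj), np.cos(proj)]) / np.sqrt(W.shape[0])

def causal_rfa(Q, K, V, D=64, seed=0):
    """Causal RFA (Eq. 6): O(N) time via the running state (S, z).
    Q, K: (N, d) unit vectors; V: (N, d)."""
    W = np.random.default_rng(seed).standard_normal((D, Q.shape[1]))
    S = np.zeros((2 * D, V.shape[1]))   # running sum of phi(k_i) (outer) v_i
    z = np.zeros(2 * D)                 # running sum of phi(k_i)
    out = np.empty_like(V)
    for t in range(Q.shape[0]):
        fk = phi(K[t], W)
        S += np.outer(fk, V[t])         # S_t = S_{t-1} + phi(k_t) (outer) v_t
        z += fk                         # z_t = z_{t-1} + phi(k_t)
        fq = phi(Q[t], W)
        out[t] = (fq @ S) / (fq @ z + 1e-6)   # phi(q_t)^T S_t / (phi(q_t) . z_t)
    return out
```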
}, { "heading": "3.2 RFA-GATE: LEARNING WITH RECENCY BIAS", "text": "The canonical softmax attention does not have any explicit modeling of distance or locality. In learning problems where such inductive bias is crucial (Ba et al., 2016; Parmar et al., 2018; Miconi et al., 2018; Li et al., 2019, inter alia), transformers heavily rely on positional encodings. Answering to this, many approaches have been proposed, e.g., learning the attention spans (Sukhbaatar et al., 2019; Wu et al., 2020), and enhancing the attention computation with recurrent (Hao et al., 2019; Chen et al., 2019) or convolutional (Wu et al., 2019; Mohamed et al., 2019) components.
RFA faces the same issue, but its causal attention variant (Eq. 6) offers a straightforward way of learning with recency bias. We draw inspiration from its connections to RNNs, and augment RFA with a learned gating mechanism (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Peng et al., 2018, inter alia):
gt = sigmoid(wg · xt + bg), St = gt St−1 + (1 − gt) φ(kt) ⊗ vt, zt = gt zt−1 + (1 − gt) φ(kt). (7)
wg and bg are learned parameters, and xt is the input representation at timestep t.4 By multiplying the learned scalar gates 0 < gt < 1 against the hidden state (St, zt), history is exponentially decayed, favoring more recent context.
The gating mechanism shows another benefit of RFA: it would otherwise be more difficult to build similar techniques into the softmax attention, where there is no clear sense of “recurrence” (Appendix A.5). It proves useful in our language modeling experiments (§4.1)."
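Continuing the previous sketch (same assumptions, our illustration rather than the released code), the gated update of Eq. 7 replaces the plain accumulation inside the loop; wg, bg, and the per-step input xt are placeholders for learned quantities:
```python
import numpy as np

def phi(x, W):                          # random feature map of Eq. 2
    p = W @ x
    return np.concatenate([np.sin(p), np.cos(p)]) / np.sqrt(W.shape[0])

def gated_rfa_step(S, z, q_t, k_t, v_t, x_t, w_g, b_g, W):
    """One timestep of gated causal RFA (Eq. 7): a scalar gate g_t in (0, 1)
    exponentially decays the history (S, z), favoring recent context."""
    g = 1.0 / (1.0 + np.exp(-(w_g @ x_t + b_g)))   # g_t = sigmoid(w_g . x_t + b_g)
    fk = phi(k_t, W)
    S = g * S + (1.0 - g) * np.outer(fk, v_t)
    z = g * z + (1.0 - g) * fk
    fq = phi(q_t, W)
    out = (fq @ S) / (fq @ z + 1e-6)
    return out, S, z
```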
}, { "heading": "3.3 DISCUSSION", "text": "On query and key norms, and learned random feature variance. Eq. 5 assumes both the queries and keys are of norm 1. It therefore approximates a softmax attention that normalizes the queries and keys before multiplying them, and then scales the logits by dividing them by σ2. Empirically, this normalization step scales down the logits (Vaswani et al., 2017) and enforces that −1 ≤ q⊤k ≤ 1. In consequence, the softmax outputs would be “flattened” if not for σ, which can be set a priori as a hyperparameter (Yu et al., 2016; Avron et al., 2017; Sun, 2019, inter alia). Here we instead learn it from data with the reparameterization trick (Kingma & Welling, 2014):
w̃i ∼ N (0, Id), wi = σ ◦ w̃i. (8)
Id is the d × d identity matrix, and ◦ denotes the elementwise product between vectors. The d-dimensional vector σ is learned, but the random vectors w̃i are not.5
This norm-1 constraint is never mandatory. Rather, we employ it for notational clarity and easier implementation. In preliminary experiments we find it has little impact on the performance when σ is set properly or learned from data. Eq. 12 in Appendix A presents RFA without imposing it.
Going beyond the Gaussian kernel. More broadly, random feature methods can be applied to a family of shift-invariant kernels, the Gaussian kernel being one of them. In the same family, the order-1 arc-cosine kernel (Cho & Saul, 2009) can be approximated with the feature map φarccos(x) = √(1/D) [ReLU(w1 · x), . . . , ReLU(wD · x)]⊤ (Alber et al., 2017).6 In our experiments, the Gaussian and arc-cosine variants achieve similar performance. This supplements the exploration of alternatives to softmax in attention (Tsai et al., 2019; Gao et al., 2019).
Relations to prior work. Katharopoulos et al. (2020) inspire the causal attention variant of RFA. They use a feature map based on the exponential linear unit activation (Clevert et al., 2016): elu(·) + 1. It significantly underperforms both the baseline and RFA in our controlled experiments, showing the importance of a properly-chosen feature map. Random feature approximation of attention is also explored by a concurrent work (Choromanski et al., 2021), with applications in masked language modeling for proteins. They propose positive random features to approximate softmax, aiming for a lower variance in critical regions. RFA instead normalizes the queries and keys before random projection to reduce variance. Going beyond both, RFA establishes the benefits of random feature methods as a more universal substitute for softmax across all attention variants, facilitating its applications in, e.g., sequence-to-sequence learning.
4 In multihead attention (Vaswani et al., 2017), kt and vt are calculated from xt using learned affine transformations.
5 This departs from Eq. 2 by lifting the isotropic assumption imposed on the Gaussian distribution: note the difference between the vector σ in Eq. 8 and the scalar σ in Eq. 3. We find this improves the performance in practice (§4), even though the same result in Theorem 1 may not directly apply.
6 Apart from replacing the sinusoid functions with ReLU, it constructs wi in the same way as Eq. 8.
There are interesting connections between gated RFA and fast weights (Schmidhuber, 1992; 1993; Ba et al., 2016; Miconi et al., 2018, inter alia). Emphasizing recent patterns, they learn a temporal memory to store history similarly to Eq. 7. The main difference is that RFA additionally normalizes the output using φ(qt) · z as in Eq. 6, a by-product of approximating softmax’s partition function. It is intriguing to study the role of this normalization term, which we leave to future work." }, { "heading": "3.4 COMPLEXITY ANALYSIS", "text": "Time. Scaling linearly in the sequence length, RFA needs less computation (in terms of number of operations) for long sequences. This implies speedup wherever the quadratic-time softmax attention cannot be fully parallelized across time steps. More specifically:
• Significant speedup can be expected in autoregressive decoding, both conditional (e.g., machine translation) and unconditional (e.g., sampling from a language model). For example, a 1.9× speedup is achieved in our machine translation experiments (§4.2), and more for longer sequences (e.g., 12× for 2,048-length ones; §5).
• Some applications (e.g., language modeling, text classification) reveal inputs to the model in full.7 When there are enough threads to parallelize softmax attention across time steps, hardly any speedup from RFA can be achieved; when there are not, typically for very long sequences (>1,000), substantial speed gain is possible. For example, RFA does not achieve any speedup when working with 512-length context (§4.1), but achieves a 5.3× speedup with 4,000-length context (§4.3).
Memory. Asymptotically, RFA has better memory efficiency than its softmax counterpart (linear vs. quadratic). To reach a more practical conclusion, we include in our analysis the cost of the feature maps. φ's memory overhead largely depends on its size D. For example, consider the cross attention of a decoder. RFA uses O(4D + 2Dd) space to store φ(qt), ∑i φ(ki) ⊗ vi, and ∑i φ(ki) (Eq. 5; line 12 of Algo. 2).8 In contrast, softmax cross attention stores the encoder outputs with O(Md) memory, with M being the source length. In this case RFA has a lower memory overhead when 2D ≪ M. Typically D should be no less than d for a reasonable approximation (Yu et al., 2016); in a transformer model, d is the size of an attention head, which is usually around 64 or 128 (Vaswani et al., 2017; Ott et al., 2018). This suggests that RFA can achieve significant memory savings for longer sequences, which is supported by our empirical analysis in §5. Further, using moderately sized feature maps is also desirable, so that their overhead does not overshadow the time and memory RFA saves. We experiment with D at d and 2d; the benefit of using D > 2d is marginal.
Appendix A.6 discusses the time and space complexity in more detail, and Appendix C.2 studies the effect of random feature size on performance." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate RFA on language modeling, machine translation, and long text classification." }, { "heading": "4.1 LANGUAGE MODELING", "text": "Setting. We experiment with WikiText-103 (Merity et al., 2017). It is based on English Wikipedia. Table 5 in Appendix B summarizes some of its statistics. We compare the following models:
• BASE is our implementation of the strong transformer-based language model by Baevski & Auli (2019). • RFA builds on BASE, but replaces the softmax attention with random feature attention. We experiment with both Gaussian and arc-cosine kernel variants. • RFA-GATE additionally learns a sigmoid gate on top of RFA (§3.2). It also has a Gaussian kernel variant and an arc-cosine kernel one.9 • φelu is a baseline to RFA. Instead of the random feature methods it uses the elu(·) + 1 feature map, as in Katharopoulos et al. (2020).
7 A causal masking is usually used to prevent the model from accessing future tokens in language models. 8 RFA never constructs the M × 2D × d tensor [φ(ki) ⊗ vi]i, but sequentially processes the sequence. 9 This gating technique is specific to RFA variants, in the sense that it is less intuitive to apply it in BASE.
To ensure fair comparisons, we use comparable implementations, tuning, and training procedures. All models use a 512 block size during both training and evaluation, i.e., they read as input a segment of 512 consecutive tokens, without access to the context from previous mini-batches. RFA variants use 64-dimensional random feature maps. We experiment with two model size settings, small (around 38M parameters) and big (around 242M parameters); they are described in Appendix B.1 along with other implementation details." }, { "heading": "Table 1 (WikiText-103 perplexity) columns: Model | Dev. | Test (small) | Dev. | Test (big)", "text": "Results. Table 1 compares the models' performance in perplexity on WikiText-103 development and test data.
Both kernel variants of RFA, without gating, outperform φelu by more than 2.4 and 2.1 test perplexity for the small and big model respectively, confirming the benefits from using random feature approximation.10 Yet both underperform BASE, with RFA-Gaussian having a smaller gap. Comparing RFA against its gated variants, a more than 1.8 perplexity improvement can be attributed to the gating mechanism; and the gap is larger for small models. Notably, RFA-GATE-Gaussian outperforms BASE under both size settings by at least 1.2 perplexity. In general, RFA models with Gaussian feature maps outperform their arc-cosine counterparts.11 From the analysis in §3.4 we would not expect speedup by RFA models, nor do we see any in the experiments.12\nClosing this section, we explore a “stateful” variant of RFA-GATE-Gaussian. It passes the last hidden state (St, zt) to the next mini-batch during both training and evaluation, a technique commonly used in RNN language models (Merity et al., 2018). This is a consequence of RFA’s RNN-style computation, and is less straightforward to be applicable in the vanilla transformer models.13 From the last row of Table 1 we see that this brings a more than 1.5 test perplexity improvement." }, { "heading": "4.2 MACHINE TRANSLATION", "text": "Datasets. We experiment with three standard machine translation datasets.\n• WMT14 EN-DE and EN-FR (Bojar et al., 2014). Our data split and preprocessing follow those of Vaswani et al. (2017). We share the source and target vocabularies within each language pair, with 32,768 byte pair encoding types (BPE; Sennrich et al., 2016). • IWSLT14 DE-EN (Cettolo et al., 2014) is based on TED talks. The preprocessing follows Edunov et al. (2018). Separate vocabularies of 9K/7K BPE types are used for the source and target.\nTable 5 in Appendix B summarizes some statistics of the datasets. 10All models are trained for 150K steps; this could be part of the reason behind the suboptimal performance of φelu: it may need 3 times more gradient updates to reach similar performance to the softmax attention baseline (Katharopoulos et al., 2020).\n11We observe that RFA Gaussian variants are more stable and easier to train than the arc-cosine ones as well as φelu. We conjecture that this is because the outputs of the Gaussian feature maps have an `2-norm of 1, which can help stabilize training. To see why, sin2(x) + cos2(x) = cos(x− x) = 1.\n12In fact, RFA trains around 15% slower than BASE due to the additional overhead from the feature maps. 13Some transformer models use a text segment from the previous mini-batch as a prefix (Baevski & Auli, 2019; Dai et al., 2019). Unlike RFA, this gives the model access to only a limited amount of context, and significantly increases the memory overhead.\nSetting. We compare the RFA variants described in §4.1. They build on a BASE model that is our implementation of the base-sized transformer (Vaswani et al., 2017). All RFA models apply random feature attention in decoder cross and causal attention, but use softmax attention in encoders. This setting yields the greatest decoding time and memory savings (§3.4). We use 128/64 for D in cross/causal attention. RFA-GATE learns sigmoid gates in the decoder causal attention. The φelu baseline uses the same setting and applies feature map in both decoder cross and causal attention, but not in the encoders. Further details are described in Appendix B.2." }, { "heading": "Model EN-DE EN-FR DE-EN Speed", "text": "Results. 
Table 2 compares the models' test set BLEU on three machine translation datasets. Overall, both the Gaussian and arc-cosine variants of RFA achieve similar performance to BASE on all three datasets, significantly outperforming Katharopoulos et al. (2020). Unlike the trends in the language modeling experiments, here the gating mechanism does not lead to substantial gains. Notably, all RFA variants decode more than 1.8× faster than BASE." }, { "heading": "4.3 LONG TEXT CLASSIFICATION", "text": "We further evaluate RFA's accuracy and efficiency when used as a text encoder on three NLP tasks from the recently proposed Long Range Arena benchmark (Tay et al., 2021), designed to evaluate efficient Transformer variants on tasks that require processing long sequences.14
Experimental setting and datasets. We compare RFA against baselines on the following datasets:
• ListOps (LO; Nangia & Bowman, 2018) aims to diagnose the capability of modelling hierarchically structured data. Given a sequence of operations on single-digit integers, the model predicts the solution, also a single-digit integer. It is formulated as a 10-way classification. We follow Tay et al. (2021) and consider sequences with 500–2,000 symbols.
• Character-level text classification with the IMDb movie review dataset (Maas et al., 2011). This is a binary sentiment classification task.
• Character-level document retrieval with the ACL Anthology Network (AAN; Radev et al., 2009) dataset. The model classifies whether there is a citation between a pair of papers.
To ensure fair comparisons, we implement RFA on top of the transformer baseline by Tay et al. (2021), and closely follow their preprocessing, data split, model size, and training procedure. Speed and memory are evaluated on the IMDb dataset. For our RFA model, we use D = 64 for the IMDb dataset, and D = 128 for the others. We refer the readers to Tay et al. (2021) for further details.
Results. From Table 3 we can see that RFA outperforms the transformer baseline on two out of the three datasets, achieving the best performance on IMDb with 66% accuracy. Averaging across the three datasets, RFA outperforms the transformer by 0.3% accuracy, second only to Zaheer et al. (2020) with a 0.1% accuracy gap. In terms of time and memory efficiency, RFA is among the strongest. RFA speeds up over the transformer by 1.1–5.3×, varying by sequence length. Importantly, compared to the only two baselines that perform comparably to the baseline transformer model (Tay et al., 2020a; Zaheer et al., 2020), RFA has a clear advantage in both speed and memory efficiency, and is the only model that is competitive in both accuracy and efficiency.
14 https://github.com/google-research/long-range-arena" }, { "heading": "5 ANALYSIS", "text": "Decoding time and memory varying by sequence length. §3.4 shows that RFA can potentially achieve more significant speedup and memory saving for longer sequences, which we now explore.
We use a simulated conditional generation experiment to compare RFA's sequence-to-sequence decoding speed and memory overhead against the baseline's. Here we assume the input and output sequences are of the same length. The compared models are of the same size as those described in §4.2, with 6-layer encoders and decoders. Other hyperparameters are summarized in Appendix B.2. All models are tested using greedy decoding with the same batch size of 16, on a TPU v2 accelerator.
From Figures 2 (a) and (b) we observe clear trends.
Varying the lengths, both RFA variants achieve consistent decoding speed with nearly constant memory overhead. In contrast, the baseline decodes more slowly for longer sequences, taking an increasing amount of memory. Notably, for 2,048-length sequences, RFA decodes around 12× faster than the baseline while using less than 10% of the memory. RFA-arccos slightly outperforms RFA-Gaussian in terms of speed and memory efficiency. This is because when using the same D (as we do here), φarccos is half the size of φGaussian. These results suggest that RFA can be particularly useful in sequence-to-sequence tasks with longer sequences, e.g., document-level machine translation (Miculicich et al., 2018).
Figure 3 in Appendix C.1 compares the speed and memory consumption in unconditional decoding (e.g., sampling from a language model). The overall trends are similar to those in Figure 2.
Notes on decoding speed. With a lower memory overhead, RFA can use a larger batch size than the baseline. As noted by Katharopoulos et al. (2020) and Kasai et al. (2021), if we had used mini-batches as large as the hardware allows, RFA could have achieved a more significant speed gain. Nonetheless, we control for batch size even though it is not the most favorable setting for RFA, since the conclusion translates better to common applications where one generates a single sequence at a time (e.g., instantaneous machine translation). For the softmax attention baseline, we follow Ott et al. (2018) and cache previously computed query/key/value representations, which significantly improves its decoding speed (over not caching).
Further analysis results. RFA achieves comparable performance to softmax attention. Appendix C.3 empirically shows that this cannot be attributed to RFA learning a good approximation to softmax: when we train with one attention but evaluate with the other, the performance is hardly better than that of randomly-initialized untrained models. Yet, an RFA model initialized from a pretrained softmax transformer achieves a decent training loss after a moderate number of finetuning steps (Appendix C.4). This suggests some potential applications, e.g., transferring knowledge from a pretrained transformer (e.g., GPT-3; Brown et al., 2020) to an RFA model that is more efficient to sample from." }, { "heading": "6 RELATED WORK", "text": "One common motivation across the following studies, shared by this work and the research we have already discussed, is to scale transformers to long sequences. Note that there are plenty of orthogonal choices for improving efficiency, such as weight sharing (Dehghani et al., 2019), quantization (Shen et al., 2020), knowledge distillation (Sanh et al., 2020), and adapters (Houlsby et al., 2019). For a detailed overview we refer the reader to Tay et al. (2020c).
Sparse attention patterns. The idea behind these methods is to limit the receptive field of the attention computation. It motivated earlier attempts at improving attention's efficiency, and still receives much interest. The sparse patterns can be set a priori (Liu et al., 2018; Qiu et al., 2020; Ho et al., 2020; You et al., 2020, inter alia) or learned from data (Sukhbaatar et al., 2019; Roy et al., 2020, inter alia). For most of these approaches, it is yet to be empirically verified that they are suitable for large-scale sequence-to-sequence learning; few of them have recorded decoding speed benefits.
Compressed context. Wang et al. (2020) compress the context along the timesteps so that the effective sequence length for the attention computation is reduced. Another line of work aims to store past context in a memory module with limited size (Lee et al., 2019; Ainslie et al., 2020; Rae et al., 2020, inter alia), so that accessing longer history only moderately increases the overhead. Reminiscent of RNN language models, RFA attends beyond a fixed context window through a stateful computation, without increasing the time or memory overhead." }, { "heading": "7 CONCLUSION", "text": "We presented random feature attention (RFA). It views the softmax attention through the lens of kernel methods, and approximates it with random feature methods. With an optional gating mechanism, RFA provides a straightforward way of learning with recency bias. RFA's time and space complexity is linear in the sequence length. We use RFA as a drop-in substitute for softmax attention in transformer models. On language modeling, machine translation, and long text classification benchmarks, RFA achieves comparable or better performance than strong baselines. In the machine translation experiment, RFA decodes twice as fast. Further time and memory efficiency improvements can be achieved for longer sequences." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Phil Blunsom, Chris Dyer, Nando de Freitas, Jungo Kasai, Adhiguna Kuncoro, Dianqi Li, Ofir Press, Lianhui Qin, Swabha Swayamdipta, Sam Thomson, the language team at DeepMind and the ARK group at the University of Washington for their helpful feedback. We also thank Tay Yi for helping run the Long Range Arena experiments, Richard Tanburn for the advice on implementations, and the anonymous reviewers for their thoughtful comments. This work was supported in part by NSF grant 1562364 and a Google Fellowship. Nikolaos Pappas was supported by the Swiss National Science Foundation under grant number P400P2 183911 “UNISON.”" }, { "heading": "Appendices", "text": "" }, { "heading": "A RANDOM FEATURE ATTENTION IN MORE DETAIL", "text": "" }, { "heading": "A.1 DETAILED COMPUTATION PROCEDURE", "text": "Algorithms 1 and 2 describe causal and cross random feature attention's computation procedures.
Algorithm 1 Causal random feature attention.
1: procedure RFA-CAUSAL({qi}Ni=1, {ki}Ni=1, {vi}Ni=1)
2: ▷ S is a D × d matrix
3: ▷ z is a D-dimensional vector
4: S, z ← 0, 0
5: for i = 1 to N do
6: q̃i, k̃i ← φ(qi), φ(ki) ▷ Random feature maps
7: S ← S + k̃i ⊗ vi
8: z ← z + k̃i
9: hᵀi ← q̃ᵀi S / (q̃i · z)
10: end for
11: return {hi}Ni=1
12: end procedure
Algorithm 2 Cross random feature attention.
1: procedure RFA-CROSS({qi}Ni=1, {ki}Mi=1, {vi}Mi=1)
2: ▷ S is a D × d matrix
3: ▷ z is a D-dimensional vector
4: S, z ← 0, 0
5: for i = 1 to M do
6: k̃i ← φ(ki) ▷ Random feature map
7: S ← S + k̃i ⊗ vi
8: z ← z + k̃i
9: end for
10: for i = 1 to N do
11: q̃i ← φ(qi) ▷ Random feature map
12: hᵀi ← q̃ᵀi S / (q̃i · z)
13: end for
14: return {hi}Ni=1
15: end procedure" }, { "heading": "A.2 VARIANCE OF RANDOM FOURIER FEATURES", "text": "The following result is due to Yu et al. (2016). Using the same notation as in §2.2:
Var(φ(x) · φ(y)) = (1/(2D)) (1 − e^{−z²})², (9)
where z = ‖x − y‖/σ.
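For concreteness, the following is a minimal NumPy sketch (our illustration, not part of the paper or its released code) that checks Eq. 9 numerically. We fix σ = 1, so that drawing wi ~ N(0, Id) corresponds to the kernel exp(−‖x − y‖²/2), and we use unit-norm inputs to mirror RFA's normalization; all variable and function names are ours.

import numpy as np

def phi(x, W):
    # Random Fourier feature map (sigma = 1): x in R^d -> R^{2D}.
    D = W.shape[0]
    proj = W @ x
    return np.concatenate([np.sin(proj), np.cos(proj)]) / np.sqrt(D)

rng = np.random.default_rng(0)
d, D, trials = 64, 128, 2000

x, y = rng.normal(size=d), rng.normal(size=d)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)   # unit-norm queries/keys

z = np.linalg.norm(x - y)
exact = np.exp(-z ** 2 / 2)                           # Gaussian kernel value
samples = np.array([
    phi(x, W) @ phi(y, W)
    for W in rng.normal(size=(trials, D, d))          # w_i ~ N(0, I_d)
])

print(f"kernel:   exact {exact:.4f}   Monte Carlo mean {samples.mean():.4f}")
print(f"variance: Eq. 9 {(1 - np.exp(-z ** 2)) ** 2 / (2 * D):.2e}   empirical {samples.var():.2e}")

Across runs, the empirical mean and variance closely match the closed forms, i.e., the unbiasedness of the estimator and Eq. 9.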
" }, { "heading": "A.3 DERIVATION OF CAUSAL RFA", "text": "This section presents a detailed derivation of causal RFA as in §3.1. Following Eq. 5 but changing the attended keys and values to the prefix:
RFA(qt, {ki}_{i≤t}, {vi}_{i≤t}) = (φ(qt)ᵀ ∑_{i≤t} φ(ki) ⊗ vi) / (φ(qt) · ∑_{j≤t} φ(kj)). (10)
Let St ≜ ∑_{i≤t} φ(ki) ⊗ vi and zt ≜ ∑_{i≤t} φ(ki); both can be calculated recurrently. Assuming S0 = 0 and z0 = 0:
St = St−1 + φ(kt) ⊗ vt, zt = zt−1 + φ(kt), t ≥ 1. (11)
This completes the derivation of causal RFA as in §3.1.
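The recurrence is easy to transcribe into code. Below is a minimal NumPy sketch (our illustration; the function and variable names are ours, and we use arbitrary positive vectors as stand-ins for the feature maps to keep the denominator well behaved) that computes causal RFA via Eq. 11 and checks it against the closed form of Eq. 10 at the last step.

import numpy as np

def rfa_causal(phi_q, phi_k, v):
    # Causal RFA via the recurrence of Eq. 11.
    # phi_q, phi_k: (T, 2D) feature-mapped queries/keys; v: (T, d) values.
    S = np.zeros((phi_k.shape[1], v.shape[1]))   # running sum of phi(k_i) ⊗ v_i
    z = np.zeros(phi_k.shape[1])                 # running sum of phi(k_i)
    out = []
    for t in range(v.shape[0]):
        S += np.outer(phi_k[t], v[t])
        z += phi_k[t]
        out.append(phi_q[t] @ S / (phi_q[t] @ z))
    return np.stack(out)

rng = np.random.default_rng(1)
T, twoD, d = 10, 32, 8
phi_q = rng.random((T, twoD)) + 0.1
phi_k = rng.random((T, twoD)) + 0.1
v = rng.normal(size=(T, d))

h = rfa_causal(phi_q, phi_k, v)
S_T = sum(np.outer(phi_k[i], v[i]) for i in range(T))   # closed form, Eq. 10
z_T = phi_k.sum(axis=0)
assert np.allclose(h[-1], phi_q[-1] @ S_T / (phi_q[-1] @ z_T))

Note that each step costs O(Dd) time and memory regardless of t, which is the source of RFA's linear complexity.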
" }, { "heading": "A.4 RFA WITHOUT NORM-1 CONSTRAINTS", "text": "§3.1 assumes that the queries and keys are unit vectors. This norm-1 constraint is not a must. Here we present an RFA variant without imposing this constraint. Let C(x) = exp(‖x‖²/(2σ²)). From Eq. 4 we have
attn(qt, {ki}, {vi}) = ∑i [exp(qt · ki/σ²) / ∑j exp(qt · kj/σ²)] vᵀi ≈ [∑i C(qt) C(ki) φ(qt)ᵀ φ(ki) vᵀi] / [∑j C(qt) C(kj) φ(qt) · φ(kj)] = (φ(qt)ᵀ ∑i C(ki) φ(ki) ⊗ vi) / (φ(qt) · ∑j C(kj) φ(kj)). (12)
The specific attention computation is similar to that in §3.1. In sum, lifting the norm-1 constraint brings in an additional scalar term C(·)." }, { "heading": "A.5 RELATING RFA-GATE TO SOFTMAX ATTENTION", "text": "Drawing inspiration from gated RNNs, §3.2 introduces a gated variant of RFA. Now we study its “softmax counterpart”:
k̃i = ki (1 − gi) ∏_{j=i+1}^{t} gj, ṽi = vi (1 − gi) ∏_{j=i+1}^{t} gj, i = 1, . . . , t; ht = attn(qt, {k̃i}_{i≤t}, {ṽi}_{i≤t}). (13)
ht is the output at timestep t and is used for onward computation.
At each step, all prefix keys and values are decayed by a gate value before calculating the attention. This implies that the attention computation for qt+1 cannot start until that of qt is finished. Combined with the linear complexity of softmax normalization, this amounts to quadratic time in sequence length, even for language modeling training.
The above model is less intuitive and more expensive in practice without the RFA perspective. This shows that RFA brings some benefits in developing new attention models." }, { "heading": "A.6 DETAILED COMPLEXITY ANALYSIS", "text": "Table 4 considers a sequence-to-sequence model, and breaks down the comparisons into training (with teacher forcing; Williams & Zipser, 1989) and autoregressive decoding. Here we assume enough threads to fully parallelize softmax attention across timesteps when the inputs are revealed to the model in full. RFA has a lower space complexity, since it never explicitly populates the attention matrices. As for time, RFA trains in linear time, and so does the softmax attention: in teacher-forcing training, a standard transformer decoder parallelizes the attention computation across time steps. The trend of the time comparison differs during decoding: when only one output token is produced at a time, RFA decodes linearly in the output length, while softmax attention decodes quadratically." }, { "heading": "Setting Model Encoder Cross Causal Encoder Cross Causal", "text": "" }, { "heading": "Data Train Dev. Test Vocab.", "text": "" }, { "heading": "B EXPERIMENTAL DETAILS", "text": "Table 5 summarizes some statistics of the datasets used in our experiments. Our implementation is based on JAX.15
During training, we sample a different random projection matrix for each attention head. Preliminary experiments suggest this performs better than using the same random projection throughout training (Table 6). Our conjecture is that this helps keep the attention heads from “over committing” to any particular random projection (Peng et al., 2020). To avoid the overhead of sampling from a Gaussian during training, we do this in an offline manner: before training, we construct a pool of random matrices (typically 200), and at each training step we draw from the pool. At test time, each attention head uses the same random projection, since no accuracy benefit is observed from using different ones for different test instances." }, { "heading": "B.1 LANGUAGE MODELING", "text": "We compare the models using two model size settings, summarized in Table 7. We use the fixed sinusoidal position embeddings by Vaswani et al. (2017). All models are trained for up to 150K gradient steps using the Adam optimizer (Kingma & Ba, 2015). No ℓ2 regularization is used. We apply early stopping based on development set perplexity. All models are trained using 16 TPU v3 accelerators, and tested using a single TPU v2 accelerator.
15 https://github.com/google/jax." }, { "heading": "B.2 MACHINE TRANSLATION", "text": "WMT14. We use the fixed sinusoidal position embeddings by Vaswani et al. (2017). For both EN-DE and EN-FR experiments, we train the models using the Adam (with β1 = 0.1, β2 = 0.98, and ε = 10−9) optimizer for up to 350K gradient steps. We use a batch size of 1,024 instances for EN-DE, and 4,096 for the much larger EN-FR dataset. The learning rate follows that of Vaswani et al. (2017). Early stopping is applied based on development set BLEU. No ℓ2 regularization or gradient clipping is used. All models are trained using 16 TPU v3 accelerators, and tested using a single TPU v2 accelerator. Following standard practice, we average the 10 most recent checkpoints at test time. We evaluate the models using SacreBLEU (Post, 2018).16 A beam search with beam size 4 and length penalty 0.6 is used. Other hyperparameters are summarized in Table 8.
16 https://github.com/mjpost/sacrebleu" }, { "heading": "Hyperparams. WMT14 IWSLT14", "text": "" }, { "heading": "C MORE ANALYSIS RESULTS", "text": "" }, { "heading": "C.1 MORE RESULTS ON DECODING SPEED AND MEMORY OVERHEAD", "text": "Figure 3 compares RFA's unconditional decoding speed and memory against the softmax attention. The setting is the same as that in §5 except that here the models do not have an encoder. This experiment aims to simulate applications such as sampling from a language model." }, { "heading": "C.2 EFFECT OF RANDOM FEATURE SIZE", "text": "This section studies how the size of φ(·) affects the performance. Table 9 summarizes RFA-Gaussian's performance on the WMT14 EN-DE development set. The model and training are the same as those used in §4.2 except for the random feature size. Recall from §2.2 that the size of φ(·) is 2D for RFA-Gaussian. When the size of φ(·) is too small (32 or 64 for cross attention, 32 for causal attention), training does not converge. We observe accuracy improvements by using random features sufficiently large (256 for cross attention and 128 for causal attention); going beyond that, the benefit is marginal." }, { "heading": "C.3 TRAIN AND EVALUATE WITH DIFFERENT ATTENTION FUNCTIONS", "text": "RFA achieves comparable performance to its softmax counterpart. Does this imply that it learns a good approximation to the softmax attention? To answer this question, we consider:
(i) an RFA-Gaussian model initialized from a pretrained softmax-transformer; (ii) a softmax-transformer initialized from a pretrained RFA-Gaussian model.
If RFA's good performance can be attributed to learning a good approximation to softmax, both, without finetuning, should perform similarly to the pretrained models. However, this is not the case on IWSLT14 DE-EN.
Both pretrained models achieve more than 35.2 development set BLEU. In contrast, (i) and (ii) respectively get 2.3 and 1.1 BLEU without finetuning, hardly beating a randomly-initialized untrained model. This result aligns with the observation by Choromanski et al. (2021), and suggests that it is not the case that RFA performs well because it learns to imitate softmax attention's outputs." }, { "heading": "C.4 KNOWLEDGE TRANSFER FROM SOFTMAX ATTENTION TO RFA", "text": "We first supplement the observation in Appendix C.3 by finetuning (i) on the same pretraining data. Figure 4 plots the learning curves. It takes RFA roughly 1,500 steps to reach a training loss similar to that of the pretrained model. As a baseline, “RFA Reset” resets the multihead attention parameters (i.e., those for the query, key, value, and output projections) to randomly initialized ones. Its learning curve is similar to that of (i), suggesting that the pretrained multihead attention parameters are no more useful to RFA than randomly initialized ones. To further confirm this observation, “softmax Reset” resets the multihead attention parameters without changing the attention functions. It converges to the pretraining loss in less than 200 steps.
Takeaway. From the above results on IWSLT14, pretrained knowledge in a softmax transformer cannot be directly transferred to an RFA model. However, from Figure 4 and a much larger-scale experiment by Choromanski et al. (2021), we do observe that RFA can recover the pretraining loss, and the computation cost of finetuning is much less than that of training a model from scratch. This suggests some potential applications. For example, one might be able to initialize an RFA language model from a softmax transformer pretrained on large-scale data (e.g., GPT-3; Brown et al., 2020), and finetune it at a low cost. The outcome would be an RFA model that retains most of the pretraining knowledge but is much faster and more memory-friendly to sample from. We leave such exploration to future work." } ]
2021
RANDOM FEATURE ATTENTION
SP:3d2faa84203e50f95080e9d2de9660affe58e157
[ "This paper introduces a model, Directed Acyclic Graph Neural Network (DAGNN), which processes information according to the flow defined by partial order. DAGNN can be regarded as a special case of previous GNN models, but specific to directed acyclic graph structures. The authors prove that the model satisfies the properties desired by DAG-based graph representation learning.Then they study topology batching on the proposed model to maximize parallel concurrency in processing DAGs. A comprehensive empirical evaluation is conducted on datasets from three domains to verify its effectiveness." ]
Graph-structured data ubiquitously appears in science and engineering. Graph neural networks (GNNs) are designed to exploit the relational inductive bias exhibited in graphs; they have been shown to outperform other forms of neural networks in scenarios where structure information supplements node features. The most common GNN architecture aggregates information from neighborhoods based on message passing. Its generality has made it broadly applicable. In this paper, we focus on a special, yet widely used, type of graphs—DAGs—and inject a stronger inductive bias—partial ordering—into the neural network design. We propose the directed acyclic graph neural network, DAGNN, an architecture that processes information according to the flow defined by the partial order. DAGNN can be considered a framework that entails earlier works as special cases (e.g., models for trees and models updating node representations recurrently), but we identify several crucial components that prior architectures lack. We perform comprehensive experiments, including ablation studies, on representative DAG datasets (i.e., source code, neural architectures, and probabilistic graphical models) and demonstrate the superiority of DAGNN over simpler DAG architectures as well as general graph architectures.
[ { "affiliations": [], "name": "Veronika Thost" }, { "affiliations": [], "name": "Jie Chen" } ]
[ { "authors": [ "Miltiadis Allamanis", "Earl T. Barr", "Premkumar T. Devanbu", "Charles A. Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M. Dai", "Rafal Józefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Yoav Goldberg and Stefan Riezler (eds.), Proc. of Conference on Computational Natural Language Learning, CoNLL,", "year": 2016 }, { "authors": [ "Maxwell Crouse", "Ibrahim Abdelaziz", "Cristina Cornelio", "Veronika Thost", "Lingfei Wu", "Kenneth Forbus", "Achille Fokoue" ], "title": "Improving graph neural network representations of logical formulae with subgraph pooling, 2019", "venue": null, "year": 2019 }, { "authors": [ "Javid Ebrahimi", "Dejing Dou" ], "title": "Chain based RNN for relation classification", "venue": "In Proc. of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT,", "year": 2015 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In Proc. of ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "Proc. of International Conference on Machine Learning, ICML,", "year": 2017 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on", "venue": "graphs. CoRR,", "year": 2020 }, { "authors": [ "Eliyahu Kiperwasser", "Yoav Goldberg" ], "title": "Easy-first dependency parsing with hierarchical tree lstms", "venue": "Trans. Assoc. Comput. Linguistics,", "year": 2016 }, { "authors": [ "Thomas Kipf", "Max Welling" ], "title": "Semi-supervised learning with graph convolutional neural networks", "venue": "In Proc. of International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "S.L. Lauritzen", "D.J. Spiegelhalter" ], "title": "Local computations with probabilities on graphical structures and their application to expert systems", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1988 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In Proc. of International Conference on Machine Learning, ICML,", "year": 2019 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard S. Zemel" ], "title": "Gated graph sequence neural networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), Proc. of International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter W. Battaglia" ], "title": "Learning deep generative models of graphs", "venue": null, "year": 2018 }, { "authors": [ "Tengfei Ma", "Patrick Ferber", "Siyu Huo", "Jie Chen", "Michael Katz" ], "title": "Online planner selection with graph neural networks and adaptive scheduling", "venue": "In Proc. 
of Thirty-Fourth Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Hieu Pham", "Melody Y. Guan", "Barret Zoph", "Quoc V. Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "Proc. of International Conference on Machine Learning, ICML,", "year": 2018 }, { "authors": [ "Ekagra Ranjan", "Soumya Sanyal", "Partha P. Talukdar" ], "title": "ASAP: adaptive structure aware pooling for learning hierarchical graph representations", "venue": "In Proc. of The Thirty-Fourth Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W. Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "CoRR, abs/2002.09405,", "year": 2020 }, { "authors": [ "Adam Santoro", "David Raposo", "David G.T. Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter W. Battaglia", "Tim Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": null, "year": 2017 }, { "authors": [ "Marco Scutari" ], "title": "Learning bayesian networks with the bnlearn r package", "venue": "Journal of Statistical Software, Articles,", "year": 2010 }, { "authors": [ "Bing Shuai", "Zhen Zuo", "Bing Wang", "Gang Wang" ], "title": "Dag-recurrent neural networks for scene labeling", "venue": "In Proc. of Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Edward Snelson", "Zoubin Ghahramani" ], "title": "Sparse gaussian processes using pseudo-inputs", "venue": "In Proc. Advances in Neural Information Processing,", "year": 2005 }, { "authors": [ "Richard Socher", "Cliff Chiung-Yu Lin", "Andrew Y. Ng", "Christopher D. Manning" ], "title": "Parsing natural scenes and natural language with recursive neural networks", "venue": "In Proc. of International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Richard Socher", "Brody Huval", "Christopher D. Manning", "Andrew Y. Ng" ], "title": "Semantic compositionality through recursive matrix-vector spaces", "venue": "In Proc. of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning,", "year": 2012 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Y. Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proc. of Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D. Manning" ], "title": "Improved semantic representations from tree-structured long short-term memory networks", "venue": "In Proc. of Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing,", "year": 2015 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph Attention Networks", "venue": "Proc. of International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In Proc. 
of International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Jingling Li", "Mozhi Zhang", "Simon S. Du", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "What can neural networks reason about", "venue": "In Proc. of International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L. Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "Proc. of International Conference on Machine Learning, ICML,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Shali Jiang", "Zhicheng Cui", "Roman Garnett", "Yixin Chen" ], "title": "D-VAE: A variational autoencoder for directed acyclic graphs", "venue": "In Proc. of Annual Conference on Neural Information Processing Systems, NeurIPS,", "year": 2019 }, { "authors": [ "Xingxing Zhang", "Liang Lu", "Mirella Lapata" ], "title": "Top-down tree long short-term memory networks", "venue": "Proc. of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT,", "year": 2016 }, { "authors": [ "Xiao-Dan Zhu", "Parinaz Sobhani", "Hongyu Guo" ], "title": "Long short-term memory over recursive structures", "venue": "In Proc. of International Conference on Machine Learning, ICML,", "year": 2015 }, { "authors": [ "Marinka Zitnik", "Monica Agrawal", "Jure Leskovec" ], "title": "Modeling polypharmacy side effects with graph convolutional networks", "venue": null, "year": 2018 }, { "authors": [ "Hu" ], "title": "sub-tokens forming the method name, also known as “code summarization”. The task is considered a proxy measure of how well a model captures the code semantics (Allamanis et al., 2018). We additionally consider the task of predicting the length of the longest path in the graph. We treat it as a 275-way classification because the maximum length is 275. The distribution of the lengths/classes is shown in Appendix E", "venue": null, "year": 2018 }, { "authors": [ "Zhang" ], "title": "2019) for evaluating their model D-VAE. To compare with the results reported", "venue": null, "year": 2019 }, { "authors": [ "sort. See Zhang" ], "title": "2019, Appendix I) for further details", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph-structured data is ubiquitous across various disciplines (Gilmer et al., 2017; Zitnik et al., 2018; Sanchez-Gonzalez et al., 2020). Graph neural networks (GNNs) use both the graph structure and node features to produce a vectorial representation, which can be used for classification, regression (Hu et al., 2020), and graph decoding (Li et al., 2018; Zhang et al., 2019). Most popular GNNs update node representations through iterative message passing between neighboring nodes, followed by pooling (either flat or hierarchical (Lee et al., 2019; Ranjan et al., 2020)), to produce a graph representation (Li et al., 2016; Kipf & Welling, 2017; Gilmer et al., 2017; Veličković et al., 2018; Xu et al., 2019). The relational inductive bias (Santoro et al., 2017; Battaglia et al., 2018; Xu et al., 2020)—neighborhood aggregation—empowers GNNs to outperform graph-agnostic neural networks. To facilitate subsequent discussions, we formalize a message-passing neural network (MPNN) architecture, which computes representations h`v for all nodes v in a graph G in every layer ` and a final graph representation hG , as (Gilmer et al., 2017):\nh`v = COMBINE ` ( h`−1v , AGGREGATE ` ( {h`−1u | u ∈ N (v)} )) , ` = 1, . . . , L, (1)\nhG = READOUT ( {hLv , v ∈ V} ) , (2)\nwhere h0v is the input feature of v, N (v) denotes a neighborhood of node v (sometimes including v itself), V denotes the node set of G, L is the number of layers, and AGGREGATE`, COMBINE`, and READOUT are parameterized neural networks. For notational simplicity, we omit edge attributes; but they can be straightforwardly incorporated into the framework (1)–(2).\nDirected acyclic graphs (DAGs) are a special type of graphs, yet broadly seen across domains. Examples include parsing results of source code (Allamanis et al., 2018), logical formulas (Crouse et al., 2019), and natural language sentences, as well as probabilistic graphical models (Zhang et al., 2019), neural architectures (Zhang et al., 2019), and automated planning problems (Ma et al., 2020). ∗To whom correspondence should be addressed.\nA directed graph is a DAG if and only if the edges define a partial ordering over the nodes. The partial order is an additionally strong inductive bias one naturally desires to incorporate into the neural network. For example, a neural architecture seen as a DAG defines the acyclic dependency of computation, an important piece of information when comparing architectures and predicting their performance. Hence, this information should be incorporated into the architecture representation for higher predictive power.\nIn this work, we propose DAGNNs—directed acyclic graph neural networks—that produce a representation for a DAG driven by the partial order. In particular, the order allows for updating node representations based on those of all their predecessors sequentially, such that nodes without successors digest the information of the entire graph. Such a processing manner substantially differs from that of MPNNs where the information landed on a node is limited by a multi-hop local neighborhood and thus restricted by the depth L of the network.\nModulo details to be elaborated in sections that follow, the DAGNN framework reads h`v = F ` ( h`−1v , G ` ( {h`u | u ∈ P(v)}, h`−1v )) , ` = 1, . . . , L, (3)\nhG = R ( {h`v, ` = 0, 1, . . . 
, L, v ∈ T } ) , (4)\nwhere P(v) denotes the set of direct predecessors of v, T denotes the set of nodes without (direct) successors, and G`, F `, and R are parameterized neural networks that play similar roles to AGGREGATE`, COMBINE`, and READOUT, respectively.\nA notable difference between (3)–(4) and (1)–(2) is that the superscript ` − 1 inside the underlined part of (1) is advanced to ` in the counterpart in (3). In other words, MPNN aggregates neighborhood information from the past layer, whereas DAGNN uses the information in the current layer. An advantage is that DAGNN always uses more recent information to update node representations.\nEquations (3)–(4) outline several other subtle but important differences between DAGNN and MPNNs, such as the use of only direct predecessors for aggregation and the pooling on only nodes without successors. All these differences are unique to the special structure a DAG enjoys. Exploiting this structure properly should yield a more favorable vectorial representation of the graph. In Section 2, we will elaborate the specifics of (3)–(4). The technical details include (i) attention for node aggregation, (ii) multiple layers for expressivity, and (iii) topological batching for efficient implementation, all of which yield an instantiation of the DAGNN framework that is state of the art.\nFor theoretical contributions, we study topological batching and justify that this technique yields maximal parallel concurrency in processing DAGs. Furthermore, we show that the mapping defined by DAGNN is invariant to node permutation and injective under mild assumptions. This result reassures that the graph representation extracted by DAGNN is discriminative.\nBecause DAGs appear in many different fields, neural architectures for DAGs (including, notably, D-VAE (Zhang et al., 2019)) or special cases (e.g., trees) are scattered around the literature over the years. Generally, they are less explored compared to MPNNs; and some are rather applicationspecific. In Section 3, we unify several representative architectures as special cases of the framework (3)–(4). We compare the proposed architecture to them and point out the differences that lead to its superior performance.\nIn Section 4, we detail our comprehensive, empirical evaluation on datasets from three domains: (i) source code parsed to DAGs (Hu et al., 2020); (ii) neural architecture search (Zhang et al., 2019), where each architecture is a DAG; and (iii) score-based Bayesian network learning (Zhang et al., 2019). We show that DAGNN outperforms many representative DAG architectures and MPNNs.\nOverall, this work contributes a specialized graph neural network, a theoretical study of its properties, an analysis of a topological batching technique for enhancing parallel concurrency, a framework interpretation that encompasses prior DAG architectures, and comprehensive evaluations. Supported code is available at https://github.com/vthost/DAGNN." }, { "heading": "2 THE DAGNN MODEL", "text": "A DAG is a directed graph without cycles. Denote by G = (V, E) a DAG, where V and E ⊂ V × V are the node set and the edge set, respectively. A (strong) partial order over a set S is a binary\nrelation ≤ that is transitive and asymmetric. Some authors use reflexivity versus irreflexivity to distinguish weak partial order over strong partial order. To unify concepts, we forbid self-loops (which otherwise are considered cycles) in the DAG and mean strong partial order throughout. 
A set S with partial order ≤ is called a poset and denoted by a tuple (S,≤). A DAG (V, E) and a poset (S,≤) are closely related. For any DAG, one can define a unique partial order≤ on the node set V , such that for all pairs of elements u, v ∈ V , u ≤ v if and only if there is a directed path from u to v. On the other hand, for any poset (S,≤), there exists (possibly more than) one DAG that uses S as the node set and that admits a directed path from u to v whenever u ≤ v. In a DAG, all nodes without (direct) predecessors are called sources and we collect them in the set S. Similarly, all nodes without (direct) successors are called targets and we collect them in the set T . Additionally, we let X = {h0v, v ∈ V} be the set of input node features." }, { "heading": "2.1 MODEL", "text": "The main idea of DAGNN is to process nodes according to the partial order defined by the DAG. Using the language of MPNN, at every node v, we “aggregate” information from its neighbors and “combine” this aggregated information (the “message”) with v’s information to update the representation of v. The main differences to MPNN are that (i) we use the current-layer, rather than the past-layer, information to compute the current-layer representation of v and that (ii) we aggregate from the direct-predecessor set P(v) only, rather than the entire (or randomly sampled) neighborhood N (v). They lead to a straightforward difference in the final “readout” also. In the following, we propose an instantiation of Equations (3)–(4). See Figure 1 for an illustration of the architecture.\nOne layer. We use the attention mechanism to instantiate the aggregate operator G`. For a node v at the `-th layer, the output message m`v computed by G\n` is a weighted combination of h`u for all nodes u ∈ P(v) at the same layer `:\nm`v︸︷︷︸ message\n:= G` ( {h`u | u ∈ P(v)}, h`−1v ) = ∑ u∈P(v) α`vu ( h`−1v︸︷︷︸ query , h`u︸︷︷︸ key ) h`u︸︷︷︸ value . (5)\nThe weighting coefficients α`vu follow the query-key design in usual attention mechanisms, whereby the representation of v in the past layer, h`−1v , serves as the query. Specifically, we define\nα`vu ( h`−1v , h ` u ) = softmax\nu∈P(v)\n( w`1 > h`−1v + w ` 2 > h`u ) , (6)\nwhere w`1 and w ` 2 are model parameters. We use the additive form, as opposed to the usual dotproduct form,1 since it involves fewer parameters. An additional advantage is that it is straightforward to incorporate edge attributes into the model, as will be discussed soon.\nThe combine operator F ` combines the message m`v with the previous representation of v, h `−1 v , and produces an updated representation h`v . We employ a recurrent architecture, which is usually used for processing data in sequential order but similarly suits processing in partial order:\nh`v = F ` ( h`−1v , m ` v ) = GRU` ( h`−1v ,︸ ︷︷ ︸\ninput message︷︸︸︷ m`v︸︷︷︸ state ) , (7)\nwhere h`−1v , m ` v , and h ` v are treated as the input, past state, and updated state/output of a GRU, respectively. This design differs from most MPNNs that use simple summation or concatenation to combine the representations. It further differs from GG-NN (Li et al., 2016) (which also employs a GRU), wherein the roles of the two arguments are switched. In GG-NN, the message is treated as the input and the node representation is treated as the state. In contrast, we start from node features and naturally use them as inputs. 
The message tracks the processed part of the graph and serves better the role of a hidden state, being recurrently updated.
By convention, we define G`(∅, ·) = 0 for the aggregator, so that for nodes with an empty direct-predecessor set, the message (or, equivalently, the initial state of the GRU) is zero.
Bidirectional processing. Just like in sequence models, where a sequence may be processed in either the natural order or the reversed order, we optionally invert the directions of the edges in G to create a reverse DAG G̃. We will use the tilde notation for all terms related to the reverse DAG. For example, the representation of node v in G̃ at the `-th layer is denoted by h̃`v.
Readout. After L layers of (bidirectional) processing, we use the computed node representations to produce the graph representation. We follow a common practice—concatenate the representations across layers, perform a max-pooling across nodes, and apply a fully-connected layer to produce the output. Different from the usual practice, however, we pool across only the target nodes and concatenate the pooling results from the two directions. Recall that the target nodes contain information of the entire graph following the partial order. Mathematically, the readout R produces
hG = FC( Max-Pool_{v∈T}( ‖_{`=0}^{L} h`v ) ‖ Max-Pool_{u∈S}( ‖_{`=0}^{L} h̃`u ) ). (8)
Note that the target set T̃ of G̃ is the same as the source set S of G. If the processing is unidirectional, the right pooling in (8) is dropped.
Edge attributes. The instantiation of the framework so far has not considered edge attributes. It is in fact simple to incorporate them. Let τ(u, v) be the type of an edge (u, v) and let yτ be a representation of edges of type τ. We insert this information during the message calculation in the aggregator. Specifically, we replace the attention weights α`vu defined in (6) by
α`vu(h`−1v, h`u) = softmax_{u∈P(v)}( w`1ᵀ h`−1v + w`2ᵀ h`u + w`3ᵀ yτ(u,v) ). (9)
In practice, we experiment with slightly fewer parameters by setting w`3 = w`1 and find that the model performs equally well. The edge representations yτ are trainable embeddings of the model. Alternatively, if input edge features are provided, yτ(u,v) can be replaced by a neural network-transformed embedding for the edge (u, v).
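To make Eqs. 5–7 concrete, here is a minimal PyTorch sketch (our illustration, independent of the released code; the class and variable names are ours) of one layer's update for a single node from its direct predecessors, with the additive attention of Eq. 6 and the GRU combine of Eq. 7.

import torch
import torch.nn.functional as F

class DAGNNNodeUpdate(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w1 = torch.nn.Linear(dim, 1, bias=False)  # scores the query h_v^{l-1}
        self.w2 = torch.nn.Linear(dim, 1, bias=False)  # scores the keys h_u^l
        self.gru = torch.nn.GRUCell(dim, dim)          # combine operator F^l

    def forward(self, h_v_prev, h_preds):
        # h_v_prev: (dim,) past-layer state of v; h_preds: (num_preds, dim)
        # current-layer states of v's direct predecessors P(v).
        if h_preds.numel() == 0:
            m_v = torch.zeros_like(h_v_prev)           # G^l(empty, .) = 0
        else:
            scores = self.w1(h_v_prev) + self.w2(h_preds).squeeze(-1)  # Eq. 6
            alpha = F.softmax(scores, dim=0)
            m_v = alpha @ h_preds                      # Eq. 5: the message
        # Eq. 7: node feature as the GRU input, message as the recurrent state.
        return self.gru(h_v_prev.unsqueeze(0), m_v.unsqueeze(0)).squeeze(0)

update = DAGNNNodeUpdate(dim=16)
h_v = update(torch.randn(16), torch.randn(3, 16))      # a node with 3 predecessors
print(h_v.shape)                                       # torch.Size([16])

Applying this update to the nodes in the order given by topological batching (§2.2) reproduces one DAGNN layer.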
" }, { "heading": "2.2 TOPOLOGICAL BATCHING", "text": "A key difference to MPNN is that DAGNN processes nodes sequentially, owing to the nature of the aggregator G`, obeying the partial order. Thus, for computational efficiency, it is important to maximally exploit concurrency so as to better leverage parallel computing resources (e.g., GPUs). One observation is that nodes without dependency may be grouped together and processed concurrently, if their predecessors have all been processed. See Figure 2 for an illustration.
To materialize this idea, we consider topological batching, which partitions the node set V into ordered batches {Bi}i≥0 so that (i) the Bi's are disjoint and their union is V; (ii) for every pair of nodes u, v ∈ Bi for some i, there is no directed path from u to v or from v to u; (iii) for every i > 0, there exists one node in Bi that is the tail of an edge whose head is in Bi−1. The concept was proposed by Crouse et al. (2019);2 in what follows, we derive several properties that legitimize its use in our setting. First, topological batching produces the minimum number of sequential batches such that all nodes in each batch can be processed in parallel.
Theorem 1. The number of batches from a partitioning that satisfies (i)–(iii) described in the preceding paragraph is equal to the number of nodes in the longest path of the DAG. As a consequence, this partitioning produces the minimum number of ordered batches such that for all u ≤ v, if u ∈ Bi and v ∈ Bj, then i < j. Note that the partial order ≤ is defined at the beginning of Section 2.
The partitioning procedure may be as follows. All nodes without direct predecessors, S, form the initial batch. Iteratively, remove the batch just formed from the graph, as well as the edges emitting from these nodes. The nodes without direct predecessors in the remaining graph form the next batch.
Remark 1. To satisfy Properties (i)–(iii), it is not necessary that B0 = S; but the above procedure achieves so. Applying this procedure to the reverse DAG G̃, we obtain B̃0 = T. Note that the last batch for G may not be the same as T; and the last batch for G̃ may not be the same as S either.
Remark 2. Topological batching can be straightforwardly extended to multiple graphs for better parallel concurrency: one merges the Bi for the same i across graphs into a single batch. This is equivalent to treating the multiple DAGs as a single (albeit disconnected) DAG and applying topological batching to it.
2 See also an earlier implementation in https://github.com/unbounce/pytorch-tree-lstm" }, { "heading": "2.3 PROPERTIES", "text": "In the following, we summarize properties of the DAGNN model; they are consistent with the corresponding results for MPNNs. To formalize these results, we let M : V × E × X → hG denote the mapping defined by Equations (3)–(4). For notational consistency, we omit bidirectional processing, and thus ignore the tilde term in (8). The first results state that DAGNN produces the same graph representation invariant to node permutation.
Theorem 2. The graph representation hG is invariant to node indexing if all G`, F`, and R are so.
Corollary 3. The functions G`, F`, and R defined in (5)–(8) are invariant to node indexing. Hence, the resulting graph representation hG is, too.
The next result states that the framework will not produce the same graph representation for different graphs (i.e., non-isomorphic graphs), under a common condition.
Theorem 4. The mapping M is injective if G`, F`, and R, considered as multiset functions, are so.
The condition required by Theorem 4 is not restrictive. There exist (infinitely many) injective multiset functions G`, F`, and R, although the ones instantiated by (5)–(8) are not necessarily injective. The modification to injection can be done by using the ε-trick applied in GIN (Xu et al., 2019), but, similar to the referenced work, the ε that ensures injection is unknown. In practice, it is either set to zero or treated as a tunable hyperparameter."
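To make the partitioning procedure of §2.2 concrete, the following is a small self-contained Python sketch (our illustration; names are ours). It repeatedly peels off the nodes whose direct predecessors have all been processed, so by Theorem 1 the number of batches equals the number of nodes on the longest path.

def topological_batches(num_nodes, edges):
    # Partition DAG nodes into ordered batches B_0, B_1, ... satisfying (i)-(iii).
    # edges: iterable of (u, v) pairs denoting a directed edge u -> v.
    indegree = [0] * num_nodes
    successors = [[] for _ in range(num_nodes)]
    for u, v in edges:
        indegree[v] += 1
        successors[u].append(v)

    batches = []
    frontier = [v for v in range(num_nodes) if indegree[v] == 0]  # B_0 = sources S
    while frontier:
        batches.append(frontier)
        nxt = []
        for u in frontier:                 # remove the batch and its outgoing edges
            for v in successors[u]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    nxt.append(v)
        frontier = nxt
    return batches

# A small example DAG: 0 and 1 are sources; 4 is the only target.
print(topological_batches(5, [(0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]))
# [[0, 1], [2, 3], [4]] -- the longest path 0 -> 2 -> 4 has 3 nodes, and there are 3 batches.

Running the same procedure on the reverse DAG, or on several DAGs merged per Remark 2, requires no change to the code beyond the edge list.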
}, { "heading": "3 COMPARISON TO RELATED MODELS", "text": "In this section, we compare to the most closely related architectures for DAGs, including trees. Natural language processing is a major source of these architectures, since semantic parsing forms a rooted tree or a DAG. Recently, D-VAE (Zhang et al., 2019) has been suggested as a generalpurpose autoencoder for DAGs. Its encoder architecture is the most similar one to ours, but we highlight notable differences that support the improvement DAGNN gains over the D-VAE encoder. All the models we compare with may be considered as restricted cases of the framework (3)–(4).\nRooted trees do usually not come with directed edges, because either direction (top-down or bottomup) is sensible. Hence, we use the terminology “parent” and “child” instead. Unified under our framework, recursive neural networks tailored to trees (Socher et al., 2011; 2012; 2013; Ebrahimi & Dou, 2015) are applied to a fixed number of children when the aggregator acts on a concatenation of the child representations. Moreover, they assume that internal nodes do not come with input representations and hence the combine operator misses the first argument.\nTree-LSTM (Tai et al., 2015; Zhu et al., 2015; Zhang et al., 2016; Kiperwasser & Goldberg, 2016) and DAG-RNN (Shuai et al., 2016), like DAGNN, employ a recurrent architecture as the combine operator, but the message (hidden state) therein is a naive sum or element-wise product of child representations. In a variant of Tree-LSTM, the naive sum is replaced by a sum of child representations multiplied by separate weight matrices. A limitation of this variant is that the number of children must be the same and the children must be ordered. Another limitation is that both architectures assume that there is a single terminal node (in which case a readout is not invoked).\nThe most similar architecture to DAGNN is the encoder of D-VAE. There are two notable differences. First, D-VAE uses the gated sum as aggregator but we use attention which leverages the information of not only the summands (h`u) but also that of the node under consideration (h `−1 v ). This additional source of information enables attention driven by external factors and improves over self attention. Second, similar to all the aforementioned models, D-VAE does not come with a layer notion. On the contrary, we use multiple layers, which are more natural and powerful in the light of findings about general GNNs. Our empirical results described in the following section confirm so." }, { "heading": "4 EVALUATION", "text": "In this section, we demonstrate the effectiveness of DAGNN on multiple datasets and tasks over a comprehensive list of baselines. We compare timing and show that the training cost of DAGNN is comparable with that of other DAG architectures. We also conduct ablation studies to verify the importance of its components, which prior DAG architectures lack." }, { "heading": "4.1 DATASETS, TASKS, METRICS, AND BASELINES", "text": "The OGBG-CODE dataset (Hu et al., 2020) contains 452,741 Python functions parsed into DAGs. We consider the TOK task, predicting the tokens that form the function name; it is included in the Open Graph Benchmark (OGB). Additionally, we introduce the LP task, predicting the length of the longest path of the DAG. The metric for TOK is the F1 score and that for LP is accuracy. 
Because of the vast size, we also create a 15% training subset, OGBG-CODE-15, for similar experiments.\nFor this dataset, we consider three basic baselines and several GNN models for comparison. For the TOK task, the Node2Token baseline predicts tokens from the attributes of the second graph node, while the TargetInGraph baseline predicts tokens that appear in both the ground truth and in the attributes of some graph node. These baselines exploit the fact that the tokens form node attributes and that the second node’s attribute contains the function name if it is part of the vocabulary. For the LP task, the MajorityInValid baseline constantly predicts the majority length seen from the validation set. The considered GNN models include four from OGB: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019), GCN-VN, GIN-VN (where -VN means adding a virtual node connecting all existing nodes); two using attention/gated-sum mechanisms: GAT (Veličković et al., 2018), GG-NN\n(Li et al., 2016); two hierarchical pooling approaches using attention: SAGPool (Lee et al., 2019), ASAP (Ranjan et al., 2020); and the D-VAE encoder.\nThe NA dataset (Zhang et al., 2019) contains 19,020 neural architectures generated by the ENAS software. The task is to predict the architecture performance on CIFAR-10 under the weight-sharing scheme. Since it is a regression task, the metrics are RMSE and Pearson’s r. To gauge performance with Zhang et al. (2019), we similarly train (unsupervised) autoencoders and use sparse Gaussian process regression on the latent representation to predict the architecture performance. DAGNN serves as the encoder and we pair it with an adaptation of the D-VAE decoder (see Appendix D). We compare to D-VAE and all the autoencoders compared therein: S-VAE (Bowman et al., 2016), GraphRNN (You et al., 2018), GCN (Zhang et al., 2019), and DeepGMG (Li et al., 2018).\nThe BN dataset (Zhang et al., 2019) contains 200,000 Bayesian networks generated by using the R package bnlearn. The task is to predict the BIC score that measures how well a BN fits the Asia dataset (Lauritzen & Spiegelhalter, 1988). We use the same metrics and baselines as for NA." }, { "heading": "4.2 RESULTS AND DISCUSSION", "text": "Prediction performance, token prediction (TOK), Table 1. The general trend is the same across the full dataset and the 15% subset. DAGNN performs the best. GAT achieves the second best result, surprisingly outperforming D-VAE (the third best). Hence, using attention as aggregator during message passing benefits this task. On the 15% subset, only DAGNN, GAT, and D-VAE match or surpass the TargetInGraph baseline. Note that not all ground-truth tokens are in the vocabulary and thus the best achievable F1 is 90.99. Even so, all methods are far from reaching this ceiling performance. Furthermore, although most of the MPNN models (middle section of the table) use as many as five layers for message passing, the generally good performance of DAGNN and D-VAE indicates that DAG architectures not restricted by the network depth benefit from the inductive bias.\nPrediction performance, length of longest path (LP), Table 1. This analytical task interestingly reveals that many of the findings for the TOK task do not directly carry over. DAGNN still performs the best, but the second place is achieved by D-VAE while GAT lags far behind. The unsatisfactory performance of GAT indicates that attention alone is insufficient for DAG representation learning. 
The hierarchical pooling methods also perform disappointingly, showing that ignoring nodes may modify important properties of the graph (in this case, the longest path). It is worth noting that DAGNN and D-VAE achieve nearly perfect accuracy. This result corroborates the theory of Xu et al. (2020), who state that when the inductive bias is aligned with the reasoning algorithm (in this case, path tracing), the model learns to reason more easily and achieves better sample efficiency.\nPrediction performance, scoring the DAG, Table 2. On NA and BN, DAGNN also outperforms D-VAE, which in turn outperforms the other four baselines (among them, DeepGMG works the best on NA and S-VAE works the best on BN, consistent with the findings of Zhang et al. (2019).) While D-VAE demonstrates the benefit of incorporating the DAG bias, DAGNN proves the superiority of its architectural components, as will be further verified in the subsequent ablation study.\nTime cost, Figure 3. The added expressivity of DAGNN comes with a tradeoff: the sequential processing of the topological batches requires more time than does the concurrent processing of all graph nodes, as in MPNNs. Figure 3 shows that such a trade-off is innate to DAG architectures, including the D-VAE encoder. Moreover, the figure shows that, when used as a component of a larger architecture (autoencoder), the overhead of DAGNN may not be essential. For example, in this particular experiment, DeepGMG (paired with the S-VAE encoder) takes an order of magnitude more time than does DAGNN (paired with the D-VAE decoder). Most importantly, not reflected in the figure is that DAGNN learns better and faster at larger learning rates, leading to fewer learning epochs. For example, DAGNN reaches the best performance at epoch 45, while D-VAE at around 200.\nAblation study, Table 3. While the D-VAE encoder performs competitively owing similarly to the incorporation of the DAG bias, what distinguishes our proposal are several architecture components that gain further performance improvement. In Table 3, we summarize results under the following cases: replacing attention in the aggregator by gated sum; reducing the multiple layers to one; replacing the GRUs by fully connected layers; modifying the readout by pooling over all nodes; and removing the edge attributes. One observes that replacing attention generally leads to the highest degradation in performance, while modifying other components yields losses too. There are two exceptions. One occurs on LP-15, where gated-sum aggregation surprisingly outperforms attention by a tight margin, considering the standard deviation. The other occurs on the modification of\nthe readout for the BN dataset. In this case, a Bayesian network factorizes the joint distribution of all variables (nodes) it includes. Even though the DAG structure characterizes the conditional independence of the variables, they play equal roles to the BIC score and thus it is possible that emphasis of the target nodes adversely affects the predictive performance. In this case, pooling over all nodes appears to correct the overemphasis.\nSensitivity analysis, Table 4 and Figure 4. It is well known that MPNNs often achieve best performance with a small number of layers, a curious behavior distinct from other neural networks. It is important to see if such a behavior extends to DAGNN. In Table 4, we list the results for up to four layers. One observes that indeed the best performance occurs at either two or three layers. 
In other words, one layer is insufficient (as already demonstrated in the ablation study) and more than three layers offer no advantage. We further extend the experimentation on TOK-15 with additional layers and plot the results in Figure 4. The trend corroborates that the most significant improvement occurs when going beyond a single layer. It is also interesting to see that a single layer yields the highest variance subject to randomization.
Structure learning, Figure 5. As an application of DAGNN, we extend the use of the BN dataset to learn the Bayesian network for the Asia data. In particular, we take the Bayesian optimization approach and optimize the BIC score over the latent space of DAGs. We use the graphs in BN as pivots and encode every graph by using DAGNN. The optimization yields a DAG with BIC score −11107.29 (see Figure 5). This DAG is almost the same as the ground truth (see Figure 2 of Lauritzen & Spiegelhalter (1988)), except that it does not include the edge from “visit to Asia?” to “Tuberculosis?”. It is interesting to note that the identified DAG has a higher BIC score than that of the ground truth, −11109.74. Furthermore, the BIC score is also much higher than that found by using the D-VAE encoder, −11125.75 (Zhang et al., 2019). This encouraging result corroborates the superior encoding quality of DAGNN and its effective use in downstream tasks." }, { "heading": "5 CONCLUSIONS", "text": "We have developed DAGNN, a GNN model for a special yet widely used class of graphs, namely DAGs. It incorporates the partial ordering entailed by DAGs as a strong inductive bias towards representation learning. With the blessing of this inductive bias, we demonstrate that DAGNNs outperform MPNNs on several representative datasets and tasks. Through ablation studies, we also show that the DAGNN model is well designed, with several components serving as crucial contributors to the performance gain over other models that also incorporate the DAG bias, notably D-VAE. Furthermore, we theoretically study a batching technique that yields maximal parallel concurrency in processing DAGs and prove that DAGNN is permutation invariant and injective." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is supported in part by DOE Award DE-OE0000910. Most experiments were conducted on the Satori cluster (satori.mit.edu)." }, { "heading": "A PROOFS", "text": "Proof of Theorem 1. Let $(v_1, v_2, \ldots, v_d)$ be a longest path of the DAG. The number of batches must be at least $d$, because otherwise there exists a batch that contains at least two nodes on this path, violating Property (ii). On the other hand, given the partitioning, according to Property (iii), one may trace a directed path, one node from each batch, starting from the last one. The longest path must be at least that long. In other words, the number of batches must be at most the number of nodes on the longest path. Hence, these two numbers are equal. The consequence stated by the theorem straightforwardly follows.
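To make the batching in Theorem 1 concrete, the following is a minimal Python sketch of the partitioning; the function name and the edge-list graph encoding are our own illustrative choices, not the DAGNN codebase.

```python
# Kahn-style layering of a DAG into topological batches B_0, B_1, ...
from collections import defaultdict

def topological_batches(num_nodes, edges):
    """Partition nodes so that each node enters the first batch in which
    all of its direct predecessors have already appeared."""
    indegree = [0] * num_nodes
    successors = defaultdict(list)
    for u, v in edges:  # directed edge u -> v
        indegree[v] += 1
        successors[u].append(v)

    current = [v for v in range(num_nodes) if indegree[v] == 0]  # sources = B_0
    batches = []
    while current:
        batches.append(current)
        nxt = []
        for u in current:
            for v in successors[u]:
                indegree[v] -= 1
                if indegree[v] == 0:  # all predecessors have been processed
                    nxt.append(v)
        current = nxt
    return batches

# Example: path 0 -> 1 -> 2 plus shortcut 0 -> 2 yields three batches,
# matching the longest-path length (3 nodes), as Theorem 1 states.
print(topological_batches(3, [(0, 1), (1, 2), (0, 2)]))  # [[0], [1], [2]]
```

This layering is consistent with the properties invoked in the proof: each node appears in exactly one batch, no batch contains two nodes on a directed path, and every node in a non-initial batch has a direct predecessor in the preceding batch.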
Proof of Theorem 2. We first show that $h_v^\ell$ is invariant to the indexing of $v$ by double induction on $\ell$ and $v$. The base case is $\ell = 1$ and $v \in \mathcal{B}_0$. In this case, $m_v^1 = G^1(\emptyset, h_v^0) = 0$ is invariant to the indexing of $v$. Then, $h_v^1 = F^1(h_v^0, m_v^1)$ is, too. In the induction step, suppose $h_{v'}^{\ell'}$ is invariant to the indexing of $v'$ for all $\ell' < \ell$ and all $v'$, as well as for $\ell' = \ell$ and $v' \in \mathcal{B}_0 \cup \cdots \cup \mathcal{B}_{i-1}$. Then, for $\ell' = \ell$ and $v \in \mathcal{B}_i$, $m_v^\ell = G^\ell(\{h_u^\ell \mid u \in \mathcal{P}(v)\}, h_v^{\ell-1})$ and $h_v^\ell = F^\ell(h_v^{\ell-1}, m_v^\ell)$ are both invariant to the indexing of $v$. Thus, by induction, for $\ell' = \ell$ and all $v$, $h_v^{\ell'}$ is invariant to the indexing of $v$. Then, by an outer induction, for all $\ell$ and all $v$, $h_v^\ell$ is invariant to the indexing of $v$.
Therefore, $h_{\mathcal{G}} = R(\{h_v^\ell, \ \ell = 0, 1, \ldots, L, \ v \in \mathcal{T}\})$ is invariant to the indexing of the nodes in $\mathcal{T}$ and thus of the entire node set.
Proof of Corollary 3. The function $G^\ell$ is invariant to node indexing because it is a weighted sum of the elements in its first argument, $\{h_u^\ell\}$, whereas the weights are parameterized by using the same parameter $w_2^\ell$ for these elements.
The function $F^\ell$ is invariant to node indexing because its two arguments are clearly distinguished.
The function $R$ is invariant to node indexing because the FC layer applies to the pooling result of $h_v^\ell$ for a fixed set of $v$.
Proof of Theorem 4. Suppose two graphs $\mathcal{G}$ and $\mathcal{G}'$ have the same representation $h_{\mathcal{G}} = h_{\mathcal{G}'}$. Then, from the function $R$, they must have the same target set $\mathcal{T}$ and the same node representations $h_v^\ell$ for all nodes $v \in \mathcal{T}$ and all layers $\ell$. In particular, for the last layer $\ell = L$, from the functions $F^L$ and $G^L$, each of these nodes $v$ from the two graphs must have the same set of direct predecessors $\mathcal{P}(v)$, each element $u$ of which has the same representation $h_u^L$ across graphs. By backward induction, the two graphs must have the same node set $\mathcal{V}$ and edge set $\mathcal{E}$. Moreover, for each node $v \in \mathcal{V}$, the last-layer representation $h_v^L$ must be the same.
Furthermore, from the injection property of $F^\ell$, if a node $v$ shares the same node representation $h_v^\ell$ across graphs, then its past-layer representation $h_v^{\ell-1}$ must also be the same across graphs. A backward reduction traces back to the initial representation $h_v^0$, which concludes that the two graphs must have the same set of input node features $X$." }, { "heading": "B DATASET DETAILS", "text": "OGBG-CODE. The OGBG-CODE dataset was recently included in the Open Graph Benchmark (OGB) (Hu et al., 2020, Section 6.3). It contains 452,741 Python method definitions extracted from thousands of popular Github repositories. The method definitions are represented as DAGs by augmenting the abstract syntax trees with edges connecting the sequence of source code tokens. Hence, there are two types of edges. The min/avg/max numbers of nodes in the graphs are 11/125/36123, respectively. We use the node features provided by the dataset, including node type, attributes, depth in the AST, and pre-order traversal index.
The task suggested by Hu et al. (2020) is to predict the sub-tokens forming the method name, also known as “code summarization”. The task is considered a proxy measure of how well a model captures the code semantics (Allamanis et al., 2018). We additionally consider the task of predicting the length of the longest path in the graph. We treat it as a 275-way classification because the maximum length is 275. The distribution of the lengths/classes is shown in Appendix E. To avoid triviality, for this task we remove the AST depth from the node feature set.
We adopt OGB’s project split, whose training set consists of Github projects not seen in the validation and test sets. We also experiment with a subset of the data, OGBG-CODE-15, which contains only a randomly chosen 15% of the OGBG-CODE training data. Validation and test sets remain the same.
In addition to OGBG-CODE, we further experiment with two DAG datasets, NA and BN, used by Zhang et al. (2019) for evaluating their model D-VAE.
To compare with the results reported in the referenced work, we focus on the predictive performance of the latent representations of the DAGs obtained from autoencoders. We adopt the given 90/10 splits.\nNeural architectures (NA). This dataset is created in the context of neural architecture search. It contains 19,020 neural architectures generated from the ENAS software (Pham et al., 2018). Each neural architecture has 6 layers (i.e., nodes) sampled from 6 different types of components, plus an input and output layer. The input node vectors are one-hot encodings of the component types. The weight-sharing accuracy (Pham et al., 2018) (a proxy of the true accuracy) on CIFAR-10 (Krizhevsky, 2009) is taken as performance measure. Details about the generation process can be found in Zhang et al. (2019, Appendix H).\nBayesian networks (BN). This dataset contains 200,000 random 8-node Bayesian networks generated by using the R package bnlearn (Scutari, 2010). The Bayesian Information Criterion (BIC) score is used to measure how well the DAG structure fits the Asia dataset (Lauritzen & Spiegelhalter, 1988). The input node vectors are one-hot encodings of the node indices according to topological sort. See Zhang et al. (2019, Appendix I) for further details." }, { "heading": "C BASELINE DETAILS", "text": "Baselines for OGBG-CODE. We use three basic measures to set up baseline performance, two for token prediction and one for the longest path task. (1) Node2Token: This method uses the attribute of the second node of the graph as prediction. We observe that the second node either contains the function name, if the token occurs in the vocabulary (which is not always the case because some function names consist of multiple words), or contains “None”. (2) TargetInGraph: This method pretends that it knows the ground-truth tokens but predicts only those occurring in the graph. One would expect that a learning model may be able to outperform this method if it learns the associations of tokens outside the current graph. (3) MajorityInValid: This method always predicts the majority length seen in the validation set.\nAdditionally, we compare with multiple GNN models. Some of them are the GNN implementations offered by OGB: GCN, GIN, GCN-VN, and GIN-VN. The latter two are extensions of the first two by including a virtual node (i.e., an additional node that is connected to all nodes in the graph). Note that the implementations do not strictly follow the architectures described in the original papers (Kipf & Welling, 2017; Xu et al., 2019). In particular, edge types are incorporated and inverse edges are added for bidirectional message passing.\nSince our model features attention mechanisms, we include GAT (Veličković et al., 2018) and GGNN (Li et al., 2016) for comparison. We also include two representative hierarchical pooling approaches, which use attention to determine node pooling: SAGPool (Lee et al., 2019) and ASAP (Ranjan et al., 2020). Lastly, we compare with the encoder of D-VAE (Zhang et al., 2019, Appendix E, F).\nBaselines for NA and BN. Over NA and BN, we consider D-VAE and the baselines in Zhang et al. (2019, Appendix J). S-VAE (Bowman et al., 2016) applies a standard GRU-based RNN variational autoencoder to the topologically sorted node sequence, with node features augmented by the information of incoming edges, and decodes the graph by generating an adjacency matrix. GraphRNN (You et al., 2018) by itself serves as a decoder; we pair it with S-VAE encoder. 
GCN uses a GCN encoder but takes the decoder of D-VAE. DeepGMG (Li et al., 2018) similarly uses a GNN-based encoder but employs its own decoder (which is similar to the one in D-VAE). Note that all these baselines are autoencoders, and our objective is to compare the performance of the latent representations." }, { "heading": "D MODEL CONFIGURATIONS AND TRAINING", "text": "D.1 EXPERIMENT PROTOCOL AND HYPERPARAMETER TUNING
Our evaluation protocols and procedures closely follow those of Hu et al. (2020); Zhang et al. (2019). For OGBG-CODE, we only changed the following. We used 5-fold cross-validation due to the size of the dataset and the number of baselines for comparison. Since we compared with a wide variety of models in addition to the OGB baselines, we swept over a large range of learning rates and, for each model, picked the best from the set {1e-4, 5e-4, 1e-3, 15e-4, 2e-3, 5e-3, 1e-2, 15e-3} based on performance on OGBG-CODE-15. We stopped training when the validation metric did not improve further under a patience of 20 epochs, for all models but D-VAE and DAGNN. For the latter two, we used a patience of 10. Moreover, for these two models we used gradient clipping (at 0.25) due to the recurrent layers and a batch size of 80. Note that OGB uses 10-fold cross-validation with a fixed learning rate of 1e-3, a fixed epoch number of 30, and a batch size of 128.
For NA and BN, we followed the exact training settings of Zhang et al. (2019, Appendix K). For DAGNN, we started the learning rate scheduler at 1e-3 (instead of 1e-4) and stopped at a maximum number of epochs, 100 for NA and 50 for BN (instead of 300 and 100, respectively). We also trained a sparse Gaussian process (SGP) (Snelson & Ghahramani, 2005) as the predictive model, as described in Zhang et al. (2019, Appendix L), to evaluate the performance of the latent representations. The prediction results were averaged over 10 folds.
For the Bayesian network learning experiment, we similarly adopted the settings of Zhang et al. (2019), running ten rounds of Bayesian optimization.
D.2 BASELINE MODELS
All models were implemented in PyTorch (Paszke et al., 2019). For OGBG-CODE, we used the GCN and GIN models provided by the benchmark. We implemented a GAT model as described in Veličković et al. (2018) and GG-NN in Li et al. (2016). We used the SAGPool implementation of Lee et al. (2019) and ASAP from the PyTorch Geometric Benchmark Suite https://github.com/rusty1s/pytorch_geometric/tree/master/benchmark. All these models were implemented using PyTorch Geometric (Fey & Lenssen, 2019). We used the parameters suggested in OGB (e.g., 5 GNN layers, with embedding and hidden dimension 300), with the exception of ASAP, where we used 3 instead of 5 layers due to memory constraints.
Since the D-VAE implementation does not support topological batching as we do, and also because of other miscellaneous restrictions (e.g., a single source node and target node), we reimplement D-VAE by using our DAGNN codebase. The reimplementation reproduces the results reported by Zhang et al. (2019). See Appendix F for more details.
D.3 DAGNN IMPLEMENTATION
For DAGNN, we used hidden dimension 300. As suggested by OGB, we used independent linear classifiers to predict sub-tokens at each position of the sub-token sequence. Similarly, we used a linear classifier to predict the length of the longest path.
For the NA and BN datasets, we took the baseline implementations as well as training and evaluation procedures from Zhang et al. (2019).
In particular, we used the corresponding configuration of D-VAE for the BN dataset. For DAGNN, we used the same hidden dimension of 501 and adapted the decoder of D-VAE (by replacing the use of the D-VAE encoder in part of the decoding process with our encoder). Additionally, we used bidirectional processing for token prediction over OGBG-CODE and for the experiment over BN. Since it did not offer an improvement in performance for the longest-path length prediction and for the experiment over NA but consumed too much time, for these cases we used unidirectional processing." }, { "heading": "E DETAILS ON THE LONGEST PATH EXPERIMENT", "text": "We observe that for the MPNN baselines, the longest path results shown in Table 1 are much worse on the 15% subset than on the full dataset. We speculate whether the poorer performance is caused purely by the size of the training data, or additionally by the discrepancy of data distributions. Figure 6 shows that the data distributions are rather similar. Hence, we conclude that the degraded performance of MPNNs on a smaller training set is due to their low sample efficiency, in contrast to DAG architectures (D-VAE and DAGNN) that perform similarly on both the full set and the subset." }, { "heading": "F REIMPLEMENTATION OF D-VAE", "text": "The original D-VAE implementation processes nodes sequentially and thus is time-consuming. Therefore, we reimplement D-VAE by using our DAGNN codebase, in particular supporting topological batching. Table 5 shows that our reimplementation closely reproduces the results obtained by the original D-VAE implementation." }, { "heading": "G ADDITIONAL ABLATION RESULTS", "text": "As mentioned in the main text, bidirectional processing is optional; it does not necessarily improve over unidirectional. Indeed, Table 6 shows that bidirectional works better on TOK-15 and BN, but unidirectional works better on LP-15 and NA. However, either way, DAGNN outperforms all baselines reported in Tables 1 and 2, with only one exception: on LP-15, D-VAE performs worse than unidirectional but better than bidirectional." } ]
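As a companion to the aggregator comparison in Section 3, here is a hedged single-node sketch of the aggregate-then-combine step, assuming additive attention queried by the node's previous-layer state and a GRU cell as the combine operator; the exact attention form and all names are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AttnAggregateCombine(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)  # transforms the query h_v^{l-1}
        self.w2 = nn.Linear(dim, dim, bias=False)  # transforms each summand h_u^l
        self.score = nn.Linear(dim, 1, bias=False)
        self.combine = nn.GRUCell(dim, dim)        # recurrent combine operator

    def forward(self, h_prev_v, h_pred):
        # h_prev_v: (dim,) previous-layer state of node v.
        # h_pred:   (P, dim) current-layer states of v's direct predecessors.
        logits = self.score(torch.tanh(self.w1(h_prev_v) + self.w2(h_pred)))  # (P, 1)
        alpha = torch.softmax(logits, dim=0)
        m_v = (alpha * h_pred).sum(dim=0)          # attention-weighted message
        # GRUCell expects a batch dimension; treat the single node as a batch of one.
        return self.combine(m_v.unsqueeze(0), h_prev_v.unsqueeze(0)).squeeze(0)
```

In an actual implementation, all nodes of a topological batch would be processed in parallel; the per-node form above only mirrors the equations.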
2021
DIRECTED ACYCLIC GRAPH NEURAL NETWORKS
SP:081c48c667eef561333c5b0d739e9dbebefa0f34
[ "This paper challenges the common belief that self-attention with dot product is necessary to train good NLP models. Several variants of the Synthesizer model is proposed. The effectiveness of Synthesizer is surprisingly good, although not beating the dot-product attention. The authors further showed that mixing synthesizer and dot-product attention sometimes achieve better results. The idea is validated on Translation, NLU, Summarization, Dialogue, and Language Modeling." ]
The dot product self-attention is known to be central and indispensable to stateof-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose SYNTHESIZER, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that simple Random Synthesizer is not only 60% faster but also improves perplexity by a relative 3.5%. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": null, "year": 2004 }, { "authors": [ "Jianpeng Cheng", "Li Dong", "Mirella Lapata" ], "title": "Long short-term memory-networks for machine reading", "venue": "arXiv preprint arXiv:1601.06733,", "year": 2016 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Jean-Baptiste Cordonnier", "Andreas Loukas", "Martin Jaggi" ], "title": "On the relationship between selfattention and convolutional layers", "venue": "arXiv preprint arXiv:1911.03584,", "year": 2019 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Nikita Kitaev", "Łukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "arXiv preprint arXiv:2001.04451,", "year": 2020 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "arXiv preprint arXiv:1508.04025,", "year": 2015 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. 
Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Ankur P Parikh", "Oscar Täckström", "Dipanjan Das", "Jakob Uszkoreit" ], "title": "A decomposable attention model for natural language inference", "venue": null, "year": 1933 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Alessandro Raganato", "Yves Scherrer", "Jörg Tiedemann" ], "title": "Fixed encoder self-attention patterns in transformer-based machine translation", "venue": "arXiv preprint arXiv:2002.10260,", "year": 2020 }, { "authors": [ "Minjoon Seo", "Tom Kwiatkowski", "Ankur P Parikh", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Phraseindexed question answering: A new challenge for scalable document comprehension", "venue": "arXiv preprint arXiv:1804.07726,", "year": 2018 }, { "authors": [ "Shikhar Sharma", "Layla El Asri", "Hannes Schulz", "Jeremie Zumer" ], "title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation", "venue": "CoRR, abs/1706.09799,", "year": 2017 }, { "authors": [ "Noam Shazeer", "Youlong Cheng", "Niki Parmar", "Dustin Tran", "Ashish Vaswani", "Penporn Koanantakool", "Peter Hawkins", "HyoukJoong Lee", "Mingsheng Hong", "Cliff Young" ], "title": "Mesh-tensorflow: Deep learning for supercomputers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yi Tay", "Dara Bahri", "Liu Yang", "Donald Metzler", "Da-Cheng Juan" ], "title": "Sparse sinkhorn attention", "venue": "arXiv preprint arXiv:2002.11296,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sinong Wang", "Belinda Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": "arXiv preprint arXiv:2006.04768,", "year": 2020 }, { "authors": [ "Wenhui Wang", "Nan Yang", "Furu Wei", "Baobao Chang", "Ming Zhou" ], "title": "Gated self-matching networks for reading comprehension and question answering", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 } ]
[ { "heading": null, "text": "The dot product self-attention is known to be central and indispensable to stateof-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose SYNTHESIZER, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that simple Random Synthesizer is not only 60% faster but also improves perplexity by a relative 3.5%. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding only tasks." }, { "heading": "1 INTRODUCTION", "text": "Transformer models (Vaswani et al., 2017) have demonstrated success across a wide range of tasks. This has resulted in Transformers largely displacing once popular auto-regressive and recurrent models in recent years. At the heart of Transformer models lies the query-key-value dot product attention. The success of Transformer models is widely attributed to this self-attention mechanism since fully connected token graphs, which are able to model long-range dependencies, provide a robust inductive bias.\nBut is the dot product self-attention really so important? Do we need it? Is it necessary to learn attention weights via pairwise dot products? This paper seeks to develop a deeper understanding of the role that the dot product self-attention mechanism plays in Transformer models.\nThe fundamental role of dot product self-attention is to learn self-alignment, i.e., to determine the relative importance of a single token with respect to all other tokens in the sequence. To this end, there have been memory metaphors and analogies constructed to support this claim. Indeed, the terms query, keys, and values imply that self-attention emulates a content-based retrieval process which leverages pairwise interactions at its very core.\nMoving against convention, this paper postulates that we cannot only do without dot product self-attention but also content-based memory-like self-attention altogether. Traditionally, attention weights are learned at the instance or sample level, where weights are produced by instance-level pairwise interactions. As a result, these instance-specific interactions often fluctuate freely across different instances as they lack a consistent global context.\nThis paper proposes SYNTHESIZER, a new model that learns to synthesize the self-alignment matrix instead of manually computing pairwise dot products. We propose a diverse suite of synthesizing functions and extensively evaluate them. We characterize the source information that these synthesizing functions receive, i.e., whether they receive information from individual tokens, token-token\ninteractions, and/or global task information. 
Intuitively, different source inputs to the synthesizing functions should capture diverse views, which may be useful when employed in conjunction.\nAside from generalizing the standard Transformer model, we show that it is possible to achieve competitive results with fully global attention weights that do not consider token-token interactions or any instance-level (local) information at all. More specifically, a random matrix SYNTHESIZER model achieves a 27.27 BLEU score on WMT 2014 English-German1. Via a set of rigorous experiments, we observe that the popular and well-established dot-product content-based attention can be approximated with simpler variants such as random matrices or dense layers without sacrificing much performance in some cases.\nIn our experiments, we also show that our relatively simple Synthesizer models also outperform Dynamic Convolutions (Wu et al., 2019) with a +3.5% relative improvement in perplexity while being 60% faster. On encoding tasks, our factorized Synthesizers can outperform other low-rank efficient Transformer models such as Linformers (Wang et al., 2020).\nWhile simple Synthesizer models are able to perform competitively, our experiments show that the pairwise dot product is still ultimately helpful. When composing our synthesizing functions with dot products, we find that they consistently improve the performance of Transformers. In general, we believe our findings will spur further investigation and discussion about the true role and utility of the self-attention mechanism in Transformer models.\nOur Contributions Our key contributions are described as follows:\n• We propose Synthetic Attention, a new way of learning to attend without explicitly attending (i.e., without dot product attention or content-based attention). Instead, we generate the alignment matrix independent of token-token dependencies and explore a potpourri of parameterized functions for synthesizing attention matrices.\n• We propose SYNTHESIZER, a new model that leverages Synthetic Attention. The model performs competitive to state-of-the-art Transformer models on a wide range of language tasks, including machine translation and language modeling.\n• Moreover, we show that (1) random learnable alignment matrices perform competitively and (2) token-token dependencies are not necessary to achieve good performance with Transformer models on certain tasks.\n• On large-scale masked language modeling on the C4 dataset (Raffel et al., 2019) and finetuning on SuperGLUE and GLUE benchmarks, we show that simple random Synthesizers can outperform/match Lightweight Dynamic convolutions (Wu et al., 2019) along with outperforming Transformers and Universal Transformers (Dehghani et al., 2018). On two encoding tasks, factorized random Synthesizers outperform low-rank Linformers (Wang et al., 2020)." }, { "heading": "2 RELATED WORK", "text": "Attention-based models are used across a wide spectrum of problem domains. Such models are especially popular, due to their effectiveness, in the language and vision domains. Attention models can be traced back to the machine translation models of (Bahdanau et al., 2014) and (Luong et al., 2015), where attention is employed to learn soft word alignments between language pairs. 
The intuition behind the attention mechanism is deeply rooted in the notion of memory-based retrieval (Graves et al., 2014; Weston et al., 2014), in which soft differentiable addressing of memory was initially proposed.
The paradigm of learning self-alignments, also known as self-attention, has been largely popularized by Transformer models (Vaswani et al., 2017). This technical narrative has also been explored by a number of other recent studies, including those on intra-attention (Parikh et al., 2016), self-matching networks (Wang et al., 2017), and LSTMN (Cheng et al., 2016). To this end, Transformer models, which are built primarily on self-attention and feed-forward layers, generally serve as a reliable replacement for autoregressive recurrent models.
1 The originally reported result is 27.30.
The self-attention layer itself has been the subject of many recent technical innovations. For example, recent studies have investigated improving the layer’s overall efficiency via sparsification and reducing the complexity of computing the alignment matrix (Child et al., 2019; Kitaev et al., 2020; Huang et al., 2018; Tay et al., 2020; Beltagy et al., 2020). These methods are tightly coupled with the query-key-value paradigm, employing a form of memory-based content retrieval as an attention mechanism. On the other end of the spectrum, there have been studies that advocate for replacing self-attention with convolution (Wu et al., 2019). The recent surge in interest in simplifying the attention mechanism raises important questions about the role and utility of the pairwise dot products, which are one of the defining characteristics of self-attention models. Meanwhile, in the image domain, Cordonnier et al. (2019) show a connection between Transformers and CNNs.
Our work is a new take on the self-attention mechanism in Transformer models. We delve deeper, starting with replacing the pairwise dot products with what we call synthesizing functions that learn attention matrices that may or may not depend on the input tokens. The most closely related work is that of Raganato et al. (2020), in which the authors propose using fixed (i.e., not learned) attention patterns in Transformer encoders. However, the scope of their work is limited to encoders and relies on manually defined handcrafted patterns that seem to work well. Our work takes this intuition further and expands on this narrative." }, { "heading": "3 THE PROPOSED METHOD", "text": "This section introduces our proposed SYNTHESIZER model. At its core, our model is essentially a Transformer model with self-attention modules replaced with our Synthetic Attention modules.
Figure 3.1 illustrates the key ideas behind (a) Transformer, (b) Dense Synthesizers, and (c) Random Synthesizers." }, { "heading": "3.1 SYNTHESIZER MODEL", "text": "This section introduces Synthetic Attention, our proposed self-attention module. Our model removes the notion of query-key-values in the self-attention module and directly synthesizes the alignment matrix instead.
Dense Synthesizer Let us consider the simplest variation of the SYNTHESIZER model, which is conditioned on each input token. Overall, our method accepts an input $X \in \mathbb{R}^{\ell \times d}$ and produces an output $Y \in \mathbb{R}^{\ell \times d}$. Here, $\ell$ refers to the sequence length and $d$ refers to the dimensionality of the model. We first adopt $F(\cdot)$, a parameterized function, for projecting input $X_i$ from $d$ dimensions to $\ell$ dimensions:
$B_i = F(X_i) \quad (1)$
where $F(\cdot)$
is a parameterized function that maps $\mathbb{R}^d$ to $\mathbb{R}^\ell$, $i$ is the $i$-th token of $X$, and the function is applied position-wise (to each vector in the sequence of length $\ell$). Intuitively, this can be interpreted as learning a token-wise projection to the sequence length $\ell$. Essentially, with this model, each token predicts weights for every token in the input sequence. In practice, we adopt a simple two-layer feed-forward network with ReLU activations for $F(\cdot)$:
$F(X_i) = W_2(\sigma_R(W_1(X_i) + b_1)) + b_2 \quad (2)$
where $\sigma_R$ is the ReLU activation function, $W_1 \in \mathbb{R}^{d \times d}$, and $W_2 \in \mathbb{R}^{d \times \ell}$. Hence, $B_i$ is now in $\mathbb{R}^\ell$. Given $B \in \mathbb{R}^{\ell \times \ell}$, we now compute:
$Y = \mathrm{Softmax}(B)\,G(X) \quad (3)$
where $G(\cdot)$ is another parameterized function of $X$ that is analogous to $V$ (value) in the standard Transformer model. This approach eliminates the dot product attention $Y = \mathrm{Softmax}(QK^\top)V$ altogether by replacing $QK^\top$ in standard Transformers with the synthesizing function $F(\cdot)$.
Random Synthesizer The previous variant learns synthetic attention by conditioning on each token of $X$ and projecting to $\ell$ dimensions. Hence, the Dense Synthesizer conditions on each token independently, as opposed to the pairwise token interactions in the vanilla Transformer model. We consider another variation of SYNTHESIZER where the attention weights are not conditioned on any input tokens. Instead, the attention weights are initialized to random values. These values can then either be trainable or kept fixed (denoted as Fixed).
Let $R$ be a randomly initialized matrix. The Random Synthesizer is defined as:
$Y = \mathrm{Softmax}(R)\,G(X) \quad (4)$
where $R \in \mathbb{R}^{\ell \times \ell}$. Notably, each head adds $\ell^2$ parameters to the network. The basic idea2 of the Random Synthesizer is to not rely on pairwise token interactions or any information from individual tokens, but rather to learn a task-specific alignment that works well globally across many samples. This is a direct generalization of the recently proposed fixed self-attention patterns of Raganato et al. (2020).
2 We were not expecting this variation to work at all, but it turns out to be a strong baseline.
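To make Eqs. (1)-(4) concrete, below is a minimal single-head PyTorch sketch. It is an illustrative reading of the formulas rather than the authors' released code; class and variable names are our own, and the truncation to the current sequence length anticipates the remark on length-dependent parameters below.

```python
import torch
import torch.nn as nn

class DenseSynthesizer(nn.Module):
    def __init__(self, d_model, max_len):
        super().__init__()
        # F(.): two-layer feed-forward net mapping each token from d to max_len dims.
        self.f = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                               nn.Linear(d_model, max_len))
        self.g = nn.Linear(d_model, d_model)  # G(.): value projection

    def forward(self, x):                     # x: (batch, length, d_model)
        length = x.size(1)                    # assumes length <= max_len
        b = self.f(x)[:, :, :length]          # (batch, length, length)
        return torch.softmax(b, dim=-1) @ self.g(x)

class RandomSynthesizer(nn.Module):
    def __init__(self, d_model, max_len, trainable=True):
        super().__init__()
        # Random alignment matrix; frozen when trainable=False (the "Fixed" variant).
        self.r = nn.Parameter(torch.randn(max_len, max_len), requires_grad=trainable)
        self.g = nn.Linear(d_model, d_model)

    def forward(self, x):                     # x: (batch, length, d_model)
        length = x.size(1)
        attn = torch.softmax(self.r[:length, :length], dim=-1)
        return attn @ self.g(x)               # broadcasts over the batch dimension
```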
Factorized Models The Dense Synthesizer adds $d \times \ell$ parameters to the network. On the other hand, the Random Synthesizer adds $\ell \times \ell$ parameters. Here, note that we omit the $Q, K$ projections in the standard Transformer, which results in further parameter savings. Despite these savings, synthesized models can be cumbersome to learn when $\ell$ is large. Hence, we propose factorized variations of the SYNTHESIZER models and show that these variants perform comparably in practice.
Factorized Dense Synthesizer Factorized outputs not only slightly reduce the parameter cost of the SYNTHESIZER but also aid in preventing overfitting. The factorized variant of the dense synthesizer can be expressed as follows:
$A, B = F_A(X_i), F_B(X_i) \quad (5)$
where $F_A(\cdot)$ projects input $X_i$ into $a$ dimensions, $F_B(\cdot)$ projects $X_i$ to $b$ dimensions, and $a \times b = \ell$. The output of the factorized module is now written as:
$Y = \mathrm{Softmax}(C)\,G(X) \quad (6)$
where $C = H_A(A) * H_B(B)$, with $H_A, H_B$ being tiling functions and $C \in \mathbb{R}^{\ell \times \ell}$. The tiling function simply duplicates the vector $k$ times, i.e., $\mathbb{R}^\ell \rightarrow \mathbb{R}^{\ell \times k}$. In this case, $H_A(\cdot)$ is a projection of $\mathbb{R}^a \rightarrow \mathbb{R}^{a \times b}$ and $H_B(\cdot)$ is a projection of $\mathbb{R}^b \rightarrow \mathbb{R}^{b \times a}$. To avoid having similar values within the same block, we compose the outputs of $H_A$ and $H_B$.
Factorized Random Synthesizer Similar to the Factorized Dense Synthesizer, we are also able to factorize $R$ into low-rank matrices $R_1, R_2 \in \mathbb{R}^{\ell \times k}$:
$Y = \mathrm{Softmax}(R_1 R_2^\top)\,G(X) \quad (7)$
Therefore, it is easy to see that, for each head, this reduces the parameter cost from $\ell^2$ to $2\ell k$ where $k \ll \ell$, and hence helps prevent overfitting. In practice, we use a small value of $k = 8$.
Mixture of Synthesizers Finally, we note that all of the proposed synthetic attention variants can be mixed in an additive fashion. This can be expressed as:
$Y = \mathrm{Softmax}(\alpha_1 S_1(X) + \cdots + \alpha_N S_N(X))\,G(X) \quad (8)$
where $S_i(\cdot)$ is a parameterized synthesizing function and the $\alpha$ (with $\sum \alpha = 1$) are learnable weights. In the case of mixing the Factorized Random with the standard Dense Synthesizer, this is expressed as:
$Y = \mathrm{Softmax}(\alpha_1 R_1 R_2^\top + \alpha_2 F(X))\,G(X) \quad (9)$
We investigate several Mixture of Synthesizers variants in our experiments (a hedged sketch of the factorized and mixture variants appears at the end of this section).
On Parameters Depending on Sequence Length Random and dense Synthesizers both rely on parameters that depend on the length $\ell$. In general, we define a maximum length and dynamically truncate to the actual length of each batch. We note that this is in a similar spirit to trainable positional encodings, which have been common practice in Transformer models. Hence, we do not foresee any issue here. In the case that this is really a problem, one potential solution is to project to a smaller value $b$ and tile $b$ to the maximum sequence length. We leave this exploration to future work." }, { "heading": "3.2 DISCUSSION", "text": "This paper asks fundamental questions about the attention matrix $A$ and whether it is possible to synthesize $A$ by alternate means other than pairwise attention. It is worth noting that the regular dot product attention can also be subsumed by our SYNTHESIZER framework, i.e., SYNTHESIZER generalizes the Transformer model. In the case of the Transformer, the synthesizing function in question is $S(X) = F_Q(X)F_K(X)^\top$.
Table 1 lists the different model variants explored within our SYNTHESIZER framework. The 'condition on' column refers to whether the synthesized output is produced as a function of $X_i$ or every $X_i, X_j$ pair. The 'sample' column indicates whether a given variant leverages local or global context. Random Synthesizers are global because they share the same global alignment patterns across all samples. Dense Synthesizers are considered to be local, as they are conditioned on $X_i$, which makes the alignment pattern dependent on each individual sample. To this end, it is imperative for synthesized models to have multiple heads to be effective." }, { "heading": "4 EXPERIMENTS", "text": "This section outlines our experimental setup and results. We first conduct experiments on five tasks to evaluate the effectiveness3 of different Synthesizer variants along with how they compare to the vanilla Transformer. Specifically, we conduct experiments on (1) machine translation (EnDe, EnFr), (2) autoregressive language modeling (LM1B), (3) text generation (summarization and dialogue modeling), and (4) multi-task natural language processing (GLUE/SuperGLUE). Details of each experiment can be found in the appendix.
3 Note that we are primarily interested in making controlled comparisons instead of going for the state-of-the-art result on each task.
Notation of Variants We use R to denote Random, D to denote Dense, and V to denote vanilla dot product attention; Fix to represent Fixed Random, FR to represent Factorized Random, and FD to represent Factorized Dense. For Mixture Synthesizers, we use + to denote that two methods are mixed."
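As referenced above, the factorized random variant of Eq. (7) and the mixture of Eq. (9) can be sketched as follows. Normalizing the mixing weights with a softmax is our assumption for enforcing that the α coefficients sum to one, since the exact parameterization is not specified.

```python
import torch
import torch.nn as nn

class FactorizedRandomSynthesizer(nn.Module):
    """Eq. (7): low-rank random alignment R1 R2^T; 2*len*k parameters per head."""
    def __init__(self, d_model, max_len, k=8):            # k = 8 as in the paper
        super().__init__()
        self.r1 = nn.Parameter(torch.randn(max_len, k))
        self.r2 = nn.Parameter(torch.randn(max_len, k))
        self.g = nn.Linear(d_model, d_model)

    def forward(self, x):                                  # x: (batch, length, d_model)
        n = x.size(1)
        attn = torch.softmax(self.r1[:n] @ self.r2[:n].t(), dim=-1)  # (n, n)
        return attn @ self.g(x)

class FactorizedRandomPlusDense(nn.Module):
    """Eq. (9): softmax(a1 * R1 R2^T + a2 * F(X)) G(X), softmaxed mixing weights."""
    def __init__(self, d_model, max_len, k=8):
        super().__init__()
        self.r1 = nn.Parameter(torch.randn(max_len, k))
        self.r2 = nn.Parameter(torch.randn(max_len, k))
        self.f = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                               nn.Linear(d_model, max_len))   # the F(.) of Eq. (2)
        self.mix = nn.Parameter(torch.zeros(2))               # -> (alpha1, alpha2)
        self.g = nn.Linear(d_model, d_model)

    def forward(self, x):
        n = x.size(1)
        alpha = torch.softmax(self.mix, dim=0)
        rand_logits = self.r1[:n] @ self.r2[:n].t()            # (n, n), global
        dense_logits = self.f(x)[:, :, :n]                     # (batch, n, n), local
        b = alpha[0] * rand_logits + alpha[1] * dense_logits   # broadcast over batch
        return torch.softmax(b, dim=-1) @ self.g(x)
```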
}, { "heading": "4.1 COMPARING SYNTHESIZER VARIANTS AND TRANSFORMER MODELS", "text": "This section dives into a detailed study of multiple Synthesizer variants and the base Transformer model.\nNMT (BLEU) LM (PPL) Model |θ| EnDe EnFr |θ| LM Trans.† 67M 27.30 38.10 - - Trans. 67M 27.67 41.57 70M 38.21 Synthesizer Models Fix 61M 23.89 38.31 53M 50.52 R 67M 27.27 41.12 58M 40.60 FR 61M 27.30 41.12 53M 42.40 D 62M 27.43 41.39 53M 40.88 FD 61M 27.32 41.57 53M 41.20 R+D 67M 27.68 41.21 58M 42.35 D+V 74M 27.57 41.38 70M 37.27 R+V 73M 28.47 41.85 70M 40.05\nResults on Text Generation For summarization, we find that the (R) and (D) variants do not outperform Transformers. The performance of the (D) model is ≈ 2 Rouge-L points below Transformers. Hence, we postulate that the local sample-wise pairwise interactions are important for the summarization task. On the other hand, the utility of synthesized attention can also be observed, i.e., the (R+V) and (R+D) models both outperform Transformers. On the dialogue task, Synthesizers (R) and (D) both outperform vanilla Transformers by a reasonable margin (≈ 1-3) points across most/all metrics. The best performing model here is the (D) variant. Surprisingly, unlike most other tasks, the (+V) variants do not perform well, signifying that dot product self-attention may actually be harmful for this task." }, { "heading": "4.2 COMPARING SYNTHESIZERS WITH DYNAMIC CONVOLUTIONS", "text": "To ascertain the competitiveness of Synthesizers, we also compare them with Dynamic convolutions (Wu et al., 2019). We compare them on (1) pretraining perplexity using the masked language modeling objective on C4 and (2) downtream finetuning results on GLUE and SuperGLUE.\nResults on Masked Language Modeling We also benchmark the speed of these models. In order to do so, we conduct additional experiments on the T5 adaptation of masked language modeling on the C4 dataset (Raffel et al., 2019) by comparing against lightweight dynamic convolutions (Wu et al., 2019) on a masked language modeling task. We also take this chance to benchmark the\nspeed of Synthesizers compared with Transformers. Experiments are conducted on Mesh Tensorflow (Shazeer et al., 2018) and ran on 2x2 TPU V3 Chips for approximately 524K steps.\nResults on MLM Table 4 reports the validation set log perplexity on masked language modeling4. We observe that Synthesizers (R) can outperform Dynamic Convolutions by a relative +3.5% while being +60% faster. Against Lightweight Dynamic Convolutions, we match the performance while being +5% faster. Given that this is the simple random Synthesizer baseline, we find this extremely interesting how it is able to outperform dynamic convolutions, a relatively complex model. The Random Synthesizer also has less FLOPS compared to both convolution models. On the other hand, the Mixture Synthesizer models that use the dot product attention improves the performance of the base Transformer model with relatively an equal model speed. Finally, similar to the earlier results, we see a consistent performance gain of Synthesizer (D+V) and Synthesizer (R+V) outperforming the base Transformer model.\nResults on GLUE and SuperGLUE Tables 5 and 6 report results on the GLUE and SuperGLUE benchmarks. We note that the (R) and (D) variants of SYNTHESIZER do not achieve reasonable performance. This can be largely attributed to the fact that the encoder self-attention in the T5 setting also functions as a cross-sentence attention. 
For example, in the entailment or reading comprehension tasks, the premise and hypothesis are concatenated together and self-attention effectively acts as cross-sentence attention5. On datasets like SST, a straightforward sentiment classification\n4Note that this follows the sequence transduction style in T5. 5On a related note, the perceived success of pairwise self-attention might also be attributed to the fact that these public benchmarks are bias towards pairwise matching tasks. In reality, this is computationally prohibitive for many practical real-world applications (Seo et al., 2018).\ntask, this cross sentence attention is not necessary and therefore Syn (R) and Syn (D) both perform competitively. To this end, Dynamic Convolutions (Wu et al., 2019) also do not have this encoder ”cross-attention” and therefore also suffer on many of these pairwise matching tasks. Notably, in this ‘no cross attention’ setting, the Random Synthesizers are are 4 to 5 percentage points higher in GLUE/SuperGLUE score compared to Dynamic Convolutions.\nOptimistically, we observe that the mixture model Syn (R+V) outperforms the T5 model by a substantial margin (+1.9 points on SuperGLUE and +0.6 points on GLUE). Naturally, the hybrid mixture model also very substantially outperforms Dynamic Convolution. Finally to ensure that the Syn (+V) variations are not outperforming Transformers due to simply having more parameters, we also compared with T5 (Base+) which has equal number of parameters to Syn (+V) variants (approximately ≈ 10M more parameters). Our results show that Synthesizers (+V) still outperform T5 (Base+)." }, { "heading": "4.3 COMPARING SYNTHESIZERS WITH LINFORMERS", "text": "We conduct more experiments comparing factorized random Synthesizers with Linformers. Since Linformer cannot be used to decode, we compare them on two encoding tasks from tensorflow datasets (AGnews (Zhang et al., 2015) and movie reviews (Maas et al., 2011)). We use k=32 for both factorized models. We also benchmark Transformers on this task. Note we do not use contextualized embeddings so results are not comparable with other work.\nModel News Reviews Steps/Sec Transformer 88.83 81.34 1.09 Linformer 86.50 82.86 1.09 Syn (FR) 86.53 83.39 1.10 Syn (FR+V) 89.13 84.61 0.80" }, { "heading": "4.4 OVERALL SUMMARY OF QUANTITATIVE RESULTS", "text": "This section summarizes our overall findings.\n• Synthetic Attention is competitive even without Dot Product Attention On all evaluated tasks, we showed that synthesized attention functions competitively, i.e., it achieves performance reasonably close to the dot product self-attention. On one task (dialogue generation), the dot product self-attention is found to actually degrade performance. Amongst the other tasks, machine translation is the least affected by the removal of the vanilla dot product. These findings allow us to introspect about whether pairwise comparisons for self-attention are even necessary. On the multi-task language understanding benchmark, the self-attention functions as a form of cross-attention by concatenating sentence pairs. Hence, synthesize attention performance is considerably worse than vanilla Transformers.\n• Synthetic Attention and Dot Product Attention are highly complementary Overall, we also observe that the dot product attention is very helpful. To this end, synthetic attention is highly complementary to the pairwise dot product attention. 
While Synthetic Attention can usually achieve competitive and fast performance on its own, composing multiple synthetic attention variants (and dot product attention) together shows gains on almost all tasks that we have investigated. Hence, we believe this to be a robust finding.
• The simplest Synthesizers, such as Random Synthesizers, are fast, competitive baselines. Finally, we note that simple random Synthesizers are competitive with Dynamic Convolutions and Linformers, which are recently proposed models. On two encoding tasks and a large-scale masked language modeling task, we show that random (or factorized random) Synthesizers remain competitive with other fast or efficient Transformer models." }, { "heading": "5 CONCLUSION", "text": "This paper proposed SYNTHESIZER, a new Transformer model that employs Synthetic Attention. We conducted a principled study to better understand and evaluate the utility of global alignment and local, instance-wise alignment (e.g., independent token and token-token based) in self-attention. We show that, on multiple tasks such as machine translation, language modeling, dialogue generation, masked language modeling, and document classification, synthetic attention demonstrates competitive performance compared to vanilla self-attention. Moreover, for the dialogue generation task, pairwise interactions actually hurt performance. Notably, we reemphasize that this study refers to self-attention. We found that we are not able to replace cross-attention with simpler variants in most cases. Via a set of additional large-scale experiments, we also find that Synthesizers can outperform or match Dynamic Convolutions and that Factorized Synthesizers can outperform other low-rank Linformer models." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILED SETUP FOR EXPERIMENTS", "text": "Machine Translation We implement our models in Tensor2Tensor, using the standard base hyperparameter settings. Specifically, we use byte-pair encoding (BPE), 6-layered Transformer networks with hidden size 512, filter size of 2048, and 8 heads. We use label smoothing of 0.1. The maximum sequence length is set to 256. Training is performed using 8 x V100 GPUs. We train all models for 250K steps and report results at the last checkpoint. We use a length penalty of 0.6 and a beam size of 4, following the default settings. We also compare with standard Transformer models. In the interest of keeping a consistent, fair evaluation across all model settings, we do not use checkpoint averaging or tune the decoding hyperparameters, although this generally leads to better performance. We evaluate BLEU scores using sacrebleu.
Language Modeling We implement our models in Tensor2Tensor using the packed TPU setup of sequence length 256. We train our models for 300K steps on 16 TPU V2 chips. We use the lmx base model setting for fair comparison across all model variations. The model has 6 layers and 8 heads, along with a filter width of 2048 and hidden size of 512. We used conv relu for the positional feed-forward layers across all baselines since we find them to perform slightly better. We report results (subword-level perplexity scores) on the test set at the final checkpoint.
Summarization For the summarization task, we train all models for 300K steps with a batch size of 128. All models use the base size setting. For the dialogue generation task, due to the smaller dataset size, we train a small model for 20K steps. All results are reported on the test set.
For the summarization task, we use the well-established metrics, i.e., Rouge-1, Rouge-2, and Rouge-L. Experiments are conducted using Mesh Tensorflow.
Dialogue Generation For the dialogue generation task, we train small-sized models for 20K steps. Experiments are conducted in Tensor2Tensor. We use NLG-Eval6 (Sharma et al., 2017) and report BLEU-1, BLEU-4, Rouge-L, Meteor, CIDEr, and Embedding-based similarity scores (Emb).
Multi-Task Language Understanding Our experiments are based on the T5 repository7 implemented in Mesh Tensorflow (Shazeer et al., 2018). We pre-train the vanilla T5 models and our models for 524288 steps using the span denoising objective. We then co-train the model on multiple tasks. We co-train on the en mix mixture (SuperGLUE and GLUE) for 100K steps with a constant learning rate of $10^{-3}$. Embedding and Softmax output layer parameters are kept fixed. The maximum sequence length is set to 512. We evaluate on the en mix mixture as defined in the original codebase, which comprises GLUE, SuperGLUE, and SQuAD trained in a single model.
Pretraining experiments on C4 Experiments are conducted on Mesh Tensorflow. We pretrain for 524288 steps and report the perplexity on the validation set. We use 2x2 TPU V3 chips for our experiments. The sequence length is 512 and the optimizer is Adafactor.
Experiments on Document Classification We run experiments in JAX/FLAX (https://github.com/google/flax) with base size models of 8 heads, 6 layers, MLP dimensions of 2048, and a hidden size of 512. We use the Adam optimizer with learning rate 0.05 and 8K steps of linear warmup. We train for 10K steps and report evaluation results at step 10K. We use a batch size of 128. We build a new sentencepiece model comprising 32K tokens for each new dataset. No pretraining or contextualized embeddings are used. Experiments are run on 16 TPU v3 chips." }, { "heading": "A.2 ADDITIONAL VARIANTS OF SYNTHESIZER", "text": "We report results of several additional variants of SYNTHESIZER, most of which we found to have marginal or no improvement over the simple dense/random variations.
• Convolution - Applying a 1D convolution instead of a 2-layer nonlinear network. We vary the filter width in our experiments.
• Bottleneck - Converting the 2-layered feed-forward network to a bottleneck layer, e.g., 512 → 16 → 512 (a hedged sketch of this variant appears after Appendix A.3 below). We also experiment with a convolutional variant of the bottleneck, i.e., projecting to a low-dimensional space and then projecting back to high dimensions.
• Gated Linear Units (GLU) - Applying the GLU of Dauphin et al. (2017) as the synthesizing function.
6 https://github.com/Maluuba/nlg-eval. 7 https://github.com/google-research/text-to-text-transfer-transformer" }, { "heading": "A.3 EFFECT OF NUMBER OF HEADS", "text": "We also investigate the impact of the number of heads on performance. We trained three Random Synthesizer models for the small version of the machine translation tasks using the T5 framework without pretraining. For simplicity, evaluation is done via greedy decoding. We report scores on the development set. We are mainly interested in relative performance and not absolute numbers. Table 9 reports the results on varying the number of heads." },
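As referenced in Appendix A.2, here is one plausible reading of the Bottleneck variant; the exact wiring of the final projection to the sequence length is not specified in the text, so this sketch is an assumption on our part.

```python
# A hypothetical Bottleneck synthesizing function: each token is squeezed through
# a narrow hidden layer (512 -> 16 -> 512) before projecting to per-token
# alignment logits of length max_len. Names and wiring are illustrative.
import torch.nn as nn

def bottleneck_synthesizing_fn(d_model=512, bottleneck=16, max_len=256):
    return nn.Sequential(
        nn.Linear(d_model, bottleneck),   # 512 -> 16
        nn.ReLU(),
        nn.Linear(bottleneck, d_model),   # 16 -> 512
        nn.ReLU(),
        nn.Linear(d_model, max_len),      # project to sequence-length logits
    )
```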
We report histograms at layers 1, 3, and 5 of a 6-layered (Transformer or SYNTHESIZER) model at 50K steps. We found that the weight distributions remain largely unchanged thereafter. Figure 3 shows the initialization state. We observe that there are distinct differences in the weight distributions of SYNTHESIZER and Transformer models. The variance of the SYNTHESIZER weights tends to be higher. On the other hand, the weights of the Transformer model tend to gravitate near 0 and have smaller variance. There are also notable differences across the (R) and (D) SYNTHESIZER variants. Specifically, the (D) model in general has greater max values with more values in the 0.1-0.2 range, while the values of the (R) model tend to stay closer to 0." }, { "heading": "A.5 WHAT PATTERNS DO SYNTHESIZERS LEARN?", "text": "In this section, we perform a deeper analysis of the SYNTHESIZER model.\n(Figures: Synthesizer weights on LM1B; Transformer weights on LM1B.)\nAnalysis Finally, we are interested in understanding what these Synthesizer models are learning. We inspect the random synthetic attention weights for the language modeling task LM1B and visualise the differences compared to vanilla attention. We find that, for the LM task, Synthesizers are capable of learning a local window, emulating the vanilla Transformer quite closely despite starting from a completely random initialization. The weights, however, seem smoother and less coarse compared to the Transformer. This seems to reflect what we expect, since the Synthesizer does not benefit from token-specific information. We provide additional analysis and visualisation of weights for the Machine Translation task in the supplementary material." }, { "heading": "A.6 MORE ATTENTION WEIGHTS ANALYSIS", "text": "This section illustrates the attention weights extracted from different variants of Synthesizer on the machine translation (En-De) task. Weights are extracted from lower layers, although we do not find any substantial difference in the patterns between early layers and deeper layers. We extract them from Tensorboard midway during training.\nAnalysis We first observe that these weights differ a lot from the LM weights shown in the main paper in Section 4.5. This shows that the Synthesizer learns very different weights for different tasks. Next, based on the weights on MT, we observe a very different pattern in all variants of Synthesizer. For the decoder weights, the main difference seems to be the overall magnitude and distribution of the weight values. However, we can easily observe the cracks and lines of the factorized variants. For the encoder weights, we observe that the Random and Dense variants are more uniform. On the other hand, there appears to be structural/regional clustering of values in the factorized variants." }, { "heading": "A.7 CONVERGENCE OF SYNTHESIZERS VS TRANSFORMERS", "text": "" } ]
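To make the synthesizing functions above concrete, here is a minimal sketch (our illustration, not the authors' released code) of a single-head Random Synthesizer layer as described in the paper: the attention matrix is a freely learned parameter that is independent of the input tokens, optionally factorized into two low-rank matrices. The class name, dimensions and initialization are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomSynthesizerAttention(nn.Module):
    """Single-head Random Synthesizer (dense or factorized) -- illustrative sketch."""

    def __init__(self, max_len: int, d_model: int, factor_k: int = None):
        super().__init__()
        if factor_k is None:
            # Dense random variant: an (l x l) matrix of freely trained parameters.
            self.attn = nn.Parameter(torch.randn(max_len, max_len) * 0.02)
            self.factored = False
        else:
            # Factorized random variant: R = R1 @ R2^T with rank factor_k.
            self.r1 = nn.Parameter(torch.randn(max_len, factor_k) * 0.02)
            self.r2 = nn.Parameter(torch.randn(max_len, factor_k) * 0.02)
            self.factored = True
        self.value = nn.Linear(d_model, d_model)  # value projection G(X)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); seq_len must not exceed max_len.
        l = x.size(1)
        r = (self.r1 @ self.r2.t()) if self.factored else self.attn
        weights = F.softmax(r[:l, :l], dim=-1)  # input-independent attention weights
        return torch.einsum('ij,bjd->bid', weights, self.value(x))

# Usage sketch:
# layer = RandomSynthesizerAttention(max_len=256, d_model=512, factor_k=8)
# out = layer(torch.randn(2, 100, 512))  # -> (2, 100, 512)
```

Note that because these attention weights do not depend on the input, they can be cached at inference time, which is one source of the speed advantage discussed above.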
2020
null
SP:e8de5995140c90ed95c915f5724c0a910a99cfb9
[ "This paper proposes and evaluates different normalization techniques for graph neural networks. Also, the authors argue that the best normalization technique is task dependent, so they propose to use a weighted average of different normalizations that is learned during training, called AGN. In the paper they propose 4 different normalizations some of which are structure-dependent, and compare the performance of GCN, GAT and GatedGCN with and without these normalizations, and the learned combination of all of them." ]
Graph Neural Networks (GNNs) have emerged as a useful paradigm to process graph-structured data. Usually, GNNs are stacked into multiple layers, and the node representations in each layer are computed through propagating and aggregating the neighboring node features with respect to the graph. To effectively train a GNN with multiple layers, some normalization techniques are necessary. Though existing normalization techniques have been shown to accelerate the training of GNNs, the structural information of the graph is still ignored. In this paper, we propose two graph-aware normalization methods to effectively train GNNs. Then, by taking into account that normalization methods for GNNs are highly task-relevant and that it is hard to know in advance which normalization method is the best, we propose to learn attentive graph normalization by optimizing a weighted combination of multiple graph normalization methods at different scales. By optimizing the combination weights, we can automatically select the best normalization method, or the best combination of multiple normalization methods, for a specific task. We conduct extensive experiments on benchmark datasets for different tasks and confirm that the graph-aware normalization methods lead to promising results and that the learned weights suggest the more appropriate normalization methods for a specific task.
[]
[ { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", "venue": "CoRR, abs/1711.07553,", "year": 2017 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on", "venue": "graphs. CoRR,", "year": 2014 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "arXiv preprint arXiv:1801.10247,", "year": 2018 }, { "authors": [ "Ke Cheng", "Yifan Zhang", "Xiangyu He", "Weihan Chen", "Jian Cheng", "Hanqing Lu" ], "title": "Skeleton-based action recognition with shift graph convolutional network", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Neofytos Dimitriou", "Ognjen Arandjelovic" ], "title": "A new look at ghost normalization", "venue": "arXiv preprint arXiv:2007.08554,", "year": 2020 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K. Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Zheng Huang", "Kai Chen", "Jianhua He", "Xiang Bai", "Dimosthenis Karatzas", "Shijian Lu", "CV Jawahar" ], "title": "Icdar2019 competition on scanned receipt ocr and information extraction", "venue": "In 2019 International Conference on Document Analysis and Recognition (ICDAR),", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "CoRR, abs/1609.02907,", "year": 2016 }, { "authors": [ "Xia Li", "Yibo Yang", "Qijie Zhao", "Tiancheng Shen", "Zhouchen Lin", "Hong Liu" ], "title": "Spatial pyramid based graph reasoning for semantic segmentation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard S. 
Zemel" ], "title": "Gated graph sequence neural networks", "venue": null, "year": 2016 }, { "authors": [ "Jaechang Lim", "Seongok Ryu", "Kyubyong Park", "Yo Joong Choe", "Jiyeon Ham", "Woo Youn Kim" ], "title": "Predicting drug–target interaction using a novel graph neural network with 3d structure-embedded graph representation", "venue": "Journal of chemical information and modeling,", "year": 2019 }, { "authors": [ "Ping Luo", "Ruimao Zhang", "Jiamin Ren", "Zhanglin Peng", "Jingyu Li" ], "title": "Switchable normalization for learning-to-normalize deep representation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodolà", "Jan Svoboda", "Michael M. Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "F. Scarselli", "M. Gori", "A.C. Tsoi", "M. Hagenbuchner", "G. Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Sheng Shen", "Zhewei Yao", "Amir Gholami", "Michael W. Mahoney", "Kurt Keutzer" ], "title": "Powernorm: Rethinking batch normalization in transformers, 2020", "venue": null, "year": 2020 }, { "authors": [ "Weijing Shi", "Raj Rajkumar" ], "title": "Point-gnn: Graph neural network for 3d object detection in a point cloud", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor S. Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "ArXiv,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "CoRR, abs/1706.03762,", "year": 2017 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "arXiv preprint arXiv:1806.03536,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks? 
ArXiv", "venue": null, "year": 2019 }, { "authors": [ "Liang Yao", "Chengsheng Mao", "Yuan Luo" ], "title": "Graph convolutional networks for text classification", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Si Zhang", "Hanghang Tong", "Jiejun Xu", "Ross Maciejewski" ], "title": "Graph convolutional networks: Algorithms, applications and open challenges", "venue": "In International Conference on Computational Social Networks,", "year": 2018 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "Pairnorm: Tackling oversmoothing in gnns", "venue": "ArXiv, abs/1909.12223,", "year": 2020 }, { "authors": [ "Welling", "Velickovic" ], "title": "Graph neural networks (Kipf", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph Neural Networks (GNNs) have shown great popularity due to their efficiency in learning on graphs for various application areas, such as natural language processing (Yao et al., 2019; Zhang et al., 2018), computer vision (Li et al., 2020; Cheng et al., 2020), point cloud (Shi & Rajkumar, 2020), drug discovery (Lim et al., 2019), citation networks (Kipf & Welling, 2016), and social networks (Chen et al., 2018). A graph consists of nodes and edges, where nodes represent individual objects and edges represent relationships among those objects. In the GNN framework, the node or edge representations are alternately updated by propagating information along the edges of a graph via non-linear transformation and aggregation functions (Wu et al., 2020; Zhang et al., 2018). GNN captures long-range node dependencies via stacking multiple message-passing layers, allowing the information to propagate over multiple-hops (Xu et al., 2018).\nIn essence, GNN is a new kind of neural networks which exploits neural network operations over graph structure. Among the numerous kinds of GNNs (Bruna et al., 2014; Defferrard et al., 2016; Maron et al., 2019; Xu et al., 2019), message-passing GNNs (Scarselli et al., 2009; Li et al., 2016; Kipf & Welling, 2016; Velickovic et al., 2018; Bresson & Laurent, 2017) have been the most widely used due to their ability to leverage the basic building blocks of deep learning such as batching, normalization and residual connections. To update the feature representation of a node, many approaches are designed. For example, Graph ConvNet (GCN) (Kipf & Welling, 2016) employs an averaging operation over the neighborhood node with the same weight value for each of its neighbors; GraphSage (Hamilton et al., 2017) samples a fixed-size neighborhood of each node and performs mean aggregator or LSTM-based aggregator over the neighbors; Graph Attention Network (GAT) (Velickovic et al., 2018) incorporates an attention mechanism into the propagation step, which updates the feature representation of each code via a weighted sum of adjacent node representations; MoNet (Monti et al., 2017) designs a Gaussian kernel with learnable parameters to assign different weights to neighbors; GatedGCN (Bresson & Laurent, 2017) explicitly introduces edge features at each layer and updates edge features by considering the feature representations of these two con-\nIt is well known that one of the critical ingredients to effectively train deep neural networks is normalization technique, e.g., Batch Normalization (BN) (Ioffe & Szegedy, 2015) is widely used to accelerate the deep neural networks training. Other than BN, several normalization methods have been developed from different perspectives, e.g., Layer Normalization (LN) (Ba et al., 2016) and Group Normalization (Wu & He, 2018) which operate along the channel dimension, Instance Normalization (Ulyanov et al., 2016) which performs a BN-like normalization for each sample, Switchable Normalization (Luo et al., 2019) which utilizes three distinct scopes—including channel, layer, and minibatch—to compute the first order and second order statistics. Each normalization method has its advantages and is suitable for some particular tasks. For instance, BN has achieved perfect performance in computer vision whereas LN outperforms BN in natural language processing (Vaswani et al., 2017).\nAs an analogue, in Dwivedi et al. (2020), BN is utilized for each graph propagation layer during training GNNs. 
In Zhao & Akoglu (2020), a novel normalization layer, denoted as PAIRNORM, is introduced to mitigate the over-smoothing problem and prevent all node representations from homogenization by differentiating the distances between different node pairs. Although these methods mentioned above have been demonstrated being useful in training GNNs, the local structure and global structure of the graph are ignored in these existing methods. Moreover, in previous work, only one of the mentioned normalization methods is selected and it is used for all normalization layers. This may limit the potential performance improvement of the normalization method and it is also hard to decide which normalization method is suitable to a specific task.\nGraph data contains rich structural information. By considering the structure information in the graph, in this paper, we propose two graph-aware normalization methods at different scales: a) adjacency-wise normalization, and b) graph-wise normalization. Unlike BN and LN, the adjacencywise normalization takes into account the local structure in the graph whereas the graph-wise normalization takes into account the global structure in the graph. On other hand, while multiple normalization methods are available for training GNNs and it is still hard to know in advance which normalization method is the most suitable to a specific task. To tackle with this deficiency, we further propose to learn attentive graph normalization by optimizing a weighted combination of multiple normalization methods. By optimizing the combination weights, we can select the best or the best combination of multiple normalization methods for training GNNs at a specific task automatically.\nThe contributions of the paper are highlighted as follows.\n• We propose two graph-aware normalization methods: adjacency-wise normalization and graphwise normalization. To the best of our knowledge, it is for the first time that the graph-aware normalization method is proposed for training GNNs.\n• We present to learn attentive graph normalization by optimizing a weighted combination of different normalization methods. By learning the combination weights, we can automatically select the\nbest normalization method or the best combination of multiple normalization methods for training GNNs at a specific task.\n• We conduct extensive experiments on benchmark datasets for different tasks and confirm that the graph-aware normalization methods leads to promising results and that the learned weights suggest the more appropriate normalization methods for specific task." }, { "heading": "2 GRAPH-AWARE NORMALIZATION AT DIFFERENT SCALES", "text": "Suppose that we haveN graphs G1,G2, ...,GN in a mini-batch. Let Gk = (Vk, Ek) be the k-th graph, where Vk is the set of nodes and Ek is the set of edges. We use vk,i to denote the i-th node of graph Gk and use ek,i,j to denote the edge between nodes vk,i and vk,j of graph Gk. Moreover, we use hvk,i ∈ Rd to represent the feature of node vk,i and hjvk,i to represent the j-th element of hvk,i . We use N (vk,i) to represent the neighbors of node vk,i (including node vk,i itself). For clarity, we formulate the normalization methods for training GNNs from different scales, as illustrated in Figure 1 (a)-(d), including node-wise normalization, adjacency-wise normalization, graph-wise normalization and batch-wise normalization.\nNode-wise Normalization. 
Node-wise normalization on graph, denoted as GNn, normalizes the feature vector $h_{v_{k,i}}$ of each node $v_{k,i}$ by computing the first and second order statistics over the $d$ entries of the feature vector $h_{v_{k,i}}$ as follows:\n$\hat{h}^{(n)}_{v_{k,i}} = \frac{h_{v_{k,i}} - \mu^{(n)}_{k,i}\mathbf{1}}{\sigma^{(n)}_{k,i}}, \quad \mu^{(n)}_{k,i} = \frac{1}{d}\sum_{j=1}^{d} h^{j}_{v_{k,i}}, \quad \sigma^{(n)}_{k,i} = \sqrt{\frac{1}{d}\sum_{j=1}^{d}\big(h^{j}_{v_{k,i}} - \mu^{(n)}_{k,i}\big)^{2}}, \quad (1)$\nwhere $\mu^{(n)}_{k,i}$ and $\sigma^{(n)}_{k,i}$ are the mean and the standard deviation along the feature dimension for node $v_{k,i}$, and $\mathbf{1} \in \mathbb{R}^{d}$ represents a $d$-dimensional vector of all ones. Note that node-wise normalization is equivalent to applying LN to each node of the graph to reduce the "covariate shift" problem. (The node-wise normalization method in Equation (1) can also be used to normalize the feature at each edge, as illustrated in Figure 1 (e).)\nAdjacency-wise Normalization. Each node in a graph has its neighbors. However, node-wise normalization performs normalization on each node individually and ignores the local structure of the graph. Here, we propose to take into account the adjacency structure of the graph and normalize the node features over the adjacent neighbors. We term this adjacency-wise normalization on graph, denoted as GNa. For each node $v_{k,i}$ in graph $\mathcal{G}_k$, we consider its adjacent nodes $\mathcal{N}(v_{k,i})$, as illustrated in Figure 1 (b). Specifically, the adjacency-wise normalization for node $v_{k,i}$ is defined as follows:\n$\hat{h}^{(a)}_{v_{k,i}} = \frac{h_{v_{k,i}} - \mu^{(a)}_{k,i}\mathbf{1}}{\sigma^{(a)}_{k,i}}, \quad (2)$\n$\mu^{(a)}_{k,i} = \frac{1}{|\mathcal{N}(v_{k,i})| \times d}\sum_{j' \in \mathcal{N}(v_{k,i})}\sum_{j=1}^{d} h^{j}_{v_{k,j'}}, \quad (3)$\n$\sigma^{(a)}_{k,i} = \sqrt{\frac{1}{|\mathcal{N}(v_{k,i})| \times d}\sum_{j' \in \mathcal{N}(v_{k,i})}\sum_{j=1}^{d}\big(h^{j}_{v_{k,j'}} - \mu^{(a)}_{k,i}\big)^{2}}, \quad (4)$\nwhere $\mu^{(a)}_{k,i}$ and $\sigma^{(a)}_{k,i}$ are the first order and second order statistics over the adjacent nodes. (For the edge $e_{k,i,j}$, as in Figure 1 (f), the adjacent edges $\mathcal{N}(e_{k,i})$ can be considered in a similar way.)\nGraph-wise Normalization. Note that the nodes belonging to graph $\mathcal{G}_k$ naturally form a group. In order to preserve the global structure of a graph, we propose to normalize the node features based on the first and second order statistics computed over graph $\mathcal{G}_k$. Specifically, we define the graph-wise normalization on graph, denoted as GNg, for node $v_{k,i}$ as follows:\n$\hat{h}^{(g)}_{v_{k,i}} = (h_{v_{k,i}} - \mu^{(g)}_{k})\Lambda_{k}^{-1}, \quad (5)$\n$\mu^{(g)}_{k} = \frac{1}{|\mathcal{G}_k|}\sum_{v_{k,i} \in \mathcal{G}_k} h_{v_{k,i}}, \quad (6)$\nwhere $\mu^{(g)}_{k}$ and $\Lambda_k$ are the first order and second order statistics in graph $\mathcal{G}_k$, in which $\Lambda_k$ is a diagonal matrix whose diagonal entry $\Lambda^{jj}_{k}$ is defined as\n$\Lambda^{jj}_{k} = \sqrt{\frac{1}{|\mathcal{G}_k|}\sum_{v_{k,i} \in \mathcal{G}_k}\big(h^{j}_{v_{k,i}} - \mu^{(g),j}_{k}\big)^{2}}. \quad (7)$\nIf the task has only a single graph, then graph-wise normalization is similar to BN. However, unlike BN, graph-wise normalization does not use a smoothing average updater. (For the edges $\mathcal{E}_k$ of graph $\mathcal{G}_k$ (Figure 1 (g)), we can also define the same normalization.)\nBatch-wise Normalization. To keep training stable, BN is one of the most critical components. For a mini-batch, there are $N$ graphs. We compute the mean and standard deviation over the graphs of a mini-batch, and then each node feature $h_{v_{k,i}}$ is normalized as follows:\n$\hat{h}^{(b)}_{v_{k,i}} = (h_{v_{k,i}} - \mu^{(b)})\Lambda^{-1}, \quad (8)$\n$\mu^{(b)} = \frac{1}{T}\sum_{k=1}^{N}\sum_{i=1}^{|\mathcal{G}_k|} h_{v_{k,i}}, \quad (9)$\nwhere $T = \sum_{k=1}^{N}|\mathcal{G}_k|$ is the total number of nodes in the $N$ graphs and $\Lambda$ is a diagonal matrix that keeps the standard deviation of the node features over the $N$ graphs, in which the diagonal entry $\Lambda^{jj}$ is defined as\n$\Lambda^{jj} = \sqrt{\frac{1}{T}\sum_{k=1}^{N}\sum_{i=1}^{|\mathcal{G}_k|}\big(h^{j}_{v_{k,i}} - \mu^{(b),j}\big)^{2}}. \quad (10)$\nNote that batch-wise normalization on graph, named GNb, is effectively BN (Ioffe & Szegedy, 2015), which performs normalization over all nodes of the $N$ graphs in a mini-batch.\nThe normalization methods applied to node features $h_{v_{k,i}}$ can also be extended to edge features $h_{e_{k,i,j}}$, where $h_{e_{k,i,j}}$ denotes the feature of edge $e_{i,j}$ in graph $\mathcal{G}_k$, as illustrated in Figure 1 (e)-(h). Remark. The properties of the four normalization methods are summarized as follows.\n• Node-wise normalization only normalizes the feature of each node individually and ignores the adjacency structure and the whole graph structure. It is equivalent to LN (Ba et al., 2016) in operation.\n• Adjacency-wise normalization takes the adjacent nodes into account, whereas graph-wise normalization takes into account the features of all nodes in a graph.\n• Batch-wise normalization is the same as standard batch normalization (Ioffe & Szegedy, 2015). If the task only involves a single graph, then batch-wise normalization is similar to graph-wise normalization, except that the momentum average used in batch-wise normalization is not used in graph-wise normalization." }, { "heading": "3 LEARNING ATTENTIVE GRAPH NORMALIZATION", "text": "Although we have defined several normalization methods for graph-structured data, different tasks prefer different normalization methods, and for a specific task it is hard to decide which normalization method should be used. Moreover, typically one normalization approach is utilized in all normalization layers of a GNN, which may sacrifice the performance of the GNN.\nTo remedy these issues, we propose to learn attentive graph normalization for training GNNs by optimizing a weighted combination of the normalization methods. Specifically, we combine the node feature $\hat{h}_{v_{k,i}}$ under different normalization methods as follows:\n$\hat{h}_{v_{k,i}} = \gamma\big(\alpha^{(n)} \odot \hat{h}^{(n)}_{v_{k,i}} + \alpha^{(a)} \odot \hat{h}^{(a)}_{v_{k,i}} + \alpha^{(g)} \odot \hat{h}^{(g)}_{v_{k,i}} + \alpha^{(b)} \odot \hat{h}^{(b)}_{v_{k,i}}\big) + \beta, \quad (11)$\nwhere $\alpha^{(n)}, \alpha^{(a)}, \alpha^{(g)}$ and $\alpha^{(b)} \in \mathbb{R}^{d}$ are trainable gate parameters with the same dimension as $h_{v_{k,i}}$ ($\odot$ denotes element-wise multiplication), and $\gamma \in \mathbb{R}$ and $\beta \in \mathbb{R}^{d}$ are the trainable scale and shift parameters, respectively.\nNote that we use the learned $\alpha^{(n)}, \alpha^{(a)}, \alpha^{(g)}$ and $\alpha^{(b)}$ to indicate the contribution of the corresponding normalized feature to $\hat{h}_{v_{k,i}}$. Thus, we impose normalization constraints on each dimension of $\alpha^{(n)}, \alpha^{(a)}, \alpha^{(g)}$ and $\alpha^{(b)}$: $\alpha^{(u)}_{j} \in [0, 1]$ for $u \in \{n, a, g, b\}$ and $j = 1, \cdots, d$, and $\sum_{u \in \{n, a, g, b\}} \alpha^{(u)}_{j} = 1$ for each $j = 1, \cdots, d$. In this way, if a normalization method is better for a specific task, the corresponding learned weights will be higher than the others. We term the learned attentive graph normalization method in Equation (11) Automatic Graph Normalization (AGN). In AGN, multiple normalization methods collaborate and compete with each other to improve the performance of GNNs.\nDifferent normalization methods are suitable for different tasks. In AGN, the attention weights $\alpha^{(n)}, \alpha^{(a)}, \alpha^{(g)}$ and $\alpha^{(b)}$ are optimized for a specific task, and thus the best-performing normalization method will have a set of significant weights. Therefore, AGN can serve as an effective strategy to select the best-performing normalization method or the best combination of multiple normalization methods for a specific task." 
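As an illustration of Equations (5)-(7) and (11), below is a minimal PyTorch sketch (our own, not the authors' code) of graph-wise normalization and the AGN gating. It assumes node features stacked into a (num_nodes, d) tensor and a long tensor graph_ids mapping each node to its graph in the mini-batch; the [0, 1] and sum-to-one constraints on the gates are realized here through a softmax over the four methods, which is one possible parameterization and matches the equal 0.25 initialization used in the experiments.

```python
import torch
import torch.nn.functional as F

def graph_wise_norm(h, graph_ids, num_graphs, eps=1e-5):
    # h: (num_nodes, d) node features; graph_ids: (num_nodes,) long tensor.
    count = torch.bincount(graph_ids, minlength=num_graphs).clamp(min=1).unsqueeze(1)
    mean = h.new_zeros(num_graphs, h.size(1)).index_add_(0, graph_ids, h) / count
    centered = h - mean[graph_ids]                        # subtract per-graph mean (Eqs. 5-6)
    var = h.new_zeros(num_graphs, h.size(1)).index_add_(0, graph_ids, centered ** 2) / count
    return centered / (var[graph_ids] + eps).sqrt()       # divide by per-graph std (Eq. 7)

class AttentiveGraphNorm(torch.nn.Module):
    """Combines the four normalized features with learned gates (Eq. 11) -- sketch."""

    def __init__(self, d):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(4, d))  # softmax -> 0.25 each at init
        self.gamma = torch.nn.Parameter(torch.ones(1))       # scalar scale gamma
        self.beta = torch.nn.Parameter(torch.zeros(d))       # shift beta

    def forward(self, h_n, h_a, h_g, h_b):
        # Each input: (num_nodes, d), the node-, adjacency-, graph- and batch-wise
        # normalized features; the gates sum to one along the method axis.
        alpha = F.softmax(self.logits, dim=0)                # (4, d)
        stacked = torch.stack([h_n, h_a, h_g, h_b], dim=0)   # (4, num_nodes, d)
        return self.gamma * (alpha.unsqueeze(1) * stacked).sum(dim=0) + self.beta
```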
}, { "heading": "4 EXPERIMENTS", "text": "We evaluate GNn, GNa, GNg, GNb, and AGN under three GNN frameworks, including Graph Convolution Network (GCN) , Graph Attention Network (GAT) and GatedGCN. We also assess the performance of GNNs without normalization layer named as “No Norm”. The benchmark datasets consist of three types of tasks including node classification, link prediction, and graph classification/regression. We use all seven datasets from Dwivedi et al. (2020), which are PATTERN, CLUSTER, SROIE, TSP, COLLAB, MNIST, CIFAR10, and ZINC. In addition, we apply GatedGCN for key information extraction problem and evaluate the effect of different normalization methods on SROIE (Huang et al., 2019), which is used for extracting key information from receipt in ICDAR 2019 Challenge (task 3). The detailed statistics of the datasets are presented in Appendix C.1.\nThe implementations of GCN, GAT and GatedGCN are from GNN benchmarking framwork4. The hyper-parameters and optimizers of the models and the details of the experimental settings are kept the same as in (Dwivedi et al., 2020). We run experiments on CLUSTER and PATTERN datasets with GNNS of depth of layers L = {4, 16}, respectively. For the other datasets, we fix the number of GCN layers to L = 4." }, { "heading": "4.1 NODE CLASSIFICATION", "text": "For datasets CLUSTER and PATTREN, the average node-level accuracy which is weighted with respect to the class sizes is used to evaluate the performance of all models. For each model, we conduct 4 trials with different seeds {41, 95, 35, 12} to compute the average accuracy and the results are shown in Table 1.\nAs can be read, graph-wise normalization (GNg) outperforms batch-wise normalization (GNb) obviously in most situations. For instance, when the depth of GNNs is 4, GatedGCN with GNg achieves 9% improvement over GNb on CLUSTER. Batch-wise normalization computes the statistics over a batch data and ignores the differences between different graphs. Different from GNb, GNg performs normalization only for each graph. Thus, GNg can learn the dedicated information of each graph and normalize the feature of each graph into a reasonable range. As we known, the performance of the adjacency-wise normalization (GNa) is similar with that of the node-wise normalization (GNn). Compared with GNn, GNa consider the neighbors of each node and gets higher accuracies. AGN gets comparable results for different GNNs and the results of AGN are close to the best results in most cases due to its flexibility and adaptability. AGN can adaptively learn the optimal combination of the normalization methods which better adapt to the node classification task.\nMoreover, we apply node classification to key information extraction on SROIE which consists of 626 receipts for training and 347 receipts for testing. Each image is annotated with text bounding\n4https://github.com/graphdeeplearning/benchmarking-gnns\nWe can observe that GNg achieves the best performance among all compared normalization methods. In the receipt, there are many nodes with only numeric texts. It is hard to differentiate the “Total” field from other nodes with numeric text. GNg performs well in this field and outperforms the second best by 2.0%. We believe that the graph-wise normalization can make the “Total” field stand out from the other bounding boxes with numeric text by aggregating the relevant anchor point information from its neighbors and removing the mean number information. 
Similarly, graph-wise normalization can promote extracting information for the other three key fields. Interestingly, the graph of each receipt is special in that neighboring nodes usually belong to different classes; thus, the performance of adjacency-wise normalization is worse than that of node-wise normalization." }, { "heading": "4.2 LINK PREDICTION", "text": "Link prediction is to predict whether there is a link between two nodes $v_i$ and $v_j$ in a graph. The features of the two nodes $v_i$ and $v_j$ at both ends of edge $e_{i,j}$ are concatenated to make a prediction. Experimental results are shown in Table 3. All five normalization methods achieve similar performance on the TSP dataset. Compared with the others, the results of AGN are very stable; for each GNN, the result of AGN is comparable with the best result.\nThe COLLAB dataset contains only one graph, with 235,868 nodes. Due to an out-of-memory problem, we do not report the results of GNa and AGN. Compared with GNNs with a normalization layer, the results of GNNs without a normalization layer (No Norm) seriously degenerate. GNg performs better than GNb, and GatedGCN with GNg achieves the best result." }, { "heading": "4.3 GRAPH CLASSIFICATION AND GRAPH REGRESSION", "text": "Graph classification is to assign one label to each graph. We conduct experiments on CIFAR10 and MNIST. Average class accuracy is reported in Table 4. ZINC is a dataset for graph regression; the mean absolute error (MAE) between the predicted value and the ground truth is calculated for each model." }, { "heading": "4.4 ANALYSIS AND FURTHER INVESTIGATIONS", "text": "The above experimental results indicate that GNg outperforms batch normalization on most node classification tasks. Each single normalization method performs very well on some datasets, while its performance may decrease sharply on other datasets. Meanwhile, our proposed AGN, which integrates several normalization methods into a unified optimization framework, achieves competitive results compared with the best single normalization method on various datasets.\nBehaviours of Learned Weights in AGN. To gain more insight into AGN, we conduct a set of experiments to analyze the effect of each normalization method on different datasets. Note that AGN combines the results of several normalization methods, and $\{\alpha^{(u)}\}_{u \in \{n,a,g,b\}}$ in Equation (11) indicate the importance of the corresponding normalization methods. We initialize the weights $\{\alpha^{(u)}\}_{u \in \{n,a,g,b\}}$ in each layer with equal values, i.e., $\alpha^{(m)}_{j} = 0.25$ for $j = 1, \cdots, d$ and $m \in \{n, a, g, b\}$. In the training phase, the value of each component of $\{\alpha^{(u)}\}_{u \in \{n,a,g,b\}}$ changes between 0 and 1. On each dataset, we investigate the learned optimal weights on average at different layers of GatedGCN. Particularly, we collect the learned weights of each normalization method in each layer and calculate the averaged weight of each normalization method over all of the $d$ entries of $\alpha^{(u)}$. We show the learned average weights in Figure 2. As can be observed, the average learned weights of each normalization method not only change across different datasets but also vary across different layers. This implies that different layers prefer different normalization methods in order to yield good performance. We can also observe that the weights on GNg are larger than the others on node classification tasks, while GNb is more important on the other tasks. 
Our proposed AGN has the ability to automatically choose the suitable normalization method for a specific task.\nEvaluation on Selected Normalization Methods via AGN. To further evaluate the performance of the selected normalization methods, we select the two best-performing normalization methods, combine them into a new normalization method as in Equation (11), and conduct experiments on each dataset. The experimental results are listed in Table 5. We can see that the combined normalization method obtains results comparable with the best normalization method. Therefore, these results show that the learnt weights indicate whether the corresponding normalization method is suitable for the current task.\nTraining Loss and Test Accuracy Curves vs. Iteration Steps. To show the effect of different normalization methods, we draw the curves of training loss and test results with respect to the iteration steps in Figures 4 and 5 in Appendix C.3. We see that when a proper normalization method is used, the training loss converges faster and better test accuracy can be obtained." }, { "heading": "5 CONCLUSIONS", "text": "We formulated four normalization methods for training GNNs at different scales: node-wise normalization, adjacency-wise normalization, graph-wise normalization, and batch-wise normalization. Particularly, adjacency-wise normalization and graph-wise normalization are graph-aware normalization methods, which are designed with respect to the local and the global structure of the graph, respectively. Moreover, we proposed a novel optimization framework, called Automatic Graph Normalization, to learn attentive graph normalization by optimizing an attentively weighted combination of multiple graph normalization methods. We conducted extensive experiments on seven benchmark datasets on different tasks and confirmed that the graph-aware normalization methods and the automatically learned graph normalization method lead to promising results and that the learned optimal weights suggest more appropriate normalization methods for specific tasks." }, { "heading": "A GRAPH NEURAL NETWORKS", "text": "Graph neural networks (Kipf & Welling, 2016; Velickovic et al., 2018) are effective in learning graph representations. For a node $v$, GNNs update its representation by utilizing the node itself and its adjacent neighbors. To capture high-order structural information of the graph, GNNs learn a new feature representation of each node over multiple layers. In a layer of a GNN, each node $v$ sends a "message", i.e., its feature representation, to the nodes in $\mathcal{N}(v)$; then the feature representation of $v$ is updated according to all collected information from the neighborhood $\mathcal{N}(v)$. Mathematically, at the $\ell$-th layer, we have\n$h^{\ell+1}_{v} = \psi^{\ell+1}\big(\mathcal{C}\{h^{\ell}_{v}, \mathcal{M}\{\phi^{\ell+1}(h^{\ell}_{u}) \mid u \in \mathcal{N}(v)\}\}\big), \quad (12)$\nwhere $h^{\ell}_{u}$ denotes the feature vector at the $\ell$-th layer of node $u \in \mathcal{N}(v)$, $\psi$ and $\phi$ are learnable functions, $\mathcal{M}$ is the aggregation function for nodes in $\mathcal{N}(v)$, and $\mathcal{C}$ is utilized to combine the features of node $v$ and its neighbors. In particular, the initial node representation $h^{0}_{v} = x_{v}$ is the original input feature vector of node $v$.\nGraph ConvNet (Kipf & Welling, 2016) treats each neighbor node $u$ equally to update the representation of a node $v$ as:\n$h^{\ell+1}_{v} = \mathrm{ReLU}\Big(\frac{1}{\deg_{v}}\sum_{u \in \mathcal{N}(v)} W^{\ell} h^{\ell}_{u}\Big), \quad (13)$\nwhere $W^{\ell} \in \mathbb{R}^{d \times d}$ and $\deg_{v}$ is the in-degree of node $v$. One graph convolutional layer only considers immediate neighbors. To use neighbors within $k$ hops, in practice, multiple GCN layers are stacked. All neighbors contribute equally in the information passing of GCN. 
One key issue of GCN is the over-smoothing problem, which can be partially eased by residual shortcuts across layers. Another effective approach is to use spatial GNNs, such as GAT (Velickovic et al., 2018) and GatedGCN (Bresson & Laurent, 2017).\nGAT (Velickovic et al., 2018) learns to assign different weights to adjacent nodes by adopting an attention mechanism. In GAT, the feature representation of $v$ is updated by:\n$h^{\ell+1}_{v} = \sigma\Big(\sum_{u \in \mathcal{N}(v)} a^{\ell}_{u,v} W^{\ell} h^{\ell}_{u}\Big), \quad (14)$\nwhere $a^{\ell}_{u,v}$ measures the contribution of node $u$'s feature to node $v$, defined as follows:\n$a^{\ell}_{u,v} = \frac{\exp\big(g(\alpha^{T}[W^{\ell} h^{\ell}_{u} \,\|\, W^{\ell} h^{\ell}_{v}])\big)}{\sum_{k \in \mathcal{N}(v)} \exp\big(g(\alpha^{T}[W^{\ell} h^{\ell}_{k} \,\|\, W^{\ell} h^{\ell}_{v}])\big)}, \quad (15)$\nwhere $g(\cdot)$ is a LeakyReLU activation function, $\alpha$ is a weight vector and $\|$ is the concatenation operation. Similar to Vaswani et al. (2017), to expand GAT's expressive capability and stabilize the learning process, multi-head attention is employed in GAT. GAT has achieved an impressive improvement over GCN on node classification tasks. However, as the number of graph convolutional layers increases, node representations will converge to the same value. Unfortunately, the over-smoothing problem still exists.\nTo mitigate the over-smoothing problem, GatedGCN (Bresson & Laurent, 2017) integrates the gating mechanism (Hochreiter & Schmidhuber, 1997), batch normalization (Ioffe & Szegedy, 2015), and residual connections (He et al., 2016) into the network design. Unlike GCN, which treats all edges equally, GatedGCN uses an edge-gating mechanism to give different weights to different nodes. Thus, for node $v$, the formulation for updating the feature representation is:\n$h^{\ell+1}_{v} = h^{\ell}_{v} + \mathrm{ReLU}\Big(\mathrm{BN}\Big(W^{\ell} h^{\ell}_{v} + \sum_{u \in \mathcal{N}(v)} e^{\ell}_{v,u} \odot U^{\ell} h^{\ell}_{u}\Big)\Big), \quad (16)$\nwhere $W^{\ell}, U^{\ell} \in \mathbb{R}^{d \times d}$, $\odot$ is the Hadamard product, and the edge gates $e^{\ell}_{v,u}$ are defined as:\n$e^{\ell}_{v,u} = \frac{\sigma(\hat{e}^{\ell}_{v,u})}{\sum_{u' \in \mathcal{N}(v)} \sigma(\hat{e}^{\ell}_{v,u'}) + c}, \quad \hat{e}^{\ell}_{v,u} = \hat{e}^{\ell-1}_{v,u} + \mathrm{ReLU}\big(\mathrm{BN}\big(A^{\ell} h^{\ell-1}_{v} + B^{\ell} h^{\ell-1}_{u} + C^{\ell} \hat{e}^{\ell-1}_{v,u}\big)\big), \quad (17)$\nwhere $\sigma(\cdot)$ is a sigmoid function, $c$ is a small fixed constant, and $A^{\ell}, B^{\ell}, C^{\ell} \in \mathbb{R}^{d \times d}$. Different from traditional GNNs, GatedGCN explicitly considers edge features $\hat{e}_{v,u}$ at each layer." }, { "heading": "B NORMALIZATION METHODS", "text": "B.1 BATCH NORMALIZATION\nBatch normalization (BN) (Ioffe & Szegedy, 2015) has become one of the critical components in training deep neural networks; it normalizes the features by using the first order and second order statistics computed within a mini-batch. BN can reduce the internal covariate shift problem and accelerate the training process. We briefly introduce the formulation of BN. First, let $H = \{h_1, h_2, ..., h_m\} \in \mathbb{R}^{d \times m}$ denote the input of a normalization layer, where $m$ is the batch size and $h_i$ represents a sample. Then, $\mu^{(m)} \in \mathbb{R}^{d}$ and $\sigma^{(m)} \in \mathbb{R}^{d}$ denote the mean vector and the standard deviation vector of the $m$ samples in $H$, respectively. BN normalizes each dimension of the features using $\mu^{(m)}$ and $\sigma^{(m)}$ as:\n$\hat{h} = \gamma (h - \mu^{(m)}) ./ \sigma^{(m)} + \beta,$\n$\mu^{(m)} = \frac{1}{m}\sum_{j=1}^{m} h_{j}, \quad \sigma^{(m)}_{i} = \sqrt{\frac{1}{m}\sum_{j=1}^{m}\big(h_{ij} - \mu^{(m)}_{i}\big)^{2}},$\n$\mu \leftarrow \alpha \mu + (1 - \alpha)\mu^{(m)}, \quad \sigma^{2} \leftarrow \alpha \sigma^{2} + (1 - \alpha)(\sigma^{(m)})^{2}, \quad (18)$\nwhere $./$ means element-wise division, and $\gamma$ and $\beta$ are trainable scale and shift parameters, respectively. In Equation (18), $\mu$ and $\sigma$ denote the running mean and standard deviation vectors that approximate the mean and standard deviation of the dataset; during testing, they are used for normalization.\nB.2 LAYER NORMALIZATION\nLayer Normalization (LN) (Ba et al., 2016) is widely adopted in Natural Language Processing; especially, the Transformer (Vaswani et al., 2017) incorporates LN as a standard normalization scheme. 
BN computes a mean and a variance over a mini-batch, and the stability of training is highly dependent on these two statistics. Shen et al. (2020) have shown that a Transformer with BN leads to poor performance because of the large fluctuations of batch statistics throughout training. Layer normalization instead computes the mean and variance along the feature dimension for each training case. Different from BN, for each sample $h_j \in \mathbb{R}^{d}$, LN computes the mean $\mu^{(L)}_{j}$ and standard deviation $\sigma^{(L)}_{j}$ across the feature dimension. The normalization equations of LN are as follows:\n$\hat{h}_{j} = \gamma \odot \frac{h_{j} - \mu^{(L)}_{j}\mathbf{1}}{\sigma^{(L)}_{j}} + \beta,$\n$\mu^{(L)}_{j} = \frac{1}{d}\sum_{i=1}^{d} h_{ij}, \quad \sigma^{(L)}_{j} = \sqrt{\frac{1}{d}\sum_{i=1}^{d}\big(h_{ij} - \mu^{(L)}_{j}\big)^{2}}, \quad (19)$\nwhere $\hat{h}_{j} \in \mathbb{R}^{d}$ is the normalized feature vector, $\mathbf{1} \in \mathbb{R}^{d}$ is a $d$-dimensional vector of ones, and $\gamma \in \mathbb{R}^{d}$ and $\beta \in \mathbb{R}^{d}$ are scale and shift parameters of dimension $d$.\nOverall, there are many normalization approaches (Ulyanov et al., 2016; Wu & He, 2018; Shen et al., 2020; Dimitriou & Arandjelovic, 2020). Shen et al. (2020) indicate that BN is suitable for computer vision tasks, while LN achieves better results in NLP. The performance of a normalization approach may vary a lot across different tasks. Thus, it is very important to investigate the performance of normalization approaches in GNNs." }, { "heading": "C DATASETS AND EXPERIMENTAL DETAILS", "text": "C.1 DATASET STATISTICS\nTable C.1 summarizes the statistics of the datasets used for our experiments.\nC.2 SROIE\nFor a receipt, each text bounding box (bbox) is viewed as a node of a graph. The position and the attributes of the bounding box, together with the corresponding text, are used as the node features. To describe the relationships among all the text bounding boxes on a receipt, we consider the distance between two nodes $v_i$ and $v_j$: if the distance is less than a threshold $\theta$, we connect $v_i$ and $v_j$ by an edge $e_{i,j}$. Since the relative positions of two text bounding boxes are important for node classification, we encode the relative coordinates of $v_i$ and $v_j$ to represent the edge $e_{i,j}$. In this way, the information extraction task on a receipt can be treated as a node classification task on a graph. Our goal is to label each node (i.e., text bounding box) with one of five classes: "Company", "Date", "Address", "Total" and "Other". Since GatedGCN explicitly exploits edge features and has achieved state-of-the-art performance on various tasks, we use GatedGCN with 8 GCN layers for this task.\nC.3 TRAINING AND TEST CURVES FOR NODE-WISE, ADJACENCY-WISE, GRAPH-WISE, BATCH-WISE NORMALIZATION AND AUTO GRAPH NORMALIZATION." }, { "heading": "D ACKNOWLEDGEMENT", "text": "We would like to thank Vijay et al. for releasing their benchmarking code for our research. We also want to thank the DGL team for their excellent toolbox." } ]
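To complement the GatedGCN equations (16)-(17) in Appendix A, the following is a minimal dense-adjacency sketch of one GatedGCN layer. It is an illustrative reimplementation under our own simplifying assumptions (a dense (n, n) adjacency matrix, BatchNorm applied over the flattened edge tensor, and the current features used in both updates), not the benchmark code used in the experiments.

```python
import torch
import torch.nn as nn

class GatedGCNLayer(nn.Module):
    """One GatedGCN layer with edge gates, BN and residuals (Eqs. 16-17) -- sketch."""

    def __init__(self, d):
        super().__init__()
        self.W, self.U = nn.Linear(d, d), nn.Linear(d, d)
        self.A, self.B, self.C = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.bn_h, self.bn_e = nn.BatchNorm1d(d), nn.BatchNorm1d(d)

    def forward(self, h, e_hat, adj, c=1e-6):
        # h: (n, d) node features; e_hat: (n, n, d) edge features, row v, column u;
        # adj: (n, n) binary adjacency matrix.
        n, d = h.shape
        pre = self.A(h)[:, None] + self.B(h)[None, :] + self.C(e_hat)   # (n, n, d)
        e_hat_new = e_hat + torch.relu(self.bn_e(pre.reshape(-1, d))).reshape(n, n, d)
        gates = torch.sigmoid(e_hat_new) * adj[..., None]               # mask non-edges
        gates = gates / (gates.sum(dim=1, keepdim=True) + c)            # Eq. (17) gates
        msg = (gates * self.U(h)[None, :]).sum(dim=1)                   # sum over neighbors u
        return h + torch.relu(self.bn_h(self.W(h) + msg)), e_hat_new    # Eq. (16) update
```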
2020
null
SP:3647115d0449f579f5ad7305103ecb553046d613
[ "Learning graph-level representations with only labels has been explored by many works. However, it's not easy to annotate every graph. This paper applies the ideas from semi-supervised classification task to improve the representation quality learned by graph neural network. Specifically the proposed solution combines several kinds of existing techniques including diffusion graph augmentation, mean teacher consistency, debiased contrastive loss and pseudo class consistency. Finally they are combined together to act as a regularization term by utilizing the unlabelled data. From this point of view, the novelty of this work is incremental, but it's still an interesting work for improving graph-level representations." ]
How to discriminatively vectorize graphs is a fundamental challenge that has attracted increasing attention in recent years. Inspired by the recent success of unsupervised contrastive learning, we aim to learn graph-level representations in an unsupervised manner. Specifically, we propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD), which iteratively performs teacher-student distillation with graph augmentations. Different from conventional knowledge distillation, IGSD constructs the teacher with an exponential moving average of the student model and distills knowledge from itself. The intuition behind IGSD is to predict the teacher network representation of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and unsupervised contrastive losses. Finally, we show that fine-tuning the IGSD-trained models with self-training can further improve the graph representation power. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which well validates the superiority of IGSD.
[]
[ { "authors": [ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "There are many consistent explanations of unlabeled data: Why you should average", "venue": "arXiv preprint arXiv:1806.05594,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Karsten M Borgwardt", "Hans-Peter Kriegel" ], "title": "Shortest-path kernels on graphs", "venue": "In Fifth IEEE international conference on data mining (ICDM’05), pp. 8–pp. IEEE,", "year": 2005 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "Libsvm: A library for support vector machines", "venue": "ACM transactions on intelligent systems and technology (TIST),", "year": 2011 }, { "authors": [ "Dexiong Chen", "Laurent Jacob", "Julien Mairal" ], "title": "Convolutional kernel networks for graph-structured data", "venue": "arXiv preprint arXiv:2003.05189,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Thomas Gärtner", "Peter Flach", "Stefan Wrobel" ], "title": "On graph kernels: Hardness results and efficient alternatives", "venue": "In Learning theory and kernel machines,", "year": 2003 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H. 
Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Koray Kavukcuoglu", "Rémi Munos", "Michal Valko" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaveh Hassani", "Amir Hosein Khasahmadi" ], "title": "Contrastive multi-view representation learning on graphs", "venue": "In Proceedings of International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2020. doi: 10.1109/cvpr42600.2020.00975. URL http: //dx.doi.org/10.1109/cvpr42600.2020.00975", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Weihua Hu", "Bowen Liu", "Joseph Gomes", "Marinka Zitnik", "Percy Liang", "Vijay Pande", "Jure Leskovec" ], "title": "Strategies for pre-training graph neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hisashi Kashima", "Koji Tsuda", "Akihiro Inokuchi" ], "title": "Marginalized kernels between labeled graphs", "venue": "In Proceedings of the 20th international conference on machine learning", "year": 2003 }, { "authors": [ "Kristian Kersting", "Nils M. Kriege", "Christopher Morris", "Petra Mutzel", "Marion Neumann" ], "title": "Benchmark data sets for graph kernels, 2016", "venue": "URL http://graphkernels.cs.tu-dortmund. 
de", "year": 2016 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Stefan Weißenberger", "Stephan Günnemann" ], "title": "Diffusion improves graph learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Risi Kondor", "Horace Pan" ], "title": "The multiscale laplacian graph kernel", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on challenges in representation learning, ICML,", "year": 2013 }, { "authors": [ "Takeru Miyato", "Shin-Ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Hossein Mobahi", "Mehrdad Farajtabar", "Peter L Bartlett" ], "title": "Self-distillation amplifies regularization in hilbert space", "venue": "arXiv preprint arXiv:2002.05715,", "year": 2020 }, { "authors": [ "Annamalai Narayanan", "Mahinthan Chandramohan", "Rajasekar Venkatesan", "Lihui Chen", "Yang Liu", "Shantanu Jaiswal" ], "title": "graph2vec: Learning distributed representations of graphs", "venue": "arXiv preprint arXiv:1707.05005,", "year": 2017 }, { "authors": [ "Mark EJ Newman", "Michelle Girvan" ], "title": "Finding and evaluating community structure in networks", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin Raffel", "Ekin D. Cubuk", "Ian J. 
Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms, 2018", "venue": null, "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Jiezhong Qiu", "Qibin Chen", "Yuxiao Dong", "Jing Zhang", "Hongxia Yang", "Ming Ding", "Kuansan Wang", "Jie Tang" ], "title": "Gcc: Graph contrastive coding for graph neural network pre-training", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Chuck Rosenberg", "Martial Hebert", "Henry Schneiderman" ], "title": "Semi-supervised self-training of object detection models", "venue": null, "year": 2005 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan Van Leeuwen", "Kurt Mehlhorn", "Karsten M Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Fan-Yun Sun", "Jordan Hoffmann", "Vikas Verma", "Jian Tang" ], "title": "Infograph: Unsupervised and semisupervised graph-level representation learning via mutual information maximization", "venue": null, "year": 1908 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning", "venue": null, "year": 2017 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "arXiv preprint arXiv:1809.10341,", "year": 2018 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David Lopez-Paz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for 
sets", "venue": null, "year": 2015 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Pinar Yanardag", "SVN Vishwanathan" ], "title": "Deep graph kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Yuning You", "Tianlong Chen", "Zhangyang Wang", "Yang Shen" ], "title": "When does self-supervision help graph convolutional networks", "venue": "arXiv preprint arXiv:2006.09136,", "year": 2020 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization, 2017", "venue": null, "year": 2017 }, { "authors": [ "Tong Zhao", "Yozen Liu", "Leonardo Neves", "Oliver Woodford", "Meng Jiang", "Neil Shah" ], "title": "Data augmentation for graph neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yadi Zhou", "Fei Wang", "Jian Tang", "Ruth Nussinov", "Feixiong Cheng" ], "title": "Artificial intelligence in covid-19 drug repurposing", "venue": "The Lancet Digital Health,", "year": 2020 }, { "authors": [ "Kondor", "Pan" ], "title": "2016) due to some procedures like path extraction and recursive subgraph construction. Recently, there has been increasing interest in Graph Neural Network (GNN) approaches for graph representation learning and many GNN variants have been proposed (Ramakrishnan et", "venue": "Xu et al.,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs are ubiquitous representations encoding relational structures across various domains. Learning low-dimensional vector representations of graphs is critical in various domains ranging from social science (Newman & Girvan, 2004) to bioinformatics (Duvenaud et al., 2015; Zhou et al., 2020). Many graph neural networks (GNNs) (Gilmer et al., 2017; Kipf & Welling, 2016; Xu et al., 2018) have been proposed to learn node and graph representations by aggregating information from every node’s neighbors via non-linear transformation and aggregation functions. However, the key limitation of existing GNN architectures is that they often require a huge amount of labeled data to be competitive but annotating graphs like drug-target interaction networks is challenging since it needs domainspecific expertise. Therefore, unsupervised learning on graphs has been long studied, such as graph kernels (Shervashidze et al., 2011) and matrix-factorization approaches (Belkin & Niyogi, 2002).\nInspired by the recent success of unsupervised representation learning in various domains like images (Chen et al., 2020b; He et al., 2020) and texts (Radford et al., 2018), most related works in the graph domain either follow the pipeline of unsupervised pretraining (followed by fine-tuning) or InfoMax principle (Hjelm et al., 2018). The former often needs meticulous designs of pretext tasks (Hu et al., 2019; You et al., 2020) while the latter is dominant in unsupervised graph representation learning, which trains encoders to maximize the mutual information (MI) between the representations of the global graph and local patches (such as subgraphs) (Veličković et al., 2018; Sun et al., 2019; Hassani & Khasahmadi, 2020). However, MI-based approaches usually need to sample subgraphs as local views to contrast with global graphs. And they usually require an additional discriminator for scoring local-global pairs and negative samples, which is computationally prohibitive (Tschannen et al., 2019). Besides, the performance is also very sensitive to the choice of encoders and MI estimators (Tschannen et al., 2019). Moreover, MI-based approaches cannot be handily extended to the semi-supervised setting since local subgraphs lack labels that can be utilized for training. Therefore, we are seeking an approach that learns the entire graph representation by contrasting the whole graph directly without the need of MI estimation, discriminator and subgraph sampling.\nMotivated by recent progress on contrastive learning, we propose the Iterative Graph Self-Distillation (IGSD), a teacher-student framework to learn graph representations by contrasting graph instances directly. The high-level idea of IGSD is based on graph contrastive learning where we pull sim-\nilar graphs together and push dissimilar graph away. However, the performance of conventional contrastive learning largely depends on how negative samples are selected. To learn discriminative representations and avoid collapsing to trivial solutions, a large set of negative samples (He et al., 2020; Chen et al., 2020b) or a special mining strategy (Schroff et al., 2015; He et al., 2020) are necessary. 
In order to alleviate the dependency on negative sample mining and still be able to learn discriminative graph representations, we propose to use self-distillation as a strong regularization to guide the graph representation learning.\nIn the IGSD framework, graph instances are augmented as several views to be encoded and projected into a latent space where we define a similarity metric for consistency-based training. The parameters of the teacher network are iteratively updated as an exponential moving average of the student network parameters, allowing knowledge transfer between them. As merely a small amount of labeled data is often available in many real-world applications, we further extend IGSD to the semi-supervised setting such that it can effectively utilize graph-level labels while considering arbitrary numbers of positive pairs belonging to the same class. Moreover, in order to leverage the information from pseudo-labels with high confidence, we develop a self-training algorithm based on the supervised contrastive loss for fine-tuning.\nWe experiment with real-world datasets of various scales and compare the performance of IGSD with state-of-the-art graph representation learning methods. Experimental results show that IGSD achieves competitive performance in both unsupervised and semi-supervised settings with different encoders and data augmentation choices. With the help of self-training, our performance can exceed state-of-the-art baselines by a large margin.\nTo summarize, we make the following contributions in this paper:\n• We propose a self-distillation framework called IGSD for unsupervised graph-level representation learning, where teacher-student distillation is performed by contrasting graph pairs under different augmented views.\n• We further extend IGSD to the semi-supervised scenario, where the labeled data are utilized effectively with the supervised contrastive loss and self-training.\n• We empirically show that IGSD surpasses state-of-the-art methods in semi-supervised graph classification and molecular property prediction tasks and achieves performance competitive with state-of-the-art approaches in unsupervised graph classification tasks." }, { "heading": "2 RELATED WORK", "text": "Contrastive Learning Modern unsupervised learning in the form of contrastive learning can be categorized into two types: context-instance contrast and context-context contrast (Liu et al., 2020). The context-instance contrast, or so-called global-local contrast, focuses on modeling the belonging relationship between the local feature of a sample and its global context representation. Most unsupervised learning models on graphs, like DGI (Veličković et al., 2018), InfoGraph (Sun et al., 2019) and CMC-Graph (Hassani & Khasahmadi, 2020), fall into this category, following the InfoMax principle to maximize the mutual information (MI) between the input and its representation. However, estimating MI is notoriously hard in MI-based contrastive learning, and in practice a tractable lower bound on this quantity is maximized instead. Moreover, maximizing tighter bounds on MI can result in worse representations without stronger inductive biases in sampling strategies, encoder architecture and parametrization of MI estimators (Tschannen et al., 2019). Besides, the intricacies of negative sampling in MI-based approaches impose key research challenges like an improper amount of negative samples or biased negative sampling (Tschannen et al., 2019; Chuang et al., 2020).
Another line of contrastive learning approaches, called context-context contrast, directly studies the relationships between the global representations of different samples, as metric learning does. For instance, a recently proposed model, BYOL (Grill et al., 2020), bootstraps the representations of whole images directly. Focusing on global representations between samples and corresponding augmented views also allows instance-level supervision to be incorporated naturally, e.g. by introducing a supervised contrastive loss (Khosla et al., 2020) into the framework for learning powerful representations. Graph Contrastive Coding (GCC) (Qiu et al., 2020) is a pioneer in leveraging instance discrimination as the pretext task for structural information pre-training. However, our work is fundamentally different from theirs. GCC focuses on structural similarity to find common and transferable structural patterns across different graph datasets, and the contrastive scheme is done through subgraph instance discrimination. On the contrary, our model aims at learning graph-level representations by directly contrasting graph instances, such that data augmentation strategies and graph labels can be utilized naturally and effectively.\nKnowledge Distillation Knowledge distillation (Hinton et al., 2015) is a method for transferring knowledge from one architecture to another, allowing model compression and inductive bias transfer. Self-distillation (Furlanello et al., 2018) is the special case in which the two architectures are identical; it can iteratively modify regularization and reduce over-fitting if performed for a suitable number of rounds (Mobahi et al., 2020). However, these methods often focus on closing the gap between the predictive results of student and teacher rather than defining a similarity loss in the latent space for contrastive learning.\nSemi-supervised Learning Modern semi-supervised learning can be categorized into two kinds: multi-task learning and consistency training between two separate networks. Most widely used semi-supervised learning methods take the form of multi-task learning: $\arg\min_\theta \mathcal{L}_l(\mathcal{D}_l, \theta) + w\,\mathcal{L}_u(\mathcal{D}_u, \theta)$ on labeled data $\mathcal{D}_l$ and unlabeled data $\mathcal{D}_u$. By regularizing the learning process with unlabeled data, the decision boundary becomes more plausible. Another mainstream of semi-supervised learning lies in introducing a student network and a teacher network and enforcing consistency between them (Tarvainen & Valpola, 2017; Miyato et al., 2019; Lee, 2013). It has been shown that semi-supervised learning performance can be greatly improved via unsupervised pre-training of a (big) model, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge (Chen et al., 2020c). However, whether task-agnostic self-distillation would benefit semi-supervised learning is still underexplored." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 FORMULATION", "text": "Unsupervised Graph Representation Learning Given a set of unlabeled graphs $\mathcal{G} = \{G_i\}_{i=1}^{N}$, we aim at learning a low-dimensional representation of every graph $G_i \in \mathcal{G}$ favorable for downstream tasks like graph classification.\nSemi-supervised Graph Representation Learning Consider a whole dataset $\mathcal{G} = \mathcal{G}_L \cup \mathcal{G}_U$ composed of labeled data $\mathcal{G}_L = \{(G_i, y_i)\}_{i=1}^{l}$ and unlabeled data $\mathcal{G}_U = \{G_i\}_{i=l+1}^{l+u}$ (usually $u \gg l$); our goal is to learn a model that can make predictions on graph labels for unseen graphs.
And with $K$ augmentations, we get $\mathcal{G}'_L = \{(G'_k, y'_k)\}_{k=1}^{Kl}$ and $\mathcal{G}'_U = \{G'_k\}_{k=l+1}^{K(l+u)}$ as our training data." }, { "heading": "3.2 GRAPH REPRESENTATION LEARNING", "text": "We represent a graph instance as $G(\mathcal{V}, \mathcal{E})$ with the node set $\mathcal{V}$ and the edge set $\mathcal{E}$. The dominant ways of graph representation learning are graph neural networks with neural message passing mechanisms (Hamilton et al., 2017): for every node $v \in \mathcal{V}$, the node representation $h_v^k$ is iteratively computed from the features of its neighbor nodes $\mathcal{N}(v)$ using a differentiable aggregation function. Specifically, at iteration $k$ we get the node embedding as:\n$$h_v^k = \sigma\left(W^k \cdot \mathrm{CONCAT}\left(h_v^{k-1}, \mathrm{AGGREGATE}^k\left(\left\{h_u^{k-1}, \forall u \in \mathcal{N}(v)\right\}\right)\right)\right) \quad (1)$$\nThen the graph-level representations can be attained by aggregating all node representations using a readout function like summation or set2set pooling (Vinyals et al., 2015)." }, { "heading": "3.3 GRAPH DATA AUGMENTATION", "text": "It has been shown that the learning performance of GNNs can be improved via graph diffusion, which serves as a homophily-based denoising filter on both features and edges in real graphs (Klicpera et al., 2019). The transformed graphs can also serve as effective augmented views in contrastive learning (Hassani & Khasahmadi, 2020). Inspired by that, we transform a graph $G$ with transition matrix $T$ via graph diffusion and sparsification, $S = \sum_{k=0}^{\infty} \theta_k T^k$, into a new graph with adjacency matrix $S$ as an augmented view in our framework. While there are many design choices for the coefficients $\theta_k$, such as the heat kernel, we employ Personalized PageRank (PPR) with $\theta_k^{\mathrm{PPR}} = \alpha(1-\alpha)^k$ due to its superior empirical performance (Hassani & Khasahmadi, 2020). As another augmentation choice, we randomly remove edges of graphs to attain corrupted graphs as augmented views, to validate the robustness of models to different augmentation choices." },
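To make the two augmentation strategies above concrete, the following minimal Python sketch computes the PPR diffusion matrix in closed form (using the identity $\sum_{k=0}^{\infty}\alpha(1-\alpha)^k T^k = \alpha(I-(1-\alpha)T)^{-1}$ for the symmetric transition matrix) and performs random edge removal. The function names, the choice of symmetric normalization, and the sparsification threshold are our own illustrative assumptions, not specifics from the paper.

```python
import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float = 0.2, eps: float = 1e-4) -> np.ndarray:
    """PPR diffusion S = sum_k alpha (1 - alpha)^k T^k, computed in closed
    form as alpha (I - (1 - alpha) T)^{-1}, followed by sparsification."""
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
    t = d_inv_sqrt @ a_tilde @ d_inv_sqrt              # symmetric transition matrix T
    s = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * t)
    s[s < eps] = 0.0                                   # drop small entries (sparsify)
    return s

def drop_edges(adj: np.ndarray, p: float = 0.1, seed: int = 0) -> np.ndarray:
    """Randomly remove a fraction p of the (undirected) edges."""
    rng = np.random.default_rng(seed)
    keep = rng.random(adj.shape) > p
    keep = np.triu(keep, 1)
    keep = keep | keep.T                               # keep the adjacency symmetric
    return adj * keep
```

The closed-form inverse is valid because the eigenvalues of $(1-\alpha)T$ lie strictly inside the unit circle for $\alpha > 0$; for large graphs an iterative solver would be a reasonable substitute.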
{ "heading": "4 ITERATIVE GRAPH SELF-DISTILLATION", "text": "Intuitively, the goal of contrastive learning on graphs is to learn graph representations that are close in the metric space for positive pairs (graphs with the same labels) and far apart for negative pairs (graphs with different labels). To achieve this goal, IGSD employs teacher-student distillation to iteratively refine representations by contrasting latent representations embedded by two networks, and uses an additional predictor and EMA update to avoid collapsing to trivial solutions. Overall, IGSD encourages the closeness of augmented views from the same graph instances while pushing apart the representations from different ones." }, { "heading": "4.1 ITERATIVE GRAPH SELF-DISTILLATION FRAMEWORK", "text": "In IGSD, we introduce a teacher-student architecture that comprises two networks of similar structure, composed of an encoder $f_\theta$, a projector $g_\theta$ and a predictor $h_\theta$. We denote the components of the teacher network and the student network as $f_{\theta'}, g_{\theta'}$ and $f_\theta, g_\theta, h_\theta$ respectively.\nThe overview of IGSD is illustrated in Figure 1. In IGSD, the procedure of contrastive learning on negative pairs is as follows: we first augment the original input graph $G_j$ to get augmented view(s) $G'_j$. Then $G'_j$ and a different graph instance $G_i$ are fed respectively into the two encoders $f_\theta, f_{\theta'}$ for extracting graph representations $h, h' = f_\theta(G_i), f_{\theta'}(G'_j)$ with iterative message passing in Eqn. (1) and readout functions. The following projectors $g_\theta, g_{\theta'}$ transform graph representations into projections $z, z'$ via $z = g_\theta(h) = W^{(2)}\sigma(W^{(1)}h)$ and $z' = g_{\theta'}(h') = W'^{(2)}\sigma(W'^{(1)}h')$, where $\sigma$ denotes a ReLU nonlinearity.¹ To prevent collapsing into a trivial solution (Grill et al., 2020), a specialized predictor is used in the student network for attaining the prediction $h_\theta(z) = W_h^{(2)}\sigma(W_h^{(1)}z)$ of the projection $z$. For positive pairs, we follow the same procedure, except that the original and augmented views of the same graph are fed into the two networks respectively.\nTo contrast latents $h_\theta(z)$ and $z'$, we use the L2 norm in the latent space to approximate the semantic distance in the input space, and the consistency loss can be defined as the mean square error between the normalized prediction $h_\theta(z)$ and projection $z'$. By passing two graph instances $G_i$ and $G_j$ symmetrically, we can obtain the overall consistency loss:\n$$\mathcal{L}^{con}(G_i, G_j) = \left\|h_\theta(z_i) - z'_j\right\|_2^2 + \left\|h_\theta(z'_i) - z_j\right\|_2^2 \quad (2)$$\nWith the consistency loss, the teacher network provides a regression target to train the student network, and its parameters $\theta'$ are updated as an exponential moving average (EMA) of the student parameters $\theta$ after the weights of the student network have been updated using gradient descent:\n$$\theta'_t \leftarrow \tau\theta'_{t-1} + (1-\tau)\theta_t \quad (3)$$\nWith the above iterative self-distillation procedure, we can aggregate information by averaging model weights over each training step instead of using the final weights directly (Athiwaratkun et al., 2018). It should be noted that maintaining a slow-moving average network is also employed in some models like MoCo (He et al., 2020), with different motivations: MoCo uses an EMA of the encoder to update the momentum encoder, ensuring the consistency of dictionary keys in the memory bank. On the other hand, IGSD uses a moving average network to produce prediction targets, enforcing the consistency of teacher and student for training the student network.\n¹Although IGSD could directly predict the representations without projections, previous contrastive learning work (Chen et al., 2020b) in the image domain has shown that using projections improves performance empirically. We include the experimental results validating the effect of projectors in Appendix A.3." }, { "heading": "4.2 UNSUPERVISED LEARNING WITH IGSD", "text": "In IGSD, to contrast the anchor $G_i$ with other graph instances $G_j$ (i.e. negative samples), we employ the following unsupervised InfoNCE objective (Oord et al., 2018):\n$$\mathcal{L}^{unsup} = -\mathbb{E}_{G_i \sim \mathcal{G}}\left[\log\frac{\exp\left(-\mathcal{L}^{con}(G_i, G_i)\right)}{\exp\left(-\mathcal{L}^{con}(G_i, G_i)\right) + \sum_{j=1}^{N-1}\mathbb{I}_{i \neq j}\cdot\exp\left(-\mathcal{L}^{con}(G_i, G_j)\right)}\right] \quad (4)$$\nAt inference time, as semantic interpolations on samples, labels and latents result in better representations and can improve learning performance greatly (Zhang et al., 2017; Verma et al., 2019; Berthelot et al., 2019), we obtain the graph representation $\tilde{h}$ by interpolating the latent representations $h = f_\theta(G)$ and $h' = f_{\theta'}(G)$ with the Mixup function $\mathrm{Mix}_\lambda(a, b) = \lambda \cdot a + (1-\lambda) \cdot b$:\n$$\tilde{h} = \mathrm{Mix}_\lambda(h, h') \quad (5)$$" },
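A minimal PyTorch sketch of the core computations in Eqns. (2)-(5): the symmetric consistency loss, the EMA teacher update, the InfoNCE objective built from pairwise consistency losses, and the inference-time interpolation. The function names and the way predictions/targets are paired are illustrative assumptions; the encoders, projectors and predictor are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def consistency_loss(pred_a, target_a, pred_b, target_b):
    """Eqn (2): symmetric squared L2 distance between normalized student
    predictions and their (detached) teacher-projection targets."""
    d1 = (F.normalize(pred_a, dim=-1) - F.normalize(target_a, dim=-1)).pow(2).sum(-1)
    d2 = (F.normalize(pred_b, dim=-1) - F.normalize(target_b, dim=-1)).pow(2).sum(-1)
    return d1 + d2

@torch.no_grad()
def ema_update(teacher, student, tau=0.99):
    """Eqn (3): teacher parameters follow an EMA of the student parameters."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(tau).add_((1.0 - tau) * p_s)

def unsup_infonce(pairwise_con):
    """Eqn (4): pairwise_con[i, j] = L_con(G_i, G_j); the diagonal holds the
    positive pairs, so a lower consistency loss acts as a higher logit."""
    logits = -pairwise_con
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def interpolate_representation(h_student, h_teacher, lam=0.5):
    """Eqn (5): Mixup-style interpolation of student/teacher latents at inference."""
    return lam * h_student + (1.0 - lam) * h_teacher
```

Note that Eqn. (4) reduces exactly to a cross-entropy over the negated pairwise consistency losses with the diagonal as the target class, which is what the sketch exploits.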
{ "heading": "4.3 SEMI-SUPERVISED LEARNING WITH IGSD", "text": "To bridge the gap between unsupervised pretraining and downstream tasks, we extend our model to the semi-supervised setting. In this scenario, it is straightforward to plug in the unsupervised loss as a regularizer for representation learning. However, the instance-wise supervision of standard supervised learning may lead to biased negative sampling problems (Chuang et al., 2020). To tackle this challenge, we can use a small amount of labeled data to further generalize the similarity loss to handle an arbitrary number of positive samples belonging to the same class:\n$$\mathcal{L}^{supcon} = \sum_{i=1}^{Kl}\frac{1}{K N_{y'_i}}\sum_{j=1}^{Kl}\mathbb{I}_{i \neq j}\cdot\mathbb{I}_{y'_i = y'_j}\cdot\mathcal{L}^{con}(G_i, G_j) \quad (6)$$\nwhere $N_{y'_i}$ denotes the total number of samples in the training set that have the same label $y'_i$ as the anchor $i$. Thanks to the graph-level contrastive nature of IGSD, we are able to alleviate the biased negative sampling problem (Khosla et al., 2020) with the supervised contrastive loss, which is crucial (Chuang et al., 2020) but unachievable in most MI-based contrastive learning models, since subgraphs are generally hard to assign labels to. Besides, with this loss we are able to fine-tune our model effectively using self-training, where pseudo-labels are assigned iteratively to unlabeled data.\nWith a standard supervised loss such as cross entropy or mean square error, $\mathcal{L}(\mathcal{G}_L, \theta)$, the overall objective can be summarized as:\n$$\mathcal{L}^{semi} = \mathcal{L}(\mathcal{G}_L, \theta) + w\,\mathcal{L}^{unsup}(\mathcal{G}_L \cup \mathcal{G}_U, \theta) + w'\,\mathcal{L}^{supcon}(\mathcal{G}_L, \theta) \quad (7)$$\nCommon semi-supervised learning methods use consistency regularization to measure the discrepancy between predictions made on perturbed unlabeled data points, for better prediction stability and generalization (Oliver et al., 2018). By contrast, our method enforces consistency constraints between latents from different views, which acts as a regularizer for learning directly from labels.\nLabeled data provides extra supervision about graph classes and alleviates biased negative sampling. However, labels are costly to attain in many areas. Therefore, we develop a contrastive self-training algorithm to leverage label information more effectively than cross entropy in the semi-supervised scenario. In the algorithm, we train the model using a small amount of labeled data and then fine-tune it by iterating between assigning pseudo-labels to unlabeled examples and training the model using the augmented dataset. In this way, we harvest massive pseudo-labels for unlabeled examples.\nWith the increasing size of the augmented labeled dataset, the discriminative power of IGSD can be improved iteratively by contrasting more positive pairs belonging to the same class. In this way, we accumulate high-quality pseudo-labels after each iteration to compute the supervised contrastive loss in Eqn. (6), which distinguishes our approach from conventional self-training algorithms (Rosenberg et al., 2005). On the other hand, traditional self-training can use pseudo-labels for computing the cross entropy only." },
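A sketch of the semi-supervised objective in Eqns. (6)-(7) and one step of the contrastive self-training loop. The normalization in `supcon_loss` follows Eqn. (6) up to constant factors, and the confidence threshold and loop structure are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def supcon_loss(pairwise_con, labels):
    """Eqn (6), up to constants: average the pairwise consistency loss over
    same-class pairs (i != j) for each anchor."""
    same = (labels[:, None] == labels[None, :]).float()
    same.fill_diagonal_(0.0)
    per_anchor = (same * pairwise_con).sum(dim=1) / same.sum(dim=1).clamp(min=1.0)
    return per_anchor.mean()

def semi_objective(ce_loss, unsup_loss, supcon, w=1.0, w_prime=1.0):
    """Eqn (7): supervised loss plus weighted unsupervised and
    supervised-contrastive terms."""
    return ce_loss + w * unsup_loss + w_prime * supcon

def pseudo_label(classifier, unlabeled_reps, threshold=0.95):
    """One self-training step: keep only predictions above a confidence
    threshold as pseudo-labels for the next fine-tuning round."""
    with torch.no_grad():
        probs = F.softmax(classifier(unlabeled_reps), dim=-1)
        conf, labels = probs.max(dim=-1)
    keep = conf > threshold
    return unlabeled_reps[keep], labels[keep]
```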
{ "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Evaluation Tasks. We conduct experiments by comparing with state-of-the-art models on three tasks. In graph classification tasks, we experiment in both the unsupervised setting, where we only have access to all unlabeled samples in the dataset, and the semi-supervised setting, where we use a small fraction of labeled examples and treat the rest as unlabeled by ignoring their labels. In molecular property prediction tasks, where labels are expensive to obtain, we only consider the semi-supervised setting.\nDatasets. For graph classification tasks, we employ several widely-used graph kernel datasets (Kersting et al., 2016) for learning and evaluation: 3 bioinformatics datasets (MUTAG, PTC, NCI1) and 3 social network datasets (COLLAB, IMDB-B, IMDB-M), with statistics summarized in Table 1. In the semi-supervised graph regression tasks, we use the QM9 dataset containing 134,000 drug-like organic molecules (Ramakrishnan et al., 2014) with 9 heavy atoms, and select the first ten physicochemical properties as regression targets for training and evaluation. For a detailed description of the properties in the QM9 dataset, see Appendix C of Sun et al. (2019).\nBaselines. In unsupervised graph classification, we compare with the following representative baselines: CMC-Graph (Hassani & Khasahmadi, 2020), InfoGraph (Sun et al., 2019), GCC (Qiu et al., 2020), Graph2Vec (Narayanan et al., 2017) and graph kernels including the Random Walk Kernel (Gärtner et al., 2003), Shortest Path Kernel (Kashima et al., 2003), Graphlet Kernel (Shervashidze et al., 2009), Weisfeiler-Lehman Sub-tree Kernel (WL SubTree) (Shervashidze et al., 2011), Deep Graph Kernels (Yanardag & Vishwanathan, 2015), Multi-Scale Laplacian Kernel (MLG) (Kondor & Pan, 2016) and Graph Convolutional Kernel Network (GCKN) (Chen et al., 2020a).\nFor semi-supervised graph classification, we compare our method with competitive baselines like InfoGraph, InfoGraph* and Mean Teachers; the GIN baseline does not have access to the unlabeled data. In the semi-supervised molecular property prediction tasks, baselines include InfoGraph, InfoGraph* and Mean Teachers (Tarvainen & Valpola, 2017).\nModel Configuration. In our framework, we use GCNs (Kipf & Welling, 2016) and GINs (Xu et al., 2018) as encoders to attain node representations for unsupervised and semi-supervised graph classification respectively. For semi-supervised molecular property prediction, we employ message passing neural networks (MPNNs) (Gilmer et al., 2017) as backbone encoders to encode molecular graphs with rich edge attributes. All projectors and predictors are implemented as two-layer MLPs. For more details on hyper-parameter selection, refer to Appendix A.2.\nIn semi-supervised molecular property prediction tasks, we generate multiple views based on the edge attributes (bond types) of richly annotated molecular graphs to improve performance. Specifically, we perform label-preserving augmentation to attain multiple diffusion matrices of every graph, each based on one edge attribute while ignoring the others. The diffusion matrix gives a denser graph based on each type of edge, to better leverage edge features. We train our models using different numbers of augmented training data and select the amount using cross validation.\nFor unsupervised graph classification, we adopt LIB-SVM (Chang & Lin, 2011) with the C parameter selected from {1e-3, 1e-2, . . . , 1e2, 1e3} as our downstream classifier. We then report 10-fold cross-validation accuracy as the classification performance and repeat the experiments 5 times to report the mean and standard deviation. For semi-supervised graph classification, we randomly select 5% of the training data as labeled data while treating the rest as unlabeled, and report the best test set accuracy within 300 epochs. Following the experimental setup in Sun et al. (2019), we randomly choose 5000, 10000 and 10000 samples for training, validation and testing respectively, and the rest are treated as unlabeled training data for the molecular property prediction tasks." }, { "heading": "5.2 NUMERICAL RESULTS", "text": "Results on unsupervised graph classification. We first present the results of the unsupervised setting in Table 1. All graph kernels give inferior performance except on the PTC dataset.
The Random Walk kernel runs out of memory, and the Multi-Scale Laplacian Kernel suffers from a long running time (exceeding 24 hours) on the two larger datasets. IGSD outperforms state-of-the-art baselines like InfoGraph, CMC-Graph and GCC in most datasets, showing that IGSD can learn expressive graph-level representations for downstream classifiers. Besides, our model still achieves competitive results in datasets like IMDB-M and NCI1 with the random edge dropping augmentation, which demonstrates the robustness of IGSD to different choices of data augmentation strategies.\nResults on semi-supervised graph classification. We further apply our model to semi-supervised graph classification tasks, with results shown in Table 4, where we set $w$ and $w'$ in Eqn. (7) to 1 and 0 for Ours (Unsup) and to 0 and 1 for Ours (SupCon). In this setting, our model performs better than Mean Teachers and InfoGraph*. Both the unsupervised loss and the supervised contrastive loss provide extra performance gains compared with GIN using supervised data only. Besides, the performance of both can be improved significantly when combined with self-training, especially with the supervised contrastive loss. This makes empirical sense, since self-training iteratively assigns pseudo-labels with high confidence to unlabeled data, which provides extra supervision on their categories under the contrastive learning framework.\nResults on semi-supervised molecular property prediction. We present the regression performance of our model on the QM9 dataset in Figure 2. We display the performance of our model and baselines as the mean square error ratio with respect to supervised results, and our model outperforms all baselines in 9 out of 10 tasks compared with the strong baselines InfoGraph, InfoGraph* and Mean Teachers. In some tasks like R2 (5), U0 (7) and U (8), IGSD achieves significant performance gains against its counterparts, which demonstrates its ability to transfer knowledge learned from unsupervised data to supervised tasks." }, { "heading": "5.3 ABLATION STUDIES AND ANALYSIS", "text": "Effects of self-training. We first investigate the effects of self-training on our model performance in Table 4. Results show that self-training can improve the GIN baseline and our models with the unsupervised loss (Unsup) or supervised contrastive loss (SupCon). The improvement is even more significant when combined with the supervised contrastive loss, since high-quality pseudo-labels provide additional information about graph categories. Moreover, our self-training algorithm consistently outperforms the traditional self-training baseline, which further validates the superiority of our model.\nEffects of different amounts of negative pairs. We then conduct ablation experiments on the amount of negative pairs by varying the batch size over {16, 32, 64, 128}, with results on the IMDB-B dataset shown in Figure 3a. Both methods contrast negative pairs batch-wise, and increasing the batch size improves the performance of IGSD while degrading that of CMC-Graph. When the batch size is greater than 32, IGSD outperforms CMC-Graph, and the performance gap becomes larger as the batch size increases, which means IGSD is better at leveraging negative pairs for learning effective representations than CMC-Graph.\nEffects of different proportions of labeled data. We also investigate the performance of different models with different proportions of labeled data on the IMDB-B dataset. As illustrated in Figure 3b, IGSD consistently outperforms the strong InfoGraph* baseline given different amounts of labeled data.
The performance gain is most significant when the fraction of labeled data is 10%, since our models can leverage labels more effectively by regularizing the original unsupervised learning objective when labels are scarce." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we propose IGSD, a novel unsupervised graph-level representation learning framework via self-distillation. Our framework iteratively performs teacher-student distillation by contrasting augmented views of graph instances. Experimental results in both unsupervised and semi-supervised settings show that IGSD is not only able to learn effective graph representations competitive with state-of-the-art models but is also robust to choices of encoders and augmentation strategies. In the future, we plan to apply our framework to other graph learning tasks and investigate the design of view generators to generate effective views automatically." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RELATED WORK", "text": "Graph Representation Learning Traditionally, graph kernels are widely used for learning node and graph representations. This common process includes meticulous designs like decomposing graphs into substructures and using kernel functions like the Weisfeiler-Leman graph kernel (Shervashidze et al., 2011) to measure graph similarity between them. However, they usually require non-trivial hand-crafted substructures and domain-specific kernel functions to measure the similarity, while yielding inferior performance on downstream tasks like node classification and graph classification. Moreover, they often suffer from poor scalability (Borgwardt & Kriegel, 2005) and great memory consumption (Kondor & Pan, 2016) due to procedures like path extraction and recursive subgraph construction. Recently, there has been increasing interest in Graph Neural Network (GNN) approaches for graph representation learning, and many GNN variants have been proposed (Ramakrishnan et al., 2014; Kipf & Welling, 2016; Xu et al., 2018). However, they mainly focus on supervised settings.\nData augmentation Data augmentation strategies on graphs are limited, since defining views of graphs is a non-trivial task. There are two common choices of augmentations on graphs: (1) feature-space augmentation and (2) structure-space augmentation. A straightforward way is to corrupt the adjacency matrix, which preserves the features but adds or removes edges from the adjacency matrix with some probability distribution (Veličković et al., 2018). Zhao et al. (2020) improve performance in GNN-based semi-supervised node classification via edge prediction. Empirical results show that the diffusion matrix can serve as a denoising filter to augment graph data and improve graph representation learning significantly, both in supervised (Klicpera et al., 2019) and unsupervised settings (Hassani & Khasahmadi, 2020). Hassani & Khasahmadi (2020) show the benefits of treating the diffusion matrix as an augmented view in mutual information-based contrastive graph representation learning.
Attaining effective views is non-trivial, since we need to consider factors like mutual information to preserve label information w.r.t. the downstream task (Tian et al., 2020)." }, { "heading": "A.2 HYPER-PARAMETERS", "text": "For hyper-parameter tuning, we select the number of GCN layers over {2, 8, 12}, the batch size over {16, 32, 64, 128, 256, 512}, the number of epochs over {20, 40, 100} and the learning rate over {1e-4, 1e-3} in unsupervised graph classification.\nThe hyper-parameters we tune for semi-supervised graph classification and molecular property prediction are the same as in (Xu et al., 2018) and (Sun et al., 2019), respectively.\nIn all experiments, we fix α = 0.2 for PPR graph diffusion, set the weighting coefficient of the Mixup function to 0.5, and tune our projection hidden size over {1024, 2048} and projection size over {256, 512}. We start self-training after 30 epochs and tune the number of iterations over {20, 50} and the pseudo-labeling threshold over {0.9, 0.95}." }, { "heading": "A.3 EFFECT OF PROJECTORS", "text": "While we could directly predict the representation y and not a projection z, previous contrastive learning works in the image domain like (Chen et al., 2020b) have empirically shown that using this projection improves performance. We further investigate the performance with and without the projector on 4 datasets:\nThe results above show that dropping the projector degrades the performance, which indicates the necessity of a projector.\nMeanwhile, to investigate the effect of projectors on model performance, we fix the output size of the layers in the encoders so that their output size is always 512. We then conducted ablation experiments on different sizes of the projection head on IMDB-B, with the following results:\nIn general, the performance is insensitive to the projection size, while a larger projection size could slightly improve the unsupervised learning performance." } ]
2020
null
SP:1fc676213cbcfd690a3aea055066a3004f974325
[ "This paper investigates the use of importance sampling in budgeted training. Four importance sampling techniques from prior works are applied within the context of fixed training budgets, and compared under different conditions of training set selection, learning rate schedule and data augmentations. Each aims to sample more useful examples more frequently, by using the loss or gradient magnitude as an importance measure. Uniform sampling with and without replacement are used as baselines, and experiments are performed on cifar-10 and cifar-100. The final conclusion is that importance sampling with budgets as low as 20% the original training schedule offer little if any improvement over uniform sampling, while additional data augmentations work well to make up lost validation accuracy." ]
Long iterative training processes for Deep Neural Networks (DNNs) are commonly required to achieve state-of-the-art performance in many computer vision tasks. Core-set selection and importance sampling approaches might play a key role in budgeted training regimes, i.e. when limiting the number of training iterations. The former demonstrate that retaining informative samples is important to avoid large drops in accuracy, and the latter aim at dynamically estimating the sample importance to speed up convergence. This work explores this paradigm and how a budget constraint interacts with importance sampling approaches and data augmentation techniques. We show that under budget restrictions, importance sampling approaches do not provide a consistent improvement over uniform sampling. We suggest that, given a specific budget, the best course of action is to disregard the importance and introduce adequate data augmentation. For example, training on CIFAR-10/100 with 30% of the full training budget, a uniform sampling strategy with certain data augmentation surpasses the performance of 100% budget models trained with standard data augmentation. We conclude from our work that DNNs under budget restrictions benefit greatly from variety in the samples and that finding the right samples to train on is not the most effective strategy when balancing high performance with low computational requirements. The code will be released after the review process.
[]
[ { "authors": [ "Guillaume Alain", "Alex Lamb", "Chinnadhurai Sankar", "Aaron Courville", "Yoshua Bengio" ], "title": "Variance reduction in sgd by distributed importance sampling", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Paul Albert", "Diego Ortego", "Eric Arazo", "Noel E. O’Connor", "Kevin McGuinness" ], "title": "Relab: Reliable label bootstrapping for semi-supervised learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Hadi Amiri", "Timothy Miller", "Guergana Savova" ], "title": "Repeat before forgetting: Spaced repetition for efficient and effective training of neural networks", "venue": "In Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Eric Arazo", "Diego Ortego", "Paul. Albert", "Noel O’Connor", "Kevin McGuinness" ], "title": "Unsupervised Label Noise Modeling and Loss Correction", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E O’Connor", "Kevin McGuinness" ], "title": "Pseudo-labeling and confirmation bias in deep semi-supervised learning", "venue": "In International Joint Conference on Neural Networks (IJCNN),", "year": 2020 }, { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": null, "year": 2020 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2009 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": null, "year": 2019 }, { "authors": [ "Hao Cheng", "Dongze Lian", "Bowen Deng", "Shenghua Gao", "Tao Tan", "Yanlin Geng" ], "title": "Local to global learning: Gradually adding classes for training deep neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Cody Coleman", "Christopher Yeh", "Stephen Mussmann", "Baharan Mirzasoleiman", "Peter Bailis", "Percy Liang", "Jure Leskovec", "Matei Zaharia" ], "title": "Selection via proxy: Efficient data selection for deep learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", 
"Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRw),", "year": 2020 }, { "authors": [ "Ekin Dogus Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Bin Dai", "Chen Zhu", "Baining Guo", "David Wipf" ], "title": "Compressing neural networks using the variational information bottleneck", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": null, "year": 2017 }, { "authors": [ "Yanbo Fan", "Siwei Lyu", "Yiming Ying", "Baogang Hu" ], "title": "Learning with average top-k loss", "venue": "In Advances in Neural Information Processing systems (NeurIPS),", "year": 2017 }, { "authors": [ "Guy Hacohen", "Daphna Weinshall" ], "title": "On the power of curriculum learning in training deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Daniel Ho", "Eric Liang", "Xi Chen", "Ion Stoica", "Pieter Abbeel" ], "title": "Population based augmentation: Efficient learning of augmentation policy schedules", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "George Ioannou", "Thanos Tagaris", "Andreas Stafylopatis" ], "title": "Improving the convergence speed of deep neural networks with biased sampling", "venue": "In International Conference on Advances in Artificial Intelligence (ICAAI),", "year": 2019 }, { "authors": [ "Angela H Jiang", "Daniel L-K Wong", "Giulio Zhou", "David G Andersen", "Jeffrey Dean", "Gregory R Ganger", "Gauri Joshi", "Michael Kaminksy", "Michael Kozuch", "Zachary C Lipton" ], "title": "Accelerating deep learning by focusing on the biggest losers", "venue": null, "year": 1910 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Tyler B Johnson", "Carlos Guestrin" ], "title": 
"Training deep models faster with robust, approximate importance sampling", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Mohammad Kachuee", "Orpaz Goldstein", "Kimmo Karkkainen", "Sajad Darabi", "Majid Sarrafzadeh" ], "title": "Opportunistic learning: Budgeted cost-sensitive learning from data streams", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Kenji Kawaguchi", "Haihao Lu" ], "title": "Ordered sgd: A new stochastic optimization framework for empirical risk minimization", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": null, "year": 2004 }, { "authors": [ "Sungyeon Kim", "Dongwon Kim", "Minsu Cho", "Suha Kwak" ], "title": "Proxy anchor loss for deep metric learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Junnan Li", "Richard Socher", "Steven CH Hoi" ], "title": "DivideMix: Learning with Noisy Labels as Semisupervised Learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Mengtian Li", "Ersin Yumer", "Deva Ramanan" ], "title": "Budgeted training: Rethinking deep neural network training under resource constraints", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Shaohui Lin", "Rongrong Ji", "Chenqian Yan", "Baochang Zhang", "Liujuan Cao", "Qixiang Ye", "Feiyue Huang", "David Doermann" ], "title": "Towards optimal structured cnn pruning via generative adversarial learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Online batch selection for faster training of neural networks", "venue": null, "year": 2015 }, { "authors": [ "Haihao Lu", "Rahul Mazumder" ], "title": "Randomized gradient boosting machine", "venue": null, "year": 2018 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Baharan Mirzasoleiman", "Jeff Bilmes", "Jure Leskovec" ], "title": "Coresets for data-efficient training of machine learning models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised 
learning of pretext-invariant representations", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Feng Nan", "Venkatesh Saligrama" ], "title": "Adaptive classification for prediction under a budget", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Deanna Needell", "Rachel Ward", "Nati Srebro" ], "title": "Stochastic gradient descent, weighted sampling, and the randomized kaczmarz algorithm", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. Wu", "A.Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2011 }, { "authors": [ "Pengzhen Ren", "Yun Xiao", "Xiaojun Chang", "Po-Yao Huang", "Zhihui Li", "Xiaojiang Chen", "Xin Wang" ], "title": "A survey of deep active learning", "venue": null, "year": 2020 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "In IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of neural networks using large learning rates", "venue": "In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications,", "year": 2019 }, { "authors": [ "Ke Sun", "Bin Xiao", "Dong Liu", "Jingdong Wang" ], "title": "Deep high-resolution representation learning for human pose estimation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Ryo Takahashi", "Takashi Matsubara", "Kuniaki Uehara" ], "title": "Ricap: Random image cropping and patching data augmentation for deep cnns", "venue": "In Asian Conference on Machine Learning (ACML),", "year": 2018 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Mariya Toneva", "Alessandro Sordoni", "Remi Tachet des Combes", "Adam Trischler", "Yoshua Bengio", "Geoffrey J Gordon" ], "title": "An empirical study of example forgetting during deep neural network learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Hugo Touvron", "Andrea Vedaldi", "Matthijs Douze", "Hervé Jégou" ], "title": "Fixing the train-test resolution discrepancy", "venue": "In Advances in Neural Information Processing Systems
(NeurIPS),", "year": 2019 }, { "authors": [ "O. Vinyals", "C. Blundell", "T. Lillicrap", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Matching Networks for One Shot Learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Daphna Weinshall", "Gad Cohen", "Dan Amir" ], "title": "Curriculum learning by transfer learning: Theory and experiments with deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Ting Yao", "Yingwei Pan", "Yehao Li", "Tao Mei" ], "title": "Exploring visual relationship for image captioning", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Donggeun Yoo", "In So Kweon" ], "title": "Learning loss for active learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Cheng Zhang", "Cengiz Öztireli", "Stephan Mandt", "Giampiero Salvi" ], "title": "Active mini-batch sampling using repulsive point processes", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jiong Zhang", "Hsiang-Fu Yu", "Inderjit S Dhillon" ], "title": "Autoassist: A framework to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Ruixiang Zhang", "Tong Che", "Zoubin Ghahramani", "Yoshua Bengio", "Yangqiu Song" ], "title": "Metagan: An adversarial approach to few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Peilin Zhao", "Tong Zhang" ], "title": "Stochastic optimization with importance sampling for regularized loss minimization", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Linjun Zhou", "Peng Cui", "Xu Jia", "Shiqiang Yang", "Qi Tian" ], "title": "Learning to select base classes for few-shot classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The availability of vast amounts of labeled data is crucial in training deep neural networks (DNNs) (Mahajan et al., 2018; Xie et al., 2020). Despite prompting considerable advances in many computer vision tasks (Yao et al., 2018; Sun et al., 2019a), this dependence poses two challenges: the generation of the datasets and the large computation requirements that arise as a result. Research addressing the former has experienced great progress in recent years via novel techniques that reduce the strong supervision required to achieve top results (Tan & Le, 2019; Touvron et al., 2019) by, e.g. improving semi-supervised learning (Berthelot et al., 2019; Arazo et al., 2020), fewshot learning (Zhang et al., 2018b; Sun et al., 2019b), self-supervised learning (He et al., 2020; Misra & Maaten, 2020), or training with noisy web labels (Arazo et al., 2019; Li et al., 2020a). The latter challenge has also experienced many advances from the side of network efficiency via DNN compression (Dai et al., 2018; Lin et al., 2019) or, neural architecture search (Tan & Le, 2019; Cai et al., 2019); and optimization efficiency by better exploiting the embedding space (Khosla et al., 2020; Kim et al., 2020). All these approaches are designed under a common constraint: the large dataset size needed to achieve top results (Xie et al., 2020), which conditions the success of the training process on computational resources. Conversely, a smart reduction of the amount of samples used during training can alleviate this constraint (Katharopoulos & Fleuret, 2018; Mirzasoleiman et al., 2020).\nThe selection of samples plays an important role in the optimization of DNN parameters during training, where Stochastic Gradient Descent (SGD) (Dean et al., 2012; Bottou et al., 2018) is often used. SGD guides the parameter updates using the estimation of model error gradients over sets of samples (mini-batches) that are uniformly randomly selected in an iterative fashion. This strategy assumes equal importance across samples, whereas other works suggest that alternative strategies for revisiting samples are more effective in achieving better performance (Chang et al., 2017; Kawaguchi & Lu, 2020) and faster convergence (Katharopoulos & Fleuret, 2018; Jiang et al., 2019). Similarly, the\nselection of a unique and informative subset of samples (core-set) (Toneva et al., 2018; Coleman et al., 2020) can alleviate the computation requirements during training, while reducing the performance drop with respect to training on all data. However, while removing data samples speeds-up the training, a precise sample selection often requires a pretraining stage that hinders the ability to reduce computation (Mirzasoleiman et al., 2020; Sener & Savarese, 2018).\nA possible solution to this limitation might be to dynamically change the important subset during training as done by importance sampling methods (Amiri et al., 2017; Zhang et al., 2019b), which select the samples based on a sampling probability distribution that evolves with the model and often changes based on the loss or network logits (Loshchilov & Hutter, 2015; Johnson & Guestrin, 2018). An up-to-date importance estimation is key for current methods to succeed but, in practice, is infeasible to compute (Katharopoulos & Fleuret, 2018). The real importance of a sample changes after every iteration and estimations become out-dated, yielding considerable drops in performance (Chang et al., 2017; Zhang et al., 2019b). 
Importance sampling methods, then, focus on selecting samples and achieve a speed-up during training as a side effect. They do not, however, strictly study the possible benefits for DNN training when restricting the number of iterations used for training, i.e. the budget.\nBudgeted training (Nan & Saligrama, 2017; Kachuee et al., 2019; Li et al., 2020b) imposes an additional constraint on the optimization of a DNN: a maximum number of iterations. Defining this budget provides a concise notion of the limited training resources. Li et al. (2020b) propose to address the budget limitation using specific learning rate schedules that better suit this scenario. Despite the standardized scenario that budgeted training poses for evaluating methods that reduce computation requirements, there are few works to date in this direction (Li et al., 2020b; Katharopoulos & Fleuret, 2018). As mentioned, importance sampling methods are closely related, but their avoidance of budget restrictions makes it difficult to understand their utility, given the sensitivity to hyperparameters that they often exhibit (Chang et al., 2017; Loshchilov & Hutter, 2015).\nIn this paper, we overcome the limitations outlined above by analyzing the effectiveness of importance sampling methods when a budget restriction is imposed (Li et al., 2020b). Given a budget restriction, we study synergies among importance sampling and data augmentation techniques (Takahashi et al., 2018; Cubuk et al., 2020; Zhang et al., 2018a). We find that the improvements of importance sampling approaches over uniform random sampling are not always consistent across budgets and datasets. We argue and experimentally confirm (see Section 4.4) that when using certain data augmentation (Takahashi et al., 2018; Cubuk et al., 2020; Zhang et al., 2018a), existing importance sampling techniques do not provide further benefits, making data augmentation the most effective strategy to exploit a given budget." }, { "heading": "2 RELATED WORK", "text": "Few works exploit a budgeted training paradigm (Li et al., 2020b). Instead, many approaches aim to speed up training convergence to a given performance by computing a better sampling strategy or carefully organizing the samples to allow the CNN to learn faster and generalize better. Other works, however, explore how to improve model performance by labeling the most important samples from an unlabeled set of data (Yoo & Kweon, 2019; Ash et al., 2020; Ren et al., 2020), or how to better train DNNs when a limited number of samples per class is available (Chen et al., 2019; Zhou et al., 2020; Albert et al., 2020). This section reviews relevant works aiming to improve the efficiency of DNN training.\nSelf-paced learning (SPL) and curriculum learning (CL) aim to optimize the training process and improve model performance by ordering the samples from easy to difficult (Weinshall et al., 2018; Bengio et al., 2009; Hacohen & Weinshall, 2019; Cheng et al., 2019). For instance, CL manages to speed up the convergence of training at the initial stages by focusing on samples whose gradients are better estimations of the real gradient (Weinshall et al., 2018). The main drawback of these methods is that, in most cases, the order of the samples (curriculum) has to be defined before training, which is already a costly task that requires manually assessing the sample difficulty, transferring knowledge from a fully trained model, or pre-training the model on the given dataset.
Some approaches remedy this drawback with a simple curriculum (Lin et al., 2017) or by learning the curriculum during training (Jiang et al., 2018); these methods, however, do not aim to speed up the training by ordering the samples, but to improve network convergence by weighting the sample contribution to the loss.\nCore-set selection approaches aim to find the subset of samples that is most useful (Toneva et al., 2018; Coleman et al., 2020; Mirzasoleiman et al., 2020). By identifying the most useful samples from a dataset, these methods aim at maintaining accuracy despite training on a subset of the data. The ability of these methods to reduce the training cost is very limited, since they require pre-training the model. However, these methods demonstrate that DNNs only need a portion of the samples to achieve peak performance. For example, Toneva et al. (2018) define “forgetting events” as the count of times that samples are misclassified after being correctly predicted during training. They show that forgetting and importance are related, as removing samples with fewer forgetting events damages the model less than removing the more forgotten ones. Mirzasoleiman et al. (2020) build clusters with the features from the model and use the centroids as the most informative samples. Coleman et al. (2020) demonstrate that the difficulty of a sample is invariant to the model capacity, and show that they can speed up several sample selection tasks by reducing the size of the model.
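As a concrete illustration of the forgetting-events statistic just described, the sketch below tracks per-sample transitions from correctly to incorrectly classified across epochs. The class and method names are our own; the exact bookkeeping in Toneva et al. (2018) may differ in detail.

```python
import numpy as np

class ForgettingTracker:
    """Counts forgetting events (Toneva et al., 2018): a sample is 'forgotten'
    when it flips from correctly to incorrectly classified between epochs."""

    def __init__(self, num_samples: int):
        self.prev_correct = np.zeros(num_samples, dtype=bool)
        self.events = np.zeros(num_samples, dtype=np.int64)

    def update(self, indices: np.ndarray, correct: np.ndarray) -> None:
        """Call once per epoch with the indices seen and a boolean correctness mask."""
        correct = np.asarray(correct, dtype=bool)
        forgotten = self.prev_correct[indices] & ~correct
        self.events[indices] += forgotten
        self.prev_correct[indices] = correct

    def least_forgotten(self, k: int) -> np.ndarray:
        """Samples with the fewest events: the ones whose removal hurts least."""
        return np.argsort(self.events)[:k]
```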
Within this scheme, Katharopoulos & Fleuret (2018) define a theoretical bound for the magnitude of the gradients that allows for faster computation of the sampling probabilities, while Jiang et al. (2019) and Ioannou et al. (2019) use the loss as a measure of sample importance to keep the sampling distribution updated throughout training. Finally, Kawaguchi & Lu (2020) introduce the top-k loss from Fan et al. (2017) to perform the back-propagation step using only the samples with the highest losses. Note that none of these methods avoids doing a full forward pass every epoch to update the sampling probabilities.\nLearning rate schedules have proven to be useful alternatives for faster convergence. The authors of (Smith & Topin, 2019; Smith, 2017) propose a cyclic learning rate schedule to reach faster convergence by using larger learning rates at intermediate training stages and very low rates at the end. Li et al. (2020b) also study the importance of learning rate schedules to accelerate the training of DNNs. In particular, they explore budgeted training and propose a linearly decaying learning rate schedule that approaches zero at the end of the training, which, without additional hyper-parameters, improves the standard learning rate schedules.\nData augmentation techniques generally aim to increase the variance of the data to achieve better generalization. Recent approaches, however, go a step further and target specific weaknesses of CNNs: cutout (DeVries & Taylor, 2017) drops contiguous patches of data from the input to force the network to spread its attention over the entire object, mixup (Zhang et al., 2018a) proposes to train using convex combinations of image and label pairs, which smooths class boundaries and improves model calibration (Thulasidasan et al., 2019), and RICAP (Takahashi et al., 2018) combines the advantages of the two previous techniques by training on images generated from joining multiple patches and doing the corresponding convex combination of labels. More generally, RandAugment (Cubuk et al., 2020) randomly combines commonly used data augmentation techniques as a reduction of the search space of the recently proposed methods that find automated augmentation policies (Ho et al., 2019; Cubuk et al., 2019)." }, { "heading": "3 BUDGETED TRAINING", "text": "The standard way of training DNNs is by gradient-based minimization of the cross-entropy\n\ell(\theta) = -\frac{1}{N}\sum_{i=1}^{N} y_i^\top \log h_\theta(y|x_i), \qquad (1)\nwhere N is the number of samples in the dataset D = \{x_i, y_i\}_{i=1}^{N} and y_i \in \{0, 1\}^C is the one-hot encoded ground-truth label for sample x_i, C is the number of classes, h_\theta(y|x_i) is the predicted posterior probability of a DNN model given x_i (i.e. the prediction after a softmax normalization), and \theta are the parameters of the model. Convergence to a reasonable performance usually determines the end of the training, whereas in budgeted training there is a fixed iteration budget. We adopt the setting by Li et al. (2020b), where the budget is defined as a percentage of the full training setup. Formally, we define the budget B ∈ [0, 1] as the fraction of forward and backward passes used for training the model h_\theta(x) with respect to a standard full training. As we aim at analyzing importance sampling, the budget restriction will be mainly applied to the amount of data N × B shown every epoch.
However, a reduction of the number of epochs T to T × B (where an epoch is a pass over all samples) is also considered for budgeted training; we refer to it as truncated training.\nTruncated training is the simplest approach to budgeted training: keep the standard SGD optimization and reduce the number of epochs trained by the model to T × B. We call this strategy, where the model sees all the samples every epoch, scan-SGD. While seeing all the samples is common practice, we remove this constraint and draw the samples from a uniform probability distribution at every iteration and call this strategy unif-SGD. In this approach the budget is defined by randomly selecting N × B samples every epoch (and still training for T epochs).\nImportance sampling aims to accelerate the convergence of SGD by sampling the most difficult samples D_S = \{x_i, y_i\}_{i=1}^{N_S} more often, where N_S = N × B (the number of samples selected given a certain budget). Loshchilov & Hutter (2015) proposed a simple approach for importance sampling that uses the loss of every sample as a measure of the sample importance. Chang et al. (2017) adapt this approach to avoid additional forward passes by using as importance:\np_i^t = \frac{1}{t}\sum_{k=1}^{t}\left(1 - y_i^\top h_\theta^k(y|x_i)\right) + \epsilon_t, \qquad (2)\nwhere h_\theta^k(y|x_i) is the prediction of the model given the sample x_i in epoch k, and t is the current epoch. Therefore, the average predicted probability across previous epochs associated with the ground-truth class of each sample defines the importance of sample x_i. The smoothing constant \epsilon_t is defined as the mean per-sample importance up to the current epoch: \frac{1}{N}\sum_{i=1}^{N} p_i^t.\nThe sampling distribution P^t at a particular epoch t is then given by:\nP_i^t = \frac{p_i^t}{\sum_{j=1}^{N} p_j^t}. \qquad (3)\nBy drawing samples from the distribution P^t this approach biases the training towards the most difficult samples, and selects those samples with the highest loss values; we name this method p-SGD. Similarly, Chang et al. (2017) propose to select the samples that are closer to the decision boundaries, favoring samples with higher uncertainty, by defining the importance measure as c_i^t = p_i^t × (1 − p_i^t); we name this approach c-SGD.\nBoth p-SGD and c-SGD are very computationally efficient as the importance estimation only requires information available during training. Conversely, Jiang et al. (2019) propose to perform forward passes on all the samples to determine the most important ones and later reduce the amount of backward passes; they name this method selective backpropagation (SB). At every forward pass, SB stores the sample x_i with probability:\ns_i^t = \left[ F_R\big(\ell(h_\theta^t(x_i), y_i)\big) \right]^b, \qquad (4)\nwhere F_R is the cumulative distribution function from a history of the last R samples seen by the model and b > 0 is a constant that determines the selectivity of the method, i.e. the budget used during the training. In practice, SB does as many forward passes as needed until it has enough samples to form a full mini-batch. It then performs the training forward and backward passes with the selected samples to update the model.\nFinally, as an alternative training paradigm to prioritize the most important samples, Kawaguchi & Lu (2020) propose to use only the q samples with the highest loss from a mini-batch in the backward pass. As the training accuracy increases, q decreases until only 1/16th of the images in the mini-batch are used in the backward pass (these selection rules are sketched below).
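The selection rules above are compact enough to sketch. The following is a minimal, self-contained Python illustration of Eqs. (2)–(4) and the top-q rule, in the spirit of the description here; all class and function names are ours, not the authors' code, and details such as warm-up epochs are omitted.

```python
import numpy as np
import torch

class PSGDSampler:
    """p-SGD: per-sample importance of Eq. (2), sampling distribution of Eq. (3).
    Call update() during the epoch, then sample_epoch() to pick the next subset."""
    def __init__(self, num_samples, budget):
        self.acc = np.zeros(num_samples)             # running sum of (1 - p_true)
        self.t = 0                                   # epochs seen so far
        self.per_epoch = int(num_samples * budget)   # N x B samples per epoch

    def update(self, idx, p_true):
        # p_true: predicted probability of the ground-truth class for each index
        self.acc[idx] += 1.0 - p_true

    def sample_epoch(self, c_sgd=False):
        self.t += 1
        base = self.acc / self.t                       # mean of (1 - p_true), in [0, 1]
        imp = base * (1.0 - base) if c_sgd else base   # c-SGD vs p-SGD measure
        imp = imp + imp.mean()                         # smoothing constant of Eq. (2)
        P = imp / imp.sum()                            # Eq. (3)
        return np.random.choice(len(P), self.per_epoch, replace=False, p=P)

def sb_keep_prob(loss, last_R_losses, b):
    """SB, Eq. (4): keep a sample with probability CDF_R(loss)^b,
    where the CDF is estimated over the last R observed losses."""
    return float((np.asarray(last_R_losses) <= loss).mean()) ** b

def topq_loss(per_sample_losses, q):
    """Top-q rule: backpropagate only through the q largest losses of a mini-batch."""
    return torch.topk(per_sample_losses, q).values.mean()
```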
The authors name this approach ordered SGD (OSGD) and provide a default setting for the adaptive values of q depending on the training accuracy.\nImportance sampling methods under budgeted training give a precise notion of the training budget. For unif-SGD, p-SGD, and c-SGD, the adaptation needed consists of selecting a fixed number of samples per epoch, N × B, based on the corresponding sampling probability distribution P^t, while still training for the full T epochs. For SB, the parameter b determines the selectivity of the algorithm: higher values will reject more samples. Note that this method requires additional forward passes that we exclude from the budget as they do not induce the backward passes used for training. We adapt OSGD by truncating the training as in scan-SGD: all the parameters are kept constant but the total number of epochs is reduced to T × B. Additionally, we consider the wall-clock time of each method with respect to a full-budget training as a metric to evaluate the approaches." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL FRAMEWORK", "text": "Datasets We experiment on image classification tasks using the CIFAR-10/100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and mini-ImageNet (Vinyals et al., 2016) datasets. CIFAR-10/100 consist of 50K samples for training and 10K for testing; each divided into 10(100) classes for CIFAR-10(100). The samples are images extracted from ImageNet (Deng et al., 2009) and down-sampled to 32×32. SVHN contains 32×32 RGB images of real-world house numbers divided into 10 classes, 73257 for training and 26032 for testing. Mini-ImageNet is a subset of ImageNet with 50K samples for training and 10K for testing divided into 10 classes and down-sampled to 84×84. Unless otherwise stated, all the experiments use standard data augmentation: random cropping with padding of 4 pixels per side and random horizontal flip.\nTraining details We train a ResNet-18 architecture (He et al., 2016) for 200 epochs with SGD with momentum of 0.9 and a batch size of 128. We use two learning rate schedules: step-wise and linear decay. For both schedules we adopt the budget-aware version proposed by Li et al. (2020b) and use an initial learning rate of 0.1. In the step-wise case, the learning rate is divided by 10 at 1/3 (epoch 66) and 2/3 (epoch 133) of the training. The linear schedule decreases the learning rate at every iteration linearly from the initial value to approximately zero (10^-6) at the end of the training. We always report the average accuracy and standard deviation of the model across 3 independent runs trained on a GeForce GTX 1080Ti GPU." }, { "heading": "4.2 BUDGET-FREE TRAINING FOR IMPORTANCE SAMPLING", "text": "Current methods from the state-of-the-art are optimized with no restriction on the number of training iterations. While this allows the methods to better exploit the training process, it makes it difficult to evaluate their computational benefit. Therefore, Table 1 presents the performance, wall-clock time, and speed-up relative to a full training of the methods presented in Section 3. While the simpler approaches to importance sampling, p-SGD and c-SGD, achieve similar performance to SGD and reduce the computational time by up to 29.08% (9.93%) in CIFAR-10 (CIFAR-100), SB reduces the training time by 39.72% (15.60%) in CIFAR-10 (CIFAR-100) with very small drops in accuracy.\nAll methods train with a step-wise linear learning rate schedule.
SGD corresponds to a standard training as described in Subsection 4.1. p-SGD and c-SGD correspond to the methods described in Section 3 introduced by Chang et al. (2017); for the experiments in Table 1, they train for 200 epochs, where the first 70 epochs consist of a warm-up stage with a uniform sampling strategy, as done in the original paper. For CIFAR-10 we use a budget of 0.8 for p-SGD and 0.7 for c-SGD, and for CIFAR-100 a budget of 0.9 for both approaches (the budgets retaining most accuracy were selected). Finally, SB and OSGD follow the setups described in the corresponding papers, (Jiang et al., 2019) and (Kawaguchi & Lu, 2020), and run on the official code." }, { "heading": "4.3 BUDGETED TRAINING FOR IMPORTANCE SAMPLING", "text": "We adapt importance sampling approaches as described in Section 3 and configure each method to constrain its computation to the given budget. Table 2 shows the analyzed methods' performance under the same budget for a step-wise learning rate (SLR) decay and the linear decay (LLR) proposed by Li et al. (2020b) for budgeted training (described in Section 4.1). Surprisingly, this setup shows that most methods achieve very similar performance given a predefined budget, i.e. we do not observe faster convergence when using importance sampling. Both p-SGD and c-SGD provide marginal or no improvements: p-SGD improves over unif-SGD in CIFAR-10 with a step-wise learning rate schedule, but fails to do so in CIFAR-100, and in the LLR setup it only improves for certain budgets. Similar behaviour is observed in the results from c-SGD. Conversely, SB surpasses the other approaches consistently for SLR and in most cases in the LLR setup. However, SB introduces additional forward passes not counted as budget, while the other methods do not.\nWe consider scan-SGD and unif-SGD as two naive baselines for budgeted training. Despite having similar results (scan-SGD seems to be marginally better than unif-SGD), we use unif-SGD for further experimentation in the following subsections as it adopts a uniform random sampling distribution, which contrasts with the alternative sampling distributions of importance sampling methods.\nAdditionally, Table 2 confirms the effectiveness of a linear learning rate schedule as proposed in (Li et al., 2020b): all methods consistently improve with this schedule and in most cases unif-SGD with LLR performs on par with SB with SLR and surpasses all the other methods when they use SLR." }, { "heading": "4.4 DATA VARIABILITY IMPORTANCE DURING TRAINING", "text": "Core-set selection approaches (Toneva et al., 2018; Coleman et al., 2020) aim to find the most representative samples in the dataset to make training more efficient, while keeping accuracy as high as possible. Figure 1 (top) presents how core-set selection and a randomly chosen subset (Random) both under-perform sampling a different subset every epoch from a uniform distribution (unif-SGD), which approaches a standard training (black dashed line). Therefore, this experiment shows that varying the important subset during training (unif-SGD) is equally efficient from a training-computation perspective, while bringing substantially better accuracy. Moreover, we find data variability to play an important role within importance sampling.
We report our experiments comparing data variability in Figure 1 (bottom), where data variability is measured using the entropy H(c) of the number of times that a sample is presented to the network during training, where c is the normalized N-dimensional vector with the counts of each sample. Figure 1 (bottom) shows how
improvements in p-SGD relate to higher data variability (higher entropy): adding the LLR schedule, the smoothing constant, the averaging of predictions across epochs, and data augmentation to the p-SGD sampling distribution P." }, { "heading": "4.5 DATA AUGMENTATION FOR IMPORTANCE SAMPLING", "text": "Importance sampling approaches usually do not explore the interaction of sampling strategies with data augmentation techniques (Loshchilov & Hutter, 2015; Katharopoulos & Fleuret, 2018; Jiang et al., 2019). To better understand this interaction, we explore interpolation-based augmentations via RICAP (Takahashi et al., 2018) and mixup (Zhang et al., 2018a), and non-interpolation augmentations using RandAugment (Cubuk et al., 2020). We implemented these data augmentation policies as reported in the original papers (see Table 3 for the hyperparameters used in our experiments). Note that in mixup and RICAP we combine 2 and 4 images respectively within each mini-batch, which results in the same number of samples being shown to the network (T × B). Tables 3 and 4 show that data augmentation is beneficial in a budgeted training scenario; in most cases, all the strategies studied increase the performance of the different methods compared to standard data augmentation. The main exception is the lowest budget for SB, where in some cases data augmentation damages performance. In particular, with RICAP and mixup, the improvements from importance sampling approaches are marginal and the naive unif-SGD provides results close to a full training with standard data augmentation. In some cases unif-SGD surpasses full training with standard augmentations, e.g. RICAP with 0.3 and 0.5 of budget in CIFAR-100, and both mixup and RICAP with 0.3 of budget in CIFAR-10. Note that this is even more evident in SVHN, where all the budgets in Table 4 for unif-SGD with RICAP surpass the full training (SGD) with standard data augmentation.\nGiven that the cost of the data augmentation policies considered is negligible (see Appendix B for details on wall-clock times), our results show that an adequate data augmentation can reduce the training time at no loss of accuracy and in some cases with a considerable increase in accuracy. For example, a 70% reduction of the training time (0.3 budget) corresponds to an increase in accuracy from 75.44% to 76.65% in CIFAR-100 and from 94.80% to 94.85% in CIFAR-10. Also, a 50% reduction (0.5 budget) corresponds to an increase in accuracy from 75.44% to 77.78% in CIFAR-100 and from 94.80% to 95.58% in CIFAR-10.\nWe also experimented with extremely low budgets (0.05 and 0.1) and found that data augmentation damages the training of DNNs (see Appendix A). For example, with B = 0.05 there is a drop of approximately 3 points in accuracy in CIFAR-10 and 5 points in CIFAR-100 with respect to the 88.34% and 62.84% achieved by unif-SGD with standard data augmentation." }, { "heading": "5 CONCLUSION", "text": "This paper studies DNN training when the number of iterations is fixed (i.e. budgeted training) and explores the interaction of importance sampling techniques and data augmentation in this setup.
Our experimental results suggest that, in budgeted training, DNNs prefer variability over selection of important samples: adequate data augmentation surpasses state-of-the-art importance sampling methods and allows for up to a 70% reduction of the training time (budget) with no loss of, or even an increase in, accuracy. Given the strong impact that data augmentation has in improving the performance of budgeted training, we consider it interesting future work to explore the limitations found at extreme budgets and to extend the study to large-scale datasets where training DNNs becomes a long-lasting process. Finally, the results presented in this paper motivate research in the direction of exploring training techniques and methodologies to further exploit budgeted training." }, { "heading": "A EXTREME BUDGETS", "text": "Table 5 shows the performance of the different approaches when the budget is further reduced to 0.05 and 0.1. These results show that in this extreme scenario, importance sampling approaches (p-SGD and SB) still bring little improvement over randomly selecting the training samples (unif-SGD). However, additional data augmentation does not bring a significant improvement in accuracy and, in the most challenging cases, hinders convergence." }, { "heading": "B WALL-CLOCK TIME", "text": "Table 6 shows the wall-clock time in minutes corresponding to a budget of 0.3 in CIFAR-100 for unif-SGD, p-SGD, and SB under different data augmentation policies. Note that SB has higher training times due to the additional forward passes introduced to compute the sample importance." } ]
2,020
null
SP:b31d37adc24ddff6ef32dc607fe3c8c29341a81d
[ "The paper presents a tutorial to a video analysis platform software, i.e., VideoFlow, which represents a video analysis task as a computation graph, provides common functions like video decoding and database storage, integrates deep learning frameworks, e.g. Caffe/Pytorch/MXNet as built-in inference engines, and supports heterogeneous hardware such as CPU/GPU/FPGA. VideoFlow also allows the customers to develop operator, decoder, and model inference extensions. The paper presents an example application of person ReID using the VideoFlow platform. The paper claims this VideoFlow software could be used in both academic and industrial scenarios. " ]
The past years have witnessed an explosion of deep learning frameworks like PyTorch and TensorFlow since the success of deep neural networks. These frameworks have significantly facilitated algorithm development in multimedia research and production. However, how to easily and efficiently build an end-to-end visual analysis pipeline with these algorithms is still an open issue. In most cases, developers have to spend a huge amount of time tackling data input and output, optimizing computation efficiency, or even debugging exhausting memory leaks together with algorithm development. VideoFlow aims to overcome these challenges by providing a flexible, efficient, extensible, and secure visual analysis framework for both the academia and industry. With VideoFlow, developers can focus on the improvement of algorithms themselves, as well as the construction of a complete visual analysis workflow. VideoFlow has been incubated in the practices of smart city innovation for more than three years. It has been widely used in tens of intelligent visual analysis systems. VideoFlow will be open-sourced at https://github.com/xxx/videoflow.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2016 }, { "authors": [ "Tianqi Chen", "Mu Li", "Yutian Li", "Min Lin", "Naiyan Wang", "Minjie Wang", "Tianjun Xiao", "Bing Xu", "Chiyuan Zhang", "Zheng Zhang" ], "title": "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1512.01274,", "year": 2015 }, { "authors": [ "Tianqi Chen", "Thierry Moreau", "Ziheng Jiang", "Haichen Shen" ], "title": "Tvm: An end to end ir stack for deploying deep learning workloads on hardware platforms, 2017", "venue": null, "year": 2017 }, { "authors": [ "David Goodwin", "Soyoung Jeong" ], "title": "Maximizing utilization for data center inference with tensorrt inference server", "venue": "Nvidia GTC Silicon Valley,", "year": 2019 }, { "authors": [ "Jian Guo", "He He", "Tong He", "Leonard Lausen", "Mu Li", "Haibin Lin", "Xingjian Shi", "Chenguang Wang", "Junyuan Xie", "Sheng Zha" ], "title": "Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Alexander Hermans", "Lucas Beyer", "Bastian Leibe" ], "title": "In defense of the triplet loss for person reidentification", "venue": "arXiv preprint arXiv:1703.07737,", "year": 2017 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In Proceedings of the 22nd ACM international conference on Multimedia,", "year": 2014 }, { "authors": [ "Yutian Lin", "Liang Zheng", "Zhedong Zheng", "Yu Wu", "Zhilan Hu", "Chenggang Yan", "Yi Yang" ], "title": "Improving person re-identification by attribute and identity learning", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: Single shot multibox detector", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Camillo Lugaresi", "Jiuqiang Tang", "Hadon Nash", "Chris McClanahan", "Esha Uboweja", "Michael Hays", "Fan Zhang", "Chuo-Ling Chang", "Ming Guang Yong", "Juhyun Lee", "Wan-Teh Chang", "Wei Hua", "Manfred Georg", "Matthias Grundmann" ], "title": "Mediapipe: A framework for building perception", "venue": "pipelines. 
CoRR,", "year": 2019 }, { "authors": [ "Anton Milan", "Laura Leal-Taixé", "Ian Reid", "Stefan Roth", "Konrad Schindler" ], "title": "Mot16: A benchmark for multi-object tracking", "venue": "arXiv preprint arXiv:1603.00831,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Kaustubh Purandare" ], "title": "An introduction to deepstream sdk", "venue": "Nvidia GTC,", "year": 2018 }, { "authors": [ "Henning Schulzrinne", "Anup Rao", "Robert Lanphier" ], "title": "Real time streaming protocol (rtsp)", "venue": null, "year": 1998 }, { "authors": [ "Jifei Song", "Yongxin Yang", "Yi-Zhe Song", "Tao Xiang", "Timothy M Hospedales" ], "title": "Generalizable person re-identification by domain-invariant mapping network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yifan Sun", "Liang Zheng", "Yi Yang", "Qi Tian", "Shengjin Wang" ], "title": "Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Han Vanholder" ], "title": "Efficient inference with tensorrt, 2016", "venue": null, "year": 2016 }, { "authors": [ "Longhui Wei", "Shiliang Zhang", "Hantao Yao", "Wen Gao", "Qi Tian" ], "title": "Glad: Global-local-alignment descriptor for pedestrian retrieval", "venue": "In Proceedings of the 2017 ACM on Multimedia Conference,", "year": 2017 }, { "authors": [ "Longhui Wei", "Shiliang Zhang", "Wen Gao", "Qi Tian" ], "title": "Person trasfer gan to bridge domain gap for person re-identification", "venue": "In Computer Vision and Pattern Recognition, IEEE Conference", "year": 2018 }, { "authors": [ "Tianyuan Yu", "Da Li", "Yongxin Yang", "Timothy M Hospedales", "Tao Xiang" ], "title": "Robust person reidentification by modelling feature uncertainty", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Haiyu Zhao", "Maoqing Tian", "Shuyang Sun", "Jing Shao", "Junjie Yan", "Shuai Yi", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Spindle net: Person re-identification with human body region guided feature decomposition and fusion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The success of computer vision techniques is spawning intelligent visual analysis systems in real applications. Rather than serving individual models, these systems are often powered by a workflow of image/video decoding, several serial or parallel algorithm processing stages, as well as sinking analysis results. The varied visual analysis requirements in different real scenarios put forward a high demand on a framework for fast algorithm development, flexible pipeline construction, efficient workflow execution, as well as secure model protection.\nThere exist some frameworks approaching some of the above mentioned targets, like DeepStream (Purandare, 2018) and MediaPipe (Lugaresi et al., 2019). DeepStream is on top of GStreamer (GSTREAMER, 1999), which primarily targets audio/video media editing rather than analysis. MediaPipe can be used to build prototypes to polished cross-platform applications and measure performance. Though it is flexible and extensible on calculators, efficiency, model security, and extension on more aspects are expected by real online services in industry.\nIn this paper, we present VideoFlow, to meet the visual analysis requirements for both algorithm development and deployment in real systems with the following highlights.\nFlexibility. VideoFlow is designed around stateful Computation Graph and stateless Resource. Computation graph abstracts the visual processing workflow into a stateful directed acyclic graph. Developers can focus on the implementation of processing units (graph nodes) and the construction of the whole workflow. Resource is a stateless shared computation module of computation graphs. The most typical resource is deep learning model inference. Resource decouples the stateless visual processing components from the whole complicated visual analysis pipeline, helping developers focus on the optimization of these computation or Input/Output(IO) intensive implementation.\nEfficiency. VideoFlow is designed for better efficiency from four levels. (1) Resource-level: resources can aggregate the scattered computation requests from computation graph instances into intensive processing for better efficiency. (2) Video-level: all videos are analyzed in parallel in a shared execution engine. (3) Frame-level: video frames are parallelized on operations which are irrelevant to frame orders. (4) Operator-level: visual analysis is a multi-branch pipeline in most\ncases. The different branches and different operators of each branch without sequential dependency are analyzed in parallel.\nExtensibility. VideoFlow is designed from the beginning to be as modular as possible to allow easy extension to almost all its components. It can be extended to different hardware devices like Graphic Processing Units(GPU), Neural Processing Unit (NPU), etc. It can be hosted on either x86 or ARM platforms. Developers can customize their own implementations with VideoFlow as a dependent library. The extended implementations can be registered back to VideoFlow as plugins at runtime.\nSecurity. Model protection is an important problem in industry. VideoFlow encodes model files into encrypted binary codes as part of the compiled library. The secret key can be obscured into the same library, or exported to a separate key management service (KMS). 
At runtime, VideoFlow decrypts the models and verifies authorization from a remote service periodically.\nVideoFlow has been incubated in smart city innovation practices for more than three years. It is designed for computer vision practitioners, including engineers, researchers, students, and software developers. The targets of VideoFlow include: 1) to free developers from exhausting data loading/sinking and parallel programming/debugging so they can focus on the optimization of algorithms; 2) to enable easy extension of video decoding, deep model inference and algorithm implementation; 3) to provide a highly efficient framework for large-scale visual processing in industry rather than just experimental prototypes; 4) to protect the intellectual property of models and algorithms to make sure that they can only work with authorization." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 DEEP LEARNING FRAMEWORKS", "text": "Almost all existing deep learning frameworks like Caffe (Jia et al., 2014), TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2017), and MXNet (Chen et al., 2015) describe networks in directed graphs or even dynamic graphs. VideoFlow draws lessons from this successful design for visual analysis. The difference is that the basic units in deep networks are low-level operations like convolutions, compared to higher-level processing like object tracking in VideoFlow. The data transferred between operators in VideoFlow is also much more complex than the Tensor in deep learning.\nAs to model inference, there are some specially optimized engines, like TensorRT (Vanholder, 2016) and MKL-DNN/oneAPI (Intel) by hardware manufacturers. In the open source community, developers put forward TVM for easy extension to different hardware for more effective inference (Chen et al., 2017). On top of these engines, there are some serving platforms for individual models rather than workflow construction, like TensorFlow Serving (Google, 2016) and the NVIDIA Triton Inference Server (Goodwin & Jeong, 2019). VideoFlow integrates these inference engines as Resources through their C++ interfaces." }, { "heading": "2.2 VISUAL ANALYSIS FRAMEWORKS", "text": "Recent years have witnessed some visual analysis frameworks. Nvidia launched the DeepStream project early on for video analysis on GPU (Purandare, 2018). It is oriented towards, as well as optimized for, GPU and TensorRT, ignoring the growing variety of heterogeneous hardware devices. Besides, it is built on top of GStreamer (GSTREAMER, 1999), which primarily targets audio/video media editing rather than analysis, limiting its flexibility and extensibility. The gst-video-analytics project (Intel, 2019) is also built on top of GStreamer (Deuermeyer & Andrey). Google proposed MediaPipe, which also builds computation graphs for arbitrary streaming data processing (Lugaresi et al., 2019). MediaPipe can be used to build anything from prototypes to polished cross-platform applications and to measure performance. Though it is flexible and extensible through its calculators, real online visual analysis expects extension in more aspects, more efficiency optimization, and model security protection. Compared to MediaPipe, VideoFlow features these advantages for better application in both academia and industry. Another framework also named Videoflow (de Armas, 2019) is designed to facilitate easy and quick definition of computer vision stream processing pipelines. However, it is just a prototype experimental platform, with limitations on extensibility, efficiency, and security."
}, { "heading": "3 ARCHITECTURE", "text": "VideoFlow is oriented around stateful Computation Graph and stateless Resource with a welloptimized execution engine. Computation graph is a directed acyclic graph describing the whole workflow of video or image analysis. As the two main components of a graph, Node and Edge denote visual processing operators and data flow between operators, respectively. Resource is shared for graph irrelevant computation. The architecture is shown in Figure 1." }, { "heading": "3.1 OPERATOR", "text": "Operator is the basic unit of visual analysis workflow. An operator depends on the outputs of its parent operators. Its own outputs can be consumed by arbitrary number of child operators. According to the number of inputs an outputs, operators are categorized as follows:\n• Entrypoint: operators that have zero inputs. This is the start of a computation graph. Each graph can have only one entrypoint.\n• Processor: operators that have at least one input and at least one output. Processors occupy most of the workflow of visual analysis. It’s also the main kind of operator with the highest demand on easy extension.\n• Sinker: operators that have zero outputs. This is the end of a computation graph. A graph can have multiple sinkers." }, { "heading": "3.2 DATA FLOW", "text": "Data flow is the edge connection between two operators (nodes). An operator may generate several number of data with different types for its child nodes. Data flow is a collection of arbitrary number of data pointers of arbitrary type (vector<void*> in our C++ implementation) in VideoFlow. VideoFlow guarantees that the incoming data pointers are always safe to be read. Developers do not need to care how many other operators are also consuming the data, or whether the data should be released during the workflow." }, { "heading": "3.3 RESOURCE", "text": "Resource is the stateless computation unit shared by graphs. The most representative resource is deep model inference. Resource is abstracted due to three main reasons. Firstly, many operations like deep model inference and data sinking to databases have their own independent semantics. They are irrelevant to whether it is used for video or image processing, which step of the whole pipeline invokes the operation, or how the outputs will be post-processed. Secondly, these operations are often computation or IO intensive. Leaving them in the operators will incur bottlenecks on CPU, memory or network bandwidth due to large amount of resource competition. Gathering the scattered but shared requests from different graphs for uniform processing proves to be a good practice to improve efficiency. Thirdly, resource can be improved without affecting the visual analysis logic.\nFor example, we can accelerate the inference speed of a PyTorch model by switching to a TensorRT worker. We can change to a more efficient database connector for more real-time data sinking. Without the abstraction of these resources, all affected operators have to be re-implemented to earn the benefits." }, { "heading": "3.4 GRAPH CONSTRUCTION", "text": "Computation graph is described in json format with the following fields.\n“resource” describes the resources that will be used by operators. Each resource should have two fields: “type” to create the correct class instance and “params” to specify the resource configurations.\n“ops” describes the operators that will be used to construct computation graphs. Operators can be used multiple times by different graphs. 
As with resources, each operator should have two fields: “type” and “params”.\n“graph” is the place to define computation graphs. Each graph definition is a JSON dictionary of key-value pairs. The key is the operator name; its value is a list of operator names denoting its child nodes.\n“subgraphs”[optional] is used to re-use resources, operators and graphs from other graph configuration files.\n“libs”[optional] specifies external dynamic libraries that should be loaded by VideoFlow, especially the extended libraries in Figure 3.\n“config”[optional] is for global settings, currently including the number of parallel image processing threads and the number of frames for parallel video processing.\nAn example file is provided in the supplementary material to show the person reidentification workflow (Section 5)." }, { "heading": "3.5 EXECUTION SCHEDULING", "text": "With the graph defined and constructed, execution scheduling determines which operator should be executed. In real cases, there can be multiple computation graph instances running in parallel, each with either shared or different structures. Figure 2 shows the execution scheduling of these graphs. Each graph has several replicas, with each replica called an order. Video frames are actually processed in these graph replicas/orders. The replicas are processed in parallel for frame-level parallelism. Each order starts from the Forward function of the entrypoint node.\nForward. As Figure 2 shows, the forward function first checks if the current operator is ready to be executed. The readiness checking includes: 1) all parents of the current node have finished on this order; 2) the previous order of the current node has been executed if the current node is an ordered operator. If ready, the forward function puts its own processing function into the task queue of the execution engine, waiting to be executed. The processing function first finishes the internal processing logic of the operator. After that, it calls the Forward function of its following operators. If it is the leaf operator, it calls its own Backward function. Forward of entrypoints is specially implemented with a separate thread retrieving and dispatching data to idle orders.\nBackward is the process to reset the node so that it is ready to process later frames. The backward function first checks if all its children have been reset. If so, it resets its forward status. Then it continues to call the Backward of all its parents.\nExecution Engine. The processing functions of operators are put into a task queue of the execution engine. All processing units share the same interface. The execution engine does not know which order of which graph a processing function comes from. All orders and all graphs are executed concurrently once they are put into the queue. Inside the engine there is a thread pool, with all threads fetching and executing tasks from the queue.
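The sketch below is a minimal, single-process Python analogue of this Forward/Backward protocol. VideoFlow itself is C++, so every class, attribute, and function name here is illustrative only, and the ordered-operator bookkeeping is simplified to a monotone counter instead of the per-round tracking a real engine would use.

```python
import queue
import threading

task_queue = queue.Queue()                 # shared execution-engine queue

class Node:
    def __init__(self, fn, parents=(), ordered=False):
        self.fn, self.parents, self.ordered = fn, list(parents), ordered
        self.children = []
        for p in self.parents:
            p.children.append(self)
        self.done = {}          # order index -> finished flag for the current frame
        self.last_order = -1    # highest order executed so far (for ordered ops)

    def ready(self, order):
        if not all(p.done.get(order, False) for p in self.parents):
            return False        # a parent has not finished on this order
        if self.ordered and order - 1 > self.last_order:
            return False        # the previous order has not been executed yet
        return True

    def forward(self, order):
        if self.ready(order):
            task_queue.put(lambda: self._process(order))

    def _process(self, order):
        self.fn(order)                     # the operator's own processing logic
        self.done[order] = True
        self.last_order = max(self.last_order, order)
        if not self.children:
            self.backward(order)           # leaf: start resetting this replica
        for child in self.children:
            child.forward(order)

    def backward(self, order):
        if all(not c.done.get(order, False) for c in self.children):
            self.done[order] = False       # reset: ready to process a later frame
            for p in self.parents:
                p.backward(order)

def worker():                              # one thread of the engine's pool
    while True:
        task_queue.get()()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
```
" }, { "heading": "3.6 DEEP MODEL INFERENCE", "text": "Deep model inference is the most typical computation-intensive resource in visual analysis. The built-in implementation covers deep learning frameworks (Caffe, PyTorch, MXNet)1 and acceleration toolkits (TensorRT, TVM, etc.). Developers can customize their own workers easily. VideoFlow adopts dynamic batching and in-device processing to fully utilize the capability of heterogeneous hardware devices. On GPU, it supports multi-stream for better performance.\nDynamic Batching. Heterogeneous hardware devices often need to work in the batch mode for full utilization.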
A single call with a batch of data is much more efficient than multiple calls with individual data items. In VideoFlow, the input of deep learning inference is a batch of DLTask, which is a structure defined for the deep learning input and output of a single task. VideoFlow defines a thread-safe queue to collect the scattered DLTasks from operators to form a DLTaskBatch. Deep inference workers request DLTaskBatches from the queue with a timeout mechanism. The timeout is essential since videos, frames, and operators are not strictly time-aligned: it collects as many tasks as possible within a limited period while keeping the processing latency bounded.\nIn-device processing. Data transfer between host and device memory is costly and time-consuming. In VideoFlow, frames are defined with hardware contexts. It provides hardware-specific image operations like resizing, cropping, pixel format conversion, mean subtraction, scaling, etc. All these operations, including dynamic batching, are conducted on their most appropriate hardware devices according to their hardware contexts. GPU-decoded frames are pre-processed and analyzed on the same GPU card. CPU-decoded frames are pre-processed on the CPU, but analyzed on the device with the lowest expected latency. The in-device processing lowers the system cost significantly, especially the CPU cost, as will be verified in Section 5.
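The timeout-bounded batching queue just described can be sketched in a few lines. The following is an assumed Python analogue (names such as BatchingQueue are ours; VideoFlow's real implementation is a C++ thread-safe queue over DLTask structures):

```python
import queue
import time

class BatchingQueue:
    """Collects scattered single-task requests into one batch, blocking for the
    first task and then gathering more until max_batch or the timeout expires."""
    def __init__(self, max_batch, timeout_s):
        self.q = queue.Queue()
        self.max_batch, self.timeout_s = max_batch, timeout_s

    def put(self, task):
        self.q.put(task)                 # operators enqueue individual tasks

    def get_batch(self):
        batch = [self.q.get()]           # block until at least one task arrives
        deadline = time.monotonic() + self.timeout_s
        while len(batch) < self.max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                    # latency bound reached: ship what we have
            try:
                batch.append(self.q.get(timeout=remaining))
            except queue.Empty:
                break
        return batch                     # analogue of a DLTaskBatch
```
\n1Currently TensorFlow is partially supported due to difficulties in integrating its C++ interface and library." }, { "heading": "3.7 EXTENSION OF VIDEOFLOW", "text": "Library extension is often intrusive. Developers have to write and compile their code together within the libraries. Though it works, the drawbacks are: 1) the total compilation time gets longer as the library becomes more and more complex; 2) it is really hard to integrate extensions from different developers unless they share the same code repository. Moreover, the different extensions may come from different teams, each with its own concerns about code and model protection.\nVideoFlow provides a more convenient way for extension, as Figure 3 shows. Developers can customize their own implementations and generate their own libraries. These libraries can have their own model protection or authorization. With the registration mechanism, VideoFlow will load the libraries specified in the “libs” field of the graph configuration file at runtime. Throughout VideoFlow, registration is widely used for the extension of all modules. In this way, VideoFlow enables coding separately, but working together." }, { "heading": "3.8 OPERATOR EXTENSION", "text": "The operator interface is highly simplified for easy customization. All operators derive from the same base class Op. As the start of graphs, customization of entrypoints requires a bit more care regarding data input and workflow interruption. These are highly dependent on how the graph will interact with outside callers. The built-in entrypoints have covered most cases of visual analysis. Except for entrypoints, there are only five steps to customize a new operator: construction/destruction, initialization, auxiliary memory management, processing, and registration. We provide detailed example code of an object detection operator in the supplementary material.\nConstruction/Destruction. In the construction function, developers should first make clear whether the current operator can be parallelized on the frame level, which is called ordered.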
Processing that is independent of the frame sequence order should be declared as unordered for better parallelism, boosting the overall efficiency. The second step is to declare the number of parent operators and the input data type list. The last is the output type list of the current operator. Note that “*” is allowed during type specification as a wildcard type. The output types of the parent operators will be checked against the input types of the child operators during graph construction.\nInitialization. The Init function is used to initialize settings after the graph has been constructed, but before the actual analysis begins.\nAuxiliary Memory Management. Auxiliary memory is the frequently used temporary memory during processing. The life-cycle of auxiliary memory is the same as that of the graph. Auxiliary memory can be used as the output data. Specifically, developers need to override the MallocResource and ReleaseResource functions for auxiliary memory management.\nProcessing. This is the place to process the input data and generate the output data. The run function will be called again and again during visual analysis. All parallelism is optimized around this function, though developers do not need to care about the detailed mechanism. They only need to remember not to write to shared memory (like class member variables or global variables) if the operator is declared as unordered, for thread-safety, since there can be multiple threads executing the same run function on different frames. Note that the auxiliary memory will be allocated for each order of the graph. It is safe to operate on the auxiliary memory without thread-safety concerns, whether the operator is ordered or not.\nRegistration. Registration is just a macro to register the operator to VideoFlow so that it can be constructed according to its name; the five steps are sketched below.
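Since the real Op interface is C++ (and the authors' full example is in the supplementary material), the following is only a self-contained Python analogue of the five-step pattern; every name, type string, and helper here is illustrative, not VideoFlow's API:

```python
OP_REGISTRY = {}

def register_op(name):
    """Analogue of the registration macro: map a name to an operator class."""
    def wrap(cls):
        OP_REGISTRY[name] = cls
        return cls
    return wrap

class Op:
    def __init__(self, ordered, input_types, output_types):
        self.ordered = ordered               # frame-order dependent or not
        self.input_types = input_types       # checked against parents' outputs
        self.output_types = output_types

@register_op("ToyDetectionOp")
class ToyDetectionOp(Op):
    def __init__(self):
        # Step 1 (construction): declare parallelism and I/O types up front.
        super().__init__(ordered=False, input_types=[["Frame"]],
                         output_types=["DetectionList"])

    def init(self):
        # Step 2 (initialization): one-time setup after graph construction.
        self.threshold = 0.5

    def malloc_resource(self):
        # Step 3 (auxiliary memory): per-order scratch space, lock-free to use.
        self.scratch = []

    def release_resource(self):
        self.scratch = None

    def run(self, inputs):
        # Step 4 (processing): may run concurrently on different frames, so an
        # unordered operator writes only its auxiliary memory, never shared state.
        frame = inputs[0]
        self.scratch = [b for b in frame.get("boxes", [])
                        if b["score"] >= self.threshold]
        return [self.scratch]

# Step 5 (registration) happened via the decorator; construct by name:
op = OP_REGISTRY["ToyDetectionOp"]()
op.init(); op.malloc_resource()
print(op.run([{"boxes": [{"score": 0.9}, {"score": 0.2}]}]))
```
" }, { "heading": "3.9 DECODER EXTENSION", "text": "Video decoding needs to tackle various video sources like online camera recordings, web streaming, local files, or even non-standard video transfer protocols. Besides, it should make full use of the hardware decoding modules that are widely equipped on modern heterogeneous hardware devices. VideoFlow abstracts video decoding into a frontend and a backend. The frontend tackles the various video sources. For the backend, there can be different implementations for different hardware devices. These backends register to DecoderBackend with their supported codecs, decoding capabilities, and priority.\nTo extend video decoding, developers can choose to implement a new video frontend or a new decoding backend. The interfaces of both the frontend and the backend are simple (Open, Put, Get, Close). VideoFlow makes sure that frontend developers do not need to care about decoding acceleration with hardware devices, and backend developers do not need to care about where the video data comes from or how to demux the data packets." }, { "heading": "3.10 DEEP MODEL INFERENCE EXTENSION", "text": "VideoFlow provides built-in inference support for Caffe, MXNet, and PyTorch, as well as TensorRT and TVM (Chen et al., 2017). Nevertheless, there are still three main scenarios for extending deep model inference: a new inference backend, output post-processing, and input pre-processing.\nInference Backend means extending to new deep learning or hardware acceleration frameworks. There are three functions to be overridden: Init, FeedData, and process.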
The Init function should determine the hardware context, parse the models, check the model input and output, and allocate input/output memory buffers. FeedData is used to pre-process and feed a batch of data to the inference framework. process is a private function to invoke the inference after the input data is ready. The output is written back to DLTask in this step.\nOutput Post-processing. This is quite common for frameworks like TensorRT, since a large number of operators are still not supported. In many cases, developers can accelerate the backbone network with these acceleration toolkits. The remaining unsupported procedures can be implemented by overriding the process function of existing inference backends.\nInput Pre-processing. VideoFlow provides scaling with mean subtraction and data copy as the built-in pre-processing methods for input image data. To customize a new pre-processing function, VideoFlow defines an empty template structure named PreProcessFunc with device type, pixel format, and pre-processing type as its template parameters. Developers just need to implement a new template specialization for their own pre-processing." }, { "heading": "3.11 SECURITY", "text": "Model protection is a key challenge in deployment. VideoFlow provides a security mechanism as Figure 3 shows. vf-codify converts model files into encrypted source code. The encryption key is obfuscated in the source code to avoid library file parsing. In real production deployment, the key should be deposited into a Key-Management-Service (KMS) for a higher security level.\nAt runtime, the security module tries to request authorization and the decryption key from a remote service. If authorized successfully, it will decrypt the models into memory. Users may still be concerned that others could peek at their models in memory. This problem does exist in real cases. Attackers may dump the whole memory and steal the model parameters. A possible solution may be that hardware manufacturers provide a safe memory region for model parameters, or part of the model parameters. Anyway, security is an endless game of attack and defense. We welcome more open source contributions for better security." }, { "heading": "4 TOOLS", "text": "VideoFlow provides a graph editor to help users write their computation graphs, a visualizer to visualize the real-time analysis results of algorithms, like the detected objects and object trajectories, as well as a profiler to show the running status of each video channel and the deep learning models. These tools can significantly benefit users for fast pipeline construction and optimization. A detailed illustration of the tools is given in the appendix." }, { "heading": "5 PERSON REID APPLICATION EXAMPLE", "text": "In this section, we show an application example with a person reidentification (ReID) system. We benchmark several measurable aspects of VideoFlow to verify its efficiency." }, { "heading": "Table 2 rows (flattened during extraction; per row: frame-parallel orders | TensorRT / GPU-decoding / in-device-processing flags (X) | CPU usage | three further utilization percentages whose labels were lost | concurrent real-time channels | FPS): 1 | - - - | 95% | 9.0% 30% 12% | 1 | 17; 4 | - - - | 290% | 9.5% 70% 30% | 2 | 25; 4 | X - - | 1560% | 9.7% 56% 30% | 9 | 25; 4 | X X - | 1100% | 10.2% 70% 39% | 10 | 25; 4 | X X X | 800% | 10.2% 70% 39% | 10 | 25; 8 | X X X | 950% | 11.1% 84% 45% | 12 | 25", "text": "Person ReID is widely explored in the recent computer vision community (Song et al., 2019; Yu et al., 2019; Sun et al., 2018; Wei et al., 2018; Zhao et al., 2017; Wei et al., 2017; Hermans et al., 2017).
A typical person ReID video processing pipeline is shown in Figure 4 in the appendix. We sample once every 5 frames due to the large content similarity. The decoded frames are processed with person detection and person tracking. After that, two branches extract the ReID feature and the person attributes, respectively. Sinkers include a VisualSinker for real-time visualization and an FPSSinker to calculate the processing speed (a possible graph configuration for this pipeline is sketched below). The pipeline is deployed on a cloud virtual machine with an Intel Xeon E5-2682 CPU (16 cores), 64 GB memory, and 1 Nvidia T4 GPU card.
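To make the graph-config format of Section 3.4 concrete, here is one way this pipeline could be declared, written as a Python dict mirroring the JSON fields. The operator and resource type names are hypothetical; the authors' actual configuration file is in the supplementary material.

```python
# Hypothetical ReID graph configuration in the Section 3.4 format.
reid_graph = {
    "resource": {
        "det_model":  {"type": "TensorRTWorker", "params": {"model": "detector"}},
        "reid_model": {"type": "TensorRTWorker", "params": {"model": "reid"}},
        "attr_model": {"type": "TensorRTWorker", "params": {"model": "attributes"}},
    },
    "ops": {
        "decode": {"type": "RtspEntrypoint",  "params": {"sample_every": 5}},
        "detect": {"type": "PersonDetector",  "params": {"resource": "det_model"}},
        "track":  {"type": "PersonTracker",   "params": {}},
        "reid":   {"type": "ReIDFeature",     "params": {"resource": "reid_model"}},
        "attr":   {"type": "PersonAttribute", "params": {"resource": "attr_model"}},
        "visual": {"type": "VisualSinker",    "params": {}},
        "fps":    {"type": "FPSSinker",       "params": {}},
    },
    "graph": {                        # node -> list of child nodes
        "decode": ["detect"],
        "detect": ["track"],
        "track":  ["reid", "attr"],   # two parallel branches after tracking
        "reid":   ["visual", "fps"],
        "attr":   ["visual", "fps"],
    },
    "config": {"orders": 8},          # frame-level parallelism (number of orders)
}
```
\nData. We choose the MOT16-07 video sequence from the MOT challenge (Milan et al., 2016). We convert its resolution to 1920x1080 and push it with FFmpeg in a loop as an RTSP video service (Schulzrinne et al., 1998). Different channels of VideoFlow pull the same video stream to mimic parallel video analysis.\nModels. The three models are trained with Caffe, PyTorch, and MXNet to verify the wide support of deep learning frameworks, as shown in Table 1. These models are accelerated with TensorRT for fast inference.\nPipeline Benchmark. Table 2 shows the efficiency evaluation of the whole pipeline. We choose some optimization aspects that can be quantified to verify the efficiency. Without frame-parallel processing, even a single channel of video cannot be processed in real-time. Note that the workflow in this example is not that complicated. With four orders of frame parallelism, VideoFlow can process 2 video streams. TensorRT significantly boosts the capability to 9 channels, raising the CPU cost to almost 16 cores. GPU decoding reduces CPU consumption by 5 cores, with 11 cores still used, mainly for person tracking. With in-device processing and larger frame parallelism, we can reduce CPU consumption by about 1.5 cores while improving concurrent real-time video processing to 12 channels." }, { "heading": "6 CONCLUSION", "text": "In this paper, we present VideoFlow, a computation-graph- and resource-based framework for visual analysis. We illustrate its superiority from the aspects of flexibility, efficiency, extensibility, and security. VideoFlow can help developers focus on algorithm improvement and the construction of visual analysis workflows. It is a carefully designed and implemented framework for the ease of visual analysis without bias towards any hardware manufacturers, devices, platforms, or computer vision frameworks. It can be used in both academic and industrial scenarios.\nVideoFlow has been widely used in intelligent visual analysis systems. We will open-source the project to welcome community contributions of more operator implementations, adaptations to more hardware devices, as well as better optimization of the framework itself." }, { "heading": "A APPENDIX", "text": "The appendix describes the tools provided by VideoFlow for computation graph construction, real-time analysis visualization, and operator performance profiling.\nA.1 GRAPH EDITOR\nGraph editor aims to help users write their computation graphs. As shown in Figure 4, it comprises three main panels: operator list, graph editor, and graph visualizer. The operator list panel shows the list of all operators inside a library. Users can import the metadata of several libraries into this tool. Libraries can be switched in this panel. The graph editor panel supports smart editing of graphs with auto-completion, grammar checking and auto-formatting.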
The graph visualizer panel synchronizes the graph definition in the editor into a graphical view to help users review the whole processing workflow.\nA.2 VISUALIZER\nVisualizer is actually an image stream player, with the ability to draw rectangles and text on images. It is used to visualize the real-time analysis results of algorithms, like the detected objects, object trajectories, and object attributes, as shown in Figure 5. It is especially useful during the algorithm development period. In production, it can be used to check if algorithms are running or if they perform well. Currently it is an image stream player for better alignment of video frames and analysis results. A video player is expected from the community with the functionality to display frames together with the time-aligned elements from algorithms.\nA.3 PROFILER\nOne of the major tasks in development is to profile the performance of the overall system as well as each of its components. The profiler of VideoFlow has two views: channel view and resource view. Channel view displays the running status of each video channel. As shown in Figure 6, there are 12 orders for the selected channel. The horizontal axis is the timeline. The green blocks are the executed operators as well as their execution times. The figure also verifies frame-level parallelism. The resource view currently displays the running status of the deep learning models, with the horizontal axis denoting the timeline and the vertical axis the batch size of each inference. This can help to check if heterogeneous hardware resources are fully utilized or overloaded. In cases where a graph uses many deep models, it can help to analyze the bottleneck of the overall system throughput." } ]
2,020
VIDEOFLOW: A FRAMEWORK FOR BUILDING VISUAL ANALYSIS PIPELINES
SP:f67271e00a669e2b64580762c04eb7b88965061d
[ "The paper proposed a regularizer loss as an alternative to adversarial training to improve the robustness of neural networks against adversarial attacks. The new regularizer is derived from a second-order Tyler series expansion of the loss function in the model robustness optimization problem. Clear mathematical derivation and thoughtful empirical experimental results are provided. The proposed method outperformed baseline adversarial training methods with better or on part robustness and higher standard accuracy." ]
Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples. In this work, we propose a novel regularization approach as an alternative. To derive the regularizer, we formulate the adversarial robustness problem under the robust optimization framework and approximate the loss function using a second-order Taylor series expansion. Our proposed second-order adversarial regularizer (SOAR) is an upper bound based on the Taylor approximation of the inner-max in the robust optimization objective. We empirically show that the proposed method improves the robustness of networks against the ℓ∞ and ℓ2 bounded perturbations on CIFAR-10 and SVHN.
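The construction described in this abstract — a second-order Taylor expansion of the inner maximization — can be illustrated mechanically. The snippet below is a generic Taylor-based penalty for an ℓ∞ ball, showing the shape of the idea (linear term maximized in closed form, curvature term via a Hessian-vector product); it is not the exact SOAR upper bound, which the paper derives, and the function name is ours.

```python
import torch

def taylor_penalty(model, loss_fn, x, y, eps):
    """Second-order Taylor view of the inner max:
    l(x + d) ~ l(x) + d^T g + 0.5 d^T H d,  with ||d||_inf <= eps.
    The linear term is maximized exactly by d* = eps * sign(g), giving
    eps * ||g||_1; the curvature term is evaluated at d* via an
    autograd Hessian-vector product (double backprop)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]
    d = eps * g.sign().detach()                 # maximizer of the linear term
    hd = torch.autograd.grad((g * d).sum(), x, create_graph=True)[0]  # H d
    return (g * d).sum() + 0.5 * (d * hd).sum() # first- plus second-order term

# Usage sketch: total = loss_fn(model(x), y) + lam * taylor_penalty(model, loss_fn, x, y, eps)
```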
[]
[ { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a query-efficient black-box adversarial attack via random search", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "John E Beasley" ], "title": "Heuristic algorithms for the unconstrained binary quadratic programming problem", "venue": null, "year": 1998 }, { "authors": [ "Aharon Ben-Tal", "Laurent El Ghaoui", "Arkadi Nemirovski" ], "title": "Robust optimization, volume 28", "venue": null, "year": 2009 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Chris M Bishop" ], "title": "Training with noise is equivalent to tikhonov regularization", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Qi-Zhi Cai", "Chang Liu", "Dawn Song" ], "title": "Curriculum adversarial training", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "venue": "arXiv preprint arXiv:1907.02044,", "year": 2044 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "arXiv preprint arXiv:2003.01690,", "year": 2020 }, { "authors": [ "Gavin Weiguang Ding", "Kry Yik Chau Lui", "Xiaomeng Jin", "Luyu Wang", "Ruitong Huang" ], "title": "On the sensitivity of adversarial robustness to input data distributions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Mma training: Direct input space margin maximization through adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Gavin Weiguang Ding", "Luyu Wang", "Xiaomeng Jin" ], "title": "AdverTorch v0.1: An adversarial robustness toolbox based on pytorch", "venue": "arXiv preprint arXiv:1902.07623,", "year": 2019 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Angus Galloway", "Anna Golubeva", "Thomas Tanay", "Medhat Moussa", "Graham W Taylor" ], "title": "Batch normalization is a cause of adversarial vulnerability", "venue": "arXiv preprint arXiv:1905.02161,", "year": 1905 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Jacob Gardner", "Yurong You", "Andrew Gordon Wilson", 
"Kilian Weinberger" ], "title": "Simple black-box adversarial attacks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd edition)", "venue": null, "year": 2009 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvári" ], "title": "Learning with a strong adversary", "venue": "arXiv preprint arXiv:1511.03034,", "year": 2015 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "John E Laird" ], "title": "The Soar cognitive architecture", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Ricardo M Lima", "Ignacio E Grossmann" ], "title": "On the solution of nonconvex cardinality boolean quadratic programming problems: a computational study", "venue": "Computational Optimization and Applications,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sara Sabour", "Yanshuai Cao", "Fartash Faghri", "David J Fleet" ], "title": "Adversarial manipulation of deep representations", "venue": "arXiv preprint arXiv:1511.05122,", "year": 2015 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Uri Shaham", "Yutaro Yamada", "Sahand Negahban" ], "title": "Understanding adversarial training: Increasing local stability of supervised models through robust optimization", "venue": null, "year": 2018 }, { "authors": [ "Carl-Johann Simon-Gabriel", "Yann Ollivier", "Leon Bottou", "Bernhard Schölkopf", "David Lopez-Paz" ], "title": "First-order 
adversarial vulnerability of neural networks and input dimension", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chawin Sitawarin", "Supriyo Chakraborty", "David Wagner" ], "title": "Improving adversarial robustness through progressive hardening", "venue": "arXiv preprint arXiv:2003.09347,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Roman Vershynin" ], "title": "High-dimensional probability. Cambridge University Press: An Introduction with Applications in Data Science, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jianyu Wang", "Haichao Zhang" ], "title": "Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Zico Kolter" ], "title": "Wasserstein adversarial examples via projected sinkhorn iterations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yao-Yuan Yang", "Cyrus Rashtchian", "Hongyang Zhang", "Ruslan Salakhutdinov", "Kamalika Chaudhuri" ], "title": "Adversarial robustness through local lipschitzness", "venue": "arXiv preprint arXiv:2003.02460,", "year": 2020 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "by Simon-Gabriel" ], "title": "2019), that empirically showed adversarial robustness through regularization", "venue": null, "year": 2019 }, { 
"authors": [ "Cai" ], "title": "Sitawarin et al. (2020) study the connection between curriculum learning (Bengio et al., 2009) and training using adversarial examples with increasing difficulties. Our idea is similar. The model is first optimized for an easier task (standard training), and then regularized for a related, but more difficult task (improving adversarial robustness)", "venue": null, "year": 2018 }, { "authors": [ "Wong" ], "title": "2019a) use early-stopping as a simple solution. We observe that with a large learning rate, the model reaches a high adversarial accuracy faster and catastrophic over-fitting happens sooner. As such, our solution is to fix the number of epochs to 200 and then carefully sweep over various learning rates to make sure that catastrophic over-fitting do not happen", "venue": null, "year": 2019 }, { "authors": [ "Wang" ], "title": "Under review as a conference paper at ICLR 2021 viewpoint, we never need to compute the exact Hessian as we approximate it through first-order approximation. E.9 POTENTIAL ROBUSTNESS GAIN WITH INCREASING CAPACITIES Empirical studies in Madry et al", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial training (Szegedy et al., 2013) is the standard approach for improving the robustness of deep neural networks (DNN), or any other model, against adversarial examples. It is a data augmentation method that adds adversarial examples to the training set and updates the network with newly added data points. Intuitively, this procedure encourages the DNN not to make the same mistakes against an adversary. By adding sufficiently enough adversarial examples, the network gradually becomes robust to the attack it was trained on. One of the challenges with such a data augmentation approach is the tremendous amount of additional data required for learning a robust model. Schmidt et al. (2018) show that under a Gaussian data model, the sample complexity of robust generalization is √ d times larger than that of standard generalization. They further suggest that current datasets (e.g., CIFAR-10) may not be large enough to attain higher adversarial accuracy. A data augmentation procedure, however, is an indirect way to improve the robustness of a DNN. Our proposed alternative is to define a regularizer that penalizes DNN parameters prone to attacks. Minimizing the regularized loss function leads to estimators robust to adversarial examples.\nAdversarial training and our proposal can both be formulated in terms of robust optimization framework for adversarial robustness (Ben-Tal et al., 2009; Madry et al., 2018; Wong & Kolter, 2018; Shaham et al., 2018; Sinha et al., 2018). In this formulation, one is seeking to improve the worstcase performance of the model, where the performance is measured by a particular loss function `. Adversarial training can be understood as approximating such a worst-case loss by finding the corresponding worst-case data point, i.e., x+ δ with some specific attack techniques. Our proposed method is more direct. It is based on approximating the loss function `(x+ δ) using its second-order Taylor series expansion, i.e.,\n`(x+ δ) ≈ `(x) +∇x`(x)>δ + 1\n2 δ>∇2x`(x)δ,\nand then upper bounding the worst-case loss using the expansion terms. By considering both gradient and Hessian of the loss function with respect to (w.r.t.) the input, we can provide a more accurate approximation to the worst-case loss. In our derivations, we consider both `2 and `∞ attacks. In our derivations, the second-order expansion incorporates both the gradient and Hessian of the loss function with respect to (w.r.t.) the input. We call the method Second-Order Adversarial Regularizer (SOAR) (not to be confused with the Soar cognitive architecture Laird 2012). In the course of development of SOAR, we make the following contributions:\n• We show that an over-parameterized linear regression model can be severely affected by an adversary, even though its population loss is zero. We robustify it with a regularizer that\nexactly mimics the adversarial training. This suggests that regularization can be used instead of adversarial training (Section 2). • Inspired by such a possibility, we develop a regularizer which upper bounds the worst-case\neffect of an adversary under an approximation of the loss. In particular, we derive SOAR, which approximates the inner maximization of the robust optimization formulation based on the second-order Taylor series expansion of the loss function (Section 4). • We study SOAR in the logistic regression setting and reveal challenges with regularization\nusing Hessian w.r.t. the input. 
We develop a simple initialization method to circumvent the issue (Section 4.1). • We empirically show that SOAR significantly improves the adversarial robustness of the network against `∞ attacks and `2 attacks on CIFAR-10 and SVHN. Specifically, we evaluate using a PGD1000 white-box attack (Madry et al., 2018), transferred PGD1000 attacks, AutoAttack (Croce & Hein, 2020), and SimBA (Guo et al., 2019)." }, { "heading": "2 LINEAR REGRESSION WITH AN OVER-PARAMETRIZED MODEL", "text": "This section shows that for over-parameterized linear models, gradient descent (GD) finds a solution that has zero population loss, but is prone to attacks. It also shows that one can avoid this problem by defining an appropriate regularizer. Hence, we do not need adversarial training to robustify such a model. This simple illustration motivates the development of our method in the next sections. We only briefly report the main results here, and defer the derivations to Appendix A.\nConsider a linear model fw(x) = 〈w , x 〉 with x,w ∈ Rd. Suppose that w∗ = (1, 0, 0, . . . , 0)> and the distribution of x ∼ p is such that it is confined on a 1-dimensional subspace { (x1, 0, 0, . . . , 0) : x1 ∈ R }. This setup can be thought of as using an over-parameterized model that has many irrelevant dimensions with data that is only covering the relevant dimension of the input space. This is a simplified model of the situation when the data manifold has a dimension lower than the input space. We consider the squared error pointwise loss l(x;w) = (1/2) |〈x , w 〉 − 〈x , w∗ 〉|2. Denote the residual by r(x;w) = 〈x , w − w∗ 〉, and the population loss by L(w) = E [l(X;w)]. Suppose that we initialize the weights as w(0) = W ∼ N(0, σ2Id×d), and use GD on the population loss, i.e., w(t + 1) ← w(t) − β∇wL(w). It is easy to see that the partial derivatives w.r.t. w2,...,d are all zero, i.e., no weight adaptation happens. With a proper choice of learning rate β, we get that the asymptotic solution is w̄ , limt→∞ w(t) = (w∗1 , w2(0), w3(0), . . . , wd(0))>. That is, the initial random weights on dimensions 2, . . . , d do not change.\nWe make two observations. The first is that L(w̄) = 0, i.e., the population loss is zero. So from the perspective of training under the original loss, we are finding the optimal solution. The second observation is that this model is vulnerable to adversarial examples. An FGSM-like attack that perturbs x by ∆x = (0,∆x2,∆x3, . . . ,∆xd)> with ∆xi = ε sign(wi(0)) (for i = 2, . . . , d) has the population loss of EX,W [l(X + ∆x; w̄)] ≈ O(ε2d2σ2) under the adversary at the asymptotic solution w̄. When the dimension is large, this loss is quite significant. The culprit is obviously that GD is not forcing the initial weights to go to zero when there is no data from irrelevant and unused dimensions. This simple problem illustrates how the optimizer and an over-parameterized model might interact and lead to a solution that is prone to attacks.\nAn effective solution is to regularize the loss such that the weights of irrelevant dimensions go to zero. Generic regularizers such as ridge and Lasso regression lead to a biased estimate of w∗1, and thus, one is motivated to define a regularizer that is specially designed for improving adversarial robustness. Bishop (1995) showed the close connection between training with random perturbation and Tikhonov Regularization. Inspired by this idea, we develop a regularizer that mimics the adversary itself.
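As a concrete illustration, the following is a minimal NumPy simulation of this toy setting; the dimension, sample size, learning rate, and iteration count are illustrative choices of ours and are not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps, sigma = 100, 10000, 0.1, 1.0

w_star = np.zeros(d); w_star[0] = 1.0            # only the first dimension is relevant
x = np.zeros((n, d)); x[:, 0] = rng.normal(1.0, 0.1, size=n)  # data on a 1-d subspace
y = x @ w_star

w = rng.normal(0.0, sigma, size=d)               # random initialization w(0)
for _ in range(1000):                            # gradient descent on the squared loss
    w -= 0.5 * x.T @ (x @ w - y) / n             # only w_1 receives a nonzero gradient

print(0.5 * np.mean((x @ w - y) ** 2))           # population loss is ~0
dx = np.zeros(d); dx[1:] = eps * np.sign(w[1:])  # FGSM-like attack on the unused dims
print(0.5 * np.mean(((x + dx) @ w - y) ** 2))    # ~ (eps * ||w(0)_{2:d}||_1)^2 / 2
```

The clean loss is numerically zero while the attacked loss is large, consistent with the O(ε2d2σ2) estimate above.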
For this FGSM-like adversary, the population loss at the perturbed point is Lrobustified(w) , E [l(X + ∆x;w)] = L(w) + εE [r(X;w)] ‖w2:d‖1 + (ε2/2) ‖w2:d‖21. (1) Minimizing Lrobustified(w) is equivalent to minimizing the loss of the model at the point x′ = x + ∆x. The regularizer εE [r(X;w)] ‖w2:d‖1 + (ε2/2) ‖w2:d‖21 incorporates the effect of the adversary in exact form.\nNonetheless, there are two limitations of this approach. The first is that it is designed for a particular choice of attack, an FGSM-like one. We would like a regularizer that is robust to a larger class of attacks. The second is that this regularizer is designed for a linear model and the squared error loss. How can we design a regularizer for more complicated models, such as DNNs? We address these questions by formulating the problem of adversarial robustness within the robust optimization framework (Section 3), and propose an approach to approximately solve it (Section 4)." }, { "heading": "3 ROBUST OPTIMIZATION FORMULATION", "text": "Designing an adversarially robust estimator can be formulated as a robust optimization problem (Huang et al., 2015; Madry et al., 2018; Wong & Kolter, 2018; Shaham et al., 2018). To describe it, let us introduce our notation first. Consider an input space X ⊂ Rd, an output space Y, and a parameter (or hypothesis) space W, parameterizing a model f : X × W → Y. In the supervised learning scenario, we are given a data distribution D over pairs of examples {(Xi, Yi)}ni=1. Given the prediction of f(x;w) and a target value y, the pointwise loss function of the model is denoted by `(x, y;w) , `(f(x;w), y). Given the distribution of data, one can define the population loss as L(w) = E [`(X,Y ;w)]. The goal of the standard supervised learning problem is to find a w ∈ W that minimizes the population loss. A generic approach to do this is through empirical risk minimization (ERM). Explicit or implicit regularization is often used to control the complexity of the hypothesis to avoid over- or under-fitting (Hastie et al., 2009).\nAs shown in the previous section, it is possible to find a parameter w that minimizes the loss through ERM, but leads to a model that is vulnerable to adversarial examples. Incorporating the robustness notion into the model requires defenders to reconsider the training objective. It is also important to formalize and constrain the power of the adversary, so we understand the strength of the attack to which the model is resistant. This can be specified by requiring that the adversary can only modify any input x to x + δ with δ ∈ ∆ ⊂ X. Commonly used constraints are ε-balls w.r.t. the `p-norms, though other constraint sets have been used too (Wong et al., 2019b). This goal can be formulated as a robust optimization problem where the objective is to minimize the adversarial population loss given some perturbation constraint ∆: minw E(X,Y )∼D [ maxδ∈∆ `(X + δ, Y ;w) ] (2) We have an interplay between two goals: 1) the inner-max term looks for the worst-case loss around the input, while 2) the outer-min term optimizes the hypothesis by minimizing such a loss.\nNote that solving the inner-max problem is often computationally difficult, so one may approximate it with a surrogate loss obtained from a particular attack.
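For instance, the inner maximization in (2) is commonly approximated by projected gradient descent (PGD) on the loss (Madry et al., 2018). The following is a minimal PyTorch sketch of such an `∞-constrained attack; the function name and the pixel-range clipping are our own simplifications.

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=8/255, step=2/255, iters=10):
    # Approximate argmax_{||delta||_inf <= eps} loss(model(x + delta), y)
    # by projected gradient ascent with a random start.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascent step in the sign direction, then projection back onto the ball.
        delta = (delta.detach() + step * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0.0, 1.0)  # keep the perturbed input in a valid pixel range
```

Adversarial training minimizes the loss at the points returned by such an attack, whereas the regularizers developed below replace this inner loop with a closed-form surrogate.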
Adversarial training and its variants (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2018; Wong et al., 2019a) can be intuitively understood as an approximation of this min-max problem via different δ(x).\nAs shown in Section 2, one can design a regularizer that provides the exact value of the loss function at the attacked point for a particular choice of model, loss function, and adversary, cf. (1). Under the robust optimization framework, the regularizer and adversarial training are two realizations of the inner-max objective in (2), but using such a regularizer relieves us from using a separate inner optimization procedure, as is done in adversarial training. Motivated by that example and the robust optimization framework discussed here, we develop a regularizer that can be understood as an upper bound on the worst-case value of the loss at an attacked point under a second-order approximation of the loss function." }, { "heading": "4 SECOND-ORDER ADVERSARIAL REGULARIZER (SOAR)", "text": "The main idea of SOAR is to approximate the loss function using the second-order Taylor series expansion around an input x and then solve the inner maximization term of the robust optimization formulation (2) using the approximated form. We show this for both `2 and `∞ attacks; the same idea can be applied to other `p norms. We describe the crucial steps of the derivation in this section, and defer details to Appendix B.\nAssuming that the loss is twice-differentiable, we can approximate the loss function around input x by the second-order Taylor expansion `(x + δ, y;w) ≈ ˜̀2nd(x + δ, y;w) , `(x, y;w) + ∇x`(x, y;w)>δ + (1/2) δ>∇2x`(x, y;w)δ. (3) For brevity, we drop w, y and use ∇ to denote ∇x. Let us focus on the `p attacks, where the constraint set in (2) is ∆ = {δ : ‖δ‖p ≤ ε} for some ε > 0 and p ≥ 1. We focus on the `∞ attack because of its popularity, but we also derive the formulation for the `2 attacks.\nAs a warm-up, let us solve the inner optimization problem by considering the first-order Taylor series expansion. We have `FOAR(x) , max‖δ‖∞≤ε `(x) + ∇`(x)>δ = `(x) + ε ‖∇`(x)‖1. (4) The term ε ‖∇`(x)‖1 defines the First-Order Adversarial Regularizer (FOAR). This is similar to the regularizer introduced by Simon-Gabriel et al. (2019) with the choice of `∞ perturbation set. For a general `p-attack with 1 ≤ p ≤ ∞, we have ‖∇`(x)‖q with q satisfying p−1 + q−1 = 1. We shall empirically evaluate the FOAR-based approach (for the `∞ attack), but our focus is going to be on solving the inner maximization problem based on the second-order Taylor expansion: max‖δ‖p≤ε `(x) + ∇`(x)>δ + (1/2) δ>∇2`(x)δ, (5) for p = 2,∞. The second-order expansion in (3) can be rewritten as `(x + δ) ≈ `(x) + (1/2) [δ; 1]> [∇2`(x), ∇`(x); ∇`(x)>, 1] [δ; 1] − 1/2 = `(x) + (1/2) δ′>Hδ′ − 1/2, (6) where δ′ = [δ; 1]. This allows us to derive an upper bound on the expansion terms using the characteristics of a single Hessian term H. Note that δ′ is a (d + 1)-dimensional vector and H is a (d + 1) × (d + 1) matrix. We need to find an upper bound on δ′>Hδ′ under the attack constraint. For the `∞ attack, solving this maximization problem is not as easy as in (4) since the Boolean quadratic programming problem in formulation (5) is NP-hard. But we can relax the constraint set and find an upper bound for the maximizer. Note that with δ ∈ Rd, an `∞-ball of size ε is enclosed by an `2-ball of size √dε with the same centre.
Therefore, we can upper bound the inner maximization by max‖δ‖∞≤ε `(x + δ) ≤ max‖δ‖2≤√dε `(x + δ), (7) which after substituting the second-order Taylor series expansion leads to an `2-constrained quadratic optimization problem `(x) + (1/2) max‖δ‖2≤√dε δ′>Hδ′ − 1/2, (8) with δ′ = [δ; 1] as before. The `2 version of SOAR does not require this extra step, and we have ε instead of √dε in (8). A more detailed discussion on the above relaxation procedure is included in Appendix B.2. Proposition 1. Let ` : Rd → R be a twice-differentiable function. For any ε > 0, we have max‖δ‖∞≤ε ˜̀2nd(x + δ) ≤ `(x) + ((dε2 + 1)/2) E [‖Hz‖2] − 1/2, (9) where H is defined in (6) and z ∼ N (0, I(d+1)×(d+1)).\nThis result upper bounds the maximum of the second-order approximation ˜̀2nd over an `∞ ball with radius ε, and relates it to an expectation of a Hessian-vector product. Note that there is a simple correspondence between (1) and the regularized loss in (9). The latter can be understood as an upper bound on the worst-case damage of an adversary under a second-order approximation of the loss. For the `2 attack, the same line of argument leads to ε2 + 1 instead of dε2 + 1.\nLet us take a closer look at Hz. By decomposing z = [zd, z1]>, we get Hz = [∇2`(x)zd + z1∇`(x); ∇`(x)>zd + z1]. The term ∇2`(x)zd can be computed using a Finite Difference (FD) approximation. Note that E [‖zd‖2] ≈ √d for our Normally distributed z. To ensure that the approximation direction has the same magnitude, we use the normalized z̃d = zd/‖zd‖2 instead, and use the approximation below ∇2`(x)zd ≈ ‖zd‖2 (∇`(x + hz̃d) − ∇`(x))/h. (10) To summarize, the SOAR regularizer evaluated at x, with a direction z, and FD step size h > 0 is R(x; z, h) = ((dε2 + 1)/2) ‖[ ‖zd‖2 (∇`(x + hz̃d) − ∇`(x))/h + z1∇`(x); ∇`(x)>zd + z1 ]‖2. (11) The expectation in (9) can then be approximated by taking multiple samples of z drawn from z ∼ N (0, I(d+1)×(d+1)). These samples would be concentrated around their expectation. One can show that P {‖Hz‖ − E [‖Hz‖] > t} ≤ 2 exp(−ct2/‖H‖22), where c is a constant and ‖H‖2 is the `2-induced norm (see Theorem 6.3.2 of Vershynin 2018). In practice, we observed that taking more than one sample of z does not provide significant improvement for increasing adversarial robustness, and we include an empirical study on the effect of sample sizes in Appendix E.4.\nBefore we discuss the remaining details, recall that we fully robustify the model with an appropriate regularizer in Section 2. Note that the maximizer of the loss based on formulation (2) is exactly the FGSM direction, and (1) shows the population loss with our FGSM-like choice of ∆x. To further motivate a second-order approach, note that we can obtain the first two terms in (1) with a first-order regularizer such as FOAR; and we recover the exact form with a second-order formulation in (5).\nNext, we study SOAR in the simple logistic regression setting, which shows a potential failure of the regularizer and reveals why we might observe gradient masking. Based on that insight, we provide the remaining details of the method afterwards in Section 4.1." }, { "heading": "4.1 AVOIDING GRADIENT MASKING", "text": "Consider a linear classifier f : Rd × Rd → R with f(x;w) = φ(〈w , x 〉), where x,w ∈ Rd are the input and the weights, and φ(·) is the sigmoid function. Note that the output of f has the interpretation of being a Bernoulli distribution. For the cross-entropy loss function `(x, y;w) = −[y log f(x;w) + (1 − y) log(1 − f(x;w))], the gradient w.r.t.
the input x is ∇`(x) = (f(x;w) − y)w and the Hessian w.r.t. the input x is ∇2`(x) = f(x;w)(1 − f(x;w))ww>. The second-order Taylor series expansion (3) with the gradient and Hessian evaluated at x is `(x + δ) ≈ `(x) + r(x, y;w)w>δ + (1/2) u(x;w)δ>ww>δ, (12) where r = r(x, y;w) = f(x;w) − y is the residual term describing the difference between the predicted probability and the correct label, and u = u(x;w) = f(x;w)(1 − f(x;w)). Note that u can be interpreted as how confident the model is about its prediction (correct or incorrect), and is close to 0 whenever the classifier is predicting a value close to 0 or 1. With this linear model, the maximization (8) becomes `(x) + max‖δ‖2≤√dε [ rw>δ + (1/2) uδ>ww>δ ] = `(x) + ε√d |r(x, y;w)| ‖w‖2 + (dε2/2) u(x;w) ‖w‖22. The regularization term encourages the norm of w to be small, weighted according to the residual r(x, y;w) and the uncertainty u(x;w).\nConsider a linear interpolation of the cross-entropy loss from x to a perturbed input x′. Specifically, we consider `(αx + (1 − α)x′) for α ∈ [0, 1]. Previous work has empirically shown that the value of the loss behaves logistically as α increases from 0 to 1 (Madry et al., 2018). In such a case, since there is very little curvature at x, if we use the Hessian exactly at x, it leads to an inaccurate approximation of the value at `(x′). Consequently, we have a poor approximation of the inner-max, and the derived regularization will not be effective.\nFor the approximation in (12), this issue corresponds to the scenario in which the classifier is very confident about the clean input at x. Standard training techniques such as minimizing the cross-entropy loss optimize the model such that it returns the correct label with a high confidence.\nAlgorithm 1: Computing the SOAR objective for a single training data. Input: A pair of training data (x, y), `∞ constraint of ε, finite difference step-size h.\n1 x′ ← x + η, where η ← (η1, η2, . . . , ηd)> and ηi ∼ U(−ε/2, ε/2). 2 x′ ← ΠB(x,ε/2) {x′ + (ε/2) sign(∇x`(x′))}, where Π is the projection operator. 3 Sample z ∼ N (0, I(d+1)×(d+1)). 4 Compute the SOAR regularizer R(x′; z, h) as in (11). 5 Compute the pointwise objective: `SOAR(x, y) = `(x′, y) + R(x′; z, h).\nWhenever the classifier is correct with a high confidence, both r and u will be close to zero. As a result, the effect of the regularizer diminishes, i.e., the weights are no longer regularized. In such a case, the Taylor series expansion, computed using the gradient and Hessian evaluated at x, becomes an inaccurate approximation to the loss, and hence its maximizer is not a good solution to the inner maximization problem.\nNote that this does not mean that one cannot use the Taylor series expansion to approximate the loss. In fact, by the mean value theorem, there exists an h? ∈ (0, 1) such that the second-order Taylor expansion is exact: `(x + δ) = `(x) + ∇`(x)>δ + (1/2)δ>∇2`(x + h?δ)δ. The issue is that if we compute the Hessian at x (instead of at x + h?δ), our approximation might not be very good whenever the curvature profile of the loss function at x is drastically different from the one at x + h?δ.\nMore importantly, a method relying on gradient masking can be easily circumvented (Athalye et al., 2018). Our early experimental results had also indicated that gradient masking occurred with SOAR when the gradient and Hessian were evaluated at x.
In particular, we observe that SOAR with zero-initialization leads to models with nearly 100% confidence on their predictions, leading to an ineffective regularizer. The result is reported in Table 5 in Appendix D.\nThis suggests a heuristic to improve the quality of SOAR: evaluate the gradient and Hessian, through the FD approximation (10), at a less confident point in the `∞ ball around x. We found that evaluating the gradient and Hessian at the 1-step PGD adversary successfully circumvents the issue (Lines 1-2 in Algorithm 1). We compare other initializations in Table 5 in Appendix D. To ensure the regularization stays within the original `∞ ball of radius ε, we use ε/2 for the PGD1 initialization, and then ε/2 in SOAR.\nBased on this heuristic, the regularized pointwise objective for a data point (x, y) is `SOAR(x, y) = `(x′, y) + R(x′; z, h), (13) where z ∼ N (0, I(d+1)×(d+1)) and the point x′ is initialized at the PGD1 adversary. Algorithm 1 summarizes SOAR on a single training data. We include the full training procedure in Appendix C. Moreover, we include additional discussions and experiments on gradient masking in Appendix E.11." }, { "heading": "4.2 RELATED WORK", "text": "Several regularization-based alternatives to adversarial training have been proposed. Simon-Gabriel et al. (2019) studied regularization under the first-order Taylor approximation. The proposed regularizer for the `∞ perturbation set is the same as FOAR. Qin et al. (2019) propose local linearity regularization (LLR), where the local linearity measure is defined by the maximum error of the first-order Taylor approximation of the loss. LLR minimizes the local linearity measure, and minimizes the magnitude of the projection of the gradient along the corresponding direction of the local linearity measure. It is motivated by the observation of flat loss surfaces during adversarial training.\nCURE (Moosavi-Dezfooli et al., 2019) is the closest to our method. They empirically observed that adversarial training leads to a reduction in the magnitude of the eigenvalues of the Hessian w.r.t. the input. Thus, they proposed directly minimizing the curvature of the loss function to mimic the effect of adversarial training. An important advantage of our proposed method is that SOAR is derived from a complete second-order Taylor approximation of the loss, while CURE exclusively focuses on the second-order term for the estimation of the curvature. Note that the final optimization objective in SOAR, FOAR, LLR and CURE contains derivatives w.r.t. the input of the DNN; such a technique was first introduced to improve generalization by Drucker & Le Cun (1992) as double backpropagation.\nAnother related line of adversarial regularization methods does not involve approximation to the loss function nor robust optimization. TRADES (Zhang et al., 2019) introduces a regularization term that penalizes the difference between the output of the model on a training data point and its corresponding adversarial example. MART (Wang et al., 2020) reformulated the training objective by explicitly differentiating between the mis-classified and correctly classified examples. Ding et al. (2019b) present another regularization approach that leverages adaptive margin maximization (MMA) on correctly classified examples to robustify the model." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we verify the effectiveness of the proposed regularization method against `∞ PGD attacks on CIFAR10.
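To make Algorithm 1 concrete before turning to the results, the per-batch objective can be sketched in PyTorch as follows. This is a sketch under our notation: the function name is ours, input clipping to the valid pixel range is omitted, and a single sample of z is used, consistent with the observation in Section 4 that more samples do not help.

```python
import torch
import torch.nn.functional as F

def soar_loss(model, x, y, eps=8/255, h=0.01):
    # e = eps/2 is used both for the PGD1 initialization and inside the
    # regularizer, so the overall perturbation stays in the eps-ball (Sec. 4.1).
    e, d = eps / 2, x[0].numel()

    # Lines 1-2 of Algorithm 1: random start, one FGSM-like step, projection onto B(x, e).
    x_adv = x + torch.empty_like(x).uniform_(-e, e)
    x_adv.requires_grad_(True)
    g = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + (x_adv - x + e * g.sign()).clamp(-e, e)).detach().requires_grad_(True)

    # Line 3: z ~ N(0, I_{d+1}), split into the input part z_d and the scalar z_1.
    z = torch.randn(x.size(0), d + 1, device=x.device)
    z_d, z_1 = z[:, :d], z[:, d:]
    z_norm = z_d.norm(dim=1, keepdim=True)
    z_tilde = (z_d / z_norm).view_as(x)

    # Per-example input gradients at x_adv and at the finite-difference point;
    # create_graph=True so the regularizer can be differentiated w.r.t. the weights.
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y, reduction="sum"),
                               x_adv, create_graph=True)[0].flatten(1)
    x_fd = x_adv + h * z_tilde
    grad_fd = torch.autograd.grad(F.cross_entropy(model(x_fd), y, reduction="sum"),
                                  x_fd, create_graph=True)[0].flatten(1)

    # Eqs. (10)-(11): Hz via finite differences, then the regularizer R(x'; z, h).
    hz_top = z_norm * (grad_fd - grad) / h + z_1 * grad   # shape (B, d)
    hz_bot = (grad * z_d).sum(dim=1, keepdim=True) + z_1  # shape (B, 1)
    reg = 0.5 * (d * e ** 2 + 1) * torch.cat([hz_top, hz_bot], dim=1).norm(dim=1)

    # Eq. (13): loss at the PGD1 point plus the regularizer, averaged over the batch.
    return (F.cross_entropy(model(x_adv), y, reduction="none") + reg).mean()
```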
Our experiments show that training with SOAR leads to significant improvements in adversarial robustness, which is achieved without significantly sacrificing standard accuracy. We focus on `∞ in this section and defer evaluations on `2 in Appendix E.5. Additionally, we provide a detailed discussion and evaluations on the SVHN dataset in Appendix E.6.\nWe train ResNet-10 (He et al., 2016) on the CIFAR-10 dataset. The baseline methods consist of: (1) Standard: training with no adversarially perturbed data; (2) ADV: training with 10-step PGD adversarial examples; (3) TRADES; (4) MART and (5) MMA. Empirical studies in Madry et al. (2018) and Wang et al. (2020) reveal that their approaches benefit from increasing model capacity to achieve higher adversarial robustness, as such, we include WideResNet (Zagoruyko & Komodakis, 2016) for all baseline methods. We were not able to reproduce the results of two closely related works, CURE and LLR, which we discuss further in Appendix E.1. In Appendix E.13, we compare SOAR and FOAR with different initializations. FOAR achieves the best adversarial robustness using PGD1 initialization, so we only present this variation of FOAR in this section.\nThe optimization procedure is described in detail in Appendix E.2. Note that all methods in this section are trained to defend against `∞ norm attacks with ε = 8/255, as this is a popular choice of ε in the literature. The PGD adversaries discussed in Sections 5.1 and 5.2 are generated with ε = 8/255 and a step size of 2/255 (pixel values are normalized to [0, 1]). PGD20-50 denotes 20-step PGD attacks with 50 restarts. In Section 5.3, we compare SOAR with baseline methods on `∞ AutoAttack (Croce & Hein, 2020) adversaries with a varying ε. Additionally, results of all methods on ResNet10 are obtained by averaging over 3 independently initialized and trained models, where the standard deviations are reported in Appendix E.10. We use the provided pretrained WideResNet model provided in the public repository of each method. Lastly, discussions on challenges (i.e., difficult to train from scratch, catastrophic overfitting, BatchNorm, etc.) we encountered while implementing SOAR and our solutions (i.e., using pretrained model, clipping regularizer gradient, early stopping, etc.) are included in Appendix E.7." }, { "heading": "5.1 ROBUSTNESS AGAINST PGD WHITE-BOX ATTACKS", "text": "Before making the comparison between SOAR and the baselines in Table 1, note that FOAR achieves 32.28% against PGD20 attacks. Despite its uncompetitive performance, this shows that approximating the robust optimization formulation based on Taylor series expansion is a reasonable approach. Furthermore, this justifies our extension to a second-order approximation, as the firstorder alone is not sufficient. Lastly, we observe that training with SOAR significantly improves the adversarial robustness against all PGD attacks, leading to higher robustness in all k-step PGD attacks on the ResNet model. SOAR remains competitive compared to baseline methods trained on high-capacity WideResNet architecture." }, { "heading": "5.2 ROBUSTNESS AGAINST BLACK-BOX ATTACKS", "text": "Many defences only reach an illusion of robustness through methods collectively known as gradient masking (Athalye et al., 2018). These methods often fail against attacks generated from an undefended independently trained model, known as transfer-based black-box attacks. 
Recent works (Tramèr et al., 2017; Ilyas et al., 2019) have proposed hypotheses for the success of transfer-based black-box attacks. In our evaluation, the transferred attacks are PGD20 and PGD1000 perturbations generated from two source models: ResNet and WideResNet, which are denoted by the suffixes -R and -W respectively. The source models are trained separately from the defence models on the unperturbed training set. Additionally, Tramer et al. (2020) recommend score-based black-box attacks such as SimBA (Guo et al., 2019). They are more relevant in real-world applications where gradient information is not accessible, and are empirically shown to be more effective than transfer-based attacks. Because they are solely based on the confidence score of the model, score-based attacks are resistant to gradient masking. All black-box attacks in this section are `∞ constrained at ε = 8/255.\nSOAR achieves the best robustness against all baseline methods trained on ResNet, as shown in Table 2. Compared with the baselines trained on WideResNet, SOAR remains the most robust model against transferred PGD20-W and PGD1000-W, approaching its standard accuracy on unperturbed data. Note that all defence methods are substantially more vulnerable to the score-based SimBA attack. The SOAR-regularized model is the most robust method against SimBA." }, { "heading": "5.3 ROBUSTNESS AGAINST AUTOATTACK", "text": "During the ICLR rebuttal phase, we evaluated SOAR against AutoAttack (Croce & Hein, 2020). In this section, we focus on the `∞-bounded AutoAttack; similar results with the `2-bounded attack are included in Appendix E.5. We noticed that SOAR has shown greater vulnerabilities to AutoAttack compared to the attacks discussed in Sections 5.1 and 5.2. AutoAttack consists of an ensemble of four attacks: two parameter-free versions of the PGD attack (APGD-CE and APGD-DLR), a white-box fast adaptive boundary (FAB) attack (Croce & Hein, 2019), and a score-based black-box Square Attack (Andriushchenko et al., 2020). Notice that the major difference between the two PGD attacks is the loss they are based on: APGD-CE is based on the cross-entropy loss similar to (Madry et al., 2018), and APGD-DLR is based on the logit difference similar to (Carlini & Wagner, 2017).\nTo better understand the source of SOAR's vulnerability, we tested it against the four attacks individually. First, we observed that the result against untargeted APGD-CE is similar to the one shown in Section 5.1. This is expected because the attacks are both formulated based on cross-entropy-based PGD. However, there is a considerable degradation in the accuracy of SOAR against targeted APGD-DLR and targeted FAB. At ε = 8/255, SOAR is most vulnerable to targeted APGD-DLR with a robust accuracy of only 18.25%. To further investigate SOAR's robustness against AutoAttack, we tested with different ε to verify if SOAR can at least improve robustness against `∞ attacks with smaller ε. We observed that at ε = 4/255 the robustness improvement of SOAR becomes more consistent. Interestingly, we also noticed that a model with better robustness at ε = 8/255 does not guarantee a better robustness at ε = 4/255, as is the case for the Square Attack on ADV and SOAR.\nCombining the results with the four attacks and with different ε, we provide three hypotheses on the vulnerability of SOAR. First, SOAR might overfit to a particular type of attack: adversarial examples generated based on the cross-entropy loss.
APGD-DLR is based on the logit difference and FAB is based on finding minimal perturbation distances, which are both very different from the cross-entropy loss. Second, SOAR might rely on gradient masking to a certain extent, so that PGD with the cross-entropy loss has difficulty finding adversaries even though they still exist. This also suggests that the results with black-box attacks might be insufficient to conclusively eliminate the possibility of gradient masking. Third, since SOAR provides a more consistent robustness improvement at a smaller ε, this suggests that the techniques discussed in Section 4 did not completely address the problems arising from the second-order approximation. This makes the upper bound of the inner-max problem loose, and hence SOAR improves robustness mainly against attacks with ε smaller than the one it was formulated with.\nFinally, we emphasize that this should not label SOAR as a failed defence. Previous work shows that a mechanism based on gradient masking can be completely circumvented, resulting in a 0% accuracy against non-gradient-based attacks (Athalye et al., 2018). Our results on SimBA and the Square Attack show that this is not the case with SOAR, even at ε = 8/255, and thus the robustness improvement cannot be only due to gradient masking. Overall, we think SOAR's vulnerability to AutoAttack is an interesting observation that requires further investigation." }, { "heading": "6 CONCLUSION", "text": "This work proposed SOAR, a regularizer that improves the robustness of DNNs to adversarial examples. SOAR was obtained by using the second-order Taylor series approximation of the loss function w.r.t. the input, and approximately solving the inner maximization of the robust optimization formulation. We showed that training with SOAR leads to significant improvement in adversarial robustness under `∞ and `2 attacks. This is only one step in designing better regularizers to improve adversarial robustness. Several directions deserve further study, with the prominent one being SOAR's vulnerabilities to AutoAttack. Another future direction is to understand the loss surface of DNNs better in order to select a good point around which an accurate Taylor approximation can be made. This is important for designing regularizers that are not affected by gradient masking." }, { "heading": "A DERIVATIONS OF SECTION 2: LINEAR REGRESSION WITH AN OVER-PARAMETRIZED MODEL", "text": "We derive the results reported in Section 2 in more detail here. Recall that we consider a linear model fw(x) = 〈w , x 〉 with x,w ∈ Rd. We suppose that w∗ = (1, 0, 0, . . . , 0)> and the distribution of x ∼ p is such that it is confined on a 1-dimensional subspace { (x1, 0, 0, . . . , 0) : x1 ∈ R }. So the density of x is p((x1, . . . , xd)) = p1(x1)δ(x2)δ(x3) . . . δ(xd), where δ(·) is Dirac's delta function. We initialize the weights at the first time step as w(0) ∼ N(0, σ2Id×d), and use GD to find the minimizer of the population loss. The partial derivatives of the population loss are ∂L(w)/∂wj = ∫ (w1 − w∗1)p1(x1)x dx = (w1 − w∗1)µ1 for j = 1, and ∫ (wj − w∗j)δ(xj)x dx = (wj − w∗j) · 0 = 0 for j 6= 1, where µ1 = E [X1]. Notice that the gradient in dimension j = 1 is non-zero, unless (w1 − w∗1)µ1 = 0. Assuming that µ1 6= 0, this implies that the gradient won't be zero unless w1 = w∗1. On the other hand, the gradients in dimensions j = 2, . . . , d are all zero, so GD does not change the value of wj(t) for j = 2, . . . , d.
Therefore, under the proper choice of learning rate β, we get that the asymptotic solution of GD solution is w̄ , limr→∞ w(t) = (w∗1 , w2(0), w3(0), . . . , wd(0))\n>. It is clear that L(w̄) = 0, i.e., the population loss is zero, as noted already as our first observation in that section. Also note that we can easily attack this model by perturbing x by ∆x = (0,∆x2,∆x3, . . . ,∆xd)>. The pointwise loss at x+ ∆x is\nl(x+ ∆x;w) = 1\n2 |(w1 − w∗1)x1 + 〈w , ∆x 〉| 2 =\n1 2 |r(x;w) + 〈w , ∆x 〉|2 .\nWith the choice of ∆xi = ε sign(wi(0)) (for i = 2, . . . , d) and ∆x1 = 0, an FGSM-like attack (Goodfellow et al., 2014) at the learned weight w̄ leads to the pointwise loss of\nl(x+ ∆x; w̄) = 1 2 ε2 [ d∑ j=2 |wj(0)| ]2 ≈ 1 2 ε2 ‖w(0)‖21 .\nWe comment that our choice of ∆x is not from the same distribution as the training data x. This choice aligns with the hypotheses in Ding et al. (2019a); Schmidt et al. (2018) that adversarial examples come from a shifted data distribution; however, techniques such as feature adversaries (Sabour et al., 2015) focus on designing perturbations to be close to input distributions. We stress that the goal here is to illustrate the loss under this particular attack.\nIn order to get a better sense of this loss, we compute its expected value w.r.t. the randomness of weight initialization. We have that (including the extra |w1(0)| term too)\nEW∼N(0,σ2Id×d) [ ‖W‖21 ] = E d∑ i,j=1 |Wi||Wj | = d∑ i=1 E [ |Wi|2 ] + d∑ i,j=1,i6=j E [|Wi|]E [|Wj |] ,\nwhere we used the independence of the r.v. Wi and Wj when i 6= j. The expectation E [ |Wi|2 ] is the variance σ2 of Wi. The r.v. |Wj | has a folded normal distribution, and its expectation E [|Wj |] is√ 2 πσ. Thus, we get that\nEW∼N(0,σ2Id×1) [ ‖W‖21 ] = dσ2 + d(d− 1) 2\nπ σ2 ≈ 2 π d2σ2,\nfor d 1. The expected population loss of the specified attack ∆x at the asymptotic solution w̄ is\nEX,W [l(X + ∆x); w̄)] ≈ O(ε2d2σ2).\nThe dependence of this loss on dimension d is significant, showing that the learned model is quite vulnerable to attacks. We note that the conclusions would not change much with initial distributions other than the Normal distribution.\nAn effective solution is to regularize the loss to encourage the weights of irrelevant dimensions going to zero. A generic regularizer is to use the `2-norm of the weights, i.e., formulate the problem as a ridge regression. In that case, the regularized population loss is\nLridge(w) = 1 2 E [ |〈X , w 〉 − 〈X , w∗ 〉|2 ] + λ 2 ‖w‖22 .\nOne can see that the solution of∇wLridge(w) = 0 is w̄1(λ) = µ1µ1+λw ∗ 1 and w̄j(λ) = 0 for j 6= 1.\nw̄j(λ) =\n{ µ1\nµ1+λ w∗1 j = 1 0 j 6= 1.\nThe use of this generic regularizer seems reasonable in this example, as it enforces the weights for dimensions 2 to d to become zero. Its only drawback is that it leads to a biased estimate of w∗1 . The bias, however, can be made small with a small choice for λ. We can obtain a similar conclusion for the `1 regularizer (Lasso).\nAs such, one has to define a regularizer that is specially-designed for improving adversarial robustness. Bishop (1995) showed the strong connection between training with random perturbation and Tikhonov Regularization. Inspired by this idea, we develop a regularizer that mimics the adversary itself. Let us assume that a particular adversary attacks the model by adding ∆x = (0, ε sign(w2(0)), . . . , ε sign(wd(0)) >. 
The population loss at the perturbed point is Lrobustified(w) , E [l(X + ∆x;w)] = (1/2) E [ | r(X;w) + ε ∑d j=2 |wj| |2 ] = L(w) + εE [r(X;w)] ‖w2:d‖1 + (ε2/2) ‖w2:d‖21, where ‖w2:d‖1 = ∑d j=2 |wj|. (A similar, but more complicated, result would hold if the adversary could also attack the first dimension.) This is the same objective as (1) reported in Section 2. Note that minimizing Lrobustified(w) is equivalent to minimizing the loss of the model at the point x′ = x + ∆x. The regularizer εE [r(X;w)] ‖w2:d‖1 + (ε2/2) ‖w2:d‖21 incorporates the effect of the adversary in exact form. This motivated the possibility of designing a regularizer tailored to prevent attacks." }, { "heading": "A.1 DERIVATION OF THE POPULATION LOSS UNDER ITS FIRST AND SECOND ORDER APPROXIMATION", "text": "First, we show that the FGSM direction is the maximizer of the loss when the perturbation is `∞ constrained. Based on the pointwise loss at x + ∆x, we have max‖∆X‖∞≤ε l(x + ∆x;w) = (1/2) | r(x;w) + max‖∆X‖∞≤ε 〈w , ∆x 〉 |2. We use the Cauchy-Schwarz inequality to obtain max‖∆X‖∞≤ε 〈w , ∆x 〉 ≤ max‖∆X‖∞≤ε |〈w , ∆x 〉| ≤ max‖∆X‖∞≤ε ‖w‖1 ‖∆x‖∞ = ε ‖w‖1, which leads to argmax‖∆X‖∞≤ε l(x + ∆x;w) = ε sign(w).\nNext, we show that the first-order approximation of E [l(X + ∆x;w)] obtains the first two terms in (1). Note that the gradient of the loss w.r.t. the input is ∇xl(x;w) = (〈w , x 〉 − 〈w∗ , x 〉)(w − w∗) = r(x;w)(w − w∗), and the Hessian w.r.t. the input is ∇2xl(x;w) = (w − w∗)(w − w∗)>. The first-order Taylor series approximation is Lrobustified(w) ≈ L̂1st(w) , E [ l(X;w) + ∇xl(X;w)>∆x ] = L(w) + E [ r(X;w)(w − w∗)>∆x ] = L(w) + E [ r(X;w)w>∆x ] = L(w) + εE [r(X;w)] ‖w2:d‖1. Note that w∗>∆x = 0 because of our particular choice of ∆x and w∗. Here we obtain the first two terms in (1).\nThe second-order Taylor series approximation is Lrobustified(w) ≈ L̂2nd(w) , E [ l(X;w) + ∇xl(X;w)>∆x + (1/2) ∆x>∇2xl(X;w)∆x ] = L(w) + εE [r(X;w)] ‖w2:d‖1 + (1/2) ∆x>(w − w∗)(w − w∗)>∆x = L(w) + εE [r(X;w)] ‖w2:d‖1 + (ε2/2) ‖w2:d‖21, which recovers the exact form in (1). This completes the motivation of using the second-order Taylor series approximation with our warm-up toy example." }, { "heading": "B DERIVATIONS OF SECTION 4: SECOND-ORDER ADVERSARIAL REGULARIZER (SOAR)", "text": "" }, { "heading": "B.1 RELAXATION", "text": "Note that the Boolean quadratic programming (BQP) problem in formulation (5) is NP-hard (Beasley, 1998; Lima & Grossmann, 2017). Even though there exist semi-definite programming (SDP) relaxations, such approaches require the exact Hessian w.r.t. the input, which is computationally expensive to obtain for high-dimensional inputs. And even if we could compute the exact Hessian, SDP itself is a computationally expensive approach, and not suitable to be within the inner loop of DNN training. As such, we relax the `∞ constraint to an `2 constraint, which, as we see, leads to a computationally efficient solution." }, { "heading": "B.2 THE ISSUE RELATED TO THE LOOSENESS OF THE BOUND IN EQ (7)", "text": "In the ICLR rebuttal phase, the reviewer pointed out that, from the perspective of the volume ratio between the two `p balls, replacing ‖δ‖∞ ≤ ε with ‖δ‖2 ≤ √dε can be problematic, since the volume of {δ : ‖δ‖∞ ≤ ε} is 2^d ε^d whereas the volume of {δ : ‖δ‖2 ≤ √d ε} is π^(d/2) d^(d/2) ε^d / Γ(1 + d/2). Their ratio goes to 0 as the dimension increases.
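The ratio is straightforward to evaluate numerically in log-space; the following short sketch (our own illustration, not part of the paper's experiments) computes it for a few dimensions, including d = 3072 for CIFAR-10 inputs.

```python
from math import lgamma, log, pi

def log_volume_ratio(d, eps=8/255):
    # log [ vol(l_inf ball of radius eps) / vol(l_2 ball of radius sqrt(d)*eps) ]
    log_vol_linf = d * log(2 * eps)
    log_vol_l2 = (d / 2) * log(pi) - lgamma(1 + d / 2) + d * log(d ** 0.5 * eps)
    return log_vol_linf - log_vol_l2

for d in (2, 10, 100, 3072):
    print(d, log_volume_ratio(d))  # decreases roughly linearly in d; ~ -2200 at d = 3072
```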
The implication is that the search space for the `∞ maximizer is infinitesimal compared to the one for the `2 maximizer, leading to a loose upper bound.\nAs a preliminary study on the tightness of the bound, we evaluated the two sides of (7) by approximating the maximum using PGD attacks. In particular, we approximate max‖δ‖∞≤ε `(x + δ) using `(x + δ∞), where δ∞ is generated using 20-iteration `∞-PGD with ε = 8/255. Similarly, we approximate max‖δ‖2≤√dε `(x + δ) using `(x + δ2), where δ2 is generated using 100-iteration `2-PGD with radius √dε = 1.74. The reason for this particular configuration of attack parameters is to match the ones used during our previous evaluations.\nFrom this preliminary study, we observe that there is indeed a gap between the approximated LHS and RHS of (7), and thus, we think it is a valuable future research direction to explore other possibilities that allow us to use a second-order approximation to study the worst-case loss subject to an `∞-constrained perturbation." }, { "heading": "B.3 UNIFIED OBJECTIVE", "text": "We could maximize each term inside (8) separately and upper bound the max by max‖δ‖2≤√dε ∇`(x)>δ + max‖δ‖2≤√dε (1/2) δ>∇2`(x)δ = √dε ‖∇`(x)‖2 + (1/2) dε2 σmax(∇2`(x)), where σmax(∇2`(x)) is the largest singular value of the Hessian matrix ∇2`(x). Even though the norm of the gradient and the singular value of the Hessian have an intuitive appeal, separately optimizing these terms might lead to a looser upper bound than necessary. The reason is that the maximizers of the two terms are argmax |∇`(x)>δ| = ∇`(x)/‖∇`(x)‖2 and the direction corresponding to the largest singular value of ∇2`(x), respectively. In general, these two directions are not aligned." }, { "heading": "B.4 PROOF OF PROPOSITION 1", "text": "Proof. By the inclusion of the `∞-ball of radius ε within the `2-ball of radius √dε and the definition of H in (6), we have max‖δ‖∞≤ε ˜̀2nd(x + δ) ≤ max‖δ‖2≤√dε ˜̀2nd(x + δ) = max‖δ‖2≤√dε `(x) + (1/2) [δ; 1]> [∇2`(x), ∇`(x); ∇`(x)>, 1] [δ; 1] − 1/2 = `(x) + (1/2) max‖δ‖2≤√dε [δ; 1]>H[δ; 1] − 1/2 ≤ `(x) + (1/2) max‖δ′‖2≤√(dε2+1) δ′>Hδ′ − 1/2.\nIt remains to upper bound max‖δ′‖2≤ε′ δ′>Hδ′ with ε′ = √(dε2 + 1). We use the Cauchy-Schwarz inequality to obtain max‖δ′‖2≤ε′ δ′>Hδ′ ≤ max‖δ′‖2≤ε′ |δ′>Hδ′| ≤ max‖δ′‖2≤ε′ ‖δ′‖2 ‖Hδ′‖2 = ε′ max‖δ′‖2≤ε′ ‖Hδ′‖2 = ε′2 ‖H‖2, where the last equality is obtained using properties of the `2-induced matrix norm (this is the spectral norm). Since computing ‖H‖2 would again require the exact input Hessian, and we would like to avoid it, we further upper bound the spectral norm by the Frobenius norm as ‖H‖2 = σmax(H) ≤ ‖H‖F. The Frobenius norm itself satisfies ‖H‖F = √(Tr(H>H)) = E [‖Hz‖2], (14) where z ∼ N (0, I(d+1)×(d+1)). Therefore, we can estimate ‖H‖F by sampling random vectors z and computing the sample average of ‖Hz‖2." }, { "heading": "C SOAR ALGORITHM: A COMPLETE ILLUSTRATION", "text": "In Algorithm 1, we present the inner-loop operation of SOAR using a single data point. Here we summarize the full training procedure with SOAR in Algorithm 2. Note that it is presented as if the optimizer is SGD, but we may use other optimizers as well.\nAlgorithm 2: Improving adversarial robustness via SOAR. Input: Training dataset, learning rate β, training batch size b, number of iterations N, `∞ constraint of ε, finite difference step-size h.\n1 Initialize network with pre-trained weight w; 2 for i ∈ {0, 1, . . . , N} do 3 Get mini-batch B = {(x1, y1), · · · , (xb, yb)} from the training set. 4 for j = 1, . . .
,m (in parallel) do 5 x′j ← xj + η, where η ← (η1, η2, . . . , ηd)> and ηi ∼ U(− ε2 , ε 2 ).\n6 x′j ← ΠB(xj , ε2 ) { x′j + ε 2 sign (∇x′j `(x ′ j)) } where Π is the projection operator. 7 Sample z ∼ N (0, I(d+1)×(d+1)). 8 Compute the SOAR regularizer R(x′j ; z, h) as (11). 9 Compute the pointwise objective: `SOAR(xj , yj) = `(x′j , yj) +R(x ′ j ; z, h).\n10 end 11 wi+1 ← wi − β × 1b ∑b j=1∇wi`SOAR. 12 end" }, { "heading": "D POTENTIAL CAUSES OF GRADIENT MASKING", "text": "We summarize the average value of the highest probability output for test set data initialized with zero, random and PGD1 perturbations in Table 5. We notice that training with SOAR using zero or random initialization leads to models with nearly 100% confidence on their predictions. This is aligned with the analysis of SOAR for a linear classifier (Section 4.1), which shows that the regularizer becomes ineffective as the model outputs high confidence predictions. Indeed, results in Table 7 show that those models are vulnerable under black-box attacks.\nResults in Table 5 suggest that highly confident predictions could be an indication for gradient masking. We demonstrate this using the gradient-based PGD attack. Recall that we generate PGD attacks by first initializing the clean data xn with a randomly chosen η within the `∞ ball of size ε, followed by gradient ascent at xn + η. Suppose that the model makes predictions with 100% confidence on any given input. This leads to a piece-wise loss surface that is either zero (correct predictions) or infinity (incorrect predictions). The gradient of this loss function is either zero or undefined, and thus making gradient ascent ineffective. Therefore, white-box gradient-based attacks are unable to find adversarial examples." }, { "heading": "E SUPPLEMENTARY EXPERIMENTS", "text": "" }, { "heading": "E.1 DISCUSSION ON THE REPRODUCIBILITY OF CURE AND LLR", "text": "We were not able to reproduce results of two closely related works, CURE (Moosavi-Dezfooli et al., 2019) and LLR (Qin et al., 2019). For CURE, we found the open-source implementation2, but were not able to reproduce their reported results using their implmentation. We were not able to reproduce the results of CURE with our own implementation either. For LLR, Yang et al. (2020) were not able to reproduce the results, they also provided an open-source implementation3. Regardless, we compare SOAR to the reported result by CURE and LLR in Table 6:" }, { "heading": "E.2 TRAINING AND EVALUATION SETUP", "text": "CIFAR-10: Training data is augmented with random crops and horizontal flips.\nResNet: We used an open-source ResNet-10 implementation4. More specifically, we initialize the model with ResNet(BasicBlock, [1,1,1,1]). Note that we remove the BatchNorm layers in the ResNet-10 architecture, and we discuss this further in Appendix E.7 .\nWideResNet: We used the implementation5 of WideResNet-34-10 model found in public repository maintained by the authors of TRADES (Zhang et al., 2019).\nStandard training on ResNet and WideResNet: Both are trained for a total of 200 epochs, with an initial learning rate of 0.1. The learning rate decays by an order of magnitude at epoch 100 and 150. We used a minibatch size of 128 for testing and training. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4.\nAdversarial training with PGD10 examples on ResNet: The optimization setting is the same as the one used for standard training. 
Additionally, to ensure that the final model has the highest adversarial robustness, we save the model at the end of every epoch, and the final evaluation is based on the one with the highest PGD20 accuracy.\nSOAR on ResNet: SOAR refers to continuing the training of the Standard model on ResNet. It is trained for a total of 200 epochs with an initial learning rate of 0.004 and decay by an order of magnitude at epoch 100. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We use a FD step-size h = 0.01 for the regularizer. Additionally, we apply a clipping of 10 on the regularizer, and we discuss this clipping operation in Appendix E.7 .\nMART and TRADES on ResNet: We used the same optimization setup as the ones in their respective public repository6. We briefly summarize it here. The model is trained for a total of 120 epochs, with an initial learning rate of 0.1. The learning rate decays by an order of magnitude at epoch 75, 90, 100. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We performed a hyperparameter sweep on the strength of the regularization term β and selected one that resulted in the best performance against PGD20 attacks. A complete result is reported in Appendix E.12 .\n2https://github.com/F-Salehi/CURE_robustness 3https://github.com/yangarbiter/robust-local-lipschitz 4https://github.com/kuangliu/pytorch-cifar 5https://github.com/yaodongyu/TRADES 6https://github.com/YisenWang/MART\nMMA on ResNet: We used the same optimization setup as the one in its public repository7. We briefly summarize it here. The model is trained for a total of 50000 iterations, with an initial learning rate of 0.3. The learning rate changes to 0.09 at the 20000 iteration, 0.03 at the 30000 iteration and lastly 0.009 at the 40000 iteration. We used SGD optimizer with momentum of 0.9 and a weight decay of 2e-4. We performed a hyperparameter sweep on the margin term and selected the one that resulted in the best performance against PGD20 attacks. A complete result is reported in Appendix E.12 .\nADV, TRADES, MART and MMA on WideResNet: We use the pretrained checkpoint provided in their respective repositories. Note that we use the pretrained checkpoint for PGD10 adversarially trained WideResNet in Madry’s CIFAR10 Challenge8.\nEvaluations: For FGSM and PGD attacks, we use the implementation in AdverTorch (Ding et al., 2019c). For SimBA (Guo et al., 2019), we use the authors’ open-source implementation9." }, { "heading": "E.3 ADVERSARIAL ROBUSTNESS OF THE MODEL TRAINED USING SOAR WITH DIFFERENT INITIALIZATIONS", "text": "We report the adversarial robustness of the model trained using SOAR with different initialization techniques in Table 7. The second column shows the accuracy against white-box PGD20 adversaries. The third column shows the accuracy against black-box PGD20 adversaries transferred from an independently initialized and standard-trained ResNet-10 model. Note that despite the high adversarial accuracy against white-box PGD attacks, models trained using SOAR with zero and random initialization perform poorly against transferred attacks. This suggests the presence of gradient masking when using SOAR with zero and random initializations. Evidently, SOAR with PGD1 initialization alleviates the gradient masking problem." 
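For completeness, the transferred-attack evaluation used above can be sketched as follows, assuming the pgd_linf helper sketched in Section 3 and a standard test-set DataLoader; the function name is our own.

```python
import torch
import torch.nn.functional as F

def transfer_attack_accuracy(defended, source, loader, eps=8/255):
    # Adversarial examples are crafted on an independently trained, undefended
    # `source` model and evaluated on `defended`: a basic gradient-masking check.
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_linf(source, F.cross_entropy, x, y,
                         eps=eps, step=2/255, iters=20)  # PGD20, as in Table 7
        with torch.no_grad():
            correct += (defended(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```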
}, { "heading": "E.4 COMPARING THE VALUES OF THE SOAR REGULARIZED LOSS COMPUTED USING", "text": "DIFFERENT NUMBERS OF RANDOMLY SAMPLED z\nSuppose we slightly modify Eq (13) by `SOAR(x, y, n) = `(x′, y) + 1n ∑n i=0R(x\n′; z(i), h) to incorporate the effect of using multiple randomly sampled vectors z(i) in computing the SOAR regularized loss. Notice that the current implementation is equivalent to using n = 1. We observed the model at two checkpoints, at the beginning and the end of SOAR regularization, the value of the regularized loss remains unchanged as we increase n from 1 to 100.\n7https://github.com/BorealisAI/mma_training 8https://github.com/MadryLab/cifar10_challenge 9https://github.com/cg563/simple-blackbox-attack" }, { "heading": "E.5 ROBUSTNESS UNDER `2 ATTACKS ON CIFAR-10", "text": "We evaluate SOAR and two of the baseline methods, ADV and TRADES, against `2 white-box and black-box attacks on CIFAR-10 in Table 9. No `2 results were reported by MART and we are not able to reproduce the `2 results using the implementation by MMA, thus those two methods are not included in our evaluation.\nIn Section 4, we show that the `∞ formulation of SOAR with ‖δ‖∞ = ε is equivalent to the `2 formulation of SOAR with ‖δ‖2 = ε √ d. In other words, models trained with SOAR to be robust against `∞ attacks with ε = 8255 should also obtain improved robustness against `2 attacks with ε = 8255 √ 32 ∗ 32 ∗ 3 = 1.74. In our evaluation, all `2 adversaries used during ADV and TRADES are generated with 10-step PGD (ε = 1.74) and a step size of 0.44. Note that the goal here is to show the improved robustness of SOAR against `2 attacks other than being SOTA, thus the optimization procedures are the same as the ones used in the `∞ evaluation.\nWe observe that training with SOAR improves the robustness of the model against `2 attacks. Instead of a fixed `2 norm, we demonstrate the improved robustness using an increasing range of ε. For all attacks, we use 100 iterations of PGD and a step size of 2.5ε100 . In Table 9, we find that training with SOAR leads to a significant increase in robustness against white-box and black-box `2 adversaries. As ε increases, SOAR model remain robust against white-box `2 attacks (ε = 1), while other methods falls off. The last column of Table 9 shows the robustness against transferred `2 attacks (ε = 1.74). The source model is a ResNet10 network trained separately from the defence models on the unperturbed training set. We observe that SOAR achieves the second highest robustness compared to baseline methods against transferred `2 attacks. This result empirically verifies our previous claim that `2 and `∞ formulation of SOAR only differs by a factor of √ d. Moreover, it aligns with findings by Simon-Gabriel et al. (2019), that empirically showed adversarial robustness through regularization gains robustness against more than one norm-ball attack at the same time." }, { "heading": "E.6 ADDITIONAL EVALUATION ON SVHN DATASET", "text": "We use the same ResNet-10 architecture as the one for CIFAR-10 evaluation. Training data is augmented with random crops and horizontal flips. For Standard training, we use the same optimization procedure as the one used for CIFAR-10. For SOAR and TRADES, we use the exact same hyperparameter for the regularizer. For SOAR, we use early-stopping at epoch 130 to prevent catastrophic over-fitting. 
Beyond that, the optimization schedule for SOAR and TRADES is identical to the one used for CIFAR-10.\nWe emphasize again that the goal of evaluating on SVHN is to demonstrate the improved robustness with SOAR on a different dataset, thus we did not perform an additional hyper-parameter sweep. The optimization procedures are the same as the ones used in the CIFAR-10 evaluation.\nFor PGD10 adversarial training, we observe that ResNet-10 is not able to learn anything meaningful. Specifically, when trained with PGD10 examples, ResNet-10 does not perform better than a randomly initialized network in either standard or adversarial accuracy. Cai et al. (2018) made a similar observation on ResNet-50, where training accuracy did not improve over a long period of adversarial training with PGD10. They further investigated models with different capacities and found that even ResNet-50 might not be sufficiently deep for PGD10 adversarial training on SVHN. Wang & Zhang (2019) reported PGD10 adversarial training results on SVHN with WideResNet, which we include in Table 11.\nFor MART, we were not able to translate their CIFAR-10 results to SVHN. We performed the same hyperparameter sweep as the one in Table 18, as well as different optimization settings, but none resulted in a meaningful model. A likely cause is the small capacity of ResNet-10. For MMA, the implementation included in its public repository is very specific to the CIFAR-10 dataset, so we did not include it in the comparison.\nOverall, we observe a similar performance on SVHN vs. on CIFAR-10. Compared to the result in Table 1, we observe a slight increase in standard accuracy and robust accuracy for both SOAR and TRADES. In particular, the standard accuracy increases by 8.87% and 3.28%, and the PGD20 accuracy increases by 3.52% and 2.93%, for TRADES and SOAR respectively. More notably, we observe on SVHN that the SOAR-regularized model gains robustness without significantly sacrificing its standard accuracy.\nTable 12 compares the performance of SOAR to TRADES on SimBA and on transferred ℓ∞ attacks. The evaluation setting for transferred attacks is identical to the one used for CIFAR-10, where we use an undefended, independently trained ResNet-10 as the source model. Despite a smaller gap in the accuracy against transferred attacks, we see that the SOAR-regularized model yields a significantly higher accuracy against the stronger SimBA attacks.\nNote that we did not perform any extensive hyperparameter sweep on SVHN; we simply took what worked on CIFAR-10. We stress that the goal is to demonstrate the effectiveness of SOAR and its performance relative to other baseline methods.\nNext, we evaluate SOAR and TRADES under ℓ2-bounded white-box and black-box attacks. All ℓ2 PGD adversaries are generated using the same method as the one in the evaluation for CIFAR-10. Also, we do not include ADV due to the same issue discussed above. Our results show that training with SOAR significantly improves the robustness against ℓ2 PGD white-box attacks compared to TRADES. For transferred attacks, TRADES and SOAR perform similarly." }, { "heading": "E.7 CHALLENGES", "text": "Batch Normalization: We observe that networks with BatchNorm layers do not benefit from SOAR in adversarial robustness. Specifically, we performed an extensive hyper-parameter search for SOAR on networks with BatchNorm layers, and we were not able to achieve a meaningful improvement in adversarial robustness. A related work by Galloway et al.
(2019) focuses on the connection between BatchNorm and adversarial robustness. In particular, their results show that on a VGG-based architecture (Simonyan & Zisserman, 2014), there is a significant gap in adversarial robustness between networks with and without BatchNorm layers under standard training. Needless to say, the interaction between SOAR and BatchNorm requires further investigation, and we consider this an important future direction. As such, we use a small-capacity ResNet (ResNet-10) in our experiments, and modify it by removing its BatchNorm layers. Specifically, we removed BatchNorm layers from all models used in the baseline experiments with ResNet. Note that BatchNorm layers make the training process less sensitive to hyperparameters (Ioffe & Szegedy, 2015), and removing them makes it difficult to train a very deep network such as WideResNet. As such, we did not perform SOAR on WideResNet.\nStarting from a pretrained model: We notice that it is difficult to train with SOAR on a newly initialized model. Note that it is a common technique to perform fine-tuning on a pretrained model for a specific task. In CURE, regularization is performed after a model is first trained with a cross-entropy loss to reach a high accuracy on clean data. They call the process adversarial fine-tuning. Cai et al. (2018); Sitawarin et al. (2020) study the connection between curriculum learning (Bengio et al., 2009) and training using adversarial examples with increasing difficulty. Our idea is similar. The model is first optimized for an easier task (standard training), and then regularized for a related, but more difficult, task (improving adversarial robustness). Since the model has been trained to minimize its standard loss, the loss gradient can be very small compared to the regularizer gradient, and thus we apply a clipping of 10 on the regularizer.\nCatastrophic Overfitting: We observe that when the model achieves a high adversarial accuracy and continues training for a long period of time, both the standard and adversarial accuracy drop significantly. A similar phenomenon was observed in (Cai et al., 2018; Wong et al., 2019a), which they refer to as catastrophic forgetting and catastrophic over-fitting respectively. Wong et al. (2019a) use early stopping as a simple solution. We observe that with a large learning rate, the model reaches a high adversarial accuracy faster and catastrophic over-fitting happens sooner. As such, our solution is to fix the number of epochs to 200 and then carefully sweep over various learning rates to make sure that catastrophic over-fitting does not happen.\nDiscussion on Computation Complexity: We emphasize that our primary goal is to propose regularization as an alternative approach to improving adversarial robustness. We discussed techniques towards an efficient implementation; however, there is still potential for a faster implementation. In our current implementation, a single epoch with WideResNet takes: 19 mins with PGD10 adversarial training, 26.5 mins with SOAR, 29 mins with MART, and 39.6 mins with TRADES. We observe that despite being a faster method than MART and TRADES, SOAR is still quite slow compared to PGD10 adversarial training. We characterize the computation complexity as a function of the number of forward and backward passes required for a single mini-batch.
Standard training: 1 forward pass and 1 backward pass; adversarial training with k-step PGD: (k+1) forward passes and (k+1) backward passes; FOAR: 1 forward pass and 2 backward passes; SOAR: 3 forward passes and 4 backward passes." }, { "heading": "E.8 DIFFERENTIABILITY OF RELU AND ITS EFFECT ON SOAR", "text": "The SOAR regularizer is derived based on the second-order Taylor approximation of the loss, which requires the loss to be twice-differentiable. Although ReLU is not differentiable at 0, the probability of its input being exactly 0 is very small. That is also why we can train ReLU networks through backpropagation. This is true for the Hessian too. In addition, notice that from a computational viewpoint, we never need to compute the exact Hessian, as we approximate it through a first-order approximation." }, { "heading": "E.9 POTENTIAL ROBUSTNESS GAIN WITH INCREASING CAPACITIES", "text": "Empirical studies in Madry et al. (2018) and Wang et al. (2020) reveal that their approaches benefit from increasing model capacity to achieve higher adversarial robustness. We have a similar observation with SOAR.\nTable 14 compares the performance of SOAR against ℓ∞-bounded white-box attacks on networks with different capacities. CNN6 (CNN8) refers to a simple 6-layer (8-layer) convolutional network, and ResNet-10 is the network we use in Section 5. Evidently, as network capacity increases, we observe improvements in both standard accuracy and adversarial accuracy. As such, we expect a similar gain in performance with larger-capacity networks such as WideResNet." }, { "heading": "E.10 EXPERIMENT RESULTS ON RESNET10 IN TABLE 1 AND TABLE 2 WITH STANDARD DEVIATIONS", "text": "All results on ResNet10 are obtained by averaging over 3 independently initialized and trained models. Here, we report the standard deviation of the results in Table 1 and Table 2. Notice we omit results on PGD100 and PGD200 due to space constraints." }, { "heading": "E.11 ADDITIONAL EXPERIMENTS ON GRADIENT MASKING", "text": "To verify that SOAR improves the robustness of the model without gradient masking, we include the following experiments to empirically support our claim.\nFirst, from the result in Appendix E.3, we conclude that SOAR with zero initialization results in gradient masking. This is shown by the high accuracy (89.24%, close to standard accuracy) under white-box PGD attacks and low accuracy (2.86%) under black-box transferred attacks. Next, prior work has verified that adversarial training with PGD20 adversaries (ADV) results in a model without gradient masking (Athalye et al., 2018). Therefore, let us use models trained using ADV and SOAR (zero-init) as examples of models without and with gradient masking, respectively.\nIn the ℓ∞ attack setting, PGD uses the sign of the gradient, sign(∇_x ℓ(x)), to generate perturbations. As such, one way to verify the strength of the gradient is to measure the average number of non-zero elements in the gradient. A model with gradient masking is expected to have far fewer non-zero elements than one without. In our experiment, the average number of non-zero gradient elements is 3072 for the ADV-trained model (no GM), 3069 for SOAR (PGD1-init) and 1043 for SOAR (zero-init, has GM). We observe that SOAR with PGD1-init has a similar number of non-zero gradient elements compared to ADV, meaning a PGD adversary can use the sign of those non-zero gradient elements to generate meaningful perturbations.\nIn Section 5, the 20-iteration ℓ∞ PGD adversaries are generated with a step-size of 2/255 and ε = 8/255; a sketch of this attack is given below.
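For reference, here is a minimal sketch of this attack following the standard random-start ℓ∞ PGD recipe; the clamping of images to [0, 1] is an assumption of ours about the input range rather than a detail taken from the paper.

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=8/255, step=2/255, iters=20):
    """Standard l_inf PGD with a random start inside the eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv = (x + delta).detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        delta = (delta + step * grad.sign()).clamp(-eps, eps)   # project onto the eps-ball
        delta = ((x + delta).clamp(0, 1) - x).detach()          # keep images in [0, 1]
    return (x + delta).detach()
```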
Suppose we use ε = 1 instead of ε = 8/255 and the other parameters remain the same; that is, we allow the maximum ℓ∞ perturbation to span the input range ([0, 1]) and generate PGD20 attacks. We observe that such attacks result in near black-and-white images on SOAR with PGD1 init; it has a 0% accuracy against such PGD20 attacks, similar to the 3.3% of the ADV-trained model. On the other hand, the robust accuracy of SOAR (zero-init) is 9.7%." }, { "heading": "E.12 HYPERPARAMETER SWEEP FOR TRADES, MART AND MMA ON RESNET", "text": "The following results show the hyperparameter sweeps on TRADES, MART and MMA respectively. We include the one with the highest PGD20 accuracy in Section 5." }, { "heading": "E.13 ADVERSARIAL ROBUSTNESS OF THE MODEL TRAINED USING FOAR WITH DIFFERENT INITIALIZATIONS", "text": "FOAR achieves the best adversarial robustness using PGD1 initialization, so we only present this variation of FOAR in Section 5." } ]
2020
null
SP:885d09e9fb6fa10be309dcbfe259ecf35ccabb82
[ "The paper proposes a neuro-symbolic model for sample-efficient VQA, which turns each question into a probabilistic program which is then softly executed. The problem explored in the paper and its background and context presented clearly and it does a good job in motivating its importance and trade-offs between possible solutions. While the use of a probabilistic program to represent the questions might be too stiff / inflexible in my opinion and may not generalize well to less constrained natural language, this direction is still of course important and interesting. It also does a great job in presenting the existing approaches and comparing their properties. The writing is good and the model is presented clearly with a very useful diagram. " ]
In multi-modal reasoning tasks, such as visual question answering (VQA), there have been many modeling and training paradigms tested. Previous models propose different methods for the vision and language tasks, but which ones perform the best while being sample and computationally efficient? Based on our experiments, we find that representing the text as probabilistic programs and images as object-level scene graphs best satisfy these desiderata. We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner. Empirical results demonstrate that this differentiable end-to-end program executor is able to maintain state-of-the-art accuracy while being sample and computationally efficient.
[]
[ { "authors": [ "Felix A Gers", "Jürgen Schmidhuber", "Fred Cummins" ], "title": "Learning to forget: Continual prediction with lstm", "venue": null, "year": 1999 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ronghang Hu", "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Kate Saenko" ], "title": "Learning to reason: End-to-end module networks for visual question answering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ronghang Hu", "Jacob Andreas", "Trevor Darrell", "Kate Saenko" ], "title": "Explainable neural computation via stack neural module networks", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Ronghang Hu", "Anna Rohrbach", "Trevor Darrell", "Kate Saenko" ], "title": "Language-conditioned graph networks for relational reasoning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Drew Hudson", "Christopher D Manning" ], "title": "Learning by abstraction: The neural state machine", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "Compositional attention networks for machine reasoning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Qing Li", "Siyuan Huang", "Yining Hong", "Yixin Chen", "Ying Nian Wu", "Song-Chun. Zhu" ], "title": "Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Jiayuan Mao", "Chuang Gan", "Pushmeet Kohli", "Joshua B. Tenenbaum", "Jiajun Wu" ], "title": "The NeuroSymbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Mascharka", "Philip Tran", "Ryan Soklaski", "Arjun Majumdar" ], "title": "Transparency by design: Closing the gap between performance and interpretability in visual reasoning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
"venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Hao Tan", "Mohit Bansal" ], "title": "Lxmert: Learning cross-modality encoder representations from transformers", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ramakrishna Vedantam", "Karan Desai", "Stefan Lee", "Marcus Rohrbach", "Dhruv Batra", "Devi Parikh" ], "title": "Probabilistic neural-symbolic models for interpretable visual question answering", "venue": null, "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Kexin Yi", "Jiajun Wu", "Chuang Gan", "Antonio Torralba", "Pushmeet Kohli", "Josh Tenenbaum" ], "title": "Neural-symbolic vqa: Disentangling reasoning from vision and language understanding", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many real-world complex tasks require both perception and reasoning (or System I and System II intelligence (Sutton & Barto, 2018)), such as VQA. What is the best way to integrate perception and reasoning components in a single model? Furthermore, how would such an integration lead to accurate models, while being sample and computationally efficient? Such questions are important to address when scaling reasoning systems to real world use cases, where empirical computation bounds must be understood in addition to the final model performance.\nThere is a spectrum of methods in the literature exploring different ways of integrating perception and reasoning. Nowadays, the perception is typically carried out via neural models: such as CNNs for vision, and LSTMs (Gers et al., 1999) or Transformers (Vaswani et al., 2017) for language. Depending on the representation of perception input and their reasoning interface, a method can be either more towards the neural end of the spectrum or more toward the symbolic end.\nFor the vision part, models can either use pixel-level or object-level symbolic representation. For the language part, models can generate either textual attention or programs, where the text is decomposed into a sequence of functions. Within the program representations, models typically operate on a selected discrete program or on probabilistic programs. The reasoning part used to produce the final answer can either use neural models, symbolic reasoning, or something in between, such as neural module networks (NMN) or soft logic blocks.\nExisting works for NMN methods leverage pixel-level representations and program representations such as NMN (Hu et al., 2017), Prob-NMN (Vedantam et al., 2019), and Stack-NMN (Hu et al., 2018). Representative models that use object-level vision also leverage both neural and symbolic language and reasoning. Models that are more neural are LXMERT (Tan & Bansal, 2019) and NSM (Hudson & Manning, 2019), while those that are more symbolic are NS-VQA (Yi et al., 2018), NSCL (Mao et al., 2019) and NGS (Li et al., 2020). A systematic comparison across these models is illustrated in Table 1 with more details in Appendix A.\nOverall, neural models have more expressive power but with more parameters, while more-symbolic models have more prior structures built into them but with fewer parameters. There is an interesting bias-variance trade-off in the model design. By encoding as much bias into the model as possible, one could reduce sample requirements.\nThe different choices of perception and reasoning components also limit how the QA models will be trained. If both components are chosen as neural modules, then the training can be done in a very efficient end-to-end fashion. If the reasoning is carried out using more discrete operations,\nthen the perception model needs to sample discrete outputs or take discrete inputs to interface with downstream reasoning. For instance, if symbolic reasoning is used, REINFORCE (Williams, 1992) is typically used to train the perception models, which may require many samples during the optimization process. Alternatively, one can also use expensive abduction (Li et al., 2020) to manipulate the perception models outputs to provide the correct reasoning and then optimize these perception models using these pseudo-labels. Overall, more neural models will be easier to optimize, while more symbolic models will need additional expensive discrete sampling during optimization. 
To highlight this interesting fact, we call it the neuro-symbolic trade-off.\nThis neuro-symbolic trade-off also affects sample efficiency and computational efficiency. To be more sample efficient, the model needs to be less neural; yet a more neural model can be more computationally efficient during training. Thus a method that achieves overall good performance in terms of both sample and computation efficiency will require systematically determining which perception and reasoning components should be used and how to integrate them. To design such a model, we first test which method within each perception and reasoning component works the most efficiently. From this neuro-symbolic trade-off exploration we can design a model that uses these most efficient components and compare its overall performance against existing models." }, { "heading": "2 PROBLEM SETTING", "text": "Before the exploration, we formally define the different choices for the vision, language, and reasoning components. In the general VQA setting we are provided with an image I, a natural language question Q, and an answer A. We now define how these basic inputs are used in each component." }, { "heading": "2.1 REPRESENTATION FOR VISION", "text": "Given the image I there are two predominant visual representations: pixel and object-level attention.\nPixel Attention. Given an image, one can leverage traditional deep learning architectures used for image representation and classification, such as ResNets (He et al., 2016). Here the image is passed through many residual convolution layers before entering an MLP sub-network to perform a classification task. From one of these MLP linear layers, an intermediate dense image representation feature f_I ∈ R^D can be extracted, denoted by f_I = ResNet(I). These features are used further down the VQA pipeline, where the downstream model computes attention over the relevant part of the feature based on the question asked.\nObject-level. Another paradigm is to leverage object detection models such as Faster R-CNNs (Ren et al., 2015) to identify individual objects within images. Given objects in the image, one can conduct more object-level or symbolic reasoning over the image, instead of reasoning on a pixel-by-pixel basis.\nIn this object-level representation, a set of object location bounding boxes (bbox) can be detected and labeled directly by using the R-CNN as O = {(bbox_1, label_1), ..., (bbox_T, label_T)} = R-CNN(I) for a preset number of T objects. Here each o ∈ O can be labeled as “small red shiny ball” or “large green tray” based on what is in the image.\nAnother approach is to factor the joint bounding box and label prediction into individual components handled by separate models. First the bounding boxes are extracted from the R-CNN as {bbox_i}_{i=1}^T = R-CNN(I). Then these can be passed into a separate MLP network to retrieve the labels {label_i}_{i=1}^T = MLP(ResNet(I[bbox_i])), where I[bbox] is the cropped image at that bounding box location. These can be used to define the final set of objects: O = {(bbox_i, label_i)}_{i=1}^T. In such a setup, the benefit is that the R-CNN can be trained just as an object detector for a generic object class versus the background, whose annotations are easier to obtain. Furthermore, the amount of supervised data the label MLP uses for training can be controlled separately; a sketch of this factoring follows.
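As an illustration of this factored pipeline, the sketch below wires a generic box detector to a separately trained label MLP over cropped ResNet features. The class and helper names, the box convention, and the assumption that the backbone maps each crop to a single feature vector are ours, not the exact architecture of any cited model.

```python
import torch
import torch.nn as nn

def crop(image, box):
    # image: (C, H, W); box = (x1, y1, x2, y2) in pixels; an assumed convention
    x1, y1, x2, y2 = [int(v) for v in box]
    return image[:, y1:y2, x1:x2]

class FactoredObjectParser(nn.Module):
    """Generic box detector plus a separately supervised label MLP."""
    def __init__(self, detector, backbone, feat_dim, num_labels):
        super().__init__()
        self.detector = detector   # returns bounding boxes only: {bbox_i}_{i=1}^T
        self.backbone = backbone   # e.g., a ResNet mapping each crop to a feat_dim vector
        self.label_mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_labels))

    def forward(self, image):
        boxes = self.detector(image)
        feats = torch.stack([self.backbone(crop(image, b)) for b in boxes])
        return boxes, self.label_mlp(feats).softmax(-1)  # per-object label distributions
```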
This factoring is a useful mechanic during our model efficiency analysis, where we work under the assumption that object bounding boxes are almost perfect, while object labeling is imperfect and expensive to annotate." }, { "heading": "2.2 REPRESENTATION FOR LANGUAGE", "text": "The language representations operate on the natural text question Q. Some data sets also provide intermediate representations of each Q through a function program layout FP. FP represents the question as a sequence of abstract functions F as FP = [F_1, F_2, ..., F_t] for F_i ∈ F. These function programs are used jointly with the visual representations to conduct reasoning to arrive at the answer A. Details about potential realizations of F are described in the following reasoning representation section, 2.3. Given the question Q and its representation FP, we can define different approaches for representing the text.\nText Attention. Using only the embedded text tokens E, a model can encode the question Q through a recurrent network to generate a final question representation h_T, where T is the maximum sequence length. Then h_T can be put through a recurrent decoder to obtain a latent function at each step c_t through an attentive combination of the hidden states, c_t = Σ_T a_t · h_t.\nSymbolic Program. If we want to explicitly produce an FP for a corresponding question Q, we similarly encode the text as done for text attention. During decoding, c_t is passed through an MLP to predict a valid function token. Then the most likely program is selected as arg max_{FP} P(FP | Q).\nSoft Program. When choosing a discrete symbolic program, the uncertainty of the other function program parses is thrown out. Instead, the probabilities for each program can be saved and an expected program can be computed as E[P(FP | Q)]. Intuitively, all the possible programs have to be considered in this scenario, which can be intractable. Instead, soft program methods such as Stack-NMN factor this as E[P(FP | Q)] = E[∏_T P(F_t | Q)] = ∏_T E[P(F_t | Q)]. This enables preserving a distribution over functions at each step instead of selecting a single one." }, { "heading": "2.3 REPRESENTATION FOR REASONING", "text": "Given the visual and language representations, the reasoning component uses these representations to derive the final answer A. Here we discuss methods that are neural, symbolic, and soft logic based.\nNeural Reasoning. Reasoning can be made directly with the image feature f_I and the encoded question h_T, such as A = MLP([h_T; f_I]), in a purely neural fashion. Other approaches can leverage the object representations O. This is done by modulating the attention over which O correspond to the final answer A, conditioned on h_T, as done in NSM or LCGN. LXMERT uses cross-modal attention between the text embeddings E and O to predict the final answer. All these methods are more neural, but the FP can be leveraged as well to incorporate better biases through symbolic and soft programs.\nSymbolic Representations. From the question we can define abstract functions F to generate FP as described in the previous section. Representing F in a symbolic form enables encoding general knowledge or a certain dataset's domain-specific language (DSL) into a model. This improves model interpretability and provides better inductive biases as well. Here we further describe two classes of these functions: fine grained and coarse.\nA fine grained representation of FP is a sequence of n-ary predicates, functions with n arguments, composing F.
For example, given the question Q = “What is the shape of the thing left of the sphere?”, a sample fine grained program can be defined as FP = [filter_shape(sphere, O), relate(left, O), query_shape(O)]. Here the visual representation (O or f_I) and higher-level argument concepts, such as sphere, are used as inputs to each function. We observe clear biases encoded into the function architecture: given a scene graph of objects O and their relations, one could walk along this graph using FP to get the final answer A. The trade-off is that the model has to deal with more complex functions, whose input arguments and output types can vary. For example, filter_shape and relate return a subset of objects, while query_shape returns a string. Such formulations are used by more neuro-symbolic methods such as NS-VQA and NS-CL.\nCoarse function types consist of simpler predicates whose arity is fixed, typically 1, over F. Given the previous question Q, a coarse function program can be defined as FP = [filter_θ(f_I), relate_θ(f_I), query_θ(f_I)]. Here less structure is required with respect to the language and visual representation, and each function can be parameterized as an NMN. These require more parameters than DSL functions but are syntactically easier to handle, as they typically just operate on a fixed-dimensional image feature f_I, thus implicitly encoding the function arguments.\nSymbolic Reasoning. Using any coarse or fine representation type for F, the symbolic reasoning can take place over the selected symbolic program FP. We define the high-level execution of the symbolic reasoner to arrive at the answer by executing over FP as A = 〈FP, image representation〉_S. In the fine grained and coarse cases this would look like:\nA = 〈FP, O〉_S = query_shape(relate(left, filter_shape(sphere, O)))\nA = 〈FP, f_I〉_S = query_θ(relate_θ(filter_θ(f_I)))\nSince the structure of the reasoning is discrete, updating the visual and language model weights requires sampling-based learning such as REINFORCE or abductive methods.\nSoft Logic Reasoning. When conducting the symbolic reasoning, we discard the uncertainty of the visual representations when generating the labels for O. Instead, the probabilities for O can be saved. Then the uncertainty from the detections can be propagated during the execution in a soft manner to arrive at A. We can similarly define the soft logic reasoning as A = 〈FP, I〉_SL = E_{O∼R-CNN(I)}[〈FP, O〉_S]. Due to the probabilistic execution, this can be optimized end-to-end.\nNow that the components and their corresponding methods have been defined, we explore which methods are the most efficient for their respective tasks." }, { "heading": "3 NEURO-SYMBOLIC TRADE-OFF EXPLORATION", "text": "Many deep VQA models have been developed in the literature with a range of design elements, as can be seen from Table 1. Which one of these methods is the key factor in making a deep VQA model sample efficient while at the same time achieving state-of-the-art accuracy? In this comparison using the CLEVR dataset (Johnson et al., 2017), we aim to understand which design elements individually perform the best. Based on these findings, better end-to-end models can be designed from the individual methods selected. More specifically, we will explore the sample and computational efficiency of the following components:\n• Visual representations through pixel attention and object-level methods.\n• Reasoning execution through neural modules, symbolic execution, and soft logic.
• Language encoding for questions through text attention, symbolic programs, and soft programs.\nFor the representations of language and reasoning, we observe that these two components are tightly coupled. In language we define F and the corresponding FP given Q. In reasoning these functions get executed in the fine grained case, or a network gets constructed from neural modules in the coarse case. For this reason we found it difficult to isolate the language and reasoning exploration. This motivated us to initially observe the interactions between the vision and reasoning given a fixed FP. Then, by iteratively selecting the best vision and reasoning components, we explore the most efficient language and reasoning combination. Each method is also explained in more detail in Appendix D." }, { "heading": "3.1 VISUAL REASONING", "text": "To test the visual perception and the reasoning components, we break down the tests into two parts. First we determine which visual representation is more efficient: pixel attention or object-centric. Second we find the reasoning method that best complements the selected visual representation.\nPixel Attention versus Object-level. We compare the pixel attention Stack-NMN and object-level NS-CL representation methods. We chose these two models because their visual representations differ while both have continuous end-to-end reasoning methods given a fixed FP.\nWe set up this experiment by training and freezing each model's language components on CLEVR question program pairs (Q, FP) to isolate the effects of the visual methods. Then the visual representation is trained from scratch using different percentages of CLEVR text question and answer (Q, A) pair data (QA Data %), where no object-level supervision is provided.\nThe sample and computational efficiencies are illustrated in Table 1, which indicate that object-level representations work better. Object-level detectors are able to leverage cheap object location labels for localization and can leverage MLPs to classify objects given a dense feature f_I ∈ R^D. In contrast, pixel-wise methods need to learn the object attention maps through more parameter-intensive convolutional layers over I ∈ R^{L×W×3}, where D < L × W × 3 channels.\nSymbolic versus Soft Logic Reasoning. Since we will use fine grained object-level O information, we don't need to conduct parameter-intensive purely neural reasoning over f_I. This lets us leverage the function programs FP. Similarly, given O, we don't use coarse NMN functions, which only operate over f_I. We focus on testing fine grained F to determine the best way to reason over the answer A = 〈FP, I〉_∗, using either symbolic or soft logic. We use NS-CL, which already performs soft logic over (Q, A) pairs. To test the symbolic reasoning, we replace the soft functions F defined in NS-CL by discrete counterparts, similar to the symbolic execution performed by NS-VQA. To train the symbolic reasoning with QA data, we test REINFORCE and abduction-based optimization, denoted as NS-RL and NS-AB respectively. The results in Table 2 indicate that propagating the uncertainty of the object predictions through soft logic leads to gains in computational efficiency as well as final performance; a sketch of such a soft execution follows.
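To make this soft-logic execution concrete, the following sketch shows probabilistic versions of a filter and a query step that keep the object-label uncertainty in play; the tensor layout and function names are illustrative assumptions of ours, not the exact NS-CL operators.

```python
import torch

def soft_filter(obj_attn, attr_probs, concept_idx):
    """Keep each object in proportion to P(attribute = concept).

    obj_attn:   (T,) current attention over the T detected objects
    attr_probs: (T, C) per-object distribution over C attribute values
    """
    return obj_attn * attr_probs[:, concept_idx]

def soft_query(obj_attn, attr_probs):
    """Expected attribute distribution under the current object attention."""
    w = obj_attn / obj_attn.sum().clamp_min(1e-8)
    return w @ attr_probs  # (C,) distribution over the attribute values
```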
Abduction has benefits over REINFORCE, as it can selectively sample the object probabilities, which proves to be a more efficient procedure as the accuracy of the vision model increases.\nAt this point we have tested the visual components and presented the benefits of object-level representation and soft logic reasoning. Given these two methods, we now explore which language representation would be the most efficient to use." }, { "heading": "3.2 LINGUISTIC REASONING", "text": "To test our language representation, we want to determine the most efficient representation for Q. Taking into account the vision experiments, we find NS-CL's approach of operating over object-level representations and executing soft logic reasoning over a fixed FP to be the most suitable. Building off of this, we want to understand the best representation of FP for reasoning. Therefore we look at symbolic and soft approaches. Recall that the tight integration between the language representation and reasoning means that we investigate these two components in a joint fashion.\nSymbolic versus Soft Execution. To test the language representation, we similarly train and freeze the visual representations to isolate the effects of the language models. These language models are then trained end-to-end on (Q, A) pairs without ground-truth FP.\nWe compare NS-CL, which uses fine grained symbolic programs and soft logic reasoning over O, to Stack-NMN, which uses coarse soft programs and neural reasoning over f_I. For Stack-NMN the language and vision are trained jointly, but for NS-CL the language parser is trained disjointly from the vision. When measuring the computational efficiency results, we divided the iterations equally across the curriculum learning setup described in their work. This led to more stable accuracy improvements than training on random samples as done in NS-VQA.\nThe results are presented in Table 2, and we generally see slower convergence than in the vision trials. This is due to the fact that the program produced has to be perfect to reason correctly, while some mislabeled objects may not lead to incorrect reasoning. Furthermore, we encounter spurious predictions of incorrect FP leading to the correct answer, prolonging training.\nLooking closer at the results, at 1% QA data the symbolic representation is on top. We posit that this is because the model exhaustively samples the program space, which outperforms an end-to-end approach given limited data. However, as the amount of data increases, the end-to-end soft programs show better accuracies and computational efficiency. With the results from these language and vision experiments, we now have an understanding of which methods are the most efficient for VQA." }, { "heading": "4 VISION AND LANGUAGE END-TO-END REASONING", "text": "We iteratively experimented on different visual, language, and reasoning VQA components to determine which methods had the best accuracy and efficiency trade-offs. Based on this, we determined the following desiderata for efficient VQA methods:\n• Soft programs to propagate language parsing uncertainty.\n• Object-level detection models with pre-trained localization for object-based reasoning.\n• Soft logic functions to propagate the uncertainties from the vision and language branches.\n• End-to-end differentiability to efficiently optimize the perception and reasoning jointly.\nWe are motivated to combine the existing Stack-NMN and NS-CL frameworks we tested to synthesize such a model.
However, during our trade-off exploration we found it non-trivial to simply combine different methods across the visual, language, and reasoning components. We address two challenges regarding storing the intermediate computation in memory and the overall optimization when using these selected methods." }, { "heading": "4.1 MEMORY", "text": "The first challenge is incompatibility at the reasoning level. NS-CL uses fine grained functions that operate on objects O. The outputs of all of Stack-NMN's soft programs come from coarse NMNs that operate only on f_I. To make such soft programs from Stack-NMN compatible with fine grained functions and object-level data, the memory storage handling the intermediate functional computation has to store object-level data as well.\nWe design such a memory structure by pre-assigning different object-level data modalities to different parts of a memory matrix M ∈ R^{T×(A+C+1)}. Here the first dimension T is the stack dimension used to store intermediate function outputs for future use, similar to Stack-NMN. The second dimension, of size (A + C + 1), holds the rows that store the heterogeneous values, while Stack-NMN only stores a D-dimensional image feature f_I. The first A dimensions in the row indicate the object set attentions m_det, i.e., which objects the model is paying attention to at that step. The next C entries store the concatenated categorical value outputs produced by our vision models, such as m_color and m_size. The final dimension m_num is reserved for some accumulated score such as a count, or a boolean bit for true/false questions. For the CLEVR-specific case, the object attentions are m_det ∈ R^A, the categorical values are [m_color; m_shape; m_texture; m_size] ∈ R^C, and the numeric slot is m_num ∈ R. This enables computing the reasoning softly over NS-CL-like object-level predictions and Stack-NMN function program layouts as E_{O∼R-CNN(I)}[〈∏_T E[P(F_t | Q)], O〉_S]. After reasoning, the answers can be directly read from the memory instead of being inferred as a multi-class label. This is done by directly predicting the question type from Q and accessing the corresponding memory location. For example, if the question is asking about the color, then we would return arg max m_color. Now that we have this fully differentiable architecture, we focus on the optimization procedure." }, { "heading": "4.2 OPTIMIZATION", "text": "The second challenge is that Stack-NMN trains end-to-end, while NS-CL iterates between REINFORCE for program parsing and end-to-end training for visual concept learning. NS-CL fixes the language model while training the vision end-to-end, whereas we want to jointly optimize both the language and vision models. This results in a much larger optimization landscape prone to spurious predictions from end-to-end training on QA data.\nTo mitigate this, we start by training the vision MLP and language LSTM models with a small amount of direct supervision, using between 0.1% and 1% of such data. This is done over the object (I[bbox], label) and language (Q, FP) pairs available in CLEVR. Then we train end-to-end on (Q, A) pair data, with cross-entropy and MSE losses for categorical and numerical answers, respectively. Additionally, we found it useful to add the initial direct supervision losses and data when training end-to-end as well, weighted by hyperparameters α and β. We formulate this as a regularization method that prevents the models from diverging to spurious predictions when only provided with QA labels; a sketch of the combined objective follows.
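A sketch of how these terms could be assembled is shown below (the precise objective is given next); the tensor names and the use of library cross-entropy and MSE losses are our assumptions.

```python
import torch.nn.functional as F

def depe_loss(cat_logits, cat_targets, num_preds, num_targets,
              obj_logits, obj_targets, fp_logits, fp_targets, qa_frac):
    """Regularized end-to-end objective: QA losses plus weighted direct supervision."""
    alpha = beta = qa_frac ** 0.5          # alpha = beta = sqrt(% of QA data), per Appendix C
    l_qa_xent = F.cross_entropy(cat_logits, cat_targets)   # categorical answers
    l_qa_mse = F.mse_loss(num_preds, num_targets)          # numeric answers
    l_obj = F.cross_entropy(obj_logits, obj_targets)       # direct object supervision
    l_fp = F.cross_entropy(fp_logits, fp_targets)          # direct program supervision
    return l_qa_xent + l_qa_mse + alpha * l_obj + beta * l_fp
```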
We further provide details and demonstrate the efficacy of this regularization in Appendix C. All these terms give our overall loss as:\nL_E2E = L_QA_XEnt + L_QA_MSE + α L_O_XEnt + β L_FP_XEnt\nFrom these extensions over Stack-NMN and NS-CL we construct a fully Differentiable end-to-end Program executor (DePe). A more detailed description of DePe's architecture and examples can be found in Appendix B." }, { "heading": "5 EXPERIMENTS", "text": "We built DePe using the desiderata for efficient VQA methods, and we now test its overall end-to-end performance jointly on vision and language. First we compare it to our base NS-CL and Stack-NMN methods in terms of our desired sample and computational complexity. Then we compare DePe's accuracy against other VQA methods." }, { "heading": "5.1 EFFICIENCY PERFORMANCE", "text": "We compare the efficiencies of DePe, Stack-NMN, and NS-CL in Table 3. To test the computational efficiency of NS-CL, we trained the program parser up to each iteration step, fixed the parser, and then trained the vision components end-to-end given the fixed parser.\nOverall, these results mirror those reported during our component-wise testing. In terms of computational efficiency, NS-CL (Mao et al., 2019) uses REINFORCE to optimize its FP parser, which requires many more iterations to converge. Stack-NMN (Hu et al., 2018) is also able to optimize end-to-end, but trains the f_I representation, requiring more training samples. DePe, using Stack-NMN soft programs and object-level execution from NS-CL, is able to optimize more efficiently than either method alone.\nIn terms of sample efficiency, NS-CL and DePe are comparable given enough data, since the models execute similarly once the NS-CL language models are fine-tuned. Since DePe is trained on some directly supervised data, we also attempted directly supervising Stack-NMN and NS-CL, but saw similar performance, as discussed in the model details in Appendix D." }, { "heading": "5.2 COMPARATIVE PERFORMANCE", "text": "We present the accuracy comparison across different VQA models in Table 3. All models are able to achieve 96%+ given the full data set, but we are more interested in the results with lower sample complexity. At 10% data we observe that methods with continuous inference and DSL-based neural modules, such as DePe, NS-CL, Stack-NMN and Prob-NMN (Vedantam et al., 2019), scale better. The other methods, which are more on the neural side, such as TbD (Mascharka et al., 2018), MAC (Hudson & Manning, 2018), LXMERT (Tan & Bansal, 2019), and LCGN (Hu et al., 2019), require more data to converge. Methods that discretely sample, such as NS-VQA (Yi et al., 2018) or NGS (Li et al., 2020), can also achieve high accuracies given more training data, but in practice require many iterations to converge and produce high-variance results compared to continuous methods." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In VQA there are different paradigms for modeling the visual, language, and reasoning representations. In addition to the final model performance, we are interested in understanding model efficiency in these complex perception and reasoning tasks. First we introduced the existing models with their corresponding vision, language, reasoning, and training components. Then we formally defined these components and the common method representations used within each component. In order to determine which methods were the most sample and computationally efficient, we iteratively tested the vision, reasoning, and language components.
These results showed that object-level representations, soft programs, soft logic, and end-to-end training were important for efficient VQA. Following this, we modified existing models to leverage all of these efficient methods in a joint manner. We showed that this model, DePe, was the most efficient while retaining state-of-the-art performance.\nBased on these methods, we look forward to testing DePe on larger real-world data sets. Since our model uses generic function programs to operate over language and vision, it can be extended to different data sets with minimal modifications. Furthermore, we plan to investigate how to use concept embeddings similar to NS-CL within our memory instead of one-hot representations. We will also be interested in testing how the object-level representations work on questions involving both object- and image-level reasoning." }, { "heading": "A MODEL COMPARISONS", "text": "We explore a few comparative works that contain a variety of training strategies used for VQA. Each method type handles the vision, language, reasoning and training in different fashions.\nLXMERT. Reasoning over images based on questions is carried out via a Transformer-like deep architecture. Such a neural reasoning module can be easily interfaced with neural perception modules and optimized jointly with them end-to-end. Since such a model incorporates few prior structures, it contains many parameters and requires many question-answer pairs in order to achieve good results.\nNMN methods. With Neural Module Networks (NMN), the language is turned into a discrete function program over neural modules which act on pixel-level attention to answer questions. The discrete function program allows reasoning steps to be executed exactly. However, such a design also makes the entire model not end-to-end differentiable. One needs to use REINFORCE to optimize the model for producing the function program.\nAn extension of NMN is Prob-NMN, where the predicted program is supervised over a small set of samples. These ground-truth programs provide a prior distribution over valid programs. This prior is used to determine future valid programs and enforce systematic program layouts.\nIn Stack Neural Module Network, reasoning instructions in the question are executed as a neural program over neural modules which act on pixel-level attention to answer questions. This neural program execution approach produces a soft function program, where discrete reasoning structures, such as a differentiable pointer and stack, are incorporated into the neural program executor. This enables Stack-NMN to maintain uncertainty over which reasoning steps are selected.\nGNN Methods. In the Neural State Machine (NSM) and Language-Conditioned Graph Neural Networks (LCGN), images are represented as objects and relations, and graph neural networks conditioned on the language feature are used as reasoning modules. Graph neural networks are structured networks, which can represent logical classifiers over graphs. Such graph neural networks and deep perception architectures are end-to-end differentiable. However, the architecture is quite generic and requires a large number of question-answer pairs to train.\nNS-CL. In the Neural Symbolic Concept Learner, questions are turned into a symbolic program with soft logic functions, which are then executed end-to-end on object attention produced by the vision model. The soft logic makes the reasoning step end-to-end differentiable.
However, the functional programs are still discrete in structure, making the entire model not end-to-end differentiable. One needs to use REINFORCE to optimize the model for producing the function program.\nNGS. Neural-Grammar-Symbolic performs abduction, where both the image and language are turned into discrete objects by sampling from the perception model, and a symbolic program is executed to generate the answers.\nIn abductive learning, the symbolic reasoning step is directly interpretable, and many existing logic reasoning engines can be used. However, the model is not end-to-end differentiable. Discrete optimization is needed to generate supervision for the vision and language models to be updated." }, { "heading": "B DIFFERENTIABLE END-TO-END PROGRAM EXECUTOR", "text": "The overall DePe architecture, as shown in Figure 4, consists of multiple sub-parts. The perception models encode the vision and the question. The soft logic functions F closely follow the domain-specific language (DSL) provided by CLEVR. We implement them in a probabilistic manner based on the signature, detailed in Appendix E. The function execution results are stored in a differentiable memory that maintains a stack of these operations. As our model executes step by step, this memory is used as a buffer for the reasoning chain's probabilistic steps leading to a final answer.\nFigure 4: Our model ingests the image and extracts object-centric information. The question (textual input) is used by the question parser, which first embeds the question through an LSTM encoder. This embedding is used to predict the question type, which will be used to retrieve the answer from memory at the final step. The encoded text is passed to a decoder, which at every step predicts attention over which functions to execute and each function's arguments at the current step. The objects, arguments, and any preceding memory inputs are used by each function to execute, with the results stored in a memory belonging to each function. These function memories are weighted by the functional attention and fused, which updates the original memory. The decoding and the VQR Cell execution run a fixed number of times, and then the answer is extracted from the final memory." }, { "heading": "B.1 PERCEPTION", "text": "Object detection. For our vision, we use object-level detection through a Mask R-CNN (He et al., 2017) to extract the corresponding bounding boxes of the objects. We then take the detected objects and pass them through a pre-trained ResNet-50 model (He et al., 2016) to extract image features. These features are fed into models that predict object attributes from the image features, and object relations from their bounding boxes.\nQuestion parser. The parser takes in the text input and encodes it through a Bi-LSTM to generate the final hidden state. This hidden state is used in the program parsing process, where a function model predicts the attention over which soft function to use at that step.
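A minimal sketch of one such decoding step is given below; the single-layer LSTM cell and the head sizes are illustrative assumptions of ours rather than the paper's exact parser.

```python
import torch
import torch.nn as nn

class ProgramDecoderStep(nn.Module):
    """One decoding step: soft attention over the function set F."""
    def __init__(self, hidden_dim, num_functions):
        super().__init__()
        self.cell = nn.LSTMCell(hidden_dim, hidden_dim)
        self.func_head = nn.Linear(hidden_dim, num_functions)

    def forward(self, inp, state):
        h, c = self.cell(inp, state)
        w = self.func_head(h).softmax(-1)  # function attention w for this step
        return w, (h, c)
```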
Additionally, these functions may contain arguments, such as filter_shape[cube], so for each class of functions we have a model to predict a distribution over the possible attributes. The hidden state is also used to predict the question type, which is used to select the final memory answer at the end of the execution.\nFor the vision and the language, all prediction models are small MLPs. We denote the sets of trainable vision and text model parameters as θ_vision and θ_text, respectively.\nB.2 MEMORY\nIn the CLEVR DSL, the functions have different types of inputs and outputs. Stack-NMN used neural modules to approximate these DSL functions, and thus could keep the image attention output at a consistent length. Since we are using the soft version of the DSL, we had to design a memory structure that could handle varying input and output modalities and sizes.\nMemory block. To create a heterogeneous memory structure, we pre-assigned different modalities to different parts of a matrix. This memory matrix block M ∈ R^{T×(A+C+1)} is used to store the intermediate results of the function computation.\nHere the first dimension T is the stack dimension on which the function outputs are stored in different steps of the execution. The second dimension, of size (A + C + 1), holds the rows that store the heterogeneous values. For the rows, A elements in the row indicate the object set attentions m_det, i.e., which objects the model is paying attention to at that step. The next C entries store the categorical value outputs produced by our vision models, such as m_color or m_size. For example, if there are 6 possible colors, then m_color ∈ R^6. The final dimension m_num is reserved for some accumulated score such as a count, or a boolean bit for true/false questions. For the CLEVR-specific case, the object attentions are m_det ∈ R^A, the categorical values are [m_color; m_shape; m_texture; m_size] ∈ R^C, and the numeric slot is m_num ∈ R.\nStack structure. Choosing which row t ∈ {1, ..., T} of the memory is accessed is handled by the stack. This enables the model to accurately retrieve the previous one or two stack values as needed by the functions, instead of having to predict which locations in memory to use.\nSome functions may require just looking at the previous output from the VQR Cell, such as chain-like questions over a scene graph through functions such as filter, relate, sum. Such functions will pop from and then push to the same memory row. There are situations where functions need multiple inputs as well, such as the comparison functions attribute_equal and count_equal. These functions will thus pop two values from the stack and will only push back one. These function signatures are summarized in Table 7 in Appendix E.\nStack manipulation. Each function has access to the stack, which abstracts the memory management appropriately through push and pop operations. The stack memory M is initialized at random, along with a one-hot memory pointer p ∈ R^T starting at p_0 = 1. Each function returns a row m, or a specific slice such as m_color, to be updated in memory.\nTo push a row m onto the stack, we follow the Stack-NMN convention, updating the pointer as p = 1d_conv(p, [0, 0, 1]) and the memory row as M_i = M_i · (1 − p_i) + m · p_i. Here the convolution is just moving the one-hot pointer left by one in a differentiable manner. Then only the memory row that contains the one-hot pointer is updated with the row m.
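A sketch of this differentiable push is shown below; it mirrors the pointer-shift-then-soft-write recipe just described, with tensor names of our choosing.

```python
import torch
import torch.nn.functional as F

def shift_pointer(p, kernel):
    # p: (T,) soft one-hot pointer; [0, 0, 1] shifts it left (push), [1, 0, 0] right (pop)
    k = torch.tensor(kernel, dtype=p.dtype).view(1, 1, 3)
    return F.conv1d(p.view(1, 1, -1), k, padding=1).view(-1)

def stack_push(M, p, m):
    """Differentiable push of row m onto the (T, D) memory M with pointer p."""
    p = shift_pointer(p, [0, 0, 1])
    M = M * (1 - p).unsqueeze(1) + m.unsqueeze(0) * p.unsqueeze(1)
    return M, p
```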
Similarly, to pop a value we retrieve the row m = Σ_{t=1}^{T} p_t · M_t and then move the pointer back by p = 1d_conv(p, [1, 0, 0]).\nB.3 VQR CELL\nThe visual question reasoning (VQR) Cell is responsible for the step-by-step execution over the vision and the text to get to the answer. Compared to previous methods that executed these DSL functional programs in a discrete sequence, we generate a probabilistic output over all the functions F at each step. With this approach, we can propagate the uncertainties of the object detections and the question parser for end-to-end learning. An example of this execution is visible in Figure 5.\nSoft function execution. Each function has different return values and signatures. Since functions can have different input and output requirements, they need to operate at different positions of the stack. Therefore each function is given its own copy of the memory stack to operate over.\nOnce the memory outputs for each function are computed, they have to be weighted by how likely each function is to be needed at this step. This is done using the function attention weights w computed by the question parser. These weights scale each function's memory and pointer, and a weighted sum is then computed, which is used to update the global memory M and pointer p, as shown in Algorithm 1. This global memory is then copied to all the function memories in the next Cell iteration. The cell executes for a fixed number of iterations, set to a number such that it can cover most ground-truth program lengths.\nAlgorithm 1: VQR Cell iteration\nfor j ∈ [1, |F|] do\n    M_j, p_j = F_j(M, p)\nend\nM_ave = Σ_{j=1}^{|F|} w_j · M_j\np = softmax(Σ_{j=1}^{|F|} w_j · p_j)\nM = M · (1 − p) + M_ave · p\nFinal answer. After the last iteration of VQR Cell execution, the final answer is chosen from the memory corresponding to the most probable question type. For example, if we predict the question is asking about the color, then we return argmax(m_color). If we predict the question type is about count, then we return m_num. Since our memory structure is heterogeneous, taking the argmax over the entire row m may lead to an incorrect label due to parts of the memory storing values at different ranges, such as probabilities ∈ [0, 1] or counts ∈ Z+.
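To illustrate the fused update of Algorithm 1 above, here is a minimal Python/NumPy sketch; it is our own illustration, the soft functions F_j are stubbed out, and the names are assumptions rather than the paper's code.

import numpy as np

def vqr_cell_step(M, p, funcs, w):
    # funcs: list of soft functions, each mapping (M, p) -> (M_j, p_j)
    # w: (|F|,) function attention weights from the question parser
    Ms, ps = zip(*[f(M.copy(), p.copy()) for f in funcs])   # each F_j gets its own copy
    M_ave = sum(wj * Mj for wj, Mj in zip(w, Ms))           # weighted memory
    p_new = np.exp(sum(wj * pj for wj, pj in zip(w, ps)))   # softmax of weighted pointers
    p_new /= p_new.sum()
    M_new = M * (1.0 - p_new[:, None]) + M_ave * p_new[:, None]
    return M_new, p_new

# toy usage with an identity stub standing in for a soft DSL operation
identity = lambda M, p: (M, p)
M = np.random.randn(4, 6)
p = np.array([1.0, 0.0, 0.0, 0.0])
M, p = vqr_cell_step(M, p, [identity, identity], np.array([0.7, 0.3]))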
B.4 OPTIMIZATION\nThe objective is to minimize the overall loss based on the predictions retrieved from memory. This involves a multi-task loss, as we are predicting categorical and numeric answers to questions while optimizing over θtext and θvision. This is done by minimizing a cross-entropy loss for categorical answers and a mean squared error loss for numeric ones: L = L_QA-XEnt + L_QA-MSE. This optimization, particularly for the text, has proven to be a difficult problem when training from scratch, as there are many candidate functions at each step in the function program. Furthermore, spurious programs can be proposed in early stages of training, corresponding to the shortest programs that answer the question but don't align with the ground-truth semantics.\nTo address this, models such as NS-CL employ structural constraints when training their text parser. They employ a fixed number of concept tokens used to parse the question and discrete templates for the parser to follow. In addition, they leverage curriculum learning and switch between optimizing the parser and the vision models to support both the discrete and continuous training methods.\nOther models, such as NS-VQA, train on a small portion of the supervised labeled data. In this manner there are no such restrictions on the concepts, templates, or curriculum learning, but it requires labeling of ground-truth programs. In this work we explore the pre-trained route. We pre-train our models with a small percentage (0.1-1%) of visual and textual data. When we use X% pre-training data, we sample X% of the training questions. Then we directly supervise our question parser and vision models on those QA programs and their corresponding vision scenes.\nAdditionally, we found it useful to add these direct supervision losses when training end-to-end, with corresponding weights α and β. We formulate this as a regularization method that prevents the models from diverging to spurious predictions when the training signal is only coarse QA labels.\nL = L_QA-XEnt + L_QA-MSE + α L_Obj-XEnt + β L_Text-XEnt\nThe full definitions and learning strategies for the losses are available in Appendix C." }, { "heading": "C OPTIMIZATION DETAILS", "text": "Given an image I and a question Q, our model makes a prediction as follows. We first make predictions of the objects and text components as:\nŷ = DePe(I, Q; θtext, θvision)\nWe look to minimize the following loss L:\nL_QA-XEnt = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} I[y_i ∈ cat] · y_ij · log ŷ_ij\nL_QA-MSE = (1/N) Σ_{i=1}^{N} I[y_i ∈ num] · ||y_i − ŷ_num||²\nL = L_QA-XEnt + L_QA-MSE\nWe note that for stable convergence we include the pre-training data in our loss function. These are the cross-entropy loss for the object attribute and relation predictions, L_Obj-XEnt, and the cross-entropy loss for the question parser's predictions of the functions at each step, L_Text-XEnt. This gives us our final loss function:\nL = L_QA-XEnt + L_QA-MSE + α L_Obj-XEnt + β L_Text-XEnt\nWe include two scalar weights for the pre-training losses to balance between the model finding local minima within the QA losses and focusing too much on optimizing the smaller number of pre-training samples. Setting these hyperparameters is more prevalent when the ratio of pre-training to QA samples is low, since the pre-training samples provide a weak signal but should be respected. We find that setting α = β = √(% of QA data) works well in practice to balance this ratio. The gains from setting these hyperparameters can be seen in Table 4.\nWe use Adam to optimize with lr = 1e−4, β1 = 0.9, β2 = 0.999, ε = 1e−8 and set our batch size to 1024.
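A minimal Python/NumPy sketch of the multi-task QA loss defined above; this is our own illustration (names and the use of one-hot categorical targets are assumptions), not the authors' code.

import numpy as np

def qa_loss(y_hat_cat, y_hat_num, y_cat, y_num, is_cat):
    # y_hat_cat: (N, C) predicted class probabilities; y_hat_num: (N,) numeric predictions
    # y_cat: (N, C) one-hot categorical targets; y_num: (N,) numeric targets
    # is_cat: (N,) indicator I[y_i in cat]; numeric questions use 1 - is_cat
    N = len(is_cat)
    xent = -(is_cat[:, None] * y_cat * np.log(y_hat_cat + 1e-12)).sum() / N
    mse = ((1 - is_cat) * (y_num - y_hat_num) ** 2).sum() / N
    return xent + mse   # L = L_QA-XEnt + L_QA-MSE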
" }, { "heading": "D EXPERIMENTS", "text": "Here we outline the details of the training procedure for the end-to-end training and efficiency experiments. We follow the default training procedure of the relevant work if it is not modified in this section.\nD.1 LXMERT\nFor LXMERT we were interested in exploring whether the cross-modal architecture could implicitly encode our attention-based reasoning process, as conveyed by the recent Transformer literature. There were a couple of approaches that we tried during our tests.\nThe initial tests, and the results we reported, were based on the original architecture proposed in the paper as well as the pre-trained weights the authors provided. This used 9, 5, and 5 layers for the language encoder N_L, the cross-modality encoder N_X, and the region-of-interest (RoI) object encoder N_R, respectively.\nWe also tested initializing the RoI encoder from scratch, since the CLEVR features are simpler than the ones used in pre-training on real images. Additionally, we tested LXMERT with fewer layers, N_L = 4, N_X = 2, N_R = 2, which we report in Table 5.\nD.2 STACK-NMN\nWhen experimenting with Stack-NMN we first trained the entire model on 100% of the CLEVR data using the ground-truth layout and saved it. For the visual representation experiments we take the trained model and freeze the LSTM weights while resetting the vision weights. For the language representation we freeze the CNN and NMN weights without the expert layout.\nWhen training Stack-NMN for direct supervision, we train the model with the ground-truth layout but only on 0.1% of the data. Then we load this model and run it on the corresponding 1, 5, and 10% QA data without the expert layout. With only 0.1%, we didn't see any sample or computational performance differences on end-to-end training.\nWe used the same settings as the original paper and trained all variations for 200k iterations.\nD.3 NS-CL\nWhen testing NS-CL end-to-end we require iterating between optimizing the language program parser and training the vision concept models, done in a REINFORCE and end-to-end fashion, respectively. When conducting the computational complexity experiments we had to train and test the program parser and vision models at a fixed number of training samples. To do this we divide up the total number of training samples uniformly into the 10 curriculum lessons used in the original NS-CL. We then tune the program parser with the data belonging to the cumulative curriculum up to that number of training samples. Given this program parser, we tune the vision models end-to-end over the same number of training samples. This way we have a better picture of NS-CL's behavior as a function of the training iterations.\nTo make NS-CL more comparable to DePe, we also use the 0.1% direct supervision data to improve the initial NS-CL performance. We similarly train the program parser on the (Q, FP) pairs, and thus start off at similar accuracies. Unlike DePe, NS-CL uses similarities between text and vision concepts, so it was unclear how to directly supervise the vision model given the object data. Similar to Stack-NMN, we test a burn-in strategy where we train the entire model end-to-end with the QA data from the corresponding 0.1% direct supervision and then train using the 10% QA data for 150 epochs. Here we observed no significant gains over just starting off with the supervised vision parser.\nIn the reasoning representation experiments we test NS-CL with discrete reasoning. Here we took the original soft logic F defined in the NS-CL paper and replaced it with symbolic logic as used in NS-VQA. Then we used REINFORCE to optimize when training on QA data in NS-RL. In NS-AB we use abduction to find the most likely change in the object concept prediction that makes the resulting reasoning provide the correct answer. We discuss this in the next section." }, { "heading": "D.4 NS-AB ABDUCTION", "text": "Abduction involves determining which detection m_det changes are required to correctly answer the question given a fixed discrete function program.\nAt a high level, these changes are done by making the most likely change that makes the output answer right. NGS uses a tree-based formulation, whereas our implementation retains our stack structure. Our stack implementation lets us test abduction proposals in a greedy, efficient manner; it restricts the search of the detection space, favoring corrections that lead to the ground-truth answer instead of a spurious one.
We describe the details of our implementation below." }, { "heading": "D.4.1 POLICY DEFINITION", "text": "The function program Q = {f_t}, t = 1, . . . , T, with f_t ∈ F, tells us to execute a sequence of T operations or functions f_t. If there are multiple reasoning branches that are aggregated, the execution trace follows a DAG dependency structure. We use a Markov decision process (MDP) formulation for the execution process:\n• A policy m_det^t ∼ πθ(m_det^t | m_*^t, f_t, I), with vision model parameters θvision, takes the operation execution result from the previous stage m_*^t, the operator indicator f_t, and the image I, and outputs an action m_det^t. The memory from the previous stage m_*^t could be any memory slice, such as m_det, m_attr, or m_num. The action m_det^t corresponds to the object selections made for the current operation from the input image.\n• The action causes the state m_*^t to be updated to m_*^{t+1}, and the transition is described by P(m_*^{t+1} | m_*^t, m_det^t, f_t). This is the operation execution for the current stage, carried out by the logic function f_t as described in Table 7.\nAfter this sequence of operations, we obtain a reward R(A, m_*^{T+1}) by comparing the last selected state with the answer A, which tells us whether the answer is correct or not. If the answer is correct, R = 1, and otherwise 0. Then the expected correctness of the answer, given the uncertainty in πθ and P, is\nE[R(A, m_*^{T+1})], where (1)\nm_*^{t+1} ∼ P(m_*^{t+1} | m_*^t, m_det^t, f_t), m_det^t ∼ πθ(m_det^t | m_*^t, f_t, I), m_*^0 = ∅, ∀t = 1, . . . , T (2)" }, { "heading": "D.4.2 OPTIMIZATION", "text": "Given m data points expressed as image, question, and answer triplets (I_1, Q_1, A_1), . . . , (I_m, Q_m, A_m), the learning problem can be expressed as:\nπθ* = argmax_{πθ} Σ_{i=1}^{m} E[R(A_i, m_{*,i}^{T+1})] (3)\nLabel abduction. If for a particular data point (I, Q, A) the answer according to the model is incorrect, one can search for minimal corrections c_i to the model detections as follows:\nc_1*, . . . , c_T* = argmin_{c_1,...,c_T} D(m_det^1, . . . , m_det^T ‖ c_1, . . . , c_T) (4)\ns.t. R(A, m_*^{T+1}) = 1, i.e., the corrections make the answer right, (5)\nm_*^{t+1} ∼ P(m_*^{t+1} | m_*^t, c_t, f_t), ∀t = 1, . . . , T (6)\nwhere D(·‖·) is some distance measure between the corrections c_i and the model predictions m_det^i. For instance, D(·‖·) can be the negative likelihood of the corrections under our model πθ, or the edit distance.\nIntuitively, we want to attempt the most likely corrections iteratively to find a solution by minimizing D(·‖·). Sampling all possible corrections at once can produce right answers through spurious label changes, which we want to mitigate.\nSampling methods. Due to the compositional nature of the reasoning tasks, we attempt to optimize c_i* in a greedy fashion at each step of the program from i = 1 to i = T, instead of jointly over i ∈ {1, . . . , T}. This better enforces the consistency constraint, as a single c_i* update leads to valid program executions. Valid executions mean that the predicted or abduced labels c_i* lead to a final answer, whether right or wrong. This is opposed to making conflicting changes in all c_i*, c_j* where i ≠ j, leading to a failed program execution due to the manually abduced changes. If this greedy approach fails to find an answer, we fall back on exhaustively sampling m_det^i ∼ πθ at all program levels for a fixed number of iterations.
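A minimal Python sketch of one simple single-change variant of the greedy correction search described above; this is our own illustration under the assumption that a propose/execute interface exists, and the names are not taken from the paper.

def greedy_abduction(program, propose, execute, answer):
    # program: functions f_1..f_T; propose(t) yields candidate corrections for step t,
    # most likely first; execute(corrections) runs the fixed discrete program with the
    # given {step: correction} overrides and returns (valid, predicted_answer).
    for t in range(len(program)):          # greedy pass from i = 1 to i = T
        for c_t in propose(t):             # try the most likely corrections first
            valid, pred = execute({t: c_t})
            if valid and pred == answer:   # a single consistent change fixes the answer
                return {t: c_t}
    return None                            # caller falls back to exhaustive sampling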
" }, { "heading": "E MODULAR SOFT-LOGIC FUNCTIONS", "text": "The descriptions of the variables and constants used to describe memory components are listed in Table 6. The functions used are in Table 7. For notational simplicity, the function arguments are assumed to be popped from that function's memory copy or predicted from the text or vision models." } ]
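As a concrete illustration of one such soft-logic function, the following is a minimal Python/NumPy sketch of a probabilistic filter-by-attribute operation in the spirit of Table 7; the name soft_filter and the exact argument handling are our own assumptions, not the paper's code.

import numpy as np

def soft_filter(m_det, attr_probs, arg_dist):
    # m_det: (A,) soft attention over detected objects (the m_det slice)
    # attr_probs: (A, V) per-object distribution over V attribute values
    #             predicted by the vision models (e.g., colors)
    # arg_dist: (V,) distribution over the function argument predicted by
    #           the question parser (e.g., {red: 0.9, ...})
    match = attr_probs @ arg_dist   # probability that each object matches the argument
    return m_det * match            # re-weight the object attention

# toy usage: 3 objects, 2 colors (red, blue)
m_det = np.array([1.0, 1.0, 1.0])
attr_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
arg_dist = np.array([0.95, 0.05])   # the question parser believes "red"
print(soft_filter(m_det, attr_probs, arg_dist))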
2,020
null
SP:fbb217eb911fc3b0d40b941281d08d0a399a459a
[ "The authors use an insight from chaos theory to derive an efficient method of estimating the largest and smallest eigenvalues of the loss Hessian wrt the weights. To do that, they use nearby weight space positions, optimize for a bit (either gradient climbing or descending), check how quickly the points are departing from each other, and use that to estimate the extreme eigenvalues using a connection to Lyapunov coefficients in chaos theory. Then they use on the fly estimated largest eigenvalue to automatically tune the learning rate of SGD." ]
Despite the complicated structure of modern deep neural network architectures, they are still optimized with algorithms based on Stochastic Gradient Descent (SGD). However, the reason behind the effectiveness of SGD is not well understood, making its study an active research area. In this paper, we formulate deep neural network optimization as a dynamical system and show that the rigorous theory developed to study chaotic systems can be useful to understand SGD and its variants. In particular, we first observe that the inverse of the instability timescale of SGD optimization, represented by the largest Lyapunov exponent, corresponds to the most negative eigenvalue of the Hessian of the loss. This observation enables the introduction of an efficient method to estimate the largest eigenvalue of the Hessian. Then, we empirically show that for a large range of learning rates, SGD traverses the loss landscape across regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate. This explains why effective learning rates can be found within a large range of values and shows that SGD implicitly uses the largest eigenvalue of the Hessian while traversing the loss landscape. This sheds some light on the effectiveness of SGD over more sophisticated second-order methods. We also propose a quasi-Newton method that dynamically estimates an optimal learning rate for the optimization of deep learning models. We demonstrate that our observations and methods are robust across different architectures and loss functions on the CIFAR-10 dataset.
[]
[ { "authors": [ "Jing An", "Jianfeng Lu", "Lexing Ying" ], "title": "Stochastic modified equations for the asynchronous stochastic gradient descent", "venue": "Information and Inference: A Journal of the IMA,", "year": 2018 }, { "authors": [ "Ludwig Arnold" ], "title": "Lyapunov exponents of nonlinear stochastic systems", "venue": "In Nonlinear Stochastic Dynamic Engineering Systems,", "year": 1988 }, { "authors": [ "Raef Bassily", "Vitaly Feldman", "Cristóbal Guzmán", "Kunal Talwar" ], "title": "Stability of stochastic gradient descent on nonsmooth convex losses", "venue": "arXiv preprint arXiv:2006.06914,", "year": 2020 }, { "authors": [ "Giancarlo Benettin", "Luigi Galgani", "Antonio Giorgilli", "Jean-Marie Strelcyn" ], "title": "Lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems; a method for computing all of them", "venue": "Theory. Meccanica,", "year": 1980 }, { "authors": [ "Albert S Berahas", "Majid Jahani", "Martin Takáč" ], "title": "Quasi-newton methods for deep learning: Forget the past, just sample", "venue": null, "year": 1901 }, { "authors": [ "Antoine Bordes", "Léon Bottou", "Patrick Gallinari" ], "title": "Sgd-qn: Careful quasi-newton stochastic gradient descent", "venue": null, "year": 2009 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Massimo Cencini", "Angelo Vulpiani" ], "title": "Finite size lyapunov exponent: review on applications", "venue": "Journal of Physics A: Mathematical and Theoretical,", "year": 2013 }, { "authors": [ "Pratik Chaudhari", "Stefano Soatto" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fengxiang He", "Tongliang Liu", "Dacheng Tao" ], "title": "Control batch size and learning rate to generalize well: Theoretical and empirical evidence", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Stanisław Jastrzebski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ], "title": "Three factors influencing minima in sgd", "venue": "arXiv preprint arXiv:1711.04623,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ 
"Robert Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "arXiv preprint arXiv:1802.06175,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Bernhard E Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne E Hubbard", "Lawrence D Jackel" ], "title": "Handwritten digit recognition with a back-propagation network", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Yann LeCun", "Patrice Y Simard", "Barak Pearlmutter" ], "title": "Automatic learning rate maximization by on-line estimation of the hessian’s eigenvectors", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr", "Martin Jaggi" ], "title": "Understanding the effects of data parallelism and sparsity on neural network training", "venue": null, "year": 2020 }, { "authors": [ "Qianxiao Li", "Cheng Tai", "E Weinan" ], "title": "Stochastic modified equations and adaptive stochastic gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Guan-Horng Liu", "Evangelos A Theodorou" ], "title": "Deep learning theory review: An optimal control and dynamical systems perspective", "venue": "arXiv preprint arXiv:1908.10920,", "year": 2019 }, { "authors": [ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "James Martens" ], "title": "Deep learning via hessian-free optimization", "venue": "In ICML,", "year": 2010 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jaideep Pathak", "Alexander Wikner", "Rebeckah Fussell", "Sarthak Chandra", "Brian R Hunt", "Michelle Girvan", "Edward Ott" ], "title": "Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model", "venue": "Chaos: An Interdisciplinary Journal of Nonlinear Science,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tom Schaul", "Yann LeCun" ], "title": "Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients", "venue": "arXiv preprint arXiv:1301.3764,", "year": 2013 }, { "authors": [ 
"Tom Schaul", "Sixin Zhang", "Yann LeCun" ], "title": "No more pesky learning rates", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Samuel S Schoenholz", "Justin Gilmer", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "Deep information propagation", "venue": "arXiv preprint arXiv:1611.01232,", "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of neural networks using large learning rates", "venue": "In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications,", "year": 2019 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "arXiv preprint arXiv:1711.00489,", "year": 2017 }, { "authors": [ "Jascha Sohl-Dickstein", "Ben Poole", "Surya Ganguli" ], "title": "Fast large-scale optimization by unifying stochastic gradient and quasi-newton methods", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Julien Clinton Sprott", "Julien C Sprott" ], "title": "Chaos and time-series analysis, volume 69", "venue": "Citeseer,", "year": 2003 }, { "authors": [ "Ruoyu Sun" ], "title": "Optimization for deep learning: theory and algorithms", "venue": "arXiv preprint arXiv:1912.08957,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jingzhao Zhang", "Tianxing He", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity", "venue": null, "year": 1905 } ]
[ { "heading": null, "text": "Despite the complicated structure of modern deep neural network architectures, they are still optimized with algorithms based on Stochastic Gradient Descent (SGD). However, the reason behind the effectiveness of SGD is not well understood, making its study an active research area. In this paper, we formulate deep neural network optimization as a dynamical system and show that the rigorous theory developed to study chaotic systems can be useful to understand SGD and its variants. In particular, we first observe that the inverse of the instability timescale of SGD optimization, represented by the largest Lyapunov exponent, corresponds to the most negative eigenvalue of the Hessian of the loss. This observation enables the introduction of an efficient method to estimate the largest eigenvalue of the Hessian. Then, we empirically show that for a large range of learning rates, SGD traverses the loss landscape across regions with largest eigenvalue of the Hessian similar to the inverse of the learning rate. This explains why effective learning rates can be found to be within a large range of values and shows that SGD implicitly uses the largest eigenvalue of the Hessian while traversing the loss landscape. This sheds some light on the effectiveness of SGD over more sophisticated second-order methods. We also propose a quasi-Newton method that dynamically estimates an optimal learning rate for the optimization of deep learning models. We demonstrate that our observations and methods are robust across different architectures and loss functions on CIFAR-10 dataset." }, { "heading": "1 INTRODUCTION", "text": "An interesting observation from current deep learning research is that classification and regression accuracy gains seem to be achieved from the intricacy of the underlying models rather than the optimization algorithm used for their training. Actually, the de facto choice for the optimization algorithm is still the classic Stochastic Gradient Descent (SGD) algorithm (Robbins & Monro, 1951) with minor modifications (Duchi et al., 2011; Sutskever et al., 2013; Kingma & Ba, 2014). Even though several sophisticated second-order and quasi-Newton methods (Martens, 2010; Martens & Grosse, 2015; Berahas et al., 2019) have been introduced, first-order methods remain popular and none of them seem to outperform SGD with a carefully tuned learning rate schedule (Hardt et al., 2016). This indicates that SGD (or in general first-order methods) probably has some intrinsic properties that make it effective to optimize over-parametrized deep neural networks. Despite various attempts to explain such phenomenon (Chaudhari & Soatto, 2018; Keskar et al., 2016; Kleinberg et al., 2018), little is understood about the effectiveness of SGD over sophisticated second-order optimization methods.\nIn this paper, we argue that chaos theory (Sprott & Sprott, 2003) is a useful approach to understand the neural network optimization based on SGD. The basic idea is to view neural network optimization as a dynamical system where the SGD update equation maps from the space of learnable parameters to itself and describes the evolution of the system over time. Once the evolution is defined, the rich theory developed to study chaotic dynamical systems can be leveraged to analyze and understand SGD and its variants. 
In essence, chaos theory enables us to study the evolution of the learnable parameters (i.e., the optimization trajectory) in order to understand the training behavior over large time scales (i.e., number of iterations).\nIn particular, we focus on understanding the influence of the learning rate on the SGD optimization trajectory. First, by observing that the Lyapunov exponent of SGD corresponds to the most negative eigenvalue of the Hessian of the loss, we introduce an efficient and accurate method to estimate the loss curvature. Then, we empirically show that for a range of learning rate schedules, SGD traverses the optimization landscape across regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate. This demonstrates that at a specific time step, performing an SGD update is similar to performing a quasi-Newton step that considers only the largest eigenvalue of the Hessian of the loss. This, for the first time, sheds some light on the effectiveness of SGD over more sophisticated second-order methods and corroborates the observation that SGD robustly converges for a variety of learning rate schedules (Sun, 2019).\nFurthermore, as pointed out in (LeCun et al., 1993), the inverse of the estimated curvature can be used as the learning rate when applying SGD to a new dataset or architecture. Hence, we can set up a “feedback” system where the quasi-Newton optimal learning rate is calculated dynamically based on the current largest eigenvalue of the Hessian (curvature), and the learning rate is consequently adjusted during the training, allowing a “parameter-free” stochastic gradient descent optimization. The experiments are conducted on the CIFAR-10 dataset to demonstrate that our observations are robust across a variety of models, including a simple linear regression model and more modern deep neural network architectures, trained with both cross-entropy and mean squared error loss functions." }, { "heading": "2 CHAOS THEORY FOR NEURAL NETWORK OPTIMIZATION", "text": "In recent years, several papers have used dynamical systems to study theoretical aspects of deep learning optimization (Liu & Theodorou, 2019). Essentially, this is achieved by defining the optimization of deep neural networks as the evolution of parameters over time. In particular, a dynamical system progresses according to a map function that describes how the system evolves in a specific time step. In the case of deep neural network optimization, this map function is defined from the space of parameters into itself. By describing the system evolution using such a map function, it is possible to leverage the mathematical machinery of dynamical systems. For instance, viewing SGD as a discrete approximation of continuous stochastic differential equations allowed Li et al. (2017) and An et al. (2018) to propose adaptive SGD algorithms. Furthermore, dynamical systems enabled LeCun et al. (1993) to relate the learning rate to the inverse of the local Hessian in a quasi-Newton optimization framework. Our paper also uses dynamical systems to study deep learning optimization, but differently from all the methods above, we rely on chaos theory.\nChaos theory (Sprott & Sprott, 2003) studies the evolution of dynamical systems over large time scales and can categorize systems into chaotic or non-chaotic. Under some simplifying but still general assumptions, chaotic systems are bounded and have a strong dependence on the initial conditions.
This means that chaotic systems evolving from different starting points that are within a relatively small region around a particular reference point will diverge exponentially during the evolution process, where the amount of time taken for this divergence to happen is defined as the chaotic timescale. This chaotic timescale imposes a limit on our ability to predict the future state distribution of a dynamical system. In fact, the distribution of future states that have evolved for more than a few multiples of the chaotic timescale cannot be distinguished from a random distribution, even when the system is fully deterministic. We apply concepts from chaos theory to improve our current understanding of the optimization of deep neural networks.\nMore specifically, we describe how to use standard chaos theory techniques to efficiently calculate the leading (positive and negative) eigenvalues of the Hessian of the loss function. With these eigenvalues we measure, in turn, the loss function curvature, which can be used to study the behavior of first-order optimization methods, such as SGD (Robbins & Monro, 1951). In particular, with this technique we formulate an explanation for the empirical robustness of SGD to the choice of learning rate and its scheduling function, and we investigate a method (based on a quasi-Newton second-order method) for dynamically finding the optimal learning rate during the optimization of deep neural networks. Such automated and dynamic estimation of the optimal learning rate can lift a significant burden from the manual definition of learning rate schedules in deep learning optimization." }, { "heading": "2.1 LYAPUNOV EXPONENTS", "text": "In chaos theory, the Lyapunov exponents define the divergence rate of infinitesimally close trajectories, and the inverse of the largest Lyapunov exponent is the timescale that corresponds to the onset of chaos in the system. Two arbitrarily close initial conditions generate two solutions that diverge with time. Under the assumption that the map function of the system is differentiable, if one observes this divergence for a short time window, it grows exponentially. If the initial divergence q(0) is made smaller, the time window can be made larger (t → ∞). The largest Lyapunov exponent λ is a measure of the growth of the divergence q(t) in the direction q̂(0) = q(0)/‖q(0)‖ with the largest growth (the max over q̂(0)) along the trajectory, as in\nλ = max_{q̂(0)} lim_{t→∞} lim_{‖q(0)‖→0} (1/t) log(‖q(t)‖ / ‖q(0)‖). (1)\nIn this paper, we rely on the local finite-size Lyapunov exponent. In this context, local in time means that there is no limit to infinity for t in equation 1 – instead, it is an average over a constant time window t. Finite size means keeping the difference in parameter space fixed as a small constant with ‖q‖ = ∆q (i.e., no limit ‖q‖ → 0 in equation 1). Using a finite size allows the study of the dynamical system at a specific spatial scale (for a comprehensive review, see (Cencini & Vulpiani, 2013)), corresponding to the eigenvalues of the Hessian of a spatially smoothed version of the loss (or, equivalently, to the numerical second derivative with a finite delta). When this analysis is used to study the Hessian, it is equivalent to calculating the local numerical second derivative. We found empirically that the results do not depend on the ∆q parameter within a large range of values.\nWe will show in Sec. 3
that this timescale (i.e., the Lyapunov exponent) corresponds to the most negative eigenvalue of the Hessian of the loss when optimizing deep neural networks with SGD. Intuitively, a small difference in the initial condition will amplify exponentially in the directions with negative second derivatives and will dampen in directions with positive second derivatives. Empirically, we find that the chaotic timescales in effective training of deep neural networks are short (on the order of tens of iterations) when compared with the time of one epoch (i.e., the total number of iterations in one epoch). We also find that there are multiple unstable directions throughout the training, i.e., the system is hyper-chaotic." }, { "heading": "2.2 LYAPUNOV EXPONENTS FOR GD AND SGD", "text": "In this section we derive the formula to compute the largest Lyapunov exponents for Gradient Descent (GD) following (Sprott & Sprott, 2003). We first show that the largest Lyapunov exponent corresponds to the most negative eigenvalue of the Hessian of the loss and provide an algorithm to efficiently compute it. This will later be extended to calculate the largest (or, in general, the top-k) eigenvalue of the Hessian in section 3. For simplicity of the exposition, in this section we initially consider the non-stochastic setting. Also, for the results of this section to hold, we assume that the Hessian of the loss does not change quickly through time, and that it does not change quickly along the optimization trajectory compared to the chaotic time scale. These assumptions can easily be checked a posteriori, and we will show how to overcome this (potential) limitation in section 3.\nLet θ be the vector of learnable parameters of the deep neural network, L(·) be the loss function, and α > 0 be the learning rate. The gradient descent step at iteration t is written as:\nθ_{t+1} = θ_t − α dL(θ_t)/dθ, (2)\nwhere the update step is ∆θ = −α dL/dθ. In the limit of small steps, the formulation is equivalent to the differential equation\ndθ/dt = −α ∂L(θ)/∂θ. (3)\nIntegrating equation 3 gives the evolution of the system, which is equivalent to training the neural network.\nTo compute the chaotic time scale (i.e., the inverse of the Lyapunov exponent), one needs to analyze the difference in the evolution of GD at two arbitrarily close initial points. To this end, we consider a small perturbation q0 added to the initial weights θ0. For this perturbed starting point θ0 + q0, the equation becomes:\nd(θ + q)/dt = −α ∂L(θ + q)/∂θ. (4)\nIn the limit of small q, considering the first-order Taylor approximation of the above equation and subtracting equation 3, we obtain:\ndq/dt = ∂(−α ∂L(θ)/∂θ)/∂θ · q. (5)\nThen, integrating equation 5, we obtain the evolution of the perturbation under GD:\nq(t) = exp(−α ∂²L(θ)/∂θ² · t) q0. (6)\nThis remains true as long as q(t) remains small, where the definition of small depends on the properties of L. We consider the decomposition of q0 as a sum of its projections on the eigenspace of the Hessian of the loss (with the Hessian being represented at the exponent of the formula in equation 6). In this space, the projection of q0 along the direction corresponding to the largest eigenvalue is the one growing the fastest.
Starting with a random q0, the direction of q that becomes dominant after sufficient time is aligned with the eigenvector of the largest eigenvalue of the matrix at the exponent, and the growth rate of q is equal to the corresponding eigenvalue.\nMeasuring this growth rate provides a simple method, linear in the number of parameters, to measure the leading eigenvalue. This procedure represents the calculation of the largest Lyapunov exponent, i.e., the largest eigenvalue (λ0) of the matrix −α ∂²L/∂θ². Due to the minus sign, this corresponds to the smallest eigenvalue (h_N) of the Hessian of the loss (H = ∂²L/∂θ²). More precisely, the smallest eigenvalue of the Hessian and the largest Lyapunov exponent are related as h_N = −λ0/α. For non-convex losses, h_N is the most negative eigenvalue, and the matching eigenvector corresponds to the most unstable direction of the optimization of the loss.\nOnce q(t) is aligned with the largest eigenvector, equation 6 becomes\nq(t + ∆t) = exp(λ0 ∆t) q(t). (7)\nThe algorithm to calculate λ0 requires normalizing the length of q at each step to keep the increment “small”. This reference distance is equivalent to the choice of the step size for the calculation of a finite-difference-based second derivative. In dynamical systems terminology, this is called calculating the finite-size Lyapunov exponent. Now, the largest Lyapunov exponent is obtained by iterating the following two steps:\nλ0 ← log(‖q(t + ∆t)‖ / ‖q(t)‖) / ∆t, (8)\nq(t + ∆t) ← q(t + ∆t) · ‖q(t)‖ / ‖q(t + ∆t)‖,\nwhere ‖·‖ denotes the L2 norm and ∆t denotes the time step. One can see that the computation of the largest Lyapunov exponent is analogous to the power method for computing the largest eigenvalue of a given matrix. This idea can easily be extended to compute the top-k Lyapunov exponents following the idea of Benettin et al. (1980). Please refer to Appendix C.\nSGD can be described with the same approach, with the loss function replaced by L(θ, ω), where ω are random variables that describe which images are picked in each minibatch, the data augmentation used, and in principle any other random process engineered into the network. We note that chaos theory is fully applicable, with equivalent results, in such a general stochastic setting (Arnold, 1988). In the subsequent analysis we will leverage this and work with SGD. Finally, we demonstrate how to extend the method explained in the current section to compute the Lyapunov exponent for SGD with momentum in Appendix B." }, { "heading": "3 CALCULATING THE LARGEST EIGENVALUE OF THE LOCAL HESSIAN", "text": "In section 2.2 we discussed how one can estimate the local largest Lyapunov exponent of the SGD map, which, under some common conditions, corresponds to the most negative eigenvalue of the Hessian of the loss at the same point.
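As an illustration of this normalize-and-step iteration, here is a minimal Python/NumPy sketch (our own toy example, not the authors' released code) that applies equation 8 to the gradient-ascent map used in the rest of this section (see Algorithm 1 below), recovering the largest Hessian eigenvalue of a toy quadratic loss.

import numpy as np

H = np.diag([3.0, 1.0, -0.5])                   # toy Hessian; largest eigenvalue h0 = 3
grad = lambda th: H @ th                        # gradient of L(theta) = 0.5 theta^T H theta

beta, dq = 0.05, 1e-3                           # ascent rate and finite perturbation size
theta0 = np.random.randn(3)                     # point at which we estimate the curvature
q = np.random.randn(3)
q *= dq / np.linalg.norm(q)

for _ in range(100):
    a = theta0 + beta * grad(theta0)            # gradient-ascent map on the reference
    b = (theta0 + q) + beta * grad(theta0 + q)  # same map on the perturbed point
    q_new = b - a
    lam = np.log(np.linalg.norm(q_new) / dq)    # finite-size Lyapunov exponent, eq. (8)
    q = q_new * dq / np.linalg.norm(q_new)      # re-normalize and re-center, as in Alg. 1

print((np.exp(lam) - 1) / beta)                 # recovers h0 = 3 exactly on this quadratic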
While most negative eigenvalues can be used to analyze the existence of saddle points and training instability, in this paper we are interested in computing the largest eigenvalue of the Hessian. Note that the largest eigenvalue corresponds to a theoretical upper bound on the usable learning rate, under quasi-Newton approximation (LeCun et al., 1993). Therefore, by efficiently computing it, we intend to understand the relationship between the SGD optimization trajectory and the learning rate.\nTo compute the largest (or more generally, the top-k) eigenvalues, we need to redefine our map function. The idea is to eliminate the negative sign and use the gradient ascent equation. For this map, the largest Lyapunov exponent corresponds to the largest eigenvalue of the Hessian and the matching eigenvector corresponds to the direction of most instability. We would like to clarify that gradient ascent is used as a map function to estimate the largest eigenvalue of the local Hessian at a particular point in the parameter space. This approach can be employed at any given point, especially at the points in the optimization trajectory, where any algorithm can be chosen for the optimization.\nWith gradient ascent map, the PDE corresponding to equation 5 can be written as (note the missing minus sign):\n∂q ∂t = β\n∂2L(θ)\n∂θ2 q , (9)\nwhere β > 0 is the learning rate for gradient ascent (we use a different notation to distinguish it from the SGD learning rate α). Similarly, we can integrate equation 9, obtain an exponential form, where the dominating exponential corresponds to the largest eigenvalue. However, this time it corresponds to the largest eigenvalue of the Hessian of the loss denoted by h0.\nSince we intend to estimate the Lyapunov exponent at every point in the optimization trajectory, we now discuss how to accelerate the convergence of the Lyapunov exponent computation. To this end, we set up our chaotic map as a control problem, where we optimize the learning rate β used for our gradient ascent step such that the convergence of the eigenvector is the fastest but still stable. This is obtained by setting β such that the corresponding Lyapunov exponent is controlled to stay close to one. It does not need to be necessarily one, but it needs to be a value of the order of unity. In practice, the learning rate for the next step is re-scaled by the Lyapunov exponent computed in the current step and this numerical procedure ensures that the Lyapunov exponent quickly converges to one.\nOur final algorithm to compute the largest eigenvalue of the Hessian at a given point is summarized in Algorithm 1. In practice, this algorithm converges within a couple of iterations and the convergence criterion checks the fluctuation in λ around 1. As will be discussed in section 5, in comparison\nto (LeCun et al., 1993), our algorithm automatically tunes the learning rate β to compute the largest eigenvalue quickly and effectively eliminates one hyper-parameter used in (LeCun et al., 1993). This enables us to run Algorithm 1 at every step to understand the optimization trajectory of SGD and similarly to (LeCun et al., 1993), the largest eigenvalue can be used to automatically set the learning rate of SGD in the quasi-Newton framework." }, { "heading": "3.1 QUASI-NEWTON METHOD", "text": "Quasi-Newton method is an effective approach that utilizes approximate second-order information to improve gradient descent methods. 
The basic idea is to keep track of an estimate of the Hessian matrix and modify the gradient descent step (Nocedal & Wright, 2006). Formally, at iteration t, the quasi-Newton method can be written as:\nθt+1 = θt −B−1t dL(θt)\ndθ , (10)\nwhereBt denotes the estimate of the Hessian matrix at iteration t.\nIn this paper, we estimate the largest eigenvalue, so the matrix Bt takes the form of ht0I where ht0 is the largest eigenvalue of the Hessian at θt and I is the identity matrix. This is the simplest form of quasi-Newton method which effectively uses 1/ht0 as the learning rate at iteration t. This replaces hand-engineered learning rate schedules and could be beneficial when applying SGD to new problems. If top-k largest eigenvalues are estimated as discussed in the appendix, a more sophisticated quasi-Newton approach could be employed." }, { "heading": "4 EXPERIMENTS", "text": "All experiments are based on the CIFAR-10 dataset that has 10 classes with 5000 32×32 pixel training images per class, and 1000 32×32 pixel testing images per class. For all experiments, we use a batch size of 512 and a weight decay of 5× 10−4 and standard data augmentation techniques. We use a difference of 5×10−2 (∆q in Algorithm 1) in parameter space for calculating the Lyapunov exponent (results do not depend on changing this in a wide range of values). To show that the behavior of the method does not depend on particular loss functions, we run the experiments using the softmax crossentropy and mean square error loss functions. The following models are trained with the first two CIFAR-10 classes (planes and cars): 1) a linear model with mean square error – that is, a least square regression; 2) a Multi-Layer Perceptron (MLP) with three hidden layers; 3) a LeNet1 (LeCun et al., 1998) with relu activations; and 4) a small ResNet with two residual blocks. We also test the larger ResNet18 (He et al., 2016) using all ten CIFAR-10 classes. In all experiments, we use SGD without momentum. One iteration is typically sufficient in the control algorithm to compute h0 at each optimization step, however, noise in λ can be alleviated by running more than one iteration for each step.\nFigure 1 shows the losses for the MLP training with fixed and cyclic (Smith & Topin, 2019) learning rates. Notice how the inverse of the curvature (orange curve), measured with the controlled Lyapunov exponent, naturally adapts to both the fixed and cyclic learning rates. Since quasi-Newton methods would require the learning rate to be equal or similar to the inverse of the second derivative, we speculate that this discovered behavior is useful to explain the successful training of deep neural networks using first-order methods, such as SGD.\nFurthermore, we investigate a hyper-parameter free training based on the measured curvature in Figure 2. We first run our algorithm to measure the curvature on the initialization point, without training. This is to ensure convergence of the eigenvector and avoid a “cold” start. Then, when the starting curvature is known, we set the learning rate to this value and start the training. We keep a simple exponential running average of the curvature to remove noise (this is equivalent to choose a ∆t in the calculation of the Lyapunov exponent), and set the learning rate (red curve in Fig. 2) to this value dynamically. Empirically, we find that this “optimal” learning rate gradually decreases, guaranteeing a decrease in the loss. 
We show extensive experiments with analogous results on other architectures in Appendix D." }, { "heading": "5 RELATED WORK", "text": "It is interesting that, on the one hand, the design of neural network architectures has become progressively more complicated over the years (LeCun et al., 1990; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016), while on the other hand, the most popular optimization strategies are still relatively simple and have not substantially changed (Bottou et al., 2018). In fact, the most effective approaches are still based on SGD with minor variations (Ruder, 2016), usually developed to increase their robustness or efficiency. Still, simple SGD often produces state-of-the-art results that are competitive with, and often even better (Hardt et al., 2016) than, more advanced optimization techniques based on second-order methods or pseudo/quasi-Newton approaches (Bordes et al., 2009; Sohl-Dickstein et al., 2014) or on adaptive methods (Wilson et al., 2017). Another interesting point about first-order methods is their robustness to learning rate choice and schedules (Sun, 2019), evidenced by several methods that study efficient automatic learning rates for SGD (Schaul & LeCun, 2013; Schaul et al., 2013; Li et al., 2017). Hence, there should be some explanation for why simple first-order optimization methods (Ruder, 2016) work so well on large-scale problems with systems containing a large number of parameters, such as deep neural networks.\nA similar question has been asked by LeCun et al. (1993), who proposed a method to calculate the largest eigenvalue of the Hessian of the loss using the power method. To make it work alongside SGD training, LeCun et al. (1993) design a running average of the estimate of the eigenvector. Their idea is similar to ours, but our approach has the advantage of being free of the scheduling of the parameter that characterizes the running average. This advantage stems from the Lyapunov exponent analysis presented in our paper, which is one of the first steps in the exploration of the intersection between chaos theory and deep learning. Furthermore, since the largest eigenvalue of the Hessian can be used as an estimate of the smoothness coefficient of the loss function, our approach could improve the smoothness coefficient estimation (Santurkar et al., 2018) and help methods that rely on it (Zhang et al., 2019; Lee et al., 2020).\nWe believe that there are many other topics in this intersection that are worth exploring, such as the use of deep neural networks to predict the behavior of chaotic dynamical systems (Pathak et al., 2018) or the exploration of neural networks as dynamical systems (Liu & Theodorou, 2019; Schoenholz et al., 2016). We align with works that view SGD as an approximation of stochastic differential equations (Chaudhari & Soatto, 2018) or that improve the understanding of empirical and theoretical properties of SGD (Ma et al., 2018; Keskar et al., 2016; Bassily et al., 2020; Kleinberg et al., 2018; Chaudhari & Soatto, 2018), particularly regarding the influence of the batch size and learning rate on generalization (He et al., 2019; Jastrzebski et al., 2017; Smith et al., 2017)." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "In this work, we use a chaos theory approach to design a new method for the efficient estimation of the largest eigenvalue h0 of the Hessian of the loss function.
Our proposed method is efficient because it is linear in the number of parameters and can be run in parallel to the optimization. This efficiency allows us to study the dynamical evolution of h0 during training and discover that 1/h0 converges to the chosen learning rate α. Moreover, we noticed that we could assign α to a large range of values and still have effective training. Hence, setting the learning rate α with a quasi-Newton optimization is largely superfluous for deep neural networks because of the convergence of 1/h0 to α. This means that SGD traverses the loss function along a path that has the correct curvature according to second-order optimization. Finally, we have some indications that the convergence of 1/h0 towards α is necessary for successful training. Therefore, our approach could be used to narrow down the initial range of usable learning rates or to design learning rate schedules on new problems.\nAlthough we did not discuss generalization in this paper, we observe that for a fixed batch size, 1/h0 follows the learning rate α. This means that if a larger learning rate is used towards convergence, a wider optimum will be attained, and wider minima are usually attributed to better generalization (Keskar et al., 2016). This corroborates previous results showing that the ratio between batch size and learning rate has a negative correlation with generalization (He et al., 2019; Jastrzebski et al., 2017; Smith et al., 2017)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "B SGD+MOMENTUM", "text": "With SGD+momentum the analysis is conceptually similar, but mathematically more complicated.\nSGD+momentum:\nθ_{t+1} = α p_{t+1} + θ_t,\np_{t+1} = m p_t − ∂L/∂θ.\nWe can rewrite:\nθ_{t+1} − θ_t = α p_{t+1},\np_{t+1} − p_t = −(1 − m) p_t − ∂L/∂θ.\nIn the limit of small steps:\n∂θ/∂t = α p,\n∂p/∂t = −(1 − m) p − ∂L/∂θ.\nIncidentally, in this formulation it becomes clear that (1 − m) is equivalent to a drag term of a particle of mass α under the motion of a potential L.\nWe define a vector of length 2N describing the phase space, where N is the dimension of the parameter space θ (or p):\nθ̃ = (θ, p).\nThe phase space describes both the status of the network and the optimizer's accumulated gradients.\nWe can rewrite the system of equations above in a compact form:\n∂θ̃/∂t = [ 0, αI; −∂L/∂θ, −(1 − m)I ] θ̃,\nwhere I is the identity matrix of size N.\nJust like in the case of SGD, we obtain the equation for the evolution of a perturbation in the phase space (we call it q̃(t)), and integrate it over t, which gives:\nq̃(t) = exp( ∂/∂θ̃ [ 0, αI; −∂L/∂θ, −(1 − m)I ] · t ) q̃0.\nWe need to find the eigenvalues of the matrix at the exponent of the formula, hence we need to solve an equation of the form |A − λI| = 0, where |·| represents the determinant:\n| [ 0, αI; −∂²L/∂θ², −(1 − m)I ] − λI | = 0.\nRewriting:\n| [ −λI, αI; −∂²L/∂θ², (−(1 − m) − λ)I ] | = 0.\nSchur's determinant identity ( |[A, B; C, D]| = |D| · |A − B D^{−1} C| ) gives:\n| −λ(−(1 − m) − λ)I − α(−∂²L/∂θ²) | = 0,\nwhich is a formula of the form |H − hI| = 0. This means that the eigenvalues of the Hessian of the loss (h) are related to the Lyapunov exponents by the formula:\nh = −(λ² + λ(1 − m))/α.\nSimilarly to the SGD case, the largest λ gives the smallest h. The final formula in this case becomes:\nh_N = −(λ0² + λ0(1 − m))/α." }, { "heading": "C LYAPUNOV EXPONENT SPECTRUM", "text": "Our curvature estimation idea can easily be extended to estimate the top-k (negative or positive) eigenvalues of the Hessian.
We calculate the first Lyapunov exponents using the orthogonalization procedure described by Benettin et al. (1980). To calculate the second Lyapunov exponent, it is enough to keep track of a second “small” increment vector q^(1)(t), evolved in exactly the same way as q(t), with an additional orthogonalization step (Benettin et al., 1980):\nq^(1)(t + ∆t) ← q^(1)(t + ∆t) − ( q^(1)(t + ∆t) · q(t + ∆t) / ‖q(t + ∆t)‖² ) q(t + ∆t),\nto be done before the corresponding normalization step. The procedure is easy to generalize to further eigenvalues.\nThe results typically show a shallow dependence of the value on the eigenvalue number (Figure 3). If this holds true in general, calculating additional eigenvalues will be of limited usefulness for improving optimization." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "In this section we present additional experiments of the same type shown in section 4, done with different architectures and loss functions. Figures 4, 5, and 6 show experiments with a constant learning rate. Figure 7 shows experiments with a cyclic learning rate. Figures 8 and 9 show the full set of experiments on two-class CIFAR-10 with our quasi-Newton method for SGD. It is worth mentioning that it also trains the linear regression model (Figure 9, upper-left). Figure 10 shows experiments with the larger ResNet18 architecture and the 10 classes of CIFAR-10. The same behavior as in the main paper is consistently observed across different architectures/losses. It is possible to mitigate the noise in λ, and consequently in 1/h0, by increasing the number of iterations of Algorithm 1. As explained in the main text, 1/h0 cannot follow α when it is too small (e.g., Fig. 7), that is, the curvature cannot go to infinity." } ]
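To complement Appendix C above, here is a minimal Python/NumPy sketch (our own illustration) of the two-exponent version of the Benettin et al. (1980) orthogonalization, applied to a toy linear map; the Gram-Schmidt projection form is an assumption consistent with the standard procedure.

import numpy as np

J = np.diag([1.15, 1.05, 0.9])                  # toy Jacobian of the map per time step
dq = 1e-3
q0 = np.random.randn(3); q0 *= dq / np.linalg.norm(q0)
q1 = np.random.randn(3); q1 *= dq / np.linalg.norm(q1)

for _ in range(300):
    q0, q1 = J @ q0, J @ q1                     # evolve both increment vectors
    lam0 = np.log(np.linalg.norm(q0) / dq)
    q1 = q1 - (q1 @ q0) / (q0 @ q0) * q0        # orthogonalize q1 against q0 first
    lam1 = np.log(np.linalg.norm(q1) / dq)
    q0 *= dq / np.linalg.norm(q0)               # then re-normalize both, as in eq. (8)
    q1 *= dq / np.linalg.norm(q1)

print(np.exp(lam0), np.exp(lam1))               # ≈ 1.15 and 1.05, the top growth factors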
2020
A CHAOS THEORY APPROACH TO UNDERSTAND NEURAL NETWORK OPTIMIZATION
SP:d8f80f84b089766124693485390dbfce0c94527c
[ "This work proposes a new approach, based on projective clustering, for compressing the embedding layers of DNNs for natural language modeling tasks. The authors show that the trade-off between compression and model accuracy can be improved by considering a set of k subspaces rather than just a single subspace. Methods for compressing DNNs is an active area of research and this paper presents a promising approach to do so as well as interesting results. " ]
A common approach for compressing Natural Language Processing (NLP) networks is to encode the embedding layer as a matrix A ∈ Rn×d, compute its rank-j approximation Aj via SVD (Singular Value Decomposition), and then factor Aj into a pair of matrices that correspond to smaller fully-connected layers to replace the original embedding layer. Geometrically, the rows of A represent points in Rd, and the rows of Aj represent their projections onto the j-dimensional subspace that minimizes the sum of squared distances (“errors”) to the points. In practice, these rows of A may be spread around k > 1 subspaces, so factoring A based on a single subspace may lead to large errors that turn into large drops in accuracy. Inspired by projective clustering from computational geometry, we suggest replacing this subspace by a set of k subspaces, each of dimension j, that minimizes the sum of squared distances over every point (row in A) to its closest subspace. Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of k small layers that operate in parallel and are then recombined with a single fully-connected layer. Extensive experimental results on the GLUE benchmark yield networks that are both more accurate and smaller compared to the standard matrix factorization (SVD). For example, we further compress DistilBERT by reducing the size of the embedding layer by 40% while incurring only a 0.5% average drop in accuracy over all nine GLUE tasks, compared to a 2.8% drop using the existing SVD approach. On RoBERTa we achieve 43% compression of the embedding layer with less than a 0.8% average drop in accuracy as compared to a 3% drop previously.
[ { "affiliations": [], "name": "Alaa Maalouf" }, { "affiliations": [], "name": "Harry Lang" }, { "affiliations": [], "name": "Daniela Rus" }, { "affiliations": [], "name": "Dan Feldman" } ]
[ { "authors": [ "Anish Acharya", "Rahul Goel", "Angeliki Metallinou", "Inderjit Dhillon" ], "title": "Online embedding compression for text classification using low rank matrix factorization", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Kenneth L Clarkson", "David P Woodruff" ], "title": "Input sparsity and hardness for robust subspace approximation", "venue": "In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science,", "year": 2015 }, { "authors": [ "Andrew M Dai", "Quoc V Le" ], "title": "Semi-supervised sequence learning", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Arthur P Dempster", "Nan M Laird", "Donald B Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1977 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Michael Edwards", "Kasturi Varadarajan" ], "title": "No coreset, no cry: Ii", "venue": "In International Conference on Foundations of Software Technology and Theoretical Computer Science,", "year": 2005 }, { "authors": [ "Angela Fan", "Edouard Grave", "Armand Joulin" ], "title": "Reducing transformer depth on demand with structured dropout", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jianzhou Feng", "Li Song", "Xiaokang Yang", "Wenjun Zhang" ], "title": "Learning dictionary via subspace segmentation for sparse representation", "venue": "In 2011 18th IEEE International Conference on Image Processing,", "year": 2011 }, { "authors": [ "Mitchell A Gordon", "Kevin Duh", "Nicholas Andrews" ], "title": "Compressing bert: Studying the effects of weight pruning on transfer learning", "venue": "arXiv preprint arXiv:2002.08307,", "year": 2020 }, { "authors": [ "Fu-Ming Guo", "Sijia Liu", "Finlay S Mungall", "Xue Lin", "Yanzhi Wang" ], "title": "Reweighted proximal pruning for large-scale language representation", "venue": null, "year": 1909 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "Tinybert: Distilling bert for natural language understanding", "venue": null, "year": 1909 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Quoc Le", "Tomas Mikolov" ], "title": "Distributed representations of sentences and documents", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Linqing Liu", "Huan Wang", "Jimmy Lin", "Richard Socher", "Caiming Xiong" ], "title": "Attentive student meets multi-task teacher: Improved knowledge distillation for pretrained models", "venue": "arXiv 
preprint arXiv:1911.03588,", "year": 2019 }, { "authors": [ "Risheng Liu", "Zhouchen Lin", "Fernando De la Torre", "Zhixun Su" ], "title": "Fixed-rank representation for unsupervised visual learning", "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Stuart Lloyd" ], "title": "Least squares quantization in pcm", "venue": "IEEE transactions on information theory,", "year": 1982 }, { "authors": [ "Julien Mairal", "Jean Ponce", "Guillermo Sapiro", "Andrew Zisserman", "Francis R Bach" ], "title": "Supervised dictionary learning", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "J Scott McCarley" ], "title": "Pruning a bert-based question answering model", "venue": "arXiv preprint arXiv:1910.06360,", "year": 2019 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Subhabrata Mukherjee", "Ahmed Hassan Awadallah" ], "title": "Distilling transformers into simple neural networks with unlabeled transfer data", "venue": "arXiv preprint arXiv:1910.01769,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Sheng Shen", "Zhen Dong", "Jiayu Ye", "Linjian Ma", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Q-bert: Hessian based ultra low precision quantization of bert", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Siqi Sun", "Yu Cheng", "Zhe Gan", "Jingjing Liu" ], "title": "Patient knowledge distillation for bert model compression", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Raphael Tang", "Yao Lu", "Linqing Liu", "Lili Mou", "Olga Vechtomova", "Jimmy Lin" ], "title": "Distilling taskspecific knowledge from bert into simple neural networks", "venue": null, "year": 1903 }, { "authors": [ "Ivana Tosic", "Pascal Frossard" ], "title": "Dictionary learning", "venue": "IEEE Signal Processing Magazine,", "year": 2011 }, { "authors": [ "Holger Trittenbach", "Klemens Böhm" ], "title": "One-class active learning for outlier detection with multiple subspaces", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Murad Tukan", "Alaa Maalouf", "Dan Feldman" ], "title": "Coresets for near-convex functions", "venue": "arXiv preprint arXiv:2006.05482,", "year": 2020 }, { "authors": [ "Murad Tukan", "Alaa Maalouf", "Matan Weksler", "Dan Feldman" ], "title": "Compressed deep networks: Goodbye svd, hello robust low-rank approximation", "venue": "arXiv preprint arXiv:2009.05647,", "year": 2020 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Ziheng Wang", "Jeremy Wohlwend", "Tao Lei" ], "title": "Structured pruning of large language models", "venue": "arXiv preprint arXiv:1910.04732,", "year": 2019 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Dong Xu", "Shuicheng Yan", "Lei Zhang", "Hong-Jiang Zhang", "Zhengkai Liu", 
"Heung-Yeung Shum" ], "title": "Concurrent subspaces analysis", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Xiyu Yu", "Tongliang Liu", "Xinchao Wang", "Dacheng Tao" ], "title": "On compressing deep models by low rank and sparse decomposition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ofir Zafrir", "Guy Boudoukh", "Peter Izsak", "Moshe Wasserblat" ], "title": "Q8bert: Quantized 8bit bert", "venue": "arXiv preprint arXiv:1910.06188,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION AND MOTIVATION", "text": "Deep Learning revolutionized Machine Learning by improving the accuracy by dozens of percents for fundamental tasks in Natural Language Processing (NLP) through learning representations of a natural language via a deep neural network (Mikolov et al., 2013; Radford et al., 2018; Le and Mikolov, 2014; Peters et al., 2018; Radford et al., 2019). Lately, it was shown that there is no need to train those networks from scratch each time we receive a new task/data, but to fine-tune a full pre-trained model on the specific task (Dai and Le, 2015; Radford et al., 2018; Devlin et al., 2019). However, in many cases, those networks are extremely large compared to classical machine learning models. For example, both BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) have more than 110 million parameters, and RoBERTa (Liu et al., 2019b) consists of more than 125 million parameters. Such large networks have two main drawbacks: (i) they use too much storage, e.g. memory or disk space, which may be infeasible for small IoT devices, smartphones, or when a personalized network is needed for each user/object/task, and (ii) classification may take too much time, especially for real-time applications such as NLP tasks: speech recognition, translation or speech-to-text.\nCompressed Networks. To this end, many papers suggested different techniques to compress large NLP networks, e.g., by low-rank factorization (Wang et al., 2019; Lan et al., 2019), prun-\n∗equal contribution\ning (McCarley, 2019; Michel et al., 2019; Fan et al., 2019; Guo et al., 2019; Gordon et al., 2020), quantization (Zafrir et al., 2019; Shen et al., 2020), weight sharing (Lan et al., 2019), and knowledge distillation (Sanh et al., 2019; Tang et al., 2019; Mukherjee and Awadallah, 2019; Liu et al., 2019a; Sun et al., 2019; Jiao et al., 2019); see more example papers and a comparison table in Gordon (2019) for compressing the BERT model. There is no consensus on which approach should be used in what contexts. However, in the context of compressing the embedding layer, the most common approach is low-rank factorization as in Lan et al. (2019), and it may be combined with other techniques such as quantization and pruning.\nIn this work, we suggest a novel low-rank factorization technique for compressing the embedding layer of a given model. This is motivated by the fact that in many networks, the embedding layer accounts for 20%− 40% of the network size. Our approach - MESSI: Multiple (parallel) Estimated SVDs for Smaller Intralayers - achieves a better accuracy for the same compression rate compared to the known standard matrix factorization. To present it, we first describe an embedding layer, the known technique for compressing it, and the geometric assumptions underlying this technique. Then, we give our approach followed by geometric intuition, and detailed explanation about the motivation and the architecture changes. Finally, we report our experimental results that demonstrate the strong performance of our technique.\nEmbedding Layer. The embedding layer aims to represent each word from a vocabulary by a real-valued vector that reflects the word’s semantic and syntactic information that can be extracted from the language. One can think of the embedding layer as a simple matrix multiplication as follows. 
The layer receives a standard vector x ∈ Rn (a row of the identity matrix, with exactly one nonzero entry, usually called a one-hot vector) that represents a word in the vocabulary; the layer multiplies x by a matrix AT ∈ Rd×n to obtain the corresponding d-dimensional word embedding vector y = ATx, which is the row in A that corresponds to the non-zero entry of x. The embedding layer has n input neurons, and the output has d neurons. The nd edges between the input and output neurons define the matrix A ∈ Rn×d. Here, the entry in the ith row and jth column of A is the weight of the edge between the ith input neuron and the jth output neuron; see Figure 1.\nCompressing by Matrix Factorization. A common approach for compressing an embedding layer is to compute the j-rank approximation Aj ∈ Rn×d of the corresponding matrix A via SVD (Singular Value Decomposition; see e.g., Lan et al. (2019); Yu et al. (2017) and Acharya et al. (2019)), factor Aj into two smaller matrices U ∈ Rn×j and V ∈ Rj×d (i.e. Aj = UV ), and replace the original embedding layer that corresponds to A by a pair of layers that correspond to U and V . The number of parameters is then reduced to j(n + d). Moreover, computing the output takes O(j(n + d)) time, compared to the O(nd) time for computing ATx. As above, we continue to use Aj to refer to a rank-j approximation of a matrix A.\nFine tuning. The layers that correspond to the matrices U and V above are sometimes used only as initial seeds for a training process that is called fine tuning. Here, the training data is fed into the network, and the error is measured with respect to the final classification. Hence, the structure of the data remains the same, but the edges are updated in each iteration to give a better accuracy.\nObserve that typically, the SVD takes the form Aj = UDṼ , where the columns of U ∈ Rn×j are orthogonal, the rows of Ṽ ∈ Rj×d are orthogonal, and D ∈ Rj×j is a diagonal matrix. In this paper and in others, we say that Aj = UV where V = DṼ . Furthermore, the orthogonalization is used only to obtain a low-rank approximation Aj = UV using SVD; after that, this property is not kept in the network during the training process (when applying the fine-tuning).\nGeometric intuition. The embedding layer can be encoded into a matrix A ∈ Rn×d as explained above. Hence, each of the n rows of A corresponds to a point (vector) in Rd, and the j-rank approximation Aj ∈ Rn×d represents the projection onto the j-dimensional subspace that minimizes the sum of squared distances (“errors”) to the points. Projecting these points onto any j-dimensional subspace of Rd would allow us to encode every point only via its j coordinates on this subspace, and store only nj entries instead of the original nd entries of A. This is the matrix U ∈ Rn×j , where each row encodes the corresponding row in A by its j coordinates on this subspace. The subspace itself can be represented by its basis of j d-dimensional vectors (jd entries), which is the column space of a matrix V T ∈ Rd×j . Figure 2 illustrates the small pair of layers that correspond to U and V ; those layers are a compression of the original big layer that corresponds to A.\nHowever, our goal is not only to compress the network or matrix, but also to approximate the original matrix operator A. To this end, among all the possible j-subspaces of Rd, we may be interested in the j-subspace that minimizes the sum of squared distances to the points, i.e., the sum of squared projection errors. This subspace can be computed easily via SVD. 
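To make this standard compression concrete, the following is a minimal PyTorch-style sketch we add for illustration (it uses the current torch.linalg API rather than the older interface, and the function name is hypothetical) of replacing an embedding matrix A by the pair of layers corresponding to U and V = DṼ:

```python
import torch
import torch.nn as nn

def factor_embedding(A: torch.Tensor, j: int):
    """Replace an (n x d) embedding matrix A by two layers whose composition
    is the rank-j SVD approximation A_j = UV (with V = D * Vtilde)."""
    U, S, Vh = torch.linalg.svd(A, full_matrices=False)
    lookup = nn.Embedding(A.shape[0], j)            # plays the role of U (n x j)
    lookup.weight.data = U[:, :j].clone()
    project = nn.Linear(j, A.shape[1], bias=False)  # plays the role of V (j x d)
    project.weight.data = (S[:j, None] * Vh[:j]).t().contiguous()
    return lookup, project

# Usage: project(lookup(ids)) ~ A[ids], with j(n + d) parameters instead of nd.
A = torch.randn(1000, 64)
lookup, project = factor_embedding(A, j=16)
ids = torch.tensor([3, 7])
y = project(lookup(ids))                            # approximate embeddings
```

As noted above, the orthogonality of the factors is not enforced once fine-tuning begins; the two layers are simply trained as ordinary weights.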
The corresponding projections of the rows of A onto this subspace are the rows of the j-rank matrix Aj.\nThe hidden or statistical assumption in this model is that the rows of the matrix A (that represents the embedding layer) were actually generated by adding i.i.d. Gaussian noise to each point in a set of n points on a j-dimensional subspace, which is spanned by what are called latent variables or factors. Given only the resulting matrix A, the j-subspace that maximizes the likelihood (probability) of generating the original points is spanned by the j largest singular vectors of A.\nWhy a single distribution? Even if we accept the assumption of Gaussian noise, e.g. due to simplicity of computations or the law of large numbers, it is not intuitively clear why we should assume that the rows of A were sampled from a single distribution. Natural questions that arise are:\n(i) Can we get smaller and/or more accurate models in real-world networks by assuming multiple generating distributions (i.e. multiple subspaces) instead of a single one?\n(ii) Can we efficiently compute the corresponding factorizations and represent them as part of a network?" }, { "heading": "2 OUR CONTRIBUTION", "text": "We answer the above open questions by suggesting the following contributions. In short, the answers are:\n(i) In all the real-world networks that we tested, it is almost always better to assume that k ≥ 2 distributions, rather than a single one, generated the data. It is better in the sense that the resulting accuracy of the network is higher compared to k = 1 (SVD) for the same compression rate.\n(ii) While approximating the global minimum is Max-SNP-hard, our experiments show that we can efficiently compute many local minima and take the smallest one. We then explain how to encode the result back into the network. This is done by suggesting a new embedding layer architecture that we call MESSI (Multiple (parallel) Estimated SVDs for Smaller Intralayers); see Figure 3. Extensive experimental results show significant improvement.\nComputational Geometry meets Deep Learning. Our technique also constructs the matrix A ∈ Rn×d from a given embedding layer. However, inspired by the geometric intuition from the previous section, we suggest to approximate the n rows of A by clustering them into k ≥ 2 subspaces instead of one. More precisely, given an integer k ≥ 1, we aim to compute a set of k subspaces in Rd, each of dimension j, that minimizes the sum of squared distances from every point (row in A) to its nearest subspace. This can be considered as a combination of j-rank (or j-subspace) approximation, as defined above, and k-means clustering. In the k-means clustering problem we wish to approximate n points by k center points that minimize the sum of squared distances from every point to its nearest center. In our case, the k center points are replaced by k subspaces, each of dimension j. In computational geometry, this type of problem is called projective clustering (see Figure 4), and it is used in many tasks in the fields of Machine Learning and Computer Vision (Feng et al., 2011; Xu et al., 2005; Liu et al., 2012; Trittenbach and Böhm, 2019).\nFrom Embedding layer to Embedding layers. The result of the above technique is a set of k matrices A1j , · · · , Akj , each of rank j and of dimension ni × d, where the ith matrix corresponds to the cluster of ni points that were projected onto the ith j-dimensional subspace. 
Each of those matrices can be factored into two smaller matrices (due to its low rank), i.e., for every i ∈ {1, · · · , k} we have Aij = U iV i, where U i ∈ Rni×j and V i ∈ Rj×d. To plug these matrices into the final network in place of the embedding layer, we suggest to encode them via k parallel sub-layers, as described in what follows and illustrated in Figure 3.\nOur pipeline: MESSI. We construct our new architecture as follows. We use A to refer to the n × d matrix from the embedding layer we seek to compress. The input to our pipeline is the matrix A, positive integers j and k, and (for the final step) parameters for the fine-tuning.\n1. Treating the n rows of A as n points in Rd, compute an approximate (k, j)-projective clustering. The result is k subspaces in Rd, each of dimension j, that minimize the sum of squared distances from each point (row in A) to its closest subspace. For the approximation, we compute a local minimum for this problem using the Expectation-Maximization (EM) method (Dempster et al., 1977).\n2. Partition the rows of A into k different subsets according to their nearest subspace from the previous step. The result is submatrices A1, . . . , Ak, where Ai is an ni × d matrix and n1 + . . . + nk = n.\n3. For each matrix Ai, where 1 ≤ i ≤ k, factor it into two smaller matrices U i (of dimensions ni × j) and V i (of dimensions j × d) such that U iV i is the rank-j approximation of Ai.\n4. In the full network, replace the original fully-connected embedding layer by 2 layers. The first layer is a parallelization of k separate fully-connected layers, where for every i ∈ {1, · · · , k} the ith parallel layer consists of the matrix U i, i.e., it has ni input neurons and j output neurons. Here, each row of A is mapped appropriately. The second layer combines the matrices V 1, · · · , V k: the k output vectors u1, . . . , uk from the previous layer are combined as V 1u1 + . . . + V kuk; see Figure 3 for an illustration.\n5. Fine-tune the network.\nThe result is a compressed embedding layer. Every matrix U i has ni · j parameters, and every matrix V i has jd parameters. Therefore the compressed embedding layer consists of nj + kjd parameters, in comparison to the uncompressed layer of nd parameters.\nPractical Solution. The projective clustering problem is known to be Max-SNP-hard even for d = 2 and j = 2, for any approximation factor that is independent of n. Instead, we suggest to use an algorithm that provably converges to a local minimum via the Expectation-Maximization (EM) method (Dempster et al., 1977), which is a generalization of the well-known Lloyd algorithm (Lloyd, 1982). The resulting clusters and factorizations are used to determine the new architecture and its initial weights; see Figure 3 for more details. We run our experiments on AWS Amazon EC2 cloud instances, and detail our results in the next section.\nOpen code and networks. Complete open code to reproduce the resulting networks is provided. We expect it to be useful for future research, and give the following few examples." }, { "heading": "2.1 GENERALIZATIONS AND EXTENSIONS.", "text": "Our suggested architecture can be generalized and extended to support many other optimization functions that may be relevant for different types of datasets, tasks, or applications besides NLP.\nℓq-error. For simplicity, our suggested approach aims to minimize the sum of squared distances to k subspaces; a minimal sketch of the EM procedure for this default objective (steps 1-3 of the pipeline above) is given below. 
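The following NumPy sketch is our own illustration of the local search, not the released implementation; the function name is hypothetical:

```python
import numpy as np

def messi_cluster_and_factor(A, k, j, iters=25, seed=0):
    """Steps 1-3: (k, j)-projective clustering of the rows of A via EM,
    then a rank-j factorization A_i ~ U^i V^i of every cluster."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=A.shape[0])        # random initial partition
    for _ in range(iters):
        # M-step: the best j-subspace for each cluster (in the sum of squared
        # distances sense) is spanned by its top-j right singular vectors.
        bases = []
        for c in range(k):
            rows = A[labels == c]
            if rows.shape[0] < j:                    # re-seed a degenerate cluster
                rows = A[rng.integers(A.shape[0], size=j)]
            _, _, Vh = np.linalg.svd(rows, full_matrices=False)
            bases.append(Vh[:j])                     # (j, d) with orthonormal rows
        # E-step: reassign each row to the subspace minimizing its squared
        # residual ||x - (x B^T) B||^2, i.e. its squared distance to the subspace.
        costs = np.stack([((A - (A @ B.T) @ B) ** 2).sum(axis=1) for B in bases])
        labels = costs.argmin(axis=0)
    factors = [(A[labels == c] @ bases[c].T, bases[c]) for c in range(k)]
    return labels, factors                           # factors[i] = (U^i, V^i)
```

Step 4 then loads each pair (U^i, V^i) into the parallel architecture of Figure 3, and step 5 fine-tunes as usual.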
However, the approach can easily be applied also to the sum of (non-squared) distances from the points to the subspaces, which is a more robust approach toward outliers (“far away points”).\nEven for k = 1, recent results of Tukan et al. (2020b) show improvement over SVD.\nDistance functions. Similarly, we can replace the Euclidean ℓ2-distance by e.g. the Manhattan distance, which is the ℓ1-norm ‖x − x′‖1 between a point x and its projection x′, i.e., a sum of absolute differences between the corresponding entries instead of the sum of squared entries as in the Euclidean distance ‖x − x′‖2 used in this paper.\nNon-uniform dimensions. In this paper we assume that k subspaces approximate the input points, and each subspace has dimension exactly j, where j, k ≥ 1 are given integers. A better strategy is to allow each subspace to have a different dimension ji for every i ∈ {1, · · · , k}, or to add a constraint only on the sum j1 + · · · + jk of dimensions. Similarly, the number k may be tuned as in our experimental results. Using this approach we can improve the accuracy and enjoy the same compression rate.\nFor more details about these generalizations and others, we refer the interested reader to Section E.1 in the appendix." }, { "heading": "3 EXPERIMENTAL RESULTS", "text": "GLUE benchmark. We run our experiments on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). It is a widely-used collection of 9 datasets for evaluating natural language understanding systems.\nNetworks. We use the following networks: (i) RoBERTa (Liu et al., 2019b), which consists of 120 million parameters, and whose embedding layer has 38.9 million parameters (32.5% of the entire network size), (ii) DistilBERT (Sanh et al., 2019), which consists of 66 million parameters, and whose embedding layer has 23.5 million parameters (35.5% of the entire network size), and (iii) ALBERT (base-v2) (Lan et al., 2019), which consists of 11.7 million parameters, and whose embedding layer has 3.8 million parameters (33% of the entire network).\nSoftware and Hardware. All the experiments were conducted on an AWS c5a.16xlarge machine with 64 CPUs and 128 GiB of RAM. To build and train networks, we used the suggested implementation in the Transformers1 library from HuggingFace (Wolf et al., 2019) (Transformers version 3.1.0, and PyTorch version 1.6.0 (Paszke et al., 2017)). For more details about the implementation, we refer the reader to Section A in the appendix.\nThe setup. All our experiments are benchmarked against the publicly available implementations of the DistilBERT, RoBERTa, and ALBERT models, fine-tuned for each task; their accuracy was in some cases higher and in other cases lower than the values printed in the publications introducing these models. Given an embedding layer from a network that is trained on a task from GLUE, an integer k ≥ 1, and an integer j ≥ 1, we build and initialize a new architecture that replaces the original embedding layer by two smaller layers as explained in Figure 3. We then fine-tune the resulting network for 2 epochs. We ran the same experiments for several values of k and j that define different compression rates. We compare against the standard matrix factorization approach in all experiments." }, { "heading": "3.1 REPORTED RESULTS", "text": "Compressing RoBERTa and DistilBERT. (i) In Figures 5 and 6 the x-axis is the compression rate of the embedding layer, i.e. a compression of 40% means the layer is 60% of its original size. 
The y-axis is the accuracy drop (relative error) with respect to the original accuracy of the network (with fine-tuning for 2 epochs). In Figure 5, each graph reports the results for a specific task from the GLUE benchmark on RoBERTa, while Figure 6 reports the results on DistilBERT.\n(ii) On the task WNLI we achieved 0 error on both networks with both approaches (SVD and ours) up to a 60% compression rate, so we did not add a figure for it.\n(iii) In RoBERTa, we checked only 2 compression rates on MNLI due to time constraints, and we achieved similar results with both techniques; e.g., we compressed 45% of the embedding layer based on our technique with k = 5 and j = 384 to obtain only a 0.61% drop in accuracy with fine-tuning and 4.2% without, compared to 0.61% and 13.9% respectively for the same compression rate via SVD factorization. In DistilBERT, we compressed 40% of the embedding layer with k = 4 and achieved a 0.1% increase in accuracy after fine-tuning, as compared to a 0.05% drop via SVD factorization (on MNLI).\n(iv) Table 1 suggests the best compressed networks in terms of accuracy vs. size.\n1https://github.com/huggingface/transformers\nImproving the accuracy of pre-trained models using MESSI. In Table 2, we test whether the MESSI architecture can improve the accuracy of a pre-trained model while maintaining the same number of parameters. The only change made to the given model is factoring its embedding layer into the suggested architecture using the pipeline detailed in Section 2. Here, we make sure to choose values of k and j such that the original embedding layer size is maintained (up to a very small change). We conducted this experiment on the ALBERT (base-v2) model. The results are promising.\nMore results are placed in the appendix: (i) Figure 8 in Section B shows the accuracy drop as a function of the compression rate on the RoBERTa model before fine-tuning. (ii) In Section C we compress fully-connected layers in different settings; specifically, we compress the two popular models LeNet-300-100 on MNIST (LeCun et al., 1998) and VGG-19 (Simonyan and Zisserman, 2014) on CIFAR10 (Krizhevsky et al., 2009); see results in Figures 9 and 10. (iii) In Section D, we suggest a way to determine the values of k and j in practice for a given compression rate, and we report the results of compressing DistilBERT based on this suggestion; see Figure 11. (iv) Finally, in Section E we check how another clustering method fits into our pipeline: instead of clustering the input neurons of the fully-connected layer (rows of A) via projective clustering (steps 1 and 2 of the pipeline in Section 2), we try the well-known k-means clustering, and then continue as before by applying SVD on each cluster and building the corresponding new layers. See results in Figures 12, 13, 14 and 15." }, { "heading": "3.2 DISCUSSION", "text": "As shown by Figures 5 and 6, our approach outperforms the traditional SVD factorization. In all experiments, our method achieves better accuracy for the same compression rate compared to the traditional SVD. For example, on RoBERTa, we compress 43% of the embedding layer with less than a 0.8% average drop in accuracy, compared to the 3% drop of the standard technique at a smaller compression rate of 40%. In DistilBERT, we achieved 40% compression of the embedding layer while incurring only a 0.5% average drop in accuracy over all nine GLUE tasks, compared to a 2.8% drop using the existing SVD approach. 
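For reference, the compression rates quoted here follow directly from the parameter count nj + kjd of Section 2. The small helper below makes the accounting explicit; the vocabulary and hidden sizes are illustrative approximations of a RoBERTa-scale embedding, not the exact values used in the experiments:

```python
def embedding_params(n, d, k, j):
    # Compressed embedding size from Section 2: n*j across the k parallel
    # U^i blocks combined, plus k*j*d for the V^i blocks.
    return n * j + k * j * d

# Roughly RoBERTa-sized embedding (illustrative): n ~ 50k tokens, d = 768.
n, d = 50_000, 768
for k, j in [(1, 384), (5, 384)]:
    rate = 1 - embedding_params(n, d, k, j) / (n * d)
    print(f"k={k}, j={j}: ~{rate:.0%} of the embedding parameters removed")
```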
As the reader will notice, the same dataset (and network) may be ideally handled by different values of k, depending on the desired compression.\nWe observed that our technique shines mainly when the network is efficient (has little redundancy), so that any small change leads to a large error, e.g., as in the CoLA/RTE/MRPC graphs of Figure 5. Although we achieve better results in all of the cases, here the difference is more significant (up to 10%): since our compressed layer approximates the original layer better than SVD, the errors are smaller and the accuracy is better. Furthermore, Figure 8 shows clearly that even without fine-tuning, the new approach yields more accurate networks. Hence, we can fine-tune for a smaller number of epochs and achieve higher accuracy. Finally, from Table 2 we can see that the MESSI architecture can also be used to improve the accuracy of pre-trained models while maintaining the original size." }, { "heading": "3.3 CONCLUSION", "text": "We suggested a novel approach for compressing a fully-connected layer: clustering the input neurons of the layer into k subsets (via projective clustering) and then factoring the corresponding weight matrix of each subset. We then provided a novel architecture that replaces the original fully-connected layer by a set of k small layers that operate in parallel and are then recombined with a single fully-connected layer. The experimental results showed that our suggested algorithm outperforms the traditional factorization technique and achieves higher accuracy for the same compression rate, both before and after fine-tuning." }, { "heading": "3.4 FUTURE WORK", "text": "Future work includes experiments on other networks and datasets, both from the field of NLP and outside it. For example, an interesting experiment is to modify the ALBERT network (Lan et al., 2019) by changing its embedding layer architecture (which consists of two layers based on the standard matrix factorization) to the architecture suggested in this paper, while maintaining the same number of parameters, and to check whether this modification improves its accuracy. The suggested generalizations and extensions from Section 2.1 should also be tried, as we strongly believe they will allow us to achieve even better results. Finally, the approach should be generalized to other types of layers." }, { "heading": "4 ACKNOWLEDGEMENTS", "text": "Support for this research has been provided in part by NSF award 1723943. We are grateful for it." }, { "heading": "B RESULTS BEFORE FINE TUNING", "text": "In this section we report the results of compressing RoBERTa without fine-tuning. From Figure 8 we can clearly see that even without fine-tuning, the new approach yields more accurate networks compared to the standard SVD factorization. Hence, our approach gives a better start for the learning (fine-tuning) process, which implies that we can fine-tune for a smaller number of epochs and achieve higher accuracy and smaller networks." }, { "heading": "C COMPRESSING FULLY-CONNECTED LAYERS USING MESSI.", "text": "In this section we test our approach on two popular models: LeNet-300-100 on MNIST (LeCun et al., 1998), and VGG-19 (Simonyan and Zisserman, 2014) on CIFAR10 (Krizhevsky et al., 2009). Here too, we conducted our experiments on the same hardware described in Section 3.\nIn both experiments, we test our approach with multiple values of k and compare it to k = 1 (standard SVD factorization). 
For every value of k, we compress each of the hidden fully-connected layers of the given model by the same percentage and using the same value of k.\nLeNet-300-100. The network consists of 266,610 parameters, and it is comprised of two fully-connected hidden layers with 300 and 100 neurons, respectively, trained on the MNIST data set.\nWe test our approach with k ∈ {2, 3, 4, 5}. In Figure 9, we report the accuracy drop as a function of the compression rate for the whole network. We can see the advantage of our approach when compressing more than 90% of the network.\nVGG-19. We used the publicly available implementation2. The network consists of 16 convolutional layers, followed by 2 dense hidden (fully-connected) layers with 512 neurons each. Finally, the classification layer has 10 neurons. The fully-connected layers consist of 530,442 parameters.\nHere, we tested our approach for k ∈ {2, 5}. In Figure 10, we report the accuracy drop as a function of the compression rate of the fully-connected layers. The suggested approach has a clear advantage for high compression rates.\n2https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/vgg.py" }, { "heading": "D MESSI-ENSEMBLE", "text": "In this section we show only the best computed results on DistilBERT, obtained by training models for several values of k and then evaluating the model that achieves the best accuracy on the training set. Specifically, given a fully-connected layer of n input neurons and d output neurons, for a given compression rate x (e.g., x = 0.4 means that we want to remove 40% of the parameters), we try multiple values of k via binary search on k. For every such value of k we compute the implied value j = (1 − x)dn/(n + kd), and we compress the network based on those k and j via the MESSI pipeline. Finally, we save the model that achieves the best accuracy on the training set, and evaluate its results on the test set. Figure 11 reports the results for this approach." }, { "heading": "E PROJECTIVE CLUSTERING VS k-MEANS", "text": "Recall the suggested pipeline from Section 2: the first step is to compute a set of k subspaces in Rd, each of dimension j, that approximates the (k, j)-projective clustering of the input matrix A. The second step partitions the input neurons (rows of A) according to their closest subspace from the set of k subspaces computed in the first step. Then, in step 3, we compute the SVD for each cluster, and in steps 4 and 5 we build (and possibly fine-tune) the corresponding architecture as described (see Figure 3).\nIn this section, we compare using projective clustering to using k-means clustering. We do not apply steps 1 and 2; instead, we partition the input neurons (rows of A) into k groups by applying k-means clustering to them (instead of projective clustering). We then apply steps 3, 4 and 5 in exactly the same way.\nHere, we evaluated our results on the networks RoBERTa (Liu et al., 2019b) and DistilBERT (Sanh et al., 2019), on the RTE and MRPC tasks from the GLUE benchmark (Wang et al., 2018). Figures 12 and 13 compare the results on RoBERTa between the two clustering methods, with and without fine-tuning, respectively, while Figures 14 and 15 do the same for the results on DistilBERT.\nWe also used the LeNet-300-100 model on MNIST (LeCun et al., 1998) to check this (same) experiment in a different setting. See Figure 16.\nDiscussion. 
In Figure 13, where we test the accuracy drop before fine-tuning, we can see that using projective clustering for partitioning the neurons is better than running k-means on them, i.e., the projective clustering approach yielded a better start (accuracy before fine-tuning) for the learning process than the k-means approach.\nThis could be explained by the fact that our original approach (projective clustering) aims to compute a set of k subspaces (each of dimension j) that minimizes the sum of squared distances from each row (neuron) of the input matrix A to its closest subspace from the set. Hence, factoring the matrix A based on those subspaces gives a good approximation of it, which is not the case for the k-means clustering.\nThis advantage may explain the difference between the two approaches after fine-tuning for the same number of epochs, as can be seen in Figure 12.\nOn the other hand, in Figure 15, the two methods gave similar results in terms of accuracy before fine-tuning, and we can see that this affects the results after the fine-tuning, where the two approaches also obtained similar results, as can be seen in Figure 14.\nHence, the better way to determine the partition (which determines the compressed architecture) and to initialize the new layers in the MESSI pipeline is the projective clustering approach.\nE.1 GENERALIZATIONS AND EXTENSIONS.\nHere, we give more details about the suggested generalizations and extensions from Section 2.1, and we also add a few more:\nℓq-error. For simplicity, our suggested approach aims to minimize the sum of squared distances to k subspaces. However, it can easily be applied also to the sum of distances from the points to the subspaces. In this case, we aim to compute the maximum likelihood of the generating subspaces assuming a Laplacian instead of a Gaussian distribution. More generally, we may want to minimize the sum over every distance to the power of q > 0, i.e., we take the q-norm ‖err‖q where err is the distance between a point and its projection on its closest subspace.\nEven for k = 1, recent results of Tukan et al. (2020b) show improvement over SVD.\nObserve that given the optimal subspaces, the system architecture in these cases remains the same as ours in Figure 3.\nDistance functions. Similarly, we can replace the Euclidean ℓ2-distance by e.g. the Manhattan distance, which is the ℓ1-norm ‖x − x′‖1 between a point x and its projection x′, i.e., a sum of absolute differences between the corresponding entries instead of the sum of squared entries as in the Euclidean distance ‖x − x′‖2 used in this paper. More generally, we may use the ℓp distance ‖x − x′‖p, or even non-distance functions such as M-estimators that can handle outliers (as in Tukan et al. (2020a)) by replacing dist(p, x) with min{dist(p, x), t}, where t > 0 is a constant (threshold) that makes sure that far-away points will not affect the overall sum too much.\nFrom an implementation perspective, the EM algorithm for k subspaces uses a k = 1 solver routine as a black box. Therefore, extending to other distance functions is as simple as replacing the SVD solver (the k = 1 solver for the Euclidean distance) by the corresponding solver for k = 1.\nNon-uniform dimensions. In this paper we assume that k subspaces approximate the input points, and each subspace has dimension exactly j, where j, k ≥ 1 are given integers. A better strategy is to allow each subspace to have a different dimension ji for every i ∈ {1, · · · , k}, or to add a constraint only on the sum j1 + · · · + jk of dimensions. 
Similarly, the number k may be tuned as in our experimental results. Using this approach we can improve the accuracy and enjoy the same compression rate. This search or parameter tuning, however, might increase the computation time of the compressed network. It also implies layers of different sizes (one for each subspace) in Figure 3.\nDictionary Learning. Our approach of projective clustering is strongly related to Dictionary Learning (Tosic and Frossard, 2011; Mairal et al., 2009). Here, the input is a matrix A ∈ Rn×d and the output is a “dictionary” V T ∈ Rd×j and projections (or atoms), which are the rows of U ∈ Rn×j , that minimize ‖A − UV ‖ under some norm. It is easy to prove that UV is simply the j-rank approximation of A, as explained in Section 1. However, if we have additional constraints, such as that every row of U should have, say, only k = 1 non-zero entries, then geometrically the columns of V T are the j lines that intersect the origin and minimize the sum of distances to the points. For k > 1, every point is projected onto the subspace that minimizes its distance and is spanned by k columns of V T .\nCoresets. Coresets are a useful tool, especially in projective clustering, to reduce the size of the input (compress it in some sense) while preserving the optimal solution or even the sum of distances to any set of k subspaces. However, we are not aware of any efficient implementations, and the dependency on d and k is usually exponential, as in Edwards and Varadarajan (2005). A natural open problem is to compute more efficient and practical coresets for projective clustering.\nE.2 EXPERIMENTING ON ℓq-ERROR\nTo get a taste of the suggested extensions, we tried the first suggestion of the ℓq-error, with q = 1. That is, we cluster the rows of the input matrix A based on the set of k subspaces that minimizes the sum of (non-squared) distances from each row in A to its closest subspace from the set.\nA local minimum of the new clustering problem can still be obtained by the suggested EM algorithm. The only difference is that the SVD computation of the optimal subspace for a cluster of points (k = 1) should be replaced by a more involved approximation algorithm for computing the subspace that minimizes the sum of distances to the power of q = 1; see e.g. Tukan et al. (2020b); Clarkson and Woodruff (2015).\nHowever, this change increased the running time of the algorithm from minutes to days; this is due to the fact that the deterministic approximation algorithms for the new problem (ℓ1-error) with k = 1 take at least O(nd⁴) time, where d = 768 in our case, and we need to run this approximation algorithm many times in the EM procedure. For that reason, we conducted our experiments only on one network (RoBERTa), on 2 tasks from the GLUE benchmark (MRPC and RTE).\nTable 3 shows the accuracy drop for both techniques for two values of j, with k = 5 on the MRPC task and k = 10 on RTE. It can be seen from the table that, mostly, using the ℓ1 error as an initialization is better than the ℓ2. However, for some reason (that needs further investigation), after fine-tuning for 2 epochs both approaches reached almost the same accuracy; moreover, the ℓ2 approach sometimes achieved better accuracy. We leave this for future research." } ]
2021
DEEP LEARNING MEETS PROJECTIVE CLUSTERING
SP:2f3bb20ca38e10fde160e4961d6b1796cadd465f
[ "The paper focuses on modeling multiple hierarchical relations on a heterogenous graph. The task “modeling joint hierarchies” is essentially trying to infer whether a given pair of entities has a hierarchical connection especially when there exists multiple hierarchical relations (2 in the paper), and missing links. The paper proposes to embed entities using boxes whose endpoints follow the Gumbel distribution. Given there exists two hierarchical relations, the paper transforms the box of one entity under relation 1 to the box of the entity under relation 2 with a parameterized linear function. This is in contrast to previous work that parameterized the box of two relations using separate independent parameters. " ]
Learning representations of entities and relations in knowledge graphs is an active area of research, with much emphasis placed on choosing the appropriate geometry to capture tree-like structures. Box embeddings (Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020), which represent concepts as n-dimensional hyperrectangles, are capable of embedding trees when training on a subset of the transitive closure. In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required, and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes. While it is possible to represent joint hierarchies with this method, the parameters for each hierarchy are decoupled, making generalization between hierarchies infeasible. In this work, we introduce a learned box-to-box transformation which respects the geometric structure of box embeddings. We demonstrate that this not only improves the capability of modeling cross-hierarchy compositional edges, but is also capable of generalizing from a subset of the transitive reduction.
[]
[ { "authors": [ "Ralph Abboud", "İsmail İlkan Ceylan", "Thomas Lukasiewicz", "Tommaso Salvatori" ], "title": "Boxe: A box embedding model for knowledge base completion", "venue": "In Proceedings of the 34th Annual Conference on Neural Information Processing Systems NeurIPS,", "year": 2020 }, { "authors": [ "Ben Athiwaratkun", "Andrew Gordon Wilson" ], "title": "Hierarchical density order embeddings", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ivana Balazevic", "Carl Allen", "Timothy Hospedales" ], "title": "TuckER: Tensor factorization for knowledge graph completion", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Ivana Balazevic", "Carl Allen", "Timothy Hospedales" ], "title": "Multi-relational poincaré graph embeddings", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Lukas Biewald" ], "title": "Experiment tracking with weights and biases, 2020", "venue": "URL https://www.wandb. com/. Software available from wandb.com", "year": 2020 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "A. Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Benjamin Paul Chamberlain", "James R. Clough", "Marc Peter Deisenroth" ], "title": "Neural embeddings of graphs in hyperbolic space. 13th international workshop on mining and learning from graphs held in conjunction with KDD, 2017", "venue": null, "year": 2017 }, { "authors": [ "Ines Chami", "Adva Wolf", "Da-Cheng Juan", "Frederic Sala", "Sujith Ravi", "Christopher Ré" ], "title": "Lowdimensional hyperbolic knowledge graph embeddings", "venue": "arXiv preprint arXiv:2005.00545,", "year": 2020 }, { "authors": [ "Shib Sankar Dasgupta", "Michael Boratko", "Dongxu Zhang", "Luke Vilnis", "Xiang Lorraine Li", "Andrew McCallum" ], "title": "Improving local identifiability for probabilistic box embeddings", "venue": "In Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Katrin Erk" ], "title": "Representing words as regions in vector space", "venue": "In Proceedings of the Thirteenth Conference on Computational Natural Language Learning,", "year": 2009 }, { "authors": [ "Octavian Ganea", "Gary Bécigneul", "Thomas Hofmann" ], "title": "Hyperbolic neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Octavian-Eugen Ganea", "Gary Bécigneul", "Thomas Hofmann" ], "title": "Hyperbolic entailment cones for learning hierarchical embeddings", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "James A Hampton" ], "title": "The combination of prototype concepts", "venue": "The psychology of word meanings,", "year": 1991 }, { "authors": [ "Alice Lai", "Julia Hockenmaier" ], "title": "Learning to predict denotational probabilities for modeling entailment", "venue": "In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Xiang Li", "Luke Vilnis", "Dongxu Zhang", "Michael Boratko", "Andrew McCallum" ], "title": "Smoothing the geometry of probabilistic box embeddings", "venue": "In International Conference on Learning Representations,", "year": 
2019 }, { "authors": [ "George A Miller" ], "title": "WordNet: a lexical database for English", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Maximilian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Robert M Nosofsky" ], "title": "Attention, similarity, and the identification–categorization relationship", "venue": "Journal of experimental psychology: General,", "year": 1986 }, { "authors": [ "Dhruvesh Patel", "Shib Sankar Dasgupta", "Michael Boratko", "Xiang Li", "Luke Vilnis", "Andrew McCallum" ], "title": "Representing joint hierarchies with box embeddings", "venue": "Automated Knowledge Base Construction,", "year": 2020 }, { "authors": [ "Hongyu Ren", "Weihua Hu", "Jure Leskovec" ], "title": "Query2box: Reasoning over knowledge graphs in vector space using box embeddings", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Edward E Smith", "Daniel N Osherson", "Lance J Rips", "Margaret Keane" ], "title": "Combining prototypes: A selective modification model", "venue": "Cognitive science,", "year": 1988 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ivan Vendrov", "Ryan Kiros", "Sanja Fidler", "Raquel Urtasun" ], "title": "Order-embeddings of images and language", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Luke Vilnis", "Andrew McCallum" ], "title": "Word representations via gaussian embedding", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Luke Vilnis", "Xiang Li", "Shikhar Murty", "Andrew McCallum" ], "title": "Probabilistic embedding of knowledge graphs with box lattice measures", "venue": "In Association for Computational Linguistics,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning for hierarchical relations is crucial in natural language processing because of the hierarchical nature of common knowledge, for example, <Bird ISA Animal> (Athiwaratkun & Wilson, 2018; Vendrov et al., 2016; Vilnis et al., 2018; Nickel & Kiela, 2017). The ISA relation represents meaningful hierarchical relationships between concepts and plays an essential role in generalization for other relations, such as the generalization of <organ PARTOF person> based on <eye PARTOF of person>, and <organ ISA eye>. The fundamental nature of the ISA relation means that it is inherently involved in a large amount of compositional human reasoning involving other relations.\nModeling hierarchies is essentially the problem of modeling a poset, or partially ordered set. The task of partial order completion, a general term to describe tasks which require learning a transitive relation, was introduced in (Vendrov et al., 2016). The authors also introduce a model based on the reverse product order on Rn, which essentially models concepts as infinite cones. Region-based representations have been effective in representing hierarchical data, as containment between regions is naturally transitive. Vilnis et al. (2018) introduced axis-aligned hyperrectangles (or boxes) that are provably more flexible than cones, and demonstrated state-of-the-art performance in multiple tasks.\nThus far, not as much effort has been put into modeling joint hierarchies. Patel et al. (2020) proposed to simultaneously model ISA and HASPART hierarchies from Wordnet (Miller, 1995). To do so, however, they effectively augmented the graph by duplicating the nodes to create a single massive hierarchy. Their model assigns two boxes BISA and BHASPART for each node n, which are unrelated, and therefore misses out on a large amount of semantic relatedness between ISA and HASPART .\nIn this paper we propose a box-to-box transformation which translates and dilates box representations between hierarchies. Our proposed model shares information between the ISA and HASPART hierarchies via this transformation as well as cross-hierarchy containment training objectives. We compare BOX-TRANSFORM MODEL with multiple strong baselines under different settings. We substantially outperform the prior TWO-BOX MODEL while training with only the transitive reduction of both hierarchies and predicting inferred composition edges. As mentioned above, our model’s shared learned features should allow for more generalization, and we test this by training on a subset of the transitive reduction, where we find we are able to outperform strong baselines. Finally, we\nperform a detailed analysis of the model’s capacity to predict compositional edges and transitive closure edges, both from an overfitting and generalization standpoint, identifying subsets where further improvement is needed." }, { "heading": "2 RELATED WORK", "text": "Recent advances in representing one single hierarchy mainly fall in two categories: 1) representing hierarchies in non-Euclidian space (eg. hyperbolic space, due to the curvature’s inductive bias to model tree-like structures) 2) using region-based representations instead of vectors for each node in the hierarchy (Erk, 2009). 
Hyperbolic space has been shown to be efficient in representing hierarchical relations, but also encounters difficulties in training (Nickel & Kiela, 2017; Ganea et al., 2018b; Chamberlain et al., 2017).\nCategorization models in psychology often represent a concept as a region (Nosofsky, 1986; Smith et al., 1988; Hampton, 1991). Vilnis & McCallum (2015) and Athiwaratkun & Wilson (2018) use Gaussian distributions to embed each word in the corpus, the latter of which uses thresholded divergences that amount to region representations. Vendrov et al. (2016) and Lai & Hockenmaier (2017) make use of the reverse product order on R^n_+, which effectively results in cone representations. Vilnis et al. (2018) further extend this cone representation to axis-aligned hyperrectangles (or boxes), and demonstrate state-of-the-art performance on modeling hierarchies. Various training improvements for box embeddings have been proposed (Li et al., 2019; Dasgupta et al., 2020), the most recent of which is termed GumbelBox after its use of a latent noise model in which box parameters are represented via Gumbel distributions.\nRegion representations are also used for tasks which do not require modeling hierarchy. In Vilnis et al. (2018), the authors also model conditional probability distributions using box embeddings. Abboud et al. (2020) and Ren et al. (2020) take a different approach, using the capacity of boxes to contain many vectors to provide slack in the loss function when modeling knowledge base triples or representing logical queries, respectively. Ren et al. (2020) also made use of an action on boxes similar to ours, involving translation and dilation; however, our work differs in both the task (representing logical queries vs. joint hierarchies) and the approach, as their model represents entities using vectors and a loss function based on a box-to-vector distance. The inductive bias of hyperbolic space has also been exploited to model multiple relations: Ganea et al. (2018a) learn hyperbolic transformations for multiple relations using Poincaré embeddings, and show model improvements in low-resource settings. Patel et al. (2020), to which our work is most similar, represent joint hierarchies using box embeddings. However, they represent each concept with two boxes, ignoring the internal semantics of the concepts.\nModeling joint hierarchies shares some similarities with knowledge base completion; however, the goals of the two settings are different. When modeling joint hierarchies you are attempting to learn simultaneous transitive relations, and potentially to learn relevant compositional edges involving these relations. For knowledge base completion, on the other hand, you may be learning many different relations, and you primarily seek to recover edges which were removed rather than to infer new compositional edges. Still, models which perform knowledge base completion can be applied to this task, as the data can be viewed as knowledge base triples with only two relations. There have been multiple works that aim to build better knowledge representations (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2019; Balazevic et al., 2019a). Most relevant, Chami et al. (2020) and Balazevic et al. (2019b) recently proposed KG embedding methods which embed entities in the Poincaré ball model of hyperbolic space. These models are intended to capture relational patterns present in multi-relational graphs, with a particular emphasis on hierarchical relations."
}, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 BOX LATTICE MODEL", "text": "Introduced in Vilnis et al. (2018), a box lattice model (or box model) is a geometric embedding which captures partial orders and lattice structure using n-dimensional hyperrectangles. Formally, we define the set of boxes B in Rn as\nB(Rn) = {[x1, x1]× · · · × [xd, xd]}, (1)\nwhere xi, xj ∈ R, and we represent all degenerate boxes where xi > xi with ∅. A box model for a set S is a function Box : S → B(Rn) which captures some desirable properties of the set S. As the name implies, the box lattice model is particularly suited to representing partial orders and lattice structures. Definition 1 (Poset). A partially ordered set, or poset, is a set P along with a relation such that, for each a, b, c ∈ P , we have\n1. a a (reflexivity)\n2. if a b and b a then a = b (antisymmetry)\n3. if a b and b c then a c (transitivity) Definition 2 (Lattice). A lattice is a poset where each pair of elements have a unique upper bound called the join, denoted by ∧, and a unique lower bound called the meet, denoted by ∨.\nThe authors note that there are natural geometric operations which form a lattice structure on B:\nBox(x) ∧ Box(y) := ∏ i [max(xi, yi),min(x i, yi)], (2)\nBox(x) ∨ Box(y) := ∏ i [min(xi, yi),max(x i, yi)], (3)\nIn other words, the meet of two boxes is the smallest containing box, and the join is the intersection, or ∅ if the boxes are disjoint. These geometric operations map very neatly to hierarchies, where the meet of two nodes is their closest common ancestor and the join is the closest common descendent (or ∅ if no such node exists). The ability of this model to capture lattice structure using geometric operations makes it a natural choice to embed hierarchies." }, { "heading": "3.2 PROBABILISTIC BOX MODEL TRAINING", "text": "In Vilnis et al. (2018), the authors also introduced a probabilistic interpretation of box embeddings and a learning method which was improved upon in Li et al. (2019) and Dasgupta et al. (2020). By using a probability measure µ on Rd (or by constraining the space to [0, 1]d), one can calculate box volumes as µ(Box(X)). The pullback of this measure yields a probability measure on S, and thus the box model can be imbued with valid probabilistic semantics. In particular, since the box space B is closed under intersection, we can calculate joint probabilities by computing P (X,Y ) = µ(Box(X) ∧ Box(Y )) and similarly compute conditional probabilities as\nP (X | Y ) = µ(Box(X) ∧ Box(Y )) µ(Box(Y )) . (4)\nThe conversion from a poset or lattice structure to probabilistic semantics is accomplished by assigning conditional probabilities, namely a b if and only if P (b | a) = 1. We note that the properties required of the relation follow as a natural consequence of the axioms for conditional probability. Apart from providing rigor and interpretability, the calibrated probabilistic semantics also inform and facilitate the training procedure for box embeddings, which is accomplished via gradient descent using KL-divergence with respect to the aforementioned probability distribution as a loss function.\nAs one might expect, care must be taken to handle the case when boxes are disjoint, as there is no gradient. In (Vilnis et al., 2018) the authors made use of the lattice structure to derive a lower bound on the probability, and (Li et al., 2019) introduced an approximation to Gaussian convolution over the boxes which similarly handled the case of disjoint boxes. 
Dasgupta et al. (2020) improve this further by taking a random-process perspective, ensembling over an entire family of box models. The endpoints of boxes are represented using Gumbel distributions, that is,\nGumbelBox(X) = ∏_i [X_i, X̄_i], X_i ∼ MaxGumbel(µ_i, β), X̄_i ∼ MinGumbel(µ̄_i, β), (5)\nwhere µ and β are the location and scale parameters of the Gumbel distribution, respectively. The MaxGumbel distribution is given by\nf(x; µ, β) = (1/β) exp(−(x − µ)/β − e^{−(x − µ)/β}), (6)\nand the MinGumbel distribution is given by negating x and µ. The Gumbel distribution was chosen due to its min/max stability, making the set of Gumbel boxes closed under intersection, i.e., the intersection of two Gumbel boxes is another Gumbel box. We denote the space of all such boxes as G. The expected volume of a Gumbel box can be efficiently calculated analytically, and in Dasgupta et al. (2020) the authors use this expected volume to calculate the conditional probabilities mentioned in equation (4). This training method leads to improved performance on a number of tasks, and is particularly beneficial when embedding trees; thus we will use this Gumbel box approach in our setting." }, { "heading": "3.3 MODELING JOINT HIERARCHIES", "text": "Many existing methods have been proposed for modeling a single hierarchy; however, entities are often simultaneously part of multiple hierarchies, for example hypernymy (i.e., ISA) and meronymy (i.e., HASPART). Furthermore, useful information can be shared across inferred compositional edges between the two hierarchies. For example, as shown in Figure 1, based on <Bird, HASPART, Wing> and <Dove, ISA, Bird>, we can infer <Dove, HASPART, Wing>. Due to the compositional nature of these relations, we can infer not only the per-relation transitive closure edges but also compositional edges such as <Dove, HASPART, Wing>.\nFormally, for two hierarchical relations r1 and r2, composition edges can be formulated following certain rules. In Figure 1, the rules are designed as follows: for <Head, HASPART, Tail>, <x1, ISA, Head> denotes a sub-class of Head, and <Tail, ISA, x2> a super-class of Tail. Composition edges can then be generated as <x1, HASPART, x2>, <x1, HASPART, Tail>, or <Head, HASPART, x2>. These compositional edges were identified in Patel et al. (2020), where it was observed that a model which effectively captures both hierarchies should make correct predictions not only over the transitive closure of each individual relation but also on these compositional edges." }, { "heading": "4 METHODS", "text": "" }, { "heading": "4.1 BOX-TO-BOX TRANSFORMATION", "text": "As mentioned previously, our goal is not only to capture intra-relation transitivity, but also to require the model to capture cross-hierarchy compositional edges; that is, for a set S with two partial orders ⪯1 and ⪯2, we want a model capable of learning (a ⪯1 b) ∧ (b ⪯2 c) ⟹ a ⪯2 c and (a ⪯2 b) ∧ (b ⪯1 c) ⟹ a ⪯2 c. Furthermore, we hope to do so without including these compositional edges in our training data; in fact, we will remove parts of these implications from the data, with the expectation that the embedding parameters capture relevant structure which allows us to recover them.\nAs shown in Dasgupta et al. (2020), Gumbel boxes are able to model hierarchies. We would like to benefit from this capability, particularly for modeling the ISA hierarchy, and thus we seek to learn a function f1 : S → G, where\na ⪯1 b ⟺ E[µ(f1(a) ∩ f1(b))] / E[µ(f1(a))] = 1. (7)
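As a rough illustration of how the containment score in equation (7) can be evaluated, the sketch below follows our reading of the GumbelBox approximation from Dasgupta et al. (2020): the expected per-dimension side length is approximated with a softplus involving twice the Euler–Mascheroni constant, and min/max stability makes the intersection of two Gumbel boxes another Gumbel box via a log-sum-exp of locations. Treat the exact constants here as assumptions rather than a verbatim reproduction of that paper.

```python
import numpy as np

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def expected_volume(mu_min, mu_max, beta=1.0):
    # Assumed softplus approximation of the expected per-dimension side
    # length of a Gumbel box: beta * softplus((mu_max - mu_min)/beta - 2*GAMMA).
    side = beta * np.logaddexp(0.0, (mu_max - mu_min) / beta - 2.0 * GAMMA)
    return float(np.prod(side))

def intersection(a_min, a_max, b_min, b_max, beta=1.0):
    # Min/max stability: the max of two MaxGumbels (resp. min of two
    # MinGumbels) is again Gumbel, with a log-sum-exp of the locations.
    i_min = beta * np.logaddexp(a_min / beta, b_min / beta)
    i_max = -beta * np.logaddexp(-a_max / beta, -b_max / beta)
    return i_min, i_max

def containment_score(a_min, a_max, b_min, b_max, beta=1.0):
    # E[vol(A ∩ B)] / E[vol(A)]; equation (7) requires this to be 1 for a ⪯ b.
    i_min, i_max = intersection(a_min, a_max, b_min, b_max, beta)
    return expected_volume(i_min, i_max, beta) / expected_volume(a_min, a_max, beta)

bird = (np.array([0.2, 0.2]), np.array([0.4, 0.4]))
animal = (np.array([0.1, 0.1]), np.array([0.9, 0.9]))
print(containment_score(*bird, *animal, beta=0.01))   # ~1: Bird ISA Animal
print(containment_score(*animal, *bird, beta=0.01))   # <<1: Animal not ISA Bird
```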
For a given Gumbel box,\nf(x) = ∏_{i=1}^{d} [X_i, X̄_i], X_i ∼ MaxGumbel(µ_i, β), X̄_i ∼ MinGumbel(µ_i + ∆_i, β), (8)\nwhere the free parameters are µ_i and ∆_i. To simultaneously model a second relation, we train a function ϕ : G → G such that\na ⪯2 b ⟺ E[µ(ϕ(f1(a)) ∩ f1(b))] / E[µ(ϕ(f1(a)))] = 1. (9)\nFor notational simplicity, we abbreviate f2 = ϕ ∘ f1. We choose the transformation ϕ to operate on the "min" coordinate of a Gumbel box and on its "side lengths"; that is, we transform a given Gumbel box\nf(x) = ∏_{i=1}^{d} [X_i, X̄_i], X_i ∼ MaxGumbel(µ_i, β), X̄_i ∼ MinGumbel(µ_i + ∆_i, β), (10)\nto\nϕ(GumbelBox(X)) = ∏_{i=1}^{d} [Y_i, Ȳ_i], (11)\nwhere\nY_i ∼ MaxGumbel(θ_i µ_i + b_i, β), and (12)\nȲ_i ∼ MinGumbel(θ_i µ_i + b_i + softplus(θ̄_i ∆_i + b̄_i), β), (13)\nand θ_i, θ̄_i, b_i, b̄_i are learned parameters. This effectively translates and dilates the location parameters of the Gumbel distributions which represent the "corners" of a given Gumbel box. We call this model the BOX-TRANSFORM MODEL.\nThe softplus function is used here to ensure that the max coordinate remains larger than the min, and it also provides a simple overflow protection for the expected box volume, as might happen with side lengths larger than one in high dimensions. While mathematically simple, this transformation allows for parameter sharing between the embedding of a concept with respect to ⪯1 and with respect to ⪯2. Importantly, the transformation is capable of capturing both a global translation and dilation as well as a scaled transformation of the existing learned representation, allowing the absolute position in space (which, for previous box embedding models, was irrelevant) to potentially capture relevant features of the entities. Remark 1. The lack of a transformation on f1(b) is not an oversight. Using Figure 1 as an example, if we consider the Bird box as representative of "all things which are birds", and the HASPART Wing box as representative of "all things which have wings", then encouraging containment of the Bird box inside the HASPART Wing box is quite natural. This conceptual motivation is precisely captured by the lack of a transformation on f1(b). This also coincides with the probabilistic semantics discussed in Section 3.2, and is the method employed by Patel et al. (2020), where this cross-hierarchy containment objective is solely responsible for any flow of information between hierarchies in the TWO-BOX MODEL.
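For concreteness, the following PyTorch-style sketch implements the translate-and-dilate map ϕ of equations (11)–(13) on (location, side-length) parameters; the attribute names theta_min, bias_min, theta_len, and bias_len are our own labels for θ_i, b_i, θ̄_i, and b̄_i.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoxToBox(nn.Module):
    """Translate-and-dilate map phi: G -> G from equations (11)-(13).
    A box is parameterized by (mu, delta): the per-dimension location of
    the min corner and an unconstrained side-length parameter."""
    def __init__(self, dim):
        super().__init__()
        self.theta_min = nn.Parameter(torch.ones(dim))   # theta_i
        self.bias_min = nn.Parameter(torch.zeros(dim))   # b_i
        self.theta_len = nn.Parameter(torch.ones(dim))   # theta-bar_i
        self.bias_len = nn.Parameter(torch.zeros(dim))   # b-bar_i

    def forward(self, mu, delta):
        # Equation (12): location of the min corner is affinely transformed.
        new_mu = self.theta_min * mu + self.bias_min
        # Equation (13): softplus keeps the max corner above the min corner
        # and guards against overflow of the expected volume.
        new_len = F.softplus(self.theta_len * delta + self.bias_len)
        return new_mu, new_len   # max-corner location is new_mu + new_len

phi = BoxToBox(dim=10)
mu, delta = torch.randn(10), torch.randn(10)   # ISA box f1(a) of some concept
hp_mu, hp_len = phi(mu, delta)                 # its HASPART box, f2 = phi ∘ f1
```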
" }, { "heading": "4.2 CONNECTION TO TWO-BOX MODEL", "text": "There are two main differences between our model and the model introduced in Patel et al. (2020) which, for reasons which will become clear, we call the TWO-BOX MODEL. First, the TWO-BOX MODEL preceded the Gumbel box model, and instead uses the soft box model from Li et al. (2019). To ensure that the benefits of our model are not conflated with the improvements from using Gumbel boxes, we also train a model using the method from Patel et al. (2020) which makes use of Gumbel boxes.\nSecond, both models use different boxes to represent different relations; however, the model from Patel et al. (2020) allows both boxes to have free parameters, relying on containment between boxes representing different relations to pass information. Under the framework we have presented, this would be equivalent to learning two functions, f1 and f2, both of which have separate parameters for the min and side length of the boxes of each entity. While such a model has significant representational capacity, we would expect it to suffer greatly from a lack of generalization. We evaluate this hypothesis by creating a second test, discussed in Section 5.4, which removes edges from the transitive reduction of the training data." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DATASET", "text": "We demonstrate the efficacy of the BOX-TRANSFORM MODEL on the joint hierarchy created by Patel et al. (2020) from WordNet (Miller, 1995). In this dataset, hypernymy (ISA) and meronymy (HASPART) are two hierarchical relations of WordNet over noun synsets, of which there are 82,114 in total. Individually, the hypernymy part of the hierarchy contains 82,114 nodes (i.e., all the synsets) with 84,363 edges in its transitive reduction, and the meronymy portion has 11,235 synsets (out of the 82,114) with 9,678 edges in its transitive reduction.\nJoint Hierarchy. In order to evaluate performance on the joint hierarchy, Patel et al. (2020) created composition edges using the inter-relational semantics between hypernymy and meronymy. In particular, they use the following composition rule:\nISA ∘ ⋯ ∘ ISA (0, 1, or 2 times) ∘ HASPART ∘ ISA ∘ ⋯ ∘ ISA (0, 1, or 2 times) = HASPART. (14)\nTo illustrate from Figure 1, <Dove ISA Bird> ∧ <Bird HASPART Wing> ∧ <Wing ISA Appendage> implies that <Dove HASPART Appendage>. In total, 189,613 composition edges are generated by the method described above for evaluation of the model on the joint hierarchy task. For each test/validation edge, a fixed set of negative samples of size 10 was generated by corrupting the head and tail 5 times each.\nWe provide the overall statistics for the dataset in Table 1. We have also created a second training dataset which further removes part of the transitive reduction to evaluate the models on their generalization capability (refer to Sections 5.4 & 5.5). The dataset used for those sections has different statistics, which are reported in the respective sections." }, { "heading": "5.2 BASELINE MODELS AND TRAINING DETAILS", "text": "We compare the BOX-TRANSFORM MODEL against geometric embedding methods as well as knowledge base completion methods. We give a brief description of each baseline below.\n1. TWO-BOX MODEL: As mentioned in Section 4.2, Patel et al. (2020) extend the idea of box embeddings (Vilnis et al., 2018; Li et al., 2019) to model joint hierarchies by defining two boxes per node, one for each relation.\n2. Order Embeddings: Vendrov et al. (2016) treat each concept as an axis-parallel cone in the positive orthant. We considered two different cone parameters for each entity, following the TWO-BOX MODEL (Patel et al., 2020).\n3. Poincaré Embeddings (Nickel & Kiela, 2017) & Hyperbolic Entailment Cones (Ganea et al., 2018b): Tree-structured data are best captured in hyperbolic space (Chamberlain et al., 2017). Thus, in Nickel & Kiela (2017) the authors learn embeddings on the n-dimensional Poincaré ball. For similar reasons, Ganea et al. (2018b) use hyperbolic space; however, they extend the hyperbolic point embeddings to entailment cones. Again, for these models, two separate sets of parameters are considered for each entity.\n4. TransE and RotatE (Bordes et al., 2013; Sun et al., 2019): This task can be posed as knowledge base completion for a KB with only two relations.
Thus we evaluate TransE and RotatE, two simple yet effective knowledge base embedding methods which achieve state-of-the-art results on many knowledge base embedding tasks. Unlike the two-box model (Patel et al., 2020) or the other baselines, these methods have a shared representation for each entity, and thus they are expected to generalize better on missing edges.\n5. Hyperbolic KG Embeddings (Balazevic et al., 2019b; Chami et al., 2020): We also compared our method against recently proposed KG embedding methods based on hyperbolic embeddings, which model hierarchical structures present in KGs. The Multi-Relational Poincaré model (MuRP) (Balazevic et al., 2019b) learns relation-specific transforms of the entities, which are embedded in hyperbolic space. RotH (Chami et al., 2020) parameterizes the relation-specific transformations as hyperbolic rotations, whereas AttH (Chami et al., 2020) combines hyperbolic reflection and rotation using attention. We provide more training-related details in Appendix A.1." }, { "heading": "5.3 COMPOSITION EDGES FROM TRANSITIVE REDUCTION", "text": "In order to demonstrate the ability of a model to capture partially ordered (tree-like) data, most embedding methods (Ganea et al., 2018b; Nickel & Kiela, 2017; Patel et al., 2020) train on the transitive reduction and predict on the transitive closure. For an evaluation on modeling the joint hierarchy, therefore, it is natural to train the models only on the transitive reduction of hypernymy and meronymy and evaluate on the composition edges, as done in Patel et al. (2020). We report the F1 score (with 1:10 negatives) for those edges in Table 2. The threshold used for the classification is determined by maximizing the F1 score on the validation set.\nTable 3: Test F1 scores (%) of various methods for generalization capability.\nMethod | F1 score\nPoincaré Embeddings | 33.5\nHyperbolic Entailment Cones | 36.0\nTransE | 57.0\nRotatE | 55.0\nOrder Embeddings | 54.5\nMuRP | 20.1\nAttH | 27.0\nRotE | 48.8\nRotH | 46.7\nTWO-BOX MODEL (with GumbelBox) | 58.9\nBOX-TRANSFORM MODEL | 63.9\nFrom Table 2, we observe that the BOX-TRANSFORM MODEL outperforms the other baselines by a significant margin. As noted in Patel et al. (2020), and as we also observe in Section 5.4, the Poincaré embeddings and hyperbolic entailment cones face difficulty in learning when presented only with transitive reduction edges. However, the hyperbolic KG methods AttH and RotH are able to learn the composition edges to a certain extent. The performance gain of RotH over its Euclidean counterpart RotE can be attributed to its inductive bias toward modeling hierarchies. The box embedding method proposed by Patel et al. (2020) performs on par with the order embedding method. However, using the GumbelBox formulation (Dasgupta et al., 2020), we observe a significant performance boost, as GumbelBox improves the local identifiability of the parameter space.\nStill, the capability of the BOX-TRANSFORM MODEL to benefit from shared cross-hierarchy features allows it to substantially outperform even this improved version of the TWO-BOX MODEL. This is likely due to the fact that the inductive bias provided by the transformation is more in line with the data; the model can benefit from the containments learned as a result of the ISA relation, and learn a HASPART transformation which potentially preserves these containments.
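The threshold selection used throughout this section is a one-dimensional sweep; the sketch below is our own illustration with synthetic scores (roughly the fixed 1:10 positive-to-negative ratio), not the released evaluation code.

```python
import numpy as np

def best_f1_threshold(scores, labels):
    """Sweep candidate cutoffs over the observed scores and return the one
    maximizing F1 on the validation set (labels: 1 = true edge)."""
    order = np.argsort(-scores)          # sort descending by score
    labels = labels[order]
    tp = np.cumsum(labels)               # true positives if we cut after rank k
    fp = np.cumsum(1 - labels)           # false positives at the same cut
    fn = labels.sum() - tp               # positives below the cut
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    k = int(np.argmax(f1))
    return scores[order][k], f1[k]

# e.g. scores = E[vol(phi(f1(head)) ∩ f1(tail))] / E[vol(phi(f1(head)))]
rng = np.random.default_rng(0)
labels = (rng.random(1100) < 1 / 11).astype(int)          # ~1:10 pos:neg
scores = labels * rng.random(1100) + 0.3 * rng.random(1100)
thr, f1 = best_f1_threshold(scores, labels)
print(f"threshold={thr:.3f}, val F1={f1:.3f}")
```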
" }, { "heading": "5.4 LEARNING FROM INCOMPLETE TRANSITIVE REDUCTION", "text": "In Patel et al. (2020), and also in our previous experiment, we already observed that box embedding methods are highly capable of recovering the transitive closure (in our case, composition edges) given the transitive reduction only. In this experiment, we train with even less of the transitive reduction, moving some of these edges to the test set. Now, reconstruction of the closure and the composition edges requires models to generalize over the missing parts of the graph. We train on 9,175 meronymy edges and 80,372 hypernymy edges and test/validate on an aggregated pool of 251,783 edges. Please refer to Appendix A.2 for details on dataset creation and statistics. From Table 3, we observe that the BOX-TRANSFORM MODEL outperforms all baseline methods by a large margin. Although the two-box model performs worse than the BOX-TRANSFORM MODEL, it is able to beat the other baselines. Of the two knowledge base completion methods, TransE performs best and achieves performance comparable to the two-box model. Although the hyperbolic KG embeddings were able to perform well on the composition edges, their generalization performance is relatively lower than that of the other KG embedding methods. We also observe that the RotE model, which was underperforming on composition edges, outperforms RotH by some margin in this generalization setting. We select the top three best-performing methods for further analysis on each type of edge in the graph." }, { "heading": "5.5 PERFORMANCE ANALYSIS ON DIFFERENT SPLITS", "text": "Training on a subset of the transitive reduction showed that our model can generalize to composition edges even in the absence of the edges essential to making such predictions. We further perform an evaluation analysis using the same training data with the best-performing model selected by maximizing the F1 score on composition edges. We evaluate the model performance on the transitive closure of each hierarchy (ISA and HASPART), and on the composition edges of the joint hierarchy.\nFor each single hierarchy, some edges are removed from the transitive reduction X to create the incomplete transitive reduction training data X1. Evaluating on the transitive closure of X, denoted TC(X), directly evaluates the model's performance on each hierarchy. This can be further divided into three categories: a split that evaluates the model's ability to capture the transitive closure of X1, TC(X1); a split that evaluates the model's generalization ability on the missing edges X − X1; and a split that evaluates the model's extended generalization ability on TC(X) − TC(X1).\nComposition edges from the joint hierarchy can be analyzed the same way. COMP(X, Y) represents all the composition edges in the full WordNet dataset, composed from the ISA transitive reduction X and the HASPART transitive reduction Y. It can be further divided into two categories: data that evaluate the model's overfitting ability to capture COMP(X1, Y1), where X1 and Y1 are the corresponding training ISA and HASPART data from Section 5.4; and data that evaluate the model's generalization ability on learning logical operations, COMP(X, Y) − COMP(X1, Y1). The detailed statistics on each of these splits are provided in Appendix A.3.
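These evaluation buckets can be materialized with straightforward set algebra over transitive closures; the sketch below is our own bookkeeping illustration, assuming each hierarchy is a DAG so that networkx's transitive_closure applies.

```python
import networkx as nx

def split_edges(tr_full, tr_train):
    """tr_full: transitive reduction X; tr_train: incomplete reduction X1.
    Returns the three per-hierarchy evaluation buckets from Section 5.5."""
    X = nx.DiGraph(tr_full)
    X1 = nx.DiGraph(tr_train)
    tc_x = set(nx.transitive_closure(X).edges())
    tc_x1 = set(nx.transitive_closure(X1).edges())
    return {
        "overfit TC(X1)": tc_x1,
        "missing reduction X - X1": set(X.edges()) - set(X1.edges()),
        "extended generalization TC(X) - TC(X1)": tc_x - tc_x1,
    }

# Tiny ISA chain a ⪯ b ⪯ c ⪯ d with the edge (b, c) held out from training.
buckets = split_edges([("a", "b"), ("b", "c"), ("c", "d")],
                      [("a", "b"), ("c", "d")])
for name, edges in buckets.items():
    print(name, sorted(edges))
```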
The evaluation dataset is created by randomly generating negative examples with a pos:neg ratio of 1:10. We select the top 3 best models from Section 5.4, then choose the threshold that maximizes the F1 score on the validation data of each split and report the test F1. As shown in Tables 4 and 5, our model performs the best overall across the different dataset splits. The BOX-TRANSFORM MODEL performs much better on the full transitive closure of ISA and on all the composition edges. In general, the BOX-TRANSFORM MODEL outperforms on transitive closure and composition edges by a large margin in all overfitting settings. TransE does better on predicting removed edges from the transitive reduction (which serves more as an analysis of the model's capability, as it is not a typical evaluation for partial order completion); however, we note that our model does surprisingly well on the ISA missing edges, which we attribute to the shared semantics between the hierarchies made possible by the box-to-box transformation." }, { "heading": "6 CONCLUSION", "text": "We proposed a box-to-box transformation which facilitates sharing of learned features across hierarchies. We demonstrate that the BOX-TRANSFORM MODEL is capable of excellent performance when predicting compositional edges across a joint hierarchy. Furthermore, the model does an excellent job of modeling the transitive closure of each relation independently. In the future, extending from two relations to modeling multiple relations is essential in order to obtain more generalization from hierarchical ISA edges." }, { "heading": "A APPENDIX", "text": "A.1 TRAINING DETAILS\nIn our experiments, we have kept the number of parameters the same across all methods. We use 5-dimensional box embeddings for the TWO-BOX MODEL (Patel et al., 2020). Since box embeddings are specified using a min coordinate and a side length in each dimension, we compare with 10-dimensional order embeddings, Poincaré embeddings, and hyperbolic entailment cones. However, since the above-mentioned methods use two sets of parameters for each node, we use 20-dimensional vectors for RotatE and TransE to account for that. Our BOX-TRANSFORM MODEL uses 10-dimensional box embeddings for the same reason.\nHyperparameter ranges: We use a Bayesian hyperparameter optimizer with the Hyperband algorithm for all methods, via the web interface of Biewald (2020). The hyperparameter ranges are: Gumbel β ∈ [0.001, 3], softplus temperature for box volume T ∈ [1, 30], lr ∈ [0.0005, 1], batch size ∈ {8096, 2048, 1024, 512}, and number of negative samples ∈ [2, 30] for all methods. For max-margin training we searched over the margin ∈ [1, 50]. The best hyperparameters for our method and a few competitive baselines are provided in the corresponding config files along with the source code. We will make the code public after the anonymity period.\nA.2 DATASET CREATION STEPS FROM SECTION 5.4\nIn order to remove edges from the transitive reductions, we iterate through the transitive reduction edges of meronymy. With probability 0.5 we choose an edge for further processing. For each chosen HASPART edge, we select an outgoing ISA edge and pair them. We drop the ISA edge from the pair with probability 0.9 (the ratio of HASPART to ISA transitive reduction edges) and drop the HASPART edge in case the ISA edge is not dropped.\nThis procedure ensures that all the edge removals happen around the composition edges; thus, the results reflect the model's true capacity to generalize well on this joint hierarchy task. We evaluate the model on the composition edges, the removed reduction edges, and the closure edges, 251,783 edges in total, which we split into two parts for validation and test. In Table 3, we report the F1 score on this aggregated evaluation data with 1:10 fixed true negatives.
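Because the removal procedure above is easy to misread, here is our pseudocode-level rendering of it in Python. The description leaves ambiguous which endpoint of the HASPART edge supplies the paired outgoing ISA edge; we use the tail node here, and the container names are hypothetical.

```python
import random

def make_incomplete_reduction(haspart_edges, isa_out, seed=0):
    """haspart_edges: list of (head, tail) HASPART reduction edges.
    isa_out[node]: list of that node's outgoing ISA reduction edges.
    Returns the ISA and HASPART edges to drop, per Appendix A.2."""
    rng = random.Random(seed)
    dropped_isa, dropped_haspart = set(), set()
    for head, tail in haspart_edges:
        if rng.random() >= 0.5:
            continue                        # edge not chosen for processing
        candidates = isa_out.get(tail, [])  # ambiguous endpoint: we use tail
        if not candidates:
            continue
        isa_edge = rng.choice(candidates)   # pair the HASPART edge with it
        if rng.random() < 0.9:              # ~ratio of HASPART to ISA reductions
            dropped_isa.add(isa_edge)
        else:                               # otherwise drop the HASPART edge
            dropped_haspart.add((head, tail))
    return dropped_isa, dropped_haspart
```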
A.3 DETAILS OF THE SPLITS FROM SECTION 5.5\nWe plot 2-dimensional box embeddings to inspect the quality of our proposed BOX-TRANSFORM MODEL. We use the box embedding parameters of the best-performing model from the experiment in Section 5.3 (Table 2). Note that the model is 10-dimensional; however, for a perfectly trained model, we should observe containment along each dimension. Thus, we pick two dimensions at random out of the ten to visualize the embeddings.\nFrom the example, the facts <Car, HASPART, CarDoor> and <CarDoor, ISA, Door> imply <Car, HASPART, Door>. We observe from Figure 2 that the HASPART transformation of "Car Door" and of "Door" successfully encloses the ISA transformation of "Car"; thus our model is able to infer that composition edge. All the other composition edges, such as <Sedan, HASPART, CarDoor>, <Sedan, HASPART, Door>, etc., can be similarly inferred from the visualization." } ]
2,020
null
SP:03895ea221824f6e57ea88ec7332efbbec207c7d
[ "The authors propose wavelets for both separable and joint spatio-temporal graphs. And then the authors design a spatio-temporal graph scattering transform (ST-GST), which is a non-trainable counterpart of spatio-temporal graph convolutional networks and a nonlinear version of spatiotemporal graph wavelets. Finally, the proposed SF-GST is conducted by experiments, and the results show that it appears to be effective. However, The authors did not give the explanation of the motivation about why did the STG should be scattered by wavelets. Besides, from the results in Table 1, the joint versions based on the proposed method. i.e., Joint Kronecker, Joint Cartesian and Joint Strong, have not achieved the satisfied performance, though only separable versions performs best." ]
Although spatio-temporal graph neural networks have achieved great empirical success in handling multiple correlated time series, they may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data. Furthermore, spatio-temporal graph neural networks lack theoretical interpretation. To address these issues, we put forth a novel mathematically designed framework to analyze spatio-temporal data. Our proposed spatio-temporal graph scattering transform (ST-GST) extends traditional scattering transforms to the spatio-temporal domain. It performs iterative applications of spatio-temporal graph wavelets and nonlinear activation functions, which can be viewed as a forward pass of spatio-temporal graph convolutional networks without training. Since all the filter coefficients in ST-GST are mathematically designed, it is promising for real-world scenarios with limited training data, and it also allows for a theoretical analysis, which shows that the proposed ST-GST is stable to small perturbations of input signals and structures. Finally, our experiments show that i) ST-GST outperforms spatio-temporal graph convolutional networks by an increase of 35% in accuracy on the MSR Action3D dataset; ii) it is better and computationally more efficient to design the transform based on separable spatio-temporal graphs than on joint ones; and iii) the nonlinearity in ST-GST is critical to empirical performance.
[ { "affiliations": [], "name": "SCATTERING TRANSFORM" }, { "affiliations": [], "name": "Chao Pan" }, { "affiliations": [], "name": "Siheng Chen" } ]
[ { "authors": [ "Ali N. Akansu", "Richard A. Haddad" ], "title": "Multiresolution signal decomposition: transforms, subbands, and wavelets", "venue": "Academic press,", "year": 2000 }, { "authors": [ "Martin Anthony", "Peter L Bartlett" ], "title": "Neural network learning: Theoretical foundations", "venue": "cambridge university press,", "year": 2009 }, { "authors": [ "Mustafa Bilgic", "Lilyana Mihalkova", "Lise Getoor" ], "title": "Active learning for networked data", "venue": "In Proceedings of the 27th international conference on machine learning,", "year": 2010 }, { "authors": [ "Paulo Vinicius Koerich Borges", "Nicola Conci", "Andrea Cavallaro" ], "title": "Video-based human behavior understanding: A survey", "venue": "IEEE transactions on circuits and systems for video technology,", "year": 1993 }, { "authors": [ "Joan Bruna", "Stéphane Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Ke Cheng", "Yifan Zhang", "Xiangyu He", "Weihan Chen", "Jian Cheng", "Hanqing Lu" ], "title": "Skeleton-based action recognition with shift graph convolutional network", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Fernando Gama", "Alejandro Ribeiro", "Joan Bruna" ], "title": "Diffusion scattering transforms on graphs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Fernando Gama", "Alejandro Ribeiro", "Joan Bruna" ], "title": "Stability of graph scattering transforms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Feng Gao", "Guy Wolf", "Matthew Hirn" ], "title": "Geometric scattering for graph data analysis", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Francesco Grassi", "Andreas Loukas", "Nathanaël Perraudin", "Benjamin Ricaud" ], "title": "A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "David K Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Yue Hu", "Siheng Chen", "Ya Zhang", "Xiao Gu" ], "title": "Collaborative motion prediction via neural motion message passing", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Vassilis N Ioannidis", "Siheng Chen", "Georgios B Giannakis" ], "title": "Pruned graph scattering transforms", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jiun-Yu Kao", "Antonio Ortega", "Dong Tian", "Hassan Mansour", "Anthony Vetro" ], "title": "Graph based skeleton modeling for human activity analysis", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2019 }, { "authors": [ "Tae Soo Kim", "Austin Reiter" ], "title": "Interpretable 3d human action analysis with temporal convolutional networks", "venue": "IEEE conference on computer vision and pattern recognition workshops (CVPRW),", "year": 2017 }, { "authors": [ "Ron Levie", "Elvin Isufi", "Gitta Kutyniok" ], "title": "On the transferability of spectral graph filters", "venue": "13th International conference on Sampling Theory and 
Applications (SampTA),", "year": 2019 }, { "authors": [ "Maosen Li", "Siheng Chen", "Xu Chen", "Ya Zhang", "Yanfeng Wang", "Qi Tian" ], "title": "Actional-structural graph convolutional networks for skeleton-based action recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wanqing Li", "Zhengyou Zhang", "Zicheng Liu" ], "title": "Action recognition based on a bag of 3d points", "venue": "In 2010 IEEE Computer Society Conference on Computer Vision and Pattern RecognitionWorkshops,", "year": 2010 }, { "authors": [ "Jun Liu", "Amir Shahroudy", "Dong Xu", "Gang Wang" ], "title": "Spatio-temporal lstm with trust gates for 3d human action recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Jun Liu", "Amir Shahroudy", "Mauricio Perez", "Gang Wang", "Ling-Yu Duan", "Alex C. Kot" ], "title": "Ntu rgb+d 120: A large-scale benchmark for 3d human activity understanding", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Ziyu Liu", "Hongwen Zhang", "Zhenghao Chen", "Zhiyong Wang", "Wanli Ouyang" ], "title": "Disentangling and unifying graph convolutions for skeleton-based action recognition", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Stéphane Mallat" ], "title": "Group invariant scattering", "venue": "Communications on Pure and Applied Mathematics,", "year": 2012 }, { "authors": [ "Vishal Monga", "Yuelong Li", "Yonina C Eldar" ], "title": "Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing", "venue": null, "year": 1912 }, { "authors": [ "Aliaksei Sandryhaila", "Jose MF Moura" ], "title": "Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure", "venue": "IEEE Signal Processing Magazine,", "year": 2014 }, { "authors": [ "Santiago Segarra", "Antonio G Marques", "Gonzalo Mateos", "Alejandro Ribeiro" ], "title": "Network topology inference from spectral templates", "venue": "IEEE Transactions on Signal and Information Processing over Networks,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Shaked Shammah", "Amnon Shashua" ], "title": "Safe, multi-agent, reinforcement learning for autonomous driving", "venue": "arXiv preprint arXiv:1610.03295,", "year": 2016 }, { "authors": [ "David I Shuman", "Sunil K Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "IEEE signal processing magazine,", "year": 2013 }, { "authors": [ "David I Shuman", "Christoph Wiesmeyr", "Nicki Holighaus", "Pierre Vandergheynst" ], "title": "Spectrumadapted tight graph wavelet and vertex-frequency frames", "venue": "IEEE Transactions on Signal Processing,", "year": 2015 }, { "authors": [ "Jiang Wang", "Zicheng Liu", "Ying Wu", "Junsong Yuan" ], "title": "Mining actionlet ensemble for action recognition with depth cameras", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Sijie Yan", "Yuanjun Xiong", "Dahua Lin" ], "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Rui 
Zhao", "Wanru Xu", "Hui Su", "Qiang Ji" ], "title": "Bayesian hierarchical dynamic model for human action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dongmian Zou", "Gilad Lerman" ], "title": "Graph convolutional neural networks via scattering", "venue": "Applied and Computational Harmonic Analysis,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Processing and learning from spatio-temporal data have received increasing attention recently. Examples include: i) skeleton-based human action recognition based on a sequence of human poses (Liu et al. (2019)), which is critical to human behavior understanding (Borges et al. (2013)), and ii) multi-agent trajectory prediction (Hu et al. (2020)), which is critical to robotics and autonomous driving (Shalev-Shwartz et al. (2016)). A common pattern across these applications is that data evolves in both spatial and temporal domains. This paper aims to analyze this type of data by developing novel spatio-temporal graph-based data modeling and operations.\nSpatio-temporal graph-based data modeling. Graphs are often used to model data where irregularly spaced samples are observed over time. Good spatio-temporal graphs can provide informative priors that reflect the internal relationships within data. For example, in skeleton-based human action recognition, we can model a sequence of 3D joint locations as data supported on skeleton graphs across time, which reflects both the human physical constraints and temporal consistency (Yan et al. (2018)). Recent studies on modeling spatio-temporal graphs have followed either joint or separable processing frameworks. Joint processing is based on constructing a single spatio-temporal graph and processing (e.g., filtering) via operations on this graph (Kao et al. (2019); Liu et al. (2020)). In contrast, a separable processing approach works separately, and possibly with different operators, across the space and time dimension. In this case, independent graphs are used for space and\n∗This work was mainly done while Chao Pan and Siheng Chen were working at Mitsubishi Electric Research Laboratories (MERL).\ntime (Yan et al. (2018); Cheng et al. (2020)). However, no previous work thoroughly analyzes and compares these two constructions. In this work, we mathematically study these two types of graphs and justify the benefits of separable processing from both theoretical and empirical aspects.\nSpatio-temporal graph-based operations. Graph operations can be performed once the graph structure is given. Some commonly used graph operations include the graph Fourier transform (Shuman et al. (2013)), and graph wavelets (Hammond et al. (2011)). It is possible to extend those operations to the spatio-temporal graph domain. For example, Grassi et al. (2017) developed the short time-vertex Fourier transform and spectrum-based time-vertex wavelet transform. However, those mathematically designed, linear operations show some limitations in terms of empirical performances. In comparison, many recent deep neural networks adopt trainable graph convolution operations to analyze spatio-temporal data (Yan et al. (2018); Liu et al. (2020)). However, most networks are designed through trial and error. It is thus hard to explain the rationale behind empirical success and further improve the designs (Monga et al. (2019)). In this work, to fill in the gap between mathematically designed linear transforms and trainable spatio temporal graph neural networks, we propose a novel spatio-temporal graph scattering transform (ST-GST), which is a mathematically designed, nonlinear operation.\nSpecifically, to characterize the spatial and temporal dependencies, we present two types of graphs corresponding to joint and separable designs. We then construct spatio-temporal graph wavelets based on each of these types of graphs. 
We next propose the framework of ST-GST, which adopts spatio-temporal graph wavelets followed by a nonlinear activation function as a single scattering layer. All the filter coefficients in ST-GST are mathematically designed beforehand and no training is required. We further show that i) a design based on separable spatio-temporal graphs is more flexible and computationally more efficient than a joint design; and ii) ST-GST is stable to small perturbations on both input spatio-temporal graph signals and structures. Finally, our experiments on skeleton-based human action recognition show that the proposed ST-GST outperforms spatio-temporal graph convolutional networks by 35% in accuracy on the MSR Action3D dataset.\nWe summarize the main contributions of this work as follows:\n• We propose wavelets for both separable and joint spatio-temporal graphs. We show that it is more flexible and computationally efficient to design wavelets based on separable spatio-temporal graphs;\n• We propose a novel spatio-temporal graph scattering transform (ST-GST), which is a non-trainable counterpart of spatio-temporal graph convolutional networks and a nonlinear version of spatio-temporal graph wavelets. We also theoretically show that ST-GST is robust and stable in the presence of small perturbations on both input spatio-temporal graph signals and structures;\n• For skeleton-based human action recognition, our experiments show that: i) ST-GST can achieve similar or better performance than spatio-temporal graph convolutional networks and other non-deep-learning approaches on small-scale datasets; ii) separable spatio-temporal scattering works significantly better than joint spatio-temporal scattering; and iii) ST-GST significantly outperforms spatio-temporal graph wavelets because of the nonlinear activation function." }, { "heading": "2 RELATED WORK", "text": "Scattering transforms. Convolutional neural networks (CNNs) use nonlinearities coupled with trained filter coefficients and are well known to be hard to analyze theoretically (Anthony & Bartlett, 2009). As an alternative, Mallat (2012) and Bruna & Mallat (2013) propose scattering transforms, which are non-trainable versions of CNNs. Under admissible conditions, the resulting transform enjoys both great performance in image classification and appealing theoretical properties. These ideas have been extended to the graph domain (Gama et al., 2019a; Zou & Lerman, 2020; Gao et al., 2019; Ioannidis et al., 2020). Specifically, the graph scattering transform (GST) proposed in Gama et al. (2019a) iteratively applies predefined graph filter banks and an element-wise nonlinear activation function. In this work, we extend the classical scattering transform to the spatio-temporal domain and provide a new mathematically designed transform to handle spatio-temporal data. The key difference between GST and our proposed ST-GST lies in the graph filter bank design, where ST-GST needs to handle both the spatial and temporal domains.\nSpatio-temporal neural networks. Deep neural networks have been adapted to operate on the spatio-temporal domain. For example, Liu et al. (2019) use an LSTM to process time-series information, while ST-GCN (Yan et al., 2018) combines a graph convolution layer and a temporal convolution layer as a unit computational block in the network architecture. However, those networks all require a huge amount of high-quality labeled data, and training them is computationally expensive, which may make them impractical for many real-world scenarios.
Furthermore, many architectures are designed through trial and error, making it hard to justify the design choices and further improve them. In this work, the proposed ST-GST is a nonlinear transform with a forward procedure similar to that of ST-GCN. However, ST-GST does not require any training, which is useful in many applications where only limited training data is available. Furthermore, since all filter coefficients in ST-GST are predefined, it allows us to perform theoretical analysis, and the related conclusions potentially shed some light on the design of spatio-temporal networks.\nSkeleton-based human action recognition. Conventional skeleton-based action recognition models learn semantics based on hand-crafted features (Wang et al., 2012). To handle time-series information, some recurrent-neural-network-based models have been proposed to capture the temporal dependencies between consecutive frames (Kim & Reiter, 2017). Recently, graph-based approaches have gained in popularity while achieving excellent performance (Yan et al., 2018; Li et al., 2019). In this work, our experiments focus on this task and show that ST-GST outperforms state-of-the-art spatio-temporal graph neural networks, like MS-G3D (Liu et al., 2020), on small-scale datasets." }, { "heading": "3 SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM", "text": "In this section, we first define spatio-temporal graph structures and signals. We next design our spatio-temporal graph wavelets. Finally, we present ST-GST." }, { "heading": "3.1 SPATIO-TEMPORAL GRAPH STRUCTURES AND SIGNALS", "text": "Spatio-temporal data can be represented as a matrix X ∈ R^{N×T}, where N is the number of spatial positions and T is the number of time stamps. In this matrix, each row is a time series for a spatial node, and each column is a spatial signal at a certain time stamp. Note that the indexing of the spatial information can be arbitrary: we will associate each spatial location to a vertex on the spatial graph, and the edges will provide information about the relative position of the nodes. We can reshape the matrix to form a vector x of length NT, where the element x_{(s,t)} := x_{(s−1)T+t} is the feature value corresponding to the s-th vertex at time t. To construct a spatio-temporal graph, we create connections based on physical constraints. For example, for skeleton-based action recognition, the spatial graph is the human skeleton graph, reflecting bone connections; see Fig. 1(a); and the temporal graph is a line graph connecting consecutive time stamps; see Fig. 1(b).\nAs a starting point, we choose a spatial graph Gs = (Vs, Es, As) with |Vs| = N, reflecting the graph structure of each column in X, and a temporal graph Gt = (Vt, Et, At) with |Vt| = T, reflecting the graph structure of each row in X. The separable spatio-temporal design is achieved by processing the columns and rows of X separately based on their respective graphs.\nAs an alternative, a product graph, denoted as G = Gs ⋄ Gt = (V, E, A), can be constructed to unify the relations in both the spatial and temporal domains, allowing us to process data jointly across space and time. The product graph Gs ⋄ Gt has |V| = NT nodes and an appropriately defined NT × NT adjacency matrix A. The ⋄ operation interweaves the two graphs to form a unifying graph structure. The edge weight A_{(s1,t1),(s2,t2)} := A_{(s1−1)T+t1, (s2−1)T+t2} characterizes the relation, such as similarity or dependency, between the s1-th spatial node at the t1-th time stamp and the s2-th spatial node at the t2-th time stamp.
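The flattening convention above is just a row-major reshape; the following snippet (ours) verifies the index formula with 1-based (s, t) and builds the temporal line-graph adjacency of Fig. 1(b).

```python
import numpy as np

N, T = 3, 4                          # spatial nodes, time stamps
X = np.arange(N * T).reshape(N, T)   # row s is the time series of joint s

x = X.reshape(-1)                    # row-major vec of X
s, t = 2, 3                          # 1-based indices, as in the text
# x_{(s,t)} = x_{(s-1)T+t} in 1-based notation, i.e. index (s-1)*T + (t-1) here
assert x[(s - 1) * T + (t - 1)] == X[s - 1, t - 1]

# Temporal line-graph adjacency A_t: consecutive time stamps are connected.
A_t = np.zeros((T, T))
idx = np.arange(T - 1)
A_t[idx, idx + 1] = A_t[idx + 1, idx] = 1.0
print(A_t)
```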
There are three commonly used product graphs (Sandryhaila & Moura, 2014): i) the Kronecker product, G = Gs ⊗ Gt, with graph adjacency matrix A = As ⊗ At, where ⊗ denotes the Kronecker product of matrices; see Fig. 1(c); ii) the Cartesian product, G = Gs × Gt, with A = As ⊗ I_T + I_N ⊗ At; see Fig. 1(d); and iii) the strong product, G = Gs ⊠ Gt, with A = As ⊗ At + As ⊗ I_T + I_N ⊗ At, which can be viewed as a combination of the Kronecker and Cartesian products; see Fig. 1(e). The joint spatio-temporal design is achieved based on a product graph.\nIn this paper, we consider designs based on both separable graphs and product graphs." }, { "heading": "3.2 SPATIO-TEMPORAL GRAPH FILTERING", "text": "We now show two graph filter designs, separable and joint filtering, based on the corresponding spatio-temporal graphs we just described. For each design, we first define the spatio-temporal graph shift, which is the most elementary graph filter and defines how information propagates in a spatio-temporal graph. We then propose spatio-temporal graph filtering in both the graph vertex and the graph spectral domains.\nSeparable graph filters. Given the spatial graph Gs = (Vs, Es, As) and the temporal graph Gt = (Vt, Et, At), let the spatial graph shift be Ss = As and the temporal graph shift be St = At.¹ For simplicity, we focus on symmetric graph shifts. For a spatio-temporal graph signal, spatial and temporal graph filtering work as H(Ss)X = Σ_{p=0}^{P−1} h_p Ss^p X and X G(St)ᵀ = X (Σ_{q=0}^{Q−1} g_q St^q)ᵀ, where h_p and g_q are the spatial and temporal filter coefficients, respectively. In each modality, the graph filter is a polynomial of the graph shift. The polynomial orders P and Q control the lengths of the filters in the spatial and temporal modalities, respectively. Note that these two values can be chosen to be different, which provides additional design flexibility. Then, a separable spatio-temporal graph filtering operation can be defined as\nH(Ss) X G(St)ᵀ := (Σ_{p=0}^{P−1} h_p Ss^p) X (Σ_{q=0}^{Q−1} g_q St^q)ᵀ = (H(Ss) ⊗ G(St)) x, (1)\nwhere the second equality follows from the property vec(M₁ X M₂ᵀ) = (M₁ ⊗ M₂) x. We can also represent the filtering process in the graph spectral domain. Let the eigendecompositions of the spatial and temporal graph shifts be Ss = Vs Λs Vsᵀ and St = Vt Λt Vtᵀ, respectively, where Vs ∈ R^{N×N} and Vt ∈ R^{T×T} form the spatial and temporal graph Fourier bases. The elements along the diagonals of Λs and Λt represent the spatial and temporal graph frequencies. We have H(Ss)X = Vs (Σ_{p=0}^{P−1} h_p Λs^p) Vsᵀ X = Vs H(Λs) Vsᵀ X and X G(St)ᵀ = X Vt (Σ_{q=0}^{Q−1} g_q Λt^q) Vtᵀ = X Vt G(Λt) Vtᵀ. Letting V = Vs ⊗ Vt, the spectral representation of the separable spatio-temporal graph filtering is then (Vs H(Λs) Vsᵀ) X (Vt G(Λt) Vtᵀ)ᵀ = V (H(Λs) ⊗ G(Λt)) Vᵀ x.\nJoint graph filters. Given the joint graph structure G = Gs ⋄ Gt = (V, E, A), let the spatio-temporal graph shift be S = A. Then, a joint spatio-temporal graph filtering operation can be defined as\nH(S) x = Σ_{k=0}^{K−1} h_k S^k x = V (Σ_{k=0}^{K−1} h_k Λ^k) Vᵀ x = V H(Λ) Vᵀ x, (2)\nwhere h_k is the filter coefficient. The kernel function h(λ) = Σ_{k=0}^{K−1} h_k λ^k is applied to each diagonal element of Λ to obtain H(Λ). Here h(λ) is independent of any specific graph structure, and characterizes the filter response in the graph spectral domain. Note that h = (h_0, ..., h_{K−1}), h(λ), and H(·) are essentially the same object, and are used interchangeably in this paper. It is worth pointing out that all three product graphs share the same form of joint spatio-temporal graph filtering (2) as well as the same graph Fourier basis V = Vs ⊗ Vt.
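Below is a quick numerical check (our own, with tiny random symmetric shifts) of the three product-graph shift matrices and of the vec identity behind equation (1): two small matrix multiplications match one large Kronecker matrix-vector multiplication.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 6
A_s = rng.random((N, N)); A_s = (A_s + A_s.T) / 2   # symmetric spatial shift
A_t = rng.random((T, T)); A_t = (A_t + A_t.T) / 2   # symmetric temporal shift

kron      = np.kron(A_s, A_t)                           # Kronecker product shift
cartesian = np.kron(A_s, np.eye(T)) + np.kron(np.eye(N), A_t)  # Cartesian
strong    = kron + cartesian                            # strong product

# Separable filtering (1): vec(H X G^T) = (H ⊗ G) x under row-major vec,
# which is exactly the x_{(s-1)T+t} ordering used in the text.
X = rng.random((N, T))
H = A_s @ A_s + 0.5 * A_s          # spatial filter, h = (0, 0.5, 1), P = 3
G = np.eye(T) - A_t                # temporal filter, g = (1, -1),    Q = 2
lhs = (H @ X @ G.T).reshape(-1)
rhs = np.kron(H, G) @ X.reshape(-1)
print("vec(H X G^T) == (H ⊗ G) x:", np.allclose(lhs, rhs))
```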
Following from (2), the spectral representation of the joint spatio-temporal graph filtering can be formulated as\nKronecker product: H(S) = V H(Λs ⊗ Λt) Vᵀ,\nCartesian product: H(S) = V H(Λs ⊗ I_T + I_N ⊗ Λt) Vᵀ,\nStrong product: H(S) = V H(Λs ⊗ Λt + Λs ⊗ I_T + I_N ⊗ Λt) Vᵀ.\n¹ Some other choices of a graph shift include the graph Laplacian matrix, the graph transition matrix, and their normalized counterparts. The adjacency matrix is considered here for notational simplicity.\nComparison between separable and joint graph filters. First of all, we stress that both separable and joint spatio-temporal graph filtering share the same Fourier basis, meaning that they share the same frequency space and their difference comes only from the frequency responses.\nSecond, designing filters based on separable spatio-temporal graphs provides additional flexibility. Although it is conceptually simple to design graph filters directly on product graphs, the eigenvalues along the spatial and temporal domains are tied together, making it difficult to adjust the frequency responses independently for the two modalities. Moreover, the two domains are forced to share the same set of filter coefficients and the same filter length. Take a filter defined on the Kronecker product graph as an example. By expanding the term H(Λs ⊗ Λt), we have H(Λs ⊗ Λt) = Σ_{k=0}^{K−1} h_k (Λs ⊗ Λt)^k = Σ_{k=0}^{K−1} h_k (Λs^k ⊗ Λt^k). This shows that the filter coefficients are applied to the product of the spatial and temporal eigenvalues, making it hard to decompose and interpret the functionality of the filter in a single modality. Such limitations make joint filters less practical for spatio-temporal signals, which might have distinct patterns in each of the two modalities. This problem is overcome by separable graph filtering, where different filters are applied to each modality. The flexibility of separable graph filters means that one can design different filters (h and g) with independent filter lengths (P and Q) in the spatial and temporal domains. However, it is worth pointing out that the representation powers of these two formulations do not have a clear relationship in which one is a subset of the other. Consider a joint graph filter designed on the strong product graph with length K = 3. The filter kernel is H(Λs ⊗ Λt + Λs ⊗ I_T + I_N ⊗ Λt) = Σ_{k=0}^{2} h_k (Λs ⊗ Λt + Λs ⊗ I_T + I_N ⊗ Λt)^k. Similarly, the kernel of a separable graph filter with P = Q = 3 can be written as H(Λs) ⊗ G(Λt) = (Σ_{p=0}^{2} h_p Λs^p) ⊗ (Σ_{q=0}^{2} g_q Λt^q). By expanding the expressions and rearranging the coefficients, one can obtain the coefficient matrices of the joint graph filter and the separable graph filter, C1 and C2, respectively; that is,\nC1 = [ h0, h1, h2 ; h1, h1 + 2h2, 2h2 ; h2, 2h2, h2 ],\nC2 = [ h0g0, h0g1, h0g2 ; h1g0, h1g1, h1g2 ; h2g0, h2g1, h2g2 ] = [h0 ; h1 ; h2] [g0, g1, g2],\nwhere the (i, j)-th element is the coefficient of the term Λs^{i−1} ⊗ Λt^{j−1}.\nOn one hand, it is obvious that C2 always has rank 1, while C1 could have rank 1, 2, or 3, so C1 is not a special case of C2. On the other hand, C1 is always a symmetric matrix, but C2 can be either symmetric or non-symmetric depending on the choices of h and g, so C2 is also not a special case of C1. Therefore, the families spanned by the two designs have no simple relationship in which one is a subset of the other. Similar conclusions hold for the Kronecker and Cartesian products.
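The coefficient matrices C1 and C2 are easy to verify numerically; this short sketch (ours) instantiates them for random filter taps and confirms that C2 always has rank one while C1 is symmetric and typically of full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(3)          # joint filter taps, K = 3
g = rng.standard_normal(3)          # temporal taps for the separable case

# (i, j)-th entry: coefficient of Λs^(i-1) ⊗ Λt^(j-1) in the strong-product
# kernel Σ_k h_k (Λs⊗Λt + Λs⊗I + I⊗Λt)^k with K = 3.
C1 = np.array([[h[0], h[1],            h[2]],
               [h[1], h[1] + 2 * h[2], 2 * h[2]],
               [h[2], 2 * h[2],        h[2]]])
C2 = np.outer(h, g)                 # separable kernel: always a rank-1 matrix

print("rank(C1) =", np.linalg.matrix_rank(C1))  # typically 3
print("rank(C2) =", np.linalg.matrix_rank(C2))  # always 1
print("C1 symmetric:", np.allclose(C1, C1.T))   # True by construction
```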
Third, designing based on separable spatio-temporal graphs is computationally more efficient. In a separable graph filtering process, we only need to deal with two small matrix multiplications (1), instead of one large matrix-vector multiplication (2), reducing the computational cost from O(N²T²) to O(NT(N + T)).\nIn short, the joint and separable graph filters are two different design methods for spatio-temporal graphs. Although the representation power of separable graph filters is not necessarily stronger than that of joint ones, the separable design enjoys flexibility, computational efficiency, and a straightforward interpretation. Empirical performance also shows that the separable design outperforms the joint one; see Section 5. Note that this separable design coincides with the basic module widely used in spatio-temporal graph convolutional networks (Li et al., 2019), which consists of one graph convolution layer followed by one temporal convolution layer." }, { "heading": "3.3 SPATIO-TEMPORAL GRAPH WAVELETS", "text": "In time-series analysis and image processing, wavelets are one of the best tools for designing filter banks, allowing us to trade off between time-frequency resolutions and to touch the lower bound of the uncertainty principle of time-frequency representations (Akansu & Haddad, 2000). Inspired by this, we propose spatio-temporal graph wavelets, which include a series of mathematically designed graph filters to provide multiresolution analysis for spatio-temporal graph signals. The proposed spatio-temporal graph wavelets are later used at each layer in the proposed ST-GST framework. Based on the two types of graph structures, we consider two designs: separable and joint wavelets.\nSeparable graph wavelets. Based on the separable spatio-temporal graph filtering (1), we are able to design spatial graph wavelets {H_{j1}(Ss) = Σ_{p=0}^{P−1} h_p^{(j1)} Ss^p}_{j1=1}^{Js} and temporal graph wavelets {G_{j2}(St) = Σ_{q=0}^{Q−1} g_q^{(j2)} St^q}_{j2=1}^{Jt} separately. For each modality, the filter at scale j is defined as H_j(S) = S^{2^{j−1}} − S^{2^j} = S^{2^{j−1}} (I − S^{2^{j−1}}). There are also many other off-the-shelf graph wavelets we can choose from. More discussion about wavelets and their properties can be found in Appendix A. Since the two modalities are designed individually, the number of wavelet scales for each modality can be different. This is important in practice because the number of time samples T is normally larger than the number of spatial nodes N. For each node in the spatio-temporal graph, using different wavelet scales in the two domains allows for more flexibility when diffusing the signal to its neighbors. Based on this construction, when we choose Js and Jt scales for the spatial and temporal domains, respectively, the overall number of scales for the spatio-temporal wavelets is J = Js × Jt.\nJoint graph wavelets. When the joint filtering (2) is chosen, we can directly apply existing graph wavelet designs, such as the spectral graph wavelet transform (Hammond et al., 2011)." }, { "heading": "3.4 SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM", "text": "The proposed ST-GST is a nonlinear version of spatio-temporal graph wavelets, which iteratively applies wavelets followed by a nonlinear activation function. ST-GST includes three components: (i) spatio-temporal graph wavelets, (ii) a pointwise nonlinear activation function σ(·), and (iii) a low-pass pooling operator U. These operations are performed sequentially to extract representative features from the input spatio-temporal graph signal X.
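Putting sections 3.3 and 3.4 together, a minimal one-layer separable ST-GST forward pass might look like the sketch below. The normalization of the shifts, the choice σ = |·|, and the spatial-average pooling are our illustrative choices among the options listed in this section, not a prescription from the paper.

```python
import numpy as np

def wavelet_bank(S, J):
    """Wavelets H_j(S) = S^(2^(j-1)) - S^(2^j), j = 1..J.
    S should have spectral norm <= 1 (e.g., a normalized graph shift)."""
    powers = [np.linalg.matrix_power(S, 2 ** j) for j in range(J + 1)]
    return [powers[j - 1] - powers[j] for j in range(1, J + 1)]

def scattering_layer(Z, Hs, Gs_t):
    """One ST-GST layer: Z^{(j1,j2)} = sigma(H_{j1} Z G_{j2}^T), sigma = abs."""
    return [np.abs(H @ Z @ G.T) for H in Hs for G in Gs_t]

rng = np.random.default_rng(0)
N, T, Js, Jt = 5, 8, 2, 3
S_s = rng.random((N, N)); S_s = (S_s + S_s.T) / 2
S_s /= np.linalg.norm(S_s, 2)                         # normalized spatial shift
S_t = np.diag(np.ones(T - 1), 1); S_t = S_t + S_t.T   # temporal line graph
S_t /= np.linalg.norm(S_t, 2)

X = rng.random((N, T))
children = scattering_layer(X, wavelet_bank(S_s, Js), wavelet_bank(S_t, Jt))
# Pooling U: averaging over the spatial domain gives a length-T vector per node.
features = np.concatenate([Z.mean(axis=0) for Z in children])
print(len(children), features.shape)   # Js*Jt = 6 children, (6*T,) features
```

Deeper trees are obtained by calling scattering_layer recursively on each child and pooling every node along the way, mirroring the tree construction described above.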
The main difference between ST-GST and spatio-temporal graph wavelets is the application of a nonlinear activation at each layer. The nonlinear transformation disperses signals through the graph spectrum, producing more patterns in the spectrum.

Separable ST-GST. Let Z ∈ R^{N×T} be a spatio-temporal graph signal. At each scattering layer, we sequentially use the spatial graph wavelets {H_{j_1}}_{j_1=1}^{J_s} and temporal wavelets {G_{j_2}}_{j_2=1}^{J_t} to convolve with Z. Since each graph filter generates a new spatio-temporal graph signal, separable spatio-temporal graph filtering generates J = J_s × J_t spatio-temporal graph signals. Then, the nonlinear activation is applied to each spatio-temporal graph signal. For example, the (j_1, j_2)-th signal is Z_{(j_1, j_2)} = σ(H_{j_1}(S_s) Z G_{j_2}^⊤(S_t)). We can treat each filtered spatio-temporal graph signal as one tree node. Given Z as the parent node, a scattering layer produces J children nodes.

To construct ST-GST, we first initialize the input data Z_0 = X as the root of the scattering tree; we then recursively apply scattering layers at each node to produce children nodes, growing a scattering tree; see Fig. 2. We can index all the nodes in this scattering tree by the unique path from the root to each node. For example, p(ℓ) = ((j_1^{(1)}, j_2^{(1)}), …, (j_1^{(ℓ)}, j_2^{(ℓ)})) is the path from the root to one tree node in the ℓ-th layer, and the signal associated with it is Z_{p(ℓ)}. The data matrix Z_{p(ℓ)} is then summarized by a pooling operator U(·) to obtain a lower-dimensional vector φ_{p(ℓ)} = U(Z_{p(ℓ)}). Various pooling methods lead to different dimensions of scattering features. Common choices for U(·) include averaging in the spatial domain (U = (1/N) 1_{1×N}, φ = U Z ∈ R^T), averaging in the temporal domain (U = (1/T) 1_{T×1}, φ = Z U ∈ R^N) and averaging over both modalities (U = (1/(NT)) 1_{N×T}, φ = U ∘ Z ∈ R, where ∘ represents the Hadamard product). Finally, all scattering features φ_{p(ℓ)} are concatenated to construct a scattering feature map Φ(S_s, S_t, X) := {{φ_{p(ℓ)}}_{all p(ℓ)}}_{ℓ=0}^{L−1}.

Joint ST-GST. Since we deal with a unifying graph, we can use the spatio-temporal product graph directly, in combination with the ordinary graph scattering transform (Gama et al., 2019b).

Comparison with ST-GCNs. One distinct difference between ST-GST and ST-GCNs lies in the fact that the trainable graph convolution in each layer of an ST-GCN performs the multiplication between a spatial graph shift and the feature matrix, which only extracts low-frequency information over the graph, while ST-GST leverages multiple spatio-temporal graph filters to cover multiple frequency bands. Furthermore, the predefined filter coefficients form a frame (3) in each layer of ST-GST, which is crucial for establishing the stability of ST-GST, as shown in the next section.
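Reusing `scattering_layer` from the sketch above, a depth-L scattering tree and its concatenated feature map can be grown as follows; this is again a toy illustration under the same assumptions (global-average pooling at every node).

```python
import numpy as np

def st_gst(X, Hs_bank, Gt_bank, L):
    # Grow the scattering tree breadth-first; pool every node with U(.) = global average
    layer, features = [X], [X.mean()]
    for _ in range(L - 1):
        layer = [child for Z in layer for child in scattering_layer(Z, Hs_bank, Gt_bank)]
        features.extend(child.mean() for child in layer)
    return np.asarray(features)   # length sum_{l=0}^{L-1} J^l with J = J_s * J_t

Phi = st_gst(Z, Hs_bank, Gt_bank, L=3)   # 1 + 6 + 36 = 43 features for J = 6
```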
" }, { "heading": "4 THEORETICAL ANALYSIS", "text": "Stability is key to designing robust and reliable algorithms. However, since the training process of ST-GCNs is data-driven, it might be vulnerable to small perturbations added to the training data, which may lead to significant degradation in practice. Trainable parameters make it hard to develop a theoretical analysis for ST-GCNs. In contrast, here we show that the proposed separable ST-GST is stable to perturbations of both spatio-temporal graph signals and structures. All proofs of statements in this section are explained thoroughly in Appendix B. Unless specified, ‖x‖ is the ℓ_2 norm of a vector x, while ‖X‖ and ‖X‖_2 are the Frobenius and spectral norms of a matrix X, respectively.

Here we show the results for the separable spatio-temporal graph scattering transform, but all the results can be extended to the joint version. Before introducing perturbations, we first show that separable spatio-temporal graph wavelets also satisfy certain frame bounds. Thus, with a separable construction, we can control the bound constants of the spatio-temporal wavelets and build tight frames.

Lemma 1. Let {H_{j_1}}_{j_1=1}^{J_s} and {G_{j_2}}_{j_2=1}^{J_t} be the wavelet filter banks for the spatial and temporal domains, respectively. Both satisfy frame properties such that for any x ∈ R^N and y ∈ R^T,

A_1² ‖x‖² ≤ Σ_{j_1=1}^{J_s} ‖H_{j_1}(S_s) x‖² ≤ B_1² ‖x‖²,   A_2² ‖y‖² ≤ Σ_{j_2=1}^{J_t} ‖G_{j_2}(S_t) y‖² ≤ B_2² ‖y‖².   (3)

Then, for any Z ∈ R^{N×T} and its corresponding reshaped vector z ∈ R^{NT}, it holds that

A_1² A_2² ‖Z‖² ≤ Σ_{j_1, j_2 = 1}^{J_s, J_t} ‖(H_{j_1}(S_s) ⊗ G_{j_2}(S_t)) z‖² = Σ_{j_1, j_2 = 1}^{J_s, J_t} ‖H_{j_1}(S_s) Z G_{j_2}^⊤(S_t)‖² ≤ B_1² B_2² ‖Z‖².

Lemma 1 guarantees that the separable design also leads to valid wavelets. Furthermore, when we choose both the spatial bank {H_{j_1}}_{j_1=1}^{J_s} and the temporal bank {G_{j_2}}_{j_2=1}^{J_t} to be tight frames with A_1 = B_1 = A_2 = B_2 = 1 (Shuman et al., 2015), the resulting separable wavelet also forms a tight frame. In what follows, denote by B = B_1 × B_2 the frame bound constant of the separable spatio-temporal graph wavelet, and assume separable ST-GST is configured with L layers and J = J_s × J_t scales at each layer." }, { "heading": "4.1 STABILITY TO PERTURBATION OF SPATIO-TEMPORAL GRAPH SIGNALS", "text": "Consider the perturbed spatio-temporal graph signal X̃ = X + Δ ∈ R^{N×T}, where Δ ∈ R^{N×T} is the perturbation matrix. Such an additive model can represent measurement noise caused by devices or adversarial perturbations added manually. Theorem 1 shows that the feature map of the perturbed signal does not deviate much from the original feature map under small input perturbations.

Theorem 1. Consider the additive noise model for the input data X. It then holds that

‖Φ(S_s, S_t, X) − Φ(S_s, S_t, X̃)‖ / √(T Σ_{ℓ=0}^{L−1} J^ℓ) ≤ (1/√(NT)) √( (Σ_{ℓ=0}^{L−1} B^{2ℓ}) / (Σ_{ℓ=0}^{L−1} J^ℓ) ) ‖Δ‖.   (4)

The difference of the outputs is normalized by the square root of the dimension of the final feature map. Note that we can easily construct a spatio-temporal wavelet with B = 1 when the spatial and temporal wavelets are both tight frames; the normalized bound in (4) then indicates that the transform is insensitive to perturbations of the input signals, as the factor is much smaller than 1.

Table 1: Classification accuracy (MSR Action3D with 288 training and 269 testing samples).
  Method                              Accuracy (%)
  GFT+TPM                             74.0
  HDM                                 81.8
  GNNs:
    Temporal Conv.                    72.1
    ST-GCN (fixed topology)           52.0
    MS-G3D (learnable topology)       82.2
  Scattering:
    Separable ST-GST (5, 10, 3)       81.4
    Separable ST-GST (5, 20, 3)       87.0
    Joint Kronecker ST-GST (15, 3)    61.0
    Joint Cartesian ST-GST (15, 3)    59.1
    Joint Strong ST-GST (15, 3)       61.7

Table 2: Classification accuracy (NTU-RGB+D with 40,320 training and 16,560 testing samples).
  Method                              Accuracy (%)
  GNNs:
    Deep LSTM                         60.7
    PA-LSTM                           62.9
    ST-LSTM+TG                        69.2
    Temporal Conv.                    74.3
    ST-GCN (fixed topology)           75.8
  Scattering:
    Separable ST-GST (5, 20, 2)       68.7
    Separable ST-GST (5, 20, 3)       73.1
    Joint Kronecker ST-GST (15, 3)    55.7
    Joint Cartesian ST-GST (15, 3)    56.2
    Joint Strong ST-GST (15, 3)       57.1

" }, { "heading": "4.2 STABILITY TO PERTURBATION OF SPATIO-TEMPORAL GRAPH STRUCTURES", "text": "Perturbations of the underlying graph usually happen when the graph is unknown or when the graph changes over time (Segarra et al., 2017).
Since such perturbations usually happen in the spatial domain, here we simply consider structure perturbations of the spatial graph only. Specifically, we consider the perturbed spatial graph Ŝ_s = S_s + E^⊤ S_s + S_s E, where E is the perturbation matrix and the temporal graph S_t is unchanged. Detailed descriptions are given in Appendix B.

Theorem 2. Suppose the eigenvalues {m_i}_{i=1}^{N} of E ∈ R^{N×N} are ordered such that |m_1| ≤ |m_2| ≤ ⋯ ≤ |m_N|, satisfying |m_N| ≤ ε/2 and |m_i/m_N − 1| ≤ ε for ε > 0 and all i. Suppose the spatial filter bank {H_{j_1}}_{j_1=1}^{J_s} satisfies max_i |λ h_i′(λ)| ≤ C and the temporal filter bank {G_{j_2}}_{j_2=1}^{J_t} satisfies the limited spectral response max_i |g_i(λ)| ≤ D. It then holds that

‖Φ(S_s, S_t, X) − Φ(Ŝ_s, S_t, X)‖ / √(T Σ_{ℓ=0}^{L−1} J^ℓ) ≤ (C D ε / (B √(NT))) √( (Σ_{ℓ=0}^{L−1} ℓ² (B² J)^ℓ) / (Σ_{ℓ=0}^{L−1} J^ℓ) ) ‖X‖,   (5)

where ε characterizes the perturbation level. Theorem 2 shows that ST-GST is a stable transform also under structure deformations, as the norm of the change of the feature map is linear in ε. It is worth pointing out that the upper bounds in both Theorems 1 and 2 depend only on the choice of filter banks and the structure of the scattering tree, rather than on quantities related to the specific graph supports S_s and S_t, as in previous works (Gama et al., 2019b; Zou & Lerman, 2020; Levie et al., 2019)." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We now evaluate the performance of the proposed ST-GST on the skeleton-based action recognition task.

Experimental setup. The number of layers L, the number of spatial wavelet scales J_s, and the number of temporal wavelet scales J_t are denoted by (J_s, J_t, L) for separable ST-GST, and by (J, L) for joint ST-GST. The training ratio is the fraction of the training set used for training. For the spatial domain, we use the skeleton graph; for the temporal domain, we use a line graph connecting consecutive time stamps; see Fig. 1(a)(b). Geometric scattering wavelets are used in both domains, and the nonlinear activation σ(·) is the absolute value function, which has the property of preserving energy. Features output by ST-GST are then utilized by a random forest classifier with 300 decision trees for classification.

Comparison with state-of-the-art methods. We consider two datasets, MSR Action3D and NTU-RGB+D (cross-subject). For MSR Action3D, the proposed ST-GST is compared with GFT facilitated by temporal pyramid matching (GFT+TPM) (Kao et al., 2019), the Bayesian hierarchical dynamic model (HDM) (Zhao et al., 2019), and several deep learning approaches, including temporal convolutional neural networks (Kim & Reiter, 2017), ST-GCN (Yan et al., 2018), and MS-G3D (Liu et al., 2020). For NTU-RGB+D, Deep LSTM (Liu et al., 2019), part-aware LSTM (PA-LSTM) (Liu et al., 2019) and spatio-temporal LSTM with trust gates (ST-LSTM+TG) (Liu et al., 2016) are included in the comparison. Methods labeled “fixed topology” are modified so as not to use adaptive training of the adjacency matrix, in order to make the comparison with ST-GST fair. Tables 1 and 2 compare the classification accuracies on MSR Action3D and NTU-RGB+D, respectively. We see that even without any training, the performance of ST-GST is better than that of the other non-deep-learning and LSTM-based methods, and is comparable with state-of-the-art GCN-based methods on the large-scale dataset. Further, ST-GST outperforms all other methods when the size of the training set is small. Fig. 3(a) shows the classification accuracy as a function of the training ratio. When the training ratio is less than 20%, ST-GST significantly outperforms ST-GCN.
Fig. 3(b) shows the accuracy versus running time, reflecting that ST-GST is much faster than ST-GCN at similar classification performance.

ST-GST works well in the small-scale-data regime. Table 1 and Fig. 3(a) show that ST-GST outperforms other deep learning methods in the small-scale-data regime, which can be explained as follows. The good performance of spatio-temporal graph neural networks relies heavily on the assumption that training data is abundant. When the size of the training set is limited, most of them can easily be trapped in bad local optima due to overfitting, resulting in a significant drop in classification accuracy. In practice, however, obtaining a huge amount of training data with high-quality labels can be extremely expensive. On the other hand, since ST-GST is a non-trainable framework, the filter coefficients in ST-GST are mathematically designed rather than trained from data, which avoids overfitting when the training ratio is low. Another advantage of ST-GST compared to ST-GCN is that it requires less computation, because no training process is involved in ST-GST.

Separable design is better than joint design. Tables 1 and 2 also show that separable spatio-temporal graph wavelets work much better than joint ones, achieving a 25% increase in classification accuracy on the MSR Action3D dataset. This result is consistent with our analysis in Section 3.2. The intuition is that when dealing with spatio-temporal data generated from complex structures such as skeleton sequences, the fixed dependencies induced by product graphs highly restrict the way signals can be diffused over spatio-temporal graphs and thus limit the efficiency of feature extraction.

Nonlinearity is beneficial. Fig. 3(c) compares ST-GST with and without nonlinearity and shows that nonlinearity is critical to ST-GST, also reflecting the potential effect of nonlinearity in ST-GCNs." }, { "heading": "6 CONCLUSIONS", "text": "In this work we propose a novel spatio-temporal graph scattering transform (ST-GST), which can be viewed as one forward pass of a spatio-temporal graph convolutional network (ST-GCN) without any training. ST-GST is stable to small perturbations of both input signals and structures. Our experiments show that: i) ST-GST can achieve better performance than both non-deep-learning and ST-GCN-based methods when the number of training samples is limited; ii) designing spatial and temporal graph filters separately is more flexible and computationally efficient than designing them jointly; and iii) the nonlinearity is critical to the performance." }, { "heading": "7 ACKNOWLEDGEMENT", "text": "This work is fully supported by Mitsubishi Electric Research Laboratories (MERL), where Chao Pan was a research intern, Siheng Chen was a research scientist and Antonio Ortega is a consultant." }, { "heading": "A DIFFERENT DESIGN OF GRAPH WAVELETS", "text": "There are many off-the-shelf, well-developed graph wavelets we can choose from. They mainly focus on extracting features from multiple frequency bands of the input signal spectrum. Some of them are shown as follows.

Monic Cubic wavelets. Monic Cubic wavelets (Hammond et al., 2011) define the kernel function h(λ) as

h(λ) = λ for λ < 1;   −5 + 11λ − 6λ² + λ³ for 1 ≤ λ ≤ 2;   2/λ for λ > 2.

Different scales of filters are implemented by scaling and translating the above kernel function.

Itersine wavelets.
Itersine wavelets define the kernel function at scale j as

h_j(λ) = sin( (π/2) cos²( π(λ − (j−1)/2) ) ) · 1[ j/2 − 1 ≤ λ ≤ j/2 ].

Itersine wavelets form tight frames.

Geometric scattering wavelets. The geometric scattering wavelet filter bank (Gao et al., 2019) contains a set of filters based on the lazy random walk matrix. The filter at scale j is defined as H_j(S) = S^{2^{j−1}} − S^{2^j} = S^{2^{j−1}}(I − S^{2^{j−1}}), where S = (1/2)(I + A D^{−1}) is the lazy random walk matrix and D is the degree matrix.

Note that one is also allowed to customize either the spatial or the temporal graph wavelets, as long as they form a frame and satisfy the integral Lipschitz constraint shown as follows:

A² ‖x‖² ≤ Σ_{j=1}^{J} ‖H_j x‖² ≤ B² ‖x‖²,   |λ h′(λ)| ≤ const ∀λ,

where A, B are scalar constants and h′(·) is the gradient of the kernel function.
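For reference, the two spectral kernels above transcribe directly into code; the vectorised form below is our own sketch.

```python
import numpy as np

def monic_cubic(lam):
    # Monic Cubic kernel h(lambda) (Hammond et al., 2011); scales are obtained by
    # scaling/translating the argument
    lam = np.asarray(lam, dtype=float)
    mid = -5 + 11 * lam - 6 * lam ** 2 + lam ** 3
    # np.maximum guards the unused 2/lambda branch against division by zero
    return np.where(lam < 1, lam, np.where(lam <= 2, mid, 2.0 / np.maximum(lam, 1.0)))

def itersine(lam, j):
    # Itersine kernel h_j(lambda) at scale j, supported on [j/2 - 1, j/2]
    lam = np.asarray(lam, dtype=float)
    support = (lam >= j / 2 - 1) & (lam <= j / 2)
    return np.sin(0.5 * np.pi * np.cos(np.pi * (lam - (j - 1) / 2)) ** 2) * support
```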
" }, { "heading": "B PROOFS", "text": "B.1 PROOF OF LEMMA 1

By reshaping the signal from Z to z with Z_{s,t} = z_{(s−1)T+t}, we have

Σ_{j_1, j_2 = 1}^{J_s, J_t} ‖(H_{j_1}(S_s) ⊗ G_{j_2}(S_t)) z‖² = Σ_{j_1, j_2 = 1}^{J_s, J_t} ‖H_{j_1}(S_s) Z G_{j_2}^⊤(S_t)‖².

Since S_s and S_t do not change during the computation, we simply write H_{j_1} and G_{j_2} for H_{j_1}(S_s) and G_{j_2}(S_t), respectively. Suppose

H_{j_1} = [ h_{11} ⋯ h_{1N} ; ⋮ ⋱ ⋮ ; h_{N1} ⋯ h_{NN} ] ∈ R^{N×N};

then the Kronecker product is

H_{j_1} ⊗ G_{j_2} = [ h_{11} G_{j_2} ⋯ h_{1N} G_{j_2} ; ⋮ ⋱ ⋮ ; h_{N1} G_{j_2} ⋯ h_{NN} G_{j_2} ].

Applying it to the vector z gives the filtered signal y_{j_1, j_2} = (H_{j_1} ⊗ G_{j_2}) z ∈ R^{NT}. The first T elements of y can be written as

y_{j_1, j_2}(1:T) = Σ_{i=1}^{N} h_{1i} G_{j_2} (Z_{i,1}, Z_{i,2}, …, Z_{i,T})^⊤ = G_{j_2} Σ_{i=1}^{N} h_{1i} (Z_{i,1}, Z_{i,2}, …, Z_{i,T})^⊤.

Therefore,

A_2² ‖Σ_{i=1}^{N} h_{1i} (Z_{i,1}, …, Z_{i,T})^⊤‖² ≤ Σ_{j_2} ‖y_{j_1, j_2}(1:T)‖² ≤ B_2² ‖Σ_{i=1}^{N} h_{1i} (Z_{i,1}, …, Z_{i,T})^⊤‖².

Thus Σ_{j_2} ‖y_{j_1, j_2}‖² can be sandwiched as

A_2² Σ_{k=1}^{N} ‖Σ_{i=1}^{N} h_{ki} (Z_{i,1}, …, Z_{i,T})^⊤‖² ≤ Σ_{j_2} ‖y_{j_1, j_2}‖² ≤ B_2² Σ_{k=1}^{N} ‖Σ_{i=1}^{N} h_{ki} (Z_{i,1}, …, Z_{i,T})^⊤‖².   (6)

By the definition of the vector ℓ_2 norm, we can rewrite the upper and lower bounds in Eq. (6) as

A_2² Σ_{i=1}^{T} ‖H_{j_1} (Z_{1,i}, Z_{2,i}, …, Z_{N,i})^⊤‖² ≤ Σ_{j_2} ‖y_{j_1, j_2}‖² ≤ B_2² Σ_{i=1}^{T} ‖H_{j_1} (Z_{1,i}, Z_{2,i}, …, Z_{N,i})^⊤‖².

Summing this quantity over j_1 gives

A_1² A_2² ‖Z‖² = A_1² A_2² Σ_{i=1}^{T} ‖(Z_{1,i}, …, Z_{N,i})^⊤‖² ≤ Σ_{j_1, j_2} ‖y_{j_1, j_2}‖² ≤ B_1² B_2² Σ_{i=1}^{T} ‖(Z_{1,i}, …, Z_{N,i})^⊤‖² = B_1² B_2² ‖Z‖²,

which completes the proof. Lemma 1 is a very handy result. It shows that we can easily construct new spatio-temporal wavelets simply by combining spatial and temporal ones. Moreover, the constants of the new frame bound can easily be obtained once we know the characteristics of the wavelets in each domain. In particular, it also provides a convenient way to build tight frames for spatio-temporal data analysis with A = B, because we only need to choose tight frames for the spatial and temporal domains separately, without considering possible correlations.

B.2 PROOF OF THEOREM 1

In this proof we consider the pooling operator U(·) to be the average in the spatial domain, so U = (1/N) 1_{1×N} and φ = U Z ∈ R^T. The proof techniques can easily be generalized to any form of U(·). When reshaping Z ∈ R^{N×T} to z ∈ R^{NT}, the new pooling operator can simply be represented as

U′ = (1/N) (I_T, I_T, ⋯, I_T) ∈ R^{T×NT},   φ = U′ z.

Note that ‖U′‖_2 = 1/√N. Consider the scattering tree nodes at the last layer L − 1. Suppose they are indexed from 1 to J^{L−1} and associated with the signals a_1, ⋯, a_{J^{L−1}}, and that their parent nodes are indexed from 1 to J^{L−2} and associated with the signals b_1, ⋯, b_{J^{L−2}}. When the input data X is perturbed, all signals in the scattering tree change correspondingly. Here we simply denote them as ã, b̃. Then, for the change of the feature vector located at the node with a_1, it holds that

‖φ_{a_1} − φ_{ã_1}‖² = ‖U′(a_1 − ã_1)‖² ≤ ‖U′‖² ‖a_1 − ã_1‖² ≤ (1/N) ‖σ((H_{j_1} ⊗ G_{j_2})(b_1 − b̃_1))‖²,

where j_1 = j_2 = 1. The last inequality holds because we use the absolute value function as the nonlinear activation, which is non-expansive. Summing the above quantity over j_1, j_2 and using the frame bound proved in Lemma 1, we have

Σ_{i=1}^{J^{L−1}} ‖φ_{a_i} − φ_{ã_i}‖² ≤ (B²/N) Σ_{i=1}^{J^{L−2}} ‖b_i − b̃_i‖².   (7)

Note that the sum of the squared norms of the changes at layer L − 2 satisfies

Σ_{i=1}^{J^{L−2}} ‖φ_{b_i} − φ_{b̃_i}‖² ≤ (1/N) Σ_{i=1}^{J^{L−2}} ‖b_i − b̃_i‖².   (8)

Compare Eqs. (7) and (8): the upper bounds only differ by a factor B². By induction we then have

‖Φ(S_s, S_t, X) − Φ(S_s, S_t, X̃)‖² ≤ (1/N) Σ_{ℓ=0}^{L−1} B^{2ℓ} ‖x − x̃‖² = (1/N) Σ_{ℓ=0}^{L−1} B^{2ℓ} ‖Δ‖².

Normalizing by the dimension of the final feature map, we have

‖Φ(S_s, S_t, X) − Φ(S_s, S_t, X̃)‖ / √(T Σ_{ℓ=0}^{L−1} J^ℓ) ≤ (1/√(NT)) √( (Σ_{ℓ=0}^{L−1} B^{2ℓ}) / (Σ_{ℓ=0}^{L−1} J^ℓ) ) ‖Δ‖.   (9)

B.3 PROOF OF THEOREM 2

Perturbations of the underlying graph usually happen when the graph is unknown or when the graph changes over time (Segarra et al., 2017). Take skeleton-based action recognition as an example. Some joints may be misrecognized as others due to the measurement noise of devices during certain frames, and thus the location signals of those joints are interchanged. This leads to different spatial graph structures at those time stamps. Since such perturbations usually happen in the spatial domain, here we simply consider structure perturbations of the spatial graph only, but the results can be extended to more general cases.

Consider the original spatio-temporal graph G with spatial graph shift matrix S_s and temporal one S_t, and the perturbed graph Ĝ with Ŝ_s and S_t. We first show that ST-GST is invariant to node permutations in the spatial domain, where the set of permutation matrices is defined as P = { P ∈ {0, 1}^{N×N} : P 1 = 1, P^⊤ 1 = 1, P P^⊤ = I_N }. Note that we consider the average in the spatial domain for U(·), so U = (1/N) 1_{1×N}, φ = U Z ∈ R^T, and Û = U P.

Lemma 2. Consider the spatial permutation Ŝ_s = P^⊤ S_s P, where the input data X̂ = P^⊤ X is also permuted in the spatial domain correspondingly. Then, it holds that

Φ(S_s, S_t, X) = Φ(Ŝ_s, S_t, X̂).   (10)

Proof. Note that the permutation holds for all signals computed in the scattering tree; that is, Ẑ_{p(ℓ)} = P^⊤ Z_{p(ℓ)}. Suppose that for path p(ℓ) the last two filters are chosen as H(Ŝ_s) and G(S_t); then the feature vector after pooling, with respect to the new graph support and data, can be written as

φ_{p(ℓ)}(Ŝ_s, S_t, Ẑ_{p(ℓ)}) = Û(σ(H(Ŝ_s) Ẑ_{p(ℓ)} G^⊤(S_t))) = U P σ(P^⊤ H(S_s) P P^⊤ Z_{p(ℓ)} G^⊤(S_t)).

The last equation holds by the definition of H(S). Since the nonlinear activation is applied elementwise, we can rewrite it as

φ_{p(ℓ)}(Ŝ_s, S_t, Ẑ_{p(ℓ)}) = U σ(P P^⊤ H(S_s) P P^⊤ Z_{p(ℓ)} G^⊤(S_t)) = U σ(H(S_s) Z_{p(ℓ)} G^⊤(S_t)) = φ_{p(ℓ)}(S_s, S_t, Z_{p(ℓ)}).

This conclusion holds independently of the specific path p(ℓ), so it holds for all feature vectors after pooling in the scattering tree. Since the final feature map is just a concatenation of all feature vectors, the proof is complete.

Lemma 2 shows that the output of ST-GST is essentially independent of the node ordering in the spatial domain, as long as the permutation is consistent across all time stamps. This result is intuitive, because the output of a graph convolution should only depend on the relative neighborhood structure of each node.
Since node reordering does not alter the neighborhood topology, the output should remain unchanged.

Based on Lemma 2, we use a relative perturbation model for structure modifications (Gama et al., 2019b), which focuses more on the change of neighborhood topology compared to the absolute perturbations adopted in Levie et al. (2019). Define the set of permutations that bring S_s and Ŝ_s closest to each other as P_s := arg min_{P ∈ P} ‖P^⊤ Ŝ_s P − S_s‖_2. Consider the set of perturbation matrices E(S_s, Ŝ_s) = { E | P^⊤ Ŝ_s P = S_s + E^⊤ S_s + S_s E, P ∈ P_s, E ∈ R^{N×N} }. Then the relative distance measuring structure perturbations can be defined as

d(S_s, Ŝ_s) = min_{E ∈ E(S_s, Ŝ_s)} ‖E‖_2.

Note that if Ŝ_s = P^⊤ S_s P, meaning that the structure perturbation is purely a permutation, then the relative distance d(S_s, Ŝ_s) = 0, which is consistent with the result shown in Lemma 2. Therefore, without loss of generality, we can assume in what follows that P = I_N and Ŝ_s = S_s + E^⊤ S_s + S_s E. With this formulation, we are ready to prove Lemma 3.

Lemma 3. Suppose the eigenvalues {m_i}_{i=1}^{N} of E are ordered such that |m_1| ≤ |m_2| ≤ ⋯ ≤ |m_N|, satisfying |m_N| ≤ ε/2 and |m_i/m_N − 1| ≤ ε for ε > 0. For the spatial graph filter H(S_s) and the temporal graph filter G(S_t), denote their kernel functions by h(λ) and g(λ), respectively. If, for all λ, h(λ) is chosen to satisfy the integral Lipschitz constraint |λ h′(λ)| ≤ C and g(λ) has bounded spectral response |g(λ)| ≤ D, then it holds that

‖H(S_s) ⊗ G(S_t) − H(Ŝ_s) ⊗ G(S_t)‖_2 ≤ C D ε + O(ε²).   (11)

Proof. From Proposition 2 in Gama et al. (2019b) we have that when E satisfies the above conditions, ‖H(S_s) − H(Ŝ_s)‖_2 ≤ C ε + O(ε²). So

‖H(S_s) ⊗ G(S_t) − H(Ŝ_s) ⊗ G(S_t)‖_2 = ‖(H(S_s) − H(Ŝ_s)) ⊗ G(S_t)‖_2 ≤ ‖H(S_s) − H(Ŝ_s)‖_2 ‖G(S_t)‖_2 ≤ C D ε + O(ε²).

The second step holds because H(S_s) − H(Ŝ_s) is a symmetric matrix, which admits the eigen-decomposition F Ω F^⊤, and (F Ω F^⊤) ⊗ (V Λ V^⊤) = (F ⊗ V)(Ω ⊗ Λ)(F ⊗ V)^⊤ holds, which finishes the proof. As for general structural perturbations, where we want to bound ‖H(S_s) ⊗ G(S_t) − H(Ŝ_s) ⊗ G(Ŝ_t)‖_2, we can add and subtract the term H(Ŝ_s) ⊗ G(Ŝ_t), use the triangle inequality, and further bound the two resulting terms under additional assumptions on h(λ) and g(λ).

The bound shown in Lemma 3 indicates that the difference of output caused by changing the spatial graph support from S_s to Ŝ_s is proportional to ε, a scalar characterizing the level of the perturbation. The constraints on the eigenvalues of E limit the change of the graph structure; a more detailed description explaining the necessity of such constraints can be found in Gama et al. (2019b). With Lemma 3 in hand, we are ready to bound the change of the feature vector after pooling at each node of the scattering tree when such structure perturbations happen.

Lemma 4. Consider an ST-GST with L layers and J = J_s × J_t scales at each layer. Suppose that the graph filter bank forms a frame with upper bound B = B_1 × B_2, where B_1, B_2 are the frame bounds for the spatial and temporal domains, respectively. Suppose that, for all λ, the spatial wavelet filter bank {H_{j_1}}_{j_1=1}^{J_s} satisfies max_i |λ h_i′(λ)| ≤ C and the temporal wavelet filter bank {G_{j_2}}_{j_2=1}^{J_t} satisfies max_i |g_i(λ)| ≤ D, with the other conditions the same as in Lemma 3. Then, for the change of the feature vector φ_{p(ℓ)} associated with the path p(ℓ), it holds that

‖φ_{p(ℓ)}(S_s, S_t, X) − φ_{p(ℓ)}(Ŝ_s, S_t, X)‖ ≤ (1/√N) ℓ ε C D B^{ℓ−1} ‖X‖.   (12)

Proof.
Expand ‖φ_{p(ℓ)}(S_s, S_t, X) − φ_{p(ℓ)}(Ŝ_s, S_t, X)‖ as

‖U′ (σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x − U′ (σ(H_{j_1^{(ℓ)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x‖
≤ (1/√N) ‖(σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x − (σ(H_{j_1^{(ℓ)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x‖,

where ‖U′‖_2 = 1/√N and (σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} is shorthand for applying the spatio-temporal filters and the nonlinear activation to the input data ℓ times, in order, according to the path p(ℓ). Adding and subtracting the term σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)) σ(H_{j_1^{(ℓ−1)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ−1)}}(S_t)) ⋯ σ(H_{j_1^{(1)}}(Ŝ_s) ⊗ G_{j_2^{(1)}}(S_t)) x and applying the triangle inequality, we have

‖(σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x − (σ(H_{j_1^{(ℓ)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x‖
≤ ‖σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)) ( (σ(H_{j_1^{(ℓ−1)}}(S_s) ⊗ G_{j_2^{(ℓ−1)}}(S_t)))_{p(ℓ−1)} − (σ(H_{j_1^{(ℓ−1)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ−1)}}(S_t)))_{p(ℓ−1)} ) x‖
+ ‖( σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)) − σ(H_{j_1^{(ℓ)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ)}}(S_t)) ) (σ(H_{j_1^{(ℓ−1)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ−1)}}(S_t)))_{p(ℓ−1)} x‖.

Recursive quantities can be observed above, and the bound can be solved explicitly (Gama et al., 2019b). By induction, together with the conclusion of Lemma 3, we get

‖(σ(H_{j_1^{(ℓ)}}(S_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x − (σ(H_{j_1^{(ℓ)}}(Ŝ_s) ⊗ G_{j_2^{(ℓ)}}(S_t)))_{p(ℓ)} x‖ ≤ ℓ ε C D B^{ℓ−1} ‖x‖.

Multiplying by the factor 1/√N caused by pooling gives the final result.

Note that the upper bound in Lemma 4 holds for every path of length ℓ. Thus the squared norm of the change in the final feature map can be summarized by the sum of the squared norms of the changes at each layer, which finishes the proof of Theorem 2." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "C.1 DATASET

The MSR Action3D dataset (Li et al., 2010) is a small dataset capturing indoor human actions. It covers 20 action types and 10 subjects, with each subject repeating each action 2 or 3 times. The dataset contains 567 action clips with a maximum of 76 frames; however, 10 of them are discarded because the skeleton information is either missing or too noisy (Wang et al., 2012). For each clip, the locations of 20 joints are recorded, and only one subject is present. The training and testing sets are decided by a cross-subject split for this dataset, with 288 samples for training and 269 for testing.

NTU-RGB+D (Liu et al., 2019) is currently the largest dataset with 3D joint annotations for the human action recognition task. It covers 60 action types and 40 subjects. The dataset contains 56,880 action clips with a maximum of 300 frames, and there are 25 joints for each subject in a clip. Each clip is guaranteed to have at most 2 subjects. The cross-subject benchmark of NTU-RGB+D includes 40,320 clips for training and 16,560 for testing.
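As a sketch of the evaluation pipeline described in Section 5, the classifier stage looks as follows; the random feature arrays here are hypothetical stand-ins for the flattened scattering feature maps Φ of the MSR Action3D clips.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical stand-ins for flattened ST-GST feature maps (288 train / 269 test clips)
X_train, y_train = rng.standard_normal((288, 512)), rng.integers(0, 20, 288)
X_test, y_test = rng.standard_normal((269, 512)), rng.integers(0, 20, 269)

clf = RandomForestClassifier(n_estimators=300, random_state=0)  # 300 trees, as in the paper
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```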
Full table of performance on the MSR Action3D dataset. The table contains a performance comparison of different algorithms with different sets of parameters on the MSR Action3D dataset. Note that the triple shown after ST-GST gives the values of (J_s, J_t, L). Methods labeled “fixed topology” are modified so as not to use adaptive training of the adjacency matrix, in order to make the comparison with ST-GST fair. Methods labeled “learnable topology” use adaptive training of the adjacency matrix, to further validate our claim. Other configurations of the compared methods are set to their defaults. From the table we can see that ST-GST outperforms all other methods even when the graph topology can be learned by neural networks. The intuition behind this is that deep learning methods need a large amount of training data due to their complex structures, and they can easily be trapped in bad local optima due to overfitting when the size of the training set is limited, which is common in practice. The good performance of ST-GST in the sparse-label regime could also potentially inspire active learning for processing spatio-temporal data (Bilgic et al., 2010).

Performance on the MSR Action3D dataset with standard deviations. We repeat part of our experiments 20 times on the MSR Action3D dataset, especially for the joint approaches, to obtain the standard deviations of the classification accuracy. The results are shown in Table 4. Note that since ST-GST is a mathematically designed transform, the output features are the same across trials, and the randomness comes from the classifier used afterwards (a random forest in this case). It can be seen that the standard deviations are comparable across all these methods, and therefore the conclusion that separable ST-GST consistently outperforms joint ST-GST still holds.

Comparison between different choices of wavelets. In practice we find that using graph geometric scattering wavelets (Gao et al., 2019) for both the spatial and temporal domains achieves the best performance, which is what we report in the main text. Classification accuracy using other types of wavelets is shown here. All experiments performed here use separable ST-GST with J_s = 5, J_t = 15, L = 3 on the MSR Action3D dataset. An interesting observation is that there is a significant reduction in accuracy when we change the temporal wavelet from a diffusion-based one (Geometric) to a spectrum-based one (MonicCubic or Itersine). This may be caused by the design of the different wavelets.

Stability of ST-GST. We also show the classification accuracy under different levels of perturbation of the spatio-temporal signals and spatial graph structures in Fig. 4. The experiments are conducted on the MSR Action3D dataset. For signal perturbation, the signal-to-noise ratio (SNR) is defined as 10 log(‖X‖² / ‖Δ‖²). For structure perturbation, E is set to be a diagonal matrix whose diagonal elements satisfy the corresponding constraints on ε. From both Fig. 4(a) and (b) we can see that ST-GST is stable and does not deviate much from the original output when the perturbations are small." } ]
2021
null
SP:b6083b2193bf2ab0df08746ef2ec9e51b513525f
[ "The authors propose a module that regresses the parameters of an affine transformation or homography as an additional objective in the contrastive self-supervised learning framework. The authors argue that the geometric information encoded in the proposed module can supplement the signal provided by a contrastive loss, improving both performance and convergence speed. The authors validate their claims with two recent contrastive self-supervised learning approaches (i.e., SimCLR, BYOL) on several benchmark datasets showing effective results." ]
The typical contrastive self-supervised algorithm uses a similarity measure in latent space as the supervision signal by contrasting positive and negative images directly or indirectly. Although the utility of self-supervised algorithms has improved recently, there are still bottlenecks hindering their widespread use, such as the compute needed. In this paper, we propose a module that serves as an additional objective in the self-supervised contrastive learning paradigm. We show how the inclusion of this module to regress the parameters of an affine transformation or homography, in addition to the original contrastive objective, improves both performance and learning speed. Importantly, we ensure that this module does not enforce invariance to the various components of the affine transform, as this is not always ideal. We demonstrate the effectiveness of the additional objective on two recent, popular self-supervised algorithms. We perform an extensive experimental analysis of the proposed method and show an improvement in performance for all considered datasets. Further, we find that although both the general homography and affine transformation are sufficient to improve performance and convergence, the affine transformation performs better in all cases.
[]
[ { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": null, "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "ArXiv", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029", "year": 2020 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A. Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": null, "year": 2015 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "Decaf: A deep convolutional activation feature for generic visual recognition", "venue": "Proceedings of Machine Learning Research", "year": 2014 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1", "year": 2014 }, { "authors": [ "Z. Feng", "C. Xu", "D. Tao" ], "title": "Self-supervised representation learning by rotation feature decoupling", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "ArXiv", "year": 2018 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition", "year": 2014 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H. Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Koray Kavukcuoglu", "Rémi Munos", "Michal Valko" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross B. Girshick" ],
"title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 2019 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2019 }, { "authors": [ "Hankook Lee", "Sung Ju Hwang", "Jinwoo Shin" ], "title": "Self-supervised label augmentation via input transformations", "venue": null, "year": 2020 }, { "authors": [ "Brais Martinez", "Davide Modolo", "Yuanjun Xiong", "Joseph Tighe" ], "title": "Action recognition with spatial-temporal discriminative filter banks", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)", "year": 2019 }, { "authors": [ "I. Misra", "L.V.D. Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2020 }, { "authors": [ "Alejandro Newell", "Jia Deng" ], "title": "How useful is self-supervised pretraining for visual tasks", "venue": "pp. 7343–7352", "year": 2020 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In ECCV", "year": 2016 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Jeff Donahue", "Trevor Darrell", "Alexei Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": null, "year": 2016 }, { "authors": [ "Kihyuk Sohn", "David Berthelot", "Chun-Liang Li", "Zizhao Zhang", "Nicholas Carlini", "Ekin D. Cubuk", "Alex Kurakin", "Han Zhang", "Colin Raffel" ], "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "venue": null, "year": 2020 }, { "authors": [ "Mingxing Tan", "Ruoming Pang", "Quoc V. Le" ], "title": "Efficientdet: Scalable and efficient object detection", "venue": "In Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition", "year": 2020 }, { "authors": [ "Kafeng Wang", "Xitong Gao", "Y. Zhao", "Xingjian Li", "D. Dou", "C. Xu" ], "title": "Pay attention to features, transfer learn faster cnns", "venue": "In ICLR", "year": 2020 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In ECCV", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "There is an ever-increasing pool of data, particularly unstructured data such as images, text, video, and audio. The vast majority of this data is unlabelled. The process of labelling is time-consuming, labour-intensive, and expensive. Such an environment makes algorithms that can leverage fully unlabelled data particularly useful and important. Such algorithms fall within the realm of unsupervised learning. A particular subset of unsupervised learning is known as Self-Supervised Learning (SSL). SSL is a paradigm in which the data itself provides a supervision signal to the algorithm.

Somewhat related is another core area of research known as transfer learning (Wang et al., 2020). In the context of computer vision, this means being able to pre-train an encoder network offline on a large, varietal dataset, followed by domain-specific fine-tuning on the bespoke task at hand. The state-of-the-art for many transfer learning applications remains dominated by supervised learning techniques (Tan et al., 2020; Martinez et al., 2019; Donahue et al., 2014; Girshick et al., 2014), in which models are pre-trained on a large labelled dataset.

However, self-supervised learning techniques have more recently come to the fore as potential alternatives that perform similarly on downstream tasks, while requiring no labelled data. Most self-supervised techniques create a supervision signal from the data itself in one of two ways. One approach comprises techniques that define a pre-text task beforehand, which a neural network is trained to solve, such as inpainting (Pathak et al., 2016) or a jigsaw puzzle (Noroozi & Favaro, 2016). In this way, the pre-text task is a kind of proxy that, if solved, should produce reasonable representations for downstream visual tasks such as image or video recognition, object detection, or semantic segmentation. The other approach is a class of techniques known as contrastive methods (Chen et al., 2020a; He et al., 2019; Chen et al., 2020b). These methods minimise the distance (or maximise the similarity) between the latent representations of two augmented views of the same input image, while simultaneously maximising the distance between negative pairs. In this way, these methods enforce consistency regularisation (Sohn et al., 2020), a well-known approach to semi-supervised learning. However, most of these contrastive methods have several drawbacks, such as requiring prohibitively large batch sizes or memory banks in order to retrieve the negative pairs of samples (Chen et al., 2020a; He et al., 2019).

The intuition behind our proposed module is that any system tasked with understanding images can benefit from understanding the geometry of the image and the objects within it. An affine transformation is a geometric transformation that preserves parallelism of lines. It can be composed of any sequence of rotation, translation, shearing, and scaling. A homography is a generalisation of this notion to include perspective warping. A homography need not preserve parallelism of lines; however, it ensures lines remain straight. Mathematically, a homography is shown in Equation 1. It has 8 degrees of freedom and is applied to a vector in homogeneous coordinates.
H_φ = [ φ_{1,1}  φ_{1,2}  φ_{1,3} ; φ_{2,1}  φ_{2,2}  φ_{2,3} ; φ_{3,1}  φ_{3,2}  1 ]   (1)

An affine transformation has the same form, but with the added constraint that φ_{3,1} = φ_{3,2} = 0.

The ability to know how a source image was transformed to get to a target image implicitly means that you have learned something about the geometry of that image. An affine transformation or, more generally, a homography is a natural way to encode this idea. Forcing the network to estimate the parameters of a random homography applied to the source images thereby forces it to learn semantics about the geometry. This geometric information can supplement the signal provided by a contrastive loss, or loss in the latent space.

In this paper, we propose an additional module that can be used in tandem with contrastive self-supervised learning techniques to augment the contrastive objective (the additional module is highlighted in Figure 1). The module is simple, model-agnostic, and can be used to supplement a contrastive algorithm to improve performance and supplement the information learned by the network to converge faster. The module is essentially an additional stream of the network with the objective of regressing the parameters of an affine transformation or homography. In this way, there is a multi-task objective that the network must solve: 1. minimising the original contrastive objective, and 2. learning the parameters of a homography applied to one of the input images from a vector difference of their latent representations. We force the latent space to encode the geometric transformation information by learning to regress the parameters of the transformation in an MLP that takes the vector difference of the two latent representations of an input, x, and its transformed analogue, x′. By including the information in this way, the network is not invariant to the components of the transformation but is still able to use them as a self-supervised signal for learning. Moreover, this approach serves as a novel hybrid of the pre-text tasks and contrastive learning by enforcing consistency regularisation (Sohn et al., 2020).

Through extensive empirical studies, we show that the additional objective of regressing the transformation parameters serves as a useful supplementary task for self-supervised contrastive learning, and improves performance for all considered datasets in terms of linear evaluation accuracy and convergence speed.

The remainder of the paper is structured as follows. In Section 2, we cover the related work in the area of self-supervised learning, going into detail where necessary. In Section 3 we detail our proposed method. We first introduce a framework and set of notation to make the formalisation of the approach clear. We then delve into the details behind the architecture and the choices for the various parts of the system. This is followed by a comprehensive set of experiments in Section 4, including results on various datasets, as well as an ablative study. Finally, the paper is concluded with some closing remarks in Section 5." }, { "heading": "2 RELATED WORK", "text": "SSL is a popular research area within computer vision. Previous approaches can be broadly classed into two main categories. The first is where pre-text tasks are manually defined, and the goal of the algorithms is to solve these hand-crafted tasks (Lee et al., 2020; Doersch et al., 2015; Gidaris et al., 2018; Zhang et al., 2016; Misra & Maaten, 2020).
Examples of such methods include inpainting (Pathak et al., 2016), colourising (Zhang et al., 2016), jigsaw puzzles (Noroozi & Favaro, 2016), patch prediction (Doersch et al., 2015), and geometric image transformations (Dosovitskiy et al., 2014), such as using rotation as the pre-text task (Gidaris et al., 2018; Feng et al., 2019). Some of these pre-text approaches that deal with geometric image transformations are similar in spirit to our method. Gidaris et al. (2018) and Feng et al. (2019) are two variants of predicting image rotations as an auxiliary task for unsupervised learning. Perhaps closer to our method is Dosovitskiy et al. (2014), in which a set of transformations is applied to image patches, and the network is trained in a fully-unsupervised manner to predict surrogate classes defined by a set of transformed image patches by minimising the log loss. Our method, however, investigates a different, particular set of transformations (those that define an affine transformation or general homography), and shows that these can be used to aid self-supervised performance, using the transformation parameters themselves as targets that need to be regressed (using mean-squared error) by the contrastive algorithm in a multi-task manner. The discrepancy in the network's ability to predict the actual values of the parameters of the affine transformation/homography serves as our additional supervision signal.

A somewhat related approach to our proposed method within the pre-text task domain is proposed by Lee et al. (2020). They propose to augment the learning process of a supervised learning algorithm with additional labels constructed using self-supervised labels. These labels are rotation classes and colour permutations. Importantly, they create a loss function based on a joint distribution of the original (supervised) labels and the self-supervised (augmented) labels. In this way, the network is not forced to be invariant to the transformations under consideration, since this has been shown to hurt performance (Lee et al., 2020). Our method differs from this in that we propose a module to be integrated specifically with self-supervised algorithms. Additionally, we regress the transformation parameters in real vector space and do not create classes for the parameters.

The other broad category of SSL is based on contrastive learning (Chen et al., 2020a; He et al., 2019; Caron et al., 2020), and this class of techniques represents the current state-of-the-art in self-supervised learning, outperforming the hand-crafted pre-text task methods. These approaches learn representations by contrasting positive pairs of samples against negative pairs of samples in latent space. Such methods typically require that careful attention be paid to the negative samples. Additionally, they have the disadvantage of requiring prohibitively large batch sizes (4096-16000), memory banks, or other mechanisms to retrieve the relevant negative samples.

One popular such method is known as SimCLR (Chen et al., 2020a). SimCLR is a general framework for contrastive learning, and in its vanilla formulation consists of an encoder network parameterised by a CNN (usually a variant of ResNet (He et al., 2016)) and an MLP projection head. An input image is sampled, and two distinct views of that same input image are computed using a random augmentation. The augmentation consists of colour jittering, Gaussian blurring, and random cropping. The two views are sent through the encoder network to produce two latent representations.
These latent vectors are then sent through the projection head to produce the final latent vectors. It is from these vectors that the loss is computed. In the case of SimCLR, the loss is the normalised temperature-scaled cross-entropy (NT-Xent).

A recent approach proposed in Grill et al. (2020) (BYOL) somewhat overcomes the aforementioned disadvantages of requiring negative pairs of samples (which implicitly requires a large batch size). Two separate networks with their own weights are used in tandem to learn the representation. An online network (consisting of an encoder, an MLP projection head, and an MLP prediction network) is trained to predict the representation outputted by a target network. During training, the online network parameters are updated using backpropagation of error derivatives computed using a mean-squared error loss. However, the target network parameters are updated using an exponential moving average. In this way, BYOL overcomes collapsed solutions in which every image produces the same representation. We test our module with both SimCLR and BYOL, since these two methods serve as two popular, recent approaches to contrastive SSL.

Some helpful findings for guiding self-supervised research were demonstrated in Kolesnikov et al. (2019). Core among these are that 1) standard architecture designs that work well in the fully-supervised setting do not necessarily work well in the self-supervised setting, 2) in the self-supervised setting larger CNNs often mean higher-quality learned representations, and 3) the linear evaluation paradigm for assessing performance may take a long time to converge. Moreover, Newell & Deng (2020) find that the effectiveness of self-supervised pretraining decreases as the amount of labelled data increases, and that performance on one particular downstream task is not necessarily indicative of performance on other downstream tasks." }, { "heading": "3 PROPOSED METHOD", "text": "We first introduce a mathematical framework for discussing our method. Let B_1 be a set of base transformations. A base transformation is a transformation that cannot be decomposed into more basic transformations and is interpreted as per Grill et al. (2020); Chen et al. (2020a). Examples of base transformations include colour jittering, cropping, and horizontal flipping. We define the possible base transformations a priori, and |B_1| < ∞. Next, we define a new set of base spatial transformations B_2 that correspond to the general affine transformations (i.e. rotation, translation, scaling and shearing) or the full homography (i.e. affine transformations and perspective projection). Further, we impose the following condition:

B_1 ∩ B_2 = ∅   (2)

The reason for this restriction will become apparent later.

A transformation t_{b,θ} is parameterised by its associated base transformation b ∈ B_1 ∪ B_2 and transformation parameters θ ∈ Θ. Then, the set of all possible transformations for a particular base transformation set B_i may be defined as:

T_i := { t_{b,θ} | b ∈ B_i, θ ∈ Θ }   (3)

Clearly, we may have that |T_i| = ∞, since some parameters may take on any value within compact subsets of R. This is important because we want to be able to sample from an infinite sample space during training to ensure the network sees a variety of samples.

We can now define an augmentation, which is an ordered sequence of n transformations. As such, each unique ordering will necessarily produce a unique augmentation (e.g. flipping and then cropping is different from cropping and then flipping). Formally, an augmentation a is defined as:
Formally, an augmentation a is defined as:\na(x) = tbn,θn ◦ · · · ◦ tb2,θ2 ◦ tb1,θ1(x) (4)\nDenote the set of all possible augmentations for a transformation set Ti as ATi . Under this definition, AT2 is the set of all possible affine or homographic transformations. Examples of the affine transformations and homographies can be seen in Appendix A.\nNow, consider an input image x sampled at random from a dataset of images X ⊂ X , where X is the sample space of images. We sample augmentations a, b ∈ AT1 , and apply them to x to produce augmented views x1 and x2, respectively. We then sample an affine/homographic transformation cφ ∈ AT2 and apply it to x1 to produce x′1. Note that x1 and x′1 are related by a homography. This is a core assumption relied upon by further inductive biases we introduce into our model.\nWe now describe the proposed architecture as depicted in Figure 1. Let the mapping f : X → Rp be parameterised by a CNN, and the mappings g : Rp → Rk and h : Rp → Rm be parameterised by MLPs, where p, k, and m are the dimensionality of the encoder latent vector, projection head latent vector, and homography parameter vector, respectively. f and g are the encoder and projection head from the original SimCLR (Chen et al., 2020a) and BYOL (Grill et al., 2020) formulations, respectively, whereas h is a new MLP tasked with estimating the homography parameters. Note that\nif we are regressing all parameters of a general affine transformation, then m = 6, whereas for a full homography we have m = 8. For brevity, we have denoted both streams in the architecture to be a network with the same shared weights, although it may be the case that the two streams consist of networks with different weights (as is the case with BYOL).\nThe loss function for our method contains two terms. First is the original loss function as defined by the original method: NT-Xent for SimCLR and squared L2 for BYOL. We define this first term as L1(z1, z2), where z1 = g(f(x1)) and z2 = g(f(x2)). The second term can be seen as forcing the network to explicitly learn the affine transformation or homography between x1 and x′1. Let the latent representations of x1 and x′1 be l1 = f(x1) and l ′ 1 = f(x ′ 1). We send the vector difference l1− l′1 through h to produce an estimate of the homography’s parameters. We regress to these parameters using mean-squared error: L2(h(l1 − l′1), φ), where φ are the ground truth affine transformation parameters. Thus, the complete loss function is given by:\nL1(z1, z2) + L2(h(l1 − l′1), φ) (5)\nThe vector difference naturally describes the transformation needed to move from l1 to l′1. With our architecture and learning objective, we force this vector difference transformation vector to encode the homography between x1 and x′1. This interpretation may be seen as natural and intuitive. Hence, the L1 term enforces invariance to the transformations in B1 and L2 enforces non-invariance to the transformations in B2. Note that this is still completely self-supervised. Moreover, the restriction imposed in Equation 2 is necessary because we cannot have any transformations in cφ’s sequence that would destroy the fact that x1 and x′1 are related through a homography. For example, adding a cropping transformation would break the homography assumption. One may add transformations that do not break this restriction (e.g. 
colour jitter), however, we do not explore this here.\nWe may interpret this extended architecture as solving a multi-task learning objective in which 1) the contrastive loss between differing augmented views of the image must be minimised, and 2) another network must be able to estimate the homography between images, which explicitly forces the latent space to encode this spatial information during training." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "This section presents an empirical study comparing the original SimCLR and BYOL techniques on the CIFAR10, CIFAR100 and SVHN benchmark datasets, with and without our proposed module. Our goal is not to achieve near state-of-the-art performance on the datasets, but is rather to demonstrate the effectiveness of the proposed additional homography estimation objective under consistent experimental settings. In all cases, the proposed module improves the performance of a linear classifier on the learned representation and improves the learning speed.\nThe experimental setup for the self-supervised training of SimCLR and BYOL can be found in Table 1. The batch size is somewhat lower than the original methods since the original methods focused on performance on ImageNet, which requires a considerably larger batch size to perform well. In some additional experiments, we find performance decreased for our datasets with batch sizes larger than 256 for all methods (original SimCLR and BYOL, as well as our method). Further, we found alternative optimised hyperparameter values (learning rate, optimiser, and weight decay) that worked better than those proposed in the original formulations of SimCLR and BYOL, which can be attributed to similar reasons as the batch size arguments. We use the same type of learning rate decay as the previous methods, and train for the same number of epochs (and warmup epochs) as SimCLR. We use a temperature of 0.5 for the NT-Xent loss and keep all images at their default resolution of 32× 32. Lastly, all reported confidence intervals are the average across 10 trials of the full pipeline trained from scratch (SSL pretraining + linear evaluation).\nPerformance is measured as per the literature, using linear evaluation on the relevant dataset. The experimental setup for linear evaluation can be seen in Table 1. We freeze the encoder and only optimise the weights of a final linear layer using cross-entropy.\nWe parameterise f as ResNet50, while g and h are parameterised as two-layer ReLU MLPs (Figure 1). Further, to ensure consistency with SimCLR, we have that B1 = {random crop, random horizontal flip, colour jitter,Gaussian blur, random grayscale}. The output of h is a six-dimensional real vector, where the six components are defined according to the parameters of a general affine transform: 1) rotation angle, 2) vertical translation, 3) horizontal translation, 4) scaling factor, 5) vertical shear angle, and 6) horizontal shear angle. For a homography, the output of h is instead an eight-dimensional vector. For details about the transformations, see Appendix A." }, { "heading": "4.2 AFFINE AND HOMOGRAPHY OBJECTIVE", "text": "From Tables 2 and 3 (‘+ H’ and ‘+ A’ for homography and affine, respectively) we can see that the estimation of the affine transformation and the homography both assist performance and allow for faster learning. 
In particular, we note statistically significant improvements across all datasets for both SimCLR and BYOL with the affine objective.1 We posit that the ability to explicitly estimate the affine transformation or homography between input images in this way allows the encoder to learn complementary information early on in training that is not available from the contrastive supervision signal. The ability to estimate the affine transform or homography means that the network is encoding the geometry of the input images. This explicit geometric information is not directly available from the contrastive signal. Interestingly, the affine objective outperforms the full homography in all cases, even though an affine transformation is a special case of a homography. We perform a sweep of the distortion amount for the homography and find it consistently performs similarly to or slightly worse than the affine transform (see Appendix B). When the distortion factor becomes too large, accuracy drops noticeably as the images are too distorted to learn from effectively. We note that incorporating our module into a network results in an average 30% additional training time versus the respective original methods.\n1 Using a t-test and a significance level of 1%.\nFigure 2 shows the accuracy of a linear classifier trained on the embeddings extracted from the model at each epoch during SSL training. We can see that performance and convergence improve with the inclusion of the proposed module. The module and its accompanying additional objective of regressing the affine transform/homography may be seen as a regulariser for the original contrastive objective. This is further evidenced by the shaded regions in the figures, in which the proposed method results in more stable performance.\nWe note that the relative benefit of our proposed module diminishes with longer training time for SimCLR and BYOL. This makes sense: the relative benefit of the module decreases as the model learns to estimate the affine transformation or homography more accurately as the epochs progress. We performed additional experiments on SimCLR and BYOL, training the models for longer, and note that the proposed module still outperforms or performs similarly to the original methods on all datasets. This is shown in Table 4. These results also verify the findings of previous works that larger models trained for longer benefit self-supervised architectures (Chen et al., 2020a; Grill et al., 2020; Chen et al., 2020b; Kolesnikov et al., 2019).\nWe note that the performance gap between SimCLR and BYOL evident in Tables 2 and 3 can in general be attributed to the fact that in the original works, BYOL was trained for 10x as long as SimCLR, whereas we trained both for the same number of epochs as the original SimCLR work. We posit that BYOL has simply not converged sufficiently, since BYOL eventually outperforms SimCLR (as evidenced by Table 4). This is consistent with the findings from the original works." }, { "heading": "4.3 INVARIANCE IS NOT ALWAYS DESIRABLE", "text": "In order for a function f to be invariant to a transformation T, we must have that, for all x, f(x) = f(Tx). Thus, one way to encourage invariance to T in a neural network f is to add a term to the loss function which minimises\nL(f(x), f(Tx)) (6)\nfor some measure of similarity L.
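To preface the argument that follows, here is a small sketch (our own, with hypothetical function names) contrasting a loss of the form of Expression 6, which enforces invariance, with the parameter-prediction form used by our module, which does not:

```python
# Our own illustration: two ways of using a transformed view Tx of an image.
import torch.nn.functional as F

def invariance_loss(f, x, x_t):
    # of the form of Expression 6: pulls f(x) and f(Tx) together,
    # so the representation becomes invariant to T
    return F.mse_loss(f(x), f(x_t))

def parameter_prediction_loss(f, h, x, x_t, phi):
    # our module's form: a head h must recover T's parameters phi,
    # so the representation has to retain information about T
    return F.mse_loss(h(f(x) - f(x_t)), phi)
```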
If we rewrite our loss function from Equation 5 in terms of our input image x and augmentations a, b, cφ, we get:\nL1(g(f(a(x))), g(f(b(x)))) + L2(h(f(a(x)) − f(cφ(a(x)))), φ) (7)\nThe first term of the above loss, corresponding to the SimCLR/BYOL loss, is clearly of the form of Expression 6. This means that we are encouraging our representation to be invariant to the transformations within B1. However, the second term in the loss (i.e. the term corresponding to the affine transformation/homography parameter estimation) is not of the form of Expression 6, since we have recast the objective into a parameter prediction task. Thus, we are not encouraging invariance to the transformations within B2. We provide some empirical evidence for this in Table 5. When we recast the module to minimise L2(f(x1), f(x′1)) (L2 being the mean squared error loss), performance decreases notably on all datasets, with an average relative decrease of over 8%. This is because, with this loss, we have enforced invariance to the transformations in B2. In particular, we have encouraged invariance to all the elements of an affine transformation, which proves problematic.\nTo delve deeper into the effect of transformation invariance on performance, we extract only the ‘6’ and ‘9’ classes of the SVHN dataset as a new dataset and repeat the SSL pre-training and linear evaluation tasks. The goal of this experiment is to observe how performance degrades when the neural network is encouraged to be invariant to certain transformations - including rotation - in a setting where such invariance is not desirable. Results can also be seen in Table 5, and they further suggest that invariance to certain transformations is not always desirable. The evidence indicates that transformation invariance (for this particular class of transformations) in SSL may, in fact, hurt performance, even when this may not be expected (as with CIFAR10 and CIFAR100, where no classes seem as though they should be affected by transformation invariance in the way the ‘6’ vs ‘9’ case is). For more details on the invariance analyses, see Appendix C." }, { "heading": "4.4 TRANSFORM COMPONENT ANALYSIS", "text": "Table 6 shows the performance of the various components of an affine transformation in terms of linear evaluation accuracy on the datasets.2 To compute these results, the output dimensionality of the mapping h needs to be changed accordingly. Namely, rotation, translation, scale, and shear have corresponding output dimensionalities m of 1, 2, 1, and 2, respectively. Interestingly, shear alone outperforms the three other transforms on all datasets for both SimCLR and BYOL. We hypothesise that this is because shear corrupts the image the most out of the four transforms, but still in a recognisable way. This forces the networks to learn more complex geometry and information about the object than the other transforms do. We leave further investigation of this to future work." }, { "heading": "4.5 ADDITIONAL ABLATIONS", "text": "We perform various additional experiments to motivate the choice of architecture. We experiment with other means of encoding the latent transformation, specifically concatenation instead of vector difference. However, this results in marginal performance gains of an average of 0.28 percentage points across the 3 datasets. These results do not seem to justify the noticeable additional computational cost from the transformation representation being twice the size.
For primarily this reason, we opt to stick with vector difference. Further, we experiment with having the module operate on the output of g instead of f. Performance degrades for all datasets (SimCLR): 64.38 for CIFAR10, 29.99 for CIFAR100, and 82.49 for SVHN. Lastly, we perform some preliminary experiments into having two modules: one operating on x1 and the other operating on x2 (instead of just one module as per our original experimental setup). The resulting performance difference is negligible with this setup: CIFAR10 65.28 ± 0.61, CIFAR100 31.68 ± 1.04, and SVHN 84.10 ± 0.23. We posit that this is because if one module can solve the homography estimation for x1, then a module operating on x2 will have to be able to solve the homography estimation for it, since the same types of random homographies/affine transformations are being applied to both streams.\n2 Due to apparent instability in training BYOL with our module for these low-dimensional outputs (e.g. a single real-valued output for rotation and scale), we temporarily replace MSE with logcosh, which stabilises training in this setting." }, { "heading": "5 CONCLUSION", "text": "Network size and training time are bottlenecks in modern self-supervised architectures that aim to compete with supervised alternatives on performance. We have shown that the proposed module, which regresses the parameters of an affine transformation or homography as an additional objective, alleviates this training bottleneck with faster convergence and better performance. The architecture of the module does not encourage invariance to the affine or homographic transformation, as invariance has been previously shown to be potentially harmful (Lee et al., 2020). Rather, the proposed module encourages these transformations to be encoded within the latent space itself by directly estimating the parameters of the transformation. Lastly, we note that the affine transformation performs better in all cases than the full homography, even though the homography is a superset of affine transformations. The experiments suggest that the additional perspective component of a homography does not yield any tangible benefit over a regular affine transformation in such low-resolution settings." }, { "heading": "A DATA AUGMENTATION DETAILS", "text": "Tables 7 and 8 detail the parameter values and value ranges used in the experiments for the base transformation sets B1 and B2, respectively. The transformations from B1 are applied with a specified probability. We also normalise the parameters of the affine transformations in the following way. Consider a rotation angle α, translation values tx, ty, and shear angles sx, sy. We perform the following normalisation on these parameters: α := α/360; tx := tx/W; ty := ty/H; sx := sx/smax; sy := sy/smax, where H, W are the image height and width, respectively, and smax is the maximum allowed shear." }, { "heading": "B AFFINE VS HOMOGRAPHY", "text": "In addition to the perspective distortion factor of 0.5, we perform a sweep across this parameter for the values {0.1, 0.2, 0.8}. The results can be seen in Table 9. Interestingly, most distortion factors perform similarly on these datasets, with a distortion factor of 0.5 performing best on average.
However, when the factor gets too large, as is the case for 0.8, the images seemingly become too corrupted for the neural network to learn anything useful.\nC INVARIANCE ANALYSIS\nFigures 4 and 5 show the confusion matrices for a particular run on the SVHN dataset for SimCLR when enforcing (and not enforcing) transformation invariance. Interestingly, transformation invariance negatively affects most classes of the dataset. Unsurprisingly, the classes ‘6’ and ‘9’ are most negatively affected when transformation invariance is enforced. Rotation invariance in this context is prohibitive, and performance subsequently drops. By recasting the module as proposed - encode the transformation and predict its parameters - we do not enforce invariance, and instead allow the network to learn from a richer supervision signal by learning to estimate a homography." } ]
2020
null
SP:0268dac3486fd3de176b7170b12d864092ad856a
[ "The paper under review proposes a variational inference procedure for a specific class of Cox processes whose intensity is derived from a stochastic differential equation. The methodology relies on a restriction of candidate solutions the the subset for which the drift depends on $x_t$, $N_t$ and $t$; the drift is then modelled with a neural network. By simulating from the candidate model, a sample average approximation of the ELBO is used to compute a stochastic gradient descent algorithm, optimize the bound and thus estimate non-parametrically the drift." ]
This paper proposes a stochastic variational inference (SVI) method for computing an approximate posterior path measure of a Cox process. These processes are widely used in natural and physical sciences, engineering and operations research, and represent a non-trivial model of a wide array of phenomena. In our work, we model the stochastic intensity as the solution of a diffusion stochastic differential equation (SDE), and our objective is to infer the posterior, or smoothing, measure over the paths given Poisson process realizations. We first derive a system of stochastic partial differential equations (SPDE) for the pathwise smoothing posterior density function, a non-trivial result, since the standard solution of SPDEs typically involves an Itô stochastic integral, which is not defined pathwise. Next, we propose an SVI approach to approximating the solution of the system. We parametrize the class of approximate smoothing posteriors using a neural network, derive a lower bound on the evidence of the observed point process sample path, and optimize the lower bound using stochastic gradient descent (SGD). We demonstrate the efficacy of our method on both synthetic and real-world problems, and demonstrate the advantage of the neural network solution over standard numerical solvers.
[]
[ { "authors": [ "Cédric Archambeau", "Dan Cornford", "Manfred Opper", "John Shawe-Taylor" ], "title": "Gaussian process approximations of stochastic differential equations", "venue": "Journal of machine learning research,", "year": 2007 }, { "authors": [ "Cédric Archambeau", "Manfred Opper", "Yuan Shen", "Dan Cornford", "John S Shawe-Taylor" ], "title": "Variational inference for diffusion processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Alan Bain", "Dan Crisan" ], "title": "Fundamentals of stochastic filtering, volume 60", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "JMC Clark" ], "title": "The design of robust approximations to the stochastic differential equations of nonlinear filtering", "venue": "Communication systems and random process theory,", "year": 1978 }, { "authors": [ "David R Cox" ], "title": "Some statistical methods connected with series of events", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1955 }, { "authors": [ "Botond Cseke", "Manfred Opper", "Guido Sanguinetti" ], "title": "Approximate inference in latent gaussianmarkov models from continuous time observations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Mark HA Davis" ], "title": "A pathwise solution of the equations of nonlinear filtering", "venue": "Theory of Probability & Its Applications,", "year": 1982 }, { "authors": [ "MHA Davis" ], "title": "Pathwise non-linear filtering. In Stochastic Systems: The Mathematics of Filtering and Identification and Applications, pp. 505–528", "venue": null, "year": 1981 }, { "authors": [ "Robert J. Elliott", "W. Paul Malcolm" ], "title": "General smoothing formulas for Markov-modulated poisson observations", "venue": "IEEE Transactions on Automatic Control,", "year": 2005 }, { "authors": [ "Hadi Fanaee-T", "Joao Gama" ], "title": "Event labeling combining ensemble detectors and background knowledge", "venue": "Progress in Artificial Intelligence, pp", "year": 2013 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Yuval Harel", "Ron Meir", "Manfred Opper" ], "title": "A tractable approximation to optimal point process filtering: Application to neural encoding", "venue": "Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Matthew D Hoffman", "David M Blei", "Chong Wang", "John Paisley" ], "title": "Stochastic variational inference", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Anna Kutschireiter", "Simone Carlo Surace", "Jean-Pascal Pfister" ], "title": "The hitchhiker’s guide to nonlinear filtering", "venue": "Journal of Mathematical Psychology,", "year": 2020 }, { "authors": [ "Xuechen Li", "Ting-Kam Leonard Wong", "Ricky TQ Chen", "David Duvenaud" ], "title": "Scalable gradients for stochastic differential equations", "venue": "arXiv preprint arXiv:2001.01328,", "year": 2020 }, { "authors": [ "Sanjoy K Mitter", "Nigel J Newton" ], "title": "A variational approach to nonlinear estimation", "venue": "SIAM journal on control and optimization,", "year": 2003 }, 
{ "authors": [ "Bernt Oksendal" ], "title": "Stochastic differential equations: an introduction with applications", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Simo Särkkä" ], "title": "Bayesian Filtering and Smoothing. Institute of Mathematical Statistics Textbooks", "venue": null, "year": 2013 }, { "authors": [ "David Schnoerr", "Ramon Grima", "Guido Sanguinetti" ], "title": "Cox process representation and inference for stochastic reaction–diffusion processes", "venue": "Nature Communications,", "year": 2016 }, { "authors": [ "J.H. Van Schuppen" ], "title": "Filtering, prediction and smoothing for counting process observations, a martingale approach", "venue": "SIAM Journal on Applied Mathematics,", "year": 1977 }, { "authors": [ "Robert D. Skeel", "Martin Berzins" ], "title": "A method for the spatial discretization of parabolic equations in one space variable", "venue": "SIAM Journal on Scientific and Statistical Computing,", "year": 1990 }, { "authors": [ "D Snyder" ], "title": "Smoothing for doubly stochastic poisson processes", "venue": "IEEE Transactions on Information Theory,", "year": 1972 }, { "authors": [ "Alex Susemihl", "Ron Meir", "Manfred Opper" ], "title": "Analytical results for the error in filtering of Gaussian processes", "venue": "Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Tobias Sutter", "Arnab Ganguly", "Heinz Koeppl" ], "title": "A variational approach to path estimation and parameter inference of hidden diffusion processes", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Belinda Tzen", "Maxim Raginsky" ], "title": "Neural stochastic differential equations: Deep latent gaussian models in the diffusion limit", "venue": "arXiv preprint arXiv:1905.09883,", "year": 2019 }, { "authors": [ "Ruixin Wang", "Prateek Jaiswal", "Harsha Honnappa" ], "title": "Estimating stochastic poisson intensities using deep latent models", "venue": "arXiv preprint arXiv:2007.06037,", "year": 2020 }, { "authors": [ "Tingting Zhang", "S.C. Kou" ], "title": "Nonparametric inference of doubly stochastic poisson process data via the kernel method", "venue": "The Annals of Applied Statistics,", "year": 2010 }, { "authors": [ "Xiaowei Zhang", "L Jeff Hong", "Jiheng Zhang" ], "title": "Scaling and modeling of call center arrivals", "venue": "In Proceedings of the Winter Simulation Conference", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Cox processes (Cox, 1955; Cox & Isham, 1980), also known as doubly-stochastic Poisson processes, are a class of stochastic point processes wherein the point intensity is itself stochastic and, conditional on a realization of the intensity process, the number of points in any subset of space is Poisson distributed. These processes are widely used in the natural and physical sciences, engineering and operations research, and form useful models of a wide array of phenomena.\nWe model the intensity by a diffusion process that is the solution of a stochastic differential equation (SDE). This is a standard assumption across a range of applications (Susemihl et al., 2011; Kutschireiter et al., 2020). The measure induced by the solution of the SDE serves as a prior measure over sample paths, and our objective is to infer a posterior measure over the paths of the underlying intensity process, given realizations of the Poisson point process observations over a fixed time horizon. This type of inference problem has been studied in the Bayesian filtering literature (Schuppen, 1977; Bain & Crisan, 2008; Särkkä, 2013), where it is of particular interest to infer the state of the intensity process at any past time given all count observations till the present time instant (the resulting posterior is called the smoothing posterior measure).\nIn a seminal paper, Snyder (1972) derived a stochastic partial differential equation (SPDE) describing the dynamics of the corresponding posterior density for Cox processes. The solution of this smoothing SPDE requires the computation of an Itô stochastic integral with respect to the counting process. It has long been recognized (Clark, 1978; Davis, 1981; 1982) that for stochastic smoothing (and filtering) theory to be useful in practice, it should be possible to compute smoothing posteriors conditioned on a single observed sample path. However, Itô integrals are not defined pathwise and deriving a pathwise smoothing density is remarkably hard. 30 years after Synder’s original work Elliott & Malcolm (2005) derived a pathwise smoothing SPDE in the form of a coupled system of forward and backward pathwise SPDEs. Nonetheless, solving the system of pathwise SPDEs, or sampling from the corresponding SDE, is still challenging and intractable in general. It is well known, for example, that numerical techniques for solving these SPDEs, such as the finite element\nmethod (FEM), suffers from the curse of dimensionality (Han et al., 2018). Therefore, it is of considerable interest to find more efficient methods to solve the smoothing SPDE.\nWe take a variational inference approach to computing an approximate smoothing posterior measure. Variational representations of Bayesian posteriors in stochastic filtering and smoothing theory have been developed in considerable generality; see (Mitter & Newton, 2003) for a rigorous treatment. There are a number of papers that consider the computation of an approximate posterior distribution over the paths of the underlying intensity process that is observed with additive Gaussian noise (Archambeau et al., 2007; 2008; Cseke et al., 2013; Susemihl et al., 2011; Sutter et al., 2016). Susemihl et al. (2011) studied Bayesian filtering of Gaussian processes by deriving a differential equation characterizing the evolution of the mean-square error (MSE) in estimating the underlying Gaussian process. On the other hand, Sutter et al. 
(2016) compute a variational approximation to the smoothing posterior density when the underlying diffusion intensity is observed with additive Brownian noise. They choose their variational family to be a class of SDEs with an analytically computable marginal density. This setting is considerably different from our setting, where the observed process is a point process. Nonetheless, Sutter et al. (2016) provides methodological motivation for our current study. In the context of the computation of approximate smoothing/filtering posteriors for point process observations, Harel et al. (2015) developed an analytically tractable approximation to the filtering posterior distribution of diffusion modulated marked point processes under specific modeling assumptions suited for a neural encoding/decoding problem. In general, however, analytical tractability cannot be assured without restrictive assumptions.\nWe present a stochastic variational inference (SVI) (Hoffman et al., 2013) method for computing a variational approximation to the smoothing posterior density. Our approach fixes an approximating family of path measures to those induced by a class of parametrized SPDEs. In particular, we parametrize the drift function of the approximating SPDEs by a neural network with input and output variables matching the theoretical smoothing SPDE. Thereafter, using standard stochastic analysis tools, we compute a tractable lower bound to the evidence of observing a sample path of count observations, the so-called evidence lower bound (ELBO). A sample average approximation (SAA) to the ELBO is further computed by simulating sample paths from the stochastic differential equation (SDE) corresponding to the approximating SPDE. Finally, by maximizing the ELBO, the neural network is trained using stochastic gradient descent (SGD) utilizing multiple batches of sample paths of count observations. Note that each sample path of the count observations entails the simulation of a separate SDE. We note that there are many problems in the natural and physical sciences, engineering and operations research where multiple paths of a point process (over a finite time horizon) may be obtained. For instance, we present an example in Section 5 modeling the demand for bikes in a bike-sharing platform over a 24-hour, one-day time period, where the underlying driving intensity is subject to stochastic variations and demand information is collected over multiple days.\nIn contrast to the variational algorithm developed in Sutter et al. (2016), where the variational lower bound must be re-optimized for new sample paths of the observation process, our variational method is more general, and our approximation to the smoothing posterior can be used as a map for another (unobserved) sample path of count observations. Our computational approach can also be straightforwardly adapted to solve the problem of interest in Sutter et al. (2016).\nIn the subsequent sections, we describe our problem and method in detail and demonstrate the utility of our method with the help of numerical experiments. In particular, we show how the choice of approximating family enables us to use the trained neural network, and in turn the variational Bayesian smoothing posterior (VBSP), to compute the smoothing SPDE in almost three-quarters of the computational time required to compute the original smoothing SPDE using FEM.
Moreover, we also efficiently generate Monte Carlo samples from the learned VBSP and use them for inference on the bike-sharing dataset, whereas FEM failed to compute either the VBSP or the true smoothing density for the given time-space discretization." }, { "heading": "2 PROBLEM DESCRIPTION", "text": "Let Nt be a Cox process with unknown stochastic intensity {zt ∈ R+, t ∈ [0, T]}. We use Nt′,t to denote a sample path realization of Nt restricted to the interval [t′, t], and use Nt to denote Nt − N0; recall that N0 = 0 by definition. As noted before, a Cox process conditioned on the intensity is a Poisson process. Therefore, given a realized sample path {zt, t ∈ [0, T]} of the intensity, and for any 0 ≤ t′ < t ≤ T, the marginal likelihood of observing Nt − Nt′ ∈ N counts in (t′, t] is\nNt − Nt′ ∼ L(Nt − Nt′ | {zs}_{t′<s≤t}) := (∫_{t′}^{t} zs ds)^{Nt−Nt′} e^{−∫_{t′}^{t} zs ds} / (Nt − Nt′)!, (1)\nwhere L denotes the Poisson likelihood. Rather than directly modeling the intensity z, we will bring a little more flexibility to our setting, and assume that zt is a deterministic transformation of another stochastic process xt through a known mapping h : R^d → R+: zt = h(xt). Note that the non-negative range of h ensures that the Poisson intensity zt = h(xt) is non-negative. Unless xt ∈ R+, the mapping h cannot be an identity function. We use the term intensity process to refer to either zt or xt.\nWe model the intensity process {xt ∈ R^d, ∀t ∈ [0, T]} with the following SDE,\ndxt = b(xt)dt + σ(xt)dBt, ∀t ≤ T, and x0 = 0, (2)\nwhere b : R^d → R^d is the drift function, σ(·) : R^d → R^{d×d} is the diffusion coefficient, and Bt is a d-dimensional Brownian motion (or Wiener process). We assume that there exists a strong solution to the SDE above (Oksendal, 2013, Chapter 5). Moreover, we assume that b(·), h(·), and σ(·) are fixed by the modeler a priori, and we are interested in inferring the unknown intensity process under their fixed definitions. Learning these objects as well would obscure our main contribution, and we leave it for future work.\nThe model of the count observations above forms a diffusion modulated Cox process. Diffusion modulated Cox processes are widely used to model the arrival process in various service systems such as call centers, hospitals and airports (Zhang et al., 2014; Wang et al., 2020). Zhang & Kou (2010) use a Gaussian process modulated Cox process to infer proteins' conformation; in particular, they model the arrival rates of the photons collected from a laser-excited protein molecule as a Gaussian process. Schnoerr et al. (2016) model spatio-temporal stochastic systems from systems biology and epidemiology using Cox processes where the intensity is modelled with diffusions.\nAs stated in the introduction, we seek to infer the smoothing posterior measure over the unknown intensity process {xt, t ∈ [0, T]} using the count observations up to time T. Following terminology from Bayesian filtering theory (Särkkä, 2013), we use smoothing to refer to inferring the unobserved intensity process at any past time given the observations up to the current time. Mathematically, the smoothing posterior is defined using conditional expectations of the form E[f(xt)|s(Nu, u ∈ [0, T])], where s(Nu, u ∈ [0, T]) is the smallest sigma algebra (or filtration) generated by the Cox process {Nt} from time 0 to T. For brevity we write E[f(xt)|s(Nu, u ∈ [0, T])] as E[f(xt)|N0,T]. Interested readers may refer to Kutschireiter et al.
(2020) for more details on non-linear filtering theory.\nWe now provide a formal derivation of the smoothing posterior using Bayes' theorem (Bain & Crisan (2008); Elliott & Malcolm (2005)). Observe that the conditional expectation satisfies\nE[f(xt)|N0,T] = E†[Λ0,T f(xt)|N0,T] / E†[Λ0,T|N0,T] (3)\nfor any measurable function f(·) and Λs,t := L(Ns,t)/L†(Ns,t) for any 0 ≤ s < t ≤ T, where L† is the unit-intensity Poisson likelihood and E†[·] denotes the expectation with respect to L†. Note that L† does not depend on the stochastic intensity process x and forms a reference measure. The marginal smoothing posterior density is defined as\npt(x|N0,T) := P(xt ∈ dx|N0,T), (4)\nwhich can be formally obtained from equation 3 by setting f(xt) = I{A}(xt) for any A ⊆ R^d, where I{A}(y) is an indicator function that equals 1 when y ∈ A, and 0 otherwise. Now, define the unnormalized filtering density function q̄t(x) as the function satisfying\nP(xt ∈ dx|N0,t) = q̄t(x)dx / ∫_{R^d} q̄t(ξ)dξ, (5)\nand also define v̄t(x) := E†[Λt,T|N0,T]. Then, it can be shown (Elliott & Malcolm (2005)) that for any measurable function f,\nE[f(xt)|N0,T] = E†[Λ0,T f(xt)|N0,T] / E†[Λ0,T|N0,T] = ∫_{R^d} f(ξ)q̄t(ξ)v̄t(ξ)dξ / ∫_{R^d} q̄t(ξ)v̄t(ξ)dξ. (6)\nNext, recalling that h(·) is the mapping that ensures the intensity process is positive, define the function Ψt for a given sample path of count observations (i.e., pathwise) as\nΨt := Ψ(h(x), t, Nt) = exp[(1 − h(x))t + Nt log h(x)], ∀x ∈ R^d.\nFollowing Elliott & Malcolm (2005, Theorem 4) one may use Ψt to derive a coupled system of pathwise SPDEs that characterize q̄t(x) and v̄t(x). In particular, they show that qt = Ψt^{−1} q̄t is a solution to the following SPDE\n∂t qt(x) = Ψt^{−1} L*[Ψt qt(x)], ∀t ≤ T, q0(x) = δ_{x0}(x), (7)\nwhere L* is the adjoint of L[F(x)] = (1/2) Σ_{i,j} a_{i,j}(x) ∂_{xi xj} F(x) + Σ_i b_i(x) ∂_{xi} F(x), which is the infinitesimal generator of the prior process for any twice-differentiable, continuous, and bounded function F : R^d → R, with a(x) = σ(x)σ(x)^T, and δ_{x0}(x) is the Dirac delta distribution at x0. Moreover, they also show that vt(x) = Ψt v̄t(x) satisfies the following backward parabolic equation\n∂t vt(x) = −Ψt L[Ψt^{−1} vt(x)], (8)\nwith terminal condition vT(x) = ΨT(x).\nNow it follows from equation 6 that, using the solution of these two SPDEs, the marginal smoothing posterior density for any t ∈ [0, T] satisfies\npt(x|N0,T) = qt(x)vt(x) / ∫_{R^d} qt(ξ)vt(ξ)dξ. (9)\nUsing the SPDEs in equations 7 and 8, together with 9, it can be shown that the marginal smoothing posterior density pt(x|N0,T) satisfies its own SPDE: for any t ∈ [0, T],\n∂t pt(x|N0,T) = −Σ_i ∂_{xi}[{(a(x)[∇ log(Ψt^{−1} vt(x))])_i + b_i(x)} pt(x|N0,T)] + (1/2) Σ_{i,j} ∂_{xi xj}[a_{i,j}(x) pt(x|N0,T)] (10)\nwith p0(x|N0,T) = δ_{x0}(x) and x0 = 0. We present a detailed derivation in Appendix A.1. Corresponding to this SPDE, there exists a smoothing posterior SDE, defined as\ndx̄t = {a(x̄t)[∇ log(Ψt^{−1} vt(x̄t))] + b(x̄t)} dt + σ(x̄t) dB̄t, with x̄0 = 0, (11)\nwhere {x̄t} is a modification of the process {xt} such that B̄t is independent of the Cox process Nt (and thus Bt).\nObserve that the entire sample path of the count observations N0,T is summarized through the pathwise functions Ψt and vt together in the drift term of this SDE. Also note that the diffusion coefficient of the smoothing posterior SDE is precisely the same as that of the prior SDE.
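As an illustration of the generative model just described, the following sketch (our own, not the authors' code) simulates a path of the prior SDE by Euler-Maruyama and draws binned Poisson counts. The choices b(x) = −x, σ(x) = 1 and the Gaussian-bump map h follow the univariate experiment in Section 5.1; the function name and grid size are hypothetical.

```python
# Hypothetical sketch: simulate the Cox model of Section 2 on a uniform grid,
# with dx_t = b(x_t) dt + sigma(x_t) dB_t and bin counts ~ Poisson(h(x_t) dt).
import numpy as np

def simulate_cox_path(T=2.0, K=200, b=lambda x: -x, sigma=lambda x: 1.0,
                      h=lambda x: 5.0 * np.exp(-0.08 * (x - 5.0) ** 2),
                      rng=np.random.default_rng(0)):
    dt = T / K
    x = np.zeros(K + 1)                       # x_0 = 0
    counts = np.zeros(K, dtype=int)
    for i in range(K):
        # Euler-Maruyama step of the prior SDE (equation 2)
        x[i + 1] = (x[i] + b(x[i]) * dt
                    + sigma(x[i]) * np.sqrt(dt) * rng.standard_normal())
        # Poisson count in (t_i, t_{i+1}] given the intensity z = h(x)
        counts[i] = rng.poisson(h(x[i]) * dt)
    return x, counts
```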
The computation of the drift term in the smoothing posterior SDE requires solving equation 8 for vt(x), which, in turn, makes the posterior computation challenging and computationally intractable in general. Consequently, so is the computation of the marginal posterior density (and hence the path measure). Therefore, we propose a variational inference-based method to compute an approximation to the solution of the smoothing posterior SPDE, by computing an approximate solution to the smoothing posterior SDE in equation 11." }, { "heading": "3 VARIATIONAL BAYES FOR APPROXIMATING THE SMOOTHING DENSITY", "text": "Observe that the posterior path measure is the maximizer of the following variational optimization problem (Mitter & Newton (2003, Proposition 2.1)),\nmax_{Π(x0,T) ∈ P(C)} { −KL(Π‖Π0) + ∫ dΠ(x0,T) log L(N0,T|h(x0,T)) }, (12)\nwhere P(C) is the space of all measures absolutely continuous with respect to Π0, the measure induced by a solution of the intensity SDE (equation 2) on the space C[0, T] of continuous functions with support [0, T], and KL denotes the Kullback-Leibler divergence between two absolutely continuous measures. Note that P(C) also contains Π0. Solving this optimization problem over all measures in P(C) is intractable. Therefore, we choose the subset of absolutely continuous measures Qb̄ ⊂ P(C) induced by solutions of the following SDE:\ndxt = b̄(xt, Nt, t)dt + σ(xt)dBt, for t ≤ T and x0 = 0, (13)\nwhere b̄(·, ·, ·) : R^d × N × [0, T] → R^d is the drift function. We term this space of measures Qb̄ the variational family. The measures in Qb̄ are absolutely continuous with respect to Π0 as they have the same diffusion coefficient; therefore Qb̄ ⊂ P(C). This choice of the variational family is not arbitrary, but rather motivated by the smoothing posterior SDE derived in equation 11, where the diffusion coefficient is σ(·) and the drift coefficient has an intractable form (that depends on the prior drift b(·) and diffusion coefficient σ(·), and on Nt through Ψt and vt(·)). Notice that the choice of drift function spans the space of measures in the variational family Qb̄. Since Qb̄ ⊂ P(C), it follows from equation 12 that\nmax_{Π(x0,T) ∈ P(C)} { −KL(Π‖Π0) + ∫ dΠ(x0,T) log L(N0,T|h(x0,T)) } ≥ max_{Q ∈ Qb̄} { −KL(Q‖Π0) + ∫ dQ(x0,T) log L(N0,T|h(x0,T)) }. (14)\nThe right hand side above is known as the evidence lower bound (ELBO). The corresponding ELBO maximization problem to compute the optimal Q ∈ Qb̄ (for a given sample path N0,T) is simply\nQ*(·|N0,T) = argmax_{Q ∈ Qb̄} EQ{log L(N0,T|x0,T)} − KL(Q‖Π0). (15)\nNote that absolutely continuous measures on path space correspond to changes in the drift function, for a fixed diffusion coefficient (else, the measures are singular). As a consequence of Girsanov's theorem (Oksendal, 2013, Theorem 8.6.8) (see Appendix A.2 for the proof) we have\nKL(Q‖Π0) = (1/2) EQ[∫_0^T ‖σ^{−1}(xt)(b(xt) − b̄(xt, Nt, t))‖² dt], (16)\nwhere recall that b(·) is the drift of the prior SDE defined in equation 2 and b̄(·, ·, ·) is the drift of the variational SDE. Substituting this into equation 15 yields\nQ*(·|N0,T) = argmax_{Q ∈ Qb̄} EQ{ log L(N0,T|x0,T) − (1/2) ∫_0^T ‖σ^{−1}(xt)[b(xt) − b̄(xt, Nt, t)]‖² dt }. (17)\nWe call Q*(·|N0,T) the variational Bayesian smoothing posterior (VBSP) path measure. Next, we lay down the details of the SVI algorithm that solves the above optimization problem to compute the VBSP."
}, { "heading": "4 STOCHASTIC VARIATIONAL INFERENCE OF THE VBSP", "text": "It is evident from the ELBO in equation 17 and the definition of the variational familyQb̄ that computing the VBSP measure entails the computation of the unknown drift function b̄(·, ·, ·) in equation 13. We further restrict the family of measures Qb̄ by assuming the drift functions belong to a class of parametrized, smooth functions. A feasible way to model this class of drift functions is through a neural network. However, it is possible to use simpler approximation function classes as done in, for example, Sutter et al. (2016), who fixed b̄ to ensure that the marginal distributions of the variational smoothing SDE belong to a specific exponential family of distributions. We note\nthat in choosing a parametrized class, we must still ensure that the resulting drift functions are Lipschitz continuous and satisfy sufficient regularity so that a solution to the SDE equation 13 exists. Furthermore, restricting the drift functions in this way entails a further restriction of the class of (approximating) path measures. An open question is here is how much of a loss this entails (in terms of the Kullback-Leibler divergence from the ‘true’ posterior path measure in this instance).\nTo fix the idea, we assume that b̄(·, ·, ·) lies in a general class of functions parametrized by θ. Henceforth, we write the drift coefficient as b̄(·, ·, ·, θ) to make its dependence on θ explicit. We use stochastic gradient descent (SGD) to maximize the ELBO, which requires the computation of stochastic gradients of the ELBO with respect to θ. To compute the gradients, we generate sample paths of xθ, the solution of the variational SDE equation 13 for a given θ, using a first-order EulerMaruyama integration of the SDE. We do this for convenience, though higher order approximations could be used. Specifically, we partition the time interval [0, T ] in K equal sub-intervals of length ∆ = T/K, {t0, t1, . . . tK}, where t0 = 0 and tK = T and then generate the sequence of {xθti} K i=1 using the following recursive equation and initial condition xθt0 = 0:\nxθti = x θ ti−1 + b̄(x θ ti−1 , Nti−1 , ti−1, θ)∆ + ∆σ(x θ i−1)Zi,∀ i ∈ {1, 2, . . .K} (18)\nwhere {Zi}Ki=1 is the sequence of K independent and identically distributed (i.i.d.) d−dimensional standard Gaussian random vectors.\nWe generate M independent sample paths of the discrete-time process in equation 18 denoted as {xθ,mti }, for m ∈ {1, 2, . . .M}, to compute a sample average approximation (SAA) of the ELBO over the partition {t0, t1, . . . tK} as\nÊLBO = 1\nM M∑ m=1 [ K−1∑ i=0 [ logL(Nti,ti+1 |h(x θ,m ti )∆)− 1 2 ‖σ(xθ,mti ) −1[b(xθ,mti , ti)− b̄(x θ,m ti , Nti , ti, θ)]‖ 2∆ ]]\n= 1\nM M∑ m=1 [ K−1∑ i=0 [ Nti,ti+1 log(h(x θ,m ti )∆)− h(xθ,mti )∆− 1 2 ‖σ(xθ,mti ) −1[b(xθ,mti , ti)− b̄(x θ,m ti , Nti , ti, θ)]‖ 2∆ ]] + C,\n(19)\nwhere we used the definition of the Poisson likelihood L from equation 1 and C is a constant independent of θ. Now, to compute gradients of ELBO with respect to θ, observe that the gradient operator can be exchanged with the expectation, since the only source of randomness in each sample path of xθ,mt are K i.i.d. Gaussian random vectors {Zmi }Ki=1, which are independent of θ. Notice too that this is a pathwise analog of the reparametrization trick, and has been used recently to learn deep latent models (Tzen & Raginsky, 2019; Li et al., 2020). Also, note that thus far the ÊLBO is defined for a single sample path of the count observations N0,T . 
However, as noted before, in our method we will also take a sample average of ÊLBO over multiple batches of sample paths of count observations at each epoch of the training algorithm." }, { "heading": "5 NUMERICAL EXPERIMENTS", "text": "We present three experiments demonstrating the efficacy and utility of our proposed SVI method. First, we consider a setting where the underlying stochastic intensity process is 1-dimensional. We compare the SVI approximation with the 'true' smoothing posterior density computed using the solution of the forward and backward SPDEs defined in equation 7 and equation 8. We solve the SPDEs using a standard finite element method (FEM) solver (Skeel & Berzins, 1990, Matlab solvers for 1-D PDEs). In a second experiment, we demonstrate the performance of our algorithm on a subset of a Bike-sharing dataset obtained from the UCI machine learning repository (Fanaee-T & Gama, 2013). In this experiment we estimate a smoothing posterior density for the observed counts of the demand for bikes in a 24 hour period, assuming that the demand process is well-modeled by a Cox process. In our third and final experiment, we apply our method to compute an approximation to a 4-dimensional smoothing posterior density. We note that despite being low dimensional, the standard FEM solver does not scale to this setting, while our method can be straightforwardly adapted." }, { "heading": "5.1 VARIATIONAL APPROXIMATION OF UNIVARIATE SMOOTHING POSTERIOR DENSITY", "text": "As defined in Section 2, we are interested in learning the posterior measure over an unknown process {xt ∈ R}, where the intensity process {zt} satisfies zt = h(xt). We set b(x) = −x and σ(x) = 1 in the prior SDE defined in equation 2. Furthermore, motivated by the mathematical structure of the true smoothing SDE (equation 11), we fix our variational family Q to be the class of measures induced by solutions to the class of SDEs in equation 13 with drift and diffusion coefficients set as\nb̄(x, t, Nt) = −Ψ′t/Ψt + V(x, t, NT − Nt; θ) − x, σ(x) = 1,\nwhere Ψ′t is the derivative of Ψt with respect to x. Here V(x, t, NT − Nt; θ) is modeled using a neural network with 2 hidden layers whose parameters we call θ (see Appendix A.3 for more details on the architecture)." }, { "heading": "5.1.1 SIMULATED DATASET", "text": "We generate sample paths of the count observation N0,T from a non-homogeneous Poisson process, where the intensity process {z^0_t} is the solution of the following ordinary differential equation: dz^0_t/dt = 20(2 − t) exp(−0.85(2 − t)²), with z^0_0 = 0. We fix the map h(a) = 5 exp(−0.08(a − 5)²) in this experiment. To train the neural network V, we use 150 sample paths of the count observation over the time interval [0, 2] and optimize the ÊLBO in equation 19 using Adam (Kingma & Ba, 2014).\nTo demonstrate the efficacy of our approach, we first generate 20 test sample paths of count observations and compute the true smoothing posterior density (defined in equation 9) using the solution of the forward and backward SPDEs (defined in equation 7 and equation 8, respectively). We do this using FEM. Then, for the same test observations, we compute the VBSP density by FEM using the trained drift coefficient of the VBSP SDE (see equations 10 and 11).\nComparative results are presented in Figure 1. We clearly see from the top row plot that these are very similar to each other, with our variational approximation capturing the sharp rises in density with high fidelity.
Note from the first two plots in the bottom row that as the ELBO decreases, the gap between the true and VB smoothing posteriors reduces too. Moreover, the time required to compute the smoothing density using the forward and backward SPDEs on the test data is about 2.2 seconds, which is approximately fifty percent more than the 1.6 seconds required for the trained VBSP density (on a 3.1 GHz Intel i5 CPU). This is due to the fact that we are required to solve one SPDE in the latter case instead of two in the former case.\nNotice that the learnt drift of the smoothing SDE in equation 13 is a map which can be used with any sample path of count observations to generate Monte Carlo samples from an approximate smoothing posterior density. In contrast, it is challenging to sample from the true smoothing SDE, as it involves computing the solution of the system of SPDEs in equations 7 and 8. Furthermore, this solution must be recomputed for each new sample path of count observations." }, { "heading": "5.1.2 BIKE SHARING DATASET", "text": "In this experiment, we compute the VBSP density for the hourly counts of demand in a bike-sharing system. The experimental setting remains unaltered, except that the diffusion coefficient is set to σ(x) = 1.1, to capture the larger variability in the counts of bike demand compared with the variability of the simulated counts in the previous experiment. Notice that fixing the map h is a modeling question, and we consider mappings of the form h(x) = a exp(−b(x − c)²) parametrized by a, b, and c. We take a simple empirical Bayes heuristic to fix their values after observing the count observations, based on the fact that h(x)∆ is the mean of the Poisson counts in the interval ∆. We set a in such a way that h(x)∆ equals the maximum of the median observed count. For the Bike-sharing data we re-scaled the problem to the interval [0, 2] and fixed ∆ = 0.083, and thus choose 90/0.083 ≈ 1050 as a. The choice of b and c depends on the diffusion coefficient σ(x) and x0, as the SDE should be able to explore the relevant domain of h to appropriately model the actual count observations. Thus, after looking at the count data, we chose h(x) = 1050 exp(−0.001(x − 50)²). The empirical results are summarized in Figure 2. We note here that the FEM approach (our implementation) to compute the VBSP density was numerically unstable and failed; this may be attributed to the concentrated nature of the VBSP, as visible in the plots. It is evident from the last plot in Figure 2 that, for the current h, the VBSP density captures the trend in the real count observations well; however, we anticipate that a different choice of the map h can better model the count observations.\nNote that in a smoothing problem, the ingredients of the prior intensity process (specifically the drift coefficient b(·), the diffusion coefficient σ(·) and the map h(·)) are sourced from an expert, and the objective is to update the modeler's beliefs with count observations to compute the smoothing posterior density. In many settings, the functions b(·), σ(·) and h(·) are known only up to some unknown parameters. It is fairly straightforward to combine the parameters of h and b with the neural network parameters θ, and learn them all in a data-driven manner. We choose not to do this to keep the discussion simple. Learning the parameters of σ presents a slightly greater challenge, since the path measures for different settings of σ are singular. This is a crucial difference from the finite dimensional setting, where the Lebesgue measure is a common reference.
We leave solving this problem for future work." }, { "heading": "5.2 VARIATIONAL APPROXIMATION OF MULTIVARIATE SMOOTHING POSTERIOR DENSITY", "text": "We demonstrate our method on a 4-dimensional smoothing problem. In this case, we fix the map h(a) = 25 ∗ exp(−.08 ∗ ‖a − 5‖2), where ‖ · ‖ is the L-2 norm, and a ∈ R4. We also choose the prior density to be induced by an SDE defined in equation 2 with b(x) = −x and σ(x) = I , where x ∈ Rd and I is a d × d identity matrix. Furthermore, we choose our variational family to be a family of SDE as defined in equation 13, with drift and diffusion coefficients, b̄(x, t, Nt) = −∇ΨtΨt + Vd(x, t, NT −Nt; θ)− x and σ(x) = I .\nTo train the neural network V , we use 150 samples paths of the count observation between time interval [0, 2] and optimize the ELBO defined in equation 19 using Adam (Kingma & Ba, 2014). We plot the empirical results in Figure 3." }, { "heading": "A APPENDIX", "text": "A.1 DERIVATION OF SMOOTHING SPDE According to Theorem * [sic] in Elliott & Malcolm (2005), Kt := ( ∫ Rd qt(ξ)vt(ξ)dξ)\n−1 is almost surely constant for t ≤ T . Using this result it follows that\n∂tpS,t =Ktqt(x)∂tvt(x) +Ktvt(x)∂tqt(x)\n=Ktqt(x)∂tvt(x) +Ktvt(x) [ Ψ−1t L ∗[Ψt pS,t\nKtvt(x) ] ] =− pS,t\nvt(x) ΨtL[Ψ\n−1 t vt(x)] + vt(x) [ Ψ−1t L\n∗[Ψt pS,t vt(x) ] ] =− pS,t\nVt(x) L[Vt(x)] + Vt(x)\n[ L∗[\npS,t Vt(x) ]\n] . (20)\nwhere Vt(x) = Ψ−1t vt(x) and PS,t = pt(x|N0,T ) are introduced for brevity. Now observe that\nVt(x)L ∗ [ pS,t Vt(x) ] = Vt(x) 1 2 ∑ i,j ∂xixj ( ai,j(x) pS,t Vt(x) ) − ∑ i ∂xi ( bi(x) pS,t Vt(x) ) = Vt(x) 1\n2 ∑ i,j ∂xixj ( ai,j(x) pS,t Vt(x) ) + ∑ i [ ∂xiVt(x)bi(x)pS,t Vt(x) ] − ∑ i [(∂xibi(x)pS,t + bi(x)∂xipS,t)] . (21)\nConsider the summand in the first term of equation 21 and observe that\n∂xixj [ ai,j(x)pS,t Vt(x) ] = ∂xi [ Vt(x)∂xj (ai,j(x)pS,t)− ∂xjVt(x)ai,j(x)pS,t V 2t (x) ] = Vt(x)∂xixj (ai,j(x)pS,t)− ∂xiVt(x)∂xj (ai,j(x)pS,t)\nV 2t (x)\n+ 2∂xiVt(x)\nV 3t (x)\n[ ∂xjVt(x) (ai,j(x)pS,t) ] − 1 V 2t (x) ∂xi ( ∂xjVt(x)ai,j(x)pS,t ) Now the second term in equation 21 can be expressed as\nVt(x)∂xixj [ ai,j(x)pS,t Vt(x) ] = ∂xixj (ai,j(x)pS,t)− ∂xiVt(x)∂xj (ai,j(x)pS,t) Vt(x)\n+ 2∂xiVt(x)\nV 2t (x)\n[ ∂xjVt(x) (ai,j(x)pS,t) ] − 1 Vt(x) ∂xi [ ∂xjVt(x)ai,j(x)pS,t ] (22)\nSubstituting the above expression in equation 21, we have Vt(x)L ∗ [ pS,t Vt(x) ] = 1 2 ∑ i,j { ∂xixj (ai,j(x)pS,t)− ∂xiVt(x)∂xj (ai,j(x)pS,t) Vt(x)\n+ 2∂xiVt(x)\nV 2t (x)\n[ ∂xjVt(x) (ai,j(x)pS,t) ] − 1 Vt(x) ∂xi [ ∂xjVt(x)ai,j(x)pS,t ] } + ∑ i [ ∂xiVt(x)bi(x)pS,t Vt(x) ] − ∑ i [(∂xibi(x)pS,t + bi(x)∂xipS,t)] . (23)\nNow consider the first term in the RHS of equation 20\n− pS,t Vt(x) L[Vt(x)] =− 1 2 ∑ i,j ai,j(x)pS,t Vt(x) ∂xixjVt(x)− ∑ i bi(x)pS,t Vt(x) ∂xiVt(x). 
(24)\nNow substituting equation 23 and equation 24 into equation 20, we obtain\n∂ ∂t pS,t = − pS,t Vt(x) L[Vt(x)] + Vt(x)\n[ L∗[\npS,t Vt(x) ] ] =− 1\n2 ∑ i,j ai,j(x)pS,t Vt(x) ∂xixjVt(x)− ∑ i bi(x)pS,t Vt(x) ∂xiVt(x) + 1 2 ∑ i,j { ∂xixj (ai,j(x)pS,t)\n− ∂xiVt(x)∂xj (ai,j(x)pS,t)\nVt(x) +\n2∂xiVt(x)\nV 2t (x)\n[ ∂xjVt(x) (ai,j(x)pS,t) ] − 1 Vt(x) ∂xi [ ∂xjVt(x)ai,j(x)pS,t ] } + ∑ i [ ∂xiVt(x)bi(x)pS,t Vt(x) ] − ∑ i [(∂xibi(x)pS,t + bi(x)∂xipS,t)]\n= 1\n2 ∑ i,j ∂xixj (ai,j(x)pS,t)− ∑ i ∂xi [bi(x)pS,t]− 1 2 ∑ i,j {∂xixjVt(x) Vt(x) [ai,j(x)pS,t]\n+ ∂xiVt(x)∂xj (ai,j(x)pS,t)\nVt(x) −\n2∂xiVt(x)∂xjVt(x)\nV 2t (x) [(ai,j(x)pS,t)]\n+ 1\nVt(x)\n[ ∂xjVt(x)∂xi(ai,j(x)pS,t) + ∂xixjVt(x)ai,j(x)pS,t ] } = 1\n2 ∑ i,j ∂xixj (ai,j(x)pS,t)− ∑ i ∂xi [bi(x)pS,t]\n− 1 2 ∑ i,j { ∂xi [ ∂xj log Vt(x)[ai,j(x)pS,t] ] + ∂xiVt(x)∂xj (ai,j(x)pS,t) Vt(x)\n− ∂xiVt(x)∂xjVt(x)\nV 2t (x) [(ai,j(x)pS,t)] +\n1\nVt(x)\n[ ∂xixjVt(x)ai,j(x)pS,t ] } Since, ∂xixjVt(x) = ∂xjxiVt(x) therefore\n∂ ∂t pS,t = 1 2 ∑ i,j ∂xixj (ai,j(x)pS,t)− ∑ i ∂xi [bi(x)pS,t]\n− 1 2 ∑ i,j { ∂xi [ ∂xj log Vt(x)[ai,j(x)pS,t] ] + ∂xj [∂xi log Vt(x)[ai,j(x)pS,t]] } = 1\n2 ∑ i,j ∂xixj (ai,j(x)pS,t)− ∑ i ∂xi [bi(x)pS,t]− ∑ i,j { ∂xi [ ∂xj log Vt(x)[ai,j(x)pS,t] ] } = 1\n2 ∑ i,j ∂xixj (ai,j(x)pS,t)− ∑ i ∂xi {[(a(x)[∇ log Vt(x)])i + bi(x)]pS,t} . (25)\nA.2 KL-DIVERGENCE BETWEEN A MEMBER OF VARIATIONAL FAMILY AND PRIOR SDE\nWe derive a pathwise expression for KL(Q‖Π0) for a given count observation path N0,T . Theorem A.1. Define u(xt, t, Nt; θ) := σ(xt) −1 ( b(xt, t)− b̄(xt, t, Nt; θ) )\nand suppose that u satisfies a strong Novikov’s condition:\nE [ exp ( 1\n2 ∫ T 0 ‖u(xt, t, Nt; θ)‖2dt )] < +∞ ∀θ, φ.\nThen,\nKL(Q‖Π0) = EQ\n[ 1\n2 ∫ T 0 ‖u(xt, t, Nt; θ)‖2ds ] . (26)\nProof. Given samples path of count observationN0,T , using the definition of u and under Novikov’s condition, using Girsanov’s theorem (Oksendal, 2013, Theorem 8.6.8), we have\ndQ\ndΠ0 = exp\n( − ∫ t\n0\nu(xt, t, Nt; θ)dBs − 1\n2 ∫ t 0 ‖u(xt, t, Nt; θ)‖2ds ) ,\nand\nB̂t := ∫ t 0 u(xt, t, Nt; θ)ds+ B(t) (27)\nis a Brownian motion w.r.t. Q. Furthermore, we also have\ndxt = b(xt, t)dt+ σ(xt)dB̂t. (28)\nThe following expression is obtained by substituting for dQdΠ0 : EQ [ log ( dQ(x0:T )\ndΠ0(x0:T ) )] = −EQ [∫ T 0 ( u(xt, t, Nt; θ)dBs + 1 2 ‖u(xt, t, Nt; θ)‖2 ) ds ] .\n(29)\nNow, applying B̂(t) := ∫ t\n0 u(xt, t, Nt; θ)ds+ Bt in Equation equation 29, we have\nEQ [∫ T 0 u(xt, t, Nt; θ)dBs ] = EQ [∫ T 0 u(xt, t, Nt; θ)[dB̂s − u(xt, t, Nt; θ)ds] ]\n= EQ [∫ T 0 u(xt, t, Nt; θ)dB̂s − ∫ T 0 ‖u(xt, t, Nt; θ)‖2ds ]\n= EQ [ − ∫ T\n0\n‖u(xt, t, Nt; θ)‖2ds ] . (30)\nSubstituting equation 30 into equation 29 yields KL(Q‖Π0) = EQ [ log ( dQ(x0:T )\ndΠ0(x0:T ) )] = EQ [ 1\n2 ∫ T 0 ‖u(xt, t, Nt; θ)‖2ds ] .\n(31)\nand thus concludes the proof.\nA.3 NEURAL NETWORK ARCHITECTURE\nFor all the experiments, we use the neural network architecture depicted in Figure A.3, with ReLU activation functions between fully-connected hidden layers.\nA.4 COMPARING COMPUTATIONAL TIME REQUIRED TO NUMERICALLY COMPUTE VBSP AND TRUE SMOOTHING DENSITY USING FEM" } ]
2020
null
SP:c653e54cd37cd4f661b12551c59344dbdfbb8329
[ "The paper presents a systematic analysis of approaches used to encode position information in transformers and in particular BERT-based models. The paper investigates absolute and relative position embedding strategies that use either fixed/learnable sinusoidal or fully learnable position embeddings. These embeddings are characterized based on different properties that are either inherent from their formulation or observed empirically such as monotonicity, translation invariance, and symmetry. Interestingly these properties appear to emerge naturally when having learnable parameters in APEs and RPEs." ]
Various Position Embeddings (PEs) have been proposed in Transformer based architectures (e.g. BERT) to model word order. These are empirically-driven and perform well, but no formal framework exists to systematically study them. To address this, we present three properties of PEs that capture word distance in vector space: translation invariance, monotonicity, and symmetry. These properties formally capture the behaviour of PEs and allow us to reinterpret sinusoidal PEs in a principled way. Moreover, we propose a new probing test (called ‘identical word probing’) and mathematical indicators to quantitatively detect the general attention patterns with respect to the above properties. An empirical evaluation of seven PEs (and their combinations) for classification (GLUE) and span prediction (SQuAD) shows that: (1) both classification and span prediction benefit from translation invariance and local monotonicity, while symmetry slightly decreases performance; (2) The fully-learnable absolute PE performs better in classification, while relative PEs perform better in span prediction. We contribute the first formal and quantitative analysis of desiderata for PEs, and a principled discussion about their correlation to the performance of typical downstream tasks.
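The translation-invariance property named in the abstract can be checked numerically for the classical sinusoidal PE: the inner product between two sinusoidal position vectors depends only on the offset between the positions, since PE(i)·PE(j) = Σ_k cos((i − j)ω_k). A minimal NumPy sketch (our own illustration; the helper name is hypothetical):

```python
# Our own illustration: dot products between classical sinusoidal position
# embeddings depend only on the relative offset i - j (translation invariance).
import numpy as np

def sinusoidal_pe(pos, d_model=64):
    k = np.arange(d_model // 2)
    freq = 1.0 / 10000 ** (2 * k / d_model)   # the standard frequency schedule
    return np.concatenate([np.sin(pos * freq), np.cos(pos * freq)])

# all three pairs have offset 4, so the printed dot products coincide
# (up to floating point), since PE(i) . PE(j) = sum_k cos((i - j) * freq_k)
for i, j in [(3, 7), (10, 14), (50, 54)]:
    print(i, j, round(float(sinusoidal_pe(i) @ sinusoidal_pe(j)), 6))
```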
[ { "affiliations": [], "name": "Benyou Wang" }, { "affiliations": [], "name": "Lifeng Shang" }, { "affiliations": [], "name": "Christina Lioma" }, { "affiliations": [], "name": "Xin Jiang" }, { "affiliations": [], "name": "Hao Yang" }, { "affiliations": [], "name": "Qun Liu" } ]
[ { "authors": [ "George B Arfken", "Hans J Weber" ], "title": "Mathematical methods for physicists", "venue": null, "year": 1999 }, { "authors": [ "Mihai Badoiu", "Erik D. Demaine", "MohammadTaghi Hajiaghayi", "Anastasios Sidiropoulos", "Morteza Zadimoghaddam" ], "title": "Ordinal embedding: Approximation algorithms and dimensionality reduction", "venue": "Algorithms and Techniques, 11th International Workshop,", "year": 2008 }, { "authors": [ "Iz Beltagy", "Matthew E Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv preprint arXiv:2004.05150,", "year": 2020 }, { "authors": [ "Yonatan Bilu", "Nathan Linial" ], "title": "Monotone maps, sphericity and bounded second eigenvalue", "venue": "J. Comb. Theory, Ser. B,", "year": 2005 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": null, "year": 1906 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "arXiv preprint arXiv:1705.03122,", "year": 2017 }, { "authors": [ "Lalit Jain", "Kevin G. Jamieson", "Robert D. Nowak" ], "title": "Finite sample prediction and recovery bounds for ordinal embedding", "venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Guolin Ke", "Di He", "Tie-Yan Liu" ], "title": "Rethinking positional encoding in language pre-training", "venue": "arXiv preprint arXiv:2006.15595,", "year": 2020 }, { "authors": [ "Xuanqing Liu", "Hsiang-Fu Yu", "Inderjit Dhillon", "Cho-Jui Hsieh" ], "title": "Learning to encode position for transformer with continuous dynamical model", "venue": "arXiv preprint arXiv:2003.09229,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Hiroshi Maehara" ], "title": "Euclidean embeddings of finite metric spaces", "venue": "Discrete Mathematics,", "year": 2013 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "arXiv preprint arXiv:1806.00187,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", 
"Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI blog,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Robin Jia", "Percy Liang" ], "title": "Know what you don’t know: Unanswerable questions for squad", "venue": "arXiv preprint arXiv:1806.03822,", "year": 2018 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in bertology: What we know about how bert works", "venue": "arXiv preprint arXiv:2002.12327,", "year": 2020 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "arXiv preprint arXiv:1803.02155,", "year": 2018 }, { "authors": [ "Roger N Shepard" ], "title": "The analysis of proximities: Multidimensional scaling with an unknown distance function", "venue": null, "year": 1962 }, { "authors": [ "Yoshikazu Terada", "Ulrike Luxburg" ], "title": "Local ordinal embedding", "venue": "Proceedings of Machine Learning Research,", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jesse Vig" ], "title": "A multiscale visualization of attention in the transformer model", "venue": "arXiv preprint arXiv:1906.05714,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Benyou Wang", "Donghao Zhao", "Christina Lioma", "Qiuchi Li", "Peng Zhang", "Jakob Grue Simonsen" ], "title": "Encoding word order in complex embeddings", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xing Wang", "Zhaopeng Tu", "Longyue Wang", "Shuming Shi" ], "title": "Self-attention with structural position representations", "venue": "arXiv preprint arXiv:1909.00383,", "year": 2019 }, { "authors": [ "Yu-An Wang", "Yun-Nung Chen" ], "title": "What do position embeddings learn? 
an empirical study of pretrained language model positional encoding", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Junqiu Wei", "Xiaozhe Ren", "Xiaoguang Li", "Wenyong Huang", "Yi Liao", "Yasheng Wang", "Jiashu Lin", "Xin Jiang", "Xiao Chen", "Qun Liu" ], "title": "Nezha: Neural contextualized representation for chinese language understanding", "venue": null, "year": 1909 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Da Xu", "Chuanwei Ruan", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "title": "Self-attention with functional time representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hang Yan", "Bocao Deng", "Xiaonan Li", "Xipeng Qiu" ], "title": "Tener: Adapting transformer encoder for name entity recognition", "venue": "arXiv preprint arXiv:1911.04474,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "E parts" ], "title": "DETAILED EXPERIMENTAL SETTING We train BERT base and BERT medium with both masked language prediction and next sentence prediction tasks; most parameters are listed in Tab. 6, with the remaining parameters set as in the original paper. Note that we share RPE in different heads and layers", "venue": "Like (Shaw et al.,", "year": 2018 }, { "authors": [ "Ke" ], "title": "APE and RPE could be beneficial for classification tasks (GLUE), which in this paper, this complementary effect is not significant since most PE combinations (APE and RPE) do not outperform the BERT-style fully-learnable APE on classification. Instead, we empirically conclude that most PE combinations boost the performance in span prediction tasks. The benefit in classification tasks in (Ke", "venue": null, "year": 2020 }, { "authors": [ "Wang" ], "title": "2020) proposed a sinusoid-like complex word embedding to encode word order. Both (Xu et al., 2019) and (Wang et al., 2020) assume that PEs should satisfy the translation invariance property, but they induce different types of sinusoidal PE parameterization either in real or complex vector space", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Position embeddings (PEs) are crucial in Transformer-based architectures for capturing word order; without them, the representation is bag-of-words. Fully learnable absolute position embeddings (APEs) were first proposed by Gehring et al. (2017) to capture word position in Convolutional Seq2seq architectures. Sinusoidal functions were also used with Transformers to parameterize PEs in a fixed ad hoc way (Vaswani et al., 2017). Recently, Shaw et al. (2018) used relative position embedding (RPEs) with Transformers for machine translation. More recently, in Transformer pretrained language models, BERT (Devlin et al., 2018; Liu et al., 2019) and GPT (Radford et al., 2018) used fully learnable PEs. Yang et al. (2019) modified RPEs and used them in the XLNet pre-trained language model. To our knowledge, the fundamental differences between the various PEs have not been studied in a principled way.\nWe posit that the aim of PEs is to capture the sequential nature of positions in vector space, or technically, to bridge the distances in N (for positions) and RD (for position vectors). We therefore propose three expected properties for PEs: monotonicity, translation invariance, and symmetry 1. Using these properties, we formally reinterpret existing PEs and show the limitations of sinusoidal\n1Informally, as positions are originally positive integers, one may expect position vectors in vector space to have the following properties: 1) neighboring positions are embedded closer than faraway ones; 2) distances of two arbitrary m-offset position vectors are identical; 3) the metric (distance) itself is symmetric.\nPEs (Vaswani et al., 2017): they cannot adaptively meet the monotonicity property – thus we propose learnable sinusoidal PEs.\nWe benchmark 13 PEs (including APEs, RPEs, and their combinations) in GLUE and SQuAD, in a total of 11 individual tasks. Several indicators are devised to quantitatively measure translation invariance, monotonicity, and symmetry, which can be further used to calculate their statistical correlations with empirical performance in downstream tasks. We empirically find that both text classification tasks (in GLUE) and span prediction tasks (SQuAD V1.0 and V 2.0) can benefit from monotonicity (in nearby offset) and translation invariance (in particular without considering special tokens like [CLS]), but symmetry decreases performance since it can not deal with directions between query vectors and key vectors when calculating attentions. Plus, models with unbalanced attention regarding directions (generally attending more to preceding tokens than to succeeding tokens) slightly correlate with better performance (especially for span prediction tasks).\nExperiments also show that the fully-learnable APE performs better in classification, while RPEs perform better in span prediction tasks. 
This is explained by our proposed properties as follows: RPEs perform better in span prediction tasks since they better satisfy translation invariance, monotonicity, and asymmetry; the fully-learnable APE, which does not strictly have the translation invariance and monotonicity properties during parameterization (it also performed worse on the translation invariance and local monotonicity measures than other APEs and all RPEs), still performs well because it can flexibly deal with special tokens (especially the unshiftable [CLS]).
Regarding the newly-proposed learnable sinusoidal PEs, the learnable sinusoidal APE satisfies the three properties to a greater extent than other APE variants, and the learnable sinusoidal RPE exhibits better direction awareness than other PE variants. Experiments show that BERT with sinusoidal APEs slightly outperforms the fully-learnable APE in span prediction, but underperforms it in classification tasks. For both APEs and RPEs, learning the frequencies in sinusoidal PEs appears to be beneficial. Lastly, sinusoidal PEs can be generalized to treat longer documents because they completely satisfy the translation invariance property, while the fully-learnable APE does not.
The contributions of this paper are summarised below: 1) We propose three principled properties for PEs that are either formally examined or empirically evaluated by quantitative indicators in a novel Identical Word Probing test; 2) We benchmark 13 PEs (including APEs, RPEs and their combinations) on GLUE, SQuAD V1.1 and SQuAD V2.0, in a total of 11 individual tasks; 3) We experimentally evaluate how the performance on individual tasks benefits from the above properties; 4) We propose two new PEs that extend sinusoidal PEs to learnable versions for APEs/RPEs." }, { "heading": "2 PROPERTIES OF POSITION EMBEDDINGS", "text": "Gehring et al. (2017); Vaswani et al. (2017) use absolute word positions as additional features in neural networks. Positions $x \in \mathbb{N}$ are distributively represented by embedding each $x$ as an element $\vec{x} \in \mathbb{R}^D$ of some Euclidean space. By standard methods in representation learning, similarity between embedded objects $\vec{x}$ and $\vec{y}$ is typically expressed by an inner product $\langle \vec{x}, \vec{y} \rangle$; for instance, the dot product gives rise to the usual cosine similarity between $\vec{x}$ and $\vec{y}$. Generally, if words appear close to each other in a text (i.e., their positions are nearby), they are more likely to determine the (local) semantics together than if they occurred far apart. Hence, positional proximity of words $x$ and $y$ should result in proximity of their embedded representations $\vec{x}$ and $\vec{y}$. One common way of formalizing this is that an embedding should preserve the order of distances among positions 2. We denote by $\phi(\cdot, \cdot)$ a function that calculates closeness/proximity between embedded positions; any inner product is a special case of $\phi(\cdot, \cdot)$ with good properties. We can express preservation of the order of distances as: for every $x, y, z \in \mathbb{N}$,
$$|x - y| > |x - z| \implies \phi(\vec{x}, \vec{y}) < \phi(\vec{x}, \vec{z}) \qquad (1)$$
Note that on the underlying space, the property in Eq. (1) has been studied for almost 60 years (Shepard, 1962), in both algorithmics (Bilu & Linial, 2005; Badoiu et al., 2008; Maehara, 2013) and machine learning (Terada & Luxburg, 2014; Jain et al., 2016), under the name ordinal embedding. As we are interested in the simple case of positions from $\mathbb{N}$, Eq. (1) reduces to the following property:
2Theoretical evidence for this is nontrivial unless we assume more about the particular non-linear functions.
We empirically find that all learned PEs can preserve the order of distances.
Property 1. Monotonicity: The proximity of embedded positions decreases when positions are further apart:
$$\forall x, m, n \in \mathbb{N}: \; m > n \iff \phi(\vec{x}, \overrightarrow{x+m}) < \phi(\vec{x}, \overrightarrow{x+n}) \qquad (2)$$
A priori, a position embedding might treat every element of $\mathbb{N}$ individually. However, considering pairs of positions based on their relative proximity (rather than the absolute values of the positions) can lead to simplified and efficient position embeddings (Wang et al., 2020). Such embeddings satisfy translation invariance:
Property 2. Translation invariance: The proximity of embedded positions is translation invariant:
$$\forall x_1, \ldots, x_n, m \in \mathbb{N}: \; \phi(\vec{x}_1, \overrightarrow{x_1+m}) = \phi(\vec{x}_2, \overrightarrow{x_2+m}) = \cdots = \phi(\vec{x}_n, \overrightarrow{x_n+m}) \qquad (3)$$
Finally, since the inner product is symmetric, we also consider whether $\phi(\cdot, \cdot)$ is symmetric:
Property 3. Symmetry: The proximity of embedded positions is symmetric:
$$\forall x, y \in \mathbb{N}: \; \phi(\vec{x}, \vec{y}) = \phi(\vec{y}, \vec{x}) \qquad (4)$$
There is no generally accepted standard set of properties for position embeddings; based on prior work as described above, we posit that the above properties are important, and now examine several existing PEs in relation to these properties, either formally (in Sec. 3) or empirically (in Sec. 4)." },
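To make the three properties concrete, the following NumPy sketch (ours, for illustration only; the helper names and the fixed sinusoidal example are assumptions, not code from the paper) tests them numerically for a position-embedding matrix under the dot-product proximity $\phi(\vec{x}, \vec{y}) = \langle \vec{x}, \vec{y} \rangle$:

```python
import numpy as np

def proximity(P):
    """Dot-product proximity: phi(x, y) = <P[x], P[y]> for all position pairs."""
    return P @ P.T

def check_properties(P, max_offset=20, tol=1e-6):
    """Numerically test Properties 1-3 for a position-embedding matrix P (L x D)."""
    A = proximity(P)
    # Property 1 (monotonicity): phi(x, x+n) should decrease as n grows.
    monotone = all(A[0, n] - A[0, n + 1] > -tol for n in range(1, max_offset))
    # Property 2 (translation invariance): phi(x, x+m) should not depend on x.
    invariant = max(np.ptp(np.diagonal(A, offset=m)) for m in range(1, max_offset)) < tol
    # Property 3 (symmetry): phi(x, y) == phi(y, x).
    symmetric = np.allclose(A, A.T, atol=tol)
    return monotone, invariant, symmetric

# Example with the fixed sinusoidal APE of Vaswani et al. (2017).
L, D = 128, 64
pos = np.arange(L)[:, None]
freqs = (1.0 / 10000.0) ** (2.0 * np.arange(D // 2) / D)
P = np.concatenate([np.sin(pos * freqs), np.cos(pos * freqs)], axis=1)
print(check_properties(P))  # sinusoidal dot products satisfy invariance/symmetry exactly
```

A learned PE matrix can be checked the same way by substituting its weights for P above.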
{ "heading": "3 UNDERSTANDING PES VIA THE PROPERTIES", "text": "PEs come in two variants: absolute PEs (APEs), where single positions are mapped to elements of the representation space, and relative PEs (RPEs), where the difference between positions (i.e., $x-y$ for $x, y \in \mathbb{N}$) is mapped to elements of the embedding space. For Transformer-based architectures, the difference between APEs and RPEs manifests itself in the attention mechanism, in particular in how the matrices of query, key, and value weights $W^Q$, $W^K$, and $W^V$ are used to calculate attention in each attention head. Consider two positions $x, y \in \mathbb{N}$, let $WE_x$ be the word embedding of the word at position $x$, and let $P_x$ and $P_{x-y}$ be the embeddings of the position $x$ and the relative position $x-y$, respectively. The query-key-value vectors for the word at position $x$ are typically calculated as below for APEs and RPEs 3, respectively:
$$\text{APE: } [Q_x, K_x, V_x] = (WE_x + P_x)\,[W^Q, W^K, W^V]; \quad \text{RPE: } [Q_x, K_x, V_x] = WE_x\,[W^Q, W^K, W^V] + [0, P_{x-y}, P_{x-y}] \qquad (5)$$
Observe that while the APE calculation is linear in $(W^Q, W^K, W^V)$ with the word and position embeddings merged into the coefficient, the RPE calculation is affine, with the relative position embedding $P_{x-y}$ acting as an offset independent of the word embedding $WE_x$.
In Transformers, the resulting representation is a sum of value vectors with weights depending on $A = QK^T$, that is, $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T/\sqrt{d_k})\,V$. In the rest of the paper, we examine PEs in the above architecture with respect to the properties introduced in Section 2. In particular, we study four well-known variants of PEs: (1) the fully learnable APE (Gehring et al., 2017), (2) the fixed sinusoidal APE (Vaswani et al., 2017), (3) the fully learnable RPE (Shaw et al., 2018), and (4) the fixed sinusoidal RPE (Wei et al., 2019).
3There are many variants of RPEs (e.g., (Dai et al., 2019)). As selecting RPEs is not the main concern in this paper, we give the original (and typical) RPEs only. One can easily extend this work to other RPE variants." }, { "heading": "3.1 UNDERSTANDING SINUSOIDAL PES", "text": "With a sinusoidal parameterization of PEs, we may use a specific proximity, i.e., an efficient inner product like the dot product, to check whether the sinusoidal form of PEs meets the above properties. The dot product between any two position vectors is
$$A_{x,y} = \langle \vec{x}, \vec{y} \rangle = \sum_{i=1}^{D/2} \big( \sin(\omega_i x)\sin(\omega_i y) + \cos(\omega_i x)\cos(\omega_i y) \big) = \sum_{i=1}^{D/2} \cos(\omega_i (x-y)) \qquad (6)$$
Note that sinusoidal PEs satisfy both Property 2 (translation invariance), because the inner product is only associated with the position difference $x-y$, and Property 3 (symmetry), because the dot product itself is symmetric: $\langle \vec{x}, \vec{y} \rangle = \langle \vec{y}, \vec{x} \rangle$. Note also that checking Property 1 is equivalent to checking the monotonicity of the map $\psi(m) = \sum_{i=1}^{D/2} \cos(\omega_i m)$. $\psi(m)$ is monotone on intervals where its first-order derivative $\psi'(m) = -\sum_{i=1}^{D/2} \omega_i \sin(\omega_i m)$ does not change sign, and these intervals depend on the choice of the $\omega_i$. With fixed frequencies $\omega_i = (1/10000)^{2i/D}$, it is monotonic when $m$ is roughly between 0 and 50, indicating that it can only strictly perceive a maximum distance of 50 and is insensitive to faraway distances (e.g., longer than 50).
Although sinusoidal PEs with fixed frequencies (i.e., $\omega_i = (1/10000)^{2i/D}$) are common in APEs and RPEs, we argue that learning these frequencies is useful because it can adaptively adjust the intervals of monotonicity (they do not have to be 0-50 as in the fixed sinusoidal APE) 4. With trainable frequencies, we can adaptively allocate a number of frequencies in a data-driven way. App. A.2 explains the expressive power of sinusoidal PEs with trainable frequencies from the perspective of the Fourier series. Extending existing fixed sinusoidal PEs to versions with learnable frequencies gives two new variants: a learnable sinusoidal APE and a learnable sinusoidal RPE.
4See App. A to intuitively understand the specific function of each frequency $\omega_i$." }, { "heading": "3.2 UNDERSTANDING RPES", "text": "RPEs ignore the absolute positions of words and directly encode their relative distance. The RPE expression adheres to the translation invariance property during parameterization, since relative distances with the same offset are embedded as the same embedding, namely, $P_{x_1-y_1} = P_{x_2-y_2}$ if $x_1-y_1 = x_2-y_2$. Moreover, RPEs that separately embed forward and backward relative embeddings, i.e., $P_{i-j} \neq P_{j-i}$, do not meet symmetry during parameterization. Sinusoidal RPEs can also embed neighboring relative positions in close vectors with a local monotonicity, similarly to sinusoidal APEs. Note that the dot products between two sinusoidal relative position vectors with the same offset, without distinguishing positive and negative relative position vectors, are identical 5. This makes it hard for them to perceive the border between preceding and succeeding relative position vectors.
5Namely, $\langle P_{x_1-y_1}, P_{x_2-y_2} \rangle = \langle P_{x_3-y_3}, P_{x_4-y_4} \rangle$ if $(x_1-y_1)-(x_2-y_2) = (x_3-y_3)-(x_4-y_4) = m$, for both $x-y > 0$ and $x-y < 0$." },
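As a small numerical check of Eq. (6) and of the limited monotone interval of $\psi(m)$ discussed in Sec. 3.1 (our sketch; not the authors' code):

```python
import numpy as np

D = 128
w = (1.0 / 10000.0) ** (2.0 * np.arange(D // 2) / D)   # fixed frequencies w_i

def pe(x):
    """Interleaved sinusoidal position vector [sin(w_1 x), cos(w_1 x), ...]."""
    v = np.empty(D)
    v[0::2], v[1::2] = np.sin(w * x), np.cos(w * x)
    return v

# Eq. (6): the dot product depends only on the offset x - y.
x, y = 100, 37
assert np.isclose(pe(x) @ pe(y), np.cos(w * (x - y)).sum())

# psi(m) = sum_i cos(w_i m): find where it stops being strictly decreasing.
m = np.arange(200)
psi = np.cos(np.outer(m, w)).sum(axis=1)
print(int(np.argmax(np.diff(psi) > 0)))  # roughly 50 for these fixed frequencies
```

Making `w` a trainable parameter instead of a constant is exactly what the learnable sinusoidal variants do, which lets the monotone interval adapt to the data.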
{ "heading": "4 EXAMINING PE PROPERTIES IN PRE-TRAINED LANGUAGE MODEL", "text": "We train BERT with the six basic PEs in Tab. 1 and their combination variants, and conduct a probing test to check to which degree they satisfy the properties.
Pre-training The pre-trained “BERT-base-uncased” checkpoint (Devlin et al., 2018) is further trained after replacing the original absolute PE module with a new PE variant (including APEs and RPEs). We train the new models with a sequence length of 128 for 5 epochs and then 512 for another 2 epochs. The training is the same as in the original BERT, i.e., BooksCorpus and Wikipedia (16G of raw documents) with whole word masking. To be fair, the BERT with the original fully-learnable APE is also further trained in the same way. All models have about 110M parameters, corresponding to a typical base setting, with minor differences solely depending on the parameterization in Tab. 1." }, { "heading": "4.1 DOT PRODUCT BETWEEN POSITION VECTORS", "text": "APEs We calculate dot products between two arbitrary position vectors for APEs and RPEs (see Fig. 1). For APEs, neighboring position vectors are generally closer compared to faraway ones. This trend is clearer in the learnable sinusoidal APE, which imposes a strict sinusoidal regularization on PEs. Note that additionally adopting RPEs does not affect the PE patterns much, as can be seen by comparing Fig. 1(a) and 1(b), or Fig. 1(c) and 1(d).
RPEs In the fully-learnable RPE setting, the vertical and horizontal bright bands in 1(e) and 1(f) show that the relative position vectors for small offsets (e.g., $\{P_{-5}, \cdots, P_0, \cdots, P_5\}$) are notably different from other relative position vectors; this indicates that the relative position vectors with small offsets are more distinguishable than faraway relative position vectors. The four dark corners in 1(e) and 1(f) mean that relative position vectors with offsets longer than 20, i.e., from -64 to -20 and from 20 to 64, are very close, showing that the fully-learnable RPE does not significantly distinguish far-distant RPEs. This suggests that truncating RPEs at a fixed distance (e.g., 64 in (Shaw et al., 2018)) is reasonable. This effect is further explained in App. D.
6In all figures we only show the first 128 positions instead of 512 positions since they are in principle compatible. Practically in BERT, there is a minor discrepancy between the first 128 positions and the remaining positions due to the typical training strategy (first training on 128-length input and then 512-length input)." }, { "heading": "4.2 IDENTICAL WORD PROBING", "text": "In APEs, the attention matrix ($A = \mathrm{softmax}(QK^T)$) is related to individual words and their positions; an element of the (inactivated) $A$ in the first layer is given by:
$$a_{ij} = (w_i + p_i) W^{Q,1} \big( (w_j + p_j) W^{K,1} \big)^T = \underbrace{w_i W^{Q,1} (W^{K,1})^T w_j^T}_{\text{word-word correspondence}} + \underbrace{w_i W^{Q,1} (W^{K,1})^T p_j^T}_{\text{word-position correspondence}} + \underbrace{p_i W^{Q,1} (W^{K,1})^T w_j^T}_{\text{word-position correspondence}} + \underbrace{p_i W^{Q,1} (W^{K,1})^T p_j^T}_{\text{position-position correspondence}} \qquad (7)$$
Identical word probing for PEs To study the effect of only the PEs in $A$, without considering individual words, we use identical word probing: feed many repeated identical words (which can be arbitrary, denoted as $\bar{w}$) as a sentence to BERT, and check the attention values $\bar{A}^{(1)}$ with each element
$$\bar{a}^{1}_{ij}(\bar{w}) = (\bar{w} + p_i) W^{Q,1} \big( (\bar{w} + p_j) W^{K,1} \big)^T \qquad (8)$$
As we take an average of $\bar{A}^{(1)}$ over many randomly-selected words $\bar{w}$, the general patterns of $\bar{A}^{(1)}$ are not affected by any particular word. Namely, $\bar{A}^{(1)}$ is word-free and only related to the learned PEs. Thus, $\bar{A}^{(1)}$ can be treated as a general attention bias that implicitly conveys position-wise proximity in Transformers. Note that the probing test can also be applied to RPEs." },
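A minimal sketch of identical word probing with the Huggingface transformers library follows. It is our approximation, not the authors' code: the library exposes the softmax-activated attention rather than the pre-softmax values of Eq. (8), the probe words below are illustrative single-wordpiece choices, and the paper averages over 300 words.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

def probe(word, length=64):
    """Feed `length` repeated copies of `word` as one sentence and return the
    first-layer attention map, averaged over heads (cf. Eq. (8))."""
    inputs = tokenizer(" ".join([word] * length), return_tensors="pt")
    with torch.no_grad():
        attn = model(**inputs).attentions[0]   # shape: (1, heads, seq, seq)
    return attn.mean(dim=1)[0]                 # average over heads

# Average the position-only bias over a few probe words; each word here is a
# single wordpiece in the BERT vocabulary, so all probes share one sequence length.
words = ["cat", "river", "blue", "music", "seven"]
A_bar = torch.stack([probe(w) for w in words]).mean(dim=0)
```

Visualizing `A_bar` as a heatmap yields plots in the style of Fig. 2.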
{ "heading": "4.2.1 QUALITATIVE ANALYSIS", "text": "Fig. 2 shows the average attention weights among all heads in the first layer. BERT without PE treats nearly all words uniformly (bag-of-words). Almost all APEs and RPEs have a clear pattern of translation invariance, local monotonicity in a neighboring window, and symmetry. Note that this is nontrivial, since no specific constraints or priors were imposed on the fully-learnable APEs/RPEs 7.
BERT with APEs does not show any direction awareness, since Fig. 2(b) and 2(c) are nearly symmetrical. As seen from Fig. 2(f,h), BERT with the learnable sinusoidal RPE generally attends more to forward tokens than to backward tokens, which cannot be clearly found in the fully-learnable RPE and the fixed sinusoidal RPE. Interestingly, the white bands along the diagonal in Fig. 2 (d, f, g) suggest that some words generally do not attend to themselves, as previously observed in (Clark et al., 2019) 8.
7See App. G for an example of the evolution of PE patterns in the fully learnable APE, which starts from random initialization and ends with patterns reflecting the properties.
8See App. J for more details about the white band effect along the diagonal." }, { "heading": "4.2.2 QUANTITATIVE ANALYSIS", "text": "Using the activated attention values $\bar{A}^{(1)}$ in Eq. (8) 9, we adopt three quantitative indicators (and their derivative indicators; see App. B for the details of their calculation) to measure in Tab. 2 to which extent BERT models with individual PEs satisfy the three properties. Basically, all APEs and RPEs satisfy monotonicity at small offsets and translation invariance compared to BERT without PE; all PEs nearly satisfy symmetry except for the learnable sinusoidal RPE and its combinations.
9300 randomly-selected words were used to calculate the average $\bar{A}^{(1)}$, as we empirically found that adopting more words almost does not change $\bar{A}^{(1)}$.
APEs and RPEs The learnable sinusoidal APE satisfies all three properties better than the fully learnable APE and the fixed sinusoidal APE; this is due to its sinusoidal parameterization with flexible frequencies. RPEs satisfy translation invariance to a higher degree than APEs, because they directly satisfy translation invariance during parameterization. In the last column, the direction balance values of all PEs except for the fixed sinusoidal APE are larger than one, which indicates that BERT models with all PEs generally attend more to preceding tokens than to succeeding tokens; this phenomenon appears to be stronger for learnable sinusoidal RPEs than for others.
The fully learnable APE and [CLS] The fully learnable APE generally performs worse in translation invariance (see the 4-th column), as it has to deal with the unshiftable [CLS], which is always in the first position. Without considering [CLS] and [SEP] (see the 5-th column), the fully learnable APE satisfies translation invariance better than other APEs, showing that the fully learnable APE can flexibly deal with both special tokens and normal positions. The fully learnable APE could also handle the mismatch between special tokens and normal positions in the monotonicity property." }, { "heading": "5 PES IN DOWNSTREAM TASKS", "text": "We empirically compare the performance of PEs in classification and span prediction tasks.
Fine-tuning The fine-tuning on GLUE and SQuAD is the same as on the Huggingface website as per Wolf et al. (2019); see App. E for details. We report the average values of five runs per dataset.
The results show that violating monotonicity in relatively-small offsets (e.g., 20) and translation invariance is harmful since it is negatively correlated to the performance on\n11Pearson correlations are calculated between the property indicators and the performance of each individual task for 12 PEs. BERT without PE was not considered, since its property indicators are significantly different with other PEs and its performance is much worse; it therefore unexpectedly increases correlation values.\nTable 5: Pearson correlations between the properties and evaluated tasks, evaluating on BERT models with 13 position embeddings. The positive (negative) numbers denote to which degree the performance of the task positively (negatively) correlate(s) to violating the property. This shows that\nviolating local monotonicity and translation invariance is harmful, while violating symmetry (and direction-balance) is beneficial. Best correlation values are in bold for each row.\nProperties CoLA SST-2 MNLI QQP GLUE SQuAD V1.1 SQuAD V2.0\nmonotonicity all offsets 0.44 0.43 0.56 0.32 0.48 -0.31 -0.27first 20 offsets -0.18 0.44 -0.24 -0.42 -0.21 -0.91 -0.86 translation invariance w/ [CLS]/[SEP] 0.48 0.52 0.04 -0.07 0.42 -0.63 -0.57w/o [CLS]/[SEP] -0.47 0.01 -0.69 -0.68 -0.61 -0.51 -0.58 symmetry 0.17 0.24 0.40 0.09 0.31 0.15 0.16 direction balance 0.32 0.16 0.63 0.35 0.48 0.32 0.37\nGLUE and SQuAD. However, violating symmetry (and direction-balance) is slightly beneficial. This shows that many tasks require BERT models to distinguish preceding and succeeding tokens, especially to attend more on preceding tokens. See Fig. 5b in App. C, the correlations between the direction balance indicators and the performance of downstream tasks will be much higher when only considering a few neighboring tokens for calculating the indicator." }, { "heading": "6.2 MORE DISCUSSIONS ON THE PROPOSED PROPERTIES", "text": "Monotonicity Monotonicity holds locally in a small neighboring window (usually in 5-20 offsets) for all PE variants, see Fig.2. This shows that BERT models generally are not sensitive to longerdistance attendance patterns, also evidenced by the fact that performance in downstream tasks correlates more highly with monotonicity in middle-distance offsets (e.g., 20 in the second row of Tab. 5) than longer offsets (see App. C). To check monotonicity guided by learned frequencies of learnable sinusoidal APEs in individual tasks, see App. A.3\nTranslation invariance In BERT, we argue that absolute positions of words are uninformative since (1) absolute positions of the second segment depend on the length of the first sentence; (2) words are randomly truncated in the beginning or end if a sentence exceeds the expected maximum length, which may shift absolute positions of all tokens with an unexpected offset (Devlin et al., 2018). That is, absolute positions of words in pre-trained language models are arbitrarily replaceable, and thus adopting translation invariance is generally reasonable. Models with strict Translation invariance (all RPEs and sinusoidal APEs) naturally make PEs generalize to longer documents than the documents used in the pre-training phase, see App. F for some empirical evidence.\nSymmetry APEs (especially sinusoidal APEs) express symmetry patterns without distinguishing the direction as shown in Fig 2. As seen from Eq. 7, it is nontrivial to model directions in two linearly-transformed query vectors and key vectors. This limits its performance in direction-sensitive downstream tasks. 
RPEs could behave better on direction perception, since forward and backward relative embeddings are separately embedded (see Tab. 1); Especially, learnable sinusoidal RPE or combination variants including it have more unbalanced attending patterns (see the last column in Tab. 2), as shown in Fig. 2 (f) and (h)," }, { "heading": "7 CONCLUSION", "text": "To theoretically and empirically understand position embeddings (PEs), we have defined three properties (translation invariance, monotonicity, and symmetry) inspired by distance mappings between the original domain of positions in N and their PEs in RD. A probing test has been proposed to quantitatively examine these properties using appropriate mathematical indicators. Our probing test has shown that these PEs nearly satisfy most properties even when they are fully-learnable without constraints. Experimental results have shown that violating local monotonicity and translation invariance decreases performance in downstream tasks (classification and span prediction tasks), and that violating symmetry benefits downstream tasks because of direction awareness. We also find that the fully-learnable absolute PE in general results in better performance for classification, and that relative PEs result in better performance for span prediction tasks, which can be explained by the connections between their properties and task characteristics." }, { "heading": "ACKNOWLEDGMENTS", "text": "The work is supported by the Quantum Access and Retrieval Theory (QUARTZ) project, which has received funding from the European Union‘s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 721321." }, { "heading": "A UNDERSTANDING FREQUENCIES", "text": "A.1 UNDERSTAND INDIVIDUAL FREQUENCIES\nWe argue in this paper that a learning schema for such frequencies will be useful in a sense it could adaptively adjust frequencies to meet different functions, see Fig. 3.\nA.2 EXPRESSIVE POWER OF LEARNABLE SINUSOIDAL PES\nIn Transformers, linear transformation is commonly-used, for example query, key, and value transformations on word representations. Let ri be the word representation paramertezied by the sum of word embeddings and position embeddings (like the learnable sinusoidal APEs). Then, each element in ri\nri,k(t) = ei,k + pk(t) =\n{ ei,k + sin(ω k\n2 t), if k is even\nei,k + cos(ω k−1 2 t), if k is odd\n(9)\nAfter a linear transformation parameterized by w (e.g., the key transformation WK in the first Transformer layer), ri is linearly transformed as hi(t) = wri (hi(t) can be one of query/key/value vectors Qx,Kx, Vx in t-th position) with each element\nhi,k(t) = D∑ k=1 wj,kei,k + D/2∑ k=1 (wj,2k sin(ω2kt) + wj,2k+1 cos(ω2k+1t)) (10)\nThe RHS is a typical Fourier series with a base term ∑ wj,kei,k and Fourier coefficients {wj,2k, wj,2k+1}. It is customarily assumed in physics and signal processing (Arfken & Weber, 1999) that the RHS in Eq. 10 with infinite D and appropriate frequencies could approximate any continuous function on a given interval.\nAs using an infinite D is not practical, dynamic allocation of a limited number of frequencies in a data-driven way could be beneficial for general approximation. The predefined frequencies ωi =\n(1/10000)2i/D in the Transformer (Vaswani et al., 2017) can be considered as a special case when it enumerates various frequencies ranging from 1/10000 to 1 under a specific distribution.\nA.3 LEARNED FREQUENCIES OF LEARNABLE SINUSOIDAL APE\nThe learned frequencies are shown in Fig. 4a. 
A.3 LEARNED FREQUENCIES OF THE LEARNABLE SINUSOIDAL APE
The learned frequencies are shown in Fig. 4a. Observe that the learned frequencies are generally smaller than the pre-defined ones from (Vaswani et al., 2017) (i.e., $\omega_i = (1/10000)^{2i/D}$). The learned frequencies stay close to the predefined ones since we use $\omega_i = (1/10000)^{2i/D}$ as the initialization.
As shown in Fig. 4b, the patterns of dot products between positions for learnable frequencies are quite different from those for the predefined ones (denoted as ‘default’ in the figure); indeed, the former appear more predisposed to deeming remote positions similar. Moreover, fine-tuned models for span prediction tasks (including SQuAD and SQuAD2) satisfy strict monotonicity in larger windows than those for classification tasks. Observe also that the patterns in pre-trained language models seem more similar to those in classification tasks than to those in span prediction tasks." }, { "heading": "B QUANTITATIVELY MEASURING THE PROPERTIES.", "text": "To quantitatively measure the primary properties we treat in this paper, we propose multiple criteria, described below.
Assume a position-wise attention matrix $\bar{A}^{(1)}$ (denoted as $A$ since there is no risk of confusion), in which each element is the (softmax-)activated attention value from the $i$-th query token to the $j$-th key token (all elements are positive).
Average In-group Variance (AIV) for translation invariance Let $l$ be an offset between two positions; we denote by $\tau(l)$ the set of $l$-offset attention values $\{A_{i,j} : j - i = l\}$; for example, $\tau(1) = \{A_{1,2}, A_{2,3}, \cdots, A_{L-1,L}\}$. Translation invariance requires that all elements in each group $\tau(l)$ be identical: the smaller the variance of each $\tau(l)$, the closer $A$ is to translation invariance. The Average In-group Variance (AIV) is defined as a weighted average over the in-group variances of all $\{\tau(l)\}_{l=-L+1}^{L-1}$, namely:
$$\mathrm{AIV}(A) = \frac{\sum_{l=-L+1}^{L-1} \mathrm{var}(\tau(l)) \cdot |\tau(l)|}{\sum_{l=-L+1}^{L-1} |\tau(l)|}$$
where $|\cdot|$ is the number of elements in the set. For normalization, this metric is further divided by the overall variance (i.e., $\mathrm{var}(A)$).
Ordered Pair Ratio (OPR) for monotonicity For a word at the $i$-th position, based on the increasing distance to the $i$-th position, there is a forward attention sequence $S_{i,+} = \{A_{i,i}, A_{i,i+1}, \cdots, A_{i,L}\}$ and a backward attention sequence $S_{i,-} = \{A_{i,i}, A_{i,i-1}, \cdots, A_{i,1}\}$. This results in $2L$ sequences, denoted as $\mathcal{S} = \{S_{1,+}, S_{1,-}, S_{2,+}, S_{2,-}, \ldots, S_{L,+}, S_{L,-}\}$. Ideal (decreasing) monotonicity requires that each $S$ (an element of $\mathcal{S}$) be totally ordered as $s_0 > s_1 > \cdots > s_{L-1}$. We define the Ordered Pair Ratio of $S$ by:
$$\mathrm{OPR}(S) = \frac{\sum_{s_i, s_j \in S,\, i \neq j} \mathrm{sign}\big((s_i - s_j)(i - j)\big)}{|S|^2 - |S|}$$
We define $\mathrm{sign}(x) = 1$ if $x > 0$, and $\mathrm{sign}(x) = 0$ otherwise. Ideally, the OPR of a totally ordered decreasing (increasing) sequence $S$ should be zero (one). The expected OPR of a randomly-ordered sequence (the average OPR over the set of all such sequences) should be 0.5. Finally, we take a weighted sum of the OPRs of all sequences in $\mathcal{S}$:
$$\mathrm{OPR}(A) = \frac{\sum_{S \in \mathcal{S}} \mathrm{OPR}(S) \cdot |S|}{\sum_{S \in \mathcal{S}} |S|}$$
In the paper, we also consider a version of monotonicity within an offset of $k$ (e.g., ‘monotonicity (first 20 offsets)’ in Tab. 2), in which the OPR is calculated over the first $k$ elements of each $S \in \mathcal{S}$.
Symmetrical Discrepancy for symmetry We define the Symmetrical Discrepancy (SD) by:
$$\mathrm{SD}(A) = \frac{\sum_{i,j,\, i<j} |A_{i,j} - A_{j,i}|}{L \times (L-1)/2}$$
Direction Balance We define the Direction Balance (DB) as the ratio between the sum of the lower (left) triangle and that of the upper (right) triangle of $A$. Note that since all elements of $A$ are positive, $\mathrm{DB}_l(A)$ within an $l$-offset range is always positive.
$$\mathrm{DB}_l(A) = \frac{\sum_{i,j;\, i>j,\, |i-j| \le l} A_{i,j}}{\sum_{i,j;\, i<j,\, |i-j| \le l} A_{i,j}}$$
In Tab. 5 and Tab. 2, we report DB for an offset range of 20; see Fig. 5b for the performance correlations with other offset ranges." },
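The four indicators can be implemented directly from the definitions above. The following NumPy sketch is ours, not the authors' code; details such as the direction of the DB ratio follow the prose definition and may differ slightly from their implementation.

```python
import numpy as np

def aiv(A):
    """Average In-group Variance (translation invariance; lower is better)."""
    L = A.shape[0]
    groups = [np.diagonal(A, offset=l) for l in range(-L + 1, L)]
    weighted = sum(g.var() * g.size for g in groups) / sum(g.size for g in groups)
    return weighted / A.var()                  # normalized by the overall variance

def opr(A, k=None):
    """Ordered Pair Ratio (monotonicity; 0 for perfectly decreasing sequences)."""
    total, weight = 0.0, 0
    L = A.shape[0]
    for i in range(L):
        for seq in (A[i, i:], A[i, i::-1]):    # forward / backward from position i
            s = seq[:k] if k else seq
            n = len(s)
            if n < 2:
                continue
            idx = np.arange(n)
            inc = (np.subtract.outer(s, s) * np.subtract.outer(idx, idx)) > 0
            total += inc.sum() / (n * n - n) * n   # OPR(S), weighted by |S|
            weight += n
    return total / weight

def sd(A):
    """Symmetrical Discrepancy (0 for a perfectly symmetric matrix)."""
    L = A.shape[0]
    return np.abs(A - A.T)[np.triu_indices(L, k=1)].sum() / (L * (L - 1) / 2)

def db(A, l=20):
    """Direction Balance within offset l: lower- vs. upper-triangle mass
    (>1 means more attention to preceding tokens, per the prose definition)."""
    i, j = np.indices(A.shape)
    return A[(i > j) & (i - j <= l)].sum() / A[(i < j) & (j - i <= l)].sum()
```

Applying these functions to the averaged probing matrix from Sec. 4.2 reproduces indicators in the style of Tab. 2.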
{ "heading": "C MEASURING CORRELATIONS BETWEEN PROPERTIES AND DOWNSTREAM TASKS.", "text": "In Tab. 5, the monotonicity within 20 offsets and its correlations with performance in downstream tasks were reported; here we show how different ranges of monotonicity correlate with performance in downstream tasks. Among all tasks, we choose all single-sentence classification tasks (CoLA and SST-2), the two biggest sentence-pair classification tasks (MNLI and QQP, which have more training samples than the others), the average performance on GLUE, and SQuAD (F1 metrics have nearly identical trends to EM metrics).
As shown in Fig. 5a, the monotonicity indicators at roughly 20-55 offsets are the most highly correlated with the performance of span prediction tasks (with Pearson correlations larger than 90%). Note that some classification tasks (especially SST-2) also show the opposite correlations compared to span prediction tasks, probably because the unshiftable [CLS], on which classification tasks rely for inference, does not need monotonicity.
In Fig. 5b, performance correlates more with the direction balance indicators when only neighboring tokens are considered. For instance, the direction balance indicators within a small offset have correlations larger than 0.5, and this tends to become smaller with increasing offsets." }, { "heading": "D RELATIVE POSITION EMBEDDING WITH LONG OFFSETS", "text": "The dot products between two position embeddings are shown in Fig. 1(e) and (f). To analyze the behaviour, we replace the raw dot product with the cosine similarity (as the latter is normalized and thus easier to interpret). When the cosine similarity is one, the two vectors are perfectly colinear and share the same direction. For the purposes of this investigation, we arbitrarily pick 0.95 as a threshold for the cosine similarity, above which we consider that two vectors are not significantly different.
From Fig. 6 we observe the following for all PE variants with the fully-learnable RPE: (1) there is no significant difference between relative position vectors with offsets longer than 20-25; (2) forward relative position vectors are slightly more similar to forward relative position vectors than to backward relative position vectors, and vice versa (see the central lower-left/upper-right white parts)." }, { "heading": "E DETAILED EXPERIMENTAL SETTING", "text": "We train BERT base and BERT medium with both the masked language prediction and next sentence prediction tasks; most parameters are listed in Tab. 6, with the remaining parameters set as in the original paper. Note that we share RPEs across different heads and layers. Like (Shaw et al., 2018), RPEs are truncated to the range from −64 to 64.
We perform five runs for the SQuAD and GLUE benchmarks. The results on GLUE are for the last checkpoint during fine-tuning, while for SQuAD we take the best checkpoint evaluated every 1000 steps. Finally, we calculate the average over the 5 runs. All these settings are the same for all PEs. We use mismatched MNLI. In WNLI from GLUE (Wang et al., 2018), the train and dev sets are somewhat adversarial: training samples (in train and dev) containing the same sentence usually have opposite labels. Models may get worse when they overfit the train set, resulting in unexpected results. Therefore, we exclude WNLI when calculating the average in the last column of Tab. 3. The fine-tuning parameters use the default values of the Huggingface project (Wolf et al., 2019)."
}, { "heading": "F GENERALIZATION TO LONGER SENTENCES IN DOWNSTREAM TASKS", "text": "To fairly compare all models, we train a medium setting (8-layer transformer) on 128-length input in the first 10 epochs and 512-length input in the last 2 epochs from scratch. Fig. 7 shows that before 512-length pre-trained (like the 10-th epoch 128-length pre-trained) learnable sinusoidal APEs and RPEs perform better than BERT-style (without sinusoidal parameterization) in both SQuADs. This happens because PEs with translation invariance (learnable sinusoidal APEs and RPEs) generalize into longer positions 12, while position vectors between 128-512 positions are not trained in fullylearnable PEs and they are randomly initialized and finetuned in the downstream." }, { "heading": "G THE EVOLUTION OF DOT PRODUCTS BETWEEN POSITION VECTORS", "text": "We exhibit dot products between position vectors during training a BERT-medium, as shown in Fig. 8. There is seemingly no pattern in the beginning, but as the number of training steps increase, a regular pattern with translation invariance and local monotonicity emerges.\n12In practice, the document length of some tasks, like summarization, document-level translation, etc. may be much longer than the maximum length typical BERT models can deal with, i.e., 512. Then, learnable sinusoidal PEs or RPEs would be beneficial. Note that they also save parameters compared to typical BERT models, especially when document length is very long like (Beltagy et al., 2020)." }, { "heading": "H DISCUSSIONS ON RELATED WORKS", "text": "Complementary effect between APE and RPE The complementary effect between APE and RPE was demonstrated to be effective in (Wang et al., 2019) for machine translation. In the pretrained language model, Ke et al. (2020) propose that combining APE and RPE could be beneficial for classification tasks (GLUE), which in this paper, this complementary effect is not significant since most PE combinations (APE and RPE) do not outperform the BERT-style fully-learnable APE on classification. Instead, we empirically conclude that most PE combinations boost the performance in span prediction tasks. The benefit in classification tasks in (Ke et al., 2020) may come from other modifications, for example, it unties the [CLS] symbol from other positions. Moreover, in the paper, it adopts a special relative position embedding like (Raffel et al., 2019) (as this paper also suggests to do so): a simplified form of PE that each “embedding” is simply a scalar bias added to the corresponding logits when computing the attention weights. The fundamental difference between the ‘position bias’ and position embedding is unknown from now.\nStudy on attention visualization. Many works are focusing on understanding attention patterns in individual heads. For example Vig (2019) introduced a tool for visualizing attention in the Transformer at multiple scales; Rogers et al. (2020) suggest attention mechanisms like Vertical, Diagonal, Vertical + diagonal, Block, and Heterogeneous. Clark et al. (2019) found some attention mechanisms like attending broadly, to next, to [CLS] or [SEP], attend to punctuation. While our paper focuses on the general attention introduced by PEs from an average point of view, without considering any specific attention head.\nAsymmetry in sequential labeling Yan et al. 
(2019) suggested asymmetry of position embeddings in the named-entity recognition task (without involving pre-trained language models), which is a kind of sequential labeling task, like span prediction (SQuAD) in this paper. Their conclusion is generally compatible with ours, but we question their assumption that ‘the property of distance-awareness disappears when query and key projection are conducted’. As shown in Fig. 9, we can still see some distance awareness by directly taking the average position-position correspondence in the first layer among many heads (i.e., $P W^{Q,1} (W^{K,1})^T P^T$).
Functional parameterization of PEs Xu et al. (2019) propose various variants of sinusoidal positional encodings inspired by functional analysis. Wang et al. (2020) proposed a sinusoid-like complex word embedding to encode word order. Both (Xu et al., 2019) and (Wang et al., 2020) assume that PEs should satisfy the translation invariance property, but they induce different types of sinusoidal PE parameterizations, either in real or in complex vector space. Moreover, Liu et al. (2020) use a neural ODE component to parameterize position encodings as a continuous dynamical model, which can learn suitable PEs in neural networks. All of these PEs are inspiring. Since selecting the suitable parameterization type is not the main concern of this paper, we adopted the typical ones, namely the fully-learnable and the (learnable or fixed) sinusoidal APEs/RPEs. The fundamental difference between these PE parameterizations needs further investigation. More recently, (Wang & Chen, 2020) empirically study the behaviour of many position embeddings and their performance in Transformers for various NLP tasks." }, { "heading": "I THE THREE PROPERTIES IN OTHER MODELS", "text": "Using the proposed identical word probing test, we also check the properties of other trained Transformer models with decoder components in Tab. 7 and Fig. 10. The machine translation model 13 is a typical encoder-decoder architecture using multi-layer Transformers. GPT2 (Radford et al., 2019) adopts a purely decoder architecture; the 12-layer base setting is used in this work.
Monotonicity Compared to BERT and the machine translation model, GPT2 satisfies monotonicity (especially in the first 20 offsets) better than the other models, showing that capturing the distance between neighboring tokens matters in language modeling.
Translation invariance As seen from the translation invariance indicators in Tab. 7, GPT2 satisfies translation invariance more poorly than the other models, since its tokens additionally attend to a few beginning tokens no matter how far away the attended tokens are.
Symmetry GPT2 shows the biggest symmetrical discrepancy, since GPT2, which aims to predict the next word, adopts an attention mask over succeeding tokens to avoid information leakage. In addition, the machine translation encoder attends slightly more to succeeding tokens, while BERT attends more to preceding tokens than to succeeding tokens.
13An English-to-French machine translation model downloaded from https://github.com/pytorch/fairseq/tree/master/examples/translation (Vaswani et al., 2017; Ott et al., 2018; 2019).
Figure 10: Identical word probing with different types of trained models. (a) Encoder (w/ Decoder): machine translation (Vaswani et al., 2017); (b) Decoder: generative language model (GPT2) (Radford et al., 2019); (c) Encoder: masked language model (BERT) (Devlin et al., 2018)."
}, { "heading": "J WHITE BAND EFFECTS ALONG THE DIAGONAL", "text": "In order to analyze the white band effects along the diagonal, we show all results of identical word probing (average attention values in the first layer of identical word probing with respect to 100\n13An English-to-French machine translation model downloaded from https://github.com/ pytorch/fairseq/tree/master/examples/translation (Vaswani et al., 2017; Ott et al., 2018; 2019)\n(a) Encoder (w/ Decoder): Machine translation (Vaswani et al., 2017) (b) Decoder : Generative language model (GPT2) (Radford et al., 2019) (c) Encoder : Masked language model (BERT) (Devlin et al., 2018)\nFigure 10: Identical word probing with different types of trained models.\nrandomly-selected words). This effect is more clear for fully-learnable RPE, learnable sinusoidal RPE and any combination variants including them (see. Fig. 11 (d,f,g,i,j,l)). To show the obvious differences between these PEs, in this paper, we use average unnormalized attention weights matrix for probing, but all indicators are calculated using normalized attention values for better quantitative comparison." }, { "heading": "K THE REPLACEABLE PROPERTY ABOUT ABSOLUTE POSITIONS OF WORDS", "text": "For example (we do not consider subword tokenization for simplicity), we have two sentences for next sentence predictions (As BERT did)\nsentence1 : Deadlines are the No.1 productive forces .\nsentence2 : I think , therefore I am .\nBy adding three special tokens, we will have a example with 17 tokens as\n[CLS] Deadlines are the No.1 productive forces . [SEP] I think , therefore I am .[SEP]\nwith absolute positions in the bracket as\n[CLS](1) Deadlines(2) are(3) the(4) No.1(5) productive(6) forces(7) .(8) [SEP](9) I(10) think(11) ,(12) therefore(13) I(14) am(15) .(16) [SEP](17)\nAssume that the expected maximum sequence length is 16 (actually 128 or 512 in BERT), we need to randomly remove the first token of the first sentence (i.e., Deadlines ) as\nvalid sample: I: [CLS](1) are(2) the(3) No.1(4) productive(5) forces(6) .(7) [SEP](8) I(9) think(10) ,(11) therefore(12) I(13) am(14) .(15) [SEP](16)\nor last token of the second sentence (i.e., . )\nvalid sample: II: [CLS](1) Deadlines(2) are(3) the(4) No.1(5) productive(6) forces(7) .(8) [SEP](9) I(10) think(11) ,(12) therefore(13) I(14) am(15) [SEP](16)\nBoth the above two sentences are valid for training. If we replaced the first sentence with another shorter sentence (i.e., Publish/Launch or Perish ?), the sample would be\nvalid sample: III: [CLS](1) Publish(2) or(3) Perish(4) ?(5) [SEP](6) I(7) think(8) ,(9) therefore(10) I(11) am(12) [SEP](13) [PAD](14) [PAD](15) [PAD](16)\nThe three samples I,II,III are valid, but its absolute position indexes are not shiftable. Especially, the first sentence of the second sentence could be 9, 10, and 7, respectively, depending on the random seed for dropping and the length of the first sentence.\n(a) fully-learnable APE (b) fixed sinusoidal APE (c) learnable sin. APE\n(d) fully-learnable RPE (e) fixed sinusoidal RPE (f) learnable sinusoidal RPE\n(g) learnable sinusoidal APE + fullylearnable RPE (h) learnable sinusoidal APE + fixed sinusoidal RPE (i) learnable sinusoidal APE + learnable sinusoidal RPE\n(j) BERT-style APE + fullylearnable RPE (k) BERT-style APE + fixed sinusoidal RPE (l) BERT-style APE+ learnable sinusoidal RPE\nFigure 11: Identical word probing (models with more PEs are shown here comparing to Fig. 2). 
Figure 11: Identical word probing (models with more PEs are shown here compared to Fig. 2): (a) fully-learnable APE; (b) fixed sinusoidal APE; (c) learnable sinusoidal APE; (d) fully-learnable RPE; (e) fixed sinusoidal RPE; (f) learnable sinusoidal RPE; (g) learnable sinusoidal APE + fully-learnable RPE; (h) learnable sinusoidal APE + fixed sinusoidal RPE; (i) learnable sinusoidal APE + learnable sinusoidal RPE; (j) BERT-style APE + fully-learnable RPE; (k) BERT-style APE + fixed sinusoidal RPE; (l) BERT-style APE + learnable sinusoidal RPE. A darker cell in the i-th row and j-th column means that the i-th word generally attends more to the j-th word." } ]
2021
ON POSITION EMBEDDINGS IN BERT
SP:34177dc9d2e81610d167b996c3f106327c666f94
[ "This paper proposes a new method (AR-AdaLS) for label smoothing to improve deep network calibration. In particular, the authors draw a connection between lack of calibration (overconfidence) and examples which are prone to adversarial attacks. They show that by generating smoothed targets based on the adversarial robustness of an example, they can further improve model calibration beyond traditional label smoothing.", "The authors first expose a link between robustness and expected calibration error (ECE), the less robust a data point is, the larger the ECE. They then propose to exploit this link by introducing an adaptive label smoothing method that improves the expected calibration error of less robust data points. They benchmark their new method showing better calibration metrics on standard datasets, as well on corrupted datasets and out-of-distribution data. ", "This paper investigates the relationship between calibration and adversarial robustness, showing that calibration error is larger among less robust images. Based on this, a training procedure is proposed where the labels of the images are smoothed based on the robustness level, to produce a better calibrated model. The proposed method is evaluated on CIFAR-10/CIFAR-100, and their corrupted counterparts, comparing with other calibration and label smoothing approaches, as well as ensemble methods.", "This paper studies the connection between adversarial robustness (measured as the distance to the decision boundary using some adversarial attack) and model calibration (i.e how well the predicted probability indicates how much we can truth the model predictions). 1. The authors show that there is a significant correlation between adversarial robustness and model calibration. That is, inputs that have smaller distance to the decision boundary are more likely to have poorly calibrated predictions. 2. Based on this insight, the authors propose an algorithm called AR-AdaLS to learn how much to smooth the labels of the training data based on their adversarial robustness. They also discuss how this technique can be extended to an ensemble model. 3. The authors thoroughly compare their results against many previous calibration techniques and show that AR-AdaLS results in improved performance (for the single-model based methods). The results are superior also under distribution shift. They show results on CIFAR-10, CIFAR-100 and Imagenet datasets. For results under distribution shift, they show results on CIFAR-10-C, CIFAR-100-C and Imagenet-C datasets." ]
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that, through small perturbations to the inputs, cause incorrect predictions. Further, trust is undermined when models give miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that the inputs for which the model is sensitive to small perturbations (i.e., are easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we examine if calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlations of adversarial robustness and calibration into training by adaptively softening the labels for an example based on how easily it can be attacked by an adversary. We find that our method, taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration of the model even under distributional shifts. In addition, AR-AdaLS can also be applied to an ensemble model to further improve model calibration.
[ { "affiliations": [], "name": "Yao Qin" }, { "affiliations": [], "name": "Xuezhi Wang" }, { "affiliations": [], "name": "Alex Beutel" } ]
[ { "authors": [ "A. Athalye", "N. Carlini", "D. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "C. Blundell", "J. Cornebise", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Weight uncertainty in neural network. volume", "venue": "Proceedings of Machine Learning Research, pp. 1613–1622,", "year": 2015 }, { "authors": [ "N. Carlini", "D. Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec", "year": 2017 }, { "authors": [ "N. Carlini", "D. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "N. Carlini", "Ú. Erlingsson", "N. Papernot" ], "title": "Distribution density, tails, and outliers in machine learning", "venue": "Metrics and applications. ArXiv,", "year": 2019 }, { "authors": [ "Y. Gal", "Z. Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "I. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "I. Goodfellow", "Y. Qin", "D. Berthelot" ], "title": "Evaluation methodology for attacks against confidence thresholding models", "venue": null, "year": 2018 }, { "authors": [ "A. Graves" ], "title": "Practical variational inference for neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "C. Guo", "G. Pleiss", "Y. Sun", "K.Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "W. He", "B. Li", "D. Song" ], "title": "Decision boundary analysis of adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "D. Hendrycks", "T.G. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "D.P. Kingma", "T. Salimans", "M. Welling" ], "title": "Variational dropout and the local reparameterization trick", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "M. Kull", "M. Perello Nieto", "M. Kängsepp", "T. Silva Filho", "H. Song", "P. 
Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "B. Lakshminarayanan", "A. Pritzel", "C. Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "M. Milani Fard", "Q. Cormier", "K. Canini", "M. Gupta" ], "title": "Launch and iterate: Reducing prediction churn", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "R. Müller", "S. Kornblith", "G.E. Hinton" ], "title": "When does label smoothing help", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Y. Qin", "N. Frosst", "S. Sabour", "C. Raffel", "G. Cottrell", "G. Hinton" ], "title": "Detecting and diagnosing adversarial images with class-conditional capsule reconstructions", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "O. Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Z. Huang", "A. Karpathy", "A. Khosla", "M.S. Bernstein", "A.C. Berg", "Li", "F.-F" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "J. Snoek", "Y. Ovadia", "E. Fertig", "B. Lakshminarayanan", "S. Nowozin", "D. Sculley", "J. Dillon", "J. Ren", "Z. Nado" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Y. Song", "T. Kim", "S. Nowozin", "S. Ermon", "N. Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "P. Stock", "M. Cissé" ], "title": "Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "D. Stutz", "M. Hein", "B. Schiele" ], "title": "Confidence-calibrated adversarial training: Generalizing to unseen attacks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "C. Szegedy", "W. Zaremba", "I. Sutskever", "J. Bruna", "D. Erhan", "I.J. Goodfellow", "R. Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "C. Szegedy", "V. Vanhoucke", "S. Ioffe", "J. Shlens", "Z. Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "S. Thulasidasan", "G. Chennupati", "J.A. Bilmes", "T. Bhattacharya", "S. Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "M. Welling", "Y.W. 
Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Y. Wen", "D. Tran", "J. Ba" ], "title": "Batchensemble: An alternative approach to efficient ensemble and lifelong learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "D. Xin", "N. Mayoraz", "H. Pham", "K. Lakshmanan", "J.R. Anderson" ], "title": "Folding: Why good models sometimes make spurious recommendations", "venue": "In Proceedings of the Eleventh ACM Conference on Recommender Systems,", "year": 2017 }, { "authors": [ "Y. Yang", "G. Zhang", "D. Katabi", "Z. Xu" ], "title": "Me-net: Towards effective adversarial robustness with matrix estimation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "S. Zagoruyko", "N. Komodakis" ], "title": "Wide residual networks", "venue": "In British Machine Vision Association,", "year": 2016 }, { "authors": [ "H. Zhang", "M. Cissé", "Y. Dauphin", "D. Lopez-Paz" ], "title": "Mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representation,", "year": 2018 }, { "authors": [ "J. Zhang", "B. Kailkhura", "Han", "T.Y.-J" ], "title": "Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2020 } ]
[ { "heading": "1 Introduction", "text": "The robustness of machine learning algorithms is becoming increasingly important as ML systems are being used in higher-stakes applications. In one line of research, neural networks are shown to lack adversarial robustness – small perturbations to the input can successfully fool classifiers into making incorrect predictions (Szegedy et al., 2014; Goodfellow et al., 2014; Carlini & Wagner, 2017b; Madry et al., 2017; Qin et al., 2020b). In largely separate lines of work, researchers have studied uncertainty in model’s predictions. For example, models are often miscalibrated where the predicted confidence is not indicative of the true likelihood of the model being correct (Guo et al., 2017; Thulasidasan et al., 2019; Lakshminarayanan et al., 2017; Wen et al., 2020; Kull et al., 2019). The calibration issue is exacerbated when models are asked to make predictions on data different from the training distribution (Snoek et al., 2019), which becomes an issue in practical settings where it is important that we can trust model predictions under distributional shift.\nDespite robustness, in all its forms, being a popular area of research, the relationship between these perspectives has not been extensively explored previously. In this paper, we study the correlation between adversarial robustness and calibration. We discover that input data points that are sensitive to small adversarial perturbations (are easily attacked) are more likely to have poorly calibrated predictions. This holds true on a number of network architectures for classification and on all the datasets that we consider: CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). This suggests that the miscalibrated predictions on those adversarially unrobust data points greatly degrades the performance of model calibration. Based on this insight, we hypothesize and study if calibration can be improved by giving different supervision to the model depending on adversarial robustness of each training data.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nTo this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) to integrate the correlations between adversarial robustness and calibration into training. Specifically, AR-AdaLS adaptively smooths the training labels conditioned on how vulnerable an input is to adversarial attacks. Our method improves label smoothing (Szegedy et al., 2014) by explicitly teaching the model to differentiate the training data according to their adversarial robustness and then adaptively smooth their labels. By giving different supervision to the training data, our method leads to better calibration over the model without an increase of latency during inference. In addition, since adversarially unrobust data points can be considered as outliers of the underlying data distribution (Carlini et al., 2019), our method can even greatly improve model calibration on held-out shifted data. Further, we propose “AR-AdaLS of Ensemble” to combine our AR-AdaLS and deep ensembles (Lakshminarayanan et al., 2017; Snoek et al., 2019), to further improve the calibration performance under distributional shift. 
Last, we find that an additional benefit of AR-AdaLS is improved model stability (i.e., decreased variance over multiple independent runs), which is valuable in practical applications where changes in predictions across runs (churn) are problematic.\nIn summary, our main contributions are as follows:\n• Relationship among Robustness Metrics: We find a significant correlation between adversarial robustness and calibration: inputs that are unrobust to adversarial attacks are more likely to have poorly calibrated predictions.\n• Algorithm: We hypothesize that training a model with different supervision based on the adversarial robustness of each input will make the model better calibrated. To this end, we propose AR-AdaLS to automatically learn how much to soften the labels of training data based on their adversarial robustness. Further, we introduce “AR-AdaLS of Ensemble” to show how to apply AR-AdaLS to an ensemble model.\n• Experimental Analysis: On CIFAR-10, CIFAR-100 and ImageNet, we find that AR-AdaLS is more effective than previous label smoothing methods in improving calibration, particularly for shifted data. Further, we find that while ensembling can be beneficial, applying AR-AdaLS to adaptively calibrate ensembles offers further improvements in calibration." }, { "heading": "2 Related Work", "text": "Uncertainty estimates How to better estimate a model’s predictive uncertainty is an important research topic, since many models designed with a focus on accuracy may fall short in predictive uncertainty.\nA popular way to improve a model’s predictive uncertainty is to make the model well-calibrated, e.g., post-hoc calibration by temperature scaling (Guo et al., 2017) and multi-class Dirichlet calibration (Kull et al., 2019). In addition, Bayesian neural networks, through learning a posterior distribution over network parameters, can also be used to quantify a model’s predictive uncertainty, e.g., Graves (2011); Blundell et al. (2015); Welling & Teh (2011). Dropout-based variational inference (Gal & Ghahramani, 2016; Kingma et al., 2015) can help DNN models make less over-confident predictions and be better calibrated. Recently, mixup training (Zhang et al., 2018) has been shown to improve both models’ generalization and calibration (Thulasidasan et al., 2019) by preventing the model from being over-confident in its predictions. Despite the success of improving uncertainty estimates on in-distribution data, Snoek et al. (2019) argue that this does not usually translate to better performance on data that shift from the training distribution. Among all the methods evaluated by Snoek et al. (2019) under distributional shift, an ensemble of deep neural networks (Lakshminarayanan et al., 2017) is shown to be the most robust to dataset shift, producing the best uncertainty estimates.\nAdversarial robustness On the other hand, machine learning models are known to be brittle (Xin et al., 2017) and vulnerable to adversarial examples (Athalye et al., 2018; Carlini & Wagner, 2017a,b; He et al., 2018; Qin et al., 2020a). Many defenses have been proposed to improve models’ adversarial robustness (Song et al., 2017; Yang et al., 2019; Goodfellow et al., 2018); however, many are subsequently broken by more advanced defense-aware attacks (Carlini & Wagner, 2017b; Athalye et al., 2018). Recently, Carlini et al. (2019); Stock & Cissé (2018) define adversarial robustness as the minimum distance in the input domain required to change the model’s output prediction by constructing an adversarial attack. 
The most recent work that is closest to ours, Carlini et al. (2019), makes the interesting observation that easily attackable data are often outliers in the underlying data distribution and then uses adversarial robustness to determine an improved ordering for curriculum learning. Our work, instead, explores the relationship between adversarial robustness and calibration. In addition, we use adversarial robustness as an indicator to adaptively smooth the training labels to improve model calibration.\nLabel smoothing Label smoothing was originally proposed in Szegedy et al. (2016) and is shown to be effective in improving the quality of uncertainty estimates in Müller et al. (2019); Thulasidasan et al. (2019). Instead of minimizing the cross-entropy loss between the predicted probability $\hat{p}$ and the one-hot label $p$, label smoothing minimizes the cross-entropy between the predicted probability and a softened label $\tilde{p} = p(1 - \epsilon) + \epsilon/Z$, where $Z$ is the number of classes in the dataset and $\epsilon$ is a hyperparameter that controls the degree of the smoothing effect. Our work makes label smoothing adaptive and incorporates the correlation with adversarial robustness to further improve calibration." }, { "heading": "3 Correlations between Adversarial Robustness and Calibration", "text": "To explore the relationship between adversarial robustness and calibration, we first introduce the metrics used to evaluate each of them (arrows indicate which direction is better).\nAdversarial robustness ↑ Adversarial robustness measures the minimum distance in the input domain required to change the model’s output prediction by constructing an adversarial attack (Carlini et al., 2019; Stock & Cissé, 2018). Specifically, given an input $x$ and a classifier $f(\cdot)$ that predicts the class for the input, the adversarial robustness is defined as the minimum adversarial perturbation $\delta$ such that $f(x + \delta) \neq f(x)$. Following Carlini et al. (2019), we construct the $\ell_2$-based CW attack (Carlini & Wagner, 2017b) and then use the $\ell_2$ norm of the adversarial perturbation $\|\delta\|_2$ to measure the distance to the decision boundary. Therefore, a more adversarially robust input requires a larger adversarial perturbation to change the model’s prediction.\nExpected calibration error ↓ Model calibration measures the alignment between the predicted probability and the accuracy. Well-calibrated predictions convey how much we should trust a model’s prediction. We follow the widely used expected calibration error (ECE) to measure the calibration performance of a network (Guo et al., 2017; Snoek et al., 2019). To compute the ECE, we first divide all the data into $K$ buckets sorted by their predicted probability (confidence) of the predicted class. Let $B_k$ represent the set of data in the $k$-th confidence bucket. Then the accuracy and the confidence of $B_k$ are defined as $\mathrm{acc}(B_k) = \frac{1}{|B_k|} \sum_{i \in B_k} \mathbb{1}(\hat{y}_i = y_i)$ and $\mathrm{conf}(B_k) = \frac{1}{|B_k|} \sum_{i \in B_k} \hat{p}_i^{\hat{y}_i}$, where $\hat{y}$ and $y$ represent the predicted class and the true class respectively, and $\hat{p}^{\hat{y}}$ is the predicted probability of $\hat{y}$. The ECE is then defined as $\mathrm{ECE} = \sum_{k=1}^{K} \frac{|B_k|}{N} \, |\mathrm{acc}(B_k) - \mathrm{conf}(B_k)|$, where $N$ is the number of data points." 
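To make the ECE metric concrete, the following is a minimal NumPy sketch (our illustration, not the authors' code; the equal-width confidence binning follows the convention of Guo et al. (2017), and the bin count is an assumption):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """ECE = sum_k (|B_k| / N) * |acc(B_k) - conf(B_k)|.

    confidences: (N,) predicted probability of the predicted class
    predictions: (N,) predicted class indices
    labels:      (N,) true class indices
    """
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    correct = (predictions == labels).astype(float)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc_bin = correct[in_bin].mean()       # acc(B_k)
            conf_bin = confidences[in_bin].mean()  # conf(B_k)
            ece += (in_bin.sum() / n) * abs(acc_bin - conf_bin)
    return ece
```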
}, { "heading": "3.1 Correlations", "text": "Based on the evaluation metrics, we can see that adversarial robustness and calibration are measuring quite different properties: the adversarial robustness measures the property of the data by computing the adversarial perturbation δ from the input domain, while the calibration metric measures the properties of the model’s predicted probability in the output space. Although adversarial robustness and calibration are conceptually different, they are both connected to the decision boundary. Specifically, adversarial robustness can be used to measure the distance to the decision boundary: if a data point is adversarially unrobust, i.e., easy to find a small input perturbation to fool the classifier into wrong classifications, then this data point is close to the decision boundary. Meanwhile, models should have relatively less confident predictions on data points close to the decision boundary. However, as pointed out by (Guo et al., 2017; Snoek et al., 2019), existing deep neural networks are frequently over-confident, i.e., having predictions with high confidence even whey they should be uncertain. Taking these two together, we hypothesize if examples that can be easily attacked by adversarial examples are also poorly calibrated.\nTo test this, we perform experiments on the clean test set across three datasets: CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) with different networks, whose architecture and accuracy are shown in Table 1. We refer to these models as “Vanilla” for each dataset in the following discussion. The details for training each vanilla network are included in Appendix A.\nTo explore the relationship between adversarial robustness and calibration, we start with the relationship between adversarial robustness and confidence together with accuracy. Specifically, we rank the input data according to their adversarial robustness and then divide the dataset into R equally-sized subsets (R = 10 used in this paper). For each adversarial robustness subset, we compute the accuracy and the average confidence score of the predicted class. As shown in the first row in Figure 1, we can clearly see that both accuracy and confidence increase with the adversarial robustness of the input data, and confidence is consistently higher than accuracy in each adversarial robustness subset across three datasets. This indicates that although vanilla classification models achieve the state-of-the-art accuracy, they tend to give over-confident predictions, especially for those adversarially unrobust data points.\nTaking one step further, we particularly compute the expected calibration error (ECE) in each adversarial robustness subset, shown in the bottom row of Figure 1. In general, we find that data points falling into lower adversarial robustness levels are more likely to be over-confident and less well calibrated (larger ECE). For those adversarially robust examples, there is a better alignment between the model’s predicted confidence and accuracy, and the ECE over those examples is close to 0. This nicely validates our hypothesis: inputs that are adversarially unrobust are more likely to have poorly calibrated predictions. On larger-scale ImageNet, while we still see the general trend holds, the least adversarially robust examples are relatively well calibrated. 
We hypothesize this may be due to the larger training set and less overfitting.\nFurthermore, we also find an interesting correlation between adversarial robustness and model stability, which is measured by the variance of the predicted probability across $M$ independent runs (e.g., $M = 5$). The variance is computed as $\sigma^2 = \frac{1}{M-1} \frac{1}{N} \sum_{m=1}^{M} \sum_{i=1}^{N} (\hat{p}_{m,i} - \bar{p}_i)^2$, where $\hat{p}_{m,i}$ is the $m$-th model’s predicted probability for the $i$-th data point and $\bar{p}_i = \frac{1}{M} \sum_{m=1}^{M} \hat{p}_{m,i}$ is the average predicted probability over the $M$ runs. As shown in the bottom row of Figure 1, we see that adversarially unrobust examples tend to have a much higher variance across all three datasets. This indicates that inputs that are unrobust to adversarial attacks are more likely to have unstable predictions.\nAlgorithm 1 Training procedure for AR-AdaLS\nInput: number of classes $Z$, number of training epochs $T$, number of adversarial robustness subsets $R$, learning rate of adaptive label smoothing $\alpha$. For each adversarial robustness training subset, initialize the soft label as the one-hot label $\tilde{p}_{r,t} = p_r$, so the initial soft label for the correct class is $\tilde{p}^{z=y}_{r,t} = 1$.\nfor $t = 1$ to $T$ do\n  Minimize the cross-entropy loss between soft labels and predicted probabilities $\frac{1}{R} \sum_{r=1}^{R} L(\tilde{p}_{r,t}, \hat{p}_{r,t})$\n  for $r = 1$ to $R$ do\n    Update $\tilde{p}^{z=y}_{r,t+1} \leftarrow \tilde{p}^{z=y}_{r,t} - \alpha \cdot \{\mathrm{conf}(S^{val}_r)_t - \mathrm{acc}(S^{val}_r)_t\}$  (Eqn. (3))\n    Clip $\tilde{p}^{z=y}_{r,t+1}$ to be within $(\frac{1}{Z}, 1]$\n    Update $\epsilon_{r,t+1} \leftarrow (\tilde{p}^{z=y}_{r,t+1} - 1) \cdot \frac{Z}{1-Z}$  (Eqn. (4))\n    Update $\tilde{p}_{r,t+1} \leftarrow p_r(1 - \epsilon_{r,t+1}) + \frac{\epsilon_{r,t+1}}{Z}$  (Eqn. (1))\n  end for\nend for\nTaken together, these empirical results nicely build a connection between very different concepts. In particular, adversarial robustness is measured over the input domain while both calibration and stability are measured over the output space. Given the strong empirical connection, we now ask: can we improve model calibration and stability by targeting adversarially unrobust examples?" }, 
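Before turning to the method, the stability metric above is easy to state in code. The following NumPy sketch is our illustration (not the paper's code): it computes the across-run variance sigma^2 and reports it per adversarial-robustness subset, mirroring the R-way slicing used in Figure 1.

```python
import numpy as np

def prediction_variance(probs):
    """sigma^2 = 1/(M-1) * 1/N * sum_m sum_i (p_hat_{m,i} - p_bar_i)^2,
    where probs has shape (M, N): M independent runs, N examples."""
    m, n = probs.shape
    p_bar = probs.mean(axis=0)  # average predicted probability over M runs
    return ((probs - p_bar) ** 2).sum() / ((m - 1) * n)

def variance_by_robustness(probs, robustness, n_subsets=10):
    """Split examples into equally sized subsets by adversarial robustness
    (e.g., the ||delta||_2 distance to the decision boundary) and compute
    the prediction variance within each subset, least robust first."""
    order = np.argsort(robustness)
    buckets = np.array_split(order, n_subsets)
    return [prediction_variance(probs[:, idx]) for idx in buckets]
```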
{ "heading": "4 Method", "text": "Based on the correlation between adversarial robustness and calibration, we hypothesize and study whether calibration can be improved by giving different supervision to the model depending on the adversarial robustness of the training data. To this end, we propose a method named Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which applies label smoothing of different degrees to the training data based on their adversarial robustness. Specifically, we sort and divide the training data into $R$ equally sized subsets according to their adversarial robustness (note that predicted confidence is not a good indicator for splitting the training dataset, since the model can easily overfit to the training data so that their predicted confidences are all close to 100%) and then use $\epsilon_r$ to soften the labels in each training subset $S^{train}_r$. The soft labels can be formulated as:\n$\tilde{p}_r = p_r(1 - \epsilon_r) + \frac{\epsilon_r}{Z}$, (1)\nwhere $p_r$ stands for the one-hot vector, i.e., $p^{z=y}_r = 1$ for the correct class and $p^{z \neq y}_r = 0$ for the others, and $Z$ is the number of classes in the dataset. The parameter $\epsilon_r$ controls the degree of smoothing and allows for different levels of smoothing in each adversarial robustness subset. Generally, a relatively larger $\epsilon_r$ is desirable at lower adversarial robustness levels so that the model learns to make less confident predictions. Instead of setting the parameter $\epsilon_r$ empirically in each adversarial robustness subset, we allow it to be adaptively updated according to the calibration performance on the validation set (discussed in Section 4.1). In this way, we explicitly train a network with different supervision based on the adversarial robustness of the training data.\nThere are two options to obtain the adversarial robustness. One is “on the fly”: keep creating adversarial attacks during training, which provides a precise adversarial robustness ranking but at the cost of substantial computing time. The other is to “pre-compute” the adversarial robustness by attacking a vanilla model with the same network architecture but trained with one-hot labels. This is more efficient but sacrifices some precision in the adversarial robustness ranking. In practice, we find that the pre-computed adversarial robustness is sufficient for the network to differentiate the adversarially robust and unrobust data (see more discussion in Section 5.6). Therefore, all experiments related to “AR-AdaLS” without further specification are based on pre-computed adversarial robustness for efficiency." }, { "heading": "4.1 Adaptive learning mechanism", "text": "To find the best hyperparameter for label smoothing, previous methods (Szegedy et al., 2016; Thulasidasan et al., 2019) sweep $\epsilon$ over a range and choose the value with the best validation performance. However, in our setting, the number of combinations of $\epsilon_r$ increases exponentially with the number of adversarial robustness subsets $R$. To this end, we propose an adaptive learning mechanism to automatically learn the parameter $\epsilon_r$ in each adversarial robustness subset. The overall training procedure is summarized in Algorithm 1.\nFirst, we denote the soft label for the correct class in the $r$-th adversarial robustness subset as $\tilde{p}^{z=y}_r$. According to Eqn. (1), we can derive:\n$\tilde{p}^{z=y}_r = 1 - \epsilon_r + \frac{\epsilon_r}{Z}$. (2)\nSince well-calibrated predicted probabilities should be aligned with the empirical accuracy, we use the calibration performance on the validation set to help update $\tilde{p}^{z=y}_r$ for the training data. Specifically, we first rank the adversarial robustness of the validation data and split the validation set into $R$ equally sized subsets. Then, we use the difference between confidence and accuracy in the $r$-th adversarial robustness validation subset, $\mathrm{conf}(S^{val}_r) - \mathrm{acc}(S^{val}_r)$, to update the soft label for the correct class of training data in the $r$-th adversarial robustness training subset $S^{train}_r$:\n$\tilde{p}^{z=y}_{r,t+1} = \tilde{p}^{z=y}_{r,t} - \alpha \cdot \{\mathrm{conf}(S^{val}_r)_t - \mathrm{acc}(S^{val}_r)_t\}$, (3)\nwhere $\tilde{p}^{z=y}_{r,t}$ is the soft label of the correct class in the $r$-th adversarial robustness training subset at time step $t$. The accuracy and the confidence of $S^{val}_r$ are defined as $\mathrm{acc}(S^{val}_r) = \frac{1}{|S^{val}_r|} \sum_{i \in S^{val}_r} \mathbb{1}(\hat{y}_i = y_i)$ and $\mathrm{conf}(S^{val}_r) = \frac{1}{|S^{val}_r|} \sum_{i \in S^{val}_r} \hat{p}^{z=\hat{y}_i}_i$, where $\hat{y}$ and $y$ are the predicted class and the true class respectively, and $\hat{p}^{z=\hat{y}}$ denotes the predicted probability of the predicted class. The hyperparameter $\alpha > 0$ acts as a learning rate to update the soft label $\tilde{p}^{z=y}_{r,t}$ based on the difference between the predicted confidence and accuracy on the validation set. Intuitively, if we assign a large $\tilde{p}^{z=y}_r$ to training data, then the network tends to make highly confident predictions, and vice versa. Therefore, if the confidence is greater than the accuracy ($\mathrm{conf}(S^{val}_r) > \mathrm{acc}(S^{val}_r)$) on the validation set, we should reduce $\tilde{p}^{z=y}_r$ to teach the network to be less confident. Otherwise, we should increase $\tilde{p}^{z=y}_r$. 
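Concretely, the per-subset update in Eqns. (1)-(4), including the clipping constraint discussed next, can be sketched in a few lines of NumPy (our illustration; the variable names and the small clipping margin are ours, not the paper's):

```python
import numpy as np

def update_soft_labels(p_correct, conf_val, acc_val, alpha, num_classes):
    """One adaptive update of the correct-class soft labels for R subsets.

    p_correct: (R,) current soft label of the correct class per subset
    conf_val, acc_val: (R,) confidence / accuracy on the validation subsets
    Returns the updated p_correct and the per-subset smoothing degrees eps.
    """
    z = num_classes
    p_correct = p_correct - alpha * (conf_val - acc_val)  # Eqn. (3)
    p_correct = np.clip(p_correct, 1.0 / z + 1e-8, 1.0)   # keep within (1/Z, 1]
    eps = (p_correct - 1.0) * z / (1.0 - z)               # Eqn. (4)
    return p_correct, eps

def soften(one_hot, eps, num_classes):
    """Eqn. (1): p_tilde = p * (1 - eps) + eps / Z."""
    return one_hot * (1.0 - eps) + eps / num_classes
```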
In addition, we also need to constrain $\tilde{p}^{z=y}_r$ to be within $(\frac{1}{Z}, 1]$ after each update, as it stands for the true probability of the correct class, where $Z$ is the number of classes in the dataset.\nFor a given $\tilde{p}^{z=y}_r$, we can easily obtain $\epsilon_r$ by reversing Eqn. (2):\n$\epsilon_r = (\tilde{p}^{z=y}_r - 1) \cdot \frac{Z}{1-Z}$, (4)\nand the soft labels for all the classes $\tilde{p}_r$ can be computed according to Eqn. (1). We update the soft labels after each training epoch in our experiments.\nNote that this adaptive learning mechanism can also be applied to standard label smoothing without adversarial robustness slicing ($R = 1$). In this case, we can replace sweeping the hyperparameter with this adaptive learning method, which we name “Adaptive Label Smoothing” (AdaLS). Our proposed AdaLS and AR-AdaLS do not increase the inference time: we test AdaLS and AR-AdaLS exactly as we test a vanilla model." }, { "heading": "5 Experiments", "text": "Datasets We test our method on three datasets: CIFAR-10, CIFAR-100 and ImageNet. In addition, we also report performance on the shifted datasets CIFAR-10-C, CIFAR-100-C and ImageNet-C (Hendrycks & Dietterich, 2019), which contain different types of corruptions (19 types for CIFAR-10, 17 types for CIFAR-100 and 15 types for ImageNet), e.g., noise, blur, weather and digital categories that are frequently encountered in natural images. Each type of corruption has five levels of shift intensity, with higher levels having more corruption." }, { "heading": "5.1 How does AR-AdaLS work?", "text": "To gain a deeper understanding of how AR-AdaLS works, in Figure 2 we visualize the effect of label smoothing (LS) and our AR-AdaLS. Comparing Figure 2 (a) and (b), AR-AdaLS is better at calibrating the data than label smoothing, especially on the adversarially unrobust examples (lower adversarial robustness levels). Further, we show plots of ECE and variance in Figure 2 (c) and (d). Both label smoothing and AR-AdaLS improve model calibration and stability over the vanilla model, and AR-AdaLS performs best among the three methods. This suggests that AR-AdaLS is better at improving calibration and stability in adversarially unrobust regions, not just on average." }, { "heading": "5.2 AR-AdaLS improves calibration", "text": "Baselines We compare our proposed AR-AdaLS with the following 8 methods: (1) Vanilla model trained with one-hot labels; (2) Temperature Scaling (Guo et al., 2017), a post-hoc calibration method where the predicted logits are divided by a temperature that is tuned on the held-out validation set; (3) label smoothing (LS) (Szegedy et al., 2016), which softens labels by sweeping the hyperparameter $\epsilon$ (the smoothing degree) over a range to find the best value; (4) Adaptive Label Smoothing (AdaLS): we use our proposed adaptive learning mechanism introduced in Section 4.1 to automatically learn the hyperparameter $\epsilon$ rather than sweeping to find the best value; (5) mixup, a data augmentation technique originally proposed by Zhang et al. (2018) and recently found to improve calibration (Thulasidasan et al., 2019); (6) Confidence-calibrated adversarial training (CCAT) (Stutz et al., 2020), a method that builds on adversarial training by reducing the confidence in the labels of adversarial examples. Note that there is a significant difference between our AR-AdaLS and CCAT: CCAT trains a model on generated adversarial examples to improve the model’s adversarial robustness. 
In contrast, our AR-AdaLS, trained on the clean training data, is proposed to use the correlation between adversarial robustness and calibration to improve a model’s calibration performance. (7) “Ensemble of Vanilla” (Lakshminarayanan et al., 2017), an ensemble of $M$ vanilla models independently trained with random initialization. (8) Mix-n-Match (Zhang et al., 2020), an ensemble and compositional method proposed for calibration. All the methods are trained with the same network architecture, i.e., WRN-28-10 (Zagoruyko & Komodakis, 2016) on both CIFAR-10 and CIFAR-100, and the same training hyperparameters, e.g., learning rate, batch size, and number of training epochs, for a fair comparison.[2] Please refer to Appendix A for all the training details and hyperparameters.\nResults The expected calibration error of all the methods on CIFAR-10 and CIFAR-100 is displayed in Table 2. We can clearly see that by differentiating the training data based on their adversarial robustness, AR-AdaLS effectively reduces the calibration error compared to the other single-model based methods without a significant change in accuracy (see Figure 6 in the Appendix), and it is only rivaled by mixup on CIFAR-100, which uses extra domain knowledge through data augmentation. Note that AR-AdaLS is trained only on the clean training data without any data augmentation, in contrast to mixup (Thulasidasan et al., 2019) and CCAT (Stutz et al., 2020).\n[2] The result of Mix-n-Match in Table 2 is from Table 1 of the original work (Zhang et al., 2020), which is trained with the same network architecture, WRN-28-10." }, { "heading": "5.3 Improve calibration on shifted datasets", "text": "Table 3 summarizes the mean calibration error (ECE) on the corrupted datasets CIFAR-10-C and ImageNet-C (Hendrycks & Dietterich, 2019). Looking at all the single-model based methods, we can see that AR-AdaLS achieves the lowest ECE, significantly outperforming the other single-model based methods. Contrasting with LS and AdaLS, we see that AR-AdaLS benefits greatly from the adversarial robustness slicing. As a result, our model learns to give smaller soft labels for the correct class to adversarially unrobust training data, which can also be considered outliers of the underlying data distribution (Carlini et al., 2019). Therefore, when tested on shifted data, on which deep networks have been shown to produce pathologically over-confident predictions (Hendrycks & Dietterich, 2019), our model correctly learns to make relatively lower-confidence predictions, resulting in better calibration performance.\nIn addition, we also compare AR-AdaLS with “Ensemble of Vanilla” (Lakshminarayanan et al., 2017), which has been shown to be the best method for calibration under distributional shift (Snoek et al., 2019). Ensemble of Vanilla is an ensemble of $M = 5$ vanilla models independently trained with random initialization. We can see that AR-AdaLS achieves comparable calibration performance on CIFAR-10, while the ensemble is better under highly shifted data on ImageNet.\nCombination with deep ensembles We further discuss the following two ways to combine AR-AdaLS with ensembles:\n• Ensemble of AR-AdaLS: As in Lakshminarayanan et al. 
(2017), we ensemble AR-AdaLS by training multiple independent AR-AdaLS models with random initialization, and average their predictions at inference.\n• AR-AdaLS of Ensemble: Instead of computing soft labels independently for each AR-AdaLS model, we perform AR-AdaLS on the ensembled predictions, i.e., in Eqn. (3) we compute confidence and accuracy based on the average of the $M = 5$ models’ predictions. Each model is then supervised with the same soft labels. We will see this slight distinction in training is quite important.\nAs shown in Table 3, naively combining deep ensembles with AR-AdaLS (Ensemble of AR-AdaLS) does not effectively improve model calibration (see more details in Appendix B). In contrast, AR-AdaLS of Ensemble, which adaptively adjusts smoothing to keep the ensemble models well calibrated, performs the best under distributional shift on both CIFAR-10 and ImageNet." }, { "heading": "5.4 Improve model stability", "text": "Since we observe in Figure 1 that the most adversarially unrobust data points also have very unstable predictions, we test whether AR-AdaLS can help improve model stability, which is of great value in practice, where high prediction variance causes churn (Milani Fard et al., 2016). In Figure 3 we can see that AR-AdaLS effectively reduces the variance of a model compared to a vanilla model and label smoothing on CIFAR-10 and ImageNet. Please refer to Table 5 in the Appendix for numerical results on both datasets." }, { "heading": "5.5 Improvements on out-of-distribution data", "text": "We further study the performance of AR-AdaLS when predicting on out-of-distribution (OOD) data. Following Snoek et al. (2019), we compare the performance of Vanilla, Label Smoothing and AR-AdaLS by plotting the histogram of the entropy on the OOD data (higher entropy on OOD is better). As shown in Figure 4, each model is trained on the CIFAR-10 dataset and then tested on the CIFAR-100 dataset. We can clearly see that AR-AdaLS significantly reduces the number of low-entropy predictions on OOD data. In addition, using CIFAR-10/CIFAR-100 as in-distribution/out-of-distribution data, we also report the area under the ROC curve (AUROC) of label smoothing, mixup and AR-AdaLS. The AUROC score of standard label smoothing and mixup is 0.832±0.005 and 0.821±0.003 respectively, whereas our AR-AdaLS achieves 0.885±0.003. This demonstrates the effectiveness of AR-AdaLS even on fully out-of-distribution data." }, { "heading": "5.6 Sensitivity analysis", "text": "Sensitivity to the number of adversarial robustness subsets We perform a sensitivity analysis for the number of adversarial robustness subsets $R$. Specifically, we plot the calibration error of AR-AdaLS with varying $R$ on the clean CIFAR-10 and corrupted CIFAR-10-C in Figure 5. We can see that there is a significant drop in calibration error (ECE) when we increase the number of adversarial robustness subsets $R$ from 1, where $R = 1$ denotes AdaLS. Further, the calibration error is relatively stable when $R$ is chosen within the range [10, 16]. Thus, we choose $R = 10$ for all AR-AdaLS results shown in this paper.\nSensitivity to the exactness of adversarial robustness To investigate this, we study the performance of AR-AdaLS using adversarial robustness obtained in two different ways. One is “on-the-fly”: we keep creating adversarial attacks during training, which provides a more precise adversarial robustness ranking but at the cost of substantial computing time. 
The other is to “pre-compute” the adversarial robustness by attacking a vanilla model trained with one-hot labels. This is more efficient but sacrifices some precision in the adversarial robustness ranking. We perform experiments on CIFAR-100 as an example to compare the performance of AR-AdaLS based on adversarial robustness that is “pre-computed” or computed “on-the-fly”. As shown in Table 4, generating adversarial robustness “on-the-fly” further improves the calibration performance of AR-AdaLS on both clean and shifted datasets, compared to pre-computing adversarial robustness. Similar patterns are observed on CIFAR-10.[3]\nTherefore, we can conclude that (1) the exactness of adversarial robustness is helpful for AR-AdaLS, that is, more precise adversarial robustness leads to better performance; and (2) AR-AdaLS with an approximation of adversarial robustness (pre-computed) can already significantly improve on label smoothing. Hence, all results in this paper related to “AR-AdaLS” without further specification are based on pre-computed adversarial robustness for efficiency. This is because our main goal is to show that differentiating the training data based on their adversarial robustness is a promising way to improve model calibration, rather than to push the results to their best.\n[3] We did not run on-the-fly AR-AdaLS for ImageNet due to the computational cost." }, { "heading": "6 Conclusion", "text": "In this paper, we have explored the correlations between adversarial robustness and calibration. We find, across three datasets, that adversarially unrobust data points, where small adversarial perturbations to the input are able to fool the classifier into wrong predictions, are more likely to have poorly calibrated and unstable predictions. Based on this insight, we propose AR-AdaLS to adaptively smooth the labels of the training data based on their adversarial robustness. In our experiments we see that AR-AdaLS is more effective than previous label smoothing methods in improving calibration, particularly for shifted data, and can offer improvements on top of already strong ensembling methods. We believe this is an exciting new use for adversarial robustness as a means to more generally improve model trustworthiness, not just by limiting adversarial attacks but also by improving calibration and stability on unexpected data. We hope this spurs further work at the intersection of these areas of research." } ]
null
Improving Calibration through the Relationship with Adversarial Robustness
SP:e1a78b637ef015d15ae3283f6bd3299e5244d457
[ "Neural text generation models typically rely on sampling schemes for autoregressive decoding. This may range from pure sampling, top-k, top-p to temperature modulated sampling. These methods are mostly heuristic schemes and lack theoretical analysis. This paper tries to fill that gap by analyzing these schemes theoretically under the Zipfian distribution assumption (an underlying distribution in natural language corpora and generally true for open-ended language generation models). While filling the theoretical gaps, this work proposes an adaptive top-k decoding mechanism - Mirostat. This is based on the understanding that cross-entropy is a useful measure of the quality of the generated text. " ]
Neural text decoding algorithms strongly influence the quality of texts generated using language models, but popular algorithms like top-k, top-p (nucleus), and temperature-based sampling may yield texts that have objectionable repetition or incoherence. Although these methods generate high-quality text after ad hoc parameter tuning that depends on the language model and the length of generated text, not much is known about the control they provide over the statistics of the output. This is important, however, since recent reports show that humans prefer text whose perplexity is neither too high nor too low, and since we experimentally show that cross-entropy (log of perplexity) has a near-linear relation with repetition. First, we provide a theoretical analysis of perplexity in top-k, top-p, and temperature sampling, under Zipfian statistics. Then, we use this analysis to design a feedback-based adaptive top-k text decoding algorithm called mirostat that generates text (of any length) with a predetermined target value of perplexity without any tuning. Experiments show that for low values of k and p, perplexity drops significantly with generated text length and leads to excessive repetitions (the boredom trap). Contrarily, for large values of k and p, perplexity increases with generated text length and leads to incoherence (the confusion trap). Mirostat avoids both traps. Specifically, we show that setting the target perplexity value beyond a threshold yields negligible sentence-level repetitions. Experiments with human raters for fluency, coherence, and quality further verify our findings.
[ { "affiliations": [], "name": "Sourya Basu" }, { "affiliations": [], "name": "Nitish Shirish Keskar" }, { "affiliations": [], "name": "Lav R. Varshney" } ]
[ { "authors": [ "Peter F. Brown", "Stephen A. Della Pietra", "Vincent J. Della Pietra", "Jennifer C. Lai", "Robert L. Mercer" ], "title": "An estimate of an upper bound for the entropy", "venue": null, "year": 1992 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Thomas M. Cover", "Joy A. Thomas" ], "title": "Elements of Information Theory", "venue": null, "year": 2006 }, { "authors": [ "Sumanth Dathathri", "Andrea Madotto", "Janice Lan", "Jane Hung", "Eric Frank", "Piero Molino", "Jason Yosinski", "Rosanne Liu" ], "title": "Plug and play language models: a simple approach to controlled text generation", "venue": "In Proc. 9th Int. Conf. Learn. Represent. (ICLR),", "year": 2020 }, { "authors": [ "Angela Fan", "Mike Lewis", "Yann Dauphin" ], "title": "Hierarchical neural story generation", "venue": "In Proc. Assoc. Comput. Linguist. Annu. Meet. (ACL", "year": 2018 }, { "authors": [ "Mary Ellen Foster", "Michael White" ], "title": "Avoiding repetition in generated text", "venue": "In Proc. Conf. Eleventh European Workshop Natural Language Generation", "year": 2007 }, { "authors": [ "Edgar N. Gilbert" ], "title": "Codes based on inaccurate source probabilities", "venue": "IEEE Trans. Inf. Theory,", "year": 1971 }, { "authors": [ "Te Sun Han", "Kingo Kobayashi" ], "title": "Mathematics of information and coding", "venue": "Ann. Math. Stat.,", "year": 2007 }, { "authors": [ "Tatsunori Hashimoto", "Hugh Zhang", "Percy Liang" ], "title": "Unifying human and statistical evaluation for natural language generation", "venue": "In Proc. NAACL-HLT", "year": 2019 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Antoine Bosselut", "David Golub", "Yejin Choi" ], "title": "Learning to write with cooperative discriminators", "venue": "In Proc. Assoc. Comput. Linguist. Annu. Meet. (ACL", "year": 2018 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Li Du", "Maxwell Forbes", "Yejin Choi" ], "title": "The curious case of neural text degeneration", "venue": "In Proc. 9th Int. Conf. Learn. Represent. (ICLR),", "year": 2020 }, { "authors": [ "Daphne Ippolito", "Reno Kriz", "Maria Kustikova", "João Sedoc", "Chris Callison-Burch" ], "title": "Comparison of diverse decoding methods from conditional language models", "venue": "In Proc. Assoc. Comput. Linguist. Annu. Meet. (ACL", "year": 2019 }, { "authors": [ "Daphne Ippolito", "Daniel Duckworth", "Chris Callison-Burch", "Douglas Eck" ], "title": "Automatic detection of generated text is easiest when humans are fooled", "venue": "In Proc. Assoc. Comput. Linguist. Annu. Meet. (ACL", "year": 2020 }, { "authors": [ "Shaojie Jiang", "Thomas Wolf", "Christof Monz", "Maarten de Rijke" ], "title": "TLDR: Token loss dynamic reweighting for reducing repetitive utterance generation", "venue": "[cs.CL].,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Bryan McCann", "Lav R. Varshney", "Caiming Xiong", "Richard Socher" ], "title": "CTRL: A conditional transformer language model for controllable generation", "venue": "arXiv:1909.05858v2 [cs.CL].,", "year": 2019 }, { "authors": [ "Ilia Kulikov", "Alexander Miller", "Kyunghyun Cho", "Jason Weston" ], "title": "Importance of search and evaluation strategies in neural dialogue modeling", "venue": "In Proc. 12th Int. Conf. 
Natural Language Generation (ICNLG", "year": 2019 }, { "authors": [ "Sander Lestrade" ], "title": "Unzipping Zipf’s law", "venue": "PloS ONE,", "year": 2017 }, { "authors": [ "Jiwei Li", "Will Monroe", "Dan Jurafsky" ], "title": "A simple, fast diverse decoding algorithm for neural generation", "venue": "[cs.CL].,", "year": 2016 }, { "authors": [ "Christopher Manning", "Hinrich Schutze" ], "title": "Foundations of Statistical Natural Language Processing", "venue": null, "year": 1999 }, { "authors": [ "Ning Miao", "Hao Zhou", "Lili Mou", "Rui Yan", "Lei Li" ], "title": "CGMH: Constrained sentence generation by metropolis-hastings sampling", "venue": "In Proc. 33rd AAAI Conf. Artif. Intell.,", "year": 2019 }, { "authors": [ "Steven T. Piantadosi" ], "title": "Zipf’s word frequency law in natural language: A critical review and future directions", "venue": "Psychonomic Bulletin & Review,", "year": 2014 }, { "authors": [ "David M.W. Powers" ], "title": "Applications and explanations of Zipf’s law", "venue": "In New Meth. Language Process. and Comp. Natural Language Learning,", "year": 1998 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "Unpublished manuscript,", "year": 2019 }, { "authors": [ "Lav R. Varshney", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Limits of detecting text generated by large-scale language models", "venue": "In Proc. 2020 Inf. Theory Appl. Workshop,", "year": 2020 }, { "authors": [ "Ashwin K. Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra" ], "title": "Diverse beam search for improved description of complex scenes", "venue": "In Proc. 32nd AAAI Conf. Artif. Intell.,", "year": 2018 }, { "authors": [ "Sean Welleck", "Ilia Kulikov", "Stephen Roller", "Emily Dinan", "Kyunghyun Cho", "Jason Weston" ], "title": "Neural text generation with unlikelihood training", "venue": "In Proc. 9th Int. Conf. Learn. Represent. (ICLR),", "year": 2020 }, { "authors": [ "Ian H. Witten", "Radford M. Neal", "John G. Cleary" ], "title": "Arithmetic coding for data compression", "venue": "Commun. ACM,", "year": 1987 }, { "authors": [ "Hugh Zhang", "Daniel Duckworth", "Daphne Ippolito", "Arvind Neelakantan" ], "title": "Trading off diversity and quality in natural language generation", "venue": "arXiv:2004.10450v1 [cs.CL].,", "year": 2020 }, { "authors": [ "George K. Zipf" ], "title": "The Psycho-biology of Language: An Introduction to Dynamic Philology", "venue": null, "year": 1965 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large-scale generative language models (LMs) have received recent attention due to their highquality open-ended text generation ability (Brown et al., 2020; Radford et al., 2019). Generating texts from these LMs usually relies on some form of random sampling. Pure sampling often leads to incoherent and low-quality texts (Holtzman et al., 2018), whereas greedy decoding leads to excessive repetitions, another form of low quality. The right decoding algorithm is needed to generate highquality texts with controlled attributes (Ippolito et al., 2020; Zhang et al., 2020; Ippolito et al., 2019).\nWe introduce mirostat,1 a neural text decoding algorithm that actively controls the generative process to maintain the perplexity of generated text at a certain desired value. Mirostat uses an adaptive topk sampling algorithm to actively tune the value of k which helps maintain the overall perplexity of the text; recall that top-k sampling (Holtzman et al., 2018; Fan et al., 2018) is where the next word is sampled from the top k most probable choices.\nTop-k sampling and several other recent sampling methods are motivated by suppressing an unreliable tail in the probability distribution of trained LMs. Another sampling method is top-p, also known as nucleus sampling, where the next word is chosen from the top x probable choices, where\n1The word mirostat is derived from mirum which is Latin for surprise and stat meaning control. This work was funded in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network and the National Science Foundation Grant CCF-1717530.\nx is the smallest integer such that their cumulative probability mass is at least p (Holtzman et al., 2020). While top-k sampling involves a fixed number of most probable choices, top-p sampling involves a dynamic number of choices based on a fixed p value and shows better statistical and human-evaluated performance. For small values of k and p, these sampling methods unfortunately repeat phrases in generated text. This can be handled by penalizing repetitions and using appropriate temperature values (Keskar et al., 2019) or adding diversity to the generated text (Zhang et al., 2020; Vijayakumar et al., 2018). On the other hand, large values of k and p can lead to incoherent texts similar to pure sampling. Although choosing appropriate values of p or k can avoid repetition and incoherence, this involves ad hoc tuning of parameters. Even for a fixed value of p or k, the generated text can have varying statistical properties.\nIntriguingly, as we demonstrate via Example 1 in Appendix A, small values of a certain perplexity statistic of generated texts called surprise (Def. 1) are closely linked to repetitions and large values of surprise are linked to incoherence. Perplexity is a statistical metric used to evaluate quality of neural text generation, and is closely related to average surprise as shown in Fig. 7 in Appendix A and formalized in Sec. 2. A large-scale human subject experiment by Zhang et al. (2020) showed human-evaluated quality is closely related to the likelihood of the generated text for fixed number of tokens. In particular, reducing perplexity increases quality upto some point before the quality starts dropping. This implies that good control over perplexity of the generated text would give direct control over the quality of generated text (as evaluated by humans). 
Generating texts with an appropriately chosen target perplexity value may therefore maximize the quality of generated text. Ergo mirostat.\nNow we summarize our key contributions. Sec. 3 shows theoretically how cross-entropy, and hence perplexity, grows in top-k and top-p sampling as a function of k and p respectively, which was previously unknown. Sec. 4 introduces mirostat sampling, which outputs texts with a predetermined target perplexity value. Although perplexity may not fully capture the quality of text (Hashimoto et al., 2019), much literature discusses its correlation with quality (Zhang et al., 2020). Hence, our algorithm to control perplexity helps generate high-quality text. Sec. 5.1 experimentally shows that cross-entropy rates in top-k and top-p sampling fluctuate considerably as a function of their input parameters, and hence these methods cannot control the perplexity of output text. Sec. 5.2 shows that repetition is closely related to the perplexity of the generated texts, mostly independently of the sampling method, but slightly dependent on the LM used. Sec. 5.3 experimentally shows that mirostat sampling avoids both the boredom and confusion traps for a wide range of target perplexity values. Sec. 5.4 provides our own experiments with human raters that demonstrate mirostat’s efficacy for fluency, coherence, and overall quality." }, { "heading": "1.1 RELATED WORK", "text": "Sampling from distorted probability distributions Pure sampling from LMs often leads to incoherent text whereas greedy decoding leads to repetitions. Distorting probability distributions, as in top-k, top-p, or temperature sampling, helps improve the quality of generated texts if parameters are properly tuned (Holtzman et al., 2018; Fan et al., 2018; Holtzman et al., 2020). Tuning these methods, however, is ad hoc and does not provide good control over the statistics of the output. Our method uses statistics of previously generated tokens as input to generate the next token, distorting the probability distribution so as to control the overall statistics of the generated text. This ability to control the perplexity of the output is a key advantage of our method over previous work. This, when used with the relation between perplexity and human-evaluated quality observed by Zhang et al. (2020), can yield text that has better quality control.\nControllable text generation Controllable text generation has oft focused on semantics of the output text, as in LMs like CTRL (Keskar et al., 2019), and sampling algorithms like plug-and-play LM (Dathathri et al., 2020) and constrained sentence generation by Metropolis-Hastings (Miao et al., 2019). Contrarily, our approach is purely statistical, guiding the decoder along a desired statistical path that addresses issues with pure sampling and greedy decoding.\nQuality-diversity tradeoff Top-k, top-p, and low-temperature sampling improve the quality of the text, but at the cost of reduced diversity. Applications like question-answering only demand high-quality generation, but open-ended tasks such as story generation demand diversity too. Li et al. (2016); Vijayakumar et al. (2018); Kulikov et al. (2019) propose variants of beam search to induce diversity in generated text. However, Zhang et al. (2020) observe a tradeoff between quality and diversity; they further observe that diversity is closely related to entropy whereas quality is maximized in a certain range of observed likelihood values for fixed-length sentences. Our algorithm tightly controls observed cross-entropy, the observed likelihood per token of generated text. 
Hence, by maintaining the observed cross-entropy in a certain range, we can ensure high-quality text generation.
Repetitions Greedy decoding from LMs often leads to texts with excessive repetitions both at token- and sentence-levels. Several techniques have been proposed to address this. Token loss dynamic reweighting (TLDR) hypothesizes that some tokens are more difficult to learn than others, so that reweighting tokens during learning can balance things to reduce repetitions (Jiang et al., 2020). Keskar et al. (2019) use a repetition penalty in decoding to reduce repetition of tokens. Welleck et al. (2020) suggest the cause for repetitions is a flaw in the training objective itself and use a new objective that gives less probability to unlikely sequences, including texts with high repetitions. Variants of top-k sampling and the repetition penalty in (Keskar et al., 2019) were used before by Foster & White (2007) to reduce repetitions. Here, we demonstrate a near-linear relation between repetitions and observed cross-entropy, and so we directly control repetitions by controlling observed cross-entropy." }, { "heading": "2 SURPRISE, CROSS-ENTROPY, AND PERPLEXITY", "text": "Here we formally define surprise, cross-entropy, and perplexity. For a random variable $X \in \mathcal{X}$ distributed as $P$, the surprisal associated with an instance $x$ of $X$ is defined as $-\log P(x)$ (Han & Kobayashi, 2007). Hence, less probable instances are more surprising than more probable instances. Extending the definition to conditional random variables, we next define the surprise associated with tokens and sentences with respect to generated text for a fixed model distribution $P_M$.
Definition 1. The surprise value of a token $X$ with respect to generated text $X_{<i}$ and model distribution $P_M$ for some fixed model $M$ is $S_M(X \mid X_{<i}) = -\log P_M(X \mid X_{<i})$.
We will soon see this quantity is directly related to perplexity. Now we define the average surprise for a sentence $X$ with $n$ tokens.
Definition 2. For a sentence $X^n = (X_1, \ldots, X_n)$ with $n$ tokens, the surprise rate with respect to a probability distribution $P_M$ for some model $M$ is $S_M(X^n) = -\frac{1}{n} \sum_{i=1}^{n} \log P_M(X_i \mid X_{<i})$.
The cross-entropy of a discrete random variable $X \in \mathcal{X}$ distributed as $P_M$ with respect to a discrete random variable $Y \in \mathcal{Y}$ distributed as $P_N$ such that $\mathcal{Y} \subseteq \mathcal{X}$ is $H(P_N, P_M) = -\sum_{y \in \mathcal{Y}} P_N(y) \log P_M(y) = \mathbb{E}_{P_N}[S_M(Y)]$. The cross-entropy rate of a stochastic process $X = \{X_i\}$, $X_i \in \mathcal{X}$ distributed as $P_M$ with respect to a stochastic process $Y = \{Y_i\}$, $Y_i \in \mathcal{Y}$ distributed as $P_N$ with $\mathcal{Y} \subseteq \mathcal{X}$ is defined as $H(P_N, P_M) = \lim_{n \to \infty} \mathbb{E}_{P_N}[S_M(Y^n)]$, when the limit exists. Further, if $Y^n$ is sampled from $P_N$ and if $P_N$ is a stationary ergodic source, then by the Shannon-McMillan-Breiman theorem (Cover & Thomas, 2006, Thm. 16.8.1), we have $\lim_{n \to \infty} S_M(Y^n) = H(P_N, P_M)$, when the limit exists. Now, the perplexity corresponding to $H(P_N, P_M)$ is simply $\mathrm{PPL}(P_N, P_M) = 2^{H(P_N, P_M)}$, following Brown et al. (1992); Varshney et al. (2020). For experiments, when the text is generated using $P_N$, we approximate $H(P_N, P_M)$ by $S_M(Y^n)$ for a sentence of length $n$. This is because natural language exhibits the stationary ergodic property (Manning & Schutze, 1999). Perplexity denotes how close $P_N$ is to $P_M$. The lower the perplexity, the closer the distributions $P_N$ and $P_M$." }, { "heading": "3 THEORETICAL ANALYSIS OF SAMPLING METHODS", "text": "Here we summarize theoretical results for different sampling methods; details and proofs in App. 
B.
Zipf's law states that the frequency of occurrence of any word in the vocabulary is inversely proportional to its rank in the frequency table (Zipf, 1965; Powers, 1998). More precisely, for a vocabulary of size $N = |\mathcal{V}|$, the frequency of the $i$th most probable word is
$p(i; s, N) = \frac{1}{i^s H_{N,s}}$, (1)
where $s$ is an exponent characterizing the distribution and $H_{N,s} = \sum_{n=1}^{N} \frac{1}{n^s}$ is the $N$th generalized harmonic number. Further, for human languages the exponent $s$ is very close to 1. Hence, when required, we write $s = 1 + \varepsilon$, for some small $\varepsilon > 0$. For all of our theoretical analysis, we assume the sampled words follow Zipf's law.
First we summarize results for top-k sampling. Thm. 1 shows that $S(k)$ grows steeply for small values of $k$, but grows very slowly for large values of $k$. Thm. 2 computes an approximation for $H(P_{M_k}, P_M)$; Fig. 1a shows this approximation is very good. Since $H(P_{M_k}, P_M)$ does not grow much beyond $k = 2000$, it makes sense to tune $k$ between 1 and 2000 to get a desired cross-entropy.
Now we summarize the results for top-p sampling. Thm. 3 proves that $S(p)$ behaves near-linearly in $p$. Further, Thm. 4 provides approximate expressions for $H(P_{M_p}, P_M)$ that show $H(P_{M_p}, P_M)$ grows approximately linearly with $p$; this approximate linearity is also shown in Fig. 1b. This is in contrast to top-k sampling, where $H(P_{M_k}, P_M)$ is highly nonlinear. Temperature is used to suitably distort the original distribution so as to generate samples that avoid problems associated with pure sampling. In particular, lowering the temperature makes the sampling more greedy. For a given temperature $T > 0$, the frequency of the $k$th most probable word in (1) is given by $p(k; s, N, T) = 1/(k^{s/T} H_{N,s/T}) = p(k; s/T, N)$. Hence the effect of temperature in our analysis is captured simply by modifying $s$ to $s/T$." }, { "heading": "4 PERPLEXITY-CONTROLLED TEXT GENERATION", "text": "In this section we propose the mirostat algorithm2 to directly control the cross-entropy rate of the generated text. Mirostat works in two stages for generating each word. First it estimates the value of $s$ assuming words follow Zipf's law, details of which are given in Appendix C. Then, it uses top-k sampling, where $k$ is a function of the estimated $s$ and of the target surprise value of the output text.
2 Code is available at https://github.com/basusourya/mirostat
Alg. 1 details mirostat, which generates texts with a predetermined average surprise value. The input is a target surprise value $\tau$, which in turn initializes a variable $\mu = 2\tau$. Each word is sampled by first estimating $s$ from (30) as $\hat{s}$, then using top-k sampling by approximating $k$ as a function of $\hat{s}$ and $\mu$ according to $H_{N,s} \approx \int_1^N \frac{1}{t^s}\,dt = \frac{1 - N^{1-s}}{s-1}$ and using (3) to get, with $\hat{\varepsilon} = \hat{s} - 1$,
$k = \left( \frac{\hat{\varepsilon}\, 2^{\mu}}{1 - N^{-\hat{\varepsilon}}} \right)^{1/\hat{s}}$. (2)
We initialize $k$ corresponding to surprise value $2\tau$ and not $\tau$ since we are sampling from the top k and not computing the surprise value at $k$ itself. This $2\tau$ initialization works well in practice, and the rest is taken care of by feedback: an error term $e$ is computed as the difference between the observed surprise $S(X)$ of the sampled word $X$ and $\tau$, which is then used to update $\mu$. 
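Putting the two stages together, one decoding step of Alg. 1 can be sketched as follows. This sketch is our own paraphrase for illustration (the released implementation is at the repository above); it measures surprise in bits and assumes the estimated exponent satisfies s-hat > 1, so that the epsilon-hat term is positive.

import torch

def mirostat_step(logits, mu, tau, lr=1.0, m=100, n_vocab=50257):
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    # Stage 1: estimate the Zipf exponent s-hat from the top-m probabilities, Eq. (30).
    p = sorted_probs[:m]
    t = torch.log(torch.arange(2, m + 1).float() / torch.arange(1, m).float())
    b = torch.log(p[:-1] / p[1:])
    s_hat = (t * b).sum() / (t * t).sum()
    # Stage 2: compute k from s-hat and mu as in Eq. (2), then top-k sample.
    eps = s_hat - 1
    k = max(int(((eps * 2 ** mu) / (1 - n_vocab ** (-eps))) ** (1 / s_hat)), 1)
    choice = torch.multinomial(sorted_probs[:k] / sorted_probs[:k].sum(), 1)
    surprise = (-torch.log2(sorted_probs[choice])).item()  # S(X) under the full distribution
    mu = mu - lr * (surprise - tau)                        # feedback: e = S(X) - tau
    return sorted_idx[choice].item(), mu

The feedback on mu is what distinguishes this scheme from plain top-k sampling: k is recomputed at every step from the current mu.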
Note that we can use an alternate algorithm to tune $k$ in Alg. 1 by iterating through the most probable tokens and setting $k$ to correspond to a token that has a suitable amount of surprise. More details on such an algorithm, with experimental results, are given in Appendix E.
Algorithm 1: Mirostat sampling for perplexity control
Target cross-entropy $\tau$, maximum cross-entropy $\mu = 2\tau$, learning rate $\eta$, $m = 100$
while more words are to be generated do
Compute $\hat{s}$ from (30): $\hat{s} = \sum_{i=1}^{m-1} t_i b_i / \sum_{i=1}^{m-1} t_i^2$
Compute $k$ from (2): $k = \left( \hat{\varepsilon}\, 2^{\mu} / (1 - N^{-\hat{\varepsilon}}) \right)^{1/\hat{s}}$
Sample the next word $X$ using top-k sampling
Compute error: $e = S(X) - \tau$
Update $\mu$: $\mu = \mu - \eta e$
end" }, { "heading": "5 EXPERIMENTAL ANALYSIS", "text": "Here we provide experiments for the performance of top-k, top-p, and mirostat sampling. We use the GPT-2 LM with 117M parameters for all experiments (Radford et al., 2019) unless mentioned otherwise, and just refer to it as GPT-2. One main takeaway is that, unlike other approaches, mirostat indeed provides direct control over the observed cross-entropy of the output text." }, { "heading": "5.1 CROSS-ENTROPY RATE FOR DIFFERENT SAMPLING METHODS", "text": "Fig. 2 plots the observed cross-entropy in generated texts versus several input parameters for different sampling methods. For each plot, we generate four output texts of 200 tokens corresponding to each value of the input parameter in each sampling method, with the same context in each case.
Fig. 2a shows the observed surprise values in generated texts versus $k$ in top-k sampling. Note that cross-entropy has a steep increase for small values of $k$ and a relatively slow increase for high values of $k$. Thus, for small values of $k$, cross-entropy is very sensitive to changes in $k$, but, for large values of $k$, cross-entropy hardly changes. Even though we can clearly see the increase in cross-entropy with increase in $k$, it is difficult to control cross-entropy using top-k sampling.
Fig. 2b plots the observed surprise values in generated text versus $p$ in top-p sampling. Observe that cross-entropy grows essentially linearly with increase in $p$, unlike in top-k sampling.
Fig. 2c plots the observed cross-entropy in generated texts versus the target cross-entropy in mirostat sampling, Alg. 1. Observe that mirostat sampling gives very good control over the observed surprise value, with low variance for surprise values less than five. For higher target surprise, Alg. 1 saturates in controlling observed surprise, since the algorithm truncates low-probability words to control the surprise value (and the baseline surprise without any truncation is around five). Thus, to get better control over observed surprise values, we must truncate some more probable words as well, which would reduce the quality of the generated text, hence not considered here.
The observation on the different growth rates of surprise values in top-k and top-p sampling in Fig. 2 is not very intuitive on its own. Our theoretical analysis in Sec. 3 helps explain the nonlinear growth of the cross-entropy rate in top-k sampling and the essentially linear growth of the cross-entropy rate in top-p sampling. Note that our theoretical analysis in Sec. 3 deals with cross-entropy, while our experiments deal with the cross-entropy rate. However, for practical purposes, cross-entropy gives an intuition about the cross-entropy rate in different sampling methods under the stationary ergodic assumption. There is not much fluctuation in the cross-entropy rate in Fig. 2c because we use feedback to control the cross-entropy rate more accurately, which gives accurate results even for a small number of tokens." 
}, { "heading": "5.2 PERPLEXITY AND REPETITIONS", "text": "Here, we present some experimental observations for percentage of repeated tokens across different sampling methods and LMs. In Fig. 3, we generate texts with 200 tokens using different sampling methods and models with varying relevant input parameters such as k, p, or target surprise values, τ .\nWe also consider the percentage of n-gram repetitions for different values of n for a fixed sampling method. We define percentage n-gram repetition as ( 1− number of distinct n-gram tokenstotal number of n-gram tokens ) ×100,where an n-gram token simply means concatenation of n contiguous tokens. Hence, for n = 1, n-gram repetitions capture word-level repetitions, whereas larger values of n capture sentence-level repetitions. For n = 1, we refer to percentage 1-gram repetition simply as percentage repetition.\nIn Fig. 3a, we observe that percentage repetition decreases with increase in cross-entropy and more importantly, for a fixed GPT-2 model, this relation is independent of the sampling method.\nIn Fig. 3b, we use top-k sampling with varying temperature values. We observe that repetitions for different temperature values and k follow the same curve as in Fig. 3a. This implies cross-entropy\ncontrols the percentage repetitions in generated texts. Moreover, it implies that once the model and cross-entropy are fixed, percentage repetition is not affected by the considered sampling methods.\nIn Fig. 3c, we capture n-gram repetitions for varying cross-entropy rate and different values of n. We note from Fig. 3c that for small values of n, the percentage n-gram repetitions drop almost linearly with increase in cross-entropy, whereas for larger values of n, the percentage n-gram repetitions is very close to zero for cross-entropy greater than 3. This indicates sentence-level repetitions disappear after a threshold of cross-entropy whereas word-level repetitions continue to appear for larger values of cross-entropy. Also, note that in human-generated text data, there are often common pronouns and conjunctions that are essential and are often repeated, hence we do not expect a good sampling algorithm to have absolutely zero 1-gram repetitions. But, we do expect a good sampling algorithm to have minimum sentence-level repetitions, which all the sampling seems to show beyond a threshold of cross-entropy, which seems to be around 2.5 for GPT-2.\nFig. 3d plots percent repetition versus cross-entropy for different LMs using top-p sampling for varying values of p. Larger LMs such as GPT-2-XL with 1558M parameters have slightly less repetitions for a fixed value of cross-entropy than smaller LMs such as GPT-2 with 117M parameters.\nWe also provide more experimental results showing near-linear relation between observed crossentropy and percentage repetition in CTRL (Keskar et al., 2019) in Appendix F.\nControlling repetitions using repetition penalty Consider a repetition penalty in top-k sampling to reduce percent repetition, where we multiply negative scores of each repeated word in the vocabulary of GPT-2 by θ ∈ {1, . . . , 20} before computing the corresponding probabilities using softmax. See Keskar et al. (2019) for more details on repetition penalty. Essentially, this reduces the probability of repeated words. In Fig. 4a, we observe that repetition penalty tends to reduce percent repetition for fixed cross-entropy rates. However, we also note in Fig. 
4b that percent repetition does not have a good relation with θ, which makes it difficult to use it in practice with target percent repetition. Note that the cases where percent repetition is close to zero have high-cross entropy rate, which is true for all values of θ. Hence, we conclude that mirostat when used in conjunction with repetition penalty can provide high-quality results. However, we leave this investigation for future work. From these observations, we conclude that to control percentage repetition in generation we must control the cross-entropy of the output text, exactly like in mirostat." }, { "heading": "5.3 BOREDOM AND CONFUSION TRAPS", "text": "Here we show top-k and top-p sampling cannot control output and therefore get trapped into lowquality generation, for a wide range of k and p. We generated 10 samples of 900-token texts on the same context and averaged their observed cross-entropy at various points of generation in Fig. 5, except for the single-sample human-generated text (the tokens following the context in the corpus).\nFig. 5a illustrates that for small values of k and p, both top-k and top-p sampling methods fall into low cross-entropy regions—boredom traps—which results in increase in repetitions as the length of\nthe generated text increases, as illustrated in Sec. 5.2. Hence, lack of control over output statistics in these methods leads to degradation of quality in generated texts for longer texts.\nFig. 5b shows that for high values of k and p in top-k and top-p sampling methods respectively, the observed cross-entropy of the generated texts increases with the length of generated texts. This leads to increase in incoherence in the text as the token index increases—the confusion trap.\nIn Fig. 5c we choose certain values of k and p in an ad hoc manner and generate texts using top-k and top-p sampling methods respectively to observe that for these values of k and p, the generated texts tend to have cross-entropy that seems to converge to a limiting value with increase in text length and not fall into either boredom or confusion traps. We also show how the observed cross-entropy varies with increase in text length in the human-generated text corresponding to the tokens following the context for these experiments. Human-generated text converges to some limiting value of crossentropy when the generated text is long enough and does not fall into either boredom or confusion.\nFinally, Fig. 5d shows cross-entropy for the mirostat-generated texts converges to the target crossentropy within a few tokens and maintains the desired value of cross-entropy for long texts." }, { "heading": "5.4 HUMAN EVALUATIONS", "text": "We evaluated performance using human raters, which further indicated the importance and necessity of controlling cross-entropy rates for generating high-quality texts. We generated 300 tokens using GPT-2 from a fixed context with average cross-entropy rate τ ∈ {2.5, 3, 4, 5} using both mirostat and top-p sampling. We presented these texts and a human-generated 300 word continuation of the context to 43 participants from the University of Illinois at Urbana-Champaign and Indian Institute of Technology, Kanpur. Participants were not informed of the generation process and rated each text on 1 to 7 Likert scales for fluency, coherence, and overall quality. Further, the raters guessed if the text was AI- or human-generated. More details of the experiment are in Appendix D. Fig. 
6 shows texts that had cross-entropy rate τ = 3 received the best ratings by human participants for fluency,\ncoherence, and overall quality. Further, for τ = 3, more than half of raters mistakenly guessed the AI-generated text to be human generated. These results show that controlling the cross-entropy rate helps generate high-quality human-like texts. Further, the sensitivity of these human evaluations to change in cross-entropy rates in Fig. 6 and the fluctuations in cross-entropy rates for fixed input parameters in top-p and top-k sampling from Fig. 2 show that mirostat produces high-quality texts without much ad hoc tuning of input parameters." }, { "heading": "6 CONCLUSION", "text": "We provided a theoretical explanation of how perplexity varies as a function of input parameters in popular top-k and top-p neural text decoding algorithms, showing that log of perplexity varies nearly linearly as a function of p and a highly nonlinearly as a function of k. Building on this analysis, we developed mirostat, a neural text decoding algorithm that directly controls the perplexity of the generated text over a wide range of text length. Notably, for longer texts and certain ranges of input parameters, top-k and top-p sampling fall into boredom and confusion traps which cause low-quality texts; Mirostat avoids both traps. Further, recent large-scale human evaluation of neural generated text suggests that human-judged text quality is maximized for a certain range of perplexity of the output: mirostat provides direct control to stay in that perplexity range. There are also implications for data compression as given in Appendix A.2. As a takeaway, we find that mirostat with target surprise around 3.0, produces varying lengths of high-quality texts with minimal repetitions. This is corroborated in our own experiments with human raters.\nWe further analyze the relation between perplexity and repetitions in text: for fixed model, repetitions vary linearly with perplexity and are independent of the sampling method. We also find that larger LMs have less repetitions for any fixed perplexity. Future work would include theoretical analysis of repetitions, boredom and confusion traps, and convergence properties of mirostat." }, { "heading": "A EXAMPLES", "text": "Here we describe two examples. The first example demonstrates the relation between surprise, repetitions, and incoherence. The second example demonstrates the relation between text compression and cross-entropy.\nA.1 RELATION BETWEEN SURPRISE, REPETITIONS, AND INCOHERENCE\nExample 1. We generate four samples of texts with 200 tokens (which are words or subwords that are part of the vocabulary of the model) using GPT-2 model with 117M parameters Radford et al. (2019). For the first two samples we use top-p sampling with p = 0.4, for the third sample we use top-p sampling with p = 1.0, and for the fourth sample we use our proposed mirostat sampling algorithm. The fifth example is human-generated, taken from the same corpus as the context. The context used is as follows.\nContext:“For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy’s cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley\nPark to break the ciphers used by the Kriegsmarine U-boats in the north Atlantic Ocean. 
He was also interested in the encipherment of speech and to this end spent time at Bell Labs.”" }, { "heading": "1. Observed average surprise value = 1.471 p = 0.4", "text": "Generated text: “Turing’s work on the cryptography of the Kriegsmarine was based on the cipher used by the Kriegsmarine’s famous E-2. The Kriegsmarine code was based on the Cryptographic Algorithm (CAT) of the Royal Navy. It was a new method of solving the problem of encryption, which was a mathematical problem which was the basis of the cryptanalysis of the Japanese naval cipher.\nTuring’s work on the cipher was based on the algorithm used by the Kriegsmarine’s famous E-2. The cipher was based on the algorithm used by the Kriegsmarine’s famous E-2.\nTuring’s work on the cipher was based on the algorithm used by the Kriegsmarine’s famous E-2.\nTuring’s work on the cipher was based on the algorithm used by the Kriegsmarine’s famous E-2.\nTuring’s work on the cipher was based on”\n2. Observed average surprise value = 1.887, p = 0.4\nGenerated text: “Turing was a great mathematician, and he was not a professional cryptographer. But he was a very talented cryptographer. He was also a highly intelligent man. He was a great man who was very well-versed in the law of probability. He was also very well-versed in the mathematical methods of the United States Army. He was also very well-versed in the mathematics of the Russian Academy of Sciences. He was also very well-versed in the mathematical methods of the German Academy of Sciences.\nIn 1943, Shannon traveled to Moscow to meet with the Director of the NSA. He was told that the FBI had obtained information from the NSA about the German secret police. He was told that the FBI had obtained information from the NSA about the German secret police. He was told that the FBI had obtained information about the German secret police.\nThe NSA, on the other hand, had been unable to get information from the NSA about the German secrete”" }, { "heading": "3. Observed average surprise value = 6.342 p = 1.0", "text": "Generated text: “In January of 1942, in an open letter delivered to 8 persons and addressed to Sherwood, Shannon took the opportunity to address skeptics about the little out-of-print $128,000 unreported race as ”cryptanalysis apparatus of extraterrestrial intelligence.” His reply was also reprinted in 6 South Carolina publications (Atlanta Press, Fall Publishing, Miami, New House Press, Yorkshire Press, Sydney Press, Weatherby Press, Winter Press, and Thames & Hudson, to name a few), and eventually accepted as the final subscribed FHAT article. (Summary, were whom)\nBirmingham City Police concludes that [Turing’s][new replace Robert] Symington is not guilty of breach of OKF law. Further, his suggestions for preventing his sentencing Should he reside anywhere near the city he defies extradition to Bavaria closer to dawn minimal? Lylimud Simpson (Ladies & Gentlemen, Lawrence Livermore University Press, Philadelphia): Jim Gilmore and its wife, Eva Civilsky,”" }, { "heading": "4. Observed average surprise value = 1.828 mirostat target surprise = 1.8", "text": "Generated text: “In April 1943, Shannon was assigned to the U.S. Naval Intelligence Center at Naval Air Station Norfolk, Virginia, where he was assigned to work on a project to develop a new cipher for the U.S. Navy’s cryptanalysis program. The project was to be conducted by the U.S. Navy’s cryptanalysis team.\nIn early April, the U.S. 
Navy was informed that Shannon had been assigned to work on an encrypted cipher for the Kriegsmarine’s U-boat. The project was to be conducted by a group of cryptographers from the U.S. Naval Intelligence Center. The project was to be conducted by a group of cryptographers from the U.S. Naval Intelligence Center in a project that could be executed by the cryptanalysts.\nIn early April, Shannon was ordered by the U.S. Navy to provide the cryptanalysts with information on a cipher for the U-boat’s U-boats that could”" }, { "heading": "5. Observed average surprise value = 5.285 human-generated", "text": "Generated text: “Shannon and Turing met at teatime in the cafeteria. Turing showed Shannon his 1936 paper that defined what is now known as the ’Universal Turing machine’. This impressed Shannon, as many of its ideas complemented his own.\nIn 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with ’the problem of separating a signal from interfering noise in communications systems’. In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age. Shannon’s work on cryptography was even more closely related to his later publications on communication theory. At the close of the war”\nFigure 7 shows plots of surprise values against indices of tokens in each of the samples in Ex. 1. The blue plot corresponds to surprise values of each token, while the red plot corresponds to average surprise values over a window of size 10 at each token index. Note that the surprise values drop drastically in Fig. 7a as the repetitions increase in Ex. 7.1. Similarly, in Fig. 7b, we observe a dip in surprise values wherever there is a repetition in Ex. 7.2. Clearly, there is a correlation between small average surprise values and repetitions. Further, in Fig. 7a note that the generating model seems to get trapped into a small surprise repetition region. We call this region of small surprise as boredom trap. We observe that these models tend to fall into a boredom trap for small values of p. Figure 7c corresponds to Ex. 7.3, where we choose p = 1.0 and illustrate that for large values of p, the average surprise value of the generated text tends to increase with the number of generated tokens, which leads to incoherence. We call this region of large surprise a confusion trap. Figure 7d shows surprise values corresponding to Ex. 7.4 which is generated using our proposed sampling algorithm, mirostat. We observe in Fig. 7d that mirostat increases the surprise value when when falling into a boredom trap and, thereby maintaining the average surprise value. By doing so, it not only helps generate high-quality text with predetermined average surprise value, but also helps avoid small surprise repetition regions and large surprise incoherent regions. In Fig. 7e, we show the surprise values in human-generated text that followed this context as shown in Ex. 7.5. We observe that human-generated text has average surprise value that is between values using top-p sampling for p = 0.4 and p = 1.0. 
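The windowed curves in Fig. 7 (the red plots) are straightforward to reproduce once per-token surprise values are available, e.g. from a model's log-probabilities as in Def. 1. A minimal sketch, where the window convention is ours:

import numpy as np

def windowed_surprise(surprises, w=10):
    # Moving average of per-token surprise values over a window of size w,
    # as used for the red curves in Fig. 7.
    s = np.asarray(surprises, dtype=float)
    return np.convolve(s, np.ones(w) / w, mode="valid")

Applied to the five samples above, this reproduces the qualitative behavior discussed here.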
More importantly, human-generated text does not fall into either of the traps.\nA.2 TEXT GENERATION AND COMPRESSION\nHere we will look at texts generated for various target surprise values using mirostat sampling with GPT-2 with 117M. We also observe the well-known relation between cross-entropy and data compression (Cover & Thomas, 2006, Ch. 5), (Gilbert, 1971). In particular, it is known that when the actual probability distribution of the generated text is not known, then the minimum lossless compression rate achievable is equal to the cross-entropy of the assumed distribution, which is the LM here, with respect to the actual unknown distribution, which is obtained from adaptive top-k sampling here.\nExample 2. We generate 200 tokens for different values of target surprise values using the GPT-2 model with 117M parameters to show the quality of the text generated using Alg. 1 for different target surprise values. We also measure the compression rates obtained using arithmetic coding (Witten et al., 1987; Rissanen & Langdon, 1979) with the LM as the probability distribution. So, in a way, mirostat can generate text that has a predetermined minimum lossless compression rate for a given model.\nContext:“For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy’s cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley Park to break the ciphers used by the Kriegsmarine U-boats in the north Atlantic\nOcean. He was also interested in the encipherment of speech and to this end spent time at Bell Labs.”\nTarget surprise value = 1.0 Observed cross-entropy rate = 0.759 Average compressed size (per token) = 0.863 Percentage compression = 97.66% Generated text: “In April 1943, Shannon was assigned to the U.S. Navy’s cryptanalytic service, the Naval Intelligence Service. He was assigned to the cryptanalytic service’s cryptanalysis unit, the Naval Intelligence Laboratory. He was also assigned to the cryptanalytic unit’s cryptanalysis unit, the Naval Intelligence Laboratory.\nIn May 1943, Shannon was assigned to the cryptanalytic unit’s cryptanalysis unit, the Naval Intelligence Laboratory. He was also assigned to the cryptanalytic unit’s cryptanalysis unit, the Naval Intelligence Laboratory.\nIn June 1943, Shannon was assigned to the cryptanalytic unit’s cryptanalysis unit. He was assigned to the cryptanalytic unit’s cryptanalysis unit, the Naval Intelligence Laboratory.\nIn July 1943, Shannon was assigned to the cryptanalytic unit’s cryptanalytic unit. He was assigned to the cryptanalytic unit’s cryptanalytic unit, the Naval Intelligence Laboratory.\nIn August 1943, Shannon was assigned”\nTarget surprise value = 2.0 Observed cross-entropy rate = 1.987 Average compressed size (per token) = 2.07 Percentage compression = 95.13% Generated text: “He was a member of the British Intelligence Service, and was also a member of the British Intelligence Service’s Intelligence Advisory Board.\nIn early 1944, Shannon was assigned to the British Intelligence Service’s Intelligence Advisory Board. He was also assigned the task of deciphering the ciphertext of the British Government Code. He was also assigned to the British Intelligence Service’s Cryptanalytic Service. He was assigned to the British Intelligence Service’s Intelligence Advisory Board at the end of 1944. 
He was assigned to the British Intelligence Service’s Secret Intelligence Service at the end of 1944.\nHe was confirmed to work as the Director of the British Intelligence Service’s Information Service at the end of 1944. He was the first British employee to be appointed as an Intelligence Advisory Officer at the end of the war. He was also confirmed as an Intelligence Advisory Officer to the United Kingdom’s Intelligence Advisory Committee at the end of the war. (The United Kingdom is a member of the European Union.)\nHe was also appointed”\nTarget surprise value = 3.0 Observed cross-entropy rate = 3.002 Average compressed size (per token) = 3.141 Percentage compression = 91.84% Generated text: “After a short stay at Bell Labs, Shannon was commissioned to report back to the Naval Intelligence Services (NISC) to learn more about his research. He made his way over to the Royal Naval Institute. He had already been working with a number of other cryptanalytic researchers, including the mathematician Alan Turing, for over eight years at Bell Labs, but this research was not yet done. He spent the next two months studying the cipher schemes of the various British cryptanalytic systems. The first step was to find out the actual cipher scheme for the cipher used by the HMS Queen Elizabeth on a submarine.\nBy the time Shannon arrived at the Royal Naval Institute he had been working for Patagonia Scientific and Technology, the British cryptanalytic company, for eight years. Patagonia received a call at 9 A.D. on the evening of September 4, 1943, asking to check the cipher system for a cipher used by the HMS Queen Elizabeth, which is written as:”\nTarget surprise value = 4.0 Observed cross-entropy rate = 3.943 Average compressed size (per token) = 4.06 Percentage compression = 89.98% Generated text: “He also began writing code to help with deciphering encrypted communications. During 1943, Shannon kept a diary to help decipher his instructions to the cryptanalytic service. Hearing of the story of Alan Turing sent Shannon excitedly into a panic of his own. He that he had\nonly discovered Turing had made it clear to him that the great deception he had witnessed was due to the German government’s efforts to conceal it. I believe he intended to reveal it. In the early days, the NSA was working with the National Security Agency (NSA) at different times against the Soviet Union in the early 90s and as recently as three years ago against the USSR in the early 1990s. It is believed that the four Russian spies were at the CIA while also an intermediary in the process of doing a disinformation campaign to manipulate the American public to support Bush Sr. by calling for American military intervention in Afghanistan against the Bush, Bush, and Cheney (and the Bush family) and by trying to persuade the public to”\nTarget surprise value = 5.0 Observed cross-entropy rate = 4.929 Average compressed size (per token) = 5.116 Percentage compression = 85.41% Generated text: “Naval Contractors hired Shannon as a shortage specialist at the Navy. So she worked alone with Turing to decipher the British cipher for the northern German submarines. Shannon undertook the work initially on the S.G.F. dual cipher. Shannon spent the whole working days at the Bell Labs lab at Bletchley Park.\nAfter weeks of interrogations Shannon was able to break the great El-Fran jujito with a German accent. 
Shannon then calculated that this hydrocephalic consciousness alone would allow her to think the same words but without the huge amount of writing required to produce such a thing, and with millions of hours on board she was able to write some of the same phrases over a twenty-eight hours workweek.\nOutput: Individualist Hypothesis.\nMiranda (alias Charlotte Reagmire, aka ”The Lady in the Lake”) (1945-2049) (wife of B.G. Lloyd) (Philadelphia, PA)”\nTarget surprise value = 6.0 Observed cross-entropy rate = 5.735 Average compressed size (per token) = 5.834 Percentage compression = 85.55% Generated text: “The CIA trained a small band of cryptanalysts to do the maths again, this time using a UK number generator. A few days after the wars Belgium introduced Bermuda rugby as the appropriate sport for the National Guard. Notwithstanding there being no convention around using English for Rugby at the time, there would be no possible instance in the history of fencing in Europe. Flags for the Hurricanes had evolved recently using a Dutch Italian design called the Crazy Flag. These flag designs come largely of British origin and the date published of its introduction by the Royal Armouries of Lameucers is from 1638. The camouflage was recently added to the new Irish power orange flag. The design is based on the weapons bao mouèret Standard and has two coloured pouches connected to the rifle barrel by two checks along the top of the barrel with protection straps around the barrel to protect the cutouts. NATO hired a team of physicists to do the reconstruction. Readers who want to know more about this new”\nIn Ex. 2 we can see that low value of surprise value results in repetitions and high value of surprise value results in incoherent generated texts. Moderate surprise values result in good quality, coherent text with no repetition. Also, note that the control does not work well when the target surprise value is greater then 5. This is because without any truncation, the average surprise of pure sampled text comes out to be around 5.4. Thus, in order to attain higher values of average surprise, we need to truncate from both sides of the distribution." }, { "heading": "B THEORETICAL RESULTS", "text": "Theorem 1. If words are sampled from the Zipf’s distribution given by (1), then the surprise value of a word with rank k and its rate of increase are given by\nS(k) = s log k + logHN,s, (3) dS(x)\ndx = s x (4)\nrespectively, where S(x) is a continuous function with the same expression as S(k) with a continuous domain.\nProof. The expression of S(k) follows directly from Def. 1 and (1).\nFrom Fig. 8, we note that S(x) is highly sensitive to change in x for small values of x and its sensitivity to x decreases drastically with increase in x. Now, we analyze how cross-entropy varies with k. Let PM be the model distribution. Top-k sampling works by truncating the tail of the distribution PM and samples from the most probable k tokens. Let the truncated distribution be denoted by PMk . In Prop. 1, we provide an expression for H(PMk , PM ).\nProposition 1. Let PM be the model distribution satisfying (1) with vocabulary of size N and let PMk be the model distribution obtained by top-k sampling. Then H(PMk , PM ) is given by\nH(PMk , PM ) = s\nHk,s k∑ i=1 log i is + logHN,s. (5)\nProof. The distribution PM is given by (1) with vocabulary size N , and it is easy to check that the distribution PMk corresponding to top-k sampling is also given by (1) but with vocabulary size k. 
The rest follows directly from the definition of cross-entropy in Sec. 2.\nIt is difficult to get an intuition about the behavior of H(PMk , PM ) directly from (5). Thus, in Thm. 2 we obtain an approximation to H(PMk , PM ) that shows H(PMk , PM ) is essentially of the form c1(1 − c2 ln k+c3k −1 ) + c4 for 0 < < 1 ln 2 , where c1, c2, c3, c4 are some constants. Hence we observe that H(PMk , PM ) grows fast with small values of k and slows down for large values of k.\nTheorem 2. Let PM be the model distribution satisfying (1) with vocabulary of sizeN . and let PMk be the model distribution obtained using top-k sampling. Then, for 1 < s ≤ 1ln 2 , H(PMk , PM ) can be approximated as\nH(PMk , PM ) ≈ b1\nb3\n( 1− b2b3(ln k + 1 )− b1\nb1(b3k − 1)\n) + logHN,s, (6)\nwhere b1 = s ( log 2 21+ + log 3 31+ + 1 (ln 2)3 ( ln 3 + 1 )) , b2 = s ln 2 , and b3 = 1 + 0.7 are constants.\nProof. From Prop. 1 we have H(PMk , PM ) = s\nHk,s\n∑k i=1 log i is + logHN,s. We start by finding\nbounds for the expression ∑k i=1 log i is .\nFirst note that the function log tts is a decreasing function of t for t > e 1 s . Thus, for 1 ≤ s ≤ 1ln 2 , we have the following inequalities\nlog 2\n2s + ∫ k+1 3 log t ts dt ≤ k∑ i=1 log i is ≤ log 2 2s + log 3 3s + ∫ k+1 4 log (t− 1) (t− 1)s dt. (7)\nSolving the above integration for 1 < s ≤ 1ln 2 we get\na1 Hk,s − a2 Hk,s(k + 1)\n( ln (k + 1) + 1 ) ≤ H(PMk , PM )− logHN,s ≤\nb1 Hk,s − b2 Hk,sk\n( ln k + 1 ) ,\n(8) where a1 = s ( log 2 21+ + 1 (ln 2)3 ( ln 3 + 1 )) , a2 = s ln 2 , b1 =\ns (\nlog 2 21+ + log 3 31+ + 1 (ln 2)3\n( ln 3 + 1 )) , b2 = s ln 2 .\nNow, we bound Hk,s as follows. Note that 1ts is a decreasing function in t for t > 0 and s > 0, hence, we have ∫ k+1\n1\n1 ts dt ≤ k∑ i=1 1 is ≤ 1 + ∫ k+1 2\n1\n(t− 1)s dt (9)\n1− (k + 1)−\n≤ k∑ i=1 1 is ≤ 1 + 1− k − . (10)\nWe empirically observed that Hk,s can be approximated well as\nHk,s ≈ 0.7 + 1− k− , (11)\nwhich lies between the bounds found in (10). Moreover, we approximate H(PMk , PM ) using the upper bound obtained in (7) to get\nH(PMk , PM ) ≈ 1\nHk,s\n( b1 −\nb2 k\n( ln k + 1 )) + logHN,s (12)\n≈ b3(1− k −\nb3 )\n( b1 −\nb2 k\n( ln k + 1 )) + logHN,s (13)\n≈ b1 b3\n( 1− b2b3(ln k + 1 )− b1\nb1(b3k − 1)\n) + logHN,s, (14)\nwhere (14) follows by writing 1 (1− k− b3 ) as an infinite series in (13), then simplifying the expression and writing the infinite series back as a fraction.\nIn Thm. 3, we provide approximate expressions for S(p) and dS(p)dp that shows that S(p) grows essentially linearly with p.\nTheorem 3. If words are sampled from the Zipf’s distribution given by (1). If > 0 is a small constant, then S(p) and the rate of change of S(p) with respect to p is given by\nS(p) ≈ (1 + ) b ln 2 HN,sp− (1 + ) log b+ logHN,s (15)\ndS(p) dp ≈ (1 + )HN,s b ln 2 (1 + HN,s p b ), (16)\nwhere b = 1 + 0.7 .\nProof. The cumulative probability p(k) for Zipf’s distribution is given by p(k) = Hk,sHN,s . Using the approximation to Hk,s in (11), we have\np(k) = b− k−\nHN,s , (17)\nwhere b = 1 + 0.7 .\nNow, writing k as a function of p, we get\nk = (b− pHN,s)− 1 . (18)\nUsing (18) in the equation S(x) = s log x+ logHN,s from Thm. 1, we get\nS(p) = −1 + log (b−HN,s p) + logHN,s\n= −1 + ln 2 ln (1− HN,s p b )− 1 + log b+ logHN,s. (19)\nFurther, taking small enough, we can approximate ln (1− HN,s pb ) ≈ − HN,s p b . Thus, we have\nS(p) ≈ (1 + ) b ln 2 HN,sp− (1 + ) log b+ logHN,s. (20)\nNow, dS(p)dp can be directly computed from (19) as\ndS(p)\ndp =\nHN,s(1 + )\nln 2(b−HN,s p) . 
(21)\nFor small enough, we can use the approximation 1 1− HN,s p\nb\n≈ 1 + HN,s pb which gives\ndS(p) dp = HN,s(1 + ) b ln 2\n( 1 + HN,s p\nb\n) . (22)\nIn Fig. 9a, we plot the approximate expression for S(p) obtained in Thm. 3 which is a linear function in p and has a slope approximately 10 for s = 1.07 and N = 50, 000. In Fig. 9b, we plot the approximate expression for dS(p)dp from Thm. 3 which is also a linear function of p. This tells us that even though S(p) can be approximated as essentially a linear function of p, it has a slightly increasing slope. Further, unlike the plot of dS(x)dx in Fig. 8b, which is decreasing with k, dS(p) dp in Fig. 9b has a positive slope. Next, we provide an approximate expression for H(PMp , PM ) showing that it grows near-linearly with p.\nTheorem 4. Let PM be the model distribution satisfying (1) with vocabulary of size N . and let PMk(p) be the model distribution obtained using top-p sampling where k(p) is the minimum value of\nk satisfying 1HN,s ∑k(p) i=1 1 is ≥ p. Then, for 1 < s ≤ 1 ln 2 , H(PMp , PM ) can be approximated as\nH(PMp , PM ) ≈ s\n2 ln 2\n( pHN,s + p 2H2N,s ) + logHN,s. (23)\nProof. The cumulative probability p(k) for (1) can be written as\np(k) = Hk,s HN,s . (24)\nWe approximate ∑k i=1 ln i is ≈ ∫ k 1 ln t ts dt to get\nH(PMp , PM ) ≈ s\npHN,s ln 2 (∫ k 1 ln t ts dt ) + logHN,s, (25)\n= s\npHN,s ln 2\n( 1\n2 − 1 k (ln k + 1 )\n) + logHN,s, (26)\n(27)\nApproximating p(k) from (24) as p(k) = 1HN,s ∫ k 1 1 ts dt, we get\nk = (1− pHN,s)− 1 . (28)\nUsing (28) in (26), we have\nH(PMp , PM ) ≈ s\npHN,s ln 2\n( 1\n2 − 1 k (ln k + 1 )\n) + logHN,s,\n= s\npHN,s ln 2\n( 1\n2 + (1− pHN,s) 2 (ln (1− pHN,s)− 1) ) + logHN,s\n= s\n2pHN,s ln 2 (1 + (1− pHN,s)(ln (1− pHN,s)− 1)) + logHN,s\n= s\n2pHN,s ln 2 (ln (1− pHN,s)− pHN,s ln (1− pHN,s) + pHN,s) + logHN,s\n≈ s 2 ln 2\n( pHN,s + p 2H2N,s ) + logHN,s, (29)\nwhere (29) is obtained by taking the approximation ln (1− pHN,s) ≈ − pHN,s − ( pHN,s) 2\n2 for sufficiently small pHN,s." }, { "heading": "C MMSE ESTIMATION OF ZIPF’S EXPONENT", "text": "We assume words follow Zipf’s distribution (1). Further, we observe the probabilities produced by our LM as {pobs(1), . . . , pobs(i), . . . , pobs(N)}. We use minimum mean-squared error (MMSE) estimation to find the value of s. However, s shows up in p(k; s,N) both as an exponent of k and in HN,s which makes the computation difficult. Hence we estimate by minimizing MSE between logarithm of ratios of subsequent probabilities which eliminates HN,s, i.e. we estimate s as\nŝ = argmin s N−1∑ i=1 (sti − bi)2 = ∑N−1 i=1 tibi∑N−1 i=1 t 2 i , (30)\nwhere ti = log i+1i and bi = log pobs(i) pobs(i+1)\n. When N is large, we estimate s using the most probable m tokens for m around 100 to improve time complexity, which gives a practically good estimate." }, { "heading": "D HUMAN EVALUATIONS: EXPERIMENTAL SETUP", "text": "For human evaluations, 43 participants were shown nine text samples each of 300 words generated by mirostat, human, and top-p sampling shown in Tab. 1, Tab. 2, Tab. 3 in a random but fixed order. These texts were generated using the same context as in Ex. 1, which was also shown to the participants. For mirostat sampling, we simply set the required value of τ ∈ {2.5, 3, 4, 5} to obtain the sample text, but for top-p sampling, we sampled several times with different values of p till we obtained samples that had observed cross-entropy in {2.5, 3, 4, 5} such that comparisons between top-p and mirostat can be made. 
The participants were not shown the method of generation of these texts or any of their statistical properties. Each participant was asked to rate the fluency, coherence, and quality of the generated text on a scale of 1 to 7, where 1 is the worst possible rating and 7 is the best possible rating. Further, they were asked to guess whether the text was generated by an AI algorithm or a human. The participants were also provided with the standard definitions of fluency and coherence. They were asked to rate themselves on their knowledge of English on a scale of 1 to 5, where 1 meant no knowledge of English and 5 meant proficient in English; the participants rated their knowledge of English as 4.3 on average." }, { "heading": "E ALTERNATE ALGORITHMS TO CONTROL PERPLEXITY", "text": "Here we provide two alternative algorithms to Alg. 1 that also control perplexity, and compare their performance.
Mirostat 2.0 Here we provide an alternate algorithm for perplexity control, Alg. 2, which does not depend on the distribution of the underlying LM. In this sense, Alg. 2 controls perplexity in more general sequential generative models than Alg. 1, where the underlying distribution may not be Zipfian. In our work, we choose Alg. 1 since it has only an additional constant time complexity compared to top-k sampling, whereas Alg. 2 has an additional time complexity that depends on the target cross-entropy rate and the vocabulary size, which may vary with different LMs. Moreover, since we are working specifically with languages here, which have Zipfian properties (Zipf, 1965; Piantadosi, 2014; Lestrade, 2017), and since Alg. 1 empirically provides good control on the perplexity of generated text, we choose this algorithm for our human experiments, which also validates the performance of Alg. 1.
Algorithm 2: Mirostat 2, an alternate implementation of mirostat sampling for perplexity control
Target cross-entropy $\tau$, maximum cross-entropy $\mu = 2\tau$, learning rate $\eta$
while more words are to be generated do
Sort the words in descending order of their surprise values
Truncate the words with surprise values greater than $\mu$
Normalize the probabilities of the remaining words
Sample the next word $X$ from the remaining words
Compute error: $e = S(X) - \tau$
Update $\mu$: $\mu = \mu - \eta e$
end
In Fig. 10, we compare Alg. 1 and Alg. 2 in terms of their control of cross-entropy rates and the relation between cross-entropy rate and percentage n-gram repetitions. We find that the two algorithms perform almost identically, both in terms of controlling perplexity and in their ability to control repetition.
Mirostat average Here, we look at Alg. 3, which is identical to Alg. 2 except that, for computing the error term, we use the average surprise of the generated text instead of the surprise of the most recently generated word.
In Fig. 11, we find that Alg. 3 performs poorly compared to Alg. 1, both in terms of controlling perplexity and repetitions. This is interesting since, intuitively, Alg. 3 should control the observed cross-entropy rate well, as we compute the error from the observed cross-entropy rate itself instead of the surprise value of the most recent word. Our understanding of why Alg. 3 does not control the cross-entropy rate well is that the surprise values of words change very abruptly during generation, whereas the average value is rather smooth and does not give good feedback on the current state of the generation process, which makes it difficult to control the cross-entropy rate. 
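For concreteness, the shared truncate-and-update step of Algs. 2 and 3 can be sketched as below; this is our illustration, not the released code, and the fallback for an empty truncation set is our own choice.

import torch

def mirostat2_step(logits, mu, tau, lr=1.0):
    probs = torch.softmax(logits, dim=-1)
    surprises = -torch.log2(probs)
    mask = surprises <= mu                   # truncate words whose surprise exceeds mu
    if not mask.any():
        mask = probs == probs.max()          # fallback: keep the single most probable word
    trunc = torch.where(mask, probs, torch.zeros_like(probs))
    choice = torch.multinomial(trunc / trunc.sum(), 1)
    mu = mu - lr * (surprises[choice].item() - tau)  # Alg. 2 error: e = S(X) - tau
    return choice.item(), mu

Alg. 3 is obtained by replacing the error in the last update with the running average surprise of the generated text minus tau, which, as discussed above, reacts too slowly to the abrupt per-word changes in surprise.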
However, we leave any theoretical result on the performance of this algorithm to future work.
Algorithm 3: Mirostat average, an alternate implementation of mirostat sampling for perplexity control
Target cross-entropy $\tau$, maximum cross-entropy $\mu = 2\tau$, learning rate $\eta$
while more words are to be generated do
Sort the words in descending order of their surprise values
Truncate the words with surprise values greater than $\mu$
Normalize the probabilities of the remaining words
Sample the next word $X$ from the remaining words
Compute error: $e = \text{observed cross-entropy rate} - \tau$
Update $\mu$: $\mu = \mu - \eta e$
end" }, { "heading": "F REPETITION ANALYSIS FOR CTRL", "text": "Here we compare different sampling methods used with CTRL (Keskar et al., 2019). We observe in Fig. 12 that there is a near-linear relation between percentage repetition and observed cross-entropy rates, similar to GPT-2. Moreover, for CTRL, 6-gram repetitions drop much more rapidly than for GPT-2. CTRL also has an offset in the cross-entropy value from where the repetition starts to drop, which could be due to its vocabulary, training dataset, or training process differing from those of GPT-2; CTRL is meant to provide better semantic control compared to GPT-2." } ]
2021
MIROSTAT: A NEURAL TEXT DECODING ALGORITHM THAT DIRECTLY CONTROLS PERPLEXITY
SP:a3f2c5b8bc8bfa03ad589b322c82ac84bca605b2
[ "Universal function representation guarantee requires either highly discontinuous mappings or a highly dimensional latent space. For this reason the authors propose a new parametric family of aggregation functions, called LAF (for learning aggregation functions). It can be seen as a smooth version of the class of functions that are shown in DeepSets. LAF aggregator could learn all standard aggregation functions. Moreover in experiments the autors shows that LAF surpasses other aggregation methods." ]
Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed by using fixed aggregation functions such as sum or maximum. However, recent results showed that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several extensively used aggregators (such as average, sum, maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Liò", "Petar" ], "title": "Veličković. Principal neighbourhood aggregation for graph nets", "venue": "arXiv preprint arXiv:2004.05718,", "year": 2020 }, { "authors": [ "Zoubin Ghahramani", "Katherine A Heller" ], "title": "Bayesian sets. In Advances in neural information processing", "venue": null, "year": 2006 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Maximilian Ilse", "Jakub M. Tomczak", "Max Welling" ], "title": "Attention-based deep multiple instance learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam R. Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vitalik Melnikov", "Eyke Hüllermeier" ], "title": "Learning to aggregate using uninorms. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2016", "venue": "Riva del Garda, Italy,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Radu Bogdan Rusu", "Steve Cousins" ], "title": "3d is here: Point cloud library (pcl)", "venue": "In 2011 IEEE international conference on robotics and automation,", "year": 2011 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alessandro Tibo", "Paolo Frasconi", "Manfred Jaeger" ], "title": "A network architecture for multi-multiinstance learning. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2017", "venue": "Skopje, Macedonia,", "year": 1249 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR’18,", "year": 2018 }, { "authors": [ "Edward Wagstaff", "Fabian B Fuchs", "Martin Engelcke", "Ingmar Posner", "Michael Osborne" ], "title": "On the limitations of representing functions on sets", "venue": null, "year": 1901 }, { "authors": [ "Zhirong Wu", "Shuran Song", "Aditya Khosla", "Fisher Yu", "Linguang Zhang", "Xiaoou Tang", "Jianxiong Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ronald R Yager", "Alexander Rybalov" ], "title": "Uninorm aggregation operators", "venue": "Fuzzy sets and systems,", "year": 1996 }, { "authors": [ "Manzil Zaheer", "Satwik Kottur", "Siamak Ravanbakhsh", "Barnabas Poczos", "Russ R Salakhutdinov", "Alexander J Smola" ], "title": "URL http://papers.nips.cc/paper/ 6931-deep-sets.pdf", "venue": "Deep sets", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The need to aggregate representations is ubiquitous in deep learning. Some recent examples include max-over-time pooling used in convolutional networks for sequence classification (Kim, 2014), average pooling of neighbors in graph convolutional networks (Kipf & Welling, 2017), max-pooling in Deep Sets (Zaheer et al., 2017), in (generalized) multi-instance learning (Tibo et al., 2017) and in GraphSAGE (Hamilton et al., 2017). In all the above cases (with the exception of LSTM-pooling in GraphSAGE) the aggregation function is predefined, i.e., not tunable, which may be in general a disadvantage (Ilse et al., 2018). Sum-based aggregation has been advocated based on theoretical findings showing the permutation invariant functions can be sum-decomposed (Zaheer et al., 2017; Xu et al., 2019). However, recent results (Wagstaff et al., 2019) showed that this universal function representation guarantee requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. This suggests that learning set functions that are accurate on sets of large cardinality is difficult.\nInspired by previous work on learning uninorms (Melnikov & Hüllermeier, 2016), we propose a new parametric family of aggregation functions that we call LAF, for learning aggregation functions. A single LAF unit can approximate standard aggregators like sum, max or mean as well as model intermediate behaviours (possibly different in different areas of the space). In addition, LAF layers with multiple aggregation units can approximate higher order moments of distributions like variance, skewness or kurtosis. In contrast, other authors (Corso et al., 2020) suggest to employ a predefined library of elementary aggregators to be combined. Since LAF can represent sums, it can be seen as a smooth version of the class of functions that are shown in Zaheer et al. (2017) to enjoy universality results in representing set functions. The hope is that being smoother, LAF is more easily learnable. Our empirical findings show that this can be actually the case, especially when asking the model to generalize over large sets.\nIn particular, in this paper we offer an extensive experimental analysis showing that:\n• LAF layers can learn a wide range of aggregators (including higher-order moments) on sets of scalars without background knowledge on the nature of the aggregation task\n• LAF layers on the top of traditional layers can learn the same wide range of aggregators on sets of high dimensional vectors (MNIST images)\n• LAF outperforms state-of-the-art set learning methods such as DeepSets and PNA on realworld problems involving point clouds and text concept set retrieval.\n• LAF performs comparably to PNA on random graph generation tasks, outperforming several graph neural networks architectures including GAT (Veličković et al., 2018) and GIN (Xu et al., 2019)\nThe rest of this work is structured as follows. In Section 2 we define the LAF framework and show how appropriate parametrizations of LAF allow to represent a wide range of popular aggregation functions. In Section 3 we discuss some relevant related work. Section 4 reports synthetic and realworld experiments showing the advantages of LAF over (sets of) predifined aggregators. Finally, conclusions and pointers to future work are discussed in Section 5." }, { "heading": "2 THE LEARNING AGGREGATION FUNCTION FRAMEWORK", "text": "We use x = {x1, . . . 
, xN} to denote finite multisets of real numbers xi ∈ R. Note that directly taking x to be a multiset, not a vector, means that there is no need to define properties like exchangeability or permutation equivariance for operations on x. An aggregation function agg is any function that returns, for any multiset x of arbitrary cardinality N ∈ N, a value agg(x) ∈ R. Standard aggregation functions like mean and max can be understood as (normalized) Lp-norms. We therefore build our parametric LAF aggregator around generalized Lp-norms of the form

La,b(x) := (∑i xi^b)^a (a, b ≥ 0). (1)

La,b is invariant under the addition of zeros: La,b(x) = La,b(x ∪ 0), where 0 is a multiset of zeros of arbitrary cardinality. In order to also enable aggregations that can represent conjunctive behavior such as min, we make symmetric use of aggregators of the multisets 1 − x := {1 − xi | xi ∈ x}. For La,b(1 − x) to be a well-behaved, dual version of La,b(x), the values in x need to lie in the range [0, 1]. We therefore restrict the following definition of our learnable aggregation function to sets x whose elements are in [0, 1]:

LAF(x) := (αLa,b(x) + βLc,d(1 − x)) / (γLe,f(x) + δLg,h(1 − x)), (2)

defined by tunable parameters a, . . . , h ≥ 0 and α, . . . , δ ∈ R. In cases where sets need to be aggregated whose elements are not already bounded by [0, 1], we apply a sigmoid function to the set elements prior to aggregation.

Table 1 shows how a number of important aggregation functions are special cases of LAF (for values in [0, 1]). We make repeated use of the fact that L0,1 returns the constant 1. For max and min, LAF only provides an asymptotic approximation in the limit of specific function parameters (as indicated in the limits column of Table 1). In most cases, the parameterization of LAF for the functions in Table 1 will not be unique. Being able to encode the powers of moments implies that, e.g., the variance of x can be expressed as the difference (1/N) ∑i xi^2 − ((1/N) ∑i xi)^2 of two LAF aggregators.

Since LAF includes sum-aggregation, we can adapt the results of Zaheer et al. (2017) and Wagstaff et al. (2019) on the theoretical universality of sum-aggregation as follows.

Proposition 1 Let X ⊂ R be countable, and f a function defined on finite multisets with elements from X. Then there exist functions φ : X → [0, 1], ρ : R → R, and a parameterization of LAF, such that f(x) = ρ(LAF(φx; α, β, γ, δ, a, b, c, d)), where φx is the multiset {φ(x) | x ∈ x}.

A proof in Wagstaff et al. (2019) for a very similar proposition used a mapping from X into the reals. Our requirement that LAF inputs must be in [0, 1] requires a modification of the proof (contained in the supplementary material), which for the definition of φ relies on a randomized construction. Proposition 1 shows that we retain the theoretical universality guarantees of Zaheer et al. (2017), while enabling a wider range of solutions based on continuous encoding and decoding functions.

It should be emphasized at this point that the primary purpose of LAF is not to provide a uniform representation of different standard aggregators as displayed in Table 1, but to enable a continuum of intermediate and hybrid aggregators. Figure 1 shows the graphs of 4 different randomly generated LAF functions over the unit square [0, 1] × [0, 1], i.e., evaluated over sets of size 2. Parameters α, . . . , δ were randomly sampled in the interval [0, 1]; parameters b, d, f, h were randomly sampled from the integers 0, . . . , 5, and a, c, e, g were obtained as 1/i with i a random integer from 1, . . . , 5.
The figure illustrates the rich repertoire of aggregation functions with different qualitative behaviors already for non-extreme parameter values." }, { "heading": "2.1 LAF ARCHITECTURE", "text": "LAF can be easily used as a module of a larger architecture suitable for learning on sets. Several LAF units can be combined, as shown in Figure 2, to capture different aspects of the input set, which can in general be a set of vectors x = {x1, . . . , xN} where xi ∈ Rd. Note that multiple aggregators are also used in related frameworks such as DeepSets (Zaheer et al., 2017) or graph neural networks (Veličković et al., 2018; Corso et al., 2020). A module with r LAF units takes as input d-dimensional vectors and produces a vector of size r × d as output. Each LAF unit performs an element-wise aggregation of the vectors in the set, such that Lk,j = LAF({x1,j , . . . , xN,j}; αk, βk, γk, δk, ak, . . . , hk) for k = 1, . . . , r and j = 1, . . . , d. The output vector can then be fed into the next layer." }, { "heading": "3 RELATED WORK", "text": "Several studies address the problem of aggregating data over sets. Sum-decomposition strategies have been used in (Zaheer et al., 2017) for point cloud classification and set expansion tasks, and in (Santoro et al., 2017) for question answering and dynamic physical systems computation. Max, sum and average are standard aggregation functions for node neighborhoods in graph neural networks (Hamilton et al., 2017; Kipf & Welling, 2017; Xu et al., 2019; Veličković et al., 2018). Zaheer et al. (2017) first proved universal representation results for these standard aggregators when combined with learned mappings over inputs and results of the aggregation. However, Wagstaff et al. (2019) showed that these universality results are of little practical use, as they either require highly discontinuous mappings that would be extremely difficult to learn, or a latent dimension that is at least the size of the maximum number of input elements.

Uninorms (Yager & Rybalov, 1996) are a class of aggregation functions in fuzzy logic that can behave in a conjunctive, disjunctive or averaging manner depending on a parameter called the neutral element. Melnikov & Hüllermeier (2016) proposed to learn fuzzy aggregators by adjusting these learnable parameters, showing promising results on combining reviewers' scores on papers into an overall decision of acceptance or rejection. Despite the advantage of incorporating different behaviours in one single function, uninorms present discontinuities in the regions between aggregators, making them unsuitable for use in fully differentiable frameworks. Furthermore, the range of possible behaviours is restricted to those commonly used in the context of fuzzy logic.

The need for considering multiple candidate aggregators is advocated in a very recent work that was developed in parallel with our framework (Corso et al., 2020). The resulting architecture, termed Principal Neighborhood Aggregation (PNA), combines multiple standard aggregators, including most of the ones we consider in the LAF framework, adjusting their outputs with degree scalers. However, the underlying philosophy is rather different. PNA aims at learning to select the appropriate aggregator(s) from a pool of candidates, while LAF explores a continuous space of aggregators that includes standard ones as extreme cases.
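To make the preceding definitions concrete before moving on, here is a minimal NumPy sketch of a single LAF unit (Eq. 2). This is an illustration only, not the authors' implementation: the eps constant (guarding against a zero denominator) and the demo parameter values are our own assumptions.

```python
import numpy as np

def L(x, a, b):
    # Generalized norm L_{a,b}(x) = (sum_i x_i^b)^a from Eq. (1); x_i in [0, 1].
    return np.sum(x ** b) ** a

def laf(x, alpha, beta, gamma, delta, a, b, c, d, e, f, g, h, eps=1e-8):
    # Eq. (2): a ratio of two linear combinations of generalized norms,
    # evaluated on the multiset x and on its "dual" 1 - x.
    num = alpha * L(x, a, b) + beta * L(1.0 - x, c, d)
    den = gamma * L(x, e, f) + delta * L(1.0 - x, g, h)
    return num / (den + eps)

x = np.array([0.2, 0.7, 0.9])
# Sum: numerator L_{1,1}(x); denominator pinned to 1 via L_{0,1} (gamma=1, e=0, f=1).
print(laf(x, alpha=1, beta=0, gamma=1, delta=0, a=1, b=1, c=1, d=1, e=0, f=1, g=1, h=1))  # ~1.8
# Mean: same numerator, denominator L_{1,0}(x) = N (gamma=1, e=1, f=0).
print(laf(x, alpha=1, beta=0, gamma=1, delta=0, a=1, b=1, c=1, d=1, e=1, f=0, g=1, h=1))  # ~0.6
```

The mean example works because L_{1,0} simply counts the elements, matching the Table 1 recipe of pinning the denominator with degenerate norms.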
Our experimental evaluation shows that PNA has trouble learning aggregators that generalize over set sizes, despite having them in the pool of candidates, likely because of the quasi-combinatorial structure of its search space. On the other hand, LAF can successfully learn even the higher-moment aggregators and consistently outperforms PNA.

Closely connected, but somewhat complementary to aggregation operators, are attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017). They have been explored to manipulate set data in Lee et al. (2019) and in the context of multi-instance learning (Ilse et al., 2018). Attention operates at the level of set elements, and aims at a transformation (weighting) of their representations so as to optimize a subsequent weighted sum-aggregation. While the objectives of attention-based frameworks and LAF partially overlap, they are functionally quite different. Exploring combinations of LAF with attention mechanisms is a possible subject of future work." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present and discuss experimental results showing the potential of the LAF framework on both synthetic and real-world tasks.¹ Synthetic experiments are aimed at showing the ability of LAF to learn a wide range of aggregators and its ability to generalize over set sizes (i.e., having test sets whose cardinality exceeds the cardinality of the training sets), something that alternative architectures based on predefined aggregators fail to achieve. We use DeepSets, PNA, and LSTM as representatives of these architectures. The LSTM architecture corresponds to a version of DeepSets where the aggregation function is replaced by an LSTM layer. Experiments on diverse tasks including point cloud classification, text concept set retrieval and graph property prediction are aimed at showing the potential of the framework on real-world applications.

¹ The source code is available in the supplementary material." }, { "heading": "4.1 EXPERIMENTS ON SCALARS", "text": "This section shows the capacity of the LAF framework to learn simple and complex aggregation functions where the constituents of the sets are simple numerical values. In this setting we consider sets made of scalar integer values. The training set is constructed as follows: for each set, we initially sample its cardinality K from a uniform distribution taking values in {2, . . . , M}, and then we uniformly sample K integers in {0, . . . , 9}. For the training set we use M = 10. We construct several test sets for different values of M (M = 5, 10, 15, 20, 25, 30, 35, 40, 45, 50). This implies that models need to generalize to larger set sizes. In contrast to the training set, each test set is constructed in order to diversify the target labels it contains, so as to avoid degenerate behaviours for large set sizes (e.g., maximum constantly equal to 9). Each synthetic dataset is composed of 100,000 sets for training, 20,000 sets for validation and 100,000 for testing.

The number of aggregation units is set as follows. The model contains nine LAF (Equation 2) units, whose parameters {ak, . . . , hk}, k = 1, . . . , 9, are initialized by uniform sampling in [0, 1], as those parameters must be positive, whereas the coefficients {α, . . . , δ} are initialized with a Gaussian distribution with zero mean and standard deviation 0.01, so as to also cover negative values. The positivity constraint for parameters {a, b, . . . , h} is enforced by projection during the optimization process.
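As a sketch of the initialization and projection step just described (our own PyTorch rendering; the tensor layout and the in-place clamp are our assumptions, not code from the paper):

```python
import torch

r = 9  # number of LAF units, as in the experiments
# Exponents a_k, ..., h_k ~ U[0, 1]; they must remain non-negative.
exponents = torch.nn.Parameter(torch.rand(r, 8))
# Coefficients alpha_k, ..., delta_k ~ N(0, 0.01^2); may be negative.
coefficients = torch.nn.Parameter(torch.randn(r, 4) * 0.01)

def project_nonnegative_(p):
    # Projected gradient: after optimizer.step(), map the exponent
    # parameters back onto the feasible set {a, ..., h >= 0}.
    with torch.no_grad():
        p.clamp_(min=0.0)

# inside the training loop (optimizer assumed):
#   loss.backward(); optimizer.step(); project_nonnegative_(exponents)
```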
The remaining parameters can take on negative values. DeepSets also uses nine units: three max units, three sum units, and three mean units, and PNA uses seven units: mean, max, sum, standard deviation, variance, skewness and kurtosis. Preliminary experiments showed that expanding the set of aggregators for PNA with higher-order moments only leads to worse performance. Each set of integers is fed into an embedding layer (followed by a sigmoid) before performing the aggregation function. DeepSets and PNA do need an embedding layer (otherwise they would have no parameters to be tuned). Although LAF does not need an embedding layer, we used it in all models to make the comparison more uniform. The architecture details are reported in the supplementary material. We use the Mean Absolute Error (MAE) as a loss function to calculate the prediction error.

Figure 3 shows the trend of the MAE error of the compared methods for increasing test set sizes, for different types of target aggregators. As expected, DeepSets manages to learn the identity function and thus correctly models aggregators like sum, max and mean. Even though LAF needs to adjust its parameters in order to properly aggregate the data, its performance is competitive with that of DeepSets. When moving to more complex aggregators like inverse count, median or moments of different orders, DeepSets fails to learn the latent representation. On the other hand, the performance of LAF is very stable for growing set sizes. While having in principle at its disposal most of the target aggregators (including higher-order moments), PNA badly overfits on the cardinality of the sets in the training set in all cases (recall that the training set contains sets of cardinality at most 10). The reason why LAF substantially outperforms PNA on large set sizes could be explained in terms of a greater flexibility to adapt to the learnt representation. Indeed, the LAF parameters can adjust the LAF function to be compliant with the latent representation even if the input mapping fails to learn the identity. On the other hand, having a collection of fixed, hard-coded aggregators, PNA needs to be able to both learn the identity mapping and select the correct aggregator among the candidates. Finally, LSTM exhibits generally poor results when compared to the other methods, particularly in the case of the count and the sum." }, { "heading": "4.2 MNIST DIGITS", "text": "In this section, we modify the previous experimental setting to process MNIST images of digits. The dataset is the same as in the experiment on scalars, but integers are replaced by randomly sampled MNIST images of the same digits. Instances for the training and test sets are drawn from the MNIST training and test sets, respectively. This experiment aims to demonstrate the ability of LAF to learn from more complex representations of the data by plugging it into end-to-end differentiable architectures. In contrast to the model of the previous section, here we use three dense layers for learning picture representations before performing the aggregation function. The architecture details are reported in the supplementary material.
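As a side note before the MNIST comparison: the moment targets used in these benchmarks connect back to the observation in Section 2 that the variance can be written as a difference of two LAF aggregators. A self-contained check (our own worked example, not from the paper):

```python
import numpy as np

def L(x, a, b):  # L_{a,b}(x) = (sum_i x_i^b)^a, as in Eq. (1)
    return np.sum(x ** b) ** a

x = np.array([0.2, 0.7, 0.9])
m2 = L(x, 1, 2) / L(x, 1, 0)      # one LAF unit: mean of squares, (sum x^2)/N
m1_sq = L(x, 2, 1) / L(x, 2, 0)   # another LAF unit: squared mean, (sum x)^2/N^2
print(m2 - m1_sq, np.var(x))      # both ~0.0867
```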
Figure 4 shows the comparison of LAF, DeepSets, PNA, and LSTM in this setting. Results are quite similar to those achieved in the scalar setting, indicating that LAF is capable of effectively backpropagating information so as to drive the learning of an appropriate latent representation, while DeepSets, PNA, and LSTM suffer from the same problems seen in aggregating scalars.

Furthermore, Figure 5 provides a qualitative evaluation of the predictions of the LAF, DeepSets, and PNA methods on a representative subset of the target aggregators. The images illustrate the correlation between the true labels and the predictions. LAF predictions are distributed over the diagonal line, with no clear bias. On the other hand, DeepSets and PNA perform generally worse than LAF, exhibiting higher variances. In particular, for inverse count and kurtosis, DeepSets and PNA predictions are condensed in a specific area, suggesting overfitting on the training set." }, { "heading": "4.3 POINT CLOUD", "text": "In order to evaluate LAF on real-world data, we consider point cloud classification, a prototypical task for set-wise prediction. We therefore run experimental comparisons on the ModelNet40 (Wu et al., 2015) dataset, which consists of 9,843 training and 2,468 test point clouds of objects distributed over 40 classes. The dataset is preprocessed following the same procedure described by Zaheer et al. (2017). We create point clouds of 100 and 1,000 three-dimensional points by adopting the point-cloud library's sampling routine developed by Rusu & Cousins (2011) and normalizing each set of points to have zero mean (along each axis) and unit (global) variance. We refer to the two datasets as P100 and P1000. For all the settings, we consider the same architecture and hyper-parameters as the DeepSets permutation-invariant model described by Zaheer et al. (2017). For LAF, we replace the original aggregation function (max) used in DeepSets with 10 LAF units, while for PNA we use the concatenation of max, min, mean, and standard deviation, as proposed by the authors. For PNA we do not consider any scaler, as the cardinalities of the sets are fixed.

Results in Table 2 show that LAF produces an advantage on the lower-resolution dataset (i.e., on P100), while it obtains comparable (and slightly more stable) performance on the higher-resolution one (i.e., on P1000). These results suggest that having predefined aggregators is not necessarily an optimal choice in real-world cases, and that the flexibility of LAF in modeling diverse aggregation functions can boost performance and stability." }, { "heading": "4.4 SET EXPANSION", "text": "Following the experimental setup of DeepSets, we also considered the set expansion task. In this task the aim is to augment a set of objects of the same class with other similar objects, as explained in (Zaheer et al., 2017). The model learns to predict a score for an object given a query set, and to decide whether to add the object to the existing set. Specifically, Zaheer et al. (2017) consider the application of set expansion to text concept retrieval: the idea is to retrieve words that belong to a particular concept, given as input a set of words sharing that concept. We employ the same model and hyper-parameters as the original publication, replacing the sum-decomposition aggregation with LAF units for our method and with the min, max, mean, and standard deviation aggregators for PNA.

We trained our model on sets constructed from vocabularies of different sizes, namely LDA-1K, LDA-3K and LDA-5K.
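For reference, the retrieval metrics reported next can all be computed from the 1-based rank assigned to each held-out item; the sketch below uses the standard definitions (our own code, with made-up example ranks):

```python
def recall_at_k(ranks, k):
    # ranks: 1-based rank of the correct item for each query set
    return sum(r <= k for r in ranks) / len(ranks)

def median_rank(ranks):
    s = sorted(ranks)
    return s[len(s) // 2]

def mrr(ranks):
    # mean reciprocal rank
    return sum(1.0 / r for r in ranks) / len(ranks)

print(recall_at_k([1, 3, 20], k=10), median_rank([1, 3, 20]), round(mrr([1, 3, 20]), 3))
# 0.666..., 3, 0.461
```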
Table 3 shows the results of LAF, DeepSets and PNA on different evaluation metrics. We report the retrieval metrics recall@K, median rank and mean reciprocal rank. We also report the results of the other methods the authors compared to in the original paper; more details on these methods can be found in the original publication. Briefly, Random samples a word uniformly from the vocabulary; Bayes Set is the method of Ghahramani & Heller (2006); w2v-Near computes the nearest neighbors in the word2vec (Mikolov et al., 2013) space; NN-max uses a similar architecture to DeepSets but uses max pooling to compute the set feature, as opposed to sum pooling; NN-max-con uses max pooling on set elements but concatenates this pooled representation with that of the query for a final set feature; NN-sum-con is similar to NN-max-con but uses sum pooling followed by concatenation with the query representation. For the sake of fairness, we have rerun DeepSets using the current implementation from the authors (indicated as DeepSets∗ in Table 3), which exhibits better results than the ones reported in the original paper. Nonetheless, LAF outperforms all other methods in most cases, especially on LDA-3K and LDA-5K." }, { "heading": "4.5 MULTI-TASK GRAPH PROPERTIES", "text": "Corso et al. (2020) define a benchmark consisting of 6 classical graph theory tasks on artificially generated graphs from a wide range of popular graph types like Erdos-Renyi, Barabasi-Albert or star-shaped graphs. Three of the tasks are defined on nodes, while the other three on whole graphs. The node tasks are the single-source shortest-path lengths (N1), the eccentricity (N2) and the Laplacian features (N3). The graph tasks are graph connectivity (G1), diameter (G2), and the spectral radius (G3). For more details about the experimental settings please refer to Corso et al. (2020).

We compare LAF against PNA by simply replacing the original PNA aggregators and scalers with 100 LAF units (see Equation 2). Table 4 shows that, although these datasets were designed to highlight the features of the PNA architecture, which outperforms a wide range of alternative graph neural network approaches, LAF produces competitive results, outperforming state-of-the-art GNN approaches like GIN (Xu et al., 2019), GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), and even improving over PNA on spectral radius prediction." }, { "heading": "5 CONCLUSIONS", "text": "The theoretical underpinnings for sum aggregation as a universal framework for defining set functions do not necessarily provide a template for practical solutions. We therefore introduced LAF, a framework for learning aggregation functions that makes use of a parametric aggregator to effectively explore a rich space of possible aggregations. LAF defines a new class of aggregation functions, which includes widely used aggregators as special cases, and also has the ability to learn complex functions such as higher-order moments. We empirically showed the generalization ability of our method on synthetic settings as well as real-world datasets, providing comparisons with state-of-the-art sum-decomposition approaches and recently introduced techniques. The flexibility of our model is a crucial aspect for potential practical use in many deep learning architectures, due to its ability to be easily plugged into and learned in end-to-end architectures.
The portability of LAF opens a new range of possible applications for aggregation functions in machine learning methods, and future research in this direction can enhance the expressivity of many architectures and models that deal with unstructured data." } ]
2020
null
SP:2434dec4e18251ecfe3d6a7838881e799aad8b4f
[ "This paper mainly solves the instability issue on the spectral normalization for generative adversarial networks (SN-GANs) when training with high dimensional data. To address this, the authors present a preconditioning layer (PC-layer) with two different ways (i.e., FPC and APC) to perform a low-degree polynomial preconditioning. Experiments on LSUN 256x256 training data demonstrate that FPC and APC are able to control the strength of preconditioning. My detailed comments are as follows." ]
One of the major challenges when training generative adversarial nets (GANs) is instability. To address this instability, spectral normalization (SN) has proven remarkably successful. However, SN-GAN still suffers from training instabilities, especially when working with higher-dimensional data. We find that those instabilities are accompanied by large condition numbers of the discriminator weight matrices. To improve training stability, we study common linear-algebra practice and employ preconditioning. Specifically, we introduce a preconditioning layer (PC-layer) that performs a low-degree polynomial preconditioning. We use this PC-layer in two ways: 1) fixed preconditioning (FPC) adds a fixed PC-layer to all layers; and 2) adaptive preconditioning (APC) adaptively controls the strength of preconditioning. Empirically, we show that FPC and APC stabilize training of unconditional GANs using classical architectures. On LSUN 256 × 256 data, APC improves FID scores by around 5 points over baselines.
[]
[ { "authors": [ "Jonas Adler", "Sebastian Lunz" ], "title": "Banach wasserstein gan", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Léon Bottou" ], "title": "Towards principled methods for training generative adversarial networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (GANs)", "venue": null, "year": 2017 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": null, "year": 1904 }, { "authors": [ "David Balduzzi", "Sebastien Racaniere", "James Martens", "Jakob Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": "arXiv preprint arXiv:1802.05642,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Yann LeCun" ], "title": "Scaling learning algorithms towards AI", "venue": "In Large Scale Kernel Machines. MIT Press,", "year": 2007 }, { "authors": [ "Hugo Berard", "Gauthier Gidel", "Amjad Almahairi", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A closer look at the optimization landscapes of generative adversarial networks", "venue": null, "year": 1906 }, { "authors": [ "David Berthelot", "Tom Schumm", "Luke Metz" ], "title": "Began: Boundary equilibrium generative adversarial networks", "venue": "arXiv preprint arXiv:1703.10717,", "year": 2017 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Nonlinear programming", "venue": "Journal of the Operational Research Society,", "year": 1997 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Neural photo editing with introspective adversarial networks", "venue": "arXiv preprint arXiv:1609.07093,", "year": 2016 }, { "authors": [ "Ke Chen" ], "title": "Matrix preconditioning techniques and applications, volume 19", "venue": null, "year": 2005 }, { "authors": [ "Ruohan Wang Antoine Cully", "Hyung Jin Chang", "Yiannis Demiris" ], "title": "Magan: Margin adaptation for generative adversarial networks", "venue": "arXiv preprint arXiv:1704.03817,", "year": 2017 }, { "authors": [ "Constantinos Daskalakis", "Andrew Ilyas", "Vasilis Syrgkanis", "Haoyang Zeng" ], "title": "Training gans with optimism", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ishan Deshpande", "Ziyu Zhang", "Alexander Schwing" ], "title": "Generative modeling using the sliced wasserstein distance", "venue": null, "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "arXiv preprint arXiv:1811.03804,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Gauthier Gidel", "Hugo Berard", "Gaëtan Vignoud", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial networks", "venue": "arXiv preprint arXiv:1802.10551,", "year": 2018 }, { "authors": [ "Gauthier Gidel", "Reyhane Askari Hemmat", "Mohammad 
Pezeshki", "Remi Lepriol", "Gabriel Huang", "Simon Lacoste-Julien", "Ioannis Mitliagkas" ], "title": "Negative momentum for improved game dynamics", "venue": null, "year": 2019 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Roger A Horn", "Charles R Johnson" ], "title": "Topics in matrix analysis", "venue": "Cambridge university press,", "year": 1994 }, { "authors": [ "Wei Hu", "Lechao Xiao", "Jeffrey Pennington" ], "title": "Provable benefit of orthogonal initialization in optimizing deep linear networks", "venue": "arXiv preprint arXiv:2001.05992,", "year": 2020 }, { "authors": [ "Xun Huang", "Yixuan Li", "Omid Poursaeed", "John Hopcroft", "Serge Belongie" ], "title": "Stacked generative adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Haoming Jiang", "Zhehui Chen", "Minshuo Chen", "Feng Liu", "Dingding Wang", "Tuo Zhao" ], "title": "On computation and generalization of generative adversarial networks under spectrum control", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Olin G Johnson", "Charles A Micchelli", "George Paul" ], "title": "Polynomial preconditioners for conjugate gradient calculations", "venue": "SIAM Journal on Numerical Analysis,", "year": 1983 }, { "authors": [ "Animesh Karnewar", "Oliver Wang" ], "title": "Msg-gan: multi-scale gradient gan for stable image synthesis", "venue": "arXiv preprint arXiv:1903.06048,", "year": 2019 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Soheil Kolouri", "Charles E Martin", "Gustavo K Rohde" ], "title": "Sliced-wasserstein autoencoder: An embarrassingly simple generative model", "venue": "arXiv preprint arXiv:1804.01947,", "year": 2018 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha SohlDickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Yu Cheng", "Yiming Yang", "Barnabás Póczos" ], "title": 
"Mmd gan: Towards deeper understanding of moment matching network", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Chunyuan Li", "Changyou Chen", "David Carlson", "Lawrence Carin" ], "title": "Preconditioned stochastic gradient langevin dynamics for deep neural networks", "venue": "arXiv preprint arXiv:1512.07666,", "year": 2015 }, { "authors": [ "Jerry Li", "Aleksander Madry", "John Peebles", "Ludwig Schmidt" ], "title": "Towards understanding the dynamics of generative adversarial networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Zinan Lin", "Ashish Khetan", "Giulia Fanti", "Sewoong Oh" ], "title": "Pacgan: The power of two samples in generative adversarial networks", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Zinan Lin", "Vyas Sekar", "Giulia Fanti" ], "title": "Why Spectral Normalization Stabilizes GANs: Analysis and Improvements", "venue": "In arXiv e-prints,", "year": 2020 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": null, "year": 2017 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Luke Metz", "Ben Poole", "David Pfau", "Jascha Sohl-Dickstein" ], "title": "Unrolled generative adversarial networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cgans with projection discriminator", "venue": "arXiv preprint arXiv:1802.05637,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Youssef Mroueh", "Tom Sercu", "Vaibhava Goel" ], "title": "Mcgan: Mean and covariance feature matching gan", "venue": "arXiv preprint arXiv:1702.08398,", "year": 2017 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "The loss surface of deep and wide neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "Augustus Odena", "Jacob Buckman", "Catherine Olsson", "Tom B. 
Brown", "Christopher Olah", "Colin Raffel", "Ian Goodfellow" ], "title": "Is Generator Conditioning Causally Related to GAN Performance", "venue": null, "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Samuel Schoenholz", "Surya Ganguli" ], "title": "Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ben Poole", "Alexander A Alemi", "Jascha Sohl-Dickstein", "Anelia Angelova" ], "title": "Improved generator objectives for gans", "venue": "arXiv preprint arXiv:1612.02780,", "year": 2016 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": null, "year": 2016 }, { "authors": [ "Jiqing Wu", "Zhiwu Huang", "Wen Li", "Janine Thoma", "Luc Van Gool" ], "title": "Sliced wasserstein generative models", "venue": null, "year": 2019 }, { "authors": [ "Lechao Xiao", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Samuel S Schoenholz", "Jeffrey Pennington" ], "title": "Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks", "venue": "arXiv preprint arXiv:1806.05393,", "year": 2018 }, { "authors": [ "Lechao Xiao", "Jeffrey Pennington", "Samuel S Schoenholz" ], "title": "Disentangling trainability and generalization in deep neural networks", "venue": null, "year": 2020 }, { "authors": [ "Yasin Yazıcı", "Chuan-Sheng Foo", "Stefan Winkler", "Kim-Hui Yap", "Georgios Piliouras", "Vijay Chandrasekhar" ], "title": "The unusual effectiveness of averaging in gan training", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling sgd batch size to 32k for imagenet training", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Stochastic gradient descent optimizes over-parameterized deep relu networks", "venue": "arXiv preprint arXiv:1811.08888,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) successfully transform samples from one distribution to another. Nevertheless, training GANs is known to be challenging, and its performance is often sensitive to hyper-parameters and datasets. Understanding the training difficulties of GAN is thus an important problem.\nRecent studies in neural network theory (Pennington et al., 2017; Xiao et al., 2018; 2020) suggest that the spectrum of the input-output Jacobian or neural tangent kernel (NTK) is an important metric for understanding training performance. While directly manipulating the spectrum of the Jacobian or NTK is not easy, a practical approach is to manipulate the spectrum of weight matrices, such as orthogonal initialization (Xiao et al., 2018). For a special neural net, Hu et al. (2020) showed that orthogonal initialization leads to better convergence result than Gaussian initialization, which provides early theoretical evidence for the importance of manipulating the weight matrix spectrum.\nMotivated by these studies, we suspect that an ‘adequate’ weight matrix spectrum is also important for GAN training. Indeed, one of the most popular techniques for GAN training, spectral normalization (SN) (Miyato et al., 2018), manipulates the spectrum by scaling all singular values by a constant. This ensures the spectral norm is upper bounded. However, we find that for some hyperparameters and for high-resolution datasets, SN-GAN fails to generate good images. In a study we find the condition numbers of weight matrices to become very large and the majority of the singular values are close to 0 during training. See Fig. 1(a) and Fig. 2(a). This can happen as SN does not promote a small condition number.\nThis finding motivates to reduce the condition number of weights during GAN training. Recall that controlling the condition number is also a central problem in numerical linear algebra, known as preconditioning (see Chen (2005)). We hence seek to develop a “plug-in” preconditioner for weights. This requires the preconditioner to be differentiable. Out of various preconditioners, we find the polynomial preconditioner to be a suitable choice due to the simple differentiation and strong theoretical support from approximation theory. Further, we suggest to adaptively adjust the strength of the preconditioner during training so as to not overly restrict the expressivity. We show the efficacy of preconditioning on CIFAR10 (32 ⇥ 32), STL (48 ⇥ 48) and LSUN bedroom, tower and living room (256 ⇥ 256). Summary of contributions. For a deep linear network studied in (Hu et al., 2020), we prove that if all weight matrices have bounded spectrum, then gradient descent converges to global min-\nimum at a geometric rate. We then introduce a PC-layer (preconditioning layer) that consists of a low-degree polynomial preconditioner. We further study adaptive preconditioning (APC) which adaptively controls the strength of PC on different layers in different iterations. Applying PC and APC to unconditional GAN training on LSUN data (256 ⇥ 256), permits to generate high-quality images when SN-GAN fails. We also show that APC achieves better FID scores on CIFAR10, STL, and LSUN than a recently proposed method of Jiang et al. (2019)." }, { "heading": "1.1 RELATED WORK", "text": "Related to the proposed method is work by Jiang et al. (2019), which also controls the spectrum in GAN training. 
They re-parameterize a weight matrix W via W = USV^T, add orthogonal regularization of U and V, and a certain regularizer on the entries of the diagonal matrix S. This approach differs from ours in a few aspects. First, Jiang et al. (2019) essentially solve a constrained optimization problem with constraints U^T U = I, V^T V = I using a penalty method (Bertsekas, 1997). In contrast, our approach solves an unconstrained problem, since we add one layer into the neural net, similar to batch normalization (BN) (Ioffe & Szegedy, 2015) and SN (Miyato et al., 2018). Second, our PC-layer is a direct generalization of SN, as it includes the SN-layer as a special case. In contrast, the method of Jiang et al. (2019) differs from the SN-layer in any case. Our proposed method thus offers a smoother transition for existing users of SN.

In a broader context, a number of approaches have been proposed to stabilize and improve GAN training, such as modifying the loss function (Arjovsky et al., 2017; Arjovsky & Bottou, 2017; Mao et al., 2017; Li et al., 2017b; Deshpande et al., 2018), normalization and regularization (Gulrajani et al., 2017; Miyato et al., 2018), progressive growing techniques (Karras et al., 2018; Huang et al., 2017), changing the architecture (Zhang et al., 2019; Karnewar & Wang, 2019), and utilizing side information like class labels (Mirza & Osindero, 2014; Odena et al., 2017; Miyato & Koyama, 2018). Using this taxonomy, our approach fits the "normalization and regularization" category (even though our method is not exactly normalization, the essence of "embedded control" is similar). Note that these directions are relatively orthogonal, and our approach can potentially be combined with other techniques such as progressive growing. However, due to limited computational resources, we focus on unconditional GANs using classical architectures, the setting studied by Miyato et al. (2018)." }, { "heading": "1.2 NOTATION AND DEFINITION", "text": "We use eig(A) to denote the multiset (i.e., allowing repetition) of all eigenvalues of A. If all eigenvalues of A are non-negative real numbers, we say A is a positive semidefinite (PSD) matrix. The singular values of a matrix A ∈ Rn×m are the square roots of the eigenvalues of A^T A ∈ Rm×m. Let σmax(A) and σmin(A) denote the maximum and minimum singular values of A. Let ‖A‖2 denote the spectral norm of A, i.e., the largest singular value. The condition number of a square matrix A is traditionally defined as κ(A) = ‖A‖2 ‖A^{-1}‖2 = σmax(A)/σmin(A). We extend this definition to a rectangular matrix A ∈ Rn×m with n ≥ m via κ(A) = σmax(A)/σmin(A). Let deg(p) denote the degree of a polynomial p, and let Pk = {p | deg(p) ≤ k} be the set of polynomials of degree no more than k." }, { "heading": "2 WHY CONTROLLING THE SPECTRUM?", "text": "To understand why controlling the spectrum is helpful, we leverage recent tools in neural network theory to prove the following result: if the weight matrices have small condition numbers, then gradient descent for deep pyramid linear networks converges to the global minimum fast. This is inspired by Hu et al. (2020), who analyze a deep linear network to justify orthogonal initialization.

Similar to Hu et al. (2020), we consider a linear network that takes an input x ∈ Rdx×1 and outputs F(θ; x) = WL WL−1 · · · W1 x ∈ Rdy×1, (1) where θ = (W1, . . . , WL) represents the collection of all parameters and Wj is a matrix of dimension dj × dj−1, j = 1, . . . , L. Here we define d0 = dx and dL = dy. Assume there exists r ∈ {1, . . .
, L}, such that dy = dL ≤ dL−1 ≤ · · · ≤ dr, and n ≥ d0 ≥ d1 ≥ · · · ≥ dr. This means the network is a pyramid network, which generalizes the equal-width network of Hu et al. (2020).

Suppose y = (y1; . . . ; yn) ∈ Rndy×1 are the labels, and the predictions are F(θ; X) = (F(θ; x1); . . . ; F(θ; xn)) ∈ Rndy×1. We consider a quadratic loss L(θ) = (1/2) ‖y − F(θ; X)‖².

Starting from θ(0), we generate θ(k) = (W1(k), . . . , WL(k)), k = 1, 2, . . . , via gradient descent: θ(k + 1) = θ(k) − η ∇L(θ(k)). Denote the residual e(k) = F(θ(k); X) − y. For given τl ≥ 1, µl ≥ 0, l = 1, . . . , L, define

R := {θ = (W1, . . . , WL) | τl ≥ σmax(Wl) ≥ σmin(Wl) ≥ µl, ∀l},
ρ := L ‖X‖2 τL · · · τ1 (‖e(0)‖ + ‖X‖F τL · · · τ1),
µ := (µ1 · · · µL)² σmin(X)².

The following result states that if θ(k) stays within the region R (i.e., the weight matrices have bounded spectrum) for k = 0, 1, . . . , K, then the loss decreases at a geometric rate until iteration K. The rate (1 − µ/ρ) depends on (τL · · · τ1)² / (µL · · · µ1)², which is related to the condition numbers of all weights.

Theorem 1 Suppose η = 1/ρ. Assume θ(k) ∈ R for k = 0, 1, . . . , K. Then we have

‖e(k + 1)‖² ≤ (1 − µ/ρ) ‖e(k)‖², k = 0, 1, . . . , K. (2)

See Appendix D.3.1 for the proof and detailed discussions.

For a proper initial point θ(0) where the Wl(0) are full-rank, we can always pick τl, µl so that θ(0) ∈ R. The trajectory {θ(k)} either stays in R forever (in which case K = ∞), or leaves R at some finite iteration K. In the former case, the loss converges to zero at a geometric rate; in the latter case, the loss decreases to below (1 − µ/ρ)^K ‖e(0)‖². However, our theorem does not specify how large K is for a given situation. Previous works on convergence (e.g., Hu et al., 2020; Du et al., 2018; Allen-Zhu et al., 2019; Zou et al., 2018) bound the movement of the weights with extra assumptions, so that the trajectory stays in a certain nice regime (related to R). We do not attempt to prove that the trajectory stays in R. Instead, we use this as a motivation for algorithm design: if we can improve the condition numbers of the weights during training, then the trajectory may stay in R for a longer time, and thus lead to smaller loss values. Next, we present the preconditioning layer as such a method." }, { "heading": "3 PRECONDITIONING LAYER", "text": "In the following, we first introduce classical polynomial preconditioners in Sec. 3.1. We then present the preconditioning layer for deep nets in Sec. 3.2. We explain how to compute a preconditioning polynomial in Sec. 3.3, and finally present adaptive preconditioning in Sec. 3.4." }, { "heading": "3.1 PRELIMINARY: POLYNOMIAL PRECONDITIONER", "text": "Preconditioning considers the following classical question: for a symmetric matrix Q, how can we find an operator g such that κ(g(Q)) is small? Due to the importance of this question and its wide applicability, there is a huge literature on preconditioning; see, e.g., Chen (2005) for an overview, and Appendix B for a short introduction. In this work, we focus on polynomial preconditioners (Johnson et al., 1983). The goal is to find a polynomial p̂ such that p̂(Q)Q has a small condition number. The matrix p̂(Q) is often called the preconditioner, and ĝ(Q) := p̂(Q)Q is the preconditioned matrix. We call g the preconditioning polynomial.
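As a toy numerical illustration of this idea (our own example: the degree-1 choice p(Q) = 2I − Q is a classical textbook preconditioner, not the polynomial proposed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
Q = B @ B.T                          # symmetric PSD test matrix
Q /= np.linalg.eigvalsh(Q).max()     # scale eigenvalues into (0, 1]

def cond_sym(M):
    ev = np.linalg.eigvalsh(M)       # eigenvalues of a symmetric matrix
    return ev.max() / ev.min()

# p(Q) = 2I - Q gives g(lambda) = lambda * (2 - lambda), which pushes
# the spectrum of p(Q) Q = 2Q - Q^2 closer to 1 than that of Q itself.
P = 2 * np.eye(50) - Q
print(cond_sym(Q), cond_sym(P @ Q))  # the second number is smaller
```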
Polynomial preconditioning has a special merit: the difficult problem of manipulating eigenvalues can be transformed into manipulating a 1-d function, based on the following fact (proof in Appendix E.2.1).

Claim 3.1 Suppose ĝ is any polynomial, and Q ∈ Rm×m is a real symmetric matrix with eigenvalues λ1 ≤ · · · ≤ λm. Then the eigenvalues of the matrix ĝ(Q) are ĝ(λi), i = 1, . . . , m. As a corollary, if ĝ([λ1, λm]) ⊆ [1 − ε, 1], then eig(ĝ(Q)) ⊆ [1 − ε, 1].

To find a matrix ĝ(Q) = p̂(Q)Q that is well-conditioned, we need to find a polynomial p̂ such that ĝ(λ) = p̂(λ)λ maps [λ1, λm] into [1 − ε, 1]. This can be formulated as a function approximation problem: find a polynomial ĝ(λ) of the form λp̂(λ) that approximates a function f̂(λ) on λ ∈ [λ1, λm]. Under some criterion, the optimal polynomial is a variant of the Chebyshev polynomial, and the solutions to more general criteria are also well understood. See Appendix B.1 for more.

A scaling trick is commonly used in practice. It reduces the problem of preconditioning Q to the problem of preconditioning a scaled matrix Qsca = Q/λm. Scaling employs two steps: first, we find a polynomial g that approximates f(x) = 1 on x ∈ [λ1/λm, 1]; second, we set ĝ(λ) = g(λ/λm) and use ĝ(Q) = g(Q/λm) = g(Qsca) as the final preconditioned matrix. It is easy to verify that ĝ approximates f̂(λ) = 1 on [λ1, λm]. Thus this approach is essentially identical to solving the approximation problem on [λ1, λm]. Johnson et al. (1983) use this trick mainly to simplify notation, since they can assume λm = 1 without loss of generality. We will use this scaling trick for a different purpose (see Section 3.3)." }, { "heading": "3.2 PRECONDITIONING LAYER IN DEEP NETS", "text": "Suppose D(W1, . . . , WL) is a deep net parameterized by weights W1, . . . , WL for layers l ∈ {1, . . . , L}. To control the spectrum of a weight Wl, we want to embed a preconditioner ĝ into the neural net. Among various preconditioners, polynomial ones are appealing since their gradient is simple and permits natural integration with backpropagation. For this we present a preconditioning layer (PC-layer) as follows: a PC-layer ĝ(W) = g(SN(W)) is the concatenation of a preconditioning polynomial g and the SN operation of (Miyato et al., 2018) (see the appendix on details of FPC and APC for details of SN(W)). The SN operator is used as a scaling operator (the reason is explained later). We describe an efficient implementation of the PC-layer in Appendix C.3.

In our case, we use A = SN(W) to denote the scaled matrix. Prior work on polynomial preconditioners (Johnson et al., 1983; Chen, 2005) often studies square matrices. To handle rectangular matrices, some modifications are needed.

A naïve solution is to apply a preconditioner to the symmetrized matrix A^T A, leading to a matrix g(A) = p(A^T A) A^T A. This solution works for linear models (see Appendix B.2 for details), but it is not appropriate for deep nets, since the shape of p(A^T A) A^T A ∈ Rm×m differs from that of A. To maintain the shape n × m, we propose to transform A to g(A) = p(AA^T) A ∈ Rn×m. This transformation works for general parameterized models, including linear models and neural nets. For a detailed comparison of these two approaches, see Appendix B.2. The following claim relates the spectrum of A and p(AA^T) A; see the proof in Appendix E.2.2.

Claim 3.2 Suppose A ∈ Rn×m has singular values σ1 ≤ · · · ≤ σm. Suppose g(x) = p(x²)x where p is a polynomial. Then the singular values of g(A) = p(AA^T) A are |g(σ1)|, . . . , |g(σm)|.

To find a polynomial p such that g(A) = p(AA^T) A is well-conditioned, we need to find a polynomial p such that g(x) = p(x²)x maps [σ1, σm] into [1 − ε, 1] for some ε. This can be formulated as a function approximation problem: find a polynomial g(x) in Gk that approximates the function f(σ) = 1 on σ ∈ [σ1, σm], where Gk = {g(x) = xp(x²) | p ∈ Pk}. We describe the algorithm for finding the preconditioning polynomial g in Sec. 3.3.

In principle, the PC-layer can be added to any deep net, whether for supervised learning or for GANs. Here, we focus on GANs for the following reason. Current algorithms for supervised learning already work quite well, diminishing the effect of preconditioning. In contrast, for GANs, there is a lot of room to improve training. Following SN-GAN, which applies SN to the discriminator of GANs, in the experiments we apply PC to the discriminator." }, { "heading": "3.3 FINDING PRECONDITIONING POLYNOMIALS", "text": "In this subsection, we discuss how to generate preconditioning polynomials. This generation is done off-line and independently of training. We present the optimization formulation and discuss the choice of a few hyperparameters, such as the desirable range and the target function f.

Optimization formulation. Suppose we are given a range [σL, σU], a target function f, and an integer k; the specific choices are discussed later. Suppose we want to solve the following approximation problem: find the best polynomial of the form g(x) = x(a0 + a1 x² + · · · + ak x^{2k}) that approximates f(x) on the domain [σL, σU], i.e., solve

min_{g ∈ Gk} d_{[σL, σU]}(g(x), f(x)), (3)

where Gk = {g(x) = xp(x²) | p ∈ Pk} and d_{[σL, σU]} is a distance metric on the function space C[σL, σU], such as the ℓ∞ distance d_{[σL, σU]}(f, g) = max_{t ∈ [σL, σU]} |f(t) − g(t)|. We consider a weighted least-squares problem suggested by Johnson et al. (1983):

min_{g ∈ Gk} ∫_{σL}^{σU} |g(x) − f(x)|² w(x) dx, (4)

where w(x) = x^α is the weight function used in (Johnson et al., 1983). We discretize the objective and solve the finite-sample version of Eq. (4) as follows:

min_{c = (c0, c1, . . . , ck) ∈ R^{k+1}} ∑_{i=1}^{n} ( xi ∑_{t=0}^{k} ct xi^{2t} − f(xi) )² w(xi), (5)

where xi ∈ [σL, σU] for all i (e.g., drawn from the uniform distribution on [σL, σU]). This is a weighted least-squares problem (thus convex) that can be easily solved by a standard solver.

Choice of desirable range [σL, σU]. The range [σL, σU] within which we want to approximate the target function is often chosen to be the convex hull of the singular values of the matrix to be preconditioned. For the original matrix W, the desirable range is [σL, σU] = [σmin(W), σmax(W)]. However, this range varies across different layers and different iterations. For this reason we scale each W by 1/‖W‖2 to obtain A, so that its singular values lie in a fixed range [0, 1]. Note that a more precise range is [σmin(A)/σmax(A), 1], but we can relax it to [0, 1]. We follow Miyato et al. (2018) and use one power iteration to estimate the spectral norm, W̃ ≈ ‖W‖2, and denote SN(W) = W/W̃. Since W̃ is not exactly ‖W‖2, the singular values of A = SN(W) may not lie exactly in [0, 1]. We have checked the empirical estimation and found that the estimated spectral norm during the training of SN-GAN is often less than 1.1 times the true spectral norm (see Fig. 5 in Appendix E.1); thus we pick [σL, σU] = [0, 1.1] in our implementation.

Choice of target function f(σ). Previously, we discussed the ideal situation where [σL, σU] = [σ1, σm], and thus the target function is 1.
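Note that Eq. (5) is an ordinary weighted least-squares problem, so the coefficients c can be obtained in closed form. A minimal sketch, under our own illustrative assumptions (α = 1, k = 3, and a piecewise-linear target with cutoff 0.8; see the next paragraphs for how such targets arise):

```python
import numpy as np

k, alpha, b = 3, 1.0, 0.8                   # degree parameter, weight exponent, PL cutoff
xs = np.linspace(1e-3, 1.1, 400)            # sample points in [sigma_L, sigma_U]
f = np.minimum(xs / b, 1.0)                 # piecewise-linear target PL_b
w = xs ** alpha                             # weight w(x) = x^alpha

# Design matrix for g(x) = x * (c0 + c1 x^2 + ... + ck x^(2k)).
G = np.stack([xs ** (2 * t + 1) for t in range(k + 1)], axis=1)
sw = np.sqrt(w)
c, *_ = np.linalg.lstsq(G * sw[:, None], f * sw, rcond=None)

def g(x):  # the fitted preconditioning polynomial (degree 2k + 1 = 7 here)
    return sum(ci * x ** (2 * t + 1) for t, ci in enumerate(c))

print(g(0.0), g(0.9), g(1.0))               # exactly 0 at 0, close to 1 near 1
```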
In the previous paragraph, we relaxed the desirable range to [0, σU]; then we cannot set f(x) = 1 on all of [0, σU], because any polynomial g ∈ Gk must satisfy g(0) = 0, causing a large approximation error at σ = 0. We shall set f(0) = 0. A candidate target function is PLb(x), the piecewise-linear function with cutoff point b defined as PLb(x) = x/b for x < b and PLb(x) = 1 for x ≥ b. If the cutoff point b < σmin(A), then PLb(σ) maps all singular values of A to 1.

While setting all singular values to 1 is ideal for fast training, this may reduce the expressiveness of deep nets. More specifically, the set of functions {D(W1, . . . , WL) | eig(Wl^T Wl) ⊆ {1}} is smaller than {D(W1, . . . , WL) | eig(Wl^T Wl) ⊆ [σ0, 1]}; thus forcing all singular values to be 1 may hurt the representation power. Therefore, we do not want the target function to be 1 on all of [σmin(A), σU]. In practice, the value of σmin(A) varies for different problems; therefore we permit a flexible target function f, to be chosen by the user.

In our implementation, we restrict target functions to a family of piecewise-linear functions. We use PLb(x) with a relatively large cutoff point b, such as 0.8 or 0.3. We plot our candidate target functions PL0.3, PL0.4, PL0.6 and PL0.8 in Figure 3. As the cutoff point b changes from 1 to 0, the function PLb becomes more aggressive, as it pushes more singular values to 1. As a result, the optimization will likely become easier, while the representation power becomes weaker. The exact choice of the target function is likely problem-dependent, and we discuss two strategies to select it in Section 3.4.

Search space of the preconditioning polynomial. As mentioned earlier, the default search space is Gk = {g(x) = xp(x²) | p ∈ Pk} for a pre-fixed k. The degree of g is an important hyperparameter. On the one hand, the higher the degree, the better the polynomial can fit the target function f. On the other hand, a higher degree leads to more computation. In our implementation, we consider k = 1, 2, 3, 4, i.e., polynomials of degree 3, 5, 7 and 9. The extra time is relatively small; see Section C.4 for details." }, { "heading": "3.4 FIXED PRECONDITIONING AND ADAPTIVE PRECONDITIONING", "text": "The preconditioning polynomial is determined by the target function and the degree k. Which polynomial shall we use during training?

Candidate preconditioners. At first sight, there are two hyper-parameters, b and k. Nevertheless, if b is small (steep slope), then it is hard to approximate PLb by low-order polynomials. For each degree k ∈ {3, 5, 7, 9}, there is a certain bk such that b < bk leads to a large approximation error. We find that b3 ≈ 0.8, b5 ≈ 0.6, b7 ≈ 0.4, b9 ≈ 0.3. After fitting PL0.3, PL0.4, PL0.6 and PL0.8, the resulting polynomials are shown in Figure 3.

A natural approach is to add the PC-layer to all layers of the neural net, resulting in a preconditioned net DPC(θ) = D(g(SN(W1)), . . . , g(SN(WL))). We call this method fixed preconditioning (FPC). Just like other hyperparameters, in practice we can try various preconditioners and pick the one with the best performance. Not surprisingly, the best preconditioner varies across datasets.

Adaptive preconditioning (APC). Motivated by adaptive learning rate schemes like Adam (Kingma & Ba, 2014) and LARS (You et al., 2017), we propose an adaptive preconditioning scheme.
In APC, we apply the preconditioner in an epoch-adaptive and layer-adaptive manner: at each epoch and for each layer, the algorithm automatically picks a proper preconditioner based on the current condition number.

The standard condition number κ(A) = σmax(A)/σmin(A) is not necessarily a good indicator of the optimization performance. In APC, we use a modified condition number κ̃(A) = σmax(A) / ((σ1(A) + · · · + σm0(A)) / m0), where A has m columns and m0 = ⌈m/10⌉. We prepare r preconditioning polynomials g1, . . . , gr with different strengths (e.g., the four polynomials g1, g2, g3, g4 shown in Figure 3). We set a number of ranges [0, τ1], [τ1, τ2], . . . , [τr, ∞) and let τ0 = 0, τr+1 = ∞. If the modified condition number of A falls into the range [τi, τi+1] for i ∈ {0, 1, . . . , r}, we use gi in the PC-layer. In our implementation, we set r = 4. To save computation, we only compute the modified condition number and update the PC strength at a fixed interval (e.g., every 1000 iterations). A summary of APC is presented in Table 3 in Appendix C.2.

Computation time. We use a few implementation tricks; see Appendix C.3. In our implementation of FPC with a degree 3, 5, 7 or 9 polynomial, the actual added time is around 20–30% (Fig. 4(a)) of the original training time of SN-GAN. Fig. 4(b) shows that the extra time of APC over SN is often less than 10%. See Appendix C.4 for more on the computation time." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We will demonstrate the following two findings. First, SN-GAN still suffers from training instabilities, and the failure cases are accompanied by large condition numbers. Second, PC-layers can reduce the condition number and improve the final performance, especially for high-resolution data (LSUN 256 × 256).

We conduct a set of experiments for unconditional image generation on CIFAR-10 (32 × 32), STL-10 (48 × 48), LSUN-bedroom (128 × 128 and 256 × 256), LSUN-tower (256 × 256) and LSUN-living-room (256 × 256). We also compare the condition numbers of the discriminator layers for different normalization methods to demonstrate the connection between the condition number and the performance. The following methods are used in our experiments: standard SN; SVD with D-Optimal Reg. (Jiang et al., 2019); FPC with degree 3 or 7 preconditioners; APC. Following Miyato et al. (2018), we use the log-loss GAN on the CNN structure and the hinge-loss GAN on the ResNet structure.

CIFAR and STL: training failure of the (1,1)-update. Tuning a GAN is notoriously difficult, and performance is sensitive to hyper-parameters. Even for low-resolution images, without prior knowledge of good hyper-parameters such as Dit and Git, training a GAN is often not trivial. On CIFAR10, SN-GAN uses Dit = 5, Git = 1 for ResNet; for simplicity, we call this a (5, 1)-update. However, using a (1, 1)-update, i.e., changing Dit = 5 to Dit = 1 while keeping Git = 1, leads to an SN-GAN training failure: a dramatic decrease of final performance and an FID score above 77. SN-GAN with a (1, 1)-update also fails on STL data, yielding an FID score above 147. We are interested in stabilizing the (1, 1)-update for two reasons: first, trainability for both the (1, 1)-update and the (5, 1)-update means improved training stability; second, the (1, 1)-update requires only about 1/3 of the time of the (5, 1)-update. Therefore, in the first experiment, we explore GAN training with the (1, 1)-update on CIFAR-10 and STL-10.

Failure mode: large condition numbers.
" }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We will demonstrate the following two findings. First, SN-GAN still suffers from training instabilities, and the failure case is accompanied by large condition numbers. Second, PC-layers can reduce the condition number and improve the final performance, especially for high-resolution data (LSUN 256 × 256). We conduct a set of experiments on unconditional image generation on CIFAR-10 (32 × 32), STL-10 (48 × 48), LSUN-bedroom (128 × 128 and 256 × 256), LSUN-tower (256 × 256) and LSUN-living-room (256 × 256). We also compare the condition numbers of the discriminator layers for different normalization methods to demonstrate the connection between the condition number and the performance. The following methods are used in our experiments: standard SN; SVD with D-Optimal Reg. (Jiang et al., 2019); FPC with degree 3 or 7 preconditioners; APC. Following Miyato et al. (2018), we use the log-loss GAN on the CNN structure and the hinge-loss GAN on the ResNet structure.
CIFAR and STL: training failure of the (1, 1)-update. Tuning a GAN is notoriously difficult, and training is sensitive to hyper-parameters. Even for low-resolution images, without prior knowledge of good hyper-parameters such as D_it and G_it, training a GAN is often not trivial. On CIFAR-10, SN-GAN uses D_it = 5, G_it = 1 for ResNet; for simplicity, we call this a (5, 1)-update. However, using a (1, 1)-update, i.e., changing D_it = 5 to D_it = 1 while keeping G_it = 1, leads to an SN-GAN training failure: a dramatic decrease of the final performance and an FID score above 77. SN-GAN with the (1, 1)-update also fails on STL data, yielding an FID score above 147. We are interested in stabilizing the (1, 1)-update for two reasons: first, trainability under both the (1, 1)-update and the (5, 1)-update means improved training stability; second, the (1, 1)-update requires only about 1/3 of the time of the (5, 1)-update. Therefore, in the first experiment, we explore GAN training with the (1, 1)-update on CIFAR-10 and STL-10.
Failure mode: large condition numbers. Understanding the failure mode of training is often very useful for designing algorithms (e.g., Glorot & Bengio, 2010). We suspect that a large condition number is a failure mode for GAN training. As Table 1 shows, the high FID scores (bad case) of SN-GAN are accompanied by large condition numbers.
PC reduces condition numbers and rectifies failures. Table 1 shows that FPC and APC can both greatly improve the training performance: they reduce the FID from 77 to less than 20 for CIFAR-10, and from 147 to less than 34 for STL in 200k iterations. The evolution of the 5 smallest singular values of the adaptively preconditioned matrices and of the condition numbers is shown in Fig. 1(b) and Fig. 2(b) for STL-10 training on ResNet with D_it = 1. This shows that PC-GAN successfully improves the spectrum of the weight matrices in this setting.
Experiments on the “good” case of SN-GAN. We report the results for the (5, 1)-update on CIFAR-10 and STL-10 with ResNet in the Appendix. For those, FPC and APC achieve similar or slightly better FID scores. We also report IS scores there, and we list the results of PC and multiple baselines on the CNN structure in the Appendix.
High-resolution images: LSUN. Using high-resolution data is more challenging. We present numerical results on LSUN-bedroom (128 × 128 and 256 × 256), LSUN-tower (256 × 256) and LSUN-living-room (256 × 256) in Table 2. The training time for one instance is 30 hours on a single RTX 2080 Ti (200k iterations).
Note that SN-GAN is unstable and results in FID > 80 for LSUN-bedroom 256 × 256. The SVD method, our FPC and APC all produce reasonable FID scores on all three datasets. Importantly, our FPC is comparable to or better than SVD, and our APC consistently outperforms the SVD method by 4–6 FID points in most cases. Also note that the condition numbers of the failure case of SN-GAN are much higher than those of the two normal cases of SN-GAN. In all cases, FPC and APC achieve significantly lower condition numbers than SN-GAN. APC achieves higher condition numbers than FPC, and also better FID scores. We suspect that FPC over-controls the condition numbers, which leads to lower representation power; in contrast, APC strikes a better balance between representation and optimization. The generated image samples are presented in Appendix F.5." }, { "heading": "5 CONCLUSION", "text": "We prove that, for deep pyramid linear networks, if all weight matrices have bounded singular values throughout training, then the algorithm converges to a globally minimal value at a geometric rate. This result indicates that small weight-matrix condition numbers are helpful for training. We propose a preconditioning (PC) layer to improve weight-matrix condition numbers during training, by leveraging tools from the polynomial preconditioning literature. It is differentiable, and thus can be plugged into any neural net. We propose two methods to utilize the PC-layer: in FPC (fixed preconditioning), we add a fixed PC-layer to all layers; in APC (adaptive preconditioning), we add PC-layers with different preconditioning power depending on the condition number. Empirically, we show that by applying FPC and APC to GAN training, we can generate good images in several cases where SN-GAN performs badly, such as LSUN-bedroom 256 × 256 image generation." } ]
2020
null
SP:bc280e927e60317d6c2382d5507f522ba58ebe42
[ "This paper proposes techniques that generate logical rules out of knowledge graphs; the idea is to produce more complex rules than usual by exploiting a differentiable formulation of the associated learning process. This is a relevant theme as rule learning from knowledge graphs is important in practice due to its potential interpretability (as compared to black-box schemes based on embeddings). The solution is relatively simple to describe, with a score that leads to differentiable learning, and some needed insights to obtain useful results. The empirical testing seems fine and does indicate that the method is useful in practice." ]
Logical rules inside a knowledge graph (KG) are essential for reasoning, logical inference, and rule mining. However, existing works can only handle simple, i.e., chain-like and tree-like, rules and cannot capture KG's complex semantics, which can be better captured by graph-like rules. Besides, learning graph-like rules is very difficult because the graph structure exhibits a huge discrete search space. To address these issues, observing that the plausibility of a logical rule can be explained by how frequently it appears in a KG, we propose a score function that represents graph-like rules with learnable parameters. The score also helps relax the discrete space into a continuous one and can be uniformly transformed into matrix form by the Einstein summation convention. Thus, it allows us to learn graph-like rules in an efficient, differentiable, and end-to-end training manner by optimizing the normalized score. We conduct extensive experiments on real-world datasets to show that our method outperforms previous works due to its better expressive ability for logical rules. Furthermore, we demonstrate that our method can learn high-quality and interpretable graph-like logical rules.
[ { "affiliations": [], "name": "GRAPH-LIKE LOGI" } ]
[ { "authors": [ "Krister Åhlander" ], "title": "Einstein summation for multidimensional arrays", "venue": "Computers & Mathematics with Applications,", "year": 2002 }, { "authors": [ "Sören Auer", "Christian Bizer", "Georgi Kobilarov", "Jens Lehmann", "Richard Cyganiak", "Zachary Ives" ], "title": "Dbpedia: A nucleus for a web of open data", "venue": "In The semantic web,", "year": 2007 }, { "authors": [ "Ivana Balažević", "Carl Allen", "Timothy M Hospedales" ], "title": "Tucker: Tensor factorization for knowledge graph completion", "venue": "arXiv preprint arXiv:1901.09590,", "year": 2019 }, { "authors": [ "Kurt Bollacker", "Colin Evans", "Praveen Paritosh", "Tim Sturge", "Jamie Taylor" ], "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "venue": "In Proceedings of the 2008 ACM SIGMOD international conference on Management of data,", "year": 2008 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Yang Chen", "Sean Goldberg", "Daisy Zhe Wang", "Soumitra Siddharth Johri" ], "title": "Ontological pathfinding", "venue": "In International Conference on Management of Data,", "year": 2016 }, { "authors": [ "William W Cohen", "Haitian Sun", "R Alex Hofer", "Matthew Siegler" ], "title": "Scalable neural methods for reasoning with a symbolic knowledge base", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "G Daniel", "Johnnie Gray" ], "title": "opt einsum-a python package for optimizing contraction order for einsum-like expressions", "venue": "Journal of Open Source Software,", "year": 2018 }, { "authors": [ "Rajarshi Das", "Shehzaad Dhuliawala", "Manzil Zaheer", "Luke Vilnis", "Ishan Durugkar", "Akshay Krishnamurthy", "Alex Smola", "Andrew McCallum" ], "title": "Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Bhuwan Dhingra", "Manzil Zaheer", "Vidhisha Balachandran", "Graham Neubig", "Ruslan Salakhutdinov", "William W Cohen" ], "title": "Differentiable reasoning over a virtual knowledge base", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Richard Evans", "Edward Grefenstette" ], "title": "Learning explanatory rules from noisy data", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Luis Galárraga", "Christina Teflioudi", "Katja Hose", "Fabian M Suchanek" ], "title": "Fast rule mining in ontological knowledge bases with AMIE+", "venue": "International Journal on Very Large Databases,", "year": 2015 }, { "authors": [ "Will Hamilton", "Payal Bajaj", "Marinka Zitnik", "Dan Jurafsky", "Jure Leskovec" ], "title": "Embedding logical queries on knowledge graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vinh Thinh Ho", "Daria Stepanova", "Mohamed H Gad-Elrab", "Evgeny Kharlamov", "Gerhard Weikum" ], "title": "Rule learning from knowledge graphs guided by embedding models", "venue": "In International Semantic Web Conference,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 
International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Stanley Kok", "Pedro Domingos" ], "title": "Learning the structure of markov logic networks", "venue": "In International Conference on Machine Learning,", "year": 2005 }, { "authors": [ "Stanley Kok", "Pedro Domingos" ], "title": "Statistical predicate invention", "venue": "In International Conference on Machine Learning,", "year": 2007 }, { "authors": [ "Robin Manhaeve", "Sebastijan Dumancic", "Angelika Kimmig", "Thomas Demeester", "Luc De Raedt" ], "title": "Deepproblog: Neural probabilistic logic programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pasquale Minervini", "Matko Bosnjak", "Tim Rocktäschel", "Sebastian Riedel" ], "title": "Towards neural theorem proving at scale", "venue": "arXiv preprint arXiv:1807.08204,", "year": 2018 }, { "authors": [ "Pasquale Minervini", "Sebastian Riedel", "Pontus Stenetorp", "Edward Grefenstette", "Tim Rocktschel" ], "title": "Learning reasoning strategies in end-to-end differentiable proving, 2020", "venue": null, "year": 2020 }, { "authors": [ "Meng Qu", "Jian Tang" ], "title": "Probabilistic logic neural networks for reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyu Ren", "Weihua Hu", "Jure Leskovec" ], "title": "Query2box: Reasoning over knowledge graphs in vector space using box embeddings", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tim Rocktäschel", "Sebastian Riedel" ], "title": "End-to-end differentiable proving", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ali Sadeghian", "Mohammadreza Armandpour", "Patrick Ding", "Daisy Zhe Wang" ], "title": "Drum: End-toend differentiable rule mining on knowledge graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ehud Y Shapiro" ], "title": "Inductive inference of theories from facts", "venue": "Yale University, Department of Computer Science,", "year": 1981 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Komal K. Teru", "Etienne Denis", "William L. 
Hamilton" ], "title": "Inductive relation prediction by subgraph reasoning", "venue": "arXiv: Learning,", "year": 2020 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Po-Wei Wang", "Daria Stepanova", "Csaba Domokos", "J Zico Kolter" ], "title": "Differentiable learning of numerical rules in knowledge graphs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Wenhan Xiong", "Thien Hoang", "William Yang Wang" ], "title": "Deeppath: A reinforcement learning method for knowledge graph reasoning", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Fan Yang", "Zhilin Yang", "William W Cohen" ], "title": "Differentiable learning of logical rules for knowledge base reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yuan Yang", "Le Song" ], "title": "Learn to explain efficiently via neural logic inductive learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Wen Zhang", "Bibek Paudel", "Liang Wang", "Jiaoyan Chen", "Hai Zhu", "Wei Zhang", "Abraham Bernstein", "Huajun Chen" ], "title": "Iteratively learning embeddings and rules for knowledge graph reasoning", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Yuyu Zhang", "Xinshi Chen", "Yuan Yang", "Arun Ramamurthy", "Bo Li", "Yuan Qi", "Le Song" ], "title": "Efficient probabilistic logic reasoning with graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "We conduct the experiment of link prediction on three datasets Kinship", "UMLS", "Family following the setting in Sadeghian" ], "title": "To be fair, we remove the embedding information in logical rule mining methods to purely compare the expressive ability about logic rules", "venue": "We test all the methods for chain-like rules(Neural-LP (Yang et al., 2017), DRUM (Sadeghian et al., 2019) for more complex rule learning with only one input and one output, although they may learn inaccurate", "year": 2019 } ]
[ { "heading": null, "text": "Logical rules inside a knowledge graph (KG) are essential for reasoning, logical inference, and rule mining. However, existing works can only handle simple, i.e., chain-like and tree-like, rules and cannot capture KG's complex semantics, which can be better captured by graph-like rules. Besides, learning graph-like rules is very difficult because the graph structure exhibits a huge discrete search space. To address these issues, observing that the plausibility of a logical rule can be explained by how frequently it appears in a KG, we propose a score function that represents graph-like rules with learnable parameters. The score also helps relax the discrete space into a continuous one and can be uniformly transformed into matrix form by the Einstein summation convention. Thus, it allows us to learn graph-like rules in an efficient, differentiable, and end-to-end training manner by optimizing the normalized score. We conduct extensive experiments on real-world datasets to show that our method outperforms previous works due to its better expressive ability for logical rules. Furthermore, we demonstrate that our method can learn high-quality and interpretable graph-like logical rules." }, { "heading": "1 INTRODUCTION", "text": "A knowledge graph (KG) is a special type of directed graph that includes various entities as nodes and relations as directed edges, representing a large number of facts (Auer et al., 2007; Bollacker et al., 2008). In a KG, logical rules are sets of compositional logical relations within a specific structure, which are important for reasoning (Cohen et al., 2019; Zhang et al., 2019a; Qu & Tang, 2019), logical inference (Dhingra et al., 2020; Das et al., 2018; Xiong et al., 2017), rule mining (Sadeghian et al., 2019; Yang et al., 2017; Yang & Song, 2020), theorem proving (Rocktäschel & Riedel, 2017; Minervini et al., 2018; 2020), etc.
Learning logical rules (Galárraga et al., 2015; Chen et al., 2016), as an important task, aims to infer a structural logical rule for a logical query or relation, which can support logical query or link prediction while providing interpretable logical rules. The structures of logical queries can vary widely, with very different semantics, as shown in Figure 1, including chain-like, tree-like and graph-like rules. Learning logical rules, especially graph-like rules, is very difficult because both the logical structure and the relations assigned on each edge are unknown and must be inferred from input-output pairs, which together compose a huge discrete search space.
In this paper, we dive into the problem of learning graph-like logical rules, including both the logical structure representing how the logic connects and the relations assigned on different edges. Recently, a series of works on learning logical rules (Yang et al., 2017; Sadeghian et al., 2019; Yang & Song, 2020) has been proposed, which can not only support tasks including logical query and link prediction but, as a side effect, can also provide the mined logical rules with high interpretability. As shown in Figure 1, all these works are limited to learning chain-like rules (the left case) (Yang et al., 2017; Sadeghian et al., 2019) or tree-like rules (the middle case) (Hamilton et al., 2018; Ren et al., 2020; Yang & Song, 2020). However, there are widely existing graph-like logical rules, which the existing works cannot handle due to their limited expressive ability for logical rules.
Learning graph-like logical rules is very important in many scenarios such as recommendation systems, question-answering systems and KG completion, while learning such complex rules is still an open and challenging problem.
(Figure 1: example semantic questions for the three rule types. Chain-like: “Who is X's friend's supervisor?” Tree-like: “What is the address of the university that both the students X1 and X2 study at?” Graph-like: “Which book has two common readers with the book X while the two readers are friends?”)
We propose a novel method that can explicitly learn structural logical rules, including the logical structure and the relation assigned on each edge, and we can use the inferred logical rules to conduct inductive logical queries with unseen entities and graphs. All the structural logical rules constitute a discrete search space, and searching it is an NP-hard problem. To tackle this problem, our method constructs a continuous space that includes both the structural and the relational information to learn, which allows us to train our model in an end-to-end differentiable manner. Specifically, as shown in Figure 1, we take the frequency of a logical rule in the KG as its score to estimate how plausible the rule is. After optimizing the normalized score, our model yields interpretable logical rules of high quality and supports inductive logical query and link prediction, as demonstrated by our extensive experiments on real-world datasets.
Our contributions can be summarized in the following three aspects:
• We first propose the problem of learning graph-like rules and design an end-to-end differentiable model that can learn graph-like logical rules instead of only chain-like or tree-like rules, modeling both the logical structure describing how the logic connects and the relations assigned on edges.
• We provide a uniform expression by Einsum to represent the score of all graph-like logical rules, including those that cannot be represented by a combination of matrix/element-wise addition/product, which is elegant for expression and convenient for implementation.
• We conduct extensive experiments to demonstrate that our model has better expressive ability for graph-like logical rules and show that our model can mine high-quality logical rules with high interpretability." }, { "heading": "2 PROBLEM FORMULATION", "text": "Here, we formally introduce the definition of the logical score and, based on that, our model's main focus, relation inference (Yang et al., 2017; Sadeghian et al., 2019) and structural rule learning, as well as our evaluation task, logical query (Hamilton et al., 2018; Ren et al., 2020).
Definition 1 (Logical Score) A logical rule is formulated as ∧_{i=1}^n R_i → R_cpx : s_r, where s_r is the score of ∧_{i=1}^n R_i, each R_i is a relation R_i = R_i(V_i, V'_i) with V_i, V'_i ∈ {{X_j}, Y, {Z_k}} for i = 1, · · · , n, and R_cpx is a relation R_cpx({X_j}, Y); {X_j} are the input nodes, {Z_k} are the free-variable nodes, and Y is the output node.
For a strict logical query, if for any R_cpx({X_j}, Y) there exists (Z_1, · · · , Z_K) that makes ∧_{i=1}^n R_i true, then we can draw the conclusion ∧_{i=1}^n R_i → R_cpx. However, because a KG is usually noisy and incomplete, for learning logical rules our key insight is to design the score as the number of free-variable tuples (Z_1, · · · , Z_K) that make ∧_{i=1}^n R_i true, which can capture the correlation between logical rules and the input-output pairs of a logical query.
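As a toy illustration of this counting score, the sketch below enumerates the free-variable tuples that satisfy a rule body on a miniature KG encoded as 0/1 adjacency matrices. The entities, relations and facts are invented purely for illustration; real KGs would use sparse matrices.

```python
import numpy as np

n_ent = 5
# Toy KG: entities {0: Ann, 1: Bob (people), 2, 3, 4 (books)}.
A_read = np.zeros((n_ent, n_ent))      # read[book, person]: person read book
A_read[2, 0] = A_read[2, 1] = A_read[3, 0] = A_read[3, 1] = 1
A_friend = np.zeros((n_ent, n_ent))
A_friend[0, 1] = A_friend[1, 0] = 1    # Ann and Bob are friends
A_read_inv = A_read.T                  # inverse relation read(inv)

def score(X, Y):
    """s_r(X, Y): number of free-variable tuples (Z1, Z2) satisfying
    read(X,Z1) ∧ read(X,Z2) ∧ friend(Z1,Z2) ∧ read(inv)(Z1,Y) ∧ read(inv)(Z2,Y),
    i.e. the graph-like rule of Figure 1 (two friendly common readers)."""
    total = 0
    for z1 in range(n_ent):
        for z2 in range(n_ent):
            total += (A_read[X, z1] * A_read[X, z2] * A_friend[z1, z2]
                      * A_read_inv[z1, Y] * A_read_inv[z2, Y])
    return int(total)

print([score(2, y) for y in range(n_ent)])          # books 2 and 3 score 2
# Task 3 flavor: rank candidate answers Y for input book X = 2 by the score.
print(sorted(range(n_ent), key=lambda y: -score(2, y))[:2])
```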
For example, for the case in the middle of Figure 1, R_study_at(X_1, Z) ∧ R_study_at(X_2, Z) ∧ R_address_of(Z, Y) → R_cpx(X_1, X_2, Y); for the case on the right of Figure 1, we have R_read(X, Z_1) ∧ R_read(X, Z_2) ∧ R_friend(Z_1, Z_2) ∧ R_read(inv)(Z_1, Y) ∧ R_read(inv)(Z_2, Y) → R_cpx(X, Y). Note that R_cpx can be either a relation that exists in the KG or a human-defined logical rule for a query, which tends to be more complex. The score s_r serves two roles: (i) when input-output pairs are given, it measures how likely a logical rule is, which corresponds to the scenarios of Task 1 and Task 2; (ii) when the logical rule for a query and the inputs are given, it measures how well an output node fits the query, which corresponds to Task 3.
Task 1 (Relation Inference) Given that R_cpx({X_j}, Y) is satisfied and a logical structure composed of G = {e_1(V_1, V'_1), e_2(V_2, V'_2), · · · }, we need to infer how to assign a relation R_i to each edge e_i to form a logical rule ∧_{i=1}^n R_i(V_i, V'_i) that makes the score s_r of R_cpx({X_j}, Y) high.
For this task, previous relation inference works (Yang et al., 2017; Sadeghian et al., 2019) can also be applied, but they limit G to be chain-like. We model the relation behind the input-output pairs of the query as R_cpx and infer its graph-like logical rule.
Task 2 (Structural Rule Learning) Given that R_cpx({X_j}, Y) is satisfied and the maximum possible number of nodes n̂ ≥ n_e, where n_e is the size of {{X_j}, Y}, we need to infer both the structure G = {e_1(V_1, V'_1), e_2(V_2, V'_2), · · · , e_n(V_n, V'_n)}, where n_e ≤ n ≤ n̂, and the relations ∧_{i=1}^n R_i assigned on the edges that make the score s_r high.
For this task, the logical structures in previous works (Yang et al., 2017; Sadeghian et al., 2019; Yang & Song, 2020) are limited to chains or trees, and the number of input entities is limited to 1. In contrast, we can infer both the logical structure and the relations assigned on the edges for graph-like rules.
Task 3 (Logical Query) Given input nodes {X_j} and the query relation, the target nodes of the query can be represented by q = {Y | R_cpx({X_j}, Y)}.
Note that in previous works (Hamilton et al., 2018; Ren et al., 2020) the logical rule R_cpx = ∧_{i=1}^n R_i is given; different from those works, we need to infer ∧_{i=1}^n R_i for the logical query. Our model targets the inference of complex logical rules, and uses the inferred logical rules to conduct logical queries as the evaluation task. For evaluation, we regard Task 3 as the main task and the other two tasks as side products." }, { "heading": "3 RELATED WORKS", "text": "" }, { "heading": "3.1 LOGICAL QUERY FROM KNOWLEDGE GRAPHS", "text": "Logical rule learning (Teru et al., 2020; Evans & Grefenstette, 2018; Manhaeve et al., 2018; Wang et al., 2019; Ho et al., 2018) aims to learn logical rules (Task 1) for logical queries (Task 3) in an inductive setting. Neural-LP (Yang et al., 2017) designs an end-to-end differentiable framework to learn the probabilities of different logical rules. Furthermore, DRUM (Sadeghian et al., 2019) improves Neural-LP (Yang et al., 2017) by introducing low-rank matrix decomposition. However, these two works can only tackle chain-like logical rules. Different from our model, they mainly focus on relatively simple logical rules such as chain-like or tree-like rules.
To the best of our knowledge, our model is the first one that can learn to infer graph-like complex logical rules, including both the structure and the relations assigned on different edges.
Logical queries (Serge et al., 1995) aim to learn how to accurately query an entity (Task 3) according to given input entities and relations representing the logical rules, in a transductive setting.
According to Task 3, the logical rules representing the semantics of a query are explicitly given at both training and testing stages in this branch of works, but in our paper the logical rules must be inferred in the training stage. For most of these works, the main idea is to project entities into an embedding space (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2018; Balažević et al., 2019) and transform the relations into a type of manipulation in that embedding space, such as a linear projection. Hamilton et al. (2018) first propose an embedding-based method for conducting queries with tree-like logical rules. Ren et al. (2020) further improve Hamilton et al. (2018) by modeling entities as box embeddings rather than vector embeddings, which is more natural for manipulating the conjunction of sets. Different from our model, these methods require explicitly given logical structures with given relations on the edges." }, { "heading": "3.2 DIFFERENTIABLE INDUCTIVE LOGIC PROGRAMMING", "text": "Inductive Logic Programming (ILP) (Shapiro, 1981) aims to conduct inductive logic reasoning or theorem proving based on entities, predicates and formulas. Predicates are functions projecting one or more entities to 0 or 1. For example, isMale(X) returns whether X is male (1) or not (0), and isFatherOf(Y, X) returns whether Y is the father of X (1) or not (0), where X and Y are variables which we can instantiate as entities. We can then define formulas by combining predicates with the logical operations and/or/not. For example, isMale(X) ∧ isFatherOf(Y, X) → isSonOf(X, Y) is a logical entailment, composed of a body formula isMale(X) ∧ isFatherOf(Y, X) and a head formula isSonOf(X, Y). The physical meaning of this entailment is that if the body formula is satisfied (equals 1), then we can draw the conclusion in the head formula. ILP is an NP-hard problem (Zhang et al., 2019b), and traditional methods relying on hard matching (Galárraga et al., 2015) for ILP have high computational complexity due to the very large search space.
Markov Logic Networks (MLN) (Richardson & Domingos, 2006; Kok & Domingos, 2005) elegantly combine the ILP problem and probabilistic models, defining potential functions in a Markov random field with formulas as nodes. Different from that, we consider the logical rule as a graph with entities as nodes. In recent years, many works on differentiable ILP (Rocktäschel & Riedel, 2017; Minervini et al., 2018; 2020) have been proposed. The most recent work is NLIL (Yang & Song, 2020), which targets learning logic rules based on unary and binary predicates efficiently. When we only focus on binary predicates, we can naturally conduct ILP on a KG, because the relations in a KG can be regarded as binary predicates on two entities (exist: 1; not exist: 0). If we only use NLIL (Yang & Song, 2020) for binary predicates (relations in a KG), then it can only tackle chain-like rules (same as Neural-LP (Yang et al., 2017) and DRUM (Sadeghian et al., 2019)) instead of graph-like rules. To the best of our knowledge, differentiable ILP methods cannot handle graph-like rules.
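For intuition, binary predicates over a KG can be encoded as 0/1 matrices, and a body formula then becomes a product of their entries. A toy sketch, with invented entities, of the entailment just described:

```python
import numpy as np

# Entities {0: Tom, 1: Bob, 2: Ann} (hypothetical names for illustration).
isFatherOf = np.zeros((3, 3))
isFatherOf[1, 0] = 1                 # Bob is the father of Tom
isMale = np.array([1, 1, 0])         # unary predicate as a 0/1 vector

# Body of isMale(X) ∧ isFatherOf(Y, X) -> isSonOf(X, Y), evaluated for
# every (X, Y) pair at once: isSonOf[X, Y] = isMale[X] * isFatherOf[Y, X].
isSonOf = isMale[:, None] * isFatherOf.T
print(isSonOf[0, 1])                 # 1.0: Tom is the son of Bob
```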
}, { "heading": "4 THE PROPOSED METHOD", "text": "The logical graph structure and the relations assigned on its edges construct a huge discrete space, which is extremely hard to search (Task 1 and Task 2). The key idea of our method is to design a score function modeling both the logical structure and the relations assigned on edges; by optimizing it we can obtain complex rules and conduct logical queries (Task 3).
• First, we introduce how to represent the score s_r when the logical structure is given, to estimate how likely a logical rule is; by maximizing it we can infer relations (Task 1).
• Second, we introduce how to merge the structural information into the score, to uniformly obtain the logical structure and the relations on edges by optimizing one score (Task 2).
• Finally, we provide a uniform and elegant expression of the score s_r in matrix form by Einsum for all graph-like logical rules and exploit a cross-entropy loss to optimize our model, which can further support logical queries (Task 3)." }, { "heading": "4.1 RELATION INFERENCE FOR GRAPH-LIKE LOGICAL RULES", "text": "Given input nodes {X_j}, target node Y and a structure G as shown in Figure 1, we use the number of tuples ({Z_k}) that satisfy ∧_{i=1}^n R_i as the score to evaluate the plausibility of a logical rule. We denote the adjacency matrix of relation R_i as A_i, which is a soft choice among the adjacency matrices Ā_k corresponding to all relations R in the KG. Then the score can be represented as
s_r({X_j}, Y) = Σ_{Z_1=1}^{|V|} · · · Σ_{Z_K=1}^{|V|} ∏_{i=1}^{n} A_i[V_i, V'_i],   (1)
where V_i, V'_i ∈ {{X_j}, Y, {Z_k}}, A_i[V_i, V'_i] denotes the (V_i, V'_i)-indexed entry of A_i, and A_i is defined as
A_i = Σ_{k=1}^{|R|} Ā_k · exp(β_ik) / Σ_{k'=1}^{|R|} exp(β_ik'),   (2)
where Ā_k represents the k-th relation in the set of relations in the KG; the coefficients {β_ik} on the different relations are learnable parameters with which we learn which relation should be assigned to each edge, given the logical structure. For the right case in Figure 1, Eq. (1) becomes
s_r({X_j}, Y) = Σ_{Z_1=1}^{|V|} Σ_{Z_2=1}^{|V|} A_1[X, Z_1] A_2[X, Z_2] A_3[Z_1, Z_2] A_4[Z_1, Y] A_5[Z_2, Y].   (3)
Intuitively, in Eq. (3), after assigning an entity to each free variable Z, A_1[X, Z_1] = 1 means there is relation R_1 between X and Z_1; the product of such terms equals 1 only if all of them equal 1, which means the free variables Z_1 and Z_2 together with X and Y satisfy the logical rule ∧_{i=1}^n R_i. We sum over Z_1, . . . , Z_K to count the tuples (Z_1, . . . , Z_K) in the full KG that satisfy the logical rule, which measures its plausibility." }, { "heading": "4.2 STRUCTURAL RULE LEARNING", "text": "In realistic logical inference scenarios (Yang & Song, 2020; Sadeghian et al., 2019), besides the relations, we also do not know the structure of the logical rule G = {e_1(V_1, V'_1), e_2(V_2, V'_2), · · · }. Thus, we need to infer the logical structure as well as the relations. To achieve this goal, we add two special auxiliary relations to the adjacency matrices: a “removing” relation represented by the full-one matrix 1 (all entries are 1) and a “merging” relation represented by the identity matrix I (the diagonal entries are 1 and the others are 0). The physical meaning of the identity matrix I is merging two connected nodes into one node in the logical rule, because if I[V_i, V'_i] = 1 then V_i = V'_i, i.e., they are the same entity in the KG; the full-one matrix 1 removes the edge from the logical rule, because for any V_i and V'_i we have 1[V_i, V'_i] = 1, i.e., there is no relation requirement between V_i and V'_i.
We expand the parameters {β_ik} to cover these two matrices and update the softly selected adjacency matrix A_i of Eq. (1) as follows,
A_i = Σ_{k=1}^{|R|+2} Ā+_k · exp(β_ik) / Σ_{k'=1}^{|R|+2} exp(β_ik'),   (4)
where Ā+_k represents the k-th relation in the augmented set of relations, consisting of the original relations in the KG and the two auxiliary relations. By merging nodes and removing edges, which correspond to learning large coefficients on one of these two relations, we can obtain any graph structure from a complete graph whose number of nodes is no less than that of the ground-truth graph.
Theorem 4.1 Given adjacency matrices consisting of the original adjacency matrices and two auxiliary adjacency matrices, the identity matrix I and the full-one matrix 1, and assuming there are m̂ ≥ m points constructing the complete graph, then for any logical rule ∧_{i=1}^n R_i there exists a suite of parameters {β_jk} that makes Eq. (1) equal to the number of (Z_1, . . . , Z_K) that satisfy the logical rule." }, { "heading": "4.3 TRAINING ALGORITHM", "text": "For chain-like and tree-like rules, Eq. (1) can be efficiently computed with matrix and element-wise products (see Section 4.4). But for more general graph-like rules, Eq. (1) cannot be directly computed in such a compact way. Here, we introduce Einsum (see Appendix A for more details) to make this possible. Einsum is a flexible convention of matrix calculation that unifies matrix multiplication, element-wise multiplication and some more complex matrix/tensor calculations that cannot be expressed by these ordinary operators (e.g., calculating the scores for graph-like rules). Specifically, we express Eq. (1) in Einsum format as follows,
s_r({X_j}, Y) = einsum('X_1, ..., V_1 V'_1, ..., V_n V'_n, Y', v_X1, · · · , A_1, . . . , A_n, v_Y),   (5)
where s_r({X_j}, Y) denotes the score of the pair ({X_j}, Y), and v_X and v_Y are the one-hot vectors of the input and output entities.
Such a convention has two advantages: (i) we can uniformly represent all graph-like rules in matrix form; (ii) well-engineered libraries such as NumPy (https://numpy.org/doc/stable/reference/generated/numpy.einsum.html) and PyTorch (https://pytorch.org/docs/stable/generated/torch.einsum.html) can be exploited for fast computation (Daniel et al., 2018).
Finally, we need a loss function that not only encourages positive samples but also penalizes negative samples, which allows our model to learn logical rules accurately. Thus, we first calculate ŝ_r({X_j}, Y) = s_r({X_j}, Y) / Σ_{Y'∈V} s_r({X_j}, Y'), i.e., the normalized score for each entity, and then optimize the objective function composed of the cross-entropy loss as follows,
arg min_{β_jk} Σ_{({X_j},Y)∈D} Σ_{Y'∈V} −I[Y' = Y] log(ŝ_r({X_j}, Y')),   (6)
where I[·] is an indicator function, which returns 1 if the statement is true and otherwise returns 0. We optimize this loss, which is back-propagated through the Einsum calculation, to learn the parameters {β_jk} with high interpretability in an end-to-end differentiable manner. The training process is summarized in Algorithm 1.
Algorithm 1 The training process of our model for logical inference.
Require: a set D of training data {({X_j}, R_cpx, Y)}, the KG, max node number n̂ ≥ n_e, max_step;
1: initialize β_ik, i = 1, . . . , n, k = 1, . . . , |R| + 2; step = 0;
2: while step < max_step do
3:   sample a mini-batch D_batch ⊆ D;
4:   for each ({X_j}, R_cpx, Y) ∈ D_batch do
5:     update the parameters {β_ik} based on the loss function Eq. (6);
6:   end for
7:   step ← step + 1
8: end while
9: return logical rules ∧_{i=1}^n R_i, n_e ≤ n ≤ n̂.
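A minimal end-to-end sketch in the spirit of Eq. (4)-(6) and Algorithm 1 follows. This is our own PyTorch illustration, not the authors' released code: the random toy KG, the fixed rule structure of Figure 1 (right), the query pairs, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_ent, n_rel = 50, 6                              # illustrative sizes
A_kg = (torch.rand(n_rel, n_ent, n_ent) < 0.05).float()
# Augmented relation set of Eq. (4): KG relations + identity ("merging")
# + all-ones ("removing").
A_bar = torch.cat([A_kg, torch.eye(n_ent)[None], torch.ones(1, n_ent, n_ent)])
beta = torch.nn.Parameter(torch.zeros(5, n_rel + 2))   # one row per edge
opt = torch.optim.Adam([beta], lr=0.1)

def scores_all_Y(x):
    """Eq. (1)/(5) for the structure of Figure 1 (right), for all Y at once:
    s_r(x, Y=l) = sum_{j,k} A1[x,j] A2[x,k] A3[j,k] A4[j,l] A5[k,l]."""
    A = torch.einsum('ek,kuv->euv', F.softmax(beta, dim=-1), A_bar)  # Eq. (4)
    return torch.einsum('j,k,jk,jl,kl->l', A[0, x], A[1, x], A[2], A[3], A[4])

def loss_fn(x, y):
    """Eq. (6): cross-entropy on the normalized score s_hat."""
    s = scores_all_Y(x) + 1e-9                    # small constant avoids log(0)
    return -torch.log(s[y] / s.sum())

# One pass of Algorithm 1 over a made-up batch of query pairs (x, y).
for x, y in [(0, 1), (2, 3), (4, 5)]:
    opt.zero_grad()
    loss_fn(x, y).backward()
    opt.step()
print(F.softmax(beta, dim=-1).detach().round(decimals=2))  # learned weights
```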
Computational Complexity. Naturally, the search space of structural logical rules is very large, and searching it is an NP-hard problem (Galárraga et al., 2015). Our method constructs a continuous space to estimate the logical rules and optimizes it in a differentiable manner, which significantly reduces the complexity to O(|V|^(K+1)), where |V| is the number of entities in the KG and K is the number of free-variable nodes." }, { "heading": "4.4 CASE STUDY: COMPARISON WITH EXISTING METHODS", "text": "We take the three real-world cases in Figure 1 to further show how our graph-like rule learning method generalizes previous chain-/tree-like ones. To the best of our knowledge, our model is the first that can infer graph-like rules.
• Chain-like rule. The left case in Figure 1 can be represented by s_r({X_j}, Y) = einsum('k, jk, ij, i', v_X, A_friend, A_supervisor, v_Y). This degenerates to the matrix-multiplication form (Yang et al.,
2017; Sadeghian et al., 2019) as follows: s_r({X_j}, Y) = v_Y^T A_supervisor A_friend v_X, where v_X and v_Y are the one-hot vectors of the input entity and the output entity.
• Tree-like rule. The middle case in Figure 1 can be represented by s_r({X_j}, Y) = einsum('i, j, ik, jk, kl, l', v_X1, v_X2, A_study_at, A_study_at, A_address_of, v_Y). This degenerates to a combination of matrix multiplication and element-wise product; for the case in the middle of Figure 1, s_r({X_j}, Y) = v_Y^T A_address_of (A_study_at v_X2 ∘ A_study_at v_X1), where ∘ denotes the element-wise product, v_X1 and v_X2 are the one-hot vectors of the two inputs, and v_Y is the one-hot vector of the tail. EQB (Hamilton et al., 2018) and Query2box (Ren et al., 2020) require Y to be at the end of the logical rule and {Z_k} in the middle, because they transform the logical rule into a computational flow. Our method has no requirements on the number of input nodes or on the positions of Y and {Z_k}.
• Graph-like rule. The right case in Figure 1 can be represented by s_r({X_j}, Y) = einsum('i, ij, ik, jk, jl, kl, l', v_X, A_read, A_read, A_friend, A_read(inv), A_read(inv), v_Y). This cannot be simplified using only a combination of matrix multiplication and element-wise addition/product. All graph-like logical rules can be expressed by Einsum uniformly." }, { "heading": "5 EXPERIMENTS", "text": "We conduct extensive experiments on real-world datasets to compare our performance on logical query (Task 3) in Section 5.2. Furthermore, we also demonstrate that our model is able to infer the relations (Task 1) and learn the structural logical rules (Task 2) with high quality in Section 5.3 and Section 5.4." }, { "heading": "5.1 EXPERIMENT SETUP", "text": "We implement our model in Python using the PyTorch library and optimize all models with the Adam optimizer (Kingma & Ba, 2015).
Datasets. We use the Kinship, Family and Unified Medical Language System (UMLS) datasets (Kok & Domingos, 2007) to evaluate our model's ability to learn some representative logical rules for logical queries. Furthermore, we use Douban and Yelp to evaluate our model's ability to learn complex graph-like logical rules for logical queries. We report more details in Section C.1.
Query Generation.
We carefully design five representative query structures, as shown in Figure 2, including the right example in Figure 1, all of which do exist in real-world datasets.
We choose 2-chain (2c) because it is a basic chain rule, and a comparison on it helps us understand how our model works in the most common cases. We choose 2-intersection (2i) because it is a typical tree-like logical rule that many existing works cannot handle, since most of them only allow one input or are limited to chain-like rules. Only complex logical query methods (Ren et al., 2020) can tackle this case. We choose 2-intersection-without-an-input (2iw) because it is a special type of tree-like logical rule in which the positions of the output node and the free-variable node are unusual. To test more complex graph-like rules, we choose a triangle (tri) structure, which cannot be modeled as a chain-/tree-like rule. Finally, we choose a structure of two chains with a bridge (2cb), the same as the right case in Figure 1. All these query structures have corresponding realistic semantics. For query structures 2c, 2i and 2iw, we extract the top 5 most frequent query types (∧_{i=1}^n R_i) and then follow the logical rules to randomly generate 1000 input-output pairs for each query structure. For query structure tri, we use the query type represented by “Who is X's friend and meanwhile has a co-reading book with X?”. For query structure 2cb, we use the query semantics “Which book has two common readers with the book X while the two readers are friends”, as shown in Figure 1. We split the datasets (queries) into training, validation and testing sets with the ratio 2 : 1 : 1. We use the training set to learn the parameters of our model, the validation set to decide when to perform early stopping, and the testing set to evaluate our model.
Comparing Methods. We compare our method on logical query with rule mining methods with high interpretability, Neural-LP (Yang et al., 2017) and DRUM (Sadeghian et al., 2019), and with the state-of-the-art embedding-based logical query method (Ren et al., 2020) that can handle tree-like logical rules. To fairly compare the expressive ability for logical rules, we remove the embedding information and the corresponding neural network generating coefficients in the rule mining baselines (Yang et al., 2017; Sadeghian et al., 2019); instead, we set the coefficients on relations as learnable parameters.
Evaluation Metrics. We use Mean Reciprocal Rank (MRR) and Hit Rate at k (k = 1, 3) as evaluation metrics (see Section C.2 for more details)." }, { "heading": "5.2 PERFORMANCE COMPARISON", "text": "We compare our model with other methods on three real-world datasets in terms of the three relatively simple but representative query structures (2c, 2i, 2iw), as shown in Table 2. Our method is the only one that can tackle all three query structures. We can observe that, for query structure 2c, our model achieves performance better than or comparable to Neural-LP and DRUM, because these two methods are specifically designed for chain-like rules. However, they cannot handle query structure 2i because they only allow single-input queries. Furthermore, they cannot learn 2iw correctly, because they require the output entity to be at the end of the chain rule. Their learned chain rules for 2iw are inaccurate or totally wrong, so our model improves performance on the 2iw query type by a large margin compared to them.
Query2box is designed for handling missing relations in complex queries, and it performs poorly on these datasets. Furthermore, Query2box relies on entity embeddings, so it cannot handle unseen entities, but ours can.
For more complex logical rules (tri, 2cb), we conduct experiments on two real-world datasets from the recommendation-system domain, Douban and Yelp, whose statistics are reported in Table 6. The performances of Neural-LP and DRUM are poor because their learned chain rules are far from the graph-like rules, as shown in Figure 3. Here, we do not compare with Query2box because it cannot work on such graph-like logical rules.
The results reported in Table 3 show that our model is the best at learning such graph-like rules, which cannot be accurately modeled by methods for chain-like rules such as Neural-LP or DRUM. As we will discuss in Section 5.3, our model learns completely correct logical rules, which is why it achieves such good performance. The running time results are shown in Table 4; we can observe that our running time is comparable to Neural-LP and DRUM." }, { "heading": "5.3 CASE STUDY", "text": "Furthermore, we check whether the learned rules are the same as the ground-truth rules. As mentioned, we learn a set of weights, represented by {β_ik}, for the different relations assigned on edges, where the ‘merging’ relation corresponding to the identity matrix I means merging two connected nodes and the ‘removing’ relation corresponding to the full-one matrix 1 means removing the edge (no rule requirement). We visualize the weights representing the learned logical rules in Figure 3. From that, we can observe that most of the relations are learned correctly with very high confidence, and the model assigns high confidence to removing one edge (the auxiliary relation represented by the full-one matrix), which means our model has the ability to both infer the relations and learn the logical structures." }, { "heading": "5.4 ABLATION STUDY", "text": "To demonstrate the effectiveness of the auxiliary matrices I and 1, we conduct experiments in which we train the model with both matrices, with only one of them, or with neither.
As shown in Table 5, we can observe that for case 2c, the model with both matrices achieves the best performance, which suggests the effectiveness of these two matrices. For case 2i, the model with the matrix 1 achieves the best performance, because the model with 1 already has enough expressive ability to model case 2i, while more matrices lead to more parameters and make learning harder. For the same reason, for case 2iw, the model with the matrix 1 and the model with both matrices achieve similar performance. For the most complex cases, we need both auxiliary matrices to accurately express the complex logical rules." }, { "heading": "6 CONCLUSION", "text": "We propose a uniform score that not only unifies existing logical query and inference works but also tackles more complex graph-like logical rules. Furthermore, we exploit Einsum to elegantly express this score function and optimize our model in an end-to-end differentiable manner, which can learn both the logical structure and the relations assigned on edges. Finally, we conduct extensive experiments on real-world datasets to demonstrate the effectiveness of our model on logical query and show that our model can yield high-quality complex logical rules with interpretability.
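As a supplement to the evaluation protocol above (the MRR and hit@k definitions appear in Appendix C.2), a minimal metric sketch follows; the strict-ranking convention (counting only entities scored strictly higher than the answer) is our assumption.

```python
import numpy as np

def mrr_and_hits(score_rows, true_idx, ks=(1, 3)):
    """score_rows: (n_queries, n_entities) array of s_r scores;
    true_idx: index of the correct entity Y for each query."""
    gold = score_rows[np.arange(len(true_idx)), true_idx]
    # rank(Y) = 1 + number of entities scored strictly higher than Y.
    ranks = 1 + (score_rows > gold[:, None]).sum(axis=1)
    out = {'MRR': float(np.mean(1.0 / ranks))}
    for k in ks:
        out[f'hit@{k}'] = float(np.mean(ranks <= k))
    return out

scores = np.random.rand(100, 50)                 # toy scores
truth = np.random.randint(0, 50, size=100)       # toy ground truth
print(mrr_and_hits(scores, truth))
```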
}, { "heading": "A EINSUM", "text": "Einsum (Åhlander, 2002) is a mathematical notational convention that is elegant and convenient for expressing summation and multiplication of vectors, matrices, and tensors. It can represent calculations over multiple matrices that cannot be expressed by the matrix product or the element-wise product alone. Specifically, it not only makes the expression of complex logical rules very simple and elegant but also makes the implementation of the calculation simpler and more efficient. Here, we introduce the implicit-mode Einsum because it can tackle more cases than the classical one, and this function is implemented in many widely used libraries such as NumPy and PyTorch. It takes a string representing the equation and a series of tensors as input, and produces a tensor as output. The rules of Einsum are as follows:
• the input string is comma-separated into a series of labels, where each term separated by commas represents a tensor (or matrix, vector) and each letter corresponds to a dimension;
• the terms in the string before '→' denote the input tensors, and the term after '→' denotes the output tensor;
• the same label appearing in multiple inputs means element-wise multiplication over the corresponding dimensions of the different input tensors;
• dimensions that occur in the inputs but not in the output are summed out, while the others remain in the output. When there is no dimension after '→', all dimensions are summed to produce a scalar, and in that case '→' can be omitted.
Here we provide several simple examples to aid understanding:
• einsum('i, i → ', a, b) or einsum('i, i', a, b) represents the inner product of two vectors, aᵀb;
• einsum('i, j → ij', a, b) represents the outer product of two vectors, abᵀ;
• einsum('ij, ij → ij', A, B) represents the element-wise product of two matrices, A ∘ B;
• einsum('ij, jk → ik', A, B) represents the matrix product of two matrices, AB.
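The four examples can be checked directly in NumPy; a quick verification sketch:

```python
import numpy as np

a, b = np.random.randn(4), np.random.randn(4)
A, B = np.random.randn(4, 4), np.random.randn(4, 4)

assert np.isclose(np.einsum('i,i', a, b), a @ b)                # inner product a^T b
assert np.allclose(np.einsum('i,j->ij', a, b), np.outer(a, b))  # outer product a b^T
assert np.allclose(np.einsum('ij,ij->ij', A, B), A * B)         # element-wise product
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)         # matrix product
print("all four Einsum identities hold")
```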
Finally, for Ri(Vi, V ′i ) in the ground-truth logical rule, we set the coefficient of the correct relation on the corresponding edge as 1 and others as 0.\nAfter all these manipulations, we can get that sr({Xj}, Y ) = ∑|V| Z1=1 · · · ∑|V| ZK=1 ∏|Ec| i=1 Ai[Vi, V ′ i ], (7)\nwhere\nAi = ∑|R|\nk=1\n( exp(βik)/ ∑|R| k′=1 exp(βik′) ) Ā+k ,\nbecomes sr({Xj}, Y ) = ∑|V| Z1=1 · · · ∑|V| ZK=1 ∏n i=1 Ai[Vi, V ′ i ], (8)\nwhere\nAi = ∑|R|+2\nk=1\n( exp(βik)/ ∑|R|+2 k′=1 exp(βik′) ) Āk,\nwhich means the Eq. (7) becomes the number of (Z1, . . . , ZK) that satisfies the logical rule. Note that, this is not the only way to construct the parameters to achieve this goal." }, { "heading": "C EXPERIMENTS", "text": "C.1 DATASET\nThe statistics of five real-world datasets are reported in Table 6.\nC.2 EVALUATION METRICS.\nFor each query Rcpx({Xj}, Y ), we can calculate scores sr for all entities by our model. We sort the entities in a descending order of score and denote the rank of a right entity Y as rank(Y ). Then, for each entity Y we calculate MRR as follows,\nMRR = 1 |Dtest| ∑ ({Xj},Y )∈Dtest\n1\nrank(Y )\nand Hit Rate at k as follows,\nhit@k = 1 |Dtest| ∑ ({Xj},Y )∈Dtest I [rank(Y ) ≤ k] ,\nwhere I[·] is an indicator function, which returns 1 if the statement is true and otherwise returns 0.\nC.3 LINK PREDICTION\nWe conduct the experiment of link prediction on three datasets Kinship, UMLS and Family following the setting in Sadeghian et al. (2019). To be fair, we remove the embedding information in logical rule mining methods to purely compare the expressive ability about logic rules. We test all the methods for chain-like rules(Neural-LP (Yang et al., 2017), DRUM (Sadeghian et al., 2019) for more complex rule learning with only one input and one output, although they may learn inaccurate or wrong rules. We further add GraIL (Teru et al., 2020), the state-of-the-art method designed for inductive link prediction, as a baseline. As shown in Table 7, we can observe that our model achieves comparable performance compared with existing works." } ]
2020
null
SP:ad96575881588cd2566d2c9c589882a6db9b3874
[ "This paper studies meta-learning in the mixed linear regression setting, focusing on the effect of the within-task step-size on performance. For over-parameterized, under-parameterized, and NTK regimes they derive expressions for test-time loss that suggest that negative or close-to-zero learning rates are optimal, and provide experiments that closely match these results. However, some aspects of the mathematical approach are unclear, and the work's impact is limited without an investigation of the consequences of the analysis." ]
Deep learning models require a large amount of data to perform well. When data is scarce for a target task, we can transfer the knowledge gained by training on similar tasks to quickly learn the target. A successful approach is meta-learning, or learning to learn a distribution of tasks, where learning is represented by an outer loop, and to learn by an inner loop of gradient descent. However, a number of recent empirical studies argue that the inner loop is unnecessary and more simple models work equally well or even better. We study the performance of MAML as a function of the learning rate of the inner loop, where zero learning rate implies that there is no inner loop. Using random matrix theory and exact solutions of linear models, we calculate an algebraic expression for the test loss of MAML applied to mixed linear regression and nonlinear regression with overparameterized models. Surprisingly, while the optimal learning rate for adaptation is positive, we find that the optimal learning rate for training is always negative, a setting that has never been considered before. Therefore, not only does the performance increase by decreasing the learning rate to zero, as suggested by recent work, but it can be increased even further by decreasing the learning rate to negative values. These results help clarify under what circumstances meta-learning performs best.
[ { "affiliations": [], "name": "Alberto Bernacchia" } ]
[ { "authors": [ "Madhu S. Advani", "Andrew M. Saxe" ], "title": "High-dimensional dynamics of generalization error in neural networks. arXiv:1710.03667 [physics, q-bio, stat], October 2017", "venue": "URL http: //arxiv.org/abs/1710.03667", "year": 2017 }, { "authors": [ "Yu Bai", "Minshuo Chen", "Pan Zhou", "Tuo Zhao", "Jason D. Lee", "Sham Kakade", "Huan Wang", "Caiming Xiong" ], "title": "How Important is the Train-Validation Split in Meta-Learning? arXiv:2010.05843 [cs, stat", "venue": "URL http://arxiv.org/abs/2010.05843", "year": 2021 }, { "authors": [ "Luca Bertinetto", "João F. Henriques", "Philip H.S. Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers. arXiv:1805.08136 [cs, stat], July 2019", "venue": "URL http:// arxiv.org/abs/1805.08136", "year": 2019 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A Closer Look at Few-shot Classification. arXiv:1904.04232 [cs], January 2020a", "venue": "URL http: //arxiv.org/abs/1904.04232", "year": 1904 }, { "authors": [ "Yinbo Chen", "Xiaolong Wang", "Zhuang Liu", "Huijuan Xu", "Trevor Darrell" ], "title": "A New Meta-Baseline for Few-Shot Learning. arXiv:2003.04390 [cs], March 2020b", "venue": "URL http://arxiv.org/ abs/2003.04390", "year": 2003 }, { "authors": [ "Lenaic Chizat", "Edouard Oyallon", "Francis Bach" ], "title": "On Lazy Training in Differentiable Programming", "venue": "[cs, math],", "year": 2020 }, { "authors": [ "Liam Collins", "Aryan Mokhtari", "Sanjay Shakkottai" ], "title": "Why Does MAML Outperform ERM? An Optimization Perspective. arXiv:2010.14672 [cs, math, stat], December 2020", "venue": "URL http: //arxiv.org/abs/2010.14672", "year": 2010 }, { "authors": [ "Giulia Denevi", "Carlo Ciliberto", "Dimitris Stamos", "Massimiliano Pontil" ], "title": "Learning To Learn Around A Common Mean", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Guneet S. Dhillon", "Pratik Chaudhari", "Avinash Ravichandran", "Stefano Soatto" ], "title": "A Baseline for Few-Shot Image Classification. arXiv:1909.02729 [cs, stat], March 2020", "venue": "URL http: //arxiv.org/abs/1909.02729", "year": 1909 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition", "venue": "ICML, pp", "year": 2014 }, { "authors": [ "Simon S. Du", "Wei Hu", "Sham M. Kakade", "Jason D. Lee", "Qi Lei" ], "title": "Few-Shot Learning via Learning the Representation, Provably", "venue": "URL http: //arxiv.org/abs/2002.09434", "year": 2020 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm", "venue": "[cs],", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400 [cs], March 2017", "venue": "URL http://arxiv.org/ abs/1703.03400", "year": 2017 }, { "authors": [ "Katelyn Gao", "Ozan Sener" ], "title": "Modeling and Optimization Trade-off in Meta-learning", "venue": "URL http://arxiv.org/abs/2010", "year": 2020 }, { "authors": [ "Micah Goldblum", "Steven Reich", "Liam Fowl", "Renkun Ni", "Valeriia Cherepanova", "Tom Goldstein" ], "title": "Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. 
arXiv:2002.06753 [cs, stat], March 2020", "venue": "URL http://arxiv.org/abs/2002.06753", "year": 2002 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "RECASTING GRADIENT-BASED META-LEARNING AS HIERARCHICAL BAYES", "venue": "ICLR, pp", "year": 2018 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. Tibshirani" ], "title": "Surprises in HighDimensional Ridgeless Least Squares Interpolation", "venue": "URL http://arxiv.org/abs/1903.08560", "year": 2019 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439 [cs, stat], April 2020", "venue": "URL http://arxiv.org/ abs/2004.05439", "year": 2004 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural Tangent Kernel: Convergence and Generalization in Neural Networks", "venue": "URL http: //arxiv.org/abs/1806.07572", "year": 2018 }, { "authors": [ "Kaiyi Ji", "Junjie Yang", "Yingbin Liang" ], "title": "Multi-Step Model-Agnostic Meta-Learning: Convergence and Improved Algorithms. arXiv:2002.07836 [cs, math, stat], February 2020", "venue": "URL http: //arxiv.org/abs/2002.07836", "year": 2002 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B. Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling Laws for Neural Language Models. arXiv:2001.08361 [cs, stat", "venue": "URL http://arxiv.org/abs/2001", "year": 2020 }, { "authors": [ "Mikhail Khodak", "Maria-Florina Balcan", "Ameet Talwalkar" ], "title": "Adaptive Gradient-Based MetaLearning Methods. arXiv:1906.02717 [cs, stat], December 2019", "venue": "URL http://arxiv.org/ abs/1906.02717", "year": 1906 }, { "authors": [ "Weihao Kong", "Raghav Somani", "Zhao Song", "Sham Kakade", "Sewoong Oh" ], "title": "Meta-learning for mixed linear regression. arXiv:2002.08936 [cs, stat], February 2020", "venue": "URL http://arxiv", "year": 2002 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel S. Schoenholz", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent", "venue": "URL http://arxiv.org/abs/ 1902.06720", "year": 2019 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-SGD: Learning to Learn Quickly for Few-Shot Learning", "venue": "[cs],", "year": 2017 }, { "authors": [ "Preetum Nakkiran" ], "title": "More Data Can Hurt for Linear Regression: Sample-wise Double Descent. arXiv:1912.07242 [cs, math, stat], December 2019", "venue": "URL http://arxiv.org/abs/1912", "year": 1912 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A Survey on Transfer Learning", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2010 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. arXiv:1909.09157 [cs, stat], February 2020", "venue": "URL http://arxiv.org/abs/1909.09157", "year": 1909 }, { "authors": [ "Jonathan S Rosenfeld", "Amir Rosenfeld", "Yonatan Belinkov", "Nir Shavit" ], "title": "A CONSTRUCTIVE PREDICTION OF THE GENERALIZATION", "venue": "ERROR ACROSS SCALES. 
ICLR,", "year": 2020 }, { "authors": [ "Nikunj Saunshi", "Yi Zhang", "Mikhail Khodak", "Sanjeev Arora" ], "title": "A Sample Complexity Separation between Non-Convex and Convex Meta-Learning", "venue": "URL http://arxiv.org/abs/2002.11172", "year": 2020 }, { "authors": [ "Yonglong Tian", "Yue Wang", "Dilip Krishnan", "Joshua B. Tenenbaum", "Phillip Isola" ], "title": "Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? arXiv:2003.11539 [cs", "venue": "URL http://arxiv.org/abs/2003.11539", "year": 2020 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Utku Evci", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol", "Hugo Larochelle" ], "title": "MetaDataset: A Dataset of Datasets for Learning to Learn from Few Examples", "venue": "URL http://arxiv.org/abs/1903.03096", "year": 2020 }, { "authors": [ "Nilesh Tripuraneni", "Chi Jin", "Michael I. Jordan" ], "title": "Provable Meta-Learning of Linear Representations. arXiv:2002.11684 [cs, stat], February 2020", "venue": "URL http://arxiv.org/abs/2002", "year": 2002 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching Networks for One Shot Learning. arXiv:1606.04080 [cs, stat], December 2017", "venue": "URL http://arxiv.org/abs/1606.04080", "year": 2017 }, { "authors": [ "Haoxiang Wang", "Ruoyu Sun", "Bo Li" ], "title": "Global Convergence and Generalization Bound of Gradient-Based Meta-Learning with Deep Neural Nets. arXiv:2006.14606 [cs, stat], November 2020a", "venue": "URL http://arxiv.org/abs/2006.14606", "year": 2006 }, { "authors": [ "Lingxiao Wang", "Qi Cai", "Zhuoran Yang", "Zhaoran Wang" ], "title": "On the Global Optimality of ModelAgnostic Meta-Learning", "venue": "[cs, stat], June 2020b. URL http://arxiv", "year": 2006 }, { "authors": [ "Greg Yang", "Edward J. Hu" ], "title": "Feature Learning in Infinite-Width Neural Networks. arXiv:2011.14522 [cond-mat], November 2020", "venue": "URL http://arxiv.org/abs/2011. 14522", "year": 2011 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "[cs],", "year": 2014 }, { "authors": [ "Yufan Zhou", "Zhenyi Wang", "Jiayi Xian", "Changyou Chen", "Jinhui Xu" ], "title": "Meta-Learning with Neural Tangent Kernels", "venue": "[cs],", "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Learning models represent the state-of-the-art in several machine learning benchmarks (LeCun et al. (2015)), and their performance does not seem to stop improving when adding more data and computing resources (Rosenfeld et al. (2020), Kaplan et al. (2020)). However, they require a large amount of data and compute to start with, which are often not available to practitioners. The approach of fine-tuning has proved very effective to address this limitation: pre-train a model on a source task, for which a large dataset is available, and use this model as the starting point for a quick additional training (fine-tuning) on the small dataset of the target task (Pan & Yang (2010), Donahue et al. (2014), Yosinski et al. (2014)). This approach is popular because pre-trained models are often made available by institutions that have the resources to train them.\nIn some circumstances, multiple source tasks are available, all of which have scarce data, as opposed to a single source task with abundant data. This case is addressed by meta-learning, in which a model gains experience over multiple source tasks and uses it to improve its learning of future target tasks. The idea of meta-learning is inspired by the ability of humans to generalize across tasks, without having to train on any single task for long time. A meta-learning problem is solved by a bi-level optimization procedure: an outer loop optimizes meta-parameters across tasks, while an inner loop optimizes parameters within each task (Hospedales et al. (2020)).\nThe idea of meta-learning has gained some popularity, but a few recent papers argue that a simple alternative to meta-learning is just good enough, in which the inner loop is removed entirely (Chen et al. (2020a), Tian et al. (2020), Dhillon et al. (2020), Chen et al. (2020b), Raghu et al. (2020)). Other studies find the opposite (Goldblum et al. (2020), Collins et al. (2020), Gao & Sener (2020)). It is hard to resolve the debate because there is little theory available to explain these findings.\nIn this work, using random matrix theory and exact solutions of linear models, we derive an algebraic expression of the average test loss of MAML, a simple and successful meta-learning algorithm (Finn et al. (2017)), as a function of its hyperparameters. In particular, we study its performance as a\nfunction of the inner loop learning rate during meta-training. Setting this learning rate to zero is equivalent to removing the inner loop, as advocated by recent work (Chen et al. (2020a), Tian et al. (2020), Dhillon et al. (2020), Chen et al. (2020b), Raghu et al. (2020)). Surprisingly, we find that the optimal learning rate is negative, thus performance can be increased by reducing the learning rate below zero. In particular, we find the following:\n• In the problem of mixed linear regression, we prove that the optimal learning rate is always negative in overparameterized models. The same result holds in underparameterized models provided that the optimal learning rate is small in absolute value. We validate the theory by running extensive experiments.\n• We extend these results to the case of nonlinear regression and wide neural networks, in which the output can be approximated by a linear function of the parameters (Jacot et al. (2018), Lee et al. (2019)). While in this case we cannot prove that the optimal learning rate is always negative, preliminary experiments suggest that the result holds in this case as well." 
}, { "heading": "2 RELATED WORK", "text": "The field of meta-learning includes a broad range of problems and solutions, see Hospedales et al. (2020) for a recent review focusing on neural networks and deep learning. In this context, metalearning received increased attention in the past few years, several new benchmarks have been introduced, and a large number of algorithms and models have been proposed to solve them (Vinyals et al. (2017), Bertinetto et al. (2019), Triantafillou et al. (2020)). Despite the surge in empirical work, theoretical work is still lagging behind.\nSimilar to our work, a few other studies used random matrix theory and exact solutions to calculate the average test loss for the problem of linear regression (Advani & Saxe (2017), Hastie et al. (2019), Nakkiran (2019)). To our knowledge, our study is the first to apply this technique to the problem of meta-learning with multiple tasks. Our results reduce to those of linear regression in the case of one single task. Furthermore, we are among the first to apply the framework of Neural Tangent Kernel (Jacot et al. (2018), Lee et al. (2019)) to the problem of meta-learning (a few papers appeared after our submission: Yang & Hu (2020), Wang et al. (2020a), Zhou et al. (2021)).\nSimilar to us, a few theoretical studies looked at the problem of mixed linear regression in the context of meta-learning. In Denevi et al. (2018), Bai et al. (2021), a meta-parameter is used to bias the taskspecific parameters through a regularization term. Kong et al. (2020) looks at whether many tasks with small data can compensate for a lack of tasks with big data. Tripuraneni et al. (2020), Du et al. (2020) study the sample complexity of representation learning. However, none of these studies look into the effect of learning rate on performance, which is our main focus.\nIn this work, we focus on MAML, a simple and successful meta-learning algorithm (Finn et al. (2017)). A few theoretical studies have investigated MAML, looking at: universality of the optimization algorithm (Finn & Levine (2018)), bayesian inference interpretation (Grant et al. (2018)), proof of convergence (Ji et al. (2020)), difference between convex and non-convex losses (Saunshi et al. (2020)), global optimality (Wang et al. (2020b)), effect of the inner loop (Collins et al. (2020), Gao & Sener (2020)). Again, none of these studies look at the effect of the learning rate, the main subject of our work. The theoretical work of Khodak et al. (2019) connects the learning rate to task similarity, while the work of Li et al. (2017) meta-learns the learning rate." }, { "heading": "3 META-LEARNING AND MAML", "text": "In this work, we follow the notation of Hospedales et al. (2020) and we use MAML (Finn et al. (2017)) as the meta-learning algorithm. We assume the existence of a distribution of tasks τ and, for each task, a loss function Lτ and a distribution of data pointsDτ = {xτ , yτ}with input xτ and label yτ . We assume that the loss function is the same for all tasks, Lτ = L, but each task is characterized by a different distribution of the data. The empirical meta-learning loss is evaluated on a sample of\nm tasks, and a sample of nv validation data points for each task:\nLmeta (ω;Dt,Dv) = 1\nmnv m∑ i=1 nv∑ j=1 L ( θ(ω;D(i)t );x v(i) j , y v(i) j ) (1)\nThe training set D(i)t = { x t(i) j , y t(i) j } j=1:nt and validation set D(i)v = { x v(i) j , y v(i) j } j=1:nv are drawn independently from the same distribution in each task i. 
The function θ represents the adaptation of the meta-parameter ω, which is evaluated on the training set. Different meta-learning algorithms correspond to different choices of θ; we describe below the choice of MAML (Eq. 3), the subject of this study. During meta-training, the loss of Eq. 1 is optimized with respect to the meta-parameter ω, usually by stochastic gradient descent, starting from an initial point ω0. The optimum is denoted as ω*(Dt, Dv). This optimization is referred to as the outer loop, while the computation of θ is referred to as the inner loop of meta-learning. During meta-testing, a new (target) task is given and θ adapts on a set Dr of nr target data points. The final performance of the model is computed on test data Ds of the target task. Therefore, the test loss is equal to

$$L_{test} = L_{meta}\big(\omega^\star(\mathcal{D}_t, \mathcal{D}_v);\, \mathcal{D}_r, \mathcal{D}_s\big) \qquad (2)$$

In MAML, the inner loop corresponds to a few steps of gradient descent, with a given learning rate αt. In this work we consider the simple case of a single gradient step:

$$\theta(\omega; \mathcal{D}_t^{(i)}) = \omega - \frac{\alpha_t}{n_t} \sum_{j=1}^{n_t} \left.\frac{\partial L}{\partial \theta}\right|_{\omega;\, x^{t(i)}_j,\, y^{t(i)}_j} \qquad (3)$$

If the learning rate αt is zero, then parameters are not adapted during meta-training and θ(ω) = ω. In that case, a single set of parameters is learned across all data and there is no inner loop. However, it is important to note that a distinct learning rate αr is used during meta-testing. A setting similar to this has been advocated in a few recent studies (Chen et al. (2020a), Tian et al. (2020), Dhillon et al. (2020), Chen et al. (2020b), Raghu et al. (2020)).

We show that, intuitively, the optimal learning rate at meta-testing (adaptation) time αr is always positive. Surprisingly, in the family of problems considered in this study, we find that the optimal learning rate during meta-training αt is instead negative. We note that the setting αt = 0 effectively does not use the nt training data points, therefore we could in principle add these data to the validation set, but we do not consider this option here since we are interested in a wide range of possible values of αt as opposed to the specific case αt = 0." }, { "heading": "4 MIXED LINEAR REGRESSION", "text": "We study MAML applied to the problem of mixed linear regression. Note that the goal here is not to solve the problem of mixed linear regression, but to probe the performance of MAML as a function of its hyperparameters.

In mixed linear regression, each task is characterized by a different linear function, and a model is evaluated by the mean squared error loss function. We assume a generative model of the form y = xᵀw + z, where x is the input vector (of dimension p), y is the output (scalar), z is noise (scalar), and w is a vector of generating parameters (of dimension p); therefore p represents both the number of parameters and the input dimension. All distributions are assumed Gaussian:

$$w \sim \mathcal{N}\!\left(w_0, \frac{\nu^2}{p}\, I_p\right), \qquad x \sim \mathcal{N}(0, I_p), \qquad y \,|\, x, w \sim \mathcal{N}(x^T w, \sigma^2) \qquad (4)$$

where Ip is the p × p identity matrix, σ is the label noise, w0 is the task mean and ν represents the task variability. Different meta-training tasks i correspond to different draws of the generating parameters w(i), while the parameters for the meta-testing task are denoted by w′. We denote by superscripts t, v, r, s the training, validation, target and test data, respectively.
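To make the setup of sections 3 and 4 concrete, the following sketch implements the single-step inner loop of Eq. 3 for the squared loss, and samples one task from the generative model of Eq. 4. This is our own illustrative NumPy code, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(p, n, w0, nu, sigma):
    # One task of the mixed linear regression model, Eq. (4).
    w = w0 + (nu / np.sqrt(p)) * rng.standard_normal(p)  # task parameters
    X = rng.standard_normal((n, p))                      # inputs, N(0, I_p)
    y = X @ w + sigma * rng.standard_normal(n)           # noisy labels
    return X, y, w

def inner_step(omega, Xt, yt, alpha_t):
    # One-step MAML adaptation, Eq. (3), for the squared loss
    # L = (y - x^T theta)^2 / 2; the averaged gradient step reduces to
    # the linear update of Eq. (21) in the Appendix.
    n_t = Xt.shape[0]
    return omega + (alpha_t / n_t) * Xt.T @ (yt - Xt @ omega)
```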
A graphical model of data generation is shown in Figure 1.

Using random matrix theory and exact solutions of linear models, we calculate the test loss as a function of the following hyperparameters: the number of training tasks m; the number of data points per task for training (nt), validation (nv) and target (nr); and the learning rate for training αt and for adaptation to the target αr. Furthermore, we have the hyperparameters specific to the mixed linear regression problem: p, ν, σ, w0. Since we use exact solutions to the linear problem, our approach is equivalent to running the outer loop optimization until convergence (see section 7.1 in the Appendix for details). We derive results in two cases: overparameterized p > nvm and underparameterized p < nvm." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 OVERPARAMETERIZED CASE", "text": "In the overparameterized case, the number of parameters p is larger than the total number of validation data across tasks, nvm. In this case, since the data does not fully constrain the parameters, the optimal value of ω found during meta-training depends on the initial condition used for optimization, which we call ω0.

Theorem 1. Consider the algorithm of section 3 (MAML one-step), and the data generating model of section 4 (mixed linear regression). Let p > nvm. Let p(ξ) and nt(ξ) be any functions of order O(ξ) as ξ → ∞. Let |ω0 − w0| be of order O(ξ⁻¹ᐟ⁴). Then the test loss of Eq. 2, averaged over the entire data distribution (see Eq. 27 in the Appendix), is equal to

$$L_{test} = \frac{\sigma^2}{2}\left(1 + \frac{\alpha_r^2 p}{n_r}\right) + h_r\left[\frac{\nu^2}{2}\left(1 + \frac{n_v m}{p}\right) + \frac{1}{2}\left(1 - \frac{n_v m}{p}\right)|\omega_0 - w_0|^2 + \frac{\sigma^2 n_v m}{2p}\,\frac{1 + \alpha_t^2 p / n_t}{h_t}\right] + O(\xi^{-3/2}) \qquad (5)$$

where we define the following expressions:

$$h_t = (1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t} \qquad (6) \qquad\qquad h_r = (1 - \alpha_r)^2 + \alpha_r^2\,\frac{p+1}{n_r} \qquad (7)$$

Proof. The proof of this Theorem can be found in the Appendix, sections 7.3 and 7.3.1.

The loss always increases with the output noise σ and the task variability ν. Overfitting is expressed in Eq. 5 by the term |ω0 − w0|, the distance between the initial condition ω0 of the optimization and the ground-truth mean w0 of the generating model. Adding more validation data nv and tasks m may increase or decrease the loss depending on the size of this term relative to the noise (Nakkiran (2019)), as does reducing the number of parameters p. However, the loss always decreases with the number of data points for the target task nr, as that data only affects the adaptation step.

Our main focus is studying how the loss is affected by the learning rates, during training αt and adaptation αr. The loss is a quadratic and convex function of αr, therefore it has a unique minimum. While it is possible to compute the optimal value of αr from Eq. 5, here we just note that the loss is a sum of two quadratic functions, one with a minimum at αr = 0 and another with a minimum at αr = 1/(1 + (p+1)/nr); therefore the optimal learning rate is in between the two values and is always positive. This is intuitive, since a positive learning rate for adaptation implies that the parameters get closer to the optimum for the target task. An example of the loss as a function of the adaptation learning rate αr is shown in Figure 2a, where we also show the results of experiments in which we run MAML empirically. The good agreement between theory and experiment suggests that Eq. 5 is accurate. However, the training learning rate αt shows the opposite: by taking the derivative of Eq. 5 with respect to αt, it is possible to show that it has a unique absolute minimum for a negative value of αt.
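Since Theorem 1 gives the loss in closed form, it can be evaluated directly. Below is a small helper reproducing Eqs. 5-7 (our own transcription of the formula, with d0 = |ω0 − w0|); sweeping a_t or a_r through it reproduces the curves of Figure 2.

```python
def theorem1_loss(p, m, n_t, n_v, n_r, a_t, a_r, sigma, nu, d0):
    # Average test loss of Theorem 1, Eqs. (5)-(7); valid for p > n_v * m.
    h_t = (1 - a_t) ** 2 + a_t ** 2 * (p + 1) / n_t   # Eq. (6)
    h_r = (1 - a_r) ** 2 + a_r ** 2 * (p + 1) / n_r   # Eq. (7)
    return (sigma ** 2 / 2 * (1 + a_r ** 2 * p / n_r)
            + h_r * (nu ** 2 / 2 * (1 + n_v * m / p)
                     + 0.5 * (1 - n_v * m / p) * d0 ** 2
                     + sigma ** 2 * n_v * m / (2 * p)
                     * (1 + a_t ** 2 * p / n_t) / h_t))
```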
This can be proved by noting that this function takes the same finite value for large positive or negative αt, that its derivative is always positive at αt = 0, and that it has one minimum (−) and one maximum (+) at the values

$$\alpha_t^{\pm} = -\frac{n_t + 1}{2p} \pm \sqrt{\left(\frac{n_t + 1}{2p}\right)^2 + \frac{n_t}{p}} \qquad (8)$$

Note that the argmax αt⁺ is always positive, while the argmin αt⁻ is always negative. This result is counter-intuitive, since a negative learning rate pushes parameters towards higher values of the loss. However, learning of the meta-parameter ω is performed by the outer loop (minimizing Eq. 1), for which there is no learning rate since we are using the exact solution to the linear problem and thus are effectively training to convergence. Therefore, it remains unclear whether the inner loop (Eq. 3) should push parameters towards higher or lower values of the loss. An example of the loss as a function of the training learning rate αt is shown in Figure 2b, where we also show the results of experiments in which we run MAML empirically. Here the theory slightly underestimates the experimental loss, but the overall shapes of the curves are in good agreement, suggesting that Eq. 5 is accurate. Additional experiments are shown in the Appendix, Figure 6." }, { "heading": "5.2 UNDERPARAMETERIZED CASE", "text": "In the underparameterized case, the number of parameters p is smaller than the total number of validation data across tasks, nvm. In this case, since the data fully constrains the parameters, the optimal value of ω found during meta-training is unique. We prove the following result.

Theorem 2. Consider the algorithm of section 3 (MAML one-step), and the data generating model of section 4 (mixed linear regression). Let p < nvm. Let nv(ξ) and nt(ξ) be any functions of order O(ξ). For ξ, m → ∞, the test loss of Eq. 2, averaged over the entire data distribution (see Eq. 27 in the Appendix), is equal to

$$L_{test} = \frac{\sigma^2}{2}\left(1 + \frac{\alpha_r^2 p}{n_r}\right) + \frac{h_r \nu^2}{2} + \frac{h_r}{2 h_t^2}\,\frac{p}{n_v m}\left\{\sigma^2\left[h_t + \frac{\alpha_t^2}{n_t}\big[(n_v + 1)\, g_1 + p\, g_2\big]\right] + \frac{\nu^2}{p}\big[(n_v + 1)\, g_3 + p\, g_4\big]\right\} + O\big((m\xi)^{-3/2}\big) \qquad (9)$$

where hr, ht are defined as in the previous section, Eqs. 6 and 7, and the gi are order-O(1) polynomials in αt, see Eqs. 98-101 in the Appendix.

Proof. The proof of this Theorem can be found in the Appendix, sections 7.3 and 7.3.2.

Again, the loss always increases with the output noise σ and the task variability ν. Furthermore, in this case the loss always decreases with the number of data points nv, nr, and tasks m. Note that, for a very large number of tasks m, the loss does not depend on the meta-training hyperparameters αt, nv, nt. When the number of tasks is infinite, it does not matter whether we run the inner loop, nor how much data we have for each task.

As in the overparameterized case, the loss is a quadratic and convex function of the adaptation learning rate αr, and there is a unique minimum. While the value of the argmin is different, in this case as well the loss is a sum of two quadratic functions, one with a minimum at αr = 0 and another with a minimum at αr = 1/(1 + (p+1)/nr); therefore the optimal learning rate is again in between the same two values and is always positive. Similar comments apply to this case: a positive learning rate for adaptation implies that the parameters get closer to the optimum for the target task. An example of the loss as a function of the adaptation learning rate αr is shown in Figure 3a, where we also show the results of experiments in which we run MAML empirically.
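Returning briefly to Eq. 8, the stationary points are easy to inspect numerically; a two-line helper (ours) returns the negative argmin and the positive argmax for any p, nt:

```python
import numpy as np

def critical_rates(p, n_t):
    # Stationary points of Eq. (5) in alpha_t, Eq. (8):
    # the minus root is the (negative) argmin, the plus root the argmax.
    a = (n_t + 1) / (2 * p)
    r = np.sqrt(a ** 2 + n_t / p)
    return -a - r, -a + r

print(critical_rates(p=60, n_t=30))  # argmin < 0 < argmax
```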
The good agreement between theory and experiment suggests that Eq. 9 is accurate.

As a function of the training learning rate αt, the loss in Eq. 9 is the ratio of two fourth-order polynomials, therefore it is not straightforward to determine its behaviour. However, it is possible to show that the following holds:

$$\left.\frac{\partial L_{test}}{\partial \alpha_t}\right|_{\alpha_t = 0} = \frac{\sigma^2 p}{n_v m} \;\ge\; 0 \qquad (10)$$

suggesting that performance is always better for negative values of αt around zero. Even if counter-intuitive, this finding aligns with that of the previous section, and similar comments apply. An example of the loss as a function of the training learning rate αt is shown in Figure 3b, where we also show the results of experiments in which we run MAML empirically. A good agreement is observed between theory and experiment, again suggesting that Eq. 9 is accurate. Additional experiments are shown in the Appendix, Figure 6." }, { "heading": "5.3 NON-GAUSSIAN THEORY IN OVERPARAMETERIZED MODELS", "text": "In the previous sections we studied the performance of MAML applied to the problem of mixed linear regression. It remains unclear whether the results in the linear case are relevant for the more interesting case of nonlinear problems. Inspired by recent theoretical work, we consider the case of nonlinear regression with squared loss

$$L(\omega) = \mathbb{E}_x\; \mathbb{E}_{y|x}\; \frac{1}{2}\big[y - f(x, \omega)\big]^2 \qquad (11)$$

where y is a target output and f(x, ω) is the output of a neural network with input x and parameters ω. The introduction of the Neural Tangent Kernel showed that, in the limit of infinitely wide neural networks, the output is a linear function of the parameters during the entire course of training (Jacot et al. (2018), Lee et al. (2019)). This is expressed by a first-order Taylor expansion

$$f(x, \omega) \simeq f(x, \omega_0) + k(x, \omega_0)^T (\omega - \omega_0) \qquad (12)$$

$$k(x, \omega_0) = \left.\nabla_\omega f(x, \omega)\right|_{x, \omega_0} \qquad (13)$$

The parameters ω remain close to the initial condition ω0 during the entire course of training, a phenomenon referred to as lazy training (Chizat et al. (2020)), and therefore the output can be linearized around ω0. Intuitively, in a model that is heavily overparameterized, the data does not constrain the parameters, and a parameter that minimizes the loss in Eq. 11 can be found in the vicinity of any initial condition ω0. Note that, while the output of the neural network is linear in the parameters, it remains a nonlinear function of its input, through the vector of nonlinear functions k in Eq. 13.

By substituting Eq. 12 into Eq. 11, the nonlinear regression becomes effectively linear, in the sense that the loss is a quadratic function of the parameters ω, and all nonlinearities are contained in the functions k of Eq. 13, which are fixed by the initial condition ω0. This suggests that we can carry over the theory developed in the previous sections to this problem. However, in this case the input to the linear regression problem is effectively k(x), and some of the assumptions made in the previous sections are not acceptable. In particular, even if we assume that x is Gaussian, k(x) is a nonlinear function of x and cannot be assumed Gaussian. We prove the following result, in which we generalize the result of section 5.1 to non-Gaussian inputs and weights.
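The tangent features k(x, ω0) of Eqs. 12-13 can be probed numerically for any scalar-output model. Below is a small finite-difference sketch (illustrative only; for real networks one would use automatic differentiation):

```python
import numpy as np

def tangent_features(f, x, w0, eps=1e-5):
    # Numerical k(x, w0) = grad_w f(x, w) at w0, Eq. (13),
    # via central differences over the (flattened) parameter vector.
    k = np.zeros_like(w0)
    for i in range(w0.size):
        d = np.zeros_like(w0)
        d[i] = eps
        k[i] = (f(x, w0 + d) - f(x, w0 - d)) / (2 * eps)
    return k

def linearized(f, x, w, w0):
    # First-order expansion around w0, Eq. (12); in the lazy-training
    # regime this tracks the trained network closely.
    return f(x, w0) + tangent_features(f, x, w0) @ (w - w0)
```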
Theorem 3. Consider the algorithm of section 3 (MAML one-step), with ω0 = 0, and the data generating model of section 4, where the input x and the weights w are not necessarily Gaussian, and have zero mean and covariances, respectively, Σ = E xxᵀ and Σw = E wwᵀ. Let F be the matrix of fourth-order moments F = E(xᵀΣx) xxᵀ. Let p > nvm. Let p(ξ) and nt(ξ) be any functions of order O(ξ) as ξ → ∞. Let Tr(Σw²) be of order O(ξ⁻¹), and let the variances of matrix products of the rescaled inputs x/√p, up to sixth order, be of order O(ξ⁻¹) (see Eqs. 134-136 in the Appendix). Then the test loss of Eq. 2, averaged over the entire data distribution (see Eq. 27 in the Appendix), is equal to

$$L_{test} = \frac{1}{2}\,\mathrm{Tr}(\Sigma_w H_r) + \frac{\sigma^2}{2}\left[1 + \frac{\alpha_r^2}{n_r}\,\mathrm{Tr}(\Sigma^2)\right] + \frac{n_v m}{2}\;\frac{\mathrm{Tr}(H_r H_t)\left\{\mathrm{Tr}(\Sigma_w H_t) + \sigma^2\left[1 + \frac{\alpha_t^2}{n_t}\,\mathrm{Tr}(\Sigma^2)\right]\right\}}{\mathrm{Tr}(H_t)^2} + O(\xi^{-3/2}) \qquad (14)$$

where we define the following matrices:

$$H_t = \Sigma\,(I - \alpha_t \Sigma)^2 + \frac{\alpha_t^2}{n_t}\,(F - \Sigma^3) \qquad (15)$$

$$H_r = \Sigma\,(I - \alpha_r \Sigma)^2 + \frac{\alpha_r^2}{n_r}\,(F - \Sigma^3) \qquad (16)$$

Proof. The proof of this Theorem can be found in the Appendix, section 7.4.

Note that this result reduces to Eqs. 5, 6, 7 when Σ = I, Σw = I ν²/p, F = I(p + 2), ω0 = 0, w0 = 0. This expression for the loss is more difficult to analyze than those given in the previous sections, because it involves traces of nonlinear functions of matrices, all elements of which are free hyperparameters. Nevertheless, it is possible to show that, as a function of the adaptation learning rate αr, the loss in Eq. 14 is still a quadratic function. As a function of the training learning rate αt, the loss in Eq. 14 is the ratio of two fourth-order polynomials, but it is difficult to draw any conclusions since their coefficients do not appear to have simple relationships.

Even if the influence of the hyperparameters is not easy to predict, the expression in Eq. 14 can still be used to quickly probe the behavior of the loss empirically, by using example values for Σ, Σw, F, since computing the expression is very fast. Here we choose values of Σ, Σw by a single random draw from a Wishart distribution:

$$\Sigma \sim \mathcal{W}(I, p), \qquad \Sigma_w \sim \frac{\nu^2}{p}\,\mathcal{W}(I, p) \qquad (17)$$

Note that the number of degrees of freedom of the distribution is equal to the size of the matrices, p; therefore these covariances display significant correlations. Furthermore, we choose F = 2Σ³ + Σ Tr(Σ²), which is the value taken when x follows a Gaussian distribution. Therefore, we effectively test the loss in Eq. 14 for a Gaussian distribution, as in the previous sections, but we stress that the expression is valid for any distribution of x within the assumptions of Theorem 3. We also run experiments with MAML, applied again to mixed linear regression, but now using the covariance matrices drawn in Eq. 17. Figure 4 shows the loss in Eq. 14 as a function of the learning rates, during adaptation (panel a) and training (panel b). Qualitatively, we observe a similar behaviour as in section 5.1: the adaptation learning rate has a unique minimum at a positive value of αr, while the training learning rate shows better performance for negative values of αt. Again, there is a good agreement between theory and experiment, suggesting that Eq. 14 is a good approximation." },
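Because Eq. 14 only involves traces, probing it as the text suggests takes a few lines. The sketch below draws Σ, Σw as in Eq. 17 and uses the Gaussian value F = 2Σ³ + Σ Tr(Σ²); the normalization of the Wishart draw is our own convention, not necessarily the one used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def wishart(p):
    # Draw from W(I, p): sum of p outer products of standard normals.
    G = rng.standard_normal((p, p))
    return G.T @ G / p   # normalized so E[Sigma] = I (our convention)

def theorem3_loss(Sigma, Sigma_w, F, n_t, n_v, n_r, m, a_t, a_r, sigma):
    # Closed-form loss of Theorem 3, Eqs. (14)-(16).
    I, tr = np.eye(len(Sigma)), np.trace
    S3 = Sigma @ Sigma @ Sigma
    H = lambda a, n: (Sigma @ (I - a * Sigma) @ (I - a * Sigma)
                      + a ** 2 / n * (F - S3))
    Ht, Hr = H(a_t, n_t), H(a_r, n_r)
    return (0.5 * tr(Sigma_w @ Hr)
            + 0.5 * sigma ** 2 * (1 + a_r ** 2 / n_r * tr(Sigma @ Sigma))
            + 0.5 * n_v * m * tr(Hr @ Ht)
            * (tr(Sigma_w @ Ht)
               + sigma ** 2 * (1 + a_t ** 2 / n_t * tr(Sigma @ Sigma)))
            / tr(Ht) ** 2)

p, nu = 60, 0.5
Sigma = wishart(p)                                  # Eq. (17)
Sigma_w = nu ** 2 / p * wishart(p)                  # Eq. (17)
F = 2 * Sigma @ Sigma @ Sigma + Sigma * np.trace(Sigma @ Sigma)
```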
{ "heading": "5.4 NONLINEAR REGRESSION", "text": "To investigate whether negative learning rates improve performance on nonlinear regression in practice, we studied the simple case of MAML with a neural network applied to a quadratic function. Specifically, the target output is generated according to y = (wᵀx + b)² + z, where b is a bias term. The data x, z and generating parameters w are sampled as described in section 4 (in addition, the bias b was drawn from a Gaussian distribution of zero mean and unit variance). We use a 2-layer feed-forward neural network with ReLU activation functions. Weights are initialized following a Gaussian distribution of zero mean and variance equal to the inverse of the number of inputs. We report results with a network width of 400 in both layers; results were similar with larger network widths. We use the square loss function and we train the neural network in the outer loop with stochastic gradient descent with a learning rate of 0.001 for 5000 epochs (until convergence). We used most parameters identical to section 5.1: nt = 30, nv = 2, nr = 20, m = 3, p = 60, σ = 1, ν = 0.5, w0 = 0. The learning rate for adaptation was set to αr = 0.01. Note that in section 5.1 the model was initialized at the ground truth of the generative model (ω0 = w0), while here the neural network parameters are initialized at random. Figure 5 shows the test loss as a function of the learning rate αt. The best performance is obtained for a negative learning rate of αt = −0.0075." },
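For reference, a compact PyTorch sketch of this experiment follows; `sample_tasks` is an assumed helper drawing (Xt, yt, Xv, yv) for m tasks from the quadratic generative model y = (wᵀx + b)² + z, and the hyperparameters mirror the text. The `create_graph=True` flag keeps second-order terms so the outer loop can differentiate through the adaptation; note that a_t may be negative.

```python
import torch

def net(params, x):
    # 2-layer ReLU network with scalar output.
    W1, b1, W2, b2 = params
    return (torch.relu(x @ W1 + b1) @ W2 + b2).squeeze(-1)

def adapted(params, Xt, yt, a_t):
    # One-step inner loop, Eq. (3), differentiable with respect to params.
    loss = 0.5 * ((net(params, Xt) - yt) ** 2).mean()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [q - a_t * g for q, g in zip(params, grads)]

p, width, a_t, m = 60, 400, -0.0075, 3
params = [torch.randn(p, width) / p ** 0.5, torch.zeros(width),
          torch.randn(width, 1) / width ** 0.5, torch.zeros(1)]
for q in params:
    q.requires_grad_(True)
opt = torch.optim.SGD(params, lr=1e-3)

for epoch in range(5000):
    opt.zero_grad()
    meta = 0.0
    for Xt, yt, Xv, yv in sample_tasks(m, n_t=30, n_v=2):  # assumed helper
        theta = adapted(params, Xt, yt, a_t)
        meta = meta + 0.5 * ((net(theta, Xv) - yv) ** 2).mean()
    (meta / m).backward()
    opt.step()
```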
{ "heading": "6 DISCUSSION", "text": "We calculated algebraic expressions for the average test loss of MAML applied to a simple family of linear models, as a function of the hyperparameters. Surprisingly, we showed that the optimal value of the learning rate of the inner loop during training is negative. This finding seems to carry over to more interesting nonlinear models in the overparameterized case. However, additional work is necessary to establish the conditions under which the optimal learning rate may be positive, for example by probing Eq. 14 more extensively.

A negative optimal learning rate is surprising and counter-intuitive, since negative learning rates push parameters towards higher values of the loss. However, the meta-training loss is minimized by the outer loop, therefore it is not immediately obvious whether the learning rate of the inner loop should be positive, and we show that in some circumstances it should not. However, perhaps obviously, we also show that the learning rate during adaptation at test time should always be positive, otherwise the target task cannot be learned.

In this work, we considered the case of nonlinear models in the overparameterized regime. However, typical applications of MAML (and meta-learning in general) implement relatively small models due to the heavy computational load of running bi-level optimization, including both outer and inner loop. Our theory applies to regression problems, and assumes a limited number of tasks where data is independently drawn in each task, while some applications use a large number of tasks with correlated draws (for example, images may be shared across tasks in few-shot image classification, see Bertinetto et al. (2019)). Our theory is valid at the exact optimum of the outer loop, which is equivalent to training the outer loop to convergence, therefore overfitting may occur in the outer loop of our model. Another limitation of our theory is represented by the assumptions on the input and task covariance, which have no correlations in Theorems 1 and 2, and are subject to some technical assumptions in Theorem 3.

To the best of our knowledge, nobody has considered training meta-learning models with negative learning rates in the inner loop. Given that some studies advocate removing the inner loop altogether, which is similar to setting the learning rate to zero, it would be interesting to try a negative one. On the other hand, it is possible that a negative learning rate does not work in classification problems, in nonlinear models, or with inputs or tasks that have a complex structure, settings that are outside the theory presented in this work.

We would like to thank Paolo Grazieschi for helping with formalizing the theorems, and Ritwik Niyogi for helping with the nonlinear regression experiments." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 DEFINITION OF THE LOSS FUNCTION", "text": "We consider the problem of mixed linear regression y = Xw + z with squared loss, where X is an n × p matrix of input data (each row is one of n data vectors of dimension p), z is an n × 1 noise vector, w is a p × 1 vector of generating parameters and y is an n × 1 output vector. Data is collected for m tasks, each with a different value of the parameters w and a different realization of the input X and noise z. We denote by w(i) the parameters for task i, for i = 1, . . . , m. For a given task i, we denote by Xt(i), Xv(i) the input data for, respectively, the training and validation sets, by zt(i), zv(i) the corresponding noise vectors, and by yt(i), yv(i) the output vectors. We denote by nt, nv the data sample sizes for the training and validation sets, respectively.

For a given task i, the training output is equal to

$$y^{t(i)} = X^{t(i)} w^{(i)} + z^{t(i)} \qquad (18)$$

Similarly, the validation output is equal to

$$y^{v(i)} = X^{v(i)} w^{(i)} + z^{v(i)} \qquad (19)$$

We consider MAML as a model for meta-learning (Finn et al. (2017)). The meta-training loss is equal to

$$L_{meta} = \frac{1}{2 n_v m} \sum_{i=1}^{m} \left| y^{v(i)} - X^{v(i)}\, \theta^{(i)}(\omega) \right|^2 \qquad (20)$$

where vertical brackets denote the Euclidean norm, and the estimated parameters θ(i)(ω) are equal to the one-step gradient update on the single-task training loss L(i) = |yt(i) − Xt(i)θ(i)|²/2nt, with initial condition given by the meta-parameter ω. The single gradient update is equal to

$$\theta^{(i)}(\omega) = \left(I_p - \frac{\alpha_t}{n_t}\, X^{t(i)T} X^{t(i)}\right)\omega + \frac{\alpha_t}{n_t}\, X^{t(i)T} y^{t(i)} \qquad (21)$$

where Ip is the p × p identity matrix and αt is the learning rate. We seek to minimize the meta-training loss with respect to the meta-parameter ω, namely

$$\omega^\star = \arg\min_\omega L_{meta} \qquad (22)$$

We evaluate the solution ω* by calculating the meta-test loss

$$L_{test} = \frac{1}{2 n_s}\,\big|y^s - X^s \theta^\star\big|^2 \qquad (23)$$

Note that the test loss is calculated over test data Xs, zs, and test parameters w′, namely

$$y^s = X^s w' + z^s \qquad (24)$$

Furthermore, the estimated parameters θ* are calculated on a separate set of target data Xr, zr, namely

$$\theta^\star = \left(I_p - \frac{\alpha_r}{n_r}\, X^{rT} X^r\right)\omega^\star + \frac{\alpha_r}{n_r}\, X^{rT} y^r \qquad (25)$$

$$y^r = X^r w' + z^r \qquad (26)$$

Note that the learning rate and sample size can be different at testing, denoted by αr, nr, ns. We are interested in calculating the average test loss, that is, the test loss of Eq. 23 averaged over the entire data distribution, equal to

$$\overline{L}_{test} = \mathop{\mathbb{E}}_{w}\,\mathop{\mathbb{E}}_{z^t}\,\mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{z^v}\,\mathop{\mathbb{E}}_{X^v}\,\mathop{\mathbb{E}}_{w'}\,\mathop{\mathbb{E}}_{z^s}\,\mathop{\mathbb{E}}_{X^s}\,\mathop{\mathbb{E}}_{z^r}\,\mathop{\mathbb{E}}_{X^r}\; \frac{1}{2 n_s}\,\big|y^s - X^s \theta^\star\big|^2 \qquad (27)$$" }, { "heading": "7.2 DEFINITION OF PROBABILITY DISTRIBUTIONS", "text": "We assume that all random variables are Gaussian. In particular, we assume that the rows of the matrix X are independent, and each row, denoted by x, is distributed according to a multivariate Gaussian with zero mean and unit covariance

$$x \sim \mathcal{N}(0, I_p) \qquad (28)$$

where Ip is the p × p identity matrix.
Similarly, the noise is distributed following a multivariate Gaussian with zero mean and variance equal to σ², namely

$$z \sim \mathcal{N}(0, \sigma^2 I_n) \qquad (29)$$

Finally, the generating parameters are also distributed according to a multivariate Gaussian of variance ν²/p, namely

$$w \sim \mathcal{N}\!\left(w_0, \frac{\nu^2}{p}\, I_p\right) \qquad (30)$$

The generating parameter w is drawn once and kept fixed within a task, and drawn independently for different tasks. The values of x and z are drawn independently in all tasks and datasets (training, validation, target, test). In order to perform the calculations in the next section, we need the following results.

Lemma 1. Let X be a Gaussian n × p random matrix with independent rows, where each row has covariance equal to Ip, the p × p identity matrix. Then:

$$\mathbb{E}\big[X^T X\big] = n\, I_p \qquad (31)$$

$$\mathbb{E}\big[(X^T X)^2\big] = n(n + p + 1)\, I_p = n^2 \mu_2\, I_p \qquad (32)$$

$$\mathbb{E}\big[(X^T X)^3\big] = n\big(n^2 + p^2 + 3np + 3n + 3p + 4\big)\, I_p = n^3 \mu_3\, I_p \qquad (33)$$

$$\mathbb{E}\big[(X^T X)^4\big] = n\big(n^3 + p^3 + 6n^2 p + 6np^2 + 6n^2 + 6p^2 + 17np + 21n + 21p + 20\big)\, I_p = n^4 \mu_4\, I_p \qquad (34, 35)$$

$$\mathbb{E}\big[X^T X\, \mathrm{Tr}(X^T X)\big] = \big(n^2 p + 2n\big)\, I_p = p n^2 \mu_{1,1}\, I_p \qquad (36)$$

$$\mathbb{E}\big[(X^T X)^2\, \mathrm{Tr}(X^T X)\big] = n\big(n^2 p + np^2 + np + 4n + 4p + 4\big)\, I_p = p n^3 \mu_{2,1}\, I_p \qquad (37)$$

$$\mathbb{E}\big[X^T X\, \mathrm{Tr}\big((X^T X)^2\big)\big] = n\big(n^2 p + np^2 + np + 4n + 4p + 4\big)\, I_p = p n^3 \mu_{1,2}\, I_p \qquad (38)$$

$$\mathbb{E}\big[(X^T X)^2\, \mathrm{Tr}\big((X^T X)^2\big)\big] = n\big(n^3 p + np^3 + 2n^2 p^2 + 2n^2 p + 2np^2 + 8n^2 + 8p^2 + 21np + 20n + 20p + 20\big)\, I_p = p n^4 \mu_{2,2}\, I_p \qquad (39, 40)$$

where the last equality in each of these expressions defines the variables µ. Furthermore, for any n × n symmetric matrix C and any p × p symmetric matrix D, independent of X:

$$\mathbb{E}\big[X^T C X\big] = \mathrm{Tr}(C)\, I_p \qquad (41)$$

$$\mathbb{E}\big[X^T X D X^T X\big] = n(n+1)\, D + n\, \mathrm{Tr}(D)\, I_p \qquad (42)$$

Proof. The Lemma follows by direct computation of the above expectations, using Isserlis' theorem. In particular, for higher-order exponents, combinatorics plays a crucial role in counting products of different Gaussian variables in an effective way.

Lemma 2. Let Xv(i), Xt(i) be Gaussian random matrices, of size respectively nv × p and nt × p, with independent rows, where each row has covariance equal to Ip, the p × p identity matrix. Let p(ξ) and nt(ξ) be any functions of order O(ξ) as ξ → ∞. Then:

$$X^{v(i)} X^{v(i)T} = p\, I_{n_v} + O(\xi^{1/2}) \qquad (43)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(i)T} = p\, n_t\, I_{n_v} + O(\xi^{3/2}) \qquad (44)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} X^{t(i)T} X^{t(i)} X^{v(i)T} = p\, n_t (n_t + p + 1)\, I_{n_v} + O(\xi^{5/2}) \qquad (45)$$

Note that the order O(·) applies to all elements of the matrix in each expression. For i ≠ j:

$$X^{v(i)} X^{v(j)T} = O(\xi^{1/2}) \qquad (46)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(j)T} = O(\xi^{3/2}) \qquad (47)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} X^{t(j)T} X^{t(j)} X^{v(j)T} = O(\xi^{5/2}) \qquad (48)$$

Furthermore, for any positive real number δ and for any p × p symmetric matrix D independent of X, where Tr(D) and Tr(D²) are both of order O(ξ^δ):

$$X^{v(i)} D X^{v(i)T} = \mathrm{Tr}(D)\, I_{n_v} + O(\xi^{\delta/2}) \qquad (49)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} D X^{v(i)T} = \mathrm{Tr}(D)\, n_t\, I_{n_v} + O(\xi^{1+\delta/2}) \qquad (50)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} D X^{t(i)T} X^{t(i)} X^{v(i)T} = \mathrm{Tr}(D)\, n_t (n_t + p + 1)\, I_{n_v} + O(\xi^{2+\delta/2}) \qquad (51)$$

$$X^{v(i)} D X^{v(j)T} = O(\xi^{\delta/2}) \qquad (52)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} D X^{v(j)T} = O(\xi^{1+\delta/2}) \qquad (53)$$

$$X^{v(i)} X^{t(i)T} X^{t(i)} D X^{t(j)T} X^{t(j)} X^{v(j)T} = O(\xi^{2+\delta/2}) \qquad (54)$$

Proof. The Lemma follows by direct computation of the expectations and variances of each term.

Lemma 3. Let Xv, Xt be Gaussian random matrices, of size respectively nv × p and nt × p, with independent rows, where each row has covariance equal to Ip, the p × p identity matrix. Let nv(ξ) and nt(ξ) be any functions of order O(ξ) for ξ → ∞. Then:

$$X^{vT} X^v = n_v\, I_p + O(\xi^{1/2}) \qquad (55)$$

$$X^{tT} X^t X^{vT} X^v = n_t n_v\, I_p + O(\xi^{3/2}) \qquad (56)$$

$$X^{tT} X^t X^{vT} X^v X^{tT} X^t = n_v n_t (n_t + p + 1)\, I_p + O(\xi^{5/2}) \qquad (57)$$

Note that the order O(·) applies to all elements of the matrix in each expression.

Proof. The Lemma follows by direct computation of the expectations and variances of each term." },
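The moment identities of Lemma 1 are easy to spot-check by Monte Carlo; for instance, Eq. 32 predicts that E[(XᵀX)²] is diagonal with entries n(n + p + 1). A quick check (our own code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 30, 10, 20000
acc = np.zeros((p, p))
for _ in range(trials):
    X = rng.standard_normal((n, p))
    G = X.T @ X
    acc += G @ G
acc /= trials
print(acc[0, 0], n * (n + p + 1))                 # diagonal, Eq. (32)
print(np.abs(acc - acc[0, 0] * np.eye(p)).max())  # deviations from n^2 mu_2 I_p
```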
{ "heading": "7.3 PROOF OF THEOREMS 1 AND 2", "text": "We calculate the average test loss as a function of the hyperparameters nt, nv, nr, p, m, αt, αr, σ, ν, w0. Using the expression in Eq. 24 for the test output, we rewrite the test loss in Eq. 27 as

$$\overline{L}_{test} = \mathbb{E}\; \frac{1}{2 n_s}\,\big|X^s (w' - \theta^\star) + z^s\big|^2 \qquad (58)$$

We start by averaging this expression with respect to Xs, zs, noting that θ* does not depend on the test data. We further average with respect to w′, but note that θ* depends on the test parameters, so we average only terms that do not depend on θ*. Using Eq. 31, the result is

$$\overline{L}_{test} = \frac{\sigma^2}{2} + \frac{\nu^2}{2} + \frac{|w_0|^2}{2} + \mathbb{E}\left[\frac{|\theta^\star|^2}{2} - (w_0 + \delta w')^T \theta^\star\right] \qquad (59)$$

where we define δw′ = w′ − w0. The second term in the expectation is linear in θ* and can be averaged over Xr, zr, using Eq. 25 and noting that ω* does not depend on the target data. The result is

$$\mathop{\mathbb{E}}_{X^r}\,\mathop{\mathbb{E}}_{z^r}\, \theta^\star = (1 - \alpha_r)\,\omega^\star + \alpha_r\,(w_0 + \delta w') \qquad (60)$$

Using Eq. 60 we average over w′ the second term in the expectation of Eq. 59 and find

$$\overline{L}_{test} = \frac{\sigma^2}{2} + \left(\frac{1}{2} - \alpha_r\right)\big(\nu^2 + |w_0|^2\big) - (1 - \alpha_r)\, w_0^T\, \mathbb{E}\,\omega^\star + \mathbb{E}\,\frac{|\theta^\star|^2}{2} \qquad (61)$$

We average the last term of this expression over zr, w′, using Eq. 25 and noting that ω* does not depend on the target data and test parameters. The result is

$$\mathop{\mathbb{E}}_{w'}\,\mathop{\mathbb{E}}_{z^r}\, |\theta^\star|^2 = |\omega^\star|^2 + \frac{\alpha_r^2}{n_r^2}\,(\omega^\star - w_0)^T \big(X^{rT} X^r\big)^2 (\omega^\star - w_0) - \frac{2\alpha_r}{n_r}\,\omega^{\star T} X^{rT} X^r\, (\omega^\star - w_0) + \frac{\alpha_r^2 \sigma^2}{n_r^2}\,\mathrm{Tr}\big[X^r X^{rT}\big] + \frac{\alpha_r^2 \nu^2}{n_r^2\, p}\,\mathrm{Tr}\big[(X^r X^{rT})^2\big] \qquad (62, 63)$$

We now average over Xr, again noting that ω* does not depend on the target data. Using Eqs. 31, 32, we find

$$\mathop{\mathbb{E}}_{X^r}\,\mathop{\mathbb{E}}_{w'}\,\mathop{\mathbb{E}}_{z^r}\, |\theta^\star|^2 = |\omega^\star|^2 + \alpha_r^2\left(1 + \frac{p+1}{n_r}\right)\big(\nu^2 + |\omega^\star - w_0|^2\big) - 2\alpha_r\,\omega^{\star T}(\omega^\star - w_0) + \frac{\alpha_r^2 \sigma^2 p}{n_r} \qquad (64)$$

We can now rewrite the average test loss in Eq. 61 as

$$\overline{L}_{test} = \frac{\sigma^2}{2}\left(1 + \frac{\alpha_r^2 p}{n_r}\right) + \frac{1}{2}\left[(1 - \alpha_r)^2 + \alpha_r^2\,\frac{p+1}{n_r}\right]\big(\nu^2 + \mathbb{E}\,|\omega^\star - w_0|^2\big) \qquad (65)$$

In order to average the last term, we need an expression for ω*. We note that the loss in Eq. 20 is quadratic in ω, therefore the solution of Eq. 22 can be found using standard linear algebra. In particular, the loss in Eq. 20 can be rewritten as

$$L_{meta} = \frac{1}{2 n_v m}\,|\gamma - B\omega|^2 \qquad (66)$$

where γ is a vector of shape nvm × 1, and B is a matrix of shape nvm × p. The vector γ is a stack of m vectors:

$$\gamma = \begin{pmatrix} X^{v(1)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(1)T} X^{t(1)}\big)\, w^{(1)} - \frac{\alpha_t}{n_t}\, X^{v(1)} X^{t(1)T} z^{t(1)} + z^{v(1)} \\ \vdots \\ X^{v(m)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(m)T} X^{t(m)}\big)\, w^{(m)} - \frac{\alpha_t}{n_t}\, X^{v(m)} X^{t(m)T} z^{t(m)} + z^{v(m)} \end{pmatrix} \qquad (67)$$

Similarly, the matrix B is a stack of m matrices:

$$B = \begin{pmatrix} X^{v(1)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(1)T} X^{t(1)}\big) \\ \vdots \\ X^{v(m)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(m)T} X^{t(m)}\big) \end{pmatrix} \qquad (68)$$

We denote by Ip the p × p identity matrix. The expression for ω that minimizes Eq. 66 depends on whether the problem is overparameterized (p > nvm) or underparameterized (p < nvm), therefore we distinguish these two cases in the following sections." }, { "heading": "7.3.1 OVERPARAMETERIZED CASE (THEOREM 1)", "text": "In the overparameterized case (p > nvm), under the assumption that the inverse of BBᵀ exists, the value of ω that minimizes Eq. 66 is equal to

$$\omega^\star = B^T (BB^T)^{-1} \gamma + \left[I_p - B^T (BB^T)^{-1} B\right] \omega_0 \qquad (69)$$

The vector ω0 is interpreted as the initial condition of the parameter optimization of the outer loop, when optimized by gradient descent. Note that the matrix B does not depend on w, zt, zv, and Ew Ezt Ezv γ = Bw0. We denote by δγ the deviation from this average, and we have

$$\omega^\star - w_0 = B^T (BB^T)^{-1} \delta\gamma + \left[I_p - B^T (BB^T)^{-1} B\right](\omega_0 - w_0) \qquad (70)$$
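The construction in Eqs. 66-70 also gives a direct recipe for the empirical MAML runs referenced in section 5: build B and γ, solve the outer loop exactly, adapt on target data, and evaluate on test data. A Monte-Carlo sketch of Eq. 27 along these lines (our own code; n_s is the test sample size):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_test_loss(p, m, n_t, n_v, n_r, n_s, a_t, a_r,
                 sigma, nu, w0, omega0, trials=500):
    # Monte-Carlo estimate of the average test loss, Eq. (27).
    draw_w = lambda: w0 + nu / np.sqrt(p) * rng.standard_normal(p)
    losses = []
    for _ in range(trials):
        rows_B, rows_g = [], []
        for _ in range(m):                      # meta-training tasks
            w = draw_w()
            Xt = rng.standard_normal((n_t, p))
            zt = sigma * rng.standard_normal(n_t)
            Xv = rng.standard_normal((n_v, p))
            zv = sigma * rng.standard_normal(n_v)
            A = np.eye(p) - a_t / n_t * Xt.T @ Xt
            rows_B.append(Xv @ A)                                      # Eq. (68)
            rows_g.append(Xv @ A @ w - a_t / n_t * Xv @ Xt.T @ zt + zv)  # Eq. (67)
        B, g = np.vstack(rows_B), np.concatenate(rows_g)
        P = B.T @ np.linalg.inv(B @ B.T)        # requires p > n_v * m
        omega = P @ g + (np.eye(p) - P @ B) @ omega0                   # Eq. (69)
        wp = draw_w()                           # meta-testing task
        Xr = rng.standard_normal((n_r, p))
        yr = Xr @ wp + sigma * rng.standard_normal(n_r)
        theta = omega + a_r / n_r * Xr.T @ (yr - Xr @ omega)           # Eq. (25)
        Xs = rng.standard_normal((n_s, p))
        ys = Xs @ wp + sigma * rng.standard_normal(n_s)
        losses.append(0.5 / n_s * np.sum((ys - Xs @ theta) ** 2))      # Eq. (23)
    return np.mean(losses)
```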
We square this expression and average over w, zt, zv. We use the cyclic property of the trace and the fact that Bᵀ(BBᵀ)⁻¹B is a projection. The result is

$$\overline{|\omega^\star - w_0|^2} = \mathrm{Tr}\left[\Gamma\,(BB^T)^{-1}\right] + (\omega_0 - w_0)^T\left[I_p - B^T (BB^T)^{-1} B\right](\omega_0 - w_0) \qquad (71)$$

The matrix Γ is defined as

$$\Gamma = \mathop{\mathbb{E}}_{w}\,\mathop{\mathbb{E}}_{z^t}\,\mathop{\mathbb{E}}_{z^v}\, \delta\gamma\,\delta\gamma^T = \begin{pmatrix} \Gamma^{(1)} & & 0 \\ & \ddots & \\ 0 & & \Gamma^{(m)} \end{pmatrix} \qquad (72)$$

where the matrix blocks are given by the following expression:

$$\Gamma^{(i)} = \frac{\nu^2}{p}\, X^{v(i)}\left(I_p - \frac{\alpha_t}{n_t}\, X^{t(i)T} X^{t(i)}\right)^2 X^{v(i)T} + \sigma^2\left(I_{n_v} + \frac{\alpha_t^2}{n_t^2}\, X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(i)T}\right) \qquad (73)$$

It is convenient to rewrite the scalar product of Eq. 71 in terms of the trace of outer products:

$$\overline{|\omega^\star - w_0|^2} = \mathrm{Tr}\left[(BB^T)^{-1}\left(\Gamma - B(\omega_0 - w_0)(\omega_0 - w_0)^T B^T\right)\right] + |\omega_0 - w_0|^2 \qquad (74)$$

In order to calculate E|ω* − w0|² in Eq. 65 we need to average this expression over the training and validation data. These averages are hard to compute since they involve nonlinear functions of the data. However, we can approximate these terms by assuming that p and nt are large, both of order O(ξ), where ξ is a large number. Furthermore, we assume that |ω0 − w0| is of order O(ξ⁻¹ᐟ⁴). Using Lemma 2, together with the expressions for B (Eq. 68) and Γ (Eqs. 72, 73), we can prove that

$$\frac{1}{p}\, BB^T = \left[(1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t}\right] I_{n_v m} + O(\xi^{-1/2}) \qquad (75)$$

$$\Gamma = \left\{\nu^2\left[(1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t}\right] + \sigma^2\left(1 + \frac{\alpha_t^2 p}{n_t}\right)\right\} I_{n_v m} + O(\xi^{-1/2}) \qquad (76)$$

$$B(\omega_0 - w_0)(\omega_0 - w_0)^T B^T = |\omega_0 - w_0|^2\left[(1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t}\right] I_{n_v m} + O(\xi^{-1/2}) \qquad (77)$$

Using Eq. 75 and a Taylor expansion, the inverse (BBᵀ)⁻¹ is equal to

$$(BB^T)^{-1} = \frac{1}{p}\left[(1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t}\right]^{-1} I_{n_v m} + O(\xi^{-3/2}) \qquad (78)$$

Substituting the three expressions above in Eq. 74, and ignoring terms of lower order, we find

$$\mathbb{E}\,|\omega^\star - w_0|^2 = \left(1 - \frac{n_v m}{p}\right)|\omega_0 - w_0|^2 + \frac{n_v m}{p}\left[\nu^2 + \sigma^2\,\frac{1 + \alpha_t^2 p / n_t}{(1 - \alpha_t)^2 + \alpha_t^2 (p+1)/n_t}\right] + O(\xi^{-3/2}) \qquad (79)$$

Substituting this expression into Eq. 65, we find the value of the average test loss:

$$\overline{L}_{test} = \frac{\sigma^2}{2}\left(1 + \frac{\alpha_r^2 p}{n_r}\right) + h_r\left[\frac{\nu^2}{2}\left(1 + \frac{n_v m}{p}\right) + \frac{1}{2}\left(1 - \frac{n_v m}{p}\right)|\omega_0 - w_0|^2 + \frac{\sigma^2 n_v m}{2p}\,\frac{1 + \alpha_t^2 p / n_t}{h_t}\right] + O(\xi^{-3/2}) \qquad (80, 81)$$

where we define the following expressions:

$$h_t = (1 - \alpha_t)^2 + \alpha_t^2\,\frac{p+1}{n_t} \qquad \text{and} \qquad h_r = (1 - \alpha_r)^2 + \alpha_r^2\,\frac{p+1}{n_r} \qquad (82)$$" }, { "heading": "7.3.2 UNDERPARAMETERIZED CASE (THEOREM 2)", "text": "In the underparameterized case (p < nvm), under the assumption that the inverse of BᵀB exists, the value of ω that minimizes Eq. 66 is equal to

$$\omega^\star = (B^T B)^{-1} B^T \gamma \qquad (83)$$

Note that the matrix B does not depend on w, zt, zv, and Ew Ezt Ezv γ = Bw0. We denote by δγ the deviation from this average, and we have

$$\overline{|\omega^\star - w_0|^2} = \mathrm{Tr}\left[(B^T B)^{-1} B^T\, \delta\gamma\,\delta\gamma^T B\, (B^T B)^{-1}\right] \qquad (84)$$

We need to average this expression in order to calculate E|ω* − w0|² in Eq. 65. We start by averaging δγδγᵀ over w, zt, zv, since B does not depend on those variables. Note that w, zt, zv are independent of each other and across tasks. As in the previous section, we denote by Γ the result of this operation, given by Eqs. 72, 73. Finally, we need to average over the training and validation data:

$$\mathbb{E}\,|\omega^\star - w_0|^2 = \mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{X^v}\,\mathrm{Tr}\left[(B^T B)^{-1} B^T \Gamma B\, (B^T B)^{-1}\right] \qquad (85)$$

It is hard to average this expression because it includes nonlinear functions of the data. However, we can approximate these terms by assuming that either m or ξ (or both) is a large number, where ξ is defined by assuming that both nt and nv are of order O(ξ). Using Lemma 3, together with the expression for B (Eq. 68), and noting that each factor in Eq. 85 has a sum over m independent terms, we can prove that

$$\frac{1}{n_v m}\, B^T B = \left(1 - 2\alpha_t + \alpha_t^2 \mu_2\right) I_p + O\big((m\xi)^{-1/2}\big) \qquad (86)$$

The expression for µ2 is given in Eq. 32.
Using this result and a Taylor expansion, the inverse is equal to

$$n_v m\, (B^T B)^{-1} = \left(1 - 2\alpha_t + \alpha_t^2 \mu_2\right)^{-1} I_p + O\big((m\xi)^{-1/2}\big) \qquad (87)$$

Similarly, the term BᵀΓB is equal to its average plus a term of smaller order:

$$\frac{1}{n_v m}\, B^T \Gamma B = \frac{1}{n_v m}\, \mathbb{E}\big(B^T \Gamma B\big) + O\big((m\xi)^{-1/2}\big) \qquad (88)$$

We substitute these expressions in Eq. 85 and neglect lower orders. Here we show how to calculate explicitly the expectation of BᵀΓB. For ease of notation, we define the matrix A^{t(i)} = I − (αt/nt) X^{t(i)T} X^{t(i)}. Using the expressions for B (Eq. 68) and Γ (Eqs. 72, 73), the expression for BᵀΓB is given by

$$B^T \Gamma B = \sigma^2 \sum_{i=1}^m A^{t(i)T} X^{v(i)T} X^{v(i)} A^{t(i)} + \frac{\nu^2}{p} \sum_{i=1}^m \left(A^{t(i)T} X^{v(i)T} X^{v(i)} A^{t(i)}\right)^2 + \frac{\alpha_t^2 \sigma^2}{n_t^2} \sum_{i=1}^m A^{t(i)T} X^{v(i)T} X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(i)T} X^{v(i)} A^{t(i)} \qquad (89)$$

We use Eqs. 31, 32 to calculate the average of the first term in Eq. 89:

$$\mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{X^v} \sum_{i=1}^m A^{t(i)T} X^{v(i)T} X^{v(i)} A^{t(i)} = n_v m\left(1 - 2\alpha_t + \alpha_t^2 \mu_2\right) I_p \qquad (90)$$

We use Eqs. 31, 32, 33, 41, 36, 37, 38, 39 to calculate the average of the second term:

$$\mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{X^v} \sum_{i=1}^m \left(A^{t(i)T} X^{v(i)T} X^{v(i)} A^{t(i)}\right)^2 = \mathop{\mathbb{E}}_{X^t} \sum_{i=1}^m \left[n_v (n_v + 1)\, A^{t(i)4} + n_v\, A^{t(i)2}\, \mathrm{Tr}\big(A^{t(i)2}\big)\right] = m n_v (n_v + 1)\left(1 - 4\alpha_t + 6\alpha_t^2 \mu_2 - 4\alpha_t^3 \mu_3 + \alpha_t^4 \mu_4\right) I_p + m n_v p\left(1 - 4\alpha_t + 2\alpha_t^2 \mu_2 + 4\alpha_t^2 \mu_{1,1} - 4\alpha_t^3 \mu_{2,1} + \alpha_t^4 \mu_{2,2}\right) I_p \qquad (91, 92)$$

Finally, we compute the average of the third term, using Eqs. 31, 32, 33, 34, 41, 36, 37:

$$\mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{X^v} \sum_{i=1}^m A^{t(i)T} X^{v(i)T} X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(i)T} X^{v(i)} A^{t(i)} = \mathop{\mathbb{E}}_{X^t} \sum_{i=1}^m \left[n_v (n_v + 1)\, A^{t(i)T} X^{t(i)T} X^{t(i)} A^{t(i)} + n_v\, A^{t(i)T} A^{t(i)}\, \mathrm{Tr}\big(X^{t(i)T} X^{t(i)}\big)\right] = m n_v (n_v + 1)\, n_t\left(1 - 2\alpha_t \mu_2 + \alpha_t^2 \mu_3\right) I_p + m n_v n_t p\left(1 - 2\alpha_t \mu_{1,1} + \alpha_t^2 \mu_{2,1}\right) I_p \qquad (93, 94, 95)$$

Putting everything together in Eq. 85, and applying the trace operator, we find the following expression for the meta-parameter variance:

$$\mathbb{E}\,|\omega^\star - w_0|^2 = \frac{p}{n_v m}\left(1 - 2\alpha_t + \alpha_t^2 \mu_2\right)^{-2}\Big\{\sigma^2\left(1 - 2\alpha_t + \alpha_t^2 \mu_2\right) + \frac{\alpha_t^2 \sigma^2}{n_t}\left[(n_v + 1)\left(1 - 2\alpha_t \mu_2 + \alpha_t^2 \mu_3\right) + p\left(1 - 2\alpha_t \mu_{1,1} + \alpha_t^2 \mu_{2,1}\right)\right] + \frac{\nu^2}{p}\left[(n_v + 1)\left(1 - 4\alpha_t + 6\alpha_t^2 \mu_2 - 4\alpha_t^3 \mu_3 + \alpha_t^4 \mu_4\right) + p\left(1 - 4\alpha_t + 2\alpha_t^2 \mu_2 + 4\alpha_t^2 \mu_{1,1} - 4\alpha_t^3 \mu_{2,1} + \alpha_t^4 \mu_{2,2}\right)\right]\Big\} + O\big((m\xi)^{-3/2}\big) \qquad (96)$$

We rewrite this expression as

$$\mathbb{E}\,|\omega^\star - w_0|^2 = \frac{p}{h_t^2\, n_v m}\left\{\sigma^2\left[h_t + \frac{\alpha_t^2}{n_t}\big[(n_v + 1)\, g_1 + p\, g_2\big]\right] + \frac{\nu^2}{p}\big[(n_v + 1)\, g_3 + p\, g_4\big]\right\} + O\big((m\xi)^{-3/2}\big) \qquad (97)$$

where we define the following expressions for the gi:

$$g_1 = 1 - 2\alpha_t \mu_2 + \alpha_t^2 \mu_3 \qquad (98)$$

$$g_2 = 1 - 2\alpha_t \mu_{1,1} + \alpha_t^2 \mu_{2,1} \qquad (99)$$

$$g_3 = 1 - 4\alpha_t + 6\alpha_t^2 \mu_2 - 4\alpha_t^3 \mu_3 + \alpha_t^4 \mu_4 \qquad (100)$$

$$g_4 = 1 - 4\alpha_t + 2\alpha_t^2 \mu_2 + 4\alpha_t^2 \mu_{1,1} - 4\alpha_t^3 \mu_{2,1} + \alpha_t^4 \mu_{2,2} \qquad (101)$$

and the µi are equal to:

$$\mu_2 = \frac{1}{n_t}\,(n_t + p + 1) \qquad (102)$$

$$\mu_3 = \frac{1}{n_t^2}\,\big(n_t^2 + p^2 + 3 n_t p + 3 n_t + 3 p + 4\big) \qquad (103)$$

$$\mu_4 = \frac{1}{n_t^3}\,\big(n_t^3 + p^3 + 6 n_t^2 p + 6 n_t p^2 + 6 n_t^2 + 6 p^2 + 17 n_t p + 21 n_t + 21 p + 20\big) \qquad (104)$$

$$\mu_{1,1} = \frac{1}{n_t^2 p}\,\big(n_t^2 p + 2 n_t\big) \qquad (105)$$

$$\mu_{2,1} = \frac{1}{n_t^2 p}\,\big(n_t^2 p + n_t p^2 + n_t p + 4 n_t + 4 p + 4\big) \qquad (106)$$

$$\mu_{2,2} = \frac{1}{n_t^3 p}\,\big(n_t^3 p + n_t p^3 + 2 n_t^2 p^2 + 2 n_t^2 p + 2 n_t p^2 + 8 n_t^2 + 8 p^2 + 21 n_t p + 20 n_t + 20 p + 20\big) \qquad (107)$$

Substituting this expression back into Eq. 65 returns the final expression for the average test loss, equal to

$$\overline{L}_{test} = \frac{\sigma^2}{2}\left(1 + \frac{\alpha_r^2 p}{n_r}\right) + \frac{h_r \nu^2}{2} + \frac{h_r}{2 h_t^2}\,\frac{p}{n_v m}\left\{\sigma^2\left[h_t + \frac{\alpha_t^2}{n_t}\big[(n_v + 1)\, g_1 + p\, g_2\big]\right] + \frac{\nu^2}{p}\big[(n_v + 1)\, g_3 + p\, g_4\big]\right\} + O\big((m\xi)^{-3/2}\big) \qquad (108)$$" },
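For completeness, Eq. 108 (that is, Eq. 9) can be transcribed directly into code, with the moments of Eqs. 102-107 and the polynomials of Eqs. 98-101 (our own transcription):

```python
def theorem2_loss(p, m, n_t, n_v, n_r, a_t, a_r, sigma, nu):
    # Closed-form loss of Theorem 2, Eq. (9) / Eq. (108); valid for p < n_v * m.
    mu2 = (n_t + p + 1) / n_t
    mu3 = (n_t**2 + p**2 + 3*n_t*p + 3*n_t + 3*p + 4) / n_t**2
    mu4 = (n_t**3 + p**3 + 6*n_t**2*p + 6*n_t*p**2 + 6*n_t**2 + 6*p**2
           + 17*n_t*p + 21*n_t + 21*p + 20) / n_t**3
    mu11 = (n_t**2*p + 2*n_t) / (n_t**2 * p)
    mu21 = (n_t**2*p + n_t*p**2 + n_t*p + 4*n_t + 4*p + 4) / (n_t**2 * p)
    mu22 = (n_t**3*p + n_t*p**3 + 2*n_t**2*p**2 + 2*n_t**2*p + 2*n_t*p**2
            + 8*n_t**2 + 8*p**2 + 21*n_t*p + 20*n_t + 20*p + 20) / (n_t**3 * p)
    g1 = 1 - 2*a_t*mu2 + a_t**2*mu3                                   # Eq. (98)
    g2 = 1 - 2*a_t*mu11 + a_t**2*mu21                                 # Eq. (99)
    g3 = 1 - 4*a_t + 6*a_t**2*mu2 - 4*a_t**3*mu3 + a_t**4*mu4         # Eq. (100)
    g4 = (1 - 4*a_t + 2*a_t**2*mu2 + 4*a_t**2*mu11
          - 4*a_t**3*mu21 + a_t**4*mu22)                              # Eq. (101)
    h_t = (1 - a_t)**2 + a_t**2*(p + 1)/n_t
    h_r = (1 - a_r)**2 + a_r**2*(p + 1)/n_r
    return (sigma**2/2*(1 + a_r**2*p/n_r) + h_r*nu**2/2
            + h_r/(2*h_t**2) * p/(n_v*m)
            * (sigma**2*(h_t + a_t**2/n_t*((n_v + 1)*g1 + p*g2))
               + nu**2/p*((n_v + 1)*g3 + p*g4)))
```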
{ "heading": "7.4 PROOF OF THEOREM 3", "text": "In this section, we relax some assumptions on the distributions of data and parameters. In particular, we do not assume a specific distribution for the input data vectors x and the generating parameter vector w, besides that different data vectors are independent, and so are data and parameters for different tasks. We further assume that those vectors have zero mean, and denote their covariances as

$$\Sigma = \mathbb{E}\, x x^T \qquad (109)$$

$$\Sigma_w = \mathbb{E}\, w w^T \qquad (110)$$

We will also use the following matrix, including fourth-order moments:

$$F = \mathbb{E}\,\big(x^T \Sigma\, x\big)\, x x^T \qquad (111)$$

We do not make any assumption about the distribution of x, but we note that, if x is Gaussian, then F = 2Σ³ + Σ Tr(Σ²). We keep the assumption that the output noise is Gaussian and independent for different data points and tasks, with variance σ². Using the same notation as in the previous sections, we will also use the following expressions (for any p × p matrix A):

$$\mathbb{E}\big[X^T X\big] = n\,\Sigma \qquad (112)$$

$$\mathbb{E}\,\mathrm{Tr}\big[\Sigma\, X^T X A X^T X\big] = \mathrm{Tr}\left\{A\left[n^2 \Sigma^3 + n\,(F - \Sigma^3)\right]\right\} \qquad (113)$$

We proceed to derive the same formula under these less restrictive assumptions, in the overparameterized case only, following the same derivation as in section 7.3. We further assume ω0 = 0, w0 = 0. Again we start from the expression in Eq. 24 for the test output, and we rewrite the test loss in Eq. 27 as

$$\overline{L}_{test} = \mathbb{E}\; \frac{1}{2 n_s}\,\big|X^s (w' - \theta^\star) + z^s\big|^2 \qquad (114)$$

We average this expression with respect to Xs, zs, noting that θ* does not depend on the test data. We further average with respect to w′, but note that θ* depends on the test parameters, so we average only terms that do not depend on θ*. Using Eq. 112, the result is

$$\overline{L}_{test} = \frac{\sigma^2}{2} + \frac{1}{2}\,\mathrm{Tr}(\Sigma \Sigma_w) + \mathbb{E}\left[\frac{1}{2}\,\theta^{\star T} \Sigma\, \theta^\star - w'^T \Sigma\, \theta^\star\right] \qquad (115)$$

The second term in the expectation is linear in θ* and can be averaged over Xr, zr, using Eq. 25 and noting that ω* does not depend on the target data. The result is

$$\mathop{\mathbb{E}}_{X^r}\,\mathop{\mathbb{E}}_{z^r}\, \theta^\star = (I - \alpha_r \Sigma)\,\omega^\star + \alpha_r \Sigma\, w' \qquad (116)$$

Furthermore, we show below (Eq. 128) that the following average holds:

$$\mathop{\mathbb{E}}_{w}\,\mathop{\mathbb{E}}_{z^t}\,\mathop{\mathbb{E}}_{z^v}\,\omega^\star = 0 \qquad (117)$$

Combining Eqs. 116 and 117, we can calculate the second term in the expectation of Eq. 115 and find

$$\overline{L}_{test} = \frac{\sigma^2}{2} + \frac{1}{2}\,\mathrm{Tr}(\Sigma \Sigma_w) - \alpha_r\,\mathrm{Tr}\big(\Sigma^2 \Sigma_w\big) + \mathbb{E}\,\frac{1}{2}\,\theta^{\star T} \Sigma\, \theta^\star \qquad (118)$$

We start by averaging the last term of this expression over zr, w′, using Eq. 25 and noting that ω* does not depend on the target data and test parameters. The result is

$$\mathop{\mathbb{E}}_{w'}\,\mathop{\mathbb{E}}_{z^r}\, \theta^{\star T}\Sigma\,\theta^\star = \mathrm{Tr}\left[\Sigma\left(I - \frac{\alpha_r}{n_r}\, X^{rT} X^r\right)\omega^\star \omega^{\star T}\left(I - \frac{\alpha_r}{n_r}\, X^{rT} X^r\right)\right] + \frac{\alpha_r^2 \sigma^2}{n_r^2}\,\mathrm{Tr}\big[X^r \Sigma X^{rT}\big] + \frac{\alpha_r^2}{n_r^2}\,\mathrm{Tr}\big[\Sigma\, X^{rT} X^r \Sigma_w X^{rT} X^r\big] \qquad (119, 120)$$

We now average over Xr, again noting that ω* does not depend on the target data. Using Eqs. 112, 113, we find

$$\mathop{\mathbb{E}}_{X^r}\,\mathop{\mathbb{E}}_{w'}\,\mathop{\mathbb{E}}_{z^r}\, \theta^{\star T}\Sigma\,\theta^\star = \mathrm{Tr}\left\{\omega^\star \omega^{\star T}\left[\Sigma\,(I - \alpha_r \Sigma)^2 + \frac{\alpha_r^2}{n_r}\,(F - \Sigma^3)\right]\right\} + \frac{\alpha_r^2 \sigma^2}{n_r}\,\mathrm{Tr}(\Sigma^2) + \alpha_r^2\,\mathrm{Tr}\left\{\Sigma_w\left[\Sigma^3 + \frac{1}{n_r}\,(F - \Sigma^3)\right]\right\} \qquad (121, 122)$$

We can now rewrite the average test loss in Eq. 118 as

$$\overline{L}_{test} = \frac{\sigma^2}{2}\left[1 + \frac{\alpha_r^2}{n_r}\,\mathrm{Tr}(\Sigma^2)\right] + \frac{1}{2}\,\mathrm{Tr}\left[\big(\Sigma_w + \mathbb{E}\,\omega^\star\omega^{\star T}\big)\, H_r\right] \qquad (123)$$

where we define the following matrix:

$$H_r = \Sigma\,(I - \alpha_r \Sigma)^2 + \frac{\alpha_r^2}{n_r}\,(F - \Sigma^3) \qquad (124)$$

In order to average the last term, we need an expression for ω*. We note that the loss in Eq. 20 is quadratic in ω, therefore the solution of Eq. 22 can be found using standard linear algebra. In particular, the loss in Eq. 20 can be rewritten as

$$L_{meta} = \frac{1}{2 n_v m}\,|\gamma - B\omega|^2 \qquad (125)$$

where γ is a vector of shape nvm × 1, and B is a matrix of shape nvm × p. The vector γ is a stack of m vectors:

$$\gamma = \begin{pmatrix} X^{v(1)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(1)T} X^{t(1)}\big)\, w^{(1)} - \frac{\alpha_t}{n_t}\, X^{v(1)} X^{t(1)T} z^{t(1)} + z^{v(1)} \\ \vdots \\ X^{v(m)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(m)T} X^{t(m)}\big)\, w^{(m)} - \frac{\alpha_t}{n_t}\, X^{v(m)} X^{t(m)T} z^{t(m)} + z^{v(m)} \end{pmatrix} \qquad (126)$$

Similarly, the matrix B is a stack of m matrices:

$$B = \begin{pmatrix} X^{v(1)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(1)T} X^{t(1)}\big) \\ \vdots \\ X^{v(m)}\big(I_p - \frac{\alpha_t}{n_t} X^{t(m)T} X^{t(m)}\big) \end{pmatrix} \qquad (127)$$

In the overparameterized case (p > nvm), under the assumption that the inverse of BBᵀ exists, the value of ω that minimizes Eq. 125, and that also has minimum norm, is equal to
$$\omega^\star = B^T (BB^T)^{-1} \gamma \qquad (128)$$

Note that the matrix B does not depend on w, zt, zv, and Ew Ezt Ezv γ = 0, therefore Eq. 117 holds. In order to finish calculating Eq. 123, we need to average the following term:

$$\mathrm{Tr}\big(H_r\,\omega^\star\omega^{\star T}\big) = \mathrm{Tr}\left[(BB^T)^{-1}\,\gamma\gamma^T\,(BB^T)^{-1}\,\big(B H_r B^T\big)\right] \qquad (129)$$

where we used the cyclic property of the trace. We start by averaging γγᵀ over w, zt, zv, since B does not depend on those variables. Note that w, zt, zv are independent of each other and across tasks. We denote by Γ the result of this operation, which is equal to a block diagonal matrix:

$$\Gamma = \mathop{\mathbb{E}}_{w}\,\mathop{\mathbb{E}}_{z^t}\,\mathop{\mathbb{E}}_{z^v}\, \gamma\gamma^T = \begin{pmatrix} \Gamma^{(1)} & & 0 \\ & \ddots & \\ 0 & & \Gamma^{(m)} \end{pmatrix} \qquad (130)$$

where the matrix blocks are given by the following expression:

$$\Gamma^{(i)} = X^{v(i)}\left(I - \frac{\alpha_t}{n_t}\, X^{t(i)T} X^{t(i)}\right)\Sigma_w\left(I - \frac{\alpha_t}{n_t}\, X^{t(i)T} X^{t(i)}\right) X^{v(i)T} + \sigma^2\left(I_{n_v} + \frac{\alpha_t^2}{n_t^2}\, X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(i)T}\right) \qquad (131, 132)$$

Finally, we need to average over the training and validation data:

$$\mathbb{E}\,\mathrm{Tr}\big(H_r\,\omega^\star\omega^{\star T}\big) = \mathop{\mathbb{E}}_{X^t}\,\mathop{\mathbb{E}}_{X^v}\,\mathrm{Tr}\left[(BB^T)^{-1}\,\Gamma\,(BB^T)^{-1}\,\big(B H_r B^T\big)\right] \qquad (133)$$

These averages are hard to compute since they involve nonlinear functions of the data. However, we can approximate these terms by assuming that p and nt are large, both of order O(ξ), where ξ is a large number. Furthermore, we assume that Tr(Σw²) is of order O(ξ⁻¹), and that the variances of matrix products of the rescaled inputs x/√p, up to sixth order, are all of order O(ξ⁻¹); in particular:

$$\mathrm{Var}\left(\frac{1}{p}\, X^{v(i)} X^{v(j)T}\right) = O(\xi^{-1}) \qquad (134)$$

$$\mathrm{Var}\left(\frac{1}{p^2}\, X^{v(i)} X^{t(i)T} X^{t(i)} X^{v(j)T}\right) = O(\xi^{-1}) \qquad (135)$$

$$\mathrm{Var}\left(\frac{1}{p^3}\, X^{v(i)} X^{t(i)T} X^{t(i)} X^{t(j)T} X^{t(j)} X^{v(j)T}\right) = O(\xi^{-1}) \qquad (136)$$

Then, using Eqs. 112, 113 and the expressions for B (Eq. 127) and Γ (Eqs. 130, 131), we can prove that

$$BB^T = \mathrm{Tr}(H_t)\, I_{n_v m} + O(\xi^{1/2}) \qquad (137)$$

$$\Gamma = \left\{\mathrm{Tr}(\Sigma_w H_t) + \sigma^2\left[1 + \frac{\alpha_t^2}{n_t}\,\mathrm{Tr}(\Sigma^2)\right]\right\} I_{n_v m} + O(\xi^{1/2}) \qquad (138)$$

$$B H_r B^T = \mathrm{Tr}(H_r H_t)\, I_{n_v m} + O(\xi^{1/2}) \qquad (139)$$

where, similar to Eq. 124, we define

$$H_t = \Sigma\,(I - \alpha_t \Sigma)^2 + \frac{\alpha_t^2}{n_t}\,(F - \Sigma^3) \qquad (140)$$

Note that all these terms are of order O(ξ). The inverse of BBᵀ can be found by a Taylor expansion:

$$(BB^T)^{-1} = \mathrm{Tr}(H_t)^{-1}\, I_{n_v m} + O(\xi^{-3/2}) \qquad (141)$$

Substituting these expressions in Eq. 133, we find

$$\mathbb{E}\,\mathrm{Tr}\big(H_r\,\omega^\star\omega^{\star T}\big) = n_v m\;\frac{\mathrm{Tr}(H_r H_t)\left\{\mathrm{Tr}(\Sigma_w H_t) + \sigma^2\left[1 + \frac{\alpha_t^2}{n_t}\,\mathrm{Tr}(\Sigma^2)\right]\right\}}{\mathrm{Tr}(H_t)^2} + O(\xi^{-3/2}) \qquad (142)$$

Substituting this expression into Eq. 123, we find the value of the average test loss:

$$\overline{L}_{test} = \frac{1}{2}\,\mathrm{Tr}(\Sigma_w H_r) + \frac{\sigma^2}{2}\left[1 + \frac{\alpha_r^2}{n_r}\,\mathrm{Tr}(\Sigma^2)\right] + \frac{n_v m}{2}\;\frac{\mathrm{Tr}(H_r H_t)\left\{\mathrm{Tr}(\Sigma_w H_t) + \sigma^2\left[1 + \frac{\alpha_t^2}{n_t}\,\mathrm{Tr}(\Sigma^2)\right]\right\}}{\mathrm{Tr}(H_t)^2} + O(\xi^{-3/2}) \qquad (143, 144)$$" } ]
2021
META-LEARNING WITH NEGATIVE LEARNING RATES
SP:638a6687e5846937cea0e0be3a6e68ad743a787d
[ "In order to improve the robustness of the learned models, prior work has proposed various data augmentation techniques and different ways of incorporating them into training. This work seeks to provide a general understanding of how we should train with augmented samples in order to learn robust and invariant models from both theoretical and empirical perspectives. More importantly, the authors showed that the regularization of the augmented samples in the training procedure can be inspired from the theoretical analysis since it directly suggests the ideal regularization." ]
Data augmentation is one of the most popular techniques for improving the robustness of neural networks. In addition to directly training the model with original samples and augmented samples, a torrent of methods regularizing the distance between embeddings/representations of the original samples and their augmented counterparts have been introduced. In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings. Our analysis suggests that the ideal choices of regularization correspond to various assumptions. With an invariance test, we argue that regularization is important if the model is to be used in a broader context than the accuracy-driven setting, because non-regularized approaches are limited in learning the concept of invariance, despite equally high accuracy. Finally, we also show that the generic approach we identified (squared ℓ2 norm regularized augmentation) outperforms several recent methods, which are each specially designed for one task and significantly more complicated than ours, over three different tasks.
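For readers who want a concrete picture of the generic approach this abstract identifies, the following is a minimal, hypothetical PyTorch sketch of training with augmented samples plus a squared ℓ2 penalty between their embeddings; the encoder/head split and all names are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def regularized_loss(encoder, head, x, x_aug, y, lam=1.0):
    # Supervised loss on original and augmented samples, plus a squared
    # l2 distance between their representations (the regularizer).
    z, z_aug = encoder(x), encoder(x_aug)
    ce = F.cross_entropy(head(z), y) + F.cross_entropy(head(z_aug), y)
    reg = (z - z_aug).pow(2).sum(dim=1).mean()
    return ce + lam * reg
```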
[]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Information dropout: Learning optimal representations through noisy computation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Akari Asai", "Hannaneh Hajishirzi" ], "title": "Logic-guided data augmentation and regularization for consistent question answering, 2020", "venue": null, "year": 2020 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Sergey Bobkov", "Michel Ledoux" ], "title": "One-dimensional empirical measures, order statistics, and Kantorovich transport distances, volume 261", "venue": "American Mathematical Society,", "year": 2019 }, { "authors": [ "Olivier Bousquet", "Stéphane Boucheron", "Gábor Lugosi" ], "title": "Introduction to statistical learning theory", "venue": "In Summer School on Machine Learning,", "year": 2003 }, { "authors": [ "Shuxiao Chen", "Edgar Dobriban", "Jane H Lee" ], "title": "A group-theoretic framework for data augmentation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Marco Cuturi", "Arnaud Doucet" ], "title": "Fast computation of wasserstein barycenters", "venue": null, "year": 2014 }, { "authors": [ "Alhussein Fawzi", "Horst Samulowitz", "Deepak S. 
Turaga", "Pascal Frossard" ], "title": "Adaptive data augmentation for image classification", "venue": "IEEE International Conference on Image Processing,", "year": 2016 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Ian Goodfellow" ], "title": "Nips 2016 tutorial: Generative adversarial networks", "venue": "arXiv preprint arXiv:1701.00160,", "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron Courville" ], "title": "Improved training of wasserstein gans, 2017", "venue": null, "year": 2017 }, { "authors": [ "Hao Guo", "Kang Zheng", "Xiaochuan Fan", "Hongkai Yu", "Song Wang" ], "title": "Visual attention consistency under image transforms for multi-label image classification", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alex Hernández-Garcı́a", "Peter König" ], "title": "Data augmentation instead of explicit regularization, 2018", "venue": null, "year": 2018 }, { "authors": [ "Daniel Ho", "Eric Liang", "Xi Chen", "Ion Stoica", "Pieter Abbeel" ], "title": "Population based augmentation: Efficient learning of augmentation policy schedules", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhiting Hu", "Bowen Tan", "Russ R Salakhutdinov", "Tom M Mitchell", "Eric P Xing" ], "title": "Learning data manipulation for augmentation and weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks. 
In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Jisoo Jeong", "Seungeui Lee", "Jeesoo Kim", "Nojun Kwak" ], "title": "Consistency-based semi-supervised learning for object detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing, 2018", "venue": null, "year": 2018 }, { "authors": [ "Risi Kondor", "Zhen Lin", "Shubhendu Trivedi" ], "title": "Clebsch–gordan nets: a fully fourier space spherical convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Davis Liang", "Zhiheng Huang", "Zachary C Lipton" ], "title": "Learning noise-invariant representations for robust speech recognition", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2018 }, { "authors": [ "Percy Liang" ], "title": "Cs229t/stat231: Statistical learning theory (winter", "venue": null, "year": 2016 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin D Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch gaussian augmentation", "venue": null, "year": 1906 }, { "authors": [ "Shashank Rajput", "Zhili Feng", "Zachary Charles", "Po-Ling Loh", "Dimitris Papailiopoulos" ], "title": "Does data augmentation lead to positive margin", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mehdi Sajjadi", "Mehran Javanmardi", "Tolga Tasdizen" ], "title": "Regularization with stochastic transformations and perturbations for deep semi-supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Meet Shah", "Xinlei Chen", "Marcus Rohrbach", "Devi Parikh" ], "title": "Cycle-consistency for robust visual question answering", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Connor Shorten", "Taghi M Khoshgoftaar" ], "title": "A survey on image data augmentation for deep learning", "venue": "Journal of Big Data,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Kai Sheng Tai", "Peter Bailis", "Gregory Valiant" ], "title": "Equivariant transformer networks", "venue": "arXiv preprint arXiv:1901.11399,", "year": 2019 }, { "authors": [ "Zhuozhuo Tu", "Jingwei Zhang", "Dacheng Tao" ], "title": "Theoretical analysis of adversarial learning: A minimax approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Cédric Villani" ], "title": "Topics in optimal transportation", "venue": "Number 58. American Mathematical Soc.,", "year": 2003 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Eric P. Xing", "Zachary C. 
Lipton" ], "title": "Learning robust global representations by penalizing local predictive power, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Haohan Wang", "Zexue He", "Zachary C. Lipton", "Eric P. Xing" ], "title": "Learning robust representations by projecting superficial statistics out", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Haohan Wang", "Xindi Wu", "Zeyi Huang", "Eric P. Xing" ], "title": "High frequency component helps explain the generalization of convolutional neural networks", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yulin Wang", "Xuran Pan", "Shiji Song", "Hong Zhang", "Gao Huang", "Cheng Wu" ], "title": "Implicit semantic data augmentation for deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xindi Wu", "Yijun Mao", "Haohan Wang", "Xiangrui Zeng", "Xin Gao", "Eric P. Xing", "Min Xu" ], "title": "Regularized adversarial training (RAT) for robust cellular electron cryo tomograms classification", "venue": "IEEE International Conference on Bioinformatics and Biomedicine,", "year": 2019 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "Saining Xie", "Tianbao Yang", "Xiaoyu Wang", "Yuanqing Lin" ], "title": "Hyper-class augmented and regularized deep learning for fine-grained image classification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Fanny Yang", "Zuowen Wang", "Christina Heinze-Deml" ], "title": "Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Richard Zhang" ], "title": "Making convolutional networks shift-invariant again", "venue": "arXiv preprint arXiv:1904.11486,", "year": 2019 }, { "authors": [ "Xinyu Zhang", "Qiang Wang", "Jian Zhang", "Zhao Zhong" ], "title": "Adversarial autoaugment", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhirui Zhang", "Shuangzhi Wu", "Shujie Liu", "Mu Li", "Ming Zhou", "Tong Xu" ], "title": "Regularizing neural machine translation by target-bidirectional agreement", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Barret Zoph", "Ekin D. Cubuk", "Golnaz Ghiasi", "Tsung-Yi Lin", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning data augmentation strategies for object detection, 2019", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent advances in deep learning has delivered remarkable empirical performance over i.i.d test data, and the community continues to investigate the more challenging and realistic scenario when models are tested in robustness over non-i.i.d data (e.g., Ben-David et al., 2010; Szegedy et al., 2013). Recent studies suggest that one cause of the fragility is the model’s tendency in capturing undesired signals (Wang et al., 2020), thus combating this tendency may be a key to robust models.\nTo help models ignore the undesired signals, data augmentation (i.e., diluting the undesired signals of training samples by applying transformations to existing examples) is often used. Given its widely usage, we seek to answer the question: how should we train with augmented samples so that the assistance of augmentation can be taken to the fullest extent to learn robust and invariant models?\nIn this paper, We analyze the generalization behaviors of models trained with augmented data and associated regularization techniques. We investigate a set of assumptions and compare the worstcase expected risk over unseen data when i.i.d samples are allowed to be transformed according to a function belonging to a family. We bound the expected risk with terms that can be computed during training, so that our analysis can inspire how to regularize the training procedure. While all the derived methods have an upper bound of the expected risk, with progressively stronger assumptions, we have progressively simpler regularization, allowing practical choices to be made according to the understanding of the application. Our contributions of this paper are as follows:\n• We offer analyses of the generalization behaviors of augmented models trained with different regularizations: these regularizations require progressively stronger assumptions of the data and the augmentation functions, but progressively less computational efforts. For example, with assumptions pertaining to augmentation transformation functions, the Wasserstein distance over the original and augmented empirical distributions can be calculated through simple `1 norm distance.\n• We test and compare these methods and offer practical guidance on how to choose regularizations in practice. In short, regularizing the squared `2 distance of logits between the augmented samples and original samples is a favorable method, suggested by both theoretical and empirical evidence.\n• With an invariance test, we argue that vanilla augmentation does not utilize the augmented samples to the fullest extent, especially in learning invariant representations, thus may not be ideal unless the only goal of augmentation is to improve the accuracy over a specific setting." }, { "heading": "2 RELATED WORK & KEY DIFFERENCES", "text": "Data augmentation has been used effectively for years. Tracing back to the earliest convolutional neural networks, we notice that even the LeNet applied on MNIST dataset has been boosted by mixing the distorted images to the original ones (LeCun et al., 1998). Later, the rapidly growing machine learning community has seen a proliferate development of data augmentation techniques (e.g., flipping, rotation, blurring etc.) that have helped models climb the ladder of the state-of-theart (one may refer to relevant survey (Shorten & Khoshgoftaar, 2019) for details). 
Recent advances have expanded the conventional concept of data augmentation with several new approaches, such as leveraging the information in unlabelled data (Xie et al., 2019), automatically learning augmentation functions (Ho et al., 2019; Hu et al., 2019; Wang et al., 2019c; Zhang et al., 2020; Zoph et al., 2019), and generating the (constrained) samples that maximize the training loss along training (Fawzi et al., 2016), which was later widely accepted as adversarial training (Madry et al., 2018).
While the above works mainly discuss how to generate the augmented samples, in this paper we mainly answer the question of how to train models with augmented samples. For example, instead of directly mixing augmented samples with the original samples, one can consider regularizing the representations (or outputs) of original samples and augmented samples to be close under a distance metric (also known as a consistency loss). Many concrete ideas have been explored in different contexts: ℓ2 distance and cosine similarity between internal representations in speech recognition (Liang et al., 2018), squared ℓ2 distance between logits (Kannan et al., 2018) or KL divergence between softmax outputs (Zhang et al., 2019a) in adversarially robust vision models, and Jensen–Shannon divergence (of three distributions) between embeddings for texture-invariant image classification (Hendrycks et al., 2020). These are but a few highlights of the concrete and successful implementations for different applications out of a huge collection (e.g., (Wu et al., 2019; Guo et al., 2019; Zhang et al., 2019b; Shah et al., 2019; Asai & Hajishirzi, 2020; Sajjadi et al., 2016; Zheng et al., 2016; Xie et al., 2015)), and one can easily imagine methods permuting these three elements (distance metrics, representations or outputs, and applications) being invented. Even further, although we are not aware of the following methods in the context of data augmentation, given the popularity of GANs (Goodfellow, 2016) and the domain adversarial neural network (Ganin et al., 2016), one can also expect the distance metric to generalize to a specialized discriminator (i.e., a classifier), which can be intuitively understood as a calculated (usually maximized) distance measure, with the Wasserstein-1 metric as an example (Arjovsky et al., 2017; Gulrajani et al., 2017).
Key Differences: With this rich collection of regularization choices, which method should we consider in general? More importantly, do we actually need the regularization at all? These questions are important for multiple reasons, especially considering that there are papers suggesting that these regularizations may lead to worse results (Jeong et al., 2019). In this paper, we answer the first question with a proved upper bound on the worst-case generalization error, and our upper bound explicitly describes which regularizations are needed. For the second question, we will show that the regularization helps the model to learn the concept of invariance.
There are also several previous discussions regarding detailed understandings of data augmentation (Yang et al., 2019; Chen et al., 2019; Hernández-Garcı́a & König, 2018; Rajput et al., 2019; Dao et al., 2019), among which (Yang et al., 2019) is probably the most relevant as it also defends the usage of regularizations. However, we believe our discussions are more comprehensive and better supported theoretically, since our analysis directly suggests the ideal regularization.
Also, empirically, we design an invariance test in addition to the worst-case accuracy used in the preceding work." }, { "heading": "3 TRAINING STRATEGIES WITH AUGMENTED DATA", "text": "Notations: (X, Y) denotes the data, where X ∈ R^{n×p} and Y ∈ {0, 1}^{n×k} (one-hot vectors for k classes), and f(·; θ) denotes the model, which takes in the data and outputs the softmax (probabilities of the prediction), with θ denoting the corresponding parameters. g(·) completes the prediction (i.e., it maps the softmax output to a one-hot prediction). l(·, ·) denotes a generic loss function. a(·) denotes a transformation that alters the undesired signals of a sample, i.e., the data augmentation method; a ∈ A, the set of transformation functions. P denotes the distribution of (x, y). For any sampled (x, y), we can have (a(x), y), and we use P_a to denote the distribution of these transformed samples. r(·; θ) denotes the risk of model θ. The hat notation ˆ· denotes the empirical estimate of the term ·." }, { "heading": "3.1 WELL-BEHAVED DATA TRANSFORMATION FUNCTION", "text": "Despite the strong empirical performance data augmentation has demonstrated, it should be intuitively expected that the performance can only be improved when the augmentation is chosen wisely. Therefore, before we proceed to analyze the behaviors of training with data augmentation, we first regulate some basic properties of the data transformation functions used. Intuitively, we consider the following three properties.
• “Dependence-preservation”, with two perspectives: label-wise, the transformation cannot alter the label of the data, which is a central requirement of almost all data augmentation practice; feature-wise, the transformation will not introduce new dependencies between the samples.
• “Efficiency”: the augmentation should only generate new samples of the same label as minor perturbations of the original one. If a transformation violates this property, there should exist other simpler transformations that can generate the same target sample.
• “Vertices”: there are extreme cases among the transformations. For example, if one needs the model to be invariant to rotations from 0° to 60°, we consider the vertices to be the 0° rotation (thus the identity map) and the 60° rotation. In practice, one usually selects the transformation vertices with intuition and domain knowledge.
We now formally define these three properties. The definitions depend on the model, so these properties regulate not only the transformation functions but also the model. We introduce Assumptions A1–A3 corresponding to the three properties.
A1 (Dependence-preservation): the transformation function will not alter the dependency regarding the label (i.e., for any a(·) ∈ A, a(x) has the same label as x) or the features (i.e., for any a1(·), a2(·) ∈ A, a1(x1) ⊥⊥ a2(x2) for any x1, x2 ∈ X with x1 ≠ x2).
A2 (Efficiency): for θ̂ and any a(·) ∈ A, f(a(x); θ̂) is closer to the embedding of x than to that of any other sample under a distance metric d_e(·, ·), i.e., d_e(f(a(x); θ̂), f(x; θ̂)) ≤ min_{x′∈X∖{x}} d_e(f(a(x); θ̂), f(x′; θ̂)).
A3 (Vertices): for a model θ̂ and a transformation a(·), we use P_{a,θ̂} to denote the distribution of f(a(x); θ̂) for (x, y) ∼ P. “Vertices” states that there exist two extreme elements in A, namely a+ and a−, such that for a certain metric d_x(·, ·) we have
d_x(P_{a+,θ̂}, P_{a−,θ̂}) = sup_{a1,a2∈A} d_x(P_{a1,θ̂}, P_{a2,θ̂})   (1)
Note that d_x(·, ·) is a metric over two distributions and d_e(·, ·) is a metric over two samples. Also, slightly different from the intuitive understanding of “vertices” above, A3 regulates the behavior of the embedding instead of the raw data. All of our follow-up analysis will require A1 to hold, but with more assumptions held, we can obtain computationally lighter methods with bounded error.
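To make the vertex property concrete in code, the following is a minimal, hypothetical sketch (our own illustration; the angle range, the helper name make_rotation_family, and the use of torchvision are assumptions, not part of the paper) of a rotation family A whose two vertices are the identity map and the maximal rotation:

# Hypothetical rotation family A; torchvision is an assumed dependency.
import torch
import torchvision.transforms.functional as TF

def make_rotation_family(max_angle=60.0, num_members=7):
    """Transformations rotating by angles evenly spaced in [0, max_angle].
    A[0] is the identity (0 degree) vertex a-, A[-1] is the max_angle vertex a+."""
    angles = [max_angle * i / (num_members - 1) for i in range(num_members)]
    return [lambda img, ang=ang: TF.rotate(img, ang) for ang in angles]

A = make_rotation_family()
x = torch.randn(8, 3, 32, 32)      # a dummy image batch
views = [a(x) for a in A]          # label-preserving views of x, as required by A1

Under A3, evaluating the supremum in (1) would then only require comparing the two endpoint members A[0] and A[-1].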
" }, { "heading": "3.2 BACKGROUND, ROBUSTNESS, AND INVARIANCE", "text": "One central goal of machine learning is to understand the generalization error. When the test data and training data are from the same distribution, many previous analyses can be sketched as:
r_P(θ̂) ≤ r̂_P(θ̂) + φ(|Θ|, n, δ)   (2)
which states that the expected risk can be bounded by the empirical risk plus a function of the hypothesis space |Θ| and the number of samples n; δ accounts for the probability with which the bound holds, and φ(·) is a function of these three terms. Depending on the details of different analyses, different concrete examples of this generic term will need different assumptions. We use a generic assumption A4 to denote the assumptions required for each example. More concrete discussions are in Appendix A.
Robustness: In addition to the generalization error above, we also study robustness, following the established definition as the worst-case expected risk when the test data are allowed to be shifted to some other distribution by transformation functions in A. Formally, we study
r_{P′}(θ̂) = E_{(x,y)∼P} max_{a∈A} I(g(f(a(x); θ̂)) ≠ y)   (3)
Since r_P(θ̂) ≤ r_{P′}(θ̂), we only need to study (3). We will analyze (3) in different scenarios involving different assumptions and offer formalizations of the generalization bounds under each scenario. Our bounds will also immediately inspire the development of methods in each scenario, as the terms involved in our bounds are all computable within a reasonable computational load.
Invariance: In addition to robustness, we are also interested in whether the model learns to be invariant to the undesired signals. Intuitively, if data augmentation is used to help dilute the undesired signals from the data by altering them with a(·) ∈ A, a successfully trained model with augmented data will map the raw data with various undesired signals to the same embedding. Thus, we study the following metric to quantify the model’s ability to learn invariant representations:
I(θ̂, P) = sup_{a1,a2∈A} d_x(P_{a1,θ̂}, P_{a2,θ̂}),   (4)
where P_{a,θ̂} denotes the distribution of f(a(x); θ̂) for (x, y) ∼ P. d_x(·, ·) is a distance over two distributions, and we suggest using the Wasserstein metric given its favorable properties (e.g., see practical examples in Figure 1 of (Cuturi & Doucet, 2014) or theoretical discussions in (Villani, 2008)). Due to the difficulty of assessing f(a(x); θ̂) (as it depends on θ̂), we mainly study (4) empirically, and argue that models trained with an explicit regularization of the empirical counterpart of (4) have a favorable invariance property.
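As a rough empirical handle on (4), the following hypothetical sketch (our own; the function name and the assumption that `model` returns the output representation are ours) estimates the invariance score on a batch, replacing the Wasserstein-1 metric with the paired ℓ1 distance between the naturally coupled outputs of a1(x) and a2(x) — a surrogate that Proposition 3.2 below justifies under A2:

# Hypothetical estimator of the invariance score I(theta, P) in Eq. (4).
import itertools
import torch

def invariance_score(model, x, A):
    """Worst coupled-l1 gap between outputs of any two transformations in A."""
    model.eval()
    with torch.no_grad():
        outs = [model(a(x)) for a in A]                  # each: (batch, k)
    worst = 0.0
    for o1, o2 in itertools.combinations(outs, 2):
        gap = (o1 - o2).abs().sum(dim=1).mean().item()   # paired l1 distance
        worst = max(worst, gap)
    return worst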
" }, { "heading": "3.3 WORST-CASE AUGMENTATION (ADVERSARIAL TRAINING)", "text": "We consider robustness first. (3) can be written equivalently as the expected risk over a pseudo distribution P′ (see Lemma 1 in (Tu et al., 2019)), which is the distribution that samples the data leading to the worst expected risk. Thus, equivalently, we can consider sup_{P′∈T(P,A)} r_{P′}(θ̂). With an assumption relating the worst distribution of the expected risk and the worst distribution of the empirical risk (namely, A5, in Appendix A), the bound of our interest (i.e., sup_{P′∈T(P,A)} r_{P′}(θ̂)) can be analogously analyzed through sup_{P′∈T(P,A)} r̂_{P′}(θ̂). By the definition of P′, we then have:
Lemma 3.1. With Assumptions A1, A4, and A5, with probability at least 1 − δ, we have
sup_{P′∈T(P,A)} r_{P′}(θ̂) ≤ (1/n) Σ_{(x,y)∼P} sup_{a∈A} I(g(f(a(x); θ̂)) ≠ y) + φ(|Θ|, n, δ)   (5)
This result is a straightforward follow-up of the preceding discussion. In practice, it aligns with adversarial training (Madry et al., 2018), a method that has demonstrated impressive empirical successes in the robust machine learning community.
While adversarial training has been valued for its empirical superiority, there remain two directions in which it can be improved: first, it lacks an explicit enforcement of the concept of invariance between the original sample and the transformed sample; second, it assumes that the elements of A are enumerable, so that (1/n) Σ_{(x,y)∼P} sup_{a∈A} I(g(f(a(x); θ̂)) ≠ y) is computable. The remaining discussion expands along these two directions." }, { "heading": "3.4 REGULARIZED WORST-CASE AUGMENTATION", "text": "To enforce the concept of invariance, the immediate solution is to apply a regularization that minimizes the distance between the embeddings learned from the original sample and those learned from the transformed samples. We offered a summary of such methods in Section 2.
To obtain a model with a small invariance score, the direct approach is to regularize the empirical counterpart of (4). We notice that existing methods barely consider this regularization, probably because of the computational difficulty of the Wasserstein distance. Conveniently, we have the following result that links ℓ1 regularization to the Wasserstein-1 metric in the context of data augmentation.
Proposition 3.2. With A2, and d_e(·, ·) in A2 chosen to be the ℓ1 norm, for any a ∈ A, we have
Σ_i ‖f(x_i; θ̂) − f(a(x_i); θ̂)‖_1 = W_1(f(x; θ̂), f(a(x); θ̂))   (6)
This result conveniently allows us to use the ℓ1 norm distance in place of the Wasserstein metric, integrating the advantages of the Wasserstein metric while avoiding practical issues such as computational complexity and the difficulty of passing gradients back during backpropagation.
We continue to discuss the generalization behaviors. Our analysis remains in the scope of multi-class classification, where the risk is evaluated as the misclassification rate and the model is optimized with the cross-entropy loss (with a consistent choice of logarithm base in the cross-entropy loss). This setup aligns with A4 and should represent modern neural network studies well enough.
Before we proceed, we need another technical assumption, A6 (details in Appendix A), which can be intuitively considered a tool that allows us to relax the classification error into the cross-entropy error, so that we can bound the generalization error with terms we can directly optimize during training.
We can now offer another technical result:
Theorem 3.3. With Assumptions A1, A2, A4, A5, and A6, and d_e(·, ·) in A2 being the ℓ1 norm, with probability at least 1 − δ, the worst-case generalization risk is bounded as
sup_{P′∈T(P,A)} r_{P′}(θ̂) ≤ r̂_P(θ̂) + Σ_i ‖f(x_i; θ̂) − f(x′_i; θ̂)‖_1 + φ(|Θ|, n, δ)   (7)
where x′ = a(x) and a = argmax_{a∈A} −y^⊤ log f(a(x); θ̂).
This technical result immediately inspires a method to guarantee worst-case performance, as well as to explicitly enforce the concept of invariance. Notice that a = argmax_{a∈A} −y^⊤ log f(a(x); θ̂) simply selects the augmentation function maximizing the cross-entropy loss, a standard used by many worst-case augmentation methods (e.g., Madry et al., 2018).
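A minimal sketch of the objective suggested by Theorem 3.3, assuming a finite family A so the inner maximization can be solved by enumeration; the weight `lam` is our own knob (the bound itself uses coefficient 1), the model is assumed to return logits, and all names are hypothetical:

# Hypothetical Theorem 3.3-style objective: empirical risk on the clean batch
# plus an l1 penalty between softmax outputs of the clean batch and its
# worst-case (maximum cross-entropy) transformed copy.
import torch
import torch.nn.functional as F

def regularized_worst_case_loss(model, x, y, A, lam=1.0):
    with torch.no_grad():   # pick a = argmax_a CE(f(a(x)), y) by enumerating A
        ce = torch.stack([F.cross_entropy(model(a(x)), y) for a in A])
    a_worst = A[int(ce.argmax())]
    logits_clean, logits_aug = model(x), model(a_worst(x))
    risk = F.cross_entropy(logits_clean, y)
    penalty = (F.softmax(logits_clean, dim=-1)
               - F.softmax(logits_aug, dim=-1)).abs().sum(dim=1).mean()
    return risk + lam * penalty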
}, { "heading": "3.5 REGULARIZED TRAINING WITH VERTICES", "text": "As A in practice is usually a set with a large number of (and possibly infinite) elements, we may not always be able to identify the worst case transformation function with reasonable computational efforts. This limitation also prevents us from effective estimating the generalization error as the bound requires the identification of the worst case transformation.\nOur final discussion is to leverage the vertex property of the transformation function to bound the worst case generalization error: Lemma 3.4. With Assumptions A1-A6, and de(·, ·) in A2 chosen as `1 norm distance, dx(·, ·) in A3 chosen as Wasserstein-1 metric, assuming there is a a0() 2 A where brPa0 (b✓) = 1 2 brPa+ (b✓) + brPa (b✓) , with probability at least 1 , we have:\nsup P02T (P,A)\nrP0(b✓) 1\n2\nbrPa+ (b✓) + brPa (b✓) +\nX\ni\n||f(a+(xi); b✓) f(a (x0); b✓)||1 + (|⇥|, n, )\nThis result inspires the method that can directly guarantee the worst case generalization result and can be optimized conveniently without searching for the worst-case transformations. However, this method requires a good domain knowledge of the vertices of the transformation functions." }, { "heading": "3.6 ENGINEERING SPECIFICATION OF RELEVANT METHODS", "text": "Our theoretical analysis has lead to a line of methods, however, not every method can be effectively implemented, especially due to the difficulties of passing gradient back for optimizations. Therefore, to boost the influence of the loss function through backpropagation, we recommend to adapt the methods with the following two changes: 1) the regularization is enforced on logits instead of softmax; 2) we use squared `2 norm instead of `1 norm because `1 norm is not differentiable everywhere. We discuss the effects of these compromises in ablation studies in Appendix E.\nAlso, in the cases where we need to identify the worst case transformation functions, we iterate through all the transformation functions and identify the function with the maximum loss.\nOverall, our analysis leads to the following main training strategies:\n• VA (vanilla augmentation): mix the augmented samples of a vertex function to the original ones for training (original samples are considered as from another vertex in following experiments).\n• VWA (vanilla worst-case augmentation): at each iteration, identify the worst-case transformation functions and train with samples generated by them (also known as adversarial training).\n• RA (regularized augmentation): regularizing the squared `2 distance over logits between the original samples and the augmented samples of a fixed vertex transformation function.\n• RWA (regularized worst-case augmentation): regularizing the squared `2 distance over logits between the original samples and the worst-case augmented samples identified at each iteration." }, { "heading": "4 EXPERIMENTS", "text": "We first use some synthetic experiments to verify our assumptions and inspect the consequences when the assumptions are not met (in Appendix C). Then, in the following paragraphs, we test the methods discussed to support our arguments in learning robustness and invariance. Finally, we show the power of our discussions by competing with advanced methods designed for specific tasks." }, { "heading": "4.1 EXPERIMENTS FOR LEARNING ROBUST & INVARIANT REPRESENTATION", "text": "Experiment Setup: We first test our arguments with two data sets and three different sets of the augmentations. 
" }, { "heading": "4 EXPERIMENTS", "text": "We first use synthetic experiments to verify our assumptions and inspect the consequences when the assumptions are not met (in Appendix C). Then, in the following paragraphs, we test the methods discussed to support our arguments on learning robustness and invariance. Finally, we show the power of our discussion by competing with advanced methods designed for specific tasks." }, { "heading": "4.1 EXPERIMENTS FOR LEARNING ROBUST & INVARIANT REPRESENTATION", "text": "Experiment Setup: We first test our arguments with two datasets and three different sets of augmentations. We study the MNIST dataset with the LeNet architecture, and the CIFAR10 dataset with the ResNet18 architecture. To examine the effects of the augmentation strategies, we disable all the heuristics that are frequently used to boost the test accuracy of models, such as the default augmentation adopted by many models trained for CIFAR10, and BatchNorm (also because of the recent arguments against the effects of BatchNorm in learning robust features (Wang et al., 2020)), although forgoing these heuristics results in a lower overall performance than one usually expects.
We consider three different sets of transformation functions: texture, rotation, and contrast. The details of these transformation functions and the experiment setup are in Appendix D.
We consider three different evaluation metrics:
• Clean: test accuracy on the original test data, mainly reported as a reference for the other metrics.
• Robustness: the worst accuracy when each sample can be transformed with any a ∈ A.
• Invariance: a metric to test whether the model learns the concept of invariance (details to follow).
Invariance test: To test whether a model truly learns the concept of invariance within A = {a1(·), a2(·), . . . , at(·)} of t elements, we design a new evaluation metric: for a sampled collection of data of the same label i, denoted as X^(i), we generate its transformed copies with A, resulting in X^(i)_{a1}, X^(i)_{a2}, . . . , X^(i)_{at}. We combine these copies into one dataset, denoted as 𝒳^(i). For every sample x in 𝒳^(i), we retrieve its t nearest neighbors among the samples in 𝒳^(i) (including x itself) and calculate the overlap between the retrieved samples and {a1(x), a2(x), . . . , at(x)}. Since the identity map is in A, the calculated overlap score lies in [1/t, 1]. The distance used is d(u, v) = ‖f(u; θ̂) − f(v; θ̂)‖_1, where θ̂ is the model we are examining. Finally, we report the score averaged over every label. A high overlap score thus indicates that the prediction of model θ̂ is invariant to the augmentation functions in A. If we use other distance functions, the reported values may differ, but we notice that the ranking of the compared methods in this test barely changes.
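The test above can be implemented directly; here is a minimal sketch (our own; names are assumptions, and the full t·N output matrix is kept in memory for simplicity):

# Hypothetical implementation of the invariance test for one label.
import torch

def invariance_overlap(model, x_class, A):
    """x_class: (N, ...) samples of one label; A: the t transformations
    (identity included). Returns the average overlap score in [1/t, 1]."""
    t, N = len(A), x_class.shape[0]
    with torch.no_grad():
        # row a_idx * N + j holds the output of transformation a_idx on sample j
        outs = torch.cat([model(a(x_class)) for a in A], dim=0)   # (t*N, k)
    dists = torch.cdist(outs, outs, p=1)            # pairwise l1 distances
    knn = dists.topk(t, largest=False).indices      # t nearest rows (self included)
    sample_id = torch.arange(t * N) % N             # which original sample a row is
    hits = sample_id[knn] == sample_id.unsqueeze(1)
    return hits.float().mean().item()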
Results: We show the results in Table 1 and Table 6 (in the Appendix) for MNIST and CIFAR10, respectively. Table 1 shows that RWA is generally the superior method in terms of all the metrics, especially the invariance evaluation, where it shows a much higher invariance score than the competing methods. We believe this advantage in invariance comes from two sources: the regularization, and the fact that RWA has seen all the augmentation functions in A. In comparison, RA also has the regularization but only sees the vertices in A, so the invariance score of RA is not comparable to that of RWA, although it is better than that of VA. Table 6 roughly tells the same story. More discussions are in Appendix D.
Other results (Appendix E): The strength of RWA can also be shown in several other scenarios, even in the out-of-domain test scenario where the transformation functions are not in A. RWA generally performs the best, although not the best in every single test. We also perform an ablation study to validate the choice of the squared ℓ2 norm over logits in contrast to other distance metrics. Our choice performs the best in the worst-case performance. This advantage is expected, as our choice is validated by theoretical arguments as well as considerations of engineering convenience.
Overall, the empirical performance aligns with our expectation from the theoretical discussion: while all the methods discussed have a bounded worst-case performance, we do not intend to compare the upper bounds, because smaller upper bounds do not necessarily guarantee a smaller risk. However, worst-case augmentation methods tend to show better worst-case performance because they have been augmented with all the elements in A. Also, there is no clear evidence suggesting a difference between augmentation methods and their regularized versions in terms of the worst-case performance, but it is clear that the regularization helps to learn the concept of invariance." }, { "heading": "4.2 COMPARISON TO ADVANCED METHODS", "text": "Finally, we also compare our generic data augmentation methods against several specifically designed methods in different applications. We use the four generic methods (VA, RA, VWA, and RWA) with generic transformation functions (the A of “rotation”, “contrast”, or “texture” used in the synthetic experiments). We compare our methods with techniques invented for three different topics of study (rotation invariance, texture perturbation, and cross-domain generalization), each of which has seen a long line of method development. We follow each topic’s own tradition (e.g., rotation methods are usually tested on the CIFAR10 dataset, seemingly due to the methods’ computational requirements), test on each topic’s most challenging dataset (e.g., ImageNet-Sketch is the most recent and challenging dataset in domain generalization, although less studied), and report each topic’s own evaluation metric (e.g., methods tested with ImageNet-C are usually evaluated with mCE).
Overall, the performance of our generic methods surpasses these advanced SOTA techniques. Thus, the main conclusions, as validated by these challenging scenarios, are: (1) the usage of data augmentation can outperform carefully designed methods; (2) the usage of the consistency loss can further improve the performance; (3) regularized worst-case augmentation generally works the best.
Due to the limitation of space, we leave the background details of these experiments to Appendix F, where we introduce the detailed experiment settings and explain the acronyms in Tables 2–4.
Rotation-invariant Image Classification: We test the models with nine different rotations including 0°. Augmentation-related methods only use the A of “rotation” from the synthetic experiments, so the testing scenario goes beyond what the augmentation methods have seen during training. The results in Table 2 strongly endorse the efficacy of augmentation-based methods. Interestingly, regularized augmentation methods, probably with the benefit of learning the concept of invariance, tend to behave well on the transformations not considered during training. Also, RA outperforms VWA on average.
Texture-perturbed ImageNet classification: We also test the performance of image classification over multiple perturbations. We train the model on the standard ImageNet training set and test the model with ImageNet-C data (Hendrycks & Dietterich, 2019), which is a perturbed version of ImageNet obtained by corrupting the original ImageNet validation set with a collection of noises. The results are reported in Table 3, which shows that our generic method can outperform the current SOTA methods after a continued finetuning process with reduced learning rates.
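For reference, mCE follows Hendrycks & Dietterich (2019): per-corruption errors over the five severities are summed and normalized by AlexNet’s corresponding errors, then averaged over corruptions. A small sketch (our own; the error tables are assumed inputs):

# Hypothetical mCE computation; error tables are assumed to be precomputed.
def mean_corruption_error(model_err, alexnet_err):
    """model_err / alexnet_err: dict mapping corruption name -> list of five
    top-1 error rates (one per severity). Returns the mCE."""
    ratios = [sum(errs) / sum(alexnet_err[c]) for c, errs in model_err.items()]
    return sum(ratios) / len(ratios)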
The results\nare reported in Table 3, which shows that our generic method can outperform the current SOTA methods after a continued finetuning process with reducing learning rates.\nCross-domain ImageNet-Sketch Classification We also compare to the methods used for crossdomain evaluation. with the most challenging setup in this scenario: train the models with standard ImageNet training data, and test the model over ImageNet-Sketch data (Wang et al., 2019a), which is a collection of sketches following the structure ImageNet validation set. Similarly, we only augment the samples with a generic augmentation set (A of “contrast” in synthetic experiments, Appendix D). The results in Table 4 again support the strength of the correct usage of data augmentation." }, { "heading": "5 CONCLUSION", "text": "In this paper, we conducted a systematic inspection to study the proper regularization techniques that are provably related to the generalization error of a machine learning model, when the test distribution are allowed to be perturbed by a family of transformation functions. With progressively more specific assumptions, we identified progressively simpler methods that can bound the worst case risk. We summarize the main take-home messages below:\n• Regularizing a norm distance between the logits of the originals samples and the logits of the augmented samples enjoys several merits: the trained model tend to have good worst cast performance, and can learn the concept of invariance (as shown in our invariance test). Although our theory suggests `1 norm, but we recommend squared `2 norm in practice considering the difficulties of passing the (sub)gradient of `1 norm in backpropagation.\n• With the vertex assumption held (it usually requires domain knowledge to choose the vertex functions), one can use “regularized training with vertices” method and get good empirical performance in both accuracy and invariance, and the method is at the same complexity order of vanilla training without data augmentation. When we do not have the domain knowledge (thus are not confident in the vertex assumption), we recommend “regularized worst-case augmentation”, which has the best performance overall, but requires extra computations to identify the worst-case augmentated samples at each iteration." } ]
2020
null
SP:33cd383e425b23699614bcff904cc4e52720c29c
[ "This paper provides a new analysis for the FedAvg algorithm, which assumes the data on different workers are non-IID and the objective functions are non-convex. The new analysis improved the existing bounds of FedAvg. Besides, the analysis is also extended to the non-stationary network, where the number of workers participating in the optimization may vary." ]
Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency and a linear speedup for convergence in training (i.e., convergence performance increases linearly with respect to the number of workers). However, existing studies on linear speedup for convergence are only limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. So far, it remains an open question whether or not the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL. In this paper, we show that the answer is affirmative. Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate O(1/√(mKT) + 1/T) for full worker participation and a convergence rate O(√K/√(nT) + 1/T) for partial worker participation, where K is the number of local steps, T is the number of total communication rounds, m is the total number of workers and n is the number of workers in one communication round under partial worker participation. Our results also reveal that the local steps in FL could help the convergence, and we show that the maximum number of local steps can be improved to T/m under full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.
[ { "affiliations": [], "name": "Haibo Yang" }, { "affiliations": [], "name": "Minghong Fang" }, { "affiliations": [], "name": "Jia Liu" } ]
[ { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Hubert Eichner", "Tomer Koren", "H Brendan McMahan", "Nathan Srebro", "Kunal Talwar" ], "title": "Semi-cyclic stochastic gradient descent", "venue": "arXiv preprint arXiv:1904.10120,", "year": 2019 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Tzu-Ming Harry Hsu", "Hang Qi", "Matthew Brown" ], "title": "Measuring the effects of non-identical data distribution for federated visual classification", "venue": null, "year": 1909 }, { "authors": [ "Li Huang", "Yifeng Yin", "Zeng Fu", "Shifa Zhang", "Hao Deng", "Dianbo Liu" ], "title": "Loadaboost: Loss-based adaboost federated machine learning on medical data", "venue": "arXiv preprint arXiv:1811.12629,", "year": 2018 }, { "authors": [ "Eunjeong Jeong", "Seungeun Oh", "Hyesung Kim", "Jihong Park", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data", "venue": "arXiv preprint arXiv:1811.11479,", "year": 2018 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "Scaffold: Stochastic controlled averaging for on-device federated learning", "venue": null, "year": 1910 }, { "authors": [ "Ahmed Khaled", "Konstantin Mishchenko", "Peter Richtárik" ], "title": "Better communication complexity for local sgd", "venue": "arXiv preprint arXiv:1909.04746,", "year": 2019 }, { "authors": [ "Ahmed Khaled", "Konstantin Mishchenko", "Peter Richtárik" ], "title": "First analysis of local gd on heterogeneous data", "venue": "arXiv preprint arXiv:1909.04715,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "arXiv preprint arXiv:1812.06127,", "year": 2018 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "arXiv preprint arXiv:1908.07873,", "year": 2019 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "arXiv preprint arXiv:1907.02189,", "year": 2019 }, { "authors": [ "Tao Lin", "Sebastian U Stich", "Kumar Kshitij Patel", "Martin Jaggi" ], "title": "Don’t use large mini-batches, use local sgd", "venue": "arXiv preprint arXiv:1808.07217,", "year": 2018 
}, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "Sashank Reddi", "Zachary Charles", "Manzil Zaheer", "Zachary Garrett", "Keith Rush", "Jakub Konecny", "Sanjiv Kumar", "H Brendan McMahan" ], "title": "Adaptive federated optimization", "venue": "arXiv preprint arXiv:2003.00295,", "year": 2020 }, { "authors": [ "Felix Sattler", "Simon Wiedemann", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "Robust and communication-efficient federated learning from non-iid data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "Sebastian U Stich" ], "title": "Local sgd converges fast and communicates little", "venue": "arXiv preprint arXiv:1805.09767,", "year": 2018 }, { "authors": [ "Sebastian U Stich", "Sai Praneeth Karimireddy" ], "title": "The error-feedback framework: Better rates for sgd with delayed gradients and compressed communication", "venue": null, "year": 1909 }, { "authors": [ "Sebastian U Stich", "Jean-Baptiste Cordonnier", "Martin Jaggi" ], "title": "Sparsified sgd with memory", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jianyu Wang", "Gauri Joshi" ], "title": "Cooperative sgd: A unified framework for the design and analysis of communication-efficient sgd algorithms", "venue": "arXiv preprint arXiv:1808.07576,", "year": 2018 }, { "authors": [ "Jianyu Wang", "Vinayak Tantia", "Nicolas Ballas", "Michael Rabbat" ], "title": "Slowmo: Improving communication-efficient distributed sgd with slow momentum", "venue": "arXiv preprint arXiv:1910.00643,", "year": 2019 }, { "authors": [ "Shiqiang Wang", "Tiffany Tuor", "Theodoros Salonidis", "Kin K Leung", "Christian Makaya", "Ting He", "Kevin Chan" ], "title": "Adaptive federated learning in resource constrained edge computing systems", "venue": "IEEE Journal on Selected Areas in Communications,", "year": 2019 }, { "authors": [ "Hao Yu", "Rong Jin", "Sen Yang" ], "title": "On the linear speedup analysis of communication efficient momentum sgd for distributed non-convex optimization", "venue": "arXiv preprint arXiv:1905.03817,", "year": 2019 }, { "authors": [ "Hao Yu", "Sen Yang", "Shenghuo Zhu" ], "title": "Parallel restarted sgd with faster convergence and less communication: Demystifying why model averaging works for deep learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 }, { "authors": [ "Fan Zhou", "Guojing Cong" ], "title": "On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization", "venue": "arXiv preprint arXiv:1708.01012,", "year": 2017 }, { "authors": [ "Li" ], "title": "2019b) showed a convergence rate", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "mKT + 1T ) for full worker participation and a\nconvergence rate O( √ K√ nT\n+ 1T ) for partial worker participation, where K is the number of local steps, T is the number of total communication rounds, m is the total worker number and n is the worker number in one communication round if for partial worker participation. Our results also reveal that the local steps in FL could help the convergence and show that the maximum number of local steps can be improved to T/m in full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results." }, { "heading": "1 INTRODUCTION", "text": "Federated Learning (FL) is a distributed machine learning paradigm that leverages a large number of workers to collaboratively learn a model with decentralized data under the coordination of a centralized server. Formally, the goal of FL is to solve an optimization problem, which can be decomposed as:\nmin x∈Rd\nf(x) := 1\nm m∑ i=1 Fi(x),\nwhere Fi(x) , Eξi∼Di [Fi(x, ξi)] is the local (non-convex) loss function associated with a local data distribution Di and m is the number of workers. FL allows a large number of workers (such as edge devices) to participate flexibly without sharing data, which helps protect data privacy. However, it also introduces two unique challenges unseen in traditional distributed learning algorithms that are used typically for large data centers:\n• Non-independent-identically-distributed (non-i.i.d.) datasets across workers (data heterogeneity): In conventional distributed learning in data centers, the distribution for each worker’s local dataset can usually be assumed to be i.i.d., i.e., Di = D,∀i ∈ {1, ...,m}. Unfortunately, this assumption rarely holds for FL since data are generated locally at the workers based on their circumstances, i.e., Di 6= Dj , for i 6= j. It will be seen later that the non-i.i.d assumption imposes significant challenges in algorithm design for FL and their performance analysis.\n• Time-varying partial worker participation (systems non-stationarity): With the flexibility for workers’ participation in many scenarios (particularly in mobile edge computing), workers may randomly join or leave the FL system at will, thus rendering the active worker set stochastic and time-varying across communication rounds. Hence, it is often infeasible to wait for all workers’ responses as in traditional distributed learning, since inactive workers or stragglers will significantly slow down the whole training process. As a result, only a subset of the workers may be chosen by the server in each communication round, i.e., partial worker participation.\nIn recent years, the Federated Averaging method (FedAvg) and its variants (McMahan et al., 2016; Li et al., 2018; Hsu et al., 2019; Karimireddy et al., 2019; Wang et al., 2019a) have emerged as a prevailing approach for FL. Similar to the traditional distributed learning, FedAvg leverages local computation at each worker and employs a centralized parameter server to aggregate and update the model parameters. The unique feature of FedAvg is that each worker runs multiple local stochastic gradient descent (SGD) steps rather than just one step as in traditional distributed learning between two consecutive communication rounds. For i.i.d. datasets and the full worker participation setting, Stich (2018) and Yu et al. 
(2019b) proposed two variants of FedAvg that achieve a convergence rate of O(mK/T + 1/√(mKT)) with a bounded gradient assumption for both strongly convex and non-convex problems, where m is the number of workers, K is the number of local update steps, and T is the total number of communication rounds. Wang & Joshi (2018) and Stich & Karimireddy (2019) further proposed improved FedAvg algorithms that achieve an O(m/T + 1/√(mKT)) convergence rate without the bounded gradient assumption. Notably, for a sufficiently large T, the above rates become O(1/√(mKT))¹, which implies a linear speedup with respect to the number of workers.² This linear speedup is highly desirable for an FL algorithm because the algorithm is able to effectively leverage the massive parallelism in a large FL system. However, with non-i.i.d. datasets and partial worker participation in FL, a fundamental open question arises: Can we still achieve the same linear speedup for convergence, i.e., O(1/√(mKT)), with non-i.i.d. datasets and under either full or partial worker participation?
In this paper, we show the answer to the above question is affirmative. Specifically, we show that a generalized FedAvg with two-sided learning rates achieves a linear convergence speedup with non-i.i.d. datasets and under full/partial worker participation. We highlight our contributions as follows:
• For non-convex problems, we show that the convergence rates of the FedAvg algorithm on non-i.i.d. datasets are O(1/√(mKT) + 1/T) and O(√K/√(nT) + 1/T) for full and partial worker participation, respectively, where n is the size of the partially participating worker set. This indicates that our proposed algorithm achieves a linear speedup in convergence rate for a sufficiently large T. When reduced to the i.i.d. case, our convergence rate is O(1/(TK) + 1/√(mKT)), which is also better than previous works. We summarize the convergence rate comparisons for both i.i.d. and non-i.i.d. cases in Table 1. It is worth noting that our proof does not require the bounded gradient assumption. We note that the SCAFFOLD algorithm (Karimireddy et al., 2019) also achieves the linear speedup, but extra variance reduction operations are required, which lead to higher communication costs and implementation complexity. By contrast, we do not have such extra requirements in this paper.
• In order to achieve a linear speedup, i.e., a convergence rate O(1/√(mKT)), we show that the number of local updates K can be as large as T/m, which improves the T^{1/3}/m result previously shown in Yu et al. (2019a) and Karimireddy et al. (2019). As shown later in the communication complexity comparison in Table 1, a larger number of local steps implies relatively fewer communication rounds, and thus less communication overhead. Interestingly, our results also indicate that the number of local updates K does not hurt but rather helps the convergence with a proper learning rate choice under full worker participation. This overcomes the limitation suggested in Li et al. (2019b) that local SGD steps might slow down the convergence (O(K/T) for the strongly convex case). This result also reveals new insights on the relationship between the number of local steps and the learning rate.
¹This rate also matches the convergence rate order of parallel SGD in conventional distributed learning. ²To attain an ε accuracy, an algorithm needs to take O(1/ε²) steps with a convergence rate O(1/√T), while needing O(1/(mε²)) steps if the convergence rate is O(1/√(mT)) (the hidden constant in the Big-O is the same).
In this sense, one achieves a linear speedup with respect to the number of workers.
Notation. In this paper, we let m be the total number of workers and St be the set of active workers for the t-th communication round, with size |St| = n for some n ∈ (0,m].³ We use K to denote the number of local steps per communication round at each worker. We let T be the number of total communication rounds. In addition, we use boldface to denote matrices/vectors. We let [·]^i_{t,k} represent the parameter of the k-th local step at the i-th worker after the t-th communication. We use ‖·‖₂ to denote the ℓ2-norm. For a natural number m, we use [m] to represent the set {1, · · · ,m}.
The rest of the paper is organized as follows. In Section 2, we review the literature to put our work in comparative perspective. Section 3 presents the convergence analysis for our proposed algorithm. Section 4 discusses the implications of the convergence rate analysis. Section 5 presents numerical results and Section 6 concludes this paper. Due to the space limitation, the details of all proofs and some experiments are provided in the supplementary material." }, { "heading": "2 RELATED WORK", "text": "The federated averaging (FedAvg) algorithm was first proposed by McMahan et al. (2016) for FL as a heuristic to improve communication efficiency and data privacy. Since then, this work has sparked many follow-ups that focus on FL with i.i.d. datasets and full worker participation (also known as LocalSGD (Stich, 2018; Yu et al., 2019b; Wang & Joshi, 2018; Stich & Karimireddy, 2019; Lin et al., 2018; Khaled et al., 2019a; Zhou & Cong, 2017)). Under these two assumptions, most of the theoretical works can achieve a linear speedup for convergence, i.e., O(1/√(mKT)) for a sufficiently large T, matching the rate of parallel SGD. In addition, LocalSGD is empirically shown to be communication-efficient and to enjoy better generalization performance (Lin et al., 2018). For a comprehensive introduction to FL, we refer readers to Li et al. (2019a) and Kairouz et al. (2019).
³For simplicity and ease of presentation in this paper, we let |St| = n. We note that this is not a restrictive condition, and our proofs and results still hold for |St| ≥ n, which can be easily satisfied in practice.
Algorithm 1: A Generalized FedAvg Algorithm with Two-Sided Learning Rates.
Initialize x_0
for t = 0, · · · , T − 1 do
  The server samples a subset St of workers with |St| = n.
  for each worker i ∈ St in parallel do
    x^i_{t,0} = x_t
    for k = 0, · · · , K − 1 do
      Compute an unbiased estimate g^i_{t,k} = ∇F_i(x^i_{t,k}, ξ^i_{t,k}) of ∇F_i(x^i_{t,k}).
      Local worker update: x^i_{t,k+1} = x^i_{t,k} − η_L g^i_{t,k}.
    end for
    Let ∆^i_t = x^i_{t,K} − x^i_{t,0} = −η_L Σ_{k=0}^{K−1} g^i_{t,k}. Send ∆^i_t to the server.
  end for
  At the server:
    Receive ∆^i_t, i ∈ St. Let ∆_t = (1/|St|) Σ_{i∈St} ∆^i_t.
    Server update: x_{t+1} = x_t + η ∆_t.
    Broadcast x_{t+1} to the workers.
end for
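A compact, hypothetical PyTorch rendering of Algorithm 1 (our own sketch: the sampling shown corresponds to uniform selection without replacement, i.e., Strategy 2 of Section 3.2; each loader is assumed to yield at least K batches per round; all names are assumptions):

# Hypothetical sketch of the generalized FedAvg with two-sided learning rates.
import copy
import random
import torch
import torch.nn.functional as F

def generalized_fedavg(global_model, workers, T, K, n, eta, eta_L):
    """workers: list of m data loaders; eta / eta_L: server / local learning rates."""
    for t in range(T):
        S_t = random.sample(range(len(workers)), n)     # sample n active workers
        x_t = [p.detach().clone() for p in global_model.parameters()]
        deltas = []
        for i in S_t:
            local = copy.deepcopy(global_model)         # x^i_{t,0} = x_t
            batches = iter(workers[i])
            for k in range(K):                          # K local SGD steps
                xb, yb = next(batches)
                loss = F.cross_entropy(local(xb), yb)   # stochastic local loss F_i
                local.zero_grad()
                loss.backward()
                with torch.no_grad():
                    for p in local.parameters():
                        p -= eta_L * p.grad             # x^i_{t,k+1} = x^i_{t,k} - eta_L * g
            # Delta^i_t = x^i_{t,K} - x^i_{t,0}
            deltas.append([q.detach() - p for p, q in zip(x_t, local.parameters())])
        with torch.no_grad():                           # server step with rate eta
            for j, p in enumerate(global_model.parameters()):
                p += eta * sum(d[j] for d in deltas) / len(deltas)
    return global_model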
For non-i.i.d. datasets, many works (Sattler et al., 2019; Zhao et al., 2018; Li et al., 2018; Wang et al., 2019a; Karimireddy et al., 2019; Huang et al., 2018; Jeong et al., 2018) have heuristically demonstrated the performance of FedAvg and its variants. On the convergence rate with full worker participation, many works (Stich et al., 2018; Yu et al., 2019a; Wang & Joshi, 2018; Karimireddy et al., 2019; Reddi et al., 2020) achieve a linear speedup, but their convergence rate bounds can be improved, as shown in this paper. On the convergence rate with partial worker participation, Li et al. (2019b) showed that the original FedAvg can achieve O(K/T) for strongly convex functions, which suggests that local SGD steps slow down the convergence in the original FedAvg. Karimireddy et al. (2019) analyzed a generalized FedAvg with two-sided learning rates under strongly convex, convex and non-convex cases. However, as shown in Table 1, none of them indicates that a linear speedup is achievable with non-i.i.d. datasets under partial worker participation. Note that the SCAFFOLD algorithm (Karimireddy et al., 2019) can achieve a linear speedup, but extra variance reduction operations are required, which lead to higher communication costs and implementation complexity. In this paper, we show that this linear speedup can be achieved without any extra requirements. For more detailed comparisons and other algorithmic variants in FL and decentralized settings, we refer readers to Kairouz et al. (2019)." }, { "heading": "3 LINEAR SPEEDUP OF THE GENERALIZED FEDAVG WITH TWO-SIDED LEARNING RATES FOR NON-IID DATASETS", "text": "In this paper, we consider a FedAvg algorithm with two-sided learning rates, as shown in Algorithm 1, which is generalized from previous works (Karimireddy et al., 2019; Reddi et al., 2020). Here, workers perform multiple SGD steps using a worker optimizer to minimize the local loss on their own datasets, while the server aggregates and updates the global model using another gradient-based server optimizer based on the returned parameters. Specifically, between two consecutive communication rounds, each worker performs K SGD steps with the worker’s local learning rate η_L. We assume an unbiased estimator in each step, denoted by g^i_{t,k} = ∇F_i(x^i_{t,k}, ξ^i_{t,k}), where ξ^i_{t,k} is a random local data sample for the k-th step after the t-th communication round at worker i. Then, each worker sends the accumulated parameter difference ∆^i_t to the server. On the server side, the server aggregates all available ∆^i_t values and updates the model parameters with a global learning rate η. The FedAvg algorithm with two-sided learning rates provides a natural way to decouple the learning of the workers and the server, thus allowing different learning rate schedules for workers and the server. The original FedAvg can be viewed as a special case of this framework with the server-side learning rate being one.
In what follows, we show that a linear speedup for convergence is achievable by the generalized FedAvg for non-convex functions on non-i.i.d. datasets. We first state our assumptions as follows.
Assumption 1. (L-Lipschitz Continuous Gradient) There exists a constant L > 0 such that ‖∇F_i(x) − ∇F_i(y)‖ ≤ L‖x − y‖, ∀x, y ∈ R^d and i ∈ [m].
Assumption 2. (Unbiased Local Gradient Estimator) Let ξ^i_t be a random local data sample in the t-th step at the i-th worker. The local gradient estimator is unbiased, i.e., E[∇F_i(x_t, ξ^i_t)] = ∇F_i(x_t), ∀i ∈ [m], where the expectation is over all local dataset samples.
Assumption 3. (Bounded Local and Global Variance) There exist two constants σ_L > 0 and σ_G > 0 such that the variance of each local gradient estimator is bounded by E[‖∇F_i(x_t, ξ^i_t) − ∇F_i(x_t)‖²] ≤ σ_L², ∀i ∈ [m], and the global variability of the local gradients of the cost function is bounded by ‖∇F_i(x_t) − ∇f(x_t)‖² ≤ σ_G², ∀i ∈ [m], ∀t.
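To make the heterogeneity bounded by σ_G concrete: experiments on non-i.i.d. FL commonly synthesize label-skewed local datasets via a Dirichlet split in the style of Hsu et al. (2019). A hypothetical sketch (our own; parameter names are assumptions), where a smaller alpha yields more heterogeneous workers and hence, intuitively, a larger effective σ_G:

# Hypothetical Dirichlet label-skew partition for non-i.i.d. local datasets.
import numpy as np

def dirichlet_partition(labels, m, alpha=0.5, seed=0):
    """labels: 1-D array of class ids; returns a list of m index arrays."""
    rng = np.random.default_rng(seed)
    worker_idx = [[] for _ in range(m)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(m))           # class share per worker
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for w, chunk in enumerate(np.split(idx, cuts)):
            worker_idx[w].extend(chunk.tolist())
    return [np.array(ix) for ix in worker_idx]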
datasets among different workers. In particular, σ_G = 0 corresponds to i.i.d. datasets. This assumption is also used in other works on FL under non-i.i.d. datasets (Reddi et al., 2020; Yu et al., 2019b; Wang et al., 2019b), as well as in decentralized optimization (Kairouz et al., 2019). It is worth noting that we do not require a bounded gradient assumption, which is often assumed in FL optimization analysis." }, { "heading": "3.1 CONVERGENCE ANALYSIS FOR FULL WORKER PARTICIPATION", "text": "In this subsection, we first analyze the convergence rate of the generalized FedAvg with two-sided learning rates under full worker participation, for which we have the following result:

Theorem 1. Let the constant local and global learning rates η_L and η be chosen such that η_L ≤ 1/(8LK) and η·η_L ≤ 1/(KL). Under Assumptions 1–3 and with full worker participation, the sequence of outputs {x_t} generated by Algorithm 1 satisfies:

min_{t∈[T]} E[‖∇f(x_t)‖²] ≤ (f_0 − f_*) / (c·η·η_L·K·T) + Φ,

where Φ := (1/c)·[ (L·η·η_L)/(2m)·σ_L² + (5K·η_L²·L²/2)·(σ_L² + 6K·σ_G²) ], c is a constant, f_0 := f(x_0), f_* := f(x_*), and the expectation is over the local dataset samples among workers.

Remark 1. The convergence bound contains two parts: a term (f_0 − f_*)/(c·η·η_L·K·T) that vanishes as T increases, and a constant term Φ whose size depends on the problem instance parameters and is independent of T. The vanishing term's decay rate matches that of typical SGD methods.

Remark 2. The first part of Φ, i.e., (L·η·η_L)/(2m)·σ_L², is due to the local stochastic gradients at each worker, and shrinks at rate 1/m as m increases. The cumulative variance of the K local steps contributes the second part of Φ, i.e., (5K·η_L²·L²/2)·(σ_L² + 6K·σ_G²), which is independent of m and largely driven by the data heterogeneity. Note that the global and local variances are amplified quadratically and linearly in K, respectively. To make this second part small, an inverse relationship between the local learning rate and the number of local steps should hold, i.e., η_L = O(1/K): a sufficiently small η_L offsets the variance accumulated between two successive communication rounds. This is consistent with the observation in strongly convex FL that a decaying learning rate is needed for convergence under non-i.i.d. datasets even if full gradients are used at each worker (Li et al., 2019b). However, our explicit inverse relationship between η_L and K is new. Intuitively, the K local steps with a sufficiently small η_L can be viewed as one SGD step with a large learning rate.

With Theorem 1, we immediately obtain the following convergence rate for the generalized FedAvg algorithm with a proper choice of two-sided learning rates:

Corollary 1. Let η_L = 1/(√(TK)·L) and η = √(Km). Then the convergence rate of the generalized FedAvg algorithm under full worker participation is min_{t∈[T]} E[‖∇f(x_t)‖²] = O( 1/√(mKT) + 1/T ).

Remark 3. The generalized FedAvg algorithm with two-sided learning rates achieves a linear speedup on non-i.i.d. datasets, i.e., an O(1/√(mKT)) convergence rate, as long as T ≥ mK. Although many works have achieved this convergence rate asymptotically, we improve the maximum number of local steps K to T/m, which is significantly better than state-of-the-art bounds such as T^{1/3}/m shown in (Karimireddy et al., 2019; Yu et al., 2019a; Kairouz et al., 2019). Note that a larger number of local steps implies relatively fewer communication rounds, and thus less communication overhead; a runnable sketch of one communication round follows below.
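To make the two-sided update concrete, here is a minimal NumPy sketch of one communication round of Algorithm 1 under full participation, using the learning rates from Corollary 1 on toy quadratic objectives. The function names and the toy objectives are ours, not part of the paper's experiments.

```python
import numpy as np

def local_sgd(x, grad_fn, eta_L, K, rng):
    """Worker side: run K local SGD steps from the broadcast model x and
    return the accumulated difference Delta_i = x_{t,K}^i - x_{t,0}^i."""
    x_local = x.copy()
    for _ in range(K):
        x_local -= eta_L * grad_fn(x_local, rng)  # unbiased stochastic gradient
    return x_local - x

def fedavg_round(x, worker_grads, eta, eta_L, K, rng):
    """Server side: average the returned differences and take one global step."""
    deltas = [local_sgd(x, g, eta_L, K, rng) for g in worker_grads]
    return x + eta * np.mean(deltas, axis=0)

# Toy run on m quadratic local objectives F_i(x) = ||x - c_i||^2 / 2.
rng = np.random.default_rng(0)
m, d, K, T, L = 20, 5, 5, 1000, 1.0
centers = rng.normal(size=(m, d))
worker_grads = [lambda x, r, c=c: (x - c) + 0.1 * r.normal(size=d) for c in centers]
eta_L = 1.0 / (np.sqrt(T * K) * L)  # local rate from Corollary 1
eta = np.sqrt(K * m)                # global rate from Corollary 1
x = np.zeros(d)
for _ in range(T):
    x = fedavg_round(x, worker_grads, eta, eta_L, K, rng)
print(np.linalg.norm(x - centers.mean(axis=0)))  # small: near the minimizer of f
```

With these choices, η·η_L·K ≈ 0.71 ≤ 1, so the step-size conditions of Theorem 1 (η_L ≤ 1/(8LK) and η·η_L ≤ 1/(KL)) hold in this toy run.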
See also the communication complexity comparison in Table 1. For example, when T = 106 and m = 100 (as used in (Kairouz et al., 2019)), the local steps in our algorithm is K ≤ T/m = 104. However, K ≤ T 1/3\nm = 1 means that no extra local steps can be taken to reduce communication costs.\nRemark 4. When degenerated to the i.i.d. case (σG = 0), the convergence rate becomes O( 1TK + 1√ mKT ), which has a better first term in the bound compared with previous work as shown in Table 1." }, { "heading": "3.2 CONVERGENCE ANALYSIS FOR PARTIAL WORKER PARTICIPATION", "text": "Partial worker participation in each communication round may be more practical than full worker participation due to many physical limitations of FL in practice (e.g., excessive delays because of too many devices to poll, malfunctioning devices, etc.). Partial worker participation can also accelerate the training by neglecting stragglers. We consider two sampling strategies proposed by Li et al. (2018) and Li et al. (2019b). Let St be the participating worker index set at communication round t with |St| = n, ∀t, for some n ∈ (0,m]. St is randomly and independently selected either with replacement (Strategy 1) or without replacement (Strategy 2) sequentially according to the sampling probabilities pi,∀i ∈ [m]. For each member in St, we pick a worker from the entire set [m] uniformly at random with probability pi = 1m ,∀i ∈ [m]. That is, selection likelihood for anyone worker i ∈ St is p = nm . Then we have the following results: Theorem 2. Under Assumptions 1–3 with partial worker participation, the sequence of outputs {xk} generated by Algorithm 1 with constant learning rates η and ηL satisfies:\nmin t∈[T ] E[‖∇f(xt)‖22] ≤ f0 − f∗ cηηLKT + Φ,\nwhere f0 = f(x0), f∗ = f(x∗), and the expectation is over the local dataset samples among workers.\nFor sampling Strategy 1, let η and ηL be chosen as such that ηL ≤ 18LK , ηηLKL < n−1 n and 30K2η2LL 2 − LηηLn (90K 3L2η2L + 3K) < 1. It then holds that:\nΦ , 1\nc\n[ LηηL\n2n σ2L + 3LKηηL 2n σ2G + ( 5Kη2LL 2 2 + 15K2ηη3LL 3 2n )(σ2L + 6Kσ 2 G)\n] .\nFor sampling Strategy 2, let η and ηL be chosen as such that ηL ≤ 18LK , ηηLKL ≤ n(m−1) m(n−1) and 10K2η2LL 2 − LηηL m−nn(m−1) (90K 3η2LL 2 + 3K) < 1. It then holds that:\nΦ , 1\nc\n[ LηηL\n2n σ2L+3LKηηL m− n 2n(m− 1) σ2G+\n( 5Kη2LL 2\n2 +15K2ηη3LL 3 m− n 2n(m− 1)\n) (σ2L+6Kσ 2 G) ] .\nFrom Theorem 2, we immediately have the following convergence rate for the generalized FedAvg algorithm with a proper choice of two-sided learning rates: Corollary 2. Let ηL = 1√TKL and η = √ Kn. The convergence rate of the generalized FedAvg algorithm under partial worker participation and both sampling strategies are:\nmin t∈[T ]\nE‖∇f(xt)‖22 ≤ O ( √\nK√ nT + 1 T\n) .\nRemark 5. The convergence rate bound for partial worker participation has the same structure but with a larger variance term. This implies that the partial worker participation through the uniform sampling does not result in fundamental changes in convergence (in order sense) except for an amplified variance due to fewer workers participating and random sampling. The intuition is that the uniform sampling (with/without replacement) for worker selection yields a good approximation of the entire worker distribution in expectation, which reduces the risk of distribution deviation due to the partial worker participation. As shown in Section 5, the distribution deviation due to fewer worker participation could render the training unstable, especially in highly non-i.i.d. cases.\nRemark 6. 
The generalized FedAvg with partial worker participation under non-i.i.d. datasets can still achieve a linear speedup O(\n√ K√ nT\n) with proper learning rate settings as shown in Corollary 2. In addition, when degenerated to i.i.d. case (σG = 0), the convergence rate becomes O( 1TK + 1√ nKT ).\nRemark 7. Here, we let |St| = n only for ease of presentation and better readability. We note that this is not a restrictive condition. We can show that |St| = n can be relaxed to |St| ≥ n, ∀t ∈ [T ] and the same convergence rate still holds. In fact, our full proof in Appendix A.2 is for |St| ≥ n." }, { "heading": "4 DISCUSSION", "text": "In light of above results, in what follows, we discuss several insights from the convergence analysis:\nConvergence Rate: We show that the generalized FedAvg algorithm with two-sided learning rates can achieve a linear speedup, i.e., an O( 1√\nmKT ) convergence rate with a proper choice of hyper-\nparameters. Thus, it works well in large FL systems, where massive parallelism can be leveraged to accelerate training. The key challenge in convergence analysis stems from the different local loss functions (also called “model drift” in the literature) among workers due to the non-i.i.d. datasets and local steps. As shown above, we obtain a convergence bound for the generalized FedAvg method containing a vanishing term and a constant term (the constant term is similar to that of SGD). In contrast, the constant term in SGD is only due to the local variance. Note that, similar to SGD, the iterations do not diminish the constant term. The local variance σ2L (randomness of stochastic gradients), global variability σ2G (non-i.i.d. datasets), and the number of local steps K (amplification factor) all contribute to the constant term, but the total global variability in K local steps dominates the term. When the local learning rate ηL is set to an inverse relationship with respect to the number of local steps K, the constant term is controllable. An intuitive explanation is that the K small local steps can be approximately viewed as one large step in conventional SGD. So this speedup and the more allowed local steps can be largely attributed to the two-sided learning rates setting.\nNumber of Local Steps: Besides the result that the maximum number of local steps is improved to K ≤ T/m, we also show that the local steps could help the convergence with the proper hyperparameter choices, which supports previous numerical results (McMahan et al., 2016; Stich, 2018; Lin et al., 2018) and is verified in different models with different non-i.i.d. degree datasets in Section 5. However, there are other results showing the local steps slow down the convergence (Li et al., 2019b). We believe that whether local steps help or hurt the convergence in FL worths further investigations.\nNumber of Workers: We show that the convergence rate improves substantially as the the number of workers in each communication round increases. This is consistent with the results for i.i.d. cases in Stich (2018). For i.i.d. datasets, more workers means more data samples and thus less variance and better performance. For non-i.i.d. datasets, having more workers implies that the distribution of the sampled workers is a better approximation for the distribution of all workers. This is also empirically observed in Section 5. On the other hand, the sampling strategy plays an important role in non-i.i.d. case as well. 
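For concreteness, the two uniform sampling schemes analyzed in Theorem 2 can be written in a few lines of NumPy; this is a sketch with names of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 10  # total workers, participants per round

# Strategy 1: independent sampling with replacement, p_i = 1/m.
# S_t is a multiset: the same worker may be drawn more than once.
S1 = rng.choice(m, size=n, replace=True)

# Strategy 2: uniform sampling without replacement.
# Here P{i in S_t} = n/m and P{i, j in S_t} = n(n-1)/(m(m-1)).
S2 = rng.choice(m, size=n, replace=False)

# In either case the server aggregates Delta_t = (1/n) * sum_{i in S_t} Delta_i,
# which is an unbiased estimate of the full average (Lemma 1 in Appendix A).
```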
Here, we adopt uniform sampling (with or without replacement) to enlist workers to participate in FL. Intuitively, under uniform sampling, the distribution of the sampled workers' collective datasets yields a good approximation of the overall data distribution in expectation.

Note that, in this paper, we assume every worker is available to participate once enlisted. However, this may not always be feasible. In practice, workers need to be in certain states in order to be able to participate in FL (e.g., charging or idle states (Eichner et al., 2019)). Therefore, care must be taken in sampling and enlisting workers in practice. We believe that the joint design of sampling schemes and the generalized FedAvg algorithm will have a significant impact on convergence, which warrants further investigation." }, { "heading": "5 NUMERICAL RESULTS", "text": "We perform extensive experiments to verify our theoretical results. We use three models: logistic regression (LR), a fully-connected neural network with two hidden layers (2NN), and a convolutional neural network (CNN) on the non-i.i.d. version of MNIST (LeCun et al., 1998), as well as a ResNet model on CIFAR-10 (Krizhevsky et al., 2009). Due to space limitations, we relegate some experimental results to the supplementary material.

[Figure 1 panels here: training loss (top row) and test accuracy (bottom row) versus communication round, with legends comparing digits_1/digits_2/digits_5/digits_10, worker numbers 10/50/100, and local steps of 1/5/10 epochs. (a) Impact of non-i.i.d. datasets. (b) Impact of worker number. (c) Impact of local steps.]

Figure 1: Training loss (top) and test accuracy (bottom) for the 2NN model with hyper-parameter settings: local learning rate 0.1, global learning rate 1.0: (a) worker number 100, local steps 5 epochs; (b) local steps 5 epochs; (c) 5 digits in each worker's dataset.

In this section, we elaborate on the results for the 2NN under non-i.i.d. MNIST datasets. We distribute the MNIST dataset among m = 100 workers randomly and evenly in a digit-based manner, such that the local dataset of each worker contains only a certain set of digits. The number of digits in each worker's dataset represents the non-i.i.d. degree.
For digits_10, each worker has training/testing samples with ten digits from 0 to 9, which is essentially an i.i.d. case. For digits_1, each worker has samples only associated with one digit, which leads to highly non-i.i.d. datasets among workers. For partial worker participation, we set the number of workers n = 10 in each communication round.\nImpact of non-i.i.d. datasets: As shown in Figure 1(a), for the 2NN model with full worker participation, the top-row figures are for training loss versus communication round and the bottomrow are for test accuracy versus communication round. We can see that the generalized FedAvg algorithm converges under non-i.i.d. datasets with a proper learning rate choice in both cases. For five digits (digits_5) in each worker’s dataset with full (partial) worker participation in Figure 1(a), the generalized FedAvg algorithm achieves a convergence speed comparable to that of the i.i.d. case (digits_10). Another key observation is that non-i.i.d. datasets slow down the convergence under the same learning rate settings for both cases. The higher the non-i.i.d. degree, the slower the convergence speed. As the non-i.i.d. degree increases (from case digits_10 to case digits_1), it is obvious that the training loss is increasing and test accuracy is decreasing. This trend is more obvious from the zigzagging curves for partial worker participation. These two observations can also be verified for other models as shown in the supplementary material, which confirms our theoretical analysis.\nImpact of worker number: As shown in Figure 1(b), we compare the training loss and test accuracy between full worker participation n = 100 and partial worker participation n = 10 with the same hyper-parameters. Compared with full worker participation, partial worker participation introduces another source of randomness, which leads to zigzagging convergence curves and slower convergence. This problem is more prominent for highly non-i.i.d. datasets. For full worker participation, it can neutralize the the system heterogeneity in each communication round. However, it might not be able to neutralize the gaps among different workers for partial worker participation. That is, the datasets’ distribution does not approximate the overall distribution well. Specifically, it is not unlikely that the digits in these datasets among all active workers are only a proper subset of the total 10 digits in the original MNIST dataset, especially with highly non-i.i.d. datasets. This trend is also obvious for complex models and complicated datasets as shown in the supplementary material. The sampling strategy here is random sampling with equal probability without replacement. In practice, however, the actual sampling of the workers in FL could be more complex, which requires further investigations.\nImpact of local steps: One open question of FL is that whether the local steps help the convergence or not. In Figure 1(c), we show that the local steps could help the convergence for both full and partial worker participation. These results verify our theoretical analysis. However, Li et al. (2019b) showed that the local steps may hurt the convergence, which was demonstrated under unbalanced non-i.i.d. MNIST datasets. 
We believe that this may be due to the combined effect of unbalanced datasets and local steps, rather than the use of local steps alone.

Comparison with SCAFFOLD: Lastly, we compare with the SCAFFOLD algorithm (Karimireddy et al., 2019), since it also achieves the same linear speedup effect under non-i.i.d. datasets. We compare communication rounds, total communication load, and estimated wall-clock time needed to achieve a given test accuracy under the same settings; the results are reported in Table 2. The non-i.i.d. dataset is digits_2 and the i.i.d. dataset is digits_10. The learning rates are η_L = 0.1 and η = 1.0, and the number of local steps K is 5 epochs. We set the target accuracy to 95% for MNIST and 75% for CIFAR-10. Note that the total training time consists of two parts: i) the computation time for training the local model at each worker, and ii) the communication time for information exchange between the workers and the server. We assume a bandwidth of 20 MB/s for both uplink and downlink connections. On the MNIST datasets, our algorithm is comparable to or outperforms SCAFFOLD; the numbers of communication rounds of both algorithms are relatively small for such simple tasks. For non-i.i.d. CIFAR-10, SCAFFOLD requires slightly fewer communication rounds than our FedAvg algorithm to reach 75% accuracy, thanks to its variance reduction. However, it incurs more than 1.5 times the communication cost and wall-clock time of our FedAvg algorithm. Due to space limitations, we relegate the breakdown of time into computation and communication to Appendix B (see Figure 7)." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we analyzed the convergence of a generalized FedAvg algorithm with two-sided learning rates on non-i.i.d. datasets for general non-convex optimization. We proved that the generalized FedAvg algorithm achieves a linear speedup for convergence under full and partial worker participation. We showed that the local steps in FL can help the convergence, and we improved the maximum number of local steps to T/m. While our work sheds light on the theoretical understanding of FL, it also opens the door to many new interesting questions in FL, such as how to sample optimally under partial worker participation, and how to deal with active participant sets that are both time-varying and size-varying across communication rounds. We hope that the insights and proof techniques in this paper can pave the way for many new research directions in the aforementioned areas." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work is supported in part by NSF grants CAREER CNS-1943226, CIF-2110252, ECCS-1818791, CCF-1934884, ONR grant N00014-17-1-2417, and a Google Faculty Research Award." }, { "heading": "A APPENDIX I: PROOFS", "text": "In this section, we give the proofs in detail for full and partial worker participation in Section A.1 and Section A.2, respectively." }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Theorem 1. Let the constant local and global learning rates η_L and η be chosen such that η_L ≤ 1/(8LK) and η·η_L ≤ 1/(KL). Under Assumptions 1–3 and with full worker participation, the sequence of outputs {x_t} generated by Algorithm 1 satisfies:

min_{t∈[T]} E[‖∇f(x_t)‖²] ≤ (f_0 − f_*) / (c·η·η_L·K·T) + Φ,

where Φ := (1/c)·[ (L·η·η_L)/(2m)·σ_L² + (5K·η_L²·L²/2)·(σ_L² + 6K·σ_G²) ], c is a constant, f_0 := f(x_0), f_* := f(x_*), and the expectation is over the local dataset samples among workers.

Proof.
For convenience, we define ∆̄t , 1m ∑m i=1 ∆ i t. Under full device participation (i.e., St =\n[m]), it is clear that ∆t = 1m ∑m i=1 ∆ i t = ∆̄t.\nDue to the smoothness in Assumption 1, taking expectation of f(xt+1) over the randomness at communication round t, we have:\nEt[f(xt+1)] ≤ f(xt) + 〈 ∇f(xt),Et[xt+1 − xt] 〉 + L\n2 Et[‖xt+1 − xt‖2]\n= f(xt)+ 〈 ∇f(xt),Et[η∆̄t + ηηLK∇f(xt)− ηηLK∇f(xt)] 〉 + L\n2 η2Et[‖∆̄t‖2]\n= f(xt)−ηηLK‖∇f(xt)‖2+η 〈 ∇f(xt),Et[∆̄t+ηLK∇f(xt)] 〉︸ ︷︷ ︸ A1 + L 2 η2 Et[‖∆̄t‖2]︸ ︷︷ ︸ A2 .\n(1)\nNote that the term A1 in (1) can be bounded as follows: A1 = 〈 ∇f(xt),Et[∆̄t + ηLK∇f(xt)]\n〉 = 〈 ∇f(xt),Et [ − 1 m m∑ i=1 K−1∑ k=0 ηLg i t,k + ηLK∇f(xt) ]〉\n= 〈 ∇f(xt),Et [ − 1 m m∑ i=1 K−1∑ k=0 ηL∇Fi(xit,k) + ηLK 1 m m∑ i=1 ∇Fi(xt) ]〉\n= 〈√ ηLK∇f(xt),− √ ηL\nm √ K Et m∑ i=1 K−1∑ k=0 (∇Fi(xit,k)−∇Fi(xt)) 〉\n(a1) = ηLK\n2 ‖∇f(xt)‖2+ ηL 2Km2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k)−∇Fi(xt) ∥∥∥∥2− ηL2Km2Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2\n(a2)\n≤ ηLK 2 ‖∇f(xt)‖2+ ηL 2m m∑ i=1 K−1∑ k=0 Et‖∇Fi(xit,k)−∇Fi(xt)‖2− ηL 2Km2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2\n(a3)\n≤ ηLK 2 ‖∇f(xt)‖2 +\nηLL 2\n2m m∑ i=1 K−1∑ k=0 Et‖xit,k − xt‖2 − ηL 2Km2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2\n(a4)\n≤ηLK( 1\n2 +15K2η2LL\n2)‖∇f(xt)‖2+ 5K2η3LL 2\n2 (σ2L+6Kσ 2 G)− ηL 2Km2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2,\n(2)\nwhere (a1) follows from that 〈 x,y 〉 = 12 [‖x‖ 2 + ‖y‖2 − ‖x − y‖2] for x = √ ηLK∇f(xt) and y = − √ ηL\nm √ K ∑m i=1 ∑K−1 k=0 (∇Fi(xit,k) − ∇Fi(xt)), (a2) is due to that E[‖x1 + · · · + xn‖2] ≤\nnE[‖x1‖2 + · · ·+ ‖xn‖2] , (a3) is due to Assumption 1 and (a4) follows from Lemma 2. The term A2 in (1) can be bounded as:\nA2 = Et[‖∆̄t‖2] = Et [∥∥∥∥ 1m m∑ i=1 ∆it ∥∥∥∥2]\n≤ 1 m2 Et [∥∥∥∥ m∑\ni=1\n∆it ∥∥∥∥2]\n= η2L m2 Et [∥∥∥∥ m∑\ni=1 K−1∑ k=0 git,k ∥∥∥∥2] (a5) =\nη2L m2 Et [∥∥∥∥ m∑\ni=1 K−1∑ k=0 (git,k −∇Fi(xit,k)) ∥∥∥∥2]+ η2Lm2Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2\n(a6) ≤ Kη 2 L\nm σ2L + η2L m2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2, (3)\nwhere (a5) follows from the fact that E[‖x‖2] = E[‖x − E[x]‖2] + ‖E[x]‖2] and (a6) is due to the bounded variance assumption in Assumption 3 and the fact that E[‖x1 + · · · + xn‖2] = E[‖x1‖2 + · · ·+ ‖xn‖2] if x ′\nis are independent with zero mean and E[git,j ] = ∇Fi(xit,j). Substituting the inequalities in (2) of A1 and (3) of A2 into inequality (1), we have:\nEt[f(xt+1)] ≤ f(xt)−ηηLK‖∇f(xt)‖2+η < ∇f(xt),Et[∆̄t+ηLK∇f(xt)] >︸ ︷︷ ︸ A1 + L 2 η2 Et[‖∆̄t‖2]︸ ︷︷ ︸ A2\n≤ f(xt)− ηηLK( 1\n2 − 15K2η2LL2)‖∇f(xt)‖2 + LKη2η2L 2m σ2L\n+ 5ηK2η3LL 2\n2 (σ2L + 6Kσ 2 G)− ( ηηL 2Km2 − Lη 2η2L 2m2 )Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2\n(a7)\n≤ f(xt)−ηηLK( 1\n2 −5K2η2LL2)‖∇f(xt)‖2+ LKη2η2L 2m σ2L+ 5ηK2η3LL 2 2 (σ2L+6Kσ 2 G)\n(a8) ≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2m σ2L +\n5ηK2η3LL 2\n2 (σ2L + 6Kσ 2 G),\nwhere (a7) follows from ( ηηL2Km2 − Lη2η2L 2m2 ) ≥ 0 if ηηL ≤ 1 KL , (a8) holds because there exists a constant c > 0 satisfying ( 12 − 15K 2η2LL 2) > c > 0 if ηL < 1√30KL .\nRearranging and summing from t = 0, · · · , T − 1, we have: T−1∑ t=0 cηηLKE[∇f(xt)] ≤ f(x0)− f(xT ) + T (ηηLK) [ LηηL 2m σ2L + 5Kη2LL 2 2 (σ2L + 6Kσ 2 G) ] which implies,\nmin t∈[T ] E‖∇f(xt)‖22 ≤ f0 − f∗ cηηLKT + Φ,\nwhere Φ = 1c [ LηηL 2m σ 2 L +\n5Kη2LL 2\n2 (σ 2 L + 6Kσ 2 G)]. This completes the proof." }, { "heading": "A.2 PROOF OF THEOREM 2", "text": "Theorem 2. 
Under Assumptions 1–3 with partial worker participation, the sequence of outputs {xk} generated by Algorithm 1 with constant learning rates η and ηL satisfies:\nmin t∈[T ] E[‖∇f(xt)‖22] ≤ f0 − f∗ cηηLKT + Φ,\nwhere f0 = f(x0), f∗ = f(x∗), and the expectation is over the local dataset samples among workers.\nFor sampling Strategy 1, let η and ηL be chosen as such that ηL ≤ 18LK , ηηLKL < n−1 n and 30K2η2LL 2 − LηηLn (90K 3L2η2L + 3K) < 1. It then holds that:\nΦ , 1\nc\n[ LηηL\n2n σ2L + 3LKηηL 2n σ2G + ( 5Kη2LL 2 2 + 15K2ηη3LL 3 2n )(σ2L + 6Kσ 2 G)\n] .\nFor sampling Strategy 2, let η and ηL be chosen as such that ηL ≤ 18LK , ηηLKL ≤ n(m−1) m(n−1) and 10K2η2LL 2 − LηηL m−nn(m−1) (90K 3η2LL 2 + 3K) < 1. It then holds that:\nΦ , 1\nc\n[ LηηL\n2n σ2L+3LKηηL m− n 2n(m− 1) σ2G+\n( 5Kη2LL 2\n2 +15K2ηη3LL 3 m− n 2n(m− 1)\n) (σ2L+6Kσ 2 G) ] .\nProof. Let ∆̄t be defined the same as in the proof of Theorem 1. Under partial device participation, note that ∆̄t 6= ∆t (recall that ∆̄t , 1m ∑m i=1 ∆ i t, ∆t = 1 n ∑ i∈St ∆ i t, and |St| = n). The randomness for partial worker participation contains two parts: the random sampling and the stochastic gradient. We still use Et[·] to represent the expectation with respect to both types of randomness. Due to the smoothness assumption in Assumption 1, taking expectation of f(xt+1) over the randomness at communication round t:\nEt[f(xt+1)] ≤ f(xt) + 〈 ∇f(xt),Et[xt+1 − xt] 〉 + L\n2 Et[‖xt+1 − xt‖2]\n= f(xt) + 〈 ∇f(xt),Et[η∆t + ηηLK∇f(xt)− ηηLK∇f(xt)] 〉 + L\n2 η2Et[‖∆t‖2]\n= f(xt)−ηηLK‖∇f(xt)‖2+η 〈 ∇f(xt),Et[∆t+ηLK∇f(xt)] 〉︸ ︷︷ ︸ A ′ 1 + L 2 η2 Et[‖∆t‖2]︸ ︷︷ ︸ A ′ 2\n(4)\nThe term A ′ 1 in (4) can be bounded as follows: Since ESt [A ′\n1] = A1 due to Lemma 1 for both sampling strategies, we have the same bound as in inequality 2 for A ′\n1:\nA ′ 1 ≤ ηLK( 1\n2 + 15K2η2LL\n2)‖∇f(xt)‖2 + 5K2η3LL 2\n2 (σ2L + 6Kσ 2 G)\n− ηL 2Km2 Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2, (5)\nFor strategy 1: We can bound A ′\n2 in (4) as follows.\nNote St is an index set (multiset) for independent sampling (equal probability) with replacement in which some elements may have the same value. Suppose St = {l1, . . . 
, ln}.\nA ′\n2 = Et[‖∆t‖2] = Et [∥∥∥∥ 1n ∑\ni∈St\n∆it ∥∥∥∥2]\n= 1 n2 Et [∥∥∥∥∑\ni∈St\n∆it ∥∥∥∥2]\n= 1 n2 Et [∥∥∥∥ n∑\nz=1\n∆lzt ∥∥∥∥2] (b1) =\nη2L n2 Et [∥∥∥∥ n∑\nz=1 K−1∑ j=0 [glzt,j −∇Flz (x lz t,j)] ∥∥∥∥2]+ η2Ln2Et [∥∥∥∥ n∑ z=1 K−1∑ j=0 ∇Flz (x lz t,j) ∥∥∥∥2] (b2)\n≤ Kη 2 L\nn σ2L + η2L n2 Et [∥∥∥∥ n∑\nz=1 K−1∑ j=0 ∇Flz (x lz t,j) ∥∥∥∥2], where (b1) follows from the fact that E[‖x‖2] = E[‖x− E[x]‖2] + ‖E[x]‖2] and (b2) is due to the bounded variance assumption 3 and E[‖x1 + · · ·+ xn‖2] ≤ nE[‖x1‖2 + · · ·+ ‖xn‖2].\nBy letting ti = ∑K−1 j=0 ∇Fi(xit,j), we have:\nEt [∥∥∥∥ n∑\nz=1 K−1∑ j=0 ∇Flz (x lz t,j) ∥∥∥∥2 = Et[∥∥∥∥ n∑ z=1 tlz ∥∥∥∥2]\n= Et [ n∑ z=1 ‖tlz‖2 + ∑\ni 6=j;li,lj∈St\n〈 tli , tlj 〉] (b3) = Et [ n‖tl1‖2 + n(n− 1) 〈 tl1 , tl2\n〉] = n\nm m∑ i=1 ‖ti‖2 + n(n− 1) m2 ∑ i,j∈[m] 〈 ti, tj 〉 = n\nm m∑ i=1 ‖ti‖2 + n(n− 1) m2 ‖ m∑ i=1 ti‖2,\nwhere (b3) is due to the independent sampling with replacement.\nSo we can bound A ′\n2 as follows.\nA ′\n2 = Et[‖∆t‖2]\n≤ Kη 2 L\nn σ2L + η2L mn m∑ i=1 Et‖ti‖2 + (n− 1)η2L m2n Et ∥∥∥∥ m∑ i=1 ti ∥∥∥∥2, (6) For ti, we have:\nm∑ i=1 Et‖ti‖2 = m∑ i=1 Et ∥∥∥∥K−1∑ j=0 ∇Fi(xit,j)−∇Fi(xt) +∇Fi(xt)−∇f(xt) +∇f(xt) ∥∥∥∥2\n(b4) ≤ 3KL2 m∑ i=1 K−1∑ j=0 Et‖xit,j − xt‖2 + 3mK2σ2G + 3mK2‖∇f(xt)‖2\n(b5)\n≤ 15mK3L2η2L(σ2L+6Kσ2G)+(90mK4L2η2L + 3mK2)‖∇f(xt)‖2+3mK2σ2G, (7)\nwhere (b4) is due to the fact that E[‖x1 + · · ·+ xn‖2] ≤ nE[‖x1‖2 + · · ·+ ‖xn‖2] , Assumptions 3 and 1, and (b5) follows from Lemma 2.\nSubstituting the inequalities in ( 5) of A ′ 1 and ( 6) of A ′ 2 into inequality (4), we have:\nEt[f(xt+1)] ≤ f(xt)−ηηLK‖∇f(xt)‖2+η 〈 ∇f(xt),Et[∆t+ηLK∇f(xt)] 〉︸ ︷︷ ︸ A ′ 1 + L 2 η2 Et[‖∆t‖2]︸ ︷︷ ︸ A ′ 2\n≤ f(xt)− ηηLK( 1\n2 − 15K2η2LL2)‖∇f(xt)‖2 +\n5ηK2η3LL 2\n2 (σ2L + 6Kσ 2 G)\n+\n[ (n− 1)Lη2η2L\n2m2n − ηηL 2Km2 ] Et ∥∥∥∥ m∑ i=1 ti ∥∥∥∥2+LKη2η2L2n σ2L+Lη2η2L2mn m∑ i=1 Et‖ti‖2\n(b6)\n≤ f(xt)− ηηLK( 1\n2 − 15K2η2LL2)‖∇f(xt)‖2 +\n5ηK2η3LL 2\n2 (σ2L + 6Kσ 2 G)\n+ LKη2η2L\n2n σ2L + Lη2η2L 2mn m∑ i=1 Et‖ti‖2\n(b7)\n≤ f(xt)− ηηLK( 1\n2 − 15K2η2LL2 − LηηL 2n (90K3L2η2L + 3K))‖∇f(xt)‖2\n+\n[ 5ηK2η3LL 2\n2 + 15K3L3η2η4L 2n\n] (σ2L+6Kσ 2 G)+\nLKη2η2L 2n σ2L+ 3K2Lη2η2L 2n σ2G\n(b8) ≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2n σ2L + 3K2Lη2η2L 2n σ2G\n+ ηηLK\n[ 5Kη2LL 2\n2 +\n15K2η3LηL 3\n2n\n] (σ2L + 6Kσ 2 G), (8)\nwhere (b6) follows from (n−1)Lη 2η2L\n2m2n − ηηL 2Km2 ≤ 0 if ηηLKL ≤ n−1 n , (b7)is due to inequality (7) and\n(b8) holds since there exists a constant c > 0 such that [ 12−15K 2η2LL 2− LηηL2n (90K 3L2η2L+3K)] > c > 0 if 30K2η2LL 2 − LηηLn (90K 3L2η2L + 3K) < 1.\nNote that the requirement of |St| = n can be relaxed to |St| ≥ n. With pt ≥ n workers in t-th communication round, 8 is\nEt[f(xt+1)] ≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2pt σ2L + 3KLη2η2L 2pt σ2G\n+ ηηLK\n[ 5Kη2LL 2\n2 +\n15Kη3LηL 3\n2pt\n] (σ2L + 6Kσ 2 G)\n≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2n σ2L + 3K2Lη2η2L 2n σ2G\n+ ηηLK\n[ 5Kη2LL 2\n2 +\n15K2η3LηL 3\n2n\n] (σ2L + 6Kσ 2 G).\nThat is, the same convergence rate can be guaranteed if at least n workers in each communication round (no need to be exactly n).\nRearranging and summing from t = 0, · · · , T − 1, we have the convergence for partial device participation with sampling strategy 1 as follows:\nmin t∈[T ] E[‖∇f(xt)‖22] ≤ f0 − f∗ cηηLKT + Φ,\nwhere Φ = 1c [ LηηL 2n σ 2 L + 3KLηηL 2n σ 2 G + ( 5Kη2LL 2 2 + 15K2ηη3LL 3 2n )(σ 2 L + 6Kσ 2 G) ] and c is a constant.\nFor strategy 2: Under the strategy of independent sampling with equal probability without replacement. 
We bound A ′\n2 as follows.\nA ′\n2 = Et[‖∆t‖2] = Et [∥∥∥∥ 1n ∑\ni∈St\n∆it ∥∥∥∥2]\n= 1 n2 Et [∥∥∥∥∑\ni∈St\n∆it ∥∥∥∥2]\n= 1 n2 Et [∥∥∥∥ m∑\ni=1\nI{i ∈ St}∆it ∥∥∥∥2]\n= η2L n2 Et [∥∥∥∥ m∑\ni=1 I{i ∈ St} K−1∑ j=0 [git,j−∇Fi(xit,j)] ∥∥∥∥2]+ η2Ln2Et [∥∥∥∥ m∑ i=1 I{i ∈ St} K−1∑ j=0 ∇Fi(xit,j)] ∥∥∥∥2]\n= η2L n2 Et [∥∥∥∥ m∑\ni=1 P{i ∈ St} K−1∑ j=0 [git,j −∇Fi(xit,j)] ∥∥∥∥2 + η2Ln2 ∥∥∥∥ m∑ i=1 I{i ∈ St} K−1∑ j=0 ∇Fi(xit,j) ∥∥∥∥2]\n(b9) = η2L nm Et [ m∑ i=1 K−1∑ j=0 ∥∥∥∥git,j −∇Fi(xit,j)∥∥∥∥2]+ η2Ln2Et [∥∥∥∥ m∑ i=1 I{i ∈ St} K−1∑ j=0 ∇Fi(xit,j) ∥∥∥∥2]\n(b10) ≤ Kη 2 L\nn σ2L + η2L n2 ∥∥∥∥ m∑ i=1 P{i ∈ St} K−1∑ j=0 ∇Fi(xit,j) ∥∥∥∥2, (9)\nwhere (b9) is due to the fact that E[‖x1 + · · · + xn‖2] = E[‖x1‖2 + · · · + ‖xn‖2] if x ′\nis are independent with zero mean, xi = git,j −∇Fi(xit,j) is independent random variable with mean zero, and P{i ∈ St} = nm . (b10) is due to bounded variance assumption in Assumption 3\nSubstituting the inequalities in (5) of A ′ 1 and (9) of A ′ 2 into inequality (4), we have:\nEt[f(xt+1)]≤f(xt)−ηηLK‖∇f(xt)‖2+η 〈 ∇f(xt),Et[∆t+ηLK∇f(xt)] 〉︸ ︷︷ ︸ A ′ 1 + L 2 η2 Et[‖∆t‖2]︸ ︷︷ ︸ A ′ 2\n≤ ∇f(xt)− ηηLK( 1\n2 − 15K2η2LL2)‖∇f(xt)‖2 + LKη2η2L 2n σ2L + 5ηK2η3LL 2 2 (σ2L + 6Kσ 2 G)\n+ Lη2η2L 2n2 Et ∥∥∥∥ m∑ i=1 P{i ∈ St} K−1∑ j=0 ∇Fi(xit,j) ∥∥∥∥2 − ηηL2Km2Et ∥∥∥∥ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k) ∥∥∥∥2︸ ︷︷ ︸\nA ′ 3\n.\nThen we bound A ′\n3 as follows. By letting ti = ∑K−1 j=0 ∇Fi(xit,j), we have:\nm∑ i=1 Et‖ti‖2 ≤ 15mK3L2η2L(σ2L + 6Kσ2G) + (90mK4L2η2L + 3mK2)‖∇f(xt)‖2 + 3mK2σ2G.\nIt then follows that\n‖ m∑ i=1 ti‖2 = ∑ i∈[m] ‖ti‖2 + ∑ i6=j < ti, tj >\n(b11) = ∑ i∈[m] m‖ti‖2 − 1 2 ∑ i6=j ‖ti − tj‖2\n‖ m∑ i=1 P{i ∈ St}ti‖2 = ∑ i∈[m] P{i ∈ St}‖ti‖2 + ∑ i 6=j P{i, j ∈ St} < ti, tj >\n(b12) =\nn\nm ∑ i∈[m] ‖ti‖2 + n(n− 1) m(m− 1) ∑ i 6=j < ti, tj >\n(b13) =\nn2\nm ∑ i∈[m] ‖ti‖2 − n(n− 1) 2m(m− 1) ∑ i 6=j ‖ti − tj‖2,\nwhere (b11) and (b13) are due to the fact that 〈 x,y 〉 = 12 [‖x‖ 2+‖y‖2−‖x−y‖2] ≤ 12 [‖x‖ 2+‖y‖2], (b12) follows from the fact that P{i ∈ St} = nm and P{i, j ∈ St} = n(n−1) m(m−1) . Therefore, we have\nA ′ 3 = Lη2η2L 2n2 ‖ m∑ i=1 P{i ∈ St} K−1∑ j=0 ∇Fi(xit,j)]‖2 − ηηL 2Km2 ‖ m∑ i=1 K−1∑ k=0 ∇Fi(xit,k)‖2\n= ( Lη2η2L 2m − ηηL 2Km ) m∑ i=1 ‖ti‖2 + ( ηηL 4Km2 − Lη 2η2L(n− 1) 4mn(m− 1) ) ∑ i 6=j ‖ti − tj‖2\n(b14) = ( Lη2η2L 2m − Lη 2η2L(n− 1) 2n(m− 1) ) m∑ i=1 ‖ti‖2 − ( ηηL 2Km2 − Lη 2η2L(n− 1) 2mn(m− 1) )‖ ∑ i∈[m] ti‖2\n(b15) ≤ (Lη 2η2L 2m − Lη 2η2L(n− 1) 2n(m− 1) ) m∑ i=1 ‖ti‖2\n= Lη2η2L m− n\n2mn(m− 1) m∑ i=1 ‖ti‖2,\nwhere (b14) follows from the fact that ‖ ∑ i∈[m] ti‖2 = ∑ i∈[m]m‖ti‖2 − 1 2 ∑ i 6=j ‖ti − tj‖2, and (b15) is due to the fact that ( ηηL2Km2 − Lη2η2L(n−1) 2mn(m−1) ) ≥ 0 if ηηLKL ≤ n(m−1) m(n−1) .\nThen we have\nEt[f(xt+1)] ≤ f(xt)−ηηLK( 1\n2 −15K2η2LL2−LηηL m− n 2n(m− 1) (90K3η2LL 2+3K))‖∇f(xt)‖2\n+ LKη2η2L\n2n σ2L + 3K\n2Lη2η2L m− n\n2n(m− 1) σ2G\n+ ηηLK( 5Kη2LL 2\n2 + 15Kηη3LL 3 m− n 2n(m− 1) )(σ2L + 6Kσ 2 G)\n(b16) ≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2n σ2L + 3KLη\n2η2L m− n\n2n(m− 1) σ2G\n+ ηηLK( 5Kη2LL 2\n2 + 15K2ηη3LL 3 m− n 2n(m− 1) )(σ2L + 6Kσ 2 G), (10)\nwhere (b16) holds because there exists a constant c > 0 satisfying ( 12 − 5K 2η2LL 2 − LηηL m−n 2n(m−1) (90K 3η2LL 2 + 3K)) > c > 0 if 10K2η2LL 2−LηηL m−nn(m−1) (90K 3η2LL 2 + 3K) < 1.\nNote that the requirement of |St| = n can be relaxed to |St| ≥ n. 
With pt ≥ n workers in t-th communication round, 10 is\nEt[f(xt+1)] ≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2pt σ2L + 3KLη\n2η2L m− pt\n2pt(m− 1) σ2G\n+ ηηLK( 5Kη2LL 2\n2 + 15K2ηη3LL 3 m− pt 2pt(m− 1) )(σ2L + 6Kσ 2 G)\n≤ f(xt)− cηηLK‖∇f(xt)‖2 + LKη2η2L\n2n σ2L + 3KLη\n2η2L m− n\n2n(m− 1) σ2G\n+ ηηLK( 5Kη2LL 2\n2 + 15K2ηη3LL 3 m− n 2n(m− 1) )(σ2L + 6Kσ 2 G)\nThat is, the same convergence rate can be guaranteed if at least n workers in each communication round (no need to be exactly n).\nRearranging and summing from t = 0, · · · , T − 1, we have the convergence for partial device participation with sampling strategy 2 as follows:\nmin t∈[T ] E[‖∇f(xt)‖22] ≤ f0 − f∗ cηηLKT + Φ,\nwhere Φ = 1c [ LηηL 2n σ 2 L + 3KLηηL m−n 2n(m−1)σ 2 G + ( 5Kη2LL 2 2 + 15K 2ηη3LL 3 m−n 2n(m−1) )(σ 2 L + 6Kσ 2 G) ] and c is a constant. This completes the proof." }, { "heading": "A.2.1 KEY LEMMAS", "text": "Lemma 1 (Unbiased Sampling). For strategies 1 and 2, the estimator ∆t is unbiased, i.e.,\nESt [∆t] = ∆̄t." }, { "heading": "Proof of Lemma 1.", "text": "Let St = {t1, · · · , tn} with size n. Both for sampling strategies 1 and 2, each sampling distribution is identical. Then we have:\nESt [∆t] = 1\nn ESt [ ∑ ti∈St ∆tit ] = 1 n ESt [ n∑ i=1 ∆tit ] = ESt [∆ t1 t ] = 1 m m∑ i=1 ∆it = ∆̄t." }, { "heading": "A.3 AUXILIARY LEMMAS", "text": "Lemma 2 (Lemma 4 in Reddi et al. (2020)). For any step-size satisfying ηL ≤ 18LK , we can have the following results:\n1\nm m∑ i=1 E[‖xit,k − xt‖2] ≤ 5Kη2L(σ2L + 6Kσ2G) + 30K2η2L‖∇f(xt)‖2.\nProof. In order for this paper to be self-contained, we restate the proof of Lemma 4 in (Reddi et al., 2020) here.\nFor any worker i ∈ [m] and k ∈ [K], we have:\nE[‖xit,k − xt‖2] = E[‖xit,k−1 − xt − ηLgtt,k−1‖2] ≤E[‖xit,k−1−xt−ηL(gtt,k−1−∇Fi(xit,k−1)+∇Fi(xit,k−1)−∇Fi(xt)+∇Fi(xt)−∇f(xt)+∇f(xt))‖2]\n≤ (1 + 1 2K − 1 )E[‖xit,k−1 − xt‖2] + E[‖ηL(gtt,k−1 −∇Fi(xit,k−1))‖2]\n+6KE[‖ηL(∇Fi(xit,k−1)−∇Fi(xt))‖2]+6KE[‖ηL(∇Fi(xt)−∇f(xt)))‖2]+6K‖ηL∇f(xt)‖2\n≤(1+ 1 2K−1 )E[‖xit,k−1−xt‖2]+η2Lσ2L+6Kη2LL2E[‖xit,k−1−xt‖2]+6Kη2Lσ2G+6K‖ηL∇f(xt)‖2\n= (1 + 1\n2K − 1 + 6Kη2LL 2)E[‖xit,k−1 − xt‖2] + η2Lσ2L + 6Kη2Lσ2G + 6K‖ηL∇f(xt)‖2\n≤ (1 + 1 K − 1 )E[‖xit,k−1 − xt‖2] + η2Lσ2L + 6Kη2Lσ2G + 6K‖ηL∇f(xt)‖2\nUnrolling the recursion, we get:\n1\nm m∑ i=1 E[‖xit,k − xt‖2] ≤ k−1∑ p=0 (1 + 1 K − 1 )p[η2Lσ 2 L + 6Kσ 2 G + 6Kη 2 L‖ηL∇f(xt))‖2]\n≤ (K − 1)[(1 + 1 K − 1 )K − 1][η2Lσ2L + 6Kσ2G + 6Kη2L‖ηL∇f(xt))‖2]\n≤ 5Kη2L(σ2L + 6Kσ2G) + 30K2η2L‖∇f(xt)‖2\nThis completes the proof." }, { "heading": "B APPENDIX II: EXPERIMENTS", "text": "We provide the full detail of the experiments. We uses non-i.i.d. versions for MNIST and CIFAR-10, which are described as follows:" }, { "heading": "B.1 MNIST", "text": "We study image classification of handwritten digits 0-9 in MNIST and modify the MNIST dataset to a non-i.i.d. version.\nTo impose statistical heterogeneity, we split the data based on the digits (p) they contain in their dataset. We distribute the data to m = 100 workers such that each worker contains only a certain class of digits with the same number of training/test samples. For example, for p = 1, each worker only has training/testing samples with one digit, which causes heterogeneity among different workers. For p = 10, each worker has samples with 10 digits, which is essentially i.i.d. case. In this way, we can use the digits in worker’s local dataset to represent the non-i.i.d. degree qualitatively. 
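As a concrete illustration of this digit-based split, the following is a minimal sketch; the helper name and the deterministic digit assignment are our choices, and p denotes the number of digits per worker (the non-i.i.d. degree).

```python
import numpy as np

def split_by_digits(labels, m=100, p=2):
    """Assign sample indices to m workers so that worker i holds only the p
    digit classes {i % 10, ..., (i + p - 1) % 10}, in equal-sized shards.
    One deterministic realization of the split described above; requires
    m to be a multiple of 10 and p <= 10."""
    assert m % 10 == 0 and 1 <= p <= 10
    shards_per_digit = p * (m // 10)  # each digit class serves this many workers
    shards = {c: np.array_split(np.where(labels == c)[0], shards_per_digit)
              for c in range(10)}
    cursor = {c: 0 for c in range(10)}
    workers = []
    for i in range(m):
        parts = []
        for j in range(p):
            c = (i + j) % 10
            parts.append(shards[c][cursor[c]])
            cursor[c] += 1
        workers.append(np.concatenate(parts))
    return workers  # workers[i] = indices of worker i's local dataset

labels = np.repeat(np.arange(10), 600)       # stand-in for MNIST labels
parts = split_by_digits(labels, m=100, p=1)  # p = 1: highly non-i.i.d.
print(len(parts[0]), np.unique(labels[parts[0]]))  # -> 60 [0]
```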
In each communication round, 100 workers run K epochs locally in parallel and then the server samples n workers for aggregation and update. We make a grid-search experiments for the hyper-parameters as shown in Table 3.\nWe run three models: multinomial logistic regression, fully-connected network with two hidden layers (2NN) (two 200 neurons hidden layers with ReLU followed by an output layer), convolutional neural network (CNN), as shown in Table 4. The results are shown in Figures 2, 3 and 4.\nUnless stated otherwise, we use the following default parameter setting: the server learning rate and client learning rate are set to η = 1.0 and ηL = 0.1, respectively. The local epochs is set to K = 10. The total number of clients is set to 100, and the clients partition number is set to n = 10. We use the same strategy to distribute the data over clients as suggested in McMahan et al. (2016). For the i.i.d. setting, we evenly partition all the training data among all clients, i.e., each client observes 500 data; for the non-i.i.d. setting, we first sort the training data by label, then divide all the training data into 200 shards of size 250, and randomly assign two shards to each client. For the CIFAR-10 dataset, we train our classifier with the ResNet model. The results are shown in Figure 5 and Figure 6." }, { "heading": "B.3 DISCUSSION", "text": "Impact of non-i.i.d. datasets: Figure 2 shows the results of training loss (top) and test accuracy (bottom) for three models under different non-i.i.d. datasets with full and partial worker participation\non MNIST. We can see that the FedAvg algorithm converges under non-i.i.d. datasets with a proper learning rate choice in these cases. We believe that the major challenge in FL is the non-i.i.d. datasets. For these datasets with a lower degree of non-i.i.d., the FedAvg algorithm can achieve a good result compared with the i.i.d. case. For example, when the local dataset in each worker has five digits (p = 5) with full (partial) worker participation, the FedAvg algorithm achieves a convergence speed comparable with that of the i.i.d. case (p = 10). This result can be observed in Figure 2 for all three models. As the degree of non-i.i.d. datasets increases, its negative impact on the convergence is becoming more obvious. The higher the degree of non-i.i.d., the slower the convergence speed. As the non-i.i.d. degree increases (from case p = 10 to case p = 1), it is obvious that the training loss is increasing and test accuracy is decreasing. For these with high degree of non-i.i.d., the convergence curves oscillate and are highly unstable. This trend is more obvious for complex models such for CNN in Figure 2(c).\nImpact of worker number: For full worker participation, the server can have an accurate estimation of the system heterogeneity after receiving the updates for all workers and neutralize this heterogeneity in each communication round. However, partial worker participation introduces another source of randomness, which leads to zigzagging convergence curves and slower convergence. In each\ncommunication round, the server can only receive a subset of workers based on the sampling strategy. So the server could only have a coarse estimation of the system heterogeneity and might not be able to neutralize the heterogeneity among different workers for partial worker participation. This problem is more prominent for highly non-i.i.d. datasets. 
It is not unlikely that the digits present among all active workers' datasets form only a proper subset of the 10 digits in the original MNIST dataset, especially for highly non-i.i.d. datasets. For example, for p = 1 with 10 workers per communication round, it is quite likely that the datasets of these ten workers cover only a small number of digits (say, 4 or 5) rather than all 10. For p = 5, the opposite holds: the datasets of these 10 workers are highly likely to cover all 10 digits. In that case, the server can mitigate the system heterogeneity in each communication round, since the sampled workers cover training samples from all 10 digits. This trend is more pronounced for complex models and datasets, given the dramatic drop of test accuracy on CIFAR-10 in Figure 5.

The sampling strategy here is random sampling with equal probability without replacement. In practice, workers need to be in certain states in order to be able to participate in FL (e.g., charging or idle states (Eichner et al., 2019)). Therefore, care must be taken in sampling and enlisting workers in practice. We believe that the joint design of sampling schemes, the number of workers, and the FedAvg algorithm will have a significant impact on convergence, which warrants further investigation.

Impact of local steps: Figures 3 and 4 show the training loss (top) and test accuracy (bottom) for the three models under different numbers of local steps, with full and partial worker participation, respectively. Figure 6 shows the impact of local steps on CIFAR-10. One open question in FL is whether local steps help convergence or not. Li et al. (2019b) showed a convergence rate of O(K/T), i.e., local steps may hurt convergence under both full and partial worker participation. In these two figures, we see that local steps can help convergence for both full and partial worker participation. However, their effect on convergence is slight compared to the effects of non-i.i.d. datasets and the number of workers.

Comparison with SCAFFOLD: We compare SCAFFOLD (Karimireddy et al., 2019) with the generalized FedAvg algorithm in this paper in terms of communication rounds, total communication load, and estimated wall-clock time to achieve a given test accuracy, in Table 2. We run the experiments on the same GPU (NVIDIA V100) to ensure identical conditions. Note that we divide the total training time into two parts: the computation time, during which each worker trains its local model, and the communication time, during which information is exchanged between the workers and the server. We compare the computation and communication times assuming a fixed bandwidth of 20 MB/s for both uplink and downlink connections. As shown in Figure 7, to achieve 75% accuracy, SCAFFOLD requires fewer communication rounds thanks to its variance reduction, and hence spends less time on computation. However, it communicates twice as much data per round as FedAvg, since the control variate used for variance reduction at each worker must also be updated in every round. This substantially prolongs the communication time." } ]
2021
null
SP:86adaa9dd2414906f708b26e60c86b6e854bb222
[ "The paper considers stochastic gradient descent convergence in a distributed setting with m workers, where up to α workers can be Byzantine, i.e. perform in an arbitrarily adversarial way. In this setting, they develop a variant of SGD which finds a second-order stationary point, prevents Byzantine workers from significantly affecting convergence, and achieves α^2 + 1/m speedup compared with the sequential case. The main idea of the algorithm is to measure deviations of gradient updates for a certain number of rounds and detect Byzantine machines which must have a significant deviation to noticeably affect the algorithm’s behavior." ]
We study adversary-resilient stochastic distributed optimization, in which m machines can independently compute stochastic gradients, and cooperate to jointly optimize over their local objective functions. However, an α-fraction of the machines are Byzantine, in that they may behave in arbitrary, adversarial ways. We consider a variant of this procedure in the challenging non-convex case. Our main result is a new algorithm SafeguardSGD which can provably escape saddle points and find approximate local minima of the non-convex objective. The algorithm is based on a new concentration filtering technique, and its sample and time complexity bounds match the best known theoretical bounds in the stochastic, distributed setting when no Byzantine machines are present. Our algorithm is very practical: it improves upon the performance of all prior methods when training deep neural networks, it is relatively lightweight, and it is the first method to withstand two recently-proposed Byzantine attacks.
[ { "affiliations": [], "name": "Zeyuan Allen-Zhu" }, { "affiliations": [], "name": "Faeze Ebrahimian" }, { "affiliations": [], "name": "Jerry Li" }, { "affiliations": [], "name": "Dan Alistarh" } ]
[ { "authors": [ "Dan Alistarh", "Zeyuan Allen-Zhu", "Jerry Li" ], "title": "Byzantine stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu. Natasha" ], "title": "Faster Non-Convex Optimization Than SGD", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "How To Make the Gradients Small Stochastically", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "Feature purification: How adversarial training performs robust deep learning", "venue": "arXiv preprint arXiv:2005.10190,", "year": 2020 }, { "authors": [ "Gilad Baruch", "Moran Baruch", "Yoav Goldberg" ], "title": "A little is enough: Circumventing defenses for distributed learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Peva Blanchard", "El Mahdi El Mhamdi", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine learning with adversaries: Byzantine tolerant gradient descent", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Saikiran Bulusu", "Prashant Khanduri", "Pranay Sharma", "Pramod K Varshney" ], "title": "On distributed stochastic gradient descent for nonconvex functions in the presence of byzantines", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Miguel Castro", "Barbara Liskov" ], "title": "Practical byzantine fault tolerance", "venue": "In OSDI,", "year": 1999 }, { "authors": [ "Yudong Chen", "Lili Su", "Jiaming Xu" ], "title": "Distributed statistical machine learning in adversarial settings: Byzantine gradient descent", "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems,", "year": 2017 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jiashi Feng", "Huan Xu", "Shie Mannor" ], "title": "Distributed robust learning", "venue": "arXiv preprint arXiv:1409.5937,", "year": 2014 }, { "authors": [ "Rong Ge", "Furong Huang", "Chi Jin", "Yang Yuan" ], "title": "Escaping from saddle points—online stochastic gradient for tensor decomposition", "venue": "In Proceedings of the 28th Annual Conference on Learning", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chi Jin", "Rong Ge", "Praneeth Netrapalli", "Sham M Kakade", "Michael I Jordan" ], "title": "How to escape saddle points efficiently", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Rong Ge", "Sham M Kakade", "Michael I. 
Jordan" ], "title": "On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points", "venue": null, "year": 1902 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "The CIFAR-10/100 dataset. https://www.cs", "venue": "toronto.edu/ ̃kriz/cifar.html,", "year": 2014 }, { "authors": [ "Lihua Lei", "Cheng Ju", "Jianbo Chen", "Michael I Jordan" ], "title": "Nonconvex Finite-Sum Optimization Via SCSG Methods", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR. arXiv preprint arXiv:1706.06083,", "year": 2018 }, { "authors": [ "Iosif Pinelis" ], "title": "Optimum bounds for the distributions of martingales in banach spaces", "venue": "The Annals of Probability,", "year": 1994 }, { "authors": [ "Lili Su", "Nitin H Vaidya" ], "title": "Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms", "venue": "In PODC, pp. 425–434", "year": 2016 }, { "authors": [ "Lili Su", "Nitin H Vaidya" ], "title": "Defending non-bayesian learning against adversarial attacks", "venue": "ISDC,", "year": 2016 }, { "authors": [ "Lili Su", "Jiaming Xu" ], "title": "Securing distributed machine learning in high dimensions", "venue": "arXiv preprint arXiv:1804.10140,", "year": 2018 }, { "authors": [ "Nilesh Tripuraneni", "Mitchell Stern", "Chi Jin", "Jeffrey Regier", "Michael I Jordan" ], "title": "Stochastic Cubic Regularization for Fast Nonconvex Optimization", "venue": "ArXiv e-prints,", "year": 2017 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Generalized Byzantine-tolerant SGD", "venue": "arXiv preprint arXiv:1802.10116,", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Byzantine-suspicious stochastic gradient descent", "venue": "arXiv preprint arXiv:1805.10032,", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Fall of empires: Breaking byzantine-tolerant SGD by inner product manipulation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Haibo Yang", "Xin Zhang", "Minghong Fang", "Jia Liu" ], "title": "Byzantine-resilient stochastic gradient descent for distributed learning: A lipschitz-inspired coordinate-wise median approach", "venue": null, "year": 1909 }, { "authors": [ "Dong Yin", "Yudong Chen", "Kanna Ramchandran", "Peter Bartlett" ], "title": "Byzantine-robust distributed learning: Towards optimal statistical rates", "venue": "arXiv preprint arXiv:1803.01498,", "year": 2018 }, { "authors": [ "Dong Yin", "Yudong Chen", "Ramchandran Kannan", "Peter Bartlett" ], "title": "Defending against saddle point attack in byzantine-robust distributed learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Chen" ], "title": "2017), and Zeno Xie et al. (2018b) with attacks. We set α = 0.4 so there are 4 Byzantine workers. 
(This exceeds the fault-tolerance of Krum, and so we also tested Krum with only 3 Byzantine workers", "venue": null, "year": 2018 }, { "authors": [ "GeoMed Chen" ], "title": "The geometric median", "venue": null, "year": 2017 }, { "authors": [ "Zeno Xie" ], "title": "ym}\\{yi} by Euclidean distances. Note that Krum requires 2b + 2 < m. So, we have also repeated the experiments for Krum with 3 Byzantine workers (out of 10 workers) to be more fair", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Motivated by the pervasiveness of large-scale distributed machine learning, there has recently been significant interest in providing distributed optimization algorithms with strong fault-tolerance guarantees. In this context, the strongest, most stringent fault model is that of Byzantine faults (Lamport et al., 1982): given m machines, each having access to private data, at most an α fraction of the machines can behave in arbitrary, possibly adversarial ways, with the goal of breaking or slowing down the algorithm. Although extremely harsh, this fault model is the “gold standard” in distributed computing (Lynch, 1996; Lamport et al., 1982; Castro et al., 1999), as algorithms proven to be correct in this setting are guaranteed to converge under arbitrary system behaviour. A setting of particular interest in this context has been that of distributed stochastic optimization. Here, the task is to minimize some stochastic function f(x) = Es∼D[fs(x)] over a distribution D, where fs(·) can be viewed as the loss function for sample s ∼ D. We assume there are m machines (workers) and an honest master, and α < 1/2 fraction of the workers may be Byzantine. In each iteration t, each worker has access to a version of the global iterate xt, which is maintained by the master. The worker can independently sample s ∼ D, compute ∇fs(xt), and then synchronously send this stochastic gradient to the master. The master aggregates the workers’ messages, and sends an updated iterate xt+1 to all the workers. Eventually, the master has to output an approximate minimizer of f . Clearly, the above description only applies to honest workers; Byzantine workers may deviate arbitrarily and return adversarial “gradient” vectors to the master in every iteration. This distributed framework is quite general and well studied. One of the first references in this setting studied distributed PCA and regression (Feng et al., 2014). Other early approaches (Blanchard et al., 2017; Chen et al., 2017; Su & Vaidya, 2016a;b; Xie et al., 2018a) relied on defining generalizations of the geometric median. These approaches can withstand up to half of the nodes being malicious, but can have relatively high local computational cost Ω(m2d) (Blanchard et al., 2017; Chen et al., 2017), where m is the number of nodes and d is the problem dimension, and usually have suboptimal sample and iteration complexities. Follow-up work resolved this last issue when the objective f(·) is convex, leading to tight sample ∗The full and future editions of this paper can be found on https://arxiv.org/abs/2012.14368. †Microsoft Research Redmond, zeyuan@csail.mit.edu ‡University of Waterloo, faezeeb75@gmail.com §Microsoft Research Redmond, jerrl@microsoft.com ¶IST Austria, dan.alistarh@ist.ac.at\ncomplexity bounds. Specifically, Yin et al. (2018) provided bounds for gradient descent-type algorithms, and showed that the bounds are tight when the dimension is constant. Alistarh et al. (2018) provided a stochastic gradient descent (SGD) type algorithm and showed that its sample and time complexities are asymptotically optimal even when the dimension is large. Non-convex Byzantine-resilient stochastic optimization. In this paper, we focus on the more challenging non-convex setting, and shoot for the strong goal of finding approximate local minima (a.k.a. second-order critical points). In a nutshell, our main result is the following. 
Fix d to denote the dimension, and let the objective f : Rd → R be Lipschitz smooth and second-order smooth. We have m worker machines, each having access to unbiased, bounded estimators of the gradient of f . Given an initial point x0, the SafeguardSGD algorithm ensures that, even if at most α < 1/2 fraction of the machines are Byzantine, after\nT = Õ (( α2 + 1m ) d(f(x0)−min f(x)) ε4 ) parallel iterations,\nfor at least a constant fraction of the indices t ∈ [T ], the following hold: ‖∇f(xt)‖ ≤ ε and ∇2f(xt) − √ εI.\nIf the goal is simply ‖∇f(xt)‖ ≤ ε, then T = Õ ( ( α2 + 1m ) (f(x0)−min f(x)) ε4 ) iterations suffice. Here, the Õ notation serves to hide logarithmic factors for readability. We spell out these factors in the detailed analysis.\n• When α < 1/ √ m, our sample complexity (= mT ) matches the best known result in the non-\nByzantine case (Jin et al., 2019) without additional assumptions, and enjoys linear parallel speedup: with m workers of which < √ m are Byzantine, the parallel speedup is Ω̃(m).1 • For α ∈ [1/ √ m, 1/2), our parallel time complexity is Õ(α2) times that needed when no\nparallelism is used. This still gives parallel speedup. This α2 factor appears in convex Byzantine distributed optimization, where it is tight (Yin et al., 2018; Alistarh et al., 2018).\n• The Lipschitz and second-order smoothness assumptions are the minimal assumptions needed to derive convergence rates for finding second-order critical points (Jin et al., 2019).\nComparison with prior bounds. The closest known bounds are by Yin et al. (2019), who derived three gradient descent-type of algorithms (based on median, mean, and iterative filtering) to find a weaker type of approximate local minima. Since it relies on full gradients, their algorithm is arguably less practical, and their time complexities are generally higher than ours (see Section 2.1). Other prior works consider a weaker goal: to find approximate stationary points ‖∇f(x)‖ ≤ ε only: Bulusu et al. (2020) additionally assumed there is a guaranteed good (i.e. non-Byzantine) worker known by the master, Xie et al. (2018b) gave a practical algorithm when the Byzantine attackers have no information about the loss function or its gradient, Yang et al. (2019); Xie et al. (2018a); Blanchard et al. (2017) derived eventual convergence without an explicit complexity bound, and the non-convex result obtained in Yin et al. (2018) is subsumed by Yin et al. (2019), discussed above. Our algorithm and techniques. The structure of our algorithm is deceptively simple. The master node keeps track of the sum of gradients produced by each worker across time. It labels (allegedly) good workers as those whose sum of gradients “concentrate” well with respect to a surrogate of the median vector, and labels bad workers otherwise. Once a worker is labelled bad, it is removed from consideration forever. The master then performs the vanilla SGD, by moving in the negative direction of the average gradients produced by those workers currently labelled as good. We call our algorithm SafeguardSGD, since it behaves like having a safe guard to filter away bad workers. Its processing overhead at the master is O(md), negligible compared to standard SGD. As the astute reader may have guessed, the key non-trivial technical ingredient is to identify the right quantity to check for concentration, and make it compatible with the task of non-convex optimization. 
In particular, we manage to construct such quantities so that (1) good non-Byzantine workers never get mislabelled as bad ones; (2) Byzantine workers may be labelled as good ones (which is inevitable) but when they do, the convergence rates are not impacted significantly; and (3) the notion does not require additional assumptions or running time overhead. The idea of using concentration (for each worker across time) to filter out Byzantine machines\n1By parallel speedup we mean the reduction in wall-clock time due to sampling gradients in parallel among the m nodes. In each time step, the algorithm generates m new gradients, although some may be corrupted.\ntraces back to the convex setting (Alistarh et al., 2018). However, the quantities used in (Alistarh et al., 2018) to check for concentration are necessarily different from this paper, and our analysis is completely new, as deriving non-convex rates is known to be much more delicate and challenging. Recently, Bulusu et al. (2020) used similar concentration filters to Alistarh et al. (2018) in the nonconvex setting, but under stronger assumptions, and for the simpler task of finding stationary points. Many other algorithms do not rely on concentration filters. In each iteration, they ask each worker to compute a batch of stochastic gradients, and then use coordinate-wise median or mean over the batch average (e.g. Yin et al. (2018; 2019); Yang et al. (2019)) or iterative filtering (e.g. Su & Xu (2018); Yin et al. (2019)) by the master to derive a “robust mean.” These works fundamentally rely on each iteration to calculate an almost precise full gradient, so that they can apply a surrogate of full gradient descent. Such algorithms can introduce higher sample and time complexities (see Section 2), are less practical than stochastic gradient schemes, require additional restrictions on the resilience factor α, e.g. α < 1/4 (Su & Xu, 2018), and, critically, have been shown to be vulnerable to recent attacks (Baruch et al., 2019; Xie et al., 2020). Attack resilience and experimental validation. There is a growing literature on customized attacks against Byzantine-resilient algorithms, showing that many defenses can be entirely circumvented in real-world scenarios (Baruch et al., 2019; Xie et al., 2020). Our algorithm is provably correct against these attacks, a fact we also validate experimentally. We implemented SafeguardSGD to examine its practical performance against a range of prior works (Xie et al., 2018b; Blanchard et al., 2017; Chen et al., 2017; Yin et al., 2018; 2019), and against recent attacks on the distributed task of training deep neural networks. Our experiments show that SafeguardSGD generally outperforms previous methods in convergence speed and final accuracy, sometimes by a wide accuracy margin. This is true not only against known Byzantine attacks, but also against attack variants we fine-crafted to specifically slow down our algorithm, and against transient node failures." }, { "heading": "2 STATEMENT OF OUR THEORETICAL RESULT", "text": "We denote by ‖ · ‖ the Euclidean norm and [n] := {1, 2, . . . , n}. Given symmetric matrices A,B, we let ‖A‖2 denote the spectral norm of A. We use to denote Loewner ordering, i.e. A B if A−B is positive semi-definite. We denote by λmin(A) the minimum eigenvalue of matrix A. 
We consider arbitrary d-dimensional non-convex functions f : Rd → R satisfying the following:\n• f(x) is L-Lipschitz smooth: meaning ‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖ for any x, y ∈ Rd; • f(x) is L2-second-order smooth: ‖∇2f(x)−∇2f(y)‖2 ≤ L2 · ‖x− y‖ for any x, y ∈ Rd;\nFor notational simplicity of the proofs, we assume L = L2 = V = 1.2 Note that we have also assumed the domain of f is the entire space Rd. If instead there is a compact domain X ⊂ Rd, then one can use projected SGD and re-derive similar results of this paper. We choose to present our result in the simplest setting to convey our main ideas. Byzantine non-convex stochastic distributed optimization. We let m be the number of worker machines and assume at most an α fraction of them are Byzantine for α ∈ [ 0, 12 ) . We denote by good ⊆ [m] the set of good (i.e. non-Byzantine) machines, and the algorithm does not know good. Assumption 2.1. In each iteration t, the algorithm (on the master) is allowed to specify a point xt and query m machines. Each machine i ∈ [m] gives back a vector∇t,i ∈ Rd satisfying\n• If i ∈ good, the stochastic gradient∇t,i satisfies E[∇t,i] = ∇f(xt) and ‖∇f(xt)−∇t,i‖ ≤ V .3\n• If i ∈ [m] \\ good, then∇t,i can be arbitrary (w.l.o.g. we assume ‖∇f(xt)−∇t,i‖ ≤ V).4\nRemark 2.2. For each t and i 6∈ good, the vector ∇t,i can be adversarially chosen and may depend 2In the literature of convergence analysis for non-convex optimization, the final complexity bounds naturally and polynomially depend on these parameters L,L2,V , and the way the dependence goes is typically unique (Allen-Zhu, 2018a;b; Fang et al., 2018; Jin et al., 2019). This is why it suffices to ignore their appearance and only compare the polynomial dependence on ε and d.\n3One can instead assume Pr[‖∇f(xt) − ∇t,i‖ > t] ≤ 2 exp(−t2/2V2) and the results of this paper continue to hold up to logarithmic factors. To present the simplest theory, we do not include that version in this paper. We refer interested readers to Jin et al. (2019) for how to deal with such probabilistic assumption (when there is no Byzantine worker).\n4This requirement ‖∇f(xt)−∇t,i‖ ≤ V is “without loss of generality” because it is trivial for the algorithm to catch bad machines if they output∇t,i more than 2V away from the majorities.\nAlgorithm 1 SafeguardSGD: perturbed SGD with double safe guard\nInput: point x0 ∈ Rd, rate η > 0, lengths T ≥ T1 ≥ T0 ≥ 1, threshold T1 > T0 > 0; 1: good0 ← [m]; 2: for t← 0 to T − 1 do 3: last1 ← max{t1 ∈ [t] : t1 is a multiple of T1}; 4: last0 ← max{t0 ∈ [t] : t0 is a multiple of T0} 5: for each i ∈ goodt do 6: receive∇t,i ∈ Rd from machine i; 7: Ai ← ∑t k=last1 ∇k,i |goodk| and Bi ← ∑t k=last0 ∇k,i |goodk| ;\n8: Amed ← Ai where i ∈ goodt is any machine s.t. ∣∣{j ∈ goodt : ‖Aj −Ai‖ ≤ T1}∣∣ > m/2.\n9: Bmed ← Bi where i ∈ goodt is any machine s.t. ∣∣{j ∈ goodt : ‖Bj −Bi‖ ≤ T0}∣∣ > m/2.\n10: goodt+1 ← { i ∈ goodt : ‖Ai −Amed‖ ≤ 2T1 ∧ ‖Bi −Bmed‖ ≤ 2T0 } ;\n11: xt+1 = xt − η ( ξt +\n1 |goodt| ∑ i∈goodt ∇t,i ) ; Gaussian noise ξt ∼ N (0, ν2I)\non {∇t′,i}t′≤t,i∈[m]. In particular, the Byzantine machines can even collude during an iteration." }, { "heading": "2.1 OUR ALGORITHM AND THEOREM", "text": "Our algorithm is based on arguably the simplest possible method for achieving this goal, (perturbed) stochastic gradient descent (SGD) (Ge et al., 2015). Our techniques more broadly apply to more complicated methods (e.g. 
at least to Allen-Zhu (2018a;b)), but we choose to analyze the simplest variant of SGD, since it is the most widely applied method in modern non-convex machine learning. As illustrated in Algorithm 1, in each iteration t = 0, 1, . . . , T − 1, we maintain a set of (allegedly) good machines goodt ⊆ [m]. We begin with good0 = [m] and start to detect malicious machines and remove them from the set. We choose a learning rate η > 0, and perform the SGD update\nxt+1 = xt + ξt − η 1|goodt| ∑ i∈goodt\n∇t,i where ξt ∼ N (0, ν2I) is a random Gaussian perturbation that is added for theoretical purpose. For each machine i ∈ [m], we keep track of the history of its stochastic gradients up to two windows. Namely, Ai ← ∑t k=last1 ∇k,i |goodk| and Bi ← ∑t k=last0 ∇k,i |goodk|\n, for windows sizes T0 ≤ T1 ≤ T . We compare among remaining machines in goodt, and kick out those ones whose Ai or Bi deviate “more than usual” to construct goodt+1. Conceptually, we view these two as safe guards. Our theory makes sure that, when the “window sizes” and the thresholds for “more than usual” are defined properly, then goodt shall always include good, and the algorithm shall proceed to find approximate local minima. Formally, we have (letting the Õ notion to hide polylogarithmic factors)\nTheorem 2.3. Let C3 = α2 + 1m . Suppose we choose ν 2 = Θ̃(C3), η = Θ̃( ε\n2\ndC3 ), T0 = Θ̃( 1η ),\nT1 = Θ̃( 1 η √ ε ), T0 = Θ̃(\n√ T0), and T1 = Θ̃( √ T1), then after\nT = Õ (\n(f(x0)−min f(x))d ε4 (α 2 + 1m ) )\niterations, with high probability, for at least constant fraction of the indices t ∈ [T ], they satisfy ‖∇f(xt)‖ ≤ ε and ∇2f(xt) − √ εI .\nRemark 2.4. If one only wishes to achieve a significantly simpler goal — finding first-order critical points ‖∇f(xt)‖ ≤ ε — the analysis becomes much easier (see Section 3.1). In particular, having one safe guard without perturbation (i.e. ν = 0) suffices, and the iteration complexity reduces to T = Õ ( f(x0)−min f(x) ε4 (α 2 + 1m ) ) . Bulusu et al. (2020) achieves this easier goal but requires an additional assumption: there is one guaranteed good worker known by the master.\nOur contribution. We reiterate our theoretical contributions from three perspectives. 1) When α < 1/ √ m, our algorithm requires mT = Õ ( (f(x0)−min f(x))d ε4 ) stochastic gradient computations. This matches the best known result (Jin et al., 2019) under our minimal assumptions of the nonconvex objective. (There exist other works in the stochastic setting that break the ε−4 barrier\nand get rid of the dimension dependence d under stronger assumptions.)5. 2) When α < 1/ √ m, our algorithm enjoys linear parallel speed-up: the parallel time complexity reduces by a factor of Θ(m). When α ∈ [1/ √ m, 1/2), our parallel time complexity is Õ(α2) times that needed when no parallelism is used, still giving noticeable speedup. The α2 factor also appeared in convex Byzantine distributed optimization (and is known to be tight there) (Yin et al., 2018; Alistarh et al., 2018). Comparison to (Yin et al., 2019). Yin et al. (2019) derived three gradient descent-type algorithms to find points with a weaker (and less standard) guarantee: ‖∇f(x)‖ ≤ ε and ∇2f(x) −(ε2d)1/5I. Despite practical differences (namely, gradient descent may be less favorable comparing to stochastic gradient descent especially in deep learning applications), the parallel time complexities derived from their result are also generally larger than ours. 
Their paper focuses on bounding the number of sampled stochastic functions, as opposed to the number of stochastic gradient evaluations like we do. When translated to our language, each of the workers in their setting needs to evaluate T stochastic gradients, where (1) T = Õ ( α2d ε4 + d2 ε4m+ √ d ε3\n) if using coordinate-wise median, (2) T = Õ ( α2d2\nε4 + d2 ε4m\n) if using trimmed mean, and (3)\nT = Õ ( α ε4 + d ε4m ) if using iterative filtering. The complexities (1) and (2) are larger than ours (also with a weaker guarantee); the complexity (3) seems incomparable to ours, but when translating to the more standard (ε, √ ε) guarantee, becomes T = Õ ( αd2\nε5 + d3 ε5m\n) so is also larger than ours. It is\nworth noting that (3) requires α < 1/4 so cannot withstand half of the machines being Byzantine. Resilience against practical attacks. Our algorithm’s filtering is based upon tracking Bi (resp. Ai), the stochastic gradients of each machine i averaged over a window of T0 (resp. T1) iterations. This is a departure from previous defenses, most of which are history-less, and enables us to be provably Byzantine-resilient against state-of-the-art attacks (Baruch et al., 2019; Xie et al., 2020). In Baruch et al. (2019), Byzantine workers collude to shift the gradient mean by a factor β times the standard deviation of the (true stochastic) gradient, while staying within population variance. They noticed β can be quite large especially in neural network training. Their attack circumvent existing defenses because those defense algorithms are “historyless”, while their attack is statistically indistinguishable from an honest execution in any single iteration. However, our algorithm can provably defend against this attack since it has memory: Byzantine workers following their strategy will progressively diverge from the (honest) “median” Bmed (by an amount proportional to Ω(T ) in T iterations as opposed to √ T ), and be marked as malicious by our algorithm. (See Figure 2(a).) In Xie et al. (2020), Byzantine workers deviate in the negative direction of the gradient. However, to avoid being caught by our algorithm, the maximum “magnitude” of this attack has to stay within our thresholds. We implemented both attacks and showed our algorithm’s robustness experimentally. Finally, we note that prior “historyless” schemes, such as Krum or median-based schemes, could be thought of as providing stronger guarantees, as they in theory allow Byzantine nodes to change IDs during the computation: such schemes only require an upper bound on the number of Byzantine agents in each round. However, the attack of Baruch et al. (2019) essentially shows that all such schemes are vulnerable to variance attacks, and that such attacks are eminently plausible in practice. Thus, this suggests that the use of historical information, which requires that Byzantine nodes cannot change their IDs during the execution, may be necessary for Byzantine resilience. Tolerating transient failures and node ID relabeling. Our algorithm can also withstand transient node failures and some degrees of ID relabeling, by resetting the set of good nodes goodt to include all nodes every T1 steps. The algorithm then proceeds as usual. The key observation behind this relaxation is the fact that our analysis only requires that the attack conditions hold inside the current window. (Please see the Theorem B.1 for details.) We validate this experimentally in Section 5." 
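Before moving to the analysis, it may help to see the master-side filtering rule in code. The following NumPy sketch is our own minimal re-implementation of the double safe guard of Algorithm 1, not the code used in Section 5: the function names (median_like, safeguard_step) are ours, the accumulators are normalized by the current |good| rather than the per-step |good_k|, and the Gaussian perturbation ξt is kept only because the analysis requires it.

import numpy as np

def median_like(acc, good, m, thresh):
    # Return acc[i] for any currently-good worker i such that more than m/2
    # workers j in `good` satisfy ||acc[j] - acc[i]|| <= thresh (the Amed/Bmed
    # rule of Algorithm 1). An honest worker succeeds w.h.p., so this scan
    # typically stops at the first candidate.
    for i in good:
        if sum(np.linalg.norm(acc[j] - acc[i]) <= thresh for j in good) > m / 2:
            return acc[i]
    raise RuntimeError("no concentrated worker found")

def safeguard_step(x, t, grads, good, A, B, T0, T1, th0, th1, eta, nu, rng):
    # One master-side iteration. grads[i] is worker i's reported vector at x;
    # A, B are (m, d) arrays holding the long- and short-window accumulators.
    m = A.shape[0]
    if t % T1 == 0:
        A[:] = 0.0                              # reset the long window
    if t % T0 == 0:
        B[:] = 0.0                              # reset the short window
    for i in good:
        A[i] += grads[i] / len(good)
        B[i] += grads[i] / len(good)
    A_med = median_like(A, good, m, th1)
    B_med = median_like(B, good, m, th0)
    good = {i for i in good
            if np.linalg.norm(A[i] - A_med) <= 2 * th1
            and np.linalg.norm(B[i] - B_med) <= 2 * th0}
    xi = nu * rng.standard_normal(x.shape)      # perturbation used by the analysis
    x = x - eta * (xi + sum(grads[i] for i in good) / len(good))
    return x, good

Since an honest worker is a valid median candidate with high probability, the scan in median_like typically stops at the first candidate, which is consistent with the O(md) per-iteration master overhead claimed in Section 1.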
}, { "heading": "3 WARMUP: SINGLE SAFE GUARD", "text": "As a warmup, let us first analyze the behavior of perturbed SGD with a single safe guard. Consider Algorithm 2, where we start with a point w0, a set good0 ⊇ good, and perform T steps of perturbed SGD. (We use the wt sequence instead of the xt sequence to emphasize that we are in Algorithm 2.)\n5Works such as (Allen-Zhu, 2018a; Lei et al., 2017; Tripuraneni et al., 2017; Allen-Zhu, 2018b; Fang et al., 2018; Nguyen et al., 2017) require f(x) = Es∼D[fs(x)] where each fs(x) is second-order smooth and/or Lipschitz smooth. This requirement may be too strong for certain practical applications.\nAlgorithm 2 Perturbed SGD with single safe guard (for analysis purpose only)\nInput: point w0 ∈ Rd, set good0 ⊇ good, rate η > 0, length T ≥ 1, threshold T > 0; 1: for t← 0 to T − 1 do 2: for each i ∈ goodt do 3: receive∇t,i ∈ Rd from machine i; 4: Bi ← ∑t k=0 ∇k,i |goodk| ;\n5: Bmed ← Bi where i ∈ goodt is any machine s.t. ∣∣{j ∈ goodt : ‖Bj −Bi‖ ≤ T}∣∣ > m/2.\n6: goodt+1 ← { i ∈ goodt : ‖Bi −Bmed‖ ≤ 2T } ;\n7: wt+1 = wt − η ( ξt +\n1 |goodt| ∑ i∈goodt ∇t,i ) ; Gaussian noise ξt ∼ N (0, ν2I)\nDefinition 3.1. We make the following definition to simplify notations: let Ξt := σt + ∆t where • σt := 1|goodt| ∑ i∈good ( ∇t,i −∇f(wt) ) • ∆t := 1|goodt| ∑ i∈goodt\\good ( ∇t,i −∇f(wt)\n) Therefore, we can re-write the SGD update as wt+1 = wt − η(∇f(wt) + ξt + Ξt) .\nThe following lemma is fairly immediate to prove: Lemma 3.2 (single safe guard). In Algorithm 2, suppose we choose T = 8 √ T log(16mT/p). Then, with probability at least 1− p/4, for every t = 0, . . . , T − 1, • goodt ⊇ good.\n• ‖σt‖2 ≤ O( log(T/p)m ) and ‖σ0 + · · ·+ σt−1‖ 2 ≤ O(T log(T/p)m )\n• ‖∆t‖2 ≤ α2 and ‖∆0 + · · ·+ ∆t−1‖2 ≤ O(α2T log(mT/p)) • |〈∇f(wt), ξt〉| ≤ ‖∇f(wt)‖ ·O(ν √ log(T/p)),\n• ‖ξt‖2 ≤ O(ν2d log(T/p)), ‖ξ0 + · · ·+ ξt−1‖2 ≤ O(ν2dT log(T/p)) We call this probabilistic event EventsingleT (w0) and Pr[Event single T (w0)] ≥ 1− p/4.\n(The third property above is ensured by our choice of T and the use of safe guard, and the rest of the properties follow from simple martingale concentration arguments. Details are in Appendix A.1.)" }, { "heading": "3.1 CORE TECHNICAL LEMMA 1: OBJECTIVE DECREASE", "text": "Our first main technical lemma is the following:\nLemma 3.3. Suppose we choose T as in Lemma 3.2. Denote by C1 = log(T/p) and C2 = α2 log mTp + log(T/p) m . Suppose η ≤ 0.01 min{1, 1 C2 }, T = 1 100η(1+ √ C2) and we start from w0 and apply Algorithm 2. Under event EventsingleT (w0), it satisfies\nf(w0)− f(wT ) ≥ 0.7η ∑T−1 t=0 ( ‖∇f(wt)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) Lemma 3.3 says after T ≈ 1η steps of perturbed SGD, the objective value decreases by, up to some small additive error and up to logarithmic factors, f(w0) − f(wT ) ≥ 0.7η ∑T−1 t=0 (‖∇f(wt)‖2 − ηC2). This immediately implies, if we choose η ≈ ε 2\nC2 , then by repeating this analysis for\nO(C2ε4 ) = O( α2+1/m\nε4 ) iterations, we can find approximate critical point x with ‖∇f(x)‖ ≤ ε. Proof sketch of Lemma 3.3. The full proof is in Appendix A.2 but we illustrate the main idea and difficulties below. After simple manipulations, it is not hard to derive that\nf(w0)− f(wT ) ' 0.9η ∑T−1 t=0 ( ‖∇f(wt)‖2 − η ) + η ∑T−1 t=0 〈∇f(wt),Ξt〉︸ ︷︷ ︸\nremainder terms\nwhere recall that Ξt = σt+∆t. When there are no Byzantine machines, we have E[Ξt] = E[σt] = 0 so the remainder terms must be small by martingale concentration. 
Therefore, the main technical difficulty arises to deal with those Byzantine machines, who can adversarially design their∇t (even by collusion) so as to negatively correlate with∇f(wt) to “maximally destroy” the above inequality.\nOur main idea is to use second-order smoothness to write∇f(wt) ≈ ∇f(w0)+∇2f(w0)·(wt−w0). To illustrate our idea, let us ignore the constant vector and assume that the Hessian is the identity: that is, imagine as if∇f(wt) ≈ wt −w0. Using wt −w0 = − ∑ k<t Ξt + ξt, we immediately have\n−〈∇f(wt),Ξt〉 ≈ −〈wt − w0,Ξt〉 = ∑ k<t〈Ξk,Ξt〉+ ∑ k<t〈ξk,Ξt〉 (3.1)\nFor the first partial sum 〈 ∑ k<t Ξk,Ξt〉 in (3.1), it is easy to bound its magnitude using our safeguard.\nIndeed, we have ∣∣∑ t〈 ∑ k<t Ξk,Ξt〉 ∣∣ ≤ ‖∑t Ξt‖2+∑t ‖Ξt‖2 so we can apply Lemma 3.2. For the second partial sum ∑ t ∑ k<t〈ξk,Ξt〉, we can apply the concentration Proposition 3.4 below.\nProposition 3.4. Fix the dimension parameter d ≥ 1. Suppose ξ0, . . . , ξT−1 ∈ Rd are i.i.d. drawn fromN (0, I), and that ∆1, . . . ,∆T−1 are arbitrary vectors in Rd. Here, each vector ∆t with t = 1, . . . , T − 1 can depend on ξ0, . . . , ξt−1 but not on ξt, . . . , ξT−1. Suppose that these vectors satisfy ‖∆1 + · · ·+ ∆t‖2 ≤ T for every t = 1, . . . , T − 1. Then, with probability at least 1− p,∣∣∣∑T−1t=1 〈ξ0 + · · ·+ ξt−1,∆t〉∣∣∣ ≤ O(√dTT log(T/p)) ." }, { "heading": "3.2 CORE TECHNICAL LEMMA 2: RANDOMNESS COUPLING", "text": "Our next technical lemma studies that, if run Algorithm 2 from a point w0 so that the Hessian ∇2f(w0) has a eigenvalue which is less than −δ (think of w0 as a saddle point), then with good probability, after sufficiently many iterations, the sequence w1, w2, . . . , wT shall escape from w0 to distance at leastR for some parameterR ≈ δ. To prove this, motivated by Jin et al. (2017), we study two executions of Algorithm 2 where their randomness are coupled. We then argue that at least one of them has to escape from w0. For any vector v, let [v]i denote the i-th coordinate of v.\nLemma 3.5. Suppose we choose T as in Lemma 3.2 and C1, C2 as in Lemma 3.3. Suppose w0 ∈ Rd satisfies λmin(∇2f(w0)) = −δ for some δ ≥ 0. Without loss of generality let e1 be the eigenvector of∇2f(w0) with smallest eigenvalue. Consider now two executions of Algorithm 2, both starting from wa0 = w b 0 = w0, and suppose their randomness {ξat }t and {ξbt }t are coupled so that [ξat ]1 = −[ξbt ]1 but [ξat ]i = [ξbt ]i for i > 1. In words, the randomness is the same orthogonal to e1, but along e1, the two have opposite signs. Now, suppose we perform T = Θ( 1ηδ log R2δ ην2 ) steps of perturbed SGD from wa0, w b 0 respectively using Algorithm 2. Suppose\nR ≤ O (\nδ√ C1 log(R2δ/ην2)\n) and ν2 ≥ Ω ( C2 log R2δ ην ) .\nThen, under events EventsingleT (w a 0) and Event single T (w b 0), with probability at least 0.98, either ‖wat − w0‖ > R or ‖wbt − w0‖ > R for some t ∈ [T ].\nProof details in Appendix A.4. The main proof difficulty is to analyze a noisy version of the power method, where the noise comes from (1) Gaussian perturbation (which is the good noise), (2) stochastic gradients (which has zero mean), and (3) Byzantine workers (which can be adversarial)." 
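The coupling argument is easy to visualize numerically. The sketch below is our own illustration, not code from the paper: it runs two copies of noisy gradient descent on the quadratic saddle f(w) = (1/2) w'Hw with λmin(H) = −δ, with the Gaussian noise mirrored along e1 exactly as in Lemma 3.5 and, for simplicity, no stochastic-gradient or Byzantine noise. The gap between the two runs along e1 grows geometrically, so at least one run leaves the ball of radius R.

import numpy as np

rng = np.random.default_rng(0)
d, delta, eta, nu, R, T = 20, 0.1, 0.01, 0.1, 0.5, 5000

H = np.eye(d)
H[0, 0] = -delta                     # lambda_min(H) = -delta, eigenvector e_1

wa = np.zeros(d)                     # run (a), started at the saddle w0 = 0
wb = np.zeros(d)                     # run (b), coupled to run (a)
escape_step = None
for t in range(T):
    xi_a = nu * rng.standard_normal(d)
    xi_b = xi_a.copy()
    xi_b[0] = -xi_b[0]               # mirror the noise along e_1 only
    wa -= eta * (H @ wa + xi_a)
    wb -= eta * (H @ wb + xi_b)
    if max(np.linalg.norm(wa), np.linalg.norm(wb)) > R:
        escape_step = t
        break

print("one run left the R-ball at step", escape_step)
print("gap along e_1, growing like (1 + eta*delta)^t:", abs(wa[0] - wb[0]))

Along e1 the difference wa − wb evolves as (1 + ηδ) times itself plus fresh noise, so it compounds geometrically, while the orthogonal coordinates stay in a contracting Ornstein-Uhlenbeck regime; this is exactly the dichotomy the proof of Lemma 3.5 exploits.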
}, { "heading": "4 FROM WARMUP TO FINAL THEOREM WITH DOUBLE SAFE GUARDS", "text": "At a high level, Lemma 3.3 ensures that if we keep encountering points with large gradient ‖∇f(wt)‖, then the objective should sufficiently decrease; in contrast, Lemma 3.5 says that if we keep encountering points with negative Hessian directions (i.e., λmin(∇2f(wt)) < −δ), then the points must move a lot (i.e., by more than R in T iterations, which can also lead to sufficient objective decrease, see Lemma B.4). Therefore, at a high level, when the two lemmas are combined, they tell that we must not encounter points with ‖∇f(x)‖ being large, or λmin(∇2f(x)) being very negative, for too many iterations. Therefore, the algorithm can find approximate local minima. The reason we need two safe guards, is because the number of rounds T for Lemma 3.3 and Lemma 3.5 differ by a factor. We need two safe guards with different window sizes to ensure the two lemmas simultaneously hold. We encourage the reader to examine the full analysis in Appendix B." }, { "heading": "5 EXPERIMENTAL VALIDATION", "text": "We evaluate the convergence of SafeguardSGD to examine its practical performance against prior works. We perform the non-convex task of training a residual network ResNet-20 (He et al., 2016) on the CIFAR-10/100 datasets (Krizhevsky et al., 2014). More details are given in Appendix C.\nWe instantiatem = 10 workers and one master node executing data-parallel SGD for 140 passes (i.e. epochs) over the training dataset. The results for higher number of workers and epochs are similar, and therefore omitted. We compare against Geometric Median (Chen et al., 2017), Coordinatewise Median (Yin et al., 2018; 2019), Krum (Blanchard et al., 2017), and Zeno (Xie et al., 2018b). Overall, our experimental setup is very similar to Zeno (Xie et al., 2018b) but with additional attacks. We implemented the approach of Yang et al. (2019), but found it very sensitive to hyper-parameter values and were unable to make it converge across all attacks even after significant tuning of its γ parameter. We also implemented the convex algorithm of Alistarh et al. (2018), and executed it in our non-convex setting. We found their algorithm can be easily attacked on our ResNet training tasks. There exists a simple attack, described in Appendix C.4 which causes their algorithm to either mislabel most good workers as Byzantine, or diverge, or converge to very poor solutions. This is not surprising, since their algorithm is designed for, and only guaranteed to work in, the convex setting. To make the comparison stronger, when implementing SafeguardSGD, we have chosen fixed window sizes T0 = 1 epoch and T1 = 6 epochs across all experiments, and adopted an automated process to select T0,T1. Determining these thresholds requires being able to pre-run the task on an honest worker. We have also implemented a single safeguard variant of SafeguardSGD, with window size T = 3 epochs. Attacks. We set α = 0.4, which means that there are 4 Byzantine workers. (This exceeds the fault-tolerance of Krum, and so we also tested Krum with only 3 Byzantine workers.)\n• LABEL-FLIPPING ATTACK: each Byzantine worker computes its gradient based on the crossentropy loss with flipped labels: for CIFAR-10, label ` ∈ {0, ..., 9} is flipped to 9− `.\n• DELAYED-GRADIENT ATTACK: each Byzantine worker sends an old gradient to master. 
In our experiments, the delay is D = 1000 iterations.

• VARIANCE ATTACK (Baruch et al., 2019): Byzantine workers measure the mean and the standard-deviation of gradients at each round, and collude to move the mean by the largest value which still operates within population variance. (For our parameter settings, this is 0.3 times the standard deviation. We discuss results for additional parameter values in the Appendix.)

• SIGN-FLIPPING ATTACK: each Byzantine worker sends the negative gradient to the master. • SAFEGUARD ATTACK: each Byzantine worker sends a negative but re-scaled gradient to the master. We use re-scale factors 0.6 and 0.7 in our experiments. The re-scale factor 0.6 avoids triggering the safe-guard conditions at the master, and the re-scale factor 0.7 occasionally triggers the safe-guard conditions. This attack is an instantiation of the inner-product attack (Xie et al., 2020), customized specifically to maximally affect our SafeguardSGD algorithm.

Main experimental results. The ideal test accuracy is 91.7%, which corresponds to applying SGD using only the stochastic gradients from the honest workers. Figure 1 compares the performances in test accuracy. Below we summarize our main findings for the experiments, and we defer detailed discussions (and additional experiments for CIFAR-100) to Appendix C.

• SafeguardSGD generally outperforms all the previous methods in test accuracy. The test accuracy difference can be “90% vs. < 40%” between our algorithm and the best prior work.

• The variance attack is indeed very strong, in that it severely affects the accuracy of all prior works (test accuracy < 35%). This is because these defenses are “historyless.” By contrast, our algorithm not only provably but also empirically defends against it.

• Our safeguard attack (especially with re-scale factor 0.7) is as strong as the variance attack, and even stronger on the CIFAR-100 dataset; please see the results in Appendix C.2.5.

• The label-flipping attack is rather weak: although some defenses, such as Zeno, did not determine which of the workers are malicious, they still converge well under this attack.

• The sign-flipping and delayed-gradient attacks are moderate: the best prior works can achieve accuracy 60% ∼ 70%. It is worth noting that the sign-flipping attack can already nullify the Zeno defence (test accuracy 20%). The issue seems to be that it is very hard for Zeno to determine, from relatively few samples, whether the gradient direction has been flipped.

• SafeguardSGD can easily catch all the bad workers under sign-flipping and variance attacks, and thus gives ideal performance. It cannot catch any bad worker for label-flipping and delayed-gradient attacks, but there is no performance loss anyway from using such bad gradients.

• The safeguard attacks, designed to maximally impact the performance of our SafeguardSGD, can indeed affect our performance. Specifically, under re-scale factor 0.6, the test accuracy drops from 91.7% to 89.3% because SafeguardSGD cannot catch any bad worker; however, under re-scale factor 0.7, the test accuracy no longer drops because SafeguardSGD can begin to catch some bad workers (it can catch between 0 and 4 bad workers, depending on the randomness).

• In most cases, the single-safeguard algorithm is close to double-safeguard, except for the safeguard(x0.7) attack, in which using double-safeguard one can more easily catch bad workers. 
(This is more apparent in the CIFAR-100 experiment, see Appendix C.2.5.)\nWe conclude that SafeguardSGD can be practical, and outperforms previous approaches. A deeper dive: how the algorithm works. Let us explain the inner workings of our algorithm in the context of a “delayed” attack, where the Byzantine nodes collude to execute an attack only after a specific, given point in the execution (in this case, the first half-epoch). Figure 2(a) presents the results from the perspective of the value of ‖Bi −Bmed‖ registered at the master server, for two nodes, an honest one, and a Byzantine one. The value of ‖Bi−Bmed‖ increases for all the nodes (at a rate of roughly √ t at step t); but, once the attack starts, the statistic for the Byzantine node grows linearly in t, leading to fast detection. Transient attacks and node ID relabeling. Finally, in Figure 2(b) we analyze the behaviour of our algorithm when it periodically (every 3 epochs for single safeguard and 6 epochs for double safeguard) resets the set of good nodes to include all nodes, restarting the detection process from scratch. Our theoretical result still applies after this relaxation. This relaxation has two benefits. First, it benefits from bad workers that under transient failures (e.g., the node fails for 10 epochs but resumes to work correctly after a while), and thus benefits from the data stored on this worker. Second, it can defend against certain degree of node ID relabeling: it supports the case when good and bad workers exchange their IDs every 6 epochs. In Figure 2(b), we see even under the (very strong) variance attack, relaxed safeguard maintains good performance." }, { "heading": "ACKNOWLEDGMENTS", "text": "F. E. and D. A. were supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML)." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A MISSING PROOFS FOR SECTION 3", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 3.2", "text": "Recall the following, useful inequality.\nLemma A.1 (Pinelis’ 1994 inequality (Pinelis, 1994)). Let X1, . . . , XT ∈ Rd be a random process satisfying E[Xt|X1, . . . , Xt−1] = 0 and ‖Xt‖ ≤M . Then, Pr [ ‖X1 + · · ·+XT ‖2 > 2 log(2/δ)M2T ] ≤ δ.\nLemma 3.2 is in fact a direct corollary of the following claim, whose proof is quite classical. Denote by C = log(16mT/p). Denote by\nB (t) i := ∇0,i |good0| + · · ·+ ∇t−1,i|goodt−1| and B(t)? := ∇f(w0) |good0| + · · ·+ ∇f(wt−1)|goodt−1| .\nRecall at iteration t− 1, Algorithm 2 computes {B(t)1 , . . . , B (t) m } as well as some B(t)med = B (t) i where i is any machine in goodt−1 such that at least half of j ∈ [m] satisfies ‖B (t) j −B (t) i ‖ ≤ 8 √ tC/m.\nClaim A.2. Let C = log(16mT/p). Then, with probability at least 1− p/4, we have\n(a) for all i ∈ good and t ∈ [T ], ‖B(t)i −B (t) ? ‖ ≤ 4\n√ tC/m.\n(b) for all t ∈ [T ], each i ∈ good is a valid choice for B(t)med = B (t) i .\n(c) for all i ∈ good and t ∈ [T ], ‖B(t)i −B (t) med‖ ≤ 16 √ tC/m and ‖B(t)? −B(t)med‖ ≤ 12 √ tC/m (d) for all i ∈ good and t ∈ [T ], we also have i ∈ goodt+1.\n(e) ∥∥∑ i∈good ( B (t) i −B (t) ? )∥∥ ≤ O(√t log(T/p)/√m). Proof of Claim A.2. We prove by induction. Suppose the statements hold for t− 1 and we now move to t.\n(a) For each i ∈ good, note E[∇t,i] = ∇t and ‖∇t,i − ∇t‖ ≤ 1. Let Xt = ∇t,i−∇t|goodt| , so that ‖Xt‖ ≤ 1|goodt| ≤ 1 (1−α)m ≤ 2 m . We can thus apply Lemma A.1 to the Xt and then take a union\nbound over all i ∈ good. 
Thus, with probability at least 1 − p 8T we have ‖B(t)i − B (t) ? ‖ ≤ 4\n√ tC/m for\nall i ∈ good. The result follows from a further union bound over t ∈ [T ].\n(b) Claim A.2a implies for every i, j ∈ good we have ‖B(t)i − B (t) j ‖ ≤ 8\n√ tC/m. Therefore each i ∈ good\nis a valid choice for setting B(t)med = B (t) i .\n(c) This is a consequence of the previous items and the definition of B(t)med.\n(d) This is a consequence of the previous item.\n(e) We can apply Lemma A.1 with {X1, X2, . . . , Xt|good|} = { ∇k,i−∇f(wk) |goodk|\n}k∈[t],i∈good. It holds with probability at least 1− p\n8T that ∥∥∑ i∈good ( B (t) i −B (t) ? )∥∥ ≤ O(√t log(T/p)/√m).\nProof of Lemma 3.2. The property goodt ⊇ good is from Claim A.2d.\nThe property ‖σt‖2 ≤ O( log(T/p)m ) is by standard concentration inequalities for sums of bounded random vectors.\nThe property ‖σ0 + · · ·+ σt−1‖2 ≤ O(T log(T/p)m ) is from Claim A.2e. The property ‖∆t‖ ≤ α is obvious as we have at most α fraction of the bad machines. The bound on ‖∆0 + · · · + ∆t−1‖2 can be derived as follows. For every i ∈ [m] \\ good, let t be the last iteration i satisfies i ∈ goodt. Then, by the triangle inequality,\n‖B(t+1)i −B (t+1) ? ‖ ≤\n2 m + ‖B(t)i −B (t) ? ‖\nOn the other hand, t ∈ goodt implies ‖B (t) i − B (t) med‖ ≤ 16\n√ tC/m by the algorithm; combining this with\n‖B(t)? −B(t)med‖ ≤ 12 √ tC/m, and summing up over all such bad machines i finishes the proof.\nThe final two properties follow from standard facts about Gaussian random vectors." }, { "heading": "A.2 PROOF OF LEMMA 3.3", "text": "Proof of Lemma 3.3. Using the Lipschitz smoothness of f(·), we have\nf(wt)− f(wt+1) ≥ 〈∇f(wt), wt − wt+1〉 − 1\n2 ‖wt − wt+1‖2\n= η‖∇f(wt)‖2 + η〈∇f(wt),Ξt〉 − 1\n2 ‖wt − wt+1‖2 + η〈∇f(wt), ξt〉\nWe first show:\n‖wt − wt+1‖2 = η2‖∇f(wt) + Ξt − ξt‖2 ≤ 3η2(‖∇f(wt)‖2 + ‖Ξt‖2 + ‖ξt‖2)\n| T−1∑ t=0 η〈∇f(wt), ξt〉| ≤ η √√√√T−1∑ t=0 ‖∇f(wt)‖2 ·O(ν √ C1) ≤ ( 0.05η T−1∑ t=0 ‖∇f(wt)‖2 ) +O(C1ν 2η)\nThe first follows since (a + b + c)2 ≤ 3(a2 + b2 + c2) for any a, b, c ∈ R, and the second follows from Lemma 3.2. 
Combining them, and also using that ‖Ξt‖2 ≤ O(C2), ‖ξt‖2 ≤ O(dν2C1), and η ≤ 0.01, we have\nf(w0)− f(wT ) ≥ 0.9η T−1∑ t=0 ( ‖∇f(wt)‖2 −O(ηC2) ) + η T−1∑ t=0 〈∇f(wt),Ξt〉 −O(ηTν2C1(ηd+ 1 T ))\n(A.1)\nFor the inner product on the right hand of (A.1), we have that\nη T−1∑ t=0 〈∇f(wt),Ξt〉 = η T T−1∑ q=0 〈 ∇f(wq), T−1∑ t=0 Ξt 〉\n︸ ︷︷ ︸ ♠\n+ η\nT T−1∑ q=0 T−1∑ t=0\n〈∇f(wt)−∇f(wq),Ξt〉︸ ︷︷ ︸ ♣\n(A.2)\nFor the first term ♠, we have\n|♠| ≤ η T T−1∑ q=0 ∣∣∣〈∇f(wq), T−1∑ t=0 Ξt 〉∣∣∣ ≤ η T T−1∑ q=0 ‖∇f(wq)‖ · ∥∥∥ T−1∑ t=0 Ξt ∥∥∥ ≤ 0.1η\nT−1∑ q=0 ‖∇f(wq)‖2 + O(η) T 2 T−1∑ q=0 ∥∥∥ T−1∑ t=0 Ξt ∥∥∥2 ≤ 0.1η\nT−1∑ q=0 ‖∇f(wq)‖2 +O ( ηC2 ) where the last inequality follows from Lemma 3.2.\nFor the second term ♣, we have\n|♣| ≤ η T T−1∑ q=0 ∣∣∣ T−1∑ t=0 〈∇f(wt)−∇f(wq),Ξt〉 ∣∣∣ ≤ η T T−1∑ q=0 ∣∣∣ T−1∑ t=0 〈∇2f(w0)(wt − wq),Ξt〉 ∣∣∣︸ ︷︷ ︸\n♦\n+ η\nT T−1∑ q=0 T−1∑ t=0\n(‖wt − w0‖+ ‖wq − w0‖)‖wt − wq‖‖Ξt‖︸ ︷︷ ︸ ♥\nUsing ‖wt − wq‖ ≤ ‖wt − w0‖+ ‖wq − w0‖, one can derive\n♥ ≤ η T T−1∑ q=0 T−1∑ t=0 (‖wt − w0‖+ ‖wq − w0‖)2 ·O( √ C2)\n≤ η T−1∑ t=0 ‖wt − w0‖2 ·O( √ C2) ≤ η3 T−1∑ t=0 ‖∇f(w0) + · · ·+∇f(wt−1) + Ξ0 + · · ·+ Ξt−1 + ξ0 + · · ·+ ξt−1‖2 ·O( √ C2)\n≤ O( √ C2η 3T 2) T−1∑ t=0 ‖∇f(wt)‖2 +O( √ C2C2η 3T 2) +O(η3ν2T 2dC1 √ C2)\nAs for ♦,∣∣∣ T−1∑ t=0 〈∇2f(w0)(wt − wq),Ξt〉 ∣∣∣ ≤ ∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(wt − wq),Ξt〉 ∣∣∣+ ∣∣∣ q−1∑ t=0 〈∇2f(w0)(wt − wq),Ξt〉 ∣∣∣\nFor the first term (and the second term is analogous), we have∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(wt − wq),Ξt〉 ∣∣∣\n= η ∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(∇f(wq) + · · ·∇f(wt−1) + Ξq + · · ·Ξt−1 + ξq + · · ·+ ξt−1),Ξt〉 ∣∣∣\n≤ η ∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(ξq + · · ·+ ξt−1),Ξt〉 ∣∣∣+\nη ∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(∇f(wq) + · · ·∇f(wt−1)),Ξt〉 ∣∣∣+ η∣∣∣ T−1∑ t=q+1 〈∇2f(w0)(Ξq + · · ·Ξt−1),Ξt〉 ∣∣∣\n¬ ≤ η ·O( √ dν2TC1 · √ TC2)+\nη ∣∣∣ T−2∑ t=q 〈∇2f(w0)∇f(wt) ,Ξt+1 + · · ·+ ΞT−1〉 ∣∣∣+ η 2 〈 ∇2f(w0)(Ξq + · · ·ΞT−1), (Ξq + · · ·ΞT−1) 〉\n≤ η ·O( √ dν2TC1 · √ TC2) + η T−2∑ t=q ‖∇f(wt)‖‖Ξt+1 + · · ·+ ΞT−1‖+ η 2 ‖Ξq + · · ·ΞT−1‖2\n ≤ O(η √ TC2) · T−1∑ t=0 ‖∇f(wt)‖+O(TηC2 + Tην2dC1) .\nAbove, inequality ¬ uses ‖Ξ0 + · · ·Ξt‖ ≤ O( √ TC2) for C2 = α2 log mTp + log(T/p) m\n(see Lemma 3.2) and a delicate application of Azuma’s inequality that we state at the end of this subsection (see Proposition 3.4); Inequality uses Young’s inequality and Lemma 3.2.\nPutting this back to the formula of ♦, we have\n♦ ≤ O(η2 √ TC2) · T−1∑ t=0 ‖∇f(wt)‖+O(Tη2C2 + Tη2ν2dC1)\n≤ 0.1η T−1∑ t=0 ‖∇f(wt)‖2 +O(η3T 2C2 + Tη2C2 + Tη2ν2dC1)\nFinally, putting ♦ and ♥ back to ♣, and putting ♣ and ♠ back to (A.2) and (A.1), we have f(w0)− f(wT ) ≥ 0.8η T−1∑ t=0 ‖∇f(wt)‖2 −O( √ C2η 3T 2) T−1∑ t=0 ‖∇f(wt)‖2\n− C2 ·O(η + η2T + η3T 2 + √ C2η 3T 2)− C1 ·O(Tη2ν2d+ T 2η3ν2 √ C2 + ηTν 2(ηd+ 1\nT ))\ntogether with T = 1 100η(1+ √ C2) and η ≤ 0.01 min{1, 1 C2 }, we have f(w0)− f(wT ) ≥ 0.7η T−1∑ t=0 ‖∇f(wt)‖2 − C2 ·O(η + η2T + η3T 2 + √ C2η 3T 2)− C1 ·O(Tην2η(d+ √ C2))\n= 0.7η T−1∑ t=0 ( ‖∇f(wt)‖2 − C2 ·O( 1 T + η + η2T + √ C2η 2T )− C1 ·O(ηTν2η(d+ √ C2)) ) ≥ 0.7η T−1∑ t=0 ( ‖∇f(wt)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) ." }, { "heading": "A.3 PROOF OF PROPOSITION 3.4", "text": "Proposition 3.4. Fix the dimension parameter d ≥ 1. Suppose ξ0, . . . , ξT−1 ∈ Rd are i.i.d. drawn from N (0, I), and that ∆1, . . . ,∆T−1 are arbitrary vectors in Rd. Here, each vector ∆t with t = 1, . . . , T − 1 can depend on ξ0, . . . , ξt−1 but not on ξt, . . . , ξT−1. Suppose that these vectors satisfy ‖∆1 + · · ·+ ∆t‖2 ≤ T for every t = 1, . . . , T − 1. Then, with probability at least 1− p,∣∣∣∑T−1t=1 〈ξ0 + · · ·+ ξt−1,∆t〉∣∣∣ ≤ O(√dTT log(T/p)) . 
Proof of Proposition 3.4. Using the identity formula6∑T−1\nt=1 〈ξ0 + · · ·+ ξt−1,∆t〉 = (∑T−2 t=0 ξt )(∑T−1 t=1 ∆t ) − ∑T−2 t=1 〈ξt,∆1 + · · ·+ ∆t〉\nwe have∣∣∣∑T−1t=1 〈ξ0 + · · ·+ ξt−1,∆t〉∣∣∣ ≤ ∥∥∥∑T−2t=0 ξt∥∥∥ · ∥∥∥∑T−1t=1 ∆t∥∥∥+ ∣∣∣∑T−2t=1 〈ξt,∆1 + · · ·+ ∆t〉∣∣∣ . ≤ O( √ dTT log(T/p)) +\n∣∣∣∑T−2t=1 〈ξt,∆1 + · · ·+ ∆t〉∣∣∣ . where the last inequality uses ‖ξ0 + · · · + ξT−2‖ ≤ O( √ dT log(1/p)) with probability at least 1 − p/2. Furthermore, we note that ξt is independent of ξ0, . . . , ξt−1,∆1, . . . ,∆t and E[ξt] = 0. Therefore, letting St = 〈ξt,∆1 + · · ·+ ∆t〉, we have E[St|ξ0, . . . , ξt−1] = 0; furthermore, with probability at least 1− p/2, it satisfies |St| ≤ O(\n√ dT log(T/p)) for every t. Finally, by Azuma’s inequality, we have∣∣∣∑T−2t=1 〈ξt,∆1 + · · ·+ ∆t〉∣∣∣ ≤ O(√dTT log(T/p)) ." }, { "heading": "A.4 PROOF OF LEMMA 3.5", "text": "Proof of Lemma 3.5. Let us denote by rt = [ξat]1 2 = − [ξ b t]1 2 and we know rt ∼ N (0, ν 2 4 ). We can write wat+1 − wbt+1 = ηrte1 + wat − wbt − η(∇f(wat)−∇f(wbt ))− η(Ξat − Ξbt) Using the second-order smoothness, we have\n∇f(wat)−∇f(wbt ) = ∫ 1 τ=0 ∇2f ( wat + τ(w b t − wat) ) (wat − wbt )dτ\n= ∇2f(w0) · (wat − wbt ) + θt for some vector ‖θt‖ ≤ max{‖wa0 − wat‖, ‖wb0 − wbt‖} · ‖wat − wbt‖. Therefore, we have\nwat+1 − wbt+1 = ηrte1 + ( I− η∇2f(w0) ) (wat − wbt )− η(Ξat − Ξbt + θt)\nNow, giving ψ0 = ψ0 = 0, imagine two sequences • ψt+1 = ηrte1 + ( I− η∇2f(w0) ) ψt and\n• ψt+1 = ηrte1 + ( I− η∇2f(w0) ) ψt − η(Ξat − Ξbt + θt) = wat+1 − wbt+1\nWe will inductively prove ‖ψt − ψt‖ ≤ 12‖ψt‖. On one hand, it is easy to see that ψt is zero except in the first coordinate, in which it behaves as a Gaussian with zero mean and variance ∑t−1 k=0(1 + ηδ) 2k · η 2ν2 4 =\nΘ ( (1+ηδ)2t\nηδ · η2ν2\n) . By Gaussian tail bounds, we know that\n• with probability at least 0.99, it satisfies ‖ψt‖ ≤ O( √ ηC1ν(1+ηδ)\nt √ δ ) for every t\n• with probability at least 0.99, it satisfies ‖ψT ‖ ≥ 11000 ( √ ην(1+ηδ)T√ δ )\nIn the rest of the proof, we condition on this event happens. We prove towards contradiction by assuming ‖wat − wa0‖ ≤ R and ‖wbt − wb0‖ ≤ R for all t ∈ [T ]. We will inductively prove that ‖ψt − ψt‖ ≤ 12000 ( √ ην(1+ηδ)t√ δ ). We calculate the difference\nψt − ψt = η t−1∑ i=0 ( I− η∇2f(w0) )t−1−i (Ξai − Ξbi + θi)\nLet g = ψt−ψt‖ψt−ψt‖ , then we can inner-product the above equation by vector g, which gives\n‖ψt − ψt‖ = η t−1∑ i=0 〈 Ξai − Ξbi + θi , ( I− η∇2f(w0) )t−1−i g 〉\n6We thank an anonymous reviewer on openreview who pointed out this simpler proof to us.\n¬ ≤ η t−1∑ i=0 (〈 Ξai − Ξbi , ( I− η∇2f(w0) )t−1−i g 〉 +R ·O( √ ηC1ν(1 + ηδ) i √ δ ) · (1 + ηδ)t−1−i )\n≤ η t−1∑ i=0 〈 Ξai − Ξbi , ( I− η∇2f(w0) )t−1−i g 〉 +O(RηT √ ηC1√ δ ν(1 + ηδ)t)\nwhere the inequality ¬ uses ‖θi‖ ≤ R · ‖ψi‖ ≤ R · (‖ψi‖ + ‖ψi − ψi‖), ∥∥(I − η∇2f(w0))t−1−ig∥∥ ≤\n(1 + ηδ)t−1−i, and the inductive assumption. Let us call M = ( I− η∇2f(w0) ) , and focus on∣∣∣∣∣ t−1∑ i=0 〈 Ξai , ( I− η∇2f(w0) )t−1−i g 〉∣∣∣∣∣\n= ∣∣∣∣∣〈Ξa0 + · · ·+ Ξat−1 , g〉+ t−2∑ i=0 〈 Ξa0 + · · ·+ Ξai ,M t−1−ig −M t−2−ig 〉∣∣∣∣∣ ≤ ‖Ξa0 + · · ·+ Ξat−1‖ · ‖g‖+\nt−2∑ i=0 ‖Ξa0 + · · ·+ Ξai‖ · ‖M t−1−ig −M t−2−ig‖\n≤ O( √ TC2 ( ‖g‖+\nt−2∑ i=0 ‖M t−1−ig −M t−2−ig‖\n) (using Lemma 3.2)\n≤ O( √ TC2 · (1 + ηδ)t−1)\nTogether, we have\n‖ψt − ψt‖ ≤ O(η √ TC2) · (1 + ηδ)t +O(RηT √ ηC1√ δ ν(1 + ηδ)t)\nUnder our assumption, we have ‖ψt−ψt‖ < 12000 ( √ ην(1+ηδ)t√ δ ) and therefore ‖ψT ‖ ≥ ‖ψT ‖−‖ψT −ψT ‖ ≥\n1 2000\n( √ ην(1+ηδ)t√\nδ ). Thus, within T iterations, we have ‖ψt‖ > R and this gives a contradiction." 
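Proposition 3.4 can also be sanity-checked numerically. In the self-contained simulation below (ours, for illustration), the adversary greedily aligns each ∆t with the Gaussian prefix sum ξ0 + · · · + ξt−1 (the most damaging direction), while approximately respecting the prefix constraint ‖∆1 + · · · + ∆t‖2 ≤ T_bar; the resulting sum stays on the O(√(d T T_bar) log T) scale, whereas the same greedy adversary without the constraint grows on the order of T^{3/2}.

import numpy as np

rng = np.random.default_rng(1)
d, T = 50, 4000
budget = np.sqrt(T)                     # enforce ||Delta_1 + ... + Delta_t|| <= sqrt(T_bar), with T_bar = T

xi = rng.standard_normal((T, d))
prefix = np.cumsum(xi, axis=0)          # prefix[t] = xi_0 + ... + xi_t

total, free_total, dsum = 0.0, 0.0, np.zeros(d)
for t in range(1, T):
    g = prefix[t - 1]                   # depends only on xi_0, ..., xi_{t-1}, as the proposition allows
    step = g / np.linalg.norm(g)        # greedy adversary: align Delta_t with the prefix sum
    free_total += g @ step              # what the adversary would gain with no constraint
    if np.linalg.norm(dsum + step) > budget:
        step = -step                    # flip to (approximately) respect the prefix constraint
    dsum += step
    total += g @ step

print(f"constrained adversary:   {total:.0f}")
print(f"unconstrained adversary: {free_total:.0f}")
print(f"bound scale sqrt(d*T*T_bar)*log T: {np.sqrt(d) * T * np.log(T):.0f}")

The sign-flip rule is only an approximate projection onto the constraint set, but it already shows the qualitative gap between the constrained and unconstrained adversaries that the proof formalizes.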
}, { "heading": "B FINAL: DOUBLE SAFE GUARD", "text": "We now come to our final Algorithm 3 which is our perturbed SGD algorithm with two safeguards. The two safeguard algorithm naturally divides itself into epochs, each consisting of T1 iterations. We will demonstrate that within most epochs, we make good progress. Thus, consider some iterate xmT1 , for some m < T/T1. Our goal will be to argue that we make good function value progress by iterate x(m+1)T1 , and that we do not settle into any saddle points. To slightly simplify notation, let w0 = xmT1 , and let the sequence of iterates be w0, . . . , wT1−1, so that wT1−1 = x(m+1)T1−1. For completeness’ sake we rewrite this as Algorithm 1.\nAlgorithm 3 Perturbed SGD with double safe guard (for analysis purpose)\nInput: w0 ∈ Rd, set good0 ⊇ good, rate η > 0, lengths T1 ≥ T0 ≥ 1, threshold T1 > T0 > 0; 1: for t← 0 to T1 − 1 do 2: last← max{t0 ∈ [t] : t0 is a multiple of T0} 3: for each i ∈ goodt do 4: receive∇t,i ∈ Rd from machine i; 5: Ai ← ∑t k=0 ∇k,i |goodk| and Bi ← ∑t k=last ∇k,i |goodk| ;\n6: Amed ← Ai where i ∈ goodt is any machine s.t. ∣∣{j ∈ goodt : ‖Aj −Ai‖ ≤ T1}∣∣ > m/2.\n7: Bmed ← Bi where i ∈ goodt is any machine s.t. ∣∣{j ∈ goodt : ‖Bj −Bi‖ ≤ T0}∣∣ > m/2.\n8: goodt+1 ← { i ∈ goodt : ‖Ai −Amed‖ ≤ 2T1 ∧ ‖Bi −Bmed‖ ≤ 2T0 } ;\n9: wt+1 = wt − η ( ξt +\n1 |goodt| ∑ i∈goodt ∇t,i ) ;\nOur main result is the following theorem.\nTheorem B.1. Let C3 = α2 + 1m . Suppose we pick parameters p, δ ∈ (0, 1), η ≤ Õ( δ3 C3 ), ν2 = Θ̃(C3), T0 = Θ̃ ( 1 η ) , T1 = Θ̃ ( 1 ηδ ) ≥ T0, T1 = Θ̃( √ T1), and T0 = Θ̃( √ T0). Then, starting from w0,\n(a) with probability at least 1− p we have\nf(w0)− f(wT1) ≥ 0.7η T1−1∑ t=0 ( ‖∇f(wt)‖2 − Õ(ηC3d) ) .\n(b) As long as ‖wt − w0‖ ≥ R for some t ∈ {1, 2, . . . , T1} and for R = Θ̃(δ) ≤ δ2 , then with probability at least 1− p we have then\nf(w0)− f(wT1) ≥ 0.5η T1−1∑ t=0 ( −Õ(ηC3d) ) + Ω̃(δ3)\n(c) if λmin(∇2f(w0)) ≤ −δ, we also have with probability at least 0.45,\nf(w0)− f(wT1) ≥ 0.5η T1−1∑ t=0 ( −Õ(ηC3d) ) + Ω̃(δ3)" }, { "heading": "B.1 WHY THEOREM B.1 IMPLIES THEOREM 2.3", "text": "Using the parameter choice η = Θ̃( ε 2\nC3d ) from Theorem 2.3, we know Õ(ηC3d) ≤ 0.1ε2. We claim two\nthings:\n• For at least 90% of the epochs, they must satisfy (denoting by w0 and wT1 the beginning and ending points of this epoch)\nf(w0)− f(wT1) ≤ 20 f(x0)−min f(x)\nT/T1 ≤ ε1.5\nThe last inequality uses our choice of T and δ = Θ̃( √ ε).\nThe reason for this is by way of contradiction. Suppose for at least 10% of the epochs it satisfies f(w0) − f(wT1) > 20 f(x0)−min f(x) T/T1\n, then, for the remainder of the epochs, they must at least satisfy f(w0) − f(wT1) ≥ −0.7ηT1 · 0.1ε2. Summing over all the epochs, we shall obtain f(x0) − f(xT ) > f(x0)−min f(x) but this gives a contradiction.\n• For at least 40% of the epochs, they must satisfy the three properties from Theorem B.1.\nIn particular, for at least 30% of the epochs, they must satisfy both. Since ε1.5 is so small that\nε1.5 ≥ f(w0)− f(wT1) ≥ 0.5η T1−1∑ t=0 ( −Õ(ηC3d) ) + Ω̃(δ3) ≥ Ω̃(δ3)− 0.05ηT1ε2\nwould give a contradiction (for instance, one can choose δ to be slightly larger than √ ε by some log factors), this means, for those 30% of the epochs, they must satisfy:\n• ε1.5 ≥ 0.7η ∑T1−1 t=0 ( ‖∇f(wt)‖2 − 0.1ε2 ) , • ‖wt − w0‖ ≤ δ2 for every t = 1, 2, . . . , T1, and • ∇2f(w0) −δI.\nThe latter two properties together implies ∇2f(wt) − δ2I for every t = 1, 2, . . . , T1 (by the second-order smoothness). 
The first property implies for at least 90% of the iterations t in this epoch, they must satisfy ‖∇f(x)‖ ≤ ε. This finishes the proof of Theorem 2.3." }, { "heading": "B.2 PROOF OF THEOREM B.1", "text": "We first have the following lemma Lemma B.2 (double safe guard). In Algorithm 3, suppose T1 = 8 √ T1 log(16mT1/p) and T0 =\n8 √ T0 log(16mT1/p). Then, with probability at least 1− p/2, for every t = 0, . . . , T1 − 1,\n• goodt ⊇ good.\n• ‖σt‖2 ≤ O( log(T1/p)m ), ‖∆t‖ 2 ≤ α2, ‖ξt‖2 ≤ O(ν2d log(T1/p)),\n• ‖σ0 + · · ·+ σt−1‖2 ≤ O(T1 log(T1/p)m ), ‖σlast + · · ·+ σt−1‖ 2 ≤ O(T0 log(T1/p) m ) • ‖∆0 + · · ·+ ∆t−1‖2 ≤ O(α2T1 log(mT1/p)) and ‖∆last + · · ·+ ∆t−1‖2 ≤ O(α2T0 log(mT1/p)) • ‖ξ0 + · · ·+ ξt−1‖2 ≤ O(ν2dT1 log(T1/p)) and ‖ξlast + · · ·+ ξt−1‖2 ≤ O(ν2dT0 log(T1/p)).\nWe call this probabilistic event EventdoubleT1,T0(w0) and Pr[Event double T1,T0(w0)] ≥ 1− p/2.\nThe proof is a direct corollary of Lemma 3.2, by combining events EventsingleT1 (w0), Event single T0 (w0),\nEventsingleT0 (wT0), Event single T0 (w2T0) and so on. The next lemma is a simple corollary by repeatedly applying Lemma 3.3. It proves Theorem B.1a.\nLemma B.3 (modified from Lemma 3.3). Denote by C1 = log(T1/p) and C2 = α2 log mT1p + log(T1/p) m . Suppose η ≤ 0.01 min{1, 1 C2 }, T0 = 1100η(1+√C2) and T1 ≥ T0. We start from w0 and apply Algorithm 3. Under event EventdoubleT1,T0(w0), it satisfies\nf(w0)− f(wT1) ≥ 0.7η T1−1∑ t=0 ( ‖∇f(wt)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) The next lemma can be easily derived from Lemma 3.5.\nLemma B.4 (modified from Lemma 3.5). Suppose\nR = Θ( δ√\nC1 log(δ3/ηC2) ) and ν2 = Θ(C2 log\nδ3\nηC2 )\nSuppose η ≤ 0.01 min{1, δ 3\nC2 }, T0 = 1100η(1+√C2) and T1 = Θ( 1 ηδ\nlog δ 3\nηC2 ) ≥ T0. Let w0 ∈ Rd be any\npoint in the space and suppose λmin(∇2f(w0)) ≤ −δ for some δ ≥ 0. Given two coupled sequences defined as before, under events EventdoubleT1,T0(w a 0) and Event double T1,T0(w b 0), we have with probability at least 0.98\nmax { f(wa0)− f(waT1), f(w b 0)− f(wbT1) } ≥ 0.5η\nT1−1∑ t=0 ( −η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) + Ω(\nδ3\nC1 log 3 δ3\nηC2\n)\nLemma B.4 directly proves the second half of Theorem B.1c, because given two coupled sequences with the same marginal distribution, we have\nPr[f(wa0)− f(waT1) ≥ X] ≥ 1\n2 Pr[max\n{ f(wa0)− f(waT1), f(w b 0)− f(wbT1) } ≥ X]\nProof of Lemma B.4. Our choice on r and R satisfy the requirement of Lemma 3.5. Suppose without loss of generality that the wat sequence leaves w0 by more than R. Let T a1 be the first iteration t ≤ T1 in which ‖wat − wa0‖ ≥ R.\n‖waT a1 − w a 0‖2 = η2‖∇f(wa0) + · · ·+∇f(waT a1−1) + Ξ a 0 + · · ·+ ΞaT a1−1 + ξ a 0 + · · ·+ ξaT a1−1‖ 2\n≤ O(η2T1) T a1−1∑ t=0 ‖∇f(wat)‖2 +O(C2η2T1) +O(C1η2ν2T1d)\nCombining this with Lemma B.3, we have f(wa0)− f(waT a1 ) ≥ 0.5η T a1−1∑ t=0 ( ‖∇f(wat)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) + ‖waT a1 − w a 0‖2 100ηT1\n≥ 0.5η T a1−1∑ t=0 ( ‖∇f(wat)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) + R2 100ηT1\nCombining this with Lemma B.3 again but for the remainder iterations, we have\nf(wa0)− f(waT1) ≥ 0.5η T1−1∑ t=0 ( ‖∇f(wat)‖2 − η ·O(C2 + (C2)1.5)−O(C1ν2η(d+ √ C2)) ) + R2 100ηT1\nIn fact, the above same proof of Lemma B.4 also implies Theorem B.1b. These together finish the proof of Theorem B.1." }, { "heading": "C MORE ON EXPERIMENTS", "text": "We conduct experiments on training a residual network ResNet-20 He et al. (2016) on the CIFAR-10/100 image classification tasks Krizhevsky et al. (2014)." 
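All of the defenses and attacks below are evaluated inside the same simple data-parallel loop, which the following PyTorch-style sketch reconstructs schematically. The sketch is ours and not the exact experimental code: model, criterion, batches, attack, and aggregate are placeholders for the ResNet-20/CIFAR pipeline, the Byzantine behaviour under test (e.g. the sign-flipping attack is attack = lambda g, i: -g), and the defense under test (the safeguard filter, Krum, a median rule, or Zeno, as defined next).

import torch

def flat_grad(model, loss):
    # Gradient of `loss` w.r.t. all model parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def train_step(model, criterion, batches, byzantine, attack, aggregate, lr):
    # One simulated master/worker round: batches[i] is worker i's minibatch,
    # `byzantine` the set of bad worker ids, `attack` maps an honest gradient
    # to a malicious one, and `aggregate` is the defense under test.
    reported = []
    for i, (x, y) in enumerate(batches):
        g = flat_grad(model, criterion(model(x), y))
        reported.append(attack(g, i) if i in byzantine else g)
    update = aggregate(reported)            # e.g. safeguarded mean / Krum / median
    offset = 0
    with torch.no_grad():
        for p in model.parameters():        # apply the flat SGD update
            n = p.numel()
            p -= lr * update[offset:offset + n].view_as(p)
            offset += n

The delayed-gradient attack fits the same interface by storing each honest gradient and replaying the one computed D iterations earlier.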
}, { "heading": "C.1 SETTING AND IMPLEMENTED METHODS", "text": "In all of our experiments, we use 10 workers and mini-batch size 10 per worker. Given any attacker and any defender algorithm, we run SGD three times for 140 epochs, each time with a different initial learning rate η ∈ {0.1, 0.2, 0.4}.7 We let the learning rate decrease by a factor of 10 on epochs 80 and 110, and present present the best testing accuracies in the three runs (each corresponding to a different initial learning rate).\nWe use standard data augmentation (random crops, random flips, and channel normalization).\nWe compare against Geometric Median Chen et al. (2017), Coordinate-wise Median Yin et al. (2018; 2019), Krum Blanchard et al. (2017), and Zeno Xie et al. (2018b) with attacks. We set α = 0.4 so there are 4 Byzantine workers. (This exceeds the fault-tolerance of Krum, and so we also tested Krum with only 3 Byzantine workers.) We formally define those prior works as follows.\nDefinition C.1 (GeoMed Chen et al. (2017)). The geometric median of {y1, ..., ym}, denoted by geo med{y1, ..., ym}, is\ngeo med{y1, ..., ym} := arg miny∈Rd ∑m i=1 ‖y − yi‖\nIn our experiments, we choose the geometric median from set {y1, ..., ym}. Definition C.2 (coordinate-wise median Yin et al. (2018; 2019)). Coordinate-wise median g = med{y1, ..., ym} is defined as a vector with its k-th coordinate being g[k] = med{y1[k], ..., ym[k]} for each k ∈ [d], where med is the usual (one-dimensional) median. Definition C.3 (Krum Blanchard et al. (2017)).\nKR{y1, ..., ym} := yk where k = arg min i∈[m] ∑ i→j ‖yi − yj‖2\nand i→ j is the indices of them−b−2 nearest neighbours of yi in {y1, ..., ym}\\{yi} by Euclidean distances.\nNote that Krum requires 2b + 2 < m. So, we have also repeated the experiments for Krum with 3 Byzantine workers (out of 10 workers) to be more fair.\nDefinition C.4 (Zeno Xie et al. (2018b)).\nZenob{y1, ..., ym} = 1\nm− b m−b∑ i=1 ỹ(i)\nwhere {ỹ(i) : i ∈ [m]} are the gradient estimators with the m − b highest “scores”, and the so-called stochastic descendant score for any gradient estimator u, based on the current parameter x, learning rate η, and a constant weight ρ > 0, is defined as:\nScoreη,ρ(u, x) = fr(x)− fr(x− ηu)− ρ‖u‖2\nfr(x)− fr(x− ηu) is the estimated descendant of the loss function and ρ‖u‖2 is the magnitude of the update.\nIn our experiments, we let fr(x) be the estimated objective over a mini-batch of size nr = 10 (so the time to perform this estimation is on the same magnitude as the gradient evaluation for each individual worker). We also chose ρ = 0.0005 (and this value does not affect our experimental results by much).\nSafeguard SGD. Our Algorithm 1 is stated in a way to make our theoretical proofs as clean as possible. Here, we discuss how we actually implement it in practice.\nFirst of all, as common in the literature, we omit the Gaussian noise ξt that is added for theoretical purpose, and instead rely on the natural noise in the training process to escape saddle points.\nAlso, we make universal choices for our safeguard window sizes (across all attackers): for our algorithm with a single safeguard we have used a universal window size T = 3 epochs, and for our algorithm with double safeguards we have used window sizes T0 = 1 epoch and T1 = 6 epochs. We also provide an automatic empirical process to select safeguard thresholds and eliminate bad workers.8 The process to determine Amed (and likewise for Bmed) is described as follows. 
In each iteration, for every worker i ∈ [m], we sort { ‖Ai − Aj‖ }_{j∈[m]} and pick the ⌈m/2 + 1⌉-th smallest entry, and let this number be the “score” for worker i. We select the worker with the smallest “score” as Amed and call its “score” S. Then, we use 1.5 max{S, 5} as the safeguard threshold for this iteration.
7Recall a typical suggested initial learning rate is 0.1 for training ResNet with SGD+momentum; since we are using SGD without momentum, the initial learning rate can be appropriately enlarged.
8In our first version of the paper, we pre-run the algorithm for 20 epochs to determine safeguard thresholds; in the newer version, we have avoided the pre-run.
Namely, we declare any worker j satisfying
This is still a strong attack since if one does not avoid any bad workers, the test accuracy will suffer from a significant drop. The results are in Figure 4.\nFrom the plots, one can see that again our single and double safe-guard algorithms both outperform prior works. They also successfully catch all the bad workers within 150 iterations. (For instance, at iteration 150 for CIFAR-10 training, the distance ‖Amed − Aj‖ for a good worker j ∈ good is at most 6.9, but for a bad\n9This process appears a little different from Algorithm 1, but a similar proof also holds for this new empirical process. The constant factor 1.5 requires no tuning, and the constant threshold 5 is chosen so that the stochastic gradient of batch size 500 at random initialization has Euclidean norm no more than 5.\nworker j 6∈ good it can be more than 11.) In contrast, prior work Zeno completely fails because locally at a training step, using merely nr = 10 samples to evaluate the objective, it is statistically not possible to even distinguish if the sign of the stochastic gradient is flipped or not. For prior works Krum and GeoMedian, although they appear to have some non-negligible performances, but they are actually no better than simply applying SGD with the naive mean of gradients from all the workers (including those from bad workers).10 Therefore, we conclude that prior works all fail to be Byzantine fault tolerant under this attack." }, { "heading": "C.2.3 DELAYED-GRADIENT ATTACK", "text": "Recall that, in a delayed-gradient attack, each Byzantine worker sends an old gradient to the master. In our experiments, the delay is of D = 1000 iterations (= 2 epochs). We believe this is not a very strong attack, because delayed gradients are not sufficiently malicious: they are still “correct” to certain extent albeit being delayed. The results are shown in Figure 5.\nFrom the plots, one can see that our single and double safe-guard algorithms again both match the ideal accuracies. All the prior works suffer from a significant performance loss under this attack.\nIt is worth noting that our single and double safe-guard algorithms do not catch any bad worker under this attack, so they simply use the “naive mean” of gradients from all the workers (including those delayed gradients from bad workers). However, there is no performance loss even if we use those delayed gradients. That is why we believe the delayed-gradient attack is not very strong, as the gradients are not sufficiently malicious.\nPrior work Zeno suffers from some performance loss, because it only uses 6 workers out of 10, in which statistically only 6 × 0.6 ≈ 3 ∼ 4 gradients are correct.11 Other prior works suffer from performance loss,\n10We did not include this “naive mean” algorithm in the plots for cleanness, but under the sign-flipping attack, it gives 81.4% test accuracy on CIFAR-10 and 38.3% on CIFAR-100. (This should not be surprising, since using (1 − α)m = 6 positive gradients plus αm = 4 negative gradients still gives non-negligible information about the true gradient.)\n11In fact, we observed Zeno slightly favors delayed gradients, where each delay gradient is chosen with probability 63%, comparing to true stochastic gradients each chosen with probability 58%.\nbecause they only pick one single stochastic gradient from the 10 workers, and it is sometimes even from the bad worker." 
}, { "heading": "C.2.4 LABEL-FLIPPING ATTACK", "text": "Recall that, in the label-flipping attack, each Byzantine worker computes its gradient based on the cross-entropy loss with flipped labels: for CIFAR-10, label ` ∈ {0, ..., 9} is flipped to 9 − `, and for CIFAR-100, label ` is flipped to 99− `. The results are shown in Figure 6.\nFrom the plots, one can see that our single and double safe-guard algorithms even outperform the “ideal accuracies.” (92.4% accuracy vs “ideal accuracy” 91.7% under CIFAR-10; 69.4% accuracy vs “ideal accuracy” 68.0 under CIFAR-100.) In addition, we have found out that the safeguard algorithms did not catch any bad worker. This should not be surprising, since label-flipping (a.k.a. label smoothing) is known to be a regularization technique to actually improve test accuracy, as opposed to hurt performance.\nZeno also performs well under this attack (but it does not outperform the ideal accuracy). We have investigated into Zeno, and found out that it cannot distinguish good workers from bad workers under label-flipping attack; and therefore Zeno effectively always runs under 6 random workers as opposed to using the full power of the m = 10 workers (recall Zeno picks 6 workers with the topmost scores, see Definition C.4). This explains its (minor) under-performance comparing to safeguard.\nOther prior works perform significantly worse, and this should be alarming since label-flipping is one type of smoothing technique to improve test accuracy, as opposed to an actual “attack” to hurt performance." }, { "heading": "C.2.5 SAFEGUARD ATTACKS", "text": "Finally, in the safeguard attack that we design, Byzantine workers send negative but re-scaled gradient to the master. We choose the re-scale factor so that it hardly triggers the safe-guard conditions at the master. From our experiment, choosing the re-scale factor as 0.6 in all the cases do not trigger the safe-guard conditions, while choosing a re-scale factor as 0.7 enables the algorithm to catch Byzantine workers occasionally. Our results are shown in Figure 7 (for re-scale factor 0.6) and Figure 8 (for re-scale factor 0.7).\nRe-scale factor 0.6. In Figure 7, the performance of our (single and double) safeguard algorithms indeed get hurt a bit. Recall in Figure 7 the re-scale factor 0.6 is chosen to maximally impact our algorithm. The test accuracy drops from 91.7% to 89.3% under CIFAR-10; and drops from 68.0% to 60.0% under CIFAR-100 (for both single and double safeguards). In these cases, we confirm that both versions of the safeguard algorithms did not catch any bad worker. However, this still significantly outperforms all prior works.\nRe-scale factor 0.7. In Figure 8, we present the scenario when the re-scale factor is 0.7, so that the safeguard algorithms can occasionally catch some bad workers (depending on the randomness and learning rate). We confirm that in the three runs of single safeguard, it catches 1, 2, 3 bad workers for CIFAR-10, and 1, 0, 0 bad workers for CIFAR-100 respectively; in the three runs of double safeguard, it catches 1, 2, 4 bad workers for CIFAR-10, and 2, 2, 2 bad workers for CIFAR-100 respectively.\nSince there is a significant performance gain when our safeguard algorithms catch bad workers, this explains why safeguard algorithms in Figure 8 outperform their counterparts in Figure 7 with rescale factor 0.6. At the same time, we notice that the double safeguard algorithm has the ability to catch bad workers more easily. 
In contrast, all other prior algorithms perform extremely badly under this attack. To some extent, the safeguard attack is even stronger than the previously proposed variance attack, since it can drag the 100-class test accuracy on CIFAR-100 for all prior defense algorithms down to nearly 1%, while the variance attack can only drag them down to around 10%." }, { "heading": "C.3 FULL COMPARISON TABLE", "text": "We also include the full test accuracy comparison table in Table 1." }, { "heading": "C.4 ATTACK AGAINST THE CONVEX ALGORITHM OF ALISTARH ET AL. (2018)", "text": "We now briefly describe an attack against this algorithm. The attack specifically leverages the fact that the algorithm does not use sliding windows.\nOne can first run vanilla SGD to compute “the maximum deviation per good worker” for the accumulation vector ∑_{t=0}^{T} ∇_t used by the algorithm. This maximum deviation is therefore a lower bound for the threshold used in their algorithm. Next, we design an attacker who evenly distributes this total allowed deviation over, e.g., 5 consecutive epochs, and behaves honestly for the remaining epochs. Such an attacker cannot be identified by this algorithm, because its total deviation across all the iterations is identical to that of a good worker. However, this causes the algorithm to diverge.\nSpecifically, suppose 4 Byzantine workers all maliciously report their stochastic gradients multiplied by the scalar −5, and the remaining 6 good workers report their true stochastic gradients. One can verify numerically that this attacker can run for 5 consecutive epochs (say, epochs a, a + 1, a + 2, a + 3, a + 4) without being caught by the algorithm. Now,\n• if a ≤ 75, within just 1 epoch of attack, the neural net weights diverge (value NaN).\n• if 80 ≤ a ≤ 115, this attack is applied after the first learning rate decay. Within just 1 epoch of the attack, the objective explodes and accuracy becomes 10% (random), and within 3 epochs the algorithm diverges completely.\n• if 120 ≤ a ≤ 155, this attack is applied after the second learning rate decay. Within just 2 epochs of attack, the accuracy drops to 11%. Later, the accuracy never recovers above 40%." } ]
2021
BYZANTINE-RESILIENT NON-CONVEX STOCHASTIC GRADIENT DESCENT
SP:14e55fd6a62febf4c0884964989ac6eb4ae70f63
[ "This work builds on the vulnerability of VAEs to adversarial attacks to propose investigate how training with alternative losses may alleviate this problem, with a specific focus on disentanglement. In particular it is found that disentanglement constraints may improve the robustness to adversarial attacks, to the detriment of the performance. In order to get the best of both, the author(s) propose a more flexible (hierarchical) model, trained with the beta-TC penalization on the ELBO. The algorithm, named Seatbelt-VAE, shows improvement over the beta-TC VAE in terms of reconstruction, as well as in term of adversarial robustness for several datasets (Chairs, 3D Faces, dSprites). " ]
Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs. Namely, we first demonstrate that methods proposed to obtain disentangled latent representations produce VAEs that are more robust to these attacks. However, this robustness comes at the cost of reducing the quality of the reconstructions. We ameliorate this by applying disentangling methods to hierarchical VAEs. The resulting models produce high-fidelity autoencoders that are also adversarially robust. We confirm their capabilities on several different datasets and with current state-of-the-art VAE adversarial attacks, and also show that they increase the robustness of downstream tasks to attack.
[ { "affiliations": [], "name": "ADVERSARIAL ATTACK" }, { "affiliations": [], "name": "Matthew Willetts" }, { "affiliations": [], "name": "Alexander Camuto" }, { "affiliations": [], "name": "Tom Rainforth" }, { "affiliations": [], "name": "Stephen Roberts" }, { "affiliations": [], "name": "Chris Holmes" } ]
[ { "authors": [ "Naveed Akhtar", "Ajmal Mian" ], "title": "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep Variational Information Bottleneck", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alexander A Alemi", "Ben Poole", "Ian Fischer", "Joshua V Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a Broken ELBO", "venue": null, "year": 2018 }, { "authors": [ "Mathieu Aubry", "Daniel Maturana", "Alexei A Efros", "Bryan C Russell", "Josef Sivic" ], "title": "Seeing 3D chairs: Exemplar part-based 2D-3D alignment using a large dataset of CAD models", "venue": "In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Ben Barrett", "Alexander Camuto", "Matthew Willetts", "Tom Rainforth" ], "title": "Certifiably Robust Variational Autoencoders", "venue": "arXiv preprint,", "year": 2021 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximisation approach to blind separation and blind deconvolution", "venue": "Neural Computation,", "year": 1995 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance Weighted Autoencoders", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner", "Deepmind London" ], "title": "Understanding disentangling in beta-VAE", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Richard H Byrd", "Peihuang Lu", "Jorge Nocedal", "Ciyou Zhu" ], "title": "A Limited Memory Algorithm for Bound Constrained Optimization", "venue": "SIAM J. Sci. 
Comput., 16(5):1190–1208,", "year": 1995 }, { "authors": [ "Alexander Camuto", "Matthew Willetts", "Stephen Roberts", "Chris Holmes", "Tom Rainforth" ], "title": "Towards a Theoretical Understanding of the Robustness of Variational Autoencoders", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Taylan Cemgil", "Sumedh Ghaisas", "Krishnamurthy Dvijotham", "Pushmeet Kohli" ], "title": "Adversarially robust representations with smooth encoders", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Ricky Chen", "Xuechen Li", "Roger Grosse", "David Duvenaud" ], "title": "Isolating Sources of Disentanglement in Variational Autoencoders", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Rewon Child" ], "title": "Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images", "venue": "In ICLR,", "year": 2021 }, { "authors": [ "Babak Esmaeili", "Hao Wu", "Sarthak Jain", "Alican Bozkurt", "N Siddharth", "Brooks Paige", "Dana H Brooks", "Jennifer Dy", "Jan-Willem van de Meent" ], "title": "Structured Disentangled Representations", "venue": null, "year": 2019 }, { "authors": [ "Partha Ghosh", "Arpan Losalka", "Michael J Black" ], "title": "Resisting Adversarial Attacks Using Gaussian Mixture Variational Autoencoders", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Ryan P Adams", "Ian Goodfellow", "David Andersen", "George E Dahl" ], "title": "Motivating the Rules of the Game for Adversarial Example Research", "venue": null, "year": 2018 }, { "authors": [ "George Gondim-Ribeiro", "Pedro Tabacof", "Eduardo Valle" ], "title": "Adversarial Attacks on Variational Autoencoders", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner" ], "title": "DARLA: Improving Zero-Shot Transfer in Reinforcement Learning", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "ELBO surgery: yet another way to carve up the variational evidence lower bound", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Ilyes Khemakhem", "Diederik P Kingma", "Ricardo Pio Monti", "Aapo Hyvärinen" ], "title": "Variational Autoencoders and Nonlinear ICA: A Unifying Framework", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by Factorising", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Lei Ba" ], "title": "Adam: A Method for Stochastic Optimisation", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding Variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved Variational Inference with Inverse Autoregressive Flow", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "J Kos", "I Fischer", "D Song" ], "title": "Adversarial Examples for Generative Models", "venue": "In IEEE Security and Privacy Workshops, pp. 
36–42,", "year": 2018 }, { "authors": [ "Tejas D Kulkarni", "Will Whitney", "Pushmeet Kohli", "Joshua B Tenenbaum" ], "title": "Deep Convolutional Inverse Graphics Network", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Abhishek Kumar", "Ben Poole" ], "title": "On Implicit Regularization in β-VAE", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar Variational Autoencoder", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep Learning Face Attributes in the Wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucie", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "venue": null, "year": 2019 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Alireza Makhzani", "Jonathon Shlens", "Navdeep Jaitly", "Ian Goodfellow", "Brendan Frey" ], "title": "Adversarial Autoencoders", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling Disentanglement in Variational Autoencoders", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Pascal Paysan", "Reinhard Knothe", "Brian Amberg", "Sami Romdhani", "Thomas Vetter" ], "title": "A 3D face model for pose and illumination invariant face recognition", "venue": "IEEE International Conference on Advanced Video and Signal Based Surveillance,", "year": 2009 }, { "authors": [ "Tom Rainforth", "Robert Cornish", "Hongseok Yang", "Andrew Warrington", "Frank Wood" ], "title": "On nesting Monte Carlo estimators", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Michal Rolinek", "Dominik Zietlow", "Georg Martius" ], "title": "Variational autoencoders pursue pca directions (by accident)", "venue": "In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Toward the First Adversarially Robust Neural Network Model on MNIST", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "N Siddharth", "Brooks Paige", "Jan Willem Van De Meent", "Alban Desmaison", "Noah D Goodman", "Pushmeet Kohli", "Frank Wood", "Philip H.S. 
Torr" ], "title": "Learning disentangled representations with semi-supervised deep generative models", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder Variational Autoencoders", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Pedro Tabacof", "Julia Tavares", "Eduardo Valle" ], "title": "Adversarial Images for Variational Autoencoders", "venue": "In NeurIPS Workshop on Adversarial Training,", "year": 2016 }, { "authors": [ "Lucas Theis", "Wenzhe Shi", "Andrew Cunningham", "Ferenc Huszár" ], "title": "Lossy Image Compression with Compressive Autoencoders", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "James Townsend", "Tom Bird", "David Barber" ], "title": "Practical Lossless Compression with Latent Variables using Bits Back Coding", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Arash Vahdat", "Jan Kautz" ], "title": "NVAE: A Deep Hierarchical Variational Autoencoder", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Satosi Watanabe" ], "title": "Information Theoretical Analysis of Multivariate Correlation", "venue": "IBM Journal of Research and Development,", "year": 1960 }, { "authors": [ "Matthew Willetts", "Alexander Camuto", "Stephen Roberts", "Chris Holmes" ], "title": "Disentangling Improves VAEs", "venue": "Robustness to Adversarial Attacks. arXiv preprint,", "year": 2019 }, { "authors": [ "Weidi Xu", "Haoze Sun", "Chao Deng", "Ying Tan" ], "title": "Variational Autoencoder for Semi-supervised Text Classification", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning Hierarchical Features from Generative Models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "qφ(z|x). A" ], "title": "DISENTANGLING VAES When learning disentangled representations (Bengio et al., 2013) in a VAE, one attempts to establish a one-to-one correspondence between dimensions of the learnt latent space and some interpretable aspect of the data (Higgins et al., 2017a", "venue": "Burgess et al.,", "year": 2018 }, { "authors": [ "Esmaeili" ], "title": "Other methods seek to offset this degradation in model quality by decomposing the ELBO and more precisely targeting the regularisation when obtaining disentangled representations. We can more insight into VAEs by defining the evidence lower bound not per data-point, but instead over the dataset D of size N , D = {xn}, so we have L(θ, φ,D) (Hoffman & Johnson, 2016", "venue": "Makhzani et al.,", "year": 2019 }, { "authors": [ "Definition A" ], "title": "The total correlation (TC) is a generalisation of mutual information to multiple variables (Watanabe, 1960) and is often used as the objective Independent Component Analysis", "venue": null, "year": 1995 }, { "authors": [ "minima. (Zhao" ], "title": "2017) gives a proof of this separation for the case where the model is perfectly", "venue": null, "year": 2017 }, { "authors": [ "M . 
Chen" ], "title": "2018) introduce r(BM |x), the probability of a sampled minibatch given that one member is x and the remaining M − 1 points are sampled iid from q(x)", "venue": null, "year": 2018 }, { "authors": [ "Rolinek" ], "title": "hyperparameter tuning required for disentangled representations Locatello et al", "venue": null, "year": 2019 }, { "authors": [ "We used the same convolutional network architectures as Chen" ], "title": "For the encoders of all our models (q(·|x)) we used purely convolutional networks with 5 convolutional layers", "venue": "When training", "year": 2018 }, { "authors": [ "Kulkarni" ], "title": "To train the model we used ADAM Kingma & Lei Ba (2015) with default parameters, a cosine decaying learning rate of 0.001, and a batch size of 1024. All data was pre-processed to fall on the interval -1 to 1. CelebA and Chairs were both downsampled and cropped as in Chen et al", "venue": "(Kingma et al.,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Variational autoencoders (VAEs) are a powerful approach to learning deep generative models and probabilistic autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). However, previous work has shown that they are vulnerable to adversarial attacks (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018; Kos et al., 2018): an adversary attempts to fool the VAE to produce reconstructions similar to a chosen target by adding distortions to the original input, as shown in Fig 1. This kind of attack can be harmful when the encoder’s output is used downstream, as in Xu et al. (2017); Kusner et al. (2017); Theis et al. (2017); Townsend et al. (2019); Ha & Schmidhuber (2018); Higgins et al. (2017b). As VAEs are often themselves used to protect classifiers from adversarial attack (Schott et al., 2019; Ghosh et al., 2019), ensuring VAEs are robust to adversarial attack is an important endeavour.\nDespite these vulnerabilities, little progress has been made in the literature on how to defend VAEs from such attacks. The aim of this paper is to investigate and introduce possible strategies for defence. We seek to defend VAEs in a manner that maintains reconstruction performance. Further, we are also interested in whether methods for defence increase the robustness of downstream tasks using VAEs.\nOur first contribution is to show that regularising the variational objective during training can lead to more robust VAEs. Specifically, we leverage ideas from the disentanglement literature (Mathieu et al., 2019) to improve VAEs’ robustness by learning smoother, more stochastic representations that are less vulnerable to attack. In particular, we show that the total correlation (TC) term used to encourage independence between latents of the learned representations (Kim & Mnih, 2018; Chen et al., 2018; Esmaeili et al., 2019) also serves as an effective regulariser for learning robust VAEs.\nThough a clear improvement over the standard VAE, a severe drawback of this approach is that the gains in robustness are coupled with drops in the reconstruction performance, due to the increased regularisation. Furthermore, we find that the achievable robustness with this approach can be limited (see Fig 1) and thus potentially insufficient for particularly sensitive tasks. To address this, we apply TC–regularisation to hierarchical VAEs. By using a richer latent space representation than a standard VAE, the resulting models are not only more robust still to adversarial attacks than single-layer models with TC regularisation, but can also provide reconstructions which are comparable to, and often even better than, the standard (unregularised, single-layer) VAE.\n∗Equal Contribution. Contact at: mwilletts@turing.ac.uk; acamuto@turing.ac.uk\nPublished as a conference paper at ICLR 2021\nAdversary Target 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109 Improving VAEs’ Robustness to Adversarial Attack correlation penalty as being regularised VAEs. Leveraging the insight that regularised VAEs are robust to adversarial attacks, we develop a class of hierarchical VAEs that are more resilient still. Our model, Seatbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relative to single layer regularised VAEs. 
We show that regularised hierarchical VAEs, without our proposed extensions, are not robust to adversarial attack. See Figure 1 for a demonstration of how adversarial attacks are highly effective on vanilla VAEs, less effective on regularised VAEs and close to ineffective on our proposed Seatbelt-VAE. Thus our key contributions are: • A demonstration that regularised VAEs, trained with an up-weighted total correlation, are significantly more robust to adversarial attacks than vanilla VAEs. • We introduce a hierarchical VAE, the Seatbelt-VAE, that provides further robustness to adversarial attack. • New connections between robustness, disentangling and adversarial attack, linked through regularisation. 2. Background 2.1. Variational Autoencoders Variational autoencoders (VAEs) are a deep extension of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezende et al., 2014). They have a joint distribution over data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) where p(z) = N (0, I) and p✓(x|z) is an appropriate distribution given the form of the data, the parameters of which are represented by deep nets with parameters ✓. As exact inference is intractable for this model, in a VAE we perform amortised stochastic variational inference. By introducing a variational posterior distribution over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perform gradient ascent on the evidence lower bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ and jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x). 2.2. Attacks on VAEs In an adversarial attack an agent is trying to manipulate the behaviour of some machine learning model towards a goal of their choosing. Commonly in deep learning this would be fooling a classifier to misclassify an image through adding a small perturbation (Akhtar & Mian, 2018; Gilmer et al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s (a) (b) (c)\nFigure 1. Latent-space adversarial attacks on CelebA for different\nmodels: a) Vanilla VAE b) -TCVAE c) our proposed Seatbelt-\nVAE. Clockwise within each plot we show the initial input, its\nreconstruction, the best adversarial input the adversary could pro-\nduce, the adversarial distortion that was added to make the adver-\nsarial input, the adversarial input’s reconstruction, and the target\nimage. We are trying to make the initial input (Hugh Jackman)\nlook like the target (Anna Wintour). You can see that the ad-\nversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. The -TCVAE adv. reconstruction does not look like Wintour, so the attack has not been successful, but it is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour. output. Attacks on VAEs have been proposed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given a distorted image as input. See Figure 1.a) for an example of a successful attack on a vanilla VAE. Here we are trying to turn Hugh Jackman (Original, top left) into Anna Wintour (Target, bottom left). 
We can see that, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial rec., bottom middle). The adversary has achieved their goal. The current most effective mode of attack on VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro\n055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109\nImproving VAEs’ Robustness to Adversarial Attack\ncorrelation penalty as being regularised VAEs.\nLeveraging the insight that regularised VAEs are robust to adversarial attacks, we develop a class of hierarchical VAEs that are more resilient still. Our model, Seatbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relative to single layer regularised VAEs. We show that regularised hierarchical VAEs, without our proposed extensions, are not robust to adversarial attack.\nSee Figure 1 for a demonstration of how adversarial attacks are highly effective on vanilla VAEs, less effective on regularised VAEs and close to ineffective on our proposed Seatbelt-VAE.\nThus our key contributions are:\n• A demonstration that regularised VAEs, trained with an up-weighted total correlation, are significantly more robust to adversarial attacks than vanilla VAEs. • We introduce a hierarchical VAE, the Seatbelt-VAE, that provides further robustness to adversarial attack. • New connections between ro stness, disentangling and adversarial attack, linked through regularisation.\n2. Background 2.1. Variational Autoencoders\nVariational autoencoders (VAEs) are a deep extension of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezende et al., 2014). They have a joint distribution over data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) where p(z) = N (0, I) and p✓(x|z) is an appropriate distribution given the form of the data, the parameters of which are represented by deep nets with parameters ✓. As exact inference is intractable for this model, in a VAE we perform amortised stochastic variational inference. By introducing a variational posterior distribution over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perform gradient ascent on the evidence lower bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ and jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an adversarial attack an agent is trying to manipulate the behaviour of some machine learning model towards a goal of their choosing. Commonly in deep learning this would be fooling a classifier to misclassify an image through adding a small perturbation (Akhtar & Mian, 2018; Gilmer et al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Latent-space adversarial attacks on CelebA for different models: a) Vanilla VAE b) -TCVAE c) our proposed SeatbeltVAE. 
Clockwise within each plot we show he initial input, its reconstruction, the best adversarial input the adversary could produce, the adversarial distorti n that was added to make the adversarial input, the adversarial input’s reconstruction, and the target image. We are trying to make the initial input (Hugh Jackman) look like the target (Anna Wintour). You can se that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. The -TCVAE adv. reconstruction does not look like Wintour, so the attack has not been successful, but it is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wint ur.\noutput.\nAttacks on VAEs have been proposed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given a distorted image as input.\nSee Figure 1.a) for an example of a successful attack on a vanilla VAE. Here we are trying to turn Hugh Jackman (Original, top left) into Anna Wintour (Target, bottom left). We can see that, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial rec., bottom middle). The adversary has achieved their goal.\nThe current most effective mode of attack on VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro\n055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109\nImprovi g VAEs’ Robustness t A i l Attack correlation penalty as being regularised VAEs.\nLeveraging the insight that regularised VAEs are robust to adversarial attacks, we develop a class of hierarchical VAEs that re more r silient still. Our model, Seatbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relative to single layer regula ised VAEs. We show that regularised hierarch cal VAEs, without our proposed extensions, are not robust to adversarial att .\nSee Figure 1 for d monstration of how adversarial attacks are highly effective on vanilla s, less effective on regularised VAEs a d cl se to ineff ctive on our proposed Seatbelt-VAE.\nThus our key contributions are:\n• A demonstration that regularised VAEs, trained with an up-weighted total correlation, are significa tly more r bust to adversarial attacks than vanilla VAEs. • We introduce a hierarchical VAE, the Seatbelt-VAE, that provides further robustness to adversarial attack. • New connections between robustness, disentangling and adversarial attack, linked through regularisation.\n2. Background 2.1. Variational Autoencoders\nVariational autoencoders (VAEs) are a deep extensi n of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezende et al., 2014). They have a joint distribution over data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) where p(z) = N (0, I) and p✓(x|z) is an appropriate distribution given the form of the data, the parameters of which are represented by deep nets with parameters ✓. As exact inference is intractable for this model, in a VAE we perform amortised stochastic variational inf rence. 
By introducing a variational posterior distribution over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perform gradient ascent on the evidence lower bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ and jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an adversarial attack an agent is trying to manipulate the behaviour of some machine learning model towards a goal of their choosing. Commonly in deep learning this would be fooling a classifier to misclassify an image through adding a small perturbation (Akhtar & Mian, 2018; Gilmer et al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Latent-space adversarial attacks on CelebA for different models: a) Vanilla VAE b) -TCVAE c) our proposed SeatbeltVAE. Clockwise within each plot we show the initial input, its reconstruction, the best adversarial input the adversary could produce, the adversarial distortion that was added to make the adversarial input, the adversarial input’s reconstruction, and the target image. We are trying to make the initial input (Hugh Jackman) look like the target (Anna Wintour). You can see that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. The -TCVAE adv. reconstruction does not look like Wintour, so the attack has not been successful, but t is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.\noutput.\nAttacks on VAEs have been proposed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given a distorted image as input.\nSee Figure 1.a) for an example of a successful attack on a vanill VAE. Here we are trying to turn Hugh Jackman (Original, top left) int Anna Wintour (Target, bottom left). We can see that, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial rec., bottom middle). The adversary has achieved their goal.\nThe current most effectiv mode of attack VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109\nImproving VAEs’ Robustness to Adversarial Attack correlation penalty as being regularised VAEs.\nLeveraging the i sight that regulari ed VAEs are robust to adversarial attacks, we dev lop a lass of hierarchical VAEs tha are more resili nt still. Our model, Seatbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relativ to sin le lay r regularised VAEs. 
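As a concrete illustration of this objective (our own sketch, not code from the paper), a single-sample Monte Carlo estimate of the ELBO with a reparameterised Gaussian posterior and a closed-form KL to the unit Gaussian prior might look as follows; decoder_loglik is a hypothetical placeholder for log pθ(x|z):

```python
import numpy as np

def gaussian_elbo(x, mu, log_sigma, decoder_loglik, rng=np.random):
    """One-sample ELBO for q(z|x) = N(mu, diag(sigma^2)) and p(z) = N(0, I)."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps                 # reparameterisation trick
    recon = decoder_loglik(x, z)         # Monte Carlo estimate of E_q[log p(x|z)]
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions
    kl = 0.5 * np.sum(mu ** 2 + sigma ** 2 - 1.0 - 2.0 * log_sigma)
    return recon - kl                    # L(x) <= log p(x)
```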
We how th t r gularised hierarchical VAEs, without our proposed exten ions, ar not robust to adversaria ttack.\nSee Figure 1 for a d monstration of how adversarial attacks ar hig ly effective on vanilla VAEs, less effective on regulari d VAEs and cl se to ineffective on our proposed Seatbelt-VAE.\nTh s ur key contributions are:\n• A demonst ation that regularised VAEs, trained with an up-weighted tot correlation, are significantly more robust to adversarial attacks than vanilla VAEs. • We introduce a hierarchical VAE, the Seatbelt-VAE, that provid further robustness to adversarial attack. • N w co nections between robustness, disentangling and adversarial attack, linked through regularisation.\n2. Backgr und 2.1. Variational Autoencoders\nVariational autoencoders (VAEs) are a deep extension of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezende t al., 2 1 ). They have a joint distribution over data x and latent variables z: p✓(x, = p✓(x|z)p(z) where p(z) = N (0, I) and ✓(x|z) is an appropriate distribution given the form of the data, the parameters of which are repres nted by deep n t with parameters ✓. As exac inference is intractable for this model, in a VAE we perf rm amortised stochastic variational inference. By n r ducing a va iational p sterior distrib tion over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perform gradi nt ascent on the evidence lower bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓ x z DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ a d jointly, using the reparameterisation trick o take gradients thr ugh Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an dversarial attack an agent is trying to manipulate the beh viour of some achine le ning model towards a goal of their choosing. Commonly in deep learni g this would be fo ling a classifier to misclassify an image thr ugh adding a small pe turbatio (Akhtar & Mian, 2018; Gilmer t al., 2018). Very small changes in input, of little importance o the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Latent-space adversarial attacks on CelebA for different m dels: a) Vanilla VAE b) -TCVAE c) our proposed SeatbeltVAE. Clockwise within each plot we show the initial input, its r construction, the best adversarial input the adversary could produce, the adversarial distorti n that was added to make the adversari l input, the adver arial input’s reconstruction, and the target image. W ar trying to make the initial input (Hugh Jackman) look like the ta get (Ann Wintour). You can see that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indica ing a successful attack. The -TCVAE adv. reconstruction does not l ok like Wintour, so the attack has not bee successful, but it is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently ard to a tack that the output under att still lo ks l ke Jackman, not Wintour.\noutput.\nAtt cks on VAEs have been pr posed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given d stor ed image as input.\nSee Figure 1.a) or an example of a successful attack on a vanill VAE. Here we ar trying to turn Hugh Jackman (Original, top l ft) i to Anna Wintour (Target, bottom left). 
We can see that, by adding a well-chosen distortion (Distortion, bott m right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial r c , bottom middle). The adversary has achieved their goal.\nThe urr nt most effective m de of attack on VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro\n5 56 57 58 59 0 1 2 3 4 5 66 67 68 69 0 1 2 3 4 5 76 77 78 79 0 1 2 3 4 5 86 87 88 89 0 1 2 3 4 5 096 097 098 099\n0 1 2 3 4 5 106 107 108 109\nImproving VAEs’ Robustness to Adve sarial Attack\ncorrelation penalty as being regularised VAEs.\nLeveraging the insight that regularised VAEs are robust to adversarial at acks, we develop lass of hierarchical VAEs that are more resilient still. O r m del, Se tbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relative to single layer regularised VAEs. We show that regularised hierarchical VAEs, without our proposed extensions, are not robust to dv rsarial attack.\nSee Figure 1 for a demonstration of how adversarial attacks are highly effective on vanilla VAEs, less effective on regularis d VAEs and close to ineffective n our proposed Seatbelt-VAE.\nThus our key contributions re:\n• A dem nstration that regularised VAEs, t ained with an up-weighted total corr lati n, are significa ly more robust to advers ri l attac s than vanilla VAEs. • We introduce a hierarchical V E, the Seatbelt-VAE, that provides further robustness to adversarial attack. • New connections between robustness, disentangling and adversarial attack, linked through regularisation.\n2. Background 2.1. Variational Autoencoders\nVariational autoencoders (VAEs) are a deep extension of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezend et al., 2014). They hav a joint distribution over data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) where p(z) = N (0, I) nd p✓(x|z) is an app opriate distri bution given the form of the d ta, th parameters f which are r presented by deep nets with parameters ✓. As exact inference is intractable for this model, in a VAE we perform amortised stochastic variation l inference. By introducing a variational posterior distribution over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perfo m gradient ascent on the evidenc lower bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ and jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an adv sa ial attack an agent is trying to manipulate the behaviour of some m chine learning model towards a goal of their choosing. Commonly in d ep learning this would be fooling a classifier to misclassify an image through adding a small perturbation (Akhtar & Mian, 2018; Gilmer et al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Lat nt-space adversarial attacks Ce ebA for diff rent models: a) Vanilla VAE b) -TCVAE c) our proposed S atbeltVAE. Clockwise wi hi each pl t we sh w th i itial input, its reco structio , the be t adversarial input the adversa y could pr - duce, the adversarial distortion that was d ed to make he adversarial input, the adversaria input’s reconstruction, nd the target image. 
We are trying to make the initial input (Hugh Jackman) look like the target (Anna Wintour). You can see that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. The -TCVAE adv. reconstruction does not look like Wintour, so the attack has not been successful but it is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.\noutput.\nAttacks on VAEs have been proposed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given a distorted image as input.\nSee Figure 1.a) for an example of a successful attack on a vanilla VAE. Here we are trying to turn Hugh Jackman\n, top left) into Anna Wintour (Target, bottom left). e can see that, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial rec., bottom middle). The adversary has achieved their goal.\nThe current most effective mode of attack on VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro\n055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109\nImproving VAEs’ Robustness to Adversarial Attack\ncorrelation penalty as being regularised VAEs.\nLeveraging the insight that regularised VAEs are robust to adversarial attacks, we develop a class of hierarchical VAEs that are more resilient till. Our model, Seatbelt-VAE, provides both robustness to adversarial attack, and higher quality reconstructions, relative to single layer regularised VAEs. We show that regularised hierarchical VAEs, without our proposed extensions, are not robust to adversarial attack.\nS e Figure 1 for a demon tration of how adv rsarial attacks are ighly effective on vanill VAEs, less effective on regularised VAEs nd close to i effective on our proposed Seatbelt-VAE.\nThus our key contributions are:\n• A demonstration at eg larised VAEs, tra ned with an up-weighted total corr lation, are significa tly more robust to adversarial att cks than v nilla VAEs. • We introduce a hierarchical VAE, the Seatbelt-VAE, that provides further robustness to adversarial attack. • New connections between robustness, disentangling and adversarial attack, linked through regularisation.\n2. Background 2.1. Variational Autoencoders\nVariatio al autoencoders (VAEs) are a deep extension of factor analysis suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezende et al., 2014). They have a joint distribution ov r data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) wh re p(z) = N (0, I) and p✓(x|z) is an appropriate distribution given the form of the data, the pa ameters of which are represented y dee nets with parameters ✓. As xact inference is intr c ble for this model, in VAE we perform mortised stochastic varia al inference. 
By i roducing a variational o terio distribution ov the latent vari bles q (z|x) = N (µ (x),⌃ (x)), we can perform gradient ascent on he evidence lower bou d (ELBO) L(x) = DKL(q (z|x)||p✓( , z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ nd jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an adversarial attack an agent is trying to manipulate the behaviour of so e machine learning model towards a goal of their choosing. Commonly in deep learning this would be fooling a classifier to misclassify an image through adding a small perturbation (Akhtar & Mian, 2018; Gilmer et al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Late t-space adversarial attacks on CelebA for different m dels: a) Vanilla VAE b) -TCVAE c) our proposed SeatbeltVAE. Clockwise within each plot we show the initial input, its reconstructio , the best adversarial input the adversary could produce, the advers rial distortion that was added to make the adversarial input, the adversarial input’s reconstructio , and the target image. We are trying t make th initial input (Hugh Jackman) look like the targe (Anna Wintour). You can see that the adversarial reconstruction or the Vanilla VAE looks substantially like Wintour, indicat ng a successful attack. The -TCVAE adv. r c nstruct on d es not look like Wintour, so the attack has not been successful, but it is n t Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.\noutput.\nAttacks on VAEs h ve be n pr p sed in Tabac f et al. (2016); Gon im-Ribeiro et al. (2018); K et l. (2018). Th adv rsary w nts draws from the model to e close to target image when given a distorted image as input.\nSee Figure 1.a) for an example of a successful attack on a v nill VAE. Here we are tryin to turn Hugh Jackman (Original, top left) into Anna Wi tour (Target, bottom left). We can see that, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., op middle) to a somewh t blurry vers on of Wintour (Adversarial rec., bottom middle). The adversary has achieved their goal.\nThe current most effective mode of attack on VAEs, the latent space attack (Tabacof et al., 2016; Gondim-Ribeiro\n5 56 57 58 59 0 1 2 3 4 5 66 67 68 69 0 1 2 3 4 5 76 77 78 79 0 1 2 3 4 5 86 87 88 89 0 1 2 3 4 5 096 097 098 099\n0 1 2 3 4 5 106 107 108 109\nImproving VAEs’ Robustness to Adversarial Attack\ncorrelation penalty as being regularised VAEs.\nLeveraging the i si ht that regulari ed VAEs are robust to adversarial at acks, we evelop l ss of hierarchi l VAEs that are more resilient till. O r m del, Se tbe t- AE, provides both robustness to adv sarial ttack, and higher quality reconstructions, relative to single layer regularised VAEs. We show that regularised hierarchical VAEs, without our proposed extensions, are not robust to adversarial attack.\nSe Figure 1 for a demonstration of how adversarial attacks are high y effective on vanilla VAEs, less effective on regularis d VAEs and close to ineffective on our proposed Seatbelt-VAE.\nThus our k y contributions are:\n• A demonstration that regularised VAEs, t ained with an up-weighted otal corr lati n, a significa ly more robust to a ve s ri l attac than vanilla VAEs. 
• We introduce a hierarchical VAE, the Seatbelt-VAE, that provides further robustness to adversarial attack. • Ne connectio s betwee robustness, disentangling and adversarial attack, linked through regularisation.\n2. Background 2.1. Variational Autoencoders\nVariational autoencoders (VAEs) ar a deep extension of factor analy is suitable for high-dimensional data like images (Kingma & Welling, 2013; Rezend et al., 2 14). Th y hav a joint distribution over data x and latent variables z: p✓(x, z) = p✓(x|z)p(z) where p(z) = N (0, I) nd p✓(x|z) is an app opriate distri bution given the form of the d ta, th parameters of which are r presented by deep nets with parameters ✓. As exact inference is intractable for this model, in a VAE we perform amortised stochastic variational inference. By introducing a variational posterior distrib tion over the latent variables q (z|x) = N (µ (x),⌃ (x)), we can perfo m gradient ascent on the evide l wer bound (ELBO) L(x) = DKL(q (z|x)||p✓(x, z)) = Eq (z|x) log p✓(x|z) DKL(q (z|x)||p(z)) log p(x) w.r.t.both ✓ a d jointly, using the reparameterisation trick to take gradients through Monte Carlo samples from q (z|x).\n2.2. Attacks on VAEs\nIn an adv sa ial attack an agent is trying to man pulat he be viour of some m chine learning model towards a goal of their choosing. Commonly in d ep l arning this would be fooling a classifier to misclassify an image thr ugh adding a small perturbation (Akhtar & Mian, 2018; Gilmer t al., 2018). Very small changes in input, of little importance to the human eye, can produce large changes in the model’s\n(a)\n(b)\n(c)\nFigure 1. Latent-space adversarial attacks on CelebA for different models: a) Vanilla VAE b) -TCVAE c) our proposed SeatbeltVAE. Clockwise within each plot we show the initial input, its reconstruction, the best adversarial input the adver ary could produce, the adversarial distortion that was added to make the adversarial input, the adversarial input’s reconstruction, and the target image. We are trying to make the initial input (Hugh Jackman) look like the target (Anna Wintour). You can see that the adversarial reconstruction for the Vanill VAE looks substantially like Wintour, indicating a succ ssful ttack. The -TCVAE adv. reconstruction does n t look l ke Wi tour, so he attack has n t been successful, but it is not Jackman either. Our proposed model, Seatbelt-VAE, is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.\noutput.\nAttacks on VAEs have been proposed in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018). The adversary wants draws from the model to be close to a target image when given a distorted image as input.\nSee Figur 1.a) for an example of a successful attack on a vanilla VAE. Here we are trying to t rn Hugh Jackman\n, top left) i to Anna Wintour (Target, bottom left). e can see hat, by adding a well-chosen distortion (Distortion, bottom right), the reconstruction of Jackman goes from looking like a somewhat blurry version of the input (Original rec., top middle) to a somewhat blurry version of Wintour (Adversarial rec., bottom middle). 
Figure 1: Adversarial attacks on CelebA for different models. Here we start with the image of Hugh Jackman and introduce an adversary that tries to produce reconstructions that look like Anna Wintour. This is done by applying a distortion (third column) to the original image to produce an adversarial input (second column). We can see that the adversarial reconstruction for the Vanilla VAE looks substantially like Wintour, indicating a successful attack. Adding a regularisation term using the β-TCVAE produces an adversarial reconstruction that does not look like Wintour, but it is also far from a successful reconstruction.
The hierarchical version of a β-TCVAE (which we call Seatbelt-VAE) is sufficiently hard to attack that the output under attack still looks like Jackman, not Wintour.

To summarise: We provide insights into what makes VAEs vulnerable to attack and how we might go about defending them. We unearth novel connections between disentanglement and adversarial robustness. We demonstrate that regularised VAEs, trained with an up-weighted total correlation, are much more robust to attacks than vanilla VAEs. Building on this we develop regularised hierarchical VAEs that are more robust still and offer improved reconstructions. Finally, we show that robustness to adversarial attack also confers increased robustness to downstream tasks." }, { "heading": "2 BACKGROUND: ATTACKING VAES", "text": "In adversarial attacks an agent is trying to manipulate the behaviour of some model towards a goal of their choosing (Akhtar & Mian, 2018; Gilmer et al., 2018). For many deep learning models, very small changes in the input can produce large changes in output. Attacks on VAEs have been proposed where the adversary looks to apply small input distortions that produce reconstructions close to a target adversarial image (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018; Kos et al., 2018). An example is shown in Fig 1: a standard VAE is successfully attacked, turning Jackman into Wintour.

Unlike more established adversarial settings, only a small number of such VAE attacks have been suggested in the literature. The current known most effective mode of attack is a latent space attack (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018; Kos et al., 2018). This aims to find a distorted image x∗ = x + d such that its posterior qφ(z|x∗) is close to that of the agent's chosen target image qφ(z|xt) under some metric. This then implies that the likelihood pθ(xt|z) is high when given draws from the posterior of the adversarial example. It is particularly important to be robust to this attack if one is concerned with using the encoder network of a VAE as part of a downstream task. For a VAE with a single stochastic layer, the latent-space adversarial objective is

∆r(x, d, xt; λ) = r(qφ(z|x + d), qφ(z|xt)) + λ||d||2, (1)

where r(·, ·) is some divergence or distance, commonly a DKL (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018). We are penalising the L2 norm of d too, so as to aim for attacks that change the image less. We can then simply optimise to find a good distortion d.

Alternatively, we can aim to directly increase the ELBO for the target datapoint (Kos et al., 2018):

∆output(x, d, xt; λ) = Eqφ(z|x+d)[log pθ(xt|z)] − DKL(qφ(z|x + d)||p(z)) + λ||d||2. (2)
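As an illustration of the latent-space objective in Eq (1), the sketch below optimises a distortion d to pull the attacked posterior towards the target posterior under a closed-form Gaussian DKL with an L2 penalty. The `encode` function (returning the mean and log-variance of qφ(z|·)) is an assumed interface, not this paper's code, and Adam is used here purely to keep the sketch short (the paper itself uses L-BFGS-B).

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))))."""
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum()

def latent_space_attack(encode, x, x_target, lam=1.0, steps=500, lr=1e-2):
    """Minimise KL(q(z|x+d) || q(z|x_t)) + lam * ||d||^2 over the distortion d."""
    with torch.no_grad():
        mu_t, logvar_t = encode(x_target)      # the target posterior is fixed
    d = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mu_a, logvar_a = encode(x + d)         # attacked posterior q(z|x+d)
        loss = gaussian_kl(mu_a, logvar_a, mu_t, logvar_t) + lam * (d ** 2).sum()
        loss.backward()
        opt.step()
    return d.detach()
```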
" }, { "heading": "3 DEFENDING VAES", "text": "This problem was not considered by prior works1. To address it, we first need to consider what makes VAEs vulnerable to adversarial attacks. We argue that two key factors dictate whether we can perform a successful attack on a VAE: a) whether we can induce significant changes in the encoding distribution qφ(z|x) through only small changes in the data x, and b) whether we can induce significant changes in the reconstructed images through only small changes to the latents z. The first of these relates to the smoothness of the encoder mapping, the latter to the smoothness of the decoder mapping.

1We note that the earliest version of this work appeared in June 2019 (Willetts et al., 2019), here extended. Since then other works, eg Camuto et al. (2020); Cemgil et al. (2020); Barrett et al. (2021), have built on our own work to consider this problem of VAE robustness, including investigating it from a more theoretical standpoint.

Consider, for the sake of argument, the case where the encoder–decoder process is almost completely noiseless. Here successful reconstruction places no direct pressure for similar encodings to correspond to similar images: given sufficiently powerful networks, very small changes to embeddings z can imply very large changes to the reconstructed image; there is no ambiguity in the “correct” encoding of a particular datapoint. In essence, we can have lookup–table style behaviour – nearby realisations of z do not necessarily relate to each other and very different images can have very similar encodings.

This will now be very vulnerable to adversarial attacks: small input changes can lead to large changes in the encoding, and small encoding changes can lead to large changes in the reconstruction. It will also tend to overfit and have gaps in the aggregate posterior, qφ(z) = (1/N) ∑_{n=1}^{N} qφ(z|xn), as each qφ(z|xn) will be sharply peaked. These gaps can then be exploited by an adversary. There are two mechanisms by which we can reduce this lookup-table behaviour, thereby reducing gaps in the aggregate posterior. First, we can try to regulate the level of noise in the per-datapoint posterior covariance, to then obtain smoothness in the overall embeddings. Having a stochastic encoding creates uncertainty in the latent that gives rise to a particular image, forcing similar latents to correspond to similar images. Adding noise forces the VAE to smooth the encode-decode process in that similar images will lead to similar embeddings in the latent space, ensuring that small changes in the input result in small changes in the latent space and result in small changes in the decoded outputs. This proportional input-output change is what we refer to as a ‘simple’ encode-decode process, which is the second mechanism that can reduce look-up table behaviour.

The fact that the VAE is vulnerable to adversarial attack suggests that its standard setup does not obtain sufficiently smooth and simple representations to provide an adequate defence. Introducing additional regularisation to enforce simplicity or increased posterior covariance thus provides a prospect for defending VAEs. We could attempt to obtain this by direct regularisation of the networks (e.g. weight decay). Here, however, we focus on macro-level regularisation approaches as discussed in the next section. The reason for this is that the macroscopic behaviour of the networks can be difficult to control through low-level regularisations and, in particular, difficult to calibrate. Further, as the most effective attacks on VAEs currently target the latent space, it is reasonable that regularisation methods that directly act on the properties of the latent space form a good place to start." }, { "heading": "3.1 DISENTANGLING METHODS AND ROBUSTNESS", "text": "Recent research into disentangling VAEs (Higgins et al., 2017a; Siddharth et al., 2017; Kim & Mnih, 2018; Chen et al., 2018; Esmaeili et al., 2019; Mathieu et al., 2019) and the information bottleneck (Alemi et al., 2017; 2018) has looked to regularise the ELBO with the hope of providing more interpretable embeddings. These regularisers also have influences on the smoothness and stochasticity of the embeddings learned.

Of particular relevance, Mathieu et al.
(2019) introduce the notion of overlap in the embedding of a VAE: the level of overlap between per-datapoint posteriors as they combine to form the aggregate posterior. Controlling this is critical to achieving a smoothly varying latent embedding. Overlap encapsulates both the level of uncertainty in the encoding process and also a locality of this uncertainty. To learn a smooth representation we not only need our encoder distribution to have an appropriate entropy, we also want the different possible encodings to be similar to each other. Critically, Mathieu et al. (2019) show that many methods proposed for disentangling, and in particular the β-VAE (Higgins et al., 2017a; Alemi et al., 2017), provide a mechanism for directly controlling this overlap.

Going back to our previous arguments, we see that controlling this overlap may also provide a mechanism for improving VAEs' robustness. This observation now hints at an interesting question: can we use methods initially proposed to encourage disentanglement to encourage robustness?

It is important to note here that disentangling can be difficult to achieve in practice, typically requiring precise choices in the hyperparameters of the model and the weighting of the added regularisation term, and often also a fair degree of luck (Locatello et al., 2019; Mathieu et al., 2019; Rolinek et al., 2019). As such, we are not suggesting to induce disentangled representations to induce robustness, or indeed that disentangled representations should be any more robust. Rather, as highlighted above, we are interested in whether the regularisers traditionally used to encourage disentanglement reliably lead to adversarially robust VAEs. Indeed, we will find that though our approaches, based on these regularisers, provide reliable and significant improvements in robustness, these improvements are not generally due to any noticeable improvements in disentanglement itself (see Appendix E.1).

Regularising for Robustness There are a number of different disentanglement methods that one might consider using to train robust VAEs. Perhaps the simplest would be to use a β-VAE (Higgins et al., 2017a), wherein we up-weight the DKL term in the VAE's ELBO by a factor β ≥ 1. However, as mentioned previously the β-VAE only increases overlap at the expense of substantial reductions in reconstruction quality, as the data likelihood term has, in effect, been down-weighted (Kim & Mnih, 2018; Chen et al., 2018; Mathieu et al., 2019).

Because of these shortfalls, we instead propose to regularise through penalisation of a total correlation (TC) term (Kim & Mnih, 2018; Chen et al., 2018). As discussed in Section A.1, this looks to directly force independence across the different latent dimensions in the aggregate posterior qφ(z), such that the aggregate posterior factorises across dimensions. This approach has been shown to have a smaller deleterious effect on reconstruction quality than found in β-VAEs (Chen et al., 2018). As seen in Fig 2 this method also gives greater overlap by increasing posterior variance. To summarise, the greater overlap and the lesser degradation of reconstruction quality induced by β-TCVAEs make them highly suitable for our purposes." }, { "heading": "3.2 ADVERSARIAL ATTACKS ON TC-PENALISED VAES", "text": "We now consider attacking these TC-penalised VAEs and demonstrate one of the key contributions of the paper: that empirically this form of regularisation makes adversarial attacks on VAEs harder to carry out.
To do this, we first train them under the β-TCVAE objective (i.e. Eq (15)), jointly optimising θ, φ for a given β. Once trained, we then attack the models using the latent-space attack method outlined in Section 2, finding an input distortion d that minimises the latent attack loss ∆ as per Eq (1) with r(·, ·) = DKL(·||·).

One possible metric for how successful such attacks have been is the value reached by the attack loss ∆KL. If the latent space distributions for the target and for the distorted input match closely for a small distortion, then ∆KL is small and the model has been successfully fooled – reconstructions from samples from the attacked posterior would be indistinguishable from those from the target posterior. Meanwhile, the larger the converged value of the attack loss the less similar these distributions are and the more different the reconstructed image is to the adversarial target image.

We carry out these attacks for dSprites (Matthey et al., 2017), Chairs (Aubry et al., 2014) and 3D faces (Paysan et al., 2009), for a range of β and λ values. We pick values of λ following standard methodology (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018), and use L-BFGS-B for gradient descent (Byrd et al., 1995). We also varied the dimensionality of the latent space of the model, dz, but found it had little effect on the effectiveness of the attack.

In Fig 3 we show the effect on the attack loss ∆KL for varying β, averaged over different original input-target pairs and values of dz. Note that the plot is logarithmic in the loss. We see a clear pattern for each dataset that the loss values reached by the adversary increase as we increase β from the standard VAE (i.e. β = 1). This analysis is also borne out by visual inspection of the effectiveness of these attacks, for example as shown in Fig 1. We will return to give further experimental results in Section 5. An interesting aspect of Fig 3 is that in many cases the adversarial loss starts to decrease if β is too large: as β increases there is less pressure in the objective to produce good reconstructions." }, { "heading": "4 HIERARCHICAL TC–PENALISED VAES", "text": "We are now armed with the fact that penalising the TC in the ELBO induces robustness in VAEs. However, TC-penalisation in single layer VAEs comes at the expense of model reconstruction quality (Chen et al., 2018), albeit less than that in β-VAEs. Our aim is to develop a model that is robust to adversarial attack while mitigating this trade-off between robustness and sample quality. To achieve this, we now consider instead using hierarchical VAEs (Rezende et al., 2014; Sønderby et al., 2016; Kingma et al., 2016; Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021). These are known for their superior modelling capabilities and more accurate reconstructions. As these gains stem from using more complex hierarchical latent spaces, rather than less noisy encoders, this suggests they may be able to produce better reconstructions and generative capabilities, while also remaining robust to adversarial attacks when appropriately regularised.

The simplest hierarchical extension of conditional stochastic variables in the generative model is the Deep Latent Gaussian Model (DLGM) of Rezende et al. (2014). Here the forward model factorises as a chain, pθ(x, ~z) = pθ(x|z^1) ∏_{i=1}^{L−1} pθ(z^i|z^{i+1}) p(z^L), where each pθ(z^i|z^{i+1}) is a Gaussian distribution with mean and variance parameterised by deep nets, while p(z^L) is an isotropic Gaussian.
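To make the two generative factorisations in this section concrete, below is a small sketch contrasting ancestral sampling in a DLGM-style chain, where the likelihood depends only on z^1, with the structure used later, where the likelihood is conditioned on all layers. The conditional networks are placeholder linear-Gaussian maps, not the architectures used in the paper.

```python
import torch
import torch.nn as nn

class ConditionalGaussian(nn.Module):
    """Placeholder p(z^i | z^{i+1}) = N(mu(z^{i+1}), diag(exp(logvar(z^{i+1}))))."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.logvar = nn.Linear(in_dim, out_dim)

    def sample(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def sample_chain(layers, z_dims, batch=4):
    """Ancestral sampling of z^L, ..., z^1 down a DLGM-style chain."""
    z = torch.randn(batch, z_dims[-1])          # z^L ~ N(0, I)
    zs = [z]
    for layer in reversed(layers):              # z^i ~ p(z^i | z^{i+1})
        z = layer.sample(z)
        zs.append(z)
    return list(reversed(zs))                   # [z^1, ..., z^L]

# A chain with L = 3: p(z^1|z^2) and p(z^2|z^3).
z_dims = [24, 12, 6]
layers = [ConditionalGaussian(z_dims[1], z_dims[0]),   # p(z^1|z^2)
          ConditionalGaussian(z_dims[2], z_dims[1])]   # p(z^2|z^3)
zs = sample_chain(layers, z_dims)
# A DLGM decoder sees only the bottom layer; the all-layer variant of the next
# section instead conditions p(x|...) on every z^i, e.g. via concatenation:
x_from_chain = zs[0]
x_from_all = torch.cat(zs, dim=-1)
```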
Unfortunately, we found that naively applying TC penalisation to DLGM-style VAEs did not confer the improved robustness we observed in single layer VAEs. We postulate that this observed weakness is inherent to the structure of chain factorisation in the generative model. This means that the data-likelihood depends solely on z^1, the bottom-most latent variable, and attackers only need to manipulate z^1 to produce a successful attack.

To account for this, we instead use a generative model in which the likelihood pθ(x|~z) depends on all the latent variables in the chain ~z, rather than just the bottom layer z^1, as has been done in Kingma et al. (2016); Maaløe et al. (2019). This leads to the following factorisation of the generative structure:

pθ(x, ~z) = pθ(x|~z) ∏_{i=1}^{L−1} pθ(z^i|z^{i+1}) p(z^L). (3)

To construct the ELBO, we must further introduce an inference network qφ(~z|x). On the basis of simplicity and that it produces effective empirical performance, we use the factorisation:

qφ(~z|x) = qφ(z^1|x) ∏_{i=1}^{L−1} qφ(z^{i+1}|z^i, x), (4)

where each conditional distribution qφ(z^{i+1}|z^i, x) takes the form of a Gaussian. Again, marginalising out intermediate z^i layers, qφ(z^L|x) is a non-Gaussian, highly flexible distribution. To defend this model against adversarial attack, we apply a TC regularisation term as per the last section. We refer to the resulting models as Seatbelt-VAEs. We obtain a decomposition of the ELBO for this model, revealing the existence of a TC term for the top-most layer (see Appendix B for proof).

Theorem 1. The Evidence Lower Bound, for a hierarchical VAE with forward model as in Eq (3) and amortised variational posterior as in Eq (4), can be decomposed to reveal the total correlation (see Definition A.1) of the aggregate posterior of the top-most layer of latent variables:

L(θ, φ; D) = Eq(~z,x) log pθ(x|~z) + R + S_a + S_b − DKL(q(z^L) || ∏_j q(z^L_j)), (5)

where the last term is the required TC term, and, using j to index over the coordinates in z^L,

R = ∫ dx ∏_{i=1}^{L} (dz^i) qφ(~z|x) q(x) log [ ∏_{k=1}^{L−1} pθ(z^k|z^{k+1}) / ( qφ(z^1|x) ∏_{m=1}^{L−2} qφ(z^{m+1}|z^m, x) ) ] (6)

S_a = −E_{qφ(z^{L−1})} DKL(qφ(z^L, x|z^{L−1}) || qφ(z^L) q(x)) (7)

S_b = −∑_j DKL(qφ(z^L_j) || p(z^L_j)). (8)

In other words, following the Factor and β-TCVAEs, we up-weight the TC term for z^L. We can up-weight this term then recombine the decomposed parts of the ELBO, to give us the following compact form of this objective.

Definition 1. A Seatbelt-VAE is a hierarchical VAE with forward model as in Eq (3) and amortised variational posterior as in Eq (4), trained wrt its parameters θ, φ to maximise the objective:

L^{Seatbelt}(θ, φ; β, D) := Eqφ(~z,x)[ log ( pθ(x, ~z) / qφ(~z|x) ) ] − (β − 1) DKL(q(z^L) || ∏_j q(z^L_j)). (9)

We see that, when L = 1, a Seatbelt-VAE reduces to a β-TCVAE. We use the β = 1 case as a baseline in our experiments as it corresponds to a vanilla VAE for L = 1, while for L > 1, β = 1 produces a hierarchical model with a likelihood function conditioned on all latents.

As with the β-TCVAE, training L^{Seatbelt}(θ, φ; β, D) using stochastic gradient ascent with minibatches of the data is complicated by the presence of aggregate posteriors qφ(z) which depend on the entire dataset. To deal with this, in Appendix C we derive a minibatch estimator for TC-penalised hierarchical VAEs, building off that used for β-TCVAEs (Chen et al., 2018). We note that, as in Chen et al. (2018), large batch sizes are generally required to provide accurate TC estimates.
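As a schematic of the objective in Eq (9), the sketch below combines a hierarchical ELBO estimate with a plug-in estimate of the top-layer total correlation. Both `elbo_terms` and `estimate_tc` are assumed helper functions (the actual TC estimator is the minibatch-weighted-sampling scheme of Appendix C), so this shows the loss structure rather than the paper's implementation.

```python
import torch

def seatbelt_loss(elbo_terms, estimate_tc, model, x, beta):
    """Negative Seatbelt-VAE objective: -(ELBO - (beta - 1) * TC(z^L)).

    elbo_terms(model, x) -> (elbo, z_top): assumed to return a Monte Carlo
        estimate of E_q[log p(x, z_vec) - log q(z_vec|x)] and samples of z^L.
    estimate_tc(model, z_top, x) -> scalar: assumed estimator of
        KL(q(z^L) || prod_j q(z^L_j)), e.g. via minibatch weighted sampling.
    """
    elbo, z_top = elbo_terms(model, x)
    tc = estimate_tc(model, z_top, x)
    return -(elbo - (beta - 1.0) * tc)

# Training step sketch: maximise the penalised ELBO by minimising its negative.
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = seatbelt_loss(elbo_terms, estimate_tc, model, x_batch, beta=8.0)
# loss.backward(); opt.step()
```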
Attacking Hierarchical TC–Penalised VAEs In the above hierarchical model the likelihood over data is conditioned on all layers, so manipulations to any layer have the potential to be significant. We focus on simultaneously attacking all layers, noting that, as shown in Appendix D, this is more effective than just targeting the top or base layers individually. Hence our adversarial objective for latent-space attacks on Seatbelt-VAEs is the following generalisation of that introduced in Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al. (2018), to attack all the layers at the same time:

∆^{Seatbelt}_r(x, d, xt; λ) = λ||d||2 + ∑_{i=1}^{L} r(qφ(z^i|x + d), qφ(z^i|xt)). (10)" }, { "heading": "5 EXPERIMENTS", "text": "Expanding on the brief experiments in Section 3.2, we perform a battery of adversarial attacks on each of the introduced models. We do this for three different adversarial attacks: first (as in Section 3.2) a latent attack, Eqs (1,10), using the DKL divergence between attacked and target posteriors; secondly, we attack via the model's output, aiming to make the target maximally likely under the attacked model as in Eq (2); finally, a new latent attack method as per Eqs (1,10) where we use r(·, ·) = W2(·, ·), the 2-Wasserstein distance between attacked and target posteriors. We then evaluate the effectiveness of these attacks in three ways. First, like Fig 1, we can plot the attacks themselves, to see how effective these attacks are in fooling us. Secondly, we can measure the adversary's loss under the attack objective. Thirdly, we give the negative adversarial likelihood of the target image xt given an attacked latent representation z∗. Larger, more positive, values of −log pθ(xt|z∗) correspond to less successful attacks as they correspond to large distances between the target and the adversarial reconstruction. Lower values correspond to successful attacks as they correspond to a small distance between the adversarial target and the reconstruction. We also measure reconstruction quality of these models, as a function of degree of regularisation. Finally, we also measure how downstream tasks that use the output of these models perform under attack. We train classifiers, on the reconstructions and on the latent representations, and see how robust performance is when the upstream VAE is attacked.

We demonstrate that hierarchical TC–penalised VAEs (Seatbelt-VAEs) confer superior robustness to β-TCVAEs and standard VAEs, while preserving the ability to reconstruct inputs effectively. Through this, we demonstrate that they are a powerful tool for learning robust deep generative models.

Following previous work (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018) we randomly sample 10 input-target pairs for each dataset and for each image pair we consider 50 different values of λ geometrically-distributed from 2^{−20} to 2^{20}. Thus each individual trained model undergoes 500 attacks for each attack mode. As before, we used L-BFGS-B for gradient descent (Byrd et al., 1995). We perform these experiments on Chairs (Aubry et al., 2014), 3D faces (Paysan et al., 2009), and CelebA (Liu et al., 2015). Details of neural architectures and training are given in Appendix G." }, { "heading": "5.1 VISUAL APPRAISAL OF ATTACKS", "text": "We first visually appraise the effectiveness of attacks that use the DKL divergence on vanilla VAEs, β-TCVAEs, and Seatbelt-VAEs.
As mentioned in Section 1, Fig 1 shows the results of latent space attacks on three models trained on CelebA. It is apparent that the β-TCVAE provides additional resilience to the attacks compared with the standard VAE. Furthermore, this figure shows that Seatbelt-VAEs are sufficiently robust to almost completely thwart the adversary: its adversarial reconstruction still resembles the original input. Moreover, this was achieved while also producing a clearer non-adversarial reconstruction. One might expect attacks targeting a single generative factor underpinning the data to be easier. However, we find that these models protect effectively against this as well. For example, see Fig 4 for plots showing an attacker attempting to rotate a dSprites heart.

In both figures we follow the method of Gondim-Ribeiro et al. (2018) to plot attacks. Those shown are representative of the adversarial inputs the attacker was able to find over the 50 different values of λ. The Seatbelt-VAE input only undergoes a small perturbation because it is sufficiently robust that the attacker is not able to make the reconstruction look more like the target image in any meaningful way, such that the optimiser never drifts far from the initial input. Note that the β-TCVAE is also robust here. The attacker is unable to induce the desired adversarial reconstruction, even though the attack may be of large magnitude. In contrast, attacks on vanilla-VAEs are able to move through the latent space and find a perturbation that reconstructs to the adversary's target image." }, { "heading": "5.2 QUANTITATIVE ANALYSIS OF ROBUSTNESS", "text": "Having ascertained perceptually that Seatbelt-VAEs offer the strongest protection to adversarial attack, we now demonstrate this quantitatively. Fig 5 shows −log pθ(xt|z∗) and ∆ over a range of datasets and βs for Seatbelt-VAEs (L = 4) and β-TCVAEs for our three different attacks. It demonstrates that the combination of depth and high TC-penalisation offers the best protection to adversarial attacks and that the hierarchical extension confers much greater protection to adversarial attack than a single layer β-TCVAE. As we go to the largest values of β for both Chairs and 3D Faces, the adversarial loss ∆KL grows by a factor of ≈ 10^7 and −log pθ(xt|z∗) for those attacks doubles for the Seatbelt-VAE. For all attacks, TC-penalised models outperformed standard VAEs (β = 1) and Seatbelt-VAEs outperform single-layer VAEs. β-TCVAEs do not experience such a large uptick in adversarial loss and negative adversarial likelihood. These results show that the hierarchical approach can offer very strong protection from the adversarial attacks studied.

In Appendix D we provide plots detailing these metrics for a range of L values. In Appendix E we also calculate the L2 distance between target images and adversarial outputs and show that the loss of effectiveness of adversarial attacks is not due to the degradation of reconstruction quality from increasing β. We also test VAE robustness to random noise. We noise the inputs and evaluate the model's ability to reconstruct the original input. Through this we are evaluating their ability to denoise. See Appendix F for an illustration of this for TC-penalised models. It is plausible that the ability of these models to denoise is linked to their robustness to attacks.

ELBO and Reconstructions Though Seatbelt-VAEs offer better protection to adversarial attack than β-TCVAEs, we also motivate their utility by way of their reconstruction quality.
In Fig 6 we plot the ELBO of the two TC-penalised models, calculated without the β penalisation that was applied during training. We further show the effect of depth and TC-penalisation on CelebA reconstructions. These plots show that Seatbelt-VAEs' reconstructions are more resilient to increasing β than β-TCVAEs'." }, { "heading": "5.3 PROTECTION TO DOWNSTREAM TASKS", "text": "Finally, we consider the protection that Seatbelt-VAEs might provide to downstream tasks, noting that VAEs are often used as subcomponents in larger ML systems (Higgins et al., 2017b), or as a mechanism to protect another model from attack (Schott et al., 2019; Ghosh et al., 2019). Table 1 shows results for classification tasks using 2-layer MLPs and fully-convolutional nets trained on the reconstructions or on the embeddings. It shows the drop in accuracy caused by an adversary that picks a target with a different label and attacks the VAEs' embedding using the attack objective with λ = 1. We see that Seatbelt-VAEs produced significantly better accuracies under these attacks." }, { "heading": "6 CONCLUSION", "text": "We have shown that VAEs can be rendered more robust to adversarial attacks by regularising the evidence lower bound. This increase in robustness can be strengthened by extending these regularisation methods to hierarchical VAEs, forming Seatbelt-VAEs, which use a generative structure where the likelihood makes use of all the latent variables. Designing robust VAEs is becoming pressing as they are increasingly deployed as subcomponents in larger pipelines. As we have shown, methods typically used for disentangling, motivated by their ability to provide interpretable representations, also confer robustness. Studying the beneficial effects of these methods is starting to come to the fore of VAE research." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This research was directly funded by the Alan Turing Institute under Engineering and Physical Sciences Research Council (EPSRC) grant EP/N510129/1. MW was supported by EPSRC grant EP/G03706X/1. AC was supported by an EPSRC Studentship. SR gratefully acknowledges support from the UK Royal Academy of Engineering and the Oxford-Man Institute. CH was supported by the Medical Research Council, the Engineering and Physical Sciences Research Council, Health Data Research UK, and the Li Ka Shing Foundation.

We thank Tomas Lazauskas, Jim Madge and Oscar Giles from the Alan Turing Institute's Research Engineering team for their help and support." }, { "heading": "B TOTAL-CORRELATION DECOMPOSITION OF ELBO", "text": "Proof of Theorem 1

Here we prove that the ELBO for a hierarchical VAE with forward model as in Eq (3) and amortised variational posterior as in Eq (4) can be decomposed to reveal a total-correlation in the top-most latent variable.

Specifically, now considering the ELBO for the whole dataset and using q(x) to indicate the empirical data distribution, we will obtain, denoting z^0 = x:

L(θ, φ; D) = Eqφ(~z,x)[log pθ(x|~z)] − Eqφ(~z|x)q(x)[ ∑_{i=1}^{L−1} DKL(qφ(z^i|z^{i−1}, x) || pθ(z^i|z^{i+1})) ] − E_{qφ(z^{L−1})} DKL(qφ(z^L, x|z^{L−1}) || qφ(z^L) q(x)) − ∑_j DKL(qφ(z^L_j) || p(z^L_j)) − βDKL(qφ(z^L) || ∏_j qφ(z^L_j)) (16)

We start with the forms of p and q given in Theorem 1.
The likelihood is conditioned on all z layers: pθ(x|~z).

L(θ, φ; D) = Eqφ(~z,x) log [ pθ(x, ~z) / qφ(~z, x) ] (17)
= Eqφ(~z,x)[log pθ(x|~z)] − Eq(x)[DKL(qφ(~z|x) || pθ(~z))] (18)
= Eq(~z,x) log pθ(x|~z) − Eq(x) log q(x) + Eq(~z,x) log [ pθ(~z) / q(~z|x) ] (19)
= Eq(~z,x) log pθ(x|~z) + H(q(x)) + W, (20)

where

W = ∫ dx dz^1 ∏_{i=2}^{L} (dz^i qφ(z^i|z^{i−1}, x)) qφ(z^1|x) q(x) log [ p(z^L) ∏_{k=1}^{L−1} pθ(z^k|z^{k+1}) / ( qφ(z^1|x) ∏_{m=1}^{L−1} qφ(z^{m+1}|z^m, x) ) ].

So here we have three terms: an expectation over the data likelihood, the entropy of the empirical data distribution (a constant) and W. We can now expand W into a term involving the prior for the latent z^L and a term involving the conditional distributions from the generative model for the remaining components of ~z:

W = ∫ dx ∏_{i=1}^{L} (dz^i) qφ(~z|x) q(x) log [ ∏_{k=1}^{L−1} pθ(z^k|z^{k+1}) / ( qφ(z^1|x) ∏_{m=1}^{L−2} qφ(z^{m+1}|z^m, x) ) ] [call this R]
+ ∫ dx ∏_{i=1}^{L} (dz^i) qφ(~z|x) q(x) log [ p(z^L) / qφ(z^L|z^{L−1}, x) ] [call this S]. (21)

The first part R, that part of W not involving the prior for the 'top-most' latent variable z^L, is the first subject of our attention. We split out the part of R involving the generative and posterior terms for the latent variable closest to the data, z^1, and the rest:

R = ∫ dx ∏_{i=1}^{L} (dz^i) qφ(~z|x) q(x) log [ pθ(z^1|z^2) / qφ(z^1|x) ] [R_a]
+ ∑_{m=2}^{L−1} ∫ dx ∏_{i=1}^{L} (dz^i) qφ(~z|x) q(x) log [ pθ(z^m|z^{m+1}) / qφ(z^m|z^{m−1}, x) ] [R_b].

The first of these terms R_a is an expectation over a DKL:

R_a = −Eqφ(z^2,x) DKL(qφ(z^1|x) || pθ(z^1|z^2)). (22)

And the rest, R_b, provides the DKL divergences in the ELBO for all latent variables other than z^L and z^1. It reduces to a sum of expectations over DKL divergences, one per latent variable:

R_b = ∑_{m=2}^{L−1} ∫ dx ∏_{i=1}^{L} (dz^i) qφ(z^1|x) q(x) ∏_{k=1, k≠m}^{L−1} (qφ(z^{k+1}|z^k, x)) qφ(z^m|z^{m−1}, x) log [ pθ(z^m|z^{m+1}) / qφ(z^m|z^{m−1}, x) ] (23)
= −∑_{m=2}^{L−1} ∫ dx ∏_{i=1}^{L} (dz^i) qφ(z^1|x) q(x) ∏_{k=1, k≠m}^{L−1} (qφ(z^{k+1}|z^k, x)) DKL(qφ(z^m|z^{m−1}, x) || pθ(z^m|z^{m+1})) (24)
= −∑_{m=2}^{L−1} E_{qφ(z^{m+1}, z^{m−1}, x)} DKL(qφ(z^m|z^{m−1}, x) || pθ(z^m|z^{m+1})). (25)

Now we have:

L(θ, φ; D) = Eq(~z,x) log pθ(x|~z) + H(q(x)) + R_a + R_b + S. (26)

We wish to apply the TC decomposition to the top-most latent variable z^L. S is an expectation over the DKL divergence between qφ(z^L|z^{L−1}, x) and p(z^L):

S = −E_{qφ(z^{L−1}, x)} DKL(qφ(z^L|z^{L−1}, x) || p(z^L)). (27)

Applying the decomposition, with j indexing over units in z^L:

S = −E_{qφ(z^L, z^{L−1}, x)}[ log qφ(z^L|z^{L−1}, x) − log p(z^L) + log qφ(z^L) − log qφ(z^L) + log ∏_j qφ(z^L_j) − log ∏_j qφ(z^L_j) ]
= −E_{qφ(z^L, z^{L−1}, x)}[ log ( qφ(z^L|z^{L−1}, x) / qφ(z^L) ) ] − E_{qφ(z^L)}[ log ( qφ(z^L) / ∏_j qφ(z^L_j) ) ] − E_{qφ(z^L)}[ log ( ∏_j qφ(z^L_j) / p(z^L) ) ]
= −E_{qφ(z^L, z^{L−1}, x)}[ log ( qφ(z^L|z^{L−1}, x) q(x) / ( qφ(z^L) q(x) ) ) ] − E_{qφ(z^L)}[ log ( qφ(z^L) / ∏_j qφ(z^L_j) ) ] − ∑_j E_{qφ(z^L)}[ log ( qφ(z^L_j) / p(z^L_j) ) ]
= −E_{qφ(z^{L−1})} DKL(qφ(z^L, x|z^{L−1}) || qφ(z^L) q(x)) [S_a] − ∑_j DKL(qφ(z^L_j) || p(z^L_j)) [S_b] − DKL(qφ(z^L) || ∏_j qφ(z^L_j)) [S_c],

where we have used p(z^L) = ∏_j p(z^L_j) for our chosen generative model, a product of independent unit-variance Gaussian distributions.

L(θ, φ; D) = Eq(~z,x) log pθ(x|~z) + H(q(x)) + R_a + R_b + S_a + S_b + S_c, (28)

giving us a decomposition of the evidence lower bound that reveals the TC term in z^L, as required. Multiplying this with a chosen pre-factor β gives us the required form." }, { "heading": "C MINIBATCH WEIGHTED SAMPLING", "text": "As in Chen et al. (2018), applying the β-TC decomposition requires us to calculate terms of the form:

Eqφ(z^i) log qφ(z^i). (29)

The i = 1 case is covered in the appendix of Chen et al. (2018). First we will repeat the argument for i = 1 as made in Chen et al.
(2018), but in our notation, and then we cover the case i > 1 for models with the factorisation of qφ(~z|x) of Seatbelt-VAEs.

C.1 MWS FOR β-TCVAES

We denote BM = {x1, x2, ..., xM}, a minibatch of datapoints drawn uniformly iid from q(x) = (1/N) ∑_{n=1}^{N} δ(x − xn). For any minibatch we have p(BM) = 1/N^M. Chen et al. (2018) introduce r(BM|x), the probability of a sampled minibatch given that one member is x and the remaining M − 1 points are sampled iid from q(x), so r(BM|x) = 1/N^{M−1}.

Eqφ(z^1) log qφ(z^1) = Eqφ(z^1,x)[ log Eq(x)[ qφ(z^1|x) ] ] (30)
= Eqφ(z^1,x)[ log Ep(BM)[ (1/M) ∑_{m=1}^{M} qφ(z^1|xm) ] ] (31)
≥ Eqφ(z^1,x)[ log Er(BM|x)[ ( p(BM) / r(BM|x) ) (1/M) ∑_{m=1}^{M} qφ(z^1|xm) ] ] (32)
= Eqφ(z^1,x)[ log Er(BM|x)[ (1/(NM)) ∑_{m=1}^{M} qφ(z^1|xm) ] ] (33)

So then during training, one samples a minibatch {x1, x2, ..., xM} and can estimate Eqφ(z^1) log qφ(z^1) as:

Eqφ(z^1) log qφ(z^1) ≈ (1/M) ∑_{i=1}^{M} [ log ∑_{j=1}^{M} qφ(z^1_i|xj) − log NM ], (35)

where z^1_i is a sample from qφ(z^1|xi).

C.2 MINIBATCH WEIGHTED SAMPLING FOR SEATBELT-VAES

Here we have that q(~z, x) = ∏_{l=2}^{L} [qφ(z^l|z^{l−1}, x)] qφ(z^1|x) q(x). Now instead of having a minibatch of datapoints, we have a minibatch of draws of z^{i−1}: B^{i−1}_M = {z^{i−1}_1, z^{i−1}_2, ..., z^{i−1}_M}. Each member of this batch is the result of sequentially sampling along a chain, starting with some particular datapoint xm ∼ q(x).

For i > 2, members of B^{i−1}_M are drawn:

z^{i−1}_j ∼ qφ(z^{i−1}|z^{i−2}_j, xj), (36)

and for i = 2:

z^1_j ∼ qφ(z^1|xj). (37)

Thus each member of this batch B^{i−1}_M is the descendant of a particular datapoint that was sampled in an iid minibatch BM as defined above. We similarly define r(B^{i−1}_M|z^{i−1}, x) as the probability of selecting a particular minibatch B^{i−1}_M of these values out from our set {(xn, z^{i−1}_n)} (of cardinality N) given that we have selected into our minibatch one particular pair of values (x, z^{i−1}) from these N values. Like above, r(B^{i−1}_M|z^{i−1}, x) = 1/N^{M−1}.

Now we can consider Eqφ(z^i) log qφ(z^i) for i > 1:

Eqφ(z^i) log qφ(z^i) = E_{qφ(z^i, z^{i−1}, x)}[ log E_{qφ(z^{i−1}, x)}[ qφ(z^i|z^{i−1}, x) ] ] (38)
= E_{qφ(z^i, z^{i−1}, x)}[ log E_{p(B^{i−1}_M)}[ (1/M) ∑_{m=1}^{M} qφ(z^i|z^{i−1}_m, xm) ] ] (39)
≥ E_{qφ(z^i, z^{i−1}, x)}[ log E_{r(B^{i−1}_M|z^{i−1}, x)}[ ( p(B^{i−1}_M) / r(B^{i−1}_M|z^{i−1}, x) ) (1/M) ∑_{m=1}^{M} qφ(z^i|z^{i−1}_m, xm) ] ] (40)
= E_{qφ(z^i, z^{i−1}, x)}[ log E_{r(B^{i−1}_M|z^{i−1}, x)}[ (1/(NM)) ∑_{m=1}^{M} qφ(z^i|z^{i−1}_m, xm) ] ], (41)

where we have followed the same steps as in the previous subsection.

During training, one samples a minibatch {z^{i−1}_1, z^{i−1}_2, ..., z^{i−1}_M}, where each is constructed by sampling ancestrally. Then one can estimate Eqφ(z^i) log qφ(z^i) as:

Eqφ(z^i) log qφ(z^i) ≈ (1/M) ∑_{k=1}^{M} [ log ∑_{j=1}^{M} qφ(z^i_k|z^{i−1}_j, xj) − log NM ], (42)

where z^i_k is a sample from qφ(z^i|z^{i−1}_k, xk). In our approach we only need terms of this form for i = L, so we have:

Eqφ(z^L) log qφ(z^L) ≈ (1/M) ∑_{k=1}^{M} [ log ∑_{j=1}^{M} qφ(z^L_k|z^{L−1}_j, xj) − log NM ], (43)

where z^L_k is a sample from qφ(z^L|z^{L−1}_k, xk)."
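As a concrete rendering of the estimator in Eq (43), the following sketch computes the minibatch-weighted-sampling estimate of Eq[log q(z^L)] for diagonal-Gaussian conditionals. The tensor interface (per-element means and log-variances of qφ(z^L|z^{L−1}_j, xj)) is an assumed simplification for illustration, not the paper's code.

```python
import math
import torch

def mws_log_qzL(z_top, mu_cond, logvar_cond, dataset_size):
    """Minibatch-weighted-sampling estimate of E_q[log q(z^L)], as in Eq (43).

    z_top:       (M, D) samples z^L_k ~ q(z^L | z^{L-1}_k, x_k).
    mu_cond:     (M, D) means of q(z^L | z^{L-1}_j, x_j) for each batch element j.
    logvar_cond: (M, D) log-variances of the same conditionals.
    """
    M, D = z_top.shape
    # log q(z^L_k | z^{L-1}_j, x_j) for every pair (k, j): shape (M, M).
    diff = z_top.unsqueeze(1) - mu_cond.unsqueeze(0)            # (M, M, D)
    log_probs = -0.5 * (logvar_cond.unsqueeze(0)
                        + diff ** 2 / logvar_cond.exp().unsqueeze(0)
                        + math.log(2 * math.pi)).sum(dim=-1)    # (M, M)
    # log q(z^L_k) ~= logsumexp_j log q(z^L_k | z^{L-1}_j, x_j) - log(N * M).
    log_qz = torch.logsumexp(log_probs, dim=1) - math.log(dataset_size * M)
    return log_qz.mean()                                        # (1/M) sum over k
```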
}, { "heading": "D SEATBELT-VAE RESULTS", "text": "D.1 SEATBELT-VAE LAYERWISE ATTACKS

D.2 SEATBELT-VAE ATTACKS BY MODEL DEPTH AND β

E AGGREGATE ANALYSIS OF ADVERSARIAL ATTACK

[Figure E.10 comprises six panels: (a) dSprites distances, (b) dSprites losses, (c) Chairs distances, (d) Chairs losses, (e) 3D Faces distances, (f) 3D Faces losses. The distance panels plot target-recon distance and adversarial-target distance against β for latent space and output attacks; the loss panels plot the adversarial loss for latent and output attacks on a logarithmic scale.]

Figure E.10: Plots showing the effect of varying β in a β-TCVAE trained on dSprites (a,b), Chairs (c,d), and 3D Faces (e,f) on: the L2 distance from the adversarial target xt to its reconstruction when given as input (target-recon distance) and the L2 distance between the adversarial input x∗ and xt (adversarial-target distance); and the adversarial objectives ∆. We also include these metrics for “output” attacks Gondim-Ribeiro et al. (2018), which we find to be generally less effective. In such attacks the attacker directly tries to reduce the L2 distance between the reconstructed output and the target image. For latent attacks the adversarial-target L2 distance grows more rapidly than the target-recon distance (i.e. the degradation of reconstruction quality) as we increase β. This effect is much less clear for output attacks. This makes it apparent that the robustness we see in β-TCVAEs to latent space adversarial attacks is not due to the degradation in reconstruction quality we see as β increases. It is also apparent that increasing β increases the adversarial loss for latent attacks and output attacks.

E.1 DISENTANGLING AND ROBUSTNESS? Although we are using regularisation methods that were initially proposed to encourage disentangled representations, we are interested here in their effect on robustness, not whether the representations we learn are in fact disentangled. This is not least due to the questions that have arisen about the hyperparameter tuning required for disentangled representations Locatello et al. (2019); Rolinek et al. (2019). For us the β pre-factor is just the degree of regularisation imposed.

However, it may be of interest to see what relationship, if any, exists between the ease of attacking a model and how disentangled it is. Here we show the MIG score (Chen et al., 2018) against the achieved adversarial loss on the Faces data for β-TCVAEs. MIG measures the degree to which representations are disentangled and larger adversarial losses correspond to a less successful attack. Shading is over the range of β and dz values.
There does not seem to be any simple correspondence between increased MIG and increases in adversarial loss, indicative of a less successful attack.

[Figure E.11 comprises two panels, (a) Faces and (b) Chairs, plotting adversarial loss against MIG score.]

Figure E.11: Adversarial attack loss reached vs MIG score for β-TCVAEs trained on Faces and Chairs, presented for a range of β = {1, 2, 4, 6, 8, 10} and dz = {8, 32} values." }, { "heading": "F ROBUSTNESS TO NOISE", "text": "[Figure F.12 comprises two panels of density plots, (a) Chairs β-TCVAE log pθ(x|z) and (b) Chairs Seatbelt-VAE log pθ(x|z), each with one subplot per β ∈ {1, 2, 4, 6, 8, 10}.]

Figure F.12: Here we measure the robustness of both the β-TCVAE and the Seatbelt-VAE when Gaussian noise is added to Chairs. Within each plot a range of β values are shown. We evaluate each model's ability to decode a noisy embedding to the original non-noised data x by measuring the distribution of log pθ(x|z) when z ∼ qφ(z|x + aε) (a being a scaling factor taking values in {0.1, 0.5, 1} and ε ∼ N(0, 1)), for which higher values indicate better denoising. We show these likelihood values as density plots for the β-TCVAE in (a) and for the Seatbelt-VAE with L = 4 in (b), taking β ∈ {1, 2, 4, 6, 8, 10}. Note the axis scalings are different for each subplot. We see that for both models using β > 1 produces autoencoders that are better at denoising their inputs. Namely, the mean of the density, i.e. Eqφ(z|x+aε)[log pθ(x|z)], shifts dramatically to higher values for β > 1 relative to β = 1. In other words, for both these models, the likelihood of the dataset in the noisy setting is much closer to the non-noisy dataset when β > 1 across all noise scales (0.1ε, 0.5ε, ε).

G IMPLEMENTATION DETAILS

All runs were done on the Azure cloud system on NC6 GPU machines.

G.1 ENCODER AND DECODER ARCHITECTURES

We used the same convolutional network architectures as Chen et al. (2018). For the encoders of all our models (q(·|x)) we used purely convolutional networks with 5 convolutional layers. When training on single-channel (binary/greyscale) datasets such as dSprites, 3D Faces, or Chairs the 5 layers took the following number of filters in order: {32, 32, 64, 64, 512}. For more complex RGB datasets, such as CelebA, the layers had the following number of filters in order: {64, 64, 128, 128, 512}. The mean and variance of the amortised posteriors are the output of dense layers acting on the output of the purely convolutional network, where the number of neurons in these layers is equal to the dimensionality of the latent space Z. Similarly, for the decoders (p(x|z)) of all our models we also used purely convolutional networks with 6 deconvolutional layers. When training on single-channel (binary/greyscale) datasets, dSprites, 3D Faces, or Chairs, the 6 layers took the following number of filters in order: {512, 64, 64, 32, 32, 1}. For CelebA the layers had the following number of filters in order: {512, 128, 128, 64, 64, 3}.
The mean of the likelihood p(x|·) was directly encoded by the final de-convolutional layer. The variance of the decoder, σ, was fixed to 0.1.

For β-TCVAEs the range of dz values used was {4, 6, 8, 16, 32, 64, 128}. For Seatbelt-VAEs the number of units in each layer z^i decreases sequentially. There is a list of z sizes for each dataset, and for a model of L layers we take the last L entries to give dz,i, i ∈ {1, ..., L}:

{dz}dSprites = {96, 48, 24, 12, 6} (44)
{dz}Chairs = {96, 48, 24, 12, 6} (45)
{dz}3DFaces = {96, 48, 24, 12, 6} (46)
{dz}CelebA = {256, 128, 64, 32} (47)

For Seatbelt-VAEs we also have the mappings qφ(z^{i+1}|z^i, x) and pθ(z^i|z^{i+1}). These are amortised as MLPs with 2 hidden layers with batchnorm and Leaky-ReLU activation. The dimensionality of the hidden layers also decreases as a function of layer index i:

dh(qφ(z^{i+1}|z^i, x)) = hsizes[i] (48)
dh(pθ(z^i|z^{i+1})) = hsizes[i] (49)
hsizes = [1024, 512, 256, 128, 64] (50)

To train the models we used ADAM (Kingma & Lei Ba, 2015) with default parameters, a cosine decaying learning rate of 0.001, and a batch size of 1024. All data was pre-processed to fall on the interval -1 to 1. CelebA and Chairs were both downsampled and cropped as in Chen et al. (2018) and Kulkarni et al. (2015) respectively. We find that using free-bits regularisation (Kingma et al., 2016) greatly ameliorates the optimisation challenges associated with DLGMs." }
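As an illustration of the encoder layout described in Appendix G.1, here is a sketch of the single-channel convolutional encoder with filter counts {32, 32, 64, 64, 512}; the kernel sizes, strides, and the 64×64 input resolution are assumptions made for the sketch, not details given in the text.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Sketch of the 5-layer convolutional encoder for single-channel 64x64 inputs."""
    def __init__(self, z_dim=32):
        super().__init__()
        f = [32, 32, 64, 64, 512]                # filter counts from Appendix G.1
        self.net = nn.Sequential(
            nn.Conv2d(1, f[0], 4, stride=2, padding=1),     # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(f[0], f[1], 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(f[1], f[2], 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Conv2d(f[2], f[3], 4, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(),
            nn.Conv2d(f[3], f[4], 4, stride=1, padding=0),  # 4 -> 1
            nn.ReLU(),
        )
        # Dense heads for the amortised posterior mean and (log-)variance.
        self.mu = nn.Linear(f[4], z_dim)
        self.logvar = nn.Linear(f[4], z_dim)

    def forward(self, x):
        h = self.net(x).flatten(start_dim=1)     # (B, 512)
        return self.mu(h), self.logvar(h)

# mu, logvar = ConvEncoder(z_dim=32)(torch.randn(8, 1, 64, 64))
```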
2021
null
SP:cf9319c2a107d0d34ff04da0f53201f3cdff4c24
[ "The paper tackles the problem of molecule property optimisation. To this end, the authors propose an alternating approach consisting of an explainer model and a molecule completion model. The explainer model takes a complete molecule as input and outputs a subgraph that represents the part that contributes most to property prediction. Then, the molecule completion model uses the subgraphs to sample a complete graph that can maximise the property scores. The loss function of the molecule completion model directly maximises the properties, which is non-differentiable, so the authors use a REINFORCE algorithm for optimisation. " ]
Optimizing molecules for desired properties is a fundamental yet challenging task in chemistry, material science, and drug discovery. This paper develops a novel algorithm for optimizing molecular properties via an Expectation-Maximization (EM)-like explainable evolutionary process. The algorithm is designed to mimic human experts in the process of searching for desirable molecules and alternates between two stages: the first stage on explainable local search which identifies rationales, i.e., critical subgraph patterns accounting for desired molecular properties, and the second stage on molecule completion which explores the larger space of molecules containing good rationales. We test our approach against various baselines on a real-world multi-property optimization task where each method is given the same number of queries to the property oracle. We show that our evolution-by-explanation algorithm is 79% better than the best baseline in terms of a generic metric combining aspects such as success rate, novelty, and diversity. Human expert evaluation on optimized molecules shows that 60% of top molecules obtained from our methods are deemed successful.
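Since the completion stage described above is trained against a non-differentiable property oracle via REINFORCE, here is a generic sketch of that gradient estimator; the `policy` completion interface and `property_oracle` are hypothetical placeholders, and nothing here reflects the authors' actual implementation.

```python
import torch

def reinforce_step(policy, property_oracle, optimizer, rationales, baseline=0.0):
    """One REINFORCE update: maximise E[score] of molecules completed from rationales.

    policy(rationale) -> (molecule, log_prob): assumed stochastic completion model
        returning a sampled molecule and the log-probability of that sample.
    property_oracle(molecule) -> float: assumed black-box, non-differentiable score.
    """
    optimizer.zero_grad()
    losses = []
    for rationale in rationales:
        molecule, log_prob = policy(rationale)
        reward = property_oracle(molecule)        # no gradient flows through this
        # Score-function estimator: grad E[R] = E[(R - b) * grad log pi].
        losses.append(-(reward - baseline) * log_prob)
    loss = torch.stack(losses).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```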
[ { "affiliations": [], "name": "Chengtao Li" }, { "affiliations": [], "name": "Hanjun Dai" }, { "affiliations": [], "name": "Le Song" } ]
[ { "authors": [ "REFERENCES Christophe Andrieu", "Nando De Freitas", "Arnaud Doucet", "Michael I Jordan" ], "title": "An introduction to mcmc for machine learning", "venue": "Machine learning,", "year": 2003 }, { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PloS one,", "year": 2015 }, { "authors": [ "David Baehrens", "Timon Schroeter", "Stefan Harmeling", "Motoaki Kawanabe", "Katja Hansen", "KlausRobert MÞller" ], "title": "How to explain individual classification decisions", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "G Richard Bickerton", "Gaia V Paolini", "Jérémy Besnard", "Sorel Muresan", "Andrew L Hopkins" ], "title": "Quantifying the chemical beauty of drugs", "venue": "Nature chemistry,", "year": 2012 }, { "authors": [ "John Bradshaw", "Brooks Paige", "Matt J Kusner", "Marwin Segler", "José Miguel Hernández-Lobato" ], "title": "A model to search for synthesizable molecules", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nathan Brown", "Marco Fiscato", "Marwin HS Segler", "Alain C Vaucher" ], "title": "Guacamol: benchmarking models for de novo molecular design", "venue": "Journal of chemical information and modeling,", "year": 2019 }, { "authors": [ "Jianbo Chen", "Le Song", "Martin Wainwright", "Michael Jordan" ], "title": "Learning to explain: An information-theoretic perspective on model interpretation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian", "Xin Huang", "Lin Wang", "Jun Zhu", "Le Song" ], "title": "Adversarial attack on graph structured data", "venue": "arXiv preprint arXiv:1806.02371,", "year": 2018 }, { "authors": [ "Hanjun Dai", "Yingtao Tian", "Bo Dai", "Steven Skiena", "Le Song" ], "title": "Syntax-directed variational autoencoder for structured data", "venue": "arXiv preprint arXiv:1802.08786,", "year": 2018 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "Molgan: An implicit generative model for small molecular graphs", "venue": "arXiv preprint arXiv:1805.11973,", "year": 2018 }, { "authors": [ "Kevin Ellis", "Catherine Wong", "Maxwell Nye", "Mathias Sable-Meyer", "Luc Cary", "Lucas Morales", "Luke Hewitt", "Armando Solar-Lezama", "Joshua B Tenenbaum" ], "title": "Dreamcoder: Growing generalizable, interpretable knowledge with wake-sleep bayesian program learning", "venue": "arXiv preprint arXiv:2006.08381,", "year": 2020 }, { "authors": [ "Peter Ertl", "Ansgar Schuffenhauer" ], "title": "Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions", "venue": "Journal of cheminformatics,", "year": 2009 }, { "authors": [ "Anna Gaulton", "Anne Hersey", "Michał Nowotka", "A Patricia Bento", "Jon Chambers", "David Mendez", "Prudence Mutowo", "Francis Atkinson", "Louisa J Bellis", "Elena Cibrián-Uhalte" ], "title": "The chembl database in 2017", "venue": "Nucleic acids research,", "year": 2017 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous 
representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Sai Krishna Gottipati", "Boris Sattarov", "Sufeng Niu", "Yashaswi Pathak", "Haoran Wei", "Shengchao Liu", "Karam MJ Thomas", "Simon Blackburn", "Connor W Coley", "Jian Tang" ], "title": "Learning to navigate the synthetically accessible chemical space using reinforcement learning", "venue": "arXiv preprint arXiv:2004.12485,", "year": 2020 }, { "authors": [ "Gabriel Lima Guimaraes", "Benjamin Sanchez-Lengeling", "Carlos Outeiral", "Pedro Luis Cunha Farias", "Alán Aspuru-Guzik" ], "title": "Objective-reinforced generative adversarial networks (organ) for sequence generation models", "venue": "arXiv preprint arXiv:1705.10843,", "year": 2017 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Composing molecules with multiple property constraints", "venue": "arXiv preprint arXiv:2002.03244,", "year": 2020 }, { "authors": [ "Hiroshi Kajino" ], "title": "Molecular hypergraph grammar with its application to molecular optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Seokho Kang", "Kyunghyun Cho" ], "title": "Conditional molecular design with deep generative models", "venue": "Journal of chemical information and modeling,", "year": 2018 }, { "authors": [ "Steven Kearnes", "Li Li", "Patrick Riley" ], "title": "Decoding molecular graph embeddings with reinforcement learning", "venue": "arXiv preprint arXiv:1904.08915,", "year": 2019 }, { "authors": [ "James Kennedy", "Russell Eberhart" ], "title": "Particle swarm optimization", "venue": "In Proceedings of ICNN’95International Conference on Neural Networks,", "year": 1995 }, { "authors": [ "Pieter-Jan Kindermans", "Kristof Schütt", "Klaus-Robert Müller", "Sven Dähne" ], "title": "Investigating the influence of noise and distractors on the interpretation of neural networks", "venue": "arXiv preprint arXiv:1611.07270,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ksenia Korovina", "Sailun Xu", "Kirthevasan Kandasamy", "Willie Neiswanger", "Barnabas Poczos", "Jeff Schneider", "Eric Xing" ], "title": "Chembo: Bayesian optimization of small organic molecules with synthesizable recommendations", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "arXiv preprint arXiv:1703.01925,", "year": 2017 }, { "authors": [ "Jules Leguy", "Thomas Cauchy", "Marta Glavatskikh", "Béatrice Duval", "Benoit Da Mota" ], "title": "Evomol: a flexible and interpretable evolutionary algorithm for unbiased de novo molecular generation", "venue": "Journal of Cheminformatics,", "year": 2020 }, { "authors": [ "Yibo Li", "Liangren Zhang", "Zhenming Liu" ], "title": "Multi-objective de novo drug design with conditional graph generative model", "venue": "Journal of cheminformatics,", "year": 2018 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", 
"venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": null, "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "AkshatKumar Nigam", "Pascal Friederich", "Mario Krenn", "Alan Aspuru-Guzik" ], "title": "Augmenting genetic algorithms with deep neural networks for exploring the chemical space", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Marcus Olivecrona", "Thomas Blaschke", "Ola Engkvist", "Hongming Chen" ], "title": "Molecular de-novo design through deep reinforcement learning", "venue": "Journal of cheminformatics,", "year": 2017 }, { "authors": [ "Mariya Popova", "Olexandr Isayev", "Alexander Tropsha" ], "title": "Deep reinforcement learning for de novo drug design", "venue": "Science advances,", "year": 2018 }, { "authors": [ "Mariya Popova", "Mykhailo Shvets", "Junier Oliva", "Olexandr Isayev" ], "title": "Molecularrnn: Generating realistic molecular graphs with optimized properties", "venue": "arXiv preprint arXiv:1905.13372,", "year": 2019 }, { "authors": [ "S Prasanna", "RJ Doerksen" ], "title": "Topological polar surface area: a useful descriptor in 2d-qsar", "venue": "Current medicinal chemistry,", "year": 2009 }, { "authors": [ "Jean-Louis Reymond", "Ruud Van Deursen", "Lorenz C Blum", "Lars Ruddigkeit" ], "title": "Chemical space as a source for new drugs", "venue": null, "year": 2010 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should i trust you?: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "David Rogers", "Mathew Hahn" ], "title": "Extended-connectivity fingerprints", "venue": "Journal of chemical information and modeling,", "year": 2010 }, { "authors": [ "Bidisha Samanta", "DE Abir", "Gourhari Jana", "Pratim Kumar Chattaraj", "Niloy Ganguly", "Manuel Gomez Rodriguez" ], "title": "Nevae: A deep generative model for molecular graphs", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Marwin HS Segler", "Thierry Kogej", "Christian Tyrchan", "Mark P Waller" ], "title": "Generating focused molecule libraries for drug discovery with recurrent neural networks", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Chence Shi", "Minkai Xu", "Zhaocheng Zhu", "Weinan Zhang", "Ming Zhang", "Jian Tang" ], "title": "Graphaf: a flow-based autoregressive model for molecular graph generation", "venue": "arXiv preprint arXiv:2001.09382,", "year": 2020 }, { "authors": [ "Avanti Shrikumar", "Peyton Greenside", "Anshul Kundaje" ], "title": "Learning important features through propagating activation differences", "venue": "In ICML, volume 70 of Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all 
convolutional net", "venue": "arXiv preprint arXiv:1412.6806,", "year": 2014 }, { "authors": [ "David Weininger" ], "title": "Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules", "venue": "Journal of chemical information and computer sciences,", "year": 1988 }, { "authors": [ "Robin Winter", "Floriane Montanari", "Andreas Steffen", "Hans Briem", "Frank Noé", "Djork-Arné Clevert" ], "title": "Efficient multi-objective molecular optimization in a continuous latent space", "venue": "Chemical science,", "year": 2019 }, { "authors": [ "Zhitao Ying", "Dylan Bourgeois", "Jiaxuan You", "Marinka Zitnik", "Jure Leskovec" ], "title": "Gnnexplainer: Generating explanations for graph neural networks", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "arXiv preprint arXiv:1802.08773,", "year": 2018 }, { "authors": [ "Zhenpeng Zhou", "Steven Kearnes", "Li Li", "Richard N Zare", "Patrick Riley" ], "title": "Optimization of molecules via deep reinforcement learning", "venue": "Scientific reports,", "year": 2019 }, { "authors": [ "Daniel Zügner", "Amir Akbarnejad", "Stephan Günnemann" ], "title": "Adversarial attacks on neural networks for graph data", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The space of organic molecules is vast, the size of which is exceeding 1060 (Reymond et al., 2010). Searching over this vast space for molecules of interest is a challenging task in chemistry, material science, and drug discovery, especially given that molecules are desired to meet multiple criteria, e.g., high potency and low toxicity in drug discovery. When human experts optimize molecules for better molecular properties, they will first come up with rationales within desirable molecules. Typically, the rationales are subgraphs in a molecule deemed to contribute primarily to certain desired molecular properties. Once rationales are identified, chemists will design new molecules on top of rationales hoping that, the desired properties of new molecules will be further enhanced due to the existence of rationale and changes of non-rationale parts. The cycle of identifying molecular rationales and redesigning new hypothetical molecules will be carried on until molecules that meet certain property criteria are discovered.\nIn this paper, we develop a novel algorithm that mimics the process of molecule optimization by human experts. Our algorithm finds new molecules with better properties via an EM-like explainable evolutionary process (Figure 1). The algorithm alternates between two stages. During the first stage, we use an explainable local search method to identify rationales within high-quality molecules that account for their high property scores. During the second stage, we use a conditional generative model to explore the larger space of molecules containing useful rationales.\nOur method is novel in that we are using explainable models to help us exploit useful patterns in the molecules, yet leveraging generative models to help us explore the molecule landscape. Comparing to existing methods that directly learn a generative model using Reinforcement Learning or perform continuous optimization in the latent space of molecules (Olivecrona et al., 2017; You et al., 2018a; Dai et al., 2018b), our method is more sample-efficient and can generate more novel and unique molecules that meet the criteria.\nWe evaluate our algorithm against several state-of-the-art methods on a molecule optimization task involving multiple properties. Compared with baselines, our algorithm is able to increase the success ∗Correspondence to: Binghong Chen <binghong@gatech.edu>. ∗ indicates equal contribution. Source code at https://github.com/binghong-ml/MolEvol.\nrate by 50%, novelty by 14%, while having a competitive diversity. We further propose a new metric, QNU score, to jointly consider all three aspects, and show that we achieve a score of 52.7% compared with 29.5% by the best baseline. We also ask experienced chemists to evaluate top-50 generated molecules and find that 30 of them are as good as existing ones.\nThe main contributions of this paper are summarized below:\n• We propose a novel EM-like evolution-by-explanation algorithm for molecule optimization; • We present a novel, principled, explainable graph model based on an information-theoretic ap-\nproach to extract subgraphs essential for maintaining certain desired properties; • Our approach outperforms existing state-of-the-arts by a large margin in terms of success rate\n(50% better), novelty (14% better), and an overall metric (79% better) on a real-world multiproperty optimization task." 
}, { "heading": "2 RELATED WORK", "text": "There has been a surge of interest in using machine learning to discover novel molecules with certain properties in recent years. Most of the existing work defines a generative model for either the SMILES strings (Weininger, 1988) or molecular graphs, and uses Reinforcement Learning algorithms to optimize the properties of the generated molecules (Segler et al., 2018; Olivecrona et al., 2017; Guimaraes et al., 2017; You et al., 2018a; Popova et al., 2018; 2019; Samanta et al., 2019; Zhou et al., 2019; De Cao & Kipf, 2018; Kearnes et al., 2019; Shi et al., 2020; Jin et al., 2020). Others optimize the continuous representation of molecules in a latent space learned by variants of variational autoencoders (Kusner et al., 2017; Dai et al., 2018b; Jin et al., 2018; Gómez-Bombarelli et al., 2018; Kang & Cho, 2018; Liu et al., 2018; Kajino, 2019). More recent work attempts Evolutionary algorithms (Nigam et al., 2020; Leguy et al., 2020; Winter et al., 2019), or focuses on finding high-quality molecules with synthesis paths (Bradshaw et al., 2019; Korovina et al., 2020; Gottipati et al., 2020). Most similar to our approach is RationaleRL (Jin et al., 2020), which extracts subgraphs from seed molecules using Monte Carlo Tree Search (MCTS) and generates full molecules by completing the subgraphs. Compared with previous work, our approach is the first to incorporate an explainable model in the iterative search process.\nExisting work on explainable models approaches the problems from three directions. The first line of work uses gradients of the outputs with respect to inputs to identify the salient features in the inputs (Simonyan et al., 2013; Springenberg et al., 2014; Baehrens et al., 2010); the second line of work approximates the model with simple interpretable models, such as locally additive mod-\nels (Bach et al., 2015; Kindermans et al., 2016; Ribeiro et al., 2016; Lundberg & Lee, 2017; Shrikumar et al., 2017); the third line of work defines input pattern selection operators, such that the outputs of the model based on the selected input patterns have high mutual information with the original model outputs (Chen et al., 2018; Ying et al., 2019). Our explainable model is different from GNNExplainer (Ying et al., 2019) in that we optimize the discrete subgraph structure with learned variational predictor, instead of directly feeding continuous edge masking into the target model." }, { "heading": "3 PROBLEM SETTING", "text": "In this paper, we study the problem of discovering molecules g from the molecular space G with a high property score, measured by a scoring function f . And usually, there is a set of seed molecules G0 ⊂ G from experts with high scores to start with. More formally, the problem can be stated as\nMolecule Optimization. Given a scoring function f : G 7→ [0, 1], and a set of seed molecules G0 ⊂ G, the goal is to learn a molecule generative model p(g) such that the expected score of the generated molecules is maximized, i.e.,\nmax p(·) Eg∼p(·)[f(g)] = ∫ g∈G p(g)f(g)dg (1)\nTo prevent the model p(g) from generating a small set of fixed molecules with high scores, we additionally require the learned distribution to be both novel and diverse, i.e., generating molecules that are dissimilar to the set of reference molecules (a subset of G0) and each other. The molecule optimization problem in Eq (1) is combinatorial in nature, which poses a significant challenge. 
To mimic the scientific discovery process, we allow the algorithm to query f on new molecules under a querying budget. Examples of well-known scoring functions include the QED score measuring drug-likeness (Bickerton et al., 2012), the SA score measuring synthetic accessibility (Ertl & Schuffenhauer, 2009), and the TPSA score measuring the ability to permeate cells (Prasanna & Doerksen, 2009). The scoring function is general and could also encode multi-property objectives (Olivecrona et al., 2017; Brown et al., 2019). Optimizing multiple properties together suffers from the sparsity of high scores, a scenario shown to be more challenging than single-property optimization (Jin et al., 2020).

When experts optimize a molecular property, they first look for substructures that are responsible for that property, and use them as the foundation for building novel molecules. These subgraphs are called rationales (examples in Figure 1). The set of rationales is formally defined as

S = {s | ∃g ∈ G, s.t. s is a subgraph of g}. (2)" }, { "heading": "4 OUR FRAMEWORK", "text": "Our novel framework for optimizing molecular properties with generative models consists of a modeling component and an algorithm component. In our modeling component, we propose a rationale-based hierarchical generative model for p(g), which first generates rationales and then completes molecules. In our algorithm component, we design an alternating optimization procedure that interleaves between rationale distribution optimization and molecule generative model optimization. Furthermore, we develop a novel explainable graph model to effectively carry out the rationale distribution optimization. We begin by describing our hierarchical generative model." }, { "heading": "4.1 RATIONALE-BASED HIERARCHICAL GENERATIVE MODEL", "text": "To tackle this challenging search problem, we develop a hierarchical generative model that mimics the process of molecule optimization by human experts. In our model, we first sample rationales s from a distribution p(s), and then generate molecules g according to the conditional distribution pθ(g|s). More specifically, our overall molecular generative model pθ(g) is defined as

pθ(g) = ∫_{s∈S} p(s) pθ(g|s) ds, (3)

where θ is the parameter of the conditional generative model and p(s) is the latent rationale distribution.

Here pθ(g|s) is a graph completion model from rationale s. The architecture of pθ(g|s) can be arbitrary. In this work, we use a latent variable model with a Gaussian prior p(z),

pθ(g|s) = ∫_z p(z) pθ(g|s, z) dz, (4)

where pθ(g|s, z) is a variant of GraphRNN (You et al., 2018b; Liu et al., 2018) that conditions the graph generation on subgraphs. As part of the initialization, pθ(g|s) is first pretrained on ChEMBL (Gaulton et al., 2017), a drug-like molecule dataset, in the same fashion as a variational autoencoder (Kingma & Welling, 2013), where the encoder is a standard GCN with atoms as vertices and bonds as edges.

Note that, unlike p(z), which is a fixed prior, p(s) is updated in each round. Since representing a distribution over S explicitly is difficult, we use particles to represent p(s) in the algorithm.
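Given such a particle representation, drawing a molecule from the hierarchical model in Eqs (3)-(4) reduces to simple ancestral sampling. The sketch below illustrates this under stated assumptions: particles stands for the particle representation of p(s), decoder for the GraphRNN-style conditional decoder pθ(g|s, z), and the latent dimension is an arbitrary illustrative choice; none of these names come from the paper's released code.

```python
import random
import torch

def sample_molecule(particles, decoder, latent_dim=64):
    """Ancestral sampling from p_theta(g) = integral of p(s) p_theta(g|s) ds."""
    s = random.choice(particles)   # s ~ p(s), represented by a finite set of particles
    z = torch.randn(latent_dim)    # z ~ N(0, I), the fixed Gaussian prior p(z)
    return decoder(s, z)           # g ~ p_theta(g | s, z), graph completion from rationale s
```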
In order to improve the diversity of the generated molecules, we also regularize the entropy of the rationale distribution p(s), leading to the following diversity-promoting objective function

J(θ, p(s)) = E_{g∼pθ(·)}[f(g)] + λ · H[p(s)], (5)

with a hyperparameter λ > 0 controlling the strength of the regularization." }, { "heading": "4.2 ALTERNATING OPTIMIZATION ALGORITHM", "text": "As the rationales and the molecules are coupled in the molecular graph space, directly optimizing the diversity-promoting objective in Eq (5) would be challenging. We therefore optimize pθ(g|s) and p(s) in an alternating fashion, akin to the Expectation-Maximization (EM) algorithm. That is, the algorithm alternates between two stages:

• Expectation step (E-step) for obtaining an updated distribution p(s), and • Maximization step (M-step) for improving the molecule completion model pθ(g|s).

We name this algorithm MolEvol (Algorithm 1) by analogy to evolving a group of molecules over time (Figure 1). Assume that, at iteration t − 1, we already have pθt−1(g|s) and pt−1(s), and the set of seed samples Gt−1 drawn from pθt−1(g|s). Then, at iteration t, we have:

E-step. We want to maximize the objective J with respect to the latent distribution p(s) given pθt−1(g|s). That is,

max_{p(s)} Q(p(s)|θt−1) := ∫_{s∈S} p(s) ( ∫_{g∈G} pθt−1(g|s) f(g) dg ) ds − λ ∫_{s∈S} p(s) log p(s) ds, (6)

which is a maximum entropy estimation problem. Interestingly, the solution of the above optimization problem can be obtained in closed form:

pt(s) = argmax_{p(s)} Q(p(s)|θt−1) = (1/Zθ) exp( (1/λ) E_{g∼pθt−1(·|s)}[f(g)] ), (7)

where Zθ is a normalizing constant. This updated distribution over the latent rationales will be needed in the subsequent M-step. Since directly integrating with respect to pt(s) is difficult, we leverage sampling strategies and obtain m particles {si}_{i=1}^{m} from this distribution for later use in the M-step. However, computing the normalizing constant Zθ is itself difficult, so direct sampling from pt(s) is not straightforward. Standard sampling algorithms like Markov Chain Monte Carlo (Andrieu et al., 2003) could be extremely slow due to the lack of a good proposal distribution and the absence of gradients in the discrete graph space.

To address this challenge, we maintain a finite support set St as the proposal, which is obtained from an explainable graph model (more details in the next section). More specifically, suppose the explainable graph model Explain(·) : G → S takes a graph g as input and outputs the corresponding rationale s, which explains why the graph g obtains a high property score according to f(g). Then the support set St can be maintained as follows:

St = ∪_{i=1}^{t} { Explain(g) : g ∈ Gi }, (8)

where G0 is provided to the algorithm initially by the experts. The rationales s ∈ St are treated as the set of particle locations for representing pt(s). Furthermore, for each of these particle locations, we compute its unnormalized probability according to pt(s) in Eq (7), and then re-sample a set of m particles, {si}_{i=1}^{m}, as the final representation of pt(s) (Andrieu et al., 2003).

M-step. With {si}_{i=1}^{m} from pt(s), the Monte Carlo estimate of the objective function in Eq (5) becomes

Q(θ|pt(s)) ≈ Σ_{i=1}^{m} ∫ pθ(g|si) f(g) dg + constant. (9)

We can then maximize it with respect to the parameters θ using REINFORCE:

θt ← θt−1 + α (1/m) Σ_{i=1}^{m} f(gi) ∇ log pθt−1(gi|si), where α > 0, gi ∼ pθt−1(·|si). (10)
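A minimal sketch of this M-step update is given below, assuming a PyTorch-style completion model exposing sample(s) and a differentiable log_prob(g, s) = log pθ(g|s); these interfaces are hypothetical placeholders rather than the paper's released implementation.

```python
import torch

def m_step(completion_model, optimizer, rationales, score):
    """One REINFORCE update of theta following Eq (10)."""
    optimizer.zero_grad()
    loss = 0.0
    for s in rationales:                   # the m particles {s_i}
        g = completion_model.sample(s)     # g_i ~ p_theta(. | s_i)
        # Score-weighted log-likelihood: ascending f(g_i) * grad log p_theta(g_i | s_i)
        # equals descending its negation, averaged over the particles.
        loss = loss - score(g) * completion_model.log_prob(g, s)
    (loss / len(rationales)).backward()
    optimizer.step()
```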
After the parameter is updated to θt, we sample a seed set of molecules Gt from pθt(g|s) by completing the rationale samples {si}_{i=1}^{m} with the updated model. That is,

Gt = {gi}_{i=1}^{ns}, where gi ∼ pθt(·|s), s ∼ Uniform({s1, s2, . . . , sm}). (11)

The overall algorithm is summarized in Algorithm 1. p(s) and pθ(g|s) are updated in the E-step (lines 3-4) and the M-step (lines 5-8), respectively. A discussion of its convergence can be found in Appendix A.2.

Algorithm 1: Molecule Optimization by Explainable Evolution (MolEvol) Input: Seed molecules G0, graph completion model pθ(g|s) pretrained on ChEMBL.
1 Initialize S0 = {}. 2 for t ← 1 to Nrounds do 3 St = St−1 ∪ {Explain(g) : g ∈ Gt−1}. 4 Sample s1, s2, · · · , sm from St using Eq (7) with self-normalization. 5 for j ← 1 to Nepochs do 6 Sample g1, · · · , gm from pθ(g|s1), · · · , pθ(g|sm) respectively. 7 Update θ with REINFORCE (Eq (10)). 8 Sample seed molecules Gt with Eq (11). 9 return pθ(g)" }, { "heading": "4.3 EXPLAINABLE GRAPH MODEL FOR RATIONALES", "text": "In the E-step, it is crucial to update the support set for the rationales so that the particles can be placed in regions where pt(s) is large. As we optimize pθt(g|s), the model generates molecules with improved property scores; intuitively, we would also like an increasingly "good" support set for the rationales. To do this, we identify substructures in the current seed set of molecules Gt that best explain their high property scores, and add these discovered substructures as new rationales. Furthermore, we measure the goodness of these substructures by their mutual information with the property value, and optimize the selector for these substructures using a variational formulation (Chen et al., 2018). This entire procedure, illustrated in Figure 2, can also be seen as seeking explanations for why molecules have high property scores.

Explainer. A molecular graph g is represented by g = (Vg, Eg) with atoms Vg as vertices and bonds Eg as edges. For any subset U ⊆ Vg, the induced subgraph s = (U, EUg) is the subgraph of g formed by the vertices U and the edges EUg = {e ∈ Eg | estart, eend ∈ U} connecting pairs of vertices in the subset. An explainer Explain(·) : G → S takes a graph g as input and outputs an induced subgraph s of k vertices.

Variational Objective. We want to learn an explainer for the conditional distribution P(Y = 1|g) := f(g) (treating f(g) as a probability), with a random variable Y ∈ {0, 1} where Y = 1 indicates that the molecule has the property, and 0 otherwise. We learn a graph vertex sampler hφ(g) jointly with a variational approximation Q(Y|g) of P(Y|g), such that the mutual information between Y and s is maximized:

max_{hφ(·),Q} E_{Y∼P(·|g)} [ log Q(Y | s) ], such that s = (U, EUg) and U ∼ hφ(g). (12)

Details on sampling U from hφ(g) are presented in the next paragraph. After sampling U, we can construct an induced subgraph s = (U, EUg). During the explanation, we then perform an additional expanding step,

s′ = (U′, EU′g), where U′ = U ∪ {v | ∃u ∈ U s.t. e(u, v) ∈ Eg or u, v share a benzene ring}, (13)

to obtain s′, which defines the mapping s′ = Explain(g) (Algorithm 2).

Parameterization of hφ(g). Sampling a subgraph s from g is equivalent to sampling a size-k subset U from the vertices Vg. We use a GNN hφ to define a vertex sampling policy hφ(g) ∈ ∆|Vg| over the space of g's vertices. Specifically, hφ(g) consists of two parts:

1. A message passing network (MPN) which outputs a matrix Xg = MPNφ(g) ∈ R^{|Vg|×d} representing the d-dimensional embeddings of each vertex; 2. A fully-connected layer (FC) followed by a softmax layer implementing the vertex sampling policy hφ(g) = Softmax(FCφ(Xg)) ∈ ∆|Vg|.

Then we follow the procedure in L2X (Chen et al., 2018) for sampling k vertices one by one from the distribution hφ(g) using the Gumbel-softmax trick. The sampled feature matrix can be written as Xs = V(φ, ζ) ⊙ Xg ∈ R^{|Vg|×d}, where ζ is a collection of auxiliary random variables sampled independently from the Gumbel distribution, V(φ, ζ) ∈ {0, 1}^{|Vg|} is a mask on the rows of Xg, and ⊙ is the element-wise product.
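A minimal sketch of this relaxed subset sampling is given below, assuming per-vertex logits produced by FCφ(MPNφ(g)); it is an illustrative re-implementation of the L2X-style trick under these assumptions, not the paper's released code, and the temperature value is arbitrary.

```python
import torch
import torch.nn.functional as F

def sample_vertex_mask(logits: torch.Tensor, k: int, tau: float = 0.5) -> torch.Tensor:
    """Relaxed k-hot mask V(phi, zeta) over |V_g| vertices from per-vertex logits."""
    u = torch.rand(k, logits.numel()).clamp(1e-9, 1.0 - 1e-9)
    gumbel = -torch.log(-torch.log(u))                # zeta ~ Gumbel(0, 1)
    perturbed = (logits.unsqueeze(0) + gumbel) / tau  # (k, |V_g|) perturbed logits
    onehots = F.softmax(perturbed, dim=-1)            # k relaxed one-hot samples
    return onehots.max(dim=0).values                  # element-wise max -> approximate k-hot mask
```

Multiplying the mask into the rows of Xg (broadcast over the embedding dimension) yields the masked features Xs, and gradients flow back to the logits through the softmax.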
Parameterization of Q. Since directly performing the variational approximation with generic choices of Q is hard, we approximate it with an MLP qψ that takes the aggregated masked embedding vector xs = Σ_r Xs[r, :] ∈ R^d as input and predicts the target via Q(Y = 1|s) = qψ(xs) = Sigmoid(MLPψ(xs)).

Final Objective for Training. After applying the Gumbel-softmax trick, we transform the variational objective in Eq (12) into:

max_{φ,ψ} E_{g,ζ} [ f(g) log qψ(V(φ, ζ) ⊙ Xg) + (1 − f(g)) log(1 − qψ(V(φ, ζ) ⊙ Xg)) ]. (14)

We can then apply stochastic gradient ascent to jointly optimize φ and ψ by sampling molecules g from the dataset and ζ from the Gumbel distribution. Please refer to Appendix A.1 for more details of the training procedure as well as the implementation of the explainer.

We note that our design of the explainer model and the learning method is very different from those in GNNExplainer (Ying et al., 2019), and may be of independent interest as an explainable model for GNNs. For instance, our explainable model hφ is itself a GNN model, and we introduce a variational distribution qψ which is optimized jointly with hφ.

Algorithm 2: Explainφ(g) Input: Molecule g, vertex sampling policy φ. hφ(g) = Softmax(FCφ(MPNφ(g))). Sample U ∼ hφ(g) with the Gumbel-softmax trick. s′ = Expand((U, EUg)) as defined in Eq (13). return s′

Rationale Extraction as Explaining. During the E-step in Algorithm 1, we utilize the trained explainer Explain(·) to extract rationale candidates s from the seed molecules. The candidates with the top Q-scores are then added to the rationale support set to update St.

Remark on Explanation. In this paper, we use the word "explanation" to refer to a critical component of the input that is of most importance for the final prediction, following the convention of L2X (Chen et al., 2018) and GNNExplainer (Ying et al., 2019). A more rigorous explanation in scientific language would be important and helpful for scientific research; generating such an explanation with a machine learning model could be highly relevant in general, but is beyond the scope of this paper." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate MolEvol on a multi-property molecule optimization task (Li et al., 2018; Jin et al., 2020) involving four properties:

• GSK-3β: inhibition levels against glycogen synthase kinase-3 beta (Li et al., 2018); • JNK3: inhibition levels against c-Jun N-terminal kinase 3 (Li et al., 2018); • QED: quantitative estimate of drug-likeness (Bickerton et al., 2012); • SA: synthetic accessibility (Ertl & Schuffenhauer, 2009).

GSK-3β and JNK3 are potential targets in the treatment of Alzheimer's disease. Their corresponding property predictors are random forests trained on real-world experimental data using Morgan fingerprint features (Rogers & Hahn, 2010).
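A sketch of how such per-property scorers might be assembled with RDKit is shown below; the QED and SA implementations are RDKit's, the Morgan-fingerprint featurization follows Rogers & Hahn (2010), while gsk3b_model and jnk3_model stand in for the paper's trained random forests (assumed here to follow the scikit-learn predict_proba interface).

```python
import os, sys
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED, RDConfig

sys.path.append(os.path.join(RDConfig.RDContribDir, 'SA_Score'))
import sascorer  # RDKit contrib implementation of the SA score

def property_scores(smiles, gsk3b_model, jnk3_model):
    """Return the four raw property scores for a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    feats = np.zeros((2048,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, feats)
    return {
        'GSK-3beta': gsk3b_model.predict_proba([feats])[0, 1],  # P(inhibits GSK-3beta)
        'JNK3': jnk3_model.predict_proba([feats])[0, 1],        # P(inhibits JNK3)
        'QED': QED.qed(mol),                                    # already in [0, 1]
        'SA': sascorer.calculateScore(mol),                     # raw score in [1, 10], lower is better
    }
```

The raw SA value is then re-normalized to [0, 1] (footnote 1 below) before entering the unified score that follows.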
In our experiment, we consider all four properties by combining their scores into a unified scoring function¹:

f(g) = [ GSK-3β(g) · JNK3(g) · QED(g) · SA(g) ]^{1/4}. (15)

Note that in the eMolecules dataset (eMolecules, 2020) of commercially available molecules, only 0.007% out of over 27M molecules meet the criterion f(g) > 0.5.

Experiment Setting. We provide a set of 3.4K seed molecules for the algorithms to start with. Each seed molecule has a high value in GSK-3β or JNK3 or both. There is a budget on both time and the number of queries: each algorithm is allowed to query f-scores no more than 5M times and to run no more than 1 day on a Ubuntu 16.04.6 LTS server with 1 Nvidia RTX 2080 Ti GPU and 20 Intel(R) Xeon(R) E5-2678 2.50GHz CPUs. We evaluate the algorithms on 20K generated molecules using the following metrics. We call a molecule g qualified if f(g) > 0.5, and novel if the distance between g and the reference molecule set is larger than a threshold². The reference set contains 315 qualified molecules, which is a subset of the provided seed molecules.

• Success rate: the percentage of qualified molecules out of the 20K molecules. • Novelty: the percentage of novel molecules out of all qualified molecules. • Diversity: the average pairwise distance between all qualified and novel molecules². • QNU score: the percentage of qualified, novel and unique molecules out of the 20K molecules.

¹The ranges of GSK-3β, JNK3, and QED are [0, 1]. We re-normalize SA to [0, 1] via SA(g) ← (1/9)(10/SA(g) − 1).
²Novel(g) = I(max_{g′∈Gref} Sim(g, g′) < 0.4); Diversity = 1 − (2/(n(n−1))) Σ_{g≠g′} Sim(g, g′), where Sim(·, ·) is the Tanimoto similarity on Morgan fingerprints.

Success rate, novelty and diversity have been adopted as evaluation metrics in previous work (Olivecrona et al., 2017; Li et al., 2018; Jin et al., 2020). However, the trade-off among the three targets complicates comparisons between algorithms. We therefore propose a new metric, the QNU score, to jointly consider the three aspects. QNU serves as the major factor for comparison.
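These metrics can be computed directly from the generated SMILES using the Tanimoto similarity of footnote 2. The sketch below is illustrative: score stands in for the unified f of Eq (15), and treating uniqueness as deduplication by canonical SMILES is an assumption on our part, not a detail stated in the paper.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def evaluate(generated, reference, score):
    """Success rate, novelty and QNU over a list of generated SMILES strings."""
    ref_fps = [fingerprint(s) for s in reference]
    qualified = [g for g in generated if score(g) > 0.5]
    novel = [g for g in qualified
             if max(DataStructs.TanimotoSimilarity(fingerprint(g), r) for r in ref_fps) < 0.4]
    unique = {Chem.MolToSmiles(Chem.MolFromSmiles(g)) for g in novel}  # canonical-SMILES dedup
    return {
        'success_rate': len(qualified) / len(generated),
        'novelty': len(novel) / max(len(qualified), 1),
        'QNU': len(unique) / len(generated),
    }
```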
Implementing MolEvol. We first pretrain the graph completion model pθ(g|s) on a dataset constructed from ChEMBL (Gaulton et al., 2017), which contains over 1.4M drug-like molecules. The pretraining dataset consists of 4.2M (s, g) tuples, where g is a random molecule from ChEMBL and s is a random subgraph of g. In our experiment, MolEvol is run for 10 rounds. Within each round, 200 rationales are added to the support set during the explainable local search stage. During the local search stage, 3 to 5 atoms are sampled according to the vertex sampling policy hφ(g), and we include the neighbors of the sampled atoms, i.e., the atoms sharing a common bond with a sampled atom, to form the rationale (Eq (13)). In the molecule completion stage, the parameter θ is updated with gradient descent for 1 epoch on a total of 20000 (s, g) pairs with a minibatch size of 10 and a learning rate of 1e-3.

Baselines. We compare MolEvol against the state-of-the-art molecule optimization algorithms below:

• RationaleRL (Jin et al., 2020) learns a graph completion model, but relies on a fixed set of multi-property rationales composed from single-property rationales extracted by MCTS. Concretely, each state in MCTS represents a subgraph of the molecule, and the reward function is defined as the property score of the subgraph. • REINVENT (Olivecrona et al., 2017) learns an RNN model with Reinforcement Learning for generating molecules in the form of SMILES strings; • MSO (Winter et al., 2019) optimizes the property using Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995) in a continuous latent space of molecules; • GA-D(t) (Nigam et al., 2020) employs a genetic algorithm enhanced with a neural-network-based discriminator component to promote diversity. The discriminator tries to distinguish between molecules generated by the GA and the reference molecule set. A time-dependent adaptive penalty is also used to further promote exploration.

Since MSO and GA-D(t) do not explicitly learn a generative model, we use the best 20K out of the 5M molecules encountered in their search processes for comparison.

Results. The results are reported in Table 1. Compared to the baselines, MolEvol achieves a higher success rate in generating qualified molecules (30% higher than RationaleRL, MSO and GA-D(t); 45% higher than REINVENT). Meanwhile, MolEvol maintains high novelty (75.7%), which may be attributed to the alternating process in the framework. Although the diversity is slightly lower than that of RationaleRL due to the distribution shift during optimization, the QNU score, which takes all the above metrics into consideration, is significantly higher than that of RationaleRL (52.7% versus 29.5%). Please refer to Appendix A.3 for more discussion.

Ablation Studies. We introduce the baselines below to understand the importance of each component:

• [MCTS] replaces the explainable local search with MCTS as in Jin et al. (2020); • [FixR] uses a fixed set of rationales, i.e., only one round of explainable local search; • [FixM] uses a fixed (pretrained) model, i.e., no molecule completion stage.

As shown in Table 1, MolEvol achieves the highest QNU score among all variants. The large performance gap (success rate: 93.0% vs. 67.3%/66.3%; QNU score: 52.7% vs. 39.3%/28.3%) between MolEvol and [FixM]/[FixR] justifies the necessity of both the E-step and the M-step. Compared with [MCTS], the 5% QNU increase may result from the larger search space of the explainable local search, whereas MCTS only proposes connected subgraphs of molecules as rationales.

Distribution of the Generated Molecules. In Figure 3-left we plot the evolution of the generative model's performance over time. The distribution gradually shifts to regions with higher property scores, which demonstrates that MolEvol does improve the molecule generative model via the EM iterations. As shown in Figure 3-right, MolEvol proposes molecules with improved QED and SA compared to molecules in ChEMBL and the reference set. The distribution of property scores of molecules generated by MolEvol is more compact than the others, suggesting that MolEvol proposes molecules with high property scores and low score variance.

Example of Rationale/Generated Molecules. Figure 4 gives examples of molecules generated from a rationale discovered by MolEvol. The molecules are highly diverse while retaining consistently high scores, illustrating MolEvol's strengths.

Expert Evaluation. We asked an experienced chemist to evaluate the generated molecules. The top-scoring 50 molecules from MolEvol and from ChEMBL are selected, shuffled, and paired with one another to construct 50 pairs. Given a pair of molecules, the chemist is asked to provide a comparative score on each of the four criteria.
For the sum of the four scores, 30 out of 50 MolEvol molecules are rated higher than or equal to their counterparts from ChEMBL. For the individual scores, 7 out of 50 MolEvol molecules are rated higher than or equal to their counterparts on all four criteria. This result shows that our algorithm can propose high-quality, realistic molecules that are competitive with existing ones. Please refer to Appendix A.4 for more details." }, { "heading": "6 DISCUSSION", "text": "In this paper, we proposed an EM-like algorithm for optimizing molecules via an explainable evolutionary process. Although we focus our paper and evaluation on molecule design, the method can be generically applied to optimizing discrete structures in other structured prediction domains, such as program synthesis (Ellis et al., 2020) and graph adversarial learning (Dai et al., 2018a; Zügner et al., 2018). Our method mimics humans' general design process for discrete structures: first identify useful structural elements, then improve the design based on these elements. The process of discovering more useful substructures and reiterating the design is carried on to gradually improve the final product. Furthermore, the explainable graph model we developed in this paper can be applied to other general graph problems as well. We believe multiple aspects of our method have broader applications beyond the current molecule optimization problem." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is supported in part by NSF grants CDS&E-1900017 D3SC, CCF-1836936 FMitF, IIS-1841351, CAREER IIS-1350983, CNS-1704701, and an ONR MURI grant to L.S." }, { "heading": "A APPENDIX", "text": "A.1 EXPLAINER IMPLEMENTATION AND TRAINING

Here we provide the details for implementing the graph explainer described in Section 4.3.

For the MPN of hφ in the explainer, we use a two-layer GCN with an embedding size of 32 in each layer. The GCN input is a node embedding produced by an embedding layer that maps each node (atom) within the graph (molecule) to a 128-dimensional vector according to its type. The FC layer following the MPN outputs a scalar score per node, which is used for the Gumbel-softmax sampling.

For the MLP of qψ in the explainer, we use a two-layer FC network with a hidden dimension of 200 to embed the information. We add a batchnorm layer after each FC layer to make the training phase more stable. A sigmoid layer then produces the final prediction.

The training procedure is described below.

Algorithm 3: Training Procedure for the Explainer. Input: Molecule dataset D with each pair (g, y) denoting a molecule and its label, initial vertex sampling policy network φ, MLP network ψ for approximating Q. 1 for t ← 1 to Nepochs do 2 Sample g1, · · · , gm from D. 3 for i ← 1 to m do 4 Xig = MPNφ(gi) ∈ R^{|Vgi|×d}. 5 Xis = V(φ, ζ) ⊙ Xig ∈ R^{|Vgi|×d}, where ζ ∼ Gumbel(0, 1). 6 ŷi = qψ(Xis). 7 Update φ and ψ by gradient ascent, maximizing f(gi) log ŷi + (1 − f(gi)) log(1 − ŷi).

8 return φ, ψ
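In code, one stochastic step of Algorithm 3 might look as follows; mpn, fc, and q_mlp are hypothetical module placeholders, sample_vertex_mask refers to the illustrative sketch given for Section 4.3, and the whole block is a sketch under those assumptions rather than the released implementation.

```python
import torch

def explainer_step(mpn, fc, q_mlp, optimizer, mols, f_scores, k=5, tau=0.5):
    """One stochastic gradient step on the variational objective in Eq (14)."""
    optimizer.zero_grad()
    loss = 0.0
    for g, y in zip(mols, f_scores):                 # y = f(g), treated as a soft label
        Xg = mpn(g)                                  # (|V_g|, d) vertex embeddings
        logits = fc(Xg).squeeze(-1)                  # per-vertex scores for the sampling policy
        mask = sample_vertex_mask(logits, k, tau)    # relaxed mask V(phi, zeta)
        xs = (mask.unsqueeze(-1) * Xg).sum(dim=0)    # aggregated masked embedding x_s
        y_hat = torch.sigmoid(q_mlp(xs))             # Q(Y = 1 | s)
        loss = loss - (y * torch.log(y_hat + 1e-9)
                       + (1 - y) * torch.log(1 - y_hat + 1e-9))
    (loss / len(mols)).backward()                    # ascend Eq (14) by descending its negation
    optimizer.step()
```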
A.2 CONVERGENCE ANALYSIS OF MOLEVOL

From a theoretical standpoint, we assume 1) we use the true support set S instead of the finite support set St in Eq (8), and 2) α and m in Eq (10) are carefully selected such that the gradient update has small enough variance.

Proof

• As J(θ, p(s)) has an upper bound, we only need to show that it is non-decreasing over the E-step and the M-step.

• E-step: we need to show that J(θt, pt+1(s)) ≥ J(θt, pt(s)). This is immediate under assumption 1), since Eq (6) has the closed-form solution Eq (7), so the updated value of J is the maximum attained by the argmax operation.

• M-step: we need to show that J(θt, pt(s)) ≥ J(θt−1, pt(s)). First, note that we use the REINFORCE trick to obtain Eq (10) from Eq (9), i.e., we can perform SGD with the gradient in Eq (10). Then, under assumption 2), SGD with an unbiased gradient estimator of small variance yields a non-decreasing objective value.

• By the above analysis, this EM-like method converges to a local optimum.

Note that both assumptions are rather mild: for assumption 1), St grows with time t and gradually converges to S; for assumption 2), a large enough m and a small enough α suffice.

As discussed below, the convergence curve of the final objective (Figure 5) shows that our algorithm also converges empirically.

A.3 MORE EXPERIMENT RESULTS AND DISCUSSIONS

Molecule Distribution. We projected the generated molecules, together with the reference molecules, onto a two-dimensional space using t-SNE (Maaten & Hinton, 2008) in Figure 5-left. The molecules generated by MolEvol occupy the chemical space spanned by the reference molecules as well as its neighboring regions.

Optimization Objective. We plot the value of J(θ) in Eq (5) during training. As can be seen in Figure 5-right, the objective J(θ) is consistently improved, which shows that MolEvol does optimize the diversity-promoting objective in an alternating fashion.

Analysis of Baselines. The main reason for MSO's low performance is that it produces molecules with relatively low diversity, so most queries are wasted on evaluating highly similar molecules. MSO is therefore not well suited to producing high-scoring molecules with high diversity, since there is no regularization on the diversity of the molecules it generates. GA-D(t) incorporates the discrimination score to promote the generation of unseen molecules; however, there is no guarantee that the generated molecules are dissimilar enough to be deemed novel, which degrades its overall performance. In comparison, REINVENT and RationaleRL resort to REINFORCE for optimization and achieve more competitive performance. Nevertheless, RationaleRL generates molecules from rationales in one shot, ignoring the insight that the generated molecules may themselves contain subgraphs (i.e., rationales) that are better qualified.

A.4 EXPERT EVALUATION EXPERIMENT

We provide more details on the setting of the expert evaluation experiment. We first construct the evaluation molecule set by choosing the 50 top-scoring molecules of the same size from our generative model and from the ChEMBL dataset. The molecules are then grouped into pairs such that each pair contains one from the model and one from the dataset. The order of the two molecules in each pair is randomly shuffled. We then ask experts to evaluate these 50 pairs of molecules with respect to the four molecular properties, i.e., GSK-3β, JNK3, QED, and SA. For each property, the experts provide their opinion using one of the following choices:

1. The first molecule is clearly better; 2. The second one is clearly better; 3. The difference is minor and hard to tell.

We use the following two metrics to interpret the result.

• [M-Single]: We score each molecule by summing over the results of all four criteria.
A molecule scores 2 points on a criterion if it is clearly better, 1 point if the difference is hard to tell, and 0 points if it is clearly worse. We found that 30 out of 50 generated molecules have scores better than or equal to their counterparts.

• [M-Overall]: We count the number of pairs in which all four properties of the generated molecule are better than or equivalent to those of the ChEMBL counterpart. Within these pairs, we discard those with no confident evaluation, i.e., where the differences between the pair of molecules on all four criteria are hard to tell. We found that 7 out of 50 remain, meaning that 14% of all the generated molecules are strictly better than their counterparts." } ]
2021
Molecule Optimization by Explainable Evolution